[Ffmpeg-devel-irc] ffmpeg.log.20160224

burek burek021 at gmail.com
Thu Feb 25 02:05:01 CET 2016


[00:02:02 CET] <Rajko> JEEB, so does it put the extradata into the output AVFrame ?
[00:02:23 CET] <Rajko> on filter_filter ?
[00:02:35 CET] <lroe> so this config *should* have created an m3u8 file in /tmp/hls? http://paste.debian.net/hidden/678d22d3/
[00:04:33 CET] <JEEB> Rajko: bsfs are kind of special
[00:04:39 CET] <JEEB> special like in "special olympics"
[00:04:45 CET] <Rajko> because like... how can it do that
[00:04:51 CET] <Rajko> you only get one avframe back in filter_filter
[00:05:01 CET] <Rajko> does it output multiple NALs in one AVFrame ?
[00:05:04 CET] <JEEB> https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/h264_mp4toannexb_bsf.c#L155
[00:05:25 CET] <Rajko> the stuff that happens if idr_sps_seen is 0
[00:05:27 CET] <JEEB> the whole thing doesn't even touch AVFrames
[00:05:37 CET] <Rajko> it does, it replaces the header with 0001 at least
[00:05:49 CET] <JEEB> well I mean API-wise
[00:05:52 CET] <Rajko> 'replaces' being it allocates a new data and puts it there
[00:06:11 CET] <JEEB> but yes looking at that I see it probably allocates another buffer
[00:06:26 CET] <JEEB> so that the contents of the AVFrame get converted
[00:06:39 CET] <Rajko> im asking if theres no sps/pps in the first avframe you give it
[00:06:48 CET] <Rajko> does it prepend them in the output avframe
[00:06:57 CET] <Rajko> or does it just straight up replace the avframe with a sps/pps avframe
[00:07:03 CET] <Rajko> or does nothing other than convert header
[00:07:30 CET] <JEEB> see the if (!ctx->extradata_parsed) part :) as far as I can tell it just pushes it into the "front" of the buffer
[00:07:38 CET] <JEEB> with h264_extradata_to_annexb
[00:07:42 CET] <Rajko> 'insert' seems that it outputs more than 1 NAL in the out AVFrame
[00:07:51 CET] <Rajko> so theres multiple startcodes
[00:08:27 CET] <Rajko> which means i gotta split it back again if im outputting to something that cant handle more than one
[00:08:34 CET] <JEEB> yes
[00:08:41 CET] <JEEB> that's how I understand that code
[00:08:50 CET] <JEEB> with a quick look, since I've never used it API-wise
[00:09:00 CET] <Rajko> JEEB, how about i just do it myself :D
[00:09:43 CET] <Rajko> way more efficient than it allocating a new copy every time i call it that i then have to free
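
For reference, a minimal sketch of the legacy av_bitstream_filter_filter() call being discussed (the pre-3.1 API); the opened AVCodecContext (avctx) and the input AVPacket (pkt) are assumed to come from the caller, and error handling is omitted:

    #include <libavcodec/avcodec.h>

    AVBitStreamFilterContext *bsfc = av_bitstream_filter_init("h264_mp4toannexb");
    uint8_t *out = NULL;
    int out_size = 0;
    /* Converts AVCC length prefixes to Annex B start codes; when needed, the
       SPS/PPS from avctx->extradata are prepended to the output buffer. */
    int ret = av_bitstream_filter_filter(bsfc, avctx, NULL,
                                         &out, &out_size,
                                         pkt->data, pkt->size,
                                         pkt->flags & AV_PKT_FLAG_KEY);
    if (ret > 0) {
        /* a new buffer was allocated; the caller owns it and must free it */
        consume_annexb(out, out_size);   /* consume_annexb() is hypothetical */
        av_free(out);
    }
    av_bitstream_filter_close(bsfc);
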
[00:09:55 CET] <JEEB> sure
[00:10:09 CET] <Rajko> the only difference between annexb and avcc is the extradata format and the 4 byte header ?
[00:10:15 CET] <JEEB> yup
[00:10:17 CET] <Rajko> great
[00:10:39 CET] <JEEB> AVCc can have N byte header with the length (variable, because WhyNot), Annex B has three and four byte headers
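
As an illustration of that framing difference, a 4-byte AVCC length prefix occupies exactly as much room as a 4-byte Annex B start code, so it can be rewritten in place; a rough sketch assuming every NAL in the buffer uses a 4-byte length field and the buffer is writable:

    #include <stdint.h>
    #include <stddef.h>

    /* Rewrite 4-byte AVCC length prefixes into 00 00 00 01 start codes, in place. */
    static void avcc_to_annexb_inplace(uint8_t *buf, size_t size)
    {
        size_t pos = 0;
        while (pos + 4 <= size) {
            uint32_t nal_len = (uint32_t)buf[pos]     << 24 |
                               (uint32_t)buf[pos + 1] << 16 |
                               (uint32_t)buf[pos + 2] <<  8 |
                               (uint32_t)buf[pos + 3];
            if (nal_len > size - (pos + 4))
                break;                 /* malformed length, stop */
            buf[pos]     = 0;
            buf[pos + 1] = 0;
            buf[pos + 2] = 0;
            buf[pos + 3] = 1;
            pos += 4 + nal_len;
        }
    }
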
[00:11:09 CET] <Rajko> ffmpeg avcodec decides to use one or the other based on extradata format
[00:11:22 CET] <Rajko> (with null extradata it expects annexb)
[00:11:34 CET] <JEEB> the truth is less pretty
[00:11:40 CET] <JEEB> it probably in many cases just probes
[00:11:52 CET] <Rajko> im saying avcodec, not ffmpeg binary as a whole
[00:12:05 CET] <JEEB> I think at least in AVC and HEVC decoders there's probing for AVCc/Annex B start codes
[00:12:17 CET] <JEEB> or well, start code for the latter
[00:12:19 CET] <Rajko> well it didnt work when it had no header or avcc header
[00:12:22 CET] <Rajko> with null extradata
[00:12:32 CET] <Rajko> but it did work with 0,0,1 or 0,0,0,1 with null extradata
[00:12:44 CET] <JEEB> that's surprising
[00:13:06 CET] <JEEB> because mp4 without any parameter sets out-of-band is kind of legal
[00:13:10 CET] <JEEB> I guess the demuxer might do some magic
[00:13:23 CET] <Rajko> well it needs to know how many bytes is the header
[00:13:26 CET] <Rajko> in case of avcc
[00:13:30 CET] <Rajko> so with null extradata it cant do that
[00:13:33 CET] <JEEB> yes, so the AVCc structure should be there
[00:13:45 CET] <JEEB> it just doesn't have to contain any decoding-related parameter sets
[00:14:05 CET] <JEEB> so yeah, you are correct then that it will not work without the AVCc structure there
[00:14:07 CET] <Rajko> how does it figure out profile/level with annexb extradata though
[00:14:10 CET] <Rajko> thats not in that, just sps/pps
[00:14:18 CET] <Rajko> is it redundantly in sps ?
[00:14:35 CET] <JEEB> I think lavc always probes the profile/level from the parameter sets
[00:14:54 CET] <JEEB> I don't think it actually uses the values in the container AVCc structure
[00:15:15 CET] <JEEB> (they added even more duplicated info in the HEVC-for-ISOBMFF spec)
[00:15:24 CET] <Rajko> so null extradata is fine as long as it gets a sps/pps in the bitstream before a IDR, yes ?
[00:15:46 CET] <JEEB> yes, and annex b in that case
[00:16:14 CET] <Rajko> i also had to give it CODEC_FLAG2_CHUNKS because i was feeding it one NAL at a time
[00:16:24 CET] <Rajko> otherwise you have to give it enough to decode an entire frame
[00:16:32 CET] <Rajko> but i was told that this disables multithreaded decoding ?
[00:18:12 CET] <proxima> zzuf[s=86382,r=0.004]: signal 9 (memory exceeded?). what does this mean and how do we handle it?
[00:20:40 CET] <JEEB> Rajko: I don't have ffmpeg.git cloned here but at least the quick look at h264.c doesn't strictly seem to disable MT... although it is mentioned by some third party
[00:21:31 CET] <JEEB> oh
[00:21:34 CET] <JEEB> pthread.c
[00:22:02 CET] <JEEB>  int frame_threading_supported = .. && && !(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS);
[00:22:05 CET] <JEEB> so yeah
[00:22:35 CET] <JEEB> drip-feeding the decoder only lets you do slice based threading
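
A small sketch of the decoder setup being described, assuming the decoder is fed one NAL at a time; with CHUNKS set, only slice threading remains available:

    #include <libavcodec/avcodec.h>

    AVCodecContext *avctx =
        avcodec_alloc_context3(avcodec_find_decoder(AV_CODEC_ID_H264));
    /* packets may contain partial frames (e.g. a single NAL at a time) */
    avctx->flags2      |= AV_CODEC_FLAG2_CHUNKS;
    /* frame threading is ruled out by CHUNKS (see pthread.c), so ask for
       slice threads instead */
    avctx->thread_type  = FF_THREAD_SLICE;
    avctx->thread_count = 0;            /* 0 = auto */
    avcodec_open2(avctx, NULL, NULL);
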
[00:35:43 CET] <Rajko> which is fine because frame threading is broken anyway ?
[00:36:52 CET] <Rajko> how about this, is it possible for ffmpeg compiled with --disable-codecs to mux 264 and aac into a mp4 container from a source it didnt demux ?
[00:37:24 CET] <Rajko> because i wasnt able to figure that one out
[00:40:31 CET] <JEEB> frame threading has its limitations, but it is the most speedy way of threading (sliced threads have lower latency and less memory consumption, but speed-wise frame threading is king)
[00:40:59 CET] <JEEB> only hwaccel+frame threading is something that is not recommended, mostly because you get jack shit out of that anyways
[00:41:50 CET] <JEEB> not sure about your last question but in theory it should be possible, esp. if the h264 parser is in lavf
[00:41:53 CET] <Rajko> i dunno how i would set up the avstream and such for those
[00:42:15 CET] <Rajko> i would need the avcodec copy context and i dont know how to get one of those, only by find_decoder_by_name etc
[00:42:46 CET] <JEEB> I would probably just make my own AVIO wrapper and tell it its raw annex b or something if I was lazy
[00:42:59 CET] <JEEB> and have the h264 demuxer demux it for me
[00:43:00 CET] <Rajko> but then it wouldnt have pts/dts
[00:43:11 CET] <JEEB> true that
[00:43:42 CET] <Rajko> you can just feed avframes to av_mux_interleaved or whatever, but you need to set up a stream for each stream, a codec context so it knows what codec it is, etc
[00:44:00 CET] <JEEB> not that you couldn't override the PTS
[00:44:12 CET] <JEEB> after having the avformat h264 demuxer read the data
[00:44:14 CET] <Rajko> also, where does mp4 even store pts/dts as 264 doesnt have it
[00:44:14 CET] <JEEB> :D
[00:44:57 CET] <JEEB> ISOBMFF had to come up with its own nomenclature so it's called DTS and CTS (Composition Time Stamp) there
[00:45:08 CET] <JEEB> also there's multiple boxes in mp4 that contain timestamps
[00:45:21 CET] <JEEB> you have DTS and then a CTS offset for each sample
[00:45:43 CET] <Rajko> i have the opposite as input though
[00:45:56 CET] <Rajko> i only have pts, and the order i have my packets in is the 'dts'
[00:46:03 CET] <JEEB> yes
[00:46:07 CET] <JEEB> that's how it goes generally
[00:46:17 CET] <JEEB> thus you have the DTS pretty much always :P
[00:46:33 CET] <JEEB> so you just have to override the PTS as lavf h264 demuxer reads the packets
[00:46:37 CET] <Rajko> do the pts and dts have to start at the same value ?
[00:46:44 CET] <JEEB> no
[00:46:52 CET] <Rajko> but <JEEB> you have DTS and then a CTS offset for each sample
[00:47:07 CET] <JEEB> yes, that's in mp4
[00:47:11 CET] <JEEB> the muxer can make that up
[00:47:12 CET] <Rajko> does it figure that out ?
[00:47:22 CET] <JEEB> the main thing is to not have DTS > PTS :P
[00:47:45 CET] <Rajko> so i could just fake the dts as index of NAL i have
[00:50:17 CET] <Rajko> im afraid that it would then make the mp4 pts really inaccurate if its stored as offset from dts
[00:51:36 CET] <JEEB> pretty sure it'll be OK if you handle timebases (or in ISOBMFF vocab, timescales) properly
[00:52:24 CET] <JEEB> also having lavf-based muxing for something completely different isn't really that rare; off the top of my head I can think of VLC having a lavf muxer that works without lavc being used for encoding before it
[00:54:31 CET] <JEEB> also if you are doing custom demuxing you might just want to look at L-SMASH for ISOBMFF muxing
[00:54:44 CET] <Rajko> i can also try just setting pts and having dts be AV_NO_DTS or whatever, but im afraid it will just set dts=pts in that case and mess up my b-frames
[00:55:05 CET] <JEEB> it's a library that's meant just for that, and is quite spec-compliant
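
A rough sketch of the lavf mux-only path being discussed, written against the current AVCodecParameters API (at the time of this log it would have been st->codec); the file name, dimensions, and the source of packets and timestamps (pkt, src_tb) are assumptions, the AAC track is left out, and error checks are omitted:

    #include <libavformat/avformat.h>

    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "mp4", "out.mp4");

    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H264;
    st->codecpar->width      = 1280;               /* assumed */
    st->codecpar->height     = 720;
    /* for mp4, SPS/PPS normally go into st->codecpar->extradata */
    st->time_base = (AVRational){1, 90000};        /* muxer may adjust this */

    avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    /* per access unit: set pkt->pts/dts in your own time base (src_tb),
       then rescale; mp4's DTS + CTS-offset layout is derived by the muxer */
    av_packet_rescale_ts(pkt, src_tb, st->time_base);
    pkt->stream_index = st->index;
    av_interleaved_write_frame(oc, pkt);

    av_write_trailer(oc);
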
[00:55:49 CET] <Rajko> (also is there a rtsp server out there that actually supports ffmpeg streaming rtsp to it ?!)
[00:57:47 CET] <JEEB> Rajko: talking of b-pictures, the way it's handled in ISOBMFF is fun. for a while the CTS offsets could only be positive, so you basically had N samples' duration worth of positive offset on the first sample, depending on the required amount. then you would have a thing called an edit list that would map that initial sample's CTS to be the start of the video track's time line.
[00:58:40 CET] <Rajko> well all the hls.js things out there convert mpeg2-ts to mp4 on the fly in JS so it cant be that hard
[00:59:18 CET] <Rajko> oh good l-smash is all in japanese
[00:59:28 CET] <JEEB> the "official site" only
[00:59:34 CET] <JEEB> the code and headers are all in English
[00:59:45 CET] <JEEB> and I've been thinking of English'izing the site
[00:59:52 CET] <JEEB> ENOTIME
[01:01:04 CET] <JEEB> anyways, then later people who didn't want to implement edit lists (they weren't required by the spec) complained enough and ISO decided to let the user decide whether they wanted int32 or uint32 CTS offsets depending on the version of that box used
[01:04:17 CET] <JEEB> that said, if I recall correctly L-SMASH is quite a bit on a different level than lavf
[01:04:47 CET] <Rajko> i just want to get rtsp video to a browser, i can either do that by decoding myself/using chrome api, or trying to mux into a mp4 and give that to MSE
[01:04:59 CET] <JEEB> gunky
[01:05:03 CET] <JEEB> uhh, funky
[01:05:04 CET] <JEEB> I meant
[01:05:12 CET] <Rajko> still no clue how i will handle a/v sync in the first case
[01:05:36 CET] <Rajko> ffplay has super overcomplicated sync code
[01:07:18 CET] <JEEB> you'd probably want to take a look at a real player first, like mpv
[01:07:31 CET] <JEEB> both use lavf in the background, of course
[01:17:17 CET] <proxima> Could someone explain the qualification task (fixing a crash found with zzuf) mentioned for the "Create a fuzzing testsuite for FFmpeg" project for Outreachy 2016? Please clarify what kind of crash is meant, i.e. a software crash, a sample file, or some other specification.
[01:47:33 CET] <jookiyaya> what is the latest version of x265
[02:07:57 CET] <J_Darnley> what ever is in their git (or other VCS)
[02:21:42 CET] <jookiyaya> does ffmpeg come with FDK AAC encoder?
[02:22:55 CET] <J_Darnley> no
[02:23:05 CET] <kepstin> you can build your own copy that uses the fdk aac encoder, but because of license incompatibility problems ffmpeg binaries with fdk aac can't be distributed.
[02:23:17 CET] <J_Darnley> ^ what he said
[02:24:03 CET] <jookiyaya> if it's not allowed, then why is it allowed when i build my own copy?
[02:24:14 CET] <J_Darnley> because you're not distributing
[02:24:34 CET] <jookiyaya> may i ask what happened?  why was it allowed before?
[02:24:39 CET] <J_Darnley> it never was
[02:24:39 CET] <kepstin> the provisions of the GPL license only take effect when you distribute the binaries to other people
[02:24:44 CET] <jookiyaya> and all of sudden not anymore
[02:25:00 CET] <jookiyaya> j_darnley i am pretty sure old version of ffmpeg had fdk aac encoder before
[02:25:29 CET] <J_Darnley> Perhaps a long time ago, before people realised, but not last week, last month, last year
[02:25:35 CET] <kepstin> if people were distributing it before, they were in violation of the license, so perhaps when informed of that fact they took it down?
[02:26:03 CET] <jookiyaya> and for example:  old version of handbrake supported  fdk-aac
[02:26:14 CET] <jookiyaya> and just now released new version without it
[02:26:31 CET] <jookiyaya> Thursday, Feb 11, 2016
[02:26:31 CET] <jookiyaya> Unfortunately due to circumstances beyond our control we can no longer include binary distributions of HandBrake which include the FDK-AAC encoder.
[02:26:31 CET] <jookiyaya> Please also be aware that if you are distributing any previous 0.10.x you must cease doing so now due to licensing issues.
[02:26:37 CET] <kepstin> all current ffmpeg binaries should include the native aac encoder of course, which while it isn't quite as good as fdk-aac is still decent, and far better than vo-aacenc.
[02:27:13 CET] <jookiyaya> is fdk-aac better than  apple-aac aka coreaudio-aac
[02:27:26 CET] <kepstin> not sure, I haven't seen any comparisons
[02:27:31 CET] <furq> "circumstances beyond our control" probably means "we misread the licence"
[02:28:15 CET] <furq> and i'm not aware of any meaningful listening test comparing fdk and qaac
[02:29:17 CET] <kepstin> i'd expect they're overall fairly similar, and almost certainly nearly indistinguishable at higher bitrates.
[02:29:22 CET] <furq> you might be confusing fdk-aac for libfaac which was previously distributed with some ffmpeg builds
[02:29:38 CET] <jookiyaya> what is qaac ?
[02:29:47 CET] <furq> qaac is the same thing as appleaac
[02:29:52 CET] <jookiyaya> i see
[02:30:03 CET] <jookiyaya> does ffmpeg support qaac ?
[02:30:09 CET] <furq> not as far as i'm aware
[02:30:21 CET] <J_Darnley> maybe as a system library on a mac
[02:30:27 CET] <furq> qaac is a binary which requires itunes to be installed
[02:30:27 CET] <kepstin> qaac isn't even an aac encoder, it's just a command line tool for using apple's aac encoder library
[02:31:17 CET] <furq> i doubt there's any distinguishable difference between qaac, fdk and ffmpeg at >=128kbps anyway
[02:32:10 CET] <kepstin> for the listening comparisons for these codecs, they always go down to something like 64kbit anyways, so that the encodes have a chance of being distinguished.
[02:32:12 CET] <rsully> kepstin so fdk is still better than native?
[02:32:13 CET] <furq> the builtin ffmpeg encoder is recommended now for cbr aac-lc
[02:32:23 CET] <furq> for anything else fdk is better
[02:32:41 CET] <rsully> does builtin support 5.1?
[02:33:53 CET] <jookiyaya> according to  ffmpeg site:    libopus > libvorbis >= libfdk_aac > aac > libmp3lame >= libfaac >= eac3/ac3 > libtwolame > vorbis > mp2 > wmav2/wmav1
[02:34:07 CET] <jookiyaya> is this accurate, or is the author very biased?
[02:35:20 CET] <jookiyaya> why are there 2 vorbis ?
[02:35:27 CET] <furq> vorbis is the ffmpeg internal encoder which sucks
[02:35:39 CET] <jookiyaya> and libvorbis is?
[02:35:44 CET] <furq> libvorbis is libvorbis
[02:35:47 CET] <kepstin> libvorbis is xiph's reference encoder library
[02:36:04 CET] <jookiyaya> libvorbis is a lot better than vorbis?
[02:36:32 CET] <furq> it sure looks like it
[02:36:42 CET] <kepstin> "libvorbis" is a lot better of an encoder than ffmpeg's builtin "vorbis" encoder.
[02:36:47 CET] <furq> i doubt libvorbis is better than fdk, but it's certainly better than lame
[02:38:20 CET] <furq> rsully: i just checked and it seems to support 5.1
[02:38:42 CET] <rsully> I guess the better question is does mp4 support ac3 widely or not
[02:38:56 CET] <rsully> since I'm in the process of muxing an mkv to mp4, and the mkv has ac3 audio
[02:39:22 CET] <furq> it's not part of the spec iirc
[02:39:28 CET] <rsully> that's what i thought
[02:40:07 CET] <wintershade> hey guys! a quick question. I'm converting some x264 videos, and I want them to work properly on hardware players. I have chosen High profile, level 4.0, which I suppose works on Apple and Sony devices, right? will MKV work there? also, which format should I use for subtitles, SRT or ASS? is there a chart/documentation where I can look these things up (i.e. what works where, and what not)? thanks in advance!
[02:40:08 CET] <rsully> I know ffmpeg can "force" it, but pretty sure playback support is limited
[02:40:26 CET] <furq> yeah if you care about compatibility then convert to aac
[02:40:46 CET] <rsully> furq what would you recommend for this 5.1? builtin aac or fdk?
[02:40:58 CET] <rsully> in the past I used fdk at -b 512k and -cutoff 18000
[02:41:10 CET] <furq> i downmix everything to stereo so idk
[02:41:20 CET] <furq> if you have a build with fdk then you might as well use it
[02:41:36 CET] <rsully> well -b 512k would be cbr right? and it sounds like builtin is better in that case?
[02:43:03 CET] <furq> i suspect "better" factors in not having to bundle a library with a ropey license
[02:43:21 CET] <rsully> oh ok.
[02:43:29 CET] <furq> i doubt there's much difference in terms of quality
[02:44:45 CET] <rsully> I know 512k is probably overkill, but I figure audio is so small I'd rather go a little bloated and ensure quality is maintained
[02:46:19 CET] <rsully> according to this AC3 is supported in mp4 https://en.wikipedia.org/wiki/Comparison_of_video_container_formats
[02:46:34 CET] <rsully> citing "ETSI TS 102 366 v1.2.1 - Digital Audio Compression (AC-3, Enhanced AC-3) Standard, Annex F"
[02:46:36 CET] <rsully> with no link...
[02:46:56 CET] <kepstin> rsully: you'd have to check individual players. e.g. web browsers will do h264+aac, but not ac3; quicktime probably does both.
[02:47:02 CET] <furq> https://en.wikipedia.org/wiki/MPEG-4_Part_14#Data_streams
[02:47:05 CET] <furq> i'm going off that
[02:47:41 CET] <rsully> Yeah I saw that originally
[02:48:08 CET] <furq> why are you muxing to mp4 anyway
[02:48:24 CET] <rsully> for iOS/appletv playback
[02:48:32 CET] <rsully> just general portability
[02:48:45 CET] <rsully> since its a h264 stream in the mkv anyways
[02:48:49 CET] <furq> if it's for a specific device then you might as well try it
[02:49:12 CET] <rsully> I tried it, but ffmpeg gave me a warning
[02:49:33 CET] <rsully> "track 1: codec frame size is not set"
[02:49:49 CET] <rsully> thats a straight -c:a copy
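
One possible command for the remux being attempted (filenames assumed): keep the H.264 stream and convert the 5.1 AC-3 track to AAC so common mp4 players accept it:

    ffmpeg -i in.mkv -map 0:v -map 0:a -c:v copy -c:a aac -b:a 384k -ac 6 -movflags +faststart out.mp4
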
[02:51:10 CET] <rsully> and I know a year ago when I was experimenting with this I had hit-or-miss results with my iphone playback
[02:55:59 CET] <jookiyaya> which aac encoder does youtube use
[02:59:59 CET] <kepstin> huh, I wouldn't actually know how to find out. I wonder if any of the aac encoders snuck in watermarks like x264 did :)
[03:00:27 CET] <furq> i've read that they use fdk but i don't know how reliable that is
[03:01:18 CET] <kepstin> that would make sense; they do apparently use a lot of pieces based on ffmpeg, and the platform is almost certainly running on linux
[03:01:31 CET] <kepstin> could have also licensed someone else's, e.g. nero
[03:01:46 CET] <furq> i think we can safely rule out appleaac
[03:06:03 CET] <rsully> hm youtube uses CBR AAC LC
[03:06:08 CET] <rsully> but no hints to encoder in mediainfo
[03:06:17 CET] <wintershade> hey guys, how do I set default streams in MKV? I have two audio streams and one subtitle stream, and would like a specific audio stream to be defaul, as well as load the subtitles by default.
[03:06:22 CET] <wintershade> (yes, it's anime.)
[03:09:37 CET] <kepstin> wintershade: huh, I don't actually know... If all else fails, it might be easiest to just run the ffmpeg output through mkvmerge after encoding to set the metadata up.
[03:11:13 CET] <wintershade> kepstin: is mkvmerge available for Linux?*
[03:11:26 CET] <kepstin> wintershade: yep, it's usually in the 'mkvtoolnix' package
[03:11:51 CET] <kepstin> i suspect you can do it in ffmpeg by using the '-metadata' option, if you know the right metadata keys
[03:11:54 CET] <wintershade> okay thanks
[03:12:10 CET] <wintershade> kepstin: I've tried, looking at Matroska's metadata keys... but it doesn't really work.
[03:12:53 CET] <furq> afaik ffmpeg can't do it but you can edit it inplace with mkvpropedit
[03:12:57 CET] <furq> which is also part of mkvtools
[03:13:10 CET] <furq> s/s$/nix/
[03:13:39 CET] <wintershade> lol
[03:13:42 CET] <wintershade> thanks
[03:13:50 CET] <wintershade> I'll do that, then.
[03:14:11 CET] <wintershade> how come ffmpeg is unable to do it, though? if I may ask
[03:18:12 CET] <kepstin> it looks like all the stuff is wired up in the matroska muxer to do it, actually. hmm.
[03:18:23 CET] <furq> yeah there's a patch which added it last january
[03:18:29 CET] <furq> but the docs for -disposition just say "disposition"
[03:18:34 CET] <furq> which is not really helpful
[03:18:37 CET] <slowfoxtrot> hey guys I have a question about m3u8
[03:18:44 CET] <slowfoxtrot> anyone familiar with that format?
[03:18:47 CET] <furq> or -h format=mkv says that, the docs don't mention it at all
[03:19:06 CET] <kepstin> slowfoxtrot: it's a text-based playlist file encoded in utf8
[03:19:14 CET] <kepstin> slowfoxtrot: often used to describe HLS streams
[03:19:20 CET] <slowfoxtrot> yes, I have found that when I use ffplay
[03:19:30 CET] <slowfoxtrot> it reads the EXT-X-DISCONTINUITY correctly
[03:19:41 CET] <slowfoxtrot> and the timestamps are adjusted accordingly
[03:20:17 CET] <slowfoxtrot> but when I try to use ffmpeg it apparently ignores that flag and just duplicates the last frame of the previous segment for the duration of the commercial
[03:20:25 CET] <slowfoxtrot> the time is shifted in ffplay
[03:20:29 CET] <slowfoxtrot> but absolute in ffmpeg
[03:20:50 CET] <slowfoxtrot> is there a trick to get ffmpeg to play nice with EXT-X-DISCONTINUITY?
[03:21:05 CET] <wintershade> furq: thanks, I've looked into disposition flag. thing is, I've tried setting "default" streams with disposition, but it leaves me with some other tracks set as default, which I cannot "un-default".
[03:21:45 CET] <wintershade> furq: i.e., I have a video stream, two audio streams and the subs. I set the audio stream A to be default, but I see ffmpeg setting both A and B as default.
[03:22:07 CET] <wintershade> furq: most likely based on the fact that stream B is default in the source, and it just copies that flag, oddly enough.
[03:22:10 CET] <furq> shrug
[03:22:18 CET] <Rajko> slowfoxtrot, ffmpeg doesnt have a great hls/m3u8 parser
[03:22:22 CET] <furq> i vaguely remember this being broken but i've never actually used it
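
Two ways this is commonly handled (track numbers assumed; how well the ffmpeg route works depends on the version's -disposition support): set the dispositions while remuxing, or flip the default flags in place afterwards with mkvpropedit:

    ffmpeg -i in.mkv -map 0 -c copy -disposition:a:0 default -disposition:a:1 0 -disposition:s:0 default out.mkv
    mkvpropedit in.mkv --edit track:a1 --set flag-default=1 --edit track:a2 --set flag-default=0 --edit track:s1 --set flag-default=1
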
[03:22:25 CET] <Rajko> use python livestreamer and pipe to whatever program you need instead
[03:22:55 CET] <slowfoxtrot> Rajko: I thought that might be the case, but I thought it was ironic that ffplay worked correctly
[03:23:09 CET] <slowfoxtrot> I believe ffmpeg receives a lot more attention than ffplay
[03:23:27 CET] <Rajko> theyre in the same thing
[03:23:30 CET] <Rajko> ffplay uses ffmpeg lib
[03:23:38 CET] <slowfoxtrot> how can I make ffmpeg not keep the timestamps absolute?
[03:23:53 CET] <slowfoxtrot> I want it to just skip immediately to the next video and not duplicate frames
[03:24:06 CET] <wintershade> damn, this mkvtoolnix sure takes a long time to compile...
[03:24:12 CET] <slowfoxtrot> lol right now during the commercial it looks like the video is frozen for like 10 seconds on the same frame
[03:24:45 CET] <kepstin> wintershade: yeah, their build system sucks. Doesn't support parallel compilation
[03:24:56 CET] <slowfoxtrot> if ffplay uses ffmpeg then I would imagine there has to be a way to make ffmpeg do what i am seeing in ffplay
[03:25:16 CET] <Rajko> ffmpeg doesnt display any video tho ?
[03:25:17 CET] <Rajko> ffplay does
[03:25:29 CET] <Rajko> so you want the a/v sync code out of ffplay instead of whatever player you are currently using
[03:25:37 CET] <slowfoxtrot> yes, but they apparently parse m3u8 files differently
[03:25:42 CET] <Rajko> no they dont
[03:25:47 CET] <Rajko> ffplay uses ffmpeg to demux that
[03:26:13 CET] <slowfoxtrot> basically my m3u8 file had various EXT-X-DISCONTINUITY flags to skip like 10 seconds from time to time
[03:26:41 CET] <slowfoxtrot> ffplay skips over the 10 seconds, ffmpeg shows 10 seconds of the last frame
[03:26:44 CET] <Rajko> the m3u8 parser is the same, there is no m3u8 parser in ffplay.c go look for yourself if you want
[03:26:50 CET] <Rajko> how can ffmpeg 'show' anything ?
[03:26:53 CET] <Rajko> its not a video player.
[03:27:25 CET] <slowfoxtrot> im referring to the mp4 im getting from the ffmpeg output
[03:27:25 CET] <wintershade> Rajko: there is ffplay...
[03:27:26 CET] <furq> i'm going to go out on a limb and say he means the video encoded by ffmpeg
[03:27:39 CET] <kepstin> keep in mind that many of the output formats that ffmpeg supports can't handle pts discontinuities; mp4 is one of them iirc
[03:27:52 CET] <slowfoxtrot> maybe that is the problem
[03:28:05 CET] <slowfoxtrot> is there a format that can handle discontinuities?
[03:28:19 CET] <furq> if it works in hls then presumably .ts can
[03:28:20 CET] <kepstin> dunno. mpegts maybe? haven't tried.
[03:28:26 CET] <Rajko> furq, it doesnt
[03:28:34 CET] <Rajko> the m3u8 file supplements it
[03:28:36 CET] <wintershade> kepstin: perhaps I should have compiled it without qt5 support...
[03:28:56 CET] <kepstin> wintershade: heh, yeah, that doubles the build time and gives you a gui of questionable usefulness :)
[03:29:09 CET] <wintershade> kepstin: stopping now. recompiling.
[03:29:33 CET] <slowfoxtrot> kepstin: Ill bet you nailed it kepstin... if mp4 cant handle discontinuities then it might be the container that simply needs to change
[03:30:21 CET] <kepstin> well, one fix would be to use a filter that rewrites the pts values to remove the discontinuities (or re-encode the video, perhaps)
[03:30:25 CET] <wintershade> what's the advantage of mp4 over mkv?
[03:30:26 CET] <slowfoxtrot> any suggestions on a good container i should try?
[03:30:35 CET] <wintershade> slowfoxtrot: OGM hehe
[03:30:42 CET] <slowfoxtrot> lol
[03:30:45 CET] <Rajko> why dont you try just using livestreamer as an stdout input of ffmpeg ?
[03:30:50 CET] <wintershade> slowfoxtrot: it's actually pretty robust, error-wise.
[03:31:02 CET] <wintershade> especially when it comes to errors on the physical media.
[03:31:06 CET] <Rajko> it handles hls/m3u8 way better than the in-ffmpeg code
[03:31:08 CET] <wintershade> I was pleasantly surprised.
[03:31:14 CET] <kepstin> wintershade: the advantage of mp4 over mkv is that it plays on apple devices mostly :)
[03:31:22 CET] <wintershade> kepstin: and mkv doesn't?
[03:31:45 CET] <furq> mp4 is more widely supported and also worse
[03:31:56 CET] <wintershade> furq: worse in what way?
[03:32:16 CET] <furq> mostly codec support
[03:32:34 CET] <wintershade> furq: but if I'm using x264+aac+srt... shouldn't it work well?
[03:32:44 CET] <furq> it doesn't support srt
[03:32:48 CET] <wintershade> O.o
[03:32:52 CET] <wintershade> whoa
[03:32:56 CET] <kepstin> wintershade: mp4 doesn't have any widely supported text subtitle formats
[03:32:56 CET] <slowfoxtrot> trying ts
[03:32:57 CET] <wintershade> (in keanu reeves voice)
[03:33:06 CET] <furq> it has mov_text
[03:33:11 CET] <furq> idk how widely supported it is though
[03:33:16 CET] <slowfoxtrot> can you use livestreamer to output to a file?
[03:33:20 CET] <Rajko> slowfoxtrot, yes
[03:33:27 CET] <Rajko> or to stdout so you can pipe it directly to ffmpeg
[03:33:31 CET] <wintershade> ok, so how do I get subtitles to play on a standalone player then?
[03:33:37 CET] <wintershade> external srt file? or something?
[03:33:50 CET] <furq> either that or mov_text
[03:34:01 CET] <wintershade> wow.
[03:34:20 CET] <furq> ffmpeg should automatically convert srt to mov_text if you try to mux it into mp4
[03:34:23 CET] <furq> iirc
[03:34:34 CET] <wintershade> hmm
[03:34:43 CET] <wintershade> perhaps I should try using m4v instead of mkv then..
[03:34:45 CET] <kepstin> also, you can't write mp4 files to a pipe, which is kind of annoying at times.
[03:35:54 CET] <slowfoxtrot> Rajko: could you give me some pseudocode for piping livestreamer to ffmpeg?
[03:35:59 CET] <slowfoxtrot> i just installed livestreamer
[03:36:52 CET] <Rajko> livestreamer hlsvariant://your.hls.url.here/m3u8 best --stdout | ffmpeg ...
[03:37:25 CET] <slowfoxtrot> k let me try
[03:37:38 CET] <Rajko> tell ffmpeg to read from stdin too
[03:37:57 CET] <slowfoxtrot> is this correct?
[03:37:57 CET] <Rajko> which is ffmpeg -i -
[03:38:30 CET] <slowfoxtrot> livestreamer [stream] | ffmpeg -i pipe:0 -c copy -loglevel verbose test.mp4
[03:38:31 CET] <slowfoxtrot> ?
[03:38:50 CET] <Rajko> you missed half the arguments i gave you
[03:38:53 CET] <Rajko> best and --stdout
[03:39:24 CET] <slowfoxtrot> ok
[03:39:38 CET] <slowfoxtrot> is pipe the correct stdin for ffmpeg?
[03:40:25 CET] <kepstin> slowfoxtrot: you can just use '-' as the input file to get stdin, but that should be equivalent.
[03:40:27 CET] <Rajko> its just -i -
[03:40:47 CET] <slowfoxtrot> livestreamer: error: unrecognized arguments: best
[03:41:20 CET] <slowfoxtrot> does best need to come before the url?
[03:41:33 CET] <Rajko> after
[03:41:51 CET] <Rajko> what if you just type livestreamer url what does it give you
[03:42:03 CET] <Rajko> (not piped to ffmpeg)
[03:42:35 CET] <slowfoxtrot> k trying now
[03:44:16 CET] <slowfoxtrot> what does hlsvariant:// mean?
[03:44:26 CET] <slowfoxtrot> the m3u8 is just hosted on an apache server
[03:44:28 CET] <Rajko> it means its a hls stream with multiple qualities
[03:44:30 CET] <slowfoxtrot> via http://
[03:44:49 CET] <Rajko> aka the m3u8 points to multiple m3u8s with different qualities
[03:44:58 CET] <slowfoxtrot> ah I see
[03:47:02 CET] <slowfoxtrot> shoot it is expecting https
[03:47:13 CET] <slowfoxtrot> hlsvariant apparently defaults to http
[03:47:15 CET] <Rajko> try hlsvariant://http://  then
[03:48:14 CET] <slowfoxtrot> trying
[03:49:24 CET] <slowfoxtrot> while Im working on this, do you think it is worth trying straight ffmpeg with different containers that might be more friendly to discontinuities?
[03:49:28 CET] <slowfoxtrot> do you think that could be the issue?
[03:54:54 CET] <slowfoxtrot> is there a way with livestreamer to skip the m3u8 that point to the multiple m3u8s with different qualities and put in the URL of the quality desired directly?
[03:56:49 CET] <wintershade> k I'm off, thanks everyone for your help!
[03:57:31 CET] <slowfoxtrot> thanks wintershade
[04:02:15 CET] <Rajko> yes its just hls:// i think
[04:04:42 CET] <kepstin> hmm, it doesn't look like livestreamer actually will do any timestamp discontinuity corrections - I think it just concatenates the mpegts segments
[04:04:57 CET] <Rajko> whats wrong with that
[04:05:16 CET] <kepstin> nothing, except that it won't fix the problem that slowfoxtrot is having :)
[04:05:22 CET] <Rajko> you sure
[04:05:33 CET] <Rajko> maybe just concatenating is exactly what he needs
[04:05:49 CET] <slowfoxtrot> no i think that is exactly what I wan
[04:05:52 CET] <slowfoxtrot> want actualy
[04:05:56 CET] <kepstin> but the built-in ffmpeg hls support should be doing that already - concatenating the hls ts segments
[04:06:04 CET] <kepstin> i'd expect the behaviour to be identical
[04:06:09 CET] <slowfoxtrot> i want it to concat them together and normal ffmpeg output doesnt
[04:06:19 CET] <kepstin> be interesting to try and see i guess
[04:06:40 CET] <slowfoxtrot> is there an argument to tell ffmpeg to explicltiy concatenate the hls ts segments?
[04:07:10 CET] <kepstin> if you just want the concatenated mpeg-ts segments on your hard drive, using only livestreamer is probably the best option (without ffmpeg at all)
[04:07:56 CET] <slowfoxtrot> but i want to output to some kind of friendly video file?
[04:10:32 CET] <altusd> So 3.0 was just released. I went to the Windows builds page, saw no mention of 3.0 (or any other version number). Downloaded the latest file... unpacked...
[04:10:38 CET] <altusd> NO MENTION AT ALL OF 3.0!
[04:10:42 CET] <altusd> Or any version number at all.
[04:10:48 CET] <Rajko> run ffmpeg -v
[04:10:50 CET] <J_Darnley> Of course
[04:10:51 CET] <altusd> "ffmpeg version N-78636-g45d3af9 Copyright (c) 2000-2016 the FFmpeg developers"
[04:10:58 CET] <altusd> The hell is N-78636-g45d3af9?
[04:11:02 CET] <J_Darnley> only foolish people don't use the latest git master
[04:11:03 CET] <Rajko> git commit id
[04:11:11 CET] <slowfoxtrot> wow
[04:11:12 CET] <kepstin> slowfoxtrot: it'll probably work if you actually do a video transcode; this seems to only be an issue with using -c copy
[04:11:25 CET] <slowfoxtrot> unbelievable, livestreamer did NOT concat the ts stream
[04:11:44 CET] <slowfoxtrot> it actually stopped on the frame and showed it for the duration of the break
[04:11:50 CET] <altusd> "Missing argument for option 'v'."
[04:11:50 CET] <slowfoxtrot> just like normal ffmpeg output...
[04:11:58 CET] <altusd> What is going on?
[04:12:16 CET] <J_Darnley> -v is an alias for loglevel
[04:12:27 CET] <J_Darnley> -version if you want the version
[04:12:48 CET] <slowfoxtrot> kepstin: why in the world would livestreamer not concat the hls ts segments?
[04:12:50 CET] <altusd> Heh. This is like the perfect example of how incredibly user-hostile FOSS projects are. I struggle to find any mention of which damn *version* of the program I have!
[04:13:06 CET] <Rajko> N-78636-g45d3af9?
[04:13:26 CET] <J_Darnley> Stop caring about a pointless version number and use the daily builds.
[04:13:31 CET] <altusd> "ffmpeg.exe -version" only prints that garbage data.
[04:13:44 CET] <altusd> So 3.0 was made up?
[04:13:52 CET] <J_Darnley> All version numbers are made up
[04:13:58 CET] <altusd> ...
[04:14:19 CET] <altusd> The announcement made it sound as if FFMPEG has reached a whole new generation.
[04:14:26 CET] <altusd> 3.0. After many years of 2.x. Etc.
[04:14:47 CET] <J_Darnley> Only for dumb people who use linux and *need* a version number
[04:15:39 CET] <slowfoxtrot> here is an example of my m3u8 file:
[04:15:45 CET] <slowfoxtrot> #EXTINF:20.000,
[04:15:45 CET] <slowfoxtrot> https://file-2.ts
[04:15:46 CET] <slowfoxtrot> #EXT-X-DISCONTINUITY
[04:15:47 CET] <slowfoxtrot> #EXTINF:20.000,
[04:15:48 CET] <slowfoxtrot> https://file-4.ts
[04:15:48 CET] <J_Darnley> The FFmpeg website points you to where you can get daily build of ffmpeg.
[04:16:13 CET] <slowfoxtrot> i believe this is a completely standard m3u8 file
[04:16:31 CET] <J_Darnley> The day after 3.0 it became newer than 3.0
[04:17:04 CET] <slowfoxtrot> keptstin: do you have experience concatentating hls ts segments using ext-x-discontinuity?
[04:17:11 CET] <kepstin> hmm. 'git describe' does give something a bit more useful than the ffmpeg version string tho, you get "n3.1-dev-173-g45d3af9"
[04:17:36 CET] <kepstin> so that's 173 commits after the 3.1 development started.
[04:17:53 CET] <kepstin> slowfoxtrot: not specifically, no.
[04:19:01 CET] <slowfoxtrot> kepstin: any idea why livestreamer would not have concatenated the hls ts stream?
[04:19:19 CET] <slowfoxtrot> kepstin: seems like if livestreamer cant do it im screwed with ffmpeg
[04:19:37 CET] <kepstin> slowfoxtrot: my guess is that it probably did, and you might have run into a player issue?
[04:20:08 CET] <kepstin> although discontinuous mpeg-ts seems like something that would work in most players :/
[04:20:49 CET] <slowfoxtrot> kepstin: ya I have tested that, but that frame is shown for the exact duration of the discontinuity
[04:21:17 CET] <slowfoxtrot> kepstin: and I have tested doing a video transcode to see if that would make ffmpeg concat instead of doing a copy
[04:21:42 CET] <slowfoxtrot> kepstin: but it seems whether I copy or transcode the timestamps are always all powerful and nothing ever concats!
[04:21:49 CET] <altusd> A cat with a con.
[04:21:51 CET] <altusd> Con-cat.
[04:23:07 CET] <kepstin> Like, I know I've had ffmpeg transcode mpeg-ts streams where the timestamps periodically reset to 0 just fine (these files are the result of concatting multiple independently encoded segments)
[04:23:41 CET] <kepstin> but if you have discontinuities the other way, where the timestamps just suddenly jump higher... :/
[04:23:57 CET] <kepstin> well, that's equivalent to showing a single frame for the length of the discontinuity
[04:24:39 CET] <slowfoxtrot> exactly, that is what is happening
[04:24:51 CET] <Rajko> working as intended then
[04:24:56 CET] <slowfoxtrot> in fact in ffmpeg it actually says "dup 167 frames"
[04:25:09 CET] <slowfoxtrot> it is duplicating the frame for the duration of the discontinuity
[04:25:57 CET] <slowfoxtrot> I am wanting it to ignore the gap, and ffplay does it perfectly
[04:26:00 CET] <slowfoxtrot> lol
[04:26:12 CET] <slowfoxtrot> it is so frustrating
[04:26:24 CET] <slowfoxtrot> but I cant save the output of ffplay to anything anywhere
[04:26:32 CET] <kepstin> slowfoxtrot: actually, it looks like you might be able to do it. use the '-dts_delta_threshold' option and set it to something shorter than the length of the discontinuity?
[04:26:40 CET] <slowfoxtrot> there has to be a flag Im missing
[04:27:25 CET] <slowfoxtrot> -dts_delta_threshold
[04:27:26 CET] <slowfoxtrot> Timestamp discontinuity delta threshold.
[04:27:47 CET] <slowfoxtrot> hmmm, this seems like it might be what I am looking for, but it doesnt say what the default value is or what it means
[04:28:59 CET] <kepstin> slowfoxtrot: the default is '10', and it looks like it's in seconds.
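
For reference, how that option is passed on the command line (playlist URL assumed); note that later in this log it did not end up fixing the duplicated frames:

    ffmpeg -dts_delta_threshold 1 -i https://example.com/playlist.m3u8 -c copy -loglevel verbose test.mp4
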
[04:29:18 CET] <slowfoxtrot> where are you finding more info? man?
[04:29:30 CET] <kepstin> for that? I had to grep the source code :/
[04:29:36 CET] <slowfoxtrot> if it was 10 that would definitely be my problem
[04:29:46 CET] <slowfoxtrot> all the ones Im testing are shorter than that
[04:31:13 CET] <slowfoxtrot> can it accept floats?
[04:31:21 CET] <slowfoxtrot> can I set it to 0.1?
[04:31:26 CET] <slowfoxtrot> or even 0?
[04:31:48 CET] <kepstin> 0 wouldn't make sense, frames are more than 0 apart so it would detect a discontinuity every frame
[04:32:07 CET] <kepstin> and then the video would be all sorts of nasty desynced
[04:32:52 CET] <slowfoxtrot> maybe 1 then?
[04:33:09 CET] <kepstin> it does take floating point.
[04:34:03 CET] <kepstin> you'll want it more than 1 frame long, so I wouldn't go below 0.05 for sure (and even that is iffy)
[04:35:54 CET] <slowfoxtrot> ill try 0.5
[04:36:58 CET] <slowfoxtrot> suck
[04:37:06 CET] <slowfoxtrot> i just cant get a break with this
[04:37:17 CET] <slowfoxtrot> frozen frame nightmare of my life
[04:37:30 CET] <slowfoxtrot> lol I tried setting it to 0.5 but still no concat
[04:37:42 CET] <slowfoxtrot> just duplicates the frame to fill the gap
[04:38:51 CET] <slowfoxtrot> but because ffplay works, then there has to be a way to get ffmpeg to work
[04:42:49 CET] <slowfoxtrot> *** 136 dup! lol bane of my existence
[04:43:46 CET] <slowfoxtrot> kepstin: I just tested transcoding the video with -dts_delta_threshold set to 1 instead of copy, still no dice
[04:44:03 CET] <slowfoxtrot> kepstin: could it still be that I am trying to store it in a mp4?
[04:44:14 CET] <slowfoxtrot> do I need to be using a different container?
[04:44:28 CET] <kepstin> for the dts_delta_threshold option, I think it depends on the input container format, not output
[04:45:05 CET] <slowfoxtrot> thats what I would have thought, because the mp4 shouldnt care that there are discontinuities in the source
[04:45:25 CET] <slowfoxtrot> it is the processing from the source that I need to fix, right?
[04:50:24 CET] <slowfoxtrot> lol I just dont know what else to try
[04:50:33 CET] <slowfoxtrot> i think this hls ts concat stuff hates me
[05:00:59 CET] <slowfoxtrot> holy crap I just figured it out
[05:02:17 CET] <slowfoxtrot> sanity checking now...
[05:51:38 CET] <jookiyaya> it says aac-FDK is removed but i still see it
[05:53:15 CET] <furq> what says that
[05:53:36 CET] <jookiyaya> Thursday, Feb 11, 2016
[05:53:36 CET] <jookiyaya> Unfortunately due to circumstances beyond our control we can no longer include binary distributions of HandBrake which include the FDK-AAC encoder.
[05:53:36 CET] <jookiyaya> Please also be aware that if you are distributing any previous 0.10.x you must cease doing so now due to licensing issues.
[05:54:09 CET] <furq> what does that have to do with ffmpeg
[05:54:40 CET] <jookiyaya> they told me handbrake uses ffmpeg
[05:56:14 CET] <relaxed> jookiyaya: ffmpeg has a native aac encoder you can use
[05:56:24 CET] <jookiyaya> what about fdk?
[05:56:36 CET] <furq> where do you "still see it"
[05:57:02 CET] <relaxed> or compile ffmpeg with libfdk-aac support
[06:00:04 CET] <jookiyaya> i have no idea how to compile
[06:01:16 CET] <slowfoxtrot> kepstin: i figured it out
[06:02:50 CET] <relaxed> jookiyaya: these are questions for #handbrake
[06:09:01 CET] <slowfoxtrot> it has libfaac
[06:09:15 CET] <slowfoxtrot> or you can just copy native aac
[06:09:43 CET] <furq> handbrake uses ffmpeg aac now
[06:09:49 CET] <furq> i don't think it uses libfaac any more
[06:10:13 CET] <furq> i also think it only ever used fdk for he-aac
[06:10:51 CET] <jookiyaya> according to  ffmpeg site:    libopus > libvorbis >= libfdk_aac > aac > libmp3lame >= libfaac >= eac3/ac3 > libtwolame > vorbis > mp2 > wmav2/wmav1
[06:10:57 CET] <jookiyaya> how accurate is this statement
[06:11:09 CET] <furq> well it says vorbis is better than fdk_aac which i find doubtful
[06:12:17 CET] <furq> if you're doing cbr aac-lc encodes then just use the builtin aac encoder
[06:12:48 CET] <furq> it's not worth going through the hassle of compiling fdk and handbrake for a possible imperceptible quality improvement
[06:16:02 CET] <furq> jookiyaya: if you're really determined to use fdk then there's a standalone fdk encoder in debian
[06:16:14 CET] <furq> https://packages.debian.org/stretch/fdkaac
[08:34:13 CET] <slowfoxtrot> thanks for the help guys
[08:34:18 CET] <slowfoxtrot> appreciate all the input
[09:00:00 CET] <Hexmind> Hi all. I have a series of raw rgb24 images I want to make a video with. My images are named %08d and I want to make a 60fps avi video. What command should I use?
[09:00:28 CET] <Hexmind> Images are 1280x720 and want to make a video of the same size.
[09:02:45 CET] <Hexmind> Image names are 00000001.raw and so on.
[09:15:35 CET] <furq> Hexmind: -f image2 -framerate 60 -s 1280x720 -pix_fmt rgb24 -c:v rawvideo -i %08d.raw -c:v whatever_output_codec out.avi
[09:16:58 CET] <furq> the second -c:v should be copy if you don't want to encode
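
Filled in with a concrete output codec (libx264 is only an assumption here), that could look like:

    ffmpeg -f image2 -framerate 60 -s 1280x720 -pix_fmt rgb24 -c:v rawvideo -i %08d.raw -c:v libx264 -crf 18 -pix_fmt yuv420p out.avi
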
[09:18:29 CET] <Hexmind> furq, Thanks a lot! I really appreciate.
[10:10:17 CET] <termos> I keep getting these errors: "Buffer queue overflow, dropping." when transcoding. It causes my video/audio to get out of sync, is there a way to avoid this?
[10:12:33 CET] <DHE> you're overloading something. you need a bigger buffer or turn down the CPU requirements
[10:12:49 CET] <DHE> I assume this is some kind of live source, maybe a video capture card
[12:08:19 CET] <termos> yes, it's a live source from rtmp
[12:13:49 CET] <dmp> Hello. I am trying to build ffmpeg 3.0 for Android in Windows. I have followed the instructions at http://www.roman10.net/how-to-build-ffmpeg-with-ndk-r9/ whilst following the adaptations for Windows at http://stackoverflow.com/questions/23683518/how-to-compile-ffmpeg-2-2-2-on-windows-with-cygwin-and-android-ndk-r9c. When I run the build_android.sh
[12:13:49 CET] <dmp>  script, however, I get the following error
[12:14:07 CET] <dmp> libavfor: No such file or directory
[12:14:07 CET] <dmp> make: *** [libavformat/libavformat.a] Error 1
[12:14:21 CET] <dmp> Does anyone know how to fix this?
[13:04:59 CET] <yuriks> Hey. I'm trying to do opus streaming to Firefox inside a webm
[13:05:28 CET] <yuriks> encoding to a file with ffmpeg and then playing it in firefox seems to work ok but I'm unable to seek
[13:05:58 CET] <yuriks> apparently this is a firefox problem: https://bugzilla.mozilla.org/show_bug.cgi?id=657791 adding the -dash 1 option (which I found online but was unable to find documented anywhere) seem to fix it
[13:06:22 CET] <yuriks> however, if I try to output to stdout instead (even if I just pipe to a local file), then seeking stops working again
[13:06:27 CET] <yuriks> is this an inherent limitation?
[13:07:05 CET] <yuriks> this is my ffmpeg commandline: ffmpeg -i input.flac -map 0:0 -c:a libopus -b:a 192k -f webm -dash 1 - > test.webm
[13:11:40 CET] <relaxed> yuriks: ffmpeg -h muxer=webm
[13:12:19 CET] <yuriks> ah, that's helpful, thanks. I was looking in the website
[13:14:17 CET] <yuriks> only other interesting looking option is -live, but that doesn't do anything either
[13:22:49 CET] <LigH> Hi.
[13:23:37 CET] <LigH> Does ffmpeg support 2-pass normalization in audio filters (scanning for max. volume in a 1st pass, applying the calculated gain in a 2nd)? ... I believe to remember that ffmpeg usually only works like a 1-pass filter.
[13:24:25 CET] <LigH> http://ffmpeg.org/ffmpeg-filters.html#dynaudnorm sounds like AGC, that's not desired.
[13:29:26 CET] <J_Darnley> are you looking for volume and volumedetect?
[13:31:56 CET] <LigH> A kind of combination: "volumedetect" in a 1st pass to calculate the gain required, then applying that gain in a 2nd pass to bring the volume to "normalization".
[13:32:30 CET] <LigH> Increasing the volume just even so much that it doesn't clip.
[13:33:02 CET] <LigH> But not with a small look-ahead window, instead looking ahead the whole playing time.
[13:51:37 CET] <lroe> Can someone help me figure out why I don't have an m3u8 file in /tmp/hls? http://paste.debian.net/hidden/344199a4/
[13:51:53 CET] <lroe> I'm using the nginx rtmp module
[13:57:36 CET] <kepstin> LigH: there's a variety of 2-pass filters already in ffmpeg, but I don't think there's a normalization one like you are looking for
[13:59:13 CET] <LigH> OK, thank you; then it may be possible to script that using SoX or BeSweet.
[14:09:04 CET] <J_Darnley> Huh?  Volume and volumedetect are exactly what you want?  Volume detect spits out the maximum sample volume.  Volume will apply the gain.
[14:11:32 CET] <kepstin> I prefer using the ebur128 filter then the volume filter, peak normalization really isn't very useful (particularly with lossy audio codecs where it's basically meaningless)
[14:12:10 CET] <LigH> J_Darnley: And how do you combine them in just one call?
[14:12:13 CET] <kepstin> but none of the volume filters have a thing where they write to then read from a state file, you actually have to read the output then manually place it into a different filter
[14:12:17 CET] <J_Darnley> You don't
[14:12:24 CET] <J_Darnley> No ffmpeg filter works that way
[14:13:10 CET] <LigH> The overall task is: Convert an input format to an output format and normalize the audio during the conversion.
[14:13:33 CET] <kepstin> the only ffmpeg filter that uses a state file for multipass is one of the detelecine ones, iirc, and it needs multiple runs (with a parameter change between runs to set the pass no). And if you're doing multi-pass video encodes, that's a total of 3 passes needed...
[14:14:01 CET] <LigH> If ffmpeg can't do multi-pass conversions, then one may have to use a different tool.
[14:14:39 CET] <kepstin> LigH: ffmpeg can, you just have to run it multiple times (once for each pass)
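
The two-run workflow described above might look like this (filenames and codecs assumed); the first pass only measures, the second applies the gain reported by volumedetect:

    ffmpeg -i in.mkv -map 0:a -af volumedetect -f null -
    # volumedetect reports e.g. "max_volume: -6.5 dB"; apply that as positive gain:
    ffmpeg -i in.mkv -c:v copy -af volume=6.5dB -c:a aac out.mkv
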
[14:14:42 CET] <J_Darnley> Is it supposed to buffer uncompressed video to memory or disk?
[14:16:10 CET] <Rajko> no it just decodes each pass
[14:16:11 CET] <J_Darnley> no trolling: this is something ffmpeg does lack
[14:16:52 CET] <LigH> The reason is probably that ffmpeg was designed to work as a filter in a pipe; multi-pass filters would not work in a pipe, though, only on disk files.
[14:17:20 CET] <kepstin> all the tools that have a "one step" multi-pass mode just run the encode from scratch multiple times from the original disc file
[14:17:35 CET] <kepstin> if you want to do that with ffmpeg, write a shell script with a loop :/
[14:18:08 CET] <kepstin> ("if you want to do that with the ffmpeg cli tool"...)
[14:19:10 CET] <LigH> Alright, thank you.
[14:19:12 CET] <LigH> Bye
[14:26:40 CET] <Rajko> can .mp4 be used in a way for streaming
[14:42:05 CET] <BtbN> Rajko, no.
[14:42:27 CET] <Rajko> why not
[14:42:37 CET] <BtbN> because it's not designed to be a streamable format.
[14:43:04 CET] <BtbN> If you have a finalized file, and move the moov-atom to the front, it can be played from a webserver while still downloading it, but that's all the streamability you get.
[14:43:31 CET] <Rajko> so all the hls things that transmux to mp4 and then feed that to MSE are a lie ?
[14:43:42 CET] <BtbN> no.
[14:43:46 CET] <BtbN> They don't stream mp4
[14:43:58 CET] <Rajko> MSE only accepts mp4 if you want to use h264
[14:44:05 CET] <BtbN> exactly
[14:44:23 CET] <BtbN> Doesn't make it any better
[14:44:29 CET] <Rajko> so there is a way to produce a 'streamaable' mp4 at runtime
[14:44:32 CET] <BtbN> nope
[14:44:33 CET] <Rajko> to feed the decoders
[14:44:36 CET] <BtbN> it's just _a lot_ of mp4 files
[14:45:02 CET] <BtbN> each individually isn't streamable, but if you play them one after the other, you "stream" them.
[14:46:42 CET] <Rajko> so i cant just somehow leave the mdat with box size 0 so it 'goes to the end of the file' and then just keep on concatenating packets ?
[14:47:45 CET] <BtbN> The moov atom cannot be generated before the file is finalized.
[14:48:06 CET] <BtbN> It can only be moved to the front as a post-processing operation.
[14:48:15 CET] <BtbN> And without it, you can't play the file.
[14:48:20 CET] <Rajko> whats it contain
[14:48:45 CET] <BtbN> codec extradata, information about the file layout and stuff
[14:49:58 CET] <Rajko> so i would have to figure out an independent sequence in my stream, then make a small mp4 file containing just that, and feed that to mse
[14:50:07 CET] <BtbN> what?
[14:50:19 CET] <Rajko> like one keyframe + dependent frames
[14:50:49 CET] <BtbN> good luck
[14:51:00 CET] <Rajko> no it just means i cant use mse
[14:51:04 CET] <BtbN> You could also just use one of the other supported streaming methods, like DASH or HLS since hls.js exists.
[14:51:29 CET] <Rajko> no. i cannot take the latency of lagging behind an entire segment
[14:51:40 CET] <BtbN> Well, you'll have to.
[14:51:49 CET] <BtbN> Also, you lag 3 segments behind, by design.
[14:51:55 CET] <Rajko> good to know i can take mse off the table
[14:52:06 CET] <BtbN> It's the only way browser support live streaming
[14:52:10 CET] <BtbN> So you're stuck with it.
[14:52:11 CET] <Rajko> not true
[14:52:18 CET] <BtbN> Well, Flash not included...
[14:52:37 CET] <Rajko> https://developer.chrome.com/native-client
[14:52:49 CET] <BtbN> I said Browsers, not Google-OS.
[14:52:58 CET] <Rajko> runs in opera too
[14:53:05 CET] <BtbN> Because it is Chrome.
[14:53:09 CET] <Rajko> thats fine
[14:53:14 CET] <Rajko> most people have it
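
For completeness on the MSE point: the mov/mp4 muxer does have a fragmented mode, which is the form hls.js-style players feed to Media Source Extensions; a hedged one-liner (source URL assumed, audio dropped for brevity):

    ffmpeg -i rtsp://camera.example/stream -an -c:v copy -f mp4 -movflags empty_moov+frag_keyframe+default_base_moof pipe:1

Each moof/mdat fragment is effectively one of the "lots of mp4 files" described above; whether the per-fragment latency is acceptable is a separate question.
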
[14:53:35 CET] <ultrav1olet> Does anyone know how to add CRC to matroska/mkv?
[14:54:27 CET] <ultrav1olet> I couldn't find anything on the internet aside from checksumming every frame and exporting that info as a text file
[14:56:09 CET] <ultrav1olet> Even MP3's have a built-in checksumming
[14:57:21 CET] <ultrav1olet> According to https://www.matroska.org/technical/specs/index.html there's some CRC-32 checksumming built in but I don't have any clue as to what it really checksums
[15:04:44 CET] <J_Darnley> Could it be that ffmpeg doesn't add it?
[15:05:03 CET] <J_Darnley> What about mkvmerge or another tool from movtoolnix?
[15:05:44 CET] <ultrav1olet> J_Darnley: I cannot find anything definite
[15:06:14 CET] <J_Darnley> (of course I mean mkvtoolnix)
[15:31:33 CET] <ultrav1olet> https://trac.ffmpeg.org/ticket/4347
[15:31:44 CET] <ultrav1olet> a feature request no one is working on
[17:37:15 CET] <Hfuy> Hello.
[17:38:13 CET] <Hfuy> If I'm h.264 compressing something with (for instance) a sky, and it's macroblocking, what could I do to improve things without throwing more bitrate at the whole thing?
[17:38:29 CET] <Hfuy> I essentially want the codec to use more bits to encode the sky, taking them from other parts of the frame.
[17:40:13 CET] <kepstin> Hfuy: you're already using x264 encoder with a non-fast preset?
[17:40:26 CET] <Hfuy> Actually this is more a general technical question.
[17:40:34 CET] <Hfuy> The encoder is in a camera. The decoder is ffmpeg.
[17:40:52 CET] <Hfuy> I'm not sure how h.264 distributes quantisation tables across the image. I guess I want less quantisation in currently-more-quantised areas.
[17:41:25 CET] <J_Darnley> if the enocder is in a camera do you have any control over its settings?
[17:41:41 CET] <Hfuy> It's a camera for which the manufacturer is about to release a firmware update.
[17:41:47 CET] <Hfuy> So yes, in that sense, I sort of do.
[17:41:47 CET] <kepstin> Hfuy: the h264 format supports the encoder deciding arbitrary qp levels per macroblock
[17:42:14 CET] <kepstin> Hfuy: in x264, they use advanced psychovisual optimizations to determine bit allocation within a frame.
[17:42:20 CET] <J_Darnley> in general I would say "spend more time encoding"
[17:42:39 CET] <Hfuy> Let me see if I can find a before-and-after image.
[17:43:02 CET] <J_Darnley> but it sounds like you have no control over the encoder
[17:43:05 CET] <kepstin> Hfuy: if you can tell your camera manufacturer "please encode using x264", then you're done ;)
[17:43:16 CET] <Hfuy> Ha.
[17:43:41 CET] <Hfuy> I can't say who the manufacturer is, but I've been provided with a before-and-after
[17:43:54 CET] <Hfuy> I'm trying to figure out what they might have done.
[17:44:11 CET] <Hfuy> http://imagebin.ca/v/2Y5ac6TV4Cso
[17:44:20 CET] <kepstin> but in general, cameras use hardware encoder blocks with poor ability to make decisions on bitrate allocation within frames, so the solution is "bump up the bitrate high enough, then re-encode in software afterwards"
[17:44:54 CET] <Hfuy> This is a rather low bitrate codec, to be brutally honest.
[17:45:19 CET] <kepstin> Hfuy: huh, looks like they have at least some tweaking ability. probably added grain preservation as a priority (preserving high-frequency detail) rather than smoothing.
[17:45:36 CET] <Hfuy> I was trying to figure out how to describe what they've done.
[17:45:42 CET] <Hfuy> I'm not sure if it's necessarily obvious from that.
[17:45:58 CET] <Hfuy> I assume it will mean less bitrate elsewhere in the frame.
[17:46:43 CET] <kepstin> yeah, but it's always a tradeoff :)
[17:47:09 CET] <Hfuy> specs, by the way, are 100mbps, 3840x2160p24
[17:47:39 CET] <kepstin> the 'before' image imo looks like what you get with an encoder optimized for psnr rather than human vision - the loss of detail in that one area is offset by improved detail in other spots on the image.
[17:48:08 CET] <Hfuy> Yes, psnr optimisation resulting in greater noise in low-contrast areas.
[17:48:44 CET] <kepstin> but the human eye really picks that out, since it's dynamically more sensitive to detail in low-contrast low-motion spots :/
[17:49:09 CET] <Hfuy> It only really shows up when you use a lot of gain, so the image gets noisier
[17:49:43 CET] <kepstin> (x264 makes use of the fact that it doesn't even have to be the "right" detail - just any noise would look better there than smooth - which lowers the "objective" psnr scores more)
[17:50:05 CET] <Hfuy> How does it do that - just use a less harsh quantisation table on those blocks, or in that area of the image?
[17:51:12 CET] <kepstin> I dunno, you'd have to read the code :)
[17:52:12 CET] <kepstin> I suspect it does adjust the qp up in regions of low motion low contrast if it can without hurting the rest of the frame too much, but there's probably more to it than that.
[17:52:49 CET] <Hfuy> sorry, qp?
[17:53:55 CET] <kepstin> er, quantization parameters
[17:54:08 CET] <Hfuy> OK, I assumed something like that.
[17:54:51 CET] <Hfuy> But that isn't decided on a block-by-block basis, is it?
[17:55:18 CET] <kepstin> in h264 i'm pretty sure you can use different quantization levels for each block
[17:55:24 CET] <Hfuy> I see.
[17:55:50 CET] <kepstin> the question is whether the overhead of coding that is worth the visual quality increase :)
[17:55:54 CET] <kepstin> everything's a tradeoff.
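
On the encoder side, the per-block quantiser decisions kepstin describes are what x264 exposes as adaptive quantization and psy options; a hedged example of leaning it towards keeping detail in flat, grainy areas (filenames and values are illustrative only):

    ffmpeg -i in.mov -c:v libx264 -preset slow -crf 18 -tune grain out.mp4
    # or tweak the knobs directly:
    ffmpeg -i in.mov -c:v libx264 -preset slow -crf 18 -x264-params aq-mode=2:aq-strength=1.3 out.mp4
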
[17:56:28 CET] <Hfuy> That's basically the thrust of the article I'm writing.
[17:56:30 CET] <Hfuy> These things are choices.
[17:56:54 CET] <Hfuy> 100mbps is sweet spiff all for an in-camera origination codec.
[18:16:31 CET] <Rajko> hi, https://chrome.google.com/webstore/detail/geforce-experience-stream/gjljknijpnfibppaijefibndmiabonep?utm_source=www.crx4chrome.com uses ffmpeg internally, how do i get their modified source ?
[18:16:57 CET] <Hfuy> Perhaps they didn't modify it.
[18:17:25 CET] <Rajko> its in there as a .so
[18:17:37 CET] <Rajko> if it was statically linked they would have to give out their entire source, right ?
[18:18:00 CET] <Hfuy> I suspect they're not going to do that.
[18:18:21 CET] <Hfuy> In any case, even if they were breaking the rules, the likelihood that anyone's ever actually going to take it to court is microscopic, so I wouldn't get too excited.
[18:18:35 CET] <J_Darnley> Like with all distribution: contact them and request source
[18:18:50 CET] <J_Darnley> They cannot say "go to ffmpeg.org"
[18:19:54 CET] <Hfuy> well, they can, and you probably should.
[18:19:56 CET] <Rajko> i cant even find out where their GPL contact form is
[18:20:12 CET] <Rajko> usually companies have a list of all the gpl stuff they use and links
[18:20:23 CET] <J_Darnley> well, they can tell you that but that isn't compliant with the license
[18:21:14 CET] <Rajko>  /apurva1/amayekar_workspace1/sw/grid/oss/GoogleChrome/naclports_43/src/out/build/ffmpeg/ffmpeg-2.1.3/libavcodec/
[18:21:26 CET] <Rajko> thats an old ffmpeg
[18:22:32 CET] <Rajko> im looking for the configure string so i can use exactly what they did as i know that works
[18:23:01 CET] <J_Darnley> then try running a binary through strings and grep for something
[18:23:22 CET] <Rajko> usually i can find -with etc
[18:23:44 CET] <J_Darnley> ffmpeg's configure doesn't use any "with"s
[18:23:58 CET] <Rajko> youre right, its -enable
[18:24:02 CET] <Rajko> huh its just the stock one
[18:24:05 CET] <Rajko> they didnt even strip it down
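
The approach J_Darnley suggested looks roughly like the sketch below, assuming the extension ships ffmpeg as a shared library (the library path is a placeholder):

    # libffmpeg.so is an assumed/placeholder path to the bundled library
    strings libffmpeg.so | grep -e '--enable-' -e '--prefix'
    # with a full ffmpeg binary, the configuration line is printed directly:
    ffmpeg -version
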
[19:22:01 CET] <Rajko> they only use it for audio it seems too :(
[19:28:40 CET] <bofh> Hi there! Where can I get some FFmpeg (static?) builds with all HW accelerations enabled at compile time?
[19:29:29 CET] <JEEB> that doesn't make any sense
[19:29:42 CET] <JEEB> 1) hwaccels generally require linking against system driver libs
[19:29:49 CET] <JEEB> 2) hwaccels are OS-specific
[19:29:59 CET] <JEEB> so by definition you can't get all of them at once
[19:30:05 CET] <bofh> ok, thanks, I will learn more of that
[19:30:08 CET] <Rajko> he probs means for his own platform
[19:30:23 CET] <bofh> by any chance, does anybody here use ffmpeg with GPU-enabled AWS instances?
[19:30:51 CET] <JEEB> I stick my money into something more price-performant :)
[19:31:26 CET] <JEEB> hwaccels do have their limited use cases, but I'm definitely not one of those who'd get much out of them
[19:32:03 CET] <kepstin> bofh: last time someone came in talking about that, they were disappointed with the performance. GPU encoding mostly makes sense if you can't use CPU, for example if you're doing something else (running a game, render, whatever)
[19:32:05 CET] <bofh> well, I just need to decide whether our company really needs to use those GPU-enabled instances
[19:32:22 CET] <Rajko> also, the decoder/encoder on a gpu is a separate piece of silicon
[19:32:28 CET] <Rajko> not used for anything else or share-able
[19:32:47 CET] <JEEB> yeah, so you basically have a fixed maximum of streams that you can transcode with a certain ASIC
[19:32:50 CET] <bofh> kepstin: my use case - we have a stream of PNG images, and we're using FFMPEG to create an MP4 file out of them
[19:33:07 CET] <JEEB> and the requirement for the GPU is exactly where?
[19:33:10 CET] <kepstin> the nvidia one has pretty low-overhead context switching, so you can do a fair number of streams as long as you don't go over the max fps
[19:33:13 CET] <bofh> I'm not sure if we could get any benefit of GPU-enabled instances
[19:33:56 CET] <bofh> so basically I planned to get a couple of EC2 instances, with and w/o GPU, and run some tests
[19:34:10 CET] <Rajko> does anything even support nvenc ?
[19:34:12 CET] <Rajko> like, libraries
[19:34:15 CET] <bofh> but i'm not sure how to actually run ffmpeg for getting some stats
[19:34:19 CET] <Rajko> you pretty much just have to use it directly
[19:34:28 CET] <kepstin> Rajko: ffmpeg is a library that supports nvenc :)
[19:34:29 CET] <Rajko> VDPAU decoding sure, but does that work when headless ?
[19:35:09 CET] <JEEB> I think the hwaccel things in ffmpeg (tool) work as long as you have access to the GPU
[19:35:23 CET] <JEEB> they don't require a screen per se
[19:35:35 CET] <kepstin> nvenc explicitly does not require a display running, vdpau I believe does.
[19:35:40 CET] <Rajko> i know nvenc is accessed via the cuda dlls so its there regardless
[19:35:43 CET] <bofh> JEEB: so it is enough to run ffmpeg - and it will decide whether it should use GPU ?
[19:35:54 CET] <JEEB> bofh: seriously just don't even think about it
[19:35:59 CET] <bofh> )
[19:36:02 CET] <kepstin> bofh: you need a build with nvenc enabled (which you'll probably have to do yourself).
[19:36:08 CET] <kepstin> and yeah, probably not worth it
[19:36:17 CET] <Rajko> (the quality per bitrate you get from nvenc is also way below what you get with x264)
[19:36:19 CET] <bofh> looks like I'm looking for something esoteric, as usual
[19:36:28 CET] <JEEB> just use one of the faster libx264 presets
[19:36:34 CET] <Rajko> exactly
[19:36:42 CET] <Rajko> (or an insane bitrate with nvenc so its basically lossless)
[19:36:43 CET] <JEEB> and you will be getting nice speeds, or you will have your bottleneck *before* the encoder
[19:37:02 CET] <JEEB> in which case it doesn't matter if you're using nvenc or libx264 with the CPU
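
To get the kind of throughput numbers bofh was asking about, one option is ffmpeg's own -benchmark flag against a null output, run once per preset on the same sample; sample.mp4 and the preset choices are placeholders:

    # -f null - discards the output; -benchmark prints timing (and peak memory) at the end
    ffmpeg -benchmark -i sample.mp4 -c:v libx264 -preset veryfast -f null -
    ffmpeg -benchmark -i sample.mp4 -c:v libx264 -preset medium -f null -

Comparing those runs also shows quickly whether the bottleneck is the encoder at all, or something earlier in the chain.
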
[19:37:11 CET] <kepstin> reading png frames from disk and decoding them might actually end up being your bottleneck
[19:37:11 CET] <bofh> I c, so for the moment I could just skip the GPU part
[19:37:28 CET] <Rajko> yeah png is huge
[19:37:30 CET] <JEEB> kepstin: also RGB->YCbCr conversion with swscale/zscale
[19:37:37 CET] <bofh> kepstin: images are generated by the app, true, and this is most likely to be a bottleneck
[19:37:50 CET] <JEEB> so you have decoding=>colorspace conversion=>encoding
[19:37:57 CET] <Rajko> JEEB, my swscale takes about as much time as avcodec_decode on h264
[19:38:04 CET] <Rajko> and thats not even scaling, just converting to rgba
[19:38:08 CET] <JEEB> yup
[19:38:09 CET] <kepstin> if images are being generated by an app and piped, you probably want to skip the png decode/encode
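
A minimal sketch of that pipe setup, assuming the generating app can write packed RGBA frames of a fixed size to stdout (the app name, frame size, rate and pixel format are all assumptions):

    # your_app, 1280x720, 30 fps and rgba are placeholders for the real generator's output
    your_app | ffmpeg -f rawvideo -pixel_format rgba -video_size 1280x720 -framerate 30 -i - \
        -c:v libx264 -preset veryfast -pix_fmt yuv420p out.mp4

That removes the PNG encode/decode entirely, leaving only the colourspace conversion and the encoder.
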
[19:40:41 CET] <kepstin> I had a fun thing recently where I had an app writing raw video to a pipe and it was slower than I wanted; turned out that I was doing blocking io to the pipe, and the pipe buffer was smaller than a full frame, so it blocked my app until ffmpeg finished reading the next frame.
[19:40:55 CET] <kepstin> Had to add a thread to handle writing the output to the pipe.
[21:20:24 CET] <Mindiell> hi there, I need to convert a .mp4 into a .avi, using the mpeg4 video encoder. My problem is the quality is very bad and when I convert I see the kb/s drop from 889 to 200. Is there any option to fix the kb/s rate ?
[21:20:44 CET] <Mindiell> (from some h264 video codec though)
[21:21:31 CET] <J_Darnley> set the bitrate you want or use constant quantiser encoding?
[21:21:57 CET] <J_Darnley> -b:v 500k or -v:q 3 for example
[21:22:09 CET] <Mindiell> thx, I'll try that :o)
[21:23:00 CET] <Mindiell> first option, it starts the conversion showing a bitrate at 0kb/s...
[21:23:23 CET] <J_Darnley> What about at the end?
[21:23:29 CET] <Mindiell> seems I'm not an expert :o)
[21:23:40 CET] <Mindiell> still converting, I'll see that in a minute
[21:24:59 CET] <Mindiell> but it takes time, and size, which sounds good :o)
[21:26:51 CET] <Mindiell> nope, quality is bad, I'll try the second option ;o)
[21:27:56 CET] <furq> use libxvid for mpeg4 if you have it enabled
[21:28:10 CET] <Mindiell> hmm, the second option makes ffmpeg quiet, strange :o)
[21:28:25 CET] <furq> it should be -q:v
[21:28:26 CET] <Mindiell> furq: I'll see that, how can I verify if libxvid is enabled ?
[21:28:35 CET] <furq> ffmpeg -version
[21:28:40 CET] <furq> it'll say --enable-libxvid if it's enabled
[21:29:31 CET] <Mindiell> yup, it's enabled :o)
[21:29:47 CET] <furq> use -c:v libxvid then
[21:30:06 CET] <furq> all the other options are the same iirc
[21:30:08 CET] <J_Darnley> oops, yes -q:v
[21:32:28 CET] <Mindiell> arf
[21:32:31 CET] <Mindiell> ok :o)
[21:33:03 CET] <Mindiell> and -q:0 (for stream 0) or -q:v ?
[21:33:34 CET] <J_Darnley> depends whether your video is stream 0
[21:33:42 CET] <Mindiell> yes it is
[21:34:00 CET] <Mindiell> v picks the video stream whatever its number is ?
[21:34:00 CET] <J_Darnley> then I think they're the same
[21:34:05 CET] <Mindiell> ok, thx
[21:34:08 CET] <J_Darnley> but ffmpeg can put the audio first
[21:34:12 CET] <furq> use q:v unless you have multiple video streams which need different options
[21:34:20 CET] <furq> which seems unlikely in an avi
[21:34:31 CET] <Mindiell> thx
[21:35:05 CET] <Mindiell> hmm, file is bigger, maybe this will work
[21:36:15 CET] <Mindiell> I'll have to read/understand all the doc :o)
[21:38:46 CET] <Mindiell> ok, quality is here, now I'll try to see if my media-box can read it, thx every1 !
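
Pulling that exchange together, the suggested command ends up looking roughly like this; the filenames, the quantiser value and the MP3 audio settings are placeholders rather than anything Mindiell actually ran:

    # constant-quantiser Xvid (lower -q:v = higher quality; roughly 2-5 is a common range)
    ffmpeg -i input.mp4 -c:v libxvid -q:v 3 -c:a libmp3lame -b:a 192k output.avi
    # or target an average bitrate instead of a fixed quantiser:
    ffmpeg -i input.mp4 -c:v libxvid -b:v 1500k -c:a libmp3lame -b:a 192k output.avi
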
[22:15:31 CET] <techtopia> is there any way to specify what's displayed in stdout when ffmpeg is doing a job?
[22:15:48 CET] <furq> techtopia: -v and -stats
[22:15:54 CET] <techtopia> like can i get it not to display the bitrate and speed
[22:15:59 CET] <techtopia> ahh cool
[22:16:21 CET] <furq> -v error will suppress the stats output iirc
[22:16:29 CET] <furq> -v quiet definitely will but that will also suppress errors
[22:17:15 CET] <furq> actually you can just use -nostats
[22:17:48 CET] <techtopia> thanks man, will have a play with -v -stats and -nostats
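
For reference, the variants mentioned above look like this; the filenames and -c copy are placeholders, with the real job's options going in their place:

    ffmpeg -nostats -i input.mp4 -c copy output.mkv           # normal log level, no progress line
    ffmpeg -v error -nostats -i input.mp4 -c copy output.mkv  # errors only, no progress line
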
[22:21:38 CET] <lroe> Can someone help me figure out why I don't have an m3u8 file in /tmp/hls? http://paste.debian.net/hidden/344199a4/
[22:25:45 CET] <techtopia> not sure what that is, bash maybe
[22:26:03 CET] <techtopia> but does # comment out the line
[22:26:11 CET] <techtopia> so "#                 hls_path /tmp/hls;"
[22:26:27 CET] <techtopia> would mean the path never gets set ?
[22:27:11 CET] <furq> he has hls_path set in the other application block
[22:27:56 CET] <furq> lroe: http://sprunge.us/WRER
[22:28:14 CET] <furq> that should (hopefully) work
[22:30:26 CET] <furq> check your nginx error log if not
[22:30:48 CET] <furq> and maybe append 2>>/path/to/ffmpeg-error.log to the exec_static line
[22:31:00 CET] <lroe> nothing in the nginx error log
[22:31:02 CET] <furq> i would hope that ends up in the nginx error log though
[22:39:27 CET] <lroe> ok
[22:41:12 CET] <lroe> I added 2>>/tmp/ffmpeg.err to the exec line and now that file has: http://paste.debian.net/hidden/3e399ca5/
[22:47:10 CET] <lroe> I can connect via VLC or mplayer to the RTMP stream which means I believe the ffmpeg portion is working
[22:47:29 CET] <lroe> but for some reason the 'hls' portion isn't generating the correct file
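
Since the RTMP side plays fine, one way to narrow this down is to have ffmpeg write the HLS playlist directly from that stream and see whether files appear; the URL, stream name and segment settings below are assumptions:

    # rtmp://localhost/live/stream is an assumed URL; /tmp/hls must exist and be writable by this user
    ffmpeg -i rtmp://localhost/live/stream -c copy -f hls \
        -hls_time 4 -hls_list_size 5 /tmp/hls/stream.m3u8

If segments and an .m3u8 show up that way, the problem is more likely in the nginx-rtmp hls block (paths, permissions) than in ffmpeg itself.
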
[23:13:12 CET] <DANtheBEASTman> i wrote this script that records x11 regions, but the quality is terribad and I'm not quite sure where I got the ffmpeg settings from or how to improve them https://gist.github.com/DanielFGray/334168827097f2d11729#file-record-sh-L94-L98
[23:21:50 CET] <ac_slater> DANtheBEASTman: https://trac.ffmpeg.org/wiki/Encode/H.264
[23:22:14 CET] <ac_slater> I would avoid vpx and use h264 or h265 instead
[23:22:59 CET] <TD-Linux> unless you value your freedom
[23:23:00 CET] Action: TD-Linux runs
[23:23:07 CET] <ac_slater> and having -crf and -b:v is kinda odd. I would just use one or the other (or manual quantization)
[23:23:22 CET] <ac_slater> TD-Linux: good point, use vpx if you can.
[23:23:25 CET] <TD-Linux> if you are using vp8 you want both.
[23:24:03 CET] <ac_slater> TD-Linux: I thought h265 was a bit more libre. I can't remember.
[23:24:04 CET] <TD-Linux> but you shouldn't have both -b and -b:v, and I don't think -preset does anything with libvpx
[23:24:41 CET] <TD-Linux> DANtheBEASTman, does cranking up the bitrate with -b:v help?
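
For comparison, a fairly minimal x11grab capture with libx264 (the codec the wiki link above covers) looks roughly like the sketch below; the display/offset, region size, frame rate and CRF are placeholders to adapt to the script's region:

    # :0.0+100,200 = display :0.0, top-left offset 100,200 -- all values here are assumptions
    ffmpeg -f x11grab -framerate 30 -video_size 1280x720 -i :0.0+100,200 \
        -c:v libx264 -preset veryfast -crf 18 -pix_fmt yuv420p capture.mkv
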
[23:32:10 CET] <max246> hello
[23:32:43 CET] <max246> does anyone know why ffmpeg sometimes records at 2-4 FPS, and then after restarting the software 2-3 times it goes back to 30?
[23:32:58 CET] <max246> is this a common issue?
[23:33:10 CET] <ac_slater> max246: that is not a general issue, it's most definitely your parameters or build
[23:33:31 CET] <max246> I am using the latest version from ffmpeg website
[23:33:35 CET] <ac_slater> encoding? decoding? muxing? demuxing?
[23:33:39 CET] <max246> and I am using a blackmagic card
[23:33:49 CET] Action: ac_slater googles
[23:34:15 CET] <max246> I am capturing and encoding in mkv
[23:34:18 CET] <max246> at 60 fps
[23:34:46 CET] <max246> this is my line: ffmpeg  -y  -f dshow  -video_size 1920x1080 -pixel_format uyvy422 -rtbufsize 2100M  -framerate  59.94 -i video="Decklink Video Capture"  -threads 10  -codec:v libx264 -preset ultrafast -an -crf 0  rawvideo.mkv
[23:35:20 CET] <ac_slater> do you have a 10 core CPU? Just curious
[23:35:21 CET] <max246> I just dont get it why I need to restart 2-3 times then I get 60 FPS fixed
[23:35:26 CET] Action: jkqxz wonders whether the one component which isn't in common with everyone else who happily uses ffmpeg might be to blame...
[23:35:33 CET] <max246> em.. no, that's my mistake
[23:35:39 CET] <max246> will need to change it back to 4
[23:35:46 CET] <max246> but even to 4 it happened
[23:35:52 CET] <ac_slater> mo'threads mo'problems
[23:35:57 CET] <ac_slater> I see.
[23:36:05 CET] <ac_slater> well, I first blame windows :p
[23:36:08 CET] <max246> it is strange, because it works fine when it runs
[23:36:15 CET] <max246> I even had the same issue on linux :(
[23:36:18 CET] <max246> recording webcam
[23:36:20 CET] <ac_slater> :(
[23:36:25 CET] <max246> so I thought this was a common issue with ffmpeg
[23:36:32 CET] <DANtheBEASTman> TD-Linux: if it's at 1200k what would be a better choice? doubling it?
[23:36:56 CET] <TD-Linux> DANtheBEASTman, yeah, if you're recording to a file you probably want to set it high and then use crf as the primary limit on quality
[23:36:58 CET] <ac_slater> DANtheBEASTman: it's directly correlated to quality (most times)
[23:37:04 CET] <max246> so what I found so far is to parse the FPS info with python and kill it if it doesnt record at 60 FPS
[23:37:21 CET] <TD-Linux> libvpx-vp8 will be limited by whichever of the two limits gives the lower quality.
[23:37:23 CET] <max246> but I wanted to know if I did something wrong :(
[23:37:38 CET] <ac_slater> max246: that's lame :(. I'm leaning toward the device causing an issue
[23:37:45 CET] <ac_slater> not the software.
[23:37:49 CET] <max246> I see
[23:38:02 CET] <furq> max246: don't set -threads manually with x264 unless you specifically want to keep cores free
[23:38:26 CET] <max246> ok so I will remove threads
[23:38:48 CET] <max246> what about the lower FPS, is it something hardware-related, furq?
[23:39:04 CET] <ac_slater> max246: why not a hard 60 as opposed to 59.94?
[23:39:26 CET] <max246> because the capture card has different settings, and the Lumix camera that I am using, it outputs 59.94
[23:39:31 CET] <ac_slater> do  you need that drop frame
[23:39:33 CET] <ac_slater> ah I see
[23:39:38 CET] <max246> it was a pain in the ass to get the setting right
[23:39:38 CET] <ac_slater> leave it then
[23:40:02 CET] <max246> if you set to 60, you get a black video, but if you set 59.94, you get the right video :)
[23:40:17 CET] <ac_slater> haha, not the weirdest thing I've heard today
[23:40:24 CET] <ac_slater> but, that's the name of the game
[23:40:50 CET] <max246> yeah, I spent like 2 days googling and testing ... I wish people would write more docs for hardware :)
[23:41:21 CET] <max246> ok I need to give up and just keep using my script
[23:41:35 CET] <ac_slater> max246: good luck
[23:41:46 CET] <jkqxz> 59.94Hz (60000/1001) is normal for people in NTSC-land.  Strange that it should demand that, but the number itself isn't surprising.
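
Folding furq's advice back into the command from earlier, the capture line would look something like this (unchanged apart from dropping -threads; still just a sketch of the original setup):

    ffmpeg -y -f dshow -video_size 1920x1080 -pixel_format uyvy422 -rtbufsize 2100M \
        -framerate 59.94 -i video="Decklink Video Capture" \
        -codec:v libx264 -preset ultrafast -crf 0 -an rawvideo.mkv
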
[23:41:56 CET] <max246> what about making a slow-motion version? what are the best practices?
[23:42:15 CET] <pzich> you want to slow down the playback speed?
[23:42:16 CET] <max246> should I use "setpts=2.0*PTS" ?
[23:42:18 CET] <furq> you could maybe try increasing rtbufsize
[23:43:07 CET] <max246> jkqxz: well might be the Lumix outputing as NTSC
[23:43:26 CET] <max246> furq: I think this is the max it can handle, after that ffmpeg complains
[23:44:15 CET] <max246> pzich: I would like to have a slow motion effect from 60 to 30 fps
[23:44:47 CET] <max246> should I follow this https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video or try to re-encode the video by setting the framerate
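
For the setpts route from that wiki page, halving the speed of the 60 fps capture would look roughly like this; the filenames and CRF are placeholders, and audio is simply dropped here since setpts only retimes video:

    # 2.0*PTS doubles every timestamp, so the video plays at half speed
    ffmpeg -i rawvideo.mkv -filter:v "setpts=2.0*PTS" -an -c:v libx264 -crf 18 slowmo.mkv
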
[23:45:16 CET] <yann|work> I can't find an example of h264 vdpau decoding/rendering through libavcodec. Can I find that somewhere? Are we just supposed to set codec_context->hwaccel before calling avcodec_open2 ?
[00:00:00 CET] --- Thu Feb 25 2016

