[Ffmpeg-devel-irc] ffmpeg.log.20151130

burek burek021 at gmail.com
Tue Dec 1 02:05:01 CET 2015


[04:42:29 CET] <pinPoint> does -c copy lose bitrate on videos?
[04:46:00 CET] <Prelude_Zzzzz> hey, anyone know why when i start a stream it starts out right but over time the audio/video starts to drift out of sync?
[04:46:04 CET] <Prelude_Zzzzz> i am using -codec opy
[04:46:06 CET] <Prelude_Zzzzz> copy*
[04:47:30 CET] <pinPoint> i meant does it copy slightly lossy from original*
[04:48:28 CET] <furq> pinPoint: it copies
[04:49:23 CET] <furq> it puts the exact same streams in a new container
[04:52:04 CET] <pinPoint> furq: but does the stream lose bitrate?
[04:52:52 CET] <furq> no
[04:52:54 CET] <furq> that wouldn't be a copy
[04:52:58 CET] <pinPoint> i went from what looks like 7900(Duration: 00:10:34.53, start: 0.000000, bitrate: 7980 kb/s) vs Copy (Duration: 00:10:34.60, start: 0.067000, bitrate: 7499 kb/s)
[04:53:41 CET] <pinPoint> i'll have another go
[04:54:07 CET] <furq> if it's actually lost bitrate then you'll have noticed ffmpeg taking ten times as long to reencode it instead of just copying it
[04:54:33 CET] <furq> it will also say in the output
[04:55:10 CET] <pinPoint> that is the result of ffprobe. But I will experiment again
[04:55:12 CET] <furq> if you muxed it into a different container then i expect one of them is just reporting the bitrate wrongly
[04:55:40 CET] <pinPoint> i did not specify containers just -c:v
[04:56:11 CET] <pinPoint> ffmpeg -i bbb_sunflower_2160p_30fps_normal.mp4 -c copy -an BBB_COPY_NO_AUDIO.mkv
[04:56:18 CET] <pinPoint> yeah I changed container.... heehhe
[05:43:58 CET] <pinPoint> still getting that 7498 kb/s
[05:44:08 CET] <pinPoint> ffmpeg -i bbb_sunflower_2160p_30fps_normal.mp4 -c copy -flags global_header -an BBB_COPY_NO_AUDIO.mp4
[05:47:50 CET] <furq> "Duration: 00:10:34.53, start: 0.000000, bitrate: 7980 kb/s" is the description of the whole file, not just the video stream
[05:49:06 CET] <furq> the video stream bitrate will be in the line starting with "Stream #0:0"
[05:49:21 CET] <furq> depending on the container
[06:44:56 CET] <pinPoint> ah
[06:45:03 CET] <pinPoint> it was missing the audio i see. ok
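The bitrate printed on the "Duration:" line is the whole-file figure (all streams plus container overhead), so dropping the audio with -an changes it even though the video was copied bit-for-bit. To compare just the video stream, a hedged example (input name is a placeholder):

    ffprobe -v error -select_streams v:0 -show_entries stream=bit_rate -of default=nw=1 input.mp4

Note that some containers, Matroska in particular, don't store a per-stream bitrate, in which case this prints N/A.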
[13:36:41 CET] <Kwoth> Hello, im trying to decode a video by writing bytes to the input stream of an ffmpeg process in C#.
[13:37:20 CET] <Kwoth> If I write the bytes I got to a file first and then make ffmpeg read that file, it works. The problem is supplying the bytes to ffmpeg directly.
[13:37:31 CET] <Kwoth> https://gist.github.com/Kwoth/2d5cf0301b533b647d01 this is what i try to do
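A sketch of the equivalent shell pipeline (file names and output format are assumptions): ffmpeg cannot seek in a pipe, so containers that need seeking to be probed, such as MP4 with the moov atom at the end, will work when read from a file but fail when fed through stdin, which is a common reason the file-based variant works while the direct byte stream does not.

    cat input.webm | ffmpeg -i pipe:0 -f rawvideo -pix_fmt rgb24 pipe:1 > frames.rgb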
[14:09:28 CET] <dantti> how can I debug avcodec_find_encoder(AV_CODEC_ID_H264); not finding the encoder? I have built with both --enable-libx264 and --enable-encoder=libx264 flags but I'm out of ideas ...
[14:13:24 CET] <durandal_1707> have you registered all codecs?
[14:13:56 CET] <dantti> how do you mean?
[14:15:06 CET] <dantti> actually I think it linked against libx264.so instead of .a and when deployed on android it might have failed to load the .so, now I removed the libx264.so and ffmpeg configure does not find libx264 anymore :P
[14:15:06 CET] <durandal_1707> there is function
[14:15:37 CET] <dantti> durandal_1707: I'm trying to follow the encoders example
[14:15:46 CET] <dantti> didn't see such function
[14:16:07 CET] <dantti> duh!
[14:16:24 CET] <dantti> I missed  avcodec_register_all(); :P
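For reference, a minimal C sketch of the lookup that was failing; in FFmpeg builds of this era the codec table has to be registered before any find call:

    #include <libavcodec/avcodec.h>

    int main(void)
    {
        /* without this, avcodec_find_encoder() returns NULL even for
         * codecs that were compiled in */
        avcodec_register_all();

        AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!enc) {
            /* registration was skipped, or ffmpeg was built without libx264 */
            return 1;
        }
        return 0;
    }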
[14:16:53 CET] <dantti> still if the .so of libx264 is available it seems to link to it, so it might fail I guess
[14:32:54 CET] <Mavrik> dantti, did you load the x264 .so
[14:32:55 CET] <Mavrik> ?
[14:33:31 CET] <dantti> I was trying to static link everything
[14:33:53 CET] <dantti> now I get undefined refs.. to x264 calls
[14:34:19 CET] <Mavrik> Did you compile x264 for Android as well?
[14:34:22 CET] <dantti> yup
[14:34:28 CET] <Mavrik> Also why aren't you using MediaCodec? :)
[14:34:34 CET] <dantti> I was even calling it directly
[14:34:39 CET] <dantti> what's it?
[14:37:33 CET] <Mavrik> Interface to the device hardware H.264 encoder.
[14:37:42 CET] <dantti> Mavrik: can't that be used to generate a mpegts stream?
[14:37:46 CET] <Mavrik> x264 will be very very slow on those ARMs :/
[14:38:00 CET] <Mavrik> Those are two different things.
[14:38:16 CET] <dantti> yup, it's the muxer part right?
[14:38:19 CET] <Mavrik> You can use MediaCodec to encode AAC / H.264 and then use something else to mux it into MPEG-TS.
[14:38:24 CET] <Mavrik> Mhm :)
[14:38:38 CET] <Mavrik> It's true that Android doesn't include a MPEG-TS muxer.
[14:38:54 CET] <Mavrik> But I'm just trying to warn you about the x264 performance - a 5 sec clip at 720p took up to 30s or more to encode on most Android devices.
[14:39:24 CET] <dantti> sure, I had to reduce image quality to not drop frames
[14:39:46 CET] <BtbN> You will also drain the battery.
[14:39:50 CET] <dantti> I guess I need to learn how to call java code...
[14:40:23 CET] <Mavrik> On the other hand, the HW video encoder quality is significantly worse.
[14:40:41 CET] <Mavrik> You need to encode at 5Mbit what x264 would easily do in 1.5Mbit :)
[14:41:48 CET] <dantti> the problem is also writing java code is boring...
[14:42:16 CET] <dantti> BtbN: as for the battery, the app is on an always-on tablet so no issue there
[14:42:31 CET] <Prelude_Zzzzz> good morning everyone.. running CentOS Linux 7 (Core) ... using nvenc for encoding .. when i turn up an encoding session i start seeing errors like " http://pastebin.com/G0hcWZwG ".. also i see a bunch of pixelation on the screen. Anyone know why? Cpu's don't look like they are doing much but it is as if there's some sort of limitation that is causing these errors.
[14:43:35 CET] <BtbN> Those are decoder errors, your source is already broken.
[14:44:35 CET] <dantti> Mavrik: can I also use more cores to improve performance? or for muxing into mpegts it wouldn't work?
[14:45:07 CET] <Mavrik> x264 will use as many cores as it can get, muxing itself isn't really expensive CPU-wise
[14:45:22 CET] <Mavrik> the video encode takes a huge chunk of CPU
[14:45:59 CET] <dantti> ok I was asking because when I called x264 directly I was wondering how to distribute frames to encode and still get them at the right order
[14:46:22 CET] <dantti> as the call was blocking I didn't know how it worked
[14:46:51 CET] <Mavrik> hmm, x264 should return frames in DTS order always
[14:47:10 CET] <Mavrik> but ordering the frames is why DTS timestamp exists on them :)
[14:47:35 CET] <Prelude_Zzzzz> BtbN , i am using another encoder ( elemental ) and it seems to be outputting correctly from the same source. :( .. only ffmpeg does this
[14:48:09 CET] <Prelude_Zzzzz> i am trying to replace it with nvenc as it is more efficient but.... any way to correct some of these errors ?
[14:48:19 CET] <BtbN> This is not related to the ffmpeg encoder at all, those happen while ffmpeg is unpacking and decoding the h264 input.
[14:48:24 CET] <dantti> right, I guess what I meant was how to give it more frames as the encode function blocked, I wasn't sure if I'd need to create more threads myself
[14:55:18 CET] <Prelude_Zzzzz> anyone know a good decoding filter that can help me get rid of those errors ? not sure which ones to use
[15:03:30 CET] <Prelude_Zzzzz> BtbN ? can you please help me point in the right direction.. i am stuck :(
[15:03:36 CET] <Prelude_Zzzzz> you're usually pretty good ..
[15:03:39 CET] <NoviceAlpha2> Hi all :)
[15:04:04 CET] <BtbN> Your source gives you a broken stream, fix your source.
[15:04:27 CET] <Prelude_Zzzzz> i can't fix it.. comes from the satellite carrier
[15:04:37 CET] <Prelude_Zzzzz> and i am using an elemental encoder and i have no errors
[15:04:47 CET] <Prelude_Zzzzz> but with ffmpeg .. it acts up a lot
[15:05:13 CET] <BtbN> Sounds like terrible signal quality.
[15:05:21 CET] <Prelude_Zzzzz> well its 720p but..
[15:05:27 CET] <Prelude_Zzzzz> actually its 1080i
[15:06:25 CET] <NoviceAlpha2> Hope you can help me out. Does anyone know what this error means? ERROR: librtmp not found using pkg-config     basically it says librtmp.so is missing but it's there. i have even built it fresh again and it still won't go any further. openssl is built from source, rtmpdump too, but i have problems with ffmpeg, can i count on your help anyone?
[15:06:52 CET] <Prelude_Zzzzz> or 720p .. depending.. we are taking the signal from sat... outputting to UDP .. then taking that and transcoding it.. works fine on the other encoder and i am sure there are errors there too but at least the other encoder handles it well.. not sure how
[15:07:03 CET] <Prelude_Zzzzz> maybe there is some decode filter that is being used that cleans it up and then it doesn't mess up the output
[15:07:13 CET] <Prelude_Zzzzz> i dont know what filter to use :(
[15:09:09 CET] <BtbN> There is no filter that magically fixes a broken stream
[15:09:56 CET] <Prelude_Zzzzz> ya i know.. but the output is there.. but i see pixelation like weather related issues or something .. i don't see it on the other encoder ( same exact source )
[15:10:33 CET] <Prelude_Zzzzz> also. if i do 1 encode at a time it seems to be a lot better.. soon as i start pushing the server a bit it seems to act up more .. like cpu power but i am not using any cpu .. very very strange
[15:16:42 CET] <c_14> NoviceAlpha2: upload your config.log to a pastebin service and link here please
[15:22:07 CET] <Prelude_Zzzzz> BtbN .. i think this has something to do with processing power or something.. so i encoded 1 time... no issues and no errors.. i add 4 simultaneous encodings and i get errors flying on the screen.. like the cpu can't keep up and causes errors..
[15:22:19 CET] <Prelude_Zzzzz> but.. cpu's don't look like they are doing much.. i don't get it
[15:28:36 CET] <NoviceAlpha2> c_14: i am trying to change the paths to the toolchains, sdk, and libs, give me 5 min, will see if the script might be corrupted, im using the originals from the "Compile ffmpeg for Android with rtmp" compilation guide git clone. 1 min please
[15:36:05 CET] <NoviceAlpha2> c_14: there is pastebin link, http://pastebin.com/GQBUaHsN
[15:38:16 CET] <c_14> NoviceAlpha2: not build.log, ffmpeg's config.log. Should be in the ffmpeg source directory
[15:39:36 CET] <NoviceAlpha2> c_14, will do 1 sec.
[15:40:05 CET] <c_14> You probably just have to set PKG_CONFIG_PATH to "/home/test/android-ffmpeg-with-rtmp/src/rtmpdump/librtmp/android/arm"/lib/pkgconfig though
[15:41:08 CET] <NoviceAlpha2> c_14: http://pastebin.com/H9jW1q4W
[15:41:26 CET] <c_14> Package 'libssl', required by 'librtmp', not found
[15:43:52 CET] <NoviceAlpha2> c_14: thanks, i did not know that config.log is there, ok will try to add the missing libraries, thanks a lot
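A hedged sketch of the configure step being discussed: the librtmp pkgconfig path is the one quoted above, the OpenSSL path is a placeholder, and the point is that pkg-config must be able to find both librtmp.pc and the libssl/libcrypto .pc files it depends on.

    export PKG_CONFIG_PATH="/home/test/android-ffmpeg-with-rtmp/src/rtmpdump/librtmp/android/arm/lib/pkgconfig:/path/to/openssl/lib/pkgconfig"
    ./configure --enable-librtmp ...   # plus the usual cross-compile flags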
[15:49:37 CET] <Prelude_Zzzzz> hey i may be onto something here.... source is h264 already.... i am doing " ${ffmpeg} -y -i "$stream" -c copy -map 0:p:$6 -f mpegts - | tee .... "  what should i use instead of mpegts so i dont convert .. i just want to pass on to the tee to have the other stuff encode it
[15:50:43 CET] <yongyung> Does someone here have experience with sony vegas? Or do you guys know a channel for sony vegas questions?
[15:50:54 CET] <Prelude_Zzzzz> rawvideo !!!
[15:52:29 CET] <DHE> mpegts is a container. if you used -c copy then it doesn't matter what you specify with -f (as long as it's acceptable for h264)
[15:59:42 CET] <Prelude_Zzzzz> oh.. then my problem is not solved
[15:59:43 CET] <Prelude_Zzzzz> :(
[16:00:00 CET] <Prelude_Zzzzz> basically just want to output the input and tee it off to multiple processes :(
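One way to do that without re-encoding is ffmpeg's tee muxer, which writes the same copied stream to several outputs at once; a rough sketch (the UDP address and the second output are placeholders):

    ffmpeg -i "$stream" -map 0:p:$6 -c copy \
           -f tee "[f=mpegts]udp://127.0.0.1:1234|[f=mpegts]pipe:1"

With -c copy the h264 stream is never converted either way; -f mpegts (or the f=mpegts entries above) only selects the container it gets wrapped in.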
[16:07:39 CET] <yongyung> Do you guys think there would be any visible difference between tga -> libx264 crf 4 -> editing/lossless .avi -> libx264 crf 18 and changing crf 4 to crf 16 in the first step?
[16:18:49 CET] <DHE> it'd be pretty hard for a human eye to notice without frame-stepping the video. but it could be
[16:19:03 CET] <DHE> but during high motion scenes it's far more likely
[16:19:48 CET] <kepstin_> if you view the high-motion scenes in slow motion or frame-by-frame, at least :)
[16:20:38 CET] <DHE> if you want constant quality, you use -qp instead. Using -crf will still reduce the image quality for high motion
[16:26:46 CET] <kepstin_> of course, if you have the disk space and bandwidth, you could just skip the first x264 encode and put it right into the lossless format for editing...
[16:29:24 CET] <DHE> well if there is an intermediate lossless step then it should be...
[16:35:20 CET] <yongyung> kepstin_: The problem is that the vegas preview starts lagging horribly with crf 4 already, with crf 16 it's fine.
[16:35:51 CET] <yongyung> DHE: I thought crf is already a variable bit rate/constant quality? It doesn't work the same as -qp?
[16:36:42 CET] <kepstin_> crf is approximately constant visual quality (uses psy optimizations) when played back realtime
[16:39:45 CET] <kepstin_> -qp is static quantizer, which means constant mathematical quality (this isn't exactly true, since there's still boosting on keyframes, etc) - but it's not really constant visual quality to a human viewer.
[16:40:07 CET] <kepstin_> close tho
[16:42:31 CET] <yongyung> Hmmm, I guess crf still makes more sense then^^
[16:43:19 CET] <kepstin_> keep in mind that the psy optimizations in crf mode are explicitly designed to remove data that might not be noticed in the video as it currently is.
[16:43:30 CET] <kepstin_> if you're doing editing afterwards, that probably doesn't make sense
[16:44:32 CET] <yongyung> Hmm yeah maybe. Does -preset still work the same way with -qp?
[16:50:26 CET] <yongyung> Maybe -tune fastdecode would help, too
[17:03:11 CET] <yongyung> qp 12 and fastdecode seem to work fine, preview is like 20 fps but that's enough for editing^^
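A rough sketch of the intermediate encode that ended up working; the input pattern, framerate and preset are assumptions, not from the log:

    ffmpeg -framerate 30 -i frames_%05d.tga \
           -c:v libx264 -qp 12 -preset fast -tune fastdecode intermediate.mkv

-qp keeps the quantizer fixed regardless of motion, and -tune fastdecode trades some compression efficiency for cheaper playback in the editor's preview.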
[17:10:44 CET] <hanDerPeder> hi, does ffmpeg support googles tinyalsa as an replacement for alsa?
[17:11:45 CET] <hanDerPeder> refering to the alsa capture device support in libavdevice
[17:17:15 CET] <durandal_1707> Afaik no
[17:26:10 CET] <hanDerPeder> the encoder/decoder files for alsa don't seem too intimidating, might be I can implement a tinyalsa device. has this been attempted?
[17:29:05 CET] <prelude2004c> hey guys.. question.. i have a transport stream with many program ID's
[17:29:40 CET] <prelude2004c> one of which is program 1895 for example.. i use " -map 0:p:1895 " to pull that out .. problem is the stream 0:19 ( video ) and the audio streams keep changing
[17:30:23 CET] <prelude2004c> how do i map all the audio i want while the stream keeps changing
[17:30:30 CET] <prelude2004c> can i somehow select the " 0x1511 "
[17:31:41 CET] <debianuser> hanDerPeder: As I understand both alsa-lib and tinyalsa are just libraries providing an API to talk to alsa kernel drivers. So wherever tinyalsa works - alsa-lib should work too.
[18:06:52 CET] <prelude2004c> hey anyone know how to select streamid hex inside a transport stream program id ?
[18:07:17 CET] <prelude2004c> eg.. -map 0:p:$programid .. then -map 0xxxxx 0x2222 etc etc.. for the actual stuff i want from that program ID
[18:11:52 CET] <c_14> prelude2004c: https://ffmpeg.org/ffmpeg.html#Stream-selection
[18:24:15 CET] <autofsckk> hello, first of all thanks for this incredible application, its fantastic, i've been experimenting a bit with it and its great
[18:24:57 CET] <prelude2004c> hey, can someone help me with this >>> http://pastebin.com/qSCgYKDm
[18:25:03 CET] <prelude2004c> i am doing something wrong
[18:25:07 CET] <autofsckk> by any chance is there a site or blog or something showing some cool tricks with it?
[18:29:34 CET] <DHE> the trac web site has a couple of things strewn across various pages, from encoding tips/tricks to hard-to-use features to visualizing the video encoding itself
[18:35:12 CET] <autofsckk> DHE -> thanks, one more question: in the command line, how is the precedence of the options considered? i am taking like 10s from a video, resizing it and converting it to an animated gif, and that works perfectly. but when i tried to flip the gif horizontally by adding -vf "hflip" just before the output name of the gif, it did something totally different: it didn't resize the video, showed just a part of it,
[18:35:18 CET] <autofsckk> and made a much bigger file, from 2.3M to 16M
[18:37:40 CET] <DHE> the format of options are:   ffmpeg [input1 options] -i input1  [[input2 options] -i input2 ...]  [output1 options] output1 [[output2options] output2 ...]
[18:38:01 CET] <c_14> prelude2004c: 0:p:1895:0 and 0:p:1895:1 afaik
[18:38:29 CET] <DHE> now I'm not 100% sure but using -s will result in a scale filter being used to resize the video. maybe mixing -vf (manual video filters) affected that
[18:38:37 CET] <autofsckk> http://pastie.org/10592871
[18:38:55 CET] <DHE> ah... yeah, chain your filters together.
[18:39:04 CET] <DHE> -vf scale=600:-1,hflip
[18:39:45 CET] <autofsckk> i see, i just copy&pasted and didnt see that im repeating -vf , so dumb, thanks man
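For reference, a sketch of the fixed command with the filters chained (input name, start time and width are placeholders):

    ffmpeg -ss 30 -t 10 -i input.mp4 -vf "scale=600:-1,hflip" output.gif

Filters listed in one -vf option run left to right; if -vf is given twice only the last one is used, and mixing -s with a separate -vf can interact in a similar way, which is why the resize disappeared.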
[18:42:07 CET] <dantti> libavcodec.a has x264_picture_init Undefined, if the link is static shouldn't it be defined?
[18:44:09 CET] <DHE> libx264 needs to be linked as well
[18:46:08 CET] <dantti> DHE: on my app?
[18:47:06 CET] <dantti> if so I did that, actually I can get rid of the undef symbol if I call the function in my app, but then there are another 10 calls to the x264 lib
[18:49:19 CET] <DHE> I'm saying x264_* functions require linking to libx264
[18:55:10 CET] <dantti> it's a static build, I'd think that if libavcodec.a compiles fine it got linked, but when I try to link to it I get the undef for libx264
[18:55:39 CET] <kepstin> dantti: yeah, because you have to also link to libx264 to pull in the code for it.
[18:55:39 CET] <furq> dantti: -Wl,--whole-archive
[18:56:26 CET] <furq> static libraries don't link anything, they're just an archive of object files
[18:57:15 CET] <dantti> furq: that's probably why ldd doesn't return a link list as does .so file :P I add that to --extra-ldflags right?
[18:57:24 CET] <kepstin> and static libraries don't know anything about dependencies, so you have to specify all of them during the final link. it's not like dynamic libs where linking to one pulls in its dependencies automatically.
[18:57:50 CET] <furq> -Wl,--whole-archive -lx264 -Wl,--no-whole-archive
[18:58:16 CET] <furq> when compiling your app
[18:58:45 CET] <dantti> ok, I thought it would be when compiling ffmpeg
[18:59:07 CET] <dantti> I have -lx264 in my app just not this (no)-whole-archive
[18:59:15 CET] <dantti> btw don't they conflict?
[18:59:17 CET] <kepstin> I don't think you need the whole-archive stuff, just putting -lx264 after the libav* libraries should be sufficient for the linker to pull in the needed stuff.
[18:59:21 CET] <furq> no
[18:59:35 CET] <furq> whole-archive applies to all static libraries until the next no-whole-archive
[18:59:41 CET] <furq> kepstin might be right though
[18:59:47 CET] <kepstin> with static archives, the order of linking is important :/
[18:59:50 CET] <dantti> must it be after? I noticed that the ffmpeg *.a need to be in the right order
[19:00:31 CET] <furq> it's been a while since i linked anything with static libraries because stuff like this happens
[19:02:10 CET] <prelude2004c> hey ,can anyone point me in the right direction >> http://pastebin.com/qSCgYKDm
[19:03:27 CET] <kepstin> what gets really fun is when you have two static libraries that use functions from each other. e.g. if liba uses a function from libb and vice versa. Then the correct link command is "-la -lb -la" :/
[19:03:36 CET] <kepstin> (or use -Wl,--whole-archive)
[19:04:35 CET] <furq> surely the first item in the correct link order is "go back and make them shared libraries you fool"
[19:05:03 CET] <dantti> kepstin: oh, I was trying to do something like that a while ago
[19:05:12 CET] <dantti> decided to merge the two :P
[19:06:23 CET] <kepstin> basically, the way it works is that the linker has a list of symbols in the program that it's missing the code for. Each time you pass a -l option with a static archive, it looks in that archive, and pulls in the functions that satisfy missing symbols. Then, after that, it adds any new missing symbols from the new functions to its list, and goes on to the
[19:06:23 CET] <kepstin> next library.
[19:07:53 CET] <kepstin> what the "whole archive" option does is makes it copy *all* of the functions from the static archive into the program, instead of only the functions that satisfy missing symbols.
[19:07:54 CET] <dantti> so the -la -lb -la is just for a "second scan" of missing symbols right?
[19:08:07 CET] <kepstin> yeah
[19:09:04 CET] <dantti> yay, it compiled, now let's see if find_codec works on android this time :P
[19:11:04 CET] <dantti> great, I can now start coding :P
[19:11:07 CET] <dantti> thanks
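A sketch of what the final link line looks like with static archives; the exact library list depends on the configure flags used, the point is only that -lx264 comes after the libav* archives that reference its symbols:

    gcc -o myapp main.o \
        -lavformat -lavcodec -lswresample -lswscale -lavutil \
        -lx264 -lpthread -lm -lz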
[19:28:40 CET] <prelude2004c> anyone have a sec to help me with a map function ?
[19:30:04 CET] <prelude2004c> Stream #0:12[0x1511] the stream 0:12 keeps changing everytime i run it.. but the 0x1511 always maintains
[19:30:21 CET] <prelude2004c> how do i use a selection based on the " 0x1511 "
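If the build supports the i:stream_id (or #stream_id) specifier from the stream selection docs linked earlier, the stream can be picked by its MPEG-TS PID instead of the changing index, roughly like this (output options omitted; 0x1511 is 5393 in decimal if the hex form is rejected):

    ffmpeg -i "$stream" -map 0:i:0x1511 -c copy ...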
[21:06:02 CET] <autofsckk> im reading about -ss as an input/output option and i dont really understand what the difference is, or when to use one or the other. from what i understand it's better and faster to use it as an input option, right? before the -i?
[21:16:17 CET] <ChocolateArmpits> autofsckk: before the input the seeking is done via the demuxer, before the output the seeking is done via the decoder (I think). anyway, the latter is more accurate, but you lose speed.
[21:17:41 CET] <autofsckk> ok ,thanks for the explanation, i dont know what demuxer and decoder are, well i have an idea but im really new at video editing, ill look about it thanks again
[21:18:26 CET] <ChocolateArmpits> autofsckk: demuxer helps you get the video/audio stream contained in a file. Decoder does decoding of the coded video/audio stream
[21:19:42 CET] <autofsckk> ok, thanks
[21:38:06 CET] <cbsrobot-> anyone know a tool that prints dolby leq(m) loudness ?
[21:48:57 CET] <kepstin> autofsckk: basically, when -ss is an input option, ffmpeg tries to jump to the correct spot in the file and starts decoding there. When -ss is an output option, ffmpeg decodes the entire file from the start, but just throws out frames until it gets to the spot you wanted.
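A pair of hedged examples of the difference; file names and times are placeholders:

    # -ss before -i: the demuxer seeks near 1:30 before anything is decoded (fast)
    ffmpeg -ss 00:01:30 -i input.mp4 -t 10 out_fast.mp4

    # -ss after -i: everything is decoded from the start and frames before 1:30 are dropped (slow, frame-accurate)
    ffmpeg -i input.mp4 -ss 00:01:30 -t 10 out_accurate.mp4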
[22:33:29 CET] <Polochon_street> Hi! I'm building a music analyzer, but I totally forgot about the floating-point formats in my decoding process. Is it really dirty to get the floats into a temporary variable, like this: v = ((float*)decoded_frame->extended_data[0])[i];, and then multiply by the int16_t max, so that I can work with a « normal » s16 array?
[22:35:49 CET] <JEEB> lossy audio in general is natively float, and only those formats by default get float output
[22:35:54 CET] <kepstin> heh, i think it's more common for people to convert the int formats to floating format as a working format rather than the other way around.
[22:36:16 CET] <JEEB> there is avresample and swresample to convert float outputs to integer if you require that
[22:36:27 CET] <fritsch> take great care: one is 32-bit float and the other is signed 16-bit
[22:36:51 CET] <Primer> Hi, are -probesize and -analyzeduration the only ways to reduce startup latency when re-streaming via ffmpeg? I'm using filter_complex to turn 6 security cameras from a black box DVR that streams via rtsp and piping that to mpv to view locally
[22:36:52 CET] <fritsch> use swresample to produce your target format
[22:36:56 CET] <JEEB> the decoders themselves output whatever is native to them, basically
[22:37:15 CET] <Primer> Is there no way to specify the parameters that the source streams use beforehand?
[22:37:22 CET] <JEEB> what
[22:37:26 CET] <JEEB> oh
[22:37:29 CET] <JEEB> go on
[22:37:31 CET] <ChocolateArmpits> Primer: you can try the buffer "reduction" settings, like nobuffer
[22:37:39 CET] <Polochon_street> okay, so you suggest that I look into swresample?
[22:37:41 CET] <Gunni> hey, i am trying to remove an audio channel from a dual audio video file, the original sounds fine but when i use the following command the audio goes severely out of sync -- ffmpeg -i in.avi -map 0:0 -map 0:2 -acodec copy -vcodec copy out.avi
[22:37:44 CET] <ChocolateArmpits> I'm not sure how well that works
[22:37:49 CET] <Polochon_street> wouldn't it be a loss in term of performance?
[22:38:06 CET] <Primer> ChocolateArmpits: Well, I'm getting this: [cache] Cache is not responding - slow/stuck network connection?
[22:38:17 CET] <JEEB> well optimized resampling is going to be quicker than unoptimized (if you require resampling)
[22:38:19 CET] <Primer> Which lead me to looking at options to disable any form of caching
[22:38:40 CET] <JEEB> there are two APIs with similar APIs (av|sw)resample available in FFmpeg, feel free to pick your poison
[22:38:44 CET] <kepstin> Polochon_street: swresample is a pretty well optimized chunk of code; you'd use it to convert a whole buffer at a time between formats.
[22:38:56 CET] <Polochon_street> okay, I trust you, thanks very much! :)
[22:39:35 CET] <JEEB> basically you are going to get integer based output for lossless formats generally, while for lossy you are going to be getting floats
[22:40:24 CET] <JEEB> or well, for lossless you are going to get whatever there was in the input
[22:42:06 CET] <Polochon_street> yep, I only use lossless formats, and when someone tested what I wrote with a .mp3 well... It broke everything
[22:42:16 CET] <Polochon_street> I'll be more cautious next time!
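A minimal C sketch of the swresample route suggested above, converting one decoded float frame to interleaved s16 (stereo output is assumed, error handling and cleanup are omitted, and the SwrContext would normally be created once and reused rather than per frame):

    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>
    #include <libavutil/frame.h>
    #include <libswresample/swresample.h>

    /* returns the number of s16 samples written per channel, or <0 on error */
    static int frame_to_s16(const AVFrame *frame, uint8_t **out)
    {
        SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, frame->sample_rate,  /* wanted output */
            frame->channel_layout ? frame->channel_layout : AV_CH_LAYOUT_STEREO,
            frame->format, frame->sample_rate,                           /* decoder output */
            0, NULL);
        if (!swr || swr_init(swr) < 0)
            return -1;
        if (av_samples_alloc(out, NULL, 2, frame->nb_samples, AV_SAMPLE_FMT_S16, 0) < 0)
            return -1;
        return swr_convert(swr, out, frame->nb_samples,
                           (const uint8_t **)frame->extended_data, frame->nb_samples);
    }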
[22:42:36 CET] <Gunni> can anyone help me with my audio sync issue?
[22:43:46 CET] <Primer> I see no such option as nobuffer...unless you mean mpv
[22:44:20 CET] <Primer> I've been messing with -probesize and that's made the video start much more quickly than without it
[22:44:39 CET] <Primer> but ffmpeg is still reading each source stream in sequence
[22:45:19 CET] <Primer> well, it _seems_ it's reading them sequentially
[22:45:47 CET] <ChocolateArmpits> Primer: -fflags nobuffer
[22:45:51 CET] <ChocolateArmpits> it's an input flag
[22:46:17 CET] <ChocolateArmpits> https://www.ffmpeg.org/ffmpeg-formats.html#Format-Options
[22:47:16 CET] <Primer> Anyhow, this is what I'm doing: http://ix.io/mzg
[22:48:32 CET] <Primer> It's taking about 5-6 seconds before it appears
[22:48:50 CET] <Primer> and I'm trying to get that down to as little as possible
[22:49:04 CET] <ChocolateArmpits> then reduce analyzeduration
[22:50:11 CET] <ChocolateArmpits> try something like 500 000 microseconds, which would be half second
[22:50:29 CET] <ChocolateArmpits> lower it until ffmpeg process input anymore
[22:50:33 CET] <ChocolateArmpits> can't*
[22:51:21 CET] <Primer> The output from ffmpeg suggests it's reading each source sequentially. I don't suppose there's a way to make that concurrent?
[22:52:16 CET] <Primer> Also, if the source is consistent, can't the source "attributes" (the things probing and analyzing are looking for) be specified beforehand?
[22:52:55 CET] <ChocolateArmpits> I would guess the inputs are analyzed in parallel, but I'm not sure
[22:53:10 CET] <ChocolateArmpits> you can specify framerate, format
[22:53:25 CET] <ChocolateArmpits> again, I can't say if this will speed up the analysis
[22:53:39 CET] <ChocolateArmpits> you can specify resolution and pixel format too
[22:54:08 CET] <ChocolateArmpits> anyways start with analyzeduration
[22:54:29 CET] <ChocolateArmpits> if there really are latency problems this is a definite setting that can change that
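A sketch of the flags discussed so far on the ffmpeg side; these are per-input options, so they have to be repeated before every -i (URLs and values are placeholders, and -probesize, in bytes, is the alternative knob mentioned below):

    ffmpeg -fflags nobuffer -analyzeduration 500000 -i "rtsp://cam1" \
           -fflags nobuffer -analyzeduration 500000 -i "rtsp://cam2" \
           ...   # remaining inputs, filter_complex and output as before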
[22:54:35 CET] <Primer> I set that to 0. The stream continues to work, but there's no appreciable speed up
[22:55:08 CET] <ChocolateArmpits> ok try with one input
[22:55:09 CET] <Primer> Well, I'm remote from the source at the moment, but I'm using the same script when I'm not
[22:55:15 CET] <ChocolateArmpits> and don't set something to 0
[22:55:24 CET] <Primer> which is then on a local gigabit network
[22:56:01 CET] <kepstin> default for -analyzeduration is 0... I wonder if that has the effect of 'unlimited'.
[22:56:08 CET] <Primer> right
[22:56:22 CET] <ChocolateArmpits> set it to a sensible value
[22:56:28 CET] <ChocolateArmpits> as I said 500000
[22:57:15 CET] <Primer> yup, tried that, no difference
[22:57:27 CET] <ChocolateArmpits> Primer: for all the inputs ,right ?
[22:57:33 CET] <Primer> piping ffmpeg | mpv is hosing the terminal
[22:57:34 CET] <Primer> yes
[22:57:47 CET] <Primer> http://ix.io/mzh
[22:58:13 CET] <Primer> Is the latest version
[22:58:31 CET] <ChocolateArmpits> you should not be using probesize when using analyzeduration. The idea behind them is the same, but you either alter seconds or bytes
[22:58:48 CET] <Primer> ahh
[22:59:01 CET] <ChocolateArmpits> basically pick one depending on your situation
[23:00:22 CET] <ChocolateArmpits> also mpv may have its own buffer, see if you can reduce it there. Or just use ffplay and transplant the latency reduction settings there
[23:01:40 CET] <Primer> ffplay chokes on this
[23:01:42 CET] <Primer> err
[23:01:43 CET] <Primer> no
[23:01:56 CET] <Primer> its fullscreen option is real full screen
[23:02:01 CET] <Primer> not a borderless window
[23:02:11 CET] <Primer> which is totally unacceptable
[23:02:12 CET] <ChocolateArmpits> is that even an issue?
[23:02:16 CET] <Primer> yes
[23:02:22 CET] <Primer> I have 4 displays
[23:02:24 CET] <ChocolateArmpits> afterall you're testing for latency and not for preview
[23:02:48 CET] <Primer> and the displays flicker on and off for several seconds while going real full screen
[23:03:15 CET] <ChocolateArmpits> is the latency present when you only have one of the inputs passed ?
[23:03:51 CET] <Primer> yeah, -analyzeduration 500000 just goes back to taking forever for each stream
[23:03:58 CET] <Primer> which is being handled sequentially, for sure
[23:04:23 CET] <Primer> Not sure what you mean
[23:04:41 CET] <ChocolateArmpits> when you only have one single input, rather than all of them
[23:05:03 CET] <ChocolateArmpits> can you "ffplay rtsp..." ?
[23:05:12 CET] <ChocolateArmpits> and check what's the latency with just one of them
[23:05:23 CET] <ChocolateArmpits> without all the added filter processing
[23:06:15 CET] <Primer> seems ffplay is a separate package
[23:06:43 CET] <Primer> but if I use mpv with just the rtsp url it takes several seconds
[23:07:19 CET] <Primer> I have ffmpeg installed as a package (Linux Mint 17) but I compiled it myself. Not sure if ffplay was a separate package
[23:07:54 CET] <JEEB> mpv has options to set libavformat parameters
[23:08:00 CET] <JEEB> see the documentation or poke #mpv
[23:08:19 CET] <JEEB> at least I remember it having a way to pass lavf parameters :P
[23:08:24 CET] <Primer> vlc URL == 1-2 seconds
[23:08:36 CET] <Primer> from hitting enter to where I see output
[23:08:54 CET] <JEEB> because it doesn't use libavformat?
[23:09:00 CET] <JEEB> at least for most formats, by default
[23:09:18 CET] <JEEB> you can make it use by switching the demuxer on command line
[23:09:49 CET] <Primer> not sure if mplayer is even relevant any more, but it just chokes
[23:10:03 CET] <JEEB> it's barely being maintained by a guy or so
[23:10:37 CET] <Primer> probesize seems to be the most effective argument so far
[23:11:17 CET] <JEEB> https://mpv.io/manual/master/#options-demuxer-lavf-o
[23:11:19 CET] <Primer> So going back to the entire reason for probesize and analyzeduration...can I not just specify these things?
[23:11:21 CET] <JEEB> or heck
[23:11:33 CET] <JEEB> mpv has demuxer-lavf-probesize
[23:11:40 CET] <Primer> Because the source format/framerate/etc. is not going to change
[23:11:43 CET] <JEEB> and demuxer-lavf-analyzeduration
[23:11:57 CET] <JEEB> some of the things can be defined, yes. some can't
[23:12:03 CET] <JEEB> at least not on command line :P
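A hedged example of the same knobs on the mpv side (values are placeholders; per the mpv manual, demuxer-lavf-analyzeduration is in seconds, demuxer-lavf-probesize in bytes, and demuxer-lavf-o passes arbitrary libavformat options through):

    mpv --demuxer-lavf-analyzeduration=0.5 --demuxer-lavf-probesize=32768 \
        --demuxer-lavf-o=fflags=+nobuffer -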
[23:12:27 CET] <Primer> Well, the sources are all from the same device, so they'll all be identical, with the exception of stream 1 having audio
[23:12:34 CET] <Primer> and the others having none
[23:13:12 CET] <Primer> As far as duration, they're never ending streams, so...
[23:13:33 CET] <Primer> All that remains are...transport, codecs, bitrates?
[23:13:58 CET] <JEEB> well you can set the format at least, and the decoder. it will still have some parsing requirements
[23:14:04 CET] <JEEB> format as in the container
[23:14:47 CET] <Primer> I also have this: [nut @ 0x29a0ca0] Non-monotonous DTS in output stream 0:1; previous: 51611, current: 51608; changing to 51612. This may result in incorrect timestamps in the output file.
[23:15:06 CET] <Primer> This seems to cause camera 1, the one with the audio, to lag behind the other 5
[23:15:23 CET] <Primer> video lag, that is
[23:19:03 CET] <Primer> Anyhow, I'm very satisfied with this as it stands right now. Just trying to refine it as best as I can.
[23:19:37 CET] <JEEB> well, did you try poking at the mpv options I noted?
[23:19:38 CET] <Primer> At home I'm running a 50' HDMI cable from the device to the TV in my living room. The direct output from the device is real time
[23:20:12 CET] <Primer> I have not, but from what I can tell it seems the majority of the the delay is happening on the ffmpeg side
[23:20:13 CET] <JEEB> basically analyzeduration and probesize and you can define the container
[23:20:26 CET] <JEEB> well those settings control libavformat...
[23:20:32 CET] <JEEB> which is why I noted them from the goddamn manual
[23:20:42 CET] <Primer> Pardon my ignorance, but how would I determine the container? I presume it's in the output?
[23:21:12 CET] <JEEB> container is called "format" in ffmpegspeak
[23:21:24 CET] <JEEB> I see nut mentioned in that one line you quoted, for example
[23:21:39 CET] <JEEB> in ffmpeg it's around the "Input 0/1 things :P"
[23:22:44 CET] <Primer> well, nut is what I'm using for output
[23:23:03 CET] <JEEB> well then you define that in the reading side
[23:25:11 CET] <Primer> sure
[23:25:41 CET] <Primer> The sequence of events has led me to try to speed up the ffmpeg side of things first though
[23:26:14 CET] <Primer> initially without probesize, I'd see the output from the first stream take several seconds
[23:26:35 CET] <Primer> again for the next, next, etc, then after a good 20-25 seconds, the video would appear
[23:26:59 CET] <Primer> now with -probesize on the ffmpeg side, each video's output happens in about 1 second
[23:27:16 CET] <Primer> http://ix.io/mzl
[23:27:21 CET] <Primer> that's the output
[23:27:33 CET] <Primer> redacted to remove the private parts
[23:28:49 CET] <Primer> So I type ./ffmpeg.sh, hit enter, a chunk of text appears from reading the first stream in the first second, then the next chunk the second after that, etc. etc...6 seconds after hitting enter, the video appears
[23:29:11 CET] <kepstin> yeah, the ffmpeg tool sequentially probes each input in order before setting up the filters and outputs. No real way around that.
[23:29:27 CET] <Primer> I was hoping there was a way to do that concurrently
[23:30:03 CET] <Primer> but I'd settle for speeding up the "probing" by specifying all parameters beforehand, considering the stream parameters (attributes, whatever) won't ever change
[23:30:49 CET] <Primer> And I suppose I'd also try speeding up the mpv end as well by specifying
[23:31:11 CET] <Primer> thing is, that's entirely local, so I figure that's already pretty optimal
[23:31:26 CET] <Primer> the mpv end, that is, since I'm piping to it
[23:31:31 CET] <kepstin> I wonder if predownloading the sdp to files and passing those to ffmpeg might help
[23:31:41 CET] <kepstin> since it's the sdp that has all the connection setup stuff
[23:32:14 CET] <kepstin> hmm, but I guess rtsp does more than that :/
[23:33:01 CET] <kepstin> tbh, if you want 0 setup time, directly using rtp with the server sending to statically configured addresses/ports would do it.
[23:35:11 CET] <kepstin> I suspect that most of the per-stream setup time is the rtsp negotiation, rather than any stream probing.
[23:45:18 CET] <kepstin> if you have to stick with rtsp, it might be that the best option is to set up the rtsp connections in an external app (in parallel), write the sdp to disk, then start up ffmpeg with the sdp.
[23:45:28 CET] <Primer> yeah, it's a black box
[23:46:05 CET] <Primer> I know it's running Linux and the telnet port is open, but I haven't looked into hacking it
[23:46:14 CET] <Primer> I'm guessing it's using hardware encoders too
[23:46:28 CET] <Primer> since the PC board on it is minimal
[23:46:47 CET] <kepstin> the sdp would contain all the codec and format information, so in that case ffmpeg is basically just opening some udp ports for the rtp/rtcp and getting on with it.
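A rough sketch of that approach; the .sdp file names are placeholders written beforehand by an external RTSP client, the output side mirrors the existing script, and newer FFmpeg builds additionally require -protocol_whitelist "file,udp,rtp" when reading SDP files:

    ffmpeg -i cam1.sdp -i cam2.sdp -i cam3.sdp \
           -filter_complex "..." -f nut pipe:1 | mpv -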
[00:00:00 CET] --- Tue Dec  1 2015


More information about the Ffmpeg-devel-irc mailing list