[Ffmpeg-devel-irc] ffmpeg.log.20130613

burek burek021 at gmail.com
Fri Jun 14 02:05:01 CEST 2013


[00:34] <Bourbon> howdy, folks
[00:34] <Bourbon> I'm trying to build ffmpeg for android
[00:34] <Bourbon> and I'm having all kinds of fun
[00:37] <relaxed> Bourbon: You may get some ideas by looking at http://fate.ffmpeg.org/
[00:37] <Bourbon> libass is not found - is this something I have to tell the linker to include with Android.mk?
[00:37] <relaxed> Click on the far right link of the cpu you're targeting
[00:38] <Bourbon> I'm trying to use it with jni
[00:38] <Bourbon> but if you feel I can accomplish this with command line and stdin/out, I can do that
[00:38] <Bourbon> I can build executables just fine :\
[00:38] <relaxed> I have no idea what jni is.
[00:38] <Bourbon> java native interface
[00:38] <Bourbon> I'm doing android codings
[00:39] <relaxed> If you don't get any help here you can try the mailing list.
[00:48] <lollercopter> can ffmpeg/ffprobe dump SPS headers?
[00:49] <gangam_> Hi, what is the correct way to output an rtp multicast stream ? I tried like this: ffmpeg -rtbufsize 100000000  -f dshow -i video="screen-capture-recorder"  -an  -vcodec libx264  -preset ultrafast -tune zerolatency  -bsf:v h264_mp4toannexb -b 600000 -f rtp rtp://224.1.1.1:1234 but vlc fails to open the generated sdp file
[01:02] <Hans_Henrik> The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it. << means an old version of ffmpeg?
[01:03] <klaxa> no that means you should use fdk-aac instead
[01:03] <klaxa> Hans_Henrik: ^
[01:04] <Hans_Henrik> and -acodec was deprecated?
[01:04] <llogan> Hans_Henrik: "aac" is the native FFmpeg AAC encoder. it requires "-strict -2" or "-strict experimental" if you want to use it because it is not considered to be mature/good enough
[01:04] <klaxa> use -c:a instead of -acodec
[01:04] <llogan> acodec should still work, IIRC
[01:04] <Hans_Henrik> but fdk-aac is considered good enough? :o
[01:05] <klaxa> yes
[01:05] <llogan> Hans_Henrik: https://ffmpeg.org/trac/ffmpeg/wiki/AACEncodingGuide
[01:05] <klaxa> it's Fraunhofer's implementation iirc
[01:05] <Hans_Henrik> incompatible license or crappy coding, or why doesn't ffmpeg just scrap the internal encoder and use fdk-* ?
[01:07] <llogan> it's nice to have a native encoder so an external library is not required, and most of the external libraries are non-free, and the one that isn't is worse than the native encoder.
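A hedged sketch of the two encode paths discussed above (filenames are hypothetical; the native encoder must be explicitly allowed because it is experimental, while libfdk_aac only exists in nonfree builds):

```shell
in=input.wav    # hypothetical input file
# native FFmpeg AAC encoder: needs -strict experimental (or -strict -2)
cmd_native="ffmpeg -i $in -c:a aac -strict experimental -b:a 128k out_native.m4a"
# Fraunhofer FDK AAC: generally better quality, nonfree builds only
cmd_fdk="ffmpeg -i $in -c:a libfdk_aac -b:a 128k out_fdk.m4a"
echo "$cmd_native"
echo "$cmd_fdk"
```

Pasting either echoed line into a terminal runs the corresponding encode.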
[01:07] Action: MangaKaDenza looks around
[01:07] <MangaKaDenza> WMV IS THE BEST!
[01:09] Action: MangaKaDenza coughs
[01:14] <Hans_Henrik> Unknown encoder 'fdk-aac'      probably means it did not have --enable-nonfree ? x.x
[01:15] <Hans_Henrik> .. cause it didnt
[01:18] <llogan> There is no encoder named 'fdk-aac'
[01:18] <llogan> see the examples in the link i provided, and your ffmpeg may not be built to support libfdk_aac.
[01:19] <llogan> as mentioned in the link i provided
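For reference, a minimal sketch of the configure flags a build needs for the libfdk_aac encoder to exist at all (other flags and paths omitted; --enable-nonfree is required because the FDK license is incompatible with the GPL):

```shell
# both flags are required; without --enable-nonfree configure refuses libfdk-aac
cfg="./configure --enable-libfdk-aac --enable-nonfree"
echo "$cfg"
```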
[01:29] <undercash> hello
[01:30] <undercash> a simple way to flip and crop with ffmpeg?
[01:56] <undercash> i guess not ^^
[02:00] <relaxed> undercash: look at the filters docs
[02:00] <undercash> i am
[02:01] <undercash> but not so easy
[02:01] <undercash> "split [main][tmp]; [tmp] crop=iw:ih/2:0:0, vflip [flip]; [main][flip]"
[02:01] <undercash> "crop=iw:ih/2:0:0, vflip [flip]; [main][flip] "
[02:02] <undercash> what do I put for flip value
[02:02] <undercash> what i want to do is to invert like if you were watching in a mirror
[02:03] <undercash> so it looks normal even if the original scene is completely inverted
[02:05] <undercash> maybe the simple hflip filter option
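A minimal sketch of the mirror effect undercash settled on, chained with the crop from the docs snippet he pasted (filenames hypothetical; hflip alone gives the mirror-image look, no split needed):

```shell
in=input.mp4   # hypothetical input
# crop to the top half of the frame, then mirror horizontally;
# filters in one -vf graph are applied left to right
cmd="ffmpeg -i $in -vf crop=iw:ih/2:0:0,hflip output.mp4"
echo "$cmd"
```

Dropping the crop= stage leaves a plain mirror flip of the whole frame.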
[02:38] <Fusl> hey, could someone help me?
[02:40] <zap0> Fusl, what does the topic say?
[02:41] <Fusl> sorry :<
[02:41] <Fusl> erm
[02:43] <Fusl> i am currently writing a playlist script, which "plays" videos and sends the data out via ffmpeg to an rtmp server... to stay permanently connected to the rtmp i need exactly one ffmpeg process which should not die at any time...
[02:44] <Fusl> now i am `cat`ting a video to a named pipe and if i want to skip this video, i just ctrl+c the cat process and spawn a new one with the same named pipe... the ffmpeg process is still running but it crashed because of these errors:
[02:45] <Fusl> http://pastebin.com/Q28Yr2GB
[02:46] <Fusl> so is there any way to continue using one ffmpeg process which encodes a bunch of files that may be corrupted (because of the ctrl+c when `cat`ting)?
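One hedged sketch of the setup Fusl describes, under the assumption that the clips are pre-muxed to MPEG-TS, a format that tolerates concatenation at arbitrary packet boundaries, so an interrupted cat is less likely to corrupt the stream (server URL, key, and file names are all hypothetical):

```shell
fifo=/tmp/feed.ts          # hypothetical named pipe
# one long-lived ffmpeg reads the fifo and pushes to the rtmp server
cmd_encode="ffmpeg -f mpegts -i $fifo -c:v libx264 -c:a libmp3lame -f flv rtmp://server/live/key"
# feed clips one at a time; ctrl+c this cat and start the next one to skip
cmd_feed="cat clip1.ts > $fifo"
echo "$cmd_encode"
echo "$cmd_feed"
```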
[05:33] <mudkipz> does ffmpeg have audio options analogous to gstreamer's "provide-clock"?
[05:42] <mudkipz> http://pastebin.com/raw.php?i=SGiTSWTn
[05:46] <mudkipz> So I've been trying to stream with ffmpeg since I switched to pulseaudio from just alsa (on account of switching to a new videocard with hdmi out). I originally did stream with alsa by doing some complex .asound stuff but it was problematic so I switched.
[05:46] <mudkipz> The top command in the pastebin is how I've been trying to stream with ffmpeg and pulse.
[05:46] <mudkipz> The bottom command is a similar command for gstreamer that does more or less the same thing.
[05:48] <mudkipz> I've configured my pulseaudio to run at a 48khz sample rate. For some reason, no matter what I set it to, ffmpeg kept detecting it as 48khz, and it wasn't a big deal changing my setup from 44.1khz to 48khz.
[05:50] <mudkipz> That change resolved most of my audio sync issues, except that I still would get audio drift in ffmpeg. I ended up testing on gstreamer and got audio drift there too.
[05:51] <mudkipz> In gstreamer however, I was able to fix it by using the provide-clock property on the pulsesrc source.
[05:51] <mudkipz> I'm wondering if there's something similar in ffmpeg.
[05:53] <mudkipz> To clarify, this is for rtmp streaming on websites like livestream, ustream, twitch, justin, etc.. The issue I was having was that over time the audio would lag more and more behind the video.
[05:53] <mudkipz> Not an insane amount, but depending on the content it could be anywhere from a couple seconds per half hour to several seconds in 15 minutes.
[05:56] <mudkipz> I was just checking here and noticed that it has some different options. http://ffmpeg.org/ffmpeg-all.html#pulse
[05:57] <mudkipz> -sample_rate samplerate I suppose would be the proper method for setting the input sample rate for pulse instead of -ar??
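A sketch of the distinction mudkipz is asking about, assuming a default pulse source: -sample_rate before -i is the pulse demuxer's input option, whereas -ar after -i would resample on the output side instead (output filename is hypothetical):

```shell
# capture from pulseaudio at 48 kHz as an input option, write PCM to a wav
cmd="ffmpeg -f pulse -sample_rate 48000 -channels 2 -i default -c:a pcm_s16le capture.wav"
echo "$cmd"
```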
[13:01] <damir__> hello
[13:03] <damir__> in mpeg2video encoder, is it possible to limit I-Frame frequency? I can limit minimum frequency, but I would also like to limit maximum
[13:03] <damir__> ie: force I-Frames every 25 frames, no more, no less
[13:24] <xlinkz0> damir__: doesn't x264 encode mpeg2 as well?
[13:25] <xlinkz0> try this  -r 25 -keyint_min 25 -g 25
[13:29] <damir__> xlinkz0: do you happen to know what i need to set to x264 so it will encode mpeg2?
[13:29] <xlinkz0> just try ffmpeg -i your_file -r 25 -keyint_min 25 -g 25 out
[13:30] <damir__> alright i'll try that
[13:30] <damir__> but your comment about x264 got me interested, and i'd like to try it
[13:30] <damir__> to use x264 as mpeg2 encoder
[13:30] <xlinkz0> if it can use x264, it will
[13:32] <klaxa> x264 encodes to h264 not mpeg2
[13:33] <damir__> ok
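xlinkz0's suggestion spelled out as a full command (a sketch: mpeg2video is assumed as the target codec, and -sc_threshold with a very large value is an assumption for suppressing extra scene-cut I-frames so the keyframe cadence stays exactly 25):

```shell
# -g caps the GOP at 25 frames, -keyint_min floors it at 25,
# so an I-frame lands every 25 frames, no more, no less
cmd="ffmpeg -i input.ts -c:v mpeg2video -r 25 -g 25 -keyint_min 25 -sc_threshold 1000000000 -b:v 5M out.mpg"
echo "$cmd"
```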
[14:06] <smt> hi.. any reason why decoding resets the DTS after a certain period of time? http://pastebin.com/dQunRTaq
[14:12] <smt> can it be related somehow to the detection of I-frames?
[14:17] <fuleo> has anyone been able to do ffmpeg -i /dev/video0 in mac os ?
[14:24] <Mavrik> fuleo, /dev/video0 is a video4linux2 node
[14:24] <Mavrik> obviously, it kinda doesn't work on OS X :)
[14:44] <leandrosansilva> Hello. Is ffmpeg able to decode h264 streams using any kind of gpu accel? I'm not using x264, but the native decoder.
[14:46] <JEEB> x264 is not a decoder
[14:46] <JEEB> it's an encoder
[14:48] <leandrosansilva> ok, so I'm using the right tool :-)
[14:49] <leandrosansilva> I'd like to use gpu to decode some h264 streams. Currently I'm using ffmpeg libraries to read some streams from ip cameras, but I feel the cpu is working too much to do that
[14:49] <JEEB> anyways, there are multiple hwaccels in libavcodec
[14:49] <JEEB> that let you use various hardware acceleration thingies/APIs within libavcodec
[14:50] <JEEB> they aren't as simple to use as plain decoders as they basically make you do the basic setup possibly needed for that given HW decoder
[14:50] <JEEB> you should see what hwaccels you can use
[14:50] <JEEB> and build on top of those
[15:01] <leandrosansilva> JEEB, oops, thx
[15:01] <leandrosansilva> Do you know where I can find any code examples?
[15:01] <leandrosansilva> of using hwaccels
[15:02] <JEEB> it's really specific to the hwaccel you want to use in many ways
[15:02] <JEEB> there was a thread on the users' mailing list on DXVA
[15:02] <JEEB> which is windows-specific of course
[15:05] <leandrosansilva> In this case I need something linux specific
[15:06] <leandrosansilva> thx for your help :-)
[15:06] <JEEB> then look for whatever hwaccels there are, pick one that your hardware and software supports
[15:06] <JEEB> then go off looking for examples, or asking for an example
[15:07] <leandrosansilva> In fact I'm looking for some solution in ffmpeg to then get the hardware :-)
[15:33] <xlinkz0> what does -g do?
[15:35] <JEEB> (maximum) GOP length
[15:35] <JEEB> same as --keyint with x264cli
[15:38] <xlinkz0> thanks
[15:46] <bunniefoofoo> is a bufsize required when using mpeg2 muxer? The reason I ask is, I have a 480p (8mb) mpeg2 mux w/o bufsize set, and a 720p (20mb) one without it set. At 720p the muxer says "buffer underflow" and "packet too large"
[15:46] <JEEB> what is this "mpeg2 muxer" you speak of?
[15:46] <bunniefoofoo> I think I mean "mpeg"... sorry
[15:47] <bunniefoofoo> the video codec is mpeg2video
[15:47] <JEEB> is that mpegps?
[15:47] <bunniefoofoo> yes
[15:48] <bunniefoofoo> also I am not setting muxrate. I am however setting muxdelay and muxpreload if that makes a difference
[15:49] <JEEB> I have no idea about those, but that in my opinion sounds somewhat closer to it. To be honest I have as much idea about the mpeg-ps muxer as you do. I'm taking a quick look at the muxer
[15:49] <JEEB> the warning comes from the muxer, right?
[15:49] <JEEB> not from the encoder
[15:49] <bunniefoofoo> yeah it says "[mpeg @blablah] packet too large" etc
[15:50] <JEEB> can't see that warning in mpeg.c or mpeg.h in libavformat
[15:50] <JEEB> ah, mpegenc.c
[15:51] <JEEB> av_log(ctx, AV_LOG_ERROR, "packet too large, ignoring buffer limits to mux it\n");
[15:51] <JEEB> this?
[15:51] <bunniefoofoo> thats the one
[15:52] <JEEB> I will guess you will want to use muxrate :P
[15:52] <JEEB> just this little hunch
[15:54] <bunniefoofoo> I think I tried using muxrate and it didn't help, give me a minute to verify that
[15:55] <JEEB> then don't set muxpreload since IIRC that affected dts or whatever around there
[15:56] <bunniefoofoo> yeah with muxrate set, i get a lot more underflow errors (even though muxrate is 2x the bitrate)
[15:57] <bunniefoofoo> packet too large error still there too
[15:57] <bunniefoofoo> it used to be (in ff 0.8) that I had to set preload and maxdelay for the muxer to work right
[16:01] <bunniefoofoo> ok without maxdelay and preload I get much more error prints and eventually segfault
[16:04] <JEEB> lol
[16:05] <JEEB> I think I'd recommend you to try and find someone who knows the mpeg muxer well
[16:05] <JEEB> and see if you can get it either acknowledged as a bug or incorrect usage
[16:05] <JEEB> why are you using it btw?
[16:06] <bunniefoofoo> I'm using it to stream some real-time encoding video over lan
[16:06] <JEEB> any reason for exactly it
[16:06] <JEEB> instead of any of the other containers that can be used for such usage?
[16:06] <bunniefoofoo> for performance reasons mainly
[16:06] <JEEB> hm?
[16:07] <JEEB> I... don't think different containers matter much at all
[16:07] <bunniefoofoo> I have tried mpegts however on the decode side it seems buggy
[16:07] <JEEB> if both sides are libavformat or something, try mkv, nut for example
[16:07] <bunniefoofoo> well I don't think mp4 would work since the header gets written at the end
[16:07] <JEEB> mp4 with fragments feature on would work
[16:08] <JEEB> also it's not a header if it doesn't come first, it's the "index". And the fragments feature lets you pretty much not need it to play it
[16:08] <bunniefoofoo> also, mp4 container doesn't support mpeg2video, as a standard anyways (correct?)
[16:09] <Paranoialmaniac> yes
[16:09] <Paranoialmaniac> mp4 doesn't support mpeg2video
[16:11] <Paranoialmaniac> gpac muxer can mux mpeg2video into mp4 though
[16:53] <bunniefoofoo> mkv seems to work for streaming, thanks for the suggestion jeeb
[16:54] <bunniefoofoo> no more mux problems, for the moment!
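A hedged sketch of streaming mkv over a plain TCP socket, the setup bunniefoofoo ended up with (host, port, and codec choices are hypothetical; the ?listen option makes the sender wait for the receiver to connect):

```shell
# sender: remux to matroska and listen for one client
cmd_send="ffmpeg -i input.mp4 -c copy -f matroska tcp://0.0.0.0:9000?listen"
# receiver on another machine (could also be another ffmpeg instance)
cmd_recv="ffplay tcp://192.168.1.10:9000"
echo "$cmd_send"
echo "$cmd_recv"
```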
[17:12] <Narsilou> Hello, I'm receiving warnings: "[swscaler @ 0xbdd1fe0] Warning: data is not aligned! This can lead to a speedloss". It's a pointer I am not allocating myself. Before starting big changes, what is the order of magnitude of the speedloss?
[17:12] <liranz> I'm trying to upgrade to ffmpeg 1.2.1. When creating videos (mp4, x264 with aac-he from fdk) the audio is not synchronised on Mac Safari and QuickTime (it plays well in mplayer). When looking at the file with -i, the video stream now has a "start" field that is not 0. I assume this is what confuses QuickTime. Any way to force this start to 0?
[17:15] <liranz> If you have any other idea why QuickTime (and Safari) would play the audio differently than other player (mplayer, Chrome, etc) please also suggest that.
[17:31] <bunniefoofoo> narsilou, I don't know what that error is all about, I have been seeing it past week or so and not a major difference to be seen
[17:31] <bunniefoofoo> I did some checking and my data seemed to be 16-byte aligned throughout so it is a weird one
[17:33] <liranz> Duration: 00:01:26.79, start: 0.229388, bitrate: 726 kb/s
[17:33] <bunniefoofoo> I am using av_picture_alloc, but there is also av_image_alloc() that seems similar and also has an alignment parameter. I haven't been able to get av_image_alloc to work
[17:35] <bunniefoofoo> liranz, I believe the start means there is basically a mux delay going on or an rc buffer issue. Try encoding without the rc buffer enabled
[17:35] <Narsilou> bunniefoofoo: Thanks.
[17:36] <liranz> How do I do that? BTW, I only use ffmpeg to encode the audio, the video stream is pre-created with x264
[17:40] <bunniefoofoo> I believe the "start" basically means the first dts (or maybe pts) that appears in the stream. This is probably why your video is delayed relative your audio. You would have to either change the delay on the video or the audio to sync it up
[17:40] <bunniefoofoo> but since it plays fine in mplayer maybe it is already correct
[17:41] <bunniefoofoo> you should try transcoding the resulting file with -loglevel 99 to see if there is anything wrong with the file
[17:41] <bunniefoofoo> or maybe try ffplay, it will output errors too
[17:41] <liranz> I would assume that either mplayer or QuickTime just don't mind that 'start' value
[17:42] <bunniefoofoo> I don't think that is very likely, it is common to see that
[17:42] <liranz> Another interesting thing: When encoding the file with faac, the start value is very small, so the difference is not noticeable, when using fdk-aac I get the large start value
[17:43] <bunniefoofoo> ok,so the audio is what has the start value, or the video?
[17:43] <liranz> I could just revert to faac for the time being, until I understand how to get fdk back to work
[17:43] <liranz> The video has the start value. But the video stream is created elsewhere with the x264 binary
[17:44] <bunniefoofoo> the start value is a container property so probably not related to the video
[17:45] <liranz> This is how my command line looks after reverting to faac: ffmpeg -y -i generated_by_x264.mp4 -ar 22050 -ac 1 -f s16le -acodec pcm_s16le -i rawaudio.s16le -c:v copy -c:a libfdk_aac -b:a 64k -ar 22050 -ac 1 output.mp4
[17:45] <bunniefoofoo> if its a container issue, try using mkv to see if you get sync back
[17:46] <liranz> My problem is that Safari (and QuickTime) does not play MKV, so that wouldn't help.
[17:46] <liranz> Mplayer (that plays) mkv plays the file ok
[17:46] <bunniefoofoo> ok so probably it is mp4 mux issue
[17:47] <liranz> BTW, I said I pasted the faac parameters, but reading it again shows that I pasted the libfdk_aac command line, but it's not much different
[17:49] <bunniefoofoo> you might want to try -muxdelay and/or -muxpreload; it won't fix a sync problem but maybe it affects a buggy player
[17:49] <liranz> Is there a way for me to control the mp4 mux from the command line? I would prefer using fdk aac-he
[17:50] <liranz> thanks
[17:50] <bunniefoofoo> try adding -muxdelay 0.1 -muxpreload 0.2 (the defaults are 0.5/0.7)
[17:52] <smt> what's the best way to diagnose (in code) a sync issue between audio/video when decoding?
[17:53] <liranz> I added it with the fdk aac encoder, and still mplayer plays ok but QuickTime has a sync issue between audio and video
[17:54] <liranz> By it I mean -muxdelay 0.1 -muxpreload 0.2
[17:58] <liranz> Another interesting point -- When using fdk to encode AAC-LC, the start value is less than half of the AAC-HE value, at start: 0.092880. This is still noticeable
[17:58] <liranz> I will revert to faac for now, and hopefully have more time to debug it a bit later
[18:00] <bunniefoofoo> liranz, I might have given you those params in the wrong order; preload should be < delay
[18:02] <bunniefoofoo> smt, possibly by feeding playback through a recording device then lining up some periodic video feature and audio feature
[18:03] <bunniefoofoo> using a timing test clip of course
[18:03] <bunniefoofoo> with the beeps and bops in it
[18:04] <smt> well, I can observe the delay, and when looking at the DTS values, I think that delay is expected (but unwanted)
[18:05] <bunniefoofoo> decoding sync is a tough problem. There are lots of causes of the delay, including speaker system, sound card, wrong pts values...
[18:05] <smt> bunniefoofoo: http://pastebin.com/dQunRTaq
[18:05] <smt> I find it strange, although I'm ignorant in what comes to A/V sync, that the DTS is reset to 0
[18:05] <bunniefoofoo> you should be using PTS for sync if you can
[18:06] <smt> but that's up to the decoder, right?
[18:07] <smt> I'm not telling the codecs to use DTS over PTS
[18:07] <smt> I believe avformat_find_stream_info introduces unwanted delay that I can't recover
[18:07] <bunniefoofoo> typically you read the pts value off decoded frames as they come out of the decoder, then use that to line up audio and video display
[18:08] <bunniefoofoo> the values on the packets aren't too useful unless you are remuxing or seeking
[18:08] <smt> I'm decoding from livestream
[18:08] <smt> are you saying I must have my own clock for decoding?
[18:08] <smt> I thought the demuxer took care of  that
[18:08] <bunniefoofoo> no, just use the pts values off of AVFrame coming out of decoder is all
[18:09] <bunniefoofoo> demuxer isn't going to line up audio and video for you, you have to do that yourself
[18:09] <bunniefoofoo> make sure to enable GEN_PTS flag on format context too
[18:10] <smt> Ah, didn't know about that. Can you give me a pointer on which place should I look for lining up PTS's correctly for displaying?
[18:10] <liranz> bunniefoofoo, I tried the reversed values of the preload and delay, and still mplayer plays okay, QuickTime plays with a/v async
[18:11] <liranz> I will just revert back to faac for now, until I have more time to debug the fdk
[18:11] <bunniefoofoo> yeah, use what works I guess. Any reason to use fdk?
[18:12] <bunniefoofoo> smt, basically the pts value is when the frame should appear on screen. You have to insert delays into your draw loop and/or drop frames if needed to try to make the frame appear at the right time
[18:13] <liranz> fdk allowed me to encode with slightly lower bitrates. Most people don't hear any difference
[18:13] <liranz> Thanks for the help!
[18:14] <bunniefoofoo> no problem, I am using fdk myself (but encoding both audio and video) and will look out for this issue too
[18:14] <smt> bunniefoofoo, my loop is currently bound to av_read_frame, which might explain a lot of things - should I create a draw loop with timer set to 1/videoFPS?
[18:15] <bunniefoofoo> smt, basically, but subtract off the time it took to process the frame, then delay the remainder
[18:16] <smt> the problem is that requires a packet buffer, and since I want low latency, won't that contribute to adding delay?
[18:16] <smt> my draw loop must fetch packets from the buffer that is filled by av_read_frame - is this right?
[18:18] <bunniefoofoo> the only reason to buffer packets is to sync audio up with video, since audio packets usually appear before video in the stream
[18:19] <smt> exactly.. that's what I have observed too. So am I correct to assume such a buffer is required then?
[18:20] <bunniefoofoo> yes, for sure a buffer somewhere, probably on video packets at least
[18:22] <smt> what's the best reason for not depending on av_read_frame to display packets as received? trying to understand this
[18:25] <smt> I see most code examples that render the frames inside a while(av_read_frame()) loop
[18:25] <bunniefoofoo> the packets typically arrive out of order vs their display order. This is due to codec features like b-frames. A buffer is needed to process these out-of-order packets and also sync up audio.
[18:26] <smt> got it -- and that is up to the implementation if I understood correctly
[18:27] <smt> would you sort the packets on the buffer according to their PTS value?
[18:27] <phr3ak> what is that 50 tbc?
[18:27] <phr3ak> Video: mpeg2video (Main), yuv420p, 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
[18:27] <bunniefoofoo> sure, also the codecs themselves maintain frame buffers internally. There are lots of sources of delay but if you have a low-delay codec there doesn't have to be a lot in the whole system
[18:28] <bunniefoofoo> smt, no, the demuxer already puts the packets in optimal order when you received them (typically)
[18:28] <smt> I'm decoding with x264 (fastdecow, zerolatency) and aac, but I notice a 500-800ms delay in audio
[18:29] <smt> I've noticed AVCodecContext (video)::pts_correction_num_faulty_pts increases over time (1 or 2 units per second)
[18:29] <smt> while the audio one is steady at 1
[18:29] <bunniefoofoo> not sure about that
[18:30] <bunniefoofoo> might be the number of times GEN_PTS generated a pts value for you
[18:30] <bunniefoofoo> lots of codecs don't put a pts value on every packet
[18:31] <bunniefoofoo> That 500-800ms could be the sound card, for example my sound card buffer here defaults to 11,000 samples or so or 0.25 seconds
[18:34] <smt> hmm, that's not the case I think.. the audio is clear
[18:35] <bunniefoofoo> if you are playing back a live recording and getting 500-800 ms I think that is pretty good. There is buffering in the recording hardware and the playback and the sound card going on...
[18:37] <smt> the problem is that it is not synched with the video
[18:37] <smt> I should be able to compensate that delay
[18:37] <smt> but I dont know how
[18:39] <bunniefoofoo> once you have video playing and the right frame rate, and audio playing, now you have to delay the video (typically) until the audio catches up to the same PTS value as in the audio stream
[18:42] <bunniefoofoo> there are different ways of doing a/v sync with various levels of complexity... VLC for example time-stretches the audio rather than slow up the video to achieve sync
[18:43] <smt> the problem is that the examples are scarce or not optimized for live streaming
[18:43] <bunniefoofoo> the simplest I think is to delay the video display until the presentation time syncs with audio, and also be able to drop video frames if it gets too slow to keep up with audio.
[18:44] <smt> I don't necessarily need to line up the frames.. if an older one arrives, I think I should drop it
[18:44] <bunniefoofoo> you can't really do that; only if your stream contains nothing but I-frames I think
[18:45] <bunniefoofoo> or maybe you could drop an entire GOP
[18:45] <bunniefoofoo> otherwise the decoder isn't going to work
[18:45] <smt> Hmm, it contains I-frames. I could still feed it to the decoder but not draw on screen
[18:45] <smt> but that would probably mess with the PTS on the decoder side, which I don't want
[18:45] <bunniefoofoo> yes, but usually it is the decode part that is slow, not the display part
[18:46] <smt> but my measurements indicate decode+render < 1/fps
[18:46] <smt> so I think it's not  a perf problem
[18:46] <smt> more likely due to PTS
[18:46] <bunniefoofoo> yes, if you are not delaying the video display or dropping decoded frames, you can't sync with the audio, plain and simple
[18:48] <smt> so dropping decoded frames is more correct than dropping packets in the incorrect order?
[18:48] <smt> even though my stream has I-frames
[18:49] <bunniefoofoo> yes, to drop packets would have to be 100% iframes
[18:49] <smt> right, because then the decoder will not know what to do with the new data
[18:50] <bunniefoofoo> assuming that would even work with your particular codec
[18:50] <bunniefoofoo> right, P and B frames have dependencies on other frames for example
[18:50] <smt> it's x264, so I think it would - but I agree with you
[18:52] <bunniefoofoo> typically you only drop packets maybe if there was some corruption detected and you want to discard until th enext i-frame arrives
[18:52] <smt> would it be a good strategy to compare last_video_pts and drop the decoded video frame if decoded_video_pts > last_audio_pts ?
[18:52] <smt> or too simple?
[18:53] <bunniefoofoo> no, in that case you have to delay the video frame by video_pts - audio_pts
[18:53] <bunniefoofoo> if  video_pts < audio_pts then drop, too late to display it
[18:57] <bunniefoofoo> the key to good sync here is that last_audio_pts was the last audio buffer you sent to sound card, *then* you need to count the milliseconds until just before the video would be displayed
[18:58] <DonGnom> is there any way to easy resync audio and video when audio and video desync grows?
[18:59] <bunniefoofoo> in a video player or encoder?
[18:59] <DonGnom> bunniefoofoo: currently i watch it with vlc and need to readjust it (atm iam > 30 seconds) but reencoding would be ok too
[19:00] <bunniefoofoo> so the a/v drift is basically in the video file itself due to poor encoding
[19:00] <DonGnom> bunniefoofoo: thought so yes
[19:02] <bunniefoofoo> only solution to poor encoding is re-encoding... one approach is to demux (split into separate audio/video files), then time-stretch or shrink the audio to match the video duration
[19:02] <DonGnom> i'm not very good at (re)encoding so how can i do that?
[19:03] <bunniefoofoo> basically has to be done in non-linear video editor
[19:03] <Fusl> anyone knows a good technology/software to build a streaming server? i would use ffmpeg but ffmpeg always crashes/makes corrupted output when switching between input videos
[19:03] <bunniefoofoo> ffserver ?
[19:04] <DonGnom> bunniefoofoo: that means?
[19:04] <bunniefoofoo> non-linear like adobe premier, final cut pro, etc
[19:04] <DonGnom> i have neither.
[19:05] <bunniefoofoo> there are free alternatives, I'm not really familiar with them
[19:06] <smt> dropping when video_pts < audio_pts had a huge negative impact.. it was almost dropping every decoded frame
[19:06] <Fusl> something like ffserver but which streams data to clients in realtime; as far as i can see ffserver waits for the full video
[19:07] <Fusl> and ffserver drops connections after the video has been played
[19:07] <bunniefoofoo> dongnom, ffmpeg has an audio stretch/squeeze ability via swresample filter, however it could be tough to get it dialed in correctly
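The swresample stretch/squeeze bunniefoofoo mentions is reachable from the CLI through the aresample filter; a sketch, with hypothetical filenames and the async value (the maximum correction in samples per second) chosen arbitrarily:

```shell
# stretch/squeeze the audio toward its timestamps while copying the video untouched
cmd="ffmpeg -i desynced.mkv -c:v copy -af aresample=async=1000 resynced.mkv"
echo "$cmd"
```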
[19:09] <bunniefoofoo> vlc has a streaming server, could be controlled with some shell script I think
[19:10] <Fusl> vlc?
[19:10] <Fusl> are you kidding me? O.
[19:10] <Fusl> O.o
[19:10] <bunniefoofoo> no ;-)
[19:10] <JEEBsv> VLC is actually pretty good for basic streaming
[19:10] <JEEBsv> much less sanity loss than with ffserver
[19:10] <smt> bunniefoofoo, I've had some success using swr_set_compensation, but I'm not really sure I should be using it
[19:10] <Fusl> i already built my streaming server in node.js but ffmpeg crashes all the time because i skip from one video to the next
[19:11] <smt> is this some kind of audio stretching?
[19:11] <bunniefoofoo> smt, not sure what you mean
[19:12] <smt> I've noticed that when I resample audio and use swr_set_compensation, the audio is synched
[19:12] <smt> but I must confess I have no idea what it does
[19:12] <smt> it was a long shot :)
[19:12] <bunniefoofoo> smt, I think your problem is, you need to be synced before you even try playing any audio or video (basically, start from as close to synced state as you can), if not you might never catch up
[19:13] <smt> what if the network has packet loss to some degree?
[19:13] <smt> will I be able to ever recover?
[19:14] <bunniefoofoo> sure, just stop everything, then go back to your initial-synced-but-not-playing yet state again
[19:15] <Fusl> so is there a way to prevent ffmpeg from crashing when jumping around videos?
[19:16] <smt> still not sure why the drop rate is so high when dropping decoded video frames with decoded_video_pts < last_audio_pts (lots and lots of dropped frames)
[19:16] <bunniefoofoo> av_seek_frame() and avcodec_flush_buffers() and pray... no really it can work if you do it right
[19:16] <smt> in your experience, would that pattern suggest something?
[19:17] <bunniefoofoo> smt, you need to account for the delay in the sound card and the delay since the last buffer went to the sound card, maybe that is the problem?
[19:17] <smt> for me, it means audio is always arriving after video, which is not what usually happens, right?
[19:18] <bunniefoofoo> well the audio arrives first, but gets buffered in the sound card. you have to account for that since the presentation timestamp for audio must reflect reality of when the audio hits the speakers
[19:19] <smt> but the soundcard does not give me any kind of presentation timestamp
[19:19] <smt> It's hard to know that delay that you mention about the sound card.. the only thing I know is the IO buffer duration
[19:19] <bunniefoofoo> IO buffer duration will get you pretty close I think
[19:20] <bunniefoofoo> that will be an offset against the audio_pts in the decoded audio frames
[19:20] <smt> in my case, my buffer io is 0.016 seconds
[19:21] <smt> should I subtract 16 from every audio_pts value?
[19:23] <bunniefoofoo> so basically that means, when the audio callback is called (to fill the buffer) it will be about 0.016s before the buffer starts to be consumed by sound card. This is because I assume a double-buffer approach is used.
[19:23] <bunniefoofoo> there might also be an api to read the current delay in the sound system (what API are you using)
[19:24] <smt> I use a ringbuffer. I add decoded samples to the ringbuffer and fetch a given nb of samples when the audio callback is called
[19:26] <bunniefoofoo> when the audio callback is called, it would be best to set the last_known_audio_pts at this point to your best ability
[19:26] <bunniefoofoo> then record the time that the callback exits
[19:28] <smt> but when the audio callback is called I fetch samples, not frames, so I can't set the last_known_audio_pts
[19:28] <smt> I'm clearly missing something :/
[19:28] <bunniefoofoo> now, you can have an "estimated audio pts" that is last_known_audio_pts - sound_card_delay + time_since_last_callback
[19:30] <smt> and where will I use the "estimated audio pts" again?
[19:31] <bunniefoofoo> you may want to buffer packets and do audio decode in the callback, (or have a list of buffers+pts values)
[19:33] <smt> but how does that affect the next packet being decoded? By overriding pkt->pts with the estimated_audio_pts?
[19:34] <bunniefoofoo> you're going to recalc the estimated audio pts every time you compare to the video pts
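A minimal sketch of the estimate bunniefoofoo describes (the 0.016 s figure is smt's IO buffer duration from above; the function and variable names here are hypothetical, not from any real API):

```python
import time

SOUND_CARD_DELAY = 0.016  # seconds; the IO buffer duration quoted above

last_known_audio_pts = 0.0          # PTS of the samples last handed to the callback
last_callback_time = time.monotonic()

def on_audio_callback(pts_of_samples):
    """Record the PTS and the wall-clock time each time the callback fires."""
    global last_known_audio_pts, last_callback_time
    last_known_audio_pts = pts_of_samples
    last_callback_time = time.monotonic()

def estimated_audio_pts():
    """Approximate PTS of the audio coming out of the speakers right now."""
    time_since_last_callback = time.monotonic() - last_callback_time
    return last_known_audio_pts - SOUND_CARD_DELAY + time_since_last_callback
```

The value returned by `estimated_audio_pts()` is what gets recomputed and compared against the video PTS on every sync decision.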
[19:37] <Fusl> lol? when trying to listen on udp://127.0.0.1:1234 i get this error: udp://127.0.0.1:1234: Input/output error
[19:37] <Fusl> this worked yesterday O.o
[20:43] <DonGnom> bunniefoofoo: so my audio is 41:15.32 and my video is 43:00.99
[20:52] <LithosLaptop> DonGnom: does the motion of the video seem too fast/too slow?
[20:52] <DonGnom> LithosLaptop: no i have an increasing delay between audio and video.
[20:53] <DonGnom> i have extracted the audio stream with ffmpeg -i <file> -map 0:1 -c:a copy <file>.ac3 and now see that above mentioned time difference.
[20:53] <DonGnom> i think i need to use the atempo audio filter to stretch the audio to match the video
[20:53] <LithosLaptop> what is the frame rate of the video?
[20:53] <DonGnom> 23.98 fps
[20:54] <DonGnom> LithosLaptop: http://paste.codeaddicted.de/?2ee9e0f8eb990116#dG5X15syMeptsQRYmo5jMdgz6jZsxOGWz/2RcFT64hM=
[20:55] <DonGnom> (the german stream is the one with the delay, haven't tested the english one)
[20:58] <LithosLaptop> o ok, yeah probably will have to stretch the audio
[20:59] <DonGnom> hm currently working with -filter:a 'atempo=0.5' does not change anything :(
[20:59] <DonGnom> the duration stays the same.
[21:01] <DonGnom> complete line: ffmpeg -i video.mkv.ac3 -map 0:0 -filter:a 'atempo=0.5' video.mkv.ac3.longer.ac3 (only the audio stream)
[21:12] <DonGnom> hm when copying the video i get "[matroska @ 0x6b9b10] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 83 >= 42 av_interleaved_write_frame(): Invalid argument"
[21:33] <Fusl> is this the correct syntax? i get a lot of "*** 1 dup!" messages and the stream is buffering every few seconds: avconv -v debug -re -loop 1 -i http://data.motor-talk.de/data/galleries/0/7/1718/47027168/trollface-2472794703751746068.jpg -i http://dsl.tb-stream.net/ -acodec libmp3lame -b:a 128k -vcodec libx264 -b:v 100000 -r 30 -preset ultrafast -s 1280x720 -tune zerolatency -f mpegts - > /tmp/stream
[21:35] <Fusl> also output is this: frame= 301 QP=42.00 NAL=2 Slice:P Poc:102 I:0    P:0    SKIP:3600 size=100 bytes
[21:36] <Fusl> and this: frame=  304 fps= 13 q=42.0 Lsize=     334kB time=10.13 bitrate= 269.8kbits/s dup=50 drop=0
[21:36] <Fusl> aand at least this: video:125kB audio:159kB global headers:0kB muxing overhead 17.540409%
[21:37] <pyBlob> I can read a dshow device using:
[21:37] <pyBlob> ffmpeg -f dshow -i video="DEVICENAME"
[21:38] <pyBlob> what additional parameters do I have to specify to set the input pixel_format/size/framerate?
[21:39] <pyBlob> -s WIDTHxHEIGHT -r FRAMERATE works, but how do I specify the pixelformat?
[21:39] <pyBlob> I want to use the format "bgr24"
[21:40] <sacarasc> -pix_fmt, IIRC.
[21:41] <sacarasc> I may be wrong, though.
[21:43] <Synthead> I have a 32-bit float WAV file: wake up edited.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 32 bit, stereo 44100 Hz  I want to make it an ogg.  How can I do this with ffmpeg?
[21:51] <LithosLaptop> ffmpeg -i "wake up edited.wav" -acodec libvorbis -ab 128k output.ogg
[21:53] <pyBlob> sacarasc: thanks, that worked
[21:55] <Synthead> that produces audio that sounds like this: http://8bitboobs.com/stuff/wakeup.ogg
[21:55] <Synthead> LithosLaptop: ^
[21:56] <LithosLaptop> ohh
[21:57] <LithosLaptop> maybe ffmpeg is detecting the input incorrectly
[22:02] <Synthead> LithosLaptop: http://pastie.org/8040244
[22:07] <LithosLaptop> try: ffmpeg -acodec pcm_f32le -i "wake up edited.wav" -acodec libvorbis -ab 128k output.ogg
[22:10] <Synthead> LithosLaptop: http://8bitboobs.com/stuff/wakeup.ogg
[22:11] <LithosLaptop> hmm, that's strange
[22:14] <Synthead> LithosLaptop: I tried all the PCM 32-bit decoders and they all produce garbage
[22:14] <Synthead> maybe ffmpeg can't do it
[22:14] <LithosLaptop> without the pcm_f32le ffmpeg picked up the wav input as pcm_s32le. 32bit signed integer, little endian
[22:14] <LithosLaptop> oh
[22:15] <LithosLaptop> maybe it isn't 32bit?
[22:15] <LithosLaptop> I dunno, this is really weird
[22:16] <Synthead> LithosLaptop: it's certainly 32-bit
[22:16] <Synthead> LithosLaptop: I created it in an older version of FruityLoops :)
[22:18] <LithosLaptop> have you tried exporting it to 24bit integer, little endian from FruityLoops? if it is possible
[22:18] <brontosaurusrex> LithosLaptop, how about just using oggenc?
[22:18] <LithosLaptop> yeah try oggenc
[22:19] <brontosaurusrex> and iirc vorbis is a quality-based encoder, so don't use bitrate bs
[22:19] <brontosaurusrex> something like -q 5 should work
[22:23] <Synthead> LithosLaptop, brontosaurusrex: oggenc produces similar results.  I don't have the option to render 24-bit at this point
[22:24] <brontosaurusrex> so this is just one song or you are trying to convert 5000000 albums?
[22:25] <Synthead> brontosaurusrex: http://pastie.org/8040316
[22:25] <Synthead> brontosaurusrex: a file, why?
[22:25] <Synthead> brontosaurusrex: are you asking if this needs to be cli/scripted?
[22:25] <brontosaurusrex> yes
[22:26] <Synthead> brontosaurusrex: no, it doesn't need to be scripted
[22:26] <brontosaurusrex> ok, then just fire-up some audio editor, like audacity and see if that opens your wav
[22:26] <brontosaurusrex> then export 24 or 16 bit wav and do the oggenc again
[22:26] <Synthead> brontosaurusrex: it sounds the same as the other files provided.  the waveform is shown as clipped
[22:27] <brontosaurusrex> in audacity?
[22:28] <Synthead> brontosaurusrex: yes
[22:28] <brontosaurusrex> and how do you know your file is ok?
[22:28] <brontosaurusrex> is there something that can play it properly?
[22:28] <Synthead> brontosaurusrex: I've heard it played fine in FL years ago
[22:28] <Synthead> s/played/play
[22:31] <brontosaurusrex> dunno, i'd try to use sox to convert it to something else then
[22:32] <Synthead> brontosaurusrex: sox doesn't work either
[22:34] <LithosLaptop> if the file is fine then I think it might not be 32bit float
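LithosLaptop's earlier observation (the WAV being picked up as pcm_s32le instead of pcm_f32le) would explain the clipped sound: the bit pattern of any float32 sample between roughly 0.001 and 1.0, reinterpreted as a signed 32-bit integer, lands near half of integer full scale regardless of the sample's actual amplitude, flattening the dynamics entirely. A small illustration:

```python
import struct

def misread_as_s32(sample):
    """Pack a float32 sample, then reinterpret the same bytes as int32."""
    return struct.unpack('<i', struct.pack('<f', sample))[0]

FULL_SCALE = 2**31 - 1  # s32 full scale

loud, quiet = misread_as_s32(1.0), misread_as_s32(0.001)
# Both land around 45-50% of full scale despite the 1000x amplitude gap
print(loud / FULL_SCALE, quiet / FULL_SCALE)
```

This is consistent with the "garbage" from every integer PCM decoder tried above; it doesn't explain why forcing `-acodec pcm_f32le` also failed, though, which suggests the header or the data itself may not match what the file claims.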
[22:38] <pyBlob> I'm trying to split a stream using tee, but it says "Output file #0 does not contain any stream"
[22:38] <pyBlob> ffmpeg -f dshow -pix_fmt bgr24 -i video="VIDEODEVICE" -vframes 100 -f tee "[f=image2]test_%03d.png|[f=image2pipe]pipe:"
[22:39] <Synthead> LithosLaptop: I'm sure it is :p
[22:42] <LithosLaptop> how big is the WAV?
[22:42] <LithosLaptop> I could maybe check it out
[22:48] <pyBlob> http://pastebin.com/HmgiA6BE
[23:05] <jdolan> weird. i ask ffmpeg to -vsync cfr -r24 to a .flv, and it does.
[23:05] <jdolan> for the first 1.0 seconds. lol.
[23:06] <jdolan> then it never duplicates video frames for the remaining 7s or so of the video.
[23:14] <llogan> pyBlob: try adding "-map 0:v"
[23:14] <schtinky> hey all. I'm grabbing frames from a dvb tuner with "ffmpeg -skip_frame nokey -i pipe -r 2 -s 1280x720 -b:v 1000k images%9d.jpg"...
[23:14] <schtinky> yet the images that result are more pixelated than they should be. the jpg transcode is degrading them
[23:15] <schtinky> I've tried changing -b:v to higher values, but it doesn't seem to help
[23:15] <pyBlob> llogan: where should I put it?
[23:15] <llogan> as an output option
[23:15] <llogan> http://ffmpeg.org/ffmpeg-formats.html#tee
[23:15] <schtinky> I think I'll try exporting the iframes with png and seeing if they're still pixelated. If so, that means the iframes themselves are pixelated
[23:15] <schtinky> but if you have other ideas, please let me know
[23:15] <llogan> schtinky: use "-qscale:v 2"
[23:16] <llogan> otherwise...
[23:16] <schtinky> I'll try qscale and see how it goes. Thanks
[23:16] <pyBlob> llogan: yay, that did it
[23:16] <llogan> schtinky: ...instead of "-b:v 1000k"
[23:16] <schtinky> yeah I was just going to ask, llogan. thanks
[23:17] <llogan> pyBlob: i guess it requires explicit mapping
[23:23] <pyBlob> so ... swapping the camera lens and then heading out to capture some planets, thank you ^^
[23:24] <llogan> which planets?
[23:24] <pyBlob> saturn
[23:24] <llogan> what kind of telescope?
[23:25] <pyBlob> a reflecting telescope
[23:25] <pyBlob> Unitron f=1000mm d=75mm
[23:25] <llogan> stop by with some pics sometime if they turn out
[23:26] <pyBlob> I'll do that ^^
[23:27] <pyBlob> the clouds were faster than me :(
[23:27] <pyBlob> the weather forecast looked so nice ... perhaps tomorrow
[23:39] <bunniefoofoo> is there a way to tell (programmatically) if a codec supports threading?
[23:40] <bunniefoofoo> for encoding (or decoding)
[23:40] <ubitux> look at AVCodec.capabilities
[23:41] <bunniefoofoo> CODEC_CAP_AUTO_THREADS ?
[23:41] <bunniefoofoo> i see it now, thanks
[23:42] <ubitux> there are other flags as well
[23:43] <bunniefoofoo> so if CODEC_CAP_SLICE_THREADS is there, what has to be done to achieve threading? So far, I have only used thread_count
[23:46] <jdolan> wow, bummer. amerge breaks on my streams while amix works just fine.
[23:49] <bunniefoofoo> do I need to supply AVCodecContext->execute ?
[00:00] --- Fri Jun 14 2013

