[Ffmpeg-devel-irc] ffmpeg.log.20140724
burek
burek021 at gmail.com
Fri Jul 25 02:05:01 CEST 2014
[00:02] <hilacha> anyone have knowledge of the g723.1 codec? i need to record it passively, but it appears the g723.1 stream inside the packets is invalid when using ffmpeg to decompress it :(
[00:02] <hilacha> the packets are 24 bytes long, as in the specification...
[01:29] <relaxed> llogan: thanks for looking into it
[02:19] <saratoga> does ffmpeg have a reasonably optimized routine for unpacking 12 bit pixels/samples into 16 or 32 bit?
[03:34] <t4nk996> Hi, I am trying to use the ffmpeg library to seek an audio file (DASH FMP4 AAC MED). The avformat_seek_frame and avformat_seek_file are somehow failing to do so. I checked online and it seems there have been similar bugs in this context, but I wasn't able to find an exact solution. Can someone help me with this?
[04:04] <troy_s> Anyone getting "types may not be defined in 'sizeof' expressions" in mixed C++ code?
[04:05] <troy_s> (bprint)
[04:10] <troy_s> Is fflogger a bot?
[04:12] <troy_s> http://pastebin.com/C88vFp91
[04:12] <troy_s> Seems silly pasting compiler output.
[04:31] <t4nk996> With reference to seeking using ffmpeg: I have got my audio in place, but my video shows gray frames initially and starts back from the beginning once the file is decoded, though the audio is correct. Can anyone tell me how to make the decoder not produce the starting frames for the video once it loops back? Can I flush these frames?
[04:40] <troy_s> t4nk996: Hrm. Are those bi-directionally encoded frames I wonder?
[04:41] <t4nk996> troy_s : Yep!
[04:41] <troy_s> t4nk996: Hard to work around that no? The decoder needs to decode up to the point that it can decode those frames.
[04:42] <t4nk996> Which means once it reaches the end it starts decoding the starting frames ? Right ?
[04:42] <troy_s> t4nk996: I'm not an expert on B frames, but it can be an arbitrary number of frames.
[04:43] <troy_s> t4nk996: So if you had five frames, and the bidirectionals were up to 3, you'd need to hit frame four, and then frames 1, 2, and 3 can be decoded correctly.
[04:43] <troy_s> t4nk996: If you have the encoding in your control, maybe you can control where the Bs begin.
[04:44] <t4nk996> Oh you are right .. I can actually identify the keyframes and tell the decoder to just seek to that frame and after that do a linear skipping
[04:44] <t4nk996> But what I am worried about is the initial frames coming up after the end ..
[04:45] <t4nk996> Because I don't see end frames to depend on the initial frames
[04:45] <t4nk996> Don't have much idea though..
[04:48] <troy_s> t4nk996: Explain?
[04:48] <troy_s> t4nk996: Following our example, you are saying the three frames end up being decoded at the very end?
[04:49] <t4nk996> So I have separate audio and video files which I am decoding using separate ffmpeg decoders. I want both of them to seek 30 secs in each file.
[04:50] <t4nk996> So now when I see the output both the audio and video start at 30 initially
[04:50] <troy_s> Ah. So chances are that you dive in at the 30 second mark and land on a series of B frames, correct?
[04:50] <t4nk996> But once the video comes to the end. The video decoder is producing the initial frames
[04:50] <troy_s> Yes.
[04:50] <t4nk996> Yep
[04:50] <troy_s> Because that's one of the only ways it makes sense.
[04:51] <t4nk996> you are right !
[04:51] <troy_s> This always happens with bidirectional frames.
[04:51] <t4nk996> but why the initial frames at the end
[04:51] <troy_s> There's a little known factoid that when you decode frame by frame that there are often extra frames at the end, and you have to loop until it's empty.
[04:51] <troy_s> Because it is the only way to decode efficiently. For example, if you dive in and your first three are bi-directional, you can't decode those until the fourth say.
[04:52] <troy_s> But at the fourth, you are decoding a whole frame, so what do you do with the frame?
[04:52] <troy_s> If you toss it and offset it, now your whole thing is out of whack by three frames.
[04:52] <troy_s> (your fourth frame becomes your first if you do this.)
[04:52] <troy_s> So that too is problematic.
[04:52] <troy_s> Make sense?
[04:52] <t4nk996> yep !
[04:53] <t4nk996> It makes a lot of sense. I get your point !
[04:53] <t4nk996> Thats so true !
[04:53] <troy_s> t4nk996: This happens even when decoding from the beginning. https://github.com/chelyaev/ffmpeg-tutorial/issues/7
[04:54] <t4nk996> Right ! Okay so can I decode DASH audio using AVSEEK_FLAG_FRAME
[04:55] <t4nk996> It looks like the header file says that it might fail for some demuxers
[04:55] <t4nk996> I mean frame by frame
[04:55] <troy_s> t4nk996: Actually that's an option... to decode up to your 30 second marker.
[04:55] <troy_s> I still think the bi-directional frame issue can catch you out though, as there's no way to _not_ skip proper decoding order.
[04:57] <t4nk996> Oh Right .. I'll try the frame by frame option and try to find a way out for not landing on B frame. Thanks a lot for helping. I'll definitely let you know if I get through this :)
[04:57] <troy_s> t4nk996: I'm not sure that bi-predictive frames can necessarily be solved to be honest.
[04:57] <troy_s> t4nk996: Simply because it is a case of "I cannot build this frame without a frame somewhere after it."
[04:58] <t4nk996> But key frames should have no dependencies, right ?
[04:58] <troy_s> t4nk996: Which means that every single call to retrieve a frame will fail until it reaches the frame that lets the whole frame be decoded. So the question is ultimately what to do with those skipped beats.
[04:58] <t4nk996> So I can actually land there ! and do linear seeking henceforth
[04:59] <troy_s> Ah. So you mean seek to the first picture frame near your 30 second marker?
[04:59] <troy_s> That would work I think.
[04:59] <t4nk996> Yep !
[04:59] <troy_s> At the cost of being somewhat potentially irregular.
[05:00] <troy_s> (Doubly so if the encode in question is a long take with no motion where no such P frames are likely to be placed under heavier compression.)
[05:00] <troy_s> There simply isn't a super clean way of dealing with it I fear.
[05:01] <t4nk996> Haha ! True ! :)
[05:01] <t4nk996> Thanks a lot though. It was a a lot of help !
[05:01] <troy_s> Glad I could help. One of the few things I have bumped into trying to get frame precise decoding.
[09:23] <luc4> Hello! If I wanted to convert a video from 1080p to 1080i, should I simply transcode adding -flags +ildct+ilme? Is this correct?
[09:29] <dduvnjak> i'm doing a mix of several input audio files into single one, or more precisely several background files + one main one
[09:30] <dduvnjak> "ffmpeg.exe" -y %LIST% -i "Avicii-WakeMeUp.mp3" -filter_complex "amix=inputs=%size%" -ar 44100 -ac 2 -q:a 1 "mix.mp3"
[09:30] <dduvnjak> i did a volume normalization of each of them
[09:30] <dduvnjak> however, when i mix them, the output file's volume drops significantly
[09:30] <dduvnjak> is there a way this can be avoided or fixed?
[09:50] <zenderz> hi all
[09:51] <zenderz> anyone have any experience with the smooth streaming format? seems the muxer has recently been added to ffmpeg. but there are no docs yet
[09:54] <sfan5> zenderz: would be more helpful if you could state your exact problem
[09:56] <zenderz> using the command :
[09:57] <zenderz> ffmpeg -y -i testinput.mpg -c copy -b 10000K -f smoothstreaming output
[09:57] <zenderz> seems to generate the manifest and video/audio fragments
[09:58] <zenderz> however it does not play on any reference smooth streaming players. And I want to understand the command options so i can customise the output
[10:00] <sfan5> zenderz: command options: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/smoothstreamingenc.c;h=5a77ec3f74b96e10b3bd1cb37f62f93480132b46;hb=HEAD#l617
[10:00] <sfan5> (after the static const AVOption options[] = {)
[10:02] <zenderz> thanks. Having a look now
[10:04] <ubitux> ffmpeg -h muxer=smoothstreaming
[10:34] <zafu> hi, is there a video stabilisation filter in ffmpeg?
[10:35] <zafu> or what is my best option to stabilize a jerky smartphone video?
[10:38] <ubitux> yes, vidstab
[10:43] <zafu> is it a utility? can't find it on debian
[10:44] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#vidstabdetect-1
[10:44] <ubitux> http://ffmpeg.org/ffmpeg-filters.html#vidstabtransform
[10:44] <zafu> thanks
[10:55] <zafu> I don't get why there is an input and output file at the vidstabdetect stage
[10:56] <zafu> where it's only detection for the second stage, should I throw out the output video from that stage?
[10:58] <zafu> ah, "dummy.avi" :) now I get it
[11:12] <luc4> Hello! Anyone who knows how I can convert from 1080p to 1080i?
[11:13] <Mavrik> uh
[11:13] <Mavrik> you really shouldn't :)
[11:14] <Mavrik> using interlaced video filter and setting x264opts to tff=1 should do it tho
[11:24] <ubitux> zafu: should probably be replaced with -f null - instead
[11:24] <ubitux> to avoid a pointless encode & muxing
[11:24] <ubitux> it will be faster
[11:25] <ubitux> the dummy.avi is because there is a show=1
[11:26] <zafu> what use is the show=1 ?
[11:27] <zafu> in any case it works very well, impressive
[11:31] <luc4> Mavrik: I need it for development.
[11:32] <Mavrik> luc4, I suggest you find an interlaced video then; doing progressive -> interlaced conversion isn't something you ever want to do, and it usually does not result in the same thing you get from a camera
[11:34] <luc4> Mavrik: actually I'm interested in performance of decoding. Not very interested in the image itself. Do you think using a converted video will lead to different results? Cause I need a few samples at different resolutions so it would be simpler to convert as needed.
[11:35] <Mavrik> luc4, yes.
[11:36] <Mavrik> a lot of encoders will just cop out and give you progressive video tagged as interlaced
[11:36] <Mavrik> you'll also get two fields at the same PTS time out of that
[11:36] <Mavrik> get a proper 1080i60 video and rescale it to what you want
[11:37] <luc4> Mavrik: so rescaling is ok and can I also remux to something different without affecting the result?
[11:37] <Mavrik> yes, as long as you rescale and reencode while keeping fields as they are
[11:37] <Mavrik> (scale filter has a flag for that, encoders as well)
[16:20] <nicholaswyoung> Can ffmpeg output flac to a pipe?
[16:21] <Fjorgynn> pipe?
[16:22] <Fjorgynn> it can output flac to a file
[16:22] <sacarasc> ffmpeg -i input.wav -c:a flac -f flac -
[16:22] <Fjorgynn> what are you piping to?
[16:22] <sacarasc> Or: mkfifo ilikecheese && ffmpeg -i input.wav -c:a flac -f flac ilikecheese
[16:23] <Fjorgynn> sacarasc: never heard of mkfifo before :o
[16:24] <sacarasc> Named pipes, woo!
[16:25] <Fjorgynn> :D
[16:39] <kaotiko> hi
[17:11] <luc4> Mavrik: sorry, I'm having a hard time trying to find out the correct option to preserve interlaced streams. If I have an interlaced video, and transcode to lower the bitrate, do I need to add an option or will ffmpeg preserve that?
[17:13] <c_14> luc4: As long as you don't add a deinterlacing filter it should remain interlaced.
[17:13] <luc4> c_14: so progressive -> progressive and interlaced -> interlaced means I don't need to add anything? Thanks!
[17:14] <c_14> ye, it should usually just work
[17:14] <luc4> thanks!
[17:15] <klaxa|work> does interlaced have any advantages over progressive?
[17:16] <c_14> On screens that can interlace (crts) it increases the perceived framerate.
[17:16] <c_14> (without using extra bandwidth)
[17:17] <klaxa|work> ah
[17:17] <klaxa|work> cool
[17:21] <c_14> On screens that can't interlace (pretty much everything else) it gives you funky lines. (without using extra bandwidth)
[17:28] <vklimkov> looking for a person hands on ffmpeg and android. if interested ping me privately for details
[17:43] <Peter_Occ> When ffmpeg adds a timestamp to a video stream, it starts with a time that's six seconds ahead of the real start time. The time then freezes for six seconds until the video catches up to the timestamp, and then proceeds with the correct timestamp. Has anyone ever found a way to get the timestamp to start with the correct time? I thought maybe I could have no timestamp and, given the starting time, add it to the video after the capture is complete. Is that something ffmpeg can do?
[17:56] <Peter_Occ> Ok will do.
[18:04] <Peter_Occ> Here is the command line and output http://pastie.org/9417882
[18:07] <c_14> And that gives you the strange timestamps?
[18:07] <c_14> Also, note of advice: You really shouldn't run ffmpeg as root.
[18:09] <Peter_Occ> Yes here is the video produced by that command http://geotonics.com/captures/testingA.mp4
[18:12] <c_14> try ffmpeg -f mjpeg -r 10 -ss 6 -i [foobar]
[18:27] <Fjorgynn> 99
[18:31] <Peter_Occ> c_14, That eliminated the freeze, and the timestamp is accurate for the entire video, only now the video doesn't start for 6 seconds after the command is sent.
[18:33] <c_14> yeah, I'm assuming the stream does some sort of weird buffering. ie it sends the first frame and then starts filling the buffer and while the buffer is filling it keeps sending the first frame
[18:37] <Peter_Occ> In the example I linked to, it's not sending the first frame for six seconds. It sends the video in real time, so the video itself is not frozen; it's just the timestamp that's frozen.
[18:49] <c_14> If seeking 6 seconds into the stream fixes the timestamp issues, I don't think the problem is on ffmpeg's side.
[18:53] <c_14> Have you tried the same command with a file source?
[18:54] <Peter_Occ> So are you saying that you or anyone else has ever been able to get it working properly? You can start the video immediately and the timestamp is accurate?
[18:56] <c_14> I've never heard of anyone having issues, but if you give me the command again I can try it myself.
[18:59] <Peter_Occ> ffmpeg -f mjpeg -r 10 -i http://user:pass@192.168.0.7/video/mjpg.cgi -vf "drawtext=/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%D %T}': fontcolor=red at .8: x=7: y=10" -t 00:00:30 -r 10 /var/www/html/captures/testingE.mp4
[19:03] <c_14> Works fine for me.
[19:03] <c_14> With a file source that is.
[19:04] <Peter_Occ> I tried using a file instead of the camera, but it just dies. There must be some other change I have to make to the command line. http://pastie.org/9417993
[19:04] <c_14> Can ffplay play testingE.mp4?
[19:04] <c_14> If it can try adding -probesize 1G -analyzeduration 1G
[19:09] <Peter_Occ> I don't have ffplay, but I have no problem playing the videos in Firefox. I added -probesize 1G -analyzeduration 1G to the command line but the output is the same. date && ffmpeg -f mjpeg -r 10 -probesize 1G -analyzeduration 1G -i /var/www/html/captures/testingE.mp4 -vf "drawtext=/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%D %T}': fontcolor=red at .8: x=7: y=10" -t 00:00:30 -r 10 /var/www/html/captures/testingEb.mp4
[19:10] <c_14> try adding -s widthxheight as an input option, where width and height detail the width and height of the video
[19:20] <saratoga> jhMikeS: oops, just realized sd_mutex is a simple typo for sd_mtx
[19:20] <saratoga> sorry about the kind of clueless comment on gerrit
[19:20] <saratoga> oops, wrong window
[19:31] <Peter_Occ> That didn't help either. I started over with a command that I know works and I was able to copy the file, but now the timestamp is only changing 1 second every 5 seconds. Is there an adjustment for that? In this video, the second timestamp is the stamp from the copy. http://geotonics.com/captures/testingEg.mp4
[19:32] <Peter_Occ> ffmpeg -i /var/www/html/captures/testingE.mp4 -vf "drawtext=/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf:text='%{localtime\:%D %T}': fontcolor=red at .8: x=7: y=30" -t 00:00:30 -r 10 -y -async 1 /var/www/html/captures/testingEg.mp4
[19:34] <c_14> Can you try it with a current static build?
[19:51] <Peter_Occ> I'm not sure if I can do that or not. I got ffmpeg using yum with the RPM Fusion repository. Will I be able to switch back?
[19:52] <c_14> ye
[19:52] <c_14> just grab the tar
[19:52] <c_14> unpack it and use ./ffmpeg instead of ffmpeg
[19:52] <c_14> or /path/to/unpacked/ffmpeg [options]
[19:56] <Peter_Occ> Where would be a good place to put it? I'm on Fedora 20
[19:57] <c_14> you can put it somewhere like /usr/local/bin or $HOME/bin, it doesn't matter as long as you're not overwriting things.
[20:09] <Peter_Occ> I used the static build to copy the same file and the results are the same- the timestamp is 5 times too slow. There must be a reason for that.
[20:24] <c_14> Ok, this is weird. If I use your command on one of my files it works fine. Timestamp updates each second etc. If I use it on testingEg.mp4 the timer updates way too slow.
[20:31] <Peter_Occ> I wonder if that's because that file is a copy. The timestamp is not slow in the original file. Try this file http://geotonics.com/captures/testingE.mp4
[20:38] <c_14> Same thing.
[20:39] <c_14> I even took one of my videos dropped the fps down to 10 and tried using drawtext with that and it worked fine.
[20:39] <c_14> I'm running out of possible causes.
[20:42] <Peter_Occ> It seems like the slow timestamp isn't caused by anything that happens when the copy is made; whatever is causing it is in the video when it's captured.
[20:46] <Peter_Occ> I'm starting a new capture every two minutes. I think I can actually solve my original problem if I start the video 30 seconds early and add the 30 second seek with -ss 30. That should start the video at the exact time with the correct timestamp.
[20:47] <c_14> As long as it works, I guess?
[20:53] <Peter_Occ> It would require running 2 processes at a time. In other words, start a video at 1:30 minutes, seek 30 seconds, have it run another 2 minutes until 3 minutes, and start another at 3:30 minutes, have it run until 5 minutes.
[20:54] <Peter_Occ> Is that even possible ? What happens if you run ffmpeg twice from a script?
[20:55] <Peter_Occ> or maybe 2 different scripts
[20:55] <c_14> As long as they're not using the same output file it should be fine.
[20:57] <Peter_Occ> Ok, thanks for all the help.
[21:07] <t4nk057> Hey troy_s you are around ?
[21:07] <troy_s> t4nk057: Go
[21:07] <troy_s> But only if you can tell me how the hell the luminance coefficients were generated for 601.
[21:08] <t4nk057> Hi ! So I tried the frame seeking for video
[21:08] <t4nk057> for which I use av_rescale !
[21:08] <t4nk057> to get the frame_offset I need to seek to
[21:09] <troy_s> t4nk057: And how did that go?
[21:09] <t4nk057> which looks something like av_rescale(time_in_ms, time_base.den, time_base.num) where time_base are for the stream
[21:09] <troy_s> (never seeked based on P frames etc before.)
[21:10] <t4nk057> the issue is it gives negative values for some time_in_ms
[21:10] <t4nk057> something like 30000 -> gives neg while 30030 doesn't
[21:11] <t4nk057> Nope I haven't been able to seek it
[21:11] <troy_s> Erf.
[21:12] <troy_s> Have you thought about using the DTS values?
[21:12] <troy_s> The PTS values could work too possibly, but DTS might be more straightforward.
[21:13] <t4nk057> I guess dts cannot handle h.264
[21:13] <troy_s> ???
[21:13] <troy_s> Doesn't every codec need a decoding time stamp?
[21:13] <t4nk057> http://libav-users.943685.n4.nabble.com/Frame-accurate-seeking-on-H-264-video-streams-td946549.html
[21:14] <t4nk057> Umm ! I read something this morning. I am trying to find a link to it
[21:15] <t4nk057> https://code.google.com/p/qtffmpegwrapper/issues/detail?id=18
[21:15] <t4nk057> Here !
[21:15] <t4nk057> Not that I understood this completely !
[21:15] <troy_s> t4nk057: This sort of approach might work http://libav-users.943685.n4.nabble.com/Frame-accurate-seeking-on-H-264-video-streams-tp946549p946555.html
[21:16] <t4nk057> Right ! So the only thing is that it doesn't convert the time to a frame number offset in most of the cases
[21:16] <troy_s> t4nk057: Quite familiar with the Blender decoding, which shifted in recent years to a timecode sort of base.
[21:16] <t4nk057> where time is the time of keyframe in the video
[21:18] <troy_s> t4nk057: You've read this part regarding the nuances of PTS / DTS yes? http://dranger.com/ffmpeg/tutorial05.html
[21:19] <troy_s> t4nk057: Note how it states "May not work either" an awful lot. :)
[21:22] <t4nk057> Reading ..
[21:23] <troy_s> t4nk057: (Side note, this is one of the reasons NLEs for offline editing generally do an 'ingest media' step at the beginning and convert to a suitable offline format such as DNxHD or ProRes. DNxHD was designed specifically to tackle the frame accuracy required in delivering an edit decision list. ProRes of course was basically a knock off of it.)
[21:34] <troy_s> t4nk057: Any progress?
[22:25] <t4nk057> No troy_s
[22:25] <t4nk057> It still goes to the beginning
[22:25] <troy_s> t4nk057: Bad luck.
[22:25] <troy_s> t4nk057: Have you tried the PTS trick?
[22:26] <t4nk057> The link you gave me said that it should have a flush queue function before flushing the internal buffer
[22:26] <t4nk057> I didn't really get the pts thing
[22:26] <troy_s> Hrm.
[22:26] <t4nk057> so I wrote a flush method
[22:26] <troy_s> If (and that's a big if) the decoding time stamp is correctly grabbed, I think you could use that.
[22:26] <troy_s> Or PTS.
[22:26] <t4nk057> which takes the AVPacketList and frees it
[00:00] --- Fri Jul 25 2014