[Ffmpeg-devel-irc] ffmpeg.log.20130213

burek burek021 at gmail.com
Thu Feb 14 02:05:01 CET 2013


[00:21] <fatpony> nice tip llogan thanks
[00:21] <tds5016> hi.
[00:21] <tds5016> Is it possible to use ffmpeg to convert an h264 stream over to mpegts for http live streaming?
[00:26] <tds5016> I've basically got an h264 file, and I'm trying to convert it over to an http live streaming format to share out.
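[editor's note: this went unanswered in the log. A minimal sketch using the segment muxer, assuming an H.264 input in MP4 and illustrative filenames:
    ffmpeg -i input.mp4 -c copy -bsf:v h264_mp4toannexb -f segment -segment_time 10 -segment_format mpegts -segment_list playlist.m3u8 seg%03d.ts
The h264_mp4toannexb bitstream filter converts MP4-style H.264 into the Annex B form that MPEG-TS expects; -c copy avoids re-encoding.]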
[01:47] <jankarlitos> Hello. Is anyone capable of getting the streams out of this shoutcast URL? http://johnny.serverroom.us:6336
[01:49] <klaxa> um...
[01:49] <klaxa> that's pretty much the stream already
[01:50] <klaxa> wget http://johnny.serverroom.us:6336/ -O stream_dump.mp3
[01:51] <jankarlitos> klaxa, yes that's the only way i could get it. But i need to transcode it with ffmpeg
[01:51] <klaxa> ffmpeg -i http://johnny.serverroom.us:6336/ -c:a libvorbis stream_dump_encoded.ogg
[01:51] <klaxa> or whatever
[01:52] <jankarlitos> It doesn't work
[01:53] <klaxa> mmhh.. it appears to hang that's weird
[01:54] <klaxa> this shouldn't be a problem, for the time being you could create a pipe with wget
[01:54] <klaxa> wget http://johnny.serverroom.us:6336/ | ffmpeg -i - -c:a libvorbis stream_dump.ogg
[01:54] <klaxa> argh
[01:55] <klaxa> wget http://johnny.serverroom.us:6336/ -O - | ffmpeg -i - -c:a libvorbis stream_dump.ogg
[01:55] <klaxa> like that
[01:56] <jankarlitos> Already tried that. The result: "Cannot write to - (Broken pipe)"
[01:56] <klaxa> what is your command line?
[01:58] <jankarlitos> The same as yours: wget http://johnny.serverroom.us:6336/ -O - | ffmpeg -i - -acodec libvorbis out.ogg
[01:59] <klaxa> mmhh that sounds like a restriction set by your system
[02:01] <klaxa> hmm maybe you can use named pipes but that would get ugly
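[editor's note: a rough sketch of the named-pipe workaround klaxa mentions, assuming a POSIX shell; untested against this particular stream:
    mkfifo radio.fifo                                          # create the named pipe
    wget http://johnny.serverroom.us:6336/ -O radio.fifo &     # writer side, in the background
    ffmpeg -i radio.fifo -c:a libvorbis stream_dump.ogg        # reader side
    rm radio.fifo                                              # clean up afterwards
]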
[02:14] <jShaf> how do I skip the beginning to the very first keyframe for video INPUT?
[02:14] <jShaf> like -ss <time of the first keyframe>
[05:19] <happarappa> ffmpeg.exe -i test.mp4 -sameq test.avi <-- This results in an .avi of poorer quality than the input .mp4. What the hell? Just when I thought I had ffmpeg figured out, it goes and does this to me.
[05:20] <klaxa> sameq just means same quantizer not same quality
[05:20] <klaxa> i think it should be avoided if possible
[05:23] <happarappa> Same quantizer?
[05:24] <happarappa> Never even heard of any such thing as "quantizer".
[05:24] <Plorkyeran> then you definitely don't want to be using sameq
[05:25] <fenduru> Anyone know a way to force an encoder to NOT add padding?
[05:26] <fenduru> ffprobe reports a .wav as 0.97 seconds long, but after encoding to aac it is 10.0, and after encoding to mp3 it is 10.03. These are audio segments so playing them in sequence is causing gaps
[05:27] <happarappa> F... F... S. :|
[05:34] <relaxed> it was removed from ffmpeg because in most cases it didn't even work.
[05:37] <happarappa> What was?
[05:43] <relaxed> -sameq
[05:45] <happarappa> relaxed: Huh? It's not removed?
[05:50] <relaxed> You must be using an older version.
[05:52] <relaxed> happarappa: use '-q:v 3' instead for good quality
[06:05] <happarappa> Not using an old version...
[06:05] <happarappa> I want the same exact quality.
[06:14] <relaxed> then use -c:v copy, otherwise it's impossible
[06:17] <Plorkyeran> or a lossless codec
[06:50] <happarappa> :|
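[editor's note: the three options named above, as commands; the containers and codecs here are illustrative choices:
    ffmpeg -i test.mp4 -c copy test.mkv               # stream copy: bit-identical video, no re-encode
    ffmpeg -i test.mp4 -c:v ffv1 -c:a flac test.mkv   # lossless re-encode: identical pixels, large file
    ffmpeg -i test.mp4 -q:v 3 test.avi                # relaxed's suggestion: visually good, still lossy
]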
[07:52] <asher^> hi guys. does ffmpeg have an option similar to -ss where you can seek to halfway through a video rather than a specific time?
[07:54] <klaxa> yes
[07:54] <klaxa> asher^: see: http://ffmpeg.org/trac/ffmpeg/wiki/Seeking%20with%20FFmpeg
[07:54] <asher^> thanks
[07:56] <asher^> klaxa, i cant see it there. i should be more specific, id like to know if theres an option where i dont need to know the duration in advance
[07:57] <klaxa> um...
[07:57] <klaxa> i don't think so?
[07:57] <klaxa> for what usecase would you need that?
[07:57] <asher^> im just taking screen grabs of videos programmatically where i dont know the length in advance. i thought id just grab them from the middle
[07:58] <asher^> some are 5 mins, some an hour
[07:58] <klaxa> so you want to specify like... -ss 50% ?
[07:59] <asher^> yeah something like that
[08:00] <klaxa> hmm... well you could certainly script it with the help of ffprobe
[08:00] <asher^> yeah i guess ill have to do something like that
[08:00] <asher^> thanks
[08:01] <asher^> hmm, looks like something called ffmpegthumbnailer might do it
[08:01] <asher^> although that looks like linux too
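[editor's note: a sketch of the ffprobe scripting klaxa suggests -- grab a single frame from the midpoint; assumes a POSIX shell with bc and a reasonably recent ffprobe:
    dur=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
    ffmpeg -ss $(echo "$dur / 2" | bc -l) -i input.mp4 -frames:v 1 middle.png
]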
[08:38] <spaam> klaxa: nice page.
[08:39] <klaxa> i know, i didn't write it though
[08:39] <klaxa> but it's rather useful, i didn't know ffmpeg could do fast-seeking with keyframes
[08:39] <spaam> it was burek and some other guy D:
[09:20] <happarappa> Why is ffmpeg the most confusing nonsense ever made?
[09:21] <happarappa> I just want to convert from one file format to another with the same amount of quality/untouched.
[09:21] <klaxa> happarappa: because it's the leading FOSS in video encoding
[09:21] <klaxa> that's not possible unless you use lossless codecs
[09:21] <klaxa> or copy the codec
[09:21] <klaxa> this is a general truth and has nothing to do with ffmpeg
[09:22] <klaxa> oh ffmpeg also does video filtering
[09:22] <klaxa> same stuff with audio :V
[09:23] <klaxa> i mean come on, sure it's not easy to use, but it's still one of the greatest open source projects
[09:25] <happarappa> :(
[09:25] <happarappa> Yes, it is amazing... if only it were nicer to use.
[09:25] <happarappa> Supports an insane amount of formats.
[09:26] <klaxa> actually i find it rather easy to use
[09:26] <klaxa> everything is well documented
[09:26] <happarappa> I get a crappy video result from my conversion.
[09:26] <happarappa> I want it to look just like the original.
[09:26] <klaxa> <klaxa> that's not possible unless you use lossless codecs
[09:26] <klaxa> <klaxa> or copy the codec
[09:26] <happarappa> ... What?
[09:27] <happarappa> Of course it's possible.
[09:27] <happarappa> It's damaging the video.
[09:27] <klaxa> you cannot achieve the exact s--
[09:27] <JEEB> a result close to the original is possible
[09:27] <JEEB> not the exact same
[09:27] <happarappa> It's MUCH worse.
[09:27] <happarappa> ffmpeg.exe -i test.mp4 -sameq test.avi
[09:27] <JEEB> pastebin your command line and terminal output
[09:27] <JEEB> ugh
[09:27] <JEEB> sameq is not what you think
[09:27] <JEEB> and was actually broken in most cases
[09:27] <happarappa> Whatever it means, the result is not what I want...
[09:28] <JEEB> it is not same quality but "same quantizer", which doesn't even make sense between two different formats
[09:28] <JEEB> also I think it was removed later
[09:28] <JEEB> as an option
[09:28] <JEEB> because people kept using it even if it was broken
[09:28] <JEEB> so what do you want your output to be?
[09:29] <JEEB> for what are you encoding?
[09:29] <happarappa> Visually impossible to tell apart.
[09:29] <JEEB> no, i mean
[09:29] <happarappa> For what am I encoding?
[09:29] <JEEB> avi is a container
[09:29] <JEEB> what do you want into it?
[09:29] <JEEB> just like mp4, avi is a "box"
[09:29] <JEEB> you can put stuff into the "box"
[09:29] <happarappa> I don't care. When files are juggled through the black box that is "ffmpeg", they work in various software such as video editors.
[09:30] <JEEB> :|
[09:30] <happarappa> Prior to being put into the black box, they are not accepted.
[09:30] <happarappa> (Renaming the files does not work.)
[09:30] <JEEB> well, naturally
[09:30] <JEEB> if the editor doesn't support the container format
[09:30] <JEEB> or the video inside it, then naturally
[09:30] <JEEB> for video editors I recommend utvideo
[09:30] <JEEB> because you can get decoder components for every OS for it
[09:31] <JEEB> Windows, OS X, Linux
[09:31] <JEEB> ffmpeg -i input -c:v utvideo -c:a pcm_s16le out.avi
[09:32] <klaxa> heh promoting utvideo again :)
[09:32] <aji> i wonder what percentage of this channel's messages are ffmpeg command lines
[09:32] <klaxa> (btw, is it multi-threaded yet?)
[09:32] <JEEB> the decoder is, the encoder is harder to MT
[09:32] <JEEB> at least with slices
[09:32] <JEEB> frame-based MT would be easier
[09:33] <JEEB> happarappa, also it kind of would help me if you would tell which OS and editor you're going to use
[09:33] <JEEB> and I asked for the terminal output because I kind of could have seen the version of ffmpeg you have :V
[09:34] <JEEB> pretty much everything I ask for is for a reason
[09:35] <JEEB> klaxa, also why would I not promote it? It has components for every OS after all
[09:35] <JEEB> I really don't see a reason not to use it if you're doing video editing :)
[09:35] <klaxa> i wasn't complaining, just noting :)
[09:36] <jeje34> Hi to all ;-)
[09:36] <JEEB> Daemon404 actually multithreaded the prediction per-slice but that ended up being bottlenecked by the huffman afterwards
[09:36] <JEEB> so it seems like the easiest way to make the libavcodec utvideo encoder faster is to just frame thread it :<
[09:37] <klaxa> hmm okay i don't understand this well enough yet to comment on it :P
[09:41] <jeje34> I have already integrated FFmpeg's libraries to decode H264 video streams from IP cameras and display them in windows on Windows. But now I'm trying to decrease the CPU usage when I have several decompressions at the same time, so I compiled FFmpeg (release 1.1.1) with --enable-dxva2 --enable-decoder=h264_dxva2 --enable-hwaccel=h264_dxva2
[09:43] <jeje34> After doing this, I tried to modify my code to use AVHWAccel, but it's really hard to find documentation or sample source code for it... the only thing I found is that I have to use av_hwaccel_next to find the right one (hwaccel->id == CODEC_ID_H264) and register it with av_register_hwaccel, but it seems that isn't enough to enable hardware decompression
[09:43] <JEEB> yes
[09:43] <jeje34> if someone can help me, thanks
[09:44] <JEEB> hwaccel has you do work too, because libavcodec can have no idea on how you want to use DXVA
[09:44] <JEEB> https://lists.ffmpeg.org/pipermail/ffmpeg-user/2012-May/006600.html
[09:44] <JEEB> take a look at this post
[09:45] Action: JEEB goes back to doing laundry
[09:48] <jeje34> thanks for this reply! For point 3 ("Create a dxva_context structure and initialize its fields with the above objects"): is it a context like an AVCodecContext? And how do I initialize the fields?
[09:50] <jeje34> if someone has already used AVHWAccel in their code, that would be very helpful
[10:09] <JEEB> jeje34, there is an app mentioned in the post, so if you want you can look at that as an example
[10:13] <happarappa> JEEB: I don't see what my OS or my video editor has to do with this.
[10:15] <JEEB> happarappa, because if you are using a packaged ffmpeg it can be old, if you are using linux video editors they can be built with old libavcodec
[10:15] <JEEB> those naturally limit the alternatives
[10:15] <jeje34> JEEB: are you talking about "a useful tool for diagnosing DXVA2 is DXVAChecker"?
[10:15] <JEEB> jeje34, no
[10:15] <JEEB> far before that
[10:15] <jeje34> ok the msdn link
[10:16] <JEEB> well, that is related yes, but you know -- the reference to VLC
[10:16] <JEEB> because DXVA is a MS thing, you will find all the docs related to it from MSDN :P
[10:17] <jeje34> but do you think it will really decrease my CPU time when I decode several streams at the same time? (because it needs me to make a lot of changes in my implementation...)
[10:17] <happarappa> JEEB: None of that made any sense to me. Sorry.
[10:17] <happarappa> "a packaged ffmpeg"?
[10:17] <JEEB> package management
[10:18] <JEEB> on linux most distributions have a package management system
[10:18] <JEEB> where most people get their software
[10:18] <JEEB> on OS X you have macports and homebrew for similar usage
[10:18] <happarappa> Windows here.
[10:18] <JEEB> ok
[10:18] <JEEB> then the case B) of you having an editor that can't use the VFW component is unrelated
[10:19] <JEEB> then Ut Video should work just fine and be lossless as long as you install the windows codec for it (DirectShow/VFW)
[10:19] <JEEB> http://umezawa.dyndns.info/wordpress/?p=3655
[10:19] <JEEB> of course your windows build of ffmpeg has to be new enough to support ut video
[10:19] <JEEB> but yes, that line I gave you up there should work then
[10:20] <JEEB> <JEEB> ffmpeg -i input -c:v utvideo -c:a pcm_s16le out.avi
[10:20] <JEEB> it will be big, but it will also be lossless as well as intra so it will be quick to seek through
[10:20] <JEEB> when editing
[10:21] <JEEB> (lossless does not mean uncompressed, it is compressed)
[10:21] <JEEB> also if you have too old of an ffmpeg, then you can grab a newer one from zeranoe http://ffmpeg.zeranoe.com/builds/
[10:29] <jeje34> I also have another question about my FFmpeg use. I compiled it with --enable-w32threads (because I'm a Windows user). But when I try to set a thread_count in my AVCodecContext (like 3), I always get the log error "The maximum value for lowres supported by the decoder is 0" when calling avcodec_open2
[10:29] <jeje34> So I can't use FFmpeg multithreading to decode H264?
[10:30] <JEEB> no idea if you're just using libavcodec wrong
[10:30] <JEEB> because there has been H.264 multithreaded frame decoding since summer 2011
[10:31] <JEEB> sliced decoding is a more derpy thing, and only works with slices
[10:31] <JEEB> sliced threading
[10:34] <jeje34> ok JEEB, I put the relevant part of my ffmpeg code here: http://pastebin.com/sNpcpHjf
[10:36] <jeje34> but I want to decompress H264 frames coming from a camera. In my code I reassemble one entire frame before passing it to FFmpeg, so I think it's FF_THREAD_FRAME I need to use
[10:36] <JEEB> I have no idea, I have only coded within libavcodec, not used it myself :P Unfortunately you'll have to have someone else look at that code, or look at other examples using libavcodec's threading (various things use it)
[10:36] <jeje34> ok thanks
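[editor's note: a minimal hedged sketch of the decoder setup discussed above, against the libavcodec API of the 1.x era; error handling trimmed:
    AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->thread_count = 0;               /* 0 = let libavcodec pick a thread count */
    ctx->thread_type  = FF_THREAD_FRAME; /* frame threading; the H.264 decoder has supported it since 2011 */
    if (avcodec_open2(ctx, codec, NULL) < 0)
        ; /* open failed: check build flags and parameters */
]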
[10:43] <jeje34> another one question: do you think in my case, using a CodecCtx->thread_count could decrease my CPU usage to decode my H264 stream?
[10:44] <jeje34> when I have several streams to decode at the same time
[10:46] <JEEB> depending on how many things you have going at the same time, you can make them derp less with each other by limiting the overall amount of threads, the notion of "could decrease my CPU usage" is kind of derp -- if you are bound by, say, a single thread it will use less CPU but be slower, while more threads mean more power can be used for one stream. In case of you doing multiple parallel jobs of course, you might want to limit the
[10:46] <JEEB> amount of threads depending on the amount of cores in the system and the amount of streams you're doing, but that really won't "decrease your CPU usage", it will just make those separate processes not poke each other as much
[11:00] <jeje34> ok JEEB, thanks for all your answers. If I don't set a thread_count but have compiled FFmpeg with --enable-w32threads, what happens? Because avcodec_alloc_context3 initializes the AVCodecContext with avcodec_get_context_defaults3, which sets thread_count to 0
[11:02] <JEEB> that should be the default now > --enable-w32threads
[11:02] <JEEB> on windows
[11:02] <JEEB> and it just enables threading
[11:02] <JEEB> I think it depends on the default in the decoder? No idea, as I've said I've never used the API
[11:02] <JEEB> so I don't know the defaults and how it is going to function
[11:07] <jeje34> ok, I'm going to dig into the FFmpeg source code to understand a little better what happens
[11:14] <fatpony> i can't seem to crop one pixel, is that normal? can i only crop to even values?
[11:15] <JEEB> if your source has chroma subsampling (4:2:2 or 4:2:0 YCbCr [colloquially called "YUV"]), then yes
[11:16] <JEEB> because with 4:2:0 you have one value for a 2x2 area, and 4:2:2 has one value for a 2x1 area
[11:17] <fatpony> ah so i guess that's what yuv420p stands for
[11:17] <JEEB> yup
[11:18] <fatpony> it's strange because i can clearly see a black border beginning at an odd height
[11:19] <JEEB> luma (grayscale image) is full resolution, but the chroma planes (color information) are subsampled
[11:19] <JEEB> so yes, I guess having the black area start at an odd height is possible
[11:21] <fatpony> ah i see, thanks!
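[editor's note: two hedged ways around the even-crop limit -- round the crop to even values, or convert to 4:4:4 first so chroma is full resolution (at a cost in size and player compatibility):
    ffmpeg -i input.mp4 -vf "crop=iw:ih-2" even.mp4
    ffmpeg -i input.mp4 -vf "format=yuv444p,crop=iw:ih-1" odd.mp4
]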
[13:49] <jShaf> how do i trim the beginning up to first keyframe on the video input?
[13:59] <xroberx> hi
[13:59] <xroberx> If I only want to do pixel format conversion, which is the right function? sws_scale()?
[14:00] <durandal_1707> yes
[14:00] <xroberx> I do not want to perform scaling though
[14:01] <xroberx> durandal_1707: so... I guess I'll have to set the source width/height equal to the destination width/height, right ?
[14:01] <durandal_1707> the same function can do most unscaled conversions too
[14:01] <durandal_1707> depends on input->output formats
[14:01] <durandal_1707> and obviously same dimensions
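[editor's note: a minimal sketch of the format-only conversion durandal_1707 describes -- same dimensions in and out, so libswscale takes its unscaled path; the pixel formats are examples, and w, h and the data/linesize arrays are assumed to exist:
    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,  /* source */
                                            w, h, AV_PIX_FMT_RGB24,   /* destination, same w/h */
                                            SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, src_data, src_linesize, 0, h, dst_data, dst_linesize);
    sws_freeContext(sws);
]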
[14:02] <weecka> Hello guys, maybe you can help me. I am receiving lots of these warnings: max resync size reached, could not find sync byte.78 bitrate=N/A. Whole output is here http://pastebin.com/Ke9wse1a. What does this mean?
[14:03] <durandal_1707> weecka: how did you create the file? what can play it? and so on...
[14:05] <weecka> Honestly, I'm trying to convert flv to mpegts, which could then be played using Apple's streaming protocol.
[14:05] <xroberx> durandal_1707: do you know if there is any ARM-optimized version? Looking in the libswscale folder I can see there are only optimizations for x86, ppc, sparc and bfin.
[14:06] <weecka> but I'm not sure what I'm doing wrong.
[14:06] <durandal_1707> xroberx: as you noticed, there is none
[14:07] <xroberx> ok
[14:08] <durandal_1707> weecka: the mpegts muxer may just be buggy, so what is your command that does flv->mpegts?
[14:09] <weecka> I did it like this: ffmpeg -y -i mindmovie-5173.flv -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 320x480 -vcodec libx264 -b 512k -maxrate 512k -bufsize 512k -qcomp 0.6 -qmin 30 -qmax 51 -qdiff 4 -level 30 -aspect 320:480 -g 30 -async 2 output-iphone-big.ts
[14:14] <weecka> Also, I've tried to do flv->m4v and then segment it, but the segment durations are then somehow way off target.
[14:41] <xroberx> hi
[14:43] <weecka> durandal_1707: any thoughts?
[14:43] <xroberx> durandal_1707: I've found this ARM-optimized yuv2rgb conversion library: http://wss.co.uk/pinknoise/yuv2rgb/ - the license is BSD. Now the question is: would it be possible to integrate it into ffmpeg's libswscale?
[14:45] <durandal_1707> xroberx: what kind of integration?
[14:46] <durandal_1707> writing patches that add arm optimization to swscale is certainly possible and likely to be accepted and applied
[14:47] <xroberx> durandal_1707: ok, good to know, I'll contact the author then, but considering it's BSD licensed I guess it can be done
[14:47] <durandal_1707> if you can't code, you find someone who can
[14:47] <creep> h
[14:47] <durandal_1707> xroberx: why would you contact author?
[14:48] <xroberx> durandal_1707: because of this sentence on his webpage: "If you do use this code as part of a piece of software (or hardware), please let me know, purely for my own interest."
[14:48] <durandal_1707> it's a BSD license, which means you can do almost anything with it (except claiming you wrote that code ...)
[14:48] <durandal_1707> xroberx: good, but first the code must be used ...
[14:48] <durandal_1707> currently it is not
[14:49] <xroberx> yes
[14:52] <xroberx> durandal_1707: do you happen to know who is the person to contact if I wanted to send some patches to integrate the library?
[14:53] <JEEB> ffmpeg-devel mailing list for patches
[14:53] <jarno> Hi Jeeb
[14:53] <jarno> ...and everyone else...
[14:54] <xroberx> JEEB: ok, thanks
[14:55] <jarno> ...I have a question related to encoding and mux(ing)...if I have understood it correctly, the steps should be 1) find encoder 2) allocate context and 3) create output stream, right?
[14:56] <jarno> ...so the avcodec_alloc_context3() sets default values as per the chosen encoder, right?
[14:57] <jarno> ...and then these can be changed as needed?
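[editor's note: a hedged sketch of the order jarno outlines, against the 1.x API where each stream still carries its own codec context; the codec and values are illustrative:
    AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_MPEG4);  /* 1) find the encoder */
    AVStream *st = avformat_new_stream(oc, enc);             /* 2)+3) stream whose context gets the encoder's defaults */
    st->codec->width     = 1280;                             /* then override the defaults as needed */
    st->codec->height    = 720;
    st->codec->time_base = (AVRational){1, 25};
    st->codec->pix_fmt   = AV_PIX_FMT_YUV420P;
    avcodec_open2(st->codec, enc, NULL);                     /* open before writing the header */
]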
[15:34] <Macey> hi all whats the output format for just streaming a TS file to UDP?
[15:36] <Mavrik> mpegts?
[15:36] <Macey> if i do -f mpegts it just recreates the container
[15:38] <Mavrik> mhm
[15:38] <Mavrik> what do you want to do exactly?
[15:41] <Macey> i have a fully formed TS file (NIT/SDT/PMTs etc.) that i want to push over UDP
[15:41] <Macey> no transcoding nothing, just a straight tx
[15:41] <Mavrik> ffmpeg isn't the right tool for the job then
[15:41] <Macey> ah ok
[15:41] <Macey> vlc?
[15:41] <Mavrik> why don't you use something made to just dump data to network like netcat?
[15:42] <Macey> needs to read the PCR and tx it at a rate
[15:42] <Mavrik> both vlc and ffmpeg are meant to actually change streams, not be dumb network pipes
[15:42] <Mavrik> VLC might know that, but I'm not really sure if it can do what you want without modifying the stream
[15:42] <Macey> ok thanks
[15:50] <jeje34> Hello, just to know... are the functions av_lockmgr_addref and av_lockmgr_register deprecated?
[15:55] <jeje34> I can find av_lockmgr_register but not av_lockmgr_addref in avcodec.h (using the latest FFmpeg release, 1.1.1)
[16:36] <catalinb> Hi, I'm trying to build ffmpeg on windows using ./chromium/scripts/build_ffmpeg.sh
[16:36] <catalinb> I'm calling the script from a mingw shell
[16:37] <catalinb> I encounter the error: 'c99wrap is unable to create an executable file.'
[16:37] <catalinb> I have a mingw installation and gcc is in path
[16:38] <catalinb> How do I specify the right compiler?
[17:48] <jeje34> hi to all
[17:49] <jeje34> I have some questions about using FFmpeg to decode h264 video streamed from IP cameras. My code works (I can decode) but I have some questions about the initialization of the AVCodecContext fields
[17:50] <jeje34> a lot of them are marked "decoding: Set by user"
[17:51] <jeje34> I can't really figure out what the best values are for flags and flags2 of the AVCodecContext structure
[17:54] <Mavrik> jeje34: usually most fields don't have to be set
[17:54] <Mavrik> unless you have a very-nonstandard use case
[17:55] <Mavrik> jeje34: those "set by user" don't always mean MUST be set by user ;)
[17:55] <jeje34> I have a standard use case (I hope all IP camera manufacturers have a good implementation of the H264 video stream)
[17:55] <jeje34> ;-)
[17:56] <Mavrik> ^^
[17:56] <jeje34> but when I see some flags like CODEC_FLAG2_FAST
[17:57] <jeje34> I'm not very sure if it's better to set it
[17:58] <Mavrik> jeje34: usually, leaving the default settings is preferred
[17:58] <Mavrik> seems like CODEC_FLAG2_FAST gains some speed at the cost of decoded image quality
[18:56] <Guest17651> hi, I'm trying to export VOB videos to ogg (theora/vorbis) for the web, using kdenlive. The videos are about 45 minutes each. Do you have good settings to recommend?
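[editor's note: this went unanswered in the log. A hedged starting point on the command line -- the quality scales are encoder-specific, so tune to taste:
    ffmpeg -i input.vob -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 output.ogv
libtheora's -q:v runs 0-10 (higher is better); libvorbis's -q:a runs roughly -1 to 10.]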
[19:55] <someone-noone> if dts==pts does it mean that frame is a key-frame?
[19:56] <someone-noone> or are there some cases when it's not true?
[19:58] <JEEB> no, it just means that it's not a b-frame
[19:59] <someone-noone> Okay, thanks. That was a silly question. But I have another one, probably less silly :)
[20:00] <someone-noone> If I want to reorder packets into pts order and flush them once they are ordered, is pts==dts a good event for that?
[20:01] <someone-noone> Or is it better to look for key-frames as the event?
[20:06] <Paranoialmaniac> JEEB: it is true if pts always equals dts. but if not, a frame with pts==dts is an un-referenced b-frame
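[editor's note: a possible answer, not from the channel: in a well-formed stream dts increases monotonically and every packet has pts >= dts, so once a packet with dts D arrives, no later packet can have pts <= D -- any buffered packet with pts <= D can be flushed in pts order. pts==dts alone is not a safe trigger. A small C sketch with a hypothetical Pkt type (fixed-size buffer, no overflow check):
    #include <stdint.h>
    typedef struct { int64_t pts, dts; } Pkt;   /* hypothetical packet type, not a real API */
    static Pkt buf[64];
    static int len = 0;
    extern void emit(Pkt p);                    /* hands the packet downstream */
    void on_packet(Pkt p)
    {
        int i = len++;                          /* insert sorted by pts */
        while (i > 0 && buf[i - 1].pts > p.pts) { buf[i] = buf[i - 1]; i--; }
        buf[i] = p;
        while (len > 0 && buf[0].pts <= p.dts) {  /* flush the now-safe prefix */
            emit(buf[0]);
            for (int j = 1; j < len; j++) buf[j - 1] = buf[j];
            len--;
        }
    }
]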
[22:15] <mordonez1> Hi guys, how can I put an image in a video at a specified time and position?
[22:25] <llogan> the "specified time" is the tricky part. i don't think overlay filter has a temporal option
[22:26] <llogan> of course you could section the video into parts, add the image overlay (you want an overlay, right), and then concat the sections together.
[22:26] <mordonez1> yeah, I want to put an image inside a square on the video
[22:26] <mordonez1> the square should only appear from second 10 to 12
[22:27] <llogan> you can do that with the drawbox and overlay filters
[22:28] <llogan> but i don't think either has a "make the filter work from 10 to 12 seconds" type of option
[22:28] <mordonez1> you have any example or something that I can use to start with it?
[22:38] <llogan> mordonez1: ffmpeg -i video.mp4 -i image.png -filter_complex drawbox,overlay output
[22:38] <llogan> http://ffmpeg.org/ffmpeg-filters.html#drawbox
[22:38] <llogan> http://ffmpeg.org/ffmpeg-filters.html#overlay-1
[23:40] <mordonez1> Thanks llogan
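[editor's note: ffmpeg releases newer than the one discussed here grew timeline editing, which handles the "only from second 10 to 12" case directly; a hedged sketch with illustrative coordinates:
    ffmpeg -i video.mp4 -i image.png -filter_complex "overlay=x=20:y=20:enable='between(t,10,12)'" output.mp4
The enable option postdates the version the channel was using at the time.]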
[23:56] <someone-noone> Hey! If I want to reorder packets by pts (the decoder is not ffmpeg), how can I know when to "flush" those packets? When a new key-frame arrives, or when dts==pts? Or maybe some other algorithm?
[00:00] --- Thu Feb 14 2013


More information about the Ffmpeg-devel-irc mailing list