[Ffmpeg-devel-irc] ffmpeg.log.20121107

burek burek021 at gmail.com
Thu Nov 8 02:05:01 CET 2012


[00:14] <saorel> Hey, I just got a project using ffmpeg and specially the avcodec part
[00:14] <saorel> but that's an old project and there were some changes in ffmpeg
[00:14] <saorel> so I have a warning : implicit declaration of function av_set_parameters [-Wimplicit-function-declaration]
[00:15] <saorel> and a link error : undefined references to « av_set_parameters »
[00:15] <saorel> Could someone help me please ?
[00:26] <saste> saorel, av_set_parameters was deprecated and dropped, more than one year ago IIRC
[00:26] <saste> you should set options in avformat_open_input() instead
[00:27] <saste> check avformat.h doxy
[00:44] <saorel> thanks saste
[00:46] <joelataylor> Hi, I have a simple bash script: https://gist.github.com/4027760 - I need to check if ffmpeg can't read the file, and skip the iteration & any hints?
[00:57] <saorel> Hum saste, what new function must I use to replace avcodec_encode_video?
[00:57] <saorel> Thanks
[00:59] <saorel> ok I missed the line, it is ok
[01:19] <xroberx> hi
[01:20] <xroberx> Does anyone know how to open an image sequence with avformat_open_input() ? I can successfully open video files but not image sequences. From the command line it works as expected. For example: ffplay %04d.jpg works fine
[01:20] <xroberx> the same wildcard (%04d.jpg) doesn't work when using avformat_open_input()
[06:14] <Chamunks> do you think it would be possible to live stream something to youtube using ffmpeg
[12:41] <shroomM_> when transcoding an mxf with mpeg2 video, pcm audio to mp4 with libx264 and aac audio, the output file is 2 frames longer than the input
[12:41] <shroomM_> any ideas as to why?
[12:48] <shroomM> not sure if the client sent it before ...
[12:48] <shroomM> when transcoding an mxf with mpeg2 video, pcm audio to mp4 with libx264 and aac audio, the output file is 2 frames longer than the input
[12:48] <shroomM> any ideas as to why?
[12:50] <divVerent> what if you use -vsync vfr?
[12:51] <juanmabc> differences in bitrate/samples per second may cause a minimal diff like 2 frames
[12:51] <divVerent> these typically are 2 dup'd frames to ensure AV sync somewhere
[12:52] <divVerent> which can be turned off by -vsync vfr
[13:06] <shroomM> divVerent, juanmabc if I encode without audio, the length is the same
[13:07] <shroomM> any way to stop encoding when audio stops
[13:07] <shroomM> ?
[13:07] <shroomM> bah, sorry, when the video stops
[13:07] <shroomM> or perphaps indeed the vsync vfr will help, will try now
[13:17] <tuxx_>     Stream #0:0: Video: flashsv, bgr24, 1024x768, q=2-31, 2 kb/s, 90k tbn, 5 tbc
[13:17] <tuxx_> can someone explain to me please what tbn and tbc are?
[13:19] <tuxx_> also what is the standard behaviour when encoding a stream in real time
[13:19] <tuxx_> and you cant encode some frames so they need to be dropped
[13:19] <tuxx_> just drop them and adjust the time stamps on the following frames?
[13:30] <Element9> where can I find available options for theora?
[13:31] <Element9> I can't get it to encode 1600x900 video in real time. does anyone have any pointers? I don't care about file size (for now)
[13:32] <juanmabc> cpu horsepower?
[13:32] <xroberx> hi
[13:33] <xroberx> Does anyone know how to open image sequences with avformat_open_input() ? I can successfully open videos, but if I pass it a filename of the form "video%04d.jpg" it doesn't work, it just returns a "No such file or directory" message
[13:35] <juanmabc> check ffplay ? probably some escape issue ?
[13:36] <Element9> juanmabc: not much. core duo 2.1ghz. can it be enough for low compression rate?
[13:36] <Element9> juanmabc: how to even specify compression rate?
[13:37] <tuxx_> how do i set the amount of keyframes using libavcodec
[13:39] <Element9> how do I even list available options for the particular codec?
[13:40] <juanmabc> Element9: man ffmpeg "/TIPS" (goes straight to horsepower issues)
[13:40] <saste> Element9, ffmpeg -h full
[13:40] <saste> also check the "Encoders" section in the manual, though many options are not documented
[13:41] <xroberx> please, can anyone tell me how to open image sequences with ffmpeg programmatically ? is it done with avformat_open_input() ? I've been googling for several days
[13:43] <Element9> saste: -h full says nothing about theora. am I missing something?
[13:43] <Element9> juanmabc: is there a Windows equivalent?
[13:44] <saste> Element9, first of all, do you have theora enabled?
[13:44] <Element9> saste: yes :)
[13:44] <Element9> but fair question :)
[13:45] <saste> Element9, second, keep in mind that in the case of FFmpeg, the documentation is in good part the source
[13:46] <saste> so I had to read the code (libavcodec/libtheoraenc.c)
[13:46] <divVerent> tuxx_: tbn is the time base of the container, tbc of the codec
[13:46] <saste> theora has no private options, but it maps some generic options to libtheora specific ones
[13:46] <divVerent> as for realtime encoding: dropping frames is only allowed if your code supports timestamps at all
[13:47] <divVerent> in that case, yes, then you simply skip a timestamp value
[13:47] <divVerent> only works with VFR aware formats and codecs, though
[13:47] <divVerent> which most current ones are
[13:47] <Element9> saste: and the only way to find which are those "some generic options" is the source?
[13:47] <saste> in this case, apart from trivial values (size, pixel format), it uses the global_quality param to set the quality
[13:47] <tuxx_> divVerent: any idea if flashsv is a VFR codec?
[13:47] <divVerent> for stuff like AVI, you can't really do that... if you try, you make libavformat emit empty frames in AVI, which may or may not be correct (not sure if the spec allows these, in practice they work fine)
[13:48] <divVerent> I don't even know what flashsv is
[13:48] <Element9> saste: and irc :)
[13:48] <divVerent> so if you absolutely can't do vfr
[13:48] <saste> Element9, some codecs are documented in the manual, most aren't
[13:48] <saste> patches are welcome of course
[13:48] <divVerent> the other common method to solve it is to encode the previous frame again
[13:48] <divVerent> to simulate a dropped frame
[13:49] <tuxx_> divVerent: i have to call avcodec_encode_video() again? or can i just do av_write_frame()?
[13:49] <divVerent> tuxx_: I suspect flashsv is not VFR aware. That may still be ok if using a VFR aware format, though.
[13:49] <saste> Element9, global_quality is set through the -q:v ffmpeg option, or you can use -global_quality (check again generic options in the ffmpeg manual, or even in libavcodec.3 in a recent install)
[13:49] <divVerent> tuxx_: encode_video() again
[13:49] <divVerent> but you should really avoid that
[13:49] <divVerent> tuxx_: what format are you using?
[13:49] <tuxx_> divVerent: hmm but if it has to encode again
[13:49] <tuxx_> what is the performance advantage?
[13:49] <tuxx_> divVerent: i'm writing a vnc recorder
[13:49] <divVerent> that you save on generating the input frame
[13:49] <divVerent> for a game renderer that is sometimes viable
[13:50] <tuxx_> divVerent: it gets RGB565 or RGB32 and converts to BGR24 and encodes in flashsv
[13:50] <divVerent> flashsv in what container?
[13:50] <tuxx_> divVerent: i don't really care which container.. flv, avi
[13:50] <tuxx_> i dont know whats best :/
[13:50] <divVerent> if the container has proper timestamps, you can again skip frames
[13:50] <Element9> saste: so the greater quality will require more cpu, right? is there an option for less compression that will produce good quality but a big file?
[13:51] <divVerent> tuxx_: at least from reading ffmpeg source on this, flv is fully VFR aware
[13:51] <divVerent> so flashsv inside flv should work as VFR
[13:51] <saste> Element9, no you can also tune the quality from what i see
[13:51] <saste> Element9, no you can *only* tune the quality from what i see
[13:51] <divVerent> or rather, "mostly VFR aware"
[13:51] <divVerent> you need to make sure the decode timestamp is always positive
[13:52] <tuxx_> divVerent: the other problem is that I am somehow not generating time stamps properly
[13:52] <divVerent> but you probably already ensure that
[13:52] <tuxx_> [avi @ 0xa4e030] Encoder did not produce proper pts, making some up.
[13:52] <divVerent> tuxx_: that is a critical issue then
[13:52] <Element9> saste: thanks for the help
[13:52] <tuxx_> divVerent: yea :/
[13:53] <divVerent> tuxx_: you really have to create pts, feed them to avcodec_encode_video2() as part of the const AVPacket *avpkt
[13:53] <divVerent> sorry
[13:53] <saste> Element9, and gop size
[13:53] <divVerent> as part of the const AVFrame *frame
[13:53] <tuxx_> divVerent: http://pastebin.com/bMBSrWXL
[13:53] <divVerent> then your AVPacket will contain a timestamp
[13:53] <divVerent> which you have to rescale (av_rescale_q or something like that)
[13:53] <tuxx_> pkt.pts= c->coded_frame->pts;
[13:53] <tuxx_> i do that after calling encode_video
[13:53] <divVerent> before you av_interleave_write_frame that packet
[13:53] <tuxx_> is that not sufficient?
[13:53] <Element9> saste: i saw that. let me go and read what gop is :)
[13:53] <divVerent> tuxx_: not really sufficient
[13:54] <divVerent> you are using the old and deprecated video encoding interface
[13:54] <divVerent> you should even be getting compiler warnings for that
[13:54] <divVerent> that old interface has no proper timestamp support
[13:54] <tuxx_> divVerent: oh
[13:54] <divVerent> but, even then
[13:54] <divVerent> for flashsv it should work anyway :P
[13:54] <divVerent> it will fail horribly for e.g. H.264, though
[13:54] <divVerent> which has B-frames
[13:54] <divVerent> you have another issue here
[13:54] <divVerent> see this?
[13:54] <divVerent>                         pkt.pts= c->coded_frame->pts;
[13:55] <tuxx_> yea?
[13:55] <divVerent> this line actually has two bugs
[13:55] <divVerent> 1. you need to av_rescale_q from the codec's to the stream's time base
[13:55] <divVerent> stupid, but that's how it is
[13:55] <divVerent> 2. where do the pts come from?
[13:55] <divVerent> you assume the codec generated pts for you
[13:55] <divVerent> but where should it get these from?
[13:55] <tuxx_> gettimeofday? :)
[13:55] <divVerent> in case of dropped input frames, e.g., how do you even make the codec aware of that
[13:55] <divVerent> no, encoding is not assumed to be realtime :P
[13:56] <divVerent> typically, here the codec just counts frames
[13:56] <tuxx_> divVerent: true.. so what must i do?
[13:56] <divVerent> and pts is set to the number of encoded frames
[13:56] <divVerent> basically, the LEAST you have to do:
[13:56] <divVerent> write_video_frame() needs an argument for the current pts
[13:56] <divVerent> e.g. as a double time, or as a frame number, whatever
[13:56] <tuxx_> divVerent: hmmmm please slow down :)
[13:57] <tuxx_> i'm struggling to follow here
[13:57] <divVerent> basically, your write_video_frame() function has to know where in time it is
[13:57] <divVerent> when you call it, you need to pass the elapsed time of your VNC stream in some form
[13:57] <tuxx_> divVerent: so i can just increase a counter?
[13:57] <tuxx_> and pass it as pts?
[13:57] <divVerent> yes
[13:57] <divVerent> the idea here is that if you drop frames, you increase the counter anyway
[13:57] <tuxx_> divVerent: understood
[13:58] <divVerent> now, assume you pass a timestamp of type double
[13:58] <divVerent> you will have to convert that into a pts number
[13:58] <tuxx_> i use av_rescale_q(a,b,c) then right?
[13:59] <divVerent> not if your input is in double
[13:59] <divVerent> av_rescale_q is to convert integer pts to integer pts
[13:59] <divVerent> in different time bases
[13:59] <divVerent> e.g. from a frame counter to a codec pts
[13:59] <tuxx_> i'd like to just use a incrementable integer
[13:59] <tuxx_> for each frame
[13:59] <divVerent> okay, then that's what you use
[13:59] <tuxx_> hm let me see if i understand av_rescale_q
[14:00] <divVerent> av_rescale_q(framecount, fixed_timebase_of_your_vnc_fps, oc->time_base)
[14:00] <divVerent> is what would go into pkt.pts
[14:00] <divVerent> HOWEVER, stop here for now
[14:00] <divVerent> this is for using the old deprecated interface
[14:00] <divVerent> which will fail for some more modern codecs than flashsv2
[14:00] <divVerent> I instead recommend you to rather use avcodec_encode_video2()
[14:00] <tuxx_> what is fixed_timebase_of_your_vnc_fps?
[14:00] <divVerent> whatever you want
[14:00] <divVerent> the fps your outside code runs at
[14:01] <tuxx_> well im not sure that is even fixed
[14:01] <tuxx_> it depends on when the screen updates
[14:01] <divVerent> then you should go back to my previous idea to use a double value for the timestamp
[14:01] <tuxx_> hmmm okay?
[14:01] <divVerent> VNC is by design VFR after all
[14:02] <divVerent> but you may want to limit the maximum updates per second
[14:02] <divVerent> which you can still easily do
[14:02] <divVerent> in THAT case...
[14:02] <tuxx_> so your previous idea then is what?
[14:02] <divVerent> it is more complex then
[14:02] <divVerent> so listen
[14:02] <tuxx_> omg.. :/
[14:02] <divVerent> you switch to avcodec_encode_video2()
[14:03] <divVerent> the pts for that is rint(timevalue * oc->codec->time_base.den / oc->codec->time_base.num)
[14:03] <divVerent> then you will get an AVFrame with these same pts back
[14:04] <divVerent> this AVFrame you then - similar to now - convert into an AVPacket
[14:04] <divVerent> but you can't just copy the pts, you need to do
[14:04] <tuxx_> avcodec_encode_video2 and avcodec_encode_video take the same arguments?
[14:04] <divVerent> pkt.pts = av_rescale_q(avframe.pts, oc->codec->time_base, oc->time_base)
[14:04] <divVerent> no
[14:04] <divVerent> see avcodec.h
[14:05] <divVerent> and now there is ONE more issue you need to watch out for
[14:05] <tuxx_> E486: Pattern not found: avcodec_encode_video2
[14:05] <divVerent> the minimum allowed distance between two frames is the larger one of the two time bases you get
[14:05] <divVerent> oc->time_base and oc->codec->time_base
[14:05] <tuxx_> its 1.0?
[14:05] <divVerent> yes, for ages now
[14:05] <divVerent> was IIRC there way before 1.0 release
[14:05] <divVerent> 0.11.2 had it too
[14:06] <tuxx_> the arguments are the same
[14:06] <divVerent> not at all
[14:06] <divVerent> int avcodec_encode_video2(AVCodecContext *avctx, AVPacket *avpkt,
[14:06] <divVerent>                           const AVFrame *frame, int *got_packet_ptr);
[14:06] <tuxx_> or wait
[14:06] <divVerent> compare to
[14:06] <tuxx_> sorry
[14:06] <divVerent> int avcodec_encode_video(AVCodecContext *avctx, uint8_t *buf, int buf_size,
[14:06] <divVerent>                          const AVFrame *pict);
[14:07] <divVerent> the difference is that the buf and buf_size are replaced by an AVPacket
[14:07] <divVerent> in order to properly transport pts and other such info
[14:07] <divVerent> and that also means my line above was wrong a bit ;)
[14:07] <divVerent> pkt.pts = av_rescale_q(pkt+.pts, oc->codec->time_base, oc->time_base)
[14:07] <divVerent> oops
[14:07] <divVerent> pkt.pts = av_rescale_q(pkt.pts, oc->codec->time_base, oc->time_base)
[14:07] <divVerent> this magic line will convert the packet from the codec's to the stream's view of timestamps
[14:08] <divVerent> as these two typically differ
[14:08] <tuxx_> but what is the initial double value?
[14:08] <tuxx_> im not following you, honestly
[14:08] <divVerent> you define that
[14:08] <divVerent> you decide when to store screen updates to the video
[14:08] <divVerent> like, when enough on the screen changed
[14:08] <divVerent> or a time interval is up
[14:08] <divVerent> or whatever
[14:08] <divVerent> this is the work of the vnc protocol side
[14:09] <divVerent> the double time value is basically the elapsed time since start of your VNC stream
[14:09] <divVerent> if you do live recording of VNC streams, you can just use gettimeofday() to get it
[14:09] <tuxx_> so i cld go back to my framecounter basically
[14:09] <divVerent> yes, but I'd recommend against that
[14:09] <tuxx_> because i cld count missed frames
[14:09] <divVerent> for this application
[14:09] <tuxx_> as a double
[14:09] <tuxx_> double a+=0.1
[14:09] <divVerent> if you use a double, you don't even need to think in missed frames
[14:10] <divVerent> you just encode when you can
[14:10] <tuxx_> but i need some kind of reference of time
[14:10] <divVerent> and in high motion scenes you can encode like 20 frames per sec
[14:10] <divVerent> and when nothing changes, you encode only one per sec
[14:10] <tuxx_> hm
[14:10] <divVerent> especially for vnc stuff this would be quite useful
[14:10] <tuxx_> but i still dont understand what the value of the double should be
[14:10] <divVerent> the elapsed time since start of your vnc stream
[14:10] <divVerent> how does your tool work
[14:10] <divVerent> does it hook into a live vnc connection
[14:11] <divVerent> or do you process a dumped vnc connection later?
[14:11] <tuxx_> yea it hooks into a live vnc connection
[14:11] <divVerent> ah
[14:11] <tuxx_> and there are 2 threads
[14:11] <divVerent> okay, then gettimeofday() is your friend
[14:11] <divVerent> get the current time at start of the recording
[14:11] <tuxx_> one thread does the handling of the vnc messages
[14:11] <divVerent> and the current time whenever you write a video frame
[14:11] <divVerent> and the difference of that is the pts
[14:11] <tuxx_> and one thread uses timerfd to assess a fixed frame rate
[14:11] <tuxx_> and i can see how much was missed
[14:11] <divVerent> you really should drop the idea of a fixed frame rate for this, unless you really have to keep it
[14:12] <tuxx_> no i dont
[14:12] <divVerent> especially for screen recording
[14:12] <tuxx_> i wld muuuch prefer not to have to use it
[14:12] <divVerent> because there it's quite common that for many seconds nothing changes
[14:12] <tuxx_> the video becomes huge
[14:12] <tuxx_> and it runs on an embedded device (beagle board)
[14:12] <tuxx_> which is totally overburdoned with the encoding
[14:12] <divVerent> to get it "optimally" done, you could use logic like this
[14:12] <divVerent> the more changes, the faster you write a new frame
[14:12] <tuxx_> divVerent: yea
[14:13] <divVerent> or even, write a new frame only if there were changes at all
[14:13] <tuxx_> i get a callback function upon frame updates
[14:13] <tuxx_> so i know exactly when encoding is needed
[14:13] <divVerent> just don't write a new video frame on EVERY update ;)
[14:13] <divVerent> but on as many as you can
[14:13] <tuxx_> yea
[14:13] <divVerent> this part is somewhat difficult to figure out probably
[14:13] <tuxx_> hmm complicated
[14:13] <divVerent> you generally want to write a frame when you know your screen is complete
[14:13] <divVerent> not sure if VNC tells the client that
[14:14] <tuxx_> in fact i dont think it does
[14:14] <divVerent> but during rendering of a single rectangle, you may want to avoid that
[14:14] <tuxx_> i moved the write_frame() to the update callback earlier
[14:14] <tuxx_> and i was getting weird frame buildups
[14:14] <tuxx_> i think that was because the frames were not fully complete
[14:14] <divVerent> hehe
[14:14] <divVerent> then it probably calls that callback on every vnc message
[14:14] <divVerent> one trick you can use BTW
[14:14] <divVerent> you have a socket to the vnc connection, right?
[14:14] <tuxx_> i use libvncserver
[14:15] <divVerent> you can, after processing an update, check if there is more to come using a zero-time select() call
[14:15] <tuxx_> its abstracted.. i might be able to get the socket tho
[14:15] <divVerent> or it may have its own interface to tell that
[14:15] <divVerent> basically, a nonblocking check if there is more data to be rendered in the queue
[14:15] <tuxx_> divVerent: hm..
[14:15] <divVerent> and if there is not, you want to make a video frame
[14:15] <tuxx_> sounds quite horrible
[14:15] <divVerent> and if there is, you want to make a video frame, but only up to a fixed frame rate
[14:16] <divVerent> by comparing the current time to the time of the previous frame
[14:16] <tuxx_> hmm yes
[14:16] <tuxx_> its getting pretty complicated
[14:16] <divVerent> vnc protocol BTW may accidentally yield this info somewhere
[14:16] <divVerent> e.g. maybe mouse pointer updates always come at the end of the packet
[14:16] <tuxx_> i shld first try to get proper time stamps
[14:16] <divVerent> or stuff like that
[14:17] <divVerent> BTW, you said you use libvncserver
[14:17] <divVerent> so you record on server side, not client side?
[14:17] <divVerent> shouldn't that mean you always have access to a full, not partially rendered, screen?
[14:19] <tuxx_> no i record on the client side
[14:19] <tuxx_> its just called vncSERVER but it supports both
[14:19] <tuxx_> client and server
[14:20] <tuxx_> ok i think i'm going to try to fix the pts
[14:20] <tuxx_> are you going to be around for a while? :()
[14:20] <tuxx_> are you going to be around for a while? :_
[14:20] <tuxx_> woops :)
[14:20] <divVerent> no
[14:20] <tuxx_> oh darn :(
[14:21] <tuxx_> divVerent: i pmed you...
[14:23] <tuxx_> divVerent: but just so that i understand you
[14:23] <tuxx_> i shld use a gettimeofday approach and somehow encode the time into a double
[14:24] <tuxx_> and then use what avcodec_encode_video2()?
[14:24] <divVerent> yes
[14:24] <tuxx_> i'll try
[14:24] <tuxx_> thank you
[14:24] <divVerent> if you have to, we can get it to work with avcodec_encode_video() too, but only if it really has to be
[14:25] <divVerent> like, if you have to support old ffmpeg versions
[14:25] <divVerent> the issue is, avcodec_encode_video() may actually disappear eventually from ffmpeg
[14:25] <tuxx_> well really i dont care much about compatibilty
[14:26] <tuxx_> i just want it to work
[14:26] <tuxx_> this is meant for a one time project and a dirty hack would be sufficient
[14:27] <divVerent> well, then I'd recommend using the newer function
[14:27] <divVerent> as that will also help you when you want to try other codecs
[14:27] <tuxx_> but i haven't understood yet what the arguments of video2 are
[14:27] <divVerent> I suggest reading the info in the header file
[14:31] <tuxx_> got_packet_ptr is 1 or 0 on non-empty packets?
[14:32] <tuxx_> and 0 is the success return value
[14:32] <tuxx_> isnt this information redundant?
[16:55] <shevy> guys, got a question
[16:55] <shevy> when a file already exists, ffmpeg asks me before overwriting
[16:56] <shevy> is there a way to automatically assume that I would have answered "yes", always? perhaps as a configuration option?
[16:56] <Mavrik> shevy, "-y"
[16:57] <shevy> ah, cool
[16:57] <shevy> thanks Mavrik
[17:24] <xroberx> hi
[17:25] <xroberx> does anyone know how to open an image sequence (image2 demuxer) with avformat_open_input() ? I can successfully play image sequence from the commandline (ffplay) but not programmatically. The problem is that avformat_open_input() returns error "No such file or directory"
[17:26] <xroberx> for example: ffplay video%04d.jpg works, but avformat_open_input(&pFormatCtx, "video%04d.jpg", NULL, NULL) doesn't
[17:27] <xroberx> I haven't been able to find any information regarding this anywhere, that's why I am asking here
[17:27] <xroberx> thank you
[17:33] <xroberx> correct me if I'm wrong, but the needed demuxer is image2, the codec is mjpeg and the protocol is file
[17:36] <ubitux> xroberx: what if you make the "image2" format explicit?
[17:36] <xroberx> ubitux: how do I do that programmatically ?
[17:41] <divVerent> xroberx: int avformat_open_input(AVFormatContext **ps, const char *filename, AVInputFormat *fmt, AVDictionary **options);
[17:41] <divVerent> by passing the AVInputFormat, that is
[17:42] <xroberx> I've just done a quick test on linux and avformat_open_input() works fine with "video%04d.jpg", but it doesn't work on Android even though I can play video files
[17:44] <xroberx> the problem is that the error message "No such file or directory" is not very helpful, probably it has something to do with a missing demuxer/decoder
[17:45] <xroberx> but as far as I know, only mjpeg parser/demuxer/decoder and image2 demuxer are needed, right ?
[17:45] <xroberx> and those are compiled in
[17:58] <xroberx> here's the output of the configure script in case anyone wants to take a look at it: http://pastebin.com/kdjVjjBs
[17:59] <xroberx> I would really like to know if there is something missing...
[18:06] <divVerent> xroberx: did you also get an ffmpeg binary?
[18:06] <divVerent> can you try with this one whether the demuxer works?
[18:07] <divVerent> and also, whether there are any av_log messages or an error return value?
[18:07] <tuxx_> divVerent: btw it worked! :)
[18:07] <divVerent> tuxx_: nice :P
[18:07] <divVerent> now you have the hard task of properly deciding when to make a new frame
[18:07] <tuxx_> divVerent: i use variable frame rates (with timestamps)
[18:07] <divVerent> to avoid halfway drawn windows
[18:08] <tuxx_> divVerent: honestly... i just do fb_update = true; whenever i get a new frame from vnc
[18:08] <divVerent> and that works?
[18:08] <divVerent> :P
[18:08] <tuxx_> yea
[18:08] <divVerent> there is one possible issue BTW... you may output frames faster than the time base allowed
[18:08] <tuxx_> how so?
[18:08] <divVerent> basically, when the timestamp didn't increase "enough", avoid encoding a new frame
[18:09] <divVerent> the integer timestamps have to always increase
[18:09] <tuxx_> divVerent: i see. yea good point
[18:09] <xroberx> divVerent: the error message as returned by av_strerror (with the return value of avformat_open_input()) is "No such file or directory"
[18:09] <divVerent> one workaround therefore is using huge fps values for the time base
[18:09] <divVerent> like 1000fps
[18:10] <tuxx_> but the hardware is so slow.. it even has difficulty with 6 fps
[18:10] <tuxx_> i think there shld always be sufficient time between frames
[18:10] <tuxx_> but yea.. thats pretty stupid to rely on your hardware being slow :D
[18:10] <tuxx_> but divVerent i use num_frames++ as the pts source
[18:10] <tuxx_> not gettimeofday :)
[18:10] <tuxx_> i was a bit lazy and it worked
[18:10] <tuxx_> so i kept it
[18:11] <divVerent> hehe
[18:11] <tuxx_> oh and i use avcodec_encode_video not video2
[18:11] <tuxx_> simply because i failed to transition the code
[18:11] <tuxx_> it gave me segmentation faults
[18:12] <tuxx_> i saw there is an av_gettime() which is basically a gettimeofday which returns the value as int64_t
[18:12] <tuxx_> but the codec told me something about too many frames having been missed
[18:13] <tuxx_> or something.. anyways my current hack seems to do the job
[18:13] <xroberx> divVerent: I'm trying your suggestion of passing in an AVInputFormat struct to avformat_open_input() but how do I allocate that struct ? I've tried av_find_input_format but it returns null
[18:58] <aku> Hi. Have question about libav: how can i compute second from pts ?
[18:59] <xroberx> divVerent: I've successfully passed in an AVInputFormat. I got it like this: AVInputFormat *inputFmt = av_find_input_format("image2");
[18:59] <divVerent> sounds right to me
[18:59] <divVerent> and then it works?
[19:00] <xroberx> divVerent: the thing is that if I pass that inputFmt to avformat_open_input() it works fine on linux, but on Android I get "No such file or directory"
[19:00] <aku> i found on web this formula: pts * codecCtx->time_base.num / codecCtx->time_base.den, but output doesn't look like something reasonable
[19:01] <xroberx> divVerent: I know for sure "image2" is the right input format, because on Linux the image sequence is opened successfully with that format but not with others
[19:04] <xroberx> If only I knew what's going on... I'm about to get crazy :)
[19:21] <sine__> hi guys im at a persons house trying to help them out. they have recorded some talks in wma format. every software like audacity and others wont open the file, will ffmpeg do it ?
[19:23] <sacarasc> Possibly.
[19:23] <sacarasc> There are multiple WMA codecs, I think ffmpeg can do most, if not all...
[19:24] <sine__> thanks. ffmpeg is normally awesome
[19:24] <sine__> i want to output it at 128
[19:24] <sine__> mp3
[19:24] <sine__> no in fact i want to dump it to wave uncompressed
[19:24] <sine__> whats the syntax for wave uncompressed
[19:24] <sine__> like keep the channels etc
[19:25] <sacarasc> ffmpeg -i input output.wav
[19:25] <sine__> ffmpeg -i some.wma out.wav
[19:31] <mbrit> hi! to test RTP i use this command ffmpeg -re -i file.mpg -an -vcodec copy -f rtp rtp://127.0.0.1:11000 which works just fine on windows, but when i try to use it on gentoo i get this "rtp://127.0.0.1:11000: No such file or directory"
[19:31] <mbrit> what could be happening?
[19:53] <xroberx> mbrit: I'm facing a similar issue but with file: instead of rtp: (the file is opened fine on Linux but on Android I get "No such file or directory")
[19:59] <mbrit> eix lincity
[19:59] <mbrit> oops
[19:59] <mbrit> ok
[20:02] <mbrit> this is what i get http://pastebin.com/xjqjWSa0
[20:03] <burek> mbrit, can you type: ffmpeg -protocols
[20:04] <burek> see if there is rtp
[20:05] <mbrit> oh, no it isnt there
[20:07] <burek> :/
[20:07] <burek> your ffmpeg wasn't compiled with support for rtp
[20:07] <burek> protocol*
[20:08] <mbrit> :S ok thanks, i'll try to fix that
[20:17] <mbrit> burek done, just did an emerge using the "network" flag, thanks again
[20:17] <burek> :beer: :)
[20:29] <brontosaurusrex> latest windows git compile is to be found where?
[20:35] <brontosaurusrex> JEEB, you here?
[20:38] <brontosaurusrex> nm, found something
[22:35] <kappa> is there a way to mux video with klv stream into mxf from command line?
[23:38] <shroomM> is there a way to tell ffmpeg to encode a stream until it reaches the end of the video and ignore the excess of the audio if it's longer than the video ?
[23:38] <shroomM> it looks like the asyncts is such a filter
[23:59] <saste> shroomM, -shortest
[00:00] --- Thu Nov  8 2012


More information about the Ffmpeg-devel-irc mailing list