[Ffmpeg-devel-irc] ffmpeg.log.20120615

burek burek021 at gmail.com
Sat Jun 16 02:05:02 CEST 2012


[01:01] <brocatz> so because you can't send sigint in windows i am thinking i need to rewrite some part of ffmpeg
[01:02] <brocatz> so that my stream exits cleanly
[01:02] <brocatz> is that completely retarded, is there a better way
[01:02] <brocatz> i am running a ffmpeg binary through popen() at the moment on windows
[01:03] <burek> brocatz, what does your ffmpeg do
[01:04] <burek> and when do you need to stop it
[01:04] <burek> btw, you can always send it a 'q' keystroke, right?
[01:05] <brocatz> it's capturing from a webcam stream (dshow driver)
[01:05] <brocatz> and i need to terminate it cleanly
[01:07] <burek> and you are stopping it after a predefined time period or randomly
[01:08] <brocatz> the UI interacts with the user and we need to stop it when they've finished
[01:08] <brocatz> basically it's like a video photo booth
[01:09] <burek> I see
[01:09] <brocatz> and using ffmpeg is probably the best way to grab a reasonable stream seamlessly
[01:09] <burek> well, did you try sending it a 'q' keystroke
[01:09] <brocatz> giving that a go now
[01:09] <burek> ok
[01:11] <brocatz> damn
[01:11] <brocatz> i'm a retard huh
[01:11] <brocatz> thanks a lot
[01:11] <burek> :beer: :)
[01:11] <brocatz> spent ages reading about win32 api
[01:11] <brocatz> and trying to send sigint
[01:11] <brocatz> haha
[01:11] <brocatz> fuck me
[01:11] <burek> :)
[01:11] <burek> well i bet it works for ctrl+c too
[01:11] <burek> :)
[01:12] <burek> but if it works for q, then even better
[01:12] <burek> :)
[01:12] <brocatz> ctrl+c is caught and turned into a signal by the term
[01:12] <burek> yes
[01:12] <burek> but ctrl+c works on win32 too
[01:12] <burek> so, it has to have a way to stop it in a similar fashion
[01:24] <brocatz> wonder why it's not working when i write to popen's stdin
[01:24] <brocatz> i'm sending a q and calling flush
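A minimal C sketch of what is being attempted here, assuming the controlling program is C/C++ using the Windows CRT's _popen(); the dshow command line below is only a placeholder:

    #include <stdio.h>

    int main(void)
    {
        /* placeholder capture command; the real device/output names will differ */
        FILE *ff = _popen("ffmpeg -f dshow -i video=\"USB Camera\" out.mp4", "w");
        if (!ff)
            return 1;

        /* ... let the capture run until the user is done ... */

        fputc('q', ff);      /* equivalent to pressing 'q' in ffmpeg's console */
        fflush(ff);          /* make sure the byte actually reaches ffmpeg's stdin */
        return _pclose(ff);  /* waits for ffmpeg to finalize the file and exit */
    }

If this still does not stop ffmpeg, likely causes are the child being started without its stdin attached to the pipe, or an older build that only polls the console instead of reading stdin.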
[02:53] <fucheeno_droid> hello all
[02:53] <fucheeno_droid> was hoping for some help with creating thumbnails
[02:53] <fucheeno_droid> kinda green
[02:54] <fucheeno_droid> I have a bunch of avi files from a security system at a construction site recorded at 1fps. Apparently that is still too much footage for him
[02:55] <fucheeno_droid> I was wondering how I could just capture 1 keyframe every hour and just loop through all the files.
[02:55] <fucheeno_droid> I am fairly confident that it's possible
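Since the footage was recorded at 1 fps, one frame per hour is simply one frame out of every 3600, so a select filter in a shell loop over the files is one way to sketch it (file names and the PNG output pattern are placeholders):

    for f in *.avi; do
        ffmpeg -i "$f" -vf "select=not(mod(n\,3600))" -vsync vfr "${f%.avi}-%03d.png"
    done

If actual keyframes are wanted rather than every 3600th frame, the same idea can be combined with select's pict_type expression, e.g. eq(pict_type\,I).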
[09:24] <ryanmcclure> hello?
[09:36] <ryanmcclure> is there anyone here who could possibly give me a hand with ffmpeg?
[11:16] <zap0> just ask the question
[16:17] <ayaka> http://paste.debian.net/174668/
[16:17] <ayaka> how shall I compile ffmpeg to avoid that problem
[16:19] <burek> one of your libs was
[16:19] <burek> compiled as 64bit
[16:19] <burek> and you are trying to link it to produce 32bit binary
[16:19] <burek> or vice versa
[16:19] <burek> probably make distclean is the best
[16:19] <burek> and doing it again
[16:20] <ayaka> burek, the ffmpeg is 64bit, but audacious tries to make a 32-bit lib on an x64 machine?
[16:20] <ayaka> is that the solution?
[16:21] <burek> why are you mixing 32 and 64
[16:21] <burek> you can add -fPIC
[16:21] <ayaka> burek, not me, I don't want it; maybe it is a problem in audacious's configure
[16:21] <burek> but it would be better if you would build it without such issues
[16:22] <burek> can you ask the developers of audacious
[16:22] <burek> and also, what does audacious have to do with ffmpeg
[16:23] <ayaka> burek, let me make sure: my ffmpeg is amd64, but building the audacious plugins makes an x86 version?
[16:23] <burek> what command line did you use to configure your ffmpeg
[16:24] <ayaka> burek, wait a moment
[16:25] <ayaka> burek, http://paste.debian.net/174671/ but it can't disable oss now
[16:26] <burek> where is audacious in this story?
[16:28] <ayaka> burek, it is hard to describe; if it is not an ffmpeg problem, then it is an audacious thing, which is offtopic here
[16:28] <burek> ayaka, where is audacious in ffmpeg's configure line?
[16:28] <ayaka> burek, audacious  isn't in ffmpeg src
[16:29] <burek> then why are we talking about it :)
[16:32] <ayaka> burek, I came here just to make sure that I compiled ffmpeg properly; maybe I am offtopic
[16:33] <ayaka> then it is a pure ffmpeg problem: --enable-shared doesn't produce .so files (I don't see any), and how do I disable oss
[16:34] <burek> no
[16:35] <burek> your problem is
[16:35] <burek> that your libs
[16:35] <burek> that are required by ffmpeg
[16:35] <burek> are not compiled correctly
[16:35] <burek> one of them is creating that problem, so you can just add -fPIC
[16:35] <burek> and recompile and be happy
[16:35] <burek> but it would be better if you would do it properly
[16:36] <ayaka> no no, I am not asking that problem now
[16:36] <ayaka> I want to know why I can't build ffmpeg as a shared lib (.so files?) and how I can disable oss
[16:37] <burek> you can make shared
[16:38] <burek> either add -fPIC
[16:38] <burek> or recompile your dependent libs using --enable-shared=yes too
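As an illustration of the second option, a dependency such as x264 can be rebuilt so its objects are position-independent and/or shared; x264 here only stands in for whichever library the linker actually complained about:

    cd x264
    make distclean
    ./configure --enable-shared --enable-pic
    make
    sudo make install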
[16:38] <burek> for oss, type ./configure --help
[16:38] <burek> and search for oss
[16:40] <ayaka> burek, nothing in configure about oss
[16:42] <ayaka> --enable-shared=yes seems to be no use, ffmpeg still only compiles static libs (.a files)
[16:43] <burek> no
[16:43] <burek> it compiles shared too
[16:43] <burek> make distclean
[16:43] <burek> and for ffmpeg, just --enable-shared
[16:43] <burek> no need for =yes
[16:43] <burek> read ./configure --help
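Putting the two answers together, a hedged example configure run; ./configure --help has no single oss switch, so the OSS capture/playback devices are disabled through the generic device options (assuming they are registered under the name "oss"):

    make distclean
    ./configure --enable-shared --enable-pic --disable-indev=oss --disable-outdev=oss
    make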
[16:43] <leoj3n> Does -trellis do anything in the new version of ffmpeg (should I still be doing -trellis 1)?
[16:43] <burek> leoj3n, what is -trellis ?
[16:44] <burek> can't find it in ffmpeg docs: http://ffmpeg.org/ffmpeg.html
[16:44] <leoj3n> Don't know. Was recommended a long time ago. But I don't think anyone really knows or something.
[16:44] <burek> then remove it :D
[16:45] <burek> what you don't know can't hurt you :)))
[16:45] <leoj3n> hahaah ok thanks
[16:46] <ayaka> burek,  thank you, I will try later, I have to sleep
[16:46] <leoj3n> "The main decision made in quantization is which coefficients to round up and which to round down. Trellis chooses the optimal rounding choices for the maximum rate-distortion score, to maximize PSNR relative to bitrate. This generally increases quality relative to bitrate by about 5% for a somewhat small speed cost. It should generally be enabled. Note that trellis requires CABAC."
[16:46] <burek> ayaka, :beer: :)
[16:47] <ayaka> burek,  thank you, I like beer
[16:47] <dletozeun> hi all
[16:49] <dletozeun> I am looking for a way to do antialiasing on a video encoded with H.264 via ffmpeg, are there any existing filters?
[16:50] <dletozeun> The input is a set of images captured from a 3D application in which AA can't be enabled, and I'd like to antialias the video in a post-processing pass if possible
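libavfilter does not appear to have a dedicated antialiasing filter; a common workaround is to capture the frames at a higher resolution and let ffmpeg downscale them with a good scaler, which amounts to supersampling. A hedged sketch, assuming oversized PNG frames and a 1280x720 target:

    ffmpeg -i frame_%04d.png -vf scale=1280:720 -sws_flags lanczos -vcodec libx264 -crf 18 out.mp4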
[17:05] <leoj3n> can I do -acodec copy ...but for 5.1 surround source footage to be stereo?
[17:05] <leoj3n> for = force*
[17:25] <zap0> copy 5.1 and it magically is stereo?
[17:25] <zap0> what are you on about?
[17:35] <Fire_> is a program that just feeds ffmpeg a command line string, and packages ffmpeg with itself (but isn't open source) breaking the license?
[17:36] <zap0> Fire_, in some countries it will depend on what is inside the binary.
[17:37] <Fire_> hmm, I was looking to see if ffsplit is freaking the license
[17:37] <Fire_> but since they don't use a library, just the binary itself, I'm not sure how it all applies
[17:38] <Fire_> breaking*
[19:11] <burek> leoj3n, you can map audio channels
[19:11] <burek> but most probably you'll need to re-encode your audio
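A hedged example of that re-encode: copy the video and let the audio encoder downmix to two channels (codec and bitrate are placeholders):

    ffmpeg -i in.mkv -vcodec copy -acodec libmp3lame -ac 2 -ab 192k out.mkv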
[19:12] <burek> <Fire_> is a program that just feeds ffmpeg a command line string, and packages ffmpeg with itself (but isn't open source) breaking the license?
[19:12] <burek> you can freely use ffmpeg, if it is installed already on the end-user machine
[19:13] <burek> but if you bundle ffmpeg libraries/binaries/docs into your own package
[19:13] <burek> and distribute it, then you need to respect ffmpeg's license
[19:14] <burek> the best way to do what you want is to notify users they need to first install ffmpeg, prior to using your application
[19:15] <burek> just to be sure, read this: http://ffmpeg.org/legal.html
[19:15] <burek> and if something is still not clear, you can contact developers directly here: http://ffmpeg.org/contact.html
[19:15] <burek> or here http://ffmpeg.org/consulting.html
[21:41] <zap0> i've got some weird behaviour when i specify -vcodec r210: the file size is wrong.
[21:42] <zap0> is there something else i have to do for r210 to work?
[21:42] <JEEBsv> you're not expecting the 10bit values to be exactly 10bit long, right?
[21:43] <zap0> i expect -r210 to be 32bit/pixel.
[21:44] <zap0> i have -f rawvideo   so i'd expect headerless file..  so filesize should be   w*h*frame_count*4   right?
[21:46] <JEEBsv> I'm not exactly sure, but wouldn't 10bit RGB be 16bit+16bit+16bit ?
[21:46] <JEEBsv> because each 10bit value goes into 16bit, and then you have three planes
[21:47] <JEEBsv> I think the pix_fmt for it was rgb48le, or something
[21:47] <JEEBsv> not sure
[21:47] <JEEBsv> definitely not sure
[21:47] <zap0> mmm...  i understand what you're saying.  just not sure it's right.
[21:48] <zap0> if i had  -pix_fmt ..48..  i'd expect it,  but i have not, i have -vcodec r210
[21:48] <zap0> i don't think there is a pix_fmt for 10bit
[21:49] <JEEBsv> well, I'm not sure how you stick 10bit values next to each other in different bytes :P
[21:50] <zap0> r210 is   xxrrrrrrrrrrggggggggggbbbbbbbbbb  32bits.
[21:51] <JEEBsv> pixel = (r << 20) | (g << 10) | b >> 2; , yeah
[21:51] <zap0> xxrr rrrr rrrr gggg gggg ggbb bbbb bbbb
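A tiny C sketch of packing one pixel into that layout, starting from 16-bit-per-channel values as in rgb48 (how the resulting 32-bit word is byte-ordered in the file is left aside):

    #include <stdint.h>

    /* pack one rgb48 pixel (16 bits per channel) into the layout above:
       xx rrrrrrrrrr gggggggggg bbbbbbbbbb */
    static uint32_t pack_r210(uint16_t r16, uint16_t g16, uint16_t b16)
    {
        uint32_t r = r16 >> 6;              /* keep the top 10 bits of each channel */
        uint32_t g = g16 >> 6;
        uint32_t b = b16 >> 6;
        return (r << 20) | (g << 10) | b;   /* the top 2 bits stay zero */
    }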
[21:52] <zap0> how do i get a list of -pix_fmts ?
[21:53] <burek> ffmpeg -pix_fmts ?
[21:53] <zap0> oh so it is ;)
[21:53] <burek> :)
[21:54] <JEEBsv> zap0: the encoder isn't really hard to read http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/r210enc.c;h=0a7dd332ba61d8392fcc57848eb5fb9bf6c70cf7;hb=HEAD#l37
[21:54] <zap0> mmm.. the only 10s i see are  gbrp10
[21:54] <JEEBsv> (you can see at the bottom that it takes in rgb48le
[21:55] <zap0> no idea what the 'p' is
[21:55] <JEEBsv> planar
[21:55] <JEEBsv> it's for RGB-in-H.264
[21:55] <zap0> oh.
[21:56] <zap0> ok, so the encoder is expecting a 48 format.
[21:56] <zap0> so if i added  -pix_fmt ..48..   you think that might help
[21:56] <JEEBsv> it should take care of that automagically tho
[21:56] <zap0> the dump i have suggests it already did choose a 48
[21:57] <JEEBsv> try checking the output data and see if the values are there as you expected
[21:57] <JEEBsv> with a hex editor or something
[21:57] <JEEBsv> that should check it if it's output as you wanted it
[21:57] <JEEBsv> and then you could start moving into whether or not your calculations were wrong
[21:57] <zap0> it looks like it's auto-inserting this    [scale @ 0000000001d7c6b0] w:720 h:576 fmt:rgb24 sar:64/45 -> w:720 h:576 fmt:rgb48le sar:64/45 flags:0x4
[21:57] <JEEBsv> yeah
[21:58] <zap0> so the fact i'm not giving a specific -pix_fmt  doesn't really matter, as it automagically selects this rgb48le anyway.
[21:59] <JEEBsv> yes, ffmpeg the command line app tries to convert to a matching pix_fmt for the encoder
[21:59] <g4s> Is it possible to use a string format code to add the video timestamp to a screen image? I.e. ffmpeg -i "input.mp4" -r 0.1 -vcodec png out/pic-%d.png but using the timecode of the video instead of a digit?
[22:00] <g4s> Err instead of an autoincrement digit
[22:00] <zap0> the thing is, some files have converted successfully, using the same commandline.
[22:01] <llogan> g4s: see the drawtext filter (if i understand your question correctly).
[22:04] <zap0> do you think outputing rgb48 instead of r210  and writing my own encoder will help?
[22:09] <TheBrian> I've been struggling for a couple days to encode videos using ffmpeg to use for HTTP Live Streaming.  I have been successful in getting it to work.  However, every attempt has resulted in videos that have trouble after seeking.  I'm thinking this may be because the segmenting program is breaking up streams on non-IDR frames.  I don't know though.. anyone else tried this?
[22:10] <TheBrian> the videos will sometimes seek ok, but the video doesn't recover until the next segment loads
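One way to rule that out is to pin the GOP structure so every segment boundary lands on an IDR frame: disable scene-cut keyframes and make the GOP length divide the segment duration. A hedged sketch for 10-second segments of 25 fps material (the segmenter itself is a separate step and not shown):

    ffmpeg -i in.mp4 -vcodec libx264 -g 250 -keyint_min 250 -sc_threshold 0 \
           -acodec aac -strict experimental -ab 128k out.ts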
[22:12] <g4s> llogan: thanks I will read up on it
[22:12] <g4s> where do I read on filters?  I was also trying to figure out how to interpret results from the interlace detection filter
[22:12] <zap0> JEEBsv, (sorry if I'm bothering), can you figure out this math..  w=720   h=576   fr=80   filesize=143,327,232    32bit/pixel..   but  w*h*fr*4=132,710,400
[22:24] <llogan> g4s: http://ffmpeg.org/libavfilter.html (or "man ffmpeg" for example)
[22:25] <llogan> g4s: also some examples https://ffmpeg.org/trac/ffmpeg/wiki/FilteringGuide
[22:26] <llogan> such as a "Burnt in Timecode" example, but i'm not sure that's what you want
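A hedged, simplified variant of that wiki's "Burnt in Timecode" example (font path, frame rate and starting timecode are placeholders, and drawtext requires a build with --enable-libfreetype):

    ffmpeg -i input.mp4 -vf "drawtext=fontfile=/usr/share/fonts/TTF/DejaVuSans.ttf:text='TC\: ':timecode='00\:00\:00\:00':rate=25:fontcolor=white:box=1:boxcolor=0x000000AA:x=10:y=10" out.mp4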
[22:27] <zap0> anyone else able to offer some insight?
[22:30] <zap0> when i do -pix_fmt rgb48le  i get the right size.   as soon as i add -vcodec r210  it's wrong.
[22:31] <TheBrian> it seems like anything i encode with libx264 has trouble seeking after segmentation
[22:32] <g4s> Thank you llogan
[23:13] <zap0> i'm getting a different result  for ffprobe than ffmpeg.    ffprobe says frame count is 80.   ffmpeg processes 81 frames
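For what it is worth, those numbers can be reconciled if two things hold: the r210 encoder pads each row (the r210enc.c linked earlier aligns the width up to a multiple of 64 pixels) and 81 frames were written rather than 80. That is an inference from reading the encoder, not something confirmed in this log; the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        int w = 720, h = 576, frames = 81;     /* ffmpeg reported processing 81 frames */
        int aligned_w = (w + 63) & ~63;        /* row padded to a 64-pixel multiple -> 768 */
        long long size = (long long)aligned_w * h * 4 * frames;
        printf("%lld\n", size);                /* 143327232, the observed file size */
        return 0;
    }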
[23:35] <giovani> Hello, I'm currently using VLC to capture and stream some video, and I'm wondering what some pros/cons might be against using ffmpeg/ffserver
[00:00] --- Sat Jun 16 2012

