[Ffmpeg-devel-irc] ffmpeg.log.20140418

burek burek021 at gmail.com
Sat Apr 19 02:05:01 CEST 2014


[01:18] <malfy90> sera
[01:40] <BabySuperman> hey guys, i am getting some gross scaling errors turning many .png's into a video, see this frame for example: http://youtu.be/yBTDiGhJjlo?t=25s -- that was made using ffmpeg -framerate 3 -i %05d.img.png output.mp4
[01:41] <BabySuperman> When I do try to use scaling, like -vf scale="640:-1" it asks to overwrite all the images, then doesn't output any video?
[04:03] <BabySuperman> hey guys, i am getting some gross scaling errors turning many .png's into a video, see this frame for example: http://youtu.be/yBTDiGhJjlo?t=25s -- that was made using ffmpeg -framerate 3 -i %05d.img.png output.mp4
[04:03] <BabySuperman> I've tried -vf scale=640:-1 for example and it's still pretty weird
[04:19] <BabySuperman> llogan ty but i just think i figgered it out pretty decently, thanks to https://stackoverflow.com/questions/8133242/ffmpeg-resize-down-larger-video-to-fit-desired-size-and-add-padding
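The fit-and-pad approach from that Stack Overflow answer can be sketched as below. The 640x480 target is an assumption for illustration; `force_original_aspect_ratio` (available in ffmpeg builds newer than the ones discussed here) shrinks each frame to fit, and `pad` centers it on the canvas.

```shell
#!/bin/sh
# Sketch: scale each PNG to fit inside 640x480, pad the remainder, and use
# yuv420p so the H.264 output plays widely. %05d.img.png is the input
# pattern from the discussion above.
VF="scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2"
CMD="ffmpeg -framerate 3 -i %05d.img.png -vf $VF -pix_fmt yuv420p output.mp4"
echo "$CMD"
# Execute only when ffmpeg and the PNG sequence are actually present:
if command -v ffmpeg >/dev/null 2>&1 && [ -e 00001.img.png ]; then
  ffmpeg -y -framerate 3 -i %05d.img.png -vf "$VF" -pix_fmt yuv420p output.mp4
fi
```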
[06:15] <tm512> is there a way to get ffmpeg to buffer its output?
[06:16] <tm512> in the case of a "live" input like x11grab, the output stutters if there's any spikes in disk I/O
[06:29] <tm512> -rtbufsize?
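tm512's guess looks right for libavdevice inputs: `-rtbufsize` enlarges the buffer kept for real-time frames, so short disk-I/O stalls can be absorbed instead of dropping frames. A sketch, untested here; the display `:0.0`, the `1500M` size, and the capture geometry are illustrative values, and whether x11grab honors the option is worth verifying on the target box.

```shell
#!/bin/sh
# Sketch: enlarge the real-time input buffer for an x11grab capture.
# Run by hand on a machine with an X display; values are placeholders.
CMD="ffmpeg -f x11grab -rtbufsize 1500M -framerate 30 -video_size 1280x720 -i :0.0 -c:v libx264 -preset ultrafast out.mkv"
echo "$CMD"
```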
[08:27] <lucaswang> i have a problem converting video. I use 'ffmpeg -i test.mov -pix_fmt yuv420p10be out.mov', but the resulting format is yuv420p; the 10be is ignored. -pix_fmts shows ffmpeg supports 10be though
[08:29] <relaxed> lucaswang: which video codec? h264?
[08:30] <relaxed> Use my build's ffmpeg-10bit, http://johnvansickle.com/ffmpeg/
[08:33] <relaxed> lucaswang: well, it only supports yuv420p10le yuv422p10le
[08:33] <relaxed> are you sure you need yuv420p10be?
[08:34] <lucaswang> yes. i want to test
[08:34] <lucaswang> test my shader
[08:35] <lucaswang> thanks. i will try.
[08:35] <relaxed> lucaswang: try ffmpeg -i test.mov -c:v rawvideo -pix_fmt yuv420p10be out.mov
[08:35] <relaxed> with your build
[08:56] <lucaswang> if i use rawvideo, how can i play it? ffplay doesn't seem able to
[08:57] <llogan> sure it can. just use a lazyman's container like .nut
[09:01] <lucaswang> how?
[09:01] <llogan> ffmpeg -i input -c:v rawvideo output.nut
[09:02] <lucaswang> it works! thanks!
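The rawvideo-in-NUT trick above can be exercised end to end without any input file, using a lavfi test source (a sketch; the 64x64 size and one-second duration are arbitrary):

```shell
#!/bin/sh
# Encode a synthetic clip as big-endian 10-bit rawvideo inside NUT, then
# have ffprobe confirm the pixel format survived the round trip.
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed"; exit 0; }
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=64x64 \
  -c:v rawvideo -pix_fmt yuv420p10be out.nut
ffprobe -v error -show_entries stream=pix_fmt -of csv=p=0 out.nut
```

ffprobe should report `yuv420p10be`, and `ffplay out.nut` plays the result as in the exchange above.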
[09:03] <llogan> don't you need big endian hardware to encode be? (I don't know. I'm ignorant of the whole byte order thing).
[09:08] <lucaswang> i don't know. i think it's hardware independent
[09:20] <lucaswang> and my shader works too : )
[09:21] <relaxed> FLAWLESS VICTORY
[09:33] <lucaswang> relaxed: how to use your ffmpeg-10bit?
[09:34] <lucaswang> my shader works for opengl es2 too. it seems that vlc, xbmc etc don't support it
[09:53] <lucaswang> in ffmpeg, which pixel format equals y210 (Packed, 4:2:2, 10-bit) ?
[09:54] <ubitux> yuv422p10 probably
[09:55] <ubitux> ah, packed
[10:00] <lucaswang> no, here P means planar. I only see planar formats
[11:11] <relaxed> lucaswang: did you try  yuv422p10le?
[11:40] <lucaswang> yes, i did
[11:41] <lucaswang> relaxed: you mean convert using ffmpeg?
[11:55] <andrewk> hey guys
[11:59] <andrewk> so I'm trying to use a video and thumbnail generator plugin in wordpress, however it keeps stating that "FFMPEG missing library libvpx required for WEBM encoding".
[11:59] <andrewk> here is a screenshot: http://s30.postimg.org/oh7kl1a8h/Screen_Shot_2014_04_18_at_11_55_42_AM.png
[12:00] <andrewk> and here is the output of the test commands: http://pastebin.com/ymEPJpKS
[12:00] <andrewk> I have compiled ffmpeg with libvpx, and no duplicate libvpx is installed on the system, so any idea? :S
[12:00] <andrewk> furthermore I see no error message in the output
[12:01] <sacarasc> Did you compile ffmpeg after installing libvpx?
[12:01] <sacarasc> Never mind, I fail at reading. :D
[12:03] <sacarasc> Ask the people who made the thumbnail generator? Or maybe make it rejudge your ffmpeg?
[12:05] <andrewk> sacarasc: well they have questions pending for 3 years so… :D hmm I think I'll give it a try despite the error message, maybe it works
[12:33] <dannyzb> is there a way for me to stream videos to users while converting them ?
[12:33] <dannyzb> not a live stream - users should be able to start from the beginning of the video up to the point where conversion got
[13:37] <vegpuff> Hey Guys, need a help - I'm trying to stitch images to video along with a audio.
[13:38] <vegpuff> Using this command - ffmpeg -r 1/3 -i images_%2d.jpg  -i music.mp3 -vf drawtext="fontfile=~/fonts/DroidSerif.ttf: text='Slideshow Text': fontcolor=black: fontsize=24: x=15: y=15: box=1: boxcolor=white at 0.6"  -c:v libx264 -r 24 -pix_fmt yuv420p out.mp4
[13:38] <vegpuff> But it takes close to 200% cpu
[13:38] <vegpuff> any tips on reducing cpu usage?
[13:39] <DannyZB> >< disconnect .. so is there a way to let users watch videos while i'm converting them to MP4 for pseudostreaming ?
[13:40] <sacarasc> vegpuff: It will use as much CPU as it can to complete the task as quickly as possible. If it is going at over real time speed and you want to limit it to real time, try adding -re (I think) to the input options.
[13:41] <vegpuff> Thanks sacarasc, the process can take time to complete, but preferably lesser cpu. Will look at -re options.
[14:38] <vegpuff> sacarasc it still uses maximum cpu (300%)
[14:38] <vegpuff> is there a way to limit cpu usage?
[14:48] <Mavrik> vegpuff, no
[14:48] <Mavrik> because that would be silly
[14:48] <vegpuff> ah
[14:48] <Mavrik> why are you bothered by the CPU usage anyway?
[14:49] <vegpuff> I'm going to run the process in a hosted machine, can't afford it to peak to 300%
[14:49] <c_14> You could try limiting the number of threads ffmpeg uses.
[14:49] <vegpuff> let me check that, thanks c_14
[14:50] <Mavrik> that won't really work
[14:50] <Mavrik> vegpuff, that's a totally wrong approach
[14:50] <Mavrik> use nice on that machine
[14:50] <Mavrik> and leave ffmpeg to burn all CPU when nothing else needs it
[14:50] <vegpuff> oh, so you are saying I should reduce the priority of ffmpeg process?
[14:51] <Mavrik> priority and nice factor aren't the same.
[14:51] <vegpuff> Oh ok. Sorry, newbie to this.
[14:51] <Mavrik> set nice 20 on ffmpeg and everything else will get CPU first
[14:52] <vegpuff> Sure, let me try that.
[14:52] <Mavrik> that will make sure the server runs well without problems
[14:52] <Mavrik> while ffmpeg still gets all leftover cores for best results :)
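Mavrik's suggestion as a one-line sketch. Note that 19, not 20, is the largest niceness Linux accepts (`nice 20` is clamped to 19), and `-threads` can additionally cap how many cores the encoder spins up; the file names are placeholders.

```shell
#!/bin/sh
# Sketch: run ffmpeg at the lowest scheduling priority so other services
# get the CPU first, optionally capping encoder threads as a second lever.
CMD="nice -n 19 ffmpeg -i input.mp4 -c:v libx264 -threads 2 output.mp4"
echo "$CMD"
```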
[14:59] <anshul__> hi is there any way to delete the attachement added to bug report
[15:00] <anshul__> I was adding a new file which is compilable standalone (has no dependencies) and produces the same result
[15:02] <anshul__> ohh got it , keep the name same and replace
[15:02] <Voicu> hello, how can I use libav* to open and stream a stream of already encoded data? my video is encoded in h264 and audio is AAC
[15:03] <Voicu> right now I'm opening the output stream with avformat_alloc_output_context2 and avio_open and then I add an audio and a video stream
[15:04] <Voicu> the thing is I get an error when doing avformat_write_header -  "-1094995529 (Invalid data found when processing input)"
[15:10] <Voicu> or to start things easy, my first confusion is: how do I set up a context for outputting already encoded data?
[15:10] <Voicu> I get that I need an AVFormatContext, an AVIOContext, two streams and codecs
[15:11] <Voicu> but when getting the codecs, do I need encoders or decoders?
[15:11] <Voicu> I assume decoders but I'm not so sure now ...
[15:21] <Mavrik> hmm
[15:21] <Mavrik> AVFormatContext defines a muxer
[15:21] <Mavrik> while your content is probably already muxed right?
[15:25] <Voicu> Mavrik, no, I have two already encoded streams - one with h264 one with AAC
[15:26] <Voicu> (I'm making a streaming thingy on android)
[15:26] <Voicu> I'm reading this right now http://ffmpeg.org/doxygen/trunk/group__lavf__encoding.html
[15:26] <Voicu> I'm basically doing what it says in there but I still get some errors
[15:26] <Voicu> I'm going to try to start over with a 'cleaner' code
[15:27] <Voicu> it's just that those codecs have so many parameters
[15:28] <Mavrik> what parameters?
[15:28] <Mavrik> you don't need codecs because you're not encoding anything
[15:29] <Mavrik> in worst case you have to demux / remux
[15:29] <Voicu> Mavrik, hmm, that's what I'm thinking...
[15:31] <Voicu> Mavrik, ok, so I just create a context and start writing data into it?
[15:32] <Voicu> my next problem would be that the function that does that seems to only want AVPackets so how do I get an AVPacket from my stream? I was thinking of using an AVCodecParser but again I'm not sure that's what I need
[15:35] <Voicu> I know this might sound like newbie stuff but it's pretty overwhelming - there are just so many components
[15:41] <Mavrik> Voicu, you're trying to do too much without understanding enough :)
[15:42] <Mavrik> Voicu, take a look at remuxing example
[15:42] <Voicu> well I'm already here - I have the encoded streams
[15:42] <Mavrik> that's what you have to do
[15:42] <Voicu> I just need to somehow send it
[15:42] <Voicu> I'm looking at that
[15:42] <Voicu> I mean I read it
[15:43] <Voicu> but that one is kinda different since everything about the codecs and so on is read from a file
[15:43] <Voicu> I need to set it up manually
[15:44] <Mavrik> and what does that mean?
[15:45] <Voicu> well in the example they are doing avformat_open_input and then copy the codec data from the input context to the output context
[15:46] <Voicu> I need to set up the output context's codec data to somehow match what I have in my streams
[15:46] <Voicu> I also need to parse those streams to get packets that I can then send on the output context
[15:47] <Mavrik> that's called demuxing.
[15:47] <Mavrik> it's what the input format context does.
[15:47] <Mavrik> it parses input and splits it into separate packets with stream ids
[15:48] <jonascj> Hi all. I am running ffserver with this config http://pastebin.com/xMZ10wAX and I hope to end up having http://host.com:8090/webcam.ffm show an mjpeg of my webcam. I try to feed the webcam to ffserver using: "ffmpeg -v verbose -r 10 -s 320x240 -f video4linux2 -i /dev/video1 http://host.com:8090/webcam.ffm".
[15:49] <Voicu> Mavrik, hmm, can I maybe open the streams then? even if they're not contained in an avi or mpeg or something?
[15:49] <jonascj> I however get this error: "Specified pixel format yuyv422 is invalid or not supported". Is my ffmpeg command wrong? Do I need to specify additional parameters (pixel format for example)?
[15:49] <Mavrik> Voicu, what.
[15:49] <Mavrik> you're really not giving enough information for decent help
[15:49] <Mavrik> WHAT do you have?
[15:50] <Mavrik> WHERE does it come from?
[15:50] <Mavrik> HOW is it muxed?
[15:50] <Mavrik> HOW is it encoded?
[15:50] <Mavrik> when you know answers to those questions
[15:50] <Mavrik> you also won't have problems fixing up the code to make ffmpeg do what you want
[15:50] <Mavrik> jonascj, probably yes
[15:51] <Mavrik> jonascj, if your v4l2 source grabs yuyv422 input you'll have to specify the output format
[15:51] <Voicu> Mavrik, sorry I wasn't clear enough - I'm using android's MediaCodec to encode raw video and audio into H264 and AAC respectively
[15:51] <Mavrik> jonascj, yuv420 or yuvj420 is the most widely supported
[15:51] <Voicu> MediaCodec outputs only the h264 and AAC part, it doesn't encapsulate it in a container
[15:51] <Mavrik> Voicu, ah, is it in AnnexB format?
[15:51] <Mavrik> the H.264?
[15:52] <Mavrik> that is, does it inject SPS/PPS units in the packets or do you get those at the start of the encode?
[15:52] <Mavrik> what is your desired output?
[15:52] <Voicu> Mavrik, AFAICT yes, it has those 4 bytes at the start 0,0,0, and 1
[15:53] <Mavrik> that's just the NAL unit delimiter
[15:53] <Mavrik> lemme check MediaCodec docs :)
[15:54] <Voicu> also, I assume it does inject SPS and PPS because it asks for PTS each time I give it data
[15:54] <Mavrik> Voicu, ahh, see the doc for MediaCodec: when it talks about "codec specific setup data"
[15:54] <Mavrik> it doesn't do injection
[15:54] <Mavrik> it just gives them at the start
[15:54] <Voicu> ok...
[15:54] <Mavrik> anyway, you still haven't told what your desired output is.
[15:55] <Voicu> Mavrik, I'm trying to output to a file and/or stream to a server
[15:55] <Mavrik> which one.
[15:55] <Mavrik> that's kinda important.
[15:55] <Voicu> well eventually I want it to stream but if it's easier to start with a file, a file it is
[15:55] <Mavrik> also the format you're trying to output is kinda important.
[15:55] <Mavrik> what kind of file container?
[15:55] <Voicu> it's in flv format
[15:56] <Mavrik> you want to create a flv file?
[15:56] <Voicu> yes
[15:56] <Mavrik> huh.
[15:56] <Mavrik> ok.
[15:56] <Mavrik> the flv muxer will expect SPS/PPS data to be in AVCodecContext->extradata
[15:57] <jonascj> Mavrik: Thank you for your input. So it is the output I need to specify?
[15:57] <Mavrik> and you'll have to create AVPackets containing a full frame and then passing them to the FLV muxer
[15:57] <Mavrik> jonascj, you need to just set the output pixel format
[15:57] <Voicu> I've already done a streaming app using FFMpegFrameRecorder (by following some examples) which works but it's very slow because it's software encoded
[15:57] <Mavrik> jonascj, I don't know the exact ffserver parameter by heart but I think there should be one :)
[15:58] <Voicu> Mavrik, yeah I figured I need to create the packets too
[15:59] <Voicu> what do you mean by 'full frame' ?
[15:59] <Mavrik> full video frame
[16:00] <Voicu> a yes
[16:00] <Mavrik> Voicu, I suggest you first make a dump of whatever you're getting from Android encoder to file and try to implement these things on your desktop first
[16:00] <Mavrik> so you'll have full debugging capabilities
[16:00] <Mavrik> what you have to do is
[16:00] <Mavrik> 1.) Initialize AVFormatContext for your output format
[16:00] <Mavrik> 2.) Setup the extradata field of each AVCodecContext so it has the SPS/PPS setup data
[16:01] <Mavrik> 3.) For each frame you get out of the encoder create AVPacket, properly set PTS/DTS in proper timebase
[16:01] <Mavrik> 4.) Write AVPackets to the output format
[16:01] <Mavrik> Voicu, I strongly suggest you check out the flvenc.c file to see what the FLV muxer does.
[16:02] <Mavrik> it'll also give you an idea which fields you have to fill in the context
[16:03] <Voicu> ok then
[16:03] <Voicu> thank you very much, this is tremendous help
[16:04] <Mavrik> also see remuxing.c; the ffmpeg doxygen and source are your big friends :)
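Before wiring the steps above into libavformat code, the same remux can be sanity-checked with the CLI. A self-contained sketch that fakes the two Android-encoder dumps with lavfi test sources (all file names, durations, and the 30 fps rate are illustrative assumptions):

```shell
#!/bin/sh
# Fake a raw Annex B H.264 dump and an ADTS AAC dump, then remux them into
# FLV with -c copy: no re-encoding, and ffmpeg inserts any needed bitstream
# filters (e.g. Annex B -> AVCC for the FLV muxer) itself.
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed"; exit 0; }
ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libx264 || { echo "no libx264"; exit 0; }
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=64x64:rate=30 \
  -c:v libx264 -pix_fmt yuv420p dump.h264
ffmpeg -y -v error -f lavfi -i sine=duration=1 -c:a aac dump.aac
# -framerate supplies the timing a raw H.264 elementary stream doesn't carry.
ffmpeg -y -v error -framerate 30 -i dump.h264 -i dump.aac -c copy output.flv
```

The `-c copy` step mirrors what the API code has to do by hand: take finished packets, stamp timestamps in the muxer's timebase, and hand them to the FLV muxer.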
[16:04] <jonascj> Mavrik: So it is the ffserver which needs the output format specified? But I get the error on the ffmpeg client side?
[16:05] <Mavrik> jonascj, yeah, the encoding parameters are set up in the ffserver config file
[16:05] <Mavrik> and are then passed to ffmpeg when you start streaming
[16:05] <jonascj> Mavrik: but is it the ffserver which requests something which ffmpeg does not support or the other way round?
[16:05] <Mavrik> jonascj, https://www.ffmpeg.org/ffserver.html
[16:05] <Mavrik> no.
[16:05] <Mavrik> what happens is
[16:06] <Mavrik> you're trying to encode to mjpeg, which doesn't support the pixel format your camera gives you frames in
[16:06] <Mavrik> so you have to tell ffmpeg to convert to a supported pixel format
[16:06] <Mavrik> if you read that doc you'll see that by calling "-pix_fmts" you'll get a list of pixel formats, and by adding the PixelFormat directive you set it
[16:07] <Mavrik> and I already told you to use yuv420 or yuvj420 whichever will work
[16:07] <jonascj> Mavrik: yeah, I saw your yuv420 message. I was just confused about whether it was ffmpeg or ffserver that needed the directive
[16:07] <Mavrik> ffserver... it's kinda weird
[16:08] <Mavrik> because ffmpeg is the one actually doing all the work (and producing errors) while the encoding parameters are really set in ffserver.conf and then passed to ffmpeg when it connects
[16:09] <jonascj> Mavrik: yeah, that is what confused me
[16:16] <jonascj> Mavrik: my ffserver says unknown keyword PixelFormat regardless of whether I put it in the <Stream> section (as the manual states) or in the <Feed> section.
[16:17] <jonascj> so when ffmpeg "connects" to a feed it connects to the name which is specified in the <Feed> section. Shouldn't it be the feed-section containing directives to give to ffmpeg?
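For reference, a hypothetical `ffserver.conf` fragment with the pixel format set per-Stream, as the manual cited above describes. This is untested, and keyword support varies between ffserver versions, which may explain the "unknown keyword" error jonascj hits; the feed file path and sizes are placeholders.

```
<Feed webcam.ffm>
File /tmp/webcam.ffm
FileMaxSize 5M
</Feed>

<Stream webcam.mjpeg>
Feed webcam.ffm
Format mpjpeg
VideoSize 320x240
VideoFrameRate 10
PixelFormat yuvj420p
NoAudio
</Stream>
```

The encoding directives live in the `<Stream>` section; the `<Feed>` section only names the buffer file ffmpeg pushes into, which matches Mavrik's description of how the parameters get passed back to ffmpeg on connect.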
[16:52] <Voicu> Mavrik, are you still around?
[16:52] <Voicu> a nevermind, I think I got it
[17:14] <Voicu> any idea why avformat_write_header would hang with CPU usage at 100% and no error output?
[19:37] <Voicu> anyone around? I'm calling avformat_write_header and it doesn't output anything although it now returns OK (error code 0)
[19:37] <Voicu> could it be because I'm not setting the extradata field (yet)
[19:38] <Voicu> ?
[19:39] <Mavrik> why aren't you using a debugger to figure out what's going on?
[19:40] <Voicu> Mavrik, I'm using java. Although I guess I could try and do it first in C and then move to java...
[19:41] <Mavrik> but... why.
[19:41] <Voicu> hehe
[19:41] <Mavrik> it'll just make everything you do undebuggable and horribly slow
[19:41] <Mavrik> the way to do those bindings is to minimize C calls
[19:41] <Mavrik> since JNI has a massive overhead :)
[19:42] <Voicu> well I'm stuck with using java on android anyway so I thought I would try java on the desktop too
[19:43] <Voicu> yeah, I will probably switch to C soon, make it work there and then port to java
[19:43] <Voicu> setting up the build environment for ffmpeg is quite a task though
[19:44] <Voicu> especially since I'm programming on windows :P
[19:45] <Voicu> I can't avoid JNI. The alternative is to use the android NDK which would mean yet another thing I have to learn
[19:45] <Voicu> and even then, I would still have to do lots of JNI calls anyway
[19:46] <Mavrik> a
[19:46] <Mavrik> m
[19:46] <Mavrik> Android NDK is just gcc configured for Android compilation
[19:46] <Mavrik> there's nothing new to learn >(
[19:46] <Mavrik> :)
[19:47] <Mavrik> Voicu, the sane way to do ffmpeg dev is to grab Linux, build ffmpeg without optimizations and with debug symbols
[19:47] <Mavrik> and then just recompile on other platforms :)
[19:47] <Mavrik> I haven't seen gdb work reliably on Windows yet
[19:48] <Voicu> hmm
[19:50] <Voicu> I didn't think it would be such a pain. I would have started with a debug version from the start
[19:50] <Voicu> maybe there are debug .jars? :D
[19:50] <Voicu> nah, that would be crazy
[19:51] <Mavrik> jars? huh?
[19:51] <Voicu> java .jar
[19:51] <Mavrik> I wouldn't even look in the direction of Java until the code works as a pure C binary first ;)
[19:51] <Voicu> java libraries
[19:51] <Voicu> Mavrik, yeah you're probably right
[19:52] <Voicu> I'm gonna try that then
[19:52] <Voicu> at least I'll know what breaks when it breaks
[19:53] <Voicu> btw, ironic thing - allocating an AVFormatContext for example is more of a hassle in java than in C
[19:53] <Voicu> one has to do 2 allocations - one to initialize the java object itself then one to load the inner (ffmpeg) object
[19:56] <Voicu> Mavrik, also, thanks again for helping me navigate through ffmpeg :D
[19:57] <VoicuGaga1> damn, the internet connection died on me
[19:57] <VoicuGaga1> Mavrik, well thanks again for helping me
[19:58] <VoicuGaga1> you're probably going to see me here more from now on as I go through this stuff
[19:58] <VoicuGaga1> :D
[20:40] <jonascj> http://trac.ffmpeg.org/wiki/How%20to%20compile%20FFmpeg%20for%20Raspberry%20Pi%20(Raspbian)   causes a mismatch in glibc version. The newest git version of ffmpeg requires glibc 2.17 but the most popular raspberrypi distro only has glibc 2.13
[20:40] <jonascj> Should I find an older version of ffmpeg then?
[20:41] <jonascj> Guides on the net recommend cloning ffmpeg from git and compiling it on the raspberrypi. Shouldn't this give rise to the same problem?
[20:42] <jonascj> Or is it maybe a problem with my configuration / compilation (on my laptop, since I'm trying to cross-compile)?
[20:43] <Mavrik> its a problem with your configuration
[20:43] <Mavrik> when compiling, your binary will link to the glibc you have configured, by default the version you have on the compiling system
[20:43] <Mavrik> if you get version mismatch it means it's linking against wrong glibc on your system
[20:44] <Mavrik> that's usually the reason why the recommendation is to compile on the target system... properly setting up a cross-compilation environment is a PITA ;)
[20:47] <jonascj> Mavrik: "ldd --version" reports 2.15 on my build machine (ubuntu 12.04, 64bit)
[20:48] <jonascj> Mavrik: on second thoughts, why the h*ll would people recommend building it (even cross compiling it) when most raspberry distros have it available as a package
[20:48] <jonascj> ? :/
[20:48] <Mavrik> because it was probably built in 1972
[20:49] <jonascj> :P
[20:49] <jonascj> maybe it is not the glibc on my host system, but the glibc configured to work with my toolchain (crosstool-ng, ct-ng menuconfig)
[20:51] <jonascj> The packaged version was ffmpeg version 0.8.10-6:0.8.10-1+rpi1,
[20:52] <jonascj> can that really be? The current version is 2.x...
[21:08] <blippyp> I have a long pre-typed reference to my problem, but in hopes of saving you from a long read, I believe the root of it is the WARNING: library configuration mismatch (but I do not know how to solve this): http://pastebin.com/3nDCiq3k
[21:08] <blippyp> I am attempting to compile ffmpeg with frei0r enabled. Everything seems fine during the install, but I receive a segmentation fault when I try to use it, and I get the above error when I run ffmpeg by itself. At the top of the pastebin are the exact ./configure parameters I used. I did not run make install; I'm running ffmpeg directly from my git folder. Does anyone have any advice for a lost soul? Do you want me to paste the rest of the details 
[21:44] <jonascj> I do not know if it works yet, but crosstool-ng has an ot
[21:44] <jonascj> *option to set the glibc version it uses when compiling. So I'm setting that to glibc 2.13 now, to match the raspberry pi, and I hope that will work
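The cross-compile setup being debugged in this thread can be sketched as a configure line in the spirit of the trac wiki linked above. The `arm-linux-gnueabihf-` prefix and install prefix are assumptions; the prefix must name the same crosstool-ng toolchain (and therefore the same glibc 2.13) jonascj configured, or the version mismatch reappears at link time.

```shell
#!/bin/sh
# Sketch of a Raspberry Pi cross-compile configure invocation; paths and the
# toolchain prefix are placeholders to be matched to the local toolchain.
CMD="./configure --enable-cross-compile --cross-prefix=arm-linux-gnueabihf- --arch=armel --target-os=linux --prefix=/usr/local/ffmpeg-arm"
echo "$CMD"
```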
[00:00] --- Sat Apr 19 2014


More information about the Ffmpeg-devel-irc mailing list