[Ffmpeg-devel-irc] ffmpeg.log.20130802

burek burek021 at gmail.com
Sat Aug 3 02:05:01 CEST 2013


[00:00] <klaxa> mmh
[00:00] <PatNarciso> I feel like I could make *something* with ffmpeg -- but I've never worked directly with libav before.
[00:01] <PatNarciso> In my testing a few months back, I did have a lot of overhead -- chaining ffmpeg's together.
[00:01] <Mavrik> well, try and you'll see how fast and stable it is
[00:02] <PatNarciso> thus, I was curious if there was a more ideal format to toss ffmpeg data -- something more native.
[00:02] <parshap> Do you guys know of any alternatives to `ffmpeg -metadata` for writing metadata like id3 tags or ogg comments that is as robust as ffmpeg?
[00:02] <Mavrik> PatNarciso, you could probably do it with a single ffmpeg process per camera stream and a piece of code that muxes the TS streams together
[00:02] <Mavrik> and switches them
[00:05] <PatNarciso> ok -- it took me a few seconds to let that sink in... how do you see the "switcher" working?
[00:07] <Mavrik> oh wait, now I remember
[00:07] <Mavrik> you want to have an overlay right?
[00:09] <PatNarciso> heh - yes, that would be the next extension of the equation.
[00:09] <mark4o> parshap: most alternatives are format-specific, so it depends what format you are using
[00:10] <Mavrik> PatNarciso, yeah, good luck with that :)
[00:10] <Mavrik> sleep time.
[00:10] <PatNarciso> not sure where in the ffmpeg chain I would put that yet - between the camera and switcher?  or perhaps after the switcher and before the broadcast?
[00:11] <mark4o> PatNarciso: multiple overlay filters and then turn them on and off with timeline editing?
[00:11] <PatNarciso> if I could create a successful switcher, I think everything else would be "easy" after that.
[00:11] <klaxa> haha, how often i thought things would be "easy" :')
[00:11] <PatNarciso> Mavrik: goodnight.  thanks for your help.
[00:12] <PatNarciso> mark4o, with "timeline editing"?  I don't understand.
[00:13] <PatNarciso> If I could turn overlay filters on and off at will -- that would be amazing.  Is this possible?
[00:13] <mark4o> you can enable or disable the filter at the times you specify, e.g. enable="between(t,100,200)"
[00:14] <PatNarciso> ahh cool -- I follow ya now.  problem is: this would be live -- I dunno how long it would be.
[00:14] <PatNarciso> ... or when it would be for that matter.
[00:15] <mark4o> I think it supports sendcmd as well; haven't tried it
[00:16] <mark4o> or zmq - haven't tried that either
[00:16] <PatNarciso> no idea sendcmd was a thing, googling now.  so i'll obviously be an expert in 2 mins.  brb.
[00:18] <mark4o> http://ffmpeg.org/ffmpeg-filters.html#Examples-23
[00:23] <PatNarciso> http://ffmpeg.org/ffmpeg-filters.html#sendcmd_002c-asendcmd
[00:23] <PatNarciso> and wow, this is awesome.
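The timeline-editing idea mark4o describes (an overlay that is only active during a given time window via `enable='between(t,start,end)'`) can be sketched as a command builder. A minimal illustration; the file names are hypothetical, and live switching would still need sendcmd/zmq rather than fixed times:

```python
# Sketch: build an ffmpeg argv whose overlay filter is only enabled
# between two timestamps, using the timeline "enable" option.
# "main.mp4" and "logo.png" are made-up file names for illustration.

def timeline_overlay_cmd(main, logo, out, start, end):
    """Return an ffmpeg argv list with a timeline-enabled overlay."""
    graph = "[0:v][1:v]overlay=0:0:enable='between(t,{},{})'".format(start, end)
    return ["ffmpeg", "-i", main, "-i", logo,
            "-filter_complex", graph, out]

cmd = timeline_overlay_cmd("main.mp4", "logo.png", "out.mp4", 100, 200)
```

For a live stream with unknown timing, the fixed `between(t,...)` window is exactly what does not work, which is why the conversation moves on to sendcmd and zmq.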
[00:32] <PatNarciso> so zmq is a slick message server/client library? http://zeromq.org/intro:read-the-manual
[00:47] <PatNarciso> zmq sounds like an ircd on crack.
[01:03] <mark4o> PatNarciso: um, I think at least in ffmpeg it is just a way to receive commands from an external source
[01:03] <mark4o> although it may be capable of more than that
[01:09] <PatNarciso> true.
[02:30] <PatNarciso> Chroma-key filtering-- I'm looking at the filters doc, but nothing sticks out to me (unless it's a transparent png).  Is there a filter for filtering out colors (or a range of colors)?  ex: green screen.
[02:52] <brad_c6> Hello, I am working with the decoding_encoding.c example and seem to be having a problem with the final packet being processed or written. Suggestions? http://pastebin.com/YCWTFLS5 Thank You
[03:38] <Justanotheruser> Is there a way I can watch the stream I am saving to disk, as I record it? I am recording from a webcam to a simple .mp4 file. Nothing fancy.
[03:47] <kizzo2> ffprobe says that this mp4 file has 20746 frames - how do I extract them all?  Using ffmpeg -i input.mp4 -r D pos/%5d.jpg with varying values of -r gives different results.
[03:47] <kizzo2> When I use -r 1, there are hundreds of frames; when I use -r 60 there are 30,000+ images.
[03:48] <kizzo2> Why does the -r option even matter?  I want ALL frames.
[03:48] <kizzo2> Why would ffmpeg produce more images than the amount even contained in the file?  ffprobe says there are 20746 frames, so there should be that many images after I run a command.
[04:36] <Justanotheruser> This is what I am trying to do but I don't understand the tee option. http://pastebin.com/BTGJsVNk
[04:54] <mark4o> kizzo2: you asked for a fixed frame rate of 60fps; that means to duplicate or drop frames as necessary to make it that rate
[04:54] <mark4o> don't use -r if you just want to keep the frames as they are
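mark4o's point can be made concrete with a little arithmetic: forcing a constant output rate with `-r` makes ffmpeg duplicate or drop frames so that roughly duration × rate frames come out, regardless of how many frames the source has. A sketch (the 29.97 fps figure is an assumption, not taken from kizzo2's actual file):

```python
# Sketch of why -r changes the image count: a fixed output rate yields
# about duration * rate frames, duplicating or dropping as needed.

def cfr_output_frames(src_frames, src_fps, out_fps):
    """Approximate frame count after resampling to a fixed output rate."""
    duration = src_frames / src_fps
    return round(duration * out_fps)

# assuming a ~29.97 fps source with 20746 frames (~692 s of video):
n60 = cfr_output_frames(20746, 29.97, 60)  # far more images than source frames
n1 = cfr_output_frames(20746, 29.97, 1)    # only a few hundred
```

This matches what kizzo2 observed: `-r 60` produced 30,000+ images and `-r 1` only hundreds, while omitting `-r` gives exactly one image per source frame.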
[05:30] <kizzo2> mark4o: Thank you.
[06:01] <kizzo2> There are exactly that many frames now, great.
[07:19] <kizzo2> The file saved by cv2.imwrite("temp.jpg", frame0) has a different MD5 from file 00001.jpg after running "ffmpeg -i input.mp4 %05d.jpg"
[07:20] <kizzo2> Why is that?
[07:20] <kizzo2> du -sh 00001.jpg temp.jpg
[07:20] <kizzo2> 32k 00001.jpg
[07:20] <kizzo2> 96k temp.jpg
[07:46] <relaxed> kizzo2: jpg is lossy
[07:49] <kizzo2> Yes, someone responded in another channel that the reason may be due to headers or something.
[07:49] <relaxed> any reason you're not using png?
[07:50] <kizzo2> No particular reason - now I'm using it actually.
[07:51] <kizzo2> Just started reading about the differences and was like, "ok no real reason not to use PNG instead of this lossy JPEG stuff."
[08:33] <nlight> what's the replacement for sws_scale ?
[08:33] <nlight> or have I wrongly assumed it's deprecated?
[08:52] <nlight> what's the way to check if a codeccontext is progressive or interlaced?
[08:57] <Mavrik> uhh
[08:58] <Mavrik> that's not as easy as you'd think :)
[08:58] <nlight> ok, scratch that
[08:59] <nlight> I'm trying to get sws_scale to rescale my input
[08:59] <nlight> it works when src width/height equals dst width/height but if dst width/height is different sws_getContext returns null
[08:59] <nlight> sws_getContext(sw, sh, (AVPixelFormat)fmt, dw, dh, (AVPixelFormat)dst_format, SWS_BICUBIC, nullptr, nullptr, nullptr);
[09:00] <nlight> this returns null when sw != dw and/or sh != dh
[09:00] <nlight> why could that be?
[09:00] <nlight> i am aware i should use cachedcontext just want to get this working first
[09:02] <Mavrik> that's weird
[09:02] <Mavrik> nlight, do you have a 420 pixel format and sizes not divisible by 2?
[09:02] <Mavrik> nlight, or, did you compile ffmpeg without swscale?
[09:03] <nlight> i have a 420 format, yes
[09:03] <nlight> i use the zeranoe builds
[09:23] <amagee> hey i have a wav file and i want to create a new version of it that is louder. how do i do this?
[09:27] <hendry> keep on running into ALSA xruns
[09:27] <hendry> http://ix.io/70K
[09:27] <hendry> encoding to aac for playing on an iphone
[09:27] <hendry> any suggestions how to avoid that?
[09:29] <nlight> i figured out my problem, avpicture_fill doesn't set width/height of the frame
[09:29] <nlight> so i was passing 0, 0 to sws_scale
[09:29] <nlight> figures
[09:46] <Mavrik> nlight, ah yes, avpicture_fill only allocates memory :)
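The two failure modes discussed above (zero width/height left over from `avpicture_fill`, and odd dimensions with a 4:2:0 format) are common reasons for `sws_getContext` returning NULL. A purely illustrative Python check, not the actual libswscale logic:

```python
# Sketch of the parameter problems raised above: sws_getContext() fails
# for zero/negative sizes, and planar 4:2:0 data needs even dimensions
# because chroma is subsampled 2x in both axes.

def sws_params_ok(w, h, pix_fmt):
    if w <= 0 or h <= 0:
        # nlight's bug: avpicture_fill() only lays out buffers; it does
        # not set width/height on the frame, so they stayed 0
        return False
    if pix_fmt == "yuv420p" and (w % 2 or h % 2):
        # odd sizes can't be chroma-subsampled cleanly
        return False
    return True
```

Setting `frame->width` and `frame->height` yourself after `avpicture_fill`, as nlight ends up doing, is the expected usage.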
[09:47] <nlight> now I set them myself, that is allowed, right?
[09:47] <nlight> well it works so far, so whatever, if it breaks I'll ask again :D
[09:49] <Mavrik> nlight, it's expected actually :)
[09:53] <nlight> good
[09:53] <nlight> thanks a lot for the help :)
[09:55] <IamTrying> How can i make a virtual webcam device as: /dev/video18  using FFmpeg (showing a jpeg or video clips for example) ? Which can be then readable from uvccapture -d/dev/video18 or vlc or mplayer etc... as video input source?
[09:56] <Mavrik> huh
[09:56] <Mavrik> ffmpeg isn't really a tool for that
[10:03] <IamTrying> Mavrik, what else i can use to make a Virtual video device playing a jpeg or video clips and available to use as video input source from /dev/video18 ?
[10:03] <Mavrik> your problem is that you need to implement a V4L2 device
[10:08] <nlight> is it possible that sws_getContext followed by sws_scale followed by sws_freeContext leaks any memory?
[10:08] <nlight> I got a very small leak somewhere
[10:08] <nlight> and I've tracked it down to conversion
[10:09] <nlight> not sure what I'm doing wrong
[10:09] <Mavrik> no, it shouldn't leak
[10:09] <Mavrik> are you freeing the AVFrame itself?
[10:09] <Mavrik> instead of just its buffers?
[10:09] <nlight> i do av_free(frame);
[10:10] <nlight> after freeing the buffers
[10:10] <nlight> should I use avcodec_free_frame instead?
[10:10] <IamTrying> YES - Mavrik exactly, you got my point. Do you have any idea how i can make one virtual????
[10:10] <nlight> yea, I should
[10:10] <Mavrik> IamTrying, besides writing your own driver& no idea
[10:11] <IamTrying> Mavrik, OK will make one and paste here
[10:13] <earthworm> Hi, I'm trying to run the following command to add a background image to a video, and convert it from flash to HTML5
[10:13] <earthworm> I've got the following, but it's losing the audio!
[10:13] <earthworm> ffmpeg -loop 1 -f image2 -i bg.png -vcodec libvpx -vf "movie=mercury.flv [logo]; [in][logo] overlay=0:0 [out]" -acodec libvorbis filename.webm
[10:13] <IamTrying> Mavrik, can we not do like? mencoder vid.avi -nosound -vf scale=320:240 -ovc raw -of rawvideo -o /dev/video0
[10:14] <IamTrying> Mavrik, where /dev/video0 is a virtual video input source for other tools
[10:14] <Mavrik> nope.
[10:14] <Mavrik> and is there a reason why you're doing something as awful as this?
[10:14] <Mavrik> instead of modifying your tools to read a file?
[10:15] <Mavrik> earthworm, losing in what way?
[10:15] <IamTrying> Mavrik, NO - modifying the tools will take me 6 months. And having a virtual video device will solve my problem within hours.
[10:15] <earthworm> Mavrik, the new file is silent for some reason ...
[10:15] <Mavrik> IamTrying, hah.
[10:15] <IamTrying> Mavrik, i have a video capture reader application. But it must read a video source even if there is none available.
[10:15] <Mavrik> !pbb earthworm
[10:16] <Mavrik> bah, stupid bots
[10:16] <Mavrik> earthworm, do that and we'll see what's going on :)
[10:19] <earthworm> Right you are
[10:28] <IamTrying> Mavrik, i will code on those. 1) http://code.google.com/p/v4l2loopback/ 2) https://github.com/umlaeute/v4l2loopback
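The v4l2loopback route IamTrying links to is usually combined with ffmpeg in two steps: load the module to create the fake device, then write video into it. A sketch of the commands as argv lists; the device number comes from the conversation, the file name is hypothetical, and the exact module options and ffmpeg v4l2 output support should be checked against the versions in use:

```python
# Sketch: create /dev/video18 with v4l2loopback, then feed it from ffmpeg.
# Flags are assumptions based on the v4l2loopback README, not verified here.

def loopback_setup(video_nr, source):
    modprobe = ["modprobe", "v4l2loopback", "video_nr={}".format(video_nr)]
    feed = ["ffmpeg", "-re", "-i", source,        # -re: read input at native rate
            "-f", "v4l2", "/dev/video{}".format(video_nr)]
    return modprobe, feed

mod, feed = loopback_setup(18, "clip.mp4")
```

Once the loopback device exists, consumers like uvccapture, vlc or mplayer can open it like a real webcam, which is exactly the behaviour IamTrying asked for at the start.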
[10:40] <wawrek> Hello. Is there a way of writing custom filters for ffmpeg? I am not satisfied with what I found, I would like to program my own filter.
[10:41] <Mavrik> ffmpeg is opensource, so yes, you can add your own filter
[10:41] <Mavrik> there's probably not much documentation available
[10:42] <Mavrik> but you can just check a source of a simple filter and build on that
[10:44] <wawrek> Mavrik: true. there isn't any documentation available on how to write filters. I will check an existing filter and try to start from that.
[10:44] <Mavrik> wawrek, check for documentation in ffmpeg source
[10:44] <Mavrik> otherwise, start in libavfilter
[10:44] <Mavrik> there isn't that much code there :)
[10:44] <wawrek> from what I can see, ffmpeg uses mostly c.
[10:44] <wawrek> thanks I will
[10:45] <Mavrik> yes, ffmpeg is written in C
[11:09] <earthworm> Here's my command output ...
[11:09] <earthworm> http://pastebin.com/xLFJQE6A
[11:09] <earthworm> Just ran that, the audio has gone.
[11:12] <relaxed> earthworm: your input is a png
[11:12] <earthworm> Yeah, I'm mixing a PNG background with a flash video that's got a transparent background
[11:13] <relaxed> your ONLY input there is a png
[11:13] <Chat3009> Hi
[11:13] <earthworm> What the hell, the video comes out in the output, just without the audio?
[11:13] <relaxed> earthworm: oh, I see. Look at -filter_complex in the man page.
[11:15] <earthworm> I'll be honest, I took the command off a forum
[11:15] <earthworm> I'll look that up ...
[11:17] <relaxed> man ffmpeg | less +/"^       -filter_complex filtergraph"
[11:17] <earthworm> Could I just convert the video to HTML5 with a white background? It translates the transparent background to black which is the real problem as it looks naff
[11:22] <phantomcircuit> im passing -crf when transcoding to webm/vp8 and getting pretty terrible results no matter what value i pass
[11:22] <phantomcircuit> the input is 720p h.264 and the output looks like a potato
[11:23] <phantomcircuit> thoughts?
[11:23] <phantomcircuit> setting a specific bitrate seems to work
[11:28] <earthworm> I tried this and it made a silent video with just the background image  :/
[11:28] <earthworm> ffmpeg -i mercury.flv -i bg.png -filter_complex 'overlay[out]' -map '[out]' -vcodec libvpx -acodec libvorbis filename.webm
[11:29] <relaxed> ffmpeg -i mercury.flv -i bg.png -filter_complex 'overlay[out]' -map '[out]' -map 0:a -vcodec libvpx -acodec libvorbis filename.webm
[11:29] <relaxed> oh, just use the example for the man page.
[11:30] <earthworm> That's kind of what I tried to do, let's see what your command does
[11:32] <relaxed> ffmpeg -i mercury.flv -i bg.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' -map 0:a -vcodec libvpx -acodec libvorbis filename.webm
[11:32] <earthworm> That's good, we've got audio now, but it's put the image on top of the video  :D
[11:33] <relaxed> I thought you wanted the png overlayed?
[11:34] <earthworm> Well, the video overlayed onto the PNG, so the transparent FLV background goes through to the PNG
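With the `overlay` filter, the first labeled input is the background and the second is composited on top. Since mercury.flv is input 0 and bg.png is input 1, getting the video on top of the PNG means swapping the pad order relative to relaxed's last command. A sketch using the file names from the discussion:

```python
# Sketch: overlay pad order decides stacking. earthworm wants the PNG as
# the base layer, so the pads must be [1:v][0:v] given the flv is input 0.

def overlay_graph(base_pad, top_pad):
    return "[{}][{}]overlay[out]".format(base_pad, top_pad)

# png (input 1) as background, flv video (input 0) drawn on top:
graph = overlay_graph("1:v", "0:v")
cmd = ["ffmpeg", "-i", "mercury.flv", "-i", "bg.png",
       "-filter_complex", graph, "-map", "[out]", "-map", "0:a",
       "-vcodec", "libvpx", "-acodec", "libvorbis", "filename.webm"]
```

Whether the flv's transparency actually survives decoding is a separate question; as relaxed and Mavrik note below, most video formats don't carry an alpha channel at all.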
[11:35] <relaxed> transparent flv- I've never heard of such a thing.
[11:35] <earthworm> Yeah, it's annoying
[11:35] <earthworm> The flash video has a transparent background
[11:36] <earthworm> If I just convert it, I end up with black which will look gash on my web site
[13:03] <Fjorgynn> Why do people say that avi is a file format?
[13:03] <Fjorgynn> no he said filmformat actually which means film format
[13:03] <Fjorgynn> video format. Yeah but still..
[13:04] <Fjorgynn> no film format translates to film format, like 35 mm and 120mm
[13:06] <elkng> to many syntactic errors
[13:06] <elkng> s/to/too
[13:06] <Fjorgynn> ^^
[13:07] <Fjorgynn> Why can't people learn that file extensions (containers) like avi aren't the encoded format like H.264
[13:14] <phantomcircuit> hmm
[13:15] <phantomcircuit> so i just finished encoding a 40 minute 720p video in webm/vp8
[13:15] <phantomcircuit> before it finished i tried to watch the first 10 seconds or so and it was fine
[13:15] <phantomcircuit> but when it finished i tried to play in vlc and nearly every frame was dropped
[13:21] <earthworm> I was asking a bit ago about transcoding a FLV file to HTML5 video, can anyone help with that?
[13:21] <phantomcircuit> earthworm, heh that's what im doing
[13:21] <earthworm> I need to make sure the transparency in the flash video is replaced with either white, or a background image, either will do
[13:22] <phantomcircuit> what do you mean by html5 video though
[13:22] <phantomcircuit> since different browsers support different things
[13:22] <earthworm> I managed to do it, but this transparency is buggering things up
[13:22] <earthworm> I thought I just needed WebM ...
[13:23] <phantomcircuit> earthworm, not supported in ie or safari without a plugin
[13:24] <earthworm> Oh
[13:27] <Mavrik> earthworm, most video formats don't support transparency btw
[13:28] <earthworm> Yeah, I just want to get rid of it and replace it with white instead of black when I transcode
[13:37] <phantomcircuit> hrm
[13:37] <phantomcircuit> vlc's progress bar is all screwed up
[13:37] <phantomcircuit> like it has no idea how long the file is
[13:39] <nlight> how do I correctly free a frame that has been initialized with avcodec_alloc_frame() and avpicture_fill()
[13:40] <nlight> currently I say avcodec_free_frame() but it seems to leak a tiny bit of memory
[13:41] <Mavrik> did you check with valgrind which field leaks?
[13:41] <nlight> i'm on win32 currently and don't have valgrind but I will run a debugger now
[13:41] <nlight> hoped that i was just using the wrong calls
[13:43] <Mavrik> looking at the source
[13:43] <Mavrik> avpicture_free and av_free after should be enough
[13:43] <nlight> so no avcodec_free_frame?
[13:44] <Mavrik> hmm, ok, avcodec_free_frame after avpicture_free
[13:44] <nlight> thanks, I will try now
[13:48] <nlight> nope, segfault
[14:19] <nlight> what's the default value of refcounted_frames?
[14:32] <durandal_1707> nlight: that thing is removed in latest API version
[14:33] <durandal_1707> actually ignore that
[14:33] <durandal_1707> there is no default value
[14:33] <nlight> ok, thanks
[14:34] <nlight> I will check for it always then
[14:34] <durandal_1707> decoders just set a value that explains how they decode files
[14:36] <durandal_1707> nlight: what version of lavcodec you use?
[14:36] <nlight> latest zeranoe build
[14:37] <nlight> let me see
[14:38] <durandal_1707> well, only a single decoder sets it
[14:39] <nlight> i use only h264 files so far
[14:39] <nlight> but still i want to support whatever so I will take care to check it correctly
[14:39] <durandal_1707> nlight: actually the caller sets it
[14:39] <durandal_1707> so you set it if you want such functionality as described in the header
[14:40] <nlight> ah, I get it
[14:40] <nlight> ok, thanks a lot
[15:12] <ItsMeLenny> has anybody had experience with the blackmagic intensity pro?
[15:17] <nlight> ItsMeLenny, i work with decklinks maybe i can help you
[15:19] <Mavrik> I worked with some other blackmagics
[15:19] <nlight> imho blackmagic sucks
[15:19] <nlight> deltacast make waaaay better products
[15:19] <nlight> of course the price range is a bit different
[15:19] <ItsMeLenny> is it possible to use anything else in linux other than their terrible program?
[15:20] <ItsMeLenny> can it be directed through ffmpeg
[15:20] <nlight> ItsMeLenny, you can use the SDK
[15:20] <ItsMeLenny> oh, whats the sdk
[15:20] <nlight> http://www.blackmagicdesign.com/support/sdks
[15:20] <Mavrik> ItsMeLenny, last I checked they worked with V4L2 as well
[15:20] <nlight> v4l2 also works
[15:20] <nlight> but their sdk is not bad at all
[15:20] <Mavrik> I used mine via DirectShow on Windows though, since I had the USB3 versions
[15:21] <nlight> you can setup a basic capture/render/playout in 200-300 lines
[15:22] <ItsMeLenny> Mavrik, i bought the intensity pro coz the usb one wasnt supported on linux, also, last i heard it didnt work with v4l2 :P
[15:22] <ItsMeLenny> nlight, i'll look into that sdk
[15:22] <Mavrik> ItsMeLenny, hmm, the internal ones did work with V4L2 and DirectShow for me
[15:22] <Mavrik> also used ffmpeg to capture stuff
[15:22] <Mavrik> but they might have f'ed up something in meantime
[15:22] <ItsMeLenny> is there any simple one line for ffmpeg to capture?
[15:24] <ItsMeLenny> /dev/blackmagic0
[15:25] <nlight> hm, i get memory leaks when decoding h264 only
[15:25] <nlight> no memory leaks when decoding with other codecs
[15:25] <nlight> any ideas?
[15:25] <ItsMeLenny> see it doesnt show up as a device in webcam lists or anything
[15:26] <ItsMeLenny> nlight, is that h264 or h264_vdpau
[15:27] <ItsMeLenny> also, none of those SDKs are for intensity?
[15:31] <ItsMeLenny> Mavrik, how did you get it to work with v4l2
[15:33] <Mavrik> don't remember anymore, I think I did some magickery with loopback
[15:33] <ItsMeLenny> ah
[15:34] <ItsMeLenny> i did this: avconv -f rawvideo -s 720x576 -i /dev/blackmagic0 test.mov
[15:34] <ItsMeLenny> but the video records one frame and stops
[15:35] <durandal_1707> nlight: you need to unref decoding frames you no longer need
[15:35] <durandal_1707> it's explained in the header, isn't it?
[17:12] <drwx> hi, i think there's a bug with ffmpeg and matroska output: i can't copy a pcm_bluray stream from an m2ts file to a mkv file, i get that error: No wav codec tag found for codec pcm_bluray
[17:26] <durandal_1707> drwx: you can not copy it
[17:26] <drwx> why not?
[17:26] <durandal_1707> mkv does not support pcm_bluray
[17:26] <drwx> oh. ok
[21:40] <CentRookie> hi @all
[21:40] <CentRookie> I'm new to ffmpeg multithreading
[21:41] <CentRookie> Would like to hear your opinion on how many threads you plan per file encode with ffmpeg?
[21:41] <CentRookie> right now I have a dual hexacore, with 24 threads
[21:42] <CentRookie> and am encoding 6 files simultaneously
[21:42] <CentRookie> but ffmpeg spawned over 660 processes
[21:42] <CentRookie> 110 per file
[21:42] <CentRookie> do you think that is excessive behaviour, and could I somehow improve speed or quality by reducing the number of threads per file?
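CentRookie's question goes unanswered in the log, but the usual rule of thumb is to divide hardware threads among concurrent encodes rather than letting every job oversubscribe the machine. A trivial sketch using the numbers from his setup (24 hardware threads, 6 simultaneous files):

```python
# Rule-of-thumb sketch, not an ffmpeg recommendation: split hardware
# threads evenly across concurrent encode jobs.

def threads_per_job(hw_threads, concurrent_jobs):
    return max(1, hw_threads // concurrent_jobs)

n = threads_per_job(24, 6)   # e.g. pass "-threads 4" to each ffmpeg run
```

Note that the 660 "processes" CentRookie sees are more likely threads shown individually by his process viewer; encoders like libx264 spawn many worker threads per job by default.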
[21:49] <CentRookie> nobody?
[21:50] <denysonique> How do I encode from an .avi into a .mpeg2?
[21:52] <CentRookie> http://www.itbroadcastanddigitalcinema.com/ffmpeg_howto.html#Encoding_MPEG-2_I-frame_only_in_Highest_Quality
[21:53] <denysonique> CentRookie: thanks
[21:53] <denysonique> -pix_fmt yuv422p -qscale 1 -qmin 1 -intra -an -- why these couldn't be by default
[21:54] <denysonique> I mean, couldn't there be a simple cli program that will just understand $ transcoder -i video.avi -o video.mpeg2?
[21:54] <denysonique> by loading the most common defaults
[21:55] <CentRookie> good question, nowadays the community is more focused on creating mobile device presets it seems
[21:55] <CentRookie> so legacy formats are a bit underdeveloped
[21:56] <brontosaurusrex> denysonique, because mpeg2 and 422 are not really default
[21:56] <denysonique> brontosaurusrex: mpeg2 could be just detected by target extension
[21:56] <denysonique> by the target extension*
[21:57] <brontosaurusrex> "<denysonique> -pix_fmt yuv422p -qscale 1 -qmin 1 -intra -an -- why these couldn't be by default" < i'am replying to this crap
[21:58] <denysonique> Look at this
[21:58] <denysonique> A user has a TV player which only plays .mpeg2. All he wants to do is to run a command which will convert what_ever_encoded.avi into old_standard.mpeg2
[21:58] <rcombs> denysonique: I'd expect it actually would be
[21:59] <denysonique> Why do I need to learn everything about video encoding for such simple task?
[21:59] <rcombs> denysonique: if you just want to convert to MPEG2, ffmpeg -i input.avi output.mpg should be fine
[21:59] <rcombs> denysonique: the extra options are for increasing the quality of the output
[21:59] <denysonique> interesting
[21:59] <brontosaurusrex> "A user has a TV player which only plays .mpeg2" < that is also not something expected in 2013
[22:00] <rcombs> brontosaurusrex is right, though
[22:00] <rcombs> MPEG2 is a rather old format
[22:00] <denysonique> I have just picked up mpeg2, because for sure it will play that
[22:00] <rcombs> but no, you don't have to use all those extra arguments just to convert; it'll use sane settings by default
[22:02] <denysonique> Thank you guys
[22:02] <rcombs> actually, the -an argument disables audio
[22:02] <rcombs> so you definitely don't want that one
[22:02] <denysonique> Could someone post the link to the ffmpeg screenrecording guide?
[22:06] <denysonique> There was one mentioning raw recording and then later encoding
[22:08] <denysonique> rcombs: also I meant .mpeg2 not .mp2
[22:08] <rcombs> either way
[22:09] <denysonique> which is H.262, but nvm
[22:10] <rcombs> remind me, which libav* is responsible for dithering 10bit H.264 down to 8bit?
[22:10] <llogan> denysonique: ffmpeg -i input -codec:v mpeg2video -q:v 2 output.mpg
[22:15] <llogan> rcombs: libswscale i guess
[22:16] <rcombs> hmm
[22:19] <rcombs> isn't it internally considered a colorspace conversion?
[22:21] <rcombs> for instance, if VLC is using libav* to decode an MP4 file with an H.264 High 10 at 4.1 stream, the color space is something like yuv420p10le, and it's converting to rgb24 for display, yeah
[22:21] <rcombs> ?
[23:00] <ciupicri> ffmpeg -i input.wmv -f image2 -vf fps=fps=1/21 out%04d.jpg  fails with "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"  while  ffmpeg -i input.wmv -f image2 -vf fps=fps=1/20 out%04d.jpg  doesn't complain at all. I'm using ffmpeg-1.2.1-3.fc19.x86_64 (more details http://paste.fedoraproject.org/29820/77220137 )
[23:12] <ciupicri> done http://paste.fedoraproject.org/29821/75477936
[23:45] <llogan> ciupicri: does it work if you replace "-vf fps=fps=1/21" with "-r 1/21"?
[23:46] <ciupicri> "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"
[23:46] <ciupicri> http://paste.fedoraproject.org/29823/13754799
[23:48] <g1ra> Hello. Please help me. How to apply +10dB volume to output ? For example : ffmpeg -t 30 -i 01-INtroduction.wmv -ac 2 "volume=10dB"  01-INtroduction.mkv
[23:50] <g1ra> Moreover : "Unrecognized option 'af'" . ffmpeg version 0.10.7
[23:51] <llogan> ciupicri: what if you add "-qscale:v 2" as an output option
[23:52] <ciupicri> llogan: it seems to work, let me try with larger (smaller actually) values
[23:53] <llogan> sane range is 2-5
[23:53] <llogan> i guess
[23:53] <ciupicri> what does qscale mean?
[23:55] <llogan> quantizer scale, but you can basically think of it as a quality scale
[23:55] <g1ra> ffmpeg -t 30 -i 01-INtroduction.wmv -ac 2 -af "volume=10dB"  01-INtroduction.mkv  http://pastebin.com/aHUbT2fe
[23:56] <llogan> g1ra: i guess it's too old. is there any output of a list of filters with "ffmpeg -filters"?
[23:56] <ciupicri> and what's the relationship between that and the fps? I can't have a low fps without a low quality?
[23:56] <g1ra> original reason for this conversion is to convert mono to stereo (-ac 2) but volume is too low
[23:56] <llogan> is the original volume good?
[23:57] <llogan> ciupicri: you can control the fps and quality independently
[23:57] <g1ra> yes the original vol is ok
[23:57] <ciupicri> then why did it fail before setting that quality to 2?
[23:58] <llogan> because by default it uses -b:v 200k, and i suppose for your output the resulting bitrate tolerance is too small.
[23:58] <g1ra> llogan, "ffmpeg -filters" http://pastebin.com/kFzHbasP
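What g1ra's `volume=10dB` asks for can be shown with a little math: a decibel value maps to a linear gain of 10^(dB/20), so +10 dB multiplies sample amplitudes by about 3.16. An illustrative sketch on 16-bit PCM samples; ffmpeg's actual volume filter also handles details like dithering that are ignored here:

```python
# Sketch of what "volume=10dB" means for PCM audio: dB -> linear gain,
# then scale each sample, clamping to the signed 16-bit range.

def db_to_gain(db):
    return 10 ** (db / 20.0)

def apply_gain(samples, db):
    g = db_to_gain(db)
    return [max(-32768, min(32767, int(round(s * g)))) for s in samples]

louder = apply_gain([1000, -2000], 10)   # each sample ~3.16x larger
```

This also shows why large boosts are risky: any sample above ~10362 would already clip at +10 dB.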
[23:59] <ciupicri> ok, thanks for the help and the explanation
[23:59] <llogan> so you could use -b:v instead of -q:v, but then you'd have to find a bitrate that isn't too low. q:v is easier
[00:00] --- Sat Aug  3 2013


More information about the Ffmpeg-devel-irc mailing list