[Ffmpeg-devel-irc] ffmpeg.log.20150116

burek burek021 at gmail.com
Sat Jan 17 02:05:01 CET 2015


[00:19] <knob> Hello everyone.  Got a question... looking for some direction.
[00:19] <knob> I pull a 30 second video from a rPi.  I save it locally as a .h264.   I then use MP4Box to convert it into .mp4
[00:19] <knob> All perfect in that part.
[00:19] <knob> I want to take that same .h264 file, and convert it to webm.
[00:20] <knob> I tried doing it with ffmpeg, yet I seem to be missing a codec (??).
[00:20] <knob>  I ran:    ffmpeg -codecs | fgrep vpx             and got back this:    http://pastebin.com/Sx4CnArm
[00:20] <knob> I am lost as to...   what do I need?   Do I need to re-compile ffmpeg to have this new codec for webm?
[00:21] <iive> knob: http://ffmpeg.org/general.html#libvpx ?
[00:21] <knob> iive, on my way
[00:22] <knob> Thank you iive ... I hope it's simple.   I am a n00b to ffmpeg... and, kinda lost.
[00:22] <knob> Question:   When that page says Then pass --enable-libvpx to configure to enable it.                         How do I "pass it to configure"?
[00:22] <knob> Where is the ./configure?
[00:22] <iive> in the ffmpeg source
[00:23] <knob> Ahh ok ok.   Ok
[00:23] <iive> it's the first step to compile mplayer... after extracting the source.
[00:23] <iive> i mean, ffmpeg
[00:24] <knob> Yes... now I see what I ran when I compiled ffmpeg... and ... I see where it is  ./configure  -----options----
[00:24] <knob> Reading up on your link...
[00:26] <knob> iive, this is what I run to "install" ffmpeg.           I am not sure how to "add" the WebM repo to this, so ffmpeg gets compiled with the VP8/VP9 codecs
[00:26] <knob> http://pastebin.com/Y934Fqr6
[00:28] <iive> why disable-mmx?
[00:28] <iive> btw, what distribution are you running?
[00:29] <knob> iive, Raspbian (from Debian).   It's on a Raspberry Pi
[00:29] <knob> btw, thank you for the help.   I have like 18 tabs open... and still not sure which direction I have to go
[00:30] <iive> knob: first, you have to install libvpx, if you are lucky, that debian flavor may already have it.
[00:30] <knob> Ok.
[00:31] <iive> just don't forget that you need the -dev packages to compile stuff.
[00:31] <knob> For WebM?    http://www.webmproject.org/code/  ?
[00:33] <iive> knob: https://sites.google.com/a/webmproject.org/wiki/ffmpeg/building-with-libvpx
[00:34] <iive> i wonder if libvpx is gpl...
[00:35] <iive> keep in mind, that page tells you how to install every library manually. it might be better to use a pre-built system version if available.
[00:38] <knob> iive, checking that link now... thank you
[00:39] <iive> i'll be off soon
[00:40] <knob> Yes... thank you.  I am going to try and install libvpx, and then re-compile ffmpeg
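
A sketch of the steps iive outlines, for a Debian-flavoured system (the package name is an assumption; keep whatever other configure flags knob's existing build script already uses):

    # install the library plus headers, if Raspbian packages it:
    sudo apt-get install libvpx-dev
    # then, from the extracted ffmpeg source tree:
    ./configure --enable-libvpx <other existing options>
    make && sudo make install
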
[00:41] <dirty_d> what am i missing here? ffmpeg -i /mnt/misc/VIDEO/MOVI0007.avi -c:v libx264 -crf 18 -c:a aac -profile:v baseline output.mp4
[00:41] <dirty_d> i get Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[00:41] <dirty_d> im using the examples from https://trac.ffmpeg.org/wiki/Encode/H.264
[00:48] <iive> dirty_d: is #0:0 the video? it looks like it does have rate (factor). try without "-profile:v baseline"
[00:49] <dirty_d> no dice
[00:49] <dirty_d> let me get you the full output
[00:49] <iive> try without "-c:a aac" and instead use -an
[00:49] <iive> that disables audio
[00:50] <dirty_d> http://codepad.org/FF1Q37pJ
[00:50] <dirty_d> that did the trick
[00:52] <dirty_d> hmm, what am i missing for audio though?
[00:52] <iive> aac might be the internal aac encoder, and there is only one encoder that is worse than it.
[00:52] <dirty_d> no dice with -c:a aac -ac 1 -ab 128k -ar 44100
[00:52] <iive> then...
[00:53] <dirty_d> what do you think i should use?
[00:53] <dirty_d> i dont really care about audio quality as much as it working on every device possible
[00:53] <iive> faac or fdk-aac
[00:54] <dirty_d> -c:a faac?
[00:54] <dirty_d> if so its not compiled in
[00:57] <dirty_d> oh, i guess its supposed to be libfdk_aac, but i dont have that either
[00:57] <dirty_d> ill just recompile
[00:58] <iive> all externals start with lib*
[00:58] <iive> `ffmpeg -codecs`
[00:59] <iive> dirty_d: also, you might want to add -pix_fmt yuv420p , as the log hints, because the mjpeg is yuv422p. 422 is mostly used in production, it handles interlacing better.
[01:00] <iive> gtg
[01:00] <iive> n8 ppl have fun.
[01:00] <dirty_d> later, thanks for the help
[01:11] <dirty_d> hmm, if anyone else has any ideas, -c:a libfaac did not work
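
For the record, a sketch of the fallback when neither libfaac nor libfdk_aac is compiled in: the native AAC encoder, which on builds of this era had to be unlocked with -strict experimental (the bitrate is illustrative; -pix_fmt yuv420p per iive's earlier hint):

    ffmpeg -i /mnt/misc/VIDEO/MOVI0007.avi -c:v libx264 -crf 18 -pix_fmt yuv420p \
           -c:a aac -strict experimental -b:a 128k output.mp4
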
[02:24] <fffan> is there a simple way to get pts after avcodec_decode_audio4
[02:53] <voip__> Hello guys, I'm transcoding and sending a stream to 2 different servers, it works well.
[02:53] <voip__> But I have 1 stream which I can't start
[02:53] <voip__> There is a difference between the inputs. The input stream which can't start has 4 audio streams.
[02:53] <voip__> How to fix this issue?
[02:53] <voip__>  http://pastebin.com/XisBgYwv
[06:21] <jjohn> Hello, guys.
[06:22] <jjohn> I am having a problem with ffmpeg capturing audio, but it's too long to explain here, so I used pastebin: http://pastebin.com/BEnUUn9E
[06:28] <klaxa> jjohn: see https://trac.ffmpeg.org/wiki/Capture/ALSA
[06:28] <klaxa> under recording an application
[06:34] <jjohn> klaxa: like type plug slave.pcm "hw:Loopback,3,0"?
[06:34] <klaxa> i don't know the first thing about alsa
[06:34] <klaxa> the page was linked by someone else when someone else wanted to record from alsa
[06:38] <jjohn> The thing is: when I set the device to hw:0,0, I have no audio anymore.
[06:39] <jjohn> And recording from 3,0 (when issuing ffmpeg) does again just produce silence.
[06:42] <jjohn> And if I set the loopback device in the /etc/asound.conf to 0,0 and then record something (which I do not hear), then the file contains silence again (after I commented the line so that I'd get back output).
[06:46] <fffan> why av_frame_get_best_effort_timestamp does not work for wma audio for wmv?
[06:48] <jjohn> But the module is already built into my kernel, so maybe the default value for pcm_substreams is 0? I will add this to my boot line, reboot the kernel, and see if that makes any difference.
[06:50] <jjohn> Will take a little while until the kernel is compiled and I restarted it. I will be back in a few minutes.
[07:05] <jjohn> Ahoy, guys.
[07:06] <jjohn> Recompiled the kernel to apply the new boot command line, but it didn't work out.
[07:06] <jjohn> Still no audio.
[07:07] <jjohn> http://pastebin.com/BEnUUn9E
[07:22] <jjohn> Of course I made sure that the kernel had the line: "snd_aloop.pcm_substreams=1"
[07:24] <pzich> are there two things trying to read from the pipe? do you need to tee it in some way?
[07:25] <jjohn> Did you read the pastebin? I guess I explained what I want.
[07:54] <john___> http://pastebin.com/BEnUUn9E
[08:24] <jjohn> http://pastebin.com/BEnUUn9E
[08:29] <klaxa> jjohn: what happens if you omit -ac 2?
[08:35] <jjohn> klaxa: in which command? /opt/local/ffmpeg/bin/ffmpeg -f alsa -i hw:3,0 -ac 2 -c pcm_s16le output.wav
[08:35] <klaxa> yes
[08:36] <jjohn> $ /opt/local/ffmpeg/bin/ffmpeg -f alsa -i hw:3,0 -c pcm_s16le output.wav
[08:36] <jjohn> >cannot set channel count to 2 (Invalid argument)
[08:36] <jjohn> >hw:3,0: Input/output error
[08:37] <jjohn> Stays the same.
[08:38] <jjohn> Right now there is VLC running with some Chrono Trigger tracks, which I'd like to capture from the device. But somehow it does not work.
[08:39] <jjohn> VLC outputs it to hw:3,0 (my USB headphones), and I can hear them. But I cannot capture them.
[08:50] <klaxa> did you do the loopback stuff?
[08:50] <klaxa> from the wiki entry about capturing alsa
[08:54] <jjohn> You mean pcm.!default { type plug slave.pcm "hw:Loopback,X,0" }? Yeah, I did.
[08:54] <jjohn> When I set it, I got no sound to my headphones anymore.
[08:54] <jjohn> Doesn't matter if X is 0 or 3.
[08:56] <jjohn> I wouldn't know which device I'd have to put in exactly, because the paragraph doesn't explain it at all.
[08:56] <jjohn> So I tried out several values, but without success.
[09:00] <jjohn> But there is still a difference between X being 0 and 3: if it's 3, VLC gives me an error message that the default device is not ready. I guess this is because the option for asound.conf sets the target loopback device. If X is 0, I still have no sound, but VLC does not give me an error message.
[09:09] <jjohn> Sorry, hardware hiccup, had to restart.
[09:23] <jjohn> klaxa you are out of ideas?
[09:24] <klaxa> have you done the "Record audio from an application while also routing the audio to an output device" part?
[09:24] <klaxa> also notice how the command line says:
[09:24] <klaxa> >ffmpeg -f alsa -ac 2 -ar 44100 -i hw:Loopback,1,0 out.wav
[09:24] <klaxa> notice the hw:Loopback,1,0
[10:03] <jjohn> klaxa: OK, I somehow got it by using the information in the "Record audio from an application while also routing the audio to an output device" paragraph. The command now is this: /opt/local/ffmpeg/bin/ffmpeg -f alsa -ac 2 -ar 48000 -i loopout -f alsa -ac 1 -ar 44100 -i mic -c pcm_s16le output.wav
[10:03] <klaxa> so it works properly now?
[10:03] <jjohn> But it seems that my mic is ignored - I do not hear my voice in output.wav, only the music.
[10:04] <jjohn> Not yet, but at least I hear music from VLC now in output.wav
[10:04] <klaxa> oh right that is probably because you have two input streams and only one output file. you might want to look at amix or even better amerge since you have streams with different channel layouts
[10:05] <klaxa> http://www.ffmpeg.org/ffmpeg-filters.html#amerge
[10:12] <jjohn> OK, now I hear both my voice and the music: /opt/local/ffmpeg/bin/ffmpeg -f alsa -ac 2 -ar 48000 -i loopout -f alsa -ac 1 -ar 44100 -i mic -filter_complex "[0:0][1:0] amerge=inputs=2" -c pcm_s16le output.wav
[10:12] <jjohn> But the background noise (first problem) is still there.
[10:13] <klaxa> background noise from the mic?
[10:13] <jjohn> Seemingly. A high-pitched, constant noise.
[10:14] <klaxa> you can either figure out the frequency and try to do some equalizer filtering or try to use sox with some piping stuff to apply a noise removal filter
[10:14] <klaxa> that will, however, add latency
[10:16] <jjohn> Hm. I got a program called spek which shows the spectrum of an audio file.
[10:17] <jjohn> And the noise seems to be at different frequencies ... from 0 kHz to 24 kHz.
[10:17] <klaxa> noise removal filter it is?
[10:19] <jjohn> Single lines only, but constant ... not sure if that would help. But see for yourself: https://imgur.com/EaKj40F
[10:19] <jjohn> This is noise only. I deactivated the mic, so this comes from a basically empty source.
[10:20] <klaxa> wot
[10:20] <klaxa> that looks broken
[10:20] <klaxa> well uh... i guess one could fix that one way or another
[10:21] <jjohn> Any ideas? :)
[10:21] <klaxa> write down the frequencies and block them with the equalizer filter?
[10:22] <klaxa> or rather, reduce them
[10:23] <baran> does anybody know how to properly calculate bitrate of a stream/video?
[10:23] <klaxa> filesize / fileduration ?
[10:23] <baran> formatcontext->bit_rate = 0
[10:23] <baran> well, the problem is i only get a byte stream from java :)
[10:24] <klaxa> can you elaborate the problem in a more precise way?
[10:24] <klaxa> pastebin code or something
[10:25] <baran> i cant paste code
[10:25] <baran> but i will try to elaborate
[10:30] <jjohn> klaxa: OK, I tried this out: /opt/local/ffmpeg/bin/ffmpeg -f alsa -ac 2 -ar 48000 -i loopout -f alsa -ac 1 -ar 44100 -i mic -filter_complex "[0:0][1:0] amerge=inputs=2" -af "equalizer=f=22000:width_type=h:width=200:g=-10" output.wav
[10:30] <jjohn> And it gave me this error: "-vf/-af/-filter and -filter_complex cannot be used together for the same stream."
[10:31] <klaxa> you can use equalizer in filter_complex
[10:32] <klaxa> like so: -filter_complex "[0:0]equalizer=f=22000:width_type=h:width=200:g=-10[loopback];[loopback][1:0] amerge=inputs=2"
[10:34] <baran> i read the file through jni (java native interface), so i don't know its size or duration. I read AVPackets, so i know their size (in bytes) and duration (in ffmpeg units) + the average framerate. so to my knowledge bit rate should be calculated something like this: bitrate = (packet_size * 8) * avg_framerate, since 1 packet should contain a single frame
[10:35] <klaxa> i doubt packets will be of similar size
[10:35] <klaxa> you might want to average over a number of packets
[10:35] <baran> but i didn't come close to the actual result calculating file_size / file_duration
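
For a seekable file the overall figure falls out of size and duration (a sketch using ffprobe; in baran's packet-only case the same ratio comes from summing packet sizes over summed packet durations, rather than assuming one frame per packet):

    ffprobe -v error -show_entries format=size,duration,bit_rate \
            -of default=noprint_wrappers=1 input.mp4
    # overall bitrate (bits/s) = size_bytes * 8 / duration_seconds
    # from packets only: sum(pkt_size) * 8 / sum(pkt_duration * time_base)
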
[10:35] <jjohn> Now I get another error: The following filters could not choose their formats: Parsed_amerge_1 - Consider inserting the (a)format filter near their input or output.
[10:36] <jjohn> /opt/local/ffmpeg/bin/ffmpeg -f alsa -ac 2 -ar 48000 -i loopout -f alsa -ac 1 -ar 44100 -i mic -filter_complex "[0:0]equalizer=f=22000:width_type=h:width=200:g=-10[loopback];[loopback][1:0] amerge=inputs=2" -c pcm_s16le  output.wav
[10:36] <klaxa> huh
[10:36] <klaxa> that's odd
[10:36] <klaxa> is there any additional info regarding that error?
[10:37] <jjohn> > [Parsed_amerge_1 @ 0x1a3cbe0] No channel layout for input 1 - Last message repeated 1 times
[10:37] <jjohn> And the previous error message was printed by [AVFilterGraph @ 0x1a3a300].
[10:38] <pzich> are you passing in a separate audio file/pipe/stream? or did you want [0:1] for separate stream, same input?
[10:38] <pzich> oh, derp, -i mic, missed that the command was there
[10:39] <klaxa> maybe add an aformat filter and tell it to use 2 channels?
[10:39] <pzich> what does ffmpeg interpret the input as, can you pastebin the console output?
[10:39] <klaxa> -filter_complex "[0:0]equalizer=f=22000:width_type=h:width=200:g=-10[loopback1];[loopback1]aformat=channel_layouts=stereo[loopback2];[loopback2][1:0] amerge=inputs=2"
[10:40] <jjohn> The loopback output as stereo (0:0), the mic as mono (1:0).
[10:40] <pzich> is that considered input 1 though? I think it's [1:0] (mic)
[10:41] <klaxa> is it the loopback causing troubles or the microphone? i would think the loopback since we have been messing with that with the equalizer and it worked earlier
[10:41] <jjohn> pzich input 0 is the loopback device, input 1 is the mic. The mic is causing the problems.
[10:42] <pzich> might be worth converting the mic to stereo?
[10:42] <jjohn> klaxa the filter that you provided works ... but I guess I'll have to tamper with it a bit.
[10:42] <pzich> I see [1:0][1:0]amerge=inputs=2[mic] being one of the ways to do that
[10:43] <jjohn> pzich even if I don't merge the mic into the loopback output the noise is still there. If I just write the mono channel into the file there is still the noise.
[10:43] <pzich> ah, don't know what to do about the noise, sorry
[10:44] <jjohn> Let me stress this: only record mic => still noise.
[10:44] <jjohn> Me neither. :(
[10:44] <pzich> is it just ffmpeg?
[10:44] <jjohn> I am at the point of ditching the mic and getting a new one. I have no idea about it.
[10:45] <jjohn> pzich no. If I record with audacity the noise is still there.
[10:45] <pzich> are you looking for a noise reduction filter?
[10:46] <klaxa> pzich: this is the spectrum graph: https://imgur.com/EaKj40F
[10:46] <jjohn> I dunno if audacity relies on ffmpeg ... I am currently tampering with noise reduction, yes.
[10:46] <pzich> ah, hmm, that's interesting
[10:46] <jjohn> But ... the spectrum is ... well ... klaxa showed it to you.
[10:46] <klaxa> if you can do post production i would advise recording to different audio streams and cleaning them separately
[10:47] <klaxa> if you can't, maybe sox is the best solution for noise removal
[10:47] <jjohn> Can't do. The output is going to be streamed in the end, and I want to avoid latency at all costs.
[10:47] <jjohn> These are just tests.
[10:48] <klaxa> voice latency is actually not as much of a deal as in-game audio latency is
[10:48] <klaxa> unless you are trying to be in sync with the in-game audio (if it's a game even)
[10:48] <jjohn> Who said anything about in-game? So far I just wanna have VLC running in the background.
[10:48] <klaxa> then it's pretty bad yeah
[10:48] <klaxa> like if you try to play along to something
[10:49] <jjohn> I mean, for now.
[10:49] <klaxa> well most people who come with these kinds of requests want to stream video games on twitch :P
[10:49] <klaxa> or some other streaming service
[10:49] <jjohn> If that would be the case I'd just use OBS or something like that.
[10:50] <jjohn> Or directly use Windows.
[10:51] <klaxa> well what is your exact use case then? maybe we are doing this too complicated
[10:51] <jjohn> Which makes me wonder ... maybe there is no problem with the recording on Windows - maybe it's just the driver being sucky?
[10:51] <pzich> seems like it could be the mic, cable, card or driver
[10:52] <jjohn> OK. I will do a test on Windows to check if it's the kernel driver. If the record is OK, then the kernel hacker will have something to do ... :)
[10:52] <jjohn> If not ... well, guess I will need to invest into new hardware then.
[10:53] <jjohn> Or do you have any more ideas?
[10:53] <klaxa> nope
[10:55] <jjohn> Thought so. Well, I am going to go now and will do the test later today. I will come back to chat regardless of whether the driver is the problem or not ...
[10:55] <jjohn> ... hm, waitasec. Are there developers present as well?
[10:55] <jjohn> Because I have another issue, which has nothing to do with audio recording, but with X11 grabbing.
[10:55] <klaxa> ffmpeg-developers? some read this channel i guess?
[10:56] <klaxa> well what's the issue?
[10:56] <jjohn> I noticed that if I record my desktop at 60 FPS and save the record to the HDD immediately, I quickly drop frames.
[10:57] <klaxa> probably slow I/O
[10:57] <jjohn> But if the video gets stored on a ramdisk, the FPSs are constant.
[10:57] <klaxa> that's what i resorted to
[10:57] <pzich> mmm, ramdisks
[10:58] <jjohn> That's what I thought as well. And I took a look into the sources. Seems like x11grab uses an AV interface to save single frames ... or at least small units of data.
[10:59] <jjohn> I believe that ffmpeg does not cache some data in RAM, but writes it to the HDD immediately. I know that this is appropriate, because if ffmpeg gets interrupted, the data is still there.
[10:59] <klaxa> maybe tweaking disk caches might help?
[11:00] <jjohn> But maybe ffmpeg could offer an option to create a small/bigger buffer in RAM, maybe for a second, and flush the data not immediately, but just every second or two ... or every 30 seconds. As the user wants it.
[11:01] <klaxa> open a feature-request/bug-report
[11:01] <jjohn> Hm, OK. Will do.
[11:03] <jjohn> Should I use ffmpeg-devel for this?
[11:03] <jjohn> The mailing list, I mean.
[11:05] <jjohn> or ffmpeg-trac?
[11:11] <ubitux> try -flush_packets 0 maybe
[11:16] <klaxa> i didn't even know about that
[11:23] <jjohn> ubitux that looks rather good. I have been recording for some time now - usually I would have dropped to 50 frames by now, but I am still at my full 60 FPS.
[11:26] <jjohn> But the commit message is the joke of the century: 0 disables it and may slightly increase performance in some cases.
[11:27] <jjohn> I am impressed by this increase of performance. Thank you very very much.
[11:29] <ubitux> what format do you output?
[11:30] <jjohn> libx264 in mp4.
[11:31] <jjohn> I am already at 40,000 frames, and I didn't drop a single frame.
[11:39] <ubitux> no difference here
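
For reference, a capture line of the kind jjohn describes, with ubitux's option applied (display, size and frame rate are assumptions):

    ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0 \
           -c:v libx264 -preset ultrafast -flush_packets 0 output.mp4
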
[12:05] <jeanre> hi all
[12:12] <jeanre> im trying to figure out the best way to get a frame from a video
[12:12] <jeanre> say 00:00:14 frame 2
[12:16] <jeanre> ffmpeg -i timecode\ 3h.mp4  -r 1  -ss  00:00:14 -frames:v 1  image-%d.jpeg
[12:17] <jeanre> this will extract 1 frame so 00:00:14 frame 1
[12:17] <jeanre> but I need frame 5
[12:23] <jeanre> the only way I can think of is to create an image for each frame
[12:23] <jeanre> take the last one
[12:23] <jeanre> trash the others
[12:26] <ribasushi> jeanre: that won't be the exact frame though - -ss on input is not precise
[12:26] <ribasushi> (documentation explains why)
[12:57] <OxFEEDBACC> hi, i used "ffmpeg -i stem-%3d.png ...." to create a video from rendered images... now i want it to put the same sequence into the video 3 times ... "-i stem-%3d.png -i stem-%3d.png -i stem-%3d.png" doesn't do that... what am i missing?
[12:58] <jeanre> ribasushi any other ideas?
[12:59] <jeanre> how I can get it more accurate?
[13:01] <knob> Hello everyone.    I am converting a 55-second .h264 file to webM.   The line I am using is this one:     ffmpeg -i /var/tmp/stream.h264 -strict experimental $TMP/stream.webm
[13:01] <jeanre> ribasushi so extracting all the frames will not work?
[13:01] <knob> Yet it is taking a LOT of time to "create" the webm file.   Is there a way I can... lower the quality? Or bit rate?     What do you suggest?
[13:01] <OxFEEDBACC> it doesn't even work when i copy all stem*png to sten*png and steo.png, then use  "-i stem-%3d.png -i sten-%3d.png -i steo-%3d.png"
[13:02] <knob> Or a new different set of command line arguments for ffmpeg?
[13:03] <OxFEEDBACC> -qscale?
[13:04] <knob> OxFEEDBACC, that's for me?
[13:04] <OxFEEDBACC> takes a parameter 1 (best quality) to 31 (least quality)
[13:04] <OxFEEDBACC> yup, knob
[13:04] <knob> OxFEEDBACC, ahh... ok.  Will try that now.  Thanks... I am totally new to ffmpeg, and a little bit overwhelmed.   On my way to google those parameters.
[13:04] <knob> Thank you
[13:04] <jeanre> I also see hh:mm:ss[.xxx]
[13:04] <knob> Will report back in a while on how it goes!
[13:04] <OxFEEDBACC> i use it to somehow influence the size of the output file
[13:04] <jeanre> whats the use of .xxx?
[13:05] <OxFEEDBACC> milliseconds?
[13:06] <OxFEEDBACC> so, answered two questions within my first 5 minutes here... could now someone look at making me happy, please? :-D
[13:06] <jeanre> im trying to extract a specific frame from the video
[13:06] <jeanre> like 00:00:14 frame 2
[13:07] <jeanre> the docs don't say anything about this
[13:07] <OxFEEDBACC> i guess it's not a good idea to mix hh:mm:ss times with frame numbers...
[13:07] <OxFEEDBACC> if you know the frame rate of your video, you should compute the absolute frame number and use that...
[13:07] <jeanre> well each second has 25 frames
[13:07] <jeanre> so frame 100
[13:08] <OxFEEDBACC> so, just convert your hh:mm:ss to seconds, multiply by 25, and there you go...
[13:08] <jeanre> OxFEEDBACC how would I do that
[13:08] <OxFEEDBACC> seconds=hh*3600+mm*60+ss
[13:08] <jeanre> no not that
[13:08] <jeanre> lol
[13:08] <OxFEEDBACC> wtf...
[13:08] <jeanre> extract frame 100
[13:08] <jeanre> or frame 125
[13:08] <OxFEEDBACC> how would i know? i'm new to ffmpeg... and never extracted anything...
[13:08] <jeanre> just that frame the calculation is the easy part
[13:09] <OxFEEDBACC> you could try mplayer for extracting stuff, however... i did it with sound clips...
[13:21] <knob> ox
[13:21] <knob> OxFEEDBACC, I used -qscale    and it returned    Please use -q:a or -q:v, -qscale is ambiguous
[13:21] <knob> is that the same thing?
[13:24] <OxFEEDBACC> how did you use it?
[13:25] <knob> I did:::  ffmpeg -i /var/tmp/stream.h264 -qscale 30 -strict experimental $TMP/stream.webm
[13:25] <knob> OxFEEDBACC, I am reading up... yet, ... not sure if the "position" of the argument is correct
[13:26] <hans_s> jeanre: if you know the exact frame number to extract, you could try to adjust input frame rate to 1 fps and therefore specify a more accurate -ss parameter
[13:26] <OxFEEDBACC> well, i specify it between the codec and out file name...
[13:26] <jeanre> hans_s I can calculate the frame that I need to extract
[13:27] <jeanre> so at 29 fps, 00:00:01 frame 2 will be frame 31
[13:27] <jeanre> it should work
[13:27] <knob> OxFEEDBACC, going to try now  like this::    ffmpeg -i /var/tmp/stream.h264 -strict experimental -qscale 30 $TMP/stream.webm
[13:27] <hans_s> jeanre: like: ffmpeg -r 1 -i whatever -ss 104 ... to get the 105th frame
[13:27] <knob> will report back
[13:28] <jeanre> ye
[13:28] <OxFEEDBACC> gl, knob
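
A sketch of a faster libvpx line for knob's case: with libvpx, -crf wants a -b:v ceiling alongside it, and -deadline realtime with a high -cpu-used trades quality for encoding speed (all values are illustrative):

    ffmpeg -i /var/tmp/stream.h264 -c:v libvpx -b:v 1M -crf 30 \
           -deadline realtime -cpu-used 5 $TMP/stream.webm
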
[13:28] <jeanre> i'm doing ffmpeg -i timecode\ 3h.mp4 -vf "select=gte(n\,32)" -vframes 1 out_img.png
[13:32] <hans_s> jeanre: specifying "-r 1" before "-i" will rewrite the timecodes of the frames, so frame 1 will be at 00:00:00, frame 2 at 00:00:01 and so on
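
Putting the approaches together: at 25 fps, "00:00:14 frame 2" is 0-based frame index 14*25 + 1 = 351, and the select filter jeanre is already using can pull exactly that frame (a sketch, filename illustrative):

    ffmpeg -i input.mp4 -vf "select=eq(n\,351)" -vframes 1 out_img.png
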
[13:35] <OxFEEDBACC> please, how can i repeatedly put a sequence of images into a video?
[13:36] <jeanre> hans_s
[13:36] <jeanre> ffmpeg -r 1 -i timecode\ 3h.mp4  -ss 3 -vframes 1 out_img.png
[13:36] <jeanre> always returns 00:00:00,2
[13:36] <jeanre> wtf
[13:38] <hans_s> so?
[13:40] <OxFEEDBACC> a funny thing is that when i extend it to even "-i stem-%3d.png -i sten-%3d.png -i steo-%3d.png -i step-%3d.png", when step* does not exist, it complains...
[13:41] <hans_s> OxFEEDBACC: I think specifying multiple -i means to add multiple parallel streams (of which probably only one is selected for output)
[13:42] <hans_s> maybe this helps: https://trac.ffmpeg.org/wiki/Concatenate
[13:42] <OxFEEDBACC> oh, okay, thanks a lot!
[13:42] <Carlo67> Hello, I am a new user to ffmpeg and I have some videos that I want to convert to .webm. I looked at the wiki page for VP8 http://trac.ffmpeg.org/wiki/Encode/VP8, so far so good. The article further links to a page about encoding parameters http://www.webmproject.org/docs/encoder-parameters/ .. can I pass these VP8 encoding parameters to ffmpeg?
[13:46] <OxFEEDBACC> ya, "-f concat" can't be used with the -i stem-%03d form of input spec... :-/
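
A sketch of the concat-demuxer route for repeating the sequence (filenames and frame rate assumed): encode one pass of the images, then list that file three times:

    ffmpeg -framerate 25 -i stem-%3d.png -c:v libx264 -pix_fmt yuv420p seq.mp4
    printf "file 'seq.mp4'\nfile 'seq.mp4'\nfile 'seq.mp4'\n" > list.txt
    ffmpeg -f concat -i list.txt -c copy repeated.mp4
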
[13:46] <jeanre> hans_s -ss 4 also returns the same
[13:47] <hans_s> what does the 2 in 00:00:00,2 mean? 2nd frame? 200ms?
[13:47] <jeanre> second frame
[13:47] <OxFEEDBACC> where in the man pages can i read more about the -i option than in man ffmpeg?
[13:55] <edoloughlin> I've got a file of RGBA frames @25fps that I'm encoding to MP4. Is there anything I could do on the input side to improve encoding performance? E.g., any extra metadata I could supply or use JPEG instead of RGBA?
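
edoloughlin's question goes unanswered in the log; assuming the frames sit as raw RGBA in a single file, a sketch — the rawvideo demuxer needs the geometry spelled out, and most of the speedup usually comes from the x264 preset rather than the input side (size and filenames are assumptions):

    ffmpeg -f rawvideo -pixel_format rgba -video_size 1280x720 -framerate 25 \
           -i frames.rgba -c:v libx264 -preset fast -pix_fmt yuv420p out.mp4
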
[14:01] <OxFEEDBACC> man, it can't be THAT complicated to put some fucking simple pictures into a video?
[15:29] <herbySk> Hi! Is there any option to make ffmpeg end with a nonzero exit code if the input file is partial?
[15:30] <herbySk> (I just realized it ends with exit code 0 for a partial file :-( which I would really not expect)
[15:32] <c_14> Define "partial file"
[15:33] <herbySk> c_14 an mp4 file which is incomplete. After all, ffmpeg outputs "partial file" in red letters. Just, it exits with code 0.
[15:33] <herbySk> I use it to get frames of the video. Though, I'd have presumed it does not end with exit code 0 if the video was not complete.
[15:52] <c_14> hmm, utils.c appears to be ignoring AVERROR_INVALIDDATA
[15:52] <c_14> Not entirely sure why.
[15:53] <c_14> Make a bug report on trac mentioning it
[15:54] <herbySk> c_14 Hm. Maybe it's a feature :-( to allow recovering broken videos, or so. Where's the trac?
[15:55] <c_14> https://trac.ffmpeg.org/
[16:26] <netwiz> do you guys know a good way to crank up contrast on a black and white video
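
netwiz's question also goes unanswered; newer ffmpeg builds carry an eq filter whose contrast option does this (a sketch, the amount is illustrative):

    ffmpeg -i input.mp4 -vf eq=contrast=1.5 -c:a copy output.mp4
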
[18:29] <ike> can anyone help me, im having an issue trying to get ffmpeg to work on my amazon server
[18:31] <ike> anyone here
[18:50] Last message repeated 1 time(s).
[18:52] <c_14> Maybe if you asked a question, someone could help you.
[18:54] <ike> fair enough
[18:59] <ike> I've installed ffmpeg on my amazon EC2 server to convert files to mp3. I have android and iphone apps that send native audio recordings to the server to be converted into an mp3. It converts the audio file from android as .m4a or .amr to mp3, and for iphone .wav to .mp3. Right now we run -i /var/www/filepath..../audiofile.m4a /var/www/filepath...../out_audiofile.mp3... the result is
[18:59] <ike> a file with 0 size and the error i get is "unsupported codec for output stream #0.0". On my computer i try to convert the files using ffmpeg and it works fine. So the server is having the issue. I've looked through stack overflow and have tried various solutions with no luck. Please let me know if anyone can help me.
[18:59] <c_14> pastebin the output of `ffmpeg -version' please
[18:59] <c_14> On the server
[18:59] <c_14> It presumably was built without mp3 encoding support
[19:00] <ike> SVN-r0.5.10-4:0.5.10-1
[19:00] <c_14> yeah
[19:00] <c_14> I don't even know what version that's supposed to be.
[19:00] <c_14> Either use a static build or build from source.
[19:01] <c_14> Or find a way to get a newer version of ffmpeg.
[19:02] <ike> do i get the static build or the source from GIT? or somewhere specific
[19:05] <c_14> http://johnvansickle.com/ffmpeg/
[19:05] <c_14> for the static builds
[19:05] <c_14> source is at https://git.videolan.org/?p=ffmpeg.git
[19:06] <ike> thank you i will work with that for now
[19:35] <jjohn> klaxa: are you still around?
[19:36] <klaxa> yes
[19:36] <jjohn> OK, just wanted to give you an update.
[19:36] <jjohn> So, I tried out to record on Windows, and it had the same static noise in the background. Thus I consider the hardware broken.
[19:37] <klaxa> ok
[19:37] <klaxa> time for new hardware then probably
[19:37] <jjohn> But I managed to record from another internal device which seems to be way more reliable, and together with the changes that I made this morning I am now able to record other applications.
[19:37] <klaxa> ah right
[19:37] <jjohn> So, I wanted to thank you for your help and your patience with me.
[19:37] <klaxa> you had another soundcard
[19:38] <jjohn> I have three, to be precise.
[19:38] <klaxa> you're welcome, you explained your problem in a very precise way
[19:38] <jjohn> Just wanted to make sure that you see the exact things that I see. That's just how I work. :)
[19:39] <jjohn> Again, thanks and a very nice weekend.
[19:40] <am11> hey guys can anyone here help me with a gcc-c++/libstdc++ problem?
[19:44] <shudouken> when using 2 pass encode, is there any command that makes ffmpeg automatically remove the ffmpeg2pass-0.log file when done with the second pass?
[19:45] <c_14> shudouken: not that I know of
[19:45] <torrente> hi
[19:45] <c_14> am11: what is the problem?
[19:46] <klaxa> what about "rm ffmpeg2pass-0.log" ?
[19:46] <c_14> shudouken: use && rm ffmpeg2pass-0.log
[19:46] <torrente> who use mediacoder?
[19:47] <shudouken> yes I will but that won't work for every OS, which is why I asked
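
There is no built-in cleanup option, so chaining is the usual route — a POSIX-shell sketch (filenames and bitrate illustrative; on Windows substitute del and NUL; x264 also leaves an .mbtree file next to the log):

    ffmpeg -y -i in.mp4 -c:v libx264 -b:v 2M -pass 1 -an -f null /dev/null && \
    ffmpeg -i in.mp4 -c:v libx264 -b:v 2M -pass 2 -c:a copy out.mp4 && \
    rm -f ffmpeg2pass-0.log ffmpeg2pass-0.log.mbtree
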
[19:47] <torrente> because it has many features
[19:51] <torrente> who convert video files?
[20:06] <voip__> Hello guys, I'm transcoding and sending a stream to 2 different servers, it works well.
[20:06] <voip__> But I have 1 stream which I can't start
[20:06] <voip__> There is a difference between the inputs. The input stream which can't start has 4 audio streams.
[20:06] <voip__> How to fix this issue?
[20:06] <voip__>  http://pastebin.com/XisBgYwv
[21:22] <voip__> any help
[21:22] <voip__> ?
[21:24] <c_14> [flv @ 0x4bc6940] at most one audio stream is supported in flv
[21:24] <sfan5> voip__: [flv @ 0x4bc6940] at most one audio stream is supported in flv
[21:24] <c_14> that timing
[21:24] <sfan5> D:
[21:24] <voip__> what does it mean? how to fix the problem?
[21:24] <yarilo_> Hi guys
[21:25] <c_14> voip__: drop audio channels
[21:25] <yarilo_> I have a question which I guess is really popular
[21:26] <yarilo_> Is there a way that I can create a video that would play on all the major browsers?
[21:26] <voip__> c_14, i need audio channels :) what's the correct ffmpeg command?
[21:26] <yarilo_> now I have to create .mp4 and .webm videos
[21:26] <c_14> voip__: if you need more than one audio channel per stream, don't use flv
[21:27] <c_14> yarilo_: I am relatively certain you're sol
[21:28] <yarilo_> c_14 I don't understand you :)
[21:28] <voip__> c_14, i need to take a live stream, keep the video (no transcoding), and resend it to 2 servers with 1 audio. flv
[21:29] <c_14> yarilo_: in short, I know of no codec/format that is supported in every browser
[21:29] <c_14> voip__: then map the audio stream you want
[21:29] <yarilo_> <c_14> ok
[21:29] <yarilo_> uhmm
[21:29] <yarilo_> then I guess I'll have to go with creating both videos
[21:30] <voip__> where is my mistake ? tee -map 0:v -map 0:a -flags +global_header
[21:30] <c_14> -map 0:a:0 if you want the first audio stream only
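
Spelled out, a sketch of what c_14 suggests for voip__'s case (input and server URLs are assumptions): copy the video, map only the first audio stream, and fan out through the tee muxer, since flv takes at most one audio stream:

    ffmpeg -i <input stream> -map 0:v -map 0:a:0 -c copy -flags +global_header \
           -f tee "[f=flv]rtmp://server1/app/key|[f=flv]rtmp://server2/app/key"
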
[21:31] <yarilo_> before going that way, do you have any experience whether it will be faster to create both videos at the same time or create one first and then convert it to the other format. I'm doing the following (in order):
[21:31] <yarilo_> 1. create several videos from images
[21:32] <voip__> c_14, thanks ! :)
[21:32] <yarilo_> 2. Merge those videos with other videos
[21:32] <yarilo_> at that point I get the final video
[21:32] <yarilo_> 3. Merge the final video with audio and get the complete final video
[21:33] <yarilo_> so
[21:33] <c_14> I'd split after 2. If I'm understanding you correctly.
[21:33] <yarilo_> should I add step 4. Convert the final video (which is .mp4) to webm
[21:33] <yarilo_> you'd split?
[21:34] <yarilo_> I need to merge all the videos together to get one final video which I then merge with audio file
[21:34] <c_14> as in ffmpeg -i final_video -i audio -map 0 -map 1 -c:v copy -c:a copy out.mp4 -map 0 -map 1 -c:v libvpx -c:a libvorbis out.webm
[21:34] <yarilo_> aha
[21:35] <yarilo_> so this will create both at the same time
[21:35] <c_14> yep
[21:35] <yarilo_> ok
[21:35] <yarilo_> I'll test :)
[21:35] <yarilo_> thank you!
[21:36] <yarilo_> Btw I know this is not exactly ffmpeg question
[21:36] <yarilo_> but if someone has experience executing ffmpeg through PHP they could maybe help me
[21:37] <yarilo_> I have a problem when I use -filter_complex
[21:37] <yarilo_> -filter_complex "[1:0] setsar=sar=1,format=rgba [1sared]; [0:0]format=rgba [0rgbd]; [0rgbd][1sared]blend=all_mode='overlay':all_opacity=0.8,format=yuva422p10le"
[21:38] <c_14> what's the problem?
[21:38] <yarilo_> it adds backslashes before \[1:0\] etc..
[21:38] <yarilo_> so when it's executed it looks sth like
[21:38] <yarilo_> ok hold on let me paste the exact output
[21:39] <c_14> use -filter_complex_script and the path to a file containing the actual complex
[21:39] <c_14> Then, no more escaping things.
[21:40] <yarilo_> wow
[21:40] <yarilo_> let me test that
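
A sketch of what c_14 means (paths assumed): the graph goes into a plain file verbatim, so PHP's shell escaping never touches it:

    # filter.txt contains, on one line:
    #   [1:0] setsar=sar=1,format=rgba [1sared]; [0:0]format=rgba [0rgbd]; [0rgbd][1sared]blend=all_mode='overlay':all_opacity=0.8,format=yuva422p10le
    ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex_script /path/to/filter.txt out.mp4
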
[23:06] <halfie> hi, how do I generate ADTS headers? I already have encoded AAC data.
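
halfie's question goes unanswered in the log; if the encoded AAC sits in (or can be put into) a container, ffmpeg's adts muxer writes the ADTS headers on a plain stream copy (a sketch, filenames assumed):

    ffmpeg -i input.m4a -c:a copy -f adts output.aac
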
[23:07] <vladimir> hi to all
[23:08] <vladimir> can someone help me
[23:08] <vladimir> I want to take 2 streams from IP cameras (axis mjpeg streams) and make them like one
[23:08] <vladimir> 2 mjpeg > 1mjpeg
[23:09] <vladimir> 1280x720 (x2) ====> 2560x720
[23:10] <c_14> pad and overlay
[23:11] <vladimir> url from camera is http://user:pass@ipofcamera/axis-cgi/mjpg/video.cgi?camera=1
[23:13] <vladimir> c_14 how does the full command go?
[23:16] <c_14> ffmpeg -i url -i url2 -filter_complex '[0:v]pad=2560:720[bg];[bg][1:v]overlay=1280:0[v]' -map '[v]' outfile
[23:16] <c_14> or something
[23:17] <c_14> https://ffmpeg.org/ffmpeg-filters.html read the manpage
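
End to end, with vladimir's camera URL pattern (the second camera's URL is an assumption): pad the first feed to double width, then overlay the second on the right half. Single-quoting the URLs also keeps the shell from treating any & in them as a background operator.

    ffmpeg -i 'http://user:pass@cam1/axis-cgi/mjpg/video.cgi?camera=1' \
           -i 'http://user:pass@cam2/axis-cgi/mjpg/video.cgi?camera=1' \
           -filter_complex '[0:v]pad=2560:720[bg];[bg][1:v]overlay=1280:0[v]' \
           -map '[v]' -c:v mjpeg out.mjpg
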
[23:18] <yarilo_> I'm back again <c_14>
[23:18] <yarilo_> so
[23:19] <yarilo_> as filter_complex parameters I send [1:0] setsar=sar=1,format=rgba [1sared]; [0:0]format=rgba [0rgbd]; [0rgbd][1sared]blend=all_mode='overlay':all_opacity=0.8,format=yuva422p10le
[23:19] <yarilo_> if I put this in a file named filtercomplex
[23:19] <yarilo_> I try to run the command with ffmpeg .. .. ..  filter_complex_script filtercomplex ...
[23:20] <yarilo_> right?
[23:20] <c_14> probably
[23:20] <c_14> if filtercomplex is in your cwd
[23:20] <yarilo_> I give absolute path
[23:27] <vladimir> c_14... but how to make the outgoing http mjpeg stream be... not file.mjpg...
[23:28] <c_14> Get an http server that can serve mjpeg and send the stream there?
[23:33] <vladimir> hm nooo
[23:34] <vladimir> like I said I have 2 AXIS ip cameras...
[23:34] <vladimir> and I need to make one mjpeg stream from the two cameras....
[23:34] <vladimir> after I make one stream..... I will record the 1 stream in another solution...
[23:35] <c_14> yees, I heard that. I also told you what you need to do. Take the two input streams, make one out of them like I told you, send the result wherever you want.
[23:35] <vladimir> aha
[23:35] <vladimir> I made it and it works
[23:35] <vladimir> like this...
[23:35] <vladimir> -filter_complex '[0:v]pad=2560:720[bg];[bg][1:v]overlay=1280:0[v]' -map '[v]' test.mjpg
[23:36] <vladimir> but I don't know how to make a "stream" http mjpeg stream
[23:36] <c_14> You can try streaming to ffserver and producing an mjpeg stream that way, or find another server that accepts input in whatever form and makes an mjpeg stream from it.
[23:38] <vladimir> what do you think is the best solution ==> ffserver ... and after that to http stream
[23:39] <c_14> ffmpeg can stream to ffserver and ffserver can serve the http stream
[23:44] <yarilo_> uhmmm
[23:45] <yarilo_> with the current filter_complex params I receive First input link top parameters (size 836x360, SAR 1:1) do not match the corresponding second input link bottom parameters (846x360, SAR 1:1)
[23:45] <yarilo_> here they are again
[23:46] <yarilo_> [1:0] setsar=sar=1,format=rgba [1sared]; [0:0]format=rgba [0rgbd]; [0rgbd][1sared]blend=all_mode='overlay':all_opacity=0.8,format=yuva422p10le
[23:47] <vladimir> axis-cgi/mjpg/video.cgi?camera=1&fps=5  -filter_complex '[0:v]pad=2560x720;[0:v][1:v]overlay=1280:0[v]' -map '[v]' test.mjpg [9] 29062 [10] 29063 -bash: -filter_complex: command not found root at ns-test:~# -bash: -i: command not found
[23:47] <vladimir> there is some error on command
[23:48] <c_14> -i isn't a bash command...
[23:48] <c_14> you want ffmpeg -i
[23:49] <c_14> yarilo_: resize/pad the top video to the bottom width
[23:49] <yarilo_> hmmm
[23:50] <yarilo_> the first video is a dynamically created video
[23:50] <yarilo_> that's the video created from images
[23:51] <yarilo_> and that should remain as it is
[23:52] <yarilo_> but the second one is an overlay video that actually is static
[23:52] <yarilo_> meaning
[23:52] <yarilo_> I can change that one
[23:52] <yarilo_> oh FUCK
[23:52] <yarilo_> sorry
[23:52] <yarilo_> I just saw this
[23:55] <yarilo_> the videos were supposed to have the same dimensions
[23:56] <yarilo_> that's how it was initially
[23:56] <yarilo_> ughhhhh
[23:56] <yarilo_> anyways
[23:56] <yarilo_> thank you <c_14>
[23:56] <yarilo_> I owe you a beer
[00:00] --- Sat Jan 17 2015

