[Ffmpeg-devel-irc] ffmpeg.log.20140110

burek burek021 at gmail.com
Sat Jan 11 02:05:01 CET 2014


[00:44] <Guest20535> are there any methods to transcode HTTPS traffic?
[00:46] <Guest20535> any suggestions?
[00:48] <Guest20535> Content providers are moving to HTTPS for all their content, not just streaming video. Do we have any methods to optimize secure content?
[01:02] <Guest20535> @Jeeb: any suggestions
[05:54] <mootsadog> I have some dumb questions if y'all don't mind. I'm using MacOS X 10.8.x. How can I add codec support to ffmpeg? Does it have to be recompiled for it? I just grabbed somebody's binary.
[06:02] <relaxed> mootsadog: yes. which codec?
[06:03] <mootsadog> WebM
[06:14] <relaxed> mootsadog: what's the output of `ffmpeg -codecs 2>/dev/null| grep vp[89]`
[06:14] <mootsadog>  DEV.L. vp8                  On2 VP8 (decoders: vp8 libvpx ) (encoders: libvpx )
[06:14] <mootsadog>  D.V.L. vp9                  Google VP9
[06:16] <relaxed> ok, then you should have support for creating webm video
[06:17] <mootsadog> "Unknown encoder 'webm'"
[06:17] <relaxed> webm is a container. ffmpeg -i input -c:v libvpx -c:a libvorbis output.webm
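relaxed's command, expanded into a sketch (the filenames and the bitrate/quality settings are placeholders, not from the discussion):

```shell
# WebM is a container; the encoders are libvpx (VP8 video) and
# libvorbis (Vorbis audio). input.mp4 / output.webm are placeholder names.
cmd="ffmpeg -i input.mp4 -c:v libvpx -b:v 1M -c:a libvorbis -q:a 4 output.webm"
echo "$cmd"
# Only run for real when ffmpeg and the input are actually present.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
    $cmd
fi
```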
[06:18] Action: mootsadog facepalm
[06:18] <mootsadog> thanks for straightening that out relaxed
[06:18] <mootsadog> and there it goes
[06:19] <mootsadog> i've never used webm and need to start publishing to my own website right away due to youtube headaches, so I appreciate it :D
[06:21] <relaxed> mootsadog: might want to look at `ffmpeg -h encoder=libvpx 2>/dev/null` too
[06:22] <relaxed> mootsadog: and https://trac.ffmpeg.org/wiki/vpxEncodingGuide
[06:22] <mootsadog> ah! handy!! thank you for your help :)
[06:23] <relaxed> you're welcome
[06:47] <mstrcnvs> is it possible to convert only a byte-range of a file using the ffmpeg CLI?
[09:40] <GT1> Hi, I would need somebody to help me to figure out something. I want to turn a gif file into an avi file. If i use ./ffmpeg -i input.gif output.avi it works and I get one animation time of video
[09:41] <GT1> now I want to loop it to repeat it until i press q, and my problem is that when i use it like this
[09:41] <GT1> ./ffmpeg -loop 1 -i input.gif output.avi it says option loop not found. when I change the gif to a png or image it works, any idea why?
[09:45] <GT1> easy to say, but I've hit the pastebin limit and the sign-up activation e-mail isn't coming. Until I get that working: is this supposed to work?
[09:47] <GT1> http://pastebin.com/ttdMEPJr
[09:48] <GT1> so anyone have a clue what's wrong?
[09:48] <ubitux> you don't need -loop 1
[09:48] <GT1> if I don't use loop only one animation time is saved to the video
[09:48] <ubitux> ah, to make it loop
[09:48] <ubitux> -ignore_loop 0
[09:49] <ubitux> ffmpeg -ignore_loop 0 -i input.gif -t 60 output.avi
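ubitux's suggestion as a runnable sketch (filenames and the 60-second cap are placeholders). The GIF demuxer's -ignore_loop option defaults to 1, i.e. the loop count stored in the file is ignored; setting it to 0 makes the animation repeat, so -t is needed to bound the output:

```shell
# Loop a GIF until the -t limit instead of playing it through once.
cmd="ffmpeg -ignore_loop 0 -i input.gif -t 60 output.avi"
echo "$cmd"
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.gif ]; then
    $cmd
fi
```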
[09:49] <ubitux> probably
[09:49] <GT1> thx for the help, to be honest I never saw the ignore_loop option before
[09:54] <ubitux> it's documented
[09:54] <ubitux> see ffmpeg -h demuxer=gif
[09:55] <ubitux> "documented"
[09:55] <ubitux> :p
[09:55] <GT1> rofl :) yeah "documented" used quite loosely
[09:56] <ubitux> the gif muxer is documented, but indeed the demuxer one is missing
[09:56] <GT1> thx anyway :)
[09:59] <AleXoundOS> Hi. I'm streaming to twitch.tv/snipealot2 with ffmpeg. There is a high amount of image distortion. Can I avoid this distortion somehow?
[10:25] <GT1> I have another question, is there any way to use overlay and itsoffset together, but in a way that it shows the first frame of the video until the input starts?
[10:53] <GT1> Can somebody help me with this? http://pastebin.com/LxwDE2eh
[10:53] <GT1> I want to add a couple of videos side by side and mix their audio at different sound levels
[10:54] <EchoDev> Any russians here?
[10:54] <GT1> my problem first of all is that around 2 sec into the output video there is a glitch in the sound, and the console output is a bit unsettling
[10:55] <GT1> and when the first video is over, there is no sound at all
[10:57] <GT1> anyone?
[10:58] <GT1> @ubitux are you here?
[11:00] <AleXoundOS> EchoDev, yep
[11:00] <EchoDev> AleXoundOS can you tell me what this means? http://kthnxbai.info/PicAyzo/977dab.png
[11:01] <GT1> Can anyone help me with the amerge problem?
[11:02] <AleXoundOS> EchoDev, Settings. Choose a folder for synchronization with the Cloud. Folder is used by another account.
[11:02] <EchoDev> Hmm ok, thanks :)
[11:04] <GT1> and only if I output mp4, if I leave out video and just generate an mp3 it works
[11:04] <GT1> at least the output is what I wanted, but it still generates a lot of errors
[11:19] <ubitux> GT1: can you isolate the amerge in your command?
[11:21] <GT1> yes I can, just a sec
[11:23] <GT1> http://pastebin.com/yVX52dfT
[11:23] <GT1> the funny thing is as I said, when I only export audio to an mp3 there is no error and the sound is correct
[11:23] <ubitux> and only video works as well?
[11:24] <GT1> just a sec
[11:29] <GT1> k, I'm back. I believe so; I'll run a test with the color source too
[11:29] <GT1> Yes it does, separately it works as it's supposed to
[11:29] <GT1> without errors
[11:30] <ubitux> can you paste the working video command?
[11:30] <GT1> k
[11:30] <GT1> with the output console too?>
[11:31] <ubitux> yes please
[11:32] <GT1> http://pastebin.com/yWwtK1RG
[11:33] <ubitux> "color=s=1280x400:duration=10:0xFFFFFF[bg];[0]scale=640:-1[q];[bg][q]overlay=0:0[e];[1]scale=640:-1[r];[e][r]overlay=640:0[v]; [0:a][1:a]amerge, pan=stereo|c0=0.9*c0+0.1*c2|c1=0.9*c1+0.1*c3[a]" -map "[v]" -map "[a]
[11:33] <ubitux> doesn't work?
[11:34] <ubitux> "color=s=1280x400:duration=10:0xFFFFFF[bg];[0:v]scale=640:-1[q];[bg][q]overlay=0:0[e];[1]scale=640:-1[r];[e][r]overlay=640:0[v]; [0:a][1:a]amerge, pan=stereo|c0=0.9*c0+0.1*c2|c1=0.9*c1+0.1*c3[a]" -map "[v]" -map "[a]"
[11:34] <GT1> nope, cannot allocate memory
[11:34] <GT1> i mean it renders and all
[11:34] <GT1> but the audio is buggy
[11:35] <GT1> http://pastebin.com/5AL8Q82y
[11:36] <GT1> https://www.dropbox.com/s/9fk89kgdth6eb4c/output.mp4
[11:36] <ubitux> you're encoding with libvo_aac
[11:37] <ubitux> it's known to be one of the worst
[11:37] <ubitux> try -c:a aac -strict experimental as output option
[11:37] <ubitux> though, the error is strange
[11:38] <GT1> yeah, just trying that
[11:38] <GT1> same
[11:38] <ubitux> can you share the 2 input videos?
[11:38] <ubitux> or are they too large?
[11:39] <GT1> http://pastebin.com/mrt7Naps
[11:39] <GT1> they are two random free videos from the net, 200 MB in total
[11:40] <GT1> i'll upload it if that helps
[11:40] <GT1> first video
[11:40] <GT1> https://www.dropbox.com/s/6u762orr9drmww1/first.MOV
[11:41] <ubitux> i'd like to have the simplest command line possible to reproduce the problem
[11:42] <GT1> ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:v]scale=640:-1[e];[1]scale=640:-1[r];[e][r]overlay=640:0[v]; [0:a][1:a]amerge, pan=stereo|c0=0.9*c0+0.1*c2|c1=0.9*c1+0.1*c3[a]" -map "[v]" -map "[a]" -acodec aac -strict experimental -ar 44100 -ac 2 output.mp4
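For readers following along, the filtergraph in GT1's command breaks down roughly like this (a sketch with GT1's filenames; the white 1280x400 background comes from the earlier, fuller variant of the command):

```shell
# Side-by-side video plus a weighted stereo mix of both audio tracks.
# Built up piece by piece so each stage is visible:
filter='color=s=1280x400:d=10:c=white[bg];'
filter="${filter}[0:v]scale=640:-1[l];[1:v]scale=640:-1[r];"
filter="${filter}[bg][l]overlay=0:0[t];[t][r]overlay=640:0[v];"
# amerge joins the two stereo streams into one 4-channel stream (c0..c3);
# pan then downmixes back to stereo, weighting input 1 at 0.9 and input 2 at 0.1.
filter="${filter}[0:a][1:a]amerge,pan=stereo|c0=0.9*c0+0.1*c2|c1=0.9*c1+0.1*c3[a]"
echo "$filter"
if command -v ffmpeg >/dev/null 2>&1 && [ -f first.MOV ] && [ -f second.MOV ]; then
    ffmpeg -i first.MOV -i second.MOV -filter_complex "$filter" \
        -map '[v]' -map '[a]' -c:a aac -strict experimental output.mp4
fi
```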
[11:42] <ubitux> yes, but that's really complex :)
[11:42] <GT1> thats one over and audio merge
[11:43] <ubitux> like, can you remove the video part of the filtergraph (and pick the first video stream)
[11:43] <GT1> but k, just a sec
[11:43] <ubitux> and then is the pan filter necessary to reproduce the issue
[11:43] <ubitux> etc
[11:43] <GT1> nope
[11:43] <GT1> wait
[11:43] <GT1> pan?
[11:43] <GT1> i'll try
[11:43] <ubitux> yes you have a pan filter after your amerge for some reason
[11:44] <GT1> nope
[11:44] <GT1> ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:v]scale=640:-1[v]; [0:a][1:a]amerge[a]" -map "[v]" -map "[a]" -acodec aac -strict experimental -ar 44100 -ac 2 output.mp4
[11:44] <GT1> still produces the error
[11:44] <GT1> second video
[11:44] <GT1> https://www.dropbox.com/s/gkkijanvipma19q/second.MOV
[11:45] <ubitux> ok now, what about removing "[0:v]scale=640:-1[v];" (and -map "[0:v]" instead of -map "[v]")
[11:45] <GT1> http://pastebin.com/9SKNBxWP
[11:45] <GT1> k 1 try
[11:46] <ubitux> if that works, add -c:v copy
[11:46] <GT1> Output with label '0:v' does not exist in any defined filter graph, or was already used elsewhere.
[11:47] <ubitux> huh.
[11:47] <GT1> same reaction
[11:47] <GT1> ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:a][1:a]amerge[a]" -map "[0:v]" -map "[a]" -acodec aac -strict experimental -ar 44100 -ac 2 output.mp4
[11:47] <GT1> Output with label '0:v' does not exist in any defined filter graph, or was already used elsewhere.
[11:47] <ubitux> ah, -map 0:v
[11:47] <ubitux> sorry
[11:47] <GT1> same error
[11:48] <GT1> i mean
[11:48] <GT1> it runs
[11:48] <GT1> but cannot allocate memory
[11:48] <ubitux> good
[11:48] <GT1> no sound when it says cannot allocate memory
[11:49] <GT1> How the hell can this be good? :P
[11:49] <ubitux> the cli is simpler ;)
[11:49] <ubitux> just a min
[11:49] <ubitux> looking at reproducing
[11:49] <GT1> ok
[11:52] <ubitux> i have sound here
[11:52] <ubitux> despite the message
[11:52] <GT1> i have sound too, but time to time it just mutes then goes forward
[11:52] <GT1> did you check my sample output?
[11:53] <ubitux> your sample output?
[11:53] <GT1> for a short time there is sound, then mutes, then the sound comes back and this repeats, so it's not really acceptable
[11:53] <GT1> yeah I linked it on dropbox :P
[11:53] <GT1> https://www.dropbox.com/s/9fk89kgdth6eb4c/output.mp4
[11:53] <GT1> this was the output that the most complex command produced
[11:53] <GT1> after 1 sec there is no sound till second 2
[11:54] <GT1> and after 5-6 sec there is no sound at all
[11:54] <GT1> after second 7 when the first video stops
[11:54] <ubitux> how was that generated?
[11:54] <GT1> the first command with color and overlay and pan all together
[11:54] <GT1> but I'll upload a new one with the simplified version
[11:57] <GT1> so now I used this command
[11:57] <GT1> ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:a][1:a]amerge[a]" -map 0:v -map "[a]" -acodec aac -strict experimental -ar 44100 -ac 2 output.mp4
[11:57] <GT1> and produces this
[11:57] <GT1> https://www.dropbox.com/s/9fk89kgdth6eb4c/output.mp4
[11:58] <ubitux> i used that exact command, with -c:v copy to avoid re-encoding the video
[11:58] <ubitux> and it sounds fine to me
[11:59] <ubitux> your sample also sounds fine to me
[11:59] <GT1> Huh?
[11:59] <GT1> You don't hear any sound mutes during the playback?
[11:59] <ubitux> no
[11:59] <GT1> wait i'll test it on a device then
[12:00] <GT1> but still what's the error for?:
[12:00] <ubitux> dunno about the error, will look
[12:00] <ubitux> is the playback fine when you use ffplay?
[12:00] <GT1> I didn't compile ffplay
[12:01] <ubitux> what player are you using?
[12:01] <GT1> standard movie player from linux
[12:01] <ubitux> there is a standard movie player on linux?
[12:01] <GT1> and the Dropbox web player doesn't work either
[12:01] <GT1> ubuntu 12.04
[12:02] <ubitux> anyway, works fine here with mpv and ffplay
[12:02] <GT1> well I didn't install it, so I think it comes with one
[12:02] <GT1> I'm noob to linux :P just using it to work with ffmpeg
[12:02] <ubitux> i'm going to look at that weird error
[12:03] <GT1> i'll test it on another computer
[12:04] <GT1> it's lagging even on another computer; are we testing the same video file?
[12:04] <GT1> https://www.dropbox.com/s/9fk89kgdth6eb4c/output.mp4
[12:05] <GT1> this is tested on windows and still when I play it time to time there is no sound
[12:06] <GT1> any idea? could you post your example you encoded?
[12:08] <ubitux> not enough bw
[12:08] <ubitux> that file you just posted plays fine here too
[12:08] <GT1> k, what I found is
[12:09] <GT1> that on Windows the sound seems OK, but when I have the sound lag on Linux, on Windows the frame rate seems to jump up
[12:09] <GT1> can you confirm that it's the same on your computer?
[12:09] <GT1> time to time the video just plays at a higher rate?
[12:14] <ubitux> no, it sounds fine to me
[12:14] <ubitux> but it's just 7 second so...
[12:15] <GT1> Did you read what I wrote? Not the sound, watch the video. From time to time, do the ducks speed up and slow down, as if it were playing at a higher frame rate?
[12:16] <ubitux> ah sorry i'm at work, i'm not supposed to watch duck videos
[12:17] <ubitux> will do later, i'm debugging the error anyway
[12:24] <GT1> If you find out what's causing the error, please tell me, because I need to run this on a device, which doesn't like errors at all
[12:27] <ubitux> yes, i'm on it, give me some time
[12:56] <GT1> or is there any other way I could mix the sound track of inputs with different volume strength
[12:56] <GT1> ?
[12:59] <ubitux> see with amix filter maybe
[13:00] <GT1> yeah, I'm just trying that one, but it's sooooooo slow
[13:00] <GT1> how the hell does an audio mix take longer than combining and editing videos?
[13:01] <GT1> with amix it works, but my problem is that it's waaaaaaaay toooo slow
[13:01] <GT1> ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:a]volume=0.2[a1];[1:a]volume=0.8[a2];[a1][a2]amix[a]" -map [a] -map "0:v" output.mp4
[13:01] <GT1> renders, gives good result and all, but takes quite a lot of time
[13:01] <ubitux> i don't remember what amix does exactly
[13:01] <ubitux> add -c:v copy
[13:02] <GT1> I'll be using filter_complex, I can't use -c:v copy
[13:02] <GT1> it's almost instant with -c:v copy
[13:02] <GT1> I don't get it
[13:03] <ubitux> you're not filtering video
[13:03] <ubitux> so -c:v copy is fine :p
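Putting ubitux's two points together, a sketch (GT1's filenames; amix by default averages its active inputs, so the volume filters act as mix weights, and since nothing here filters video, the video stream can be stream-copied instead of re-encoded):

```shell
# Weighted audio mix; -c:v copy passes the video through untouched,
# which is why the run is nearly instant compared to re-encoding.
filter='[0:a]volume=0.2[a1];[1:a]volume=0.8[a2];[a1][a2]amix[a]'
echo "$filter"
if command -v ffmpeg >/dev/null 2>&1 && [ -f first.MOV ] && [ -f second.MOV ]; then
    ffmpeg -i first.MOV -i second.MOV -filter_complex "$filter" \
        -map '[a]' -map 0:v -c:v copy output.mp4
fi
```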
[13:03] <GT1> yeah, but if I add back the overlay and pan
[13:03] <ubitux> the video encoding is slow
[13:03] <GT1> i can't use v copy as far as I know
[13:03] <JEEB> yes
[13:03] <JEEB> because that's video filtering
[13:03] <ubitux> yeah if you add video filtering ofc..
[13:04] <GT1> hi JEEB :)
[13:04] <GT1> k, i'll try it out in my production environment
[13:04] <GT1> let see how fast is it
[13:09] <GT1> what does this mean?
[13:09] <GT1> 01-10 14:07:19.635: W/LOGTAG(11638): ***Output pad "default" with type audio of the filter instance "Parsed_volume_7" of volume not connected to any destination***
[13:09] <GT1> for the audio part i'm using [0:a]volume=0.9[a0];[1:a]volume=0.9[a1];[a1][a2]amix[audio]
[13:10] <ubitux> "[audio]" is mapped?
[13:10] <GT1> 01-10 14:07:15.436: D/Command(11638): -map
[13:10] <GT1> 01-10 14:07:15.436: D/Command(11638): [video]
[13:10] <GT1> 01-10 14:07:15.440: D/Command(11638): -map
[13:10] <GT1> 01-10 14:07:15.441: D/Command(11638): [audio]
[13:10] <GT1> yes it is
[13:10] <GT1> a pastebin for four lines? really? anyway yes, it is mapped
[13:14] <GT1> any idea?
[13:15] <GT1> I don't get it, with video the map works, but with the audio it doesn't...
[13:20] <ubitux> what's your cli?
[13:22] <alexavenger> Hi :)
[13:22] <alexavenger> good morning
[13:22] <GT1> k, I reproduced it on my computer so
[13:22] <GT1> http://pastebin.com/YpZhRsZZ
[13:22] <GT1> and basically what I changed from what worked before is that I added the audio mix
[13:22] <alexavenger> does anybody know how to change a TBR of 100 to TBR 25? Is there any specific command for that in ffmpeg?
[13:27] <GT1> i know it's a bit complex, but it's generated. Anyway the part with the map I don't get: why can't it see it? what's happening there? if I remove the volume and amix part and map an input stream's audio, it works
[13:30] <GT1> i could reproduce the same thing even like this: ./ffmpeg -i first.MOV -i second.MOV -filter_complex "[0:a]volume=0.9[a0];[1:a]volume=0.9[a1];[a1][a2]amix[a]" -map 0:v -map [a] -vcodec libx264 -preset ultrafast -acodec aac -strict experimental -ar 44100 -ac 2 -ab 192k -aspect 510:510 -y output.mp4
[13:37] <GT1> rofl
[13:37] <GT1> ok, mistyped
[13:37] <GT1> a0
[13:37] <GT1> :)
[13:39] <GT1> well it got slower on device
[13:40] <GT1> but it seems to work with amix, but I don't get it why didn't it work with amerge
[13:57] <GT1> Can I have another quick question? My problem is that I don't always know if I have an audio stream or not, can I put a conditional statement of some kind? To check if it has an audio then mix it if not then leave it out?
[13:58] <GT1> because if I use this ./ffmpeg -i first.MOV -i second.MOV -i video.mp4 -filter_complex "[0:a]volume=0.9[a1];[1:a]volume=0.9[a2];[2:a]volume=0.9[a3];[a1][a2][a3]amix=inputs=3[a]" -map 0:v -map [a]  output.mp4
[13:58] <GT1> if I supply 3 movie with audio it works fine, but if I supply an image for example it dies, because it has no audio channel, any idea?
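JEEB's ffprobe route might look like this sketch (has_audio is a hypothetical helper, and the csv output format and placeholder filename are assumptions):

```shell
# Print "yes" if the file has at least one audio stream, else "no",
# so the caller can decide whether to feed it into amix.
has_audio() {
    ffprobe -v error -select_streams a -show_entries stream=codec_type \
        -of csv=p=0 "$1" 2>/dev/null | grep -q audio && echo yes || echo no
}
if command -v ffprobe >/dev/null 2>&1 && [ -f input.mp4 ]; then
    has_audio input.mp4
fi
```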
[13:58] <JEEB> you could parse the output of ffmpeg or ffprobe
[13:59] <JEEB> or possibly there's a way to do conditional statements in lavfi but I'm not into that voodoo
[14:01] <GT1> I think ffprobe is out of the question; if you remember from last night, I'm working on Android, so the fewer execs the better
[14:02] <GT1> and the problem with parsing the output is that it takes quite a while to load up on Android, so in case of multiple inputs, running it multiple times is not a good option
[14:06] <JEEB> then I guess you're better off just using the libraries?
[14:06] <JEEB> via JNI
[14:07] <GT1> after fighting so much to make the console version work, I won't switch :( I'll try to find another way, but thanks anyway
[14:08] <GT1> k, at least now I know that I can check all inputs in a single ffmpeg run, but it's still quite a lot of fuss
[14:08] <relaxed> parse all the inputs at once with ffmpeg
[14:10] <GT1> probably that's the only way, even though it's slow to start up an exec just for that
[14:11] <relaxed> my phone has grep but not sed or awk.
[14:11] <relaxed> how can google not include awk :/
[14:13] <GT1> :)
[17:22] <Mayumi> hi
[17:22] <Mayumi> i have a question
[17:22] <Mayumi> does anybody know how i can remove the black borders on the top and bottom of a video with ffmpeg automatically?
[17:23] <Mayumi> amazon uses ffmpeg for its elastic transcoder service and it can do this behind the scenes
[17:40] <wrabbit> if i have files named test-1.jpg and test-2.jpg i can do "avconv -r 15 -i test-%01d.jpg -vcodec libx264 out.mkv" and it works fine
[17:40] <wrabbit> if they are named test1-1.jpg and test2-2.jpg and i use "avconv -r 15 -i test%01d-%01d.jpg -vcodec libx264 out.mkv" i get "test%01d-%01d.jpg: No such file or directory"
[17:41] <wrabbit> the - between the %01d seems to be killing it
[17:41] <wrabbit> im assuming avconv and ffmpeg handle this the same, sorry if i assumed wrong
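The image2 demuxer accepts only a single sequence number per pattern, which is why the second %01d fails. With ffmpeg, -pattern_type glob sidesteps numbering entirely (not sure whether avconv has the same option; the filenames are wrabbit's):

```shell
# The single quotes keep the shell from expanding the glob;
# ffmpeg's image2 demuxer expands it itself.
cmd="ffmpeg -pattern_type glob -framerate 15 -i 'test*-*.jpg' -c:v libx264 out.mkv"
echo "$cmd"
```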
[19:09] <llogan> Mayumi: you can use the crop filter
[19:10] <llogan> ...for the third time
[19:11] <Mayumi> ok
[19:11] <Mayumi> sorry
[19:11] <Mayumi> i asked this a few times, yes, but i ended up not seeing the response because i wasn't here and it took a while
[19:12] <llogan> Mayumi: using any decent IRC client will highlight or provide some sort of indication whenever your name is used
[19:12] <Mayumi> can ffmpeg do this automatically? or do i need to specify dimensions
[19:12] <Mayumi> im using irssi
[19:12] <Mayumi> i kno
[19:13] <llogan> you can look at cropdetect
[19:13] <llogan> https://ffmpeg.org/ffmpeg-filters.html#cropdetect
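A common two-pass pattern with cropdetect, as a sketch (placeholder filenames; the grep pulls the last crop=w:h:x:y value cropdetect logged):

```shell
# Pass 1: let cropdetect analyze the first 10 seconds and log crop values.
detect="ffmpeg -i input.mp4 -t 10 -vf cropdetect -f null -"
echo "$detect"
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
    crop=$($detect 2>&1 | grep -o 'crop=[0-9:]*' | tail -n 1)
    # Pass 2: re-encode with the detected crop, e.g. crop=1920:800:0:140
    ffmpeg -i input.mp4 -vf "$crop" output.mp4
fi
```

For Mayumi's 1000-video batch, this two-pass step would go inside a loop over the files.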
[19:13] <Mayumi> neat
[19:13] <Mayumi> im looking at it now
[19:14] <Mayumi> i have about 1000 videos to transcode so i'd like to do this as easily as possible
[19:14] <llogan> wrabbit: we have nothing to do with avconv
[19:15] <llogan> perhaps you can just crop upon playback
[19:16] <Mayumi> llogan: it looks like players are cropping it automatically during playback
[19:21] <llogan> Mayumi: i see few reasons to re-encode 1000 videos just to remove letterboxing
[19:32] <Mayumi> llogan: well, when these videos are posted up for sale then the resolution is going to be a bit misleading
[19:33] <Mayumi> also that crop technique you gave me worked, thank you, however it seems to have removed a little more of the video than it should
[19:33] <Mayumi> ill see if i can figure out why
[19:34] <Mayumi> they also need to be transcoded because they are all stupid file formats
[20:52] <yuvadm> hey everyone, quick question, i have a (DVB-T ripped) clip that for some reason has the audio track cut somewhere in the middle, is there any way to check if it can be restored?
[20:52] <yuvadm> i've already run ffmpeg -i in.mp4 -vn -acodec copy out.aac and i only get the partial audio, not what i'm expecting to be the full audio
[21:00] <yuvadm> btw i'm also getting some errors that might point to a problem
[21:00] <yuvadm> [h264 @ 0x120fb00] mmco: unref short failure
[21:00] <yuvadm> and number of reference frames (0+4) exceeds max (3; probably corrupt input), discarding one
[22:26] <jnvsor> In the ffmpeg docs I see references to filtergraph inputs/outputs [in]/[out] - how do I know which names to use for different inputs?
[22:26] <ubitux> you map the outputs
[22:27] <ubitux> with something like -map '[my_output]'
[22:29] <jnvsor> I've used map to map inputs using their input number, I suppose that would be `-map 1 [name]`?
[22:38] <ubitux> i'm not sure what you want to do, do you have a use-case?
[22:46] <Mayumi> does anyone know how to specify the SAR for ffmpeg output?
[22:46] <jnvsor> Overlaying a webcam onto an x11grab feed, then scaling both of them down
[22:50] <ubitux> if you use -filter_complex, you refer to the input with something like '[0:1]' in your filtergraph
[22:51] <ubitux> then on the output you name it like '[v1]' and '[v2]' (or whatever name you prefer)
[22:51] <ubitux> then you can map them with -map '[v1]' -map '[v2]' etc
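Applied to jnvsor's use case (webcam overlaid on an x11grab feed, then scaled down), ubitux's naming scheme might look like this sketch; the device names, capture sizes, and overlay offsets are all assumptions:

```shell
# [0:v] = X11 screen, [1:v] = webcam; label the final result [out] and map it.
filter='[1:v]scale=320:-1[cam];[0:v][cam]overlay=W-w-10:H-h-10,scale=1280:-1[out]'
echo "$filter"
if command -v ffmpeg >/dev/null 2>&1 && [ -e /dev/video0 ]; then
    ffmpeg -f x11grab -video_size 1920x1080 -i :0.0 \
        -f v4l2 -i /dev/video0 \
        -filter_complex "$filter" -map '[out]' out.mkv
fi
```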
[22:51] <ubitux> [in*] and [out*] are iirc default names when used with -vf
[22:52] <jnvsor> -filter:v throws an error if I try to use a filter that takes more than one input, tells me to use filter_complex instead
[22:53] <ubitux> just do it then
[00:00] --- Sat Jan 11 2014

