[Ffmpeg-devel-irc] ffmpeg.log.20120824

burek burek021 at gmail.com
Sat Aug 25 02:05:01 CEST 2012


[00:39] <Loplin> JEEB, Thanks. The problem was that I had to update my x264 library for ffmpeg-git, so none of the stuff that relied on the older version was working
[00:39] <Loplin> JEEB, converting with ffmpeg-git, then downgrading x264 showed that the conversion worked fine
[01:30] <ellipsis753> Hey, how can I encode videos to play back on my 480x320 phone? I tried just passing -s and I've ended up with something that looks terrible quality
[01:30] <ellipsis753> Thanks.
[01:31] <applegekko> use a higher bitrate?
[01:31] <applegekko> man ffmpeg
[01:34] <ellipsis753> I tried, -i foo.avi -acodec aac -vcodec mpeg4 -s 960x640 -strict experimental 1.mp4 Can I not just have it auto set bitrate?
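applegekko's advice, sketched as a command: encode at the phone's native 480x320 and set an explicit video bitrate instead of relying on the low default. The filenames and both bitrate values are illustrative assumptions, not tuned settings.

```shell
# Scale to the phone's native 480x320 and set the bitrate explicitly,
# instead of -s 960x640 plus the (low) default bitrate.
# foo.avi, out.mp4 and both bitrates are placeholder assumptions.
ffmpeg -i foo.avi -s 480x320 -vcodec mpeg4 -b:v 500k \
       -acodec aac -b:a 128k -strict experimental out.mp4
```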
[02:49] <t355u5> ok, since nobody answered, I'll try it again: there seems to be a problem with converting mp4 (x264/aac) to mkv (x264/ac3). if you do this conversion with an ffmpeg release after 0.10.2, the resulting videos don't start to play in vlc. it takes up to 50 seconds before they finally start playing and then the time indicator is not at the beginning of the video, but at the end. this happens with ffmpeg-git and latest stable x264. the last version I know wh
[02:57] <mparodi> Hello
[02:58] <mparodi> so does anybody know how to concatenate videos? I've tried with http://ffmpeg.org/faq.html#How-can-I-join-video-files_003f but it doesn't work, the video is concatenated but a few seconds of black frames are added
[03:00] <relaxed> mparodi: which container are they in?
[03:01] <mparodi> mp4 and webm
[03:01] <mparodi> but I can convert them to any other format and then reconvert it to one of these ^
[03:02] <relaxed> use MP4Box to concat the mp4 and mkvmerge to concat the webm; both join losslessly.
[03:03] <mparodi> relaxed, I already tried to concat the webm with mkvmerge
[03:03] <relaxed> and?
[03:04] <mparodi> the result in that case was that the first video played, and after it finished it stopped; the second video was not played at all
[03:04] <mparodi> I'll try again in this computer, let me see
[03:09] <mparodi> same problem, relaxed :|
[03:09] <mparodi> what can be wrong?
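relaxed's two suggestions, spelled out as commands. A hedged sketch with placeholder filenames: MP4Box's `-new` writes the joined result to a fresh file, and mkvmerge's `+` appends the next input to the previous one.

```shell
# Lossless concat of MP4 files with MP4Box (from GPAC):
MP4Box -cat a.mp4 -cat b.mp4 -new out.mp4
# Lossless concat of WebM files with mkvmerge (from MKVToolNix):
mkvmerge -o out.webm a.webm + b.webm
```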
[03:14] <kim> I'm trying to figure out how to take say every 10th frame out of a movie and save it as jpg
[03:14] <kim> (or every 20th ... etc)
[03:17] <t355u5> kim: ffmpeg -ss 00:00:10 -i input.avi -vf 'select=not(mod(n\,10))' out_%5d.jpg
[03:18] <kim> Hmm, I found another way to do it with -r ;-)
[03:18] <t355u5> -r does not work
[03:18] <t355u5> try -r 1/600
[03:18] <t355u5> -> error
[03:19] <t355u5> the documentation is wrong
[03:19] <kim> right
[03:19] <kim> in the manual it says that -r controls framerate
[03:19] <t355u5> this is, btw, not the only thing that's wrong in either the doc or the FAQ
[03:19] <kim> so if I downrate to 2 fps whilst outputting to jpeg, I lose frames
[03:20] <kim> which is essentially what I want
[03:20] <kim> (since I have too many of them ;-)
[03:20] Action: kim takes note of t355u5's method too
[03:23] <kim> if I do -r 5 on a 25 fps movie, I take 1 frame in 5 , for instance
[03:23] <kim> this is not as precise I guess. But will do for now
[03:24] <t355u5> yea, this should work
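The two approaches side by side, as a hedged sketch. input.avi is a placeholder, and the -vsync option is an addition here, to stop ffmpeg from duplicating selected frames back up to the input rate.

```shell
# t355u5's way: keep exactly every 10th frame via the select filter.
ffmpeg -i input.avi -vf 'select=not(mod(n\,10))' -vsync vfr out_%05d.jpg
# kim's way: lower the output rate; on a 25 fps source, -r 5 keeps
# roughly 1 frame in 5 (less precise, but simpler).
ffmpeg -i input.avi -r 5 out_%05d.jpg
```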
[03:24] Action: kim wanted to load a pan video into hugin
[03:24] <kim> and make a pan -orama ;-)
[03:24] Action: kim worries enblend will complain about too tight an overlap anyway, but we'll see
[03:26] <kim> t355u5, thanks btw. Now I can move my camera relatively slowly, without having to worry about long processing times in post :-)
[03:27] <t355u5> kim: it can take a while, if you have a 2h movie and want to grab every 20,000th frame though :-)
[03:27] <t355u5> because ffmpeg has to scan the frames and does not access them directly
[03:28] <kim> That'd only be useful if it was a 2 hour pan shot ;-)
[03:28] <t355u5> just saying....
[03:28] <kim> Warning duly noted! :-)
[03:29] <kim> hmph, hugin only uses a single core for photometric optimization, which I don't even really need
[03:29] <kim> oh well
[03:40] <buhman> http://sprunge.us/iNJL I just did a two-pass encode (or so I thought) using ffmpeg but erm, I can't play it back
[03:40] <buhman> ^ mplayer; ffplay -> http://sprunge.us/XRFL
[03:40] <buhman> ffmpeg output (all I have from the screen buffer): http://sprunge.us/VLgP
[03:40] <buhman> ffmpeg -y -i CAPTURE-HD-RM164_2012-08-20_13_21_28.ts -pass 1 -vcodec libx264 -preset slower -b 900k -threads 0 -f mp4 -an /dev/null
[03:41] <buhman> nice ffmpeg -y -i CAPTURE-HD-RM164_2012-08-20_13_21_28.ts -pass 2 -vcodec libx264 -preset slower -b 900k -threads 0 output.mkv
[03:41] <buhman> ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
[03:41] <buhman> http://sprunge.us/LZgg
[03:42] <relaxed> why use mp4 for the first pass and matroska with the second?
[03:42] <buhman> O.o is that the problem?
[03:42] <relaxed> probably not but it doesn't make sense either
[03:43] <relaxed> -f rawvideo
[03:43] <buhman> sure
[03:45] <buhman> that should take about another 3 hours or so :D
[03:57] <relaxed> -t $seconds to test
[04:57] <buhman> relaxed: meh, more like 1 hour; almost done already (with second pass)
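For reference, buhman's two passes with the container mismatch removed. A sketch only: since the first-pass output is discarded anyway, `-f null` sidesteps the mp4-vs-mkv question entirely (input.ts is a placeholder).

```shell
# Pass 1: analysis only, output thrown away.
ffmpeg -y -i input.ts -pass 1 -vcodec libx264 -preset slower \
       -b:v 900k -an -f null /dev/null
# Pass 2: the real encode, using the pass-1 stats.
ffmpeg -y -i input.ts -pass 2 -vcodec libx264 -preset slower \
       -b:v 900k output.mkv
# relaxed's tip: add "-t 60" to both passes to test on the first minute only.
```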
[05:05] <mparodi> ohh
[05:05] <mparodi> I just realized the problem is not the concatenation but the fact that I'm using -ss
[05:05] <mparodi> ffmpeg -ss 20 -i input.mp4 -vcodec copy -acodec copy foo.mp4
[05:05] <mparodi> is it wrong?
[05:07] <mparodi> err, actually it is: ffmpeg -i input.mp4 -qscale:v 1 foo.mpg
[05:07] <mparodi> -ss 20
[05:07] <mparodi> any idea?
[05:09] <buhman> mparodi: what's the problem again?
[05:09] <mparodi> no again, it's the same problem hehe
[05:09] <mparodi> :)
[05:09] <buhman> right, and what is the problem
[05:10] <buhman> relaxed: yep that fails too
[05:10] <buhman> (of course it does)
[05:11] <mparodi> the problem now is pretty obvious: "ffmpeg -ss 16 -i input.mp4 -qscale:v 1 output.mpg" is not working
[05:11] <mparodi> does it make sense? probably it's not the way to skip the first 16 seconds from the beginning
[05:11] <buhman> sure it is
[05:12] <mparodi> ok, then there's a bug
[05:12] <mparodi> :P
[05:13] <buhman> mparodi: works fine for me
[05:13] <buhman> mparodi: what ffmpeg are you using?
[05:13] <mparodi> I'm trying again just to be sure
[05:13] <mparodi> ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
[05:14] <buhman> I mean, my input was porn in a .flv container
[05:14] <buhman> not sure if .mp4 *really* makes a difference
[05:14] <buhman> but surely .flv is harder to demux than .mp4
[05:14] <buhman> if anything
[05:16] <mparodi> no way, it just doesn't work
[05:17] <mparodi> http://paste.debian.net/185354/
[05:17] <mparodi> btw, this is the output, buhman ^
[05:18] <mparodi> there are some warnings but not sure what they mean
[05:18] <buhman> looks like your input sucks
[05:18] <mparodi> haha
[05:18] <mparodi> I created it with ffmpeg!
[05:19] <buhman> http://sprunge.us/QdGV p0rn works fine for me...
[05:19] <buhman> same arguments
[05:19] <mparodi> more precisely with: ffmpeg -i $1 -vcodec libx264 -quality good -preset slow -b:v 300k -r 25 -vf yadif -vf scale=720:-1 -acodec aac -b:a 128k -strict experimental -threads 4 $2
[05:20] <buhman> -strict experimental?
[05:20] <buhman> I'm having problems with my x264 encodes too
[05:21] <mparodi> I had some problems without -strict experimental so I put it
[05:21] <buhman> is -quality in the manpage?
[05:21] <buhman> oh there it is, nvm
[05:22] <mparodi> buhman, try to convert your flv to mp4 with that command and then use -ss as I did
[05:22] <buhman> sure
[05:22] <mparodi> maybe it's a bug...
[05:22] <buhman> actually I was about to try something similar to your options because they look fairly decent to me
[05:23] <buhman> -vf looks deprecated
[05:24] <buhman> -filter:v
[05:25] <mparodi> btw, after ffmpeg I use "qt-faststart $2.mp4 converted/$2.mp4" but it's not the problem since I don't do that for webm and the problem is the same
[05:36] <mparodi> buhman, could you? did it work?
[06:30] <mparodi_> buhman, any idea? I lost my connection, maybe you answered and I didn't read it
[06:34] <buhman> Magicking: heh
[06:34] <buhman> mparodi_:
[06:36] <buhman> mparodi_: I can't play *any* h.264 stuff it seems
[06:36] <buhman> from ffmpeg that is
[06:37] <mparodi_> weird o.o
[06:37] <buhman> on playback: Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'
[06:37] <buhman> that was: ffmpeg -i CAPTURE-HD-RM164_2012-08-20_13_21_28.ts -vcodec libx264 -quality best -preset veryslow -crf 28 -filter:v yadif -acodec aac -b:a 96k -strict experimental -threads 0 crf28.mkv
[06:38] <buhman> same ffmpeg version as you
[06:39] <buhman> too bad it didn't work, because the output was 64Mbytes
[06:39] <buhman> (from a 1.7Gbyte source)
[06:39] <buhman> it was saying libx264 was using "profile High, level 5.0"
[06:39] <buhman> I'd never seen 5.0 before; perhaps that is the problem
[06:41] <mparodi_> buhman, what would you do in that case?
[06:41] <mparodi_> I need to truncate+concat many videos u.u
[06:42] <mparodi_> I mean, in this case... supposing you have to do that ^ :P
[06:42] <buhman> mparodi_: concat is easier if you use mpg containers
[06:42] <mparodi_> I've found at least 5 ways of how not to do that T__T
[06:43] <mparodi_> the concat is working actually
[06:43] <mparodi_> I need the skip part now
[06:43] <buhman> then you can literally say: cat foo1.mpg cat2.mpg > big.mpg
[06:43] <mparodi_> yep
[06:43] <buhman> really fun :D
[06:43] <mparodi_> but I need cat2.mpg to be 16sec shorter
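mparodi_'s trim-then-concat flow, sketched under the assumption that both clips end up as MPEG-PS (.mpg) files; putting -ss before -i does fast input seeking, and the filenames match the ones mentioned above.

```shell
# Re-encode the second clip so it starts 16 s in, then byte-concatenate.
ffmpeg -ss 16 -i input2.mp4 -qscale:v 1 cat2.mpg
cat foo1.mpg cat2.mpg > big.mpg
```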
[07:48] <ACVBV> HI
[07:48] <ACVBV> please help here!!
[07:48] <ACVBV> i'm making multi-track video and audio
[07:49] <ACVBV> but the second audio always changes to mp2 64kbps
[07:49] <ACVBV> how to solve this problem?
[08:02] <ACVBV> ffmpeg completely ignored the second audio setting
[08:03] <ACVBV> i have tried 400 times or more with various commands
[08:03] <ACVBV> but the second audio always changes to mp2 64kbps
[08:03] <ACVBV> its so funny i feel crazy..
[09:28] <[4-tea-2]> Good morning everybody, I need a little help understanding the syntax for what I'm trying to do. I want to have all audio streams from my input file in my output file. I thought "-map 0" would take care of that, but it tells me "Number of stream maps must match number of output streams".
[09:31] <[4-tea-2]> My command line and the result: http://pastebin.com/38Ft2R6Y
[09:43] <cbsrobot> [4-tea-2]: ffmpeg version 0.7.13 ?
[09:44] <[4-tea-2]> I think I've got it now, but my brain hurts.
[09:44] <cbsrobot> built on Jun 13 2012 ... hmm
[09:44] <[4-tea-2]> cbsrobot: yes. Is that ancient?
[09:44] <cbsrobot> just use git head
[09:44] <[4-tea-2]> It's Marillat's deb-multimedia build for Debian/squeeze
[09:46] <[4-tea-2]> I'm trying to stick to this version now, because I'm also using it on production systems.
[09:46] <[4-tea-2]> The constant syntax changes are a bit nightmarish.
[09:47] <cbsrobot> yeah sure
[09:47] <[4-tea-2]> When I tried to keep up with the latest version, my scripts broke a lot. :\
[09:47] <cbsrobot> I do not even remember the syntax back then
[09:47] <cbsrobot> looks cryptish to me :-)
[09:47] <cbsrobot> just drop a line here and I'm sure someone can help you
[09:48] <[4-tea-2]> I reordered the arguments, and dropped the -map 0 in favour of -newaudio.
[09:48] <[4-tea-2]> It's encoding now. I hope the result is what I want. :D
[09:50] <cbsrobot> you have a git head build somewhere ?
[09:51] <[4-tea-2]> No, currently not.
[09:55] <[4-tea-2]> Building it now, out of scientific curiosity. :)
[11:03] <coalado> Hi. I have a few questions about ffmpeg.
[11:03] <coalado> Is it possible to extract cover images from mp3s?
[11:05] <coalado> Can I use ffmpeg to get information about an image, like format, width/height/depth?   like ffprobe -i myimage.jpg
[11:46] <ubitux> coalado: does ffmpeg -i input.mp3 -c copy cover.png work?
[11:51] <coalado> ubitux:  yes
[11:51] <coalado> thanks
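ubitux's trick, for the record: the embedded cover art is exposed as a video stream, so a stream copy writes it out untouched. The output extension should match the embedded image's real format; png here is an assumption.

```shell
# Extract embedded cover art without re-encoding it:
ffmpeg -i input.mp3 -an -c copy cover.png
# coalado's other question: format/width/height of an image file:
ffprobe -i myimage.jpg
```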
[12:37] <coalado> any Idea where I can get static mac/linux builds for ffmpeg & ffprobe?
[12:38] <coalado> I found a ffprobe static linux x64 build (no 32bit) and for mac no ffprobe at all
[12:40] <lltp> Hi all, does one of you have an idea on how to extract .IDX/.SUB files of vobsub subtitles (ffmpeg would be a must, I don't want to use mencoder)?
[12:40] <lltp> coalado : why don't you build it yourself?
[12:41] <coalado> lltp:  honestly... I don't know how
[12:42] <lltp> coalado : BTW, you did not go that far... static builds are available on ffmpeg's download page for both 64bits and 32bits
[12:42] <JEEB> use git to get the source, make sure you have yasm installed, ./configure , make -> get binary
[12:42] <coalado> lltp:  i got this far
[12:42] <coalado> there are ffmpeg builds for sure, but no ffprobe
[12:43] <JEEB> so basically package management wise you only have to have gcc and friends (or another compiler), yasm and git installed
[12:43] <lltp> ah my bad
[12:44] <coalado> JEEB sounds doable. thanks.
[12:45] <JEEB> also compiling yasm isn't exactly hard if you have a too old version in your repositories
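JEEB's recipe as a runnable sketch. It assumes gcc (or another compiler), git and yasm are already installed; configure flags for external libraries are per-need.

```shell
# Get the source and build; ffmpeg and ffprobe land in the source tree.
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure        # add --enable-libx264 etc. as needed
make
```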
[13:37] <lltp> No one has any idea on how to extract VOBSUB (.idx/.sub) files from a stream using anything except mencoder?
[14:00] <cypher497> is it useful (or recommended) to have a -threads option before -i ?
[14:07] <JEEB> cypher497, I think by now most related decoders get auto threads set
[14:07] <JEEB> before -i = set setting for decoder, after -i = set setting for encoder
[14:18] <cypher497> thanks JEEB!
[14:18] <JEEB> np
[14:41] <coalado> I'd like to create a kind of live video screen capture. I know that there is a way to create a video stream from an image sequence. To build a screen capture tool, I have no fixed amount of images, but a kind of endless sequence.
[20:08] <sparkstreet> hi there - haven't ever visited here before but i have a question that involves ffmpeg and some h.264 flvs that are being recalcitrant.  anyone around/feeling generous? (:
[20:13] <sacarasc> If you ask a proper question, you might get a good answer.
[20:13] <sparkstreet> oh, I didn't mean to ask an improper question, I just didn't want to barge into the channel and start spewing questions at people without asking first (:
[20:14] <sparkstreet> thank you for responding, though.  let me explain
[20:22] <sparkstreet> sorry got a phonecall
[20:23] <sparkstreet> so here's the deal.  we use wowza media server to do live webcasts, and we do recordings on the server.  they are h.264 flvs
[20:23] <sparkstreet> most of these flvs are totally fine - I can transcode them, split them, etc.
[20:24] <sparkstreet> but some of them seem to not have codec properties associated with them.
[20:24] <sparkstreet> when you run them through ffmpeg they just show this for the video stream : Stream #0:0: Video: none, 1k tbr, 1k tbn, 1k tbc
[20:24] <sparkstreet> they actually can be played with VLC
[20:24] <sparkstreet> you can view the video
[20:25] <sparkstreet> but no dice if I want to do anything to them with ffmpeg.  I tried to -f h264 before the -i string, but it produced a whole set of errors to the effect of "could not find codec parameters"
[20:26] <sparkstreet> tried to force a size with  -s but got an error "option video_size not found"
[20:26] <sparkstreet> so now I'm stuck.  (:  I hope that was a more proper question.  any thoughts? (:
[20:33] <JonB> suppose I record a lecture with the speaker voice in the right sound channel, and the general hall noise in the left channel. I generally want to leave the right sound channel untouched and maybe even put it into mono mode so the speaker's voice comes in both ears. I generally want to mute the left (hall) channel unless someone is asking a question to the speaker. Can I do that using ffmpeg?
[20:36] <sparkstreet> ok I'm not an expert here (I came to ask a q also) but honestly do you have access to a video editor- imovie/final cut/premiere?  I don't know if this can be done w/ffmpeg (kind of suspect not) but it would be much more easily done with a graphical editor!
[20:36] <JonB> sparkstreet: I have a graphical web-based editor that tells me when the lecture starts, the break starts & ends, and the lecture ends
[20:37] <sparkstreet> ah but you can't just load the video into final cut or something?
[20:37] <JonB> sparkstreet: expanding that to indicate when there is a question from the audience should be easy
[20:37] <JonB> sparkstreet: the great thing about a web-based editor is that I can crowdsource the editing
[20:37] <sparkstreet> ahh i get it
[20:37] <sparkstreet> well
[20:38] <sparkstreet> again, not really a ffmpeg expert.  but i could imagine a batch file that might do it
[20:39] <JonB> sparkstreet: I think I know how to edit the video. with -ss and -t
[20:39] <sparkstreet> right
[20:39] <JonB> the problem is the sound
[20:39] <sparkstreet> that wwill work
[20:39] <sparkstreet> just use -map_channel to either remove one channel or mute it
[20:39] <JonB> oh
[20:39] <sparkstreet> then you could recombine the files afterwards
[20:40] <JonB> sparkstreet: oh no no no
[20:40] <JonB> sparkstreet: that creates timing issues
[20:40] <JonB> at least with the speaker sound I will never remove it from the video
[20:41] <sparkstreet> well -ss and -t will give you a trimmed file no matter what as far as I know.  it will only process the frames described with those parameters
[20:41] <JonB> but I supposed I could extract the channel with questions and later remerge it, because it doesnt matter if it is perfect
[20:41] <sparkstreet> yeah you could do that
[20:41] <JonB> sparkstreet: but -ss and -t does include sound, right?
[20:42] <sparkstreet> well if your file has sound yes - basically -ss and -t just tell ffmpeg what parts of the file to pay attention to, any file
[20:42] <sparkstreet> depending on how you map your audio and video channels you can use -ss and -t to extract audio, add audio, mute audio, whatever
[20:42] <sparkstreet> those are just controls for ffmpeg
[20:42] <JonB> ok
[20:43] <JonB> I feared that they were only for video
[20:43] <sparkstreet> nope
[20:43] <sparkstreet> was that helpful?
[20:43] <JonB> sparkstreet: yes it was
[20:43] <sparkstreet> great
[20:44] <sparkstreet> i'm glad
[20:44] <JonB> me too
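One hedged way to get JonB's "speaker voice in both ears" without touching the video: the pan audio filter can copy the right channel (c1) into both output channels. An untested sketch with an assumed filename; newer ffmpeg builds use '|' instead of ':' as the separator inside pan.

```shell
# Duplicate the right (speaker) channel to both ears, dropping the hall
# channel; the video stream is copied as-is.
ffmpeg -i lecture.mp4 -vcodec copy -af "pan=stereo:c0=c1:c1=c1" \
       -acodec aac -strict experimental out.mp4
```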
[20:44] <sparkstreet> now I wish someone would answer my question ;P
[20:44] <JonB> what question was it?
[20:44] <sparkstreet> i'll just paste it back in hang on
[20:45] <sparkstreet> so here's the deal.  we use wowza media server to do live webcasts, and we do recordings on the server.  they are h.264 flvs. most of these flvs are totally fine - I can transcode them, split them, etc. but some of them seem to not have codec parameters associated with them.
[20:46] <sparkstreet> when you run them through ffmpeg they just show this for the video stream : Stream #0:0: Video: none, 1k tbr, 1k tbn, 1k tbc.  they actually can be played with VLC; you can view the video.  but no dice if I want to do anything to them with ffmpeg.  I tried to -f h264 before the -i string, but it produced a whole set of errors to the effect of "could not find codec parameters".
[20:46] <sparkstreet> tried to force a size with  -s but got an error "option video_size not found. so now I'm stuck.  (:
[20:46] <JonB> sparkstreet: which codec does VLC say they are in?
[20:47] <sparkstreet> good question hang on
[20:48] <sparkstreet> H264 - MPEG4 AVC (part 10) (avc1)
[20:48] <sparkstreet> which I know to be true because I encoded the video that is recorded :p
[20:48] <JonB> and those files that ffmpeg does like?
[20:49] <sparkstreet> exact same
[20:50] <JonB> how does mencoder from mplayer handle them?
[20:50] <sparkstreet> have not tried...not familiar with mencoder
[20:50] <JonB> me neither
[20:51] <sparkstreet>  hm ok so you're saying it might be worth a shot tho? (:
[20:51] <JonB> I dont know
[20:51] <sparkstreet> fair (:
[20:53] <sparkstreet> sacarasc - any thoughts?  (he was active before you entered the chan jon)
[20:53] <JonB> ok
[20:53] <sparkstreet> thanks for advice tho, can't hurt to try
[20:53] <sparkstreet> you're trying to build a crowdsourced editor for conference footage?
[20:54] <JonB> sort of
[20:54] <sparkstreet> interesting
[20:54] <JonB> I plan to use it for the monthly meetings
[20:54] <JonB> sparkstreet: the web-based editor I use is called moviemasher
[20:54] <JonB> it is flash based but still open source
[20:54] <sparkstreet> oh, cool
[20:55] <JonB> it saved the editing in an XML file
[20:55] <sparkstreet> oh way cool
[20:55] <JonB> I had LOTS of trouble getting it to handle my HUGE HD files
[20:55] <sparkstreet> this is going to make my life better
[20:55] <JonB> but then I realized how to do it
[20:55] <JonB> I shrink and cut down the HUGE HD files to small small flash files
[20:55] <JonB> those I lay up in a time line one after another
[20:56] <sparkstreet> got it, yeah, it's hard enough to edit HD on some computers when you're sitting at them, never mind in EC2 over the web
[20:56] <JonB> ontop of that I put a textbox over the entire video saying: "split and delete text box parts overlaying real good lecture"
[20:56] <JonB> that means people have to hit 3 buttons
[20:56] <JonB> no, 4
[20:57] <JonB> play/pause
[20:57] <JonB> split
[20:57] <JonB> delete
[20:57] <JonB> and save
[20:57] <sparkstreet> but what do you actually have them doing with the footage?
[20:57] <JonB> what do you mean?
[20:58] <sparkstreet> so you are putting this footage from meetings into moviemasher
[20:58] <sparkstreet> and what do you want your users to do with it? :P
[20:58] <sparkstreet> (just curious really)
[20:58] <JonB> no, I shrink and reduce the footage
[20:58] <JonB> I then want to crowdsource the indication of when there is lecture start, break start/end and lecture end
[20:58] <sparkstreet> ahhh
[20:58] <sparkstreet> i see
[20:58] <sparkstreet> i understand
[20:58] <JonB> that produces an XML file
[20:59] <sparkstreet> that you can use to chop up the files
[20:59] <JonB> now moviemasher can do a lot more. Like put on effects
[20:59] <sparkstreet> right understood
[20:59] <JonB> no, I actually only chop up the overlaying text box
[20:59] <sparkstreet> ah
[20:59] <sparkstreet> interesting
[20:59] <sparkstreet> well
[20:59] <sparkstreet> cheers (:
[20:59] <JonB> because if you chop up the files you get some funny effects like jumping because there is no index frame right there
[20:59] <sparkstreet> and good luck on it
[20:59] <sparkstreet> right
[21:00] <JonB> sparkstreet: I have used this to "edit" several videos myself
[21:00] <JonB> so I have a few different XML files for different videos
[21:00] <sparkstreet> got it
[21:01] <JonB> now I plan to take those XML files and program FFMPEG to follow the edit decision list from the XML file and work on the raw input video files in HD
[21:02] <JonB> but I am already thinking of the future when I want to improve the sound
[21:11] <sparkstreet> sorry was afk for a sec
[21:11] <sparkstreet> yeah that's interesting
[21:16] <philosophically> i installed ffmpeg through homebrew on my mac os lion.. after installing some support software for audacity the path being displayed for ffmpeg "which ffmpeg" is the default: /usr/local/bin/ffmpeg and it is crashing… the path configuration in my bash_profile is still correct, but ffmpeg won't work, if i call it i get the following error: "dyld: Library not loaded: /usr/local/lib/libogg.0.dylib
[21:16] <philosophically>   Referenced from: /usr/local/bin/ffmpeg
[21:16] <philosophically>   Reason: image not found
[21:16] <philosophically> Trace/BPT trap: 5"
[21:19] <cbsrobot> philosophically: brew install ogg
[21:19] <philosophically> cbsrobot: "no available formula for ogg" … libogg?
[21:19] <cbsrobot> libogg
[21:19] <cbsrobot> or libvorbis
[21:19] <philosophically> says it's already installed… weird
[21:19] <cbsrobot> where ?
[21:20] <philosophically> "Error: libogg-1.3.0 already installed
[21:20] <philosophically> "
[21:20] <philosophically> hmmm not sure how to find that
[21:20] <philosophically> probably under homebrew usr/local/Cellar … but it's calling it in usr/local/bin
[21:21] <philosophically> … maybe i can delete ffmpeg from /usr/local/bin?
[21:21] <cbsrobot>  /usr/local/lib/libogg.0.dylib exists ?
[21:23] <philosophically> nope… i may have deleted it because brew doctor did not like it
[21:23] <philosophically> :~
[21:23] <cbsrobot> brew remove libogg
[21:24] <cbsrobot> brew install libogg
[21:27] <philosophically> blah, same error: "dyld: Library not loaded: /usr/local/lib/libogg.0.dylib
[21:27] <philosophically>   Referenced from: /usr/local/bin/ffmpeg
[21:27] <philosophically>   Reason: image not found
[21:27] <philosophically> Trace/BPT trap: 5"
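A hedged debugging sketch for the persistent dyld error: check what the binary actually links against and whether the file exists, then restore libogg; if the error still persists, rebuilding ffmpeg so it relinks against the fresh library is a plausible next step.

```shell
otool -L /usr/local/bin/ffmpeg | grep ogg   # path the binary expects
ls -l /usr/local/lib/libogg.0.dylib         # does the file exist at all?
brew remove libogg && brew install libogg   # restore the library
brew remove ffmpeg && brew install ffmpeg   # relink ffmpeg if still broken
```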
[21:56] <_Fire> I've been trying to find a simple way to repack a VFR FLV ( h.264 + aac )'s streams into a CFR mp4 file ( so it can actually be edited in sony vegas ) but thus far I've been unable to find a solution that doesn't require re-encoding the video stream or ( if not re-encoding ) results in out of sync audio... does anyone have any ideas? This is frustrating
[21:57] <JEEB> if it's really VFR you really have to re-encode if it really needs to be CFR
[21:57] <JEEB> (video-wise)
[21:58] <_Fire> Ugh, that's painfully slow
[21:58] <JEEB> libx264 does have plenty of presets
[21:58] <_Fire> It's still painfully slow regardless
[21:58] <JEEB> :/
[21:58] <JEEB> how old is this machine then?
[21:58] <_Fire> The files in question are several hours long :p
[21:59] <_Fire> They're XSplit recordings
[21:59] <JEEB> because the last I tried I could get several hundred frames per second out of raw YCbCr input on an i5
[21:59] <JEEB> (on 1920x1080)
[21:59] <JEEB> with superfast
[21:59] <JEEB> and ultrafast was even more than that
[21:59] <JEEB> but if that's too slow...
[21:59] <_Fire> Those drastically increase the bitrate to keep the same quality
[22:00] <JEEB> yeah
[22:00] <_Fire> I'm already looking at files over 3 gigs, so ehh
[22:01] <_Fire> This wouldn't be a problem if sony vegas could handle timecodes from VFR files
[22:01] <JEEB> heh, esp. considering the fact that mp4's timestamps are quite well fit for that
[22:02] <_Fire> As of right now when I try to import it I get no video track at all, vegas just looks at it and gives up
[22:02] <_Fire> plays fine in every video player, of course
[22:03] <JEEB> and it works if you set a specific frame rate instead of copying input timestamps?
[22:04] <JEEB> (with stream copy)
[22:05] <_Fire> Actually no, ffmpeg seems to completely ignore -r
[22:05] <JEEB> yeah, its effect kind of got herpy derpy during the last year+
[22:05] <JEEB> it used to do the same as AssumeFPS() in avs
[22:06] <JEEB> (-r before -i)
[22:06] <_Fire> I've tried -r before -i, -r after -acodec copy, both, it just always does what it wants
[22:07] <JEEB> after -i would've been "change frame rate" from input
[22:07] <_Fire> Aye
[22:07] <JEEB> before -i used to be AssumeFPS() for the input
[22:07] <JEEB> thus the first shouldn't even work
[22:08] <_Fire> Is there a way to re-encode without actually re-encoding? IE: duplicating frames where needed to match CFR ?
[22:08] <JEEB> with something like H.264 that really gets non-simple
[22:09] <JEEB> as you have I,P,B frames
[22:09] <_Fire> Yeahhh
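The forced re-encode JEEB describes, as a sketch: when asked for CFR output, ffmpeg duplicates or drops frames to hit the constant rate. The 30 fps target and CRF 20 are assumptions here, and the audio can stay stream-copied.

```shell
# VFR FLV in, constant-frame-rate MP4 out (video re-encoded, audio copied):
ffmpeg -i vfr.flv -r 30 -vsync cfr -vcodec libx264 -preset fast -crf 20 \
       -acodec copy cfr.mp4
```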
[22:26] <_hc> has anyone ever gotten an audio and video filter working at the same time with ffmpeg?  I've tried lots in 0.10.4 and 0.11.1 and no luck
[22:28] <_hc> here's one attempt:
[22:28] <_hc> ./ffmpeg -f lavfi -i 'amovie=test.mp4,aredact=ffmpeg/aredact_unsort.txt' \
[22:28] <_hc>     -f lavfi -i 'movie=test.mp4,redact=ffmpeg/redact_unsort.txt' \
[22:28] <_hc>     -acodec copy -b:a 32k -strict experimental \
[22:28] <_hc>     -vcodec copy \
[22:28] <_hc>     -y output-test-redact.mp4
[22:45] <ubitux> you can't codec copy with lavfi
[22:45] <ubitux> what's {a,}redact btw?
[22:45] <ubitux> speech2text filter?
[22:55] <_hc> ubitux: they are two custom filters for redacting chunks of audio and video, like pixelating people's faces, or distorting their voices
[22:55] <_hc> we're working on getting them ready to submit
[22:56] <ubitux> remove the codec copy, and try to map each output; and btw, you can of course use -af and -vf
[22:56] <ubitux> also, you can put the two in the same filtergraph
[22:56] <ubitux> like -f lavfi -i 'movie=test.mp4,redact=ffmpeg/redact_unsort.txt[out0]; amovie=test.mp4,aredact=ffmpeg/aredact_unsort.txt[out1]'
[22:57] <ubitux> i'm looking forward your submission
[22:57] <ubitux> feel free to join the devel channel as well
[22:58] <ubitux> it's getting late...
[22:58] <_hc> ok, I'll try that now
[22:58] <ubitux> but really, in your case, -af and -vf look more appropriate
[22:59] <_hc> ubitux: about submitting them, they work on 0.9, 0.10.4 and 0.11.4, but not on master
[22:59] <_hc> it seems the filter API has changed quite a bit
[22:59] <ubitux> it's unfortunate, we only work on the master :)
[22:59] <_hc> (oops 0.11.1)
[23:00] <ubitux> if you have any problem, feel free to discuss this on #ffmpeg-devel
[23:00] <_hc> ok
[23:00] <_hc> thanks
[23:03] <intracube> is there a known issue with the '-target' option where it stops the video filters working properly?
[23:04] <intracube> it looks like -target is causing the video to be rescaled before the filter chain...
[23:04] <_hc> ubitux: what are the [out0] and [out1]? why separate outs?
[23:04] <ubitux> it indicates an output stream
[23:04] <intracube> which means an interlaced source gets messed up before it reaches the yadif deinterlacing filter...
[23:04] <ubitux> out0 for first stream, out1 for second stream, etc
[23:06] <ubitux> intracube: i didn't see anything like that, feel free to open one
[23:06] <ubitux> http://ffmpeg.org/bugreports.html
[23:07] <intracube> ubitux: ok, I'll see about submitting one
[23:07] <_hc> hmm, how should I specify the input file?
[23:08] <_hc> I get this: movie=test.mp4,redact=redact_unsort.txt[out0]; amovie=test.mp4,aredact=aredact_unsort.txt[out1]: Invalid argument
[23:08] <_hc> [lavfi @ 0x46e3e0] No such filter: 'movie'
[23:08] <_hc> in reverse order
[23:08] <_hc> movie= is the input file
[23:08] <ubitux> _hc: can you pastebin the command line and output?
[23:08] <ubitux> intracube: thank you
[23:09] <intracube> ubitux: yep, be back in 5 mins
[23:10] <_hc> ubitux: http://pastebin.com/2i15cmbv
[23:11] <ubitux> mmh strange
[23:12] <_hc> this is on an Android phone, by the way, but I don't think that matters
[23:13] <JonB> is it possible to use -ss and -t with -acodec / -vcodec copy?
[23:13] <JEEB> yes, I've used -ss at least when cutting aac streams
[23:14] <JEEB> of course it will cut on frame borders where a frame will not refer to a frame on the "other side" of the cut
[23:14] <JonB> ?
[23:14] <ubitux> _hc: i just tried something similar, and it works for me
[23:14] <ubitux> ./ffmpeg -f lavfi -i 'movie=test.mp4,scale=320:240[out0]; amovie=test.mp4,volume=0.5[out1]' -f null -
[23:14] <_hc> what's -f null?
[23:14] <ubitux> i'm just skipping the encode
[23:14] <_hc> ah, ok
[23:15] <ubitux> something might be wrong with your build
[23:15] <ubitux> but the error looks weird
[23:15] <JEEB> JonB, in most cases audio is encoded as independent frames, but video with most up-to-date formats tends to refer to other frames' contents to make compression better
[23:15] <ubitux> no movie filter...
[23:15] <ubitux> _hc: are you really able to use the movie filter?
[23:15] <JEEB> frames that don't refer to other frames are usually called "key frames" or "IDR frames"
[23:15] <ubitux> and this build
[23:16] <_hc> ubitux: you mean pixelating the video? yes that works
[23:16] <JonB> JEEB: hmm. I record in AVCHD, convert it to .flv to use an online webeditor to indicate where the cuts are to be made, and then I plan to use the Edit Decision List to work on the raw videos
[23:17] <JonB> JEEB: could that influence the location of the cuts?
[23:17] <ubitux> _hc: maybe that version is too old, i don't know
[23:17] <JonB> JEEB: I can handle +- 1s on either side of the cuts
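What JEEB describes could be sketched like this; with stream copy ffmpeg cannot re-encode, so the video cut effectively snaps to the nearest keyframe. Filenames and timestamps here are placeholders:

```shell
# Cut 60 seconds starting at 00:05:00 without re-encoding.
# Because both streams are copied bit-for-bit, the video cut lands on
# a keyframe (IDR) boundary, so the actual start point may drift by
# roughly a second - within JonB's stated +-1s tolerance.
ffmpeg -ss 00:05:00 -i input.mts \
  -t 60 -acodec copy -vcodec copy cut.mts
```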
[23:18] <_hc> ubitux: how would you run an audio and video filter at the same time using -af and -vf?
[23:18] <_hc> /data/local/ffmpeg -i test.mp4 \
[23:18] <_hc>     -af aredact=aredact_unsort.txt \
[23:18] <_hc>     -vf redact=redact_unsort.txt \
[23:18] <_hc>     -acodec aac  -b:a 32k -strict experimental \
[23:18] <_hc>     -vcodec libx264 \
[23:18] <_hc>     -y output-test-redact.mp4
[23:18] <_hc> like that?
[23:19] <ubitux> yes
[23:20] <_hc> I get a different weird error there: http://pastebin.com/HRsx3ZeT
[23:20] <_hc> 'resample' filter not present, cannot convert audio formats.
[23:21] <_hc> it seems in 0.11.1 there is af_aresample.c and af_resample.c
[23:21] <_hc> and they are different... odd
[23:22] <ubitux> yeah it's because of the two libraries for audio processing
[23:22] <ubitux> you don't have many filters
[23:23] <JonB> is it possible to work on either left or right sound channel in a stereo recording?
[23:23] <ubitux> 0.11.1 is kind of old now
[23:23] <_hc> and I can't seem to get ./configure to enable 'resample' only 'aresample'
[23:23] <ubitux> try to insert manually a resample filter
[23:23] <ubitux> an "aresample" filter, sorry
[23:24] <_hc> into the command line?
[23:24] <ubitux> yes, after aredact=aredact_unsort.txt
[23:24] <_hc> like -af aresample?
[23:24] <ubitux> something like mmh -af aredact=aredact_unsort.txt,aresample=<the output expected for aac>
[23:25] <ubitux> that code really has changed since then
[23:25] <ubitux> aresample should be properly inserted
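The suggestion amounts to appending an explicit aresample after the custom filter in the -af chain. Note that aredact/redact are _hc's own out-of-tree filters, not part of stock FFmpeg, and 44100 is just an example output rate:

```shell
# Force an explicit resample step so the AAC encoder gets a sample
# rate/format it accepts, instead of relying on an auto-inserted
# resample filter that this build lacks.
ffmpeg -i test.mp4 \
  -af 'aredact=aredact_unsort.txt,aresample=44100' \
  -vf redact=redact_unsort.txt \
  -acodec aac -b:a 32k -strict experimental \
  -vcodec libx264 \
  -y output-test-redact.mp4
```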
[23:26] <_hc> yes it has, I guess it doesn't work in 0.11.1?
[23:26] <_hc> adding to -af gave me:
[23:26] <_hc> No such filter: 'aresample'
[23:26] <_hc> which ffmpeg -filters confirms
[23:27] <ubitux> yeah your build doesn't have it
[23:27] <_hc> but ./configure told me it was included...
[23:27] <_hc> sigh
[23:27] <ubitux> is it that much trouble to update your filter to the new API?
[23:27] <_hc> donno, I haven't looked at the new API yet
[23:28] <_hc> we have them working separately, just not together
[23:33] <JonB> is it possible to work on either left or right sound channel in a stereo recording?
[23:33] <ubitux> asplit, pan, filter, amerge ?
[23:35] <JonB> ubitux: but how do I address them? input is something like Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s
[23:36] <ubitux> what do you want to do exactly?
[23:37] <JonB> ubitux: left channel is the lecturer speaking. Right channel is the audience channel (2 different types of mic, one is directional)
[23:37] <JonB> ubitux: I want to mute the audience channel unless someone is asking a question (assume I have an Edit Decision List of when to mute and not)
[23:38] <ubitux> i'm not sure how you could do that operation
[23:39] <ubitux> i mean even for 1 channel
[23:39] <JonB> ubitux: but I suppose that people prefer that the speaker channel is in both ears when they watch the video, so I guess I have to monofy the speaker channel and merge it with the audience channel when they are raising a question
[23:39] <ubitux> mute feature is ok, playing with the volume as well, but given an "edit" mode no..
[23:39] <JonB> ubitux: the timing of the speaker must be in sync
[23:40] <JonB> but the timing of the question from the audience is not super important, since you can not see their faces anyway
[23:40] <ubitux> it will be a pain to do that according to the time
[23:40] <JonB> ubitux: my initial idea was to convert the speaker channel from the stereo signal to mono
[23:41] <JonB> ubitux: then I would extract the audience channel, edit it using sox
[23:41] <JonB> and merge it back in
[23:41] <ubitux> in that case it's ok
[23:43] <ubitux> use pan filter to filter out the channel
[23:43] <ubitux> then with an amovie and amerge, insert the edited one
[23:43] <JonB> ubitux: do you think I should use the earwax filter too?
[23:43] <ubitux> no idea
[23:43] <JonB> ok
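JonB's plan could look roughly like the following. The filenames are placeholders, and the exact pan syntax has varied between FFmpeg versions, so treat this as a sketch rather than a drop-in recipe:

```shell
# 1. Extract the right (audience) channel to a mono WAV for editing
#    with sox according to the Edit Decision List.
ffmpeg -i lecture.mts -vn -af 'pan=mono|c0=c1' audience.wav

#    ... edit audience.wav with sox (mute everything outside the
#    audience-question ranges) ...

# 2. Take the left (speaker) channel as mono and merge the edited
#    audience track back in as the other channel of a stereo pair,
#    leaving the video untouched.
ffmpeg -i lecture.mts -i audience_edited.wav \
  -filter_complex '[0:a]pan=mono|c0=c0[sp];[sp][1:a]amerge=inputs=2[a]' \
  -map 0:v -map '[a]' -c:v copy out.mts
```

Keeping the speaker channel intact and only editing the audience track externally preserves lip sync on the speaker, which is the part viewers will notice.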
[23:44] Action: ubitux &
[23:54] <lucas^> what deinterlacing filter should I use on 29.97 MPEG2-TS?
[23:54] <lucas^> yadif didn't turn out too well
[23:55] <lucas^> also, does the auto-crop black-bar detection filter work reasonably well on live TV with logos and such in the corners? can I use it to determine whether a given program is in 16:9 or 2.35:1 letterbox?
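For the record, the two filters in question would be invoked along these lines; cropdetect only prints suggested crop values to the log, and how well it copes with logos sitting in the letterbox bars is hit and miss:

```shell
# Deinterlace 29.97i MPEG2-TS; yadif=1 emits one frame per field
# (59.94 fps), which often looks smoother on motion than the default
# one-frame-per-frame mode lucas^ may have tried.
ffmpeg -i capture.ts -vf yadif=1 -c:a copy deinterlaced.mkv

# Let cropdetect analyse a 30-second stretch of the programme and
# print the black-bar crop it would apply; read the crop=... lines
# from stderr to decide between 16:9 and 2.35:1 letterbox.
ffmpeg -ss 60 -t 30 -i capture.ts -vf cropdetect -f null -
```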
[00:00] --- Sat Aug 25 2012

