[Ffmpeg-devel-irc] ffmpeg.log.20140529

burek burek021 at gmail.com
Fri May 30 02:05:01 CEST 2014

[00:05] <Dave77> llogan: I just wondered if something about QT could be added to FAQ
[00:05] <llogan> what do you want to add, exactly?
[00:14] <massdos> i think i found the problem. maybe someone can confirm
[00:15] <massdos> in libspeexdec.c from ffmpeg-20140415-git-ef818d8
[00:15] <massdos> line 46: FROM    avctx->sample_fmt = AV_SAMPLE_FMT_S16;
[00:15] <massdos> TO     avctx->sample_fmt = AV_SAMPLE_FMT_NONE;
[00:15] <massdos> so the message says "Unable to parse option value "(null)" as sample format"
[00:16] <massdos> so i think it should be this...
[00:16] <massdos> anyone can confirm?
[00:37] <llogan> massdos: yes. ef48ac6 is the culprit
[00:37] <llogan> thanks for letting us know
[00:38] <bencc> when building on ubuntu-14.04 with --enable-x11grab I'm getting "Error: Xext not found"
[00:38] <llogan> you need libxext-dev or something like that
[00:38] <bencc> tried installing libxext-dev but still getting the error
[00:38] <bencc> installed it
[00:39] <llogan> anything of interest in config.log?
[00:39] <bencc> "pkg-config --modversion xext" gives me 1.3.2
[00:39] <bencc> where is config.log?
[00:39] <llogan> in your ffmpeg source directory
[00:39] <bencc> I'm using https://github.com/pyke369/sffmpeg
[00:41] <bencc> in config.log I see many lines "undefined reference to `_Xreply'" and similar
[00:42] <bencc> last two lines are
[00:42] <bencc> collect2: error: ld returned 1 exit status
[00:42] <bencc> Error: Xext not found
[00:42] <llogan> third party scripts are generally offtopic here. you should contact the author.
[00:42] <bencc> ok. thanks
[01:28] <bencc> when using fbdev it records my terminal. can I record xvfb instead?
[01:33] <benlieb> I'm creating videos by slicing another video like so:      cmd = "ffmpeg -y -i #{@dir}/raw/#{@import.raw_source} -ss #{@import.lesson_times[0]} -c copy -to #{@import.lesson_times[1]} -map_chapters -1  #{@dir}/raw_lessons/#{@import.lesson_id}.mp4"
[01:33] <benlieb> the videos created this way have variable blackness for the first 3-5 seconds before the video stream is visible.
[01:33] <benlieb> What is causing this and how do I avoid it?
[01:34] <benlieb> c_14: to the rescue?
[01:34] <benlieb> or anyone else....
[01:34] <benlieb> :)
[01:35] <c_14> try using -ss as an input option instead of an output option
[01:35] <benlieb> c_14 how would I do that? Or rather where would that go?
[01:36] <c_14> -y -ss #{@import.lesson_times[0]} -i #{@dir}/raw/...
[01:36] <benlieb> after the -c ?
[01:36] <benlieb> the -to too no?
[01:37] <c_14> Don't move the -to
[01:40] <benlieb> that seems to do it.
[01:40] <benlieb> now what exactly happened :) ?
[01:41] <c_14> You used -ss as an input option instead of an output option. This causes a lot of magic and doohickery to come into effect which can help get rid of weird black frames when streamcopying.
[01:41] <c_14> Basically ffmpeg decodes and throws away the input up to the seek point before doing stuff with it.
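A sketch of the two placements, with made-up file names and timestamps (the input-seeking form is what c_14 suggests; note that with -ss before -i the timestamps reset, so -to then counts from the seek point, as comes up later in this log):

```shell
#!/bin/sh
# Output seeking: -ss after -i decodes from the start and discards frames
# up to the seek point; with -c copy this can leave black lead-in frames.
# Input seeking: -ss before -i seeks first, then decodes.
# File names and timestamps here are made-up placeholders.
start="00:01:00"; end="00:03:04"
slow="ffmpeg -y -i source.mp4 -ss $start -to $end -c copy out.mp4"
fast="ffmpeg -y -ss $start -i source.mp4 -to $end -c copy out.mp4"
echo "$fast"
```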
[01:43] <llogan> benlieb: are you no longer using the color source filter?
[01:44] <joecool> hey is there any compelling reason to use libcdio over cdparanoia with ffmpeg?
[01:44] <benlieb> llogan: I am in another part of the script
[01:45] <benlieb> llogan: basically a lot of stuff needs to happen all over the place. Here's the methods that are called and in this order to give you an idea:     make_db_lesson connect_instructors_to_lesson make_commissions make_intro raw_lesson big_mp4 add_intro small_mp4 make_previews add_outro_to_preview make_thumb
[01:45] <benlieb> and then some...
[01:46] <benlieb> llogan: I'm trying to make a script that automates what used to take a human days to do
[01:47] <benlieb> Or more. The only thing I'm losing is the quality of the intro and outro. None of the text is animated like before. Is there a cli program that allows for "flashy" video manipulations?
[01:48] <klaxa> imagemagick + ffmpeg maybe?
[01:48] <llogan> you mean to animate text using a variety of presets or whatever? not that i know of, but that doesn't mean there isn't one.
[01:48] <c_14> If you want 'flashy' text, you might want to look at substation alpha subtitles. You can do pretty nice things with those.
[01:48] <klaxa> oh yeah or that
[01:49] <benlieb> c_14: is substation the program?
[01:50] <benlieb> nvm
[01:50] <klaxa> aegisub is regarded as the most sophisticated WYSIWYG editor for ssa subs
[01:51] <c_14> ssa files are plain text, so once you get something you like you can swap out parts of it (ie episode name or something)
[01:51] <benlieb> looks like a gui
[01:51] <llogan> popular with anime fans
[01:52] <benlieb> I don't want subtitles, I want attractive and dynamic intros and outros ideally.
[01:52] <c_14> You said something about animating text, you can do things like make pretty text with ssa subtitles.
[01:54] <llogan> maybe blender.
[01:56] <kingsob> I have a ts segment from a hls stream, and it has a Duration: 00:00:09.29, start: 12.182467, bitrate: 1595 kb/s. I am trying to reencode the segment with a lower bitrate, but in doing so, I seem to reset the start back to 0.000000. is there a way I can keep the start as it was in the original video?
[01:59] <benlieb> Here's an example of our "previous" type of intro that was manually done in a premier http://previews.idance.net.s3.amazonaws.com/mp4/2368.mp4
[01:59] <benlieb> c_14: that looks like a gui, though, no?
[02:01] <c_14> aegisub is a gui, yes. But as I said the ssa files are plain text, so if the only things you're changing are the names/people you can just have one base ssa template file that you modify for each video.
[02:03] <c_14> And you can do text effects like that with ssa subtitles. you might want to burn them into the video though.
[02:06] <benlieb> c_14 I can already do this in the manner you're describing with final cut and premier. Ideally I'd like a complete animated solution. The current script is pulling all data (text etc) from the database.
[02:06] <freezway> so with libavutil and av_dict_get, if you don't pass AV_DICT_DONT_STRDUP_VALUE then why does it say not to change the value returned?
[02:07] <c_14> benlieb: oh, ok. I don't really have any other ideas though.
[02:07] <freezway> shouldnt it be strdup'd and not care?
[02:10] <klaxa> benlieb: with a source video for the... flaming stuff and some hours of tracing those flames it could be done with ssa subs
[02:10] <klaxa> with variable text
[02:10] <klaxa> wouldn't look exactly like it of course
[02:10] <klaxa> but probably close enough
[02:12] <benlieb> klaxa: what I'm really after is something that empowers creativity as well as automation. I don't really like the flames for example, and would love to just have a text animation filter similar to "draw text" that has many preset animation theme templates, for example.
[02:12] <benlieb> I see this stuff on TV everywhere and in other gui video editors.
[02:12] <klaxa> hmm haven't heard of such a thing
[02:12] <klaxa> well yeah gui editors
[02:13] <klaxa> with a little bit of scripting you can generate your own animations
[02:13] <benlieb> if someone knew how to make this available (I don't) there is a HUGE market for this. I can't imagine someone hasn't stepped in to fill this gap. It must exist out there somewhere....
[02:13] <klaxa> with imagemagick or ssa subs even
[02:13] <benlieb> klaxa: but as I said ssa isn't cli
[02:14] <klaxa> ssa are plaintext files
[02:14] <klaxa> sure as hell they are cli
[02:14] <benlieb> but the application of those to a video is manual via the gui
[02:14] <benlieb> I'm writing scripts that are processing thousands of videos.
[02:14] <klaxa> you create a template once and replace the text
[02:14] <benlieb> It's just not efficient for us to do this manually
[02:15] <klaxa> do you want a bazillion different text effects?
[02:15] <benlieb> klaxa: in what manner would you add the effect to the video?
[02:15] <klaxa> render the subs on a video, maybe a blank one
[02:16] <benlieb> but how do you choose the video? Probably by opening a program, selecting a video and pressing a button, right?
[02:16] <freezway> ah what the shit. AV_DICT_DONT_STRDUP_VAL isnt even used in av_dict_get
[02:16] <klaxa> wait... what?
[02:17] <klaxa> ffmpeg -i video.mkv -vf ass=subs.ass output.mkv
[02:18] <klaxa> like i said, you create *one* template (you'd probably have to for custom effects in any case no matter what method) with the gui, or if you want to, write it in a texteditor
[02:18] <klaxa> then you replace the text in the subtitle file with whatever text you want
[02:19] <klaxa> and render it with ffmpeg on the video
[02:20] <benlieb> klaxa: so ass is a ffmpeg filter...
[02:20] <benlieb> let me get this straight
[02:20] <klaxa> ok
[02:20] <benlieb> I can create a "template intro" that basically has place markers for text
[02:21] <benlieb> then use the ass filter to stick text (from a file) into that template and render it , or concat it via ffmpeg?
[02:21] <klaxa> what are place markers?
[02:22] <benlieb> the things that templates use for stuff that gets shoved into them later
[02:22] <klaxa> okay let me try to explain it how i would do it
[02:22] <benlieb> place holders
[02:23] <klaxa> use a blank video source, create a subtitle script with aegisub, maybe with external scripting for effects, and use dummy text which is going to be replaced by actual text
[02:24] <klaxa> then i replace the dummy text with actual text for a video, burn it on a blank video with ffmpeg's ass filter
[02:24] <klaxa> now i have a video with shiny text effects
[02:24] <klaxa> you can concat that with ffmpeg, not sure if codec copy will work though
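klaxa's template workflow can be sketched as a tiny script. The placeholder name, title, and file names are all made up, and the single Dialogue line stands in for a full .ass file (a real one also has [Script Info], style, and event-format sections); the ffmpeg step needs a build with libass, so it is left commented:

```shell
#!/bin/sh
# 1. A stand-in template with a DUMMY_TEXT placeholder (hypothetical).
printf 'Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,{\\fad(500,500)}DUMMY_TEXT\n' > intro_template.ass
# 2. Replace the dummy text with the real title for this video.
title="Lesson 42"
sed "s/DUMMY_TEXT/$title/" intro_template.ass > intro.ass
# 3. Burn the filled-in subs onto a 5-second blank clip (needs libass):
# ffmpeg -f lavfi -i color=c=black:s=1280x720:d=5 -vf ass=intro.ass intro.mp4
cat intro.ass
```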
[02:25] <benlieb> what is aegisub?
[02:25] <benlieb> external program?
[02:26] <sacarasc> Yes.
[02:26] <klaxa> in my scenario it is used once to create a template
[02:27] <benlieb> how are the effects generated? In aegisub
[02:27] <benlieb> ?
[02:28] <klaxa> there are some lua scripts
[02:28] <klaxa> you can use the built-in move() function
[02:28] <klaxa> here's an example of what i made like a year ago: http://klaxa.eu/justice_anger.mkv
[02:29] <klaxa> i did that with a python script written for that exact scene though
[02:29] <klaxa> maybe not the best example
[02:30] <benlieb> klaxa: nice.
[02:30] <benlieb> but there are no text effects, movements, resizing, flames, etc
[02:31] <benlieb> how would that be done. I'm missing something.
[02:31] <klaxa> did you see those words forming from cards? wasn't that moving enough? :P
[02:31] <klaxa> like i said, for moving you can use the move() command which is part of the specification
[02:31] <klaxa> you can fade text with fade()
[02:31] <klaxa> fade in and fade out
[02:31] <benlieb> the red ones?
[02:32] <klaxa> yeah
[02:32] <klaxa> the english ones
[02:32] <benlieb> I thought you were talking about the subtitles
[02:32] <klaxa> those are part of the subtitles
[02:32] <klaxa> they were in the text-file
[02:32] <klaxa> it's called typesetting in the fansub scene
[02:32] <klaxa> http://klaxa.eu/justice_anger_soft.mkv
[02:33] <klaxa> in your player, disable and enable subs at random points to see what are subs and what are not
[02:51] <benlieb> I'm still trying to figure out where the effects come from. Read the wikipedia page for Substation Alpha and the page for Aegisub
[02:52] <sacarasc> They come from the subtitle format and the ass decoder in ffmpeg.
[02:53] <benlieb> sacarasc: how can I see a list of available effects...
[02:54] <klaxa> benlieb: http://docs.aegisub.org/manual/ASS_Tags
[02:59] <benlieb> In the process above that you recommended, when you say "I replace the dummy text" does that mean you write to the subtitle file?
[03:00] <benlieb> Or re-write it rather?
[03:02] <c_14> You'd probably create a local copy of the file somewhere and then rewrite parts of it, yes.
[03:07] <benlieb> c_14: klaxa: To see if I've got this straight, the ffmpeg ass filter can read a file in ssa format (which includes times and effects) generated once by the gui aegisub program?
[03:07] <c_14> yep
[03:11] <benlieb> c_14: I'm having trouble finding a visual reference for these effects. I'd like to see what's possible before going through the trouble of trying to get this to work...
[03:11] <c_14> Well, there's what klaxa posted with the text coming together to form letters.
[03:13] <benlieb> c_14: this looks promising. Was this really generated with ass? https://www.youtube.com/watch?v=Xy-9IrbfAtc
[03:13] <c_14> ye
[03:14] <c_14> I have several videos with similar karaoke effects.
[03:18] <c_14> You can just search on youtube for 'aegisub effect' or 'aegisub movement effect' etc and you'll get plenty of results.
[03:18] <c_14> Some of them even post the templates.
[03:19] <c_14> ie: https://www.youtube.com/watch?v=_KwxtW4qLy8 with template: http://sprunge.us/fJQA
[03:47] <benlieb> how can I fade out the last 2 seconds of a video of arbitrary length?
[03:48] <benlieb> c_14 ^
[03:49] <c_14> I don't think ffmpeg can do that by itself. You'll have to find the length of the video somehow, store that in a variable and use that to calculate the point from which you want to start fading out.
[03:52] <benlieb> c_14 That's what I determined from the docs. This doesn't quite seem reasonable to me. Isn't that the most common use case for fading out? For arrays we can do it with a negative index
[03:53] <c_14> Ye, I think there was a feature request a while ago, but I'm pretty sure it hasn't gone anywhere yet.
[04:05] <benlieb> c_14: so basically I need this to create a fade out!
[04:05] <benlieb> https://gist.github.com/pixelterra/1ad422d88c227fc607e1
[04:05] <benlieb> crazytown
[04:09] <c_14> The enhancement ticket is here: https://trac.ffmpeg.org/ticket/2631
[04:09] <c_14> Doesn't look like there's a lot going in that direction though.
[04:27] <benlieb> it says Most developers on irc seem to have agreed that such a "queue" feature should not be implemented due to memory (security?) concerns
[04:27] <benlieb> huh?
[04:30] <benlieb> is there any way to fade in audio and video at the same time?
[04:32] <benlieb> c_14 ^
[04:33] <c_14> Use the fade and afade filters with the same timestamps.
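The workaround c_14 describes (probe the length, store it, compute the fade start) can be sketched like this; the file name and duration are made up, and the real ffprobe call is left commented so the arithmetic is visible on its own:

```shell
#!/bin/sh
in="video.mp4"                       # hypothetical input
# dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$in")
dur=23.00                            # stand-in for the probed duration
start=$(awk "BEGIN{print $dur - 2}") # fade starts 2 s before the end
echo "ffmpeg -i $in -vf fade=t=out:st=$start:d=2 faded.mp4"
```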
[04:34] <Generalcamo> I'm trying to convert a folder of .wav files to .ogg. Is there an easy way to do this? I navigated to the folder, but I am confused by the documentation
[04:35] <c_14> for file in *.wav; do ffmpeg -i "$file" ${file%wav}ogg; done
[04:37] <Generalcamo> So I need to manually input the names, instead of doing an entire folder.. this will be fun
[04:37] <c_14> hmm? that script should do it automagically. Assuming you have access to an sh compatible shell.
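What the one-liner does to each name can be shown in isolation: ${file%wav} strips the shortest trailing match of "wav", then "ogg" is appended (example file name made up):

```shell
#!/bin/sh
# Suffix substitution as used in the for-loop above.
file="track 01.wav"
out="${file%wav}ogg"
echo "$out"    # -> track 01.ogg
```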
[04:38] <Generalcamo> Ha ha... windows
[04:38] <Generalcamo> ..
[04:39] <c_14> he, he, I have no idea.
[04:39] <c_14> You could always do something similar in python or something.
[04:40] <benlieb> Generalcamo: yeah, write a python / ruby / php script
[04:40] <Generalcamo> I haven't written scripts in years..
[04:40] <benlieb> gist the output of dir in that directory
[04:41] <benlieb> gist.github.com
[04:42] <Generalcamo> https://gist.github.com/anonymous/0a82687c0e0ec73d833d
[04:43] <benlieb> no I need the contents of that dir
[04:43] <Generalcamo> Oh, sorry
[04:43] <benlieb> go into the \wav folder in the command line
[04:43] <benlieb> and do a dir
[04:45] <Generalcamo> https://gist.github.com/anonymous/8070ceecb51bcb902f28
[04:45] <Generalcamo> Sorry
[04:49] <benlieb> do you have ruby installed?
[04:50] <Generalcamo> No, but I can install it quickly
[05:00] <benlieb> Generalcamo: well you'll need some scripting foo if you want to work with ffmpeg. Might as well install ruby, because DOS will make you wanna die
[05:00] <benlieb> c_14: what does this mean: Simple filtergraph 'fade=t=in:st=0:d=2;fade=t=out:st=21:d=2' does not have exactly one input and output.
[05:01] <c_14> You're using -vf? swap out the ; for a ,
[05:01] <benlieb> c_14 I thought that was when the filters were themselves enclosed in " "
[05:01] <benlieb> lemme try
[05:02] <c_14> vf takes a single filterchain
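The distinction sketched as a corrected command string (input/output names made up): in -vf, ',' links filters into one chain, while ';' separates independent chains and only makes sense inside -filter_complex.

```shell
#!/bin/sh
# ',' chains filters inside the single -vf filterchain; a ';' here would
# need -filter_complex instead.
vf="fade=t=in:st=0:d=2,fade=t=out:st=21:d=2"
echo "ffmpeg -i in.mp4 -vf \"$vf\" out.mp4"
```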
[05:12] <erisco> where can I find documentation for programming with ffmpeg? I am trying to encode a raw video stream
[05:12] <erisco> I keep hitting pages which are about the cli
[05:25] <benlieb> erisco: http://www.ffmpeg.org/ffmpeg.html
[05:25] <erisco> this seems to be entirely about the cli
[05:25] <benlieb> c_14:  when I moved that -ss option to the input, it changed the length of the video to match the value of -to (2:04).
[05:26] <benlieb> erisco: as far as I know that's all ffmpeg is
[05:26] <erisco> ah, well there is more
[05:26] <erisco> there are C headers you can program against too
[05:26] <c_14> benlieb:  Note that if you specify -ss before -i only, the timestamps will be reset to zero, so -t and -to have the same effect:
[05:26] <benlieb> erisco: http://www.ffmpeg.org/documentation.html
[05:27] <erisco> ah the doxygen
[05:28] <benlieb> c_14 so I'd have to supply the -to as the offset of the timestamp in -ss?
[05:28] <benlieb> or more to the point, the -to timestamp of the new stream?
[05:28] <c_14> ye
[05:33] <zumba_addict> good evening folks. This command is making a 6000 kb/s which I don't need since it's just a frozen pic with an audio. What do I add? ffmpeg -i output.mp4 -target ntsc-dvd output.vob
[05:34] <c_14> -target sets the bitrate for you, so there's not much you can do if you want to stay compatible to the ntsc-dvd format.
[05:34] <c_14> You could, however, try specifying a bitrate manually.
[05:34] <zumba_addict> ok, how do I do that? what is the param? is it -ab 1200
[05:35] <c_14> -b:v nk
[05:35] <c_14> where n is a number
[05:35] <zumba_addict> cool
[05:35] <zumba_addict> do I keep -target there?
[05:35] <c_14> if you want to. -target sets a lot of things (bitrate, codec, buffer size) so you might want to keep it
[05:35] <c_14> depends on your use case
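A sketch of the combined command (the bitrate value is made up): options placed after -target generally override the defaults it set, so the other DVD parameters stay intact.

```shell
#!/bin/sh
# Keep -target ntsc-dvd for the codec/buffer defaults, override only the
# video bitrate afterwards.
cmd="ffmpeg -i output.mp4 -target ntsc-dvd -b:v 1200k output.vob"
echo "$cmd"
```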
[05:36] <zumba_addict> yup, it's a dvd target
[05:36] <zumba_addict> i only have a 700mb cds here :D from 2002, LOL
[05:36] <zumba_addict> i still got tons of them
[05:48] <benlieb> c_14: "fade=t=in:st=0:d=2,fade=t=out:st=21:d=2,afade=t=in:st=0:d=2" ===> [Parsed_fade_1 @ 0x7fa16a500540] Media type mismatch between the 'Parsed_fade_1' filter output pad 0 (video) and the 'Parsed_afade_2' filter input pad 0 (audio)[AVFilterGraph @ 0x7fa16a603520] Cannot create the link fade:0 -> afade:0 Error opening filters!
[05:48] <benlieb> Know what that's about?
[05:50] <c_14> You'll probably need to throw that in a -filter_complex '[v]fade[vo];[a]afade[ao]' -map '[v]' -map '[a]'
[05:51] <benlieb> does this mean it accepts a timestamp or seconds? [-]HH[:MM[:SS[.m...]]]
[05:51] <benlieb> [-]S+[.m...]
[05:51] <c_14> ye
[05:54] <benlieb> c_14 I thought you only need filter_complex when you have more than one input
[05:55] <c_14> Normally yes, but I think the current filterchain is trying to pass the video output from the fade filter into the input for the afade filter and failing miserably.
[05:55] <benlieb> c_14 but I can see that the output of the video fade goes into the input of the audio fade which won't work
[05:59] <benlieb> c_14 how would you turn this into the filter_complex: "fade=t=in:st=0:d=2,fade=t=out:st=21:d=2,afade=t=in:st=0:d=2,afade=t=out:st=21:d=2"
[06:00] <c_14> like I said '[v]fade,fade[vo];[a]afade,afade[ao]' just with your options attached to the filters
[06:01] <benlieb> what happens to the vo and ao 'variables' just discarded?
[06:03] <c_14> put those in -map '[vo]' -map '[ao]'
[06:03] <c_14> I messed up earlier when I said -map '[v]' -map '[a]'
[06:03] <benlieb> c_14 ok, that's what I thought
[06:04] <benlieb> c_14 so the [v] in [v]fade is a "pad" or a "stream" or both or is that the same thing? And that is previously defined, and it means the 1st v stream from the first input?
[06:06] <c_14> That [v] and [a] was just my symbolic hint that you should put the video and audio streams there, if you only have one input file, do [0:v] and [0:a]
[06:08] <benlieb> c_14: ok, was wondering bout that too. I'm such a newb
[06:09] <benlieb> basically you need filter_complex whenever you want to affect more than one stream in a filter
[06:09] <benlieb> correct?
[06:10] <c_14> Basically.
[06:11] <benlieb> c_14 I think I'm getting it.
[06:11] <benlieb> c_14: Also I think I can simplify this and other commands with this info: Unlabeled outputs are added to the first output file
[06:12] <benlieb> so I only need to "label" the output pad and then use -map if I want it to go somewhere else.
[06:12] <benlieb> and since I only have one output file
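Putting c_14's pieces together for a single hypothetical input (the st=21 fade-out start comes from the timestamps above):

```shell
#!/bin/sh
# One chain per media type, ';' between chains, labeled pads mapped out.
fc='[0:v]fade=t=in:st=0:d=2,fade=t=out:st=21:d=2[vo];[0:a]afade=t=in:st=0:d=2,afade=t=out:st=21:d=2[ao]'
echo ffmpeg -i in.mp4 -filter_complex "$fc" -map "[vo]" -map "[ao]" out.mp4
```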
[06:44] <benlieb> c_14 is there a difference between the -s option and the scale filter?
[10:11] <ashhhh> PLEASE help me with RTSP streaming in windows
[10:11] <ashhhh> i cant find an alternative for ffserver for windows can
[10:11] <ashhhh> can u please suggest one
[10:37] <ashhhh> hello pls help
[13:24] <sunny0070> how can i install ffmpeg on vps
[13:25] <sunny0070> via parallel power panel
[13:52] <azk> The fuck is a parallel power panel
[13:59] <Harzilein> hi
[14:00] <Harzilein> this is technically an avconv question as that is what i currently have installed. as avconv likes to lie about its origin when called as ffmpeg -version, how do i check for presence of a working ffmpeg?
[14:04] <JEEB> it should be pretty simple, the binary tells you if it's from the ffmpeg or libav project
[14:05] <Harzilein> JEEB: i suspect it doesn't.
[14:05] <JEEB> it should, there's no reason not to :P
[14:06] <JEEB> also libav hasn't had an ffmpeg binary for literally ages by now (although unfortunately ubuntu had that 2012 version until 13.10, so it had that binary until then)
[14:07] <JEEB> ok, I tested on a 12.04 box
[14:07] <JEEB> ffmpeg -version doesn't say anything about the project
[14:07] <JEEB> just `ffmpeg` does though
[14:07] <JEEB> altough...
[14:07] <JEEB> grab
[14:07] <JEEB> grah
[14:08] <Harzilein> harzilein at debian:~$  avconv -version > version.txt ;  ffmpeg -version > version2.txt ; cmp version.txt version2.txt && debpaste add < version2.txt
[14:08] <Harzilein> [...]
[14:08] <Harzilein> http://paste.debian.net/102369
[14:08] <JEEB> yes I just told you that -version doesn't contain that info
[14:08] <JEEB> /usr/bin/ffmpeg
[14:08] <JEEB> ffmpeg version 0.8.10-4:0.8.10-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
[14:08] <JEEB> so no, the libav ffmpeg binary that was there does not lie about its origins :P
[14:09] <Harzilein> really useless -version option then ;)
[14:09] <ubitux> "The Libav developers" is the fork
[14:09] <JEEB> well same for the ffmpeg then really
[14:09] <ubitux> ffmpeg 1.0 is extremely old btw
[14:09] <JEEB> blame both sides if you're gonna blame someone for not putting the project there :P
[14:09] <JEEB> but yeah, just run the binary and it's the first line
[14:10] <JEEB> without any params
[14:10] <Harzilein> ubitux: that's a bit redundant, distro versions are always old, upstream channels always call them ancient ;)
[14:10] <ubitux> "ffmpeg version 2.2.2 Copyright (c) 2000-2014 the FFmpeg developers"  "the FFmpeg developers" is ffmpeg
[14:10] <JEEB> yup
[14:10] <ubitux> Harzilein: well, 1.0 is from 2012
[14:11] <JEEB> Harzilein, basically I'd like to get a revisit on that "likes to lie about its origin" comment :P since both sides seem to be noting themselves correctly
[14:11] <ubitux> given that there is not a single day without commits since then, it's old yes
[14:12] <Harzilein> JEEB: okay. let me do another test:
[14:13] <Harzilein> harzilein at debian:~$ avconv 2>&1 | head -1 > version.txt ; ffmpeg 2>&1 | head -1 > version2.txt ; cmp version.txt version2.txt && (cat version2.txt ; echo ; echo) | debpaste add
[14:14] <Harzilein> [...] http://paste.debian.net/102370
[14:15] <JEEB> which doesn't really show us more than the fact that you have an ffmpeg ffmpeg in there
[14:16] <Harzilein> uhm, i don't think so. why would ffmpeg call its binary avconv?
[14:16] <JEEB> nfi :P
[14:16] <Harzilein> well, perhaps it's a marrilat quirk, let me check
[14:17] <sacarasc> Maybe it's to "fix" your distro which thinks there will be an `avconv`.
[14:17] <JEEB> and yes, you are using some custom packaging, and if that packaging only packages the ffmpeg cli, then you really need to find someone who packages a newer one because you have no reason for not updating
[14:17] <JEEB> (unlike when you have libraries installed and depended on, and thus can't budge)
[14:18] <Harzilein> yeah, that's what i meant with checking, i'd check what set of deps it has (perhaps compared to its build deps) as well
[14:19] <Harzilein> marillat/dmo does indeed seem to package ffmpeg, sorry for the noise.
[14:20] <Harzilein> otoh it depends on marillat's "versioning era"-bumped av* libraries...
[14:21] <JEEB> well that means that nothing vanilla debian depends on those then
[14:22] <JEEB> anyways, tl;dr - if you need a newer ffmpeg cli, just build it yourself or use one of the static binaries
[14:22] <JEEB> debian unstable does already have Libav 10, too, so depending on your needs that's just fine too
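The check Harzilein was after can be sketched by matching the first banner line of `ffmpeg` run without arguments (a canned banner stands in for the real invocation, which is left commented, since old `-version` output lacks the project name):

```shell
#!/bin/sh
banner="ffmpeg version 2.2.2 Copyright (c) 2000-2014 the FFmpeg developers"
# banner=$(ffmpeg 2>&1 | head -n1)   # real check against the installed binary
case "$banner" in
  *"the FFmpeg developers"*) origin=ffmpeg ;;
  *"the Libav developers"*)  origin=libav ;;
  *)                         origin=unknown ;;
esac
echo "$origin"
```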
[14:22] <Harzilein> JEEB: well, okay, back to my actual problem: i want to add metadata with my transcodings that preserve version and command line options so they can be reproduced
[14:23] <Harzilein> s/version/& information/
[14:23] <Harzilein> i wonder if someone has shaved that particular yak before
[14:26] <Harzilein> basically the transcoding frontend should check if that particular set of versions is still available, optionally check if it can successfully reproduce the transcoded media and once that happens delete the transcoding
[14:28] <Harzilein> the ability to recover as many command line options as possible from just the reencoded file would be nice as well
[14:29] <Harzilein> or possibly from just the original and the reencoded file, stuff like audio track assignments
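One way to start shaving that yak, sketched as a command string: stash the options and the version banner in a metadata tag of the output. The comment tag and option set here are arbitrary examples, and the version string is a stand-in for the real `ffmpeg -version | head -n1` output:

```shell
#!/bin/sh
opts='-c:v libx264 -crf 20'
ver="ffmpeg version 2.2.2"   # stand-in for the probed version banner
cmd="ffmpeg -i in.mkv $opts -metadata comment=\"$opts / $ver\" out.mkv"
echo "$cmd"
```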
[14:43] <chaika> Is there a way to tell ffmpeg to encode H264 with an odd number for height (resolution), or does that go against the H264 standard?
[14:43] <chaika> In my case it complains "[libx264 @ 0x9db7de0] height not divisible by 2 (766x431)
[14:43] <chaika> ", and then exists with "Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[14:43] <chaika> exits*
[14:43] <GT_> Hi guys, I have a problem that I need to encode a video and set the creation date. According to the documentation, I should use -timestamp now, but it's not working
[14:44] <JEEB> the libx264 wrapper defaults to 4:2:0 YCbCr, for decoder compatibility
[14:44] <chaika> And in that pixel encoding it's disallowed? Do you know if it's permitted in 4:2:2 or 4:4:4?
[14:44] <JEEB> if you want to encode 4:2:2 or 4:4:4, you would have to set the -pix_fmt yourself
[14:45] <JEEB> yes, 4:2:0 encodes chroma data in 2x2 blocks
[14:45] <JEEB> so while in theory you could do non-mod2 4:2:0, it's pretty much "there be dragons" kind of area
[14:45] <chaika> True, thanks.
[14:45] <JEEB> and 99%+ of all implementations just don't support it
[14:46] <chaika> I don't understand the implementation of pixel formats to understand how that would even work
[14:46] <chaika> well enough*
[14:47] <JEEB> basically internal padding and some other stuff, but generally you can just think that 4:2:0 just doesn't support it :P
[14:47] <JEEB> it's very muchos implementation-specific and as I said, no-one supports it
[14:48] <JEEB> 4:2:2 encodes chroma in 2x1 blocks (every horizontal line has its own chroma values), and 4:4:4 has one chroma value for every luma value
[14:48] <JEEB> but yeah, you basically kill hardware decoder compatibility etc with those
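A common workaround for the non-mod2 error, sketched as a command string (input name made up): force even dimensions before libx264, either rounding down with scale or padding up with pad.

```shell
#!/bin/sh
# trunc(iw/2)*2 rounds the width down to even; same expression for height.
vf="scale=trunc(iw/2)*2:trunc(ih/2)*2"
# Alternative that pads up instead: pad=ceil(iw/2)*2:ceil(ih/2)*2
echo "ffmpeg -i in.webm -vf \"$vf\" -c:v libx264 out.mp4"
```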
[14:48] <GT_> any Idea why can't I use -timestamp now to update the creation date? http://pastebin.com/EP04athx
[14:48] <JEEB> if you need RGB, then there's the libx264rgb encoder
[14:49] <chaika> I encode in 10-bit so I already kill hardware decoding anyway :P
[14:49] <JEEB> :)
[14:49] <JEEB> just sayin'
[14:49] <GT_> Jeeb any idea how to reset the creation date? or why isn't the way I tried working? http://pastebin.com/EP04athx
[14:49] <chaika> But yeah, weird stuff that I wouldn't want to get into, I just didn't think that it was the pixel format causing the failure
[14:49] <JEEB> chaika, what's your source?
[14:50] <JEEB> if it's 4:2:0 YCbCr then you most probably want to look into what you're doing exactly to it
[14:51] <chaika> The source is just some random webm with VP8 video actually, and the default for VP8 must be some other pixel format because libvpx never complains about odd resolution
[14:51] <JEEB> because generally if there's no reason for it, you don't want to change that
[14:51] <JEEB> chaika, can you post ffmpeg -i welp.webm info on it for interest?
[14:51] <JEEB> pastebin or so
[14:52] <GT_> JEEB do you have a minute or so? Till he pastes the result?
[14:52] <JEEB> GT_, if I would have any idea about your issue, I would have responded already
[14:53] <GT_> k, just asking. And do you know somebody I could contact about this issue? Because I'm searching the internet for quite some time already and I can't find any solution
[14:57] <chaika> JEEB: http://pastebin.com/7zY5PUGm
[14:58] <chaika> Oops, got disconnected
[14:58] <JEEB> chaika, ok so it seems like libvpx wraps around it with some padding
[14:58] <JEEB> or something like that
[14:58] <JEEB> the actual data inside is 4:2:0 YCbCr
[14:58] <chaika> Yeah that's interesting
[14:59] <chaika> I guess they don't need to meet any hardware decoding standards yet so they have a lot of freedom
[14:59] <JEEB> well in theory you could do the same with H.264, it's just that you'd have to deal with the padding somehow
[15:00] <JEEB> it does have some cropping windows things, but not sure if they can be used for that
[15:00] <JEEB> also I really wonder how whatever then converts that to RGB will deal with the internal res vs external res dilemma
[15:00] <JEEB> regarding chroma upscaling
[15:01] <JEEB> because usually there are these specific ways of downscaling chroma, and then you're supposed to kind of do it in a similar way
[15:01] <JEEB> the other way
[15:02] <JEEB> of course if you suddenly get a situation where the other half of that 2x2 chroma block is just padding...
[15:02] <JEEB> or whatever, dunno
[15:02] <chaika> Google magic happens
[15:02] <JEEB> surprised that things handle it just fine
[15:03] <chaika> Well it certainly takes a lot of bitrate to match the quality of High profile H264, so I wouldn't be surprised if there was some redundancy and extra data in there
[15:04] <JEEB> nah, that's just libvpx sucking (and the format in general being on the level of H.264's baseline or so)
[15:04] <chaika> Although I'm not an encoding software programmer so I'm not well aware of the actual implementation of the codecs
[15:04] <chaika> Yeah haha
[15:06] <chaika> Sometimes I wish one of them would die off (and it would obviously be VPx), but the competition has done a few good things, like get H.264's patent holder or whatever to make it a little freer of a format
[15:06] <chaika> The only time you have to pay for licensing now is if you directly sell the right to view H264 encoded video, so having ads on a page that has H264 or whatever is fine
[15:09] <JEEB> nah, that's been in there for quite a while :P
[15:10] <JEEB> the VPx video formats had nothing to do with how that ended up if I recall correctly
[15:11] <JEEB> although dunno, in early 2010 a lot of people started pointing that thing out, but I'm :effort: to start looking into it
[15:15] <JEEB> ok, it seems like that "news" was not really news
[15:15] <JEEB> just that MPEG-LA used to have "after December 31, 2010" in their old license
[15:15] <JEEB> so they finally decided to do something about that :P
[15:33] <sruli> hi all, just wondering if anyone can advise which would be the best minimal linux distro to use if i only need ffmpeg (no gui or anything required) i will be setting it up as a VM
[15:44] <sacarasc> sruli: Go check which distros link ffmpeg against Xorg for capturing, not those ones.
[15:45] <sruli> sacarasc: in plain english that would mean?
[15:46] <vovcia> hello :))
[15:47] <c_14> sruli: I'd also stay away from debian based distros. It's fine if you want to compile ffmpeg from source, but trying to get ffmpeg in the repos is more than just a pain.
[15:49] <sacarasc> sruli: Look at what ffmpeg depends on in different distros. If they require Xorg, you probably don't want that, as it will bring in a lot of extra junk.
[15:49] <sruli> takes me 5 minutes to compile ffmpeg, have done so on a few of my Ubuntu boxes without a problem, but i now want to setup a vm only for ffmpeg, so was wondering if i should just go for a minimal ubuntu or is there a distro which will be much better
[15:49] <BtbN> ffmpeg on ubuntu is a major pain, thanks to libav
[15:49] <sacarasc> If you're compiling yourself, anything minimalistic would be fine.
[15:50] <vovcia> i have a question about multiple streams.. i have this command: http://pastebin.com/mN4nrEdS it works almost fine, but the second stream is a second behind the first because ffmpeg opens the first stream and then the second
[15:50] <vovcia> i mean it opens -i 8000.sdp, synchronizes, and then open 9000.sdp
[15:51] <vovcia> is there a way to open those streams simultaneously?
[15:52] <sruli> this is the guide i used to compile ffmpeg, worked without a hiccup every time https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
[15:53] <BtbN> compiling it isn't the complicated part
[15:53] <BtbN> packaging software for ubuntu/debian which needs ffmpeg is
[16:41] <chaika> > libopus
[16:41] <chaika> Does anyone actually use this?
[16:41] <chaika> Common usage when?
[16:41] <c_14> I use opus basically whenever I want lossy audio.
[16:43] <c_14> IMO, it is one of the best if not the best lossy audio codec out there, plus it doesn't have licensing issues.
[16:45] <JEEB> chaika, well given that you come IIRC from a fansubbing background, it isn't used because most airings are already lossy, use something that compresses well enough (AAC@~192kbps) and it can be cut in small audio frame chunks so you don't have to re-encode when cutting
[16:45] <JEEB> in some blu-ray/DVD rips I've seen mirko using opus
[16:46] <JEEB> in those cases the source is lossless
[16:48] <JEEB> but really, while opus is good, it's still not really shining at these kinds of use cases since at such rates you generally can use f.ex. vorbis as well. Really low bit rate scenarios are where opus really shines.
[17:00] <chaika> I see. I thought opus was god-tier at everything and that it was just unsupported/uncommon/unheard of for some reason
[17:05] <JEEB> well, it's good at most rates, it's just that there's no real compelling reason to start using it too much on other than low rate scenarios
[17:06] <JEEB> since you have vorbis or AAC for higher ones
[18:13] <albator> hello
[18:13] <albator> just got a new box with ubuntu 14.04
[18:13] <albator> and i can't compile ffmpeg like i used to..
[18:14] <JEEB> last I tested it worked just fine :P might want to provide a pastebin with more info
[18:15] <albator> I compile the external encoder as usual, libvpx x264 and fdk_aac
[18:15] <albator> then i run the ffmpeg compile, but it doesn't find x264
[18:16] <albator> which never happened before..
[18:16] <JEEB> config.log reading time? :P
[18:16] <JEEB> also are you sure you've told ffmpeg where to find x264's library
[18:17] <JEEB> or hell, did you even compile/install it
[18:17] <JEEB> (as by default x264 only compiles the binary, not the libraries)
[18:17] <JEEB> you need to --enable-static for a static library
[18:19] <albator> no, i'm not telling it where x264 is
[18:19] <albator> as I never needed to
[18:20] <albator> i did compile x264 with --enable-static yes
[18:21] <albator> x264 binary is present in /usr/local/bin
[18:35] <albator> was missing the "--extra-libs" arg
[19:39] <karika200> hi
[19:39] <karika200> I'm trying to stream a file with ffmpeg to an ffserver, but i get this error: av_interleaved_write_frame(): Connection reset by peer
[19:40] <karika200> here's ffmpeg output: http://pastebin.com/g7NEb9Lh
[19:40] <karika200> I installed it from debian-multimedia repository
[19:40] <karika200> and I'm using debian
[19:42] <karika200> And it's my ffserver config: http://pastebin.com/mGdeVDTN
[19:42] <karika200> what could be wrong?:/
[19:44] <llogan> unfortunately ffserver is basically unmaintained. i have no experience with it, but try the ffmpeg-user mailing list if you don't get help here.
[19:44] <karika200> i see, thx
[19:45] <karika200> can you suggest a server I can use instead of ffserver?
[19:45] <karika200> finally I want to stream IP cam streams
[19:46] <llogan> i don't know what to recommend, but did you see: https://trac.ffmpeg.org/wiki/Streaming%20media%20with%20ffserver
[19:47] <llogan> also, does your ffserver console output say anything useful?
[19:47] <karika200> I set a logfile for the logs, but nothing interesting in the logs nor on the console
[19:48] <karika200> I tried also in debug mode
[19:48] <karika200> but nothing
[19:49] <llogan> i don't want to fall into the ffserver rabbit hole, but i do feel like procrastinating. decisions...
[19:52] <jgh-> karika200: if your camera supports rtmp you can use nginx-rtmp
[19:52] <jgh-> wowza is another option, I think it's reasonably cheap and the trial period is like 6 months now
[19:53] <karika200> jgh, thx, nginx sounds good
[19:53] <karika200> and maybe I will try icecast2 for video streaming
[19:54] <karika200> wowza could be good, but I want to use my own server for some reasons
[19:54] <jgh-> you can use your own server with wowza
[19:54] <XHFHX> Hi. I currently try to concat a couple of different files (4:3/16:9, different framerates...) with a script. First I tried it completely on the command line; it worked, but I don't understand everything and couldn't find an explanation in the documentation. So now I'm working with the concat function and a list of videos (e.g. videos.txt). This is much simpler, but I can't rescale the videos, which is necessary, as
[19:54] <XHFHX>  I need a 720p file out. Or is there a way to tell ffmpeg that it shall output a 720p file? Sorry, it's the first time I'm using ffmpeg
[19:54] <karika200> jgh, I'll give a try, thx! :)
[19:57] <XHFHX> oh, seems it was just a typo, now it works :)
[20:06] <XHFHX> mh, well, this list function only works for same codec, so this wont do it, so please can someone explain me the syntax?
[20:06] <XHFHX> bin\ffmpeg.exe -i part1.mp4 -i part2.mp4 -filter_complex "movie=part1.mp4, scale=1280:720 [v1] ; amovie=part1.mp4 [a1] ; movie=part2.mp4, scale=1280:720 [v2] ; amovie=part2.mp4 [a2] ; movie=part3.mp4, scale=1280:720 [v3] ; amovie=part3.mp4 [a3] ; [v1] [v2] [v3] concat=n=3 [outv] ; [a1] [a2] [a3] concat=n=3:v=0:a=1 [outa]" -map "[outv]" -map "[outa]"  -c:v libx264 -preset ultrafast -qp 0 output.mkv
[20:06] <XHFHX> i dont get why you have to do "concat=n=3:v=0:a=1"
[20:06] <XHFHX> concat=n=3 will output an error
[20:11] <llogan> XHFHX: you can simplify your command http://pastebin.com/cmjUKPf9
[20:12] <llogan> (but i didn't test that example)
[20:12] <XHFHX> i experienced errors when i just scale the video, i had to pad it
[20:12] <llogan> and apparently i added a superfluous part4.mp4 which you can ignore
[20:14] <XHFHX> thank you llogan, I'll try it out now
[20:14] <llogan> you should always show your example command and complete console output when experiencing issues. just saying that you get errors is not very easy to provide help for
[20:15] <XHFHX> [Parsed_scale_1 @ 04b3e720] Media type mismatch between the 'Parsed_scale_1' filter output pad 0 (video) and the 'Parsed_concat_3' filter input pad 1 (audio)
[20:15] <XHFHX> [AVFilterGraph @ 0037d120] Cannot create the link scale:0 -> concat:1
[20:15] <XHFHX> Error configuring filters.
[20:16] <XHFHX> right, sorry :D
[20:16] <XHFHX> http://pastebin.com/8hEFp8Ey
[20:17] <llogan> For the concat filter to work correctly all segments must start at timestamp 0, but we'll get to that later
[20:21] <llogan> oh, i forgot the audio in my example
[20:21] <llogan> [v1][0:a][v2][1:a][v3][2:a]concat
[20:25] <llogan> but the output may pause at the end of the first segment, so add the setpts filter: http://pastebin.com/PtvMB8ZY
[20:26] <XHFHX2> sorry, have some connection probs
[20:26] <llogan> i guess you missed some stuff
[20:26] <llogan> llogan | oh, i forgot the audio in my example
[20:26] <llogan> llogan | [v1][0:a][v2][1:a][v3][2:a]concat
[20:26] <llogan> llogan | but the output may pause at the end of the first segment, so add the setpts filter: http://pastebin.com/PtvMB8ZY
[20:27] <XHFHX2> still wont work llogan http://pastebin.com/QCgmR25G
[20:27] <XHFHX2> the problem is the 4:3 and 16:9 merging IMO
[20:29] <XHFHX2> still the same error with the modified command that you sent me
[20:29] <JEEB> for dealing with both you either decide on one being anamorphic, or just pad after resizing to a common size either vertically or horizontally
[20:30] <llogan> SAR 0:1 looks odd
[20:30] <JEEB> for example, you scale to X:720 and then pad to 1280x720
[20:30] <JEEB> I think that value was the unknown one
[20:30] <JEEB> not sure though
[20:30] <XHFHX2> yeah, that what i was doing
[20:31] <XHFHX2> but I don't understand the whole syntax of the concat function
[20:31] <XHFHX2> it works somehow but I'm not sure why and I think thats not good if I want to use it in a script
[20:32] <XHFHX2> its the line [v1] [v2] concat=n=2 [outv] ; [a1] [a2] concat=n=2:v=0:a=1 [outa] i dont get
[20:32] <XHFHX2> wait, actually that makes sense, the other one makes no sense to me
[20:32] <llogan> you give it each "set" that you want to concatenate. [v1][0:a] are link labels that refer to the video from the "[v1]" filter chain, and [0:a] is the audio from the first input
[20:32] <XHFHX2> [v1] [v3] [v2] concat=n=3 [outv] ; [a1] [a3] [a2] concat=n=3:v=0:a=1 [outa]
[20:33] <llogan> i showed you how to do it in one concat instance
[20:33] <XHFHX2> why do i have to attach v=0 and a=1 to the audio concatenation?
[20:33] <llogan> default settings are v=1 a=0, so it's good to be explicit
[20:33] <XHFHX2> but if i let it out it will return an error
[20:34] <llogan> "an error"
[20:34] <llogan> with concat you should *always* declare the proper values for v and a. do not rely on defaults.
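A hedged sketch of the point llogan is making (hypothetical part1.mp4/part2.mp4 inputs, each assumed to have one video and one audio stream): a single concat filter instance can join video and audio together as long as n, v, and a are declared explicitly.

```shell
# Sketch only, not a command from the log. n = number of segments,
# v/a = number of video/audio streams per segment. Each segment's
# streams are listed interleaved: [video][audio] per input.
ffmpeg -i part1.mp4 -i part2.mp4 -filter_complex \
  "[0:v]scale=1280:720[v0];[1:v]scale=1280:720[v1]; \
   [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[outv][outa]" \
  -map "[outv]" -map "[outa]" -c:v libx264 output.mkv
```

With v=1:a=1 stated explicitly, one concat instance handles both streams, avoiding the separate audio-only concat (with its easy-to-forget v=0:a=1) from the earlier command.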
[20:35] <XHFHX2> but why does it work on the video and not on the audio concatenation?
[20:36] <llogan> i have no context to provide an answer
[20:37] <XHFHX2> wait, i'll write a command which will show my problem I don't understand
[20:41] <rps2> Hi, gang. I have a somewhat silly question regarding ffmpeg logging.
[20:42] <rps2> We were using an old V1.2.x version and were getting log messages such as "frame=17397 fps= 33 q=34.0 ENC: size=   46793kB time=00:09:40.08 bitrate= 660.8kbits/s dup=1 drop=0"
[20:43] <rps2> On a V2.2.2 version, the word "ENC:" is not included in the output. We unfortunately have a process that was watching for that to give us a progress bar.
[20:43] <rps2> Was that removed?
[20:43] <rps2> If so, is there a way to recreate it? Hacking up the PHP code that was watching for that isn't trivial.
[20:47] <benlieb> rps2: I can't answer your actual question, but what was your php code doing that makes this change non-trivial?
[20:49] <JEEB> I'm actually not sure if vanilla ffmpeg ever output that ENC part
[20:49] <rps2> benlieb: It's a rather convoluted library that watches a number of log files and generates real-time progress bars. I didn't write it and there is a significant lack of documentation on just how it works.
[20:49] <JEEB> at least I don't remember ever seeing it in vanilla ffmpeg
[20:50] <rps2> JEEB, I don't see "ENC:" in the V1.2.6 source either.
[20:50] <JEEB> but if all that's needed is adding that thing there in logging, it shouldn't be too hard to find the call for av_log or whatever that outputs it
[20:50] <JEEB> rps2, which just means that it had been patched all along :P
[20:50] <benlieb> rps2: so the php code looks for a line with that string, and parses the line into processable bits?
[20:50] <JEEB> to be honest I would rather parse the X=Y pairs
[20:50] <JEEB> but not my code so (´
[20:51] <rps2> benlieb, yes, that's what I think it does. The guy who wrote that is no longer with us and he didn't document much of anything.
[20:51] <llogan> my condolences. RIP.
[20:52] <XHFHX2> is there a very simple command to scale a 4:3 video up to 16:9 with padding or do i have to use the normal padding command?
[20:54] <benlieb> rps2: my gut tells me the best way forward is to adapt your script to the most recent, most common version of ffmpeg
[20:55] <benlieb> rps2: if feels wrong to "stick it back in there" to make it like the old version. That seems to be asking for trouble.
[20:55] <benlieb> rps2: probably time to hire someone to make that script work right, and maybe also document it.
[20:56] <rps2> benlieb: Yeah, I think that's what I may have to do, but the PHP code is really fugly. Unfortunately, I'll have to be the one to fix it since we're a bit short handed.
[20:57] <benlieb> rps2: might also be time to change it from php to a proper OOP language. Is this happening in a web process or bg server process. I couldn't imagine doing the latter in php.
[20:59] <rps2> benlieb: Whatever I do right now is a bandaid. We're doing a complete revamp of the code. The problem is a lot of the existing code was built with CodeIgniter, hence the heavy lean on PHP.
[21:00] <benlieb> rps2: gist your code? gist.github.com
[21:04] <rps2> benlieb: https://gist.github.com/anonymous/d4743d73b9ad4a988b5d#file-ffmpegmonitor
[21:04] <rps2> At least that's the bit I spotted.
[21:05] <XHFHX2> can someone just tell me what's the easiest way to pad any 4:3 video into 16:9? I got it padded and into the middle, but the video is still as small as the source, need to upscale it somehow
[21:05] <XHFHX2> isn't there just a little command that will do this? :/
[21:06] <XHFHX2> currently im also working with x=(1280-VideoX)/2
[21:06] <XHFHX2> which is pretty lame and i think can be done better
[21:07] <benlieb> XHFHX2: I'm new to ffmpeg, but I remember seeing this somewhere
[21:07] <benlieb> rps2: holy spaghetti code.
[21:07] <rps2> Yeah, no kidding!
[21:07] <XHFHX2> i googled around a bit but haven't found something useful
[21:08] <benlieb> rps2: hard to test code like that. or know what's happening
[21:08] <llogan> XHFHX2: start here http://ffmpeg.org/ffmpeg-filters.html#pad
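For XHFHX2's recurring question, a minimal sketch of the standard scale-then-pad approach (hypothetical filenames; not a command given in the log): scale the 4:3 source to the target height while preserving aspect ratio, then pillarbox it to 1280x720.

```shell
# Sketch: a 4:3 source scaled to height 720 becomes 960 px wide;
# pad then centers it horizontally inside a 1280x720 frame.
# scale=-1:720 computes the width automatically from the aspect ratio.
ffmpeg -i input_4x3.mp4 -vf "scale=-1:720,pad=1280:720:(ow-iw)/2:0" output.mp4
```

In the pad expression, ow/iw are the output and input widths, so (ow-iw)/2 is the same centering arithmetic XHFHX2 was doing by hand with (1280-VideoX)/2, just evaluated by the filter.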
[21:08] <XHFHX2> been through that, doesn't help much :/
[21:08] <DelphiWorld> hi guys
[21:08] <DelphiWorld> a srt subtitle
[21:08] <rps2> benlieb: Welcome to my world. "Well, let's get Rick to fix it! He can fix anything!" Sure I can. Uh, huh.
[21:09] <DelphiWorld> can it be included in a mpeg4 container?
[21:10] <benlieb> rps2: is that in a controller or a lib file, or class file or what?
[21:11] <rps2> I believe it's in a lib file, but I can't be sure. I haven't analyzed it yet.
[21:13] <rps2> benlieb: Ah HAH! The guy DID hack the V1.2 ffmpeg.c file to stuff in that ENC: thing!
[21:13] <llogan> DelphiWorld: .mp4? i don't think so.
[21:13] <rps2> I just found the source.
[21:14] <llogan> DelphiWorld: try mov_text or a better container, but others know more about subs than I do
[21:14] <DelphiWorld> llogan: thank
[21:17] <benlieb> rps2: fun times
[21:17] <benlieb> I've just discovered the blackdetect filter. Be still my beating heart.
[21:17] <benlieb> So going to make my life easier... http://www.ffmpeg.org/ffmpeg-filters.html#blackdetect
[21:20] <XHFHX2> can please anyone tell me how to scale a 40x40 video in the right aspect ratio and with padding to a 1280x720 video? i'm really getting tired, i've read the documentation but i can't work it out
[21:21] <benlieb> XHFHX2: make a stack overflow question
[21:21] <XHFHX2> ok
[21:21] <benlieb> that's what I do when irc doesn't come through
[21:22] <benlieb> usually people answer in a day or so
[21:22] <llogan> XHFHX2: ask on the ffmpeg-user mailing list, go to sleep, and when you wake up you will have an answer, "please provide your command and the complete console output"
[21:22] <benlieb> make sure to tag it ffmpeg
[21:22] <llogan> superuser is generally a better place for ffmpeg cli questions unless it's a programming specific question
[21:23] <llogan> (if you want to use a Stack Exchange site instead of an official help channel)
[21:24] <FVG> hi, i'm recording stereo mix through alsa hw:0,0 and when i use arecord the sound is good, but with ffmpeg there is a lot of noise/distortion when there is sound. how could i fix it?
[21:25] <benlieb> how do I install ffplay?
[21:25] <benlieb> I used brew install ffmpeg, and got all of the other tools, but ffplay didn't come
[21:28] <llogan> benlieb: it requires SDL
[21:28] <benlieb> llogan: wassat
[21:28] <llogan> as a dependency
[21:29] <llogan> benlieb: maybe add --with-sdl? i've already forgotten how to use brew...
[21:30] <llogan> see "brew info ffmpeg". might be listed there.
[21:32] <benlieb> llogan: found the --with-ffplay option on some blog
[21:32] <benlieb> trying this:  brew install ffmpeg --with-theora --with-libogg --with-libvorbis --with-freetype --with-tools --with-ffplay
[21:33] <benlieb> I've had to recompile ffmpeg 4 times for this little project
[21:33] <benlieb> is there a --with-every-fucking-thing option?
[21:33] <benlieb> just to get it over with
[21:37] <llogan> benlieb: i don't know. this is #ffmpeg. a brew channel might be a better place to ask that. i think "brew info ffmpeg" will show all the specific script options.
[21:37] <benlieb> llogan: is ffplay a way to test out a command without actually saving the output file?
[21:38] <llogan> yes, it can often be used to test commands
[21:38] <llogan> like filter previews
[21:38] <benlieb> llogan: so the same command using ffplay instead of ffmpeg just gives you a visual of what would happen?
[21:38] <benlieb> if you saved the file.
[21:38] <benlieb> that's awesome
[21:39] <llogan> it can give you an approximation since there are other factors involved (encoder limitations, encoding artifacts, etc).
[21:40] <llogan> or you could possibly pipe from ffmpeg to ffplay to include artifacts, etc without making a file
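A sketch of the pipe llogan describes (hypothetical input name): encode with ffmpeg, wrap the result in the NUT container on stdout, and let ffplay read it from stdin, so encoding artifacts are visible without writing a file.

```shell
# Sketch: "-" means stdout/stdin; NUT is a container ffplay can read
# from a pipe. The preview includes real encoder artifacts.
ffmpeg -i input.mp4 -vf "scale=640:-1" -c:v libx264 -f nut - | ffplay -
```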
[21:41] <benlieb> llogan: if I supply -ss and -t is there a character sequence during playback that would let me jump to the end and beginning of those positions?
[21:41] <DelphiWorld> llogan: do you have any example of including a text string on a video?
[21:41] <llogan> subtitles (soft or hard). drawtext filter.
[21:42] <llogan> benlieb: you can use -ss with ffplay i think. too lazy to test. or use arrow keys during playback. or click on screen to move to percentage.
[21:42] Action: llogan is now late for meeting
[21:42] <benlieb> llogan: you can use -ss and -t
[21:42] <FVG> llogan: www.pastebin.com/99tGMaFP
[21:43] <benlieb> DelphiWorld: use drawtext filter
[21:53] <FVG> even weirder is when i had pulseaudio, if i used ffmpeg the sound on my speakers would become staticy but the recorded file would sound fine, and it would revert when i stopped ffmpeg
[22:00] <lkiesow> Is there a way to compose a video of different sections of a source video? What comes to my mind is cutting the sections using -t and -ss and then concatenate them or using the select filter. The disadvantage of the first option is that I have to run FFmpeg multiple times (n+1 times for n sections), the disadvantage of the select filter is, that I cannot change the order of the sections.
[22:42] <bencc> when using -ss with -t, is -t relative to -ss or absolute in the input file?
[22:42] <bencc> if I have a 60 minute video and I'm using "-ss 30 -t 40" will I get a video starting at 00:30:00 and ending at 00:40:00?
[22:43] <c_14> bencc: -t is always absolute
[22:43] <c_14> -t is always the output length
[22:44] <bencc> c_14: so I'll get a 40 minutes output video?
[22:44] <c_14> yep
[22:44] <bencc> so if I want a video that starts at 00:30:00 and ends at 00:40:00 I need "-ss 00:30:00 -t 00:10:00" ?
[22:45] <c_14> lkiesow: try a filter_complex with splits and concat.
[22:45] <llogan> FVG: i wanted to see the complete console output
[22:46] <c_14> bencc: yes
[22:46] <bencc> thanks
[22:46] <bencc> c_14: is there another paramater that can give me "start at, end at" ?
[22:46] <llogan> lkiesow: yet another option is (a)trim filter(s), but the downside is that you have to re-encode
[22:47] <c_14> bencc: -to, but only if you use -ss as an output option
[22:47] <bencc> c_14: using -ss as an output option will give me the same result, right?
[22:48] <c_14> bencc: -ss as an output option with -to will start the output video at -ss and give you to -to from the input video
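The two options c_14 describes can be sketched side by side (hypothetical filenames): -t is a duration regardless of where you seek, while -to is an absolute end timestamp and pairs with -ss as an output option.

```shell
# Both sketches aim to keep 00:30:00-00:40:00 of the input.
# 1) seek, then take a 10-minute duration:
ffmpeg -ss 00:30:00 -i input.mp4 -t 00:10:00 -c copy cut1.mp4
# 2) output-side seek with an absolute end time:
ffmpeg -i input.mp4 -ss 00:30:00 -to 00:40:00 -c copy cut2.mp4
```

Mixing them up is the usual trap: "-ss 00:30:00 -t 00:40:00" would ask for a 40-minute duration starting at the 30-minute mark, not an end point at 00:40:00.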
[22:48] <DelphiWorld> c_14: see pm
[22:50] <bencc> c_14: thanks. trying
[22:54] <lkiesow> c_14: you mean something like   split --> select=between(...) --> concat?
[22:55] <DelphiWorld> if anyone is ready to develope ffmpeg based app pm me
[22:55] <lkiesow> llogan: re-encoding is something I have to do anyway, so that is not a problem. I'll have a look at the trim filter, thanks
[22:56] <c_14> lkiesow: I actually meant trim like llogan said, I got the names mixed up in my head. But basically you take all the inputs, use (a)trim filters to cut it and then concat
[23:00] <Mmike> Hol-hal! Can ffmpeg by default use multicore CPUs, or do I need to manually compile that in?
[23:01] <JEEB> the things that can use multiple threads will do that as long as you have some kind of threading library available
[23:01] <JEEB> most lunixes have pthreads by default
[23:01] <JEEB> and there's a wrapper around the windows threading system
[23:02] <JEEB> so those two types of systems should be set by default at configuration time
[23:05] <JEEB> Mmike, when you configure ffmpeg you should have the configure script tell you "threading support: something"
[23:05] <lkiesow> c_14: Something is odd, I just tried it using  ffmpeg -i in.mp4 -filter_complex:v '[0:v] trim=1:2 [a]; [0:v] trim=21:22 [b]; [a][b] concat [out]' -map '[out]' xy.mp4  but instead of a two seconds long video I will get one which is 24 seconds. Any idea?
[23:06] <c_14> You probably need to add a setpts=PTS-STARTPTS filter to each of the trims
[23:07] <Mmike> JEEB, i'm actually using avconv as that ships with ubuntu 14.04 (not sure why ffmpeg was ditched)
[23:07] <Mmike> but I guess they did build it with pthreads support
[23:08] <c_14> Mmike: avconv support is in #libav
[23:08] <Mmike> c_14, thnx
[23:17] <lkiesow> c_14: Sadly, the setpts filter does not help. I've tried splitting the two sections and writing them to different files. The first section is only one second (as expected) but the second is much too long
[23:23] <lkiesow> llogan: http://fpaste.org/105771/40139854/
[23:26] <c_14> lkiesow: the setpts has to be after the trim
[23:29] <lkiesow> c_14: That worked, thanks!
[23:30] <lkiesow> As for an explanation, do the trimmed chunks keep their timestamps, and when they are concatenated the video basically becomes the length of the last chunk?
[23:30] <lkiesow> Without the setpts filter I mean
[23:32] <c_14> pretty much. the last frame of the first part gets repeated until the start pts of the second chunk
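Putting c_14's fix together, the working form of lkiesow's command looks roughly like this (a sketch, not the exact command lkiesow ran): setpts comes after each trim so every chunk restarts at timestamp 0 before concat.

```shell
# Sketch: trim keeps the original timestamps, so without setpts the
# concat output runs to the last chunk's end time (~24 s here).
# setpts=PTS-STARTPTS rebases each chunk to start at 0.
ffmpeg -i in.mp4 -filter_complex \
  "[0:v]trim=1:2,setpts=PTS-STARTPTS[a]; \
   [0:v]trim=21:22,setpts=PTS-STARTPTS[b]; \
   [a][b]concat=n=2:v=1:a=0[out]" -map "[out]" out.mp4
```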
[23:39] <lkiesow> ok, thanks again
[23:49] <bencc> what does this mean? "Warning: data is not aligned! This can lead to a speedloss"
[23:49] <bencc> speedloss in transcoding?
[23:49] <bencc> is there a way I can fix it?
[23:53] <bencc> c_14: http://dpaste.com/2GQPQYJ/
[23:55] <c_14> first off, -t is an output option, it has to be after the input filename
[23:57] <c_14> But the data is not aligned part sounds a lot like the input has some issues.
[23:57] <bencc> "-t duration (input/output)"
[23:57] <bencc> "When used as an input option (before -i), limit the duration of data read from the input file. "
[23:58] <c_14> Ok, didn't know that worked but ok.
[23:58] <bencc> the output looks fine so I'll just ignore it
[00:00] --- Fri May 30 2014

More information about the Ffmpeg-devel-irc mailing list