[Ffmpeg-devel-irc] ffmpeg.log.20160606

burek burek021 at gmail.com
Tue Jun 7 02:05:01 CEST 2016


[00:17:58 CEST] <ATField> Is there an analogue for trimming subtitle stream, similar to how trim (video) and atrim (audio) work?
[00:18:29 CEST] <JEEB> there are no filters for subtitles, the way you overlay them is actually a hack in ffmpeg.c that makes them a video track ;)
[00:18:45 CEST] <furq> there are tools which will rebase subtitle timestamps
[00:19:20 CEST] <furq> i know subtitle edit does it, there are doubtless plenty of others
[00:19:33 CEST] <furq> i'd imagine any subtitle editor can do that
[00:20:11 CEST] <JEEB> aegisub is the other thing that does text-based subs
[00:20:14 CEST] <ATField> Also, what sort of (if any) -vf "select .." command would be used to achieve a multi-segment trim similar to whats discussed here: http://superuser.com/questions/681885/how-can-i-remove-multiple-segments-from-a-video-using-ffmpeg?
[00:20:39 CEST] <ATField> Oh, hi, furq.
[00:21:14 CEST] <ATField> I tried looking for the select as alternative to trim option, but it seems to use formulas which I don't know where to read relevant guides about.
[00:22:01 CEST] <furq> http://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
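For the multi-segment question above, a rough sketch of the select/aselect approach from that superuser thread (untested; in.mp4 and the time ranges are placeholders):

```shell
# Keep only 0-10 s and 20-30 s; setpts/asetpts regenerate timestamps
# so the kept frames play back-to-back without gaps.
VF="select='between(t,0,10)+between(t,20,30)',setpts=N/FRAME_RATE/TB"
AF="aselect='between(t,0,10)+between(t,20,30)',asetpts=N/SR/TB"
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -vf "$VF" -af "$AF" out.mp4
fi
```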
[00:22:38 CEST] <ATField> Thanks on subtitles. Maybe you could trim them by trimming the whole file segment and then extracting the subtitle stream. But by that point it would be easier to use the already trimmed whole segments, lol.
[00:23:19 CEST] <wallbroken> is there somebody which uses ffmpeg on windows?
[00:23:59 CEST] <ATField> furq: Oh, so *that's* what I should've been asking Google about... Thanks.
[00:24:07 CEST] <ATField> wallbroken: Yes, me.
[00:24:34 CEST] <ATField> (also, "who", not "which")
[00:25:26 CEST] <wallbroken> yes, sorry
[00:25:42 CEST] <wallbroken> i have a problem with the last version of ffmpeg
[00:25:53 CEST] <furq> use a different version?
[00:26:06 CEST] <furq> https://ffmpeg.zeranoe.com/builds/win64/static/
[00:26:08 CEST] <ATField> (But the latest version is the bees knees!)
[00:26:17 CEST] <wallbroken>  unable to find the entry point _wfopen_s of routine in library msvcrt.dll (is a translation from italian)
[00:26:31 CEST] <furq> yeah that's not something anyone except zeranoe can fix
[00:26:40 CEST] <furq> just use a different version for now
[00:26:51 CEST] <wallbroken> is it an already known problem?
[00:26:57 CEST] <furq> no idea
[00:27:09 CEST] <furq> but if the static binary is throwing that error then there's been a build problem
[00:27:16 CEST] <furq> so use a different one
[00:27:24 CEST] <wallbroken> the different one is an older one
[00:27:32 CEST] <wallbroken> so it could come with some bug
[00:27:50 CEST] <furq> you mean a bug like not being able to find _wfopen_s in msvcrt.dll
[00:27:53 CEST] <furq> that would be terrible
[00:28:16 CEST] <ATField> Couldnt it be because of mis-configuration on his particular machine?
[00:28:32 CEST] <furq> if it worked before updating then it's doubtful
[00:29:20 CEST] <wallbroken> furq, i'm using windows xp, which is a very outdated and unsupported version
[00:29:29 CEST] <wallbroken> so, i'd like to know if it's because of that
[00:30:10 CEST] <furq> https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=2477
[00:30:51 CEST] <wallbroken> ok thank you
[00:31:19 CEST] <furq> also you should really stop using xp, but you evidently know that
[00:31:49 CEST] <furq> if you're going to use an unsupported version of windows it should at least be windows 2000, the best windows ever made
[00:33:16 CEST] <ATField> What about 7, no love for 7?
[00:33:30 CEST] <wallbroken> windows 2000 is older than xp
[00:33:35 CEST] <wallbroken> but i've used it
[00:39:55 CEST] <wallbroken> does a different version of ffmpeg produce differences in output?
[02:36:34 CEST] <Demon_Fox> VP9 must be done
[02:36:42 CEST] <Demon_Fox> Since it's in use right now
[02:38:30 CEST] <furq> it's software, of course it's not done
[02:42:29 CEST] <c_14> Well, the bitstream should be relatively stable.
[02:42:51 CEST] <DHE> performance might be improvable, but it works
[02:43:54 CEST] <c_14> Performance is definitely improvable, have you seen the data for eve?
[02:47:01 CEST] <DHE> no...
[02:47:09 CEST] <c_14> https://blogs.gnome.org/rbultje/2016/05/02/the-worlds-best-vp9-encoder-eve-2/
[02:54:10 CEST] <klaxa> ooooh nice
[02:55:32 CEST] <c_14> afaik it's still proof of concept (and closed source)
[02:55:58 CEST] <c_14> But it goes to show that a faster encoder is possible
[03:02:28 CEST] <hyponic> is it possible to pipe from ffmpeg to vlc?
[03:02:55 CEST] <c_14> assuming vlc accepts input on stdin, yes. try ffmpeg -i file pipe:0 | vlc -
[03:03:21 CEST] <hyponic> c_14 thanks will give it a shot
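A note on that pipe command: pipe:0 is stdin, so for writing to stdout the spelling should be pipe:1, and a pipe needs the container forced with -f because it isn't seekable. A hedged sketch (file.mp4 is a placeholder):

```shell
# matroska (or mpegts) works over a non-seekable pipe; plain mp4 does not,
# since its default layout needs to seek back to write the moov atom.
CMD='ffmpeg -i file.mp4 -f matroska pipe:1 | vlc -'
if command -v ffmpeg >/dev/null 2>&1 && command -v vlc >/dev/null 2>&1 && [ -f file.mp4 ]; then
  sh -c "$CMD"
fi
```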
[03:03:25 CEST] <klaxa> from the screenshots eve does seem to give a lot better results
[03:05:24 CEST] <furq> looks like they're planning to make it commercial
[03:05:27 CEST] <furq> which is a shame
[03:07:29 CEST] <Demon_Fox> I say something because the latest release for vp9 says it's unstable
[03:07:40 CEST] <Demon_Fox> and plus, a bunch of copied and pasted C isn't a spec
[03:08:36 CEST] <Demon_Fox> The DCT coefficient ordering makes no logical sense
[03:08:52 CEST] <Demon_Fox> Everyone does zigzags for a reason
[04:10:52 CEST] <ATField> 1. Are "picture-based subs" ones where each letter(word) is a separate image file? 2. Does the solution listed here (https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo) for burning pic-based subs work *only* for that type, and not for regular text-subs? Because when I tried using it on regular subs, the video output got encoded but with no burned subs on it.
[04:12:06 CEST] <furq> ATField: each subtitle is a separate image file, they're not combined
[04:12:25 CEST] <furq> at least that's how dvd subs work, i've never had cause to work with any others
[04:12:58 CEST] <furq> and -vf subtitles should work fine for text subs, as the example shows
[04:13:15 CEST] <furq> overlay obviously won't work with text subs
[04:16:04 CEST] <ATField> Thanks for clarifying. Overlaying was advantageous because I wouldn't have to extract the sub files first. Plus, -vf subtitles requires weird double-escaping (e.g. -vf "subtitles='D\:\\burn.ssa'") even when the sub's full path is enveloped in quotemarks (unless I am doing it wrong; the solutions I got went only as far as making it work with double-escaping).
[04:16:22 CEST] <ATField> Oh, look at that, it fitted in a single message.
[04:17:21 CEST] <furq> i normally just move the subs into the working directory to avoid the hassle
[04:24:27 CEST] <ATField> Can you give a sample of command string that works for you (with subs being in workdir)? Depending on how I put the quotemarks, ffmpeg either doesnt find the sub file or gets confused on some argument after -vf sub.
[04:25:26 CEST] <ATField> oh, ok nvm it worked
[04:25:37 CEST] <ATField> but thanks for the hint still
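For the archive, the two variants discussed above as a sketch (untested; filenames are placeholders). With the subs in the working directory no escaping is needed; an absolute Windows path needs the filter-level escaping from the docs:

```shell
# subs in the working directory: a plain filename is enough
VF_SIMPLE="subtitles=burn.ssa"
# absolute Windows path: ':' and '\' must be escaped inside the filter string
VF_FULLPATH="subtitles='D\\:\\\\burn.ssa'"
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ] && [ -f burn.ssa ]; then
  ffmpeg -i in.mp4 -vf "$VF_SIMPLE" out.mp4
fi
```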
[05:48:25 CEST] <CoJaBo> ..o that's just beautiful. Conversion script effed up, ran the same file thru ffmpeg just over 9,000 times
[05:48:38 CEST] <CoJaBo> result is... interesting to say the least
[06:28:47 CEST] <ycon_> Hi all, so I'm trying to split a bunch of .mov files into 30second segments.
[06:29:39 CEST] <ycon_> This was unsuccessful, as it did not create multiple 30-second clips out of the same video. It just trimmed the initial video file. Any ideas why? find . -name '*.mov' -exec ffmpeg -t 30 -i \{\} -c copy \{\} \;
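That command only trims because -t 30 takes just the first 30 seconds, and using {} for both input and output writes over the source. A sketch with the segment muxer instead (untested; assumes an ffmpeg recent enough to have -f segment):

```shell
# Split each .mov into numbered 30-second pieces without re-encoding.
# Note: with -c copy, segment boundaries snap to keyframes, so pieces
# won't be exactly 30 s long.
SAMPLE="clip.mov"
OUT_PATTERN="${SAMPLE%.mov}_%03d.mov"   # e.g. clip_000.mov, clip_001.mov, ...
for f in ./*.mov; do
  [ -e "$f" ] || continue
  if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -i "$f" -c copy -map 0 -f segment -segment_time 30 -reset_timestamps 1 "${f%.mov}_%03d.mov"
  fi
done
```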
[07:55:04 CEST] <nifwji2> http://puu.sh/piuVV/dcd2516ee1.png
[07:55:07 CEST] <nifwji2> just a little test
[08:36:26 CEST] <odinsbane> Good morning, I am trying to encode a video so that it can be played using quicktime. I had done it before, I think I had to use a pix_fmt setting, but I forgot which one.
[08:36:57 CEST] <odinsbane> Also on the wiki: https://trac.ffmpeg.org/wiki/Encode/H.264 there is a section about apple quicktime, but the link is dead.
[08:39:22 CEST] <odinsbane> It says that I might need to use -pix_fmt yuv420p, but when I use that setting I get an error.
[08:39:40 CEST] <odinsbane> Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[08:42:12 CEST] <odinsbane> Ah hah, I missed the most important error message. "Height not divisible by two."
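A common fix for that error, noted here for the archive (untested sketch; the input name is a placeholder): force even dimensions before converting to yuv420p.

```shell
# yuv420p subsamples chroma 2x2, so width and height must both be even
VF="scale=trunc(iw/2)*2:trunc(ih/2)*2"
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mov ]; then
  ffmpeg -i in.mov -c:v libx264 -pix_fmt yuv420p -vf "$VF" out.mp4
fi
```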
[10:42:22 CEST] <chama> Is there any way to align text in ffmpeg drawtext?
[10:43:23 CEST] <chama> Can we create multi-line text in drawtext and align the multi-line text?
[11:29:56 CEST] <f00bar80> again i'm asking what's the best way to analyze an output for any missing audio/video streams ?
[12:22:01 CEST] <chama> what is the max length for a text in drawtext filter?
[12:50:41 CEST] <ElAngelo> hi, i'm trying to detect if a video is shot vertically or not with ffprobe
[12:50:52 CEST] <ElAngelo> some of my videos have a rotate tag -> fixed
[12:51:06 CEST] <ElAngelo> some of my videos have height > width -> fixed
[12:51:23 CEST] <ElAngelo> but some have a width > height, have no rotation tag and are still vertical
[12:51:29 CEST] <ElAngelo> and ffplay plays them correctly??
[12:52:02 CEST] <ElAngelo> the DAR says 9:16 though
[12:52:15 CEST] <ElAngelo> 1920x1080 [SAR 81:256 DAR 9:16]
[12:52:19 CEST] <ElAngelo> does this make sense?
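It does make sense: SAR 81:256 turns the 1920x1080 storage into a 9:16 display aspect (1920/1080 * 81/256 = 9/16), i.e. anamorphic vertical video. A hedged ffprobe one-liner to pull all three signals (rotate tag, dimensions, aspect ratios) at once; input.mp4 is a placeholder:

```shell
# Fields to query: storage size, SAR/DAR, and the rotate tag if present
ENTRIES="stream=width,height,sample_aspect_ratio,display_aspect_ratio:stream_tags=rotate"
if command -v ffprobe >/dev/null 2>&1 && [ -f input.mp4 ]; then
  ffprobe -v error -select_streams v:0 -show_entries "$ENTRIES" -of default=noprint_wrappers=1 input.mp4
fi
```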
[12:56:12 CEST] <wallbroken> c_14, is it also possible to speed up the video instead of throttling the audio?
[14:05:41 CEST] <ocZio> Hi all, I will be using ffmpeg for video encoding and was wondering if there is a difference if I go for i7-4790K vs Intel Xeon D vs Intel Xeon E3 ?
[14:06:07 CEST] <ocZio> I have read online and most people tend to use i7 (as a side note this will be used on Linux/debian)
[14:09:41 CEST] <jkqxz> Core i7 and Xeon E3 of the same generation are pretty much exactly the same thing (subtly different chipsets now, but it's really the same die).
[14:11:00 CEST] <jkqxz> Xeon D is a SoC for making small servers, but it can have more cores and therefore more total throughput than the i7/E3.
[14:11:43 CEST] <jkqxz> (If you have lots of money to buy the biggest ones, that is.)
[14:12:33 CEST] <ocZio> jkqxz, thanks
[14:15:07 CEST] <jkqxz> Though unless you actually want the low power and small boards of Xeon D, Xeon E5 is likely to be a better choice for making bigger/faster Intel machines.
[14:21:47 CEST] <DHE> in the same generation, cores * clockspeed gives you an idea of what performance you'll get as long as you use the thread options properly. hyperthreading usually means multiply by 1.5x (hard to give an exact number)
[14:22:15 CEST] <DHE> though it's not quite that easy. adding CPU cores only works to a point.
[14:30:55 CEST] <ocZio> well I am trying to make videos out of images :)
[14:31:00 CEST] <ocZio> and just experimenting right now
[14:31:17 CEST] <ocZio> but there will be a lot of images to process so was just wondering what type of CPU would be better for this task
[14:31:30 CEST] <ocZio> does ffmpeg have a built-in tool to do this btw or shall I dig into the API ?
[14:31:56 CEST] <ocZio> I have 4 images and want to make them like video gifs with some delays in between the images
[15:10:23 CEST] <Crucials> hey. i'm trying to set up my ffmpeg - i downloaded the latest release from the website, but the /bin folder isn't there
[15:10:30 CEST] <Crucials> guess im missing something silly but im not sure what
[16:37:14 CEST] <c_14> wallbroken: sure, use setpts. You'll have to reencode the video though.
[16:53:50 CEST] <claz> hey, i'm using -f segment to segment a video file but the segments have timestamps starting at whatever second they were cut from
[16:54:12 CEST] <claz> i tried passing -start_at_zero but it didn't change anything
[16:56:11 CEST] <claz> ah, haven't tried -reset_timestamps
[17:03:36 CEST] <claz> yep, that did it
[17:15:19 CEST] <jackp10> hi to everyone.
[17:17:42 CEST] <jackp10> I am struggling to find the correct parameters to convert an .mov to an mp4 that is streamable with Safari. Right now i have this to convert to an mp4 ( ffmpeg -y -i '$input' -ac 2 -ab 96k -ar 44100 -vcodec libx264 -level 41 -preset ultrafast -vf scale=640:-1 $output )
[17:18:11 CEST] <jackp10> but if I point Safari to the generated file, I cannot get it to stream it. Chrome and Firefox do stream the content
[17:18:15 CEST] <jackp10> any idea why ?
[17:24:16 CEST] <kepstin> jackp10: try adding '-movflags faststart' to ensure you're creating a streamable mp4 file (MOOV atom at the start of the file)
[17:36:25 CEST] <jackp10> when I tried to add the -movflags, it says: Undefined constant or missing '(' in 'faststart'
[17:37:31 CEST] <jackp10> /usr/etc/venture/bin/ffmpeg -y -i $INPUT -ac 2 -ab 96k -ar 44100 -vcodec libx264 -level 41 -preset ultrafast -vf scale=640:-1 -movflags faststart $OUTPUT 2>&1
[17:37:52 CEST] <c_14> try +faststart
[17:38:57 CEST] <jackp10> nope.. no luck
[17:40:10 CEST] <jackp10> /usr/etc/venture/bin/ffmpeg -y -i $INPUT -ac 2 -ab 96k -ar 44100 -vcodec libx264 -level 41 -preset ultrafast -vf scale=640:-1 -movflags +faststart $OUTPUT 2>&1
[17:41:19 CEST] <jackp10> this is what it gives me when I print the version:
[17:41:20 CEST] <jackp10> ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers
[17:41:54 CEST] <c_14> eh
[17:41:57 CEST] <c_14> that's your problem
[17:42:05 CEST] <c_14> that version of ffmpeg is ancient
[17:42:10 CEST] <utack> there was libx264 in 2012? impressive
[17:42:12 CEST] <c_14> update or use a recent static build
[17:42:28 CEST] <c_14> libx264 is relatively old
[17:42:54 CEST] <kepstin> i mean, with an ffmpeg that old you might have the standalone 'qt-faststart' tool, but still - better to get a newer ffmpeg :)
[17:42:55 CEST] <jackp10> what if I have no chance of updating it due to IT restrictions ?
[17:43:14 CEST] <kepstin> jackp10: then put a static build in your home directory or something ;)
[17:43:14 CEST] <c_14> check if the qt-faststart tool is installed...
[17:43:23 CEST] <c_14> but yeah, what kepstin said
[17:43:25 CEST] <c_14> or build it from source
[17:44:27 CEST] <jackp10> qt-faststart is installed
[17:44:44 CEST] <c_14> you can use that on the output file
[17:44:51 CEST] <jackp10> you think I could use that instead to achieve what i need ?
[17:45:20 CEST] <c_14> Also, you should talk to your IT department. I don't even want to know how many bugs have been fixed since a 0.10 release
[17:45:29 CEST] <c_14> jackp10: qt-faststart should do the same as -movflags +faststart
[17:46:21 CEST] <utack> not to mention libx264 with ultrafast from 2012 is likely horrible quality, and on a modern cpu slower than today's libx264 with preset fast
[17:47:09 CEST] <jackp10> I truly wish I could update it, but that was part of an ancient installation that the company still hasn't updated and it is still used nowadays
[17:47:14 CEST] <jackp10> sadly
[17:47:35 CEST] <jackp10> I need to get it to work with the environment they have provided me
[17:48:00 CEST] <jackp10> Ill google to see how I can use qt-faststart in my example
[17:48:11 CEST] <kepstin> if you're writing new tools to run on an old environment, just include a new x264 and ffmpeg as part of your new tools :/
[17:48:43 CEST] <kepstin> Using qt-faststart is simple. After encoding your mp4 file, just run qt-faststart with the mp4 file as the only parameter.
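One hedged caveat: the qt-faststart builds I've seen take an input and an output file rather than rewriting the file in place, e.g.:

```shell
IN="output.mp4"
OUT="output-faststart.mp4"
# copies IN to OUT with the moov atom moved to the front
if command -v qt-faststart >/dev/null 2>&1 && [ -f "$IN" ]; then
  qt-faststart "$IN" "$OUT"
fi
```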
[17:55:58 CEST] <utack> maybe someone knows of a security bug in an old ffmpeg that would make your IT move their butts and upgrade quickly?
[17:56:22 CEST] <utack> that would be one way to trick them into upgrading
[17:56:33 CEST] <ATField> why don't they upgrade, do they love torturing people?
[17:56:54 CEST] <c_14> utack: https://ffmpeg.org/security.html just look at all those CVEs
[17:57:10 CEST] <utack> problem solved
[17:57:22 CEST] <utack> it absolutely needs to be upgraded
[18:39:36 CEST] <ocZio> hi, using ffmpeg -framerate 2 -pattern_type glob -i '*.png' -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
[18:40:13 CEST] <ocZio> I have 4 images, this will produce a video of 2 seconds, is it possible to extend the last frame so I can see it before the end of the video ?
[18:43:05 CEST] <ocZio> stupid question, forget about it
[18:43:07 CEST] <ocZio> :(
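Not a stupid question; the concat demuxer handles it. A sketch (untested; image names are placeholders): give each image a duration and list the last image a second time, since due to a quirk the final duration entry only takes effect when the file appears once more:

```shell
# Per-image durations; the last image is listed again so its duration is honored
cat > list.txt <<'EOF'
file 'img1.png'
duration 0.5
file 'img2.png'
duration 0.5
file 'img3.png'
duration 0.5
file 'img4.png'
duration 2
file 'img4.png'
EOF
if command -v ffmpeg >/dev/null 2>&1 && [ -f img1.png ]; then
  ffmpeg -f concat -i list.txt -r 30 -pix_fmt yuv420p out.mp4
fi
```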
[19:02:28 CEST] <rainabba> When applying 2 video filters (say unsharp and crop), I've been using the syntax -vf "[in] ....filter1.... [resulta]; [resultb] .....filter2.... [out]"  Is there a more implicit syntax to say, "take the input, apply the first filter, then to that apply a 2nd filter and output it"? I'd expect something like -vf "filter1; -> filter2" or such.
[19:03:11 CEST] <rainabba> That verbose syntax is very powerful, but it feels like overkill for just applying filters inline.
[19:24:31 CEST] <ATField> Piping output of filter1 for processing with filter2 seems like an easy solution for that, no?
[19:24:34 CEST] <kepstin> rainabba: you can usually just do '[in]filtera,filterb[out]' - the comma is the standard "feed output of this filter into the next" operator.
[19:25:38 CEST] <kepstin> rainabba: and for the really simple case, no need to use any [] stuff at all; if it's all one-in-one-out filters, you can just do '-vf unsharp=XXX,crop=XXX'
[19:38:38 CEST] <rainabba> kepstin: ty. thought the last time I tried that, I got an error about only having 1 input. I'll try again.
[19:42:18 CEST] <kepstin> rainabba: if you're using a filter that requires multiple inputs, you'll need to use filter_complex, yeah.
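So the two-filter case from the question collapses to this shape (sketch, untested; the filter parameters are placeholders):

```shell
# the comma chains one-in/one-out filters left to right
VF="unsharp=5:5:1.0,crop=640:360"
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -vf "$VF" out.mp4
fi
```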
[19:45:25 CEST] <wallbroken> c_14, is it also possible to speed up the video instead of throttling the audio?
[19:48:05 CEST] <c_14> wallbroken: sure, use setpts. You'll have to reencode the video though.
[19:49:13 CEST] <c_14> pass it the same values you would atempo
[19:49:16 CEST] <furq> it's possible to change the video framerate without reencoding but it's a bit convoluted and you'll need tools other than ffmpeg
[19:49:23 CEST] <furq> and it probably depends on the video codec as well
[19:49:42 CEST] <c_14> yeah, that would work as well
[19:49:52 CEST] <furq> if you're not doing a pal to film conversion or something then just use atempo, it's easier
[19:51:27 CEST] <wallbroken> c_14, with atempo, the audio was re-encoded?
[19:51:32 CEST] <furq> yes
[19:51:39 CEST] <wallbroken> :\
[19:51:48 CEST] <furq> using -vf or -af will always reencode the stream
[19:51:52 CEST] <wallbroken> ok
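Putting the two together, a hedged sketch for a 1.25x speed-up (in.mp4 is a placeholder; note atempo accepts factors between 0.5 and 2.0 per instance):

```shell
SPEED=1.25
# video: divide presentation timestamps; audio: raise tempo by the same factor
if command -v ffmpeg >/dev/null 2>&1 && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -vf "setpts=PTS/$SPEED" -af "atempo=$SPEED" out.mp4
fi
```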
[19:52:19 CEST] <furq> this is why you obsessively hoard your lossless sources forever
[19:52:23 CEST] <furq> or at least why i do
[19:53:22 CEST] <furq> wallbroken: http://askubuntu.com/a/370826
[19:54:06 CEST] <wallbroken> oh ok
[19:54:09 CEST] <wallbroken> i can use that
[19:54:12 CEST] <wallbroken> to make it lossless
[19:54:16 CEST] <wallbroken> right?
[19:54:18 CEST] <furq> sure
[19:54:35 CEST] <wallbroken> it's a good idea?
[19:54:41 CEST] <wallbroken> or it will cause other problems?
[19:55:22 CEST] <furq> you might run into problems if you end up with a non-standard framerate
[19:59:47 CEST] <ATField> furq: How much disk space can obsessive lossless hoarding end up requiring, on average?
[20:00:16 CEST] <furq> i'm not sure i can give a meaningful average
[20:00:41 CEST] <furq> also personally speaking "lossless sources" usually means dvd images, which obviously isn't lossless
[20:03:48 CEST] <ATField> DVD or BD? Though I guess that's kinda becoming a question for another channel.
[20:17:28 CEST] <rainabba> Thanks for the syntax help guys.
[20:27:26 CEST] <sliter> Am I able to use many -i's to create a slideshow as described here: https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
[20:28:11 CEST] <rainabba> sliter: You can provide many inputs with -i input1 -i input2 -i input3 etc...
[20:28:39 CEST] <sliter> rainabba: And it will work as described in link?
[20:29:06 CEST] <rainabba> If someone documented an example, i'm sure it was valid at that time, in that context.
[20:30:12 CEST] <rainabba> If you're having an issue, please describe it, provide the command you're using, etc..
[20:30:35 CEST] <sliter> I'll check and then report.
[20:30:40 CEST] <sliter> If it works, ofc.
[21:02:02 CEST] <sliter> rainabba: looks like it's using only 1 file of the files i specified in input.
[21:02:28 CEST] <sliter> 'ffmpeg -y ' + files + '-r 1 -pix_fmt yuv420p -b:v 1M' + ' screenshow.webm'
[21:17:26 CEST] <sliter> Hello?
[21:17:30 CEST] <sliter> Anyone might help me?
[21:17:37 CEST] <sliter> ffmpeg -y  -framerate 1/5 -i screens/wr0000.png -i screens/wr0001.png -i screens/wr0002.png -i screens/wr0003.png -i screens/wr0004.png -i screens/wr0005.png -r 1 -pix_fmt yuv420p -b:v 1M screenshow.webm
[21:17:41 CEST] <sliter> What is wrong with it?
[21:17:45 CEST] <sliter> Why does it render only 1 frame?
[21:20:24 CEST] <DHE> it loads all files simultaneously. you probably want to concatenate them
[21:20:40 CEST] <DHE> there's a whole section on the wiki on that.
[21:20:49 CEST] <DHE> but there's a better way if you want to convert a sequence of images into a slide show
[21:20:56 CEST] <sliter> I have script that outputs filenames that i need.
[21:21:10 CEST] <sliter> So i can't use many -i's?
[21:21:21 CEST] <llogan> ffmpeg -i screens/wr%04d.png
[21:24:07 CEST] <sliter> Ahem
[21:24:16 CEST] <sliter> I have sort of trouble
[21:24:21 CEST] <sliter> I can't use %0 in batch file
[21:24:31 CEST] <llogan> %%0
[21:24:54 CEST] <sliter> Yay! That worked! Thanks.
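For the archive, the working shape of that command (untested sketch; sliter's flags kept as given). The %% is only needed inside a Windows .bat file, where every literal % must be doubled; from a normal shell the pattern is written plainly:

```shell
# image-sequence pattern: wr0000.png, wr0001.png, ... at one image per 5 s
PATTERN='screens/wr%04d.png'
if command -v ffmpeg >/dev/null 2>&1 && [ -f screens/wr0000.png ]; then
  ffmpeg -framerate 1/5 -i "$PATTERN" -r 1 -pix_fmt yuv420p -b:v 1M screenshow.webm
fi
```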
[22:47:26 CEST] <nick0> Hi, I'm trying to resample some audio I decoded and filtered, I managed to setup the resampler, get the same amount of frames as the input, then I flush it. Flushing gives me the extra frames I needed, but it also gives *extra* frames, which sounds like a few seconds of the clip, played backwards and sped up. What should I look at?
[00:00:00 CEST] --- Tue Jun  7 2016

