[Ffmpeg-devel-irc] ffmpeg.log.20151121

burek burek021 at gmail.com
Sun Nov 22 02:05:01 CET 2015


[00:06:46 CET] <jasom> votz: try ffmpeg -i foo.wav -map_metadata -1 ...
[00:07:52 CET] <votz> jasom: -map_metadata -1 didn't do it
[00:08:09 CET] <votz> I did find a solution, though: -flags +bitexact
[00:08:22 CET] <jasom> votz: oh, good to know
[00:08:32 CET] <votz> from digging around in libavformat/mux.c
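votz's fix can be sketched as a single command (file names are placeholders, not from the log):

```shell
# -map_metadata -1 drops input metadata; -flags +bitexact additionally
# suppresses the encoder tag ffmpeg otherwise writes into the output header.
ffmpeg -i foo.wav -map_metadata -1 -flags +bitexact -c copy clean.wav
```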
[00:16:16 CET] <ChocolateArmpits> Ok I think I solved my problem by using gnuwin utilities. With 'tail' I can seek to any particular position, then with 'cat' concatenate the ts files which get piped to ffmpeg. Works wonders
[00:17:59 CET] <ChocolateArmpits> the command line is pretty hefty but it's so much nicer to have a working solution
[01:31:54 CET] <Batch> Hi
[01:32:08 CET] <Batch> someone have ffmpeg in arch?
[01:40:01 CET] <pepee> running this:  ffmpeg -y -loop 1 -r 1 -i $IMG.png -i $AUDIO.mp3 -shortest -c:v libx264 -tune stillimage -c:a copy -f mp4 /tmp/$FILE.mp4  generates a 1 FPS video, but if I remove everything related to audio, it makes a ~50 FPS video... how come?
[01:41:30 CET] <pepee> I mean, I want to make a 1 FPS file, but I don't understand the difference between  `ffmpeg -y -loop 1 -r 1 -i $IMG.png -i $AUDIO.mp3 -shortest -c:v libx264 -tune stillimage -c:a copy -f mp4 /tmp/$FILE.mp4`  and `ffmpeg -y -loop 1 -r 1 -i $IMG.png -shortest -c:v libx264 -tune stillimage -an -f mp4 /tmp/$FILE.mp4`
[01:47:57 CET] <c_14> pepee: use -framerate instead of -r and specify -r 1 as an output option
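c_14's suggestion, applied to pepee's command, might look like this (a sketch; -framerate is an input option for the image demuxer, -r an output option):

```shell
# Read the looped image at 1 fps and also mux the output at 1 fps;
# $IMG/$AUDIO/$FILE are the placeholders from pepee's original line.
ffmpeg -y -loop 1 -framerate 1 -i "$IMG.png" -i "$AUDIO.mp3" -shortest \
       -c:v libx264 -tune stillimage -c:a copy -r 1 -f mp4 "/tmp/$FILE.mp4"
```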
[01:57:51 CET] <pepee> heh, nm, I think I found the real problem... I didn't specify a length for the video, duh
[02:00:04 CET] <pepee> another question: why is webm/libvpx encoding faster than mp4/libx264 in this case (encoding one image + an audio file)?
[02:01:17 CET] <pepee> it's 1s for webm/ogg and 2-3s for mp4/mp3, with -c:a copy
[02:06:58 CET] <c_14> eeeeh, muxing of the mp4 maybe?
[02:07:09 CET] <c_14> You can try a test output to mkv instead of mp4
[02:07:27 CET] <c_14> If it still takes longer than webm, then eh it's just faster
[02:08:40 CET] <pepee> do web browsers support mkv?
[02:09:12 CET] <TD-Linux> in the more limited form of webm, yes
[02:09:25 CET] <pepee> hah
[02:09:34 CET] <pepee> nah, they don't support mkv :(
[02:09:50 CET] <pepee> well, guess I'll be using webm only
[02:10:11 CET] <furq> does a 2-second difference actually matter
[02:11:13 CET] <TD-Linux> supporting unencumbered codecs matters :^)
[02:11:26 CET] <TD-Linux> note that you're probably using vp8. vp9 will get you better quality but longer encode time
[02:11:37 CET] <furq> i have no opposition to that reason
[02:11:45 CET] <pepee> yeah, I'm using vp8
[02:12:28 CET] <pepee> furq, well, I want to use whatever I can use to reduce the encoding time
[02:12:56 CET] <TD-Linux> also libvpx has different speed settings, you can likely get it even faster (same for vorbis)
[02:13:47 CET] <furq> you might as well use opus if it's for the web
[02:13:53 CET] <furq> it has identical desktop browser support
[02:14:08 CET] <TD-Linux> and less setup overhead, which might matter for 1s files
[02:14:38 CET] <pepee> actually, the server has only 1 CPU, so it's a broader difference
[02:14:54 CET] <Ripmind> Hi. https://trac.ffmpeg.org/wiki/HWAccelIntro#NVENC Does that mean i need to compile it?
[02:15:42 CET] <c_14> If you want to use nvenc, yes
[02:16:58 CET] <Ripmind> uhm
[02:17:06 CET] <Ripmind> is it difficult on windows?
[02:17:50 CET] <c_14> Depends, have you ever cross-compiled something or used mingw/msys2 to compile a program for windows?
[02:18:32 CET] <Ripmind> ok i could use a linux VM to crosscompile if that is easier? But i never compiled anything, only make make install etc.
[02:18:35 CET] <Ripmind> basic linux knowledge
[02:19:35 CET] <c_14> https://trac.ffmpeg.org/wiki/CompilationGuide/CrossCompilingForWindows
[02:19:48 CET] <Ripmind> yes, i am on that right now :)
[02:19:52 CET] <c_14> I think you'd just have to add the nvencencodeapi.h to the cross-prefix/include
[02:20:09 CET] <Ripmind> can you help me if i get questions while compiling?
[02:20:16 CET] <c_14> I can do my best.
[02:20:29 CET] <Ripmind> that already means a lot, thank you :)
[02:23:16 CET] <Ripmind> The first thing i can't find is which dependencies i need to build
[02:23:33 CET] <c_14> What do you want, besides nvenc?
[02:23:35 CET] <c_14> Any encoders?
[02:23:39 CET] <c_14> You'll need yasm
[02:23:44 CET] <Ripmind> no i mean. i'll need gcc?
[02:24:00 CET] <Ripmind> usually there is a huge list of "apt-get install" requirements
[02:25:19 CET] <c_14> iirc you need mingw-w64 gcc
[02:25:32 CET] <c_14> you'll probably have to build that yourself
[02:25:33 CET] <Ripmind> Isn't that for windows?
[02:25:47 CET] <Ripmind> https://trac.ffmpeg.org/wiki/CompilationGuide/CrossCompilingForWindows#CrossCompiler
[02:25:49 CET] <Ripmind> is it that step?
[02:25:59 CET] <c_14> yes
[02:26:22 CET] <c_14> I'm looking at the second link in that section
[02:26:30 CET] <Ripmind> https://github.com/rdp/ffmpeg-windows-build-helpers this?
[02:26:44 CET] <c_14> https://ffmpeg.zeranoe.com/forum/viewtopic.php?f=19&t=459 this one
[02:26:51 CET] <Ripmind> oh
[02:26:59 CET] <Ripmind> the 3rd link says it can autocompile it
[02:27:16 CET] <c_14> Try it
[02:27:21 CET] <c_14> It may or may not work
[02:27:30 CET] <Ripmind> :D
[02:27:32 CET] <Ripmind> true
[02:33:36 CET] <Ripmind> ok
[02:33:45 CET] <Ripmind> so first i install all that stuff like build-essential?
[02:34:24 CET] <c_14> Ye, you'll probably need it.
[02:35:03 CET] <Ripmind> but i do not need gcc because i build it myself?
[02:35:25 CET] <c_14> Well, you're going to need some sort of compiler in order to compile it.
[02:35:52 CET] <Ripmind> ah
[02:38:00 CET] <Ripmind> ok it's running :)
[02:38:22 CET] <Ripmind> will this already take long?
[02:38:39 CET] <c_14> probably
[02:38:49 CET] <c_14> What kind of CPU?
[02:38:54 CET] <Ripmind> i7-4790
[02:43:38 CET] <Ripmind> it is spamming pax: Unable to create gcc-5.2.0/gcc/c/c-convert.c: No such file or directory for many many files
[02:44:17 CET] <Ripmind> http://p.styler2go.de/?9
[02:44:20 CET] <Ripmind> oh
[02:44:21 CET] <Ripmind> nvm
[02:55:01 CET] <Ripmind> i am such a noob... *sigh*
[03:14:08 CET] <Ripmind> ok it is compiling again and a lot faster now
[03:36:59 CET] <Ripmind> i don't understand what i need to do next
[03:37:21 CET] <Ripmind> how do i build the dependencies?
[03:37:32 CET] <c_14> For ffmpeg?
[03:37:54 CET] <Ripmind> yes. I built the compiler and now it says compile dependencies
[03:38:08 CET] <Ripmind> that would mean, my NVENC, am i right?
[03:38:23 CET] <c_14> For that you just need the header
[03:38:41 CET] <Ripmind> yeah i have that file
[03:39:00 CET] <c_14> Just throw that in your include path and you should be fine
[03:39:26 CET] <Ripmind> but where do i find my include path and what do i do afterwards? compiling the compiler again?
[03:40:13 CET] <pepee> how come ffmpeg is slower in my machine (AMD llano APU from 2011, 4 cores) than in amazon (Intel Xeon E5-2670, 1 core) :(
[03:40:23 CET] <c_14> Doesn't really matter where, you can set it with --extra-cflags="-I/path/to/somewhere"
[03:40:30 CET] <pepee> err, *faster in my machine, for x264
[03:40:35 CET] <c_14> And then you just need to run the ffmpeg configure script and then compile
[03:40:53 CET] <c_14> pepee: threading, probably
[03:41:02 CET] <Ripmind> c_14: set it on the cross_compile.sh script?
[03:41:21 CET] <c_14> pepee: also, is it the same version, same compilation flags, do both cpus support the same instruction sets
[03:41:48 CET] <c_14> Ripmind: eeeeeh, don't know how the cross_compile.sh works. Was talking about ffmpeg's configure script
[03:41:59 CET] <pepee> c_14, the intel cpu supports avx, while mine doesn't even support sse4+
[03:42:06 CET] <Ripmind> https://trac.ffmpeg.org/wiki/CompilationGuide/CrossCompilingForWindows#Compiledependencies
[03:42:14 CET] <Ripmind> it says... "Next cross compile any added dependencies you may want, for instance libx264."
[03:42:24 CET] <Ripmind> oh
[03:42:26 CET] <Ripmind> hmm
[03:42:31 CET] <c_14> Do you want something besides nvenc?
[03:42:56 CET] <Ripmind> no
[03:43:01 CET] <Ripmind> well, the basics
[03:43:07 CET] <Ripmind> everything that is in the usual ffmpeg?
[03:43:11 CET] <c_14> pepee: assuming they weren't manually disabled when compiling ffmpeg, it's probably just the threading
[03:43:34 CET] <c_14> Ripmind: default ffmpeg has decoders for just about everything and a handful of encoders, external deps usually just pull in extra encoders
[03:43:43 CET] <Ripmind> nice
[03:43:48 CET] <Ripmind> so i just want that nvenc
[03:44:04 CET] <c_14> Then just skip that step and go straight to compile ffmpeg
[03:44:33 CET] <Ripmind> don't i need to add nvenc?
[03:45:20 CET] <c_14> Just add --enable-nvenc to the configure command (and --extra-cflags="-I/path/to/somewhere" in case the header isn't in a default include path)
[03:45:31 CET] <c_14> eh
[03:45:36 CET] <c_14> and --enable-nonfree iirc
[03:45:37 CET] <Ripmind> ah
[03:45:41 CET] <c_14> But it should bug you about that
[03:45:48 CET] <Ripmind> ok but
[03:45:55 CET] <Ripmind> it says i should use ./configure
[03:46:14 CET] <Ripmind> do i need to clone the git to get the file / script?
[03:46:22 CET] <c_14> Well
[03:46:24 CET] <c_14> You need the sources
[03:46:30 CET] <c_14> Otherwise you can't compile them
[03:47:03 CET] <Ripmind> mmmh
[03:49:45 CET] <Ripmind> i686-w64-mingw32-gcc is unable to create an executable file.
[03:50:06 CET] <c_14> Upload the last few lines of your config.log to a pastebin service
[03:50:57 CET] <Ripmind> http://p.styler2go.de/?10
[03:51:38 CET] <c_14> Make sure i686-w64-mingw32-gcc is in your path
[03:54:03 CET] <Ripmind> hm
[03:54:07 CET] <Ripmind> but where should that be
[03:54:26 CET] <c_14> How'd you compile the cross-compiler?
[03:54:52 CET] <Ripmind> i don't have the i686, i have the http://p.styler2go.de/?11 prolly?
[03:55:51 CET] <c_14> Add that bin to your path and set --cross-prefix=x86_64-w64-mingw32-
[03:56:03 CET] <Ripmind> i think i did that
[03:56:18 CET] <Ripmind> echo $PATH: /home/debian/ffmpeg-windows-build-helpers/sandbox/mingw-w64-x86_64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[03:56:31 CET] <c_14> Right, then just modify the --cross-prefix thing
[03:57:22 CET] <Ripmind> will this be a problem later? that i don't have i686?
[03:59:40 CET] <Ripmind> omgosh i think i am going somewhere
[03:59:53 CET] <Ripmind> only thing missing now is the nvenc .h file
[04:00:25 CET] <c_14> As long as you only plan on using the binary on 64bit systems it's no problem at all
[04:00:43 CET] <Ripmind> i said that in the gcc compile
[04:00:46 CET] <Ripmind> only 64 bit
[04:05:34 CET] <Ripmind> i am curious, how long will it take to compile ffmpeg?
[04:05:51 CET] <c_14> On my system on a non-cross-compile about 3 minutes
[04:05:56 CET] <Ripmind> oh
[04:06:01 CET] <Ripmind> i thought it would take ages
[04:06:18 CET] <Ripmind> i add it with --extra-cflags?
[04:06:28 CET] <c_14> --extra-cflags, yes
[04:06:33 CET] <Ripmind> ./configure --enable-memalign-hack --arch=x86 --target-os=mingw32 --cross-prefix=x86_64-w64-mingw32- --pkg-config=pkg-config --enable-nvenc --enable-nonfree --extra-cflags="-I../nvEncodeAPI.h"
[04:06:48 CET] <Ripmind> without the -I?
[04:06:52 CET] <c_14> just -I../
[04:06:55 CET] <c_14> without the nvEncodeAPI.h
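With c_14's correction folded in, Ripmind's configure line would be (same flags as above, only the -I changed to point at the directory holding nvEncodeAPI.h rather than at the file itself):

```shell
./configure --enable-memalign-hack --arch=x86 --target-os=mingw32 \
            --cross-prefix=x86_64-w64-mingw32- --pkg-config=pkg-config \
            --enable-nvenc --enable-nonfree --extra-cflags="-I../"
```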
[04:07:02 CET] <Ripmind> oh
[04:07:06 CET] <Ripmind> -I = include?
[04:07:13 CET] <c_14> basically
[04:07:20 CET] <Ripmind> ah
[04:07:27 CET] <Ripmind> so.. it created the config i think
[04:07:29 CET] <Ripmind> now i do make?
[04:07:39 CET] <Ripmind> *excitement*
[04:08:01 CET] <c_14> It should be pretty obvious if configure exited successfully. And yes, just run make (-jwhatever)
[04:08:05 CET] <c_14> Probably -j4 or -j8
[04:08:15 CET] <Ripmind> j means?
[04:08:24 CET] <c_14> How many threads make should use when compiling
[04:08:28 CET] <c_14> Use about 1 per cpu core
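c_14's rule of thumb can be automated (assuming GNU coreutils is available):

```shell
# nproc reports the number of available cores; run one make job per core.
make -j"$(nproc)"
```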
[04:08:35 CET] <Ripmind> ah
[04:09:02 CET] <Ripmind> ok it's running
[04:10:30 CET] <Ripmind> it took you only 3 minutes?
[04:10:45 CET] <c_14> 3minutes 5 seconds
[04:10:57 CET] <Ripmind> so you have a very good computer?
[04:11:43 CET] <c_14> It's 7 years old, but it used to be very good yes.
[04:12:15 CET] <pepee> turns out mpeg4 is much faster than libx264
[04:12:18 CET] <Ripmind> well i am using a VM
[04:12:27 CET] <Ripmind> for all that
[04:12:34 CET] <Ripmind> real 3m32.284s
[04:12:36 CET] <Ripmind> it's done.
[04:12:40 CET] <Ripmind> \o/
[04:12:54 CET] <pepee> but produces a bigger file
[04:13:32 CET] <pepee> (faster with -shortest)... bigger file, even with -tune stillimage
[04:13:40 CET] <Ripmind> c_14: i have the exe file on my pc now. how do i find out if nvenc is included?
[04:13:53 CET] <c_14> pepee: I don't think mpeg4 supports tunes
[04:14:03 CET] <c_14> Ripmind: ffmpeg -encoders
[04:14:06 CET] <c_14> it should list nvenc
[04:14:21 CET] <c_14> There or in -codecs
[04:14:26 CET] <Ripmind> it's in there!
[04:14:43 CET] <Ripmind> ok one last question: how do i use it now?
[04:15:01 CET] <c_14> ffmpeg -i video -c:v nvenc out.mkv
[04:16:43 CET] <Ripmind> HAHAHA
[04:16:45 CET] <Ripmind> ok it works
[04:16:52 CET] <Ripmind> around 400 fps encoding
[04:16:56 CET] <Ripmind> i am happy *_*
[04:17:39 CET] <Ripmind> thank you so much :)
[04:17:50 CET] <pepee> does anyone know if you can use hardware encoding in amazon or digitalocean? o.O
[04:18:25 CET] <pepee> or if there's a cloud provider that supports hardware encoding?
[04:18:52 CET] <pepee> *dedicated hw encoding
[04:19:06 CET] <c_14> I'm sure they exist, but don't know of any off the top of my head.
[04:19:11 CET] <TD-Linux> pepee, I mean some Xeons have QSV, also obviously the nvidia boxes on AWS have it
[04:19:23 CET] <TD-Linux> but no one actually does hw cloud encoding
[04:19:33 CET] <pepee> how do people do it?
[04:19:37 CET] <TD-Linux> software
[04:19:44 CET] <pepee> I have no idea, I really want to learn
[04:19:53 CET] <pepee> you mean, the way I'm doing it now?
[04:20:13 CET] <Ripmind> i am getting unsure if it worked again, c_14 :/
[04:20:28 CET] <Ripmind> i have 100% cpu consumption, barely gpu i guess
[04:20:30 CET] <TD-Linux> pepee, with libvpx or libx264, yes
[04:20:42 CET] <pepee> isn't it kinda stupid that people don't use dedicated hardware encoders...
[04:20:51 CET] <pepee> err, s/people/companies/
[04:20:53 CET] <TD-Linux> pepee, not at all, CPU tends to be more cost effective
[04:21:14 CET] <c_14> Ripmind: nvenc doesn't do any calculations in the gpu, it uses a dedicated hardware chip, also pushing 400 frames per second to the gpu is going to take a decent bit of cpu
[04:21:17 CET] <TD-Linux> also with CPU you fully control the encoder, and generally CPU encoders produce better quality/filesize than hw encoders
[04:21:28 CET] <pepee> what if there were a few big providers of hw encoding?
[04:21:30 CET] <Ripmind> oh
[04:21:30 CET] <pepee> ahh
[04:22:02 CET] <Ripmind> pepee: how about that little thing? http://www.nvidia.de/object/visual-computing-appliance-de.html :D
[04:23:14 CET] <pepee> Ripmind, nice
[04:23:16 CET] <TD-Linux> pepee, the output would still be inferior to software encoders.
[04:23:28 CET] <Ripmind> i think it costs around $50,000
[04:23:36 CET] <Ripmind> from what i found in google
[04:24:00 CET] <pepee> TD-Linux, how come?
[04:24:13 CET] <TD-Linux> you can get an aws c4.8xlarge for $2 an hour that has 18 real cores, that's what most use (myself included)
[04:24:22 CET] <pepee> I don't really know what "encoding" means, but from comments I've read in forums, it has to do with randomness or something?
[04:25:00 CET] <TD-Linux> no, basically video formats specify how a decoder should work, but not the encoder.
[04:25:42 CET] <TD-Linux> there are many possible ways to encode a video that plays back correctly, it's the encoder's job to search for the best way.
[04:25:52 CET] <pepee> ah
[04:32:03 CET] <pepee> how do you specify two output videos with an image and two input audio files?
[04:32:25 CET] <c_14> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
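The wiki page describes several approaches; the simplest, one input fanned out to two independently encoded outputs, looks roughly like this (placeholder file names):

```shell
# Output options apply to the output file that follows them, so each
# output gets its own codec settings in a single ffmpeg run.
ffmpeg -i input.mkv \
       -c:v libx264 -crf 23 out_h264.mp4 \
       -c:v libvpx -b:v 1M out_vp8.webm
```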
[04:32:33 CET] <pepee> thanks
[04:41:57 CET] <pepee> I don't get it... I mean, img + ogg -> webm, and img + mp3 -> mp4   in the same command line. is it possible?
[04:42:53 CET] <TD-Linux> no, you're better off with two instances of ffmpeg there
[04:43:11 CET] <pepee> ah, ok
[04:43:37 CET] <pepee> can you somehow point to input #? if so, how?
[04:45:16 CET] <chungy> It's possible, but more manageable with separate instances
[04:45:34 CET] <pepee> ah, ok, I'll just use it like that. thanks
[08:02:35 CET] <pepee> how do I force ffmpeg to merge the files? "-strict -1" seems to be ignored   :  track 1: muxing mp3 at 12000hz is not standard, to mux anyway set strict to -1
[08:02:44 CET] <pepee> using ffmpeg 2.8.2
[08:08:20 CET] <waressearcher2> pepee: hallo
[08:08:29 CET] <pepee> hello waressearcher2
[08:08:41 CET] <waressearcher2> pepee: wie geht's ? [how's it going?]
[08:09:04 CET] <pepee> I don't speak german(?)
[08:11:47 CET] <durandal_1707> pepee: pastebin full command
[08:12:21 CET] <pepee> command or command + output?
[08:12:32 CET] <durandal_1707> both
[08:15:42 CET] <pepee> durandal_1707, http://pastebin.ubuntu.com/13389123/
[08:19:15 CET] <durandal_1707> pepee: you have strict at wrong position
[08:19:42 CET] <pepee> durandal_1707, I tried 3 positions, still won't help. anyway, where should I put it?
[08:19:56 CET] <durandal_1707> move it after the input options
[08:20:54 CET] <pepee> durr, I'm dumb...
[08:21:11 CET] <durandal_1707> anyway you could resample the audio so this isn't needed
[08:22:18 CET] <pepee> can I do it in the same line?
[08:23:27 CET] <durandal_1707> yes
[08:23:43 CET] <pepee> lemme try
[08:29:11 CET] <pepee> I still get the same error message without -strict -1
[08:29:35 CET] <pepee> anyway, thank you very much durandal_1707
[08:31:08 CET] <pepee> err, I get the error with -strict -1 too...
[08:32:58 CET] <pepee> ffmpeg -y -loop 1 -r 1 -i save/in.jpg -i save/in.mp3 -strict -1 -c:v mpeg4 -qscale:v 4 -c:a copy -ab 32k -ar 16000 -shortest -f mp4 out.mp4  <- "[mp4 @ 0x3a8f880] track 1: muxing mp3 at 12000hz is not standard, to mux anyway set strict to -1"
[08:33:35 CET] <pepee> nm, I'm using an older player in my local setup...
[08:52:55 CET] <durandal_1707> you are using copy for audio, that can't work
[08:53:35 CET] <durandal_1707> put strict after -f mp4
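durandal_1707's two alternatives, sketched against pepee's command (untested; file names from the transcript):

```shell
# Option 1: resample/re-encode the audio to a standard rate instead of
# copying the 12000 Hz mp3 stream into mp4:
ffmpeg -y -loop 1 -r 1 -i save/in.jpg -i save/in.mp3 -c:v mpeg4 -qscale:v 4 \
       -c:a libmp3lame -ar 44100 -ab 32k -shortest -f mp4 out.mp4

# Option 2: keep -c:a copy, but pass -strict -1 as an output (muxer)
# option, i.e. after -f mp4, so it actually reaches the mp4 muxer:
ffmpeg -y -loop 1 -r 1 -i save/in.jpg -i save/in.mp3 -c:v mpeg4 -qscale:v 4 \
       -c:a copy -shortest -f mp4 -strict -1 out.mp4
```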
[09:06:13 CET] <pepee> durandal_1707, I will simply create a temporary mp3 file to solve it. now I have another problem... https://stackoverflow.com/questions/18214322/ffmpeg-creates-mp4-which-no-browser-can-decode-but-it-can-be-played-in-vlc
[09:12:16 CET] <durandal_1707> use basic preset
[09:13:23 CET] <pepee> https://stackoverflow.com/a/24697998  <- I get "Undefined constant or missing '(' in 'baseline'"
[09:17:28 CET] <pepee> http://permalink.gmane.org/gmane.comp.video.ffmpeg.devel/133615
[09:17:43 CET] <pepee> from http://ubuntuforums.org/showthread.php?t=786095&page=180&p=11102917#post11102917
[09:17:57 CET] <pepee> tl;dr I have to do this in 2 steps.
[09:28:07 CET] <pepee> thanks durandal_1707 . bye
[12:17:33 CET] <kruzin> Hi.. is there any specific channel I need to go to for asking an ffmpeg on android query
[12:18:16 CET] <IntelRNG> I know none
[12:18:26 CET] <kruzin> oh :/
[12:18:29 CET] <kruzin> can I ask here?
[12:21:59 CET] <kruzin> anyone here worked with ffmpeg on android?
[12:27:09 CET] <waressearcher2> kruzin: hallo
[12:27:18 CET] <kruzin> hey waressearcher2  :D
[12:27:42 CET] <waressearcher2> kruzin: wie geht's wie steht's ? [how's it going, how are things?]
[12:27:56 CET] <kruzin> waressearcher2 I don't understand :/
[12:29:01 CET] <kruzin> waressearcher2 have you workied with ffmpeg on android?
[12:29:07 CET] <kruzin> worked*
[12:29:18 CET] <waressearcher2> kruzin: nein [no]
[13:16:40 CET] <Worf> looking for any tips about how to stabilize and denoise videos created by a mobile phone. currently i have quite good results for stabilizing using the vidstabdetect and vidstabtransform filters, denoising however i'm not so happy with yet. trying hqdn3d ... i manage to denoise the dark parts acceptably, but at the same time the not-so-noisy, not-so-dark parts really suffer ... any tips?
[13:18:25 CET] <IntelRNG> Meh, when I have tried denoisers, I have taken noise out by turning the image more blurry... I am not much of a fan of it.
[13:22:18 CET] <Worf> IntelRNG: so you tried several denoisers ending up just using blur? :)
[13:23:45 CET] <IntelRNG> lol. To be honest, I tried long ago and I don't know which filters I used. The source material was very damaged anyway. I tried with some films transferred from a 16 mm film from the 30s.
[13:24:32 CET] <IntelRNG> But video was the least problematic. Audio was screwed up because the sprocket holes of the original source were a bit broken and there was A/V desync due to that
[13:25:44 CET] <Worf> oh ... i understand ... reminds me of when i tried to get the best out of an old audio cassette
[13:26:47 CET] <IntelRNG> Audio is more forgiving, you can get nice transfers from vinyls
[13:27:01 CET] <IntelRNG> Assuming they are not completely screwed to begin with.
[13:50:33 CET] <Worf> hrmm ... and i need to try what difference it makes if i first denoise and then stabilize, or the other way round ...
[14:15:53 CET] <Flerb> Hi. Does anyone have any idea how I might convert an MJPEG stream to something like H.264 over HTTP?
[14:26:12 CET] <Worf> ... testing atadenoise now, seems to be fairly new ...
[16:59:59 CET] <grumper> greetings
[17:00:51 CET] <grumper> does ffmpeg have a way to use an external process as part of a filtergraph? connecting the input and output to the stdin/stdout respectively of an arbitrary program?
[17:18:13 CET] <grumper> I suspect the answer is no, time for a complex forest of processes with named pipes inbetween :/
[17:19:12 CET] <ChocolateArmpits> grumper: You can use it in a pipeline, supports stdin and stdout
[17:20:34 CET] <ChocolateArmpits> Depending on what you're doing on both ends raw yuv/rgb input and output will probably be easiest to use
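ChocolateArmpits's raw-video pipeline idea, as a rough sketch ('my_filter' is a hypothetical program that reads and writes raw rgb24 frames on stdin/stdout; the frame size and rate given to the second ffmpeg must match the source):

```shell
# Decode to raw video on stdout, filter through the external program,
# then re-encode from raw video on stdin.
ffmpeg -i in.mp4 -f rawvideo -pix_fmt rgb24 - \
  | my_filter \
  | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - \
           -c:v libx264 out.mp4
```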
[17:26:05 CET] <grumper> ChocolateArmpits: the problem is that I have several streams emerging from a filtergraph, and I want to pass only one of them to this external process
[17:26:14 CET] <grumper> and then re-merge them back
[17:26:32 CET] <grumper> so it's rather nontrivial to build with individual processes, so was looking to simplify it
[17:26:45 CET] <ChocolateArmpits> grumper: So can't the external process simply pass them to the output unchanged?
[17:27:32 CET] <ChocolateArmpits> Or you can try combining the streams into one whole frame and only have the process affect the particular portion of the frame where the stream you want changed is located
[17:27:41 CET] <ChocolateArmpits> I have no idea about your external process btw
[17:27:51 CET] <ChocolateArmpits> Are you writing it yourself as an application ?
[17:28:12 CET] <grumper> it's a gst-launch invocation from hell to take advantage of a hardware openmax thingymajig
[17:28:31 CET] <grumper> given how fragile and counterintuitive gst is, I am trying to keep as little of the stream within it as possible
[17:38:02 CET] <grumper> anyway this gives me enough to go on
[17:38:02 CET] <grumper> cheers
[21:10:30 CET] <pepee> is it normal that mp4/libx264 files are much bigger than vp8/webm?
[21:10:55 CET] <JEEB> means you have no idea what you're doing
[21:11:07 CET] <fritsch> :-)
[21:11:14 CET] <furq> it's normal if you encode them with ffmpeg's default parameters
[21:11:15 CET] <JEEB> if you're using "-crf" with both they mean completely different things with every encoder
[21:11:22 CET] <pepee> well, yes, I have no idea what I'm doing
[21:11:35 CET] <JEEB> libx264 is exceptionally good rate control wise
[21:11:50 CET] <furq> x264 should outperform vp8 in general
[21:11:52 CET] <JEEB> so if you tell it "I want this average bit rate" it will keep that till its death
[21:11:59 CET] <pepee> at the expense of more processing time, right?
[21:12:02 CET] <furq> no
[21:12:07 CET] <furq> x264 is higher quality and faster
[21:12:17 CET] <furq> well, should be higher quality
[21:12:21 CET] <furq> it's definitely faster
[21:12:22 CET] <pepee> funny, I get the contrary
[21:12:39 CET] <JEEB> pepee: anyways, libx264 either has the rate factor rate control (crf) or average bit rate one
[21:12:59 CET] <JEEB> first one doesn't limit the rate control, second of course limits it to hit that specific average bit rate :P
[21:13:06 CET] <JEEB> so if you tell it to hit X, it will hit X
[21:13:37 CET] <JEEB> and as I said, crf values between encoders, heck - even between presets in a single encoder aren't comparable
[21:13:38 CET] <pepee> I'm trying to convert images + audio to videos... and webm is easier to use, faster and generates small files. libx264 and mpeg4 are absurdly complicated, libx264 is slower than vp8 and generates bigger files
[21:13:40 CET] <furq> ffmpeg vp8 defaults to something ridiculous like 150kbps
[21:14:04 CET] <JEEB> pepee: did you build libx264 without yasm or something?
[21:14:26 CET] <furq> is this just for single image files
[21:14:30 CET] <JEEB> also the file size depends completely on your rate control :P
[21:14:32 CET] <furq> and also have you checked the quality of the output
[21:14:34 CET] <pepee> JEEB, I'm using the static build listed in the website
[21:14:47 CET] <JEEB> please pastebin your full command line and terminal output
[21:14:48 CET] <JEEB> and link here
[21:15:53 CET] <pepee> 1 min, please
[21:15:59 CET] <furq> i don't really get what you mean by "easier to use" vs "absurdly complicated"
[21:18:46 CET] <pepee> well, I tried merging 16kbps, 8kHz mp3s and it complained. I also did some tests and turns out -tune stillimage made no difference. then I tried playing the output of some files in the browser... and it didn't show the image
[21:19:44 CET] <furq> what does mp3 have to do with x264
[21:24:05 CET] <pepee> JEEB, http://paste.ubuntu.com/13404900/
[21:24:29 CET] <pepee> furq, I don't know... but ffmpeg let me do that with other formats, so I guessed it has to do with the video format, or something
[21:25:06 CET] <furq> yeah you're not providing any bitrate options to libvpx so it's no wonder it's smaller
[21:25:36 CET] <JEEB> 200kb/s being the default
[21:25:40 CET] <furq> also this audio file is six seconds long
[21:26:13 CET] <furq> that's not really long enough for codec performance to make any difference
[21:27:27 CET] <pepee> well, in my tests, longer files made even more difference
[21:27:53 CET] <pepee> anyway, yeah, I hadn't noticed the difference in bitrate, thanks
[21:28:23 CET] <JEEB> how old is this CPU you're using btW?
[21:28:40 CET] <JEEB> since you only have SSE2 "fast" path in libx264 in addition to mmx2/lzcnt
[21:29:01 CET] <pepee> now I wonder, for larger pics, I have to use higher bitrate, right? I mean, I can't use a fixed bitrate for random inputs
[21:29:12 CET] <pepee> JEEB, an AMD llano APU
[21:29:17 CET] <JEEB> :/
[21:29:18 CET] <furq> pepee: that's what -crf is for
[21:29:20 CET] <JEEB> my condolences
[21:29:44 CET] <pepee> I'm not gonna use this one to encode files, though, I'll use AWS
[21:29:48 CET] <pepee> I'm just testing my app
[21:30:15 CET] <JEEB> are these files going to be watched over limited bandwdith?
[21:30:39 CET] <JEEB> in that case you have to use maxrate/bufsize in addition to your general rate control mode
[21:30:49 CET] <JEEB> but yeah, CRF is very good in libx264/5
[21:31:02 CET] <furq> i don't think you need to set -profile:v baseline if you're using -preset ultrafast
[21:31:04 CET] <JEEB> it's also available in libvpx, but it's quite different there
[21:31:27 CET] <JEEB> furq: if he needs baseline then it's better to set it
[21:31:28 CET] <pepee> not necessarily, but I don't need to store and serve high quality videos made with images
[21:31:48 CET] <JEEB> then in case he heightens the preset it will still be baseline
[21:32:05 CET] <pepee> I took that from stackoverflow, when I was testing the mp4 in the browser and didn't see the image
[21:32:20 CET] <furq> desktop browsers support main profile anyway
[21:32:34 CET] <furq> afaik it's only phones and whatnot which don't support main
[21:32:35 CET] <JEEB> I'd actually say that desktops support high/level 4.1 just fine
[21:32:47 CET] <JEEB> very little amount of mobiles don't support >baseline these days
[21:32:54 CET] <JEEB> mostly chinese android 2.x devices
[21:33:05 CET] <furq> when did android start supporting it
[21:33:19 CET] <JEEB> eh, you almost never use the SW decoder
[21:33:24 CET] <JEEB> it's always an ASIC on the board
[21:33:29 CET] <furq> oh
[21:33:33 CET] <JEEB> not sure what the fallback SW decoder supports
[21:33:33 CET] <pepee> what I want is to reduce encoding time and file size as much as possible... am I asking for too much? :P
[21:33:40 CET] <furq> so it's cheap android x.y devices then
[21:33:57 CET] <JEEB> yeah, stuff with very crappy chipsets or drivers
[21:34:22 CET] <JEEB> pepee: not really. you've already gotten the fastest (and worst compressing) preset set there and for rate control you can use -crf with libx264 :P
[21:34:39 CET] <JEEB> now find the highest crf value that still looks good with that content
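JEEB's "find the highest crf value that still looks good" step can be brute-forced with a small sweep (a sketch with placeholder names; quality must be checked by eye):

```shell
# Encode the same source at several CRF values and compare file sizes;
# higher CRF means smaller files and lower quality.
for crf in 18 23 28 33 38; do
  ffmpeg -y -i in.mp4 -c:v libx264 -preset ultrafast -crf "$crf" \
         "out_crf${crf}.mp4"
done
ls -l out_crf*.mp4
```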
[21:34:45 CET] <furq> i don't think slower presets will make much difference if it's a still image
[21:34:52 CET] <JEEB> oh they do
[21:34:58 CET] <JEEB> they definitely do :P
[21:35:02 CET] <furq> fair enough
[21:35:06 CET] <JEEB> it improves intra as well :P
[21:36:37 CET] <furq> pepee: fwiw if you do use mp4 then you're better off using aac over mp3
[21:36:48 CET] <furq> i've had some issues with web players that choked on mp3 in mp4
[21:37:13 CET] <furq> also good aac encoders (i.e. fdk-aac) are much better at low bitrates
[21:38:32 CET] <JEEB> pepee: anyways my results with libvpx have always been much worse than libx264 speed/perf -wise, so if you at any point get the time to show your way of getting such results feel free to pastebin another thing :P
[21:38:45 CET] <furq> you should also probably benchmark with longer audio files
[21:39:04 CET] <pepee> yeah, I'll make longer files
[21:39:07 CET] <furq> 5 frames of video isn't really enough for a benchmark
[21:39:56 CET] <pepee> will copying from mp3 to aac take too much time?
[21:40:12 CET] <furq> if your source is mp3 then you might as well keep it
[21:40:22 CET] <pepee> JEEB, I didn't even think about the image quality...
[21:40:47 CET] <JEEB> heck, I like my i7 being able to do preset veryslow 17fps with decoding+IVTC
[21:40:49 CET] <JEEB> at 1080p
[21:41:13 CET] <JEEB> almost could stream 24fps content >_>
[21:41:39 CET] <furq> time for some more overclocking
[21:43:39 CET] <fritsch> use the gpu ...
[21:44:12 CET] <fritsch> seen some people transcoding 8 h264 stream in parallel via vaapi
[21:44:21 CET] <fritsch> not straight forward though
[21:44:45 CET] <pepee> well, what I'm really doing is: the user enters some text and the backend sends that to a TTS engine, which outputs a wav file. then that file is converted to mp3/ogg, the WAV file is removed and then the user is given a link to the mp3/ogg.  the idea is to let the user test stuff until they find a good output from the tts engine, input a URL to an image, and use the mp3/ogg files to make videos
[21:46:06 CET] <JEEB> fritsch: if I just cared about speed I would just superfast the bitch :P
[21:46:16 CET] <JEEB> or used intel's ASIC
[21:47:27 CET] <pepee> I guess the obvious solution here is to keep the WAV files, but... encoding mp3/ogg takes time and wastes space, which I'm trying to prevent. this is all in real time, the user waits a few seconds for the webapp to make the video... and that's why I want fast encoding and low size files
[21:48:00 CET] <pepee> not sure if anyone understands or cares :/
[22:09:51 CET] <pepee> thanks people for being patient, I'm re-reading the logs, and already understood that what I really needed is to tune -crf
[22:28:34 CET] <Worf> hrmm ... trying to do a few things at once, using some -filter_complex for video. however, i seem to lose the audio despite -c:a copy ... even tried a 2nd filter_complex for audio ...
[22:28:52 CET] <c_14> 2 filtercomplexes won't work. you're probably missing a map
[22:29:25 CET] <Worf> ah ok ... hmm ...
[22:30:54 CET] <Worf> currently i have:   -filter_complex "[0]firstfilter,split=2[out1][tmp1];[tmp1]secondfilter[out2]" -map '[out1]' -c:a copy -c:v libx264 ... file1.mp4 -map '[out2]' -c:a copy -c:v libx264 ... file2.mp4
[22:31:22 CET] <Worf> do i need a 2nd map for audio? or where did i lose it?
[22:31:25 CET] <c_14> add a map for the audio
[22:31:38 CET] <c_14> As soon as you have one explicit map, you no longer have any implicit maps
[22:33:08 CET] <Worf> inside the filter complex, do i need to add another input->split->2 audio outputs too then?
[22:33:33 CET] <c_14> no
[22:33:45 CET] <pepee> heh, you were right, x264 is faster and gives better quality, even with my crappy CPU
[22:33:50 CET] <Worf> just 2 identical -map then?
[22:33:59 CET] <c_14> Just map the audio stream
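Putting c_14's advice into Worf's command, a working shape might be (filter names are Worf's own placeholders; the change is the added -map 0:a per output):

```shell
# Once any explicit -map is given, implicit mapping stops, so the audio
# stream has to be mapped explicitly for each output file.
ffmpeg -i in.mp4 \
  -filter_complex "[0:v]firstfilter,split=2[out1][tmp1];[tmp1]secondfilter[out2]" \
  -map "[out1]" -map 0:a -c:a copy -c:v libx264 file1.mp4 \
  -map "[out2]" -map 0:a -c:a copy -c:v libx264 file2.mp4
```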
[23:04:07 CET] <zhanshan> hi
[23:04:28 CET] <Worf> ... if only it was so easy ... i keep failing ... either i silently lose the audio, or i get errors that my stream specifier matches no stream, etc ... if [0:v] works for the video stream, shouldn't [0:a] work for audio? or [0:1], or [#0:1] or something like that?
[23:10:32 CET] <pepee> one last thing: the duration of the input audio (in.wav) is 57.2 seconds, but the duration of the video is 1m 21s. command is:  ffmpeg -y -framerate 1 -loop 1 -i in.png -i in.wav -c:v libx264 -preset veryfast -crf 28 -tune stillimage -c:a libvo_aacenc -r 1 -ab 32k -ar 16000 -shortest -pix_fmt yuv420p -movflags faststart -f mp4 out.mp4
[23:11:36 CET] <c_14> Worf: without []
[23:11:51 CET] <pepee> can someone tell me why is ffmpeg doing that?
[23:12:10 CET] <c_14> pepee: you're resampling
[23:12:37 CET] <c_14> wait, no
[23:12:40 CET] <c_14> that shouldn't be it
[23:32:22 CET] <furq> pepee: i've actually had the same problem and never really figured out how to fix it
[23:32:49 CET] <furq> it gets worse as you reduce the framerate
[23:33:10 CET] <furq> i ended up bumping the framerate to 5 but that didn't completely fix it
[23:37:57 CET] <pepee> furq, hmm... is there a way to pass the duration of the video to ffmpeg, then?
[23:40:54 CET] <c_14> You can try -t length
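Applied to pepee's command, capping the output at the audio's known duration would look like this (57.2 is the duration pepee measured for in.wav):

```shell
# -t as an output option limits the output duration explicitly,
# instead of relying on -shortest.
ffmpeg -y -framerate 1 -loop 1 -i in.png -i in.wav \
       -c:v libx264 -preset veryfast -crf 28 -tune stillimage \
       -c:a libvo_aacenc -ab 32k -ar 16000 -t 57.2 \
       -pix_fmt yuv420p -movflags faststart -f mp4 out.mp4
```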
[23:44:45 CET] <pepee> that works, thanks
[23:45:29 CET] <Worf> c_14: thanks for your help, i don't yet 100% understand the syntax, but i learned a bit now :)
[23:47:25 CET] <Worf> and it seems to work ... i hope i don't need more complex mappings soon :)
[23:52:05 CET] <Worf> something totally different: i get a warning which i'm not sure what to make of: "Codec for stream 0 does not use global headers but container format requires global headers" - from what i find online i can't tell if the warning is actually correct, and if so if i should take it serious
[23:52:38 CET] <c_14> ignore it
[23:53:45 CET] <Worf> this is something i know how to do! :)
[23:56:38 CET] <pepee> furq, in my case, I think I'll read the headers of the wav/mp3/ogg file programmatically, get the duration and use that
[23:57:20 CET] <furq> weirdly i actually can't replicate the issue now
[23:57:36 CET] <pepee> hah, perhaps it was fixed recently?
[23:57:45 CET] <furq> what version are you on
[23:57:57 CET] <pepee> 2.8.2
[23:58:00 CET] <furq> i'm on 2.8.1
[23:58:09 CET] <pepee> hehe
[23:58:16 CET] <pepee> using what cmd line?
[23:58:20 CET] <furq> maybe it got fixed for a really short time
[23:59:11 CET] <furq> i tried a bunch and they all seem to work identically
[23:59:34 CET] <furq> -framerate 1 before input and -r 1 afterwards works
[00:00:00 CET] --- Sun Nov 22 2015


More information about the Ffmpeg-devel-irc mailing list