[Ffmpeg-devel-irc] ffmpeg.log.20141017
burek
burek021 at gmail.com
Sat Oct 18 02:05:01 CEST 2014
[00:00] <llogan> (...not counting devices or decoders that may not be enabled if missing --enable-gpl)
[00:00] <akiselev> If you distribute ffmpeg/libav* binaries it can get a bit hairy. Look here http://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic for some info on LGPL (ffmpeg's general license) but watch out for --enable-gpl and --enable-nonfree compile config flags
[00:01] <pzich> if I ffprobe one of my videos I see "Duration: 00:00:46.03, start: 0.033333", is there something I can pass when encoding to reset the start to 0?
[00:02] <sourpulse> llogan, akiselev: Thank you, that's how I was thinking too. The commercial product won't distribute FFmpeg, it will simple mention that utilities such as FFmpeg, x264, etc are available elsewhere.
[00:04] <sourpulse> s/simple/simply
[00:04] <llogan> sounds fine to me
[00:05] <akiselev> pzich, you can use the a/setpts video filter: -vf setpts=PTS-STARTPTS
[00:05] <akiselev> sourpulse, you can distribute LGPL static/dynamic libs as long as you "make the source code available"
[00:05] <akiselev> *you can also
[00:06] <sourpulse> llogan, akiselev: That's reassuring. Does FFmpeg have an official contact person for such questions?
[00:06] <akiselev> Which means either responding to requests for source code of the LGPL parts you used to compile the binaries, or putting them online (most companies just bury zip files on some back page)
[00:06] <sourpulse> The commercial program won't be using the libs. It merely communicates via a command-line pipe.
[00:07] <akiselev> sourpulse, doesn't matter to the LGPL, they're both binaries
[00:07] <akiselev> But as long as you don't distribute it doesnt matter
[00:07] <llogan> sourpulse: no contact person, but you can ask the ffmpeg-user mailing list.
[00:07] <pzich> akiselev: hmm, it still shows the same start time after I do that
[00:07] <akiselev> What do you mean it shows the start time?
[00:08] <sourpulse> Ah, the mailing list sounds like a good idea. Thanks.
[00:08] <llogan> or refer to http://ffmpeg.org/legal.html
[00:08] <akiselev> which basically says "you're on your own, get a lawyer"
[00:08] <llogan> or ask a lawyer who knows about this stuff
[00:08] <pzich> akiselev: if I export a video with that and run ffprobe on it, I still see "Duration: 00:00:46.07, start: 0.033333"
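pzich's symptom is consistent with the muxer writing its own start offset: setpts/asetpts reset the stream timestamps, but the container-level "start:" that ffprobe reports comes from the format layer. A hedged sketch of the filter-based reset (filenames are placeholders):

```shell
# Reset video and audio timestamps so the first frame is at PTS 0.
# The container "start:" ffprobe reports may still be non-zero, since
# the muxer (e.g. mp4 edit lists) controls that offset separately.
ffmpeg -i input.mp4 \
       -vf "setpts=PTS-STARTPTS" \
       -af "asetpts=PTS-STARTPTS" \
       output.mp4
```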
[00:08] <sourpulse> I've read the FFmpeg legal page and didn't see any contact info.
[00:09] <llogan> i meant as general info, not as a means of contact
[00:11] <sourpulse> llogan: Ok, understood. Yes, asking the lawyer is usually wise.
[01:42] <sourpulse> "ffplay -v verbose -f lavfi -i testsrc" --- That ffplay command auto-inserts a scaler conversion to yuv420p, causing color subsampling artifacts along the perimeter of the circle. Is there any way to avoid that conversion and get a sharper image? ffplay output: http://ffmpeg.gusari.org/viewtopic.php?f=26&t=1722
[01:46] <llogan> ffplay -v verbose -f lavfi -i testsrc,boxblur
[02:20] <[2]Ian> Can someone tell me how I would modify this command to properly use the adelay filter to delay the second input by 10 seconds?
[02:20] <[2]Ian> ffmpeg -i 1.opus -i 2.opus -filter_complex amix=inputs=2:duration=longesst conference.opus
[02:22] <sourpulse> llogan; ??? That's still subsampling, and now blurry. I'm hoping for sharper.
[02:23] <[2]Ian> misspelling of longest aside...
[02:26] <llogan> sourpulse: it was a bad joke
[02:27] <sourpulse> Oh ok! I get it now.
[02:33] <llogan> [2]Ian: [0:a]adelay=1000[a0];[a0][1:a]am(ix|erge)...?
[02:44] <[2]Ian> thanks
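For the record, llogan's hint spelled out as a full command (a sketch, not from the log): adelay takes milliseconds, one value per channel separated by |, so a 10-second delay on a stereo input is 10000|10000.

```shell
# Delay the second input by 10 s, then mix both, keeping the longest duration.
ffmpeg -i 1.opus -i 2.opus \
       -filter_complex "[1:a]adelay=10000|10000[d];[0:a][d]amix=inputs=2:duration=longest" \
       conference.opus
```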
[07:05] <Nosomy> does vp9 only have a bitrate rate-control method?
[09:09] <Baked_Cake> ne1 know the correct refresh rate multiples for monitors
[09:09] <Baked_Cake> i think they use up to 3 decimal points
[09:09] <Baked_Cake> im trying to use 75hz
[09:09] <Baked_Cake> khv*
[09:09] <Baked_Cake> whatevs
[11:01] <flavioberetti> hi. i want to pass the 'date' metadata to the libvorbis audio encoder, where do i find documentation on how the fields are named in ffmpeg?
[11:01] <flavioberetti> ffmpeg -y -i Remark_Music_remix.wav -acodec libvorbis -metadata artist="Juha-Matti Hilpinen (AMJ)" -metadata genre="Chiptune" -metadata title="Remark Music (remix)" -metadata album="HVSC" -metadata date="1993 Side B/Topaz Beerline" -aq 8 ../C64Music-ogg/MUSICIANS/A/AMJ/Remark_Music_remix.ogg
[11:02] <flavioberetti> gives me Unrecognized option 'metadata date="1993 Side B/Topaz Beerline"'.
[11:02] <flavioberetti> Error splitting the argument list: Option not found
[11:02] <flavioberetti> however, man oggenc describes the option as -d date, --date date
[11:03] <flavioberetti> thinking about it, i can just use oggenc anyways...
[11:03] <flavioberetti> still can i do it with ffmpeg?
[11:11] <relaxed> flavioberetti: single quotes
[11:11] <flavioberetti> relaxed: for the other keys it works with double quotes..
[11:12] <relaxed> oh, well, see if it works
[11:13] <flavioberetti> relaxed: nope, get Unrecognized option 'metadata date='1993 Side B/Topaz Beerline''.
[11:14] <flavioberetti> relaxed: i got it working with oggenc now, thanks for helping! :)
[11:17] <c_14> -metadata date="1193 topaz/fooobar" <- this works for me
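Judging by the error message (the whole `metadata date=...` taken as a single option name), the space between -metadata and date was probably not a plain space in the original command line. Retyped, the form c_14 confirms does work, with the full value from above:

```shell
# Vorbis comments are set with -metadata KEY=value; "date" is a valid key.
ffmpeg -i Remark_Music_remix.wav -c:a libvorbis -aq 8 \
       -metadata date="1993 Side B/Topaz Beerline" out.ogg
```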
[11:37] <rahul__> hi all, is there any way to stream my video continuously using ffplay
[11:37] <rahul__> i am trying udp with ffplay to stream frames from the server
[11:37] <rahul__> i need to give continuous mouse clicks to update it
[11:37] <rahul__> is there any way to do it fluently?
[12:32] <Diogo> hi i use this command ffmpeg -threads 1 -i alice.avi -vcodec libx264 -crf 15.0 -acodec libfdk_aac -ab 64k -ar 48000 -ac 2 -f mp4 alice.mp4
[12:33] <Diogo> i want ffmpeg to use only 1 thread
[12:34] <Diogo> http://postimg.org/image/lnvwow4bx/ ??
[12:38] <Diogo> http://pastebin.com/GSmWpBmz
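One likely gotcha, offered as a guess: ffmpeg options apply to the nearest following input or output, so -threads placed before -i only limits the decoder. Moving it after the input limits the encoding side as well (libfdk_aac is swapped for the native aac encoder here purely so the sketch runs on common builds; note libx264 may still spawn a few internal helper threads).

```shell
# -threads given after the input applies to the output/encoders.
ffmpeg -i alice.avi -threads 1 -c:v libx264 -crf 15 \
       -c:a aac -b:a 64k -ar 48000 -ac 2 -f mp4 alice.mp4
```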
[16:17] <nick112234> hi everyone Q coming:
[16:17] <nick112234> ffmpeg -i sbs.mp4 -vf stereo3d -vf scale=w=iw*2:h=ih -acodec copy -threads 10 -b:v 10000k -preset ultrafast -vcodec libx264 out.avi
[16:18] <nick112234> does the 3d filter merging but not the rescaling... why is that? thanks
[16:19] <Mavrik> you should probably do a single filter chain
[16:19] <Mavrik> -vf stereo3d,scale=w=iw*2:h=ih
[16:24] <nick112234> ok my mistake; -vf stereo3d,scale=w=iw*2:h=ih does the 3D filter but not the rescaling !!
[16:27] <spaam> !!!
[16:28] <nick112234> ok it does something as the output is said to be 1920x1080 (and 960x1080 if scale not present....) still i get a 960x1080 looking output with black left & right of the video, any clue?
[16:32] <nick112234> anyone? thanks
[16:35] <nick112234> anyone avail to help tuning a cmd line? thanks
[16:36] <nick112234> a free beer is at stake! ;-)
[16:43] <ChocolateArmpits> nick112234, Did you correct the aspect?
[16:43] <ChocolateArmpits> setdar=dar=16/9
[16:43] <ChocolateArmpits> At the end of the filter chain
[16:45] <ChocolateArmpits> It would also be good if you could ffprobe the output video file to be sure the combined picture has a resolution of 1920 by 1080
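A one-liner for that check (the output filename is assumed from the command above):

```shell
# Print just the video stream's dimensions, e.g. 1920x1080.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=width,height -of csv=s=x:p=0 out.avi
```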
[16:49] <nick112234> no, i took it for granted the aspect ratio would be the output i required, but it truly seems the aspect ratio is kept, so i got black bars on left & right....
[16:53] <nick112234> ok so the question is how do i set up the right cmd line "-vf scale=w=iw*2:ih" to get a 2x width same height ?
[16:55] <nick112234> having the DAR or SAR whatever.... match the new 2*w:h format ?
[16:58] <albator> hello
[16:58] <albator> i m trying to restream an udp stream, but the audio is out of sync
[16:58] <albator> so I tried the ifoffset method, but I get input/output error
[17:00] <albator> http://pastebin.com/LLRwR6aL
[17:01] <albator> I remember it used to work before..
[17:02] <albator> and this is the stream Stream #0:0[0x32a]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1440x1080 [SAR 4:3 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
[17:02] <albator> Stream #0:1[0x334](fra): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 123 kb/s
[17:12] <nick112234> and the answer was : ffmpeg -i sbs.mp4 -vf stereo3d,scale=w=iw*2:ih,setsar=sar*2 -acodec copy -threads 10 -b:v 10000k -preset ultrafast -vcodec libx264 out.avi
[17:12] <nick112234> thanks & bye
[17:16] <nick112234> let's go for the last Q of the day...
[17:17] <nick112234> what cmd line would you suggest to make a very ultra fast conversion from any format to a ??? format x264? xvid? (popular one easy to read on any platform) with good quality, big file size is ok (not huge) and ultra fast transcoding ?? thanks
[17:17] <nick112234> what is the fastest and file-size-reasonable encoding codec & option in ffmpeg ?
[18:05] <kaotiko> hi
[19:12] <Matina25> Hello, is anyone here with experience to tell me if --enable-librtmp is better than the native ffmpeg rtmp?
[19:12] <Matina25> and what is the differences between these two
[20:34] <albator> i d enable it yea
[20:34] <albator> had problem with ustream for instance without
[20:47] <Matina25> Another question, when i compile ffmpeg statically
[20:47] <Matina25> it cant process domain names
[20:48] <Matina25> only ips
[20:48] <Matina25> :/
[22:24] <bofh> Hi all! Given an audio file (MP3) of length 30 seconds, I need to cut off the first 15 seconds and pad that span with silence, then for the next 5 seconds increase the volume by 20%, and cut off the remaining 10 seconds to the end of the file. So the resulting mp3 will contain 15 seconds of silence, then 5 seconds of audio with increased volume, and be 20 seconds long in total. Is it possible to do with FFmpeg?
[22:38] <kepstin-laptop> sure, would need to use -filter_complex with an aevalsrc to get the silence, a concat, and some seeking on the file input or a trim filter.
[22:39] <bofh> kepstin-laptop: so it's feasible by using a bunch of command-line arguments, or I have to chain/pipe multiple invocations of ffmpeg?
[22:40] <kepstin-laptop> you can do it with a complex filter chain in a single ffmpeg invocation.
[22:41] <bofh> nice, could you please suggest some how-to/guide/article to read?
[22:41] <kepstin-laptop> http://www.ffmpeg.org/ffmpeg-filters.html is probably not a bad place to start
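A hedged sketch of the chain kepstin-laptop describes, with input.mp3 as a placeholder for the 30-second file: 15 s of generated silence, then seconds 15-20 of the input boosted by 20%, concatenated into a 20-second result.

```shell
# aevalsrc=0:d=15 generates silence; atrim keeps seconds 15-20;
# asetpts re-zeroes the trimmed segment; concat joins the two pieces.
ffmpeg -i input.mp3 -filter_complex \
  "aevalsrc=0:d=15[sil];[0:a]atrim=start=15:end=20,asetpts=PTS-STARTPTS,volume=1.2[seg];[sil][seg]concat=n=2:v=0:a=1[out]" \
  -map "[out]" out.mp3
```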
[22:42] <bofh> kepstin-laptop: great, can ffmpeg also mix several files?
[22:42] <kepstin-laptop> define mix
[22:42] <kepstin-laptop> there's several ways in which multiple files could be combined into one file
[22:43] <bofh> kepstin-laptop: just "join" several files, combine/merge
[22:44] <bofh> does it make sense?
[22:44] <kepstin-laptop> one after the other? both playing at the same time? both syncronized, but the player can select between them?
[22:44] <kepstin-laptop> you have to be more specific :)
[22:44] <bofh> kepstin-laptop: well, the result would be the single MP3 file
[22:45] <bofh> with all input streams combined, and the length of the file is the length of the longest input file
[22:45] <kepstin-laptop> ok, so the audio from all streams played at the same time, mixed into a single audio stream.
[22:46] <bofh> yep
[22:46] <kepstin-laptop> that would be the 'amix' filter
[22:47] <bofh> so if I have, say, 5 input files, and each of them needs to be transformed somehow, and then "merged" into a single file - I need to invoke 5 ffmpeg processes to prepare all files, and the invoke another ffmpeg process to apply the 'amix' filter?
[22:47] <kepstin-laptop> you could do it all in one complex filter chain in one process if you want to.
[22:48] <kepstin-laptop> command line gets more and more complex, obviously ;)
[22:48] <bofh> that would be perfect
[22:48] <bofh> kepstin-laptop: as soon as I'll generate the command-line, I don't care how complex it could be )
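The mixing step on its own, as a minimal sketch with placeholder filenames; each per-file transform discussed above could feed amix inside the same -filter_complex:

```shell
# Mix two inputs into one stream; the output lasts as long as the longest input.
ffmpeg -i a.mp3 -i b.mp3 \
       -filter_complex "amix=inputs=2:duration=longest" mixed.mp3
```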
[22:49] <DarKnesS_WolF> a noob question, if i have a nvidia GT 640 with like 300 cores and wanna convert a video using that gpu, i have vdpau (whatever that is) installed and i have -hwaccel auto enabled in the ffmpeg command, but i see only the CPU being used, what am i missing ?
[22:50] <kepstin-laptop> I don't think ffmpeg currently supports doing any video encoding via gpu, only decoding. could be wrong.
[22:50] <DarKnesS_WolF> decoding in that sense ?
[22:50] <ubitux> long story short, gpus are useless
[22:53] <DarKnesS_WolF> mmm
[22:54] <kepstin-laptop> imo, the only reason to want to use a fixed-function video encoder on a graphics card is that you want to do live screencasting of something that's cpu-intensive, and can't use an external encoder box.
[22:56] <kepstin-laptop> in theory, a gpu could be used to accelerate portions of the code of something like x264, and x264 does have some opencl support for that, but I dunno if it really gains you much.
[22:56] <kepstin-laptop> apparently it's buggy, reduces quality, and may be slower than cpu-only encoding atm :)
[22:59] <DarKnesS_WolF> wow
[22:59] <DarKnesS_WolF> lol
[22:59] <DarKnesS_WolF> i have a big video and wanted to use GPU to convert it
[22:59] <DarKnesS_WolF> no other tools ?
[23:00] <kepstin-laptop> just use x264, you'll be happier with the result.
[23:00] <DarKnesS_WolF> needed to try x265
[23:01] <kepstin-laptop> I haven't personally tried it, but apparently hevc/h.265 encoding is hella slow compared to h.264 if you actually want the result to be better quality
[23:02] <JEEB> I'd say for most use cases HEVC encoding is not useful right now
[23:02] <JEEB> for very low bit rates it can give a better result if you put in a *lot* of time
[23:04] <JEEB> well, the result will kind of suck with both, but HEVC will suck a bit less
[23:04] <JEEB> but it's still not good enough with psychovisual optimizations, so if you are trying to aim for good compression and a quality level higher than "lol low"
[23:04] <JEEB> then x264 still has the upper hand
[23:26] <DarKnesS_WolF> thx guys :)
[00:00] --- Sat Oct 18 2014