[Ffmpeg-devel-irc] ffmpeg.log.20190508

burek burek021 at gmail.com
Thu May 9 03:05:02 EEST 2019


[00:34:57 CEST] <rmbeer> hello...
[00:36:08 CEST] <rmbeer> i'm using this script to convert the mp4 to gif, but it still makes a very big gif, how do i reduce the size of the gif?...
[00:36:13 CEST] <rmbeer> https://paste.rs/H10
[00:37:07 CEST] <rmbeer> can i control the compression or something?...
[00:37:23 CEST] <rmbeer> is the palette always 256 colours?...
[00:42:01 CEST] <DHE> that's a gif for ya
[00:42:19 CEST] <DHE> it's yet another reason why people are moving to mp4 for their "videos" on the web
[00:42:36 CEST] <rmbeer> hummm...
[00:42:38 CEST] <another> or webm
[00:43:33 CEST] <rmbeer> i use mp4, but i need to upload this to youtube, and i don't know if twitter can make a preview automatically from a youtube link...
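[Editor's note: the palette question above is usually answered with a two-pass palettegen/paletteuse run; a minimal sketch, assuming an input named input.mp4 and that a lower frame rate, a smaller width, and fewer than 256 colours are acceptable:]

```shell
# Pass 1: build a custom palette (max_colors below 256 shrinks the file further)
ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos,palettegen=max_colors=128" palette.png

# Pass 2: map the video through that palette
ffmpeg -i input.mp4 -i palette.png \
  -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif
```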
[00:45:56 CEST] <ekiro> how do i tell ffmpeg to use the fonts i dumped out of the video for its subtitles?  is there an option to pass the fontdir for it to look through?
[00:47:35 CEST] <rmbeer> ekiro, you must use mkv...
[00:47:44 CEST] <ekiro> rmbeer, i am
[00:47:55 CEST] <rmbeer> and send subtitles in the last -i ...
[00:54:51 CEST] <ekiro> yu[
[00:54:53 CEST] <ekiro> p
[05:07:33 CEST] <ekiro> how do i set a default font when no font is specified in the subs
[05:13:52 CEST] <lindylex> How can I select the audio and video for split?  This does the video only "[0:v]split[v0][v10];"
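[Editor's note: for the question above, audio has its own split filter (asplit); a sketch assuming the goal is two copies of both the video and the audio stream:]

```shell
# split duplicates video, asplit duplicates audio; map each copy to its own output
ffmpeg -i input.mp4 \
  -filter_complex "[0:v]split[v0][v1];[0:a]asplit[a0][a1]" \
  -map "[v0]" -map "[a0]" out1.mp4 \
  -map "[v1]" -map "[a1]" out2.mp4
```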
[09:22:28 CEST] <kubast2> Hey, how can I grab a specific window in ffmpeg?
[09:22:38 CEST] <kubast2> linux x11
[09:23:02 CEST] <kubast2> I can probe for vissible X11 window id no problem, I don't need to filter by title
[09:23:27 CEST] <kubast2> I looked it up on google and only saw "gdigrab", but I'm mildly convinced right now it is a windows thing
[09:23:41 CEST] <kubast2> :0.windowid? maybe
[09:23:48 CEST] <kubast2> I will check tbh
[09:25:17 CEST] <kubast2> :0.0 hmm so maybe one of them is the screen?; the display variable is only one so
[09:26:46 CEST] <kubast2> Screens aren't used much anymore, with xinerama and now xrandr combining multiple screens into a single logical screen.
[09:26:49 CEST] <kubast2> Ah I see
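[Editor's note: the x11grab input device captures a screen region rather than a window id; a common workaround is to grab the geometry of the target window. A sketch, where the size and offset are assumptions you would read off from xwininfo:]

```shell
# First find the window's position and size, e.g. by clicking it with: xwininfo
# Then grab that region of display :0.0 (here assumed to be 1280x720 at +100,200)
ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i :0.0+100,200 output.mkv
```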
[11:40:27 CEST] <Pinchiukas> If I do a "ffmpeg -i input.mp4 -vf minterpolate=fps=60 output.mp4", what is it doing with the audio?
[11:45:56 CEST] <JEEB> by default if you don't set a codec it will pick whatever is the default for the output container
[11:46:02 CEST] <JEEB> thus you probably want -c:a copy there
[11:46:09 CEST] <JEEB> so that the audio stream doesn't get touched
[11:47:27 CEST] <JEEB> and for video you probably want -c:v libx264 -crf XYZ. exact value of XYZ depends on your eyes and the content so you can use -ss and -t to only handle a specific part of the input for quicker testing
[11:47:57 CEST] <JEEB> for example seeking into 25 seconds of input would be -ss 25 before the input (-i BLAH)
[11:48:13 CEST] <JEEB> and then encoding a minute's worth of stuff is -t 60 after the input (-i BLAH)
[11:48:26 CEST] <JEEB> then you can start with crf 23, and if it looks crappy go down
[11:48:29 CEST] <JEEB> if it looks good go up
[11:48:53 CEST] <JEEB> that way you will find the highest CRF value for the default preset (medium) that is good enough for you
[11:49:24 CEST] <JEEB> shouldn't take too many tries and if you limit yourself to a scene or a few it shouldn't take too long to go through a try
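[Editor's note: the workflow JEEB describes above, put together in one command; a sketch assuming input.mp4 and a starting CRF of 23:]

```shell
# Seek 25 s into the input, encode 60 s with libx264 at CRF 23,
# and copy the audio stream untouched for a quick quality test
ffmpeg -ss 25 -i input.mp4 -t 60 -vf minterpolate=fps=60 \
  -c:v libx264 -crf 23 -c:a copy test.mp4
```

Raise the CRF if the clip looks good, lower it if it looks bad, and re-run on the same short segment until you find the highest acceptable value.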
[12:52:15 CEST] <Pinchiukas> JEEB: why do I want -c:v libx264 for video? :)
[12:58:03 CEST] <JEEB> Pinchiukas: the better H.264 encoder. since you're filtering the video the video needs to be decoded before that. and thus you want something sane to encode it with again
[13:08:21 CEST] <Pinchiukas> Is there a 'raw' option if I have plenty of diskspace?
[13:55:50 CEST] <DHE> while technically an option, depending on the image resolution you may find that the bottleneck is disk IO.
[13:56:10 CEST] <DHE> h264 in lossless mode may perform better in the end
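[Editor's note: a sketch of the lossless-H.264 suggestion above; -qp 0 makes libx264 mathematically lossless, which is much smaller than rawvideo while still fast to decode:]

```shell
# Lossless intermediate; ultrafast trades file size for encoding speed
ffmpeg -i input.mp4 -c:v libx264 -qp 0 -preset ultrafast -c:a copy lossless.mkv
```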
[14:01:55 CEST] <Pinchiukas> I'm wondering how to make stuff hw accelerated. I'm doing this on Mac OS so I guess I should use 'videotoolbox'?
[14:02:22 CEST] <Pinchiukas> Because as it is it's not using all CPU and producing an embarrassing 1-1.5fps.
[14:08:03 CEST] <Pinchiukas> I'm trying different encoding options but nothing seems to have an effect. Might this be the minterpolate filter being single-threaded being the bottleneck?
[14:20:47 CEST] <DHE> plausibly. it sounds like a busy filter and I don't see any threading code in it
[14:33:24 CEST] <Pinchiukas> I came up with the idea of slicing the video into several parts and then doing minterpolate on that but it's probably going to produce a horrible image.
[15:17:18 CEST] <kepstin> yeah, it won't be able to handle motion across slice boundaries if you do that
[15:18:08 CEST] <JEEB> should be OK if you slice at scene boundaries
[15:18:47 CEST] <kepstin> ah, yeah, temporal slicing rather than spatial slicing could work
[16:05:50 CEST] <SixEcho> "ffmpeg -init_hw_device list" lists videotoolbox. "-init_hw_device videotoolbox" is accepted, however "-filter_hw_device videotoolbox" says "Invalid filter device videotoolbox".  ideas?
[16:10:03 CEST] <Mindiell> hi there, I try to mix two audio files. This is working quite good, but I want more :o)
[16:10:26 CEST] <Mindiell> I would like to add the second audio (a simple bip sound) say at the 3rd second of the first file
[16:10:52 CEST] <Mindiell> so I'll have a final audio based on the first audio file, but with a bip at 3 seconds. Is it possible and how ?
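[Editor's note: one way to do what Mindiell asks, without manually prepending silence, is the adelay filter; a sketch assuming files named main.wav and bip.wav:]

```shell
# Delay the bip by 3000 ms on both channels, then mix it over the first file;
# duration=first keeps the output the same length as main.wav
ffmpeg -i main.wav -i bip.wav \
  -filter_complex "[1:a]adelay=3000|3000[b];[0:a][b]amix=inputs=2:duration=first" out.wav
```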
[16:11:26 CEST] <SixEcho> ^ nvm last question; had to assign a name to the device on init
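[Editor's note: the fix SixEcho found, spelled out; -init_hw_device takes a name after '=', and -filter_hw_device then refers to that name. A sketch assuming a videotoolbox encode on macOS:]

```shell
# Name the device "vt" at init, then refer to it by that name for filters
ffmpeg -init_hw_device videotoolbox=vt -filter_hw_device vt \
  -i input.mp4 -c:v h264_videotoolbox output.mp4
```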
[17:11:41 CEST] <Mindiell> Hmm, I finally added a silence before the bip and mixed it with my audio file
[17:22:34 CEST] <another> why wouldn't you? :)
[17:25:10 CEST] <another> arg! ignore that^^
[17:55:53 CEST] <saml> how big is your bitrate ladder?
[20:16:17 CEST] <ekiro> is the main or high profile sufficient for web streaming?
[20:17:13 CEST] <ekiro> i've read folks recommending using -profile:v baseline -level 3.0 to support as many devices possible but i do not need to support ancient devices
[20:19:07 CEST] <kepstin> most devices restricted to baseline profile are obsolete nowadays. you're generally ok with high profile
[20:19:27 CEST] <kepstin> if you're doing multiple encodes you could do one lower quality baseline just as a fallback if you like
[20:20:40 CEST] <kepstin> (note that there's some additional restrictions if you're using webrtc rather than browser playback apis - the 'openh264' decoder used by firefox for webrtc can only do baseline)
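[Editor's note: a sketch of the two-encode approach kepstin suggests; the exact CRF values, scale, and filenames are assumptions:]

```shell
# Main encode: high profile for modern devices and browsers
ffmpeg -i input.mp4 -c:v libx264 -profile:v high -crf 23 -c:a aac high.mp4

# Optional lower-quality baseline fallback (e.g. for webrtc via openh264)
ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.0 -crf 28 \
  -vf scale=-2:480 -c:a aac fallback.mp4
```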
[20:23:49 CEST] <ekiro> i see. thx. also is there a difference between using -movflags faststart and +faststart ?
[20:24:23 CEST] <kepstin> no, the + is syntax for adding a flag, but that's also the default if you just give a flag name.
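[Editor's note: the two spellings discussed above, side by side; both produce an MP4 with the moov atom moved to the front:]

```shell
# Equivalent: a bare flag name and the explicit '+' flag-adding syntax
ffmpeg -i input.mp4 -c copy -movflags faststart output.mp4
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```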
[20:25:58 CEST] <Hello71> isn't openh264 an encoder
[20:26:10 CEST] <Hello71> huh.
[20:26:27 CEST] <kepstin> it's an encoder and decoder
[20:26:51 CEST] <kepstin> annoyingly, in most installs firefox can play h264 high using the system decoder, but it doesn't use the system decoder for webrtc.
[20:27:12 CEST] <kepstin> at least last i checked.
[20:30:20 CEST] <sluidfoe> Hi. I'm looking at ffprobe and it's not clear to me if it needs ffmpeg built with flags for every media type I want to detect or if it's got some magic number database or... I guess specifically: what do I need to do to get an ffprobe that will work with "reasonably arbitrary" media?
[20:31:09 CEST] <DHE> defaults cover a lot. there's not a lot of decoders you may need that ffmpeg doesn't support natively, and even fewer file formats.
[20:43:07 CEST] <sluidfoe> DHE: sorry, there's a caveat: we're using Gentoo, so "defaults" is a strong word
[20:57:36 CEST] <DHE> I mean defaults as in running "configure" with no parameters. you basically get all features that don't require external libraries or hardware (barring simple stuff like zlib)
[21:02:07 CEST] <kepstin> i think most of the use flags on the gentoo package control external deps, but not all of them and it's really not obvious which is which.
[21:23:01 CEST] <cehoyos> The gentoo package is notorious for having completely broken compilation flags (up to and including slow and unsafe binaries)
[21:27:36 CEST] <BtbN> The primary problem of the Gentoo ebuild is to insist that it knows better about CPU optimizations
[21:52:50 CEST] <sloth> hi, when i take a video from /dev/video0 its fine, no color or brightness problems, however when i use -vframes 1 to get single pics, about 1/4 of the image is at a different brightness level
[21:53:01 CEST] <sloth> like some kinda rolling shutter thing, but the only lighting is smothed LED
[21:54:06 CEST] <sloth> exact same spot in every image too
[22:01:18 CEST] <kepstin> i would assume the first frame of a multi-frame video would have the same issue
[22:01:31 CEST] <sloth> heeeey
[22:01:33 CEST] <sloth> it does
[22:01:46 CEST] <kepstin> probably some issue with the camera's automatic exposure :/
[22:01:53 CEST] <sloth> so what do i just take 2 frames and discard the first?
[22:03:07 CEST] <kepstin> sure, that would probably work. you could try '-vf trim=start_frame=1 -vframes 1'
[22:03:20 CEST] <kepstin> the trim filter should drop the first frame, and then it'll take one frame from the rest
[22:05:00 CEST] <sloth> kepstin: and if i want to adjust that i can use says start_frame=2 to skip the first 2 frames?
[22:05:06 CEST] <kepstin> yeah
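[Editor's note: the full capture command implied by the exchange above; a sketch assuming a v4l2 webcam on /dev/video0 and that the first two frames have the exposure glitch:]

```shell
# Drop the first two frames, then grab a single frame from what remains
ffmpeg -f v4l2 -i /dev/video0 -vf trim=start_frame=2 -vframes 1 snapshot.png
```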
[22:05:33 CEST] <sloth> awesome, thank you very much my man
[22:13:46 CEST] <sluidfoe> Thanks, all. Yeah, the ebuild is... "interesting"
[00:00:00 CEST] --- Thu May  9 2019


More information about the Ffmpeg-devel-irc mailing list