[Ffmpeg-devel-irc] ffmpeg.log.20181123
burek
burek021 at gmail.com
Sat Nov 24 03:05:01 EET 2018
[05:11:37 CET] <vincent42> hi all, I see two options of the libx264 encoder: keyint and min-keyint. The latter is documented, but just to make sure, the former defines the maximum GOP length?
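(For reference: yes, in libx264 keyint is the maximum GOP length, i.e. the longest allowed distance between IDR frames, and min-keyint is the minimum. Through ffmpeg these map to the generic options, e.g. in a minimal illustrative command with assumed filenames: ffmpeg -i in.mp4 -c:v libx264 -g 250 -keyint_min 25 out.mp4, where -g corresponds to keyint and -keyint_min to min-keyint.)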
[06:18:47 CET] <tth> $ mplayer https://ve.media.tumblr.com/tumblr_pi5lhd3K7l1qigfjt.mp4
[06:18:58 CET] <tth> oops, wrong channel :)
[06:23:12 CET] <pink_mist> lol
[06:23:30 CET] <pink_mist> lost it when the butter melted
[07:32:01 CET] <TheAMM> That was amazing
[07:32:18 CET] <TheAMM> I've grown tired of rube goldberg machines but that had a fresh take on it
[08:32:54 CET] <ejr> hi. i am using devuan ascii and previously used the standard ffmpeg from the repository. yesterday i installed the latest ffmpeg from source because i needed one of the newer features (the -to option working properly for cutting). but now, when i try to convert mp4s to mp3s, i get the following error when I run "ffmpeg -n -i "$1" -f mp3 "${1%.mp4}.mp3"
[08:32:59 CET] <ejr> Automatic encoder selection failed for output stream #0:0. Default encoder for format mp3 (codec mp3) is probably disabled. Please choose an encoder manually.
[08:33:02 CET] <ejr> Error selecting an encoder for stream 0:0
[08:33:08 CET] <ejr> any suggestions on how to fix this?
[08:39:24 CET] <JEEB> ejr: you probably meant to copy the stream, so -c copy :p by default ffmpeg.c re-encodes with whatever is marked as default for a given format
[08:39:50 CET] <JEEB> and you need LAME to actually encode mpeg-1 layer 3 aka mp3
[08:40:36 CET] <JEEB> which is why your build has no encoder for it most likely :p
[08:40:40 CET] <pink_mist> well if he built from source, it might just be that he forgot to explicitly enable lame
[08:40:58 CET] <pink_mist> (but copy would probably be smarter)
[08:42:44 CET] <ejr> ok, i tried ffmpeg -n -i file.mp4 -c copy -f mp3 out.mp3 but that does not work either. pink_mist: right, i only ran ./configure, make and sudo make install. maybe i should just recompile it, but how can i enable lame in that process?
[08:55:04 CET] <pink_mist> assuming you have lame and its headers installed, you can just pass --enable-libmp3lame
[08:55:09 CET] <pink_mist> to ./configure
[08:55:29 CET] <pink_mist> you should probably look through the ./configure --help output too to see what else you might want to enable
[08:59:01 CET] <ejr> will do, thanks so far!
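A minimal sketch of the rebuild pink_mist describes, assuming the LAME development headers are installed (on Devuan/Debian that would be the libmp3lame-dev package):

    ./configure --enable-libmp3lame
    make
    sudo make install

Afterwards "ffmpeg -encoders | grep mp3" should list libmp3lame, and the conversion can name the encoder explicitly, e.g. ffmpeg -n -i input.mp4 -vn -c:a libmp3lame -q:a 2 output.mp3 (filenames here are placeholders).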
[09:01:47 CET] <ejr> wow, i don't know 99 percent of the codecs and stuff listed there that can be enabled. what are the typical things i should enable for the occasional conversion to standard formats? or can one just enable everything?
[09:03:11 CET] <JEEB> you generally only need to care about what you're encoding; decoders are generally internal
[09:03:25 CET] <JEEB> so if you need to encode mp3, LAME. aac has an internal encoder
[09:03:56 CET] <JEEB> basically you can check the list of encoders enabled with just ./configure and figure out what you'd need more, since you generally know what you need to encode to
[09:04:34 CET] <ejr> ok. what about avi to mp4 and mkv to mp4/avi? does that need any extra decoders enabled? (sorry, i don't know much about decoders/encoders)
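For what it's worth, container-to-container conversions like those need no extra configure flags, since the decoders are built in. If the streams inside are already compatible with the target container it is a pure remux, e.g. ffmpeg -i input.mkv -c copy output.mp4; only re-encoding video to H.264 would additionally need --enable-gpl --enable-libx264 at configure time.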
[11:00:20 CET] <w1kl4s> maybe my question is unrelated to ffmpeg, but someone here will probably know
[11:00:35 CET] <w1kl4s> wd purples or seagate purples for torrent server
[11:06:18 CET] <pink_mist> never seagate
[11:13:38 CET] <Alina-malina> ffmpeg -loop 1 -i kuck4_crop.jpg -i kuck4_crop.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest -vf scale=-2 :720 kuckready.mp4 please someone help, this is taking so creepingly long and using all my resources, what am i doing wrong?
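Two things stand out in that command line: the stray space in "scale=-2 :720" splits the filter argument (it should read -vf scale=-2:720), and encoding a looped still image at the image demuxer's default 25 fps for the whole length of the audio is what eats the CPU. Lowering the input rate, e.g. adding -framerate 2 before -i kuck4_crop.jpg, or using -tune stillimage with libx264, usually makes this kind of job much faster.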
[13:47:46 CET] <skident> Hey guys
[13:49:20 CET] <skident> Does anybody know how to create an AVFrame (audio frame) manually in case I have more than 8 channels? Should I put all data into extended_data buffer or only data from channels greater than 8?
[13:55:20 CET] <JEEB> https://ffmpeg.org/doxygen/trunk/structAVFrame.html#afca04d808393822625e09b5ba91c6756
[14:02:48 CET] <skident> Yeah, I have read this "documentation" and it is not clear to me
[14:03:21 CET] <skident> for example I have an audio stream with 16 channels
[14:03:38 CET] <skident> I see a few options:
[14:04:03 CET] <skident> 1. put first 8 channels into frame.data[]
[14:04:15 CET] <skident> and other 8 channels into extended_data
[14:04:31 CET] <skident> 2. put all 16 channels into frame.extended_data[]
[14:04:37 CET] <skident> which one is correct?
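In case it helps later readers: option 2 is the correct one. extended_data[] must hold the pointers for all channels, while data[] only mirrors the first 8 (AV_NUM_DATA_POINTERS). A minimal C sketch, with an illustrative helper name, letting av_frame_get_buffer() set both arrays up:

    #include <libavutil/frame.h>

    /* Allocate a 16-channel planar float audio frame. */
    AVFrame *alloc_16ch_frame(int nb_samples)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;
        frame->format     = AV_SAMPLE_FMT_FLTP; /* planar: one plane per channel */
        frame->nb_samples = nb_samples;
        frame->channels   = 16;                 /* channel layout left unspecified here */
        if (av_frame_get_buffer(frame, 0) < 0) { /* fills data[] and extended_data[] */
            av_frame_free(&frame);
            return NULL;
        }
        /* Write samples through frame->extended_data[ch]; for ch < 8 these
         * are the same pointers as frame->data[ch]. */
        return frame;
    }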
[16:20:05 CET] <JC_Yang> is there any build configure option that ensures the deprecated features are disabled in the public API? for example, AVPacket.convergence_duration
[16:21:13 CET] <c_14> Not that I know of, only thing you can do is build everything with -Werror=deprecated
[16:21:29 CET] <c_14> which will abort compilation if it tries using a deprecated feature
[16:23:04 CET] <JC_Yang> I mean, if I want to mute the warning, I have to undef FF_API_CONVERGENCE_DURATION, or use something at a higher level which disables all of them for me. but how can I make sure that the built libraries (be they static or dynamic) do work with deprecation-free client code?
[16:24:16 CET] <c_14> you want to make sure that your client doesn't use deprecated functions?
[16:44:56 CET] <JC_Yang> no, I mean the compatibility between my deprecation-free client code and the pre-built libavcodec.a/libavformat.a. for example, if I undef FF_API_CONVERGENCE_DURATION, my client code will use an AVPacket struct that has one field fewer; how can I make sure the prebuilt libavcodec.a uses the same version of AVPacket? this is just one example, and even if it doesn't cause a real-world problem here, I suspect other such incompatibilities could lead to hard-to-debug
[16:44:56 CET] <JC_Yang> errors... I wonder whether my concerns are valid. iiuc, AVPacket is a public API struct and is never malloc/free()d inside the libraries, so it shouldn't cause any problem, but does that mean there are no other areas we should pay attention to?
[16:54:45 CET] <c_14> If you're linking statically you don't have to worry about that, since the version you link against and the version you're running against are always the same. if you link dynamically you have to check the MAJOR/MINOR/MICRO versions: as long as the major version of the library you're running against is the same as the major version you linked against, the ABI should be compatible afaik
[16:57:04 CET] <c_14> the versions are listed in libav*/version.h, with defines you use in your code and functions you compare them to at runtime
[17:04:33 CET] <JC_Yang> okay, got it.
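A small C sketch of the check c_14 describes, comparing the compile-time defines from version.h against the value reported by the loaded library (only the major number has to match for ABI compatibility):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/version.h>

    /* Returns 0 if the running libavcodec is ABI-compatible with the one
     * this client was built against. */
    int check_lavc_abi(void)
    {
        unsigned built   = LIBAVCODEC_VERSION_INT; /* baked in at compile time */
        unsigned running = avcodec_version();      /* queried at run time */

        if (AV_VERSION_MAJOR(built) != AV_VERSION_MAJOR(running)) {
            fprintf(stderr, "libavcodec mismatch: built against major %u, running major %u\n",
                    AV_VERSION_MAJOR(built), AV_VERSION_MAJOR(running));
            return -1;
        }
        return 0;
    }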
[18:32:06 CET] <the_gamer> is there an easier way than this to just concatenate two videos? https://trac.ffmpeg.org/wiki/Concatenate#differentcodec i don't get it. just [0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa] looks really complicated
[18:33:46 CET] <kepstin> the_gamer: if you are only doing concat and have all inputs listed with -i on the ffmpeg command line, you can shorten that to "-filter_complex concat=n=3:v=1:a=1" (if the stream specifiers are omitted, it just grabs inputs and outputs in order from what's available)
[18:34:06 CET] <furq> n=2 if you have two videos
[18:34:13 CET] <the_gamer> is n=3 the number of the videos?
[18:34:15 CET] <the_gamer> ok
[18:34:16 CET] <furq> yes
[18:34:30 CET] <furq> and v and a are the number of video and audio streams
[18:34:48 CET] <the_gamer> whereas i would have thought 1 would be the default if omitted?
[18:35:01 CET] <furq> iirc it defaults to v=1 a=0
[18:35:15 CET] <the_gamer> no audio is standard?
[18:35:25 CET] <furq> [0:v:0] etc selects which streams you want to pass to the filter, but generally speaking they'll be in the right order by default
[18:36:05 CET] <the_gamer> got it thank you :)
[18:36:36 CET] <kepstin> if you have videos with different numbers of video or audio tracks (e.g. one video is dual audio), you'll probably have to explicitly specify the input tracks to use since it might pull the wrong ones otherwise.
[18:36:52 CET] <kepstin> but if the inputs all match, then it's usually fine.
[18:37:52 CET] <the_gamer> all of them have 1 audio track only. it is still rendering...
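For the two-input case the whole command can look like this (filenames are placeholders; re-encoding is needed because the concat filter works on decoded frames):

    ffmpeg -i first.mp4 -i second.mp4 -filter_complex concat=n=2:v=1:a=1 -c:v libx264 -c:a aac output.mp4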
[22:27:31 CET] <zinger> hi
[22:27:43 CET] <zinger> I sometimes use a combination of youtube-dl and FFmpeg to download live streams. Is it somehow possible to download a live stream using only FFmpeg without using youtube-dl ??
[22:29:09 CET] <durandal_1707> no
[22:29:31 CET] <durandal_1707> you need to get url
[22:38:06 CET] <furq> zinger: youtube-dl -g will get the url and then ffmpeg may or may not be able to download it
[22:38:16 CET] <furq> if you just want to pass custom options to ffmpeg or something
[22:40:08 CET] <zinger> ok thanks
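As a concrete example of furq's suggestion (the URL is a placeholder, and note that for some formats youtube-dl -g prints separate video and audio URLs):

    ffmpeg -i "$(youtube-dl -g 'https://www.example.com/live_stream')" -c copy output.ts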
[00:00:00 CET] --- Sat Nov 24 2018