[Ffmpeg-devel-irc] ffmpeg.log.20191202

burek burek at teamnet.rs
Tue Dec 3 03:05:01 EET 2019


[03:02:19 CET] <flai> This is more of a "how should I approach this" kind of question, so bear with me: I have 10+ HLS (m3u8+mpegts+h264) live streams, and I want to cut segments of those together into one big clip, with a nice crossfade between these segments
[03:04:08 CET] <flai> In general, I've come down to two approaches: I could use a modded version of ffmpeg that supports OpenGL shaders (I don't think stock ffmpeg supports that easily), or I could simply export RGB frames of the segments, combine them programmatically, and pipe the result into a second ffmpeg instance
[03:04:34 CET] <flai> in both cases audio isn't trivial
[03:04:47 CET] <flai> (I want a simple mp4+h264+audio as output)
[03:05:04 CET] <flai> What kind of solution would you guys go for?
[03:08:24 CET] <flai> Anyway, here's a version of what it should look like, essentially: https://www.youtube.com/watch?v=TFwAOrj2uSo
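
(A minimal sketch of the second approach flai describes, assuming the blending happens in an external program; the playlist URL, time range, resolution, frame rate and "my_blender" are placeholders, and audio is ignored here:)

    # instance 1 decodes a section of one live stream to raw RGB24 frames on stdout,
    # your own code crossfades/concatenates the frames, instance 2 re-encodes them
    ffmpeg -i https://example.com/stream1.m3u8 -ss 00:01:00 -t 10 -f rawvideo -pix_fmt rgb24 - \
      | my_blender \
      | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 30 -i - -c:v libx264 -pix_fmt yuv420p clip.mp4
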
[03:12:02 CET] <Pie-jacker875> does anyone know anything about mlt rendering parameters through kdenlive
[03:12:15 CET] <Pie-jacker875> I'm trying to figure out how to get a profile for 2 pass vp9
[03:12:47 CET] <Pie-jacker875> it's only actually trying to do the second pass and fails because it doesn't actually have the log file from the first pass
[03:13:07 CET] <Pie-jacker875> and if I set it to do 1 pass then it doesn't seem to create the log file
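
(Not a kdenlive/MLT answer, but for reference this is what the two passes look like in plain ffmpeg: the first pass has to write the stats file that the second pass then reads. Filenames and bitrate are placeholders; on Windows use NUL instead of /dev/null.)

    ffmpeg -i input.mkv -c:v libvpx-vp9 -b:v 2M -pass 1 -passlogfile vp9stats -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libvpx-vp9 -b:v 2M -pass 2 -passlogfile vp9stats -c:a libopus output.webm
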
[03:35:56 CET] <trfl> flai, this and similar things would be much easier with vapoursynth
[03:36:10 CET] <trfl> so i would use that for the sequencing and then ffmpeg for encoding
[03:36:39 CET] <trfl> ...actually VS didn't do audio the last time i checked, but i recall someone was working on that
[03:37:46 CET] <flai> hm, yeah
[03:39:27 CET] <trfl> if you can survive with something windows-native, then avisynth is compiled into the ffmpeg static builds by zeranoe
[03:39:35 CET] <trfl> that's what inspired vapoursynth and does audio too
[03:40:02 CET] <flai> Hm, I mean we run a few cloud Windows instances anyway, but they're being used for something else
[03:40:08 CET] <flai> but windows vms work anyway if we need them to
[03:40:37 CET] <flai> but avisynth looks great
[03:41:27 CET] <flai> yeah, that looks amazing
[03:45:11 CET] <flai> hm, I might have to remux the relevant sections to avi, but that should be trivial
[03:46:44 CET] <flai> Thank you so much trfl!
[09:43:14 CET] <th3_v0ice> How can I call reconfigure_encoder from x264 encoder for example? Are there any methods available from the API that allow this?
[09:44:12 CET] <JEEB> the libx264 wrapper has support for that, various parameters are checked when you feed the encoder an AVFrame
[09:45:24 CET] <JEEB> X264_frame(), the C function called for each AVFrame, calls reconfig_encoder, which checks various params and then calls x264_encoder_reconfig one or more times
[09:46:01 CET] <JEEB> libx264 does limit in which cases it will actually reconfigure though :P
[09:46:21 CET] <JEEB> so I'd first check whether or not you're already hitting x264_encoder_reconfig in the wrapper
[09:46:39 CET] <JEEB> for the exact changes monitored see the wrapper code
[09:46:48 CET] <th3_v0ice> So, reading the code, if I just change the CRF value in the AVCodecContext, it will reconfigure the encoder on the next send_frame?
[09:46:49 CET] <JEEB> reconfig_encoder in libavcodec/libx264.c
[09:47:04 CET] <JEEB> should yes
[09:47:17 CET] <JEEB> whether libx264 itself takes that change in is then what matters
[09:47:53 CET] <JEEB> but that is no longer a FFmpeg thing, but rather a libx264 thing :P
[09:48:05 CET] <th3_v0ice> That's true :)
[09:48:09 CET] <th3_v0ice> Thanks for the info
[09:48:32 CET] <JEEB> np. I tried at one point to add colorspace reconfig there but failed :/
[09:48:55 CET] <JEEB> as in, when the output colorspace/primaries aren't known when the encoder is initialized, but they are known in the first AVFrame
[09:49:07 CET] <JEEB> (which is what often happens in ffmpeg.c)
[09:49:28 CET] <th3_v0ice> Why did you fail in accomplishing that?
[09:49:49 CET] <th3_v0ice> Encoder didnt like it?
[09:49:49 CET] <JEEB> x264 didn't take the change in, apparently. or I didn't see anything changing :P
[09:49:55 CET] <JEEB> I did try to patch x264 too, btw
[09:49:59 CET] <JEEB> so it wasn't just FFmpeg side
[09:50:02 CET] <JEEB> but I didn't get far enough
[09:50:23 CET] <JEEB> I think it might make sense to instead init x264 at the first frame
[09:50:32 CET] <JEEB> do a test-init in wrapper's init()
[09:50:46 CET] <JEEB> and then initialize actually in encode_frame()
[09:51:12 CET] <th3_v0ice> That sounds like a solid idea
[09:55:30 CET] <th3_v0ice> Also seems from x264 code that they only support changing bufsize, maxrate, crf and bitrate.
[09:55:54 CET] <JEEB> yup
[09:55:58 CET] <JEEB> mostly rate control
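
(A hedged C sketch of the mechanism discussed above, assuming an already-opened libx264 encoder context; the rate-control values are re-read by reconfig_encoder in libavcodec/libx264.c when the next frame is sent. Error handling is trimmed and the helper name is made up.)

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* send_with_new_crf() is a hypothetical helper: adjust rate control on an
     * open libx264 encoder and let the wrapper's reconfig_encoder() pick it up. */
    static int send_with_new_crf(AVCodecContext *enc_ctx, AVFrame *frame, double new_crf)
    {
        /* "crf" is a private option of the libx264 wrapper; reconfig_encoder()
         * compares it against the current x264 params on the next frame. */
        int ret = av_opt_set_double(enc_ctx->priv_data, "crf", new_crf, 0);
        if (ret < 0)
            return ret;

        /* bitrate / VBV changes go through the public fields instead, e.g.:
         *   enc_ctx->bit_rate       = 4000000;
         *   enc_ctx->rc_max_rate    = 5000000;
         *   enc_ctx->rc_buffer_size = 10000000;
         * whether libx264 actually applies them is up to x264_encoder_reconfig(). */
        return avcodec_send_frame(enc_ctx, frame);
    }
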
[10:16:38 CET] <lain98> does the order in which I specify the filters in the filtergraph affect the output? (using -vf)
[10:17:18 CET] <cehoyos> Definitely
[10:26:23 CET] <lain98> hmm okay
[11:41:31 CET] <lain98> i'm using this command to generate a video using geq filter. ffmpeg -f lavfi  -i  "nullsrc='s=512x512:d=256:r=1',scale=out_range=full,geq='r=N:g=N:b=N',format=yuv420p,drawtext=fontfile=Arial.ttf: text=%{n}: x=(w-tw)/2: y=h-(2*lh): fontsize=20: fontcolor=white: box=1: boxcolor=0x00000099" test.mp4
[11:41:45 CET] <lain98> but the 5th frame seems to be incorrect
[11:42:06 CET] <lain98> because i get r=g=b=3 instead of 4
[11:50:47 CET] <lain98> ok, there are many frames where the geq equation isn't correct
[11:53:35 CET] <durandal_1707> lain98: geq is correct, probably dropped frames
[11:55:07 CET] <lain98> how do I avoid dropping frames?
[11:55:31 CET] <durandal_1707> post full uncut output to pastebin
[12:01:59 CET] <lain98> check pm durandal_1707
[12:04:03 CET] <lain98> durandal_1707: https://gist.github.com/a-sansanwal/145f244b10acca2ae696dbf282f84d30
[12:06:00 CET] <durandal_1707> you are converting rgb to yuv
[12:06:33 CET] <durandal_1707> you can't expect the same R/G/B values back
[12:08:00 CET] <lain98> yeah, that might be it
[12:11:38 CET] <durandal_1707> it sure is
[12:16:33 CET] <lain98> thanks durandal_1707 , was able to fix it
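
(Two hedged variants of the command from 11:41 that avoid the RGB-to-YUV rounding durandal_1707 points out -- not necessarily the fix lain98 applied; scale and drawtext are dropped for brevity:)

    # write lossless RGB frames so the r=g=b=N values can be checked exactly
    ffmpeg -f lavfi -i "nullsrc=s=512x512:d=256:r=1,geq=r=N:g=N:b=N" -frames:v 10 frame_%03d.png
    # or keep the whole chain in YUV (counter in luma) and encode losslessly so the values survive
    ffmpeg -f lavfi -i "nullsrc=s=512x512:d=256:r=1,geq=lum=N:cb=128:cr=128,format=yuv420p" -c:v libx264 -qp 0 test.mp4
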
[16:34:52 CET] <lofo> Hi
[16:35:16 CET] <lofo> From FFmpeg's StreamingGuide i can read " It either streams to a some "other server", which re-streams for it to multiple clients, or it can stream via UDP/TCP directly to some single destination receiver"
[16:35:39 CET] <lofo> i do not understand the difference between the two approaches
[16:36:55 CET] <DHE> ffmpeg is not a server that handles many users connected to it at once
[16:37:20 CET] <lofo> If we do not care about the "multiple clients" the other server streams to, then in either case there is an emitter and a receiver.
[16:39:40 CET] <lofo> what I'm trying to figure out is: how, from a technical standpoint, are "streams to a some other server" and "stream via UDP/TCP directly to some single destination receiver" different?
[16:42:24 CET] <lofo> Another way to put it: when you stream to a single destination, do you do it the same way you would stream to a streaming server that subsequently dispatches to N other devices?
[16:43:08 CET] <lofo> because the phrase I've copy-pasted suggests that it isn't the case
[16:45:00 CET] <kepstin> there are some modes where you can use ffmpeg as a "server" (it accepts an incoming tcp connection), but it only supports one client connecting.
[16:47:51 CET] <lofo> sure, but i'm only using ffmpeg to produce the stream
[16:49:38 CET] <lofo> I'm trying to figure out if transmitting to a server or directly to a single receiver are two different connection methods
[16:49:51 CET] <kepstin> not necessarily
[16:50:21 CET] <kepstin> ffmpeg does support a lot of different connection methods, you pick one according to what the remote side supports, not what role the remote side fulfills.
[16:51:19 CET] <lofo> Because in this phrase from FFmpeg's StreamingGuide we can read "UDP/TCP" like an alternative connection method : "It either streams to a some "other server", which re-streams for it to multiple clients, or it can stream via UDP/TCP directly to some single destination receiver"
[16:51:50 CET] <kepstin> "some other server which re-streams to multiple clients" is an example of "a single destination receiver"
[16:52:48 CET] <lofo> this is what i was suspecting. needed confirmation. :)
[16:53:19 CET] <kepstin> although a "single destination receiver" is usually interpreted as "something that plays the video directly", which is why the server that re-streams is also listed
[16:53:32 CET] <kepstin> but ffmpeg doesn't care what the remote side does with the video
[16:53:47 CET] <lofo> the "real" info in this phrase is that FFmpeg does handle multiple clients. It was an odd way to put it
[16:53:54 CET] <kepstin> no, it does not
[16:54:04 CET] <lofo> does NOT sorry
[16:58:30 CET] <lofo> I'm sorry to insist but this documentation suggests that point-to-point is a different way to stream: https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming
[17:07:00 CET] <kepstin> that's just giving an example of how to do a point-to-point stream, showing how to set up both ffmpeg and the player.
[17:07:35 CET] <kepstin> a point-to-point stream is just one where the thing on the other end plays the video directly, rather than throwing the data into a black hole or sending it on to multiple other machines
[17:08:20 CET] <kepstin> ffmpeg doesn't care what the other end is, but for people who want to do a point to point stream, it's helpful to know how to set up the *other end* of the connection.
[17:09:04 CET] <kepstin> if you're sending to a re-streaming server, presumably it's already set up and you just need to know how to get ffmpeg to talk to it.
[17:09:18 CET] <kepstin> which is why they're documented differently.
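
(To make the distinction concrete, a hedged sketch along the lines of the StreamingGuide; addresses, ports and the stream key are placeholders, and both ends have to agree on protocol and container:)

    # point to point: push an MPEG-TS stream straight at one receiver over UDP...
    ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -c:a aac -f mpegts udp://192.168.1.42:1234
    # ...which simply plays it
    ffplay udp://192.168.1.42:1234
    # pushing to a re-streaming server is structurally the same command, only the output URL/protocol changes
    ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f flv rtmp://server.example.com/live/streamkey
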
[18:38:11 CET] <Fyr> is there a tool to create slideshow with FFMPEG?
[18:38:35 CET] <Fyr> I mean, a script for Windows with random transitions.
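
(For reference, a basic no-transition slideshow is doable in plain ffmpeg; the filename pattern, per-image duration and output rate below are placeholders, and scripted random transitions would need extra filter work on top of this:)

    # each numbered image is shown for 5 seconds (-framerate 1/5), output at 25 fps
    ffmpeg -framerate 1/5 -i img%03d.jpg -vf "fps=25,format=yuv420p" -c:v libx264 slideshow.mp4
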
[20:01:22 CET] <Polochon_street> Hi! Let's say I have a visualization I'm making from a song in FFmpeg. Is there an "easy" way to stream it on some port to my LAN?
[20:02:39 CET] <Polochon_street> the ideal would be to only compute the visualization if someone requests it, and then stop it as soon as the connection has closed
[20:51:03 CET] <BenLubar> I'm thinking about trying to implement a demuxer and a decoder in ffmpeg for dwarf fortress cmv files
[20:59:28 CET] <Polochon_street> guys, I've managed to do this https://paste.isomorphis.me/6Wv to stream the spectrum of my audio output over my LAN, but I still have about 1s of latency when looking at the output on my own computer (see the line where I play it below)
[20:59:36 CET] <Polochon_street> do you have any idea how to reduce latency even more?
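
(The paste itself isn't reproduced here; as a hypothetical reconstruction, a spectrum visualisation pushed as MPEG-TS with the usual latency-reducing knobs on both ends might look like this -- input file, size and address are placeholders:)

    # sender: render the spectrum and push it with a low-latency x264 setup
    ffmpeg -re -i song.flac -filter_complex "[0:a]showspectrum=s=800x480:slide=scroll[v]" \
        -map "[v]" -map 0:a -c:v libx264 -preset ultrafast -tune zerolatency -g 30 \
        -c:a aac -f mpegts udp://192.168.1.42:1234
    # receiver: the player's buffering usually costs more latency than the encoder does
    ffplay -fflags nobuffer -flags low_delay udp://192.168.1.42:1234
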
[21:12:38 CET] <faLUCE> Hello. When I open an ac3 file with Audacity, it uses the ffmpeg ac3 decoder and shows the channels Rear-left/right at a lower volume than front-left/right. Is this the recording volume or does the decoder lower the volume when downmixing? (It should not downmix, given that I see 6 different channels with audacity, but I want to be sure)
[21:15:14 CET] <durandal_1707> dunno what audacity does
[21:16:53 CET] <kepstin> it seems like it uses ffmpeg to decode most formats it doesn't handle natively, and passes no extra options to the decoder
[21:17:38 CET] <durandal_1707> you can use ffmpeg to export ac3 with all channels to wav
[21:37:27 CET] <faLUCE> durandal_1707: right. how can I export only channels 3,4 and 5 ?
[21:40:09 CET] <kepstin> faLUCE: https://www.ffmpeg.org/ffmpeg-all.html#channelmap
[21:40:27 CET] <kepstin> is one way
[21:40:50 CET] <kepstin> i thought there was an ffmpeg option as well, but i can't be bothered to search the docs for it
[21:41:18 CET] <furq> -map_channels
[21:41:41 CET] <furq> -map_channel rather
[21:42:06 CET] <faLUCE> thanks
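
(Two hedged ways to do it, assuming 0-based channel indices; adjust the indices to whichever three channels are actually wanted:)

    # channelmap filter (the link above)
    ffmpeg -i in.ac3 -af "channelmap=map=3|4|5" out.wav
    # or the -map_channel option furq mentions (input_file.stream.channel)
    ffmpeg -i in.ac3 -map_channel 0.0.3 -map_channel 0.0.4 -map_channel 0.0.5 out.wav
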
[21:55:19 CET] <analogical> can some ffmpeg ninja please extract the video stream from this site? http://ustvgo.tv/cnn-live-streaming-free/
[21:56:52 CET] <kepstin> analogical: maybe the youtube-dl folks would be more appropriate?
[21:58:16 CET] <kepstin> but in general for a one-off, it just ends up being "use the browser inspector network panel to find the hls playlist url, then use that as an input to ffmpeg"
[22:06:37 CET] <safinaskar> how to list all codecs supported by particular container?
[22:06:52 CET] <kepstin> safinaskar: ffmpeg doesn't have that functionality.
[22:07:16 CET] <safinaskar> kepstin: thanks
[22:07:25 CET] <safinaskar> kepstin: this is very sad
[22:08:49 CET] <safinaskar> kepstin: how do I get a list of all codecs supported by nut? I know there is a file https://git.ffmpeg.org/gitweb/nut.git/blob/670ff4339ed54f779ece2086fe8dbe9cbd51db9d:/docs/nut4cc.txt , but I want a list of strings I can pass to the "-c:v" switch, so that I can do "for I in ...list...; do ffmpeg -c:v $I"
[22:09:04 CET] <kepstin> it's kind of annoying at times, yeah. There is technically an api to do it in libavformat, but last I checked not very many formats actually provided a list, so it's not useful.
[22:10:12 CET] <analogical> safinaskar, https://en.wikipedia.org/wiki/Comparison_of_video_container_formats#Video_coding_formats_support
[22:10:27 CET] <kepstin> safinaskar: note that "a string you can pass to -c:v" and "codec name" are not the same thing
[22:11:15 CET] <kepstin> -c:v takes the name of an encoder (although all the builtin encoders have the same name as the codec, and there is a fallback so it'll pick a random encoder for the codec if there's no builtin one)
[22:16:36 CET] <safinaskar> kepstin: analogical: thanks
[22:19:09 CET] <kepstin> my impression was that nearly everything which ffmpeg can encode (and probably some things it can only copy) can be put into nut, but the result might not be particularly useful (or even decodeable, necessarily)
[22:19:32 CET] <kepstin> like, i don't think the nut muxer has any code to reject any codecs as unsupported.
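
(A hedged C sketch of the libavformat API kepstin alludes to: iterate the registered encoders and ask the muxer whether it accepts each codec id. As noted above, many muxers don't provide enough information, so a negative answer isn't always definitive.)

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(void)
    {
        const AVOutputFormat *fmt = av_guess_format("nut", NULL, NULL);
        const AVCodec *c = NULL;
        void *iter = NULL;

        if (!fmt)
            return 1;
        while ((c = av_codec_iterate(&iter))) {
            if (!av_codec_is_encoder(c))
                continue;
            /* 1 means the muxer claims to accept this codec id */
            if (avformat_query_codec(fmt, c->id, FF_COMPLIANCE_NORMAL) == 1)
                printf("%s\n", c->name); /* encoder names usable as -c:v / -c:a values */
        }
        return 0;
    }
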
[22:46:25 CET] <Polochon_street> hm. So, I've managed to stream an 800x480 x264 stream over my network and it's read smoothly by another laptop, but my Raspberry Pi struggles to display it via fbdev. Do you have any idea how I could ease things for it?
[22:46:43 CET] <Polochon_street> the poor little guy struggles
[22:47:48 CET] <irwiss> LO2 chiller? :P
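
(One hedged thing to try for the Pi, assuming its ffmpeg build was configured with MMAL: let the hardware H.264 decoder do the work and keep fbdev for display. The input URL is a placeholder, and the -pix_fmt must match the framebuffer's actual format -- rgb565le is only a common guess.)

    ffmpeg -c:v h264_mmal -i udp://0.0.0.0:1234 -pix_fmt rgb565le -f fbdev /dev/fb0
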
[22:50:24 CET] <wodim> I want to overlay a series of png files, in loop, one image per frame, on a video. would this be a good approach? https://stackoverflow.com/q/42943805
[22:50:46 CET] <wodim> or anything else you can suggest
[22:59:30 CET] <kepstin> seems fine. If you don't want to specify a bunch of -i options, it's possible to use the "movie" filter in your filter chain as the image source.
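
(A hedged sketch of the multi-input route from that StackOverflow answer; filenames, pattern and frame rate are placeholders. kepstin's single-input variant would replace the second -i with a movie=...:loop=0 source inside the filtergraph.)

    # loop a numbered PNG sequence as a second input and overlay it, one image per video frame
    ffmpeg -i video.mp4 -stream_loop -1 -framerate 25 -i overlay_%03d.png \
        -filter_complex "[0:v][1:v]overlay=shortest=1" -c:a copy out.mp4
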
[23:53:43 CET] <CounterPillow> I think the latest commit broke bofa
[00:00:00 CET] --- Tue Dec  3 2019

