[Ffmpeg-devel-irc] ffmpeg.log.20190611

burek burek021 at gmail.com
Wed Jun 12 03:05:01 EEST 2019


[00:10:16 CEST] <electrotoscope> brimestone: There's an old build hosted at http://www.ffmpegmac.net/
[00:11:39 CEST] <brimestone> electrotoscope, that site is no longer active
[00:13:28 CEST] <kepstin> brimestone: seems like the answer is "nowhere", then. It shouldn't be hard to build it yourself on a mac, and there probably is a mac os update available for your system...?
[00:13:40 CEST] <electrotoscope> https://web.archive.org/web/20180109214631/http://www.ffmpegmac.net:80/ sorry
[00:13:42 CEST] <kepstin> might be an archive.org copy of that site tho
[00:14:34 CEST] <kepstin> using an old ffmpeg has similar issues to using an old os, of course, which is that there's likely to be known unpatched security issues.
[00:15:32 CEST] <cehoyos> Not just likely...
[00:15:44 CEST] <brimestone> Got it.. hope this works
[00:16:57 CEST] <brimestone> It works! thanks!
[09:21:16 CEST] <grosso> hi
[09:25:18 CEST] <grosso> in muxing.c example, line 444
[09:25:41 CEST] <grosso> when it does "avio_open"
[09:27:32 CEST] <grosso> suppose I have an avformat_context that is allocated for librtmp
[09:28:10 CEST] <grosso> I want to stream out, so using muxer.c example
[09:29:08 CEST] <grosso> then, in line 44 it does "avio_open" and that, in turn, calls rtmp_open() in librtmp library
[09:29:19 CEST] <grosso> line 444 sorry
[09:29:29 CEST] <grosso> so, the question is:
[09:30:26 CEST] <grosso> how is it supposed to pass the "rtmp_buffer" parameter to librtmp? because avio_open does not accept a dictionary
[09:31:42 CEST] <JEEB> you can't pass options to the librtmp library itself. the option passing is to the AVFormat input or output module
[09:31:47 CEST] <grosso> I can send the dictionary later when writing the header (line 451) but it is too late
[09:32:13 CEST] <JEEB> so you check what the AVOption it is you want to set
[09:33:39 CEST] <grosso> ok, so, when and how do I pass that option to the AVFormatContext before calling avio_open?
[09:36:31 CEST] <grosso> in muxing.c example, options are passed in avformat_write_header (line 451) but, for librtmp that is too late, because the actual opening is at avio_open
[09:36:59 CEST] <JEEB> yea
[09:37:18 CEST] <JEEB> I think the AVFormatContext can have options set to it
[09:37:27 CEST] <JEEB> and you create the AVFormatContext first
[09:37:36 CEST] <JEEB> (the output lavf context)
[09:40:13 CEST] <grosso> maybe... but, pay attention to this: when calling avio_open, no AVFormatContext is passed; rather it's the internal "pb" field, which is an AVIOContext I think
[09:41:33 CEST] <JEEB> the first parameter is an AVIO context
[09:41:44 CEST] <grosso> maybe I can set the options to that avio_context...
[09:43:51 CEST] <JEEB> see the comments in libavformat/avformat.h
[09:44:00 CEST] <JEEB> @defgroup lavf_encoding Muxing
[09:44:13 CEST] <JEEB> it also notes things about avio_open2
[09:45:23 CEST] <grosso> ok! let me see
[09:45:46 CEST] <JEEB> https://svn.ffmpeg.org/doxygen/trunk/group__lavf__encoding.html
[09:45:58 CEST] <JEEB> this is the generated documentation for it
[09:46:58 CEST] <grosso> yes I see
[09:47:51 CEST] <grosso> avio_open2 has a dictionary parameter, and even an interrupt callback... looks interesting
[09:54:17 CEST] <grosso> I think avio_open2 is the solution... I wonder what I can do with that interrupt_callback
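For reference, the avio_open2 route discussed above can be sketched as follows. This is a minimal, hedged sketch against the libavformat API, not code from the log: the option names shown are those of FFmpeg's native RTMP protocol (check the protocol's AVOptions for your build), the values are illustrative, and error handling is trimmed.

```c
#include <libavformat/avformat.h>

/* Sketch: open an RTMP output and pass protocol options at open time.
 * Unlike avio_open(), avio_open2() accepts an AVDictionary, so options
 * such as "rtmp_buffer" reach the protocol before the connection is made
 * (i.e. before avformat_write_header(), which is too late for RTMP). */
static int open_rtmp_output(AVFormatContext *oc, const char *url)
{
    AVDictionary *opts = NULL;
    int ret;

    av_dict_set(&opts, "rtmp_buffer", "3000", 0);  /* example value, ms */
    av_dict_set(&opts, "rtmp_live", "live", 0);

    /* The fourth argument is an optional AVIOInterruptCallback (NULL here);
     * it lets you abort a blocking connect/read from another thread. */
    ret = avio_open2(&oc->pb, url, AVIO_FLAG_WRITE, NULL, &opts);
    av_dict_free(&opts);   /* unconsumed keys remain in opts before free */
    return ret;            /* < 0 on failure */
}
```

After this returns successfully, the usual avformat_write_header() / av_interleaved_write_frame() sequence from muxing.c applies unchanged.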
[15:09:09 CEST] <th3_v0ice> Is it possible to reconfigure the x264 encoder on the fly? Do I need to close and reopen it in order to set new parameters?
[15:30:38 CEST] <BtbN> depends on the parameters
[15:48:38 CEST] <DHE> th3_v0ice: https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/libx264.c#L180
[15:50:27 CEST] <JEEB> then it's down to libx264 at which point it actually reconfigures the encoder
[15:50:38 CEST] <JEEB> I recommend reading it up on x264's side
[15:51:33 CEST] <DHE> but the ffmpeg compatibility layer is as indicated. it won't reconfigure for other changes. so you can't, say, switch resolutions or colourspaces without reopening the encoder
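A hedged sketch of what that compatibility layer allows: FFmpeg's libx264 wrapper compares a limited set of AVCodecContext fields against the encoder state on each frame, so changing those fields between frames is picked up without reopening. The exact set of reconfigurable fields is an assumption here; consult reconfig_encoder() in libavcodec/libx264.c (the line DHE linked) for the authoritative list.

```c
#include <libavcodec/avcodec.h>

/* Sketch: mid-stream rate change with the libx264 wrapper.
 * Rate-control fields can be adjusted between frames; frame size and
 * pixel format can NOT — those require closing and reopening the
 * encoder, as discussed above. */
static int encode_with_rate_change(AVCodecContext *enc, AVFrame *frame,
                                   AVPacket *pkt, int64_t new_bit_rate)
{
    /* Adjust rate control before submitting the next frame; the wrapper's
     * per-frame reconfig path notices the change. */
    enc->bit_rate       = new_bit_rate;
    enc->rc_max_rate    = new_bit_rate;
    enc->rc_buffer_size = (int)(new_bit_rate * 2);

    int ret = avcodec_send_frame(enc, frame);
    if (ret < 0)
        return ret;
    return avcodec_receive_packet(enc, pkt); /* may be AVERROR(EAGAIN) */
}
```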
[15:51:44 CEST] <JEEB> yeh
[15:52:19 CEST] <BtbN> The muxer would hate that anyway
[15:52:48 CEST] Action: JEEB has seen that over in broadcasts anyways
[15:53:00 CEST] <JEEB> sometimes with the same PID, otherwise with a PID switch
[15:53:05 CEST] <DHE> yeah I've seen broadcast streams do all sorts of nasty things.
[15:53:12 CEST] <JEEB> not really nasty tbqh
[15:53:13 CEST] <DHE> AAC -> AC3 remains my least favourite
[15:53:20 CEST] <JEEB> within the same PID?
[15:53:39 CEST] <JEEB> I would be surprised if that was something almost anything supported
[15:53:47 CEST] <JEEB> suddenly switching codecs in the same PID
[15:53:49 CEST] <JEEB> with PID switches, sure
[15:53:54 CEST] <JEEB> that can make *anything* happen
[15:53:59 CEST] <DHE> I'd need to doublecheck. what I recall is that even though the PMT updated with a new version, even when I cut it past that point the decoder still went to crap. it wasn't a good, "clean" cut by any stretch.
[15:54:01 CEST] <BtbN> mpegts is special in that you can do stuff like that, since the container doesn't really care
[15:54:30 CEST] <BtbN> how well it works depends on the one trying to decode the mess you made
[15:54:48 CEST] <JEEB> the problem is when you utilize the hack in the mpeg-ts demuxer that merges the PID switches when it thinks there was one
[15:54:58 CEST] <DHE> `cut` using linux command dd, viewing with wireshark for verification, and then ffmpeg based (custom written) app to transcode
[15:55:01 CEST] <JEEB> although I think that doesn't try to merge different codec IDs
[15:55:21 CEST] <JEEB> same PID is just murder though :D
[15:56:06 CEST] <DHE> yeah, my app will reset the avformatcontext without doing any UDP stop/start if the decoders go to crap for this exact situation. :/
[16:21:04 CEST] <exonintrendo> I have a small webserver whose goal is to take a requested video, generate an m3u8 playlist file that the video player will use, and create the segment files on demand as the player requests them. The general concept seems to work: the playlist is generated and the video plays, but the video starts skipping ahead and the timing becomes off. Could someone maybe take a look and see if I'm missing
[16:21:06 CEST] <exonintrendo> something obvious?
[16:21:08 CEST] <exonintrendo> https://paste.w00t.cloud/view/697f3f0b
[16:24:19 CEST] <DHE> you're generating the .ts files on demand using ffmpeg and the offsets?
[16:24:38 CEST] <DHE> that's not safe/reliable. ffmpeg will always seek to a keyframe which may not be where you want it
[16:25:13 CEST] <exonintrendo> ah, so should probably build the ts files based on those keyframes'
[16:26:02 CEST] <DHE> ffmpeg has an hls muxer. I suggest using that. Either actually break it into lots of .ts files, or if you prefer you can have a single playable .ts file with -hls_flags single_file and metadata for seek offsets will be included in the .m3u8 allowing HTTP fetches with Range headers
[16:26:25 CEST] <exonintrendo> oh really?
[16:26:51 CEST] <exonintrendo> i just want to be able to generate / send them as requested to prevent having to pre-transcode all files ahead of time
[16:27:00 CEST] <exonintrendo> i understand the 'load' time might be a bit longer, but that's fine
[16:27:30 CEST] <DHE> well that's technically doable, but you'd basically have to read the .ts file and index it for keyframes, PAT/PMT, and make a .m3u8 intelligently using the Range-offsets method
[16:28:35 CEST] <exonintrendo> right, i was hoping i could use ffprobe to get the keyframe offsets, generate an m3u8 playlist with TS file urls that have those offsets in their request, so the http request can take that information, generate a TS file based on that offset and duration (duration until the next keyframe) and send that data down
[16:28:56 CEST] <kepstin> hmm, you're actually transcoding tho, so the keyframe locations in the original video don't matter
[16:31:47 CEST] <exonintrendo> i just assumed that i could send down a TS file and transcode using those time offsets and duration, but it doesn't seem to be giving me the desired effect
[16:31:56 CEST] <kepstin> exonintrendo: to start, you should skip the live business and try just manually generating segment files on the HD, tweaking stuff until you get that right
[16:32:00 CEST] <exonintrendo> err, m3u8 file, not ts file
[16:32:27 CEST] <exonintrendo> if i manually run the ffmpeg command to generate 10 second clips individually, it seems to work
[16:32:54 CEST] <kepstin> manually generate a few segments, then concatenate them (e.g. with cat) and make sure they play smoothly
[16:33:05 CEST] <kepstin> if that doesn't work, the final result definitely won't work :)
[16:33:12 CEST] <exonintrendo> right
[16:33:25 CEST] <exonintrendo> so individually they're fine
[16:33:27 CEST] <exonintrendo> let me concat them
[16:34:41 CEST] <exonintrendo> ok, concatting them worked
[16:34:58 CEST] <Jestin> Hey! Is it possible to create a scheduled livestream with FFmpeg that plays a video at certain times?
[16:36:09 CEST] <exonintrendo> could some of the information in my m3u8 file be incorrect as far as telling the player what to do?
[16:36:49 CEST] <kepstin> Jestin: sure, by writing an external application that either uses ffmpeg libraries or that runs an ffmpeg command at a certain time.
[16:37:19 CEST] <kepstin> exonintrendo: dunno, hls streaming has always been kind of a weird black magic sort of thing.
[16:37:56 CEST] <exonintrendo> i'm also not married to that, i think i could use MPEG-DASH if necessary (if that matters)
[16:38:34 CEST] <kepstin> I stand by my original statement that the easiest way to do this is to buy a bigger hdd and pre-transcode everything :)
[16:50:03 CEST] <cehoyos> DHE: Could you provide a broadcast sample with AAC -> AC3? I don't think we have that.
[18:05:15 CEST] <Compusomnia> furq: For the sake of submitting a bug report, since you were able to reproduce my problem yesterday, are you a developer?
[18:07:45 CEST] <DHE> cehoyos: it's not quite broadcast. it's IP over fiber. this particular source is a channel that shows sports games from various regions. when they switch from their filler feed to the actual event sometimes the audio codec changes.
[18:08:15 CEST] <cehoyos> Can you dump the stream?
[18:25:08 CEST] <DHE> cehoyos: I have a capture in .ts format, yes. raw off the network
[18:25:20 CEST] <cehoyos> Please provide it
[18:46:40 CEST] <DHE> cehoyos: the sample I have is 1.4 GB (about 8 megabit video, could be trimmed further but I wanted a good sample) and I'd rather not have it publicly readable. got a place to put it?
[19:18:08 CEST] <DHE> what exactly does ffmpeg do on stream/PMT changes? the mpegts.c code doesn't make it very obvious
[19:19:19 CEST] <exonintrendo> anyone here familiar with HLS and/or videojs? From what I can tell, what I'm doing should be working, but having some issues with playback
[19:20:06 CEST] <DHE> I've produced a single-variant HLS video from a webcam with hls.js for the player
[19:22:04 CEST] <JEEB> DHE: you grab the AVProgram that you are interested in (or all of them), and after tmm1's changes it will have a flag for updates + a version entry in the AVProgram
[19:22:07 CEST] <exonintrendo> so i'm building an HLS playlist file and sending it to the player and as the player requests the chunks, I'm creating them on demand. Each chunk is 10 seconds long. Here's an example playlist file that I generated: https://paste.w00t.cloud/view/602f8884
[19:22:34 CEST] <JEEB> DHE: so in theory you have the means to update your API client's configuration when PMT changes if there are program changes (removing/adding AVStreams etc)
[19:22:46 CEST] <exonintrendo> The issue is that the player seems to jump to start playing the next chunk after only about 5 seconds. But if I only include a single chunk in the playlist, it plays it properly and to completion
[19:23:03 CEST] <exonintrendo> So i'm not sure if it's an issue with the format of my playlist file, or what.
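A common cause of the early-switch symptom described above is EXTINF values that don't match the segments' actual durations. As a self-contained sanity check on the playlist side, here is a sketch of generating a VOD playlist from measured segment durations; the helper and segment names are hypothetical, not from the log.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: write an HLS VOD playlist where each EXTINF matches the
 * measured duration of its segment. EXT-X-TARGETDURATION must be at
 * least the longest segment duration, rounded up; a mismatch between
 * EXTINF and the real segment length is a classic cause of players
 * skipping ahead. Returns the number of bytes written. */
static int build_playlist(char *out, size_t cap,
                          const double *durations, int n)
{
    double max_dur = 0.0;
    for (int i = 0; i < n; i++)
        if (durations[i] > max_dur)
            max_dur = durations[i];
    int target = (int)max_dur;
    if (max_dur > (double)target)
        target++;                          /* round up to whole seconds */

    int len = snprintf(out, cap,
                       "#EXTM3U\n#EXT-X-VERSION:3\n"
                       "#EXT-X-TARGETDURATION:%d\n"
                       "#EXT-X-MEDIA-SEQUENCE:0\n", target);
    for (int i = 0; i < n && (size_t)len < cap; i++)
        len += snprintf(out + len, cap - len,
                        "#EXTINF:%.3f,\nsegment%d.ts\n", durations[i], i);
    if ((size_t)len < cap)
        len += snprintf(out + len, cap - len, "#EXT-X-ENDLIST\n");
    return len;
}
```

The durations fed in should come from the generated .ts files themselves (e.g. via ffprobe), not from the nominal 10-second request window.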
[19:24:44 CEST] <DHE> JEEB: while I'm running slightly old Git, I see the avfctx->programs[0]->pmt_version number change, but not necessarily an update in the stream information. eg: codec changed according to the PMT, but nothing happens in the streams[x]->codecpar data
[19:25:01 CEST] <JEEB> DHE: yea I'm not sure if tmm1 had samples of that
[19:25:20 CEST] <JEEB> DHE: so the sample has no PID switch?
[19:25:22 CEST] <DHE> well I do, but it's from premium TV channels so I can't just share it
[19:25:38 CEST] <JEEB> no streams appear/disappear but the same PID just gets switched?
[19:25:58 CEST] <JEEB> in that case it can be funky because you'd have to somehow tell the demuxer to re-configure all that stuff
[19:26:01 CEST] <DHE> let me double-check....
[19:31:42 CEST] <DHE> https://pastebin.com/raw/LJqtbDqn   Yep, PID remains unchanged...
[19:31:52 CEST] <JEEB> fun times
[19:31:56 CEST] <DHE> yeah...
[19:32:12 CEST] <DHE> and last I tried it, I get the PMT version change but not the updated codec info in CodecPar
[19:32:43 CEST] <JEEB> yes, that's not unexpected. I don't think we re-init that stuff
[19:32:47 CEST] <JEEB> (currently)
[19:33:37 CEST] <DHE> yeah, but I need to know what's new and if at all possible I want to avoid resetting the avformatcontext because it'll affect decoding of the unchanged streams (in this case I lose video and need to wait for a new keyframe)
[19:33:53 CEST] <DHE> I'm concerningly close to doing my own demuxing... :/
[19:34:12 CEST] <JEEB> I'm not sure how hard it would be to send the AVStream back to being sniffed
[19:34:38 CEST] <JEEB> you'd somehow have to figure out from the descriptors that the codec has changed
[19:35:33 CEST] <DHE> there's a whole "codec id" byte in the header. it changes.
[19:35:34 CEST] <DHE> +        Stream type: ATSC A/52 Audio (0x81)
[19:35:46 CEST] <DHE> the information is there. I just need to get it
[19:36:31 CEST] <JEEB> yes, pretty sure it checks that and the descriptors in the descriptor loop
[19:36:46 CEST] <JEEB> and it works for new PIDs
[19:37:04 CEST] <JEEB> so it could be something as simple as checking new descriptors for a stream and switching the codec id if needed
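The detection side of what DHE and JEEB describe can be sketched as follows. This is a hedged sketch, assuming the post-tmm1 API where AVProgram carries a pmt_version field; as noted above, a version bump does not currently guarantee that codecpar is refreshed for a codec switch on the same PID, so treat it only as a cue to re-examine the streams yourself.

```c
#include <libavformat/avformat.h>

/* Sketch: detect PMT changes while demuxing MPEG-TS by polling
 * AVProgram.pmt_version between reads. */
static void watch_pmt(AVFormatContext *ic)
{
    int last_version = -1;
    AVPacket *pkt = av_packet_alloc();

    while (pkt && av_read_frame(ic, pkt) >= 0) {
        if (ic->nb_programs > 0 &&
            ic->programs[0]->pmt_version != last_version) {
            last_version = ic->programs[0]->pmt_version;
            av_log(NULL, AV_LOG_INFO, "PMT version now %d\n", last_version);
            /* ...re-inspect ic->streams[i]->codecpar here; codec changes
             * on an unchanged PID may require your own re-probing... */
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}
```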
[19:43:48 CEST] <greypack> im running ffmpeg 3.3 - is latest more performant for codecs like h265?
[19:44:28 CEST] <JEEB> slightly, but in general HEVC is a bad example since nobody seems to be interested in optimizing the software decoder more :P
[19:44:34 CEST] <JEEB> (and nobody's paying either it seems)
[20:21:39 CEST] <DHE> yeah I'm just going to modify mpegts.c for now to force codec tracking....
[20:24:02 CEST] <kurosu> > nobody's paying either it seems
[20:24:09 CEST] <kurosu> yeah...
[20:24:17 CEST] <DHE> ..?
[20:24:41 CEST] <kurosu> it is not in a particularly good state
[20:47:32 CEST] <greypack> I have a dual-processor E5 10-core box that gets 5fps encoding bluray rips at 1080p... something about hevc just sucks the wind out of my datacenter
[20:51:18 CEST] <BtbN> x265 at higher settings just is that slow
[20:51:35 CEST] <BtbN> It also doesn't scale with more cores endlessly
[20:52:04 CEST] <kurosu> s/x265/new codecs/ ? maybe svt-av1 scales better, dunno
[20:52:35 CEST] <BtbN> Video Encoding in general doesn't scale like that
[20:53:10 CEST] <DHE> number of cores only helps so much
[20:54:25 CEST] <kepstin> we've all been spoiled by x264
[20:55:19 CEST] <DHE> in fairness it's a very mature codec
[20:57:23 CEST] <BtbN> Even x264 doesn't profit from more than a handful of cores
[20:59:12 CEST] <kepstin> i wonder if the intel svt codecs are doing something like encoding multiple gops in parallel, that would explain the memory usage.
[20:59:40 CEST] <kepstin> although even x264 needs lots of ram :)
[20:59:48 CEST] <DHE> assuming linux, numactl to pin the encoder on a single physical CPU and same memory controller might help a bit. do make sure you have quad channel memory going
[20:59:49 CEST] <JEEB> I think they did do something like that
[21:02:06 CEST] <kepstin> for my usage, i need lots of throughput of video encoding but latency isn't as important, so I just do a bunch of completely separate encodes, each with a smallish number of threads, in parallel.
[21:08:51 CEST] <DHE> Little of column A, little of column B, so I have 10 VMs with 24 cores/threads each at my disposal for good quality but also not absurdly slow turnarounds.
[21:33:34 CEST] <jafa> hevc technical question - when there are multiple slice NALs in a row, how does a decoder tell whether they belong to the same frame (for example 4 NALs each carrying 1/4 of the image) or to different frames (each slice NAL being a new frame)?
[21:35:09 CEST] <JEEB> I think multi-slice coding and the thing they added for HEVC for vertical+horizontal slicing are both noted in the spec
[21:35:26 CEST] Action: JEEB doesn't know off the top of his head
[21:47:44 CEST] <jafa> ah, there is a "first slice in pic" flag
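The flag jafa found is first_slice_segment_in_pic_flag: it is the very first bit of the slice segment header, i.e. the MSB of the byte immediately after the two-byte HEVC NAL unit header. A self-contained sketch of reading it (emulation-prevention bytes are ignored here; they cannot occur this early in a NAL because the second header byte is never zero):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: given the payload of an HEVC NAL unit (after the start code),
 * report whether it is a VCL NAL that starts a new picture.
 * The HEVC NAL header is 2 bytes; for VCL NAL types (0..31) the next
 * bit is first_slice_segment_in_pic_flag (ITU-T H.265,
 * slice_segment_header()). Returns 1 = starts a new picture,
 * 0 = continuation slice of the current picture, -1 = non-VCL or
 * too short. */
static int hevc_first_slice_in_pic(const uint8_t *nal, size_t len)
{
    if (len < 3)
        return -1;
    int nal_type = (nal[0] >> 1) & 0x3f;
    if (nal_type > 31)          /* 32+ are non-VCL (VPS/SPS/PPS/SEI...) */
        return -1;
    return (nal[2] >> 7) & 1;   /* MSB of the first slice-header byte */
}
```

So for the 4-slices-per-frame case: the first slice NAL has the flag set, the next three have it clear, and the next NAL with the flag set begins the following frame.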
[00:00:00 CEST] --- Wed Jun 12 2019


More information about the Ffmpeg-devel-irc mailing list