[Ffmpeg-devel-irc] ffmpeg.log.20190317

burek burek021 at gmail.com
Mon Mar 18 03:05:02 EET 2019


[00:12:32 CET] <damdai> is ffmpeg still better than libav
[00:12:47 CET] <damdai> better as in can do more things
[00:13:02 CET] <JEEB> given that FFmpeg merges almost anything, most likely yes :P
[00:13:21 CET] <JEEB> I think the people using Libav specifically know exactly what they want
[00:13:34 CET] <JEEB> like the guy maintaining movenc, who uses it exactly for that
[00:13:42 CET] <damdai> is libav not able to merge almost everything like ffmpeg?
[00:14:01 CET] <JEEB> I think libav specifically doesn't want to merge everything.
[00:15:00 CET] <damdai> interesting
[00:23:01 CET] <ncouloute> hmm with just decoding to null I get "Application provided invalid, non monotonically increasing dts to muxer in stream 0: 960 >= 960"... then 5x "reference picture missing during reorder" / "Missing reference picture, default is X". Side note: It seems fine if I let it do a variable fps output.
[00:26:22 CET] <JEEB> dongs: btw did you have anything with the 22.2ch stuff? would be nice to get a sample of such audio track
[01:40:58 CET] <faLUCE> do you know if it is possible to debug ts and/or print encoded pkt sizes before they are decoded, with ffplay?
[02:04:23 CET] <Soni> what's the human-friendly documentation for the lowpass filter?
[03:09:16 CET] <KombuchaKip> JEEB: Thank you.
[03:27:56 CET] <agris> Is there any way to tell ffmpeg to create a directory if the output folder doesn't exist?
[03:28:17 CET] <agris> I'm writing an automated script with GNU Parallel and it deals with subfolders
[03:29:34 CET] <c_14> pretty sure you can't
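(Note: ffmpeg itself will not create missing output directories, so a script has to create them first. A minimal sketch, assuming bash and GNU Parallel; paths and extensions are illustrative:)

    # mkdir -p creates the mirrored output directory before each ffmpeg run;
    # {//} is the input's directory, {/.} its basename without extension
    find src -name '*.mkv' | parallel 'mkdir -p "out/{//}" && ffmpeg -i {} "out/{//}/{/.}.mp4"'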
[04:50:55 CET] <damdai> everybody: i just realized there is no way to visit the youtube site by just typing http://ip.address.of.youtube
[04:51:00 CET] Last message repeated 1 time(s).
[04:51:40 CET] <another> yes. that's very likely true
[04:52:04 CET] <another> look into virtual hosts if you're interested. gotta go
[04:53:30 CET] <damdai> that's a huge flaw in the internet
[04:53:38 CET] <damdai> this must be fixed
[04:57:56 CET] <another> no. that's intentional
[04:58:35 CET] <another> https://en.wikipedia.org/wiki/Virtual_hosting
[04:59:37 CET] <damdai> youtube is not using vhost
[04:59:59 CET] <damdai> that might be true for small websites
[05:12:22 CET] <klaxa> if it was a flaw, it would have been fixed ages ago, i don't think you stumbled upon something nobody has ever questioned
[05:12:54 CET] <klaxa> also
[05:12:55 CET] <klaxa> <damdai> youtube is not using vhost
[05:12:56 CET] <klaxa> proof?
[05:13:11 CET] <damdai> why would a website that big use vhost
[05:13:30 CET] <klaxa> because it makes management of domains and subdomains easier
[05:13:39 CET] <damdai> no it doesn't
[05:13:40 CET] <klaxa> also load-balancers are probably used for a multitude of hosts
[05:14:03 CET] <klaxa> so not having a Host parameter in the http header makes the server wondering what site you wanted to access
[05:14:13 CET] <klaxa> *makes the server wonder
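(For reference, this is the Host header in question - with name-based virtual hosting many sites share one IP address, and the server picks the site from this field of the HTTP request:)

    GET / HTTP/1.1
    Host: www.youtube.com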
[05:14:20 CET] <damdai> i can visit  google.com
[05:14:22 CET] <damdai> by using ip
[05:14:58 CET] <damdai> if what you say is true, then i shouldn't be able to do that either
[05:15:12 CET] <klaxa> i would guess that is convenience for people who don't have dns but want to google
[05:15:21 CET] <klaxa> look
[05:15:25 CET] <klaxa> the burden of proof is on you
[05:15:35 CET] <klaxa> <damdai> that's a huge flaw in the internet
[05:15:35 CET] <klaxa> <damdai> this must be fixed
[05:15:42 CET] <klaxa> extraordinary claims require extraordinary evidence
[05:15:55 CET] <damdai> it is a falw
[05:15:58 CET] <damdai> it is a flaw
[05:16:10 CET] <damdai> i should be able to visit a website by typing the ip address
[05:16:23 CET] <damdai> dns could go down
[05:16:43 CET] <damdai> if dns goes down, you are screwed
[05:16:53 CET] <klaxa> if dns goes down you have /etc/hosts
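(A minimal /etc/hosts sketch - the IP is from the documentation range 203.0.113.0/24, not a real address:)

    # /etc/hosts: static name-to-IP mappings, consulted before DNS
    203.0.113.7    www.example.com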
[05:17:13 CET] <damdai> i should be able to just type the ip address
[05:17:17 CET] <klaxa> and you probably have other problems than watching youtube
[05:17:33 CET] <damdai> klaxa , lol true, but still
[05:17:42 CET] <klaxa> i am not going to discuss internet standards that you don't seem to understand with you any further
[05:19:20 CET] <damdai> what if i don't have a domain name, and just want to host a website by using IP
[05:32:51 CET] <another> go ahead and do that
[06:58:31 CET] <Snoober> I'm having trouble going from yuvj420p (full color range, 8-bit) h264 -> dnxhd (which always uses yuv422p) -> back to h264 but wanting to convert back to yuvj420p
[06:59:03 CET] <Snoober> am i supposed to use -pix_fmt yuvj420p? I also read some commands use -dst_range 1 -color_range 2. i'm not sure
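(A hedged, untested sketch of the dnxhd -> full-range h264 step; file names are illustrative and it assumes the dnxhd intermediate is limited/mpeg range:)

    ffmpeg -i intermediate_dnxhd.mov \
           -vf "scale=in_range=mpeg:out_range=jpeg" \
           -pix_fmt yuvj420p -c:v libx264 output.mp4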
[12:42:51 CET] <__raven__> hi
[12:46:26 CET] <__raven__> i need to compose a mosaic "multiviewer" of up to 32 rtmp input streams. to avoid a frozen output if just one input fails or buffers, is there any (recommended) step to "normalize" those input streams?
[13:10:55 CET] <JEEB> __raven__: I don't think FFmpeg's framework itself provides a dynamic switch from f.ex. what you are providing to an alternative source which can come and go
[13:11:13 CET] <JEEB> there's a filter that will fall back to alternative generated input, yes. but that only works once and will not go back
[13:12:15 CET] <JEEB> so you would have to handle each AVFormatContext yourself and handle the concept of real world time yourself, basically
[13:12:42 CET] <JEEB> I know that upipe has something dynamic but that's a whole other framework (mostly meant for broadcasting)
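(For reference, the mosaic composition itself - setting aside the failing-input problem discussed here - is commonly built with the xstack filter; a minimal 2x2 sketch with illustrative inputs:)

    ffmpeg -i rtmp://srv/app/in0 -i rtmp://srv/app/in1 \
           -i rtmp://srv/app/in2 -i rtmp://srv/app/in3 \
           -filter_complex "xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0" \
           -c:v libx264 -f flv rtmp://srv/app/mosaic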
[14:12:42 CET] <faLUCE> hello. When printing the timestamps of packets with ffprobe, how can I print the date (year-month-day-hour-seconds-millisecs. etc.) as well?
[14:25:21 CET] <__raven__> JEEB: do you have an example of what this preprocessing would look like? it would be no problem to normalize each input stream on our rtmp server before it gets into the mosaic chain
[15:18:52 CET] <faLUCE> do you know if 1 frame is the minimum latency for demuxing an opus mpegts stream?
[15:36:21 CET] <satoshi2> hello
[15:37:59 CET] <satoshi2> i was wondering if anybody has reported this already: twitch seems to have started using SureStream, which inserts ads into the raw video stream
[15:38:44 CET] <dongs> JEEB: will decode some samples i capped this week.
[15:38:50 CET] <dongs> a bit busy with unrelated stuff
[15:39:00 CET] <dongs> satoshi2: why is that worth reporting
[15:39:15 CET] <satoshi2> i have come across a channel that is inserting an ad with dts sound, and it breaks the file after that
[15:39:21 CET] <JEEB> dongs: as proof of my masochism http://up-cat.net/p/5421c134
[15:39:28 CET] <dongs> clicking furiously
[15:39:35 CET] <dongs> hot
[15:39:51 CET] <dongs> looking good
[15:40:00 CET] <satoshi2> i have tried a few workarounds like this https://github.com/ytdl-org/youtube-dl/issues/10719#issuecomment-364389876 but none of them worked
[15:40:10 CET] <satoshi2> the resulting file is broken after the ad
[15:40:18 CET] <dongs> satoshi2: is original audio AAC?
[15:40:21 CET] <dongs> (before dts)
[15:40:23 CET] <satoshi2> yes
[15:40:27 CET] <dongs> as i thought
[15:40:33 CET] <dongs> you can blame AAC being fucking garbage for this
[15:41:15 CET] <satoshi2> what a rip
[15:41:33 CET] <JEEB> oh, so DTS not as the audio format, but just derping up the timestamps
[15:43:09 CET] <dongs> no, the audio format
[15:43:14 CET] <dongs> and i guess AAC decoder crashes when it sees it
[15:43:33 CET] <JEEB> well the issue he linked was about discontinuities
[15:43:46 CET] <dongs> i think vlc & co still break when kouhaku mpegts switches from 5.1 audio to 2.0 during the mid-time news segment
[15:43:50 CET] <JEEB> unless he was talking about the audio format and then misunderstood :P
[15:44:04 CET] <JEEB> dongs: pretty sure FFmpeg fixed that since I provided a sample of that ages ago
[15:44:07 CET] <dongs> ya?
[15:44:38 CET] <dongs> next year my target is clean 8k kouhaku recording
[15:44:49 CET] <dongs> going to be the biggest waste of bandwidth ever
[15:45:12 CET] <JEEB> at least I have an aac_decoding_regression.ts in my files and that is 7:15 news->kouhaku stereo to 5.1 switch
[15:45:17 CET] <JEEB> and it works nicely
[15:45:20 CET] <dongs> cool
[15:46:40 CET] <satoshi2> https://pastebin.com/xpbtJJna
[15:46:44 CET] <JEEB> also I remember TheFluff derping me up even earlier with a mono to stereo switch in like 2011 or so
[15:46:55 CET] <JEEB> moshidora sample
[15:47:34 CET] <JEEB> and that one was fixed by then
[15:48:08 CET] <dongs> i forgot which aac decoder ate shit with format switching midstream. i'm thinking the helix aac or whatever that everyone used
[15:48:21 CET] <JEEB> a lot of decoders did
[15:48:30 CET] <dongs> oh, faad2
[15:48:31 CET] <dongs> that.
[15:48:33 CET] <JEEB> yup
[15:49:41 CET] <JEEB> I think it was circa 2011 or whatever when most of that stuff started getting fix'd and people started moving off of faad2 more
[15:50:11 CET] <dongs> i think around the time i stopped fucking around with j-tv stuff
[15:50:18 CET] <dongs> and just gave up at it being broken shit
[15:50:46 CET] <dongs> and yeah... satoshi's shit is talking about displaytimestamp, not DTS audio
[15:50:55 CET] <JEEB> *decoding time stamp
[15:51:00 CET] <JEEB> PTS is presentation time stamp
[15:51:43 CET] <JEEB> right, so then the discontinuity handling with the meta HLS demuxer doesn't work right
[15:52:07 CET] <JEEB> pretty sure someone added a patch to add that flag to the HLS demuxer (which just uses an mpeg-ts demuxer in the background)
[15:52:08 CET] <satoshi2> heh
[15:52:18 CET] <satoshi2> i thought it was the audio lol
[15:52:34 CET] <JEEB> as far as I can tell the segment just derps
[15:52:48 CET] <JEEB> if you can take a look at the HLS playlists, check if the playlists actually mark discontinuities
[15:53:06 CET] <JEEB> if they don't, that's a twitch boog
[15:53:23 CET] <JEEB> (although we should still check if the timestamps are continuous on our side)
[15:54:11 CET] <satoshi2> is there any way i can force it?
[15:55:29 CET] <JEEB> towards the end of libavformat/hls.c in the AVInputFormat structure, switch the .flags to `.flags          = AVFMT_NOGENSEARCH | AVFMT_TS_DISCONT,`
[15:55:54 CET] <JEEB> the AVFMT_TS_DISCONT notifies the framework that this thing can have discontinuities
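(Roughly what the patched structure at the end of libavformat/hls.c would look like; fields other than .flags are elided/assumed here:)

    AVInputFormat ff_hls_demuxer = {
        .name           = "hls",
        /* ... other fields unchanged ... */
        .flags          = AVFMT_NOGENSEARCH | AVFMT_TS_DISCONT,
    };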
[15:57:54 CET] <satoshi2> i am looking at an m3u8, what should it look like if they are marked?
[15:58:29 CET] <JEEB> https://tools.ietf.org/html/rfc8216#page-14
[15:58:38 CET] <JEEB> EXT-X-DISCONTINUITY
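(A playlist that marks the ad boundary per RFC 8216 would look roughly like this; segment names and durations are illustrative:)

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:6
    #EXTINF:6.000,
    content_0001.ts
    #EXT-X-DISCONTINUITY
    #EXTINF:6.000,
    ad_0001.ts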
[15:58:58 CET] <dongs> you expect people to actually follow a standard?!
[15:59:28 CET] <satoshi2> i can confirm, that tag isn't there, fucking twitch
[15:59:30 CET] <JEEB> it's actually something you can complain about, yes. whether they will do anything about that is not as relevant
[15:59:43 CET] <JEEB> satoshi2: even if it was, HLS in FFmpeg is not marked as "can have discontinuities" :P
[15:59:57 CET] <JEEB> that's the part where I noted you should change libavformat/hls.c
[16:00:13 CET] <JEEB> but yes, if they were proper, the profile playlist should have the discontinuity flag
[16:09:24 CET] <satoshi2> JEEB would you mind making a build? media autobuild suite seems to require a 13gb install lol
[16:14:27 CET] <JEEB> satoshi2: that sounds /really/ excessive
[16:14:42 CET] <JEEB> I am currently poking at something else so not right away at least :P
[16:15:17 CET] <dongs> after pulling 2.5gb of that hevc reference shit, the actual -top- branch was only like a few megs of source and built without a single warning with VC
[16:15:34 CET] <dongs> if only svn wasn't complete shit
[16:15:37 CET] <JEEB> inorite
[18:08:40 CET] <faLUCE> there's some ambiguity about that: if I receive an mpegts stream with ffprobe, what does "show_frames" show? the demuxed frames or the decoded ones?
[18:09:35 CET] <DHE> should be that frames are decoded. packets are raw on-the-wire payloads.
[18:09:53 CET] <DHE> ffmpeg terminology, not container-specific terminology
[18:19:46 CET] <faLUCE> DHE: then, packets are the content of demuxed data and frames are the decoded frames?
[18:20:07 CET] <JEEB> generally what lavf outputs is something you should be able to feed to a decoder of that format
[18:20:23 CET] <JEEB> with mpeg-ts, if the format doesn't need a parser it will output the contents of the relevant PES packet
[18:20:42 CET] <JEEB> if the format needs a parser the PES packet(s) will be fed to a parser until that outputs a single decode'able AVPacket
[18:21:13 CET] <faLUCE> JEEB: with "lavf output" do you mean packets?
[18:21:24 CET] <JEEB> lavf only outputs AVPackets
[18:21:28 CET] <DHE> mpegts does preserve packet boundaries, at least as best I understand it. I'm using lavf & lavc with mpegts input for several codecs without any issues
[18:21:35 CET] <faLUCE> ok
[18:22:02 CET] <JEEB> DHE: well for some formats extra parsing is needed
[18:22:19 CET] <JEEB> I think video formats are most common
[18:22:26 CET] <JEEB> because a single TS packet is 188 bytes - <overhead>
[18:22:32 CET] <JEEB> but yes, that is handled automagically for you
[18:22:45 CET] <DHE> JEEB: true, but there is also a bit in the header to indicate it starts a new avpacket
[18:23:21 CET] <JEEB> AVPacket is on a higher level of abstraction, but yes, a new AVPacket is probably going to be generated from the read buffer
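(The packet/frame distinction in ffprobe terms, with an illustrative file name:)

    ffprobe -show_packets input.ts   # demuxed (and, where needed, parsed) AVPackets
    ffprobe -show_frames input.ts    # frames after decoding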
[18:24:38 CET] <JEEB> 34
[18:25:24 CET] Action: DHE wrote his own mpegts parser a while back
[18:26:18 CET] <DHE> shoutouts to wireshark which really helps understanding the format
[18:26:41 CET] <JEEB> DVB Inspector is something I recommend for checking out MPEG-TS
[18:26:59 CET] <JEEB> life saver with checking the packets all the way to graphing the timestamps etc
[18:27:28 CET] <JEEB> it's java, but I am an equal opportunity person if it'll get the job done
[18:28:02 CET] <DHE> my company bought some physical hardware that does the same thing. quite expensive, but it'll handle a gigabit port's worth of traffic before the switch it's plugged into becomes the thing that ruins the statistics
[18:28:12 CET] <DHE> :/
[18:28:26 CET] <JEEB> lol, I do so know that feeling :P
[18:34:32 CET] <DHE> tfw you learn that the tiny packet buffer on this big switch is not actually "shared" across all ports, but exists as small dedicated buffers on ports that don't need it.
[18:41:58 CET] <__raven__> JEEB: do you have an example of what this preprocessing would look like? it would be no problem to normalize each input stream on our rtmp server before it gets into the mosaic chain
[18:44:48 CET] <JEEB> __raven__: it's one thing to normalize timestamps according to reception time etc, but handling the inputs so that you switch back and forth between the live input and a generated back-up thing to show while the actual input is dead is another
[18:44:59 CET] <JEEB> the synchronization based on reception time is probably the simpler part :P
[18:47:40 CET] <__raven__> JEEB: the synchronisation between those pip inputs is not the problem. those can drift far away from each other, that doesn't matter. but the generated mosaic must not freeze if just one stream fails for just a few seconds or even if it misses a few frames
[18:48:12 CET] <JEEB> yes
[18:48:21 CET] <JEEB> that is the second part which I noted is the less simple part of it
[18:48:47 CET] <JEEB> there is a filter in libavfilter that can fall back to a secondary input if things fail, but it will never ever go back to the primary input
[18:49:36 CET] <kepstin> i'd suggest having some custom code which - if it notices that it's not receiving something from an input - just passes duplicates of the last seen frame to the filter chain at regular intervals.
[18:49:43 CET] <__raven__> JEEB: in my naive imagination maybe there is "just" the need for an intermediate "baseband"?
[18:51:47 CET] <__raven__> input stream frame rates are between 15 and 30 fps but the decoder needs to "scan" them at a fixed rate of 25fps
[18:52:36 CET] <__raven__> maybe something like the old line/scan converters used for the apollo landings, with a display-camera chain ;)
[18:53:53 CET] <JEEB> well, with digital usually you do the frame rate normalization after decoding
[18:54:19 CET] <JEEB> but yes, you will have to have a filter doing frame rate normalization if you need that - and also a deadline for when you should get a new packet out of the input protocol
[18:54:29 CET] <JEEB> say, 250ms or something like that :P
[18:54:35 CET] <kepstin> you can't change framerate before decoding with modern codecs, because of reference frames etc.
[18:55:23 CET] <JEEB> latter meaning that you start feeding a constant back-up feed of 25Hz or whatever your rate is while the input is down, and then when you start getting stuff again you feed it the input stuff again
[18:56:29 CET] <kepstin> note that this can't be done *in* the filter chain, due to the construction of the framework (a filter requests a new frame, and then it doesn't get run again until a new frame is available)
[18:56:42 CET] <JEEB> yes, I have already mentioned that
[18:56:55 CET] <__raven__> JEEB: yes, but i need no backup stream/input for it. it should just freeze on the last received image, or in other words duplicate it as many times as needed to reach the 25fps
[18:57:05 CET] <JEEB> __raven__: that's still a back-up feed
[18:57:26 CET] <__raven__> kepstin: i do not need that to be in the filter chain
[18:58:06 CET] <__raven__> i could do a normalization process for every input apart from that
[18:58:51 CET] <__raven__> anything that feeds a fixed framerate into a fifo, our rtmp server, a pipe or something else
[18:59:17 CET] <kepstin> the code to do this has to run after the decoder, before the filter chain.
[18:59:47 CET] <kepstin> since it will work by passing duplicates of the last decoded frame to the filter when no frames were decoded in some timeout.
[19:00:10 CET] <JEEB> as I noted, upipe has logic for async primary/back-up switching if you are interested in looking at that logic
[19:00:24 CET] <JEEB> where back-up includes "just show the last image at a static rate"
[19:01:15 CET] <__raven__> not sure if i got you right: do you mean the filter chain within the mosaic command or the filter chain of "any ffmpeg command"?
[19:01:46 CET] <JEEB> you will not be able to do this with just ffmpeg.c
[19:01:50 CET] <JEEB> in its current state
[19:02:06 CET] <JEEB> unless you handle all this logic with another thing that does a lot of the stuff that ffmpeg.c already does
[19:03:45 CET] <__raven__> what i mean is "vfrInputStreamFromRtmp{i} -> [ffmpegNormalizationProcess{i}] -> toRtmp -> allRtmpInputs -> [ffmpegMosaicProcess]"
[19:04:31 CET] <JEEB> not sure why you're going back to RTMP there but if that's how you want to handle that - sure
[19:05:00 CET] <__raven__> could be pipe, fifo or something else too
[19:06:45 CET] <kepstin> that sounds like kind of a pain to set up - you'd basically be writing a new program that reads from rtmp, demuxes, decodes, duplicates frames if needed to ensure a constant stream, then remuxes the frames (i'd probably use nut) - to feed it into an ffmpeg.c process
[19:07:06 CET] <__raven__> :/
[19:07:07 CET] <kepstin> at that point you might as well put the final mosaic and encoding step into your new program too
[19:08:33 CET] <JEEB> yea, that's what I meant :P
[19:08:48 CET] <__raven__> what comes out, in general, when transcoding an rtmp with vfr with a fixed -r 25?
[19:09:13 CET] <JEEB> static 25Hz unless your input goes wee-wee
[19:09:26 CET] <JEEB> which includes the protocol going dead as well as just having really low frame rate
[19:10:05 CET] <JEEB> like if you get a frame every second or so, you will only get those next 25 output images after the logic has noticed the amount of duplication that has to be done
[19:10:09 CET] <kepstin> the really fun thing is when you get e.g. a 10 second gap - then it'll pause output for 10 seconds, then output 250 frames as fast as possible.
[19:10:18 CET] <JEEB> yes
[19:10:21 CET] <JEEB> that's what I meant
[19:10:36 CET] <JEEB> ffmpeg.c is really based on file-to-file workflows where such pauses are OK
[19:11:03 CET] <__raven__> hm but that sounds like everything i want so far
[19:11:05 CET] <kepstin> (i recently fixed an issue in the fps filter when it was doing that - it used to create all the avframes to fill the gap in memory, and I had it oom on a file with a multi-hour timestamp gap)
[19:11:36 CET] <__raven__> real breakdowns of single input streams are not my concern - we would just sort that stream out and restart the generator
[19:12:19 CET] <kepstin> __raven__: the issue is that ffmpeg.c as currently designed cannot produce a continuous properly paced output stream given unstable inputs.
[19:12:42 CET] <__raven__> more important is whether it exits when the output rate is generally above the average input rate, or slows the video down when it averages below
[19:13:14 CET] <__raven__> kepstin: any way to catch that with buffers and timeouts?
[19:13:28 CET] <JEEB> sure, but even that is extra code
[19:13:36 CET] <kepstin> __raven__: the required buffers and timeouts would have to be added inside ffmpeg.c
[19:13:55 CET] <JEEB> or you just make your own API client with your own requirements
[19:14:14 CET] <JEEB> ffmpeg.c can get you surprisingly far - until it can't :P
[19:14:25 CET] <__raven__> yeah ^^
[19:15:18 CET] <kepstin> i mean, you could do something utterly ridiculous like have an ffmpeg read the input, decode, output raw frames, pipe that to a new application you write that paces and drops/duplicates the raw frames, then pipe that to another ffmpeg that encodes.
[19:15:36 CET] <kepstin> but i wouldn't recommend that, seems likely to be fragile :)
[19:16:22 CET] <kepstin> (you'd need to put the frames in some container so they have timestamps, too, which means you still need a demuxer/muxer in your app)
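(A sketch of that admittedly fragile pipeline; "pacer" stands for the hypothetical frame-pacing program you would have to write, everything else is stock ffmpeg:)

    ffmpeg -i rtmp://srv/app/in1 -c:v rawvideo -f nut pipe:1 \
      | ./pacer \
      | ffmpeg -f nut -i pipe:0 -c:v libx264 -f flv rtmp://srv/app/out1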
[19:16:38 CET] <__raven__> going down the line, let's say i do not need realtime: "sampling" every input stream into a still image and working with loop input on the generator script......?
[19:17:08 CET] <kepstin> you can't sample images in an encoded stream
[19:19:24 CET] <kepstin> if you don't need realtime (e.g. you're just archiving to disk), then ffmpeg will be fine
[19:19:38 CET] <JEEB> well
[19:19:46 CET] <JEEB> think of the scenario where one input goes dead
[19:19:48 CET] <kepstin> as long as the gaps aren't too long, i guess (it might have to generate a lot of data to fill gaps)
[19:19:51 CET] <JEEB> and the lavfi chain is waiting
[19:20:03 CET] <__raven__> indeed it is just a multiviewer to get an overview of the input streams
[19:20:39 CET] <kepstin> for a multiviewer, I'd honestly just run a bunch of independent mpv instances with the windows titled :)
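(A sketch of that approach, assuming a machine with a display; stream URLs are illustrative:)

    for i in $(seq 1 32); do
        mpv --title="input $i" --no-audio "rtmp://srv/app/stream$i" &
    done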
[19:21:40 CET] <__raven__> kepstin: it is a headless server
[19:21:59 CET] <__raven__> the multiviewer should be sent out via rtmp too
[21:33:58 CET] <JEEB> dongs: http://up-cat.net/p/d4d024e2
[21:34:10 CET] <JEEB> I think for today that's enough masochism :P
[22:05:47 CET] <TheAMM> When using -dump_attachment, is there any sane way not to have ffmpeg complain about the lack of an output file?
[22:06:24 CET] <JEEB> probably only by doing -f null -
[22:06:26 CET] <JEEB> or something
[22:06:41 CET] <JEEB> welcome to features that don't really mix with the ffmpeg.c flow vOv
[22:07:49 CET] <TheAMM> Leads to decoding
[22:08:04 CET] <TheAMM> I'll just ignore the error and check if the dumped attachment exists
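(For reference, the basic invocation as documented - the attachment is written while the input is opened, so the "At least one output file must be specified" error that follows can indeed be ignored; file names are illustrative:)

    ffmpeg -dump_attachment:t:0 out.ttf -i input.mkv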
[22:08:55 CET] <JEEB> aye
[22:45:09 CET] <ossifrage> I'm amazed that chrome is playing the crap I'm sending it, shame firefox doesn't like it as well
[22:48:44 CET] <ossifrage> I'm just writing one GOP frame by frame as a fragmented mp4 file and then truncating the file at the end of the GOP, the webserver is effectively doing 'tail -f'
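(A hedged sketch of producing such a stream - these are the stock fragmented-mp4 movflags, not necessarily ossifrage's exact setup:)

    ffmpeg -i input.mp4 -c:v libx264 -g 50 \
           -movflags frag_keyframe+empty_moov -f mp4 fragmented.mp4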
[23:38:33 CET] <__raven__> opening all the input videos (those are rtmp streams) for the mosaic generator takes ages. i was not able to find out the reason yet. how to speed that up or parallelize it?
[23:42:46 CET] <JEEB> if you are using ffmpeg.c most likely they are opened and probed one by one :P
[23:53:20 CET] <__raven__> JEEB: yes they are, but why does it take around 20 seconds for one stream? i already fixed gop, cbr, fps, pts and such
[23:54:06 CET] <JEEB> probing takes time?
[23:55:41 CET] <__raven__> any workaround?
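(The usual knobs for faster input opening - values are illustrative, and setting them too low can misdetect streams:)

    ffmpeg -probesize 500k -analyzeduration 1M -i rtmp://srv/app/in1 -f null -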
[00:00:00 CET] --- Mon Mar 18 2019

