[Ffmpeg-devel-irc] ffmpeg.log.20180116
burek
burek021 at gmail.com
Wed Jan 17 03:05:02 EET 2018
[00:07:51 CET] <chocolaterobot> `ffmpeg -i file:filename.ogg -loop 1 -i image.png -tune stillimage -shortest -c:a copy out.mkv` <--- How do I lower the framerate in this command? (I just need to add a video to an audio so that I can upload to YouTube)
[00:16:10 CET] <alexpigment> set -framerate [whatever] before the inputs
[00:16:24 CET] <alexpigment> https://trac.ffmpeg.org/wiki/Slideshow
[00:44:39 CET] <chocolaterobot> alexpigment: thanx. so I can do `-framerate 1` and it won't make a difference with regard to audio, yes?
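A full command along those lines might look like this (a sketch, untested; libx264 as the video codec and the filenames are assumptions, and -framerate has to precede the image input it applies to):

    ffmpeg -i filename.ogg -loop 1 -framerate 1 -i image.png -c:v libx264 -tune stillimage -c:a copy -shortest out.mkv

Since the audio is stream-copied (-c:a copy), the video framerate has no effect on it.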
[02:46:59 CET] <therage3> has there ever been a bug in FLAC such that an encoded file actually differed from the source?
[06:48:52 CET] <k_sze> Is there a way to use ffmpeg to turn an H.264 video from 1080p to 4K with nearest-neighbor resampling, losslessly?
[06:49:19 CET] <k_sze> Hopefully without the file size blowing up.
[06:52:56 CET] <k_sze> This is mostly to work around a stupid 4K TV that can't play a 1080p video without interpolation.
[06:55:10 CET] <furq> not losslessly
[06:55:27 CET] <furq> i mean you could encode it to lossless h264 but that'll be massive and the tv most likely won't play it anyway
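For reference, "lossless" nearest-neighbour upscaling would look roughly like this (a sketch, untested; -qp 0 puts libx264 into lossless mode, which implies the High 4:4:4 Predictive profile that consumer TVs generally cannot decode, as noted above):

    ffmpeg -i in1080.mp4 -vf scale=3840:2160:flags=neighbor -c:v libx264 -qp 0 -c:a copy out4k.mkv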
[09:41:25 CET] <Guest56> hi, i currently use https://github.com/michaelherger/spotty to pipe audio coming from Spotify to stdout. I'm trying to pipe this through ffmpeg to do some processing. I can't seem to get the right parameters to receive the buffer from stdout, anyone has experience with this?
[09:42:41 CET] <Guest56> i'm currently using https://github.com/fluent-ffmpeg/node-fluent-ffmpeg library, i just have process.stdin as its input and pipe to process.stdout, then in the cli, i pipe it to vlc
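Assuming spotty writes raw 16-bit little-endian 44.1 kHz stereo PCM to stdout (an assumption; check its actual output format), the ffmpeg end of such a pipe would look roughly like:

    spotty [options] | ffmpeg -f s16le -ar 44100 -ac 2 -i pipe:0 [filters/encoding options] out.mka

Raw PCM has no header, so the -f/-ar/-ac triple before -i pipe:0 is mandatory.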
[10:25:46 CET] <kikobyte> BtbN: Hi, yesterday I asked about scale_npp not working from within a systemd service. The issue seems to be that it creates $HOME/.nv/ComputeCache folder, while the user specified in the .service file usually doesn't have a home directory.
[10:26:21 CET] <BtbN> weird, must be something specifically libnpp does
[10:27:03 CET] <kikobyte> Most likely. Apparently that doesn't look like anything an FFMPEG filter would do
[10:27:21 CET] <sfan5> .nv/ComputeCache/ sounds like nvidia's driver to me
[11:21:04 CET] <BtbN> sfan5, except it doesn't. So it has to be libnpp
[11:37:41 CET] <sfan5> BtbN: then I'm wondering why these files are in my homedir, since I don't even have cuda (libnpp) installed on this system
[11:38:09 CET] <BtbN> Well, it works fine if not using scale_npp, so that rules out everything else.
[11:38:42 CET] <BtbN> scale_cuda might cause it as well, as the compute cache might be CUDA related
[14:44:29 CET] <BaronLand> has there ever been a FLAC bug where files end up being different from the source material despite md5 match?
[14:46:04 CET] <ch80> hi, i'm struggling to compile ffmpeg with vmaf for win. it's way over my head. does anyone here have a working build they could share?
[14:48:00 CET] <relaxed> ch80: If you can run linux in a VM or boot a livecd, my builds include vmaf: https://www.johnvansickle.com/ffmpeg/
[14:55:39 CET] <ch80> unfortunately not
[15:14:45 CET] <durandal_1707> BaronLand: if you find such files, you are very rich
[15:15:38 CET] <BaronLand> durandal_1707 i was curious, since I have some backed up files here on my laptop from a CD collection back home, and they were encoded with an old version of the FLAC reference encoder
[15:15:45 CET] <BaronLand> just curious really
[15:31:40 CET] <b0bby__> hello
[15:31:56 CET] <b0bby__> is this the chat for the C api
[15:38:59 CET] <DHE> yes
[15:46:38 CET] <JoshX> Hi, when I have a file with exactly 27000 frames and a framerate of 30/1, why would the length of the file not be 15:00.000 but something like 14:59.992 or 15:00.010?
[15:47:43 CET] <JoshX> even when I drop the frames to files, get exactly 27000 files and then encode them again into mp4 with -r 30 the length is not exactly 15:00.000
[15:49:51 CET] <JoshX> also no matter if i use libx264, h264_nvenc or h264_qsv
[15:50:03 CET] <JoshX> all give me a small offset
[15:51:01 CET] <JoshX> and this matters because we concatenate multiple 15 minute files into larger files and they should be exactly 15:00.000 / 27000 frames @ 30fps each so 4 should be 1 hour.. but sometimes we gain like 20 sec per hour
[15:51:16 CET] <JoshX> and then per 24 hours that is 480 seconds which is 8 minutes ??
[15:51:27 CET] <JoshX> which throws off our software...
[16:10:53 CET] <durandal_1707> JoshX: isnt duration stored in container?
[16:12:27 CET] <durandal_1707> and depending on container duration may be pure guesswork
[16:18:44 CET] <JoshX> durandal_1707: well.. i have exactly 27000 frames and a framerate of 30/1
[16:18:53 CET] <JoshX> so how many seconds would that be?
[16:19:02 CET] <JoshX> (900.000 in my world)
[16:19:20 CET] <JoshX> r_frame_rate=30/1 -> duration=900.010026
[16:19:41 CET] <JoshX> i don't get it.. and it's different if i use different encoders and decoders as well
[16:20:18 CET] <durandal_1707> only the muxer matters for duration
[16:22:21 CET] <JoshX> but it's not as simple as frames / fps = duration??
[16:32:59 CET] <durandal_1707> JoshX: only for cfr, besides you never said container you use
[16:34:53 CET] <JoshX> its h264 / mp4
[16:35:08 CET] <JoshX> so when i recode with CFR it should be correct?
[16:35:41 CET] <JoshX> when i recode with -r 30/1 -vsync cfr
[16:35:51 CET] <JoshX> should that force the output to be exactly 30fps?
[16:38:26 CET] <JoshX> or what should be my ffmpeg command to convert the 27000 frames 30 fps mp4 h264 file to a file of 27000 frames with 30 fps and exactly 900 sec duration?
[16:49:02 CET] <JoshX> also, when I convert from 30/1 to let's say 15/1 and go from 27000 frames to 13502 or 13506.. even with cfr and everything.. why is that? :-/
[16:53:11 CET] <DHE> JoshX: it is assumed you want to preserve the video appearance
[16:53:45 CET] <DHE> what you should do is override the video input framerate. ffmpeg -r 30 input_at_15fps.vid [encoding options] output_at_30fps.vid
[16:53:47 CET] <DHE> or something like that
[16:56:28 CET] <JoshX> I'm trying exactly that: ffmpeg -y -c:v h264_cuvid -r 30/1 -i ../testinput.mp4 -c:v h264_nvenc -r 30/1 -vsync cfr test6.mp4
[16:56:39 CET] <JoshX> that should give me the correct output i assume?
[16:57:31 CET] <kepstin> you'll probably get the same result, because you're putting it back into mp4 container again (which iirc doesn't preserve exact timebase)
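One knob that can help with that (an assumption worth testing: the mov/mp4 muxer's -video_track_timescale option) is forcing the track timescale to a multiple of the frame rate, so every frame is a whole number of ticks and 27000 frames at 30 fps come out as exactly 900 s:

    ffmpeg -i testinput.mp4 -c copy -video_track_timescale 30 out.mp4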
[16:57:36 CET] <JoshX> DHE: that works when I keep the framerate at 30/1
[16:57:46 CET] <JoshX> DHE: when i do ffmpeg -y -c:v h264_cuvid -r 30/1 -i ../testinput.mp4 -c:v h264_nvenc -r 15/1 -vsync cfr test6.mp4
[16:58:05 CET] <JoshX> I get 13505 output frames...
[16:58:15 CET] <JoshX> where I would expect 13500 exactly
[17:00:30 CET] <JoshX> the output of above command gives me a file of exactly 15:00.000 with exactly 27000 frames with frame rate 30/1 and when i dump all the frames, all frames have 0.0333333 sec duration
[17:00:42 CET] <JoshX> this is exactly what I expect and is super cool
[17:01:01 CET] <JoshX> then when i change the frame rate from that file to 15/1 i get 13502 frames in the output
[17:01:08 CET] <JoshX> :-O
[17:06:01 CET] <kepstin> that's probably just a quirk of how the fps filter works internally. Try using the select filter instead, to drop every second frame.
[17:08:14 CET] <kepstin> might need settb/setpts filters after it to correct the stream's framerate tho.
[17:08:34 CET] <kepstin> (otherwise it'll dup frames when outputting in cfr mode)
[17:10:04 CET] <kepstin> something like -vf 'select=mod(n\,2),settb=1/15,setpts=N' would do what you want, I think.
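Dropped into a full command, that suggestion would look roughly like this (untested):

    ffmpeg -i in30.mp4 -vf 'select=mod(n\,2),settb=1/15,setpts=N' -vsync cfr out15.mp4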
[17:29:26 CET] <b0bby__> in C how would you combine two videos of different frame rates and with audio streams at different bit rates
[17:32:31 CET] <DHE> b0bby__: audio should probably be resampled to a constant rate. with the video you can manually set the timestamps on frames freely within the confines of the format and its framerate
[17:33:50 CET] <b0bby__> DHE: could you please elaborate on the video side?
[17:35:50 CET] <DHE> b0bby__: each frame has a "pts" value which sets its timestamp, in units relative to the time_base on the stream. if you set the time_base to 1/30, then you have 30fps material and you can set the pts of each frame to 1,2,3,... to make a normal 30fps video
[17:36:13 CET] <JEEB> (also in containers like mp4 you additionally have a duration set for each sample as well)
[17:36:16 CET] <DHE> if you're switching back and forth between 30fps and 60fps content, you could set the time_base to 1/60 and for 30fps material you use pts values of 2,4,6,8,...
[17:36:34 CET] <JEEB> (generally this then takes care of the issue of "what's the duration of the last sample?")
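A minimal sketch of that pts scheme in C (stamp_frame is an illustrative helper, not an FFmpeg API; error handling omitted):

    #include <libavutil/frame.h>

    /* Stamp frame `idx` of a segment for a stream whose time_base is 1/60 s.
     * src_fps is 30 or 60; base_pts is where the segment starts, in 1/60 ticks. */
    static void stamp_frame(AVFrame *frame, int src_fps, int64_t idx, int64_t base_pts)
    {
        int ticks_per_frame = 60 / src_fps;  /* 30 fps -> 2 ticks, 60 fps -> 1 tick */
        frame->pts = base_pts + idx * ticks_per_frame;
    }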
[19:38:25 CET] <lyncher> hi all. I'm wondering if anyone was able to successfully add SEI user data to H264 video frames?
[19:39:10 CET] <lyncher> I've looked to extract_mvs.c to be able to read existing SEI data
[19:39:26 CET] <lyncher> but what I really need is to _add_ SEI data
[19:39:40 CET] <lyncher> without reencoding the video stream
[19:39:56 CET] <lyncher> is that even possible (or I always need to reencode the video stream)?
[19:41:21 CET] <atomnuker> yes, h264_metadata bsf
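(For reference: h264_metadata's sei_user_data option takes a "UUID+string" payload, so on a build new enough to have the filter, stream-copy insertion would look roughly like the following, untested:)

    ffmpeg -i in.mp4 -c copy -bsf:v 'h264_metadata=sei_user_data=086f3693-b7b3-4f2c-9653-21492feee5b8+hello' out.mp4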
[19:47:00 CET] <lyncher> but if I want to add 608 CC data, currently I'll need to work directly with the API (libavcodec & friends), right?
[19:47:27 CET] <lyncher> because in that scenario an SEI user data payload would have to be added to every frame
[19:48:57 CET] <atomnuker> yes
[19:52:45 CET] <lyncher> I've made a small change in transcoding.c to add SEI:
[19:52:46 CET] <lyncher> AVFrameSideData * new_sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC, size);
[19:52:52 CET] <lyncher> memcpy(new_sd->data, data, size);
[19:53:07 CET] <lyncher> but the SEI data isn't being added to the frames in the output
[19:53:55 CET] <lyncher> I've started to add those changes in remuxing.c and encode_video.c, but without any success
[19:54:14 CET] <atomnuker> muxing that side data isn't supported
[19:54:58 CET] <BtbN> pretty sure A53 stuff can only be actually added during encoding
[19:55:01 CET] <lyncher> what would be the right approach to mux that?
[19:55:29 CET] <BtbN> since it's actually in the bitstream of the video, which is extremely weird and wrong for subtitles
[19:56:49 CET] <lyncher> but that is the current approach: broadcast video carries the captions inside the video bitstream
[19:57:45 CET] <lyncher> ffmpeg already supports read_eia608 which is able to extract the captions from the video bitstream
[19:58:07 CET] <atomnuker> that's the current approach for the same small part of the world that gave us non-integer framerates
[19:58:25 CET] <lyncher> what I'm trying to achieve is something like "write_eia608"....
[19:58:27 CET] <BtbN> dvb definitely does not use it
[19:59:52 CET] <lyncher> dvb uses images to transport text instead....
[20:00:07 CET] <JEEB> DVB subtitles are images, DVB teletext is text
[20:00:09 CET] <JEEB> vOv
[20:02:09 CET] <lyncher> I'm able to create a dvb subtitle stream for live content
[20:02:44 CET] <lyncher> which just needs to be muxed in the main TS stream
[20:03:04 CET] <lyncher> to deliver live captions in dvb regions
[20:03:47 CET] <lyncher> but what I'm trying to figure out is how to add 608/708 captions in live streams
[20:04:04 CET] <BtbN> you will need to re-encode the stream
[20:04:13 CET] <BtbN> It's impossible to add them outside of an encoder
[20:04:45 CET] <JEEB> were they in SEI messages or so?
[20:05:03 CET] <JEEB> or were they part of VCL NAL units+
[20:05:07 CET] <JEEB> ?
[20:05:30 CET] <lyncher> SEI/user_data payload
[20:05:42 CET] <lyncher> SEI_USER_DATA_REGISTERED_ITU_T_T35
[20:06:00 CET] <JEEB> ok, so in theory you could use a bit stream writer filter to stick them around
[20:06:15 CET] <JEEB> but I don't think FFmpeg has such subsystems; currently only encoders do things
[20:07:18 CET] <BtbN> I don't think you can put bitstream filters after encoders at all
[20:07:46 CET] <JEEB> that's why I didn't call it a BSF
[20:07:53 CET] <JEEB> but yes, in theory a BSF could do it?
[20:08:24 CET] <BtbN> for just remuxing, a bsf could totally be used though
[20:08:26 CET] <BtbN> if one existed
[20:08:45 CET] <JEEB> yup
[20:09:09 CET] <lyncher> but if the BSF can't be placed after the encoder and the muxer disregards the AVFrameSideData....
[20:09:24 CET] <BtbN> I thought you aren't re-encoding?
[20:09:30 CET] <JEEB> how is a muxer supposed to utilize that thing :P
[20:09:48 CET] <lyncher> I would like to avoid re-encoding
[20:16:32 CET] <lyncher> looking at remuxing.c.... what would I need to add there to add a bsf?
[20:18:00 CET] <BtbN> you need to write such a bsf first.
[20:18:17 CET] <BtbN> it doesn't exist. Only way to add a53 to a stream is to encode it
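The encode-side version of the snippet lyncher pasted earlier would be roughly this (a sketch, untested; cc_bytes/cc_size are hypothetical variables holding the caption payload, and an encoder that consumes A53 side data is assumed, e.g. libx264 with its a53cc option, which is on by default):

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Attach closed-caption bytes to a frame, then hand it to the encoder. */
    static int send_frame_with_cc(AVCodecContext *enc_ctx, AVFrame *frame,
                                  const uint8_t *cc_bytes, size_t cc_size)
    {
        AVFrameSideData *sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC, cc_size);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, cc_bytes, cc_size);
        return avcodec_send_frame(enc_ctx, frame);
    }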
[20:19:27 CET] <lyncher> let's say that I write something like: h264_mp4toannexb_bsf.c
[20:20:20 CET] <lyncher> that gets the frame from the packet and adds a53 data to the frame
[20:21:53 CET] <lyncher> ...but I still can't see how I would create the new packet(s)
[20:36:34 CET] <lyncher> trying to play around with h264_metadata, but I'm getting: Unknown bitstream filter h264_metadata
[20:38:12 CET] <FishPencil> Is it possible to take a 10 frame video and stack each frame side by side (horizontally) into a final image? Or should I first save each frame and then combine
[20:39:06 CET] <kepstin> FishPencil: the tile filter should be able to do that.
[20:39:40 CET] <FishPencil> how about with an unknown number of tiles?
[20:40:24 CET] <kepstin> unknown number of tiles? no, and that's a problem in general since you'll hit ffmpeg's max image size pretty quickly with large frames/long videos :)
[20:40:26 CET] <FishPencil> The bigger picture: I want to crop a section of video, and "tile" the cropped images side by side, frame by frame
[20:41:01 CET] <FishPencil> so i'll need to first save the crops, then combine
[20:41:11 CET] <kepstin> with an unknown number of tiles, you'll have to make a tool that analyzes the video in advance (e.g. using ffprobe) then generates the appropriate ffmpeg command line
[20:41:29 CET] <kepstin> or you could do them individually too, I guess.
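With a known frame count, one command can do the whole thing (a sketch; the crop geometry and the 10x1 layout are placeholders):

    ffmpeg -i in.mp4 -vf 'crop=100:100:0:100,tile=10x1' -frames:v 1 strip.png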
[20:42:15 CET] <FishPencil> Can ffprobe get the number of frames in a video
[20:45:18 CET] <JEEB> after decoding, yes
[20:45:22 CET] <JEEB> -show_frames
[20:46:20 CET] <FishPencil> then FFmpeg could just do that
[20:46:49 CET] <JEEB> you can get number of packets from -show_packets but not sure if that always is the same as actual image or audio frames
[20:47:07 CET] <JEEB> (as in, IIRC NAL units generally get separated)
[20:47:13 CET] <JEEB> if not, then packets are just fine
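e.g. (an untested sketch; -count_frames decodes, so it is slow but exact, while -count_packets / nb_read_packets is the faster packet-based variant):

    ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -of csv=p=0 in.mp4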
[20:48:06 CET] <stephaneyfx> Is it ok to ask questions regarding the FFmpeg C API on here or is there a more appropriate channel?
[20:48:26 CET] <kepstin> stephaneyfx: for using the C api to build other apps? Right here is fine.
[20:48:40 CET] <stephaneyfx> kepstin: Exactly. Thank you.
[20:49:02 CET] <kepstin> (the #ffmpeg-devel channel is for development of ffmpeg itself)
[20:56:46 CET] <stephaneyfx> I read the explanations regarding timestamps given in https://stackoverflow.com/a/40278283. One step mentions setting the time_base of the decoder to some sane value before opening it (time_base in AVCodecContext), but the doc indicates that this field is deprecated when decoding and recommends using the framerate field instead. Should I set framerate in the decoder context then (my sane value would be the inverse of the stream time_base)?
[21:12:09 CET] <FishPencil> What would be the correct crop filter args to return the vertical center line of pixels? 1 pixel wide, the middle one
[21:12:45 CET] <kepstin> FishPencil: what pixel format? e.g. in yuv420p you can't get a single pixel wide video, has to be a multiple of 2
[21:13:06 CET] <kepstin> FishPencil: you might have to convert to yuv444p or something first
[21:13:32 CET] <FishPencil> It's yuv420p
[21:13:39 CET] <stephaneyfx> I also noticed that the demuxing_decoding example does not seem to perform any timestamp conversion after getting a packet from the demuxer and before passing it on to the decoder. Shouldn't the pts of the packet be converted to use the decoder's time_base?
[21:14:10 CET] <kepstin> stephaneyfx: I'd recommend first just reading the comments in the avcodec.h header file - it tells you for each field whether you're supposed to set it or the codec sets it for decoding and encoding.
[21:15:29 CET] <lyncher> JEEB + BtbN: just found this repo: https://github.com/jpoet/ffmpeg
[21:15:42 CET] <lyncher> the changelog is: Add A53 Closed Captions to MPEG header if they are available.
[21:15:54 CET] <BtbN> and so it does?
[21:16:01 CET] <lyncher> but....
[21:16:06 CET] <BtbN> are you encoding mpeg?
[21:16:13 CET] <lyncher> it is only implemented for mpeg2
[21:16:14 CET] <JEEB> lyncher: see the file :P
[21:16:31 CET] <JEEB> it's the MPEG-1/2 video encoder
[21:16:35 CET] <BtbN> h264 should already be able to add a53 in most/all encoders
[21:16:54 CET] <lyncher> but it added some concept that was rejected by ffmpeg main repo: an array of AVFrameSideData per frame
[21:18:21 CET] <stephaneyfx> kepstin: Thank you for your answer. These comments appear to be the same as the ones in the doxygen doc. They indicate that time_base in AVCodecContext is deprecated when decoding, so I just wanted to know if I should set framerate instead before opening the decoder (https://stackoverflow.com/a/40278283 indicates to set time_base of the decoder).
[21:19:21 CET] <lyncher> in ffmpeg main repo I see code to decode a53 from h264 in h264_sei.c
[21:23:38 CET] <kepstin> stephaneyfx: according to the comments in the header file, the decoder might set it if a framerate is stored in the codec bitstream. There's no reason for you to ever set it when decoding, and in fact you should probably generally ignore it in favour of the framerate you get from the AVFormat instead.
[21:23:57 CET] <lyncher> in h264_slice.c a53 data is copied (for pass-through I suppose - which means that must be contained in source stream)
[21:25:24 CET] <kepstin> stephaneyfx: (er, from the AVStream specifically)
[21:27:52 CET] <stephaneyfx> kepstin: (Sorry if my questions sound silly, I am just having a hard time understanding all the timestamps and time bases) I thought that a stream and the corresponding decoder can use different time bases for timestamps (and this is what https://stackoverflow.com/a/40278283 seems to indicate) and that pts should be converted after reading a packet from the demuxer and before feeding it to the decoder. Did I misunderstand?
[21:32:17 CET] <kepstin> stephaneyfx: I think with current ffmpeg, the codec is supposed to just pass through the pts/dts values from the packet to the frame.
[21:32:19 CET] <stephaneyfx> I can see this conversion happening in the muxing example (conversion of a packet's timing fields from codec to stream time base), and I was expecting the reverse conversion when demuxing/decoding as the stackoverflow entry explains.
[21:32:47 CET] <kepstin> stephaneyfx: but I'd have to confirm this, I haven't looked to closely recently :/
[21:33:37 CET] <stephaneyfx> kepstin: Oh, so does it mean the timing fields in AVFrame are in AVStream's time_base unit?
[21:34:39 CET] <kepstin> stephaneyfx: they should be, yes.
[21:36:22 CET] <stephaneyfx> kepstin: So the time_base of the codec context is used only when encoding/muxing to convert the packet's timing fields from the encoder's time_base to the muxer's time_base? For decoding/demuxing, there's no such conversion?
[21:37:43 CET] <stephaneyfx> Err, from the encoder's time_base to the AVStream's time_base
[21:37:57 CET] <kepstin> normally when encoding, you'd set the encoder's time base to the same as the avstream's timebase, I'd think?
[21:39:45 CET] Action: kepstin notes that when decoding, you're supposed to set the pkt_timebase on the decoder to the timebase from the avstream.
[21:41:36 CET] <stephaneyfx> kepstin: I would think so too. But I guess there may be cases where they are different (e.g. I have read that a decoder can change its time base when it gets opened).
[21:42:17 CET] <stephaneyfx> Interesting note regarding pkt_timebase. Let me look it up. Thank you very much for your help kepstin.
[21:44:32 CET] <stephaneyfx> Looks useful indeed. I had overlooked this field. Thank you again.
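Put together, the decoder setup being described looks roughly like this (a sketch, untested; on releases of that era pkt_timebase was set via the av_codec_set_pkt_timebase() accessor rather than direct field access):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Open a decoder for stream `video_idx` of an already-opened input. */
    static AVCodecContext *open_decoder(AVFormatContext *fmt_ctx, int video_idx)
    {
        AVStream *st = fmt_ctx->streams[video_idx];
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(dec_ctx, st->codecpar);
        dec_ctx->pkt_timebase = st->time_base; /* decoded frame->pts stays in stream time_base units */
        avcodec_open2(dec_ctx, dec, NULL);     /* error handling omitted in this sketch */
        return dec_ctx;
    }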
[21:44:47 CET] <FishPencil> kepstin: I don't even think yuv444p can do single pixel width
[21:44:47 CET] <kepstin> stephaneyfx: like I said, just read through the comments/docstrings on all the fields of the AVCodecContext, they say who sets them.
[21:45:24 CET] <kepstin> FishPencil: i'd have to see your full filter line. You should be able to do single pixel width stuff with yuv444p.
[21:45:44 CET] <FishPencil> -i cut.mp4 -pixel_format yuv444p -vf "crop=1:in_h:0:100"
[21:46:13 CET] <stephaneyfx> kepstin: I have but will again, more thoroughly this time. I was just trying to match this stackoverflow answer with the current API. Thank you very much.
[21:47:22 CET] <kepstin> FishPencil: -pixel_format does nothing there. You want to do -vf "format=yuv444p,crop=1:in_h:0:100"
[21:48:20 CET] <kepstin> FishPencil: the correct name for the output option to set pixel format is "-pix_fmt", and it's equivalent to adding a ",format=XXX" to the *end* of the video filter chain.
[21:48:42 CET] <kepstin> FishPencil: to do a format conversion earlier, you usually have to add it explicitly to the filter chain.
[22:11:18 CET] <rjp421> ffmpeg-git-20180111-64bit-static/ffmpeg on centos 6 x64, kernel 2.6.32-696.18.7.el6.x86_64: "FATAL: kernel too old"
[22:11:43 CET] <rjp421> the latest kernel
[22:13:28 CET] <kepstin> upgrade to centos 7? it has a newer kernel
[22:13:51 CET] <kepstin> or build ffmpeg yourself, i guess, if you can't find a build that works on that old kernel.
[22:15:59 CET] <therage3> this is at least the 50th time I've seen someone in a support channel ask something because a CentOS kernel is way too ancient
[22:16:12 CET] <therage3> 2.6.32-696.18.7.el6.x86_64
[22:16:14 CET] <therage3> Jesus
[22:16:27 CET] Action: kepstin notes that kernel 2.6.32 was originally released in 2009, although centos/rhel has *heavily* modified it.
[22:17:25 CET] <rjp421> easier said than done
[22:21:03 CET] <rjp421> certain bs on the server wont run on centos7, or else id be using it
[22:21:14 CET] Action: therage3 is on 4.14.13 :)
[22:32:58 CET] <kepstin> rjp421: well, there's always the option of just building ffmpeg yourself.
[22:34:47 CET] <rjp421> meaning i need to install all the codecs etc, to build against. clutter i dont want on the server
[22:34:58 CET] <furq> to be fair the page does say it should work on 2.6.32
[22:35:02 CET] <rjp421> especially needing extra repos for the codecs
[22:35:04 CET] <furq> so i assume relaxed would want to know if it doesn't
[22:38:06 CET] Action: kepstin would spin up a new centos 7 vm, build a static ffmpeg there, then copy the binaries over.
[22:57:20 CET] <SortaCore> Okay Google, play 10 minutes of silence.
[23:08:56 CET] <kepstin> Youtube: <plays a loud advertisement before someone's video of contentid-matched silence>
[23:13:06 CET] <tyng> in this output, which parts represent the colormatrix, primaries and gamma values? yuv422p10le(tv, bt709/unknown/unknown)
[23:13:59 CET] <JEEB> the three are those after tv|pc (limited|full) range IIRC
[23:14:10 CET] <tyng> but in which order?
[23:14:15 CET] <JEEB> you could just ffprobe -of json -show_streams I think?
[23:14:24 CET] <JEEB> to actually get the values in a parse'able manner
[23:17:26 CET] <jkqxz> matrix/primaries/transfer, it's made by <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/utils.c;h=4c718432ad06c5712074ef75e95cbd6b011dc7f9;hb=HEAD#l1335>.
[23:19:20 CET] <tyng> json output confirms it's pixfmt(range, space/primaries/transfer)
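i.e. something along the lines of (field names as ffprobe reports them):

    ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range,color_space,color_primaries,color_transfer -of json input.mkv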
[00:00:00 CET] --- Wed Jan 17 2018