[Ffmpeg-devel-irc] ffmpeg.log.20191119

burek burek at teamnet.rs
Wed Nov 20 03:05:03 EET 2019


[00:04:03 CET] <kepstin> either way, the actual storage of the caption data in digital video is per frame, and I haven't read enough to know whether they preserve the multiple caption tracks stuff.
[00:17:29 CET] <kepstin> lyncher: by showinfo, do you mean the showinfo video filter?
[00:17:47 CET] <kepstin> if so, what's an example of its output?
[00:17:47 CET] <lyncher> yes, the video filter
[00:22:33 CET] <lyncher> https://filebin.net/huskm87rmr3atpr1/showinfo.txt?t=c3zfxvun
[00:22:52 CET] <lyncher> the SCC of what should be printed by showinfo: https://filebin.net/huskm87rmr3atpr1/elias_yoder_-_022619_H_Commerce_31_2lines_paintOn.scc?t=01xh2wlj
[00:24:42 CET] <kepstin> ok, those aren't 608 captions
[00:24:58 CET] <lyncher> at line 840 the captions start
[00:25:21 CET] <lyncher> and the 608 command is sent: 942c
[00:25:28 CET] <lyncher> *fc942c
[00:25:49 CET] <lyncher> but when you follow the remaining commands, they are interleaved
[00:25:52 CET] <kepstin> 608 captions give you a whole 2 bytes of caption data per field, and you have 80 bytes of caption data (plus 1 byte per caption packet header) per frame there :)
[00:26:08 CET] <lyncher> 608 + 608 + empty 708
[00:26:48 CET] <kepstin> hmm, you're right, that's just padding.
[00:27:03 CET] <kepstin> the two interesting ones are 'fc8080fd8080' in most of the frames.
[00:27:11 CET] <lyncher> empty 608
[00:27:22 CET] <lyncher> fc (f1) fd (f2)
[00:28:50 CET] <kepstin> fc means invalid ntsc cc field 1, fd means invalid ntsc cc field 2. lets see if there's some with valid data in there...
[00:29:49 CET] <kepstin> unless i'm off by a bit
[00:29:53 CET] <kepstin> oh, i'm off by a bit
[00:30:05 CET] <lyncher> yes.... the valid one
[00:30:22 CET] <kepstin> fc is valid field 0, fd is valid field 1
[00:30:31 CET] <kepstin> ok, so both fields are there just like you wanted?
[00:30:41 CET] <lyncher> 0x80 is "0" / empty padding (with odd parity)
[00:33:30 CET] <kepstin> if you're having issues with vlc playing this file, i'd assume its due to bugs in vlc's cc data parser, since ffmpeg's exposing all the cc information it gets...
[00:34:07 CET] <lyncher> the issue that I'm having is that I'm trying to retrieve all the CC data in the file using the showinfo video filter
[00:34:29 CET] <lyncher> but showinfo is giving me only part of the 608 data contained in the source
[00:34:49 CET] <kepstin> i notice you've already hacked up the showinfo filter, since by default it doesn't print a dump of the side data in hex.
[00:35:12 CET] <lyncher> yes, I've added the printf that I pasted earlier in the conversation
[00:35:18 CET] <lyncher> just to dump the contents
[00:35:54 CET] <lyncher> of AVSideData A/53 CC
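(For reference: a stock showinfo invocation that logs per-frame info, including which side-data types such as A/53 closed captions are attached, would look roughly like the line below. "input.ts" is a placeholder, and the raw hex dump discussed above comes from lyncher's local printf patch rather than the unmodified filter.)
    ffmpeg -i input.ts -vf showinfo -f null -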
[00:36:01 CET] <kepstin> hmm, looking through that log i do see that there are some video frames that do not have any CC side data
[00:36:24 CET] <kepstin> every so often a couple frames are missing it completely
[00:39:08 CET] <kepstin> specifically, 2 out of every 30 frames are missing cc data.
[00:42:01 CET] <kepstin> if you're sure that the caption data is actually included for every frame in a file (i'm assuming you're using a current git build), then filing a bug with a sample file would make sense - but you mentioned that mpv shows the CC correctly?
[00:42:30 CET] <kepstin> that leaves me confused, since i'd assume that mpv would be using ffmpeg's demuxer to get the CC data, same as you're seeing here.
[00:42:43 CET] <lyncher> yes. which means that ALL the cc_data is being carried in SEI units
[00:43:00 CET] <lyncher> but I'm not able to retrieve it with showinfo filter
[00:43:42 CET] <lyncher> the cc data that is made available is interleaved
[00:44:25 CET] <lyncher> since the video is interlaced, I'm assuming that some kind of bug might exist when loading AVSideData of interlaced fields
[00:44:44 CET] <kepstin> fields aren't stored separately in digital video
[00:45:05 CET] <kepstin> and caption data from both fields should be stored in a single sei per frame, i think
[00:45:34 CET] <kepstin> (put an asterisk on that "aren't stored separately", because sometimes they are. But not with h264)
[00:46:24 CET] <kepstin> i think?
[00:46:36 CET] <kepstin> man, i don't know enough about how this is stored in h264
[00:46:58 CET] <kepstin> i need to stop talking, and you should file a bug so a developer can take a look
[00:47:12 CET] <lyncher> ok. thanks a lot for your time
[00:47:17 CET] <kepstin> ideally with a sample, a couple seconds of that file will do
[00:47:57 CET] <kepstin> my guess is that something is going wrong with associating the caption data with the frames, but someone familiar with h264 syntax would have to figure out why
[00:48:21 CET] <kepstin> still really weird that mpv works, since it is presumably using ffmpeg's decoder
[00:48:50 CET] <lyncher> maybe they use lavfi
[00:48:57 CET] <lyncher> or have a custom CC parser
[00:49:16 CET] <kepstin> lavfi? i dunno what that is
[00:49:26 CET] <kepstin> mpv uses ffmpeg libraries directly for demuxing and decoding
[00:52:48 CET] <BtbN> lavfi is literally libavfilter. From FFmpeg.
[00:52:59 CET] <BtbN> And it usually doesn't decode stuff, but filters.
[00:53:21 CET] <pink_mist> it's almost like the clue is in the name
[00:54:25 CET] <kepstin> well, there is a "demuxer" named 'lavfi', but it just lets you stick a filter chain which can generate audio/video frames in place of a real video input.
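(A minimal sketch of that lavfi pseudo-input, using a generated test pattern in place of a real file:)
    ffmpeg -f lavfi -i "testsrc=duration=5:size=1280x720:rate=30" -c:v libx264 lavfi_test.mp4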
[00:59:36 CET] <kepstin> the loop in the h264 decoder looks pretty simple, it parses nal units (including sei, sps, pps, etc.), accumulating all the info until it gets a slice, and then it hands off to the actual video decoder, and the other info gets attached to the newly decoded frame.
[01:01:07 CET] <TheAMM> Quick Q before I go and try to find out myself: in which format is the h264 bitstream stored in NUT?
[01:01:17 CET] <TheAMM> I assume length prefixed mode
[01:02:21 CET] <furq> TheAMM: annex b
[01:03:20 CET] <TheAMM> hmm
[01:03:25 CET] <TheAMM> Cheers, then
[04:59:08 CET] <analogical> how do I tell ffmpeg to encode using MPEG-2 layer 3 ?
[06:18:45 CET] <kepstin> the only thing the mpeg2 spec added to mp3 was lower sampling rates for lower quality/reduced bitrate
[06:19:15 CET] <kepstin> i'm not sure whether lame supports encoding them, but they're so terrible that i wouldn't even bother. it might do it automatically.
[06:19:26 CET] <kepstin> if you provide a sufficiently low sample rate signal? i dunno
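(An untested sketch of what that would look like; lame is expected to pick the MPEG-2 layer 3 variant on its own once the sample rate is low enough, e.g. 16 kHz:)
    ffmpeg -i input.wav -ar 16000 -c:a libmp3lame -b:a 32k output.mp3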
[07:20:22 CET] <thewordsmith> hello all
[10:49:39 CET] <Anderssen> Hi, is there anything in ffmpeg that can be activated or deactivated so that the output file, when you play it in a video player, is fast (or slow) to navigate in? (the context is that I used the hardware encoder hevc_amf (on windows), which produced a file that is slow to navigate in; hevc_vaapi (suse leap 15.1) produces a file that is fast to navigate in)
[10:58:33 CET] <cehoyos> While most necessary information is missing from your question, you could try to change the gop size
[11:16:30 CET] <Anderssen> cehoyos: if you mean the command line, it was very simple without additional parameters: ffmpeg -i in.mp4 -vcodec hevc_amf -acodec copy out.mp4
[11:17:42 CET] <Anderssen> i don't think it's the gop size though, i've played videos with a range of gop sizes, and never got this problem
[11:28:05 CET] <Anderssen> cehoyos: my apologies, it must be something with the gop size after all; it's just that the encoder seems to ignore the "-g" command, so the output file remains slow regardless; maybe the "force_key_frames" would do something
[11:44:41 CET] <cehoyos> The source code indicates that it gets set, I cannot test myself
[11:47:58 CET] <Anderssen> what I've also tried is "-header_insertion_mode gop", but the gop size in the resulting file seems to be enormous anyhow; it's a keyframe every 10 minutes
[11:49:17 CET] <cehoyos> That sounds buggy but as said, I don't know the code and driver
[12:05:00 CET] <Anderssen> kk, that seems to be it: if you add the "-gops_per_idr" option, e.g. -gops_per_idr 1, then it accepts the value given by -g (respectively, if you say -gops_per_idr 2, then it's twice the number of frames between k-frames)
[12:05:22 CET] <Anderssen> *between i-frames
[12:05:40 CET] <cehoyos> Sorry, I wanted to suggest this
[12:06:09 CET] <Anderssen> ah ok, i thought you meant the -g option
[12:06:39 CET] <cehoyos> I then wondered how buggy and untested the code must be if that makes a difference
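(Putting Anderssen's findings together, an untested sketch of forcing a sensible keyframe interval with hevc_amf would be:)
    ffmpeg -i in.mp4 -c:v hevc_amf -g 60 -gops_per_idr 1 -c:a copy out.mp4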
[14:24:56 CET] <funyun> hi. can anyone help me convert this command to one which encodes using crf instead of 2pass? "ffmpeg -ss 300 -loglevel debug -hwaccel cuvid -i input.mp4 -filter:v "minterpolate='mi_mode=mci:mc_mode=aobmc:vsbmc=1'" -c:v h264_nvenc -preset slow -profile:v main -rc ll_2pass_quality -an -b:v 12M -pass 1 -2pass -1 -t 10 test.mp4"
[14:26:00 CET] <funyun> when i remove "-rc ll_2pass_quality -an -b:v 12M -pass 1 -2pass -1"
[14:26:23 CET] <funyun> and enter -crf 20, it automatically encodes at 2MB no matter what crf i choose
[14:27:47 CET] <furq> -rc vbr_hq -cq 20
[14:29:17 CET] <funyun> furq: thanks so much
[15:07:19 CET] <BtbN> nvenc does not have crf
[15:07:31 CET] <BtbN> closest it has is vbr mode with target_quality
[15:07:46 CET] <DHE> which is more a -qp target than a -crf target
[15:07:55 CET] <BtbN> no, that's constqp
[15:15:23 CET] <DHE> oh... hmm...
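(So, following furq's suggestion, a constant-quality-ish nvenc version of funyun's command, not true CRF, might look like the line below; -b:v 0 is assumed here to lift the default bitrate cap so -cq drives quality:)
    ffmpeg -ss 300 -i input.mp4 -filter:v "minterpolate='mi_mode=mci:mc_mode=aobmc:vsbmc=1'" -c:v h264_nvenc -preset slow -rc vbr_hq -cq 20 -b:v 0 -an -t 10 test.mp4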
[15:49:52 CET] <Chagall> I am using the psnr filter, but I get absurd results unless I do an intermediate conversion of my input files to raw y4m. any explanation? if I recall correctly ffmpeg implicitly does frame interpolation or some kind of fps conversion but both files have the same frame rate, so it does not make much sense
[15:50:45 CET] <Chagall> I am comparing ffv1 and AV1 and the samples are in this github issue: https://github.com/xiph/rav1e/issues/1815
[15:51:21 CET] <Chagall> ffprobe gives me these lines:
[15:51:22 CET] <Chagall>     Stream #0:0: Video: ffv1 (FFV1 / 0x31564646), yuv420p, 1920x1080, SAR 1:1 DAR 16:9, 23.98 fps, 23.98 tbr, 1k tbn, 1k tbc (default)
[15:51:34 CET] <Chagall>     Stream #0:0: Video: av1 (Main) (AV01 / 0x31305641), yuv420p(tv), 1920x1080, 23.98 tbr, 23.98 tbn, 23.98 tbc
[15:52:13 CET] <durandal_1707> Chagall: tbc differs
[15:52:51 CET] <durandal_1707> you need to use same timebase
[15:53:33 CET] <Chagall> ok
[15:54:01 CET] <Chagall> probably just going to do y4m conversion before then, but thanks
[15:55:03 CET] <durandal_1707> you are wasting time and resources that way
[15:58:13 CET] <Chagall> I think I would waste more time figuring out how to do it the way you like
[15:59:04 CET] <durandal_1707> nonsense
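(For the record, one way to put both inputs on the same timebase without a y4m round-trip, untested on these particular samples, is to insert settb/setpts ahead of psnr:)
    ffmpeg -i distorted.mkv -i reference.mkv -lavfi "[0:v]settb=AVTB,setpts=PTS-STARTPTS[d];[1:v]settb=AVTB,setpts=PTS-STARTPTS[r];[d][r]psnr" -f null -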
[16:05:36 CET] <Freneticks> Is there a way to know if ffmpeg is active ? Like a socket, script hooks, or PID file or something ?
[16:30:38 CET] <Anderssen> Freneticks, you mean besides listing all processes (with e.g. ps aux)?
[16:31:14 CET] <Freneticks> Anderssen: yes
[16:32:52 CET] <Anderssen> hm, I wouldn't have thought there is a need for another method than with "ps" or "htop".
[16:32:56 CET] <kepstin> I'm not sure what you mean by "active". If you've started it and it hasn't exited yet, it's active?
[16:34:02 CET] <Freneticks> kepstin: that's right
[16:34:57 CET] <kepstin> so, that's how you tell it's active then.
[16:42:09 CET] <Freneticks> I was looking for a way to launch something when ffmpeg is started, but well i will just put ffmpeg inside a bash script
[16:42:19 CET] <Freneticks> with &&
[16:42:39 CET] <kepstin> if you're starting ffmpeg, you can start something else at the same time, yeah, you don't need ffmpeg to do it.
[16:52:35 CET] <Freneticks> kepstin: yeah but i wanted to be sure that ffmpeg is started since my second service is bound to ffmpeg
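(A rough shell sketch of that idea; "my-second-service" is a hypothetical placeholder, and ffmpeg is backgrounded so the dependent service can start while it is still running:)
    #!/bin/sh
    ffmpeg -i input.mp4 -c:v libx264 output.mp4 &
    FFMPEG_PID=$!
    # start the dependent service only if ffmpeg is (still) alive
    kill -0 "$FFMPEG_PID" 2>/dev/null && my-second-service &
    wait "$FFMPEG_PID"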
[17:48:08 CET] <Hello71> previously in #systemd: 14:40 <Freneticks> Is there a way to set the service in "failed" mode or another one thats not active when he's trying to restart it ? (a service with infinite restart parameters)
[18:01:13 CET] <Freneticks> Are you a bot from nsa Hello71 ?
[18:04:58 CET] <Freneticks> I'm just trying to manage processes for a conversion bot
[18:42:42 CET] <rockyh> hello!
[18:44:43 CET] <rockyh> In this ffmpeg-dev IRC log, https://bit.ly/2O04OWk, it is mentioned that transfer characteristics smpte2084 is unsupported (April 2018). Is it still so?
[18:46:44 CET] <JEEB> rockyh: swscale doesn't support it, but if you utilize zscale you can convert to/from it. granted, you will need to tone map if you want to convert to SDR. for which we have multiple filters now.
[18:47:35 CET] <rockyh> JEEB: and if using the colorspace filter?
[18:48:09 CET] <JEEB> I don't see it in it
[18:48:41 CET] <JEEB> only BT.2020 TRC, but not AVCOL_TRC_SMPTEST2084
[18:50:49 CET] <rockyh> sorry, I can't follow you (I am quite new to ffmpeg). If I use `-vf "colorspace=bt709"', it doesn't work
[18:51:06 CET] <rockyh> (it gives the `Unsupported' error I mentioned)
[18:51:08 CET] <JEEB> yes, because colorspace filter does not support it
[18:51:17 CET] <JEEB> also as noted, PQ is HDR
[18:51:20 CET] <JEEB> so you will have to tone map
[18:52:05 CET] <rockyh> ok, so in ffmpeg swscale and colorspace still do not support smpte2084
[18:52:16 CET] <JEEB> you will have to utilize one of the tone mapping related filters, possibly together with zscale if your FFmpeg is built with the zimg library
[18:52:45 CET] <JEEB> https://www.ffmpeg.org/ffmpeg-all.html#tonemap-1 or https://www.ffmpeg.org/ffmpeg-all.html#tonemap_005fopencl
[18:52:55 CET] <JEEB> both have usage examples there
[18:53:56 CET] <rockyh> zscale is completely new to me. Thanks for the links. I'll try!
[18:54:00 CET] <JEEB> also wat, why does the opencl thing only say it supports the BT.2020 transfer. that makes no sense
[18:54:06 CET] Action: JEEB checks the code
[18:54:33 CET] <JEEB> ok, no. it properly supports 2084 and HLG
[18:54:38 CET] <JEEB> at least it has linearization for that
[18:55:00 CET] <JEEB> ahhh, so you can *output* bt.2020 or bt.709 :P
[18:55:31 CET] <JEEB> and input has to be SMPTE ST.2084 or HLG
[18:55:42 CET] <JEEB> so yes, if you can build the opencl one, that should do it without zimg
[18:56:03 CET] <JEEB> I think that is a bit newer than the software one, and has a bit newer version of what mpv/libplacebo does with tone mapping
[18:56:38 CET] <rockyh> that seems exactly what I need to do. But I still have some confusion about what options/tools must be used
[18:57:12 CET] <JEEB> if everything was perfect both the opencl and the software tonemap filters would do the same thing algorithm-wise :P
[18:57:21 CET] <JEEB> but nobody updated the sw one to match closer what current mpv does
[18:57:32 CET] <JEEB> note that even the opencl one is probably a bit outdated by now
[18:57:39 CET] <JEEB> but still, it has tone mapping :P
[18:57:46 CET] <JEEB> zimg can do the conversion but it will just clip
[18:57:55 CET] <JEEB> thus you most likely want tone mapping
[18:58:34 CET] <JEEB> rockyh: you can output which filters are built into your ffmpeg.c if you check the output of `ffmpeg -filters`
[18:59:08 CET] <rockyh> yes, I know that option. Should I check for tone mapping or for opencl?
[18:59:26 CET] <JEEB> both have the tonemap word in their filter name
[18:59:28 CET] <JEEB> so tonemap
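(Concretely, assuming a shell with grep, something like this shows whether the relevant filters were built in:)
    ffmpeg -hide_banner -filters | grep -Ei 'tonemap|zscale'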
[18:59:42 CET] <rockyh> I have
[18:59:43 CET] <rockyh>  .S. tonemap           V->V       Conversion to/from different dynamic ranges.
[18:59:49 CET] <JEEB> and check if you have zscale for possible high-quality scaling
[19:00:04 CET] <rockyh> zscale is not there
[19:00:19 CET] <JEEB> ok, so the tonemap filter doesn't do linearization by itself, which the opencl one seems to do
[19:00:34 CET] <JEEB> so you need (currently) zscale for it
[19:00:38 CET] <JEEB> to use it
[19:01:04 CET] <JEEB> not sure how hard it would be to get the basic algo from the opencl one or elsewhere and put it in there
[19:01:52 CET] <rockyh> I have no preference with respect to opencl
[19:02:40 CET] <JEEB> anyways you have two alternatives: 1) build your FFmpeg with zimg (https://github.com/sekrit-twc/zimg) or 2) build with the opencl tonemap filter
[19:02:52 CET] <JEEB> of course, doing both and having them as alternatives is also an alternative
[19:03:56 CET] <rockyh> I found --enable-libzimg and --enable-opencl as available configuration options
[19:04:15 CET] <rockyh> maybe I'll try the 1) suggestion
[19:04:22 CET] <JEEB> you of course need the zimg library for the first one, and opencl needs whatever it needs
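(As a minimal sketch, option 1 boils down to roughly the following; the other configure flags depend on what your build already uses:)
    ./configure --enable-libzimg   # plus whatever flags you normally build with
    make -j"$(nproc)"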
[19:07:34 CET] <rockyh> this seems quite complicated in Ubuntu
[19:08:16 CET] <rockyh> but it's ok
[19:08:30 CET] <JEEB> zimg is quite useful so I've been trying to find time to get it packaged in debian, from which it would go towards ubuntu as well
[19:08:37 CET] <JEEB> since zimg is quite useful in various applications
[19:09:12 CET] <JEEB> also I'm still not sure where the linearization thing for PQ is there, or is that just something opencl defines?
[19:09:18 CET] <rockyh> it seems it is available in `ppa:mc3man/bionic-media'
[19:09:24 CET] <JEEB> just looking at the filter code
[19:09:43 CET] <JEEB> ah, colorspace common
[19:10:25 CET] <JEEB> ok, that is simpler than I thought :P
[19:12:19 CET] <rockyh> when converting from HDR to SDR, how to use `zscale' to obtain the same as `-vf "colorspace=bt709"'? If it's too broad, don't worry, I'll try to learn
[19:12:40 CET] <rockyh> (in the meanwhile I'm recompiling ffmpeg with zimg enabled)
[19:12:50 CET] <JEEB> if you look at the tonemap filter docs I linked, the example does exactly that I think
[19:13:07 CET] <JEEB> uses zscale to output linear rgb
[19:13:12 CET] <JEEB> then does tone mapping
[19:13:28 CET] <JEEB> then you use zscale to convert linear to what you need (gamma, bt.709)
[19:14:03 CET] <rockyh> you are referring to (in the 1st link) `ffmpeg -i INPUT -vf zscale=transfer=linear,tonemap=clip,zscale=transfer=bt709,format=yuv420p OUTPUT'
[19:14:36 CET] <JEEB> yes, that does a clipping tonemap though, which is very similar to what just -vf "zscale=transfer=bt709,format=yuv420p" would do
[19:14:48 CET] <JEEB> so you probably want to see the other options in the tone map filter :)
[19:14:57 CET] <rockyh> would you avoid that `clipping'?
[19:15:10 CET] <JEEB> clipping means that whatever goes over 100 nits just gets lost
[19:15:45 CET] <JEEB> while tone mapping that isn't just clipping attempts to map a wider range of brightness into the output's 100 nits
[19:16:18 CET] <rockyh> I would choose this second options
[19:16:24 CET] <rockyh> s/options/option
[19:16:59 CET] <JEEB> a simple way to see alternatives
[19:17:20 CET] <JEEB> if you have mpv, add an input.conf config file
[19:17:23 CET] <JEEB> and put `P cycle tone-mapping`
[19:17:24 CET] <JEEB> there
[19:17:35 CET] <JEEB> then open a HDR sample with --pause so it doesn't move
[19:17:38 CET] <JEEB> and hit shift-P
[19:17:49 CET] <JEEB> it will then cycle between the available tone mapping things
[19:18:05 CET] <JEEB> (see the manual where under your home dir the config file would go)
[19:18:44 CET] <JEEB> the positive side of clipping of course is that it keeps exactly what you have in the 100 nits without trying to be fancy or dynamic
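(Collecting the mpv experiment above into one place; hdr_sample.mkv is a placeholder file, and the input.conf path is the usual per-user location from the mpv manual:)
    # ~/.config/mpv/input.conf
    P cycle tone-mapping

    mpv --pause hdr_sample.mkv    # then press Shift+P to cycle tone-mapping algorithms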
[19:20:04 CET] <rockyh> I have mpv and I can try this! All these notions are completely new to me
[19:25:06 CET] <rockyh> in the meantime, the zscale option did a good job. Thank you so much!
[19:27:57 CET] <JEEB> :)
[19:28:06 CET] <JEEB> so what did you try in the end?
[19:28:28 CET] <rockyh> without clipping: `zscale=transfer=linear,zscale=transfer=bt709'
[19:28:43 CET] <JEEB> that is clipping ;)
[19:28:53 CET] <rockyh> whooooooops
[19:29:03 CET] <rockyh> then, I made some confusion. I'll try the other one, then
[19:29:04 CET] <JEEB> since you don't have the tonemap filter there
[19:29:55 CET] <JEEB> -vf "zscale=transfer=linear,tonemap=mobius,zscale=transfer=bt709,format=yuv420p"
[19:29:58 CET] <JEEB> something like this
[19:31:00 CET] <rockyh> this produces an error: Specified pixel format gbrpf32le is invalid or not supported
[19:31:13 CET] <rockyh> (as well as `zscale=transfer=linear,tonemap=clip,zscale=transfer=bt709')
[19:35:18 CET] <JEEB> can you post the log from -v verbose of that failure?
[19:35:22 CET] <JEEB> on a pastebin or so
[19:35:23 CET] <JEEB> and link here
[19:35:51 CET] <JEEB> the tonemap should take in AV_PIX_FMT_GBRPF32
[19:39:36 CET] <JEEB> oh wait, I have the thing built here too
[19:40:00 CET] <rockyh> sure! https://pastebin.com/UqsA6nFc
[19:40:07 CET] <JEEB> thanks
[19:40:19 CET] <JEEB> ahhh, are you sure you have the format there at the end?
[19:40:29 CET] <JEEB> since it's the encoder saying it can't handle it
[19:42:25 CET] <JEEB> the exact thing with mobius I posted works for me
[19:42:28 CET] <rockyh> sorry for not mentioning this. I was using `-pix_fmt +' and thought this was enough. Now I removed it and used `format' and it is running
[19:43:09 CET] <JEEB> the format filter is a meta thing that hints the previous filter what it should attempt to output
[19:44:58 CET] <JEEB> also zscale can of course scale, so since I didn't want my output to be impressively big (and funky to encode) I did "zscale=transfer=bt709:w=1280:h=720,format=yuv420p" after tone mapping
[19:45:12 CET] <JEEB> w being width, and h being height
[19:45:41 CET] <rockyh> yes, I'm now using also a scale and it works, too!
[19:46:04 CET] <rockyh> maybe I don't remember well my first attempt with clipping, but I did not notice a big difference
[19:46:24 CET] <rockyh> with respect to the original HDR video, this output is brighter and with less "warm" colors
[19:46:46 CET] <JEEB> tone mapping algorithms do their own guesstimates
[19:47:00 CET] <JEEB> also it depends on if the tone mapping algorithm does brightness detection
[19:47:12 CET] <rockyh> yes, of course there are some modifications in the color appearance (this is a downgrade, after all)
[19:47:17 CET] <kepstin> are you watching the hdr video on an hdr monitor? otherwise you're just comparing different tonemapping implementations :)
[19:47:33 CET] <rockyh> this is a very good question! I am not sure!!
[19:47:43 CET] <JEEB> the opencl one seems to have taken the newer mpv stuff, and includes brightness detection
[19:47:45 CET] <kepstin> if you're not sure, then probably not.
[19:48:04 CET] <JEEB> so it should match closer to what libplacebo/mpv currently do
[19:48:15 CET] <JEEB> (although it is by now a bit behind on latest changes to how it's done)
[19:49:18 CET] <kepstin> should make an ffmpeg filter that uses libplacebo to do the tonemapping ;)
[19:50:06 CET] <rockyh> I am quite sure it's not HDR, because it's a not-so-expensive monitor
[19:51:08 CET] <JEEB> kepstin: yea :) positive is that you could always build it against latest libplacebo, negatives is that it isn't included in the thing itself :)
[19:51:19 CET] <JEEB> tone mapping just happens to be a moving target since it's not science
[19:52:18 CET] <JEEB> I think currently the biggest problem with FFmpeg's tone mapping filters is that they're not kept in sync :D
[19:52:21 CET] <rockyh> trivial question, but: bt709 of course does not support HDR, right?
[19:52:31 CET] <JEEB> BT.709 is gamma basically
[19:52:35 CET] <JEEB> standardized at 100 nits max
[19:52:41 CET] <JEEB> HDR is specified as higher than 100 nits
[19:53:02 CET] <JEEB> (also specify that you mean the BT.709 transfer function)
[19:53:22 CET] <rockyh> this is pretty clear
[19:54:20 CET] <rockyh> thanks for all your help
[19:54:46 CET] <JEEB> glad to have been of help
[19:55:04 CET] <rockyh> :)
[20:04:38 CET] <soma_> hi, I have an mp4 cenc encryption. tldr; Is it possible to record camera to an AES-128-CENC encrypted mp4 file and open it during the recording session? https://gist.github.com/docoprusta/c294ebaaa68fdc8502b6caa641a75d7f
[20:05:16 CET] <soma_> *I have an mp4 cenc encryption related question
[20:06:32 CET] <JEEB> FFmpeg can both encrypt and decrypt cenc on some level, and if you utilize fragmented mp4 it can be opened as long as you fragment f.ex. on random access points
[20:07:06 CET] <JEEB> -movflags frag_keyframe does that, for example
[20:07:49 CET] <soma_> I tried that but the problem was the same
[20:09:17 CET] <JEEB> ok, so we lack the support for whatever you need in either writing or reading
[20:10:13 CET] <JEEB> and yes, the tenc test is supposed to be a test for decrypting that sample
[20:10:31 CET] <JEEB> so if the result of that is not correct then that should sound alarm bells
[20:10:43 CET] <JEEB> also for future reference http://fate-suite.ffmpeg.org/
[20:10:50 CET] <JEEB> that's where the samples live if you like HTTP views
[20:11:10 CET] <soma_> thanks I didn't know
[20:12:08 CET] <JEEB> ffprobe -v verbose -decryption_key 12345678901234567890123456789012 -i "http://fate-suite.ffmpeg.org/mov/mov-tenc-only-encrypted.mp4"
[20:12:11 CET] <JEEB> does work
[20:13:00 CET] <soma_> I think encrypted fmp4 can also be a good solution but it's not implemented yet.
[20:13:13 CET] <JEEB> and I can play that sample in mpv as well
[20:13:24 CET] <JEEB> mpv --demuxer-lavf-o=decryption_key=12345678901234567890123456789012 "http://fate-suite.ffmpeg.org/mov/mov-tenc-only-encrypted.mp4"
[20:13:58 CET] <JEEB> so I'm not sure what you were meaning with that tenc thing?
[20:14:04 CET] <soma_> https://pastebin.com/HMFKZ7HP for me
[20:15:05 CET] <JEEB> http://up-cat.net/p/778ead52
[20:16:38 CET] <cehoyos> soma_: Update to current FFmpeg git head
[20:17:06 CET] <soma_> thanks I'll try with that
[20:19:00 CET] <soma_> and ffmpeg can produce a file with a metadata structure similar to mov-tenc-only-encrypted.mp4?
[20:19:48 CET] <soma_> I mean encryption without saio,saiz,senc, with a constant IV in the tenc atom
[20:23:09 CET] <AiNA_TE> hello, im trying to dump all the frames of an mjpeg video to jpeg
[20:23:18 CET] <AiNA_TE> i use this "ffmpeg -i Entrance.thp %CD%\frames\frame%04d.jpg"
[20:23:32 CET] <AiNA_TE> but it's re-encoding them all
[20:23:45 CET] <AiNA_TE> is there any way to dump the raw frames without re-encoding?
[20:23:51 CET] <kepstin> AiNA_TE: ffmpeg re-encodes by default, you can use "-c copy" output option to override that.
[20:24:01 CET] <AiNA_TE> ahh thank you :)
[20:25:26 CET] <AiNA_TE> hmm all the frames are broken with -c copy
[20:26:31 CET] <JEEB> soma_: no idea :)
[20:27:15 CET] <kepstin> there's some weird things that mjpeg can do that don't work in normal jpeg. You can try adding "-bsf mjpeg2jpeg" to convert it
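(i.e. something along these lines, though note cehoyos' point just below that THP may not remux to JPEG this way at all:)
    ffmpeg -i Entrance.thp -c:v copy -bsf:v mjpeg2jpeg frames/frame%04d.jpg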
[20:28:38 CET] <JEEB> soma_: `ffmpeg -h muxer=mp4` gives you the options for the muxer
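(Among those options are the cenc ones; a hedged sketch of encrypting while muxing fragmented mp4. The key and kid below are 32-hex-digit placeholders, and this path writes the usual saio/saiz/senc boxes, so it may not match the tenc-only layout asked about above:)
    ffmpeg -i input.mp4 -c copy -movflags frag_keyframe+empty_moov -encryption_scheme cenc-aes-ctr -encryption_key 00112233445566778899aabbccddeeff -encryption_kid 112233445566778899aabbccddeeff00 encrypted.mp4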
[20:28:49 CET] <cehoyos> AiNA_TE: thp != jpg, you cannot remux
[20:29:17 CET] <cehoyos> A bitstream filter would be needed, nobody has written one (yet)
[20:29:43 CET] <familiyaF> Hi, I have an h264 30 fps stream with embedded 608 sei data. If I change the frame rate to 60fps via the framerate filter I see the caption data getting repeated. Following is my command
[20:29:43 CET] <familiyaF> ffmpeg -i <30fpsInput>.mp4 -t 30 -vf framerate=fps=60 <Output>.mp4
[20:30:19 CET] <familiyaF> Is this expected?
[20:30:32 CET] <cehoyos> familiyaF: Yes, we could not agree on how to fix this;-(
[20:32:31 CET] <cehoyos> You can try to port commit 1893c72086024e53ac32312ecb96693c5be023d0 to the framerate filter
[20:32:54 CET] <soma_> JEEB no problem. Thanks, I tried everything on the command line so I'll check the source code.
[20:34:01 CET] <kepstin> obviously if the framerate filter is blending frames, it should also blend the CC data (I'm joking, but... i mean, what do you do?)
[20:34:19 CET] <cehoyos> kepstin: The issue is known, see the mentioned commit
[20:34:56 CET] <familiyaF> cehoyos: thanks will check
[20:35:51 CET] <kepstin> iirc the fps filter was fixed so when increasing framerate, the cc data is attached to the first time a particular frame is output. I'm not sure what it does when decreasing framerate/dropping frames, i can't imagine anything reasonable.
[20:37:19 CET] <cehoyos> Probably somewhere in process_work_frame(), I don't immediately understand the logic
[20:37:43 CET] <cehoyos> Of course not: The bug is only a bug for increased framerate
[20:39:45 CET] <kepstin> hmm. with the a52 captions data format, are you allowed to concatenate multiple frames worth of 608 caption data into one frame? or is that not allowed / not supported by players?
[20:46:13 CET] <soma_> Can somebody approve my mail that I sent to the mailing list? My mail contains this problem: https://gist.github.com/docoprusta/c294ebaaa68fdc8502b6caa641a75d7f
[22:50:00 CET] <cehoyos> AiNA_TE: You can play/decode the created single-frame files with "-vcodec thp"
[00:00:00 CET] --- Wed Nov 20 2019

