[Ffmpeg-devel-irc] ffmpeg-devel.log.20190709
burek
burek021 at gmail.com
Wed Jul 10 03:05:03 EEST 2019
[00:05:35 CEST] <juandl__> Hello everyone, my name is Juan. I'm working on a change to extract QP from different encoded streams. It'd be great if anyone could give me some feedback on my design. https://docs.google.com/document/d/1WClt3EqhjwdGXhEw386O0wfn3IBQ1Ib-_5emVM1gbnA/edit?usp=sharing
[00:08:44 CEST] <juandl__> feel free to comment on the doc
[00:09:58 CEST] <Lynne> nice design document, would discuss in a meeting/10, but you can already do this with the region of interest api
[00:10:20 CEST] <tmm1> has anyone heard of h264_nvenc not respecting -b:v/-maxrate ?
[00:14:38 CEST] <jkqxz> What do you want to do with the QPs once you've extracted them?
[00:16:51 CEST] <tmm1> this seems strange.. why would nvenc_setup_encoder() be overwriting avctx->bit_rate (https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/nvenc.c#L1229-L1230)
[00:22:34 CEST] <cone-618> ffmpeg 03Andreas Rheinhardt 07master:730e5be3aa11: cbs: ff_cbs_delete_unit: Replace return value with assert
[00:22:35 CEST] <cone-618> ffmpeg 03Andreas Rheinhardt 07master:d9418aba66e7: cbs_h264, h264_metadata: Deleting SEI messages never fails
[00:22:36 CEST] <cone-618> ffmpeg 03Andreas Rheinhardt 07master:f83b46e2181c: configure, cbs_h2645: Remove unneeded golomb dependency
[00:27:16 CEST] <mkver> jkqxz: Any decision on whether to make updating the AVCodecParameters mandatory?
[00:28:45 CEST] <mkver> (Honestly, I am surprised that I can't find a concrete example for the abstract "The bitstream might not be able to express everything" situation.)
[00:29:44 CEST] <juandl__> jkqxz, it is for quality metrics mostly, it could help my team adjust rate control if they have more information about QPs per frame
[00:31:13 CEST] <jkqxz> mkver: I'm inclined to agree with jamrial that it should be mandatory, but I don't have a strong opinion.
[00:31:30 CEST] <mkver> Ok, then I will adapt the patchset.
[00:31:42 CEST] <jkqxz> (Hence my looking for examples.)
[00:31:43 CEST] <mkver> And did you overlook this: https://ffmpeg.org/pipermail/ffmpeg-devel/2019-June/245054.html?
[00:31:59 CEST] <jkqxz> Yes.
[00:32:45 CEST] <jkqxz> I still have 20+ on the big patchset; anything else I should look at?
[00:33:59 CEST] <mkver> 20+ on the (my) big patchset left? No.
[00:34:14 CEST] <Lynne> juandl__: you ought to use an analyzer, there's a few out there
[00:34:16 CEST] <mkver> It's just 12.
[00:35:39 CEST] <jkqxz> I mean 20-31 of 31.
[00:36:05 CEST] <jkqxz> Well, not yet if you send new ones.
[00:38:19 CEST] <jkqxz> juandl__: QP is not a very good measure to use to assess quality after the fact. It is sensible to use it with an encoder as an indication of the intended result, but given an encoded stream there are a lot of other things which change the result.
[00:39:05 CEST] <jkqxz> Encoding at minimum QP doesn't help you if you throw away all the AC coefficients.
[00:40:39 CEST] <mkver> There is btw also a patch from James https://ffmpeg.org/pipermail/ffmpeg-devel/2019-June/245643.html.
[00:41:51 CEST] <mkver> He even wants to stop using h2645_parse functions, because currently everything not in the base layer gets dropped by hevc_parse_nal_header.
[00:42:27 CEST] <jkqxz> He said he would revise it, so I didn't comment.
[00:43:01 CEST] <mkver> And of course there are the vp9_raw_reorder patches that I sent you.
[00:45:18 CEST] <jkqxz> Rewriting to avoid the h2645_parse functions seems reasonable to me. I guess you'd want to preserve the current performance levels, though? That doesn't sound so fun.
[00:47:17 CEST] <mkver> Couldn't this be changed by simply adding a new parameter to ff_h2645_packet_split and hevc_parse_nal_header?
[00:48:01 CEST] <mkver> (And actually I want to improve the performance: ff_h2645_extract_rbsp uses suboptimal masks.)
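[Editorial note: for context on ff_h2645_extract_rbsp, H.264/HEVC bitstreams insert an emulation-prevention byte (0x03) after every pair of zero bytes inside a NAL unit, and RBSP extraction strips it back out. Below is a minimal, unoptimized sketch of that inner loop; the real FFmpeg function processes multiple bytes at a time with masked reads, which is what the "suboptimal masks" remark refers to.]

```c
#include <stddef.h>
#include <stdint.h>

/* Copy a NAL unit payload into dst, dropping each emulation-prevention
 * byte (a 0x03 that follows two consecutive zero bytes). Returns the
 * number of bytes written; dst must hold at least len bytes. */
static size_t extract_rbsp(uint8_t *dst, const uint8_t *src, size_t len)
{
    size_t di = 0;
    int zeros = 0; /* run of zero bytes seen so far */

    for (size_t si = 0; si < len; si++) {
        if (zeros >= 2 && src[si] == 0x03) {
            zeros = 0;      /* skip the emulation-prevention byte */
            continue;
        }
        zeros = src[si] == 0 ? zeros + 1 : 0;
        dst[di++] = src[si];
    }
    return di;
}
```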
[00:49:40 CEST] <juandl__> jkqxz Lynne: thanks for the advice, we specifically need FFmpeg for one of the internal analyzing tools. I'd appreciate any more feedback on the data structure or the implementation. Thanks!
[00:51:32 CEST] <kierank> I thought we already had a qp extraction api
[00:52:21 CEST] <mkver> jkqxz: Btw: This commit https://github.com/mkver/FFmpeg/commit/a40d9bc8d5fb11f161b2f29ffd1bded51461c840 improves the speed of h264_metadata by more than 50% for me.
[04:07:00 CEST] <_orestes_> Hi All! I am trying to get hardware encoding working on a raspi 4. I can get it to use the hardware but the SPS and PPS are not sent except right at the start. I found that setting "OMX_IndexParamBrcmVideoAVCSEIEnable" is a common answer. there is nothing i can find though on WHERE to set that. has anyone done this?
[08:13:25 CEST] <Compn> always with the raspi people
[11:50:31 CEST] <durandal_1707> nothing left to do
[11:57:09 CEST] <kierank> durandal_1707: cfhd p-frames
[11:57:22 CEST] <kierank> durandal_1707: also liking my tweets
[11:58:22 CEST] <durandal_1707> kierank: no real samples, i'm stuck
[11:58:32 CEST] <kierank> what's wrong with mountain sample
[11:58:35 CEST] <kierank> why can it not be fixed
[11:59:00 CEST] <durandal_1707> i need real sample
[12:00:10 CEST] <durandal_1707> one where more frames are P frames
[12:23:00 CEST] <mkver> durandal_1707: You could also take a look at this patchset for truehd_core: https://ffmpeg.org/pipermail/ffmpeg-devel/2019-July/246230.html
[12:29:27 CEST] <durandal_1707> mkver: if it works just apply it!
[12:29:43 CEST] <mkver> I don't have commit rights.
[12:29:51 CEST] <durandal_1707> get one
[12:30:08 CEST] <mkver> Also, shouldn't patches always be reviewed even when the author has commit rights?
[12:30:13 CEST] <durandal_1707> which matroskadec patches are not applied?
[12:30:24 CEST] <durandal_1707> mkver: nope
[12:31:01 CEST] <mkver> Everything from 17 onwards in this patchset: https://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244193.html
[12:31:22 CEST] <JEEB> they should, but in some cases E_NOBODY_CARES in which case if you have FATE coverage added or existent then if you are confident it could start going in
[12:31:53 CEST] <JEEB> last time I pushed something nobody seemed to care about I gave it two weeks after my initial show of interest towards a patch (in this case I was not the original author)
[12:32:16 CEST] <mkver> There is no fate coverage and if there were, I would need to change the fate output, given that it is explicitly intended to change the output.
[12:33:41 CEST] <durandal_1707> i'm not matroska expert at all
[12:34:32 CEST] <mkver> But I compared hashes of the output of our TrueHD decoder a) after stripping Atmos away with my patchset applied, b) after stripping Atmos away with current git head, and c) without stripping Atmos away. They were all equal.
[12:35:46 CEST] <mkver> I also tested on a (non-Atmos capable) receiver. Everything worked fine.
[12:37:34 CEST] <JEEB> would be nice to have a FATE test for that, but since that's a fixup instead of a new module I am definitely not requiring one :)
[12:38:09 CEST] <JEEB> so the thing is that it copies the header, and then modifies some bits
[12:38:50 CEST] <JEEB> and the bit mask has become smaller for what it copies over
[12:39:08 CEST] <JEEB> 0x0f to 0x0c
[12:39:18 CEST] <JEEB> (might take a look at that after $dayjob)
[12:39:31 CEST] <JEEB> since it has a link to the spec
[12:39:55 CEST] <mkver> That would be great. Thank you!
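[Editorial note: the change JEEB describes, copying a header byte while narrowing which bits are carried over (mask 0x0f down to 0x0c), boils down to a masked bit-merge. The sketch below is a generic illustration; the actual byte position and field meanings in the TrueHD header are not shown here.]

```c
#include <stdint.h>

/* Merge selected bits from src into base: bits set in mask are taken
 * from src, all other bits keep base's values. */
static uint8_t merge_bits(uint8_t base, uint8_t src, uint8_t mask)
{
    return (uint8_t)((base & ~mask) | (src & mask));
}
```

Shrinking the mask from 0x0f to 0x0c means the two lowest bits now come from the fixed header rather than the input stream.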
[12:40:11 CEST] <JEEB> w/46
[12:59:18 CEST] <durandal_1707> mkver: have you checked that ordinary truehd streams still work?
[12:59:31 CEST] <durandal_1707> when run with this bsf?
[13:00:30 CEST] <mkver> They are not changed.
[13:37:15 CEST] <durandal_1707> will apply truehd_core patches!
[13:40:40 CEST] <mkver> Thanks!
[13:47:14 CEST] <Lynne> "Intra-only frames in profile 0 are automatically BT.601" <- in vp9? what were google on?
[13:52:45 CEST] <JEEB> Lynne: ahahah what
[13:52:46 CEST] <JEEB> :D
[13:55:12 CEST] <mkver> It's in vp9-bitstream-specification-v0.6-20160331-draft.pdf, pages 28-29.
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:99c191151a71: truehd_core: Disable 16-channel presentation
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:cbe23e40ae91: truehd_core: Correct output size
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:610460a397b1: truehd_core: Return error in case of error
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:2275e70569ce: truehd_core: Miscellaneous improvements
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:836065b27a0f: truehd_core: Use byte offsets instead of bit offsets
[14:00:09 CEST] <cone-688> ffmpeg 03Andreas Rheinhardt 07master:5a481b15bd86: truehd_core: Switch to in-place modifications
[14:01:02 CEST] <Lynne> that looks like 100% leftover from when vp8 got support for !bt.601 and makes intra-only frames completely useless as everything is bt709 these days
[14:03:29 CEST] <Lynne> I have 0 idea what they were meant to be used as though
[14:04:04 CEST] <Lynne> they can be put in as random I-frames but they're required not to reset decoding so I don't think they can be used as a reference
[14:11:13 CEST] <mkver> They can be used as reference; that's why they contain the refresh_frame_flags element.
[14:25:33 CEST] <cehoyos> Does really everybody believe that side data is the best solution for tiles?
[14:26:12 CEST] <durandal_1707> its format is not well designed for ffmpeg
[14:26:13 CEST] <JEEB> at this point that is the only alternative that I have heard of that can be implemented somehow. if you have another idea please feel free to explain :)
[14:27:41 CEST] <Lynne> side data is the best way to make it fit
[14:28:33 CEST] <Lynne> mkver: in that case it was probably meant to be used to fix horrid rate control undershoots
[14:28:33 CEST] <cehoyos> I believe there is a solution that makes users' lives much easier
[14:28:57 CEST] <cehoyos> I wonder if side data has any advantage:
[14:29:12 CEST] <cehoyos> Do we want to support remuxing of heif images? Would this have any usecase?
[14:29:25 CEST] <JEEB> basically how I understand the side data alternative is to keep the decoder buffer the images
[14:29:36 CEST] <cehoyos> Or the demuxer...
[14:29:37 CEST] <JEEB> until all tiles have been received
[14:29:57 CEST] <JEEB> well, whatever layer makes sense?
[14:30:16 CEST] <JEEB> but the tiles are usually separate images so I would expect them to go around as compressed data as such
[14:30:31 CEST] <JEEB> unless you want to have separate buffers in an AVPacket or something like that (for example)
[14:30:32 CEST] <cehoyos> Note we have an open ticket since forever with the same issue (searching...)
[14:30:58 CEST] <JEEB> and then of course the muxer would take in this data as well
[14:31:14 CEST] <JEEB> but I would consider remux to be a separate task from enabling decoding
[14:31:28 CEST] <JEEB> but the overarching design should be capable of handling both
[14:31:34 CEST] <cehoyos> https://trac.ffmpeg.org/ticket/2564
[14:31:35 CEST] <Lynne> I don't think there's a use case for remuxing heif, since only heif supports the tiling used
[14:31:47 CEST] <JEEB> Lynne: remuxing HEIF to HEIF in my mind
[14:31:50 CEST] <JEEB> not remuxing out of it
[14:32:06 CEST] <cehoyos> Then why shouldn't the heif demuxer return (large) rawvideo frames?
[14:32:07 CEST] <JEEB> or well, you can remux to mp4 but it will be separate images then :P
[14:32:14 CEST] <Lynne> yeah, and that's kinda pointless unless stripping thumbnails or such
[14:32:28 CEST] <JEEB> cehoyos: and you lose all visibility of the compressed data then?
[14:32:40 CEST] <cehoyos> Again: What is the usecase?
[14:32:54 CEST] <durandal_1707> what is heif usecase?
[14:33:03 CEST] <cehoyos> (Apart from offering a flag to return the compressed data)
[14:33:17 CEST] <cehoyos> I still believe we work on a Swiss knife...
[14:33:56 CEST] <JEEB> remux of HEIF and TIFF with tiles is IMHO not too shabby of an idea. but yes, we should also support outputting the full image after decoding. which is as far as I currently understand the side data pops up
[14:34:07 CEST] <JEEB> of course for remux AVPackets would have to have stuff as well
[14:34:17 CEST] <JEEB> so that the muxer could know if it's being fed a tile or not
[14:34:17 CEST] <cehoyos> The issue is not that I believe the side data would somehow be "wrong", I just believe another solution that makes using heif from libav* much easier exists
[14:34:25 CEST] <durandal_1707> cehoyos: more swiss cheese with bunch of holes
[14:34:41 CEST] <JEEB> please keep civil, the discussion is just OK so far durandal_1707 :)
[14:35:01 CEST] <cehoyos> I did not consider Paul's comment in any way uncivil
[14:35:10 CEST] <JEEB> ok
[14:35:31 CEST] <JEEB> anyways, I don't disagree that there couldn't be other ways of handling things
[14:35:36 CEST] <cehoyos> But I wonder now if I understand correctly that you are arguing we have to provide the compressed data by default because remuxing is an important usecase?
[14:35:55 CEST] <JEEB> no, I think it comes from the lavf/lavc design more than that
[14:36:01 CEST] <JEEB> lavf gives you demuxed packets
[14:36:05 CEST] <JEEB> lavc gives you decoded stuff out
[14:36:26 CEST] <cehoyos> But this concept exists to make users' lives easier, not because it is some ancient law, no?
[14:36:41 CEST] <cehoyos> And in this case, I thought we agree that there is no usecase.
[14:36:44 CEST] <durandal_1707> how are tiles stored in HEIF?
[14:36:50 CEST] <JEEB> separate packets
[14:36:54 CEST] <JEEB> not tiled HEVC
[14:37:02 CEST] <JEEB> if it was just tiled HEVC it would be simpler :P
[14:37:21 CEST] <cehoyos> (And I wonder where your argument was when this f* raw network codec was committed without resistance
[14:37:40 CEST] <cehoyos> )
[14:37:56 CEST] <durandal_1707> then demux all relevant packets into one uber defined format and let hevc handle it?
[14:38:08 CEST] <JEEB> durandal_1707: that is another option I mentioned
[14:38:15 CEST] <Lynne> that's no good
[14:38:17 CEST] <JEEB> have a packet with multiple buffers or whatever
[14:38:17 CEST] <cehoyos> As in: Inventing our own variant of hevc?
[14:38:22 CEST] <JEEB> cehoyos: no
[14:38:24 CEST] <Lynne> we shouldn't invent our own packing formats for packets
[14:38:32 CEST] <Lynne> packets should be standard
[14:38:45 CEST] <JEEB> Lynne: no, the packets should be standard, just more than one buffer in a packet.
[14:38:45 CEST] <cehoyos> My question was if this was the suggestion, and I believe the answer is "yes"
[14:39:01 CEST] <durandal_1707> then invent bsf for HEIF ?
[14:39:03 CEST] <Lynne> that's the benefit of side data, it keeps packets untouched, standard hevc packets
[14:39:08 CEST] <JEEB> > into one uber defined format
[14:39:14 CEST] <cehoyos> But how is that a benefit?
[14:39:17 CEST] <JEEB> I understood that as multiple buffers somehow put into an AVPacket
[14:39:23 CEST] <JEEB> not somehow modifying the data
[14:39:29 CEST] <Lynne> yes, that's the idea, let a bsf take packets and reconstruct a single packet based on the side data
[14:39:29 CEST] <JEEB> a BSF is another alternative, maybe
[14:39:43 CEST] <Lynne> or if not, let the decoder tile them
[14:39:54 CEST] <cehoyos> And the output of the bsf would not be a variant of heif invented by us?
[14:39:59 CEST] <JEEB> no
[14:40:04 CEST] <JEEB> it would be standard tiled HEVC
[14:40:07 CEST] <cehoyos> But what would it be?
[14:40:09 CEST] <JEEB> the problem is, if that is possible
[14:40:19 CEST] <cehoyos> Does our decoder support tiled hevc?
[14:40:22 CEST] <JEEB> which is why it isn't being raised as the most possible alternative
[14:40:22 CEST] <cehoyos> Do we have a sample?
[14:40:40 CEST] <JEEB> I think the standard set has tiled HEVC, but I must be honest and say that I have not checked :P
[14:40:41 CEST] <kurosu> iirc, heif "tracks" may contain independent images or actual tiles (as internal subdivision of the same image)
[14:40:49 CEST] <Lynne> yes, it supports it, it will even do slice threading for those
[14:41:40 CEST] <cehoyos> Atm, the student has the issue that he creates a stream with a single frame for each tile
[14:42:08 CEST] <durandal_1707> cool, export sidedata for xstack filter then?
[14:42:13 CEST] <cehoyos> But at some point, he will hopefully solve this and I sincerely hope side-data (that would make using the demuxer unlikely) can be avoided.
[14:42:38 CEST] <cehoyos> Yes, but are we sure that the tiles always have sane sizes with respect to each other?
[14:42:39 CEST] <JEEB> how would it become unlikely since most users will likely want to utilize the decoder together with it?
[14:42:55 CEST] <JEEB> since my understanding was that the decoder would utilize the side data?
[14:42:56 CEST] <durandal_1707> it is joke
[14:43:04 CEST] <cehoyos> Most users of FFmpeg (the ones that are not active here) likely have never heard of FFmpeg side-data
[14:43:20 CEST] <JEEB> if you don't need to touch it, then how does that exactly affect an API user?
[14:43:31 CEST] <cehoyos> I thought this would be the application's job, did I misunderstand?
[14:43:39 CEST] <JEEB> I might be misunderstanding as well
[14:44:08 CEST] <cehoyos> So instead of using my "hack" that only affects one library, your suggestion is to put special-case code in all the libraries?
[14:44:15 CEST] <JEEB> ?!
[14:44:23 CEST] <cehoyos> ?
[14:44:51 CEST] <JEEB> I'm kind of struggling to understand since what I remember being the discussion here is that the decoder would read the side data, buffer accordingly, and push out the full image
[14:44:58 CEST] <JEEB> I might be incorrect with this so feel free to correct
[14:45:04 CEST] <JEEB> and I am not sure how much of this is a hack?
[14:45:15 CEST] <JEEB> of course if you consider side data altogether to be a hack then sure
[14:45:49 CEST] <cehoyos> Apart from the fact that - afaiu - this is not how lavc works so far, it would definitely (?) include a special case in lavf (create specific side data for tiles) and lavc (read the side data, reserve image space, wait for more frames, output in the future)
[14:46:22 CEST] <JEEB> special cases aka ifs, most probably yes
[14:46:35 CEST] <JEEB> I am not sure how different this is wrt to how lavf/lavc works currently
[14:46:46 CEST] <cehoyos> (I feared "side data" would mean to inform the application it will have to insert a filter chain to get the intended output but apparently I was wrong as I learned now)
[14:47:00 CEST] <JEEB> well, this is just what I read here some days/weeks ago
[14:47:04 CEST] <JEEB> might have been different on the ML
[14:47:21 CEST] <JEEB> but that way you keep the separation of coded packets in lavf, and lavc would give the API user something it would expect
[14:47:23 CEST] <cehoyos> I don't think there was much on the ml
[14:47:53 CEST] <cehoyos> The user expects to send 16 frames to get one output frame?
[14:47:58 CEST] <cehoyos> possibly...
[14:48:30 CEST] <Lynne> cehoyos: I'll nak an attempt to make the demuxer do the decoding and output rawframes, that's by far the worst option
[14:48:49 CEST] <cehoyos> That's great given your commit history, lol
[14:49:12 CEST] <Lynne> lol
[14:49:51 CEST] <cehoyos> As in: We must really not try to ease users' lives?
[14:50:08 CEST] <Lynne> yes, we need the decoder to decode packets and demuxers to output packets
[14:50:24 CEST] <durandal_1707> Lynne: but that is most clean solution!
[14:50:32 CEST] <Lynne> if users need an easier life they'd use ffms2
[14:50:33 CEST] <cehoyos> Yes, please play the sample of above ticket to see how useful this is
[14:51:24 CEST] <JEEB> the only non-easy part is having to feed the decoder more. and in that case our normal enc/dec APIs can return "please feed more"
[14:51:35 CEST] <JEEB> I think by now the feed/receive APIs are a few years old
[14:51:54 CEST] <JEEB> and the older API had the frame received flag as well I think?
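[Editorial note: the feed/receive API JEEB refers to is FFmpeg's avcodec_send_packet()/avcodec_receive_frame() pair, where the receive call returns AVERROR(EAGAIN) until enough input has been fed. The self-contained toy model below only illustrates that control flow for a decoder that must buffer all tiles before emitting a frame; the struct and return code are invented for illustration and are not lavc API.]

```c
#define TOY_EAGAIN (-11) /* "feed me more input"; stands in for AVERROR(EAGAIN) */

/* Toy decoder that needs every tile of an image before it can output. */
typedef struct ToyTileDecoder {
    int tiles_expected;
    int tiles_received;
} ToyTileDecoder;

static int toy_send_tile(ToyTileDecoder *d)
{
    d->tiles_received++;
    return 0;
}

/* Returns 0 (and resets state) once a full image's worth of tiles is
 * buffered, TOY_EAGAIN otherwise. */
static int toy_receive_frame(ToyTileDecoder *d)
{
    if (d->tiles_received < d->tiles_expected)
        return TOY_EAGAIN;
    d->tiles_received = 0;
    return 0;
}
```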
[14:52:06 CEST] <cehoyos> So what you are suggesting is to put the "hack" that I suggested to have in libavformat in libavcodec and suddenly it is not a "hack"?
[14:52:29 CEST] <JEEB> I am not seeing the clear improvement in doing decoding in lavf
[14:52:31 CEST] <Lynne> yes, because there's nothing hacky about letting lavc do what its meant to do - decode
[14:52:56 CEST] <cehoyos> This isn't about decoding, please try for a moment to understand what you are suggesting
[14:53:05 CEST] <durandal_1707> one packet would got side-data with more packets?
[14:53:12 CEST] <JEEB> durandal_1707: no
[14:53:31 CEST] <cehoyos> The exact code that you don't want to see in lavf would have to be put in lavc (or the hevc decoder) - in addition to the special code in lavc only used for three tiled formats
[14:54:17 CEST] <Lynne> yes, that's okay, tiling in the decoder is simpler and more efficient, since you can just alloc a single frame and decode directly to it
[14:54:37 CEST] <Lynne> while using the decoder api in the demuxer is completely nuts
[14:55:14 CEST] <cehoyos> So back to one of my questions above: What do we know about the tiles in general?
[14:55:41 CEST] <cehoyos> (Isn't it more nuts to keep invalid code?)
[14:55:50 CEST] <JEEB> invalid code?
[14:56:00 CEST] <cehoyos> unrelated to heif...
[14:56:08 CEST] <JEEB> ok
[14:56:27 CEST] <cehoyos> I still believe personal attacks are acceptable to some degree
[14:56:53 CEST] <JEEB> also yes, our knowledge of the tiles is important enough. we have to be able to signal the tiles and if there are tiles missing we should be able to start decoding the following one
[14:57:05 CEST] <durandal_1707> you want to personally attack someone here?
[14:57:12 CEST] <JEEB> (although that could be also done through checking if we had already decoded a tile for the currently decoded frame)
[14:57:21 CEST] <JEEB> anyways, sorry - need to go back to healing myself
[14:57:21 CEST] <cehoyos> No, I only meant I can stand some of them, like the one above
[14:57:29 CEST] <cehoyos> What happened?
[14:57:59 CEST] <durandal_1707> he need to heal after being attacked :)
[14:58:10 CEST] <cehoyos> ;-)
[14:58:26 CEST] <JEEB> w/29
[14:59:13 CEST] <cehoyos> https://de.wikipedia.org/wiki/Mercedes-Benz_W_29
[15:01:03 CEST] <Lynne> cehoyos: if anything I was being attacked for my #commits
[15:01:59 CEST] <cehoyos> Technical arguments instead of threats may help=-)
[15:02:21 CEST] <Lynne> nuts is a valid technical argument and a reaction for using lavc api inside lavf
[15:02:37 CEST] <Lynne> unacceptable is also acceptable
[15:04:08 CEST] <cehoyos> Yes, I am sure there is a dimension where "nuts" is a technical argument!
[15:04:28 CEST] <durandal_1707> just decode single tile, and declare heif broken nuts format!
[15:05:18 CEST] <cehoyos> https://en.wikipedia.org/wiki/Socket_wrench
[15:05:24 CEST] <cehoyos> Found it!
[15:13:27 CEST] <durandal_1707> cehoyos: what you gonna talk about at next nttw conference?
[15:13:57 CEST] <kierank> durandal_1707: i'll go if you will be there
[15:16:43 CEST] <kierank> jamrial: heh you missed lots of stuff
[15:16:59 CEST] <kierank> probably a good thing
[15:17:17 CEST] <jamrial> oh boy :p
[15:18:20 CEST] <jamrial> send me a log? or i can wait until the evening for it to show up on pipermail
[15:21:12 CEST] <cehoyos> durandal_1707: Didn't you see the sheet? I will repeat the last talk plus updates
[15:21:20 CEST] <cehoyos> (If they want it)
[15:22:51 CEST] <durandal_1707> cehoyos: i looked at sheet, but what updates you will tell them? nothing new here happened
[15:22:58 CEST] <kurosu> I also skipped a lot of the heif discussion, but I surmise it will impact avif handling ?
[15:23:59 CEST] <cehoyos> I misunderstood the "side data" suggestion, assuming that special code in the decoder to delay output and create special frames would not be more acceptable than lavf outputting rawvideo
[15:24:14 CEST] <cehoyos> (I thought the side data would go to the calling application, not lavc)
[15:24:42 CEST] <cehoyos> vc1 is now bit-exact (for the known samples), I consider this a major improvement
[15:26:04 CEST] <cehoyos> The second suggestion are comments regarding a talk two years ago "don't use lossless compression for archiving, there is good lossy compression" and last years answer.
[15:26:47 CEST] <durandal_1707> huh, drop that, they need bitexact video for every frame
[15:27:00 CEST] <cehoyos> I should drop my comments?
[15:27:28 CEST] <durandal_1707> comment about telling them to use lossy compression instead
[15:27:49 CEST] <cehoyos> I should comment and tell them to use lossy compression?
[15:28:12 CEST] <durandal_1707> dunno, there was some talks about that but forgot details
[15:30:28 CEST] <cehoyos> I'll try again: There was a talk two years ago in Vienna that consisted of suggestions not to use lossless compression, and last year in London there was an answer. I wondered if more comments may make sense, especially telling the community if all their digital video is one type of compressed format, re-compressing it in lossless format will not help, while using ffv1 (or j2k) for their scans is useful
[15:32:55 CEST] <durandal_1707> aha, i will try not to forget
[15:38:09 CEST] <nevcairiel> Didn't we already agree on a perfectly clear, scalable and portable HEIF solution a few days ago, by letting each component do exactly what its designed to do, leverage every component cleanly and without hacks?
[15:41:46 CEST] <nevcairiel> no bsf, no hacks in the demuxer or decoder, just flat out demuxing, decoding, and tiles being assembled in avfilter, since thats the component we have to deal with raw images
[15:42:06 CEST] <durandal_1707> ugh, nope
[15:43:15 CEST] <nevcairiel> no other solution will ever let you do it properly, so, yes.
[15:45:00 CEST] <nevcairiel> Especially if you want to involve hwaccel, any such "hacks" will either make it impossible or make them extremely more hacky
[15:47:39 CEST] <Lynne> nevcairiel: well, that's kinda still your idea
[15:48:22 CEST] <Lynne> it'll be difficult to deal with hwframes in lavfi anyway to stitch them together
[15:49:06 CEST] <Lynne> since you'll need to use a different filter with a different name and it'll involve a copy, and then there's still the issue with the frame size > capabilities
[15:50:00 CEST] <Lynne> you still need packet side data either way, which will have to be converted to frame side data for your approach, so its a good starting point
[15:59:16 CEST] <nevcairiel> the side data conversion is not something you need, thats something the framework already does
[15:59:42 CEST] <nevcairiel> and a hwaccel stitching filter is at least a possibility in that design, unlike any other
[16:00:10 CEST] <nevcairiel> and a bsf will cause images to become too large for hwaccel quickly
[16:00:20 CEST] <durandal_1707> who shares irc logs in real-time with other people?
[16:00:57 CEST] <nevcairiel> like, a 4K capable hw decoder is easily defeated by a modern camera
[16:06:33 CEST] <durandal_1707> if you do not start answering, i will start banning random no-op on this channel!
[16:28:13 CEST] <Lynne> nevcairiel: if users require hw decoding throughout they could strip the side data and do the tiling themselves (or set up an overlay filter)
[16:29:11 CEST] <philipl> the side-data stitching filter approach would require a separate filter for each hwaccel interop system, but we have families of filters for things like scaling already.
[16:29:18 CEST] <philipl> That part isn't particularly unusual.
[16:45:42 CEST] <cehoyos> Good to know that I did understand it correctly!
[16:45:58 CEST] <cehoyos> Well, other suggestions exist.
[16:53:15 CEST] <nevcairiel> Lynne: its 2019, hw decoding needs to be a primary concern in any such design, not some afterthought that people are expected to hack together manually
[16:53:44 CEST] <cehoyos> But isn't hw decoding completely irrelevant for heif?
[16:54:07 CEST] <nevcairiel> why? because its only one image? I'm sure people would be happy if that loads faster too
[16:54:50 CEST] <cehoyos> If it came for free - ok, but if it comes with a huge amount of necessary special code, I am 100% sure nobody cares about the speed.
[16:55:02 CEST] <cehoyos> (My suggestion is to offer both possibilities anyway)
[16:55:03 CEST] <nevcairiel> thats why we need to make it free
[16:55:24 CEST] <nevcairiel> and not design some method that requires a lot of manual labor to make it hw capable
[16:55:33 CEST] <cehoyos> But it isn't in your suggestion, and the others (except you) didn't even understand that this is what you suggested
[16:55:51 CEST] <cehoyos> Why should it be hw-capable?
[16:56:05 CEST] <nevcairiel> Why shouldn't it?
[16:56:19 CEST] <nevcairiel> because its "easier"?
[16:56:21 CEST] <nevcairiel> thats just lazy : )
[16:56:26 CEST] <cehoyos> Because for all users except you this means additional work that they don't even know about
[16:56:34 CEST] <cehoyos> lazy by whom?
[16:56:54 CEST] <nevcairiel> whoever is suggesting a method to handle heif without accounting for hw decoding
[16:56:55 CEST] <cehoyos> It is lazy to put the burden downstream instead of finding a solution that produces one frame for a fringe format
[16:57:30 CEST] <nevcairiel> Is every photo taken with a modern iPhone that fringe?
[16:57:51 CEST] <cehoyos> It doesn't need hardware acceleration, it needs an out-of-the-box solution
[16:59:40 CEST] <nevcairiel> if you want something that just spits out image data, then libav* libraries are too low-level anyway and you probably want gstreamer or something like that, so we might as well design it properly without breaking library borders left and right
[17:00:01 CEST] <cehoyos> So the users of libav* should install gstreamer for heif?
[17:00:15 CEST] <cehoyos> and swf
[17:00:18 CEST] <kierank> yes or libav* should have something like that
[17:00:31 CEST] <kierank> media has evolved beyond an api from 2002
[17:00:44 CEST] <nevcairiel> if they can't deal with a new API to handle tiled images, then yes, they should
[17:01:04 CEST] <nevcairiel> and i'm not proposing to make it incredibly hard, piping your image through lavfi for example isn't exactly rocket science
[17:01:07 CEST] <cehoyos> But why on earth should they? It's not that a 90 minute video has to be decoded
[17:01:32 CEST] <nevcairiel> Independent of hw decoding, those proposed solutions are giant hacks and not acceptable
[17:02:45 CEST] <nevcairiel> stuff i read earlier had me almost throw up, having the demuxer hand out raw decoded image data? seriously?
[17:02:57 CEST] <cehoyos> At least I am not alone
[17:03:23 CEST] <nevcairiel> the bsf approach might be ok-ish if it works
[17:03:35 CEST] <nevcairiel> but i'm skeptical of that, personally
[17:03:36 CEST] <cehoyos> Too good to be true
[17:06:07 CEST] <nevcairiel> so in my mind that leaves the simple solution of just decoding the tiles one by one and stitching them together after decoding, and the perfect place for such a stitching filter is avfilter. ffmpeg.c could auto-insert the filter even. for API users that don't use avfilter yet, they would have to work on doing that.
[17:06:52 CEST] <nevcairiel> as a bonus, this method could support full hwaccel - if someone were to write a stitching filter, or with download.
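[Editorial note: a software stitching filter of the kind described here is, at its core, a 2D copy of each decoded tile into the right offset of a full-size canvas. The sketch below is grayscale-only and hypothetical; real code would handle per-plane strides, chroma subsampling, and the right-/bottom-edge crop HEIF grids require.]

```c
#include <stddef.h>
#include <stdint.h>

/* Copy one decoded tile into a full-image canvas at pixel (x, y).
 * All buffers are single-plane, 8-bit, with the given row strides. */
static void stitch_tile(uint8_t *canvas, size_t canvas_stride,
                        const uint8_t *tile, size_t tile_stride,
                        size_t tile_w, size_t tile_h,
                        size_t x, size_t y)
{
    for (size_t row = 0; row < tile_h; row++) {
        uint8_t *dst = canvas + (y + row) * canvas_stride + x;
        const uint8_t *src = tile + row * tile_stride;
        for (size_t col = 0; col < tile_w; col++)
            dst[col] = src[col];
    }
}
```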
[17:11:03 CEST] <durandal_1707> stitching what?
[17:23:37 CEST] <durandal_1707> nothing left to do :(
[17:30:42 CEST] <philipl> The whole reason why the tiling exists is *because* of hw decoders and their comparatively low size limits. It would be ironic if we came up with an approach to tiling that prevented hwaccel from working.
[17:35:42 CEST] <Lynne> have the texture limits (for vulkan etc.) jumped up or are they still 8kx8k?
[17:36:17 CEST] <nevcairiel> well desktop decoders have relatively higher limits than mobile decoders, so its probably not that extremely bad, many desktop decoders can do 8k hevc decoding
[17:36:42 CEST] <nevcairiel> but you can still come up with an image that would exceed that of course
[17:37:18 CEST] <philipl> My nvidia desktop card has 8k hevc limits and 32k vulkan image dimension limits.
[17:37:43 CEST] <philipl> But there's plenty of >8k cameras out there.
[17:39:05 CEST] <philipl> Lynne: (but there's a fair point there that in some situations, stitching might have to be deferred all the way to geometry in the renderer if the texture limit is exceeded)
[17:40:20 CEST] <nevcairiel> you might also pre-scale the tiles if you know how its likely going to be shown
[17:40:37 CEST] <nevcairiel> although without overlap that might not be ideal
[18:03:29 CEST] <durandal_1707> have ideas for a/v filter?
[18:07:04 CEST] <Lynne> rnnoise
[18:09:15 CEST] <durandal_1707> thats one for you
[18:11:22 CEST] <durandal_1707> Lynne: what's delaying SIMD for tx?
[18:15:50 CEST] <Lynne> the 8 point FFT simd crushed me
[18:16:40 CEST] <Lynne> how can something be this simple and yet I can't beat the 20 instruction current version
[18:17:20 CEST] <Lynne> the faster runtime, 1 less tmp reg and less rodata only kept my motivation that far
[18:17:54 CEST] <Lynne> I've spent more time on a shenzhen io problem, I'll give it another go
[18:19:15 CEST] <JEEB> time to make shenzhen IO mods out of multimedia optimization problems?
[18:27:25 CEST] <Lynne> durandal_1707: if you're bored you could probably template libavutil/tx for int32/doubles
[18:28:31 CEST] <Lynne> the mdct in the mp3 decoder could be replaced then since the C code is faster than a naive 320-point fft, even with simd
[19:53:33 CEST] <DarkiJah> hello
[19:58:32 CEST] <JEEB> ohai
[00:00:00 CEST] --- Wed Jul 10 2019