[Ffmpeg-devel-irc] ffmpeg-devel.log.20171105
burek
burek021 at gmail.com
Mon Nov 6 03:05:03 EET 2017
[04:00:22 CET] <cone-537> ffmpeg 03Michael Niedermayer 07master:66f0c958bfd5: avcodec/exr: fix undefined shift in pxr24_uncompress()
[04:00:22 CET] <cone-537> ffmpeg 03Michael Niedermayer 07master:4b51437dccd6: avcodec/xan: Check for bitstream end in xan_huffman_decode()
[04:00:22 CET] <cone-537> ffmpeg 03Kaustubh Raste 07master:1e7e9fbb03c3: avcodec/mips: Improve hevc uni 4 tap hz and vt mc msa functions
[04:00:22 CET] <cone-537> ffmpeg 03Kaustubh Raste 07master:b9cd26f556f4: avcodec/mips: Improve hevc uni weighted 4 tap hz mc msa functions
[04:00:22 CET] <cone-537> ffmpeg 03Peter Große 07master:3ddb887c8848: ffmpeg.c: fix calculation of input file duration in seek_to_start()
[04:00:22 CET] <cone-537> ffmpeg 03Peter Große 07master:0ae1f6ddeb35: ffmpeg.c: fix code style in seek_to_start
[10:42:17 CET] <cone-389> ffmpeg 03Paul B Mahol 07master:3f4fccf4d6d2: avformat/mvdec: check for EOF
[10:52:00 CET] <cone-389> ffmpeg 03Piotr Bandurski 07master:6ea77115324c: avcodec/qdrw: support 16bpp files with bppcnt == 2 && bpp == 8
[16:04:12 CET] <durandal_1707> matroska demuxer in ffmpeg is really slow
[16:06:20 CET] <jamrial> durandal_1707: how so?
[16:07:34 CET] <durandal_1707> comparing to mpv internal one
[19:26:14 CET] <cone-597> ffmpeg 03Carl Eugen Hoyos 07master:2cc51d5025c9: lavc/v4l2_context: Change the type of the ioctl cmd to uint32_t.
[20:52:29 CET] <cone-597> ffmpeg 03Carl Eugen Hoyos 07master:e06bdc3c37f4: lavf/amr: Add amrnb and amrwb demuxers.
[20:55:35 CET] <atomnuker> nevcairiel/michaelni: what's actually in that lavc private field?
[20:55:50 CET] <atomnuker> why does cuda require it?
[21:01:15 CET] <nevcairiel> cuda hwaccel is super slow if you map the video frame right after decoding because it's fully async. so instead of doing that, there will be a new post-process callback that gets called when the frame gets returned to the user, to delay mapping the frame to as late as possible, and that struct contains the required information to do that
[21:01:23 CET] <nevcairiel> videotoolbox does something similar for async
[21:03:53 CET] <atomnuker> but the frame already has a reference to the avhw stuff so users have all they need to map it
[21:04:21 CET] <nevcairiel> no
[21:05:42 CET] <nevcairiel> the native in-decoder cuvid format cant really be shared properly with anything else
[21:05:56 CET] <nevcairiel> so before it leaves lavc, it gets transformed into a generic cuda frame
[21:07:29 CET] <nevcairiel> to do this the frame needs to get locked, which serializes the decoder, so best to do it as late as possible. right now it only leverages delays from frame re-ordering and whatnot, but in the future one could even imagine a queue to improve performance, right on avcodec level
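The deferred post-process idea nevcairiel describes can be sketched in plain C. This is a toy model, not the actual lavc implementation: the struct layout, `post_process` field, and function names here are all invented for illustration.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of the deferred post-process idea discussed above.
 * All names are invented for this sketch; the real lavc structs differ. */

typedef struct Frame {
    int   mapped;                         /* has the expensive mapping run yet? */
    int (*post_process)(struct Frame *f); /* run right before returning to user */
} Frame;

/* Expensive step: in the real case this locks/maps hardware decoder state. */
static int map_now(Frame *f) {
    f->mapped = 1;
    return 0;
}

/* Decoder output path: the frame is produced, but mapping is deferred. */
static Frame *decoder_output_frame(void) {
    Frame *f = calloc(1, sizeof(*f));
    f->post_process = map_now;  /* attach the callback instead of mapping now */
    return f;
}

/* Generic receive path: the last place lavc touches the frame. */
static Frame *receive_frame(void) {
    Frame *f = decoder_output_frame();
    /* ... the frame may sit in reorder queues here, decoder stays async ... */
    if (f->post_process)
        f->post_process(f);     /* mapping happens as late as possible */
    return f;
}
```

The point of the indirection is that everything between `decoder_output_frame` and `receive_frame` (reordering, a future queue) runs while the hardware is still decoding asynchronously.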
[21:10:58 CET] <BtbN> It could in theory be shared. But it would need a new pixfmt
[21:11:18 CET] <nevcairiel> its also quite uncomfortable, and nothing can really use it
[21:11:25 CET] <atomnuker> so what actually happens, you give the decoder a packet and it outputs a cuda frame which is non-mapped
[21:11:34 CET] <michaelni> would it be even better if it's done even later? some players do have a fifo between decoder and display (at least for non-hw stuff)
[21:11:44 CET] <atomnuker> why do you need anything extra?
[21:12:00 CET] <BtbN> michaelni, from my experience 2~3 frames is enough of "download-delay"
[21:12:01 CET] <atomnuker> when the user needs to present the cuda frame they map it and present it
[21:12:24 CET] <BtbN> Because there is no API to map frames in libav*
[21:12:34 CET] <atomnuker> yes there is, in lavu
[21:12:50 CET] <BtbN> And the CUDA pixfmt is defined as just plain CUdevptrs in the normal pointer fields
[21:13:00 CET] <BtbN> cannot do that with the cuvid frame handle that has to be mapped
[21:13:27 CET] <atomnuker> why not?
[21:13:33 CET] <nevcairiel> the cuvid frame handle is rather opaque, its basically just an index into the decoders internal state
[21:13:40 CET] <BtbN> Because it's not a devptr
[21:13:49 CET] <BtbN> like I said, you'd need a new hw pix fmt, that contains that cuvid handle
[21:14:13 CET] <nevcairiel> which would be rather pointless because nothing but the cuvid stuff can even make use of that particular handle
[21:14:19 CET] <BtbN> And it would also be very messy to do so, as that "handle" is just a number from 0 to n-1, with n being the amount of cuvid buffer frames, which is usually quite low
[21:14:22 CET] <nevcairiel> the cuda frame we have right now is far more versatile
[21:14:25 CET] <atomnuker> so an AV_PIX_FMT_CUDA2 because we already have one?
[21:14:41 CET] <BtbN> Well, I'd call it FMT_CUVID, but I think it's a horrible idea
[21:15:07 CET] <atomnuker> here's a suggestion: we tell nvidia to go fuck themselves with the latest TI card they sell
[21:15:29 CET] <BtbN> This is not any different from other hwaccels?
[21:15:41 CET] <atomnuker> yes it is, nothing else requires this
[21:15:48 CET] <BtbN> frame download from the GPU is slow
[21:16:07 CET] <atomnuker> yep, and that's okay
[21:16:14 CET] <nevcairiel> most hardware benefits from some delay between decode and accessing the frame
[21:16:23 CET] <atomnuker> if api users want they can use an interop
[21:16:26 CET] <nevcairiel> we just choose to transform the cuvid frames into a format thats actually useful
[21:16:27 CET] <atomnuker> they don't have to map it
[21:16:57 CET] <nevcairiel> there is no interop that accepts cuvid internal handles
[21:17:22 CET] <BtbN> you map the cuvid frame handle, and get a CUdevptr
[21:17:23 CET] <atomnuker> so cuvid != cuda?
[21:17:44 CET] <atomnuker> and you can't display cuvid stuff but you can display cuda stuff?
[21:17:46 CET] <BtbN> Which we just copy into normal device memory, to get rid of the cuvid part of it
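The map/copy/unmap sequence BtbN describes can be modeled in stdlib-only C. This is a hedged sketch: the surface pool, `map_surface`, and `download_frame` are invented stand-ins; the real code uses the NVDEC/cuvid API (frame-handle mapping plus a 2D device copy), and the handle really is just a small integer index into the decoder's surface pool.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the cuvid path: the decoder hands out an integer surface
 * index, not a device pointer.  To get a plain "CUDA" frame you map the
 * index, copy the pixels out, and unmap so the slot returns to the
 * decoder.  All names and types here are invented for the sketch. */

#define NUM_SURFACES 4   /* cuvid surface pools are usually quite small */
#define SURF_BYTES   16

static unsigned char pool[NUM_SURFACES][SURF_BYTES]; /* decoder-owned memory */
static int           locked[NUM_SURFACES];

/* "Mapping" turns the index into an actual pointer and locks the slot
 * (in the real API this is the step that serializes the decoder). */
static unsigned char *map_surface(int handle) {
    locked[handle] = 1;
    return pool[handle];
}

static void unmap_surface(int handle) {
    locked[handle] = 0;  /* slot is usable by the decoder again */
}

/* What lavc does before the frame leaves the decoder: copy into memory
 * that is not tied to the decoder's tiny surface pool. */
static void download_frame(int handle, unsigned char *dst) {
    unsigned char *src = map_surface(handle);
    memcpy(dst, src, SURF_BYTES);  /* a 2D device copy in the real code */
    unmap_surface(handle);
}
```

Doing this copy immediately after decode is what forces the synchronization everyone below is trying to avoid; doing it as late as possible keeps the slot-limited decoder running ahead.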
[21:18:10 CET] <atomnuker> couldn't the decoder do that by itself?
[21:18:15 CET] <BtbN> no
[21:18:20 CET] <atomnuker> why not?
[21:18:26 CET] <BtbN> Because it's slow
[21:18:27 CET] <nevcairiel> it could, but that would destroy performance
[21:18:45 CET] <BtbN> The current cuvid decoder _does_ do it on its own
[21:18:57 CET] <BtbN> Because it's a fully self contained decoder and can just delay output
[21:19:02 CET] <nevcairiel> yeah but that design has other disadvantages
[21:19:05 CET] <atomnuker> why would it destroy performance, lavc shouldn't return incomplete frames?
[21:19:26 CET] <BtbN> Because forcing cuvid to synchronize the decoder state is dirt slow
[21:19:36 CET] <BtbN> You want it to stay async
[21:19:47 CET] <BtbN> Like, not even 60 fps capable slow
[21:19:52 CET] <atomnuker> lavc's new api can be fully async
[21:20:04 CET] <nevcairiel> it has nothing to do with api
[21:20:07 CET] <BtbN> Though our h264 decoder isn't, and neither are the other classic hwaccel decoders
[21:20:20 CET] <BtbN> And that's where this is going with cuvid
[21:20:35 CET] <BtbN> Making it a classic hwaccel, using the ffmpeg native parsers
[21:20:42 CET] <atomnuker> okay, so what happens after lavc returns this incomplete avframe?
[21:20:47 CET] <nevcairiel> it doesnt
[21:21:05 CET] <nevcairiel> the entire point is that lavc makes the frame complete before giving it back to the user
[21:21:07 CET] <atomnuker> what does the user do with it if they want to present it?
[21:21:17 CET] <BtbN> The user does not notice this at all
[21:21:20 CET] <nevcairiel> hence the postprocess callback and the private opaque data
[21:21:38 CET] <atomnuker> but users _do_ have to call the postprocess callback, right?
[21:21:42 CET] <BtbN> I do wonder how much delay this approach introduces though
[21:21:42 CET] <nevcairiel> no
[21:21:45 CET] <BtbN> it can't be that much
[21:22:25 CET] <nevcairiel> no, but it lays the groundwork for just having a fifo in lavc, transparent to the user
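The "transparent fifo" idea, combined with BtbN's earlier "2~3 frames is enough of download-delay", can be sketched as a tiny fixed-depth queue. This is purely illustrative; the sizes and names are invented, and no such queue existed in lavc at the time.

```c
#include <assert.h>
#include <string.h>

/* Model of the transparent-fifo idea: keep the last N frames queued
 * inside lavc and only run the expensive download/map step when a frame
 * actually leaves the queue.  With N frames in flight, the hardware
 * keeps decoding asynchronously behind the queue.  Names invented. */

#define DELAY 3  /* ~2-3 frames of "download-delay" */

typedef struct Q {
    int buf[DELAY];
    int len;
} Q;

/* Returns the frame id to hand to the user, or -1 while still filling. */
static int queue_push(Q *q, int frame_id) {
    if (q->len < DELAY) {
        q->buf[q->len++] = frame_id;
        return -1;                    /* still buffering, stay async */
    }
    int out = q->buf[0];              /* oldest frame leaves the queue */
    memmove(q->buf, q->buf + 1, (DELAY - 1) * sizeof(int));
    q->buf[DELAY - 1] = frame_id;
    /* download/map "out" here, several frames after it was decoded */
    return out;
}
```

The API user never sees the queue; they just observe a few extra frames of decoder delay, which is why it can live entirely inside avcodec.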
[21:23:23 CET] <atomnuker> nevcairiel: then how does the user wait until the frame is ready?
[21:23:40 CET] <nevcairiel> he doesnt?
[21:23:52 CET] <nevcairiel> lavc doesnt give the user incomplete frames, period
[21:24:33 CET] <atomnuker> okay, so then does the decoder wait until the frame is ready?
[21:24:58 CET] <nevcairiel> the decoder does not, the new hwaccel postprocess callback does
[21:25:12 CET] <nevcairiel> which avcodec calls right before returning the frame to the user
[21:25:39 CET] <atomnuker> and does avcodec wait for the callback to return?
[21:25:57 CET] <BtbN> What else would it do? And how? oO
[21:26:06 CET] <nevcairiel> its hardware-async, not software-async, there are no extra threads or anything in avcodec
[21:26:27 CET] <atomnuker> I don't get how it can output a complete frame without waiting?
[21:26:46 CET] <nevcairiel> who ever said it wouldnt wait
[21:27:10 CET] <atomnuker> okay, so it waits, that's good
[21:27:31 CET] <atomnuker> why does this wait step need internal data that must be kept in the avframe?
[21:27:57 CET] <nevcairiel> because it needs a reference to the cuvid state
[21:28:10 CET] <BtbN> And it needs the entire cuvid decoder
[21:28:30 CET] <BtbN> The handle is just an integer index local to that decoder instance
[21:30:03 CET] <atomnuker> so as far as I understand the wait step happens in avcodec, right?
[21:30:06 CET] <atomnuker> *utils.c
[21:30:11 CET] <nevcairiel> yes
[21:30:30 CET] <atomnuker> couldn't it be merged in the decode function?
[21:30:40 CET] <atomnuker> oh wait, its async
[21:30:50 CET] <atomnuker> so you'd sync the state if the user inputs another frame
[21:30:54 CET] <nevcairiel> it could, but then you would have a bunch of extra stuff in every hwaccel capable decoder
[21:31:01 CET] <nevcairiel> so why not centralize it
[21:31:47 CET] <atomnuker> no it won't, only cuda needs this, doesn't it?
[21:31:58 CET] <JEEB> and the apple thing
[21:32:00 CET] <JEEB> videotoolbox
[21:32:14 CET] <atomnuker> right, 2, its not that much code, is it?
[21:32:16 CET] <nevcairiel> cuda is a hwaccel in the future, which means h264,hevc,vc1,mpeg2/4,vp8/9, ....
[21:32:26 CET] <nevcairiel> its a growing list of decoders
[21:32:30 CET] <atomnuker> oh, okay
[21:33:28 CET] <atomnuker> can't we make a decode2 which would provide the state as a separate argument?
[21:34:53 CET] <nevcairiel> what exactly would this solve though? You still need to track the state together with the AVFrame in some form and fashion, even if only internal in the decoder - so those would then need big changes to somehow track this together with the frame
[21:35:05 CET] <atomnuker> or overallocate the avframe and put the data there so it isn't visible at all to the user?
[21:35:45 CET] <JEEB> C is kind of awful in this sense
[21:35:47 CET] <atomnuker> its only needed for lavc->lavc so it doesn't matter if the user overwrites it after it leaves lavc
[21:36:02 CET] <atomnuker> I mean its dirty but so is cuda
[21:36:05 CET] <JEEB> you could have some internal header definition
[21:36:14 CET] <JEEB> s/header/struct/
[21:37:10 CET] <BtbN> I tried that. Did not find anything that seemed sane
[21:37:23 CET] <JEEB> yea
[21:37:54 CET] <BtbN> If AVFrame was defined in AVCodec, sure, something like that would be possible
[21:37:58 CET] <JEEB> right
[21:38:07 CET] <JEEB> the internal wrapping part is awful but on the other hand it doesn't show the structure to the end user
[21:38:46 CET] <BtbN> it's entirely unnoticeable by the API user
[21:38:53 CET] <JEEB> yea
[21:38:55 CET] <BtbN> unless someone messes up and forgets to unwrap it somewhere
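The wrap/unwrap scheme under discussion can be sketched in plain C. This is a hedged model: `LavcWrap`, the use of the `opaque` field, and the function names are invented for illustration; the actual patch's mechanism differs in detail, but the shape of the hazard is the same.

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the wrap/unwrap idea: lavc needs to hang private data off a
 * frame whose struct lives in lavu and has no lavc-only field.  So it
 * temporarily replaces a user-visible pointer with a wrapper, and
 * restores ("unwraps") it before the frame reaches the API user.
 * Struct and function names are invented for this sketch. */

typedef struct Frame {
    void *opaque;  /* user-owned field; must look untouched outside lavc */
} Frame;

typedef struct LavcWrap {
    void *user_opaque;    /* the user's original value, preserved */
    int   internal_data;  /* whatever lavc needs to carry, e.g. hw state */
} LavcWrap;

/* Inside lavc: hide internal data in the frame. */
static void wrap(Frame *f, int internal_data) {
    LavcWrap *w = malloc(sizeof(*w));
    w->user_opaque   = f->opaque;
    w->internal_data = internal_data;
    f->opaque = w;
}

/* Right before returning the frame: restore the user's field.
 * Forgetting this step is exactly the "leak the internal structure
 * to the application" bug raised in the discussion below. */
static int unwrap(Frame *f) {
    LavcWrap *w = f->opaque;
    int data = w->internal_data;
    f->opaque = w->user_opaque;
    free(w);
    return data;
}
```

Every code path that hands a frame back to the user has to call `unwrap`, which is the brittleness iive objects to later in the log.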
[21:39:08 CET] <JEEB> (there was a good tl;dr on the unwrapping on #libav-devel just today http://up-cat.net/p/e5b6eab4 )
[21:39:24 CET] <JEEB> because I was completely out of touch with it as I just wasn't poking around that part of stuff
[21:39:56 CET] <JEEB> and yes, I'm the bad person quoting logs again, but to be honest I hadn't seen as good of a tl;dr on the thing until now :P
[21:46:38 CET] <atomnuker> get_buffer2 has nothing to do with it
[22:15:13 CET] <cone-597> ffmpeg 03Michael Niedermayer 07master:e131b8cedb00: avcodec/h264idct_template: Fix integer overflows in ff_h264_idct8_add()
[22:15:14 CET] <cone-597> ffmpeg 03Michael Niedermayer 07master:e34fe61bf453: avutil/softfloat: Add FLOAT_MIN
[22:15:15 CET] <cone-597> ffmpeg 03Michael Niedermayer 07master:7d1dec466895: avcodec/aacsbr_fixed: Fix division by zero in sbr_gain_calc()
[22:15:16 CET] <cone-597> ffmpeg 03Michael Niedermayer 07master:981e99ab9998: avcodec/sbrdsp_fixed: Fix integer overflow in shift in sbr_hf_g_filt_c()
[22:15:17 CET] <cone-597> ffmpeg 03Martin Vignali 07master:cb618da8d39d: Maintainers : add myself for exr
[23:07:19 CET] <beastd> FWIW: About the wrap/unwrap discussion. It's not helpful to argue that there are other brittle things and adding this would be not the first thing that can go bad. Would be better to not add more brittle things and discuss how we can get rid of other things that are considered brittle.
[23:07:56 CET] <BtbN> I don't see what's brittle about this
[23:08:16 CET] <BtbN> The whole discussion is only about wrapping that specific field
[23:08:26 CET] <BtbN> Some kind of wrapping has to happen
[23:09:50 CET] <iive> why?
[23:10:10 CET] <BtbN> Because nobody wants to put avcodec api into avutil
[23:10:14 CET] <beastd> With a field carrying user data? Is that really necessary?
[23:10:31 CET] <BtbN> lavc is the user in this case. User of lavu
[23:11:05 CET] <BtbN> So you either end up with something super hacky between the two libraries, or you wrap something
[23:12:14 CET] <iive> how is having a dedicated internal field a "something super hacky"?
[23:12:27 CET] <BtbN> How would you have a dedicated internal field?
[23:12:50 CET] <BtbN> It would necessarily be public, as it has to be in avutil
[23:12:56 CET] <BtbN> And avcodec is using it
[23:13:00 CET] <iive> so what?
[23:13:19 CET] <iive> why we must hide field names?
[23:13:41 CET] <BtbN> We don't want to clutter fields into lavu that have no meaning for it.
[23:13:48 CET] <BtbN> And no new avpriv symbols either
[23:14:52 CET] <iive> it is far better than (un)wrapping and using same field for two separate things.
[23:15:14 CET] <BtbN> Why is it using it for separate things?
[23:15:37 CET] <BtbN> It's an opaque field for arbitrary data. The user, which in this case is lavc, can do with it whatever it pleases.
[23:17:58 CET] <iive> precisely because it holds arbitrary data
[23:18:10 CET] <BtbN> I don't follow
[23:18:34 CET] <BtbN> It _owns_ that frame. It can do with it whatever it wants, as long as its original state is restored when it's returned
[23:18:45 CET] <iive> it's very easy to make a bug where you don't do the wrap/unwrap procedure and you leak the internal structure to the calling application.
[23:19:32 CET] <iive> you should know how much fun it is to debug why your structure suddenly contains bananas
[23:19:38 CET] <BtbN> Why is that easy? There's only like 2 or 3 places where that happens
[23:20:27 CET] <iive> look. the wrapping and unwrapping are just moving one field to another private field
[23:20:35 CET] <iive> just use the private field.
[23:20:49 CET] <BtbN> What other private field? There are no private fields.
[23:20:58 CET] <BtbN> Specially not for a struct that's _in another library_
[23:22:02 CET] <atomnuker> nevcairiel: hold on, couldn't the new api in lavc do the posprocess step?
[23:22:23 CET] <atomnuker> since sending packets and receiving frames is decoupled
[23:23:04 CET] <BtbN> so you want to put codec-specific code into the generic API?
[23:24:17 CET] <iive> BtbN: how is generic ffmpeg internal field a codec-specific code?
[23:24:40 CET] <BtbN> Because the post-processing is specific to cuvid and videotoolbox at the moment
[23:25:20 CET] <BtbN> Others may follow
[23:25:44 CET] <BtbN> And even if you hard-code the code for it, you still need to store some handles and references in the frame somewhere
[23:26:02 CET] <iive> I'm starting to think that there is much bigger design flaw here.
[23:26:25 CET] <BtbN> you mean you want to move AVFrame from lavu to lavc?
[23:27:05 CET] <atomnuker> BtbN: could you answer if my idea makes sense?
[23:27:22 CET] <beastd> BtbN: Especially in that case it seems wiser to search for a better solution. This sounds much broader than what you said before "decode: add a method for attaching lavc-internal data to frames"
[23:27:32 CET] <iive> BtbN: it's more a question how avframe contains hw surfaces.
[23:27:39 CET] <BtbN> I don't even understand your idea. Those functions are exactly where that post-processing is intended to happen?
[23:27:51 CET] <BtbN> iive, it's entirely unrelated to how it stores hw surfaces
[23:28:05 CET] <BtbN> The questionable patch is entirely independent from any kind of hw stuff
[23:28:45 CET] <BtbN> It's just about how lavc can store internal private data in an AVFrame, without just cluttering it into the AVFrame struct in lavu
[23:29:20 CET] <iive> BtbN: what data, exactly?
[23:29:30 CET] <atomnuker> BtbN: just do post-processing (waiting) when the user requests a frame via the new api
[23:29:31 CET] <BtbN> In the initial patch, no data at all
[23:29:40 CET] <BtbN> atomnuker, yes, that's where it's done?!
[23:30:16 CET] <BtbN> At the last possible moment there is access to the frame before returning it
[23:31:12 CET] <iive> BtbN: well, if we want to find an alternative solution, we should know for what kind of data it is supposed to pass through.
[23:31:41 CET] <BtbN> It's a generic method to store private avcodec data that belongs to a frame
[23:31:59 CET] <BtbN> Which for the first thing, in a later patch, is needed for the frame post-processing
[23:32:39 CET] <iive> BtbN: if it is to pass data from codec to filter, then the wrapping is an even worse thing
[23:32:48 CET] <BtbN> What filter?!
[23:32:58 CET] <iive> isn't postprocessing a filter?
[23:33:00 CET] <BtbN> It's _avcodec internal_
[23:33:11 CET] <BtbN> no it's not
[23:33:20 CET] <iive> oh
[23:33:23 CET] <JEEB> wrong postproc
[23:34:08 CET] <JEEB> iive: http://up-cat.net/p/e5b6eab4
[23:34:43 CET] <atomnuker> BtbN: great, here's my idea which is generic and applicable to hwaccels and doesn't need avframe hacks: call the postprocess function from the decoder itself
[23:34:50 CET] <iive> iirc there were a few filters that could use opencl to process image data on the videocard itself.
[23:34:59 CET] <BtbN> atomnuker, not feasible. Read that paste.
[23:35:12 CET] <BtbN> Unless you are fine with the hwaccel being crippled to 10 fps or so
[23:35:26 CET] <atomnuker> BtbN: that would only happen with the old api
[23:35:33 CET] <BtbN> no it wouldn't
[23:35:45 CET] <atomnuker> why not, the new api decouples encoding and decoding
[23:35:49 CET] <beastd> BtbN: A generic way to store lavc internal data to an AVFrame sounds kind of not good to begin with. Sorry, need to leave now. Bye...
[23:35:51 CET] <BtbN> the h264/hevc/vp9/vc1/... decoders are not capable of delaying frame output
[23:36:11 CET] <BtbN> Do you want to heavily modify all of them instead?
[23:36:12 CET] <atomnuker> so they need to be ported to the new api rather than have hacks
[23:36:33 CET] <BtbN> This is not a thing about new/old API
[23:36:39 CET] <BtbN> the old API can delay frames perfectly fine.
[23:36:41 CET] <atomnuker> it is, the new api wouldn't require the hacks
[23:36:48 CET] <BtbN> Yes it would
[23:36:53 CET] <atomnuker> I see no reason to bend over for proprietary nvidia crap
[23:37:08 CET] <BtbN> Unless someone half-rewrites all of the native hwaccel decoders in ffmpeg to account for async hwaccels
[23:37:17 CET] <atomnuker> no it wouldn't, if you called the postprocess step from inside the decoder
[23:37:19 CET] <BtbN> This at least affects cuvid and videotoolbox
[23:37:28 CET] <BtbN> read the damn pastebin.
[23:37:47 CET] <iive> what does post-process means in this context?
[23:37:51 CET] <atomnuker> I did, it's 5 lines of nothing I didn't know
[23:37:54 CET] <BtbN> The whole point of out-sourcing the post-processing is to do it _later_. Not in the decoder. Because that forces the hardware to synchronize, which makes it ridiculously slow.
[23:38:23 CET] <iive> dering/deblock or ?
[23:38:28 CET] <BtbN> Downloading the frame from the decoder. The decoding happens asynchronous in hardware.
[23:38:41 CET] <atomnuker> right, so you can do that when the user requests a frame via the new api, not when they send a packet
[23:38:47 CET] <atomnuker> the last possible moment
[23:38:50 CET] <BtbN> If you do it the instant the frame becomes available, it is really incredibly slow
[23:39:02 CET] <BtbN> atomnuker, again, this is about hwaccel
[23:39:11 CET] <BtbN> So it sits inside of all the hwaccel enabled decoders
[23:39:14 CET] <BtbN> h264, vp9, hevc, ...
[23:39:26 CET] <BtbN> You would need to _heavily_ modify their logic
[23:39:50 CET] <atomnuker> and that's okay
[23:39:57 CET] <BtbN> no it's not
[23:40:33 CET] <atomnuker> well, what can you do, its suboptimal at the moment for nvidia's "special" needs
[23:40:44 CET] <BtbN> It's not an nvidia special need for fucks sake
[23:40:51 CET] <atomnuker> sorry, and videotoolbox
[23:41:05 CET] <BtbN> no idea if you're trolling at this point. Anyway, I'm done with this shit
[23:41:33 CET] <Mavrik> Yeah, seems like atomnuker is more interested in doing Linus-shit-on-nVidia thing than actually solving the issue.
[23:41:33 CET] <iive> BtbN: so, frame is decoded. do you get a callback from the driver? and in that callback function you could start the "download"?
[23:41:35 CET] <atomnuker> I'm not trolling, it just seems to me the person who wrote this patch and that grand postprocess thing did it out of laziness
[23:41:56 CET] <atomnuker> because he didn't want to rewrite all the decoders
[23:42:01 CET] <jamrial> i'm starting to wonder if this subject isn't cursed. every person that touches it ends up in anger...
[23:42:06 CET] <atomnuker> (hwaccel ones anyway)
[23:42:19 CET] <JEEB> partially there is laziness, yes. but that's also IMHO a partial solution.
[23:42:21 CET] <atomnuker> jamrial: it is, its hwaccel, hardware fucking sucks
[23:43:14 CET] <jamrial> atomnuker: you're not helping matters in any case. your attitude to diss the whole thing "because nvidia" rubs people the wrong way
[23:43:39 CET] <JEEB> I don't disagree with hwaccels being black boxes of anger, but I'm not sure if this is productive. if it was just nvidia I might even agree, but async decoding definitely seems like a thing coming more and more
[23:44:03 CET] <JEEB> maybe at some point the decode/encode API will be async and there will be no issues :P
[23:44:08 CET] <JEEB> but at this moment that's not the case
[23:44:08 CET] <atomnuker> JEEB: we can have async decoding via the new api
[23:44:35 CET] <JEEB> IIRC it wasn't fully async? as in, you still have to wait for the return. it's just that it's unpaired
[23:44:37 CET] <atomnuker> the postprocess thing has no relevance to non-hw things afaik
[23:44:39 CET] <JEEB> async would be with callbacks or so
[23:44:43 CET] <JEEB> yes
[23:45:00 CET] <JEEB> it only is relevant to hwaccels or things where you have to have special state for the decoder instance
[23:45:59 CET] <atomnuker> "+JEEB | but at this moment that's not the case"
[23:46:13 CET] <atomnuker> right, there's the problem, the attitude
[23:46:30 CET] <JEEB> sure
[23:47:06 CET] <JEEB> would you be ready to help people port those decoders?
[23:47:13 CET] <JEEB> or is it "fuck hw decoding" area?
[23:48:19 CET] <iive> the async hw in theory should not be much different than threaded decoding.
[23:48:21 CET] <atomnuker> no, I'd be ready, how hard can it be
[23:48:43 CET] <atomnuker> we'd even get the benefit of async decoding ahead of async decoding api
[23:49:02 CET] <JEEB> well, at least this is going for a positive side :)
[23:57:58 CET] <iive> BtbN: if the whole thing is async, isn't it theoretically possible that the driver could call the callback while the frame is owned by the application? aka unwrapped.
[00:00:00 CET] --- Mon Nov 6 2017