[FFmpeg-devel] [PATCH v4 03/11] libavutil/hwcontext_d3d11va: adding more texture information to the D3D11 hwcontext API
Hendrik Leppkes
h.leppkes at gmail.com
Sat May 9 10:18:37 EEST 2020
On Sat, May 9, 2020 at 9:07 AM Hendrik Leppkes <h.leppkes at gmail.com> wrote:
>
> On Sat, May 9, 2020 at 2:12 AM Soft Works <softworkz at hotmail.com> wrote:
> >
> > > From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of
> > > Hendrik Leppkes
> > > Sent: Friday, May 8, 2020 10:27 PM
> > > To: FFmpeg development discussions and patches <ffmpeg-
> > > devel at ffmpeg.org>
> > > Subject: Re: [FFmpeg-devel] [PATCH v4 03/11] libavutil/hwcontext_d3d11va:
> > > adding more texture information to the D3D11 hwcontext API
> > >
> > > On Fri, May 8, 2020 at 5:51 PM <artem.galin at gmail.com> wrote:
> > > >
> > > > From: Artem Galin <artem.galin at intel.com>
> > > >
> > > > Added an AVD3D11FrameDescriptor array to store single textures in case
> > > > there is no way to allocate an array texture with
> > > > BindFlags = D3D11_BIND_RENDER_TARGET.
> > > >
> > > > Signed-off-by: Artem Galin <artem.galin at intel.com>
> > > > ---
> > > > libavutil/hwcontext_d3d11va.c | 26 ++++++++++++++++++++------
> > > > libavutil/hwcontext_d3d11va.h | 9 +++++++++
> > > > 2 files changed, 29 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/libavutil/hwcontext_d3d11va.c b/libavutil/hwcontext_d3d11va.c
> > > > index c8ae58f908..cd80931dd3 100644
> > > > --- a/libavutil/hwcontext_d3d11va.c
> > > > +++ b/libavutil/hwcontext_d3d11va.c
> > > > @@ -72,8 +72,8 @@ static av_cold void load_functions(void)
> > > >  }
> > > >
> > > >  typedef struct D3D11VAFramesContext {
> > > > -    int nb_surfaces_used;
> > > > -
> > > > +    size_t nb_surfaces;
> > > > +    size_t nb_surfaces_used;
> > > >      DXGI_FORMAT format;
> > > >
> > > >      ID3D11Texture2D *staging_texture;
> > > > @@ -112,6 +112,8 @@ static void d3d11va_frames_uninit(AVHWFramesContext *ctx)
> > > >      if (s->staging_texture)
> > > >          ID3D11Texture2D_Release(s->staging_texture);
> > > >      s->staging_texture = NULL;
> > > > +
> > > > +    av_freep(&frames_hwctx->texture_infos);
> > > >  }
> > > >
> > > >  static int d3d11va_frames_get_constraints(AVHWDeviceContext *ctx,
> > > > @@ -152,8 +154,10 @@ static void free_texture(void *opaque, uint8_t *data)
> > > >      av_free(data);
> > > >  }
> > > >
> > > > -static AVBufferRef *wrap_texture_buf(ID3D11Texture2D *tex, int index)
> > > > +static AVBufferRef *wrap_texture_buf(AVHWFramesContext *ctx, ID3D11Texture2D *tex, int index)
> > > >  {
> > > > +    D3D11VAFramesContext *s = ctx->internal->priv;
> > > > +    AVD3D11VAFramesContext *frames_hwctx = ctx->hwctx;
> > > >      AVBufferRef *buf;
> > > >      AVD3D11FrameDescriptor *desc = av_mallocz(sizeof(*desc));
> > > >      if (!desc) {
> > > > @@ -161,6 +165,10 @@ static AVBufferRef *wrap_texture_buf(ID3D11Texture2D *tex, int index)
> > > >          return NULL;
> > > >      }
> > > >
> > > > +    frames_hwctx->texture_infos[s->nb_surfaces_used].texture = tex;
> > > > +    frames_hwctx->texture_infos[s->nb_surfaces_used].index = index;
> > > > +    s->nb_surfaces_used++;
> > > > +
> > > >      desc->texture = tex;
> > > >      desc->index = index;
> > > >
> > > > @@ -199,7 +207,7 @@ static AVBufferRef *d3d11va_alloc_single(AVHWFramesContext *ctx)
> > > >          return NULL;
> > > >      }
> > > >
> > > > -    return wrap_texture_buf(tex, 0);
> > > > +    return wrap_texture_buf(ctx, tex, 0);
> > > >  }
> > > >
> > > >  static AVBufferRef *d3d11va_pool_alloc(void *opaque, int size)
> > > > @@ -220,7 +228,7 @@ static AVBufferRef *d3d11va_pool_alloc(void *opaque, int size)
> > > >      }
> > > >
> > > >      ID3D11Texture2D_AddRef(hwctx->texture);
> > > > -    return wrap_texture_buf(hwctx->texture, s->nb_surfaces_used++);
> > > > +    return wrap_texture_buf(ctx, hwctx->texture, s->nb_surfaces_used);
> > > >  }
> > > >
> > > >  static int d3d11va_frames_init(AVHWFramesContext *ctx)
> > > > @@ -267,7 +275,7 @@ static int d3d11va_frames_init(AVHWFramesContext *ctx)
> > > >              av_log(ctx, AV_LOG_ERROR, "User-provided texture has mismatching parameters\n");
> > > >              return AVERROR(EINVAL);
> > > >          }
> > > > -    } else if (texDesc.ArraySize > 0) {
> > > > +    } else if (!(texDesc.BindFlags & D3D11_BIND_RENDER_TARGET) && texDesc.ArraySize > 0) {
> > > >          hr = ID3D11Device_CreateTexture2D(device_hwctx->device, &texDesc, NULL, &hwctx->texture);
> > > >          if (FAILED(hr)) {
> > > >              av_log(ctx, AV_LOG_ERROR, "Could not create the texture (%lx)\n", (long)hr);
> > > > @@ -275,6 +283,12 @@ static int d3d11va_frames_init(AVHWFramesContext *ctx)
> > > >          }
> > > >      }
> > > >
> > > > +    hwctx->texture_infos = av_mallocz_array(ctx->initial_pool_size, sizeof(*hwctx->texture_infos));
> > > > +    if (!hwctx->texture_infos)
> > > > +        return AVERROR(ENOMEM);
> > > > +
> > > > +    s->nb_surfaces = ctx->initial_pool_size;
> > > > +
> > > >      ctx->internal->pool_internal = av_buffer_pool_init2(sizeof(AVD3D11FrameDescriptor),
> > > >                                                          ctx, d3d11va_pool_alloc, NULL);
> > > >      if (!ctx->internal->pool_internal)
> > > > diff --git a/libavutil/hwcontext_d3d11va.h b/libavutil/hwcontext_d3d11va.h
> > > > index 9f91e9b1b6..295bdcd90d 100644
> > > > --- a/libavutil/hwcontext_d3d11va.h
> > > > +++ b/libavutil/hwcontext_d3d11va.h
> > > > @@ -164,6 +164,15 @@ typedef struct AVD3D11VAFramesContext {
> > > >       * This field is ignored/invalid if a user-allocated texture is provided.
> > > >       */
> > > >      UINT MiscFlags;
> > > > +
> > > > +    /**
> > > > +     * If the texture member above is not NULL, every element contains
> > > > +     * that same texture pointer together with its own index into the
> > > > +     * array texture. If the texture member above is NULL, each element
> > > > +     * contains a pointer to a separate non-array texture and an index
> > > > +     * of 0.
> > > > +     * This field is ignored/invalid if a user-allocated texture is provided.
> > > > +     */
> > > > +    AVD3D11FrameDescriptor *texture_infos;
> > > >  } AVD3D11VAFramesContext;
> > > >
> > >
> > >
> > > I'm not really a fan of this. Only supporting array textures was an intentional
> > > design decision back when D3D11VA was defined, because it greatly
> > > simplified the entire design - and as far as I know the d3d11va decoder, for
> > > example, doesn't even support decoding into anything else.
> > >
> > > - Hendrik
> >
> > It's not like there is a choice. The Intel MSDK uses an allocator mechanism,
> > and when it asks for a non-array DX11 texture, it has to be given one.
> >
>
> Of course there is a choice: only support the new stuff. After all, we
> haven't supported it at all for years now, so only supporting it on
> newer drivers isn't the end of the world.
>
To give an example for consistency:
d3d11va decoding will only ever decode into array textures. So when I
use d3d11va decoding and then try to encode with qsvenc, it still
fails on such systems, right?
And not only that, it'll fail in mysterious ways.
When I'm decoding with qsvdec and it produces a list of textures, and
the API user does not handle them - since it's a new feature and an
API change - it'll break mysteriously again.
Adding a confusing alternate way to store textures in the context
seems less than ideal, even more so since it's not necessary for
up-to-date drivers. Let Intel document the exact driver requirements,
check them, and fall back to d3d9 otherwise? That seems like an
overall much neater solution.
Bending our API to the needs of legacy drivers seems like something
that will cause headaches for years to come, while said hardware will
slowly just go away.
- Hendrik