[FFmpeg-devel] [PATCH v1] avcodec/vaapi_av1: correct data size when creating slice data buffer

Jan Ekström jeebjp at gmail.com
Mon May 17 11:59:00 EEST 2021


On Mon, May 17, 2021 at 4:50 AM Fei Wang <fei.w.wang at intel.com> wrote:
>
> Set the size of all tiles when creating the slice data buffer; the
> hardware will use slice_data_offset/slice_data_size from the slice
> parameter buffer to locate each tile's data.
>
> This change makes it possible to successfully decode clips that
> carry multiple tiles' data inside one OBU.
>
> Signed-off-by: Fei Wang <fei.w.wang at intel.com>
> ---
>  libavcodec/vaapi_av1.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/libavcodec/vaapi_av1.c b/libavcodec/vaapi_av1.c
> index 1809b485aa..16b7e35747 100644
> --- a/libavcodec/vaapi_av1.c
> +++ b/libavcodec/vaapi_av1.c
> @@ -292,7 +292,7 @@ static int vaapi_av1_decode_slice(AVCodecContext *avctx,
>          err = ff_vaapi_decode_make_slice_buffer(avctx, pic, &slice_param,
>                                                  sizeof(VASliceParameterBufferAV1),
>                                                  buffer,
> -                                                s->tile_group_info[i].tile_size);
> +                                                size);
>          if (err) {
>              ff_vaapi_decode_cancel(avctx, pic);
>              return err;
> --
> 2.17.1

So basically this fixes the call to pass the whole size of the data
buffer (within which each tile's offset and size should always be
located). As such it makes sense.
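
For context, after this patch the relevant loop ends up looking roughly
like the following (a paraphrased sketch from memory, not a verbatim
copy of vaapi_av1.c; unrelated field setup is omitted):

    static int vaapi_av1_decode_slice(AVCodecContext *avctx,
                                      const uint8_t *buffer,
                                      uint32_t size)
    {
        const AV1DecContext *s = avctx->priv_data;
        VAAPIDecodePicture *pic = s->cur_frame.hwaccel_picture_private;
        int err;

        for (int i = s->tg_start; i <= s->tg_end; i++) {
            /* Each tile still carries its own offset/size in the slice
             * parameters... */
            VASliceParameterBufferAV1 slice_param = {
                .slice_data_size   = s->tile_group_info[i].tile_size,
                .slice_data_offset = s->tile_group_info[i].tile_offset,
                .slice_data_flag   = VA_SLICE_DATA_FLAG_ALL,
                .tile_row          = s->tile_group_info[i].tile_row,
                .tile_column       = s->tile_group_info[i].tile_column,
                .tg_start          = s->tg_start,
                .tg_end            = s->tg_end,
            };

            /* ...while the data buffer is now submitted with its full
             * size, so those offsets stay valid for every tile in the
             * OBU. */
            err = ff_vaapi_decode_make_slice_buffer(avctx, pic, &slice_param,
                                                    sizeof(VASliceParameterBufferAV1),
                                                    buffer, size);
            if (err) {
                ff_vaapi_decode_cancel(avctx, pic);
                return err;
            }
        }

        return 0;
    }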

It seems like at some point there was an idea to pass only "buffer +
tile_offset" (and then set VASliceParameterBufferAV1's tile offset to
0), but that never materialized completely? Would that lead to smaller
buffers being utilized? Or does libva already extract the tiles into
their own buffers based on this information?
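
To make the alternative concrete, I mean something along these lines
(completely untested, just to illustrate the question; the surrounding
loop would stay as sketched above):

    /* Untested sketch of the "buffer + tile_offset" idea: submit only
     * this tile's bytes and zero the in-buffer offset accordingly. */
    slice_param.slice_data_offset = 0;
    slice_param.slice_data_size   = s->tile_group_info[i].tile_size;

    err = ff_vaapi_decode_make_slice_buffer(avctx, pic, &slice_param,
                                            sizeof(VASliceParameterBufferAV1),
                                            buffer + s->tile_group_info[i].tile_offset,
                                            s->tile_group_info[i].tile_size);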

Jan
