[FFmpeg-devel] [PATCH v3 3/6] lavc/qsvdec: Replace current parser with MFXVideoDECODE_DecodeHeader()
Li, Zhong
zhong.li at intel.com
Thu Mar 14 13:03:25 EET 2019
> From: Rogozhkin, Dmitry V
> Sent: Tuesday, March 12, 2019 7:37 AM
> To: Li, Zhong <zhong.li at intel.com>; ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v3 3/6] lavc/qsvdec: Replace current
> parser with MFXVideoDECODE_DecodeHeader()
>
> On Mon, 2019-03-11 at 17:23 +0800, Li, Zhong wrote:
> > > -----Original Message-----
> > > From: Rogozhkin, Dmitry V
> > > Sent: Saturday, March 9, 2019 8:48 AM
> > > To: ffmpeg-devel at ffmpeg.org
> > > Cc: Li, Zhong <zhong.li at intel.com>
> > > Subject: Re: [FFmpeg-devel] [PATCH v3 3/6] lavc/qsvdec: Replace
> > > current parser with MFXVideoDECODE_DecodeHeader()
> > >
> > > On Fri, 2019-03-08 at 15:40 +0800, Zhong Li wrote:
> > > > Using the MSDK parser can improve the qsv decoder pass rate in some
> > > > cases (e.g. when the SPS declares a wrong level_idc, smaller than it
> > > > should be).
> > > > And it is necessary for adding new qsv decoders such as MJPEG and
> > > > VP9, since the current parser can't provide enough information.
> > > > Actually, using MFXVideoDECODE_DecodeHeader() was discussed at
> > > > https://ffmpeg.org/pipermail/ffmpeg-devel/2015-July/175734.html and
> > > > merged as commit 1acb19d, but it was overwritten when the libav
> > > > patches were merged (commit 1f26a23) without any explanation.
> > > >
> > > > v2: split decoding the header out of decode_init, and call it for
> > > > every frame to detect format/resolution changes. It can fix some
> > > > regression issues such as HEVC 10-bit decoding.
> > > >
> > > > Signed-off-by: Zhong Li <zhong.li at intel.com>
> > > > ---
> > > > libavcodec/qsvdec.c | 172 ++++++++++++++------------------
> > > > libavcodec/qsvdec.h | 2 +
> > > > libavcodec/qsvdec_h2645.c | 1 +
> > > > libavcodec/qsvdec_other.c | 1 +
> > > > 4 files changed, 93 insertions(+), 83 deletions(-)
> > > >
> > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index
> > > > 4a0be811fb..b78026e14d 100644
> > > > --- a/libavcodec/qsvdec.c
> > > > +++ b/libavcodec/qsvdec.c
> > > > @@ -120,19 +120,17 @@ static inline unsigned int qsv_fifo_size(const AVFifoBuffer* fifo)
> > > > return av_fifo_size(fifo) / qsv_fifo_item_size();
> > > > }
> > > >
> > > > -static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
> > > > +static int qsv_decode_preinit(AVCodecContext *avctx, QSVContext *q, enum AVPixelFormat *pix_fmts, mfxVideoParam *param)
> > > > {
> > > > - const AVPixFmtDescriptor *desc;
> > > > mfxSession session = NULL;
> > > > int iopattern = 0;
> > > > - mfxVideoParam param = { 0 };
> > > > - int frame_width = avctx->coded_width;
> > > > - int frame_height = avctx->coded_height;
> > > > int ret;
> > > >
> > > > - desc = av_pix_fmt_desc_get(avctx->sw_pix_fmt);
> > > > - if (!desc)
> > > > - return AVERROR_BUG;
> > > > + ret = ff_get_format(avctx, pix_fmts);
> > > > + if (ret < 0) {
> > > > + q->orig_pix_fmt = avctx->pix_fmt = AV_PIX_FMT_NONE;
> > > > + return ret;
> > > > + }
> > > >
> > > > if (!q->async_fifo) {
> > > > q->async_fifo = av_fifo_alloc(q->async_depth * qsv_fifo_item_size());
> > > > @@ -170,48 +168,72 @@ static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q)
> > > > return ret;
> > > > }
> > > >
> > > > - ret = ff_qsv_codec_id_to_mfx(avctx->codec_id);
> > > > - if (ret < 0)
> > > > - return ret;
> > > > + param->IOPattern = q->iopattern;
> > > > + param->AsyncDepth = q->async_depth;
> > > > + param->ExtParam = q->ext_buffers;
> > > > + param->NumExtParam = q->nb_ext_buffers;
> > > >
> > > > - param.mfx.CodecId = ret;
> > > > - param.mfx.CodecProfile = ff_qsv_profile_to_mfx(avctx->codec_id, avctx->profile);
> > > > - param.mfx.CodecLevel = avctx->level == FF_LEVEL_UNKNOWN ? MFX_LEVEL_UNKNOWN : avctx->level;
> > > > -
> > > > - param.mfx.FrameInfo.BitDepthLuma = desc->comp[0].depth;
> > > > - param.mfx.FrameInfo.BitDepthChroma = desc->comp[0].depth;
> > > > - param.mfx.FrameInfo.Shift = desc->comp[0].depth > 8;
> > > > - param.mfx.FrameInfo.FourCC = q->fourcc;
> > > > - param.mfx.FrameInfo.Width = frame_width;
> > > > - param.mfx.FrameInfo.Height = frame_height;
> > > > - param.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
> > > > -
> > > > - switch (avctx->field_order) {
> > > > - case AV_FIELD_PROGRESSIVE:
> > > > - param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
> > > > - break;
> > > > - case AV_FIELD_TT:
> > > > - param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_TFF;
> > > > - break;
> > > > - case AV_FIELD_BB:
> > > > - param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_FIELD_BFF;
> > > > - break;
> > > > - default:
> > > > - param.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_UNKNOWN;
> > > > - break;
> > > > - }
> > > > + return 0;
> > > > + }
> > > >
> > > > - param.IOPattern = q->iopattern;
> > > > - param.AsyncDepth = q->async_depth;
> > > > - param.ExtParam = q->ext_buffers;
> > > > - param.NumExtParam = q->nb_ext_buffers;
> > > > +static int qsv_decode_init(AVCodecContext *avctx, QSVContext *q, mfxVideoParam *param)
> > > > +{
> > > > + int ret;
> > > >
> > > > - ret = MFXVideoDECODE_Init(q->session, &param);
> > > > + avctx->width = param->mfx.FrameInfo.CropW;
> > > > + avctx->height = param->mfx.FrameInfo.CropH;
> > > > + avctx->coded_width = param->mfx.FrameInfo.Width;
> > > > + avctx->coded_height = param->mfx.FrameInfo.Height;
> > > > + avctx->level = param->mfx.CodecLevel;
> > > > + avctx->profile = param->mfx.CodecProfile;
> > > > + avctx->field_order = ff_qsv_map_picstruct(param->mfx.FrameInfo.PicStruct);
> > > > + avctx->pix_fmt = ff_qsv_map_fourcc(param->mfx.FrameInfo.FourCC);
> > > > +
> > > > + ret = MFXVideoDECODE_Init(q->session, param);
> > > > if (ret < 0)
> > > > return ff_qsv_print_error(avctx, ret, "Error initializing the MFX video decoder");
> > > >
> > > > - q->frame_info = param.mfx.FrameInfo;
> > > > + q->frame_info = param->mfx.FrameInfo;
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > +static int qsv_decode_header(AVCodecContext *avctx, QSVContext *q, AVPacket *avpkt, enum AVPixelFormat *pix_fmts, mfxVideoParam *param)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + mfxBitstream bs = { { { 0 } } };
> > > > +
> > > > + if (avpkt->size) {
> > > > + bs.Data = avpkt->data;
> > > > + bs.DataLength = avpkt->size;
> > > > + bs.MaxLength = bs.DataLength;
> > > > + bs.TimeStamp = avpkt->pts;
> > > > + if (avctx->field_order == AV_FIELD_PROGRESSIVE)
> > > > + bs.DataFlag |= MFX_BITSTREAM_COMPLETE_FRAME;
> > > > + } else
> > > > + return AVERROR_INVALIDDATA;
> > > > +
> > > > +
> > > > + if(!q->session) {
> > > > + ret = qsv_decode_preinit(avctx, q, pix_fmts, param);
> > > > + if (ret < 0)
> > > > + return ret;
> > > > + }
> > > > +
> > > > + ret = ff_qsv_codec_id_to_mfx(avctx->codec_id);
> > > > + if (ret < 0)
> > > > + return ret;
> > > > +
> > > > + param->mfx.CodecId = ret;
> > > > + ret = MFXVideoDECODE_DecodeHeader(q->session, &bs, param);
> > > > + if (MFX_ERR_MORE_DATA == ret) {
> > > > + return AVERROR(EAGAIN);
> > > > + }
> > > > + if (ret < 0)
> > > > + return ff_qsv_print_error(avctx, ret,
> > > > + "Error decoding stream header");
> > > >
> > > > return 0;
> > > > }
> > > > @@ -494,7 +516,10 @@ int ff_qsv_process_data(AVCodecContext *avctx, QSVContext *q,
> > > > uint8_t *dummy_data;
> > > > int dummy_size;
> > > > int ret;
> > > > - const AVPixFmtDescriptor *desc;
> > > > + mfxVideoParam param = { 0 };
> > > > + enum AVPixelFormat pix_fmts[3] = { AV_PIX_FMT_QSV,
> > > > +                                    AV_PIX_FMT_NV12,
> > > > +                                    AV_PIX_FMT_NONE };
> > > >
> > > > if (!q->avctx_internal) {
> > > > q->avctx_internal = avcodec_alloc_context3(NULL);
> > > > @@ -508,7 +533,6 @@ int ff_qsv_process_data(AVCodecContext *avctx, QSVContext *q,
> > > > return AVERROR(ENOMEM);
> > > >
> > > > q->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
> > > > - q->orig_pix_fmt = AV_PIX_FMT_NONE;
> > > > }
> > > >
> > > > if (!pkt->size)
> > > > @@ -521,15 +545,17 @@ int ff_qsv_process_data(AVCodecContext *avctx, QSVContext *q,
> > > > pkt->data, pkt->size, pkt->pts, pkt->dts, pkt->pos);
> > > >
> > > > - avctx->field_order = q->parser->field_order;
> > > > /* TODO: flush delayed frames on reinit */
> > > > - if (q->parser->format != q->orig_pix_fmt ||
> > > > - FFALIGN(q->parser->coded_width, 16) != FFALIGN(avctx->coded_width, 16) ||
> > > > - FFALIGN(q->parser->coded_height, 16) != FFALIGN(avctx->coded_height, 16)) {
> > > >
> > > > - enum AVPixelFormat pix_fmts[3] = { AV_PIX_FMT_QSV,
> > > > -                                    AV_PIX_FMT_NONE,
> > > > -                                    AV_PIX_FMT_NONE };
> > > > - enum AVPixelFormat qsv_format;
> > > > +
> > > > + // sw_pix_fmt was initialized as NV12; it will be updated after the header is decoded if that is not the actual format.
> > > > + if (q->orig_pix_fmt != AV_PIX_FMT_NONE)
> > > > + pix_fmts[1] = q->orig_pix_fmt;
> > >
> > > Well, it still seems non self-explanatory to me why we have
> > > pix_fmts[1] here.
> >
> > I think it has been explained and described in my comment. It means
> > sw_pix_fmt.
> > For more detail, you probably want to check the implementation of
> > ff_get_format().
> >
> > > > +
> > > > + ret = qsv_decode_header(avctx, q, pkt, pix_fmts, &param);
> > >
> > > Parsing can be expensive to perform on each frame. See more below.
> >
> > 1. Parsing just reads the header bits; I don't think it is a CPU-heavy
> > task compared with slice-level decoding.
> > 2. Mostly, only key frames have a complete header. For non-key frames,
> > it just returns MFX_ERR_MORE_DATA instead of doing real parsing work.
>
>
> It has been a while since I looked into the details of DecodeHeader
> myself, but I think the following will happen if a frame is missing a
> header and you feed it into mediasdk: mediasdk will read the whole
> bitstream to the end trying to find a header. Once it fails to find one,
> mediasdk will return MFX_ERR_MORE_DATA. I think you will be able to see
> that if you print mfxBitstream::DataOffset and mfxBitstream::DataLength
> before and after your call to MFXVideoDECODE_DecodeHeader(). So, mediasdk
> actually changes DataOffset, pointing either to the beginning of the
> header or right after the header (I don't remember exactly, sorry). If no
> header is found, it sets DataOffset=DataLength and DataLength=0,
> requesting a new bitstream portion. The reason why you think everything
> is ok is that you construct the mfxBitstream structure anew each time in
> qsv_decode_header.
> Hence, you just don't notice that mediasdk performs a massive parsing.
>
> I'm afraid that, as is, your change will result in massive CPU% regressions.
That is a good point; if a massive CPU% regression really happens, it should definitely be taken seriously.
So I compared the performance and CPU usage with and without this patch. I can't find an obvious performance drop or higher CPU usage, at least for H.264.
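For reference, the kind of check Dmitry suggested can be done with a bit of temporary instrumentation around the DecodeHeader call in qsv_decode_header() (a rough sketch only, not part of the patch; the av_log() calls are purely for measurement and reuse the avctx/avpkt/q/param names from the patch):

    mfxBitstream bs = { { { 0 } } };
    bs.Data       = avpkt->data;
    bs.DataLength = avpkt->size;
    bs.MaxLength  = bs.DataLength;

    /* Before the call: offset is 0, length is the packet size */
    av_log(avctx, AV_LOG_DEBUG, "before DecodeHeader: DataOffset=%u DataLength=%u\n",
           bs.DataOffset, bs.DataLength);

    ret = MFXVideoDECODE_DecodeHeader(q->session, &bs, param);

    /* After the call: MSDK may have advanced DataOffset while searching for a header */
    av_log(avctx, AV_LOG_DEBUG, "after DecodeHeader: ret=%d DataOffset=%u DataLength=%u\n",
           ret, bs.DataOffset, bs.DataLength);

If a packet without a header really made MSDK scan the whole buffer, DataOffset/DataLength would show a large jump after the call, per Dmitry's description above.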
I also took a closer look at how MSDK searches for the start code: https://github.com/Intel-Media-SDK/MediaSDK/blob/master/_studio/shared/umc/codec/h264_dec/src/umc_h264_nal_spl.cpp#L319
If MSDK finds a start code, no matter whether it belongs to an SPS/PPS or a slice header, it ends the search.
Yes, most non-key frames have no SPS/PPS, but a slice header still exists, which means MSDK does not need to search the whole packet.
For the Annex C case, ffmpeg-qsv converts it to Annex B: https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/qsvdec_h2645.c#L257, which makes sure a start code always exists.
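Just to illustrate that point (a toy sketch of the idea only, not MSDK code): with Annex B input every packet begins with a 00 00 01 (or 00 00 00 01) start code, so a search like the one below returns at the first NAL unit near the start of the packet instead of walking to the end of the buffer:

    #include <stdint.h>

    /* Return the offset of the first 00 00 01 start code in buf, or -1 if none is found.
       The 3-byte pattern also matches the 4-byte 00 00 00 01 form one byte later. */
    static int find_start_code(const uint8_t *buf, int size)
    {
        for (int i = 0; i + 2 < size; i++) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
                return i;
        }
        return -1;
    }

Only a packet that contains no start code at all, which the Annex B conversion above prevents, would force a scan over the whole buffer.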
If you agree with the above, I would like to keep the current implementation and push this patch; otherwise it will block the MJPEG/VP9 decoding patches.