[FFmpeg-devel] [PATCH] avcodec: add QOI decoder and demuxer and parser

Tomas Härdin tjoppen at acc.umu.se
Tue May 31 20:07:06 EEST 2022


tis 2022-05-31 klockan 13:32 +0200 skrev Paul B Mahol:
> 
> diff --git a/libavcodec/codec_id.h b/libavcodec/codec_id.h
> index 03234b7543..a716dc87c3 100644
> --- a/libavcodec/codec_id.h
> +++ b/libavcodec/codec_id.h
> @@ -311,6 +311,7 @@ enum AVCodecID {
>      AV_CODEC_ID_VBN,
>      AV_CODEC_ID_JPEGXL,
>      AV_CODEC_ID_BINKVIDEO2,
> +    AV_CODEC_ID_QOI,

Missing minor version bump?
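
I.e. the usual bump in libavcodec/version.h, against whatever the
current minor happens to be (numbers below are just illustrative):

-#define LIBAVCODEC_VERSION_MINOR  24
+#define LIBAVCODEC_VERSION_MINOR  25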

> +++ b/libavcodec/qoi_parser.c
> 
> +typedef struct QOIParseContext {
> +    ParseContext pc;
> +} QOIParseContext;

Could just be ParseContext
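
I.e. drop the wrapper struct and register it as (untested):

const AVCodecParser ff_qoi_parser = {
    .codec_ids      = { AV_CODEC_ID_QOI },
    .priv_data_size = sizeof(ParseContext),
    .parser_parse   = qoi_parse,
    .parser_close   = ff_parse_close,
};

then use s->priv_data directly as a ParseContext* in qoi_parse().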

> +
> +static int qoi_parse(AVCodecParserContext *s, AVCodecContext *avctx,
> +                     const uint8_t **poutbuf, int *poutbuf_size,
> +                     const uint8_t *buf, int buf_size)

Looks OK

> +static int qoi_decode_frame(AVCodecContext *avctx, AVFrame *p,
> +                            int *got_frame, AVPacket *avpkt)
> +{
> +    [...]
> +    avctx->pix_fmt = AV_PIX_FMT_RGBA;

Would be nice if RGB pictures were actually decoded as RGB, but it
might not be strictly necessary as far as I can tell from the spec.

The reference implementation will use whichever pixel format is
specified in the file unless the user explicitly requests either of
the two pixel formats. So users coming from that context would expect
our decoder to automagically pick the format specified in the file, I
think.
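
Something like this after reading the header would do it (sketch,
untested; assumes gb is positioned at the channels byte):

    int channels = bytestream2_get_byte(&gb);
    if (channels != 3 && channels != 4)
        return AVERROR_INVALIDDATA;
    avctx->pix_fmt = channels == 3 ? AV_PIX_FMT_RGB24 : AV_PIX_FMT_RGBA;

with the write loop then stepping 3 or 4 bytes per pixel to match.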

> +
> +    if ((ret = ff_get_buffer(avctx, p, 0)) < 0)
> +        return ret;
> +
> +    dst = p->data[0];
> +    len = width * height * 4LL;
> +    for (int n = 0, off_x = 0, off_y = 0; n < len; n += 4, off_x++) {
> +        if (off_x >= width) {
> +            off_x = 0;
> +            off_y++;
> +            dst += p->linesize[0];
> +        }
> +        if (run > 0) {
> +            run--;
> +        } else if (bytestream2_get_bytes_left(&gb) > 0) {
> +            int chunk = bytestream2_get_byteu(&gb);
> +
> +            if (chunk == QOI_OP_RGB) {
> +                px[0] = bytestream2_get_byte(&gb);
> +                px[1] = bytestream2_get_byte(&gb);
> +                px[2] = bytestream2_get_byte(&gb);
> +            } else if (chunk == QOI_OP_RGBA) {
> +                px[0] = bytestream2_get_byte(&gb);
> +                px[1] = bytestream2_get_byte(&gb);
> +                px[2] = bytestream2_get_byte(&gb);
> +                px[3] = bytestream2_get_byte(&gb);

This will silently accept RGBA chunks in RGB pictures, which *might*
be incorrect; the spec is not clear on this. The reference
implementation will accept this but discard the alpha channel *on the
output*, if the file is marked as RGB and the user doesn't explicitly
request RGBA. It updates px.a regardless, which changes the hash, so
we can't just ignore this chunk in RGB mode. Unlike the official
decoder, this one will output garbage alpha for RGB files.
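
If an RGB24 output path gets added, the decoder state still has to
stay four components wide, i.e. roughly (sketch; psize = 3 or 4
depending on the output format):

    /* px[] keeps full RGBA state either way so the index hash
     * (r*3 + g*5 + b*7 + a*11) % 64 stays bit-exact with the
     * reference; only the copy to the frame drops alpha */
    memcpy(dst + off_x * psize, px, psize);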

> --- a/libavformat/img2dec.c
> +++ b/libavformat/img2dec.c
> @@ -1131,6 +1131,17 @@ static int photocd_probe(const AVProbeData *p)
>      return AVPROBE_SCORE_MAX - 1;
>  }
>  
> +static int qoi_probe(const AVProbeData *p)
> +{
> +    if (memcmp(p->buf, "qoif", 4))
> +        return 0;
> +
> +    if (AV_RB32(p->buf + 4) == 0 || AV_RB32(p->buf + 8) == 0)
> +        return 0;

Should also check channels and colorspace
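
I.e. something like:

    if (p->buf[12] != 3 && p->buf[12] != 4) /* channels: 3 = RGB, 4 = RGBA */
        return 0;
    if (p->buf[13] > 1)                     /* colorspace: 0 = sRGB, 1 = linear */
        return 0;

since the spec only allows channels == 3 or 4 and colorspace == 0 or 1.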

/Tomas


