[FFmpeg-devel] [PATCH] checkasm: h264dsp: test luma_dc_dequant

Martin Storsjö martin at martin.st
Fri Jun 13 16:26:42 EEST 2025


On Fri, 13 Jun 2025, Tristan Matthews wrote:

> On Fri, Jun 13, 2025 at 2:08 AM Martin Storsjö <martin at martin.st> wrote:
>>
>> On Fri, 13 Jun 2025, Tristan Matthews wrote:
>>
>>> Good catch. Also, I realized that the output buffers were too small;
>>> that will be fixed in the next version.
>>
>> Why was that too small? If we write (and check) 16x16 int16_t elements,
>> the previous allocation of LOCAL_ALIGNED_16(int16_t, dst0, [16 * 16])
>> sounds just right, doesn't it? Or does the function use the
>> [16*16, 2*16*16) area of the destination as scratch space?
>
> That's what I thought too, until I noticed the FATE failures (e.g.
> https://patchwork.ffmpeg.org/check/124147/), and on further digging
> realized that dctcoef (used for dst here:
> https://git.ffmpeg.org/gitweb/ffmpeg.git/blob/fb65ecbc9b805571e5ff707b935c343803137e54:/libavcodec/h264idct_template.c#l256)
> will be either 2 or 4 bytes depending on the bit depth, IIUC (see
> https://git.ffmpeg.org/gitweb/ffmpeg.git/blob/fb65ecbc9b805571e5ff707b935c343803137e54:/libavcodec/bit_depth_template.c#l54).

Oh, I see. Well, in that case, I think that using int16_t and *2 is 
quite confusing; I'd rather have it be uint8_t and *sizeof(int32_t), or 
something like that, to clarify what's going on.

I see that preexisting tests like vp9dsp.c do use int16_t with an extra 
magic *2, but I think plain uint8_t is clearer when the element type 
isn't always specifically int16_t.

In that case, using checkasm_check(int16_t) is also going to be wrong; 
we have similar cases for pixels, see the checkasm_check_pixel() macros 
in checkasm.h. Perhaps we need a similar checkasm_check_dctcoef() macro, 
which checks int16_t or int32_t depending on bit_depth?
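
Roughly along these lines, as an untested sketch modeled on 
checkasm_check_pixel() (it assumes a bit_depth variable is in scope at 
the call site, and that checkasm_check(int32_t) is available):

    /* Untested sketch: dispatch on bit_depth to the matching dctcoef
     * type, the same way checkasm_check_pixel() dispatches between
     * uint8_t and uint16_t. */
    #define checkasm_check_dctcoef(buf1, stride1, buf2, stride2, ...) \
        ((bit_depth) > 8 ?                                            \
         checkasm_check(int32_t, (const int32_t *)(buf1), stride1,    \
                                 (const int32_t *)(buf2), stride2,    \
                                 __VA_ARGS__) :                       \
         checkasm_check(int16_t, (const int16_t *)(buf1), stride1,    \
                                 (const int16_t *)(buf2), stride2,    \
                                 __VA_ARGS__))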

// Martin

