[FFmpeg-devel] [PATCH v6 1/2] avcodec/avutil: move dynamic HDR10+ metadata parsing to libavutil
Andreas Rheinhardt
andreas.rheinhardt at outlook.com
Tue Mar 14 17:38:44 EET 2023
Raphaël Zumer:
> On 3/13/23 20:44, Andreas Rheinhardt wrote:
>>> diff --git a/libavutil/hdr_dynamic_metadata.h b/libavutil/hdr_dynamic_metadata.h
>>> index 2d72de56ae..3d327241c1 100644
>>> --- a/libavutil/hdr_dynamic_metadata.h
>>> +++ b/libavutil/hdr_dynamic_metadata.h
>>> @@ -340,4 +340,15 @@ AVDynamicHDRPlus *av_dynamic_hdr_plus_alloc(size_t *size);
>>> */
>>> AVDynamicHDRPlus *av_dynamic_hdr_plus_create_side_data(AVFrame *frame);
>>>
>>> +/**
>>> + * Parse a user data registered ITU-T T.35 payload into an AVDynamicHDRPlus structure.
>>> + * @param s A pointer to the AVDynamicHDRPlus structure to be filled.
>>> + * @param data The byte array containing the raw ITU-T T.35 data.
>>> + * @param size Size of the data array in bytes.
>> The implementation of this function uses the GetBit API, which requires
>> the buffer to be padded; yet the documentation does not mention this.
>>
>> Looking at the calculation in av_dynamic_hdr_plus_to_t35(), it seems
>> that the maximum bit length of a valid ITU-T T.35 payload is
>> 48+2×937+27+1+10+25×25×4+3×82+(3×15×24)+(1+10+25×25×4+3×1)+(3×(28+15×10))+3+3×6
>> = 8855 bits, i.e. 1107 bytes (please double-check this). This means we
>> can just copy the payload into a padded buffer of that size on the
>> stack, so the caller's buffer needs no padding at all. We may then
>> remove the padding from the serialization function, too.
>>
>> (By the way, the GetBit API does not actually need
>> AV_INPUT_BUFFER_PADDING_SIZE bytes of padding, but far less (I think
>> 8 bytes or so). I don't really like using AV_INPUT_BUFFER_PADDING_SIZE
>> here.)
>
> Hi,
>
> From get_bits.h (init_get_bits()): "buffer, must be AV_INPUT_BUFFER_PADDING_SIZE bytes larger than the actual read bits". Is this wrong? If so, what is the correct value and is it defined anywhere? What about "other places where one needed a padded buffer" as mentioned in your previous comment?
It is overly pessimistic. In fact, AV_INPUT_BUFFER_PADDING_SIZE has
been bumped several times with no changes to the GetBit API.
The correct value is not defined anywhere; I'd need to take a closer
look to find it out.
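For reference, usage under the contract as currently documented would
look roughly like this (an illustrative sketch only, not code from the
patch; note that get_bits.h is a libavcodec-internal header):

    #include <string.h>
    #include "libavutil/error.h"
    #include "libavutil/mem.h"
    #include "libavcodec/avcodec.h"  /* AV_INPUT_BUFFER_PADDING_SIZE */
    #include "libavcodec/get_bits.h" /* libavcodec-internal */

    /* Per the current documentation, the buffer handed to
     * init_get_bits8() must be AV_INPUT_BUFFER_PADDING_SIZE bytes
     * larger than the payload, with the excess zeroed. */
    static int read_first_byte(const uint8_t *src, int size)
    {
        GetBitContext gb;
        uint8_t *buf = av_mallocz(size + AV_INPUT_BUFFER_PADDING_SIZE);
        int ret;

        if (!buf)
            return AVERROR(ENOMEM);
        memcpy(buf, src, size);

        ret = init_get_bits8(&gb, buf, size);
        if (ret >= 0)
            ret = get_bits(&gb, 8); /* read bits as usual */
        av_free(buf);
        return ret;
    }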
>
> Why is copying the buffer to the stack in the parsing function preferable to padding it when serializing? The vast majority of HDR10+ payloads will be well below the max size.
>
If the parsing function does not require padding, then it can be used
with any buffer, not only those created by the serialization function.
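Roughly what I have in mind (an untested sketch, using the parser
signature from this patch; the size constant follows from the 8855-bit
bound above, i.e. 1107 bytes, and the 8-byte padding figure is my rough
guess from above, not a verified constant):

    #include <string.h>
    #include "libavutil/error.h"
    #include "libavutil/hdr_dynamic_metadata.h"
    #include "libavcodec/get_bits.h" /* libavcodec-internal */

    /* Upper bound on a valid payload: 8855 bits -> 1107 bytes. */
    #define T35_MAX_PAYLOAD_SIZE 1107
    /* Over-read room for the GetBit API; 8 bytes assumed, see above. */
    #define T35_PADDING 8

    int av_dynamic_hdr_plus_from_t35(AVDynamicHDRPlus *s,
                                     const uint8_t *data, size_t size)
    {
        uint8_t buf[T35_MAX_PAYLOAD_SIZE + T35_PADDING];
        GetBitContext gb;
        int ret;

        if (!s || !data || !size || size > T35_MAX_PAYLOAD_SIZE)
            return AVERROR(EINVAL);

        memcpy(buf, data, size);
        memset(buf + size, 0, sizeof(buf) - size); /* zero the padding */

        ret = init_get_bits8(&gb, buf, size);
        if (ret < 0)
            return ret;

        /* ... existing bit-level parsing of the T.35 payload ... */
        return 0;
    }

This way the function accepts any buffer, padded or not.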
- Andreas