[FFmpeg-user] should I shoot the dog?
Michael Koch
astroelectronic at t-online.de
Tue Sep 29 16:37:10 EEST 2020
On 29.09.2020 at 14:58, Mark Filipak (ffmpeg) wrote:
> On 09/29/2020 04:06 AM, Michael Koch wrote:
>> On 29.09.2020 at 04:28, Mark Filipak (ffmpeg) wrote:
>>>
>>> I just want to understand the frame structures that ffmpeg creates,
>>> and that ffmpeg uses in processing and filtering. Are Y, Cb, Cr
>>> separate buffers? That would be logical. Or are the Y, Cb, Cr values
>>> combined and organized similarly to macroblocks? I've found some
>>> code that supports that. Or are the Y, Cb, Cr values thrown
>>> together, pixel-by-pixel? That would be logical, too.
>>
>> As far as I understood it, that depends on the pixel format.
>> For example, there are "packed" pixel formats rgb24, bgr24, argb,
>> rgba, abgr, bgra, rgb48be, rgb48le, bgr48be, bgr48le.
>> And there are "planar" pixel formats gbrp, gbrp16be, gbrp16le.
>
> Hi Michael,
>
> "Packed" and "planar", eh? What evidence do you have? ...Share the candy!
As far as I know, this is not described in the official documentation.
You can find it, for example, here:
https://video.stackexchange.com/questions/16374/ffmpeg-pixel-format-definitions
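If you want to check it programmatically, the pixel format descriptors in libavutil tell packed and planar formats apart. Below is a minimal sketch, assuming the FFmpeg development headers are installed and using a few example format names (any name from "ffmpeg -pix_fmts" works); the AV_PIX_FMT_FLAG_PLANAR flag is what distinguishes the two layouts:

#include <stdio.h>
#include <libavutil/pixdesc.h>

int main(void)
{
    /* a few example pixel format names to inspect */
    const char *names[] = { "rgb24", "gbrp", "yuv420p" };

    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        enum AVPixelFormat fmt = av_get_pix_fmt(names[i]);
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (!desc)
            continue;
        printf("%-8s %d component(s), %d plane(s), %s\n",
               names[i], desc->nb_components,
               av_pix_fmt_count_planes(fmt),
               (desc->flags & AV_PIX_FMT_FLAG_PLANAR) ? "planar" : "packed");
    }
    return 0;
}

Compile it with something like
gcc check_pixfmt.c -o check_pixfmt $(pkg-config --cflags --libs libavutil)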
>
> Now, I'm not talking about streams. I'm talking about after decoding.
> I'm talking about the buffers. I would think that a single, consistent
> format would be used.
>
There is no single consistent format used internally. See Gyan's answer
here:
http://ffmpeg.org/pipermail/ffmpeg-user/2020-September/050031.html
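What the decoded buffers look like then follows from whatever pixel format the frame has. As a rough sketch (assuming you already have a decoded AVFrame; the helper name is made up), you can inspect the planes of the frame directly. For yuv420p you get three separate planes (Y in data[0], Cb in data[1], Cr in data[2]), while a packed format such as rgb24 keeps everything interleaved in data[0]:

#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/pixdesc.h>

/* hypothetical helper: print how a decoded frame is laid out */
static void describe_frame(const AVFrame *frame)
{
    const AVPixFmtDescriptor *desc =
        av_pix_fmt_desc_get((enum AVPixelFormat)frame->format);
    int planes = av_pix_fmt_count_planes((enum AVPixelFormat)frame->format);

    printf("%s: %d plane(s)\n", desc ? desc->name : "unknown", planes);
    for (int p = 0; p < planes; p++)
        printf("  plane %d: linesize %d bytes\n", p, frame->linesize[p]);
}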
Michael