[FFmpeg-user] should I shoot the dog?
Devin Heitmueller
devin.heitmueller at ltnglobal.com
Tue Sep 29 16:20:26 EEST 2020
Hi Mark,
So when you talk about the decoded frames, there is no concept of
macroblocks. There are simply video frames containing Y, Cb, and Cr
samples. How those samples are organized, and how large each sample
is, is determined by the AVFrame's format.
> "Packed" and "planar", eh? What evidence do you have? ...Share the candy!
>
> Now, I'm not talking about streams. I'm talking about after decoding. I'm talking about the buffers.
> I would think that a single, consistent format would be used.
When dealing with typical consumer MPEG-2 or H.264 video, the decoded
frames will typically be in what's referred to as "4:2:0 planar"
format. This means the individual Y/Cb/Cr samples are not interleaved
as they would be in a packed format. If you look at the underlying
data that makes up the frame as an array, you will typically have W*H
Y values, followed by W*H/4 Cb values, and then W*H/4 Cr values. Note
that I say "values" and not "bytes", as the size of each value may
vary depending on the pixel format.
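To make that layout concrete, here is a minimal sketch (mine, not part
of the original discussion) of walking the planes of a decoded AVFrame,
assuming 8-bit AV_PIX_FMT_YUV420P. Each row may carry alignment
padding, so you step by linesize rather than by width:

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* Sketch only: walk the three planes of an 8-bit yuv420p frame. */
    static void walk_yuv420p(const AVFrame *frame)
    {
        if (frame->format != AV_PIX_FMT_YUV420P)
            return;                      /* other formats lay out differently */

        /* Plane 0: luma, W*H Y values, one per pixel. */
        for (int y = 0; y < frame->height; y++) {
            const uint8_t *y_row = frame->data[0] + y * frame->linesize[0];
            /* y_row[0] .. y_row[frame->width - 1] are Y samples */
        }

        /* Planes 1 and 2: chroma, subsampled 2x2, so W/2 * H/2 values each. */
        for (int y = 0; y < frame->height / 2; y++) {
            const uint8_t *cb_row = frame->data[1] + y * frame->linesize[1];
            const uint8_t *cr_row = frame->data[2] + y * frame->linesize[2];
            /* cb_row[x] and cr_row[x] cover the 2x2 pixel block at (2x, 2y) */
        }
    }

A packed format, by contrast, would put everything interleaved in
data[0].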
Unfortunately there is no "single, consistent format" because of the
variety of ways in which video can be encoded, as well as performance
concerns. Normalizing everything to a single format can have a huge
performance cost, in particular if the original video is in a
different colorspace (e.g. the video is YUV and you want RGB).
Generally speaking you can set up the pipeline to always deliver you a
single format, and ffmpeg will automatically perform any
transformations necessary to achieve that (e.g. convert from packed to
planar, RGB to YUV, 8-bit to 10-bit, 4:2:2 to 4:2:0, etc.). However
this can have a severe performance cost and can result in quality loss
depending on the transforms required.
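If you want to do that normalization yourself rather than through the
filter pipeline, libswscale is the usual tool. This is only a sketch
under the assumption that you want 8-bit RGB24 out; sws_getContext and
sws_scale are the real API calls, but the format choice and the
stripped-down error handling are mine:

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>

    /* Sketch: convert any decoded frame to RGB24.  Error handling trimmed. */
    static AVFrame *to_rgb24(const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;

        dst->format = AV_PIX_FMT_RGB24;
        dst->width  = src->width;
        dst->height = src->height;
        if (av_frame_get_buffer(dst, 0) < 0) {
            av_frame_free(&dst);
            return NULL;
        }

        struct SwsContext *sws =
            sws_getContext(src->width, src->height,
                           (enum AVPixelFormat)src->format,
                           dst->width, dst->height, AV_PIX_FMT_RGB24,
                           SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws) {
            av_frame_free(&dst);
            return NULL;
        }

        /* This call is where the performance (and potential quality) cost lives. */
        sws_scale(sws, (const uint8_t *const *)src->data, src->linesize,
                  0, src->height, dst->data, dst->linesize);

        sws_freeContext(sws);
        return dst;
    }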
The codec will typically specify its output format, largely dependent
on the nature of the encoding, and then announce AVFrames that conform
to that format. Since you're largely dealing with MPEG-2 and H.264
video, it's almost always going to be YUV 4:2:0 planar. The filter
pipeline can then do conversion if needed, either because you told it
to convert or because you specified a filter pipeline where an
individual filter didn't support the format it was being given.
See libavutil/pixfmt.h for a list of all the possible formats in which
AVFrames can be announced by a codec.
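As a quick sanity check, you can print the name of whatever the codec
announced (sketch only; assumes a decoder that's already open and a
frame you've already received):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    /* Sketch: print what the decoder announces vs. what a frame carries. */
    static void report_formats(const AVCodecContext *dec, const AVFrame *frame)
    {
        printf("decoder pix_fmt: %s\n", av_get_pix_fmt_name(dec->pix_fmt));
        printf("frame format:    %s\n",
               av_get_pix_fmt_name((enum AVPixelFormat)frame->format));
    }

For typical consumer MPEG-2 or H.264 content, both lines will usually
read "yuv420p".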
Devin
--
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com e: devin.heitmueller at ltnglobal.com