[FFmpeg-devel] Towards YUVJ removal

Niklas Haas ffmpeg at haasn.xyz
Fri Dec 9 13:49:41 EET 2022


So, as was discussed at the last meeting, we should move towards
removing YUVJ. I want to gather feedback on what appears to be the
major hurdle, and on possible ways to solve it.

The major issue is how to handle combining limited range input with an
encoder for a format that only accepts full range data. The common
case, for example, would be converting a frame from a typical video
file to a JPEG:

$ ffmpeg -f lavfi -i smptebars -vframes 1 output.jpg

Currently, this works because the JPEG encoder only advertises YUVJ
pixel formats, and therefore ffmpeg auto-inserts swscaler to convert
from limited range to full range. But this depends on conflating color
range and pixel formats, which is exactly what has been marked
deprecated for an eternity.

Now, there are some solutions I can see for how to handle this case in
a non-YUVJ world:

1. Simply output an error, and rely on the user to insert a conversion
   filter, e.g.:

   $ ffmpeg -f lavfi -i smptebars -vframes 1 output.jpg
   error: inputs to jpeg encoder must be full range

   $ ffmpeg -f lavfi -i smptebars -vframes 1 -vf scale=out_range=jpeg output.jpg
   ... works

2. Have the JPEG encoder take care of range conversion internally, by
   using sws to convert limited to full range (see the sketch after
   this list).

3. Allow filters to start exposing color space metadata as part of
   filter negotiation, and then auto-insert swscaler whenever colorspace
   conversion needs to happen as a result of filters not accepting the
   corresponding color metadata. This would also allow more than just
   conversion between limited/full range.

4. Combine approach 1 with an encoder flag for "supports full range
   only", and have ffmpeg.c auto-insert a scale filter as the last step
   before the encoder if this flag is set (and input is limited range).
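
For reference, a minimal, untested sketch of what the internal range
conversion in option 2 could look like with the existing swscale API
(error handling omitted); this is also roughly what the auto-inserted
scale filter ends up doing today:

    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    /* Build an swscale context that converts yuv420p limited range to
     * yuv420p full range, leaving size and pixel format untouched. */
    static struct SwsContext *limited_to_full(int w, int h)
    {
        struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUV420P,
                                                w, h, AV_PIX_FMT_YUV420P,
                                                SWS_POINT, NULL, NULL, NULL);
        if (!sws)
            return NULL;

        /* srcRange = 0 (limited/MPEG), dstRange = 1 (full/JPEG) */
        sws_setColorspaceDetails(sws,
                                 sws_getCoefficients(SWS_CS_ITU601), 0,
                                 sws_getCoefficients(SWS_CS_ITU601), 1,
                                 0, 1 << 16, 1 << 16);
        return sws;
    }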

1 would be the most explicit, but would break every existing pipeline
that includes a conversion to JPEG, which is probably a very large
number of pipelines.

2 would be the least work, but violates abstraction boundaries.

3 would be the most work and is, IMO, of questionable gain.

4 would be both explicit and not break existing workflows.
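
To illustrate 4, a hypothetical sketch of the check ffmpeg.c could
perform before setting up the encoder; the capability flag name is
invented here purely for illustration and is not an existing API:

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Hypothetical flag, not (yet) part of the API. */
    #define AV_CODEC_CAP_FULL_RANGE_ONLY (1u << 30)

    /* Returns 1 if ffmpeg.c should auto-insert something like
     * "scale=out_range=full" as the last step before the encoder. */
    static int needs_range_autoconv(const AVCodec *codec, const AVFrame *frame)
    {
        return (codec->capabilities & AV_CODEC_CAP_FULL_RANGE_ONLY) &&
               frame->color_range != AVCOL_RANGE_JPEG;
    }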

Thoughts?

