[FFmpeg-devel] shared api for exposing a texture
Vittorio Giovara
vittorio.giovara at gmail.com
Fri May 15 16:09:32 CEST 2015
Hi,
following the positive trend as of late, here is a proposed API,
open for shared discussion.
There are a couple of formats based on texture compression, usually
called DXTn or BCn, described here:
http://en.wikipedia.org/wiki/S3_Texture_Compression. Currently in
libavcodec only txd uses this style, but there are others I am working
on, namely Hap and DDS.
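For reference, all of these schemes share the same basic layout: the
image is split into 4x4 pixel blocks, each stored as a small
fixed-size record. As a concrete, well-documented example, a DXT1/BC1
block packs 16 pixels into 8 bytes:

    #include <stdint.h>

    /* One DXT1/BC1 block: 8 bytes covering a 4x4 pixel area. */
    typedef struct DXT1Block {
        uint16_t color0;  /* first endpoint, RGB565 */
        uint16_t color1;  /* second endpoint, RGB565 */
        uint32_t indices; /* 2 bits per pixel, selecting between the
                             endpoints and colors interpolated from
                             them; color0 <= color1 switches the block
                             to the 3-color + transparent mode */
    } DXT1Block;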
What I thought while working on them (and later found out is actually
of commercial interest) is that the texture could potentially be left
intact, rather than being decoded (or encoded) internally by
libavcodec. The user might want to skip decoding the texture
altogether and decode it themselves, possibly exploiting GPU
acceleration.
Unfortunately these formats often employ additional compression or add
custom headers, so the user can't just demux and accelerate the output
frame as is. Interested codecs could let the user choose this behavior
with a private option, as sketched below.
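Roughly like this, using the usual AVOption machinery; "skip_texture"
is a hypothetical name for such a private option, whatever we end up
calling it:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static AVCodecContext *open_texture_decoder(void)
    {
        AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_TXD);
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        /* "skip_texture" is a hypothetical private option asking the
         * decoder to pass the compressed texture through instead of
         * decoding it to RGBA */
        av_opt_set_int(avctx->priv_data, "skip_texture", 1, 0);
        avcodec_open2(avctx, codec, NULL);
        return avctx;
    }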
There are a couple of strategies here.
1. Introduce a pixel format for each texture: this has the advantage
of getting appropriately-sized buffers, but in the end it would
require a different pixel format for each variant of each texture
compression scheme. Our users tend to dislike this option, and I am
afraid it would require us to constantly add new pixel formats as
texture support is added.
2. Introduce a single opaque pixel format: this has the advantage of
not having to update the API for every new format, but it leaves users
in the dark as to what the texture actually is, and requires them to
know what they are doing.
3. Introduce a single opaque pixel format plus side data: as a
variant of the above, the side data would indicate which variant of
texture compression is used, letting the user know how to deal with
the data without anything special (a user-side sketch follows after
this list).
4. Write in the normal data buffers: instead of filling RGBA buffers
with decoded data, the raw texture could be written to data[0], and
the side data would help the user understand how to interpret it.
This could be somewhat hacky, since it's not something users would
normally expect.
5. Introduce refcounted side data: just write everything in a side
data buffer and let the user act upon it on demand. Similar to idea 3,
but without introducing new pixel formats. Could potentially be useful
beyond this particular use case.
6. Write in the 'special' data buffer: similar to what is done for
paletted formats, write the texture in data[1], so that normal users
don't have to worry about anything and special users might just access
the appropriate buffers.
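To make 3 concrete, here is roughly what the consumer side could look
like; both AV_PIX_FMT_TEXTURE and AV_FRAME_DATA_TEXTURE_FORMAT are
hypothetical names for the proposed pixel format and side data type:

    #include <libavutil/frame.h>
    #include <libavutil/error.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    static int upload_texture(AVFrame *frame)
    {
        AVFrameSideData *sd;

        if (frame->format != AV_PIX_FMT_TEXTURE) /* hypothetical */
            return AVERROR(EINVAL);

        /* hypothetical side data type carrying the texture variant */
        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_TEXTURE_FORMAT);
        if (!sd)
            return AVERROR(EINVAL);

        switch (sd->data[0]) { /* say, one enum byte per frame */
        case 0: /* DXT1 */
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                   frame->width, frame->height, 0,
                                   frame->buf[0]->size, frame->data[0]);
            break;
        /* other variants would map to the matching GL formats */
        }
        return 0;
    }

With 6 the same code would read from data[1] instead, and with 5 the
payload itself would live in the side data buffer.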
Every idea has some drawbacks and some benefits; we should try to
trade off between new APIs, maintenance, and actual use cases. In my
opinion 5 is interesting but probably overkill for this use case. I
like 3 for its simplicity and ease of use.
Which one would other lavc devs consider the most appropriate? What
about our API users?
Cheers,
--
Vittorio