[MPlayer-G2-dev] Re: slices in g2
D Richard Felker III
dalias at aerifal.cx
Wed Dec 17 06:33:09 CET 2003
[Intentionally removed this header to start a new thread :]
In-Reply-To: <200312150913.hBF9D2aD001616 at mail.mplayerhq.hu>
On Mon, Dec 15, 2003 at 10:13:02AM +0100, Arpi wrote:
> - correct support for slices (note there are 2 kinds of strides: one
> when you call the next filter's draw_slice after each slice's
> rendering into the next vf's buffer is completed, and the other
> when you have your own small buffer where one slice overwrites the
> previous one)
Arpi (and others): I'd like some advice on designing the new slice
support for g2. It's a delicate balancing act between functionality
and complexity: if using slices is too complicated, no one will
support them in their filters and they'll be no help; but if there
isn't enough functionality, they'll be just as useless...
I'm thinking about various conditions the source and destination
filter might want to impose on slices...
* x=0 and w=fullwidth
* Rendering slices in order (top to bottom)
* Rendering slices in order but bottom-to-top (dumb codecs)
* Some specified amount of context around the slice (for filtering)
* Direct rendering with notification as slices are finished
* Ability to examine the entire source buffer slices are being drawn
from, including already-completed parts...
* Alignment? Certainly we should have (x%8)==0... and y should be
divisible by the chroma subsampling factor...
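One way to make these conditions concrete: the destination filter
could advertise a set of restriction flags. This is just a sketch
with names I'm making up on the spot -- nothing like it exists in g2
yet:

  /* Hypothetical restriction flags a filter advertises for the
   * slices it's willing to accept. All names invented for discussion. */
  #define SLICE_FULL_WIDTH   0x01  /* require x==0 and w==fullwidth */
  #define SLICE_IN_ORDER     0x02  /* slices must arrive top-to-bottom */
  #define SLICE_BOTTOM_UP    0x04  /* slices must arrive bottom-to-top */
  #define SLICE_NEED_CONTEXT 0x08  /* extra lines around each slice */
  #define SLICE_KEEP_BUFFER  0x10  /* completed parts must stay readable */
  #define SLICE_ALIGN_8      0x20  /* require (x%8)==0 and chroma-safe y */

  typedef struct slice_caps {
      unsigned flags;     /* bitwise OR of the SLICE_* flags above */
      int context_lines;  /* context needed above/below each slice */
  } slice_caps_t;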
Here are some possible scenarios using slices, just for things to
think about:
1. Video codec is using internal buffers, rendering into a filter via
slices. The filter will do most of its processing while receiving
slices (maybe just taking some metrics from the image), but it also
wants to see the final picture.
2. SwScale has (hypothetically) been replaced by separate horizontal
and vertical scale filters and colorspace pre/post-converters. And
we want to render slices through the whole damn thing...
3. Tfields wants to improve cache performance by splitting fields a
slice at a time, but it needs a few pixels of context to do the
filtering for the quarter-pixel translate.
Now some further explanation of the issues and questions I have:
With the first scenario, let's simplify by saying that the filter is
just going to pass the image through unchanged. Thus, there's a very
clear need for it to have the final picture available; otherwise it
would have to waste time making a copy while processing slices. Having
an arrangement where a codec draws into a buffer and notifies the
filter as slices are completed is fairly straightforward when the
buffer is a DR buffer provided by the filter, because the filter is
the _owner_ of this buffer and already has the private data areas and
locks to keep track of it. But if the buffer is AUTO(-allocated) or
EXPORTed from the codec, some sort of mechanism is going to be needed
to inform the filter about its existence and attach it to slice
rendering.
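To make that concrete, here's roughly the shape such a mechanism
could take. Again, just a sketch; the callback names are invented and
shouldn't be read as actual g2 entry points:

  struct mp_image;  /* the image/buffer struct, whatever g2 ends up with */

  typedef struct vf_instance {
      struct vf_instance *next;
      /* Report what slice restrictions this filter imposes (see the
       * slice_caps_t sketch above); NULL if it can't take slices. */
      void (*query_slice_caps)(struct vf_instance *vf, slice_caps_t *caps);
      /* Called once when a buffer the filter does NOT own (AUTO or
       * EXPORT) is about to be sliced into, so the filter can attach
       * its own tracking state to it. */
      void (*slice_begin)(struct vf_instance *vf, struct mp_image *mpi);
      /* Called as each slice is completed. */
      void (*draw_slice)(struct vf_instance *vf, struct mp_image *mpi,
                         int x, int y, int w, int h);
      /* Called when the picture is complete, for filters (like the
       * one in scenario 1) that also want to see the final image. */
      void (*slice_end)(struct vf_instance *vf, struct mp_image *mpi);
      void *priv;
  } vf_instance_t;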
In scenario 3, there are several ways to ensure tfields has the
necessary context. One way is for it to store a few lines from the
previous slice, but this is incredibly PAINFUL for the filter author
if slices can come in any order, any shape, and any size! Another
possible solution is forcing the source that's sending the slices to
ensure there's a border around each slice with sufficient context,
but that makes things painful for the source filter author.
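Note that the first approach stops being painful the moment we can
require in-order, full-width slices; then the bookkeeping reduces to
something like this (a hypothetical helper with made-up names, one
plane only, assuming every slice is at least CONTEXT lines tall):

  #include <string.h>

  #define MAX_STRIDE 4096  /* made-up bound, just for the sketch */
  #define CONTEXT    4     /* lines of context the filtering needs */

  /* Per-instance state: the bottom CONTEXT lines of the previous
   * slice, saved so the next slice can read them as upper context. */
  struct ctx_state {
      unsigned char saved[CONTEXT * MAX_STRIDE];
      int have;  /* nonzero once saved[] is valid */
  };

  /* Assumes slices arrive in order, top to bottom, full width --
   * exactly the restrictions that make this tolerable to write. */
  static void draw_slice_with_context(struct ctx_state *st,
                                      unsigned char *src, int stride,
                                      int y, int h)
  {
      /* ... filter lines y..y+h-1 here; lines near the top of the
       * slice take their missing upper context from st->saved ... */

      /* Save the bottom CONTEXT lines for the next slice. */
      memcpy(st->saved, src + (h - CONTEXT) * stride,
             (size_t)CONTEXT * stride);
      st->have = 1;
  }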
As for scenario 2, I don't even want to think about it... :) All the
above concerns about alignment, rendering order, extra lines of
context, etc. etc. etc. come into play.
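Just to show why: even the geometry translation each link has to do
means every filter in the chain re-encounters all of these issues. A
sketch (reusing the vf_instance_t invented above) of a 2x vertical
scaler forwarding slices:

  /* Hypothetical 2x vertical upscaler passing slices down the chain. */
  struct scale_priv { struct mp_image *out; };

  static void scale_draw_slice(vf_instance_t *vf, struct mp_image *mpi,
                               int x, int y, int w, int h)
  {
      struct scale_priv *p = vf->priv;
      int oy = 2 * y, oh = 2 * h;  /* output rectangle for 2x vertical */

      /* ... scale source lines y..y+h-1 from mpi into p->out; slices
       * at the edges need context lines, which is scenario 3's
       * problem all over again ... */

      /* Forward the finished output region; the next filter may have
       * its own alignment/order restrictions, and they compound. */
      vf->next->draw_slice(vf->next, p->out, x, oy, w, oh);
  }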
So in summary...
The basic question is: what degree of freedom should codecs/filters
rendering with slices have, and what sort of restrictions should be
placed on them? If the restrictions are allowed to vary from filter
to filter, how should they be negotiated?
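For illustration, one conceivable shape for per-link negotiation at
config time, tying together the made-up structures from above -- a
strawman, not the proposal:

  /* The source-side capability checks are placeholders; everything
   * here is invented for discussion. */
  int src_can_render_in_order(vf_instance_t *src);  /* hypothetical */
  int src_can_pad_context(vf_instance_t *src);      /* hypothetical */

  static int negotiate_slices(vf_instance_t *src, vf_instance_t *dst)
  {
      slice_caps_t caps;

      if (!dst->query_slice_caps)
          return 0;                 /* dst takes no slices at all */
      dst->query_slice_caps(dst, &caps);

      if ((caps.flags & SLICE_IN_ORDER) && !src_can_render_in_order(src))
          return 0;                 /* fall back to whole-frame rendering */
      if (caps.context_lines > 0 && !src_can_pad_context(src))
          return 0;

      return 1;                     /* slices enabled on this link */
  }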
I'll post a rough proposal soon.
Rich