[Ffmpeg-devel] Embedded preview vhook/voutfb.so /dev/fb
Bobby Bingham
Sun Apr 1 04:31:06 CEST 2007
Rich Felker wrote:
> On Fri, Mar 30, 2007 at 02:07:12PM +0800, Bobby Bingham wrote:
>>> Now, on to picture sequence filters....
>>>
>>> These would be similar to picture filters, but also have an amount of
>>> necessary 'context' along the time axis, i.e. a number of past and
>>> future frames that need to be available in order for the filter to
>>> operate. Again, what to do when the in/out correspondence isn't
>>> one-to-one is a little tricky. Perhaps mimic the way scaling ends up
>>> working along the spatial axes...
>>>
>> Is there any reason the picture filters can't just be a picture
>> sequence filter with temporal context = 0?
>
> They could, but the idea was to have a simpler filter-writing API for
> pure picture filters where it doesn't have to know anything about the
> idea of "frames" or "sequence".
>
> Also it's possible that a filter operating on sequences of frames
> could need to see them in-order, even if it doesn't explicitly need
> context to be able to look back at old frames. For example it might
> remember the brightness of the previous frame internally, or remember
> a phase (in inverse telecine or phase shifting), etc..
>
> Or if it's a temporal blur, it might remember the sum/average of the
> past few frames in an internally-kept buffer (or a permanent output
> buffer), rather than having to re-sum them from context each time.
> This is the time-dimension analogue of what Michael was talking about
> with boxblur and slices. :)
Okay.
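Just to check that I'm picturing the same thing: something like this hypothetical temporal blur, which asks the framework for zero frames of context but still depends on getting frames in order, because it keeps a running average internally. All names here are made up; plain C just to illustrate the idea.

#include <stdint.h>
#include <string.h>

/* Hypothetical temporal blur: requests no temporal context from the
 * framework, but keeps a running average internally, so it only works
 * if frames arrive in presentation order.  Single 8-bit plane. */
typedef struct TBlur {
    uint8_t *accum;   /* running average of the frames seen so far */
    int      primed;  /* becomes 1 once the first frame arrives    */
} TBlur;

static void tblur_frame(TBlur *tb, uint8_t *dst, const uint8_t *src, int n)
{
    int i;
    if (!tb->primed) {
        memcpy(tb->accum, src, n);
        tb->primed = 1;
    }
    for (i = 0; i < n; i++) {
        /* exponential moving average: 3 parts old, 1 part new */
        tb->accum[i] = (3 * tb->accum[i] + src[i] + 2) >> 2;
        dst[i] = tb->accum[i];
    }
}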
I've started looking at the source for some of the simple libmpcodecs
filters, as well as reading DOCS/tech/libmpcodecs.txt (does someone know
off-hand if this is up to date?). I haven't looked at much of the actual
libmpcodecs code itself yet though, so many of these questions could
probably be answered by looking there first. But while we're on the
topic, it might be faster to ask here.
How does libmpcodecs handle slice order? Does it guarantee any specific
order? Actually, backing up a step, I know slice size is codec-defined -
I assume order is too? What orders are common, if any? And sizes: do
slices generally correspond to the entire image width, or are smaller
widths common too?
One thing that seems simple enough is to let filters specify whether they
need slices or frames in order. Simple filters that really don't care
about order at all can allow out-of-order filtering, but once an
in-order filter is encountered, the frames or slices will need to be
shuffled around. Filters that can benefit from in-order filtering, even
with zero temporal context (like the temporal blur example), can request
it. The same idea applies in the spatial dimension with slices.
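As a very rough sketch of what such a declaration could look like (all of
these flag and struct names are invented, not taken from any existing API):

/* Invented names, just to make the idea concrete. */
#define ORDER_ANY      0   /* frames/slices may arrive in any order     */
#define ORDER_IN_ORDER 1   /* filter needs (or wants) them in order     */

typedef struct FilterInfo {
    const char *name;
    int frame_order;       /* temporal ordering requirement             */
    int slice_order;       /* same idea along the spatial axis          */
    int temporal_context;  /* past/future frames the filter must see    */
} FilterInfo;

/* The temporal blur above: zero temporal context, but it still asks for
 * in-order frames so its running average stays meaningful. */
static const FilterInfo tblur_info = {
    .name             = "tblur",
    .frame_order      = ORDER_IN_ORDER,
    .slice_order      = ORDER_ANY,
    .temporal_context = 0,
};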
I also want to ask about the motivation for treating full frames and
slices differently in libmpcodecs. Obviously slices have the benefit of
being small and fitting in cache - but is there any reason for having
separate APIs for the two (put_image() vs. draw_slice())? At first
glance, it seems to me that if a filter requires full frames, the frame
could still be passed to it as a single, frame-sized slice. That would
have the benefit that filters which support both frames and slices could
implement just draw_slice() instead of implementing the same operations
again in put_image().
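For example, something along these lines (signatures simplified from
memory, not copied from vf.h, and the wrapper name is made up):

/* Simplified, hypothetical filter instance - not the real vf.h struct. */
typedef struct Filter {
    void (*draw_slice)(struct Filter *f, unsigned char *planes[],
                       int stride[], int w, int h, int x, int y);
} Filter;

/* A whole frame is just one slice that covers the full image, so a
 * put_image()-style entry point could be this thin: */
static void put_whole_image(Filter *f, unsigned char *planes[],
                            int stride[], int width, int height)
{
    f->draw_slice(f, planes, stride, width, height, 0, 0);
}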
--
Bobby Bingham