[MPlayer-G2-dev] vp layer and config
D Richard Felker III
dalias at aerifal.cx
Mon Dec 15 11:09:26 CET 2003
On Mon, Dec 15, 2003 at 10:13:02AM +0100, Arpi wrote:
> Hi,
>
> > Despite it already being somewhat ugly and complicated, I'd actually
> > like to propose adding some more config-time negotiations:
>
> Instead of hacking and adding more, i would suggest to drop g1's
> vf completely and re-design from scratch for g2.
> Yes i know i was the one against this way, but i've changed my mind :)
That's basically what I'm doing already... :)
> some issues to solve:
> - runtime re-configuration (aspect ratio, size, stride, colorspace(?) changes)
Some filters will naturally be able to support changes like this, but
many won't without some sort of reset/discontinuity in output.
Pretty much all temporal filters will at least show momentary
artefacts when reconfiguring; this is inevitable.
Some codecs (even prominent ones like lavc!!) will not allow you to
change stride after the first frame is decoded. I had to add a new
stride restriction type just for this.
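To make that concrete, the kind of restriction I mean looks roughly
like this (the flag name is made up, not the actual one in vp.h):

#define VP_STRIDE_STATIC 0x08  /* stride may not change after the
                                * first frame has been decoded */

A codec like lavc would set this on its link so that no later
renegotiation tries to hand it a different stride.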
Anyway, here's a model I see for runtime reconfiguration:
* User changes parameters for a filter through some sort of interface
(gui widgets, osd, lirc, slavemode, keyboard, whatever).
* New config gets passed to the filter via the cfg layer.
* The filter then calls vp_config on its output link to renegotiate
the image size, format, stride restrictions, etc. it will need.
Similar idea if a new filter gets inserted at runtime, or if one gets
removed.
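Very rough sketch of that flow, with made-up names except vp_config
(which is the real renegotiation call; its exact prototype here is a
guess):

struct my_filter {
    struct vp_link *out_link;   /* output link (type name is a guess) */
    int w, h, out_fmt;
};

/* called by the cfg layer after the user changed a parameter
 * (gui widget, osd, lirc, slave mode, keyboard, ...) */
static int my_filter_set_size(struct my_filter *p, int w, int h)
{
    p->w = w;
    p->h = h;
    /* renegotiate size/format/stride restrictions on the output link;
     * downstream filters just see an ordinary (re)config call, not a
     * special "resize event" */
    return vp_config(p->out_link, p->w, p->h, p->out_fmt);
}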
> - aspect ratio negotiation through the vf layer to vo
> (pass thru size & aspect to the vo layer, as some vo (directx, xv) doesn't
> like all resolutions)
Hm? I think what I said about just passing SAR (sample aspect ratio)
instead of DAR (display aspect ratio) pretty much covers it. No
negotiation is needed. As long as the correct SAR is available at the
end of the filter chain, the calling app and/or vo code can use it to
compute an appropriate, compatible display size.
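For example (not a real API, just the arithmetic):

static void sar_to_display(int w, int h, double sar, int *d_w, int *d_h)
{
    /* one possible policy: scale width, keep height */
    *d_w = (int)(w * sar + 0.5);
    *d_h = h;
}

The vo can then round or clamp that display size to whatever the
output device actually accepts (the directx/xv resolution quirks Arpi
mentions).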
> - window resizing issue (user resizes vo window, then reconfigure scale
> expand etc filters to produce image in new size)
Let me propose a solution. As we discussed before, having resize
events propagate back through the filter chain is very bad -- in many
cases you'll get bogus output. How about this instead: if software
zoom is enabled, something in the vo module inserts a scale filter and
keeps a reference to it, so it can reconfigure this filter later. The
same idea applies to expand. Thus we reduce it to the same problem as
runtime reconfiguration by the user, except that it's controlled by
the vo module instead of by the gui/osd/whatever widgets the user is
interacting with.
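Sketch of what I mean, with invented names (only the idea matters):

struct vo_priv {
    struct vp_node *sw_scale;   /* the scale filter the vo inserted */
};

static void vo_handle_resize(struct vo_priv *vo, int win_w, int win_h)
{
    if (!vo->sw_scale)
        return;                 /* hardware scaling, nothing to do */
    /* same code path as any user-driven runtime reconfiguration; the
     * scale filter renegotiates its output link itself.
     * my_scale_set_size() is a hypothetical helper that goes through
     * the cfg layer just like a user-initiated change would. */
    my_scale_set_size(vo->sw_scale, win_w, win_h);
}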
> - better buffer management (get/put_buffer method)
Already doing it.
> - split mp_image to colorspace descriptor (see thread on this list)
> and buffer descriptor (stride, pointers), maybe a 3rd part containing
> frame descriptor (frame/field flags, timestamp, etc so info related to
> the visual content of the image, not the physical buffer itself, so
> linear converters (colorspace conf, scale, expand etc) could simply
> passthru this info and change buffer desc only)
Agree, nice ideas!
IMO it's not ok to just point to the original frame's descriptor
(since they might not have the same lifetime!) so the info will have
to be copied instead, but that's still easy as long as we put it in a
nice struct without multiple levels of pointers inside.
I'll make these changes to my working copies of vp.[ch].
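Roughly what I have in mind (field names are guesses, not the final
vp.h layout):

typedef struct {
    unsigned int fmt;           /* fourcc / colorspace id */
    int bpp, num_planes;        /* static colorspace properties */
} mp_csp_desc;

typedef struct {
    unsigned char *planes[4];   /* plane pointers */
    int stride[4];              /* per-plane strides */
} mp_buffer_desc;

typedef struct {
    double pts;                 /* timestamp */
    int flags;                  /* frame/field flags etc. */
} mp_frame_desc;

A linear converter (colorspace conversion, scale, expand, ...) copies
the frame descriptor by value -- no shared pointers, so no lifetime
problems -- and only builds a new buffer descriptor for its output.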
> - correct support for slices (note there are 2 kinds of stride: one
> where you call the next filter's draw_slice after each slice has been
> rendered into the next vf's buffer, and the other where you have your
> own small buffer in which one slice overwrites the previous one)
Hmm, I said this too in a post a while back, but then I worried that
it was too complicated... Do you have a proposal for how it should
work?
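Just to restate the two cases as I read them (no proposal yet; the
draw_slice arguments below roughly follow the old g1-style shape, and
the variables are whatever the source filter has at hand):

/* case 1: slices go straight into the next filter's full-size buffer,
 * so the stride passed along is that buffer's full image stride */
draw_slice(next, dst->planes, dst->stride, w, slice_h, 0, y);

/* case 2: the source only has a small private buffer holding one
 * slice at a time (each new slice overwrites the last), so the stride
 * is the small buffer's own stride and the data must be consumed
 * before the next slice arrives */
draw_slice(next, tmp->planes, tmp->stride, w, slice_h, 0, y);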
> - somehow solve framedropping support
> (now it's nearly impossible in g2, as you have to decode and pass a
> frame through the vf layer to get its timestamp, to be used to
> decide if you drop it, but then it's already too late to drop)
No, it's very easy with my design! :))
All frames, even dropped ones, pass through the vf chain. But if you
set the drop flag when calling pull_image, the source codec/vo isn't
required to output any valid image, just the metadata. In fact we
could create a new buffer type "DUMMY" for it, where there are no
actual buffers.
The benefit of this system is that the pull_image call still
propagates all the way back through the chain to the codec. The only
difference is that the drop flag is set. So all filters naturally know
if they're missing frames, and if they really need to see every frame,
they can refuse to propagate the drop flag any further.
(IMO there should be some policy for what they're required to do, and
perhaps several levels of dropflag, the highest of which must always
be honored.)
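Hypothetical example of what a filter's pull hook could look like (the
flag name, the DUMMY handling and all prototypes are made up; only
pull_image itself comes from the design above):

static mp_image_t *my_pull_image(struct vp_node *p, int flags)
{
    /* a filter that really needs every frame can strip the drop flag
     * before passing the request upstream */
    if ((flags & VP_PULL_DROP) && p->needs_all_frames)
        flags &= ~VP_PULL_DROP;

    /* the call still travels all the way back to the codec, so every
     * filter knows whether it is being handed a dropped frame */
    mp_image_t *in = vp_pull_image(p->prev, flags);
    if (!in)
        return NULL;

    if (flags & VP_PULL_DROP)
        return in;      /* DUMMY frame: metadata only, pass it through */

    return my_process(p, in);
}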
> i think the new vf layer is the key to nearly everything.
Agree totally! Maybe new af layer too...? :)
Rich