[MPlayer-dev-eng] Moving the audio plugins to libmpcodecs - design of plugin management
Anders Johansson
ajh at atri.curtin.edu.au
Tue Jun 18 10:01:56 CEST 2002
Hi,
> Hi,
>
> > I have continued working on the design of the new audio plugin layer
> > (moving the audio plugins to libmpcodecs). I have also had a look at
> > what other people have done. There seems to be a move towards floating
> > point processing of sound. I believe this is a very good thing, and
> > propose we do the same. Below is a draft of the design plus a number
> > of issues I would like feedback on.
> >
> >
> > The sound data format (not the information itself) can be changed with
> > regard to the following aspects:
> > Sample Frequency
> > Bits per sample
> > Big/Little endian
> > Signed/Unsigned
> > Float/Integer
> > Special formats such as AC3 and MPEG audio
> > Number of channels
>
> imho we should use a single id to describe the audio format, just like in
> video with IMGFMT_* or currently with AFMT_*
I was thinking of using bitmasks, I hope that is OK?
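To make that a bit more concrete, here is a rough sketch of what I have
in mind (the AF_FMT_* names, the bit layout and the af_data_t struct are
only a proposal, none of this exists yet):

/* Proposal only: one int holds the sample format as a bitmask,
 * while the sample rate and channel count stay as plain integers. */
#define AF_FMT_LE        (1<<0)  /* little endian (0 = big endian) */
#define AF_FMT_UNSIGNED  (1<<1)  /* unsigned samples (0 = signed)  */
#define AF_FMT_FLOAT     (1<<2)  /* floating point (0 = integer)   */
#define AF_FMT_SPECIAL   (1<<3)  /* AC3, MPEG audio and other
                                  * non-PCM formats                */

typedef struct af_data_s {
  void* audio;   /* pointer to the audio data */
  int   len;     /* buffer length in bytes    */
  int   rate;    /* sample rate in Hz         */
  int   nch;     /* number of channels        */
  int   format;  /* bitmask as defined above  */
  int   bps;     /* bytes per sample          */
} af_data_t;
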
> > Design:
> > 3 types of plugins: input, effect and output.
> why? imho one interface is enough, don't make 3 types of plugins
OK, I'll do that instead. It means that some plugins will have to be
reentrant, but I guess that is fine anyway.
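Just to check that I understand you correctly, the single interface could
then look something like this (all names below are only a sketch, building
on the af_data_t above; reentrancy comes from keeping all state in a
per-instance setup pointer instead of in static variables):

typedef struct af_instance_s {
  /* negotiate/reconfigure, e.g. tell the filter its input format */
  int (*control)(struct af_instance_s* af, int cmd, void* arg);
  /* filter one block of audio and return the result */
  af_data_t* (*play)(struct af_instance_s* af, af_data_t* data);
  /* free the instance */
  void (*uninit)(struct af_instance_s* af);
  void*      setup;           /* private per-instance state     */
  af_data_t* data;            /* output buffer of this instance */
  struct af_instance_s* next; /* next filter in the chain       */
} af_instance_t;
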
> > The Input plugins are stored in a linked list and are auto-configured
> > at the start of a new movie.
> ...
>
> there should be some special plugins, like expand or scale for video; they
> are auto-inserted by the core when a conversion is needed
>
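For the audio case that auto-insertion could end up looking roughly like
this in the core (AF_OK/AF_ERROR, the control id and the two helpers
af_new()/af_prepend() are invented names, only meant to show the idea):

#define AF_OK     0
#define AF_ERROR -1
#define AF_CONTROL_SET_FORMAT 1

extern af_instance_t* af_new(const char* name);   /* invented: load a filter by name */
extern void af_prepend(af_instance_t* before, af_instance_t* f); /* invented: link it in */

static int af_fix_format(af_instance_t* prev, af_instance_t* next)
{
  af_instance_t* conv;

  /* ask 'next' whether it accepts the output format of 'prev' */
  if (next->control(next, AF_CONTROL_SET_FORMAT, prev->data) == AF_OK)
    return AF_OK;

  /* it does not: insert a conversion filter in between */
  conv = af_new("format");
  if (!conv)
    return AF_ERROR;
  af_prepend(next, conv);   /* prev -> conv -> next */
  conv->control(conv, AF_CONTROL_SET_FORMAT, prev->data);
  return next->control(next, AF_CONTROL_SET_FORMAT, conv->data);
}
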
> > Another issue is buffering and how to handle changes in buffer size.
> > One idea I have had is to reallocate buffers when necessary. The
> > potential bugs this could lead to could be solved by supplying a
> > standardized memory management library to the plugin writers (i.e. a
> > function to realloc buffers). The problem with this approach is that the
>
> why not do something like af_get_buffer()?
> so, a per-plugin buffering system, like in video?
> the plugin tells the core how big a buffer it needs to process the given
> amount of audio data, then gets the buffer and does it.
>
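Something like this is what I would imagine for af_get_buffer() (the n/d
ratio telling the core how the filter scales the amount of data is only my
guess at how the size calculation could be expressed):

#include <stdlib.h>

/* Sketch only: grow the filter's local output buffer so that it can
 * hold the result of processing in_len bytes of input. The filter
 * reports how it scales the data amount as a ratio n/d. */
static int af_get_buffer(af_instance_t* af, int in_len, int n, int d)
{
  int out_len = in_len * n / d;
  if (af->data->len < out_len) {
    void* p = realloc(af->data->audio, out_len);
    if (!p)
      return AF_ERROR;
    af->data->audio = p;
    af->data->len   = out_len;
  }
  return AF_OK;
}
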
> anyway, we should reverse some things here.
> the pull should be started at the end, i.e. at the soundcard/encoder.
>
> so, in contrast to video filters, the audio filter list should be started
> at the last filter, and its get_data() should calculate how much data it
> needs to produce the requested number of samples, call the previous
> filter's get_data() and so on. the last filter is a wrapper around the
> audio codec.
So you mean that the buffering should be moved out of MPlayer and put
into the audio codec? Because, as I understand it, one can never know how
much data the audio codec will produce. Is this really the best solution?
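Just so I am sure I follow you, is this roughly the shape of it? (the
'prev' pointer and needed_input() below are my own guesses at the
interface):

/* Sketch only, of the pull model as I read it: the soundcard/encoder
 * end asks the last filter for out_len bytes, each filter works out
 * how much input that takes and pulls it from the filter before it,
 * and the pull ends in a wrapper around the audio codec. */
typedef struct af_pull_s {
  struct af_pull_s* prev;  /* upstream filter, NULL for the codec wrapper */
  /* bytes of input needed to produce out_len bytes of output */
  int (*needed_input)(struct af_pull_s* af, int out_len);
  /* fill buf with out_len bytes of output, return bytes written */
  int (*get_data)(struct af_pull_s* af, void* buf, int out_len);
} af_pull_t;

/* get_data() of a pass-through filter, just to show the control flow:
 * for such a filter needed_input() returns out_len, so it can pull
 * straight into the caller's buffer and would then process the
 * samples in place before returning them. */
static int af_dummy_get_data(af_pull_t* af, void* buf, int out_len)
{
  int in_len = af->needed_input(af, out_len);
  return af->prev->get_data(af->prev, buf, in_len);
}

If the codec wrapper cannot know in advance how much data one decode
call produces, then that wrapper ends up doing the buffering, which is
what I was asking about above.
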
> > Also, do we want closed-source plugins to be loadable like they are in
> > xmms? I guess not?
>
> why not
>
>
> A'rpi / Astral & ESP-team
>
Cheers,
//Anders