[Mplayer-cvslog] CVS: main/libvo jpeg_enc.c,1.17,1.18

rsnel at cube.dyndns.org rsnel at cube.dyndns.org
Sat Nov 1 15:21:10 CET 2003


Some ramblings...

On Fri, 31 Oct 2003, Alex Beregszaszi wrote:
> > A CVS commit will follow shortly.
> Btw, I'm really interested in moving away jpeg_enc to libmpcodecs. Some
> months ago I hacked vf_lavc to output mjpeg, but vo_zr was built around
> jpeg_enc. Are you interested in making vo_zr independent from jpeg_enc
> and moving jpeg_enc to filter layer aswell?
Moving jpeg_enc to the filter layer is doable. Two new IMGFMTs will be
needed, for example:

IMGFMT_ZRMJPEG (YUV422, width%16==0, height%8==0)
IMGFMT_ZRMJPEG_INTERLACED (two concatenated JPEG images of half height)

The ZR is there to signify that the JPEG images need to have special
properties; ordinary MJPEG streams, from webcams for example, won't work
with zoran boards because the colorspace is wrong (YUV420).

I think it will be best to do this change in four steps.

1. Add a new zr driver (vo_zr2.c, for example) which only accepts
IMGFMT_ZRMJPEG* and complies with the 'multiple config calls are
allowed' rule (vo_zr still doesn't...)

2. Add a passthrough driver (analogous to vd_mpegpes). Besides quick
testing, passthrough will be useful because mplayer will then be able to
emulate the behaviour of lavplay from the mjpegtools.

3. Add vf_zrmjpeg.c; at first it will be jpeg_enc.c with a filter wrapper
around it.

4. Remove vo_zr.c if vo_zr2.c is better in every way.

One major obstacle is cinerama support (which is broken at the moment,
but can be fixed by moving the -zr* options to the subdevice). Ideally
the -vf layer will allow multiple output devices, and multiple instances
of the same output device (where possible).

movie -> scale=bla:split -> crop=left_part:vo=x11
                          \
                            crop=right_part:vo=x11,display=otherhost:0

The current cinerama functionality of the zr driver could then be replaced 
by doing vo=zr2,dev=/dev/video0 and vo=zr2,dev=/dev/video1 in the example.
(also the crop filter could be enhanced to accept IMGFMT_ZRMJPEG* data; 
JPEG files can be easily cropped at macroblock boundaries)

> Or better, backport your improvements to ffmpeg.
There are a few things which need to be added (in some way) to libavcodec:

- support for black and white encoding (already implemented for other
codecs); this will probably be easy.

- support for the creation of YUV422 JPEG from YUV420 and YUV422 data. If
agreement is reached on how this should fit into the current libavcodec,
it will probably be doable.

- support for a very generic buffer specification of the source image, to
facilitate horizontal decimation. (Hard; the functionality could be
replaced by a filter in the -vf chain, but at a performance loss. The
workings of the 'very generic buffer specification' in jpeg_enc.c are
explained in the comment preceding jpeg_enc_init.)

- support for creating interlaced ZRMJPEG from non-interlaced YUV420 data,
with the following optimisation (which currently does not exist in
jpeg_enc.c): the JPEGs of the odd and even fields will be created in
parallel so that the U and V planes only need to be encoded once. This
should give a 33% performance increase. (Right now the first and second
field also have the same UV data, but it is encoded twice because the
fields are encoded separately.) The source needs to be non-interlaced to
make sure that the colour on the first U and V lines matches the first
two lines of Y.

So, the process of switching to libavcodec while keeping the performance,
features and possible future optimisations of jpeg_enc.c seems very
daunting to me. I like the simplicity of jpeg_enc.c, but I don't like the
fact that some functions of libavcodec are duplicated because they are
declared static in libavcodec...

I am, however, interested in starting to create a new, clean vo_zr2.c and
corresponding filters. Alex, do you agree with the 'four step plan'
presented above?



Nothing is ever a total loss; it can always serve as a bad example.
