[FFmpeg-devel] [RFC] libswscale palette output implementation
Vitor Sessak
vitor1001
Sat Jan 2 01:16:23 CET 2010
Stefano Sabatini wrote:
> On date Friday 2010-01-01 21:55:44 +0100, Michael Niedermayer encoded:
>> On Fri, Jan 01, 2010 at 02:15:05PM +0100, Stefano Sabatini wrote:
>>> On date Thursday 2009-12-31 19:41:44 +0200, Kostya encoded:
>>>> On Thu, Dec 31, 2009 at 05:28:24PM +0100, Stefano Sabatini wrote:
>>>>> Hi,
>>>>>
>>>>> related thread:
>>>>> http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/80845/focus=82531
>>>>>
>>>>> Kostya's idea is to use Vitor's ELBG implementation in
>>>>> libavcodec/elbg.{h,c}, first step would be to move it to lavu where it
>>>>> can be used by lsws. This shouldn't comport any ABI / API issues,
>>>>> since the API is only internal (it would only require a dependancy
>>>>> change of lavc on lavu, but I may be wrong here).
>>>> Just don't forget one point (I may be wrong here though): the scale code
>>>> may be called on a slice or a single line, so you somehow need to ensure
>>>> it processes the whole picture. Maybe just having a single filter for that
>>>> is better. It could also be improved to produce a palette with minimal
>>>> differences between consecutive frames, etc.
>>> Michael what about the filter option? I'm not even sure if the
>>> libswscale solution would be viable, since the usage required here
>>> conflicts with the slice API.
>> It would be very annoying if one pixel format was a special case and
>> couldn't be handled by swscale.
>> There are projects using swscale that do not use lavfi.
>
> Mmh OK, do you already have some ideas about how to make sws_scale()
> manage such a thing?
>
> I mean: sws_scale() is passed a slice as input, yet it needs the whole
> image in order to be able to compute the quantization palette, while
> it is supposed to immediately draw the scaled slice in output.
>
> Also, how would it be possible to request the filter/sws_scale() to keep
> the same palette, and/or to use a palette provided by the user rather
> than compute it for each frame?
>
> Attached is a lazy implementation of a quantization filter using
> ELBG.
In case anyone is curious, I've tested it against a few different
programs:
Original image:
http://sites.google.com/site/vsessak2/home/original.jpg
This patch:
http://sites.google.com/site/vsessak2/home/ffmpeg_output.bmp
pnmquant:
http://sites.google.com/site/vsessak2/home/pnmquant.bmp
ffmpeg -vfilters "format=rgb8":
http://sites.google.com/site/vsessak2/home/ffmpeg_rgb8.bmp
gimp without dithering:
http://sites.google.com/site/vsessak2/home/gimp.bmp
gimp with dithering:
http://sites.google.com/site/vsessak2/home/gimp_dither.bmp
Interestingly, using just clustering (as this patch does), the image suffers
less from the lack of dithering than I would expect.
-Vitor