[MPlayer-users] -vf ilpack

D Richard Felker III dalias at aerifal.cx
Tue Mar 9 06:32:01 CET 2004


On Mon, Mar 08, 2004 at 09:46:34PM +0200, Ville Saari wrote:
> On Sun, Mar 07, 2004 at 10:39:19PM -0500, D Richard Felker III wrote:
> 
> > > True, but for such frames it wouldn't hurt if progressive chroma
> > > subsampling is used too.
> > 
> > The way chroma sampling really works, you're right. However, if it
> > worked the stupid way I used to think it did, bad things would
> > happen...
> 
> Yes, but if each mpeg frame were unambiguously tagged as either
> progressive or interlaced, then each frame type could use the optimal
> chroma subsampling method. Too bad mpeg doesn't work that way.

The chroma subsampling IS OPTIMAL (IMO) the way it's done. The "bad
way" is bad for more reasons. No reason to have 2 methods.

> > > I have also witnessed at least one case where 24 fps film content was
> > > converted to PAL using 3:2:2:2:2:2:2:2:2:2:2:2-pulldown!
> ...
> > Well normally this is done for movies where the music is the key
> > feature, and increasing the pitch by 4% would butcher it.
> 
> It was an animated Mickey Mouse short film. Algorithms exist to
> shorten audio without changing the pitch so that only the tempo of
> the music would change. I actually believe that it is the usual way
> to convert film to PAL nowadays.

I don't think so. These algorithms butcher the quality even more.
Certainly not acceptable for transferring Pink Floyd's The Wall (one
of the PAL discs I know is done with 3:2:2:2:2:2:2:2:2:2:2:2
pulldown).
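To make the field arithmetic of that pattern concrete, here's a quick
back-of-the-envelope sketch (nothing MPlayer-specific, just counting
fields):

```python
# Field budget of one cycle of 3:2:2:2:2:2:2:2:2:2:2:2 pulldown.
pattern = [3] + [2] * 11          # fields shown per film frame
fields_per_cycle = sum(pattern)   # 12 film frames -> 25 fields
print(fields_per_cycle)           # 25

# Two cycles per second: 24 film frames -> 50 fields = 25 PAL frames,
# so the film runs at native speed with no 4% speedup / pitch shift.
print(2 * fields_per_cycle)       # 50
```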

> If even that is unacceptable then there is still one option: The video
> could be converted from 24 to 25 fps with motion compensated temporal
> resampling. Hardware to do that is probably not cheap, but if I'm not
> completely mislead, it at least exists.

I think you're completely misled. :)

> > Linear blend doesn't kill vertical resolution. It does give slight
> > blurring, but it's far from a dimension-halving filter like li,ci,fd.
> 
> Aren't the blurring and loss of resolution pretty much the same thing?

Loss of resolution corresponds to the dimension of the kernel (null
space) of the filter. The linear blend filter is invertible from a
mathematical viewpoint; the only loss comes from the rounding errors,
which become very large. If you had 24-bit luma samples you could
invert it.

You could try to estimate the 'resolution loss' based on where the
errors get too big, I suppose. All I'm saying is that it's simply
false that you lose "half the resolution" from linear blend. Linear
blend followed by a sharpen filter (e.g. pp=l5) will look very close
to the original when applied to progressive pictures.
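To illustrate the invertibility claim, here's a sketch on one column
of luma in float math (standing in for the hypothetical high-bit-depth
samples); the (1,2,1)/4 vertical kernel with replicated edges is my
assumption of what linear blend computes:

```python
def linear_blend(col):
    # Vertical (1,2,1)/4 blend on one column, edges replicated.
    n = len(col)
    out = []
    for i in range(n):
        up = col[i - 1] if i > 0 else col[i]
        dn = col[i + 1] if i < n - 1 else col[i]
        out.append((up + 2.0 * col[i] + dn) / 4.0)
    return out

def invert_blend(y):
    # Undo the blend by solving the tridiagonal system A*x = y with
    # the Thomas algorithm.  A has diagonal 3/4 at the edges, 1/2
    # inside, and 1/4 off-diagonal, matching linear_blend above.
    n = len(y)
    diag = [0.75] + [0.5] * (n - 2) + [0.75]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = 0.25 / diag[0]
    dp[0] = y[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - 0.25 * cp[i - 1]
        cp[i] = 0.25 / m
        dp[i] = (y[i] - 0.25 * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In doubles the round trip is essentially exact; with 8-bit storage the
same back-substitution amplifies the quantization error as the column
gets taller, which is the "rounding errors become very large" point.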

> Most deinterlacing algorithms are effectively vertical low-pass filters.

I'm not sure I agree with this. Correct non-motion-adaptive
deinterlacing is just building a full picture from a single field.
Only blend or other convolution-based deinterlacers act as a vertical
lowpass, and they generally give bogus output (ghosts) even if they do
look somewhat better. Also, I'd be hesitant to talk about band-pass
filters at all with interlaced video, since the fields are almost
always horribly aliased to begin with and don't even correctly
represent a band-limited sampling. (Interlaced video just sucks...)
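For reference, "building a full picture from a single field" can be as
simple as this sketch (one column, linear interpolation of the missing
lines, last line replicated; real filters are fancier):

```python
def deinterlace_from_field(field):
    # Rebuild a full-height column from the lines of one field by
    # inserting the average of each pair of neighbouring field lines.
    out = []
    for i, v in enumerate(field):
        out.append(v)
        nxt = field[i + 1] if i + 1 < len(field) else v
        out.append((v + nxt) / 2.0)
    return out
```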

> > This still sounds bad... Care to write a nice filter to detect the
> > ghosts from the previous frame and remove them? :)
> 
> Interesting problem. Adding some proportion of the difference of
> current and previous frame to the current frame could do the trick
> (kind of a sharpening convolution in temporal dimension).

Yep.
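A minimal sketch of that temporal sharpen (the names and the blend
model are my assumptions, not a worked-out filter):

```python
def deghost(cur, prev, k):
    # Temporal 'unsharp': push the current frame away from the
    # previous one by a fraction k of their difference.  If the ghost
    # arose as cur = (1-a)*clean + a*prev, then k = a/(1-a) undoes it
    # exactly (before any clipping/rounding).
    return [c + k * (c - p) for c, p in zip(cur, prev)]
```

A real filter would still need to clamp the result to [0, 255] and
estimate the ghost strength a per frame.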

> Another similar problem is to fix progressive material that has been
> NTSC-telecined and then deinterlaced. If the deinterlacing was done
> by a method that uses just one field and invents the other, then the
> fix is to simply drop the duplicate frame, but if the fields were
> blended together, then the reconstruction of the frame that was split
> between the interlaced frames is very similar to the ghost removal
> problem.

Yep. I wrote some emails (or irc discussion) on this topic a while
back, and actually worked out the math to do it. (It's more
complicated than you think because you also have to DETECT the
pattern, which is hard enough without the blend applied...) It's quite
surprising, because while linear-blend isn't invertible in general
(due to rounding), the redundancy of having a duplicate field from
every other frame effectively makes it invertible.
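A sketch of why the duplicate field makes it solvable, assuming the
(1,2,1)/4 vertical blend from earlier and an odd column height so
every hidden line has known neighbours above and below (the names are
hypothetical):

```python
def recover_hidden_lines(blended, even_lines):
    # One column of a blend-deinterlaced, telecined frame.
    # even_lines holds x[0], x[2], x[4], ... which are known exactly,
    # because that field is duplicated in the neighbouring clean
    # frame.  Each blended odd line satisfies
    #     y[i] = (x[i-1] + 2*x[i] + x[i+1]) / 4,
    # so every hidden line solves directly -- no global system needed.
    n = len(blended)
    x = [0.0] * n
    for j, v in enumerate(even_lines):
        x[2 * j] = v
    for i in range(1, n, 2):
        x[i] = (4.0 * blended[i] - x[i - 1] - x[i + 1]) / 2.0
    return x
```

The per-line solve is what makes this robust where plain blend
inversion isn't: the known field anchors every equation, so rounding
errors don't accumulate down the column.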

Rich



