[FFmpeg-devel] [PATCH / RFC] Deshake / stabilize filter
compn
Mon Apr 12 23:07:56 CEST 2010
On Mon, 12 Apr 2010 19:26:53 +0200, Michael Niedermayer wrote:
>On Mon, Apr 12, 2010 at 12:49:47PM -0400, Daniel G. Taylor wrote:
>> Hey,
>>
>> Attached is a filter I wrote over the weekend for fun to stabilize an input
>> video by shifting frames horizontally and vertically. It's fast (uses
>> MMX/SSE/etc) and fairly accurate.
>>
>> I am just learning about block matching, motion searches, and various other
>> concepts, and this is an algorithm I came up with after reading various things
>> online and in some free papers. The algorithm is described in plain English in
>> the comments of the patch. There are still a few things I'm confused about /
>> working on, but here are some quick results from my Vado HD pocket camera in a
>> real-world scenario:
>>
>> http://programmer-art.org/dropbox/deshake_dog_orig.mp4
>> http://programmer-art.org/dropbox/deshake_dog_32.mp4
>>
>> And here is an unrealistic example to show the strengths and weaknesses of
>> the filter in its current state (e.g. no rotation support):
>>
>> http://programmer-art.org/dropbox/deshake_table_orig.mp4
>> http://programmer-art.org/dropbox/deshake_table_32.mp4
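
To illustrate the kind of block matching being discussed above, here is a minimal, illustrative sketch of an exhaustive SAD search over a small window. The function names and the plain-C SAD loop are placeholders, not code from the patch; the actual filter relies on FFmpeg's optimized (MMX/SSE) SAD routines and its own search strategy, neither of which is shown here.

#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences between two blocks (plain C for clarity;
 * the real filter would use the optimized SAD functions in libavcodec). */
static int block_sad(const uint8_t *cur, const uint8_t *ref,
                     int stride, int bw, int bh)
{
    int sad = 0;
    for (int y = 0; y < bh; y++)
        for (int x = 0; x < bw; x++)
            sad += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}

/* Exhaustively search +/- range pixels for the (dx, dy) shift that best
 * aligns a block of the current frame with the reference frame.  Bounds
 * checking is omitted; the caller must keep the search inside the frame. */
static void find_shift(const uint8_t *cur, const uint8_t *ref, int stride,
                       int bx, int by, int bw, int bh, int range,
                       int *best_dx, int *best_dy)
{
    int best = INT_MAX;
    *best_dx = *best_dy = 0;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            int sad = block_sad(cur + by * stride + bx,
                                ref + (by + dy) * stride + (bx + dx),
                                stride, bw, bh);
            if (sad < best) {
                best     = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}

Averaging (or taking the median of) the per-block shifts across the frame and then applying the smoothed opposite shift is the general shape of translation-only stabilization; the patch itself should be consulted for the exact algorithm used.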
>>
>> The filter is licensed under the LGPL and I'd like to:
>>
>> 1. Ask for comments, suggestions, etc to make it better
you might want to ask the author of this filter for some tips:
http://www.guthspot.se/video/deshaker.htm
>> 2. Get it into FFmpeg trunk
>
>this filter is very interesting, but
>could you look at motion_est* please? The code there can likely be reused.
>Changing or extending motion_est* so that it has a nice public API would be
>required for this, but I think that is a better approach than having
>each filter and codec duplicate motion estimation.
seems like a lot of work for one filter author.
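
For context, here is a purely hypothetical sketch of what a public motion-estimation API of the kind suggested above could look like. None of these names exist in FFmpeg; motion_est* is currently internal to libavcodec and tied to the encoders, which is exactly why exposing it would take work.

#include <stdint.h>

/* Hypothetical opaque context; not an existing FFmpeg type. */
typedef struct AVMotionSearch AVMotionSearch;

/* Hypothetical constructor: block size and search range in pixels. */
AVMotionSearch *av_motion_search_alloc(int width, int height,
                                       int block_size, int search_range);

/* Hypothetical call: estimate the dominant (dx, dy) shift of cur
 * relative to ref, both luma planes with the given stride. */
int av_motion_search_global(AVMotionSearch *ms,
                            const uint8_t *cur, const uint8_t *ref,
                            int stride, int *dx, int *dy);

/* Hypothetical destructor. */
void av_motion_search_free(AVMotionSearch **ms);

A filter could then call something like av_motion_search_global() once per frame pair instead of carrying its own search code, which is the duplication being objected to.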
>> + * TODO:
>> + * - Compensate for rotation (requires full transform, slower)
>> + * - Zoom instead of filling frame edges (requires full transform, slower)
>
>> + * - Fill frame edges based on previous/next reference frames
>
>1. Allocate a larger internal picture.
>2. For each frame, match its contents to the larger picture and copy the new
>   frame in there. If there is no good match, reset things and draw it in the middle.
>3. Output the center portion of the larger internal picture, or smooth the motion
>   to compensate for panning and output the resulting rectangle.
>
>With this you not only fill in corners from the next/prev frames but also from
>even more distant frames.
>And it might work for something else too. If you pan a camera over a static
>scene, the internal picture should then contain the whole thing stitched
>together.
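
To make the larger-canvas idea above concrete, here is a rough, luma-only sketch. The structure and names are placeholders (they are not from the patch or from FFmpeg), the motion search that produces dx/dy is assumed to exist elsewhere, and bounds handling is omitted for brevity.

#include <stdint.h>
#include <string.h>

typedef struct Canvas {
    uint8_t *data;           /* width x height luma plane           */
    int      width, height;  /* larger than the frame on every side */
    int      x, y;           /* where the last frame was pasted     */
} Canvas;

/* Step 2: paste the new frame into the canvas at the offset where it
 * matched best.  On a bad match the caller would instead reset x, y to
 * the center of the canvas, as suggested above. */
static void canvas_paste(Canvas *c, const uint8_t *frame,
                         int fw, int fh, int dx, int dy)
{
    c->x += dx;
    c->y += dy;
    for (int y = 0; y < fh; y++)
        memcpy(c->data + (c->y + y) * c->width + c->x,
               frame + y * fw, fw);
}

/* Step 3: output the center rectangle of the canvas (or a rectangle
 * whose position has been smoothed over time to absorb panning).
 * Pixels left behind by earlier frames fill the borders that a plain
 * per-frame shift would leave blank. */
static void canvas_output(const Canvas *c, uint8_t *out, int fw, int fh)
{
    int ox = (c->width  - fw) / 2;
    int oy = (c->height - fh) / 2;
    for (int y = 0; y < fh; y++)
        memcpy(out + y * fw, c->data + (oy + y) * c->width + ox, fw);
}

Because the canvas accumulates everything that has been seen, a slow pan over a static scene would indeed leave a stitched-together picture in it, as described above.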
content aware fill?
http://www.logarithmic.net/pfh/resynthesizer
-compn