[FFmpeg-devel] [PATCH] libavfilter-soc: implement pad filter
Vitor Sessak
vitor1001
Sun May 31 17:09:51 CEST 2009
Michael Niedermayer wrote:
> On Sat, May 30, 2009 at 09:31:06PM +0200, Vitor Sessak wrote:
>> Michael Niedermayer wrote:
>>> On Sat, May 30, 2009 at 06:37:55PM +0200, Vitor Sessak wrote:
>>>> Michael Niedermayer wrote:
>>>>> On Sat, May 30, 2009 at 06:25:37PM +0200, Vitor Sessak wrote:
>>>>>> Michael Niedermayer wrote:
>>>>>>> On Sat, May 30, 2009 at 03:44:00PM +0200, Vitor Sessak wrote:
>>>>>>>> Michael Niedermayer wrote:
>>>>>>>>> On Fri, May 22, 2009 at 02:31:57PM +0200, Vitor Sessak wrote:
>>>>>>>>>> Stefano Sabatini wrote:
>>>>>>>>>>> On date Thursday 2009-05-21 23:20:51 +0200, Stefano Sabatini
>>>>>>>>>>> encoded:
>>>>>>>>>>>> On date Wednesday 2009-05-20 20:42:21 +0200, Vitor Sessak
>>>>>>>>>>>> encoded:
>>>>>>>>>>> [...]
>>>>>>>>>>>>> I suppose you didn't test the changes to ffmpeg.c, unless you
>>>>>>>>>>>>> forgot to attach the patch for vsrc_buffer.c. I imagine that
>>>>>>>>>>>>> here handling avfilter_request_frame() without memcpy'ing the
>>>>>>>>>>>>> whole frame (as is done in ffplay.c) would be non trivial.
>>>>>>>>>>> In attachment an updated patch with the missing changes to
>>>>>>>>>>> vsrc_buffer.c.
>>>>>>>>>>> Can someone suggest how it would be possible to avoid the initial
>>>>>>>>>>> frame
>>>>>>>>>>> -> picref memcpy?
>>>>>>>>>> What non-lavfi-patched ffmpeg.c does now is:
>>>>>>>>>>
>>>>>>>>>> 1- allocs a frame with the padding specified by command-line opts
>>>>>>>>>> -padXXXX
>>>>>>>>>> 2- decodes the frame to this buffer. Note that this buffer might
>>>>>>>>>> need to be reused for ME.
>>>>>>>>>>
>>>>>>>>>> what I suggest:
>>>>>>>>>>
>>>>>>>>>> a) For the first frame
>>>>>>>>>> 1- ffmpeg.c allocs a frame with no padding.
>>>>>>>>>> 2- libavfilter requests a frame with padding px, py.
>>>>>>>>>> 3- ffmpeg.c allocs a frame with padding px, py, copies the frame to
>>>>>>>>>> it and replaces (freeing) the old frame with the new one
>>>>>>>>>> 4- ffmpeg.c passes the new frame to the filter framework
>>>>>>>>>>
>>>>>>>>>> b) For the next frame
>>>>>>>>>> 5- ffmpeg.c decodes the frame with padding px, py
>>>>>>>>>> 6- libavfilter requests a frame with padding px2, py2
>>>>>>>>>> 7- if (px2 > px || py2 > py), alloc another frame and memcpy the pic
>>>>>>>>>> to it (and set px = px2; py = py2;). If not, just send the frame
>>>>>>>>>> pointer to libavfilter (see the sketch below)
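To make step 7 concrete, this is roughly what I have in mind for ffmpeg.c
(untested sketch; px/py hold the currently allocated padding, and
alloc_padded_frame()/copy_picture() are placeholder names, not existing
functions):

    if (px2 > px || py2 > py) {
        /* the new request needs more padding than we currently have:
         * allocate a bigger frame, copy the decoded picture into it and
         * drop the old frame */
        AVFrame *bigger = alloc_padded_frame(codecCtx, px2, py2); /* placeholder */
        copy_picture(bigger, cur_frame, codecCtx);                /* placeholder */
        av_free(cur_frame);
        cur_frame = bigger;
        px = px2;
        py = py2;
    }
    /* otherwise the current frame already has enough padding and can be
     * handed to libavfilter as-is, no memcpy needed */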
>>>>>>>>> 1 - the decoder, which is pretty much a filter with no input,
>>>>>>>>> requests a buffer from the next filter.
>>>>>>>>> 1b- the next filter can, in principle, pass this request on up to the
>>>>>>>>> video output device, or return a buffer. If this request passes a
>>>>>>>>> "pad" filter it is modified accordingly.
>>>>>>>>> 2 - the decoder decodes into this frame.
>>>>>>>>> Which part of that are you not understanding
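If I picture that request path right, it would be something like this
(completely made-up names, not the actual libavfilter API, just to
illustrate the idea):

    #include <stdint.h>

    typedef struct Filter {
        struct Filter *next;   /* NULL for the video output device */
        int pad_x, pad_y;      /* non-zero only for a "pad" filter */
        uint8_t *(*alloc)(int w, int h, int px, int py);
    } Filter;

    /* the decoder asks the next filter for a buffer; a pad filter only
     * grows the requested padding before passing the request on, and
     * the last filter (e.g. the vout) actually allocates */
    static uint8_t *request_buffer(Filter *f, int w, int h, int px, int py)
    {
        px += f->pad_x;
        py += f->pad_y;
        if (f->next)
            return request_buffer(f->next, w, h, px, py);
        return f->alloc(w, h, px, py);
    }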
>
>>>>>>>> I was probably missing that there is no decoder that needs not only to
>>>>>>>> preserve, but also to output to, the same data pointers as the last frame.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
>>>>>>>> Can you confirm that you can decode the first frame in a buffer and
>>>>>>>> the second frame in a different buffer for every codec?
>>>>>>> no i cant confirm that, the filter framework must support that as well
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>>>> but i cant see in how far that would be a problem.
>> What do you propose? Your previous proposal chokes on the case of frames
>> that differ in padding and a codec that needs the previous frame as output.
>
> sorry, i do not know what you talk about, you will have to explain the
> problem you hint at more verbosely
I think I didn't understand what you mentioned above. Imagine the
following code snippet for decoding two frames:
AVFrame *frames[2] = { avcodec_alloc_frame(), avcodec_alloc_frame() };
AVFrame *f1 = frames[0];
AVFrame *f2;
AVPacket pkt;
int got_picture;

av_read_frame(formatCtx, &pkt);
avcodec_decode_video2(codecCtx, f1, &got_picture, &pkt);

switch (var) { /* var selects which of the three scenarios we rely on */
case DO_AS_FFMPEG_C_DO:
    // This has to work, it is what ffmpeg.c does
    f2 = f1;
    break;
case BUFFER_MEMCPY:
    // This might work even if the decoder looks for the previously
    // decoded pixels in the frame
    f2 = frames[1];
    avpicture_copy((AVPicture *)f2, (AVPicture *)f1,
                   codecCtx->pix_fmt, codecCtx->width, codecCtx->height);
    break;
case BUFFER_NO_MEMCPY:
    f2 = frames[1];
    break;
}

av_read_frame(formatCtx, &pkt);
avcodec_decode_video2(codecCtx, f2, &got_picture, &pkt);
My question is: for which values of var would this code work for every
codec? If only DO_AS_FFMPEG_C_DO works, then once the padding increases
even a single time we'll have to do a memcpy() for every following
frame. If BUFFER_MEMCPY works, we only have to do a memcpy() each time
the padding increases. And, finally, if BUFFER_NO_MEMCPY works, your
solution works and there is never any need to memcpy.
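And in the BUFFER_NO_MEMCPY case the decode loop in ffmpeg.c could
reduce to something like this (again just a sketch; get_filtered_buffer()
and feed_filters() are placeholders for whatever lavfi ends up exposing,
not existing functions):

    AVPacket pkt;
    int got_picture;

    while (av_read_frame(formatCtx, &pkt) >= 0) {
        /* ask the filter chain for a fresh, already padded buffer;
         * the previously decoded frame never has to be copied */
        AVFrame *f = get_filtered_buffer(graph);  /* placeholder */
        avcodec_decode_video2(codecCtx, f, &got_picture, &pkt);
        if (got_picture)
            feed_filters(graph, f);               /* placeholder */
        av_free_packet(&pkt);
    }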
-Vitor