[FFmpeg-devel] [PATCH v5 01/12] avutil/frame: Subtitle Filtering - Add AVMediaType property to AVFrame

Soft Works softworkz at hotmail.com
Sun Sep 12 22:55:49 EEST 2021



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of
> Andreas Rheinhardt
> Sent: Sunday, 12 September 2021 14:04
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v5 01/12] avutil/frame: Subtitle
> Filtering - Add AVMediaType property to AVFrame
> 
> Lynne:
> > 12 Sept 2021, 13:31 by dev at lynne.ee:
> >
> >> 12 Sept 2021, 05:21 by softworkz at hotmail.com:
> >>
> >>> This is the root commit for adding subtitle filtering
> capabilities.
> >>> Adding the media type property to AVFrame replaces the previous
> >>> way of distinction which was based on checking width and height
> >>> to determine whether a frame is audio or video.
> >>>
> >>> Signed-off-by: softworkz <softworkz at hotmail.com>
> >>>
> >>
> >> Why do you need a new allocation function av_frame_get_buffer2
> >> when it has the same syntax as av_frame_get_buffer?
> >> Also, could you please drop all but one of the filter patches
> >> when sending new versions? You're overspamming the ML
> >> and it's hard to keep up.
> >> Finally, why not combine the 2 subtitle overlay filters into one?
> >> There's no need for explicitness between text and bitmap subs.
> >>
> >
> > Just read why (the media type). Says a lot about the ML overload.
> >
> > Could the media type be set as unknown by default, during the
> > deprecation period, by the old alloc function for audio and video?
> > Subtitle allocations can set it to _SUBTITLES. That way, you can
> > still keep compatibility, as old users detect the AVFrame type
> > based on the 'format' field, without adding a new alloc function.
> The decode API should (when it is ported; haven't looked whether this
> already happens in this commit) ensure that AVFrame.type is set
> according to the data contained in the frame. 

Not done yet, but makes sense.

> If a user unaware of the new field decodes (say) video and then
> wants to utilize said AVFrame for audio, he may do so by resetting
> the fields corresponding to video and setting the audio-related
> fields; you do not need to call av_frame_unref on it (say you want
> to keep the metadata and the timing-related stuff, so you reset
> manually). If he then gives this frame to av_frame_get_buffer, it
> will be inconsistent, i.e. av_frame_get_buffer should error out.
> And then the user app will be broken.
> (I of course admit that lots of users will not be affected by this,
> as they always reset their frames with av_frame_unref. But we have
> to think of the odd cases, too.)

Isn't that a bit "too odd"? It's almost always possible to construct
odd cases that would break, if they actually existed.

I don't want to open architecture design discussion, but from my 
perspective:

- Since the documentation of API usage is not really comprehensive,
  it's often the code inside ffmpeg itself that demonstrates how
  certain APIs should be used

- So when a patch doesn't break any public API usage within the ffmpeg
  package, shouldn't that be sufficient to assume that external use
  of the APIs won't be broken either?
  Of course, this assumes that the external user has implemented things
  the way they are used within ffmpeg.

It's not that I want to argue about this subject; but in case there is
a different philosophy, I'd like to know, for better understanding.

Thanks,
softworkz
