[FFmpeg-devel] [PATCH 0/4] avdevice/dshow: implement capabilities API
Diederick C. Niehorster
dcnieho at gmail.com
Sat Jun 12 14:50:39 EEST 2021
Nicolas, I agree with what you said, and you have obviously given this
more thought than I have. Your and Anton's replies lay out two quite
distinct views by now; I hope more people will chime in.
I will only highlight some things below.
On Fri, Jun 11, 2021 at 3:17 PM Nicolas George <george at nsup.org> wrote:
> Input devices are demuxers with a few extra methods; output devices are
> muxers with a few extra methods. We already have the beginning of a
> class/interface hierarchy:
>
> formats
> |
> +---- muxers
> | |
> | +---- output devices
> |
> +---- demuxers
> |
> +---- input devices
Exactly: this is what the class hierarchy in my own program that uses
FFmpeg looks like as well.
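For what it's worth, in C such a hierarchy is typically expressed with
struct embedding; a minimal sketch (all names here are made up for
illustration, not FFmpeg's actual API):

    /* The "base class" goes first in the struct, so a Demuxer* can be
     * viewed as a Format* and an InputDevice* as a Demuxer*. */
    typedef struct Format {
        const char *name;
    } Format;

    typedef struct Demuxer {
        Format base;                            /* is-a Format */
        int (*read_packet)(struct Demuxer *d);
    } Demuxer;

    typedef struct InputDevice {
        Demuxer base;                           /* is-a Demuxer */
        /* the "few extra methods" */
        int (*list_capabilities)(struct InputDevice *dev);
    } InputDevice;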
>
> I agree. Thought has been given to designing this API; the efforts
> dried up before the functional parts were implemented, but the design
> is sound, and a good starting point to work from again.
Yes, so let's use it to solve a real (my) problem now. Doing so does
not hamper a larger redesign/reorganization effort later.
> I think the problem emphasized here is not really about fields, more
> about the working of the API: files are read on demand, while devices
> operate continuously, and that makes a big difference.
>
> But really, we already have this difference with network streams,
> especially those that do not have flow control, for example those in
> multicast. These network streams have aspects of protocols, but also
> aspects of devices.
>
> And the answer is NOT to separate libavio from libavformat: protocols
> and formats mesh with each other, see the example of RTP.
:) I have been wondering why protocols live in libavformat rather than
in a separate libavprotocol or libavtransport. But I do understand
that some of them, like devices, must be intertwined with the formats.
> In practice, that would look like this:
>
> application
> → libavformat API
> → libavdevice compatibility wrapper
> → libavdevice API
> → wrapper for old-style device
> → actual device
>
> While the useful code is just:
>
> application
> → libavformat/device API
> → actual device
>
> That's just an insane idea, a waste of time.
Hmm, fair enough!
>
> > Anyway, out of Mark's options I'd vote for a separate new AVDevice
> > API, and an adapter component to expose/plug in AVDevices as formats.
>
> I do not agree. I think we need to think about this globally: shape our
> existing APIs into a coherent object-oriented hierarchy of
> classes/interfaces. This is not limited to formats and devices, we
> should include protocols in the discussion, and probably codecs and
> filters too.
This is an interesting point. It would force a discussion about which
"classes" should sensibly have access to the internals of other
classes (i.e., something akin to public/protected/private), and
thereby make completely explicit how each component is linked to the
others. It seems to me that the physical organization, both into a
folder structure and into different DLLs, is rather secondary. As you
wrote elsewhere, a lot of the libraries depend on each other, and that
makes sense.
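To make that concrete: C can approximate such access levels with the
opaque-pointer pattern FFmpeg already uses in places. A rough sketch,
with invented names:

    /* public header: what applications may see and touch */
    typedef struct Component {
        const char *name;   /* public */
        void *internal;     /* opaque: "private"; only the owning
                             * library dereferences this */
    } Component;

    /* internal header, shared only among the libraries that are
     * allowed to peek inside ("protected", if you will) */
    typedef struct ComponentInternal {
        int refcount;
    } ComponentInternal;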
> And to handle the fact that devices and network streams are
> asynchronous, the main API needs to be asynchronous itself.
>
> Which brings me to my project to redesign libavformat around an event
> loop with callbacks.
>
> I have made a modest start on it, by writing the documentation
> for the low-level single-thread event loop API. Then I need to write the
> documentation for the high-level multi-thread scheduler API. Then I can
> get to coding.
Looking forward to it, sounds useful!
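In case it helps the discussion, here is a guess at what the shape of
such an API might be; all of these names are hypothetical, nothing
like them exists in lavf today:

    /* An application registers callbacks instead of blocking in
     * av_read_frame(); the loop fires them as packets arrive from a
     * file, stream or device. Hypothetical names throughout. */
    typedef struct AVEventLoop AVEventLoop;

    typedef void (*av_packet_cb)(void *opaque, AVPacket *pkt);
    typedef void (*av_error_cb)(void *opaque, int error);

    int avformat_read_async(AVEventLoop *loop, AVFormatContext *ic,
                            av_packet_cb on_packet, av_error_cb on_error,
                            void *opaque);

    /* Runs until every registered source is drained or errors out. */
    int av_event_loop_run(AVEventLoop *loop);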
Cheers,
Dee