[FFmpeg-devel] [RFC] M:N asynchronous API model discussion
Alexis Ballier
aballier at gentoo.org
Fri Sep 25 14:43:37 CEST 2015
On Fri, 25 Sep 2015 12:22:51 +0200
Clément Bœsch <u at pkh.me> wrote:
> Hi,
>
> With the increasing number of hardware-accelerated APIs, the need for
> a proper M:N asynchronous API is growing. We've observed this
> typically with the MMAL and VideoToolbox accelerations. Similarly, the
> incoming MediaCodec hwaccel is going to suffer from the same
> limitations.
>
> Some Libav developers started discussing API evolution in that
> regard, see
> https://blogs.gentoo.org/lu_zero/2015/03/23/decoupling-an-api/
>
> A few FFmpeg developers are already interested in cooperating with
> Libav on that topic.
Great! This is something I've been missing recently and have had to
emulate, somewhat, with queues and threads. If there's anything I can do
to help, I'm interested in participating.
After reading Luca's post, I've had a few random and probably naive
ideas and comments:
- Why not merge AVPacket into AVFrame? This would restore the
symmetry between encoding and decoding. This is actually what V4L
does.
- Maybe codecs could propose their input buffers, à la
get_video_buffer/get_audio_buffer of AVFilterPad. I'm not sure which HW
accel APIs accept arbitrary buffers, but some don't, and some others
probably use background DMA. E.g., nvenc.c copies the entire raw input
into "nvenc memory" before sending it for encoding. It could be a
great performance improvement if whatever produces the input frames
could write directly there. Again, V4L does that.
- This could go one step further with a lavfi wrapper: demuxers/devices
are sources, muxers are sinks, and codecs transform AVFrames between
coded and decoded content.
Best regards,
Alexis.