[FFmpeg-devel] Fwd: pipeline multithreading
Daniel Oberhoff
danieloberhoff at gmail.com
Tue Nov 25 10:02:27 CET 2014
Sent from my iPhone
Begin forwarded message:
> From: Daniel Oberhoff <danieloberhoff at gmail.com>
> Date: 25 November 2014 09:58:03 CET
> To: Reimar Döffinger <Reimar.Doeffinger at gmx.de>
> Subject: Re: [FFmpeg-devel] pipeline multithreading
>
>
>
> Sent from my iPhone
>
>>> On 24.11.2014 at 17:16, Reimar Döffinger <Reimar.Doeffinger at gmx.de> wrote:
>>>
>>> On Mon, Nov 24, 2014 at 12:35:58PM +0100, Daniel Oberhoff wrote:
>>> input -> filter1 -> filter2 -> output
>>>
>>> some threads processing frame n in the output (i.e. encoding), other threads processing frame n+1 in filter2, others processing frame n+2 in filter1, and yet others processing frame n+3 in decoding. This way non-parallel filters can be sped up, and the diminishing returns of too much striping can be avoided. With modern CPUs easily scaling up to 24 hardware threads I see this as necessary to fully utilize the hardware.
>>
>> Keep in mind two things:
>> 1) It only works for cases where many filters are used, which is not
>> necessarily a common case
>
> For us it is: we do lens correction, rescaling, transform, pad, and overlay with multiple streams, easily 10 filters in use. But I cannot judge for others.
>
>> 2) While it would possibly be simpler to implement, you do not want each
>> filter to use its own thread. This leads to massive bouncing of data
>> between caches and, especially for filters that process in place, a large
>> amount of cache coherency traffic.
>> Ideally, when used with frame multithreading you would even re-use the
>> thread that did the decoding.
>
> What you want is a thread pool with threads "following the data", i.e. a thread per frame in flight rather than per filter. Mixing this with slice threading gets more complex, but not by much; it's a balancing game.
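
To make the "threads following the data" idea a bit more concrete, below is a rough, self-contained sketch in plain pthreads. It is not libavfilter code; the Frame struct, filter1/filter2 and the decode/encode stand-ins are made-up placeholders. A fixed pool of workers pulls frame indices from a shared counter; each worker then carries its frame through the whole chain on one thread, so the data stays hot in that core's cache, and a small ordering gate puts frames back into presentation order before they are "encoded". With N workers there are roughly N frames in flight, which gives the pipelining described above without a thread per filter.

/* Sketch of a "threads follow the data" pool: each worker owns one
 * in-flight frame and runs it through the whole filter chain, so the
 * frame never migrates between cores mid-chain.
 * Frame, filter1, filter2 and the decode/encode steps are hypothetical
 * placeholders, not FFmpeg API.  Build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4        /* roughly: number of frames in flight */
#define NUM_FRAMES  16

typedef struct Frame {
    int   index;             /* presentation order, used to re-order output */
    float data[64];          /* stand-in for real pixel data */
} Frame;

/* Stand-ins for the real filters (lens correction, scale, pad, ...). */
static void filter1(Frame *f) { for (int i = 0; i < 64; i++) f->data[i] += 1.0f; }
static void filter2(Frame *f) { for (int i = 0; i < 64; i++) f->data[i] *= 0.5f; }

/* Shared state: a counter handing out frame indices ("decoding"), and an
 * output gate that forces frames back into order before "encoding". */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int next_input  = 0;  /* next frame index to hand out */
static int next_output = 0;  /* next frame index the encoder may consume */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        Frame f;

        /* "Decode": grab the next frame index, if any are left. */
        pthread_mutex_lock(&lock);
        if (next_input >= NUM_FRAMES) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        f.index = next_input++;
        pthread_mutex_unlock(&lock);
        for (int i = 0; i < 64; i++)
            f.data[i] = (float)f.index;

        /* Run the whole chain on this thread: the frame never migrates,
         * so in-place filters keep working on cache-hot data. */
        filter1(&f);
        filter2(&f);

        /* "Encode": wait until it is this frame's turn, keeping output order. */
        pthread_mutex_lock(&lock);
        while (f.index != next_output)
            pthread_cond_wait(&cond, &lock);
        printf("frame %d done, sample %f\n", f.index, f.data[0]);
        next_output++;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Slice threading could presumably be layered on top by letting a worker temporarily fan a single frame out to idle workers, at the cost of the extra balancing mentioned above.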