[FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture

Anton Khirnov anton at khirnov.net
Fri Dec 1 21:49:52 EET 2023


Quoting Nicolas George (2023-12-01 16:25:04)
> Anton Khirnov (2023-12-01):
> > http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/316787.html
> 
> So not Wednesday but Thursday three weeks ago.

The Wednesday email was the one I linked to two emails ago. Here it is
again:
http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/317536.html

> I did not agree that the current code was broken.
> 
> > The current code is broken because its output depends on the order in
> > which the frames from different inputs arrive at the filtergraph. It
> > just so happens that it is deterministically broken currently. After
> > this patchset it becomes non-deterministically broken, which forces me
> > to do something about it.
> 
> That is not true. The current code works and gives correct results if
> the file is properly muxed: it cannot be said to be broken.

I can definitely say it is broken and I already told you why. But if you
want something more specific:
* the output of your example with the current master changes depending
  on the number of decoder frame threads; my patch fixes that
* in fate-filter-overlay-dvdsub-2397 subtitles appear two frames too
  early; again, my patch fixes that

> > Your testcase offsets two streams by 60 seconds.
> 
> Indeed.
> 
> >						   That implies 60 seconds
> > of buffering. You would get this same amount of buffering in the muxer if
> > you did the same offsetting with transcoding or remuxing two streams
> > from the same source.
> > One can also avoid this buffering entirely by simply opening the file
> > twice.
> 
> You are wrong. You would be right if the offset had been in the opposite
> direction. But in the case I chose, it is the subtitles stream that is
> delayed, and 60 seconds of subtitles means a few dozen frames at most,
> not many hundreds.
> 
> Your change to the sub2video heartbeat makes it continuous, and turns
> the few dozen frames into many hundreds: this is what breaks.
> 
> So I say it again: this test case is useful and currently works;
> include it in your tests so that your patch series keeps the feature
> working.
> 
> I can consider sending a patch to add it to FATE, but not before Monday.
> 
> Also, note that in the grandparent of the message you quoted above, I
> gave you a solution to make it work. You said that was already what
> you did, but obviously it is not, so let us resolve this
> misunderstanding.

IIUC your suggestion was to send heartbeat packets from demuxer to
decoder, then have the decoder forward them to filtergraph.

That is EXACTLY what I'm doing in the final patch, see [1]. It also does
not address this problem at all, because it is caused by the heartbeat
processing code making decisions based on
av_buffersrc_get_nb_failed_requests(), which fundamentally depends on
what frames previously arrived on the video input.

[1] https://git.khirnov.net/libav.git/tree/fftools/ffmpeg_demux.c?h=ffmpeg_threading#n527
    https://git.khirnov.net/libav.git/tree/fftools/ffmpeg_dec.c?h=ffmpeg_threading#n406

-- 
Anton Khirnov

