[FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture

James Almer jamrial at gmail.com
Wed Dec 6 15:21:20 EET 2023


On 12/6/2023 9:55 AM, Nicolas George wrote:
> Anton Khirnov (12023-12-04):
>> Which of these are you saying is correct?
> 
> I do not know? Do you think I am able to reverse MD5 mentally? I am
> flattered, but I am sorry to confess I am not.
> 
> Why do you not look at the resulting videos to judge for yourself? But

I honestly can't believe you're arguing this. At this point you're just 
defending your position without really engaging with what you were 
challenged on.

> to do that, you will need to remember (or learn) two things:

And being condescending will not help your case.

> 
> First, most people do not have that many CPU threads available, and if
> they do they will spend them on encoding more than decoding.
> 
> Second, and most important: for subtitles, in many many cases, a few
> frames of shift do not matter because the timing in the source material
> is not that accurate.
> 
> So the answer to your question is: probably most of the ones generated
> with a sane number of threads are correct, in the sense that the result
> is within the acceptable accuracy of subtitle sync and useful for the
> user.

How can you argue it's fine when you request bitexact output and do NOT 
get bitexact output? Go ahead and add that command line as a FATE test. 
See the runners turn yellow. Will you argue it's fine and not broken?

The number of threads should not matter; the output has to be 
deterministic. Saying "Maybe the user likes what he gets. A varying 
amount of artifacts here, a few frames of shift there, differing 
between runs. It's fine!" is laughable.
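
To make that concrete: the check a FATE test would effectively perform 
is just running the same -bitexact command with different decoder 
thread counts and comparing per-frame hashes. The input file, the 
filtergraph and the thread counts below are placeholders, not the exact 
command line from this thread:

    # two runs of the same command, differing only in decoder thread
    # count; with -bitexact the frame hashes should come out identical
    ffmpeg -threads 1  -i input.mkv \
        -filter_complex "[0:v][0:s]overlay" -bitexact -f framemd5 run1.md5
    ffmpeg -threads 16 -i input.mkv \
        -filter_complex "[0:v][0:s]overlay" -bitexact -f framemd5 run2.md5
    # any mismatch means the output is not deterministic
    diff run1.md5 run2.md5

If those two runs ever disagree, that is the runner turning yellow.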

> 
> Of course, if the use case is one where perfect accuracy is necessary,
> users need to resort to a slower and bulkier procedure (like you
> suggested: open the file twice, which might require storing it entirely)
> to get it.
> 
> So really, what you claim is not breaking anything is in fact removing
> one of the options currently available to users in the compromise
> between speed, latency and accuracy.
> 
> So I demand you stop pretending you are not breaking anything, stop
> pretending it is currently broken, just so you can move forward without
> bothering to search for a solution: that starts to feel like laziness,
> and it always felt like rudeness, because I spent a lot of effort
> getting this to work in the cases where it can.
> 
>> The only bug that's been established to exist so far is in your
>> heartbeat code, which produces random output as per above.
> 
> As I explained many times, this is not a bug.

If I request -bitexact, I want bitexact output, regardless of whether 
it runs on a Core i3 or a Threadripper. There's nothing more to it.

> 
>> Buffering is by itself not a bug, otherwise you'd have to say the lavf
>> interleaving queue is a bug.
> 
> Once again, buffering thousands of frames and crashing from running
> out of memory, when the current code succeeds and produces a useful
> result, is a regression, and the patch series cannot be applied until
> that regression is fixed.

Calling random output useful because it happens to be "acceptable" 
within the subjective expectations of the user sounds to me like you're 
trying to find an excuse to keep buggy code with unpredictable results 
around, just because it's been there for a long time.

> 
>> So for the last time - either suggest a specific and practical way of
>> reducing memory consumption or stop interfering with my work.
> 
> The specific and practical way is to leave the current logic in place.
> There might be a few tweaks to make it more accurate, like looking into
> this comment:
> 
>      /* subtitles seem to be usually muxed ahead of other streams;
>         if not, subtracting a larger time here is necessary */
>      pts2 = av_rescale_q(pts, tb, ifp->time_base) - 1;
> 
> But first, we need you to stop behaving as if my previous efforts did
> not matter just because they do not overlap with your narrow use cases.

Your previous efforts mattered, but evidently did not yield completely 
acceptable results, and this overhaul has exposed it.

So, as Anton has asked several times, suggest a way to keep the output 
deterministic and bitexact without an unbounded increase in memory 
consumption due to buffering.

