[FFmpeg-devel] [PATCHv3 1/3] lavu/rand: add 64 bit random number generator
Ganesh Ajjanagadde
gajjanag at gmail.com
Fri Mar 18 16:16:53 CET 2016
On Tue, Mar 15, 2016 at 6:49 PM, Rostislav Pehlivanov
<atomnuker at gmail.com> wrote:
> On 15 March 2016 at 23:21, Ganesh Ajjanagadde <gajjanag at gmail.com> wrote:
>
>> On Tue, Mar 15, 2016 at 10:59 AM, Derek Buitenhuis
>> <derek.buitenhuis at gmail.com> wrote:
>> > On 3/15/2016 2:56 PM, Ronald S. Bultje wrote:
>> >> Might be related to aacenc? But yes, we need to know overall speed
>> >> gain of some useful end user feature before/after this.
>> >
>> > [13:42] <@atomnuker> well, AAC just requires the random numbers to
>> > be only somewhat random
>>
>> This is extremely vague. For instance, why do you even use Gaussians
>> in that case? There are far cheaper distributions: e.g. a uniform
>> distribution, even for floating point, is very cheap to generate
>> given an integer RNG.
>>
>> On the other hand, if I am to guess from this that you still want
>> Gaussians, just not necessarily very high quality ones, I am happy to
>> drop AVRAND64 and simply use av_lfg_get, e.g. via
>> ((uint64_t)av_lfg_get(&lfg) << 32) | av_lfg_get(&lfg).
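
(For concreteness, a minimal sketch of both ideas, assuming an already
initialized AVLFG; the helper names lfg_get64/lfg_uniform are made up
here, only av_lfg_get is the real libavutil call:

    #include <stdint.h>
    #include "libavutil/lfg.h"

    /* Sketch: 64 bits from two 32-bit LFG draws. */
    static uint64_t lfg_get64(AVLFG *lfg)
    {
        uint64_t hi = av_lfg_get(lfg);
        return (hi << 32) | av_lfg_get(lfg);
    }

    /* Sketch: cheap uniform float in [0, 1) from a single draw. */
    static float lfg_uniform(AVLFG *lfg)
    {
        return av_lfg_get(lfg) * (1.0f / 4294967296.0f); /* * 2^-32 */
    }

Neither is high quality randomness, of course; the point is only that
each is a handful of cheap integer operations.)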
>>
>> > [13:43] <@atomnuker> you could probably replace the random numbers
>> > with just a static table of somewhat random numbers
>>
>> That would be a pretty large table, its size of course depending on
>> how soon you are willing to cycle back through old entries.
>
>
> Okay, basically the reason the encoder uses any random numbers at all
> is the way the PNS system works. Noise energy values for PNS are
> quantized by taking the log2 of the value, multiplying it by 2,
> rounding down, clipping the result, and feeding that integer into a
> table. This introduces large errors, and since the encoder uses PNS
> quite extensively (it saves so many bits and by itself often makes
> high frequencies sound better), those errors resulted in higher or
> lower than normal energy simply because there wasn't enough range,
> particularly at low energies (which is what PNS was actually designed
> to handle, yet doesn't do that well!).
>
> So, to account for the errors, the encoder first calculates the energy
> value for a scalefactor band, quantizes and dequantizes that energy,
> generates random numbers, amplifies them using the dequantized energy,
> measures the energy of the synthesized noise, and compensates for the
> error relative to the original energy. It also does a rate-distortion
> estimate on the PNS bands, estimating how much it would cost to encode
> the band as-is without PNS, and decides whether to enable PNS for that
> band. If it does, the compensated energy is used. If the original band
> is near a quantization boundary, over time this can result in
> dithering (the dequantized energy alternating between frames), which
> can take care of the quantization error.
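
If I follow this correctly, the quantization step amounts to something
like the following (a standalone sketch of the description above, not
the encoder's actual code; function names and clip bounds are made up):

    #include <math.h>

    /* index = clip(floor(2 * log2(energy))); a table lookup on the
     * index then recovers the dequantized energy. */
    static int quant_noise_energy(float energy, int min_idx, int max_idx)
    {
        int idx = (int)floorf(2.0f * log2f(energy));
        return idx < min_idx ? min_idx : idx > max_idx ? max_idx : idx;
    }

    /* Inverse mapping: the coarseness of this half-log2 grid is what
     * introduces the error the encoder then compensates for. */
    static float dequant_noise_energy(int idx)
    {
        return exp2f(idx / 2.0f);
    }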
>
> So the random numbers here aren't that important, as they never
> actually directly reach any samples.
> The encoder used to use the linear congruential PRNG from the decoder,
> but I changed that with commit ade31b9424 since it reduced duplication,
> saved a few lines, and I thought the LFG might be faster on some
> systems (I wasn't able to observe any speed increase or decrease above
> the measurement error). It's only a few lines, so rolling it back and
> benchmarking it would be easy (the energy of the random noise is
> always normalized to [0.0, 1.0] before applying the dequantized energy
> gain, so even non-normalized random integers would work).
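
Right, with that normalization the RNG quality really does wash out.
A hypothetical sketch of what you describe (not the encoder's actual
code; synth_noise is a made-up name, av_lfg_get is the real call):

    #include <math.h>
    #include <stdint.h>
    #include "libavutil/lfg.h"

    /* Fill a band with noise from any integer RNG, normalize it to
     * unit energy, then apply the dequantized gain. The rescaling
     * discards whatever scale/distribution the RNG produced, so
     * non-normalized random integers work fine. */
    static void synth_noise(AVLFG *lfg, float *band, int n, float gain)
    {
        float energy = 0.0f;
        for (int i = 0; i < n; i++) {
            band[i] = (int32_t)av_lfg_get(lfg);
            energy += band[i] * band[i];
        }
        if (energy > 0.0f) {
            float scale = gain / sqrtf(energy);
            for (int i = 0; i < n; i++)
                band[i] *= scale;
        }
    }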
Can you please test this out and submit a patch? If it works, it is the
simplest and cleanest solution, and it also gives a larger gain. It
would avoid the whole mess of the currently proposed patch set.
> Using a table might have been an overestimation, though maybe it would
> still work.