[FFmpeg-devel] [RFC] avcodec: Add native DCA decoder based on libdcadec.
Andreas Cadhalpun
andreas.cadhalpun at googlemail.com
Tue Jan 12 23:58:43 CET 2016
On 08.01.2016 15:34, foo86 wrote:
> On Thu, Jan 07, 2016 at 08:17:59PM +0100, Andreas Cadhalpun wrote:
>> On 03.01.2016 18:49, foo86 wrote:
>>> +        for (i = 0; i < s->nmixoutconfigs; i++) {
>>> +            for (j = 0; j < nchannels_dmix; j++) {
>>> +                // Mix output mask
>>> +                int mix_map_mask = get_bits(&s->gb, s->nmixoutchs[i]);
>>
>> Here s->nmixoutchs[i] can be zero. If that should not happen, there needs
>> to be an error check; otherwise it should use get_bitsz, because
>> get_bits doesn't support reading 0 bits.
>
> It doesn't seem that a zero channel mask should normally happen; I added
> an error check earlier.
It's not completely fixed yet, because the check is only done if
static_fields_present is 1.
I think the correct fix would be to set mix_metadata_enabled to 0 if
static_fields_present is 0, e.g. in the else branch in ff_dca_exss_parse.
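Roughly a sketch of what I mean (names as in the patch; untested):

    if (s->static_fields_present) {
        // existing parsing of the static fields, including the
        // mix metadata check
    } else {
        // No static fields means the mix metadata is never parsed, so
        // make sure nothing reads s->nmixoutchs[] later on.
        s->mix_metadata_enabled = 0;
    }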
>> Anyway, I still think the code is pretty robust. :-)
>
> Thanks for testing!
You're welcome.
>> I'd be glad to increase fuzz-testing coverage further, but I'm lacking
>> input examples. It would be great if you could share some (tiny) samples
>> triggering the HEADER_XCH/HEADER_XXCH cases and/or *_down_mix functions.
>
> As Hendrik pointed out, there are libdcadec test suite samples you can
> use for this. To trigger all downmixing functions you may need to
> provide the -request_channel_layout 3 option.
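For reference, an invocation along these lines should exercise the
downmix paths (sample.dts is a placeholder for one of those samples):

    ffmpeg -request_channel_layout 3 -i sample.dts -f null -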
Thanks to your hints I could increase the coverage above 90%.
I found a few more issues along the way:
static void get_array(GetBitContext *s, int *array, int size, int n)
{
    int i;

    for (i = 0; i < size; i++)
        array[i] = get_sbits(s, n);
}
This should be get_sbits_long, because n can be larger than 25.
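i.e. (get_sbits_long() handles lengths up to 32 bits):

    array[i] = get_sbits_long(s, n);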
static int chs_assemble_freq_bands(DCAXllDecoder *s, DCAXllChSet *c)
{
    int i, ch, nsamples = s->nframesamples, *ptr;

    av_assert1(c->nfreqbands > 1);
This assert can be triggered.
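It could be replaced with a regular check along these lines (sketch,
assuming s->avctx is available here like elsewhere in the decoder):

    if (c->nfreqbands <= 1) {
        av_log(s->avctx, AV_LOG_ERROR, "Invalid number of frequency bands\n");
        return AVERROR_INVALIDDATA;
    }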
static void combine_residual_frame(DCAXllDecoder *s, DCACoreDecoder *core, DCAXllChSet *c)
{
    int ch, nsamples = s->nframesamples;
    DCAXllChSet *o;

    av_assert0(c->freq == core->output_rate);
    av_assert0(nsamples == core->npcmsamples);
These two, as well.
Maybe they should be regular error checks instead?
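Converting them would mean returning an error instead, roughly (sketch,
untested; combine_residual_frame() would have to return int and the
caller propagate the error):

    if (c->freq != core->output_rate || nsamples != core->npcmsamples)
        return AVERROR_INVALIDDATA;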
static void scale_down_mix(DCAXllDecoder *s, DCAXllChSet *o, int band)
{
    int i, j, nchannels = 0, nsamples = s->nframesamples;
    DCAXllChSet *c;

    for (i = 0, c = s->chset; i < s->nactivechsets; i++, c++) {
        if (!c->hier_chset)
            continue;
        for (j = 0; j < c->nchannels; j++) {
            int scale = o->dmix_scale[nchannels++];
            if (scale != (1 << 15)) {
                s->dcadsp->dmix_scale(c->bands[band].msb_sample_buffer[j],
                                      scale, nsamples);
c->bands[band].msb_sample_buffer[j] can be NULL here, which causes a crash.
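A defensive check would avoid the crash, though the proper fix probably
belongs in the header parsing (sketch):

    if (!c->bands[band].msb_sample_buffer[j])
        continue; // better: reject such channel sets at parse time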
Additionally, there are some rare integer overflows in dcadsp.c
(more precisely in filter0, filter1, dmix_sub_c and dmix_scale_c).
It would be nice to avoid those, if that's possible without a significant
speed loss.
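One common way to do that is to widen the intermediate to 64 bits and
saturate the result; a hypothetical variant of dmix_sub_c (the exact
fixed-point rounding/shift here is an assumption on my part):

    static void dmix_sub_c(int32_t *dst, const int32_t *src,
                           int coeff, ptrdiff_t len)
    {
        ptrdiff_t i;

        for (i = 0; i < len; i++) {
            // accumulate in 64 bits, then clip back into the int32 range
            int64_t v = (int64_t)dst[i] -
                        (((int64_t)src[i] * coeff + (1 << 14)) >> 15);
            dst[i] = av_clipl_int32(v); // from libavutil/common.h
        }
    }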
Best regards,
Andreas