[FFmpeg-devel] [RFC] avcodec: Add native DCA decoder based on libdcadec.
foo86
foobaz86 at gmail.com
Wed Jan 13 19:18:00 CET 2016
On Tue, Jan 12, 2016 at 11:58:43PM +0100, Andreas Cadhalpun wrote:
> It's not completely fixed yet, because the check is only done if
> static_fields_present is 1.
You are right, placing the check there doesn't account for the case where
static_fields_present becomes 0 later. I moved the check closer to the
problematic get_bits() call for a more complete fix.
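The shape of the fix is roughly as follows (just a sketch, not the
actual patch; the identifiers below are placeholders):

    /* Validate immediately before the problematic read, so the check
     * also holds when static_fields_present is 0 and the mixing
     * metadata was retained from an earlier frame. */
    if (mix_metadata_enabled) {
        if (nmixoutconfigs > MAX_MIX_CONFIGS) /* hypothetical bound */
            return AVERROR_INVALIDDATA;
        for (i = 0; i < nmixoutconfigs; i++)
            mix_out_mask[i] = get_bits(&s->gb, 6);
    }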
> I think the correct fix would be to set mix_metadata_enabled to 0 if
> static_fields_present is 0, e.g. in the else branch in ff_dca_exss_parse.
The spec doesn't give many details about this, but when
static_fields_present is 0, the values parsed earlier are supposedly
retained for the duration of the stream, so clearing
mix_metadata_enabled doesn't seem correct in this case.
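That is, the parsing is expected to behave roughly like this (sketch
only; field names illustrative):

    if (static_fields_present) {
        /* ... parse the static fields, including mix_metadata_enabled ... */
        mix_metadata_enabled = get_bits1(&s->gb);
    }
    /* else: deliberately leave mix_metadata_enabled and the other
     * static fields untouched; they carry over from a previous frame
     * for the duration of the stream. */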
> Thanks to your hints I could increase the coverage above 90%.
> I found a few more issues along the way:
>
> static void get_array(GetBitContext *s, int *array, int size, int n)
> {
>     int i;
>
>     for (i = 0; i < size; i++)
>         array[i] = get_sbits(s, n);
> }
>
> This should be get_sbits_long, because n can be larger than 25.
Added a range check on xbr_bit_allocation[][] to make sure the
get_sbits() argument doesn't exceed 23.
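Roughly like this (sketch; indexing and error handling simplified):

    /* get_sbits() handles at most 25 bits; values stored in
     * xbr_bit_allocation are rejected above 23, so the later
     * get_sbits(s, n) call stays within its limit. */
    for (ch = 0; ch < nchannels; ch++)
        if (c->xbr_bit_allocation[band][ch] > 23)
            return AVERROR_INVALIDDATA;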
> static int chs_assemble_freq_bands(DCAXllDecoder *s, DCAXllChSet *c)
> {
>     int i, ch, nsamples = s->nframesamples, *ptr;
>
>     av_assert1(c->nfreqbands > 1);
>
> This assert can be triggered.
There was supposed to be a check to ensure that all channel sets have
the same number of frequency bands, but apparently it got lost during
porting. Re-added it.
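Along these lines (sketch; the real check may be worded differently):

    /* All active channel sets must agree on the number of frequency
     * bands, otherwise later code would touch per-band sample buffers
     * that were never allocated for some of the sets. */
    for (i = 1; i < s->nactivechsets; i++)
        if (s->chset[i].nfreqbands != s->chset[0].nfreqbands)
            return AVERROR_INVALIDDATA;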
> static void combine_residual_frame(DCAXllDecoder *s, DCACoreDecoder *core, DCAXllChSet *c)
> {
>     int ch, nsamples = s->nframesamples;
>     DCAXllChSet *o;
>
>     av_assert0(c->freq == core->output_rate);
>     av_assert0(nsamples == core->npcmsamples);
>
> These two, as well.
> Maybe they should be regular error checks instead?
Hmm, the core sample rate and number of samples are already checked to
be compatible with XLL at this point. There must be some bug in the
upsampling logic if these asserts are triggering, but it is not obvious
to me where the problem is... Can you provide a sample that triggers
these? It would be helpful.
> static void scale_down_mix(DCAXllDecoder *s, DCAXllChSet *o, int band)
> {
>     int i, j, nchannels = 0, nsamples = s->nframesamples;
>     DCAXllChSet *c;
>
>     for (i = 0, c = s->chset; i < s->nactivechsets; i++, c++) {
>         if (!c->hier_chset)
>             continue;
>         for (j = 0; j < c->nchannels; j++) {
>             int scale = o->dmix_scale[nchannels++];
>             if (scale != (1 << 15)) {
>                 s->dcadsp->dmix_scale(c->bands[band].msb_sample_buffer[j],
>                                       scale, nsamples);
>
> c->bands[band].msb_sample_buffer[j] can be NULL here, which causes a crash.
The check re-added earlier to ensure that all channel sets have the
same number of frequency bands should fix this as well, I think.
> Additionally there are some rarely happening overflows in dcadsp.c.
> (More precisely in filter0, filter1, dmix_sub_c and dmix_scale_c.)
> It would be nice to avoid those, if that's possible without significant
> speed loss.
I wouldn't worry much about overflows if they happen only with fuzzed
files... Though this is definitely a problem if normal streams trigger
these.
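If normal streams do turn out to hit them, the usual remedy is to widen
the intermediate, e.g. for a dmix_scale_c-style loop (illustrative
sketch only, not the actual dcadsp.c code):

    /* Multiply in 64 bits and clip back to 32 bits; av_clipl_int32()
     * comes from libavutil/common.h. scale is Q15, so unity is 1 << 15
     * and the product is rounded before shifting back down. */
    static void dmix_scale_safe(int32_t *dst, int scale, int nsamples)
    {
        int i;
        for (i = 0; i < nsamples; i++) {
            int64_t t = ((int64_t)dst[i] * scale + (1 << 14)) >> 15;
            dst[i] = av_clipl_int32(t);
        }
    }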