[FFmpeg-devel] [PATCH] Speex parser
Baptiste Coudurier
baptiste.coudurier
Sun Sep 20 04:31:49 CEST 2009
On 09/19/2009 06:25 PM, Justin Ruggles wrote:
> Baptiste Coudurier wrote:
>
>> Hi Justin,
>>
>> On 09/04/2009 03:55 PM, Justin Ruggles wrote:
>>> Baptiste Coudurier wrote:
>>>
>>>> On 08/30/2009 06:26 PM, Justin Ruggles wrote:
>>>>> Baptiste Coudurier wrote:
>>>>>
>>>>>> On 8/30/2009 5:50 PM, Justin Ruggles wrote:
>>>>>>> Justin Ruggles wrote:
>>>>>>>
>>>>>>>> Justin Ruggles wrote:
>>>>>>>>
>>>>>>>>> Michael Niedermayer wrote:
>>>>>>>>>
>>>>>>>>>> On Sun, Aug 30, 2009 at 10:40:45AM -0400, Justin Ruggles wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> Here is a Speex parser. It does not split any frames, but only analyzes
>>>>>>>>>>> them in order to set the proper AVCodecContext parameters.
>>>>>>>>>> the decoder can do this, av_find_stream_info() should already create a
>>>>>>>>>> decoder to fill these in when they are missing.
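
(For context, the fallback being referred to here is the try_decode_frame()
path in libavformat/utils.c; very much simplified and from memory, exact
names may differ from the tree:)

    /* inside av_find_stream_info(), roughly: for each packet read while
     * probing, if the stream's parameters are still incomplete, open the
     * decoder (when one is available) and decode the packet so that
     * fields like frame_size and sample_fmt get filled in */
    if (!has_codec_parameters(st->codec))
        try_decode_frame(st, pkt->data, pkt->size);
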
>>>>>>>>> Why should it have to rely on the decoder...especially since we do not
>>>>>>>>> have a native decoder? So that one MUST compile in an external library
>>>>>>>>> for stream copy to work properly.
>>>>>>>> If there is no problem with packet duration being 0 or wrong, then I
>>>>>>>> think stream copy could work without the parser or decoder. I tried flv
>>>>>>>> to ogg and it seemed to work since timestamps were just copied from one
>>>>>>>> container to the other. Packet duration was still wrong though, and I
>>>>>>>> don't know if that causes other problems.
>>>>>>> Ok, I think I figured out the solution to this part at least. Speex
>>>>>>> needs to be added to the list of codecs in has_codec_parameters() that
>>>>>>> require frame_size to be non-zero, then the libspeex decoder should not
>>>>>>> set frame_size at init when it does not have extradata since it could be
>>>>>>> the wrong value.
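
(Roughly what that would look like, sketched from memory; the exact set of
codec IDs already in the list may differ from the tree:)

    static int has_codec_parameters(AVCodecContext *enc)
    {
        int val;
        switch (enc->codec_type) {
        case CODEC_TYPE_AUDIO:
            val = enc->sample_rate && enc->channels &&
                  enc->sample_fmt != SAMPLE_FMT_NONE;
            /* codecs whose frame_size is only known after parsing
             * extradata or decoding a frame must not be reported as
             * complete while frame_size is still 0 */
            if (!enc->frame_size &&
                (enc->codec_id == CODEC_ID_SPEEX  /* proposed addition */ ||
                 enc->codec_id == CODEC_ID_VORBIS /* existing entries... */))
                return 0;
            break;
        /* ... other codec_type cases unchanged ... */
        default:
            val = 1;
            break;
        }
        return val != 0; /* extra codec_id checks omitted */
    }
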
>>>>>> Or finally set sample_fmt to SAMPLE_FMT_NONE, triggering decoding of
>>>>>> the first frame.
>>>>> That seems like a hack. We always know the sample format at decoder
>>>>> init, but we don't necessarily know the frame size.
>>>> This depends on the codec. Codecs supporting different bit depths
>>>> certainly don't know the bit depth at init. Does it look like a hack?
>>>> I really don't think so.
>>>
>>> I thought you were just talking about Speex. In general, yes I agree
>>> with you that the decoder might not know it at init.
>>>
>>> If you are suggesting that having SAMPLE_FMT_NONE as default would be a
>>> preferable situation, then I agree with you. But it seems wrong to me
>>> to force av_find_stream_info() to always defer to the decoder to
>>> complete all stream parameters. In my opinion, the sample format output
>>> by the decoder has nothing to do with the stream parameters as far as
>>> the demuxer is concerned.
>>
>> Well, IMHO it is more complicated than that, but I agree with you in
>> principle. I.e. sample_fmt and raw_bits_per_sample are related the same
>> way colorspace and pix_fmt are related.
>>
>> This applies to pix_fmt as well.
>>
>> If no pix_fmt was computed, it basically means frames/stream could not
>> be decoded; the same applies to sample_fmt, I think.
>>
>> That's why I'm sceptical about setting pix_fmt and sample_fmt at codec init.
>
> I'm not sure that is what it means. sample_fmt is often set at decoder
> init because the decoder can often know definitively at that point what
> the sample format will be, either because it always outputs the same
> sample format or because it can determine what it will be from extradata.
Yes, however this causes inconsistencies between decoders, and users could
think that sample_fmt is set after init; that's why I'm skeptical, though
it seems reasonable to assume that users always decode the first frame
before initializing resampling.
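
To illustrate that assumption, a hypothetical caller-side sketch (period API
names, may not match the tree exactly) where resampling is only initialized
after the first frame has been decoded:

    int16_t samples[AVCODEC_MAX_AUDIO_FRAME_SIZE / 2];
    int out_size = sizeof(samples);
    ReSampleContext *resample = NULL;

    avcodec_open(avctx, codec);
    /* avctx->sample_fmt may still be unset or wrong at this point */
    if (avcodec_decode_audio3(avctx, samples, &out_size, &pkt) >= 0 &&
        avctx->sample_fmt != SAMPLE_FMT_NONE) {
        /* only now initialize resampling with the real input format */
        resample = av_audio_resample_init(2, avctx->channels,
                                          44100, avctx->sample_rate,
                                          SAMPLE_FMT_S16, avctx->sample_fmt,
                                          16, 10, 0, 0.8);
    }
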
>>> A grep of libavformat shows 4 uses of sample_fmt, not including the one
>>> in has_codec_parameters(). 3 look like incorrect uses. One is just
>>> questionable. What do you think about fixing these and removing the
>>> sample_fmt requirement from has_codec_parameters() and possibly from
>>> libavformat completely? Would it then be safe to make SAMPLE_FMT_NONE
>>> the default instead of SAMPLE_FMT_S16?
>>
>> IMHO it is already safe to change the default to SAMPLE_FMT_NONE.
>
> What are the arguments against changing it now? It seems to me that the
> only side effect that could be seen by some as negative would be that
> the demuxer and parser alone will never satisfy all audio codec
> parameters for av_find_stream_info() unless you also remove sample_fmt
> as a requirement. It seems you would prefer to not only keep it as a
> requirement, but even go further to basically always force decoding of
> the first packet when getting the stream info by not setting sample_fmt
> at codec init.
>
> My preference would be to remove it from the list of required parameters
> in has_codec_parameters().
Well, the default sample_fmt should be SAMPLE_FMT_NONE, just like pix_fmt
defaults to PIX_FMT_NONE.
Many people wish libavformat were more standalone, i.e. not relying on
libavcodec, so I think removing the sample_fmt value requirement goes in
this direction.
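
Concretely, that would just be a one-line change wherever the audio defaults
are initialized (avcodec_get_context_defaults2() or the AVOption table, I'd
have to check which), something like:

    /* mirror what is already done for pix_fmt */
    s->sample_fmt = SAMPLE_FMT_NONE;   /* instead of SAMPLE_FMT_S16 */
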
--
Baptiste COUDURIER
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer http://www.ffmpeg.org