[FFmpeg-devel] AAC - Fetching data out of data stream elements/DSE?

Andreas Rheinhardt andreas.rheinhardt at outlook.com
Tue Aug 3 18:48:12 EEST 2021


Carsten Gross:
> Hello,
> 
> AAC audio streams offer the possibility of transmitting additional binary
> data with the audio data. The corresponding element is called a "data
> stream element" (DSE). On the new ARD satellite audio transponder on
> Astra 19.2E, this is used by NDR, for example, to transmit RDS (Radio
> Data System). The audio format used is AAC-LATM.
> 
> In the current FFmpeg git code, DSEs are skipped. For testing, I added
> a hex dump of the DSE data in libavcodec/aacdec_template.c (by changing
> skip_data_stream_element()), and in the hex dump I see exactly the data
> I would like to get out of the audio data stream into my application
> "ts2shout" with the help of FFmpeg's libavcodec.
> 
> If I want to implement DSE support, what is the right "FFmpeg way" to
> provide this binary data embedded in the audio frames to a user
> application? Metadata seems to be intended more for "static" properties
> of files, and it is text only.
> 
> Would using "AVFrameSideData" be the right method to transfer the data
> to the user application? Is there a generic data type, or would I have
> to define a new one in AVAudioServiceType, considering that it is
> possible to have more than one DSE in one audio frame? My plan would be
> to later pass the individual AAC frames to libavcodec for decoding (as
> described in the example in doc/examples/decode_audio.c) and possibly
> only evaluate the respective DSE.
> 
Using frame side data (or also packet side data -- a matching packet
side data type would be needed if one wanted to preserve this data when
reencoding/encoding from scratch; it would also make it possible to
write a bitstream filter that extracts the data without decoding) is the
right way to do this.
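
For illustration only: on the application side this would boil down to
one av_frame_get_side_data() lookup per decoded frame, e.g. inside the
receive loop of doc/examples/decode_audio.c that you mentioned. The
AV_FRAME_DATA_AAC_DSE name below is invented for the sketch and does not
exist in FFmpeg:

#include <stdio.h>
#include <libavutil/frame.h>

/* Untested sketch: hex-dump the raw DSE payload attached to a decoded
 * frame. AV_FRAME_DATA_AAC_DSE is a hypothetical new entry in
 * AVFrameSideDataType. */
static void dump_dse(const AVFrame *frame)
{
    const AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_AAC_DSE);

    if (!sd)
        return;
    for (int i = 0; i < (int)sd->size; i++)
        printf("%02x ", sd->data[i]);
    printf("\n");
}
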
I do not see any existing side data type that matches your use case, so
you will probably have to create new ones.
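
On the decoder side, a rough and untested sketch (not a patch) of what
that could look like: read the data_stream_byte run that
skip_data_stream_element() in libavcodec/aacdec_template.c currently
discards and attach it to the output frame. The function name and
AV_FRAME_DATA_AAC_DSE are again invented; handling several DSEs per
frame and the matching packet side data type are left open, and the
bitstream syntax should be double-checked against ISO/IEC 14496-3:

#include <libavutil/error.h>
#include <libavutil/frame.h>
#include "get_bits.h"

/* Hypothetical replacement for skip_data_stream_element(): parse the
 * DSE payload (byte_align flag, 8-bit count, optional escape byte,
 * then count bytes of data_stream_byte) and attach it to the output
 * frame instead of skipping it. */
static int read_data_stream_element(GetBitContext *gb, AVFrame *frame)
{
    AVFrameSideData *sd;
    int byte_align = get_bits1(gb);
    int count      = get_bits(gb, 8);

    if (count == 255)
        count += get_bits(gb, 8);
    if (byte_align)
        align_get_bits(gb);
    if (get_bits_left(gb) < 8 * count)
        return AVERROR_INVALIDDATA;

    sd = av_frame_new_side_data(frame, AV_FRAME_DATA_AAC_DSE, count);
    if (!sd)
        return AVERROR(ENOMEM);
    for (int i = 0; i < count; i++)
        sd->data[i] = get_bits(gb, 8);
    return 0;
}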

- Andreas
