[FFmpeg-devel] [PATCH v2 3/3] avformat/movenc: add support for fragmented TTML muxing

Dennis Mungai dmngaie at gmail.com
Fri Dec 8 17:36:38 EET 2023


On Fri, 8 Dec 2023 at 15:14, Andreas Rheinhardt <andreas.rheinhardt at outlook.com> wrote:

> Jan Ekström:
> > From: Jan Ekström <jan.ekstrom at 24i.com>
> >
> > Attempts to base the fragmentation timing on other streams
> > as most receivers expect media fragments to be more or less
> > aligned.
> >
> > Currently does not support fragmentation on subtitle track
> > only, as the subtitle packet queue timings would have to be
> > checked in addition to the current fragmentation timing logic.
> >
> > Signed-off-by: Jan Ekström <jan.ekstrom at 24i.com>
> > ---
> >  libavformat/movenc.c                        |    9 -
> >  libavformat/movenc_ttml.c                   |  157 ++-
> >  tests/fate/mov.mak                          |   21 +
> >  tests/ref/fate/mov-mp4-fragmented-ttml-dfxp | 1197 +++++++++++++++++++
> >  tests/ref/fate/mov-mp4-fragmented-ttml-stpp | 1197 +++++++++++++++++++
>
> Am I the only one who thinks that this is a bit excessive?
>
> >  5 files changed, 2568 insertions(+), 13 deletions(-)
> >  create mode 100644 tests/ref/fate/mov-mp4-fragmented-ttml-dfxp
> >  create mode 100644 tests/ref/fate/mov-mp4-fragmented-ttml-stpp
> >
> > diff --git a/tests/fate/mov.mak b/tests/fate/mov.mak
> > index 6cb493ceab..5c44299196 100644
> > --- a/tests/fate/mov.mak
> > +++ b/tests/fate/mov.mak
> > @@ -143,6 +143,27 @@ FATE_MOV_FFMPEG_FFPROBE-$(call TRANSCODE, TTML SUBRIP, MP4 MOV, SRT_DEMUXER TTML
> >  fate-mov-mp4-ttml-stpp: CMD = transcode srt $(TARGET_SAMPLES)/sub/SubRip_capability_tester.srt mp4 "-map 0:s -c:s ttml -time_base:s 1:1000" "-map 0 -c copy" "-of json -show_entries packet:stream=index,codec_type,codec_tag_string,codec_tag,codec_name,time_base,start_time,duration_ts,duration,nb_frames,nb_read_packets:stream_tags"
> >  fate-mov-mp4-ttml-dfxp: CMD = transcode srt $(TARGET_SAMPLES)/sub/SubRip_capability_tester.srt mp4 "-map 0:s -c:s ttml -time_base:s 1:1000 -tag:s dfxp -strict unofficial" "-map 0 -c copy" "-of json -show_entries packet:stream=index,codec_type,codec_tag_string,codec_tag,codec_name,time_base,start_time,duration_ts,duration,nb_frames,nb_read_packets:stream_tags"
> >
> > +FATE_MOV_FFMPEG_FFPROBE-$(call TRANSCODE, TTML SUBRIP, MP4 MOV, LAVFI_INDEV SMPTEHDBARS_FILTER SRT_DEMUXER MPEG2VIDEO_ENCODER TTML_MUXER RAWVIDEO_MUXER) += fate-mov-mp4-fragmented-ttml-stpp
> > +fate-mov-mp4-fragmented-ttml-stpp: CMD = transcode srt $(TARGET_SAMPLES)/sub/SubRip_capability_tester.srt mp4 \
> > +  "-map 1:v -map 0:s \
> > +   -c:v mpeg2video -b:v 2M -g 48 -sc_threshold 1000000000 \
> > +   -c:s ttml -time_base:s 1:1000 \
> > +   -movflags +cmaf" \
> > +  "-map 0:s -c copy" \
> > +  "-select_streams s -of csv -show_packets -show_data_hash crc32" \
> > +  "-f lavfi -i
> smptehdbars=duration=70:size=320x180:rate=24000/1001,format=yuv420p" \
> > +  "" "" "rawvideo"
>
> Would it speed the test up if you used smaller dimensions or a smaller
> bitrate?
> Anyway, you probably want the "data" output format instead of rawvideo.
>
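If I read that suggestion right, the tail of the CMD would change roughly as
follows (an untested sketch: the smaller frame size is an arbitrary example
value, and it assumes the data muxer can be listed as a DATA_MUXER FATE
dependency in place of RAWVIDEO_MUXER):

  "-f lavfi -i smptehdbars=duration=70:size=160x90:rate=24000/1001,format=yuv420p" \
  "" "" "data"

with -b:v lowered accordingly (say 500k instead of 2M).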
> > +
> > +FATE_MOV_FFMPEG_FFPROBE-$(call TRANSCODE, TTML SUBRIP, ISMV MOV, LAVFI_INDEV SMPTEHDBARS_FILTER SRT_DEMUXER MPEG2VIDEO_ENCODER TTML_MUXER RAWVIDEO_MUXER) += fate-mov-mp4-fragmented-ttml-dfxp
> > +fate-mov-mp4-fragmented-ttml-dfxp: CMD = transcode srt $(TARGET_SAMPLES)/sub/SubRip_capability_tester.srt ismv \
> > +  "-map 1:v -map 0:s \
> > +   -c:v mpeg2video -b:v 2M -g 48 -sc_threshold 1000000000 \
> > +   -c:s ttml -tag:s dfxp -time_base:s 1:1000" \
> > +  "-map 0:s -c copy" \
> > +  "-select_streams s -of csv -show_packets -show_data_hash crc32" \
> > +  "-f lavfi -i
> smptehdbars=duration=70:size=320x180:rate=24000/1001,format=yuv420p" \
> > +  "" "" "rawvideo"
> > +
> >  # FIXME: Uncomment these two tests once the test files are uploaded to the fate
> >  # server.
> >  # avif demuxing - still image with 1 item.
>

Hello Jan,

Taking this note from the commit message into account, and I quote:

"Currently does not support fragmentation on subtitle track only, as the
subtitle packet queue timings would have to be checked in addition to the
current fragmentation timing logic."

Wouldn't it be better to hold off merging this until support for
fragmentation on subtitle-only tracks is complete, at the very least? That
way, the FATE tests for such a workflow (CMAF being a case in point) would
be feature complete.
The typical workloads that depend on such functionality, such as ingesting
CMFT (CMAF text tracks), require a subtitle-only stream to be present in
such a representation.
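
For illustration, the subtitle-only fragmented TTML output such an ingest
expects would look roughly like the following command line (a sketch only:
the input name, fragment duration and output name are placeholders, and per
the quoted note this subtitle-only case is exactly what the patch does not
yet fragment):

  # hypothetical subtitle-only CMAF text (CMFT) representation
  ffmpeg -i subtitles.srt -map 0:s -c:s ttml -time_base:s 1:1000 \
         -movflags +cmaf -frag_duration 2000000 -f mp4 subs_only.cmft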

See:
1. https://www.unified-streaming.com/blog/cmaf-conformance-is-this-really-cmaf
2. https://www.unified-streaming.com/blog/live-media-ingest-cmaf

