[FFmpeg-devel] [PATCH v16 08/16] fftools/ffmpeg: Replace sub2video with subtitle frame filtering

Soft Works softworkz at hotmail.com
Sat Nov 27 09:23:13 EET 2021



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Soft Works
> Sent: Saturday, November 27, 2021 8:19 AM
> To: FFmpeg development discussions and patches <ffmpeg-devel at ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v16 08/16] fftools/ffmpeg: Replace
> sub2video with subtitle frame filtering
> 
> 
> 
> > -----Original Message-----
> > From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Andreas
> > Rheinhardt
> > Sent: Friday, November 26, 2021 2:02 PM
> > To: ffmpeg-devel at ffmpeg.org
> > Subject: Re: [FFmpeg-devel] [PATCH v16 08/16] fftools/ffmpeg: Replace
> > sub2video with subtitle frame filtering
> >
> > > -    int subtitle_out_max_size = 1024 * 1024;
> > > +    const int subtitle_out_max_size = 1024 * 1024;
> > >      int subtitle_out_size, nb, i;
> > >      AVCodecContext *enc;
> > >      AVPacket *pkt = ost->pkt;
> > > +    AVSubtitle out_sub = { 0 };
> >
> > You are adding some stuff here which is removed in patch 16. These
> > patches should be merged.
> 
> Done. I had to move the encoding API commit to an earlier position for this.
> 
> 
> > > +        pts     = heartbeat_pts; //av_rescale_q(frame->subtitle_pts + frame->subtitle_start_time * 1000LL, AV_TIME_BASE_Q, ist->st->time_base);
> > > +        end_pts = av_rescale_q(frame->subtitle_pts + frame->subtitle_end_time   * 1000LL, AV_TIME_BASE_Q, ist->st->time_base);
> > > +    }
> > > +    else {
> >
> > Put this on the same line as }.
> >
> > > +        frame = av_frame_alloc();
> >
> > Is it actually certain that we need a new frame and can't reuse
> > decoded_frame like the other functions that ultimately call
> > send_frame_to_filters() do?
> 
> This typically only happens at the start, before any subtitle frames have
> been decoded. Also, those are empty frames, nothing like video frames with
> large buffers allocated.
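> 
> To illustrate, here is a simplified sketch of that heartbeat path (not the
> literal patch code; error handling and the exact timing setup are
> abbreviated, and the time-base conversion just mirrors the rescale shown
> above):
> 
>     // No subtitle frame decoded yet: create an empty heartbeat frame that
>     // only carries timing and push it into the filtergraph.
>     AVFrame *frame = av_frame_alloc();
>     if (!frame)
>         return AVERROR(ENOMEM);
>     frame->subtitle_pts = av_rescale_q(heartbeat_pts, ist->st->time_base,
>                                        AV_TIME_BASE_Q);
>     ret = send_frame_to_filters(ist, frame);
>     av_frame_free(&frame);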
> 
> > >
> > > -        do_subtitle_out(output_files[ost->file_index], ost, &subtitle);
> > > +    if (ist->nb_filters > 0) {
> > > +        AVFrame *filter_frame = av_frame_clone(decoded_frame);
> >
> > If I see this correctly, then the reason that one has to do something
> > besides send_frame_to_filters() is that we do not add a dummy
> > filtergraph like is done in line 2645 of ffmpeg_opt.c for audio and video?
> 
> IIRC this is to avoid heartbeat frames arriving at the encoder when no
> filtering is done (or when a source subtitle stream goes through filtering
> but is encoded directly as an output stream at the same time).
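> 
> The intent is roughly the following (simplified, error handling trimmed):
> 
>     if (ist->nb_filters > 0) {
>         // Only the filtergraph gets a cloned copy, so heartbeat frames
>         // never reach an encoder that consumes the stream directly.
>         AVFrame *filter_frame = av_frame_clone(decoded_frame);
>         if (!filter_frame)
>             return AVERROR(ENOMEM);
>         ret = send_frame_to_filters(ist, filter_frame);
>         av_frame_free(&filter_frame);
>     }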
> 
> > > +    if (!(w && h) && ist->dec_ctx->subtitle_header) {
> > > +        ASSSplitContext *ass_ctx = avpriv_ass_split((char *)ist->dec_ctx->subtitle_header);
> >
> > avpriv functions must not be used in fftools.
> 
> I wanted to make this part of the public API, but you and/or Hendrik objected,
> IIRC. I had made some suggestions for how this could be done, but they went
> unanswered (I had asked about this multiple times).
> 
> > And what makes you so
> > certain that subtitle_header will only be used by ass subtitles?
> 
> Because ASS is the defined internal format for text subtitles.
> 
> - Each text subtitle decoder outputs ASS subtitles
> - Each text subtitle encoder gets ASS subtitles as input
> - Text subtitle filters work on ASS subtitle data
> (there is a rarely used exception, AV_SUBTITLE_FMT_TEXT, but that doesn't
> use any header)
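> 
> To illustrate how simple that header is: for every text subtitle decoder,
> subtitle_header is plain ASS script text, so the play resolution could
> hypothetically even be read with trivial string parsing (this is *not* what
> the patch does, it only shows what the header contains):
> 
>     /* Hypothetical helper, requires <string.h> and <stdlib.h>:
>      * extract PlayResX/PlayResY from an ASS script header. */
>     static void get_ass_play_res(const char *header, int *w, int *h)
>     {
>         const char *p;
>         if ((p = strstr(header, "PlayResX:")))
>             *w = atoi(p + 9);
>         if ((p = strstr(header, "PlayResY:")))
>             *h = atoi(p + 9);
>     }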
> 
> > Furthermore, missing check.
> > (Maybe ass subtitle based codecs should set AVCodecContext.width and
> > height based upon this play_res_x/y?)
> 
> That would break the decoder API. Anyway, the full header needs to be parsed
> by certain filters. Also, the other ass functions need to be available to
> filters (e.g. for parsing dialog lines).
> 
> Since ASS is not just an encoding output or decoding input but is used
> internally as a transport format, those ass functions need to be available
> outside of libavcodec.
> I'm totally open to suggestions regarding _how_ this should be done.
> 
> 
> > >      av_freep(&ifilter->displaymatrix);
> > >      sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX);
> > > diff --git a/fftools/ffmpeg_hw.c b/fftools/ffmpeg_hw.c
> > > index 14e702bd92..be69d54aaf 100644
> > > --- a/fftools/ffmpeg_hw.c
> > > +++ b/fftools/ffmpeg_hw.c
> > > @@ -449,7 +449,7 @@ int hw_device_setup_for_encode(OutputStream *ost)
> > >      AVBufferRef *frames_ref = NULL;
> > >      int i;
> > >
> > > -    if (ost->filter) {
> > > +    if (ost->filter && ost->filter->filter) {
> >
> > I don't think that your patches make it necessary to add this (or have
> > you already added hardware accelerated subtitle encoding?), so it is
> > either already necessary in master and should be sent as a separate
> > commit or it is unnecessary and should be dropped.
> 
> Using ffmpeg with hw acceleration is my primary use case, which is why I'm
> rather sure that this check isn't required before this patchset.
> 
> While working on this patchset I've also tested various scenarios with hw
> acceleration, so it's possible that I ran into this in combination with hw
> acceleration. Anyway - why should adding this check be unnecessary or even
> wrong?
> 
> When ost->filter->filter is NULL, av_buffersink_get_hw_frames_ctx() will
> crash.
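> 
> For reference, the guarded code path looks roughly like this (simplified):
> 
>     if (ost->filter && ost->filter->filter) {
>         // ost->filter->filter is the buffersink AVFilterContext and is
>         // only set once the filtergraph has actually been configured.
>         frames_ref = av_buffersink_get_hw_frames_ctx(ost->filter->filter);
>         /* ... rest of the function unchanged ... */
>     }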
> 
> 
> Kind regards,
> softworkz

PS: Forgot to mention - everything else has been applied as suggested.

