[FFmpeg-devel] [PATCH 6/6] lavfi/dnn_classify: add filter dnn_classify for classification based on detection bounding boxes
Guo, Yejun
yejun.guo at intel.com
Mon Apr 26 05:10:47 EEST 2021
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of
> Andreas Rheinhardt
> Sent: April 26, 2021 9:34
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 6/6] lavfi/dnn_classify: add filter
> dnn_classify for classification based on detection bounding boxes
>
> Guo, Yejun:
> >
> >
> >> -----Original Message-----
> >> From: Guo, Yejun <yejun.guo at intel.com>
> >> Sent: April 18, 2021 18:08
> >> To: ffmpeg-devel at ffmpeg.org
> >> Cc: Guo, Yejun <yejun.guo at intel.com>
> >> Subject: [PATCH 6/6] lavfi/dnn_classify: add filter dnn_classify for
> >> classification based on detection bounding boxes
> >>
> >> Classification is done on every detection bounding box in the frame's
> >> side data; these boxes are the results of object detection (filter
> >> dnn_detect).
> >>
> >> Please refer to the commit log of dnn_detect for the detection material,
> >> and see below for the classification material.
> >>
> >> - download the material for classification:
> >> wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.bin
> >> wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.xml
> >> wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.label
> >>
> >> - run command as:
> >> ./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,dnn_classify=dnn_backend=openvino:model=emotions-recognition-retail-0003.xml:input=data:output=prob_emotion:confidence=0.3:labels=emotions-recognition-retail-0003.label:target=face,showinfo -f null -
> >>
> >> We'll see the detect&classify result as below:
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] side data - detection bounding boxes:
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] source: face-detection-adas-0001.xml, emotions-recognition-retail-0003.xml
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: happy, confidence: 6757/10000.
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.
> >> [Parsed_showinfo_2 @ 0x55b7d25e77c0] classify: label: anger, confidence: 4320/10000.
> >>
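For context, here is a minimal sketch (not taken from the patch) of how a consumer of such frames could read the combined detect/classify results, assuming the detection bbox side-data API in libavutil/detection_bbox.h (AV_FRAME_DATA_DETECTION_BBOXES, AVDetectionBBoxHeader, AVDetectionBBox, av_get_detection_bbox) that this series builds on:

#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/detection_bbox.h>

/* Sketch: print every detection bounding box attached to the frame,
 * plus any classifications that dnn_classify added to it. */
static void print_detect_classify(const AVFrame *frame)
{
    const AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_DETECTION_BBOXES);
    const AVDetectionBBoxHeader *header;

    if (!sd)
        return;

    header = (const AVDetectionBBoxHeader *)sd->data;
    printf("source: %s\n", header->source);
    for (unsigned i = 0; i < header->nb_bboxes; i++) {
        const AVDetectionBBox *bbox = av_get_detection_bbox(header, i);
        printf("index: %u, region: (%d, %d) -> (%d, %d), label: %s\n",
               i, bbox->x, bbox->y, bbox->x + bbox->w, bbox->y + bbox->h,
               bbox->detect_label);
        for (unsigned j = 0; j < bbox->classify_count; j++)
            printf("    classify: label: %s, confidence: %d/%d\n",
                   bbox->classify_labels[j],
                   bbox->classify_confidences[j].num,
                   bbox->classify_confidences[j].den);
    }
}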
> >> Signed-off-by: Guo, Yejun <yejun.guo at intel.com>
> >> ---
> >>  configure                     |   1 +
> >>  doc/filters.texi              |  36 ++++
> >>  libavfilter/Makefile          |   1 +
> >>  libavfilter/allfilters.c      |   1 +
> >>  libavfilter/vf_dnn_classify.c | 330 ++++++++++++++++++++++++++++++++++
> >>  5 files changed, 369 insertions(+)
> >>  create mode 100644 libavfilter/vf_dnn_classify.c
> >>
> >> +
> >> +AVFilter ff_vf_dnn_classify = {
> >> +    .name          = "dnn_classify",
> >> +    .description   = NULL_IF_CONFIG_SMALL("Apply DNN classify filter to the input."),
> >> +    .priv_size     = sizeof(DnnClassifyContext),
> >> +    .init          = dnn_classify_init,
> >> +    .uninit        = dnn_classify_uninit,
> >> +    .query_formats = dnn_classify_query_formats,
> >> +    .inputs        = dnn_classify_inputs,
> >> +    .outputs       = dnn_classify_outputs,
> >> +    .priv_class    = &dnn_classify_class,
> >> +    .activate      = dnn_classify_activate,
> >> +};
> >
> > I've locally added 'const' for AVFilter ff_vf_dnn_classify, any other
> > comment? thanks.
> >
> If you did this, then this filter may only be added after the bump.
Thanks, I'll add or remove the 'const' depending on whether the bump has happened by the time this can be pushed.
BTW, is there a rough estimate of when the bump will happen? Thanks.
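For reference, a minimal sketch of the const-qualified form being discussed, using the same initializer as in the quoted patch and assuming the rest of vf_dnn_classify.c from the patch as context; as noted above, this form can only go in once the bump has made the filter definitions const:

/* Sketch only: identical to the quoted patch except that the filter
 * definition is declared const, which is the form usable after the bump. */
const AVFilter ff_vf_dnn_classify = {
    .name          = "dnn_classify",
    .description   = NULL_IF_CONFIG_SMALL("Apply DNN classify filter to the input."),
    .priv_size     = sizeof(DnnClassifyContext),
    .init          = dnn_classify_init,
    .uninit        = dnn_classify_uninit,
    .query_formats = dnn_classify_query_formats,
    .inputs        = dnn_classify_inputs,
    .outputs       = dnn_classify_outputs,
    .priv_class    = &dnn_classify_class,
    .activate      = dnn_classify_activate,
};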