[FFmpeg-devel] [PATCH] lavfi: edgedetect filter

Stefano Sabatini stefasab at gmail.com
Thu Aug 9 02:04:01 CEST 2012


On date Wednesday 2012-08-08 22:14:03 +0200, Clément Bœsch encoded:
> On Wed, Aug 08, 2012 at 03:28:58PM +0200, Stefano Sabatini wrote:
> [...]
> > > > > +static void double_threshold(AVFilterContext *ctx, int w, int h,
> > > > > +                                   uint8_t *dst, int dst_linesize,
> > > > > +                             const uint8_t *src, int src_linesize)
> > > > > +{
> > > > > +    int i, j;
> > > > > +
> > > > > +#define THRES_HIGH 80
> > > > > +#define THRES_LOW  20
> > > > 
> > > > This values should be made parametric (and expressed as float values
> > > > in the [0, 1] range).
> > > > 
> > > 
> > > Yes, but right now the filter is called "edgedetect" and only implements
> > > the canny algo. If we were to add more algos, we would end up with various
> > > modes such as vf edgedetect=canny, or edgedetect=marrhildreth or such. So I
> > > wasn't confident about making it customizable yet.
> > > 
> > > This parametrization can be done later without breaking the usage; that
> > > really was the main point.
> > 
> > OK but this is trivial to implement, so I'd rather have this in the
> > first version (without the need to add named options and stuff in a
> > second commit).
> > 
> > [...]
> > > > Another consideration: we could optionally support faster less
> > > > accurate algorithms (e.g. Marr-Hildreth).
> > > 
> > > Feel free to implement it. I was just having fun with lavfi.
> > 
> > Summary: threshold parametrization is trivial and should be added
> > when pushing the first version.
> > 
> > Gaussian mask sigma and size customization can be added later, same
> > for generic convolution computation and for other edge detection
> > algorithms with different features.
> 
> OK. New patch attached.

[...]
> From 7ca50c30bc25f55a4cc5355597407a640b88f8a7 Mon Sep 17 00:00:00 2001
> From: =?UTF-8?q?Cl=C3=A9ment=20B=C5=93sch?= <ubitux at gmail.com>
> Date: Thu, 26 Jul 2012 19:45:53 +0200
> Subject: [PATCH] lavfi: edgedetect filter
> 
> FIXME: bump lavfi minor
> ---
>  doc/filters.texi            |  18 +++
>  libavfilter/Makefile        |   1 +
>  libavfilter/allfilters.c    |   1 +
>  libavfilter/vf_edgedetect.c | 324 ++++++++++++++++++++++++++++++++++++++++++++
>  tests/lavfi-regression.sh   |   1 +
>  tests/ref/lavfi/edgedetect  |   1 +
>  6 files changed, 346 insertions(+)
>  create mode 100644 libavfilter/vf_edgedetect.c
>  create mode 100644 tests/ref/lavfi/edgedetect
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index e73fc09..ec285c1 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -1929,6 +1929,24 @@ For more information about libfreetype, check:
>  For more information about fontconfig, check:
>  @url{http://freedesktop.org/software/fontconfig/fontconfig-user.html}.
>  
> + at section edgedetect
> +
> +Detect and draw edges. The filter uses the Canny Edge Detection algorithm with
> +the following optional named parameters:

I'd prefer:
|Detect and draw edges. The filter uses the Canny Edge Detection
|algorithm.
|
|This filter accepts the following optional named parameters:
...

Maybe mention that the filter only works on grey-tone images.

> +

> + at table @option
> + at item high
> +Set high threshold. Default is @code{80/256}.
> +
> + at item low
> +Set low threshold. Default is @code{20/256}.
> + at end table

Specify valid range. Also why x/256 rather than x/255?

Also from my fine textbook:
"Canny suggested that the ratio of the high to low threshold should be
two or three to one", so if you set low=20/255 I believe
high=2.5*20/255 = 50/255 should be fine.

Also the text doesn't explain the use of the thresholds. I'd say:
---------------8<----------------8<------------------------------
@item low, high
Set low and high threshold values used by the Canny thresholding
algorithm.

The high threshold selects the "strong" edge pixels, which are then
connected through 8-connectivity with the "weak" edge pixels selected
by the low threshold.

@var{low} and @var{high} threshold values must be chosen in the range
[0,1], and @var{low} should be less than or equal to @var{high}.

Default value for @var{low} is ..., default value for @var{high} is ...
---------------8<----------------8<------------------------------
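
(For reference, the hysteresis step described above boils down to something
like the following sketch. It is only an illustration of the idea, not the
code in the patch; the names and the recursive traversal are made up.)

#include <stdint.h>
#include <string.h>

/* Follow "weak" edge pixels (>= low) that are 8-connected to a pixel
 * already marked as an edge. Recursion keeps the sketch short; a real
 * implementation would likely use an explicit stack or a two-pass scan. */
static void trace_edge(uint8_t *dst, const uint16_t *grad,
                       int x, int y, int w, int h, int low)
{
    int dx, dy;

    dst[y*w + x] = 255;
    for (dy = -1; dy <= 1; dy++) {
        for (dx = -1; dx <= 1; dx++) {
            const int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                continue;
            if (!dst[ny*w + nx] && grad[ny*w + nx] >= low)
                trace_edge(dst, grad, nx, ny, w, h, low);
        }
    }
}

/* "Strong" pixels (>= high) seed the edges; weak pixels survive only if
 * they can be reached from a strong one. */
static void hysteresis(uint8_t *dst, const uint16_t *grad,
                       int w, int h, int low, int high)
{
    int x, y;

    memset(dst, 0, w * h);
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++)
            if (!dst[y*w + x] && grad[y*w + x] >= high)
                trace_edge(dst, grad, x, y, w, h, low);
}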

> +
> +Example:
> + at example
> +edgedetect=low=0.1:high=0.4
> + at end example
> +
>  @section fade
>  
>  Apply fade-in/out effect to input video.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index b6bd37f..b8d89b0 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -89,6 +89,7 @@ OBJS-$(CONFIG_DELOGO_FILTER)                 += vf_delogo.o
>  OBJS-$(CONFIG_DESHAKE_FILTER)                += vf_deshake.o
>  OBJS-$(CONFIG_DRAWBOX_FILTER)                += vf_drawbox.o
>  OBJS-$(CONFIG_DRAWTEXT_FILTER)               += vf_drawtext.o
> +OBJS-$(CONFIG_EDGEDETECT_FILTER)             += vf_edgedetect.o
>  OBJS-$(CONFIG_FADE_FILTER)                   += vf_fade.o
>  OBJS-$(CONFIG_FIELDORDER_FILTER)             += vf_fieldorder.o
>  OBJS-$(CONFIG_FIFO_FILTER)                   += fifo.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index da1c8e6..91eec2a 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -79,6 +79,7 @@ void avfilter_register_all(void)
>      REGISTER_FILTER (DESHAKE,     deshake,     vf);
>      REGISTER_FILTER (DRAWBOX,     drawbox,     vf);
>      REGISTER_FILTER (DRAWTEXT,    drawtext,    vf);
> +    REGISTER_FILTER (EDGEDETECT,  edgedetect,  vf);
>      REGISTER_FILTER (FADE,        fade,        vf);
>      REGISTER_FILTER (FIELDORDER,  fieldorder,  vf);
>      REGISTER_FILTER (FIFO,        fifo,        vf);
> diff --git a/libavfilter/vf_edgedetect.c b/libavfilter/vf_edgedetect.c
> new file mode 100644
> index 0000000..1e25c2d
> --- /dev/null
> +++ b/libavfilter/vf_edgedetect.c
> @@ -0,0 +1,324 @@
> +/*
> + * Copyright (c) 2012 Clément Bœsch
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Edge detection filter
> + *
> + * @url https://en.wikipedia.org/wiki/Canny_edge_detector
> + */
> +
> +#include "libavutil/opt.h"
> +#include "avfilter.h"
> +#include "formats.h"
> +#include "internal.h"
> +#include "video.h"
> +
> +typedef struct {
> +    const AVClass *class;
> +    uint8_t  *tmpbuf;
> +    uint16_t *gradients;
> +    char     *directions;

> +    double   lowd, highd;
> +    uint8_t  low,  high;

Nit: I'd prefer to keep the field name equal to the corresponding option name

> +} EdgeDetectContext;
> +
> +#define OFFSET(x) offsetof(EdgeDetectContext, x)
> +static const AVOption edgedetect_options[] = {
> +    { "high", "set high threshold", OFFSET(highd), AV_OPT_TYPE_DOUBLE, {.dbl=80/256.}, 0, 1 },
> +    { "low",  "set low threshold",  OFFSET(lowd),  AV_OPT_TYPE_DOUBLE, {.dbl=20/256.}, 0, 1 },
> +    { NULL },
> +};
> +
> +AVFILTER_DEFINE_CLASS(edgedetect);
> +
> +static av_cold int init(AVFilterContext *ctx, const char *args)
> +{
> +    int ret;
> +    EdgeDetectContext *edgedetect = ctx->priv;
> +
> +    edgedetect->class = &edgedetect_class;
> +    av_opt_set_defaults(edgedetect);
> +
> +    if ((ret = av_set_options_string(edgedetect, args, "=", ":")) < 0) {

> +        av_log(ctx, AV_LOG_ERROR, "Error parsing options string: '%s'\n", args);

Feel free to drop this.

> +        return ret;
> +    }
> +

> +    edgedetect->low  = edgedetect->lowd  * 256.;
> +    edgedetect->high = edgedetect->highd * 256.;

> +    return 0;
> +}
> +
> +static int query_formats(AVFilterContext *ctx)
> +{
> +    static const enum PixelFormat pix_fmts[] = {PIX_FMT_GRAY8, PIX_FMT_NONE};
> +    ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> +    return 0;
> +}
> +
> +static int config_props(AVFilterLink *inlink)
> +{
> +    AVFilterContext *ctx = inlink->dst;
> +    EdgeDetectContext *edgedetect = ctx->priv;
> +
> +    edgedetect->tmpbuf     = av_malloc(inlink->w * inlink->h);
> +    edgedetect->gradients  = av_calloc(inlink->w * inlink->h, sizeof(*edgedetect->gradients));
> +    edgedetect->directions = av_malloc(inlink->w * inlink->h);
> +    if (!edgedetect->tmpbuf || !edgedetect->gradients || !edgedetect->directions)
> +        return AVERROR(ENOMEM);
> +    return 0;
> +}
> +
> +static void gaussian_blur(AVFilterContext *ctx, int w, int h,
> +                                uint8_t *dst, int dst_linesize,
> +                          const uint8_t *src, int src_linesize)
> +{
> +    int i, j;
> +
> +    memcpy(dst, src, w); dst += dst_linesize; src += src_linesize;
> +    memcpy(dst, src, w); dst += dst_linesize; src += src_linesize;
> +    for (j = 2; j < h - 2; j++) {
> +        dst[0] = src[0];
> +        dst[1] = src[1];
> +        for (i = 2; i < w - 2; i++) {

> +            dst[i] = ((src[-2*src_linesize + i-2] + src[2*src_linesize + i-2]) * 2
> +                    + (src[-2*src_linesize + i-1] + src[2*src_linesize + i-1]) * 4
> +                    + (src[-2*src_linesize + i  ] + src[2*src_linesize + i  ]) * 5
> +                    + (src[-2*src_linesize + i+1] + src[2*src_linesize + i+1]) * 4
> +                    + (src[-2*src_linesize + i+2] + src[2*src_linesize + i+2]) * 2
> +
> +                    + (src[  -src_linesize + i-2] + src[  src_linesize + i-2]) *  4
> +                    + (src[  -src_linesize + i-1] + src[  src_linesize + i-1]) *  9
> +                    + (src[  -src_linesize + i  ] + src[  src_linesize + i  ]) * 12
> +                    + (src[  -src_linesize + i+1] + src[  src_linesize + i+1]) *  9
> +                    + (src[  -src_linesize + i+2] + src[  src_linesize + i+2]) *  4
> +
> +                    + src[i-2] *  5
> +                    + src[i-1] * 12
> +                    + src[i  ] * 15
> +                    + src[i+1] * 12
> +                    + src[i+2] *  5) / 159;
> +        }

Add a note mentioning that you're using a 5x5 Gaussian mask with
sigma = X.
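
Something like this, if I read the coefficients right (the /159
normalization matches the integer 5x5 kernel usually quoted for
sigma ~= 1.4; the exact sigma value is my assumption, please double-check):

/* 5x5 Gaussian convolution, integer-scaled kernel (sum = 159),
 * corresponding to sigma ~= 1.4:
 *
 *   | 2  4  5  4  2 |
 *   | 4  9 12  9  4 |
 *   | 5 12 15 12  5 | * 1/159
 *   | 4  9 12  9  4 |
 *   | 2  4  5  4  2 |
 */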

> +        dst[i    ] = src[i    ];
> +        dst[i + 1] = src[i + 1];
> +
> +        dst += dst_linesize;
> +        src += src_linesize;
> +    }
> +    memcpy(dst, src, w); dst += dst_linesize; src += src_linesize;
> +    memcpy(dst, src, w);
> +}
> +
> +enum {
> +    DIRECTION_45UP,
> +    DIRECTION_45DOWN,
> +    DIRECTION_HORIZONTAL,
> +    DIRECTION_VERTICAL,
> +};
> +
> +static int get_rounded_direction(int gx, int gy)
> +{
> +    /* reference angles:
> +     *   tan( pi/8) = sqrt(2)-1
> +     *   tan(3pi/8) = sqrt(2)+1
> +     * Gy/Gx is the tangent of the angle (theta), so Gy/Gx is compared against
> +     * <ref-angle>, or more simply Gy against <ref-angle>*Gx
> +     *

> +     * Gx and Gy bounds = [-1020;1020], using 16-bit arith:

Nit+: arithmetic (chars are cheap these days)

> +     *   round((sqrt(2)-1) * (1<<16)) =  27146
> +     *   round((sqrt(2)+1) * (1<<16)) = 158218
> +     */
> +    if (gx) {
> +        int tanpi8gx, tan3pi8gx;
> +
> +        if (gx < 0)
> +            gx = -gx, gy = -gy;
> +        gy <<= 16;

> +        tanpi8gx  =  27146 * gx;
> +        tan3pi8gx = 158218 * gx;

> +        if (gy > -tan3pi8gx && gy < -tanpi8gx)  return DIRECTION_45UP;
> +        if (gy > -tanpi8gx  && gy <  tanpi8gx)  return DIRECTION_HORIZONTAL;
> +        if (gy >  tanpi8gx  && gy <  tan3pi8gx) return DIRECTION_45DOWN;
> +    }
> +    return DIRECTION_VERTICAL;
> +}
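
For anyone re-deriving the constants, a quick standalone check (my own
sketch, not part of the patch) reproduces them and shows the intermediates
stay well within a signed 32-bit int:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* tangents of the reference angles, in 16.16 fixed point */
    printf("%ld\n", lrint((sqrt(2) - 1) * (1 << 16)));  /*  27146 */
    printf("%ld\n", lrint((sqrt(2) + 1) * (1 << 16)));  /* 158218 */

    /* worst cases with |gx|, |gy| <= 1020:
     *   |gy| << 16    <= 1020 << 16    =  66846720
     *   158218 * |gx| <= 158218 * 1020 = 161382360
     * both comfortably below INT32_MAX */
    return 0;
}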
> +
> +static void sobel(AVFilterContext *ctx, int w, int h,
> +                        uint16_t *dst, int dst_linesize,
> +                  const uint8_t  *src, int src_linesize)
> +{
> +    int i, j;
> +    EdgeDetectContext *edgedetect = ctx->priv;
> +
> +    for (j = 1; j < h - 1; j++) {
> +        dst += dst_linesize;
> +        src += src_linesize;
> +        for (i = 1; i < w - 1; i++) {
> +            const int gx =
> +                -1*src[-src_linesize + i-1] + 1*src[-src_linesize + i+1]
> +                -2*src[                i-1] + 2*src[                i+1]
> +                -1*src[ src_linesize + i-1] + 1*src[ src_linesize + i+1];
> +            const int gy =
> +                -1*src[-src_linesize + i-1] + 1*src[ src_linesize + i-1]
> +                -2*src[-src_linesize + i  ] + 2*src[ src_linesize + i  ]
> +                -1*src[-src_linesize + i+1] + 1*src[ src_linesize + i+1];
> +
> +            dst[i] = FFABS(gx) + FFABS(gy);
> +            edgedetect->directions[j*w + i] = get_rounded_direction(gx, gy);
> +        }
> +    }
> +}
> +
[...]

I'd also like to add support for generic Gaussian smoothing masks; let
me know if you want to work on that (in a separate patch) or I'll try
to find some time for it.
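
In case someone beats me to it, the kernel generation part could look
roughly like this (hypothetical helper for a future patch, untested):

#include <math.h>

/* Fill an n x n (n odd) Gaussian kernel for the given sigma, scaled to
 * integers so the convolution can stay in integer arithmetic.
 * Returns the sum of the coefficients, to be used as the divisor. */
static int make_gaussian_kernel(int *kernel, int n, double sigma, int scale)
{
    const int r = n / 2;
    int x, y, sum = 0;

    for (y = -r; y <= r; y++) {
        for (x = -r; x <= r; x++) {
            const int v = lrint(scale * exp(-(x*x + y*y) / (2 * sigma * sigma)));
            kernel[(y + r)*n + (x + r)] = v;
            sum += v;
        }
    }
    return sum;
}

With n=5, sigma=1.4 and scale=15 this should give back the kernel hardcoded
in gaussian_blur() above (sum = 159).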
-- 
FFmpeg = Fostering and Friendly Most Ponderous Elastic Game

