[FFmpeg-devel] [PATCH 2/5] lavc/aarch64: Add neon implementation of vsse16
Martin Storsjö
martin at martin.st
Wed Sep 7 11:57:46 EEST 2022
On Tue, 6 Sep 2022, Hubert Mazur wrote:
> Provide optimized implementation of vsse16 for arm64.
>
> Performance comparison tests are shown below.
> - vsse_0_c: 254.4
> - vsse_0_neon: 64.7
>
> Benchmarks and tests are run with checkasm tool on AWS Graviton 3.
>
> Signed-off-by: Hubert Mazur <hum at semihalf.com>
> ---
> libavcodec/aarch64/me_cmp_init_aarch64.c | 4 ++
> libavcodec/aarch64/me_cmp_neon.S | 87 ++++++++++++++++++++++++
> 2 files changed, 91 insertions(+)
>
> diff --git a/libavcodec/aarch64/me_cmp_init_aarch64.c b/libavcodec/aarch64/me_cmp_init_aarch64.c
> index ddc5d05611..7b81e48d16 100644
> --- a/libavcodec/aarch64/me_cmp_init_aarch64.c
> +++ b/libavcodec/aarch64/me_cmp_init_aarch64.c
> @@ -43,6 +43,8 @@ int sse4_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
>
> int vsad16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
> ptrdiff_t stride, int h);
> +int vsse16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
> + ptrdiff_t stride, int h);
>
> av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx)
> {
> @@ -62,5 +64,7 @@ av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx)
> c->sse[2] = sse4_neon;
>
> c->vsad[0] = vsad16_neon;
> +
> + c->vsse[0] = vsse16_neon;
> }
> }
> diff --git a/libavcodec/aarch64/me_cmp_neon.S b/libavcodec/aarch64/me_cmp_neon.S
> index 1d0b166d69..b3f376aa60 100644
> --- a/libavcodec/aarch64/me_cmp_neon.S
> +++ b/libavcodec/aarch64/me_cmp_neon.S
> @@ -649,3 +649,90 @@ function vsad16_neon, export=1
>
> ret
> endfunc
> +
> +function vsse16_neon, export=1
> + // x0 unused
> + // x1 uint8_t *pix1
> + // x2 uint8_t *pix2
> + // x3 ptrdiff_t stride
> + // w4 int h
> +
> + ld1 {v0.16b}, [x1], x3 // Load pix1[0], first iteration
> + ld1 {v1.16b}, [x2], x3 // Load pix2[0], first iteration
> +
> + sub w4, w4, #1 // we need to make h-1 iterations
> + movi v16.4s, #0
> + movi v17.4s, #0
> +
> + cmp w4, #3 // check if we can make 3 iterations at once
> + usubl v31.8h, v0.8b, v1.8b // Signed difference of pix1[0] - pix2[0], first iteration
> + usubl2 v30.8h, v0.16b, v1.16b // Signed difference of pix1[0] - pix2[0], first iteration
> + b.le 2f
> +
> +
> +1:
> + // x = abs(pix1[0] - pix2[0] - pix1[0 + stride] + pix2[0 + stride])
> + // res = (x) * (x)
Technically, there's no need for abs() here; since the value is squared
right afterwards, a plain subtraction gives the same result. I tested
this by replacing sabd with sub here (and changing umlal into smlal).
It doesn't make any difference for performance on the cores I tested on
though - apparently sabd and sub perform the same. So in practice, both
should be fine. And I don't think that either of them handles
overflows/edge cases here any better either (which shouldn't be
happening anyway).
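
For reference, roughly what I mean (an untested sketch; the register
numbers are only illustrative and don't necessarily match the rest of
the patch). Since x*x == abs(x)*abs(x), both sequences leave the same
values in the accumulators:

        // current variant: absolute difference, unsigned widening accumulate
        sabd    v31.8h, v31.8h, v29.8h   // x = |(p1[i]-p2[i]) - (p1[i+stride]-p2[i+stride])|
        umlal   v16.4s, v31.4h, v31.4h   // acc += x * x, lower 8 columns
        umlal2  v17.4s, v31.8h, v31.8h   // acc += x * x, upper 8 columns

        // alternative: plain difference, signed widening accumulate
        sub     v31.8h, v31.8h, v29.8h   // x = (p1[i]-p2[i]) - (p1[i+stride]-p2[i+stride])
        smlal   v16.4s, v31.4h, v31.4h   // acc += x * x, lower 8 columns
        smlal2  v17.4s, v31.8h, v31.8h   // acc += x * x, upper 8 columns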
// Martin