[FFmpeg-devel] [RFC] Introducing policies regarding "AI" contributions

Kacper Michajlow kasper93 at gmail.com
Sat Jul 5 15:22:39 EEST 2025


On Sat, 5 Jul 2025 at 13:21, Rémi Denis-Courmont <remi at remlab.net> wrote:
>
> On Tuesday, 1 July 2025 at 13:58:23 Eastern European Summer Time, Alexander
> Strasser via ffmpeg-devel wrote:
> > (...) I want this thread to start a discussion, that eventually leads
> > to a policy about submitting and integrating "AI" generated content.
>
> Well, you can define a policy and/or make a public statement on FFmpeg.org, but
> as others said, just like we can't prevent someone misattributing their
> contributions and violating copyrights, we can't credibly prevent (mis)use of
> LLMs to generate code.
>
> There is also a problem of definition. While I don't personally use computer
> assistance, I think it's fine to use language servers to automatically generate
> or suggest boilerplate, possible contextual completions, etc. While this sort
> of technology predates LLMs and is clearly distinct from them **at the moment**,
> it's going to be hard to define "AI" and where to draw the line.
>
> Ultimately, I think you need to define the problem(s) as far as FFmpeg-devel is
> concerned. Potential copyright violations are not new, and I think the current
> policies and license terms are adequate, regardless of AI.
>
> Low quality patches are also not really a new problem, and they can be
> rejected with the current processes too.
>
> *Maybe* LLM usage will (willingly or unwittingly) lead to denial-of-service
> attacks on the review capacity and motivation of the FFmpeg-devel, TC and GA
> membership, but that remains highly speculative, and I don't think we need to
> solve that what-if problem yet. And again, such an attack does not necessarily
> need an LLM to be carried out.

Fully agreed. I think the bottom line is that there is a human sending
those patches, and while "AI" tools may have been used, it's
ultimately on the human to ensure quality and send patches in a
reasonable state.

I think we can all agree that skill and experience vary between people.
I trust that experienced developers will produce patches of reasonable
quality, regardless of the tools used. The main issue I see is that
these "AI" tools enable so-called vibe coders to produce something they
do not understand and to think it's okay to share it. That is not
acceptable.

Of course, there is also the possibility of bots sending completely
automated patches. Those should be considered spam and rejected. I know
there is research towards making this work, but currently it backfires
immensely. GitHub offers a service like that, which generates pull
requests (patches) from issue descriptions in a fully automated
process. This essentially turns maintainers/reviewers into unwilling
vibe coders, prompting the LLM through their review comments. You can
imagine this doesn't end well... But again, I don't think this is
specific to AI tools. If an inexperienced developer produces a patch,
doesn't understand review comments, and is unresponsive or unable to
correct their changes, it's the same deal. The barrier to entry is
different when using an LLM, but the code review is really the same.

- Kacper

