[FFmpeg-devel] [PATCH v2] avformat: add Software Defined Radio support

Rémi Denis-Courmont remi at remlab.net
Sat Jun 24 23:27:13 EEST 2023



On 23 June 2023 20:12:41 GMT+02:00, Michael Niedermayer <michael at niedermayer.cc> wrote:
>Hi
>
>On Fri, Jun 23, 2023 at 06:37:18PM +0200, Rémi Denis-Courmont wrote:
>> Hi,
>> 
>> On 23 June 2023 13:17:28 GMT+02:00, Michael Niedermayer <michael at niedermayer.cc> wrote:
>> >On Fri, Jun 23, 2023 at 10:34:10AM +0800, Kieran Kunhya wrote:
>> >> FFmpeg is not the place for SDR. SDR is as large and complex as the
>> >> entirety of multimedia.
>> >> 
>> >> What next, is FFmpeg going to implement TCP in userspace, Wifi, Ethernet,
>> >> an entire 4G and 5G stack?
>> >
>> >https://en.wikipedia.org/wiki/Straw_man
>> >
>> >What my patch is doing is adding support for AM demodulation; the
>> >AM-specific code is about 2 pages. The future plan for FM demodulation will
>> >not add a lot of code either. DAB/DVB should also not be anything big
>> >(if that is implemented at all by anyone)
>> 
>> Literally every one of those layer-2 protocols has a lower-level API already on Linux, and typically they are, or would be, backends to libavdevice.
>> 
>> (Specifically AM and FM are supported by V4L radio+ALSA; DAB and DVB by Linux-DVB. 4G and 5G are network devices.)
>
>Four problems:
>* FFmpeg is not "Linux only".

And then what? Whether you like it or not, radio signal processing sits on top of OS-specific APIs to access whatever bus or hardware. You can't make this OS-independent whether it's in FFmpeg or elsewhere.

At best you can write or reuse platform abstraction layers (such as libusb). Maybe.

In other words, whether this ends up in FFmpeg or not has absolutely no bearing on this "problem" as you call it.

But it doesn't end here. Audio input on Linux is normally exposed with ALSA modules (hw/plughw if the driver is in the kernel, but it doesn't have to be), and other OSes have equivalent APIs. A sound (pun unintended) implementation of AM or FM would actually be an ALSA module, and *maybe* also PulseAudio and PipeWire modules. (They don't have to be kernel-mode drivers.)

...Not an FFmpeg device or demux.

>* No software I tried or was suggested to me used V4L or Linux-DVB.

That would be because audio input is done with ALSA (in combination with V4L for *hardware* radio tuning).

The point was that this is lower layer stuff that belongs in a lower level library or module, rather than FFmpeg. I never *literally* said that you should (or even could) use V4L or Linux-DVB here. Those APIs are rather for the *hardware* equivalent of what you're doing in *software*.

So again, no "problem" here.

>* I am not sure the RSP1A I have has Linux drivers for these interfaces

Unless you're connecting to the radio receiver via IP (which would be a kludge IMO), you're going to need a kernel driver to expose the bus to the physical hardware. This is the same fallacious argument as the first one: you *can't* be OS-independent here, even if that is understandably, even agreeably, desirable in an ideal (and purely imaginary) world.

>* What I am interested in is working with the signals at a low level,
>  because I find it interesting and fun.

Nothing wrong with that, but that doesn't make it fit in FFmpeg, which, for all the amazing work you've done on it, isn't your personal playground.

> Accessing AM/FM through some high
>  level API is not something I am interested in. This is also because any
>  issues are likely unsolvable at that level.
>  If probing didn't find a station, or demodulation doesn't work, a high
>  level API likely won't allow doing anything about that.

I'm not sure what you even call high-level API here.

AM and FM are analogue audio sources, and ALSA modules and equivalent APIs on other OSes are as low as it practically gets for *exposing* digitised analogue audio. They're *lower* than libavformat/libavdevice even, I'd argue.

>
>> 
>> So I can only agree with Kieran that these are *lower* layers, that don't really look like they belong in FFmpeg.
>
>FFmpeg has always been very low level. We stopped where the OS provides
>support that works, not at some academic "level". If every OS provides a great
>SDR API then I missed that, which is possible because that was never something
>I was interested in.
>
>
>> 
>> >If the code grows beyond that it could be split out into a separate
>> >library outside FFmpeg.
>> 
>> I think the point is that that code should be, from the start, in a separate, FFmpeg-independent library. And it's not just a technical argument about layering. It's also that it's too far outside what FFmpeg typically works with, so it really should not be put under the purview of ffmpeg-devel. In other words, it's also a social problem.
>> 
>> The flip side of that argument is that this may be of interest to other higher-level projects than FFmpeg, including projects that (rightfully) don't depend on FFmpeg, and that this may interest people who wouldn't contribute or participate in FFmpeg.
>
>The issue I have with this view is that it comes from people who want nothing to
>do with this SDR work.
>I would see this argument very differently if it came from people who
>want to work on that external SDR library.
>
>I mean this is more a "go away" than a "let's work together on SDR (for FFmpeg)"
>
>
>> 
>> >The size of all of SDR really has as much bearing on FFmpeg as the size
>> >of all of mathematics has on the use of mathematics in FFmpeg.
>> 
>> On an empirical basis, I'd argue that FFmpeg's mathematics are so fine-tuned to specific algorithmic use cases that you will end up writing custom algorithms and optimisations here anyway. And thus you won't be sharing much code with (the rest of) FFmpeg down the line.
>
>I am not sure, maybe I am drifting off topic, but
>I frequently use code from libavutil outside multimedia
>
>thx
>
>
>[...]
