[FFmpeg-user] Video streaming from FOH to multiple targets with ffmpeg?

Michael Jarosch riotsound at riotmusic.de
Thu Nov 16 16:32:46 EET 2023


Hi!

My name is Mitsch, I'm a sound engineer working for a German theater. 
Sometimes I do lighting and video projection, too. I love using open 
source tools, as they are lightweight and flexible. I'll soon have a new 
production, and here's what I finally want to achieve:

I'm sitting at FOH, driving a theater show. I have - let's say - 3 
projectors available: one behind me covering the stage from the front, 
and two behind the stage doing rear projection on the right and the left. 
Each projector has a Raspberry Pi connected via HDMI, waiting 
to send video to it. All the Raspberries are on the LAN, 
just like the Linux laptop from which the show is controlled.

I know I can achieve something similar with QLC+: install the app on all the 
computers involved, set up 3 different Art-Net channels, configure one or 
more video functions and make each one accessible through a DMX channel. 
For that, the videos to be presented have to sit on the 
Raspberries; I can copy them to the devices and configure the triggers 
on each one before the show.

But my goals are different: keep it simple, keep it fast (in terms of 
latency, but also in terms of using light, fast apps, and finally in 
terms of not running through the venue for last-minute 
configuration), and let only one machine need configuring - 
the main laptop at FOH.

I'm not so far from that - the tools and the technology already seem to 
be there. As you might know, with ffmpeg it's possible to 
stream video from point to point in real time:


|ffmpeg -i [input video] -f [container format, e.g. mpegts] 
udp://[receiver's network address]:[port]|

(There are options to speed things up and/or relieve the CPU, but take 
this as a simple example.)
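Concretely, a low-latency sender could look something like this (a sketch; 
the address, port and file name are placeholders to adjust for your LAN):

```shell
# Hypothetical receiver address and port -- adjust for your setup.
RECEIVER=192.168.1.101
PORT=5000

# H.264 with zero-latency tuning and a fast preset keeps both the
# encoder delay and the CPU load down; MPEG-TS is a common container
# for UDP streaming, and pkt_size=1316 fits TS packets into one
# Ethernet frame.
ffmpeg -re -i show-video.mp4 \
    -c:v libx264 -preset ultrafast -tune zerolatency -g 25 \
    -f mpegts "udp://$RECEIVER:$PORT?pkt_size=1316"
```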

On the other side of the chain, ffplay or mpv can catch the stream and 
decode it in no time:

|mpv udp://[listen address]:[port]|

(Again, optimizations left aside. Note that for plain unicast UDP the 
receiver binds to its own local address and port, not the sender's.)
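On the Pi side, a low-latency receiver might look like this (a sketch; 
mpv's low-latency profile and cache flags may need tuning on a Raspberry):

```shell
PORT=5000

# Bind to the local port the FOH laptop sends to; the low-latency
# profile and disabled cache cut the startup delay.
play_stream() {
    mpv --profile=low-latency --no-cache --fullscreen "udp://0.0.0.0:$PORT"
}

# play_stream   # runs until the sender stops (e.g. call it at Pi boot)
```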

I tried this myself on a LAN between a Ryzen 5 2400G desktop and a 
10-year-old ThinkPad and achieved latencies under 1 s - which is good 
enough, even for professional use. Once you've found the best options for 
your setup, you can reuse them over and over with different video inputs 
and destinations. Best of all: as a command-line invocation it can be 
integrated into QLC+ or Linux Show Player (LiSP). And with ffmpeg I can 
split the video from the audio stream, if I like, and keep the audio at 
FOH. (Or send it back from one of the Raspberries to FOH via NetJACK or 
comparable. Keeping video and audio in sync will be another challenge, I 
see…)
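Keeping the audio at FOH could be as simple as dropping the audio track at 
the sender (again a sketch with placeholder names):

```shell
# Hypothetical receiver address and port.
RECEIVER=192.168.1.101
PORT=5000

# "-an" drops the audio entirely, so only video leaves the FOH laptop
# and the sound stays in the local audio rig.
ffmpeg -re -i show-video.mp4 -an \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -f mpegts "udp://$RECEIVER:$PORT?pkt_size=1316"
```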

But there is one downside: once the receiver is already playing the 
video, the latency between sender and receiver is small (if the options 
are chosen well, of course), but catching the stream in the first place 
can take several seconds. So what I need is a continuous stream onto 
which I can send my videos. OBS can do this, but it's another 
resource-intensive app, and - as far as I know - I cannot send commands 
to it from QLC+ or LiSP. (I want ONE cue player for everything, you 
know…!) Also, I *guess* OBS can't handle more than one stream at once 
(sending to the different RPi receivers) - but with ffmpeg commands 
that's easy…!
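Fanning one encode out to several receivers is what ffmpeg's tee muxer is 
for; a sketch with made-up Pi addresses:

```shell
# Hypothetical addresses of the three projector Pis.
PI_FRONT=192.168.1.101
PI_LEFT=192.168.1.102
PI_RIGHT=192.168.1.103
PORT=5000

# The tee muxer encodes once and duplicates the packets to each output,
# so three receivers cost no extra encoding CPU over one.
ffmpeg -re -i show-video.mp4 \
    -c:v libx264 -preset ultrafast -tune zerolatency -an \
    -f tee -map 0:v \
    "[f=mpegts]udp://$PI_FRONT:$PORT|[f=mpegts]udp://$PI_LEFT:$PORT|[f=mpegts]udp://$PI_RIGHT:$PORT"
```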

I had the idea of sending a continuous stream by screen-casting a virtual 
desktop and configuring mpv to play on it in fullscreen on demand. But I 
guess this doesn't scale well to more than one beamer. (Also, Wayland is 
more and more common on the Linux desktop, and I've heard ffmpeg can't 
grab Wayland the way it used to grab X11.)
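For what it's worth, one way to get that continuous stream without a 
desktop at all might be an ffmpeg-generated filler (a sketch with a 
made-up address; whether the hand-over from filler to real video is 
glitch-free on the receiver would need testing):

```shell
# Hypothetical address of one projector Pi; one filler per Pi/port.
PI=192.168.1.101
PORT=5000

# The lavfi "color" source never runs out, so the receiver stays locked
# onto the stream between cues; a cue would kill the filler and start
# the real video on the same address and port.
start_filler() {
    ffmpeg -re -f lavfi -i "color=c=black:s=1920x1080:r=25" \
        -c:v libx264 -preset ultrafast -tune zerolatency \
        -f mpegts "udp://$PI:$PORT?pkt_size=1316"
}

# start_filler   # runs until killed
```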

Any ideas on how to reach my goals? (You can suggest apps other than 
ffmpeg or mpv, of course!)

(Disclaimer: I have also posted this to the Linux Audio Users mailing 
list and the QLC+ forum. I will let you know if I get good ideas from 
the other sources…)

Greets!
Mitsch

