[FFmpeg-devel] WHIP - Webrtc Http Ingest Protocol

Sergio Garcia Murillo sergio.garcia.murillo at gmail.com
Fri Sep 11 15:16:09 EEST 2020


On 11/09/2020 13:49, Lynne wrote:
> On 11/09/2020 12:31, Sergio Garcia Murillo wrote:
>> Hi all!
>>
>> WebRTC can be an intimidating monster, and we are all aware of that.
>> Also, the fact that every WebRTC media server/service has its own
>> custom protocol has not helped to increase its adoption in the streaming
>> world, which still relies heavily on other protocols like RTMP, SRT or
>> even RTSP for ingesting content.
>>
>> In order to try to broaden the adoption of WebRTC in the streaming
>> world, and make it available in OSS tools like ffmpeg, gstreamer and
>> OBS, we have created the WebRTC HTTP Ingest Protocol (WHIP), so that the
>> same implementation works across several (hopefully most) WebRTC
>> services and media servers.
>>
>> The first draft is available here:
>>
>> https://tools.ietf.org/html/draft-murillo-whip-00
>>
>> The latest editorial draft is in this github repo:
>>
>> https://github.com/murillo128/webrtc-http-ingest-protocol
>>
>> The feedback from the WebRTC community has been very positive so far, as
>> this has been a recurring burden for years, so adoption on that side
>> is very likely to happen. I have already implemented it on my media
>> server (https://github.com/medooze/media-server-node) and we will
>> disclose more information about how to test and interoperate with our
>> implementation next week.
>>
>> Other WebRTC devs and I are willing to collaborate on implementing
>> this in ffmpeg, as it would be huge for the WebRTC community.
>>
>> I would love to hear feedback from you, the ffmpeg devs, on the
>> draft so we can improve it and make it easier to implement in ffmpeg.
>> What do you think?
>>
>> Best regards
>>
>> Sergio
>>
>>
> Honestly, this is pretty meh. While it makes the job of the ingest
> server easier, what I and most of us in the community would have really
> liked to see is something like Matroska sent over a UDP protocol with
> adjustable error correction settings, error detection and built-in
> redundancy. Basically, SRT but with Matroska instead of mpegts, or at
> least a variant of ISOBMFF.
>
> And rather than it being published as a WebRTC-specific protocol, a general
> protocol implementing support for a multitude of codecs.
>
Thank you for your honest feedback.


I probably should have added a bit of context about its intended usage to 
set expectations correctly.


The idea behind WHIP is not to propose a general ingestion protocol to 
replace SRT/RTMP, but to allow ingesting WebRTC ultra-low-delay media 
into a WebRTC server, avoiding protocol/codec conversions on the server 
and preserving the WebRTC properties end to end (bandwidth estimation, 
congestion control, NACK, RTX, FEC, simulcast and SVC support, NAT 
traversal and end-to-end encryption, with an end-to-end delay of less 
than 100 ms).


In that context, Matroska over UDP would be as useless for the WebRTC 
community as current RTMP/SRT.


While I acknowledge that this might be useless for VOD-like ingestion, 
there are already a lot of interesting WebRTC services that could 
benefit from this (and from the feedback I have received, it is a real need).


> Matroska is already close to being an IETF standard
> (draft-ietf-cellar-matroska-05) itself, so you wouldn't even have had to
> specify the container, and for the protocol itself, even the most basic
> of error-correcting Hamming codes would suffice, and such could even be
> worked into Matroska itself so only the data which matters would be
> protected.
>
>
>
> The only reason why RTMP is still in use and is still worked on (a team
> not too long ago wrote an HEVC encapsulation spec/draft for it) is that,
> despite its shortcomings, it's ubiquitous, generalized, simple and
> fully encapsulates all data streams sent through the FLV container.
>
>
>
> FFmpeg's RTP support is honestly pretty broken, and the fact you need
> separate RTP streams for audio and video pretty much guarantees the
> streams will be out of sync with the way the current muxers are
> implemented. So even if this protocol were to be implemented, it
> wouldn't work any better than just sending 2 RTP streams.
>
I have suffered ffmpeg's RTP support for years; that's why we are 
volunteering to help with that part of the development.

WebRTC uses RTP muxing to send all data over the same UDP port, with lip 
sync based on RTCP (also muxed on the same UDP port).  It has been working 
for years.
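
As a rough sketch of how the RTCP part gives lip sync (illustrative C,
not ffmpeg API; the struct and function names below are made up): each
RTCP Sender Report pairs an NTP wallclock timestamp with the RTP
timestamp of the same instant, so audio and video timestamps can both be
mapped onto the shared NTP clock and compared, despite different clock
rates and random initial offsets:

#include <stdint.h>

struct sender_report {
    uint64_t ntp_ts;  /* NTP timestamp from the last SR (Q32.32)        */
    uint32_t rtp_ts;  /* RTP timestamp sampled at the same instant      */
    int      clock;   /* RTP clock rate, e.g. 90000 video, 48000 audio  */
};

/* Map an RTP timestamp of this stream onto NTP time, in seconds. */
static double rtp_to_ntp_seconds(const struct sender_report *sr,
                                 uint32_t rtp_ts)
{
    /* Unsigned subtraction then signed cast handles timestamp wrap. */
    int32_t d = (int32_t)(rtp_ts - sr->rtp_ts);
    double sr_ntp = sr->ntp_ts / 4294967296.0;   /* Q32.32 -> seconds */
    return sr_ntp + (double)d / sr->clock;
}

Doing that with the latest audio SR and the latest video SR gives the
offset the player has to apply, which is how the single muxed port still
yields synchronized audio and video.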

It also has bandwidth estimation, congestion control, NACK, RTX, FEC, 
simulcast and SVC support, NAT traversal and end-to-end encryption, with 
an end-to-end delay of less than 100 ms.


Best regards

Sergio


