Proposed Deprecation of FTL Protocol in OBS #4021
-
What I want to know is what FTL offers the world that other protocols, such as WebRTC and RIST, do not? RIST is an open RTP/UDP standard, and WebRTC has widespread adoption and can solve a lot of the same problems. If those protocols had been supported by OBS before these FTL-based projects had been started, would they have still been started? What is the advantage of FTL over other protocols beyond the mere fact that OBS supports it? It's worth noting that OBS is very likely going to begin working on adding generic WebRTC support, with the ability for services to create plugins that offer their signaling method of choice.
-
While SRT is supported, it does not seem that anything is actually using it? I haven't gone all the way through SRT yet; however, I am struggling to even find adequate documentation that I could base an implementation on. Furthermore, I am not entirely sure how FTL negatively impacts OBS right now in a way that justifies its removal without the implementation of a suitable alternative (e.g. WebRTC, RIST). What is desired, in my use case at least, is a straightforward protocol to deliver RTP packets, and FTL provides that with a very easy implementation. I would also like to challenge your classification of FTL as a dead technology. In my opinion, a dead technology is one which is no longer used or adopted, yet there are multiple services still using FTL, including mine. There are also plans to standardize and continue the development of the FTL protocol, and I feel that deprecating it without at least seeing what the future holds would be jumping the gun a bit.
-
Serious question: do you take bribes, and if so, how much would it take for you to reconsider? FTL is the only proven solution that offers low delay by design, and also the solution that offers the lowest possible delay out of everything else out there. There is (currently) no other way to deliver low-latency video like this to a web browser at all, and all the different transport protocols that are left either inherently impose a significant video delay, require expensive transcoding at the ingest (adds a delay!), or can't deal with synchronized video at all (requiring a buffer to work, which adds delay, and including a retransmission mechanism that adds cumulative delay whenever an error occurs). Out of the three reasons you stated, the first two are just plain wrong, and the third one, while understandable, usually comes down to one thing and one thing only: a bad internet connection (be that jitter, oversubscribed and congested routes to the ingest, or other forms of packet loss).
-
There's a lot to unpack here, and I am still digesting the issues brought up by the OBS team and the current state of the industry, so I'm not quite ready to give my opinion yet. I do have a question about the timeline, though, so we can make sure that we haven't wasted the last 6 months of volunteer development time on Glimesh :).
Glimesh is not quite launched yet; we are close, but not close enough to call it. The very short February 1st timeline puts real stress on rushing through the remaining work on the project to try to ensure it gets a fighting chance. As I'm sure you know, OBS is enormously popular with streaming users, so launching without first-party support for it would be unfortunate. My questions are:
-
So, here's my response, written with input from those in the FTL dev group; apologies for the length, but I wanted to cover all points here.

Speaking from a historical context, FTL's entire design goal was to get sub-second latency (ideally 500 ms or less; real world, we got to 200-250 ms) from streamer to browser under "good" to ideal Internet conditions, without add-on software. To that end, it was specifically designed to skip a muxing step and allow for easy implementation of an O(1) hard real-time ingest (a naive FTL ingest can be implemented simply by modifying a few bits of each RTP packet in transit). The entire intent was to implement something like Stadia or xCloud in the browser, and it was overwhelmingly successful in that role. While Microsoft didn't push Beam/Mixer Plays, the FTL technology has a six-year history of doing exactly what it was designed to do. Creating a new streaming protocol was not done lightly at Mixer, and frankly, none of the alternatives promise to hit the real-time requirements of "in browser" streaming that FTL can, and I'm not convinced that even in ideal situations they will.

The primary difference between WebRTC, RIST, and SRT vs. FTL is that FTL is designed to lose packets and intentionally does not give any notion of reliable packet delivery. While there is a small retransmission buffer built into ftl-sdk (which was implemented after I stepped away from FTL), it's (as far as I can tell) primarily there to smooth out H.264 issues. From the RIST specification, the retransmission effect is much larger, and doesn't appear to have an upper bound on how far it can queue. The entire intent appears to be to re-implement TCP/IP-style reliability on top of RTP. While this will give good performance in good conditions and be manageable in bad, it creates a problem if you need to stay in real time. FTL, by contrast, essentially YOLOs each packet (and this plays into the last mile, but I'll get to that).

RIST, on the other hand, is specifically designed to be reliable and has a large middleware layer to allow resumption in case of stream interruption. While RIST is indeed built on RTP and SRTP, it doesn't specifically promise a latency goal. While it might be possible to get the same low-latency performance over RIST, it's not a design goal of the protocol, and TBH, I'm less than fond of tossing out the "known to work" solution for one that is not proven. RIST's retransmission facilities are also not really designed with synchronization in mind. The client can request retransmission by block or by specific packets, but not to a specific keyframe interval as can be done with H.264/FTL streaming. While retransmission is technically optional, it does exist in FTL today for a specific purpose. The original VP8/Opus FTL implementation actually didn't have any retransmission facilities, and could hit 200-250 ms latency in the real world. I do intend to look at reviving this functionality. In a RIST world, you send NAKs to get packets retransmitted, and you have a fairly large state machine on top of that, which is basically TCP/IP implemented on top of UDP with RTP. Furthermore, RIST still doesn't handle signaling of information like video metadata and similar, which is a specific feature that Charon (FTL's handshake) implements. While RIST might be a replacement for FTL some day, it doesn't actually provide anything over the current implementation, and still requires additional infrastructure to be used. In short, actually adopting RIST doesn't solve a problem.

This brings me to the topic of WISH/WHIP and WebRTC. To be frank, I'm not certain why WISH or WHIP is being brought up; given that we have two separate specifications that try to solve WebRTC signaling, it's pretty clear this isn't a solved problem. Before I dig into either, I do want to talk a bit about WebRTC's own signaling mechanism: the Session Description Protocol (SDP). SDP underpins all WebRTC calls and is also used in VoIP. It's a nightmarishly complex handshake with a very loosely defined standard. When I looked at doing FTL originally, I did actually look at the possibility of building the signaling information into SDP, and decided that the best solution was to run as fast as I could. WISH and WHIP both appear to be extensions of SDP, which means they inherit all the problems of SDP. While it is a slightly old article, this sums up the pain points pretty well: https://webrtchacks.com/webrtc-sdp-inaki-baz-castillo/. I highly recommend going through webrtcHacks' coverage of SDP before advocating any solution built around it.

I'll start with WISH: the IETF specification is in the Internet-Draft stage, and still in the Proposed WG state: https://datatracker.ietf.org/wg/wish/about/. I'm not even certain there's a reference implementation of the specification. Furthermore, WISH is a direct extension of SDP, which means all of SDP's pain points carry over. SDP has a fiendishly complex protocol negotiation because it's essentially intended to make two VoIP phones from different manufacturers agree on some sort of standard to talk to each other. WebRTC not only uses this for talking to browsers, but adds a very complicated stack of ICE connectivity on top. Even though things like STUN have gone by the wayside, it's pretty hard to say that this is a desirable thing. In contrast, if you can make TCP and UDP connections from point to point, you can stream with FTL, because it's designed to go in one direction and only has an "accept/deny" state rather than offer/answer machinery. In effect, adopting WISH in OBS would mean integrating a full WebRTC stack, a signaling layer, and more to accomplish what FTL does with two RTP streams, a signaling protocol, and some basic invariants. There's also the fact that both RIST and WISH are either new or still being drafted. While both have the advantage of being open standards from the get-go, they have no field-test history, and neither is trying to hit the sub-second latency goal of the original FTL protocol.

This brings me to SRT. Unlike RIST or WISH, SRT is specifically designed as a low-latency solution. However, SRT has two fundamental problems which make it unsuitable for use in a "streamer -> web browser" pipeline. The first is that SRT is not wire-compatible with RTP, nor does it use the same encoding standard. Part of the trick of FTL is that the packet generated by OBS is in fact the same packet sent to the browser, with the least possible amount of post-processing done in the middle. Based on the benchmarks I did at Beam, each time we mux, demux, or re-encode a packet, we pick up significant lag, since in most cases you need a fully assembled keyframe set to re-encode from location to location. FTL specifically bypasses that requirement by putting out something that WebRTC (with slight modifications) can digest directly. But that's not the only problem with SRT: it lacks a signaling mechanism. FTL's Charon protocol acts as the signal for its two RTP streams; it can send metadata, signal keep-alives, and be extended if ever needed. SRT merely defines a wire protocol and doesn't specify any in-band signaling. In short, you'd still need something like FTL's Charon protocol to use SRT, and unless browsers adopt native SRT support, you still won't be able to hit the same latency benchmarks you can hit with FTL.

There's also a final thing to consider. At the heart of this discussion, we're talking about ripping out a proven technology for something that only exists on paper. While theoretically you could build low-latency livestreaming solutions on several of the technologies listed here, no one has actually done it and proved it works in the real world. FTL has, and it did so for more than half a decade. There's something to be said for a proven solution.

It is true that under Mixer, FTL wasn't an open protocol, and the only implementation was Mixer itself. That was not the intent of FTL. FTL was intended as an open standard, and most of my reference notes were posted in the FTL-SDK. MSFT removed them, but they're still in the git history. Furthermore, multiple people have reimplemented the server side from scratch using those notes, and the "last mile" part is easily implemented using the Janus gateway, which is the same software the first FTL implementations were built on. The FTL RTP streams are essentially designed so that the packet coming out of OBS is the packet that will go into WebRTC. The FTL implementation we used only had to change the SSRC in flight, and later ones had to add SRTP when it became mandated. Most of the work of the FTL standards effort is essentially to create a canonical version, since there are some server behaviors that can't be defined from the client; but on the whole, FTL is well understood, it's documented (and getting better), and we'll likely either write or canonicalize an existing implementation as the de facto reference implementation, and then resume development to add new features and bring FTL out of the pit it was left in.

In short, FTL was and is the only standard that specifically targets latency as its number-one concern. Neither SRT, RIST, nor WISH can claim that. Furthermore, I need to ask a question: what is the ongoing cost of keeping this code in OBS? ftl-sdk isn't what I'd call a fast-moving target, and most of it hasn't changed in a long time.
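To make the "modify a few bits of each RTP packet in transit" claim above concrete, here is a minimal sketch of what such a naive relay could look like. This is an illustration under assumptions, not ftl-sdk code: the port, downstream gateway address, and SSRC value are all hypothetical, and a real ingest would also authenticate the source via Charon and apply SRTP where mandated.

```python
import socket
import struct

INGEST_PORT = 8004                 # hypothetical FTL media port
GATEWAY_ADDR = ("10.0.0.2", 5004)  # hypothetical downstream WebRTC gateway
NEW_SSRC = 0x1234ABCD              # SSRC the gateway expects (hypothetical)

def rewrite_ssrc(packet: bytes, ssrc: int) -> bytes:
    """Overwrite the SSRC field (bytes 8-11 of the fixed RTP header)."""
    if len(packet) < 12:           # shorter than an RTP fixed header
        return packet
    return packet[:8] + struct.pack("!I", ssrc) + packet[12:]

def relay() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", INGEST_PORT))
    while True:
        packet, _src = sock.recvfrom(2048)
        # O(1) work per packet: no demuxing, no re-encoding, no buffering.
        sock.sendto(rewrite_ssrc(packet, NEW_SSRC), GATEWAY_ADDR)

if __name__ == "__main__":
    relay()
```

The point to notice is the absence of any reassembly or transcode step: apart from the SSRC, the packet that leaves OBS is byte-for-byte the packet the gateway receives.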
-
Howdy all! Once again, I appreciate your time and the opportunity to discuss this in an open forum. Happy to answer some of your questions so that hopefully you can understand our history of decision-making and, ultimately, our perspective w.r.t. FTL. The answers I provide are from the perspective of janus-ftl-plugin, which is the technology currently powering Glimesh.tv.

The Glimesh.tv project started around July with the demise of Mixer. @clone1018, as the founder of Glimesh, can probably provide more context on the relationship between Glimesh and the end of Mixer - but a substantial part of the motivation was to provide a new option for true low-latency (sub-second, and ideally way lower) streaming, as the Mixer shutdown left a void in this space. At the time, SRT support in OBS had only been released a few months prior, in mid-March. At the time (and arguably still today), SRT in OBS was a nascent technology - with no SRT services represented in services.json, and no broadly available consumer-facing streaming platforms with SRT support that I could find with a quick search. As a new platform with many uphill challenges ahead of us, it made the most sense to leverage an existing protocol that had a proven track record at scale and what seemed at the time to be solid client support. Plus, we knew with confidence from Mixer's example that we would be able to meet the low-latency expectation we set out to achieve.

Knowing what I know now about the perspective of the OBS team and the alternatives available, what would I choose today? My view is that a lot of these new protocols present substantial risk. I don't have a whole ton of technical concerns - at the core of the issue, I just need to get a bunch of packets from OBS to my service as quickly as I can. I'm not super nitpicky about how that happens, though niceties like NACK/retransmit and forward error correction are probably best for a good experience. My biggest concerns are about the politics and the industry's shifting focus on "what's next." Right now I see three protocols on the table: SRT, RIST, and some form of WebRTC. SRT is the only choice with support in OBS today, making it our only alternative available right now. With Glimesh going live soon, that leaves us with the uncomfortable choice of either supporting a protocol deprecated on the client side, or being one of the first guinea pigs to implement SRT for a large-scale consumer-facing streaming platform. In general I don't mind being a guinea pig and pushing forward with newer, potentially better technologies - but when I am shipping a product to a customer (a non-technical consumer, no less), a solid, proven foundation to build that experience on is the first priority, and currently that foundation is FTL.

Long term, maybe the answer is SRT/RIST/WebRTC/etc. - but I feel the best approach to reach that future is a gradual transition to these new technologies, migrating off of FTL when we can say with confidence that they meet or exceed the existing bar for a good customer experience. Betting entirely on SRT now would open us up to the possibility that the industry decides to move in a different direction later on - we could spend a few months adding SRT support only to discover that next year it's being deprecated in favor of RIST or WebRTC. With FTL as our backup, we have the ability to experiment with the nascent protocols without being burned if they don't work out - then, once we have some indication that a new protocol has met our expectations and gained enough traction in industry, we can fully commit and pull the plug on the old stuff.

I hope that answers the questions you had; please let me know if there are any areas I can clarify! Thanks once again for your time and the opportunity to discuss this. I know that dealing with issues like this is burdensome in both time and mental energy, so I really appreciate it!
-
I'd like to give a real-world example that clearly validates WebRTC's viability as a more widely used standard than FTL: Google Stadia uses WebRTC for delivering real-time audio/video in production today. I do not think any kind of consumer live-streaming need exceeds the strict requirements of gaming on a remote machine. Signaling is a concern for many with WebRTC, but with a WebRTC stack in OBS it shouldn't be a big deal to swap one signaling method for another, or to support multiple, whether temporarily or in general (for instance, one based on SDP and another on ORTC-style types in JSON). The WHIP draft should be more than enough to get things going. With that in mind, and given that WebRTC is ultimately used with an SFU to deliver content to viewers, WebRTC will be present in the stack regardless; keeping FTL therefore forces people to support two protocols and build bridges between them, instead of supporting just one and using off-the-shelf SFUs with little to no modification.
-
This is a very long discussion already; I just wanted to make a few bullet points:
-
So, I'm going to sum up things as I read them. As of right now, there are multiple solutions that aren't FTL that can get sub-second latency, and some might do a better job of it. However, for something like obs-webrtc, I'm still not seeing a specification that can handle the authentication step the way you have it in RTMP and in FTL. Maybe I've overlooked it. I suppose you could implement a 401 HTTP authentication step, or add something to the SDP handshake in WHIP, but that doesn't appear to exist as implemented or defined right now. No one is disagreeing that WebRTC does in fact work for the last mile, but to actually get low latency from the streamer to the viewer, you need to emit a packet stream very close to what WebRTC will take (SRTP wrapping being the exception). Otherwise you have to introduce a transcoding step, and to transcode you usually need a full keyframe interval. That essentially makes SRT and other non-RTP wire protocols a non-starter unless/until WebRTC can support a non-RTP-based stream. I will note that obs-webrtc will likely suffer the same sort of packet dropouts FTL does, since it works on basically the same mechanism of transmitting RTP streams that FTL does.
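To illustrate the "401 HTTP authentication step" suggestion above, here is a minimal sketch of what an authenticated WHIP-style publish could look like, assuming a hypothetical endpoint and a bearer token standing in for a stream key (the WHIP draft leaves room for standard HTTP authorization of this kind):

```python
import urllib.request

WHIP_ENDPOINT = "https://ingest.example.com/whip"  # hypothetical endpoint
STREAM_KEY = "hypothetical-stream-key"             # stands in for a stream key

def whip_publish(sdp_offer: str) -> tuple[str, str]:
    """POST an SDP offer; return (sdp_answer, resource_url)."""
    req = urllib.request.Request(
        WHIP_ENDPOINT,
        data=sdp_offer.encode("utf-8"),
        headers={
            "Content-Type": "application/sdp",
            # The authentication step: a bad key would surface as a
            # 401 response (raised by urlopen as an HTTPError).
            "Authorization": f"Bearer {STREAM_KEY}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # expect 201 Created
        sdp_answer = resp.read().decode("utf-8")
        resource_url = resp.headers.get("Location", "")  # used for teardown
    return sdp_answer, resource_url
```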
-
Er, one thing I do need to add: I'm not entirely sure you could easily route WHIP across a backend infrastructure. While FTL was intended to use SRTP, it was only in an authentication role (ultimately, FTL used source-based identification). WHIP uses DTLS, which would require an additional decrypt stage before routing from the ingest daemon to a Janus server, since in any real-world deployment those two would almost certainly be separate machines, as they were at Beam/Mixer. I'm not sure how much of an impact on latency that would have, but I can see it making the implementation of an ingest daemon more complicated.
-
@NCommander, as shown in my previous message, I think a lot of your statements and concerns could be addressed by a dive into the documentation / specification drafts / implementations. I want to keep this thread short, so I remain at your disposal to discuss your concerns off-thread, in as much detail as you would like, anytime you see fit. Please note that the Millicast platform, which handles up to 2 million concurrent viewers for a given single source, uses WHIP, with the usual HTTP token for authentication and access control. Tencent LEB (Live Event Broadcast), whose volume is two orders of magnitude higher, also uses WHIP for ingest. A few bullet points again to illustrate my point:

Happy hacking.

(1) "The key strength issue could be resolved by going to 2048-bit RSA keys or more, but that would delay call setup by several additional seconds. Instead of changing the RSA key size, Chrome 52 implements ECDSA keys (Elliptic Curve Digital Signature Algorithm) for use in certificates. These are as strong as 3072-bit RSA keys, but several thousand times faster: call setup overhead with ECDSA is just a few milliseconds."
-
One thing on the plus side for FTL versus WebRTC is that it can use our OBS encoders, including hardware-accelerated ones, e.g. NVENC, QuickSync, AMF. That doesn't seem to be the case with obs-webrtc (Dr Alex's fork of OBS):
Regarding (1), I found this project, though: https://github.com/sonysuqin/WebRTCOBSEncoder ; it'd be interesting to have the same thing for obs-webrtc.
-
@pkviet This is an implementation detail, not a protocol discussion anymore. W.r.t. hardware acceleration, in OBS:
Why AV1 (or VP9 profile 2)? Because of its support in WebRTC for HDR, up to 12-bit, and 4:4:4 chroma. The risk of using the same approach as the Korean one is that it cuts the RTP feedback loop, losing NACK, FEC, and all the other adaptability and resilience mechanisms in the process. It would be better to inject the encoder into the media engine.
-
https://github.com/sonysuqin/WebRTCOBSEncoder
-
So, after looking through the posts, I have a question about what issues can arise from how WebRTC works in browsers, and whether these could be potential (if not already existing) issues with some of the WebRTC implementations of OBS that exist as forks.
-
Hi all, I have greatly appreciated the conversation that has taken place here over the last several days. I have learned a lot! I'd like to summarize my current understanding of the situation and my thoughts on how we might proceed regarding this issue.

First is the question of what FTL brings to the table that is unique relative to other, more widely-used protocols. I don't actually think the answer is that it can provide lower latency than the alternatives, as it's clear WebRTC can achieve similar latency by disabling reliability features. Instead, it seems to me that the main appeal of FTL is its simplicity, particularly on the side of the service provider. WebRTC can do lots of things, and as such, it brings a lot of baggage. FTL, on the other hand, was developed for a specific use case (multiple simultaneous ingests of ad-hoc user-generated live video content), and as such it is able to remove a lot of WebRTC's cruft.

FTL's downside (and the OBS Project's main objection to it) is that, in pursuit of absolute low latency, FTL has been designed without mechanisms that would improve stream stability for users streaming under poor network conditions. This has historically led to an increased burden of support placed on OBS volunteers, and more generally leads to poor user experiences when FTL's network requirements are not properly communicated or detected.

In my opinion, the advantage of FTL's simplicity on the part of the service provider does not matter a whole lot to users in practice. My impression is that users don't generally care what protocol they are using; they just want their stream to work as advertised. FTL correctly advertises extremely low latency (especially in the marketing materials of services that leverage it), but this advertisement frequently lacks the caveat about the necessary network reliability.

Here are my current thoughts on the path forward:
A few thoughts on things that can be done that should help all parties during this time:
I welcome further thoughts and discussion about this, including from other members of the OBS Project besides myself.
-
Hello everybody! Although this is already a long thread, I would like to jump into this discussion, highlight some bullet points for SRT, and raise some interest in a native SRT integration into OBS, rather than using the ffmpeg SRT integration.
Currently, using SRT in OBS is still a bit clumsy, since it's ffmpeg-based and all parameters have to be typed into a command line (see the sketch after this comment). A native SRT integration into OBS could bring a lot of benefits, not just ease of use.
If there is interest in learning more details about these features, I can organise a webinar about SRT for OBS developers, explain these features in a bit more detail, and also answer questions. With best regards,
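To make the "typed into a command line" point above concrete, here is roughly what the current ffmpeg-based workflow asks of users: every option is packed into the stream URL as query parameters. The host and values below are hypothetical, and the parameter names follow the libsrt option set as exposed through ffmpeg (note that latency there is commonly specified in microseconds):

```
srt://ingest.example.com:9000?streamid=publish/mystream&passphrase=s3cretpass&latency=200000
```

A native integration could surface these as proper UI fields with validation instead.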
-
Today is March 1st, which means we are now putting a hold on accepting new service submissions from FTL-based services and on investing time in FTL output maintenance. We will continue to monitor the status of the FTL protocol in the context of industry usage and demand, and will continue working toward adding WebRTC output support in OBS as the next major supported protocol.
-
@dodgepong We are also using FTL but are willing to move to WebRTC output; we just hope there will be a grace period where FTL and WebRTC output are both available. Is there an active development branch somewhere to test?
-
RE: OBS WebRTC support, do we have any more details about this? Is there an issue to track it? Are you planning generic WebRTC support or something like WHIP?
-
https://lightspeed.tv is a new service that uses FTL, so it is still being used by some new platforms.
-
Checking lightspeed.tv, they appear to have maybe a few dozen users, and I'm not really concerned about small services. Our stance hasn't changed: we strongly discourage the adoption and support of FTL at this time, and we will not be accepting new FTL-based services. I've updated the main post to clarify the timetable a bit. The developer on our team who was working toward WebRTC output has been working on other projects that took precedence over this work on OBS, and we are currently looking for someone to take over the work and continue it to completion. So, at this time, we no longer have an ETA on when WebRTC support will be available in OBS. Anyone interested in assisting with this work can certainly reach out to us and we can coordinate on it. You can comment on this discussion, reach out via our Discord, or email me directly: [email protected]
-
For those monitoring this thread for news about WebRTC support, here's a prototype/playground for how we're considering going about the process of adding WebRTC support to OBS: #7192
-
Is the opening post supposed to contain a list of currently supported streaming protocols? It doesn't seem up to date, because https://obsproject.com/wiki/Streaming-With-SRT-Or-RIST-Protocols says RIST is supported while this post says it isn't.
-
Now that WebRTC has been merged, we are moving forward with the proposed plan for the removal of FTL in OBS. While we understand this step took a lot longer than originally estimated, it marks the last requirement. Removal will be considered in future releases of OBS, as the timeline suggests. Current estimates for a public release with WebRTC available are later this year.
-
Per the opening post:
At the current release cadence, that will likely mean you have until Q1 (maybe Q2) of 2024.
-
As a note, our removal timeline may be accelerated, as issues such as #9105 prevent us from building and would require either workarounds or custom patches to fix, neither of which is appealing to us. No decision has been made at this time, but those still relying on FTL in the remaining grace period are invited to comment with any challenges or roadblocks.
-
#10019 is now merged, so this is complete.
-
Overview
As previously mentioned in #3816, #3834, and #4018, the OBS team has internally discussed deprecating FTL and eventually removing it from OBS. I'd like this discussion to serve as a means of communicating intent and setting expectations, helping us see whether our timing expectations are realistic, evaluate which alternatives the community would most like to see put in place, and hear out any reasons we might want to reconsider the deprecation of FTL support.
Rationale
We would like to remove FTL support from OBS for the following reasons:
Timetable
At present, our plans and timetable are as follows:
- We plan to deprecate FTL on March 1, 2021. Until March 1, 2021, we plan to continue to allow services to add entries using FTL (`"output": "ftl_output",`) to the `plugins/rtmp-services/data/services.json` file via the usual pull request process, though we highly discourage choosing FTL at this point. Starting March 1, 2021, we will no longer accept new additions to our services list for entries that use FTL. (A hypothetical example of such an entry is sketched after this list.)
- To provide an alternative to current FTL users, we will begin work on implementing WebRTC support in OBS. This currently does not have a set delivery date. Current work on this has progressed more slowly than we would like, as the individuals who were working on it have been pulled toward other projects/jobs outside OBS.
- After 9 months of WebRTC support being available in OBS, we plan to remove FTL support (all code using FTL and all services entries for FTL ingests) from the OBS codebase, meaning the first major or minor OBS release that comes 9 months after the WebRTC release will not support FTL.
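For reference, a service entry in `services.json` looks roughly like the following. This is a hypothetical entry with field names modeled on existing ones in that file; the `"output": "ftl_output"` key is what marks a service as FTL-based:

```json
{
  "name": "Example FTL Service",
  "servers": [
    {
      "name": "US East",
      "url": "ingest-us-east.example.com"
    }
  ],
  "recommended": {
    "keyint": 2,
    "output": "ftl_output",
    "max audio bitrate": 160,
    "max video bitrate": 8000
  }
}
```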
Alternatives to FTL
The primary advantage of FTL that advocates point to is its low latency (often less than a second screen-to-screen) compared to RTMP (often 2-3 seconds or more screen-to-screen).
Currently Supported by OBS
Currently, OBS supports the following output protocols:
Of these options, the protocol that most closely meets the needs of people wishing to stream at low latency is SRT, which is our current recommendation for services currently using FTL.
Under Consideration for OBS Support
WebRTC
WebRTC support is not currently in OBS. While support could be implemented, the main issue is that signaling methods between the broadcaster and the server are as varied as the server implementations themselves. Ideally, we could implement pure WebRTC output and allow services to provide SDP signaling plugins that let them communicate with OBS using whatever system they want. All that said, none of this has progressed beyond the idea phase for official inclusion in OBS.
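As a sketch of that division of labor, OBS would own the WebRTC media stack while a service-provided plugin owned only the offer/answer exchange. The interface below is entirely hypothetical and for illustration only; the real OBS plugin API is C, and none of these names exist in it:

```python
from abc import ABC, abstractmethod

class SignalingPlugin(ABC):
    """Hypothetical service-provided signaling plugin; OBS core
    would own the WebRTC media stack and call into this."""

    @abstractmethod
    def start_session(self, sdp_offer: str) -> str:
        """Deliver OBS's SDP offer to the service over whatever
        transport it prefers (WHIP, WebSocket, proprietary) and
        return the service's SDP answer."""

    @abstractmethod
    def stop_session(self) -> None:
        """Tear down the session, e.g. DELETE a WHIP resource."""
```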
WISH
The primary downside to WebRTC is that the implementation depends on the server you are communicating with, such as Janus, Wowza, Millicast, Mediasoup, Evercast, etc.
There have recently been some efforts to create a standardized WebRTC signaling/ingest using HTTPS called WISH. While this project is not complete, it is something we are keeping our eye on for future developments.
More on WISH:
RIST
RIST is also not currently supported by OBS (EDIT: this is no longer true; OBS now supports RIST output as of OBS Studio 27.2), and so far we have not received a great deal of demand for it. That said, the development of the protocol has been done in a much more open manner, and there are many users of the protocol in production. Furthermore, it is RTP/UDP-based, like FTL, so it could be a possible alternative for FTL users.
More on RIST:
Stakeholders
YouNow (@rsiv) currently uses FTL. Glimesh (@haydenmc) appears to have been planning to use FTL. @GRVYDEV maintains Project Lightspeed, which ingests FTL. @NCommander appears to have recently started an effort to standardize FTL development.