Replies: 6 comments 2 replies
-
A bit more input on this and some first observations:
It works absolutely brilliantly if the browser sends VP8 and KMS does the transcoding: I can consume the video with ffplay, GStreamer or VLC and it rocks. It also kind of works if the H.264 from the browser is just forwarded, but then I'm missing the intra frames... The publisher session has been running for a long time, there are no more resolution changes, it is stable at 1280x720, no errors. Now I connect with an RTSP client and get the video as it comes, and I can observe the result on the client: gray soup on the screen. If I wave over the camera everything recovers, but I don't always have the camera in front of my nose. For a long while, sometimes more than a minute, no intra frame appears; one only comes whenever the browser thinks it is time to send it... So I would need to force KMS to request an intra frame from upstream (the publisher) whenever an RTSP client joins... Hmmm....
-
The good thing: it is not a problem with my target devices (drones, ANAFI, DJI), just with the browsers. I can literally "see" the intra frames in the WebRTC stats of the publisher in Chrome: the interval seems to be 100 s. If I start the RTSP client session between two of those keyframes, I just get grey artefacts and garbage on the screen, and the H.264 delta encoding can be observed perfectly. So feel free to ignore this; it is an issue, but not a big one anymore. Maybe once the browsers expose the "generateKeyFrame" API it will no longer be a problem.
-
You're referring to the fact that, when you're using WebRTC to share video between two browsers, the receiver can ask the publisher to publish an IDR frame in order to start rendering the video immediately. This is possible because browsers allocate a dedicated video encoder for each WebRTC connection, and can control the encoding process by using feedback from the connection: tuning bitrate in order to fill the available bandwidth, and inserting I-frames whenever they want. Media servers and SFUs normally don't allocate a video encoder for each connection, since that would consume too much CPU; instead they use a shared video encoder, or don't re-encode video at all (like rtsp-simple-server does). The drawback of this approach is that the encoder can't be controlled dynamically. To solve the issue, you have to tune the encoder (in this case the one on the publisher) to generate IDR frames at a fixed interval. Furthermore, in the case of H.264, you have to make sure that the encoder inserts SPS and PPS NALUs before each IDR frame. These are needed to parse the stream, but are not always inserted before IDRs, in order to save bandwidth. rtsp-simple-server does this automatically, and WebRTC support is in beta here #1242
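To illustrate the SPS/PPS point: a decoder joining mid-stream cannot use an IDR frame until it has seen the parameter sets. A minimal Python sketch of the caching-and-re-insertion idea (the class name and the assumption that NAL units arrive already split out of the stream are mine; this is not rtsp-simple-server's actual code):

```python
# NAL unit types from the H.264 spec (lower 5 bits of the first byte).
NAL_IDR, NAL_SPS, NAL_PPS = 5, 7, 8

class ParameterSetInjector:
    """Cache the most recent SPS/PPS and emit them again before every IDR,
    so that a client joining mid-stream can decode from the next keyframe."""

    def __init__(self):
        self.sps = None
        self.pps = None

    def process(self, nalu: bytes) -> list:
        """Take one incoming NAL unit, return the NAL units to forward."""
        nal_type = nalu[0] & 0x1F
        if nal_type == NAL_SPS:
            self.sps = nalu
        elif nal_type == NAL_PPS:
            self.pps = nalu
        elif nal_type == NAL_IDR and self.sps and self.pps:
            # Prepend the cached parameter sets; a real implementation would
            # avoid duplicating them if the publisher already sent them
            # immediately before this IDR.
            return [self.sps, self.pps, nalu]
        return [nalu]
```

Feeding it an SPS (first byte e.g. 0x67), a PPS (0x68) and then an IDR slice (0x65) yields the IDR prefixed by both cached parameter sets.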
-
Thanks for your reply. Well, I never noticed this, because I was always working purely with WebRTC, across a lot of clients and environments. I know about the FIR and PLI mechanisms; however, my particular solution cannot benefit from them, because I have no means to "trigger" the browser to push an intra frame. If the publisher connection (WebRTC from the browser to my media server) is stable, nobody needs to request intra frames, so I depend on the browser's default interval (which seems to be 100 s, at least in Chrome) or on situations where the resolution changes due to network congestion. The situation is different if my media server is forced to transcode, e.g. when the publisher delivers VP8 and the connection to rtsp-simple-server requires H.264. In that case the KMS implementation fires intra frames far more often, so the garbage stays on screen for only about 2 seconds or so.
Yes, not that easy if you have no means to alter the browser implementation :) Anyway, this problem is not really a problem for me, since my main use case is drones. I have implementations for doing WebRTC with
So not a big deal. I was wondering whether it would be sufficient, and whether it would work, if rtsp-simple-server retained the last received SPS/PPS (and maybe some other information) and pushed them into each new RTSP client session, so that these "no intra frame" periods could be bridged without too much annoyance... I have looked over your WebRTC support and seen that you currently only support H.264 and WebRTC on the stream-consuming side. This is OK, but it doesn't cover my use cases. I was closing a gap in my WebRTC server landscape: to have a means to consume published WebRTC streams with things like RTSP, RTMP and HLS. For this your rtsp-simple-server is really a nugget, and I'm glad I was able to find it on the web. Thanks
-
I know, but I have no such means, neither in the Kurento Media Server nor in my browser implementations. There is simply no API which would allow me to send RTCP packets :) I would have to patch KMS to achieve this.
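For reference, the RTCP packet in question is tiny: per RFC 4585, a Picture Loss Indication is just 12 bytes. A hedged sketch of what such a patch would have to emit (the SSRC values in the usage are placeholders):

```python
import struct

def build_pli(sender_ssrc: int, media_ssrc: int) -> bytes:
    """Build an RTCP Picture Loss Indication (RFC 4585, section 6.3.1).

    First byte packs V=2, P=0, FMT=1 -> 0x81; payload type is 206
    (payload-specific feedback); the length field is 2 (three 32-bit
    words minus one). PLI carries no FCI, just the two SSRCs.
    """
    return struct.pack("!BBHII", 0x81, 206, 2, sender_ssrc, media_ssrc)
```

Sending such a packet over the RTCP channel toward the publisher is exactly how PLI-capable receivers request a fresh keyframe.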
-
Thanks for this discussion. It is hard to find people who are able and willing to discuss things like this, nearly in real time. Have a nice weekend
-
I'm encountering a weird but explainable problem while attempting to proxy a browser-provided WebRTC H.264 stream (no transcoding) to RTSP. On the client side the image does not update for a long time, most likely because of missing intra frames. If I run the media server session with VP8 from the client, letting the media server do the transcoding, then obviously the frequency of intra frames is sufficient and the image updates quickly.
I currently have no idea how to make the source send an intra frame whenever an RTSP client joins... Any ideas?