Align Streaming Protocol Capabilities with Media Capabilities? #256

Open · tidoust opened this issue Mar 1, 2021 · 1 comment

@tidoust (Member) commented Mar 1, 2021

Reviewing #225, I wondered how the streaming protocol capabilities exposed in a streaming-capabilities-response map onto the capabilities defined by the AudioConfiguration and VideoConfiguration dictionaries in the Media Capabilities API.

For Video:

  • The Open Screen Protocol has a max-pixels-per-second and a supports-rotation field, neither of which matches any property in the Media Capabilities API. When are these fields useful? If they are, should they be considered in the Media Capabilities API as well? If not, could they be dropped?
  • On top of the HDR-related properties that will be added to the Open Screen Protocol, Media Capabilities also has scalabilityMode, which targets identifiers defined in WebRTC-SVC. Would that be worth adding to the Open Screen Protocol? (See the sketch after this list.)
  • Media Capabilities also defines hasAlphaChannel, but that does not strike me as particularly useful in a remote streaming scenario.
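
For reference, this is roughly how those fields surface to a page today through the Media Capabilities API (a sketch; the codec string and all values are arbitrary examples):

```js
// Sketch of a Media Capabilities decoding query covering the video fields
// discussed above; all values are arbitrary examples.
const info = await navigator.mediaCapabilities.decodingInfo({
  type: "media-source",
  video: {
    contentType: 'video/webm; codecs="vp09.00.10.08"', // VP9, profile 0
    width: 1920,
    height: 1080,
    bitrate: 4000000, // bits per second
    framerate: 30,    // note: no max-pixels-per-second or rotation member
    hasAlphaChannel: false,
  },
});
// scalabilityMode takes WebRTC-SVC identifiers such as "L1T3" (one spatial
// layer, three temporal layers) and applies to "webrtc" encoding
// configurations queried through encodingInfo().
console.log(info.supported, info.smooth, info.powerEfficient);
```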

For Audio, the Media Capabilities API also has the following, both sketched below:

  • samplerate. The Open Screen Protocol has max-frames-per-second for video capabilities, but no equivalent for audio.
  • spatialRendering.

Note that there is an ongoing discussion on these audio configuration capabilities in the Media Capabilities repo w3c/media-capabilities#160.
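
For comparison, the audio side of such a query might look like this (again a sketch; values are arbitrary):

```js
// Sketch of the corresponding audio decoding query; values are arbitrary.
const audioInfo = await navigator.mediaCapabilities.decodingInfo({
  type: "media-source",
  audio: {
    contentType: 'audio/webm; codecs="opus"',
    channels: "2",      // a DOMString in the current spec
    bitrate: 128000,    // bits per second
    samplerate: 48000,  // Hz; no Open Screen Protocol equivalent today
    spatialRendering: false,
  },
});
console.log(audioInfo.supported, audioInfo.smooth, audioInfo.powerEfficient);
```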

@markafoltz (Contributor) commented:

For the streaming protocol capabilities, I would have to do some research to find out what the use cases are for max-pixels-per-second and supports-rotation.

In general, if there are properties of the video decoder that are not captured in the "profile" part of the codec string and the sender needs to know them to figure out how to encode the video, then they should be reported as streaming capabilities by the receiver.
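
As a concrete illustration (the codec string here is an arbitrary example): an H.264 codec string pins down profile and level, which bound decoder throughput, but says nothing about properties like rotation.

```js
// Illustration only. "avc1.640028" selects H.264 High profile (0x64) at
// level 4.0 (0x28 = 40). Level 4.0 caps decoding at 245,760 macroblocks/s
// (about 62.9 million luma samples/s), so max-pixels-per-second may be
// partly redundant with the codec string, whereas supports-rotation has
// no codec-string equivalent and would need to be reported separately.
const contentType = 'video/mp4; codecs="avc1.640028"';
```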

For the additional capabilities reported through Media Capabilities, we'd have to consider them on a case-by-case basis. Audio samplerate probably makes sense, but hasAlphaChannel may not.

If we decide to open up an API to script to allow senders to query the MediaCapabilities of a remote playback device, then that would also impact what capabilities need to be reported through the protocol.
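
Purely to illustrate that last point: no such API exists today, but if senders could run Media Capabilities queries against a remote playback device, every dictionary member the query accepts would need a counterpart in the protocol. HTMLMediaElement.remote is real (Remote Playback API), but the decodingInfo() method on it below is invented for illustration only.

```js
// HYPOTHETICAL: RemotePlayback has no decodingInfo() method today. This
// invented shape only shows what the protocol would then have to carry.
const video = document.querySelector("video");
const remoteInfo = await video.remote.decodingInfo({
  type: "media-source",
  video: {
    contentType: 'video/webm; codecs="vp09.00.10.08"',
    width: 1920,
    height: 1080,
    bitrate: 4000000,
    framerate: 30,
  },
});
```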

@markafoltz added the v2 label on Sep 11, 2023