Specify start() and stop() in AudioRenderCapacity #2558
base: main
Conversation
@padenot Friendly ping!
1. If {{[[is running]]}} is <code>false</code>, abort this algorithm.
2. Create |E|, which is a new instance of {{AudioRenderCapacityEvent}},
   with computed performance metrics.
We might need something that stores the load values on the context in a private slot (e.g. a list) and then performs the average/peak/underrun calculation, resets the list, fires the event, etc., so it's clear what's going on.
Besides, do we want this to be clocked off of the rendering clock or the main thread?
I can see things going both ways:
- after a delay of [[update interval]] in the audio context clock domain, the rendering algorithm queues a task with all the data to the main thread, continuously, while it's running. No event is fired when not running (e.g. context suspended).
- after a delay of [[update interval]] in the system clock domain, the main thread gathers the performance data, computes and fires the event. If the `AudioContext` is suspended, we can decide whether or not to fire the event.
I'd lean towards the second solution, not firing the event when suspended. I think it's more useful: even if underrunning horribly, authors can know.
But what happens if the graph is so big that not a single rendering quantum was completed in the interval? That's easy to simulate, and we should make it report 100% load or something like that (some software reports > 100%; what that means is also well-defined).
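The private-slot bookkeeping suggested above could be sketched roughly as follows. This is illustrative only: `computeMetrics` and the field names are not from the spec, and the empty-list branch reflects the "no quantum completed in the interval, report 100% load" idea from the previous paragraph.

```javascript
// Hypothetical sketch: the context accumulates one load value per rendered
// quantum in a list; every [[update interval]] the main thread computes the
// metrics from that list, resets it, and fires the event.
function computeMetrics(loads) {
  if (loads.length === 0) {
    // No quantum completed in the interval: report full load.
    return { averageLoad: 1.0, peakLoad: 1.0, underrunRatio: 1.0 };
  }
  const sum = loads.reduce((a, b) => a + b, 0);
  // A quantum whose load reaches 1.0 missed its deadline (an underrun).
  const underruns = loads.filter((l) => l >= 1.0).length;
  return {
    averageLoad: sum / loads.length,
    peakLoad: Math.max(...loads),
    underrunRatio: underruns / loads.length,
  };
}
```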
So your proposal is to expand "computed performance metrics" into more detailed steps, right?
+1 on the second option.
For your last question, the API doesn't really specify how the renderer can recover from such a problem, so we might see different behavior depending on the implementation. Some implementations will drop subsequent system audio callbacks until the renderer becomes responsive, while others will continue to process subsequent callbacks and accumulate more buffer underruns.
All that aside, 1.0 (100%) is practical. I am not sure what users can do with a number beyond 1.0.
Yes, generally I was asking for more details on the specifics of the computation, so it's clear what one should implement.
I don't know what we should do for the "above 100%" case, so maybe explicitly cap to 100% and say that 100% means it's at or over capacity (regardless of the actual rendering time), like you say.
In Firefox, we keep plowing through as fast as we can; there are audio glitches, but otherwise it proceeds normally, and `currentTime` increases as buffers finish being rendered. In this case, for example, if you get 2.0 (meaning the computations took twice as long as the budget), it simply means that you should cut the workload's duration by at least half to have any chance of glitch-free audio. It might or might not be useful.
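As a rough illustration of the capping question, here is a hypothetical helper (not spec text): the raw load is the render time divided by the time budget for one quantum; capping clamps it to 1.0 as proposed above, while the uncapped value tells an author by what factor the workload needs to shrink.

```javascript
// Hypothetical helper, not part of the spec. `budget` is the wall-clock time
// available to render one quantum (e.g. 128 / sampleRate seconds).
function renderLoad(renderTime, budget, cap = true) {
  const raw = renderTime / budget;
  // Capped, 1.0 means "at or over capacity"; uncapped, a value like 2.0
  // says the workload took twice the available budget.
  return cap ? Math.min(raw, 1.0) : raw;
}
```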
This algorithm can be invoked by the {{AudioRenderCapacity/start}} method
to report the performance metric repeatedly.

1. If {{[[is running]]}} is <code>false</code>, abort this algorithm.
I think if `is running` is only checked here, it could result in multiple Metric Reporting Loops running in parallel. I think the following could cause two parallel loops:

```js
start();
stop();
start();
```

Since `stop()` only sets the `is running` flag to `false` but doesn't stop the loop, both loops started by calling `start()` could be kept alive. By the time the first loop checks `is running`, it may be `true` again as a result of calling `start()` a second time.
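The hazard, and one possible fix, can be modeled in a small synchronous sketch. Everything below is illustrative, not spec text: the manual `pending` queue stands in for waiting [[update interval]] between steps, and the generation token is just one way to make a stale loop exit.

```javascript
const pending = []; // manually-driven task queue standing in for timers

let generation = 0;
let running = false;

function start() {
  running = true;
  generation += 1; // token identifying this invocation of the loop
  const myGeneration = generation;
  function step() {
    // Checking only `running` is not enough: a stale loop can observe
    // `running` set back to true by a later start(). Comparing its token
    // against the current generation lets the stale loop detect that it
    // has been superseded and exit.
    if (!running || myGeneration !== generation) return;
    pending.push(step); // re-schedule the next reporting step
  }
  pending.push(step);
}

function stop() {
  running = false;
}

// Run one round of scheduled steps and count how many loops re-scheduled
// themselves, i.e. how many Metric Reporting Loops are still alive.
function aliveLoops() {
  const snapshot = pending.splice(0);
  snapshot.forEach((task) => task());
  return pending.length;
}
```

With the generation check, `start(); stop(); start();` leaves a single live loop; without the `myGeneration !== generation` comparison, both loops re-schedule themselves and run in parallel, which is the bug described above.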
Teleconf 5/30/2024: This effort is currently on hold because of other blockers (TAG review, Privacy/Security review).
Fixes #2526.
FYI @padenot - the current draft does not include the proposed-addition markup. I'll add the markup once the PR is reviewed.
Preview | Diff