[media] Create AudioDiscardDurationTracker #2017
base: main
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #2017      +/-   ##
==========================================
- Coverage   59.04%   58.70%   -0.34%
==========================================
  Files        1744     1909     +165
  Lines       80005    94125   +14120
==========================================
+ Hits        47240    55258    +8018
- Misses      32765    38867    +6102
```
Force-pushed from 900559f to 8880943.
Creates a class AudioDiscardDurationTracker to store the discard durations of audio buffers sent to pass-through audio renderers. The stored durations can then be used during media time calculation. This is implemented for the Android pass-through audio renderer. b/313719729
Force-pushed from 8880943 to ddb75db.
```cpp
const AudioSampleInfo audio_sample_info = input_buffer.audio_sample_info();
discard_durations_by_timestamp_[input_buffer.timestamp()] =
    audio_sample_info.discarded_duration_from_front +
    audio_sample_info.discarded_duration_from_back;
```
We still need to treat `discarded_duration_from_front` and `discarded_duration_from_back` differently. When there is an audio track switch, the last audio unit of the first stream usually has a non-zero `discarded_duration_from_back`, and the first audio unit of the second stream has a non-zero `discarded_duration_from_front`. Ideally, we should have only one audio gap. But if we combine `discarded_duration_from_front` and `discarded_duration_from_back`, then no matter when we adjust the media time (at the beginning or at the end), it would introduce an extra gap.
I think we can pause the media time when it reaches the input buffer's timestamp (if `discarded_duration_from_front` is not zero) until the media time exceeds the input's timestamp + `discarded_duration_from_front`. And for `discarded_duration_from_back`, we can move or combine it into the `discarded_duration_from_front` of the next input buffer. What do you think?
I like the approach with `discarded_duration_from_front`. With `discarded_duration_from_back`, ideally I'd like to pause media time from <input timestamp + buffer duration - `discarded_duration_from_back`> to <input timestamp + buffer duration>, but I need to know the length of the buffer. We don't know the length of the buffer until we read the E-AC-3 sync frame; however, it's likely 32 ms (1536 samples at 48 kHz). I'm working to come up with a different solution.
I ended up hardcoding 32 ms and adding a TODO to address it later.
This commit also adds a passthrough-only option to GetSupportedAudioTestFiles(), and refines the AudioDiscardDurationTracker implementation.
```cpp
// AC-3/EAC-3 sync frames with 1536 samples at 48khz have a duration of 32 ms.
// TODO: Determine this value at runtime.
const SbTime kBufferDuration = 32 * kSbTimeMillisecond;
discard_duration_tracker_.CacheMultipleDiscardDurations(input_buffers,
```
Can we move it to `AudioRendererPassthrough::OnDecoderOutput()`? And maybe we can reuse `|frames_per_input_buffer_|`.
That would be ideal; however, the AudioSampleInfo would need to be copied into the DecodedAudio to carry the discard information. I wanted to avoid adding that to the DecodedAudio interface for just this use case. What do you think?
Or rather, just the discard durations would need to be stored.
```cpp
// Copyright 2023 The Cobalt Authors. All Rights Reserved.
```
nit: Now it's 2024.
```cpp
    input_buffer->audio_sample_info().discarded_duration_from_back;

if (discard_duration_from_front + discard_duration_from_back >=
    buffer_duration) {
```
Can we move this check to AudioRendererPassthrough::OnDecoderOutput() and discard the whole buffer if discarded duration is larger than the buffer duration? We can then add a SB_DCHECK() here to ensure buffer duration is larger than discarded duration.
```cpp
// As a lot of time may have passed since the last call to
// AdjustTimeForTotalDiscardDuration(), remove all AudioDiscardInfos that
// have already been passed by the |timestamp|.
while (discard_infos_.size() > 0 &&
```
nit: Usually, we prefer `!discard_infos_.empty()`.
```cpp
SbTime discard_end_timestamp =
    discard_start_timestamp + discard_infos_.front().discard_duration;
if (timestamp >= discard_start_timestamp &&
    timestamp < discard_end_timestamp) {
```
`timestamp < discard_end_timestamp` here is redundant; it's guaranteed by the while loop above.
```cpp
    << ". AudioDiscardDurationTracker::AdjustTimeForTotalDiscardDuration() "
       "requires monotonically increasing timestamps.";
last_received_timestamp_ = timestamp;
```
`discard_start_timestamp` is based on an adjusted timestamp, but the input `timestamp` is not adjusted. It should be adjusted before being compared to `discard_start_timestamp`. And the return value is `discard_start_timestamp - total_discard_duration_`, which applies the adjustment to an already-adjusted timestamp.

For example, if there are two inputs, each with 20 ms of audio and a discarded duration from front of 5 ms, the total played time would be 40 ms (wall time), and `discard_infos_` would be {0 ms, 5 ms}, {15 ms, 5 ms}. During 0-15 ms (wall time), the returned value works as expected. During 15-20 ms (wall time), the returned value would be 10 ms, which is not right; it's expected to freeze the timestamp at 20-25 ms (wall time).