From 689cd74d7718efa7dc70021338c20f2f2d19e983 Mon Sep 17 00:00:00 2001
From: Nero-Hu
Date: Tue, 16 Apr 2024 15:41:38 +0800
Subject: [PATCH 01/13] Android: add 431 release notes en

---
 .../en-US/native/release_android_ng.md | 139 ++++++++++++++++++
 1 file changed, 139 insertions(+)

diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md
index 0dab37d0050..a564a053c5e 100644
--- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md
+++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md
@@ -4,6 +4,145 @@

On Android 14 devices (such as OnePlus 11), screen sharing may not be available when `targetSdkVersion` is set to 34. For example, half of the shared screen may be black. To avoid this issue, Agora recommends setting `targetSdkVersion` to below 34. However, this may cause the screen sharing process to be interrupted when switching between portrait and landscape mode. In this case, a window will pop up on the device asking if you want to start recording the screen. After confirming, you can resume screen sharing.

+## v4.3.1
+
+This version was released on Month x, Day x, 2024.
+
+#### Compatibility changes
+
+To ensure consistent parameter naming, this version renames `channelName` to `channelId` and `optionalUid` to `uid` in `joinChannel [1/2]`. You must update your app's code after upgrading to this version so that your project continues to compile and run correctly.
+
+#### New Features
+
+1. **Speech Driven Avatar**
+
+   The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatars. You can access the facial information through the newly added [`registerFaceInfoObserver`](/api-ref/rtc/android/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) method and [`onFaceInfo`](/api-ref/rtc/android/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) callback.
This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines.
+
+   The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at .
+
+   **Attention:**
+
+   - The Agora SDK extension, MetaKit, simplifies the process of animating avatars with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at .
+   - The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io).
+
+2. **Wide and ultra-wide cameras (Android, iOS)**
+
+   To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) to check the device's focal length capabilities, and then call [`setCameraCapturerConfiguration`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_setcameracapturerconfiguration) and set `cameraFocalLengthType` to one of the supported focal length types, including wide and ultra-wide.
+
+3. **Multi-camera capture (Android)**
+
+   This release introduces additional functionalities for Android camera capture:
+
+   1. Support for capturing and publishing video streams from the third and fourth cameras:
+      - New enumerators `VIDEO_SOURCE_CAMERA_THIRD`(11) and `VIDEO_SOURCE_CAMERA_FOURTH`(12) are added to [`VideoSourceType`](/api-ref/rtc/android/API/enum_videosourcetype), specifically for the third and fourth camera sources.
This change allows you to specify up to four camera streams when initiating camera capture by calling [`startCameraCapture`](/api-ref/rtc/android/API/toc_camera_capture#api_irtcengine_startcameracapture). + - New parameters `publishThirdCameraTrack` and `publishFourthCameraTrack` are added to [`ChannelMediaOptions`](/api-ref/rtc/android/API/class_channelmediaoptions). Set these parameters to `true` when joining a channel with [`joinChannel`](/api-ref/rtc/android/API/toc_channel#api_irtcengine_joinchannel2)[2/2] to publish video streams captured from the third and fourth cameras. + 2. Support for specifying cameras by camera ID: + - A new parameter `cameraId` is added to [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration). For devices with multiple cameras, where `cameraDirection` cannot identify or access all available cameras, you can obtain the camera ID through Android's native system APIs and specify the desired camera by calling [`startCameraCapture`](/api-ref/rtc/android/API/toc_camera_capture#api_irtcengine_startcameracapture) with the specific `cameraId`. + - New method [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] supports switching cameras by `cameraId`, allowing apps to dynamically adjust camera usage during runtime based on available cameras. + +4. **Data stream encryption** + + This version adds `datastreamEncryptionEnabled` to [`EncryptionConfig`](/api-ref/rtc/android/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/android/API/toc_network#api_irtcengine_enableencryption). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. + +5. 
**Local Video Rendering** + + This version adds the following members to [`VideoCanvas`](/api-ref/rtc/android/API/class_videocanvas) to support more local rendering capabilities: + + - `surfaceTexture`: Set a native Android `SurfaceTexture` object as the container providing video imagery, then use SDK external methods to perform OpenGL texture rendering. + - `enableAlphaMask`: This member enables the receiving end to initiate alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content. + +6. **Adaptive configuration for low-quality video streams** + + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/android/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: + + - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. + - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. + +7. **Other features** + + - New method [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. + - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. + - New method [`getCallIdEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_getcallidex) is introduced for retrieving call IDs in multi-channel scenarios. + +#### Improvements + +1. 
**Optimization of local video status callbacks**
+
+   This version introduces the following enumerations, allowing you to understand more about the reasons behind changes in local video status through the [`onLocalVideoStateChanged`](/api-ref/rtc/android/API/toc_video_basic#callback_irtcengineeventhandler_onlocalvideostatechanged) callback:
+
+   - `LOCAL_VIDEO_STREAM_REASON_DEVICE_INTERRUPT` (14): Video capture is interrupted due to the camera being occupied by another app or the app moving to the background.
+   - `LOCAL_VIDEO_STREAM_REASON_DEVICE_FATAL_ERROR` (15): Video capture device errors, possibly due to camera equipment failure.
+
+2. **Camera capture improvements**
+
+   Improvements have been made to the video processing mechanism of camera capture, reducing noise, enhancing brightness, and improving color, making the captured images clearer, brighter, and more realistic.
+
+3. **Virtual Background Algorithm Optimization**
+
+   To enhance the accuracy and stability of human segmentation when activating virtual backgrounds against solid colors, this version optimizes the green screen segmentation algorithm:
+
+   - Supports recognition of any solid color background, no longer limited to green screens.
+   - Improves accuracy in recognizing background colors and reduces the background exposure during human segmentation.
+   - After segmentation, the edges of the human figure (especially around the fingers) are more stable, significantly reducing flickering at the edges.
+
+4. **CPU consumption reduction of in-ear monitoring**
+
+   This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1 << 15). For complex audio processing scenarios, you can specify this option to reuse the audio filter after sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption.
+
+5. 
**Other improvements**
+
+   This version also includes the following improvements:
+
+   - Enhanced performance and stability of the local compositing feature, reducing its CPU usage. (Android)
+   - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels.
+   - A new chorus effect `ROOM_ACOUSTICS_CHORUS` is added to enhance the spatial presence of vocals in chorus scenarios. (Android)
+   - In [`RemoteAudioStats`](/api-ref/rtc/android/API/class_remoteaudiostats), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end.
+
+#### Issues fixed
+
+This version fixed the following issues:
+
+- Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player.
+- After joining a channel and calling [`disableAudio`](/api-ref/rtc/android/API/toc_audio_basic#api_irtcengine_disableaudio), audio playback did not immediately stop. (Android)
+- Broadcasters using certain models of devices under speaker mode experienced occasional local audio capture failures when switching the app process to the background and then back to the foreground, causing remote users to not hear the broadcaster's audio. (Android)
+- On devices running Android 8.0, enabling screen sharing occasionally caused the app to crash. (Android)
+- In scenarios using camera capture for local video, when the app was moved to the background and [`disableVideo`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_disablevideo) or [`stopPreview`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_stoppreview)[1/2] was called to stop video capture, camera capture was unexpectedly activated when the app was brought back to the foreground.
(Android) + +#### API Changes + +**Added** + +- The `surfaceTexture` and `enableAlphaMask` members in [`VideoCanvas`](/api-ref/rtc/android/API/class_videocanvas) +- `LOCAL_VIDEO_STREAM_REASON_DEVICE_INTERRUPT` +- `LOCAL_VIDEO_STREAM_REASON_DEVICE_FATAL_ERROR` +- [`registerFaceInfoObserver`](/api-ref/rtc/android/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) +- [`IFaceInfoObserver`](/api-ref/rtc/android/API/class_ifaceinfoobserver) +- [`onFaceInfo`](/api-ref/rtc/android/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) +- [`MediaSourceType`](/api-ref/rtc/android/API/enum_mediasourcetype) adds `SPEECH_DRIVEN_VIDEO_SOURCE` +- [`VideoSourceType`](/api-ref/rtc/android/API/enum_videosourcetype) adds `VIDEO_SOURCE_SPEECH_DRIVEN` +- [`EncryptionConfig`](/api-ref/rtc/android/API/class_encryptionconfig) adds `datastreamEncryptionEnabled` +- `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` +- `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` +- [`RemoteAudioStats`](/api-ref/rtc/android/API/class_remoteaudiostats) adds `e2eDelay` +- `ERR_DATASTREAM_DECRYPTION_FAILED` +- `ROOM_ACOUSTICS_CHORUS` is added, enhancing the spatial presence of vocals in chorus scenarios. 
(Android)
+- [`getCallIdEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_getcallidex)
+- [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex)
+- [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed)
+- [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) (Android, iOS)
+- [`AgoraFocalLengthInfo`](/api-ref/rtc/android/API/class_focallengthinfo) (Android, iOS)
+- [`CAMERA_FOCAL_LENGTH_TYPE`](/api-ref/rtc/android/API/enum_camerafocallengthtype) (Android, iOS)
+- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraFocalLengthType` (Android, iOS)
+- [`VideoSourceType`](/api-ref/rtc/android/API/enum_videosourcetype) adds the following enumerations:
+  - `VIDEO_SOURCE_CAMERA_THIRD`(11)
+  - `VIDEO_SOURCE_CAMERA_FOURTH`(12)
+- [`ChannelMediaOptions`](/api-ref/rtc/android/API/class_channelmediaoptions) adds the following members:
+  - `publishThirdCameraTrack`
+  - `publishFourthCameraTrack`
+- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraId` (Android)
+- [`CAMERA_DIRECTION`](/api-ref/rtc/android/API/enum_cameradirection) adds `CAMERA_EXTRA`(2)
+- [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2]
+- `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER`(1 << 15)

## v4.3.0

From 5abf66e1e6fd6f7f730ef58abe364ffec1ad706c Mon Sep 17 00:00:00 2001
From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com>
Date: Tue, 16 Apr 2024 15:47:14 +0800
Subject: [PATCH 02/13] add

---
 .../en-US/native/release_ios_ng.md | 131 ++++++++++++++++++
 .../en-US/native/release_mac_ng.md | 117 ++++++++++++++++
 2 files changed, 248 insertions(+)

diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md b/markdown/RTC
4.x/release-notes/en-US/native/release_ios_ng.md
index 321a78c98d8..ceef674ae44 100644
--- a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md
+++ b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md
@@ -4,6 +4,137 @@

AirPods Pro does not support the A2DP protocol in communication audio mode, which may lead to connection failure in that mode.

+
+## v4.3.1
+
+This version was released on Month x, Day x, 2024.
+
+
+#### New Features
+
+1. **Speech Driven Avatar**
+
+   The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatars. You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines.
+
+   The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at .
+
+   **Attention:**
+
+   - The Agora SDK extension, MetaKit, simplifies the process of animating avatars with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at .
+   - The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io).
+
+2. **Privacy manifest file**
+
+   To meet Apple's safety compliance requirements for app publication, the SDK now includes a privacy manifest file, `PrivacyInfo.xcprivacy`, detailing the SDK's API calls that access or use user data, along with a description of the types of data collected.
+ + **Note:** If you need to publish an app with SDK versions prior to v4.3.1 to the Apple App Store, you must manually add the `PrivacyInfo.xcprivacy` file to your Xcode project. For more details, see . + +3. **Center stage camera** + + To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects. + + Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html). A list of supported devices can be found in the API documentation at [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html). + +4. **Camera stabilization** + + To improve video stability in mobile filming, low-light environments, and hand-held shooting scenarios, this version introduces a camera stabilization feature. You can activate this feature and select an appropriate stabilization mode by calling [setCameraStabilizationMode](API/api_irtcengine_setcamerastabilizationmode.html), achieving more stable and clearer video footage. + +5. **Wide and ultra-wide cameras** + + To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call [queryCameraFocalLengthCapability](API/api_irtcengine_querycamerafocallengthcapability.html) to check the device's focal length capabilities, and then call [setCameraCapturerConfiguration](API/api_irtcengine_setcameracapturerconfiguration.html) and set `cameraFocalLengthType` to the supported focal length types, including wide and ultra-wide. + +6. 
**Data stream encryption** + + This version adds `datastreamEncryptionEnabled` to [AgoraEncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. + +7. **Local Video Rendering** + + This version adds the following members to [AgoraRtcVideoCanvas](API/class_videocanvas.html) to support more local rendering capabilities: + + - enableAlphaMask: This member enables the receiving end to initiate alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content. + +8. **Adaptive configuration for low-quality video streams** + + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode [2/2\]](API/api_irtcengine_setdualstreammode2.html), the SDK defaults to the following behaviors: + + - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. + - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. + +9. **Other features** + + - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. + - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files. 
+ - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios. + +#### Improvements + +1. **Virtual Background Algorithm Optimization** + + To enhance the accuracy and stability of human segmentation when activating virtual backgrounds against solid colors, this version optimizes the green screen segmentation algorithm: + + - Supports recognition of any solid color background, no longer limited to green screens. + - Improves accuracy in recognizing background colors and reduces the background exposure during human segmentation. + - After segmentation, the edges of the human figure (especially around the fingers) are more stable, significantly reducing flickering at the edges. + +2. **Custom audio capture optimization** + + To enhance the flexibility of custom audio capture, this release deprecates [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) and introduces [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html). Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. + +3. **CPU consumption reduction of in-ear monitoring** + + This release adds an enumerator `AgoraEarMonitoringFilterReusePostProcessingFilter` (1 << 15) in `AgoraEarMonitoringFilterType`. For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. + +4. 
**Other improvements** + + This version also includes the following improvements: + + - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. + - Improved stability in processing video by the raw video frame observer. + - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. + - In [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. + - In [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. + +#### Issues fixed + +This version fixed the following issues: + +- Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player. 
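The `AgoraEarMonitoringFilterReusePostProcessingFilter` (1 << 15) option described in the improvements above is a bit flag intended to be combined with other in-ear monitoring filter options. A minimal plain-Java sketch of how such flags compose, with the constants redefined locally for illustration (only the 1 << 15 value is stated in these notes; the others are assumptions modeled on the SDK's ear monitoring filter enumerators):

```java
public class EarMonitoringFlags {
    // Illustrative flag values; only 1 << 15 is confirmed by the release notes.
    static final int FILTER_NONE = 1 << 0;
    static final int FILTER_BUILT_IN_AUDIO_FILTERS = 1 << 1;
    static final int FILTER_NOISE_SUPPRESSION = 1 << 2;
    static final int FILTER_REUSE_POST_PROCESSING_FILTER = 1 << 15;

    public static void main(String[] args) {
        // Combine options with bitwise OR; test membership with bitwise AND.
        int filters = FILTER_BUILT_IN_AUDIO_FILTERS | FILTER_REUSE_POST_PROCESSING_FILTER;
        System.out.println(FILTER_REUSE_POST_PROCESSING_FILTER);                 // 32768
        System.out.println((filters & FILTER_REUSE_POST_PROCESSING_FILTER) != 0); // true
    }
}
```

Because each option occupies a distinct bit, reusing the sender-side post-processing filter can be toggled independently of the other filter options.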
+ +#### API Changes + +**Added** + +- [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) +- [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html) +- [setCameraStabilizationMode](API/api_irtcengine_setcamerastabilizationmode.html) +- [AgoraCameraStabilizationMode](API/enum_camerastabilizationmode.html) +- The `enableAlphaMask` member in [AgoraRtcVideoCanvas](API/class_videocanvas.html) +- [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) +- [AgoraFaceInfoDelegate](API/class_ifaceinfoobserver.html) +- [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) +- [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` +- [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` +- [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` +- `AgoraEncryptionErrorType` adds the following enumerations: + - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` + - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` +- [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay` +- [AgoraErrorCode](API/enum_errorcodetype.html) adds `AgoraErrorCodeDatastreamDecryptionFailed` +- [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. 
+- [getCallIdEx](API/api_irtcengineex_getcallidex.html)
+- [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html)
+- [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html)
+- [queryCameraFocalLengthCapability](API/api_irtcengine_querycamerafocallengthcapability.html)
+- [AgoraFocalLengthInfo](API/class_focallengthinfo.html)
+- [AgoraFocalLength](API/enum_camerafocallengthtype.html)
+- [AgoraCameraCapturerConfiguration](API/class_cameracapturerconfiguration.html) adds a new member `cameraFocalLengthType`
+- [AgoraEarMonitoringFilterType](API/enum_earmonitoringfiltertype.html) adds a new enumeration `AgoraEarMonitoringFilterReusePostProcessingFilter` (1 << 15)
+- [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html)
+
+**Deprecated**
+
+- [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html)


## v4.3.0

v4.3.0 was released on xx xx, 2024.

diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md
index 7a84f951cbe..aac638c2263 100644
--- a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md
+++ b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md
@@ -1,4 +1,121 @@

+## v4.3.1
+
+This version was released on Month x, Day x, 2024.
+
+
+#### New Features
+
+1. **Speech Driven Avatar**
+
+   The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatars. You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines.
+
+   The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at .
+
+   **Attention:**
+
+   - The Agora SDK extension, MetaKit, simplifies the process of animating avatars with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at .
+   - The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io).
+
+2. **Center stage camera**
+
+   To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects.
+
+   Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html). A list of supported devices can be found in the API documentation at [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html).
+
+3. **Data stream encryption**
+
+   This version adds `datastreamEncryptionEnabled` to [AgoraEncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations.
+
+4. 
**Adaptive configuration for low-quality video streams** + + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode [2/2\]](API/api_irtcengine_setdualstreammode2.html), the SDK defaults to the following behaviors: + + - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. + - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. + +5. **Other features** + + - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. + - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files. + - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios. + +#### Improvements + +1. **Optimization of local video status callbacks** + + To facilitate understanding of the specific reasons for changes in local video status, this version adds the following enumerations to the [localVideoStateChangedOfState](API/callback_irtcengineeventhandler_onlocalvideostatechanged.html) callback's [AgoraLocalVideoStreamReason](API/enum_localvideostreamreason.html) enumeration class: + + - `AgoraLocalVideoStreamReasonScreenCaptureRecoverFromMinimized` (27): The window being captured for screen sharing has recovered from a minimized state. + +2. **Audio device type detection** + + This version adds the `deviceTypeName` member to `AgoraRtcDeviceInfo`, used to identify the type of audio devices, such as built-in, USB, HDMI, etc. + +3. 
**Virtual Background Algorithm Optimization** + + To enhance the accuracy and stability of human segmentation when activating virtual backgrounds against solid colors, this version optimizes the green screen segmentation algorithm: + + - Supports recognition of any solid color background, no longer limited to green screens. + - Improves accuracy in recognizing background colors and reduces the background exposure during human segmentation. + - After segmentation, the edges of the human figure (especially around the fingers) are more stable, significantly reducing flickering at the edges. + +4. **Custom audio capture optimization** + + To enhance the flexibility of custom audio capture, this release deprecates [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) and introduces [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html). Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. + +5. **CPU consumption reduction of in-ear monitoring** + + This release adds an enumerator `AgoraEarMonitoringFilterReusePostProcessingFilter` (1 << 15) in `AgoraEarMonitoringFilterType`. For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. + +6. **Other improvements** + + This version also includes the following improvements: + + - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. 
+ - For macOS 14 and above, optimization of [getScreenCaptureSourcesWithThumbSize](API/api_irtcengine_getscreencapturesources.html) behavior. From this version onward, the method automatically filters out widget windows from the list of available window resources. + - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. + - In [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. + - In [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. + +#### Issues fixed + +This version fixed the following issues: + +- Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player. +- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [stateChanged](API/api_irtcengine_statechanged.html) was triggered multiple times. 
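The new `e2eDelay` field above reports the capture-to-playback latency of audio. As a rough, self-contained sketch of how such a metric can be derived from timestamps — the function name and the shared-clock assumption are illustrative, not the SDK's internal implementation:

```python
def e2e_delay_ms(capture_ts_ms: int, render_ts_ms: int) -> int:
    """Capture-to-playback delay in milliseconds.

    Assumes the sender's capture timestamp and the receiver's render
    timestamp are aligned to a common clock; residual clock skew could
    otherwise make the difference negative, so it is clamped to zero.
    """
    return max(0, render_ts_ms - capture_ts_ms)


# A frame captured at t=1000 ms and rendered remotely at t=1230 ms:
print(e2e_delay_ms(1000, 1230))  # 230
```

In practice the SDK reports this per remote user in the audio statistics; the sketch only shows the arithmetic behind a delay figure.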
+ +#### API Changes + +**Added** + +- [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) +- [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html) +- The following enumerations in `AgoraLocalVideoStreamReason`: + - `AgoraLocalVideoStreamReasonScreenCaptureRecoverFromMinimized` +- [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) +- [AgoraFaceInfoDelegate](API/class_ifaceinfoobserver.html) +- [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) +- [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` +- [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` +- [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` +- `AgoraEncryptionErrorType` adds the following enumerations: + - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` + - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` +- [AgoraRtcDeviceInfo](API/class_agorartcdeviceinfo.html) adds `deviceTypeName` +- [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay` +- [AgoraErrorCode](API/enum_errorcodetype.html) adds `AgoraErrorCodeDatastreamDecryptionFailed` +- [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. 
+- [getCallIdEx](API/api_irtcengineex_getcallidex.html)
+- [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html)
+- [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html)
+- [AgoraEarMonitoringFilterType](API/enum_earmonitoringfiltertype.html) adds a new enumeration `AgoraEarMonitoringFilterBuiltInAudioFilters` (1 << 15)
+- [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html)
+
+**Deprecated**
+
+- [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html)
+
 ## v4.3.0

 v4.3.0 was released on xx xx, 2024.

From a78e23d72a31397297ac26ddd3f1586f8e1642a4 Mon Sep 17 00:00:00 2001
From: Suri539
Date: Tue, 16 Apr 2024 16:28:20 +0800
Subject: [PATCH 03/13] Update release_windows_ng.md

---
 .../en-US/native/release_windows_ng.md        | 108 +++++++++++++++++-
 1 file changed, 107 insertions(+), 1 deletion(-)

diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md
index 102838f5fc2..e3a99a240fa 100644
--- a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md
+++ b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md
@@ -1,3 +1,109 @@
+## v4.3.1
+
+This version was released on xx xx, 2024.
+
+#### New Features
+
+1. **Speech Driven Avatar**
+
+   The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate an avatar. You can access the facial information through the newly added [registerFaceInfoObserver](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. 
+
+   The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size]().
+
+   **Attention:**
+
+   - The Agora SDK extension, MetaKit, simplifies the implementation process of animating an avatar with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at .
+   - The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io).
+
+2. **Data stream encryption**
+
+   This version adds `datastreamEncryptionEnabled` to [EncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). Failures in data stream encryption or decryption can be identified through the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations.
+
+3. **Adaptive configuration for low-quality video streams**
+
+   This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode](API/api_irtcengine_setdualstreammode2.html)[2/2], the SDK defaults to the following behaviors:
+
+   - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution.
+   - The bitrate for the small streams is automatically matched to the video resolution and frame rate, eliminating the need for manual specification.
+
+4. **Other features**
+
+   - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. 
+   - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files.
+   - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios.
+
+#### Improvements
+
+1. **Optimization for game scenario screen sharing (Windows)**
+
+   This version specifically optimizes screen sharing for game scenarios, enhancing performance, stability, and clarity in ultra-high-definition (4K, 60 fps) gameplay, resulting in a clearer, smoother, and more stable gaming experience for players.
+
+2. **Virtual Background Algorithm Optimization**
+
+   To enhance the accuracy and stability of human segmentation when activating virtual backgrounds against solid colors, this version optimizes the green screen segmentation algorithm:
+
+   - Supports recognition of any solid color background, no longer limited to green screens.
+   - Improves accuracy in recognizing background colors and reduces cases where the background shows through during human segmentation.
+   - After segmentation, the edges of the human figure (especially around the fingers) are more stable, significantly reducing flickering at the edges.
+
+3. **CPU consumption reduction of in-ear monitoring**
+
+   This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1 << 15) in `EAR_MONITORING_FILTER_TYPE`. For complex audio processing scenarios, you can specify this option to reuse the audio filters applied during sender-side post-processing for in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, so it is suitable for latency-tolerant scenarios that require low CPU consumption.
+
+4. **Other improvements**
+
+   This version also includes the following improvements:
+
+   - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. 
+   - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels.
+   - In [AUDIO_EFFECT_PRESET](API/enum_audioeffectpreset.html), a new enumeration `ROOM_ACOUSTICS_CHORUS` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios.
+   - In [RemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end.
+
+#### Issues fixed
+
+This version fixed the following issues:
+
+- Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player.
+- In screen sharing scenarios, when the app enabled sound card capture with [enableLoopbackRecording](API/api_irtcengine_enableloopbackrecording.html) to capture audio from the shared screen, the sound card audio was no longer transmitted after a local user manually disabled the local audio capture device, so remote users could not hear the shared screen's audio.
+- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [onAudioDeviceStateChanged](API/callback_irtcengineeventhandler_onaudiodevicestatechanged.html) was triggered multiple times.
+- During interactions, when a local user set the system default playback device to speakers using [setDevice](API/api_iaudiodevicecollection_setdevice.html), there was no sound from the remote end.
+- When sharing an Excel document window, remote users occasionally saw a green screen.
+- On devices using Intel graphics cards, a performance regression occasionally occurred when publishing a small video stream. 
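The `datastreamEncryptionEnabled` option described above extends the existing encryption configuration to cover data streams. The sketch below models that configuration in plain Python — the field names mirror the release notes, but the validation logic is an illustrative assumption, not the SDK's actual behavior:

```python
from dataclasses import dataclass


@dataclass
class EncryptionConfig:
    """Simplified model of the SDK's encryption configuration."""
    encryption_mode: str
    encryption_key: str
    datastream_encryption_enabled: bool = False  # new in v4.3.1


def can_enable(config: EncryptionConfig) -> bool:
    # Illustrative check: data stream encryption shares the media-stream
    # credentials, so enabling it without a key is rejected up front.
    if config.datastream_encryption_enabled and not config.encryption_key:
        return False
    return bool(config.encryption_key)


cfg = EncryptionConfig("AES_128_GCM2", "0123456789abcdef",
                       datastream_encryption_enabled=True)
print(can_enable(cfg))  # True
```

In a real app you would pass the equivalent `EncryptionConfig` to `enableEncryption` (or `enableEncryptionEx` per channel) before joining; failures then surface through the new `ENCRYPTION_ERROR_DATASTREAM_*` enumerations.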
+
+#### API Changes
+
+**Added**
+
+- [registerFaceInfoObserver](API/api_imediaengine_registerfaceinfoobserver.html)
+
+- [IFaceInfoObserver](API/class_ifaceinfoobserver.html)
+
+- [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html)
+
+- [MEDIA_SOURCE_TYPE](API/enum_mediasourcetype.html) adds `SPEECH_DRIVEN_VIDEO_SOURCE`
+
+- [VIDEO_SOURCE_TYPE](API/enum_videosourcetype.html) adds `VIDEO_SOURCE_SPEECH_DRIVEN`
+
+- [EncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled`
+
+- [ENCRYPTION_ERROR_TYPE](API/enum_encryptionerrortype.html) adds the following enumerations:
+  - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE`
+  - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE`
+
+- [RemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay`
+
+- [ERROR_CODE_TYPE](API/enum_errorcodetype.html) adds `ERR_DATASTREAM_DECRYPTION_FAILED`
+
+- [AUDIO_EFFECT_PRESET](API/enum_audioeffectpreset.html) adds `ROOM_ACOUSTICS_CHORUS`, enhancing the spatial presence of vocals in chorus scenarios.
+
+- [getCallIdEx](API/api_irtcengineex_getcallidex.html)
+
+- [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html)
+
+- [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html)
+
+- [EAR_MONITORING_FILTER_TYPE](API/enum_earmonitoringfiltertype.html) adds a new enumeration `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS` (1 << 15)
+
 ## v4.3.0

 v4.3.0 was released on xx xx, 2024.
@@ -50,7 +50,7 @@ This release has optimized the implementation of some functions, involving renam
   - Before v4.3.0, if you call the [disableAudio](API/api_irtcengine_disableaudio.html) method to disable the audio module, audio loopback capturing will not be disabled.
   - As of v4.3.0, if you call the [disableAudio](API/api_irtcengine_disableaudio.html) method to disable the audio module, audio loopback capturing will be disabled as well. 
If you need to enable audio loopback capturing, you need to enable the audio module by calling the [enableAudio](API/api_irtcengine_enableaudio.html) method and then call [enableLoopbackRecording](API/api_irtcengine_enableloopbackrecording.html). - + 5. **Log encryption behavior changes** For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console. From 533fc485c07782fa34caa3537f2a30e0ae99b546 Mon Sep 17 00:00:00 2001 From: Suri539 Date: Tue, 16 Apr 2024 16:34:29 +0800 Subject: [PATCH 04/13] Update release_notes.dita --- dita/RTC-NG/release/release_notes.dita | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/dita/RTC-NG/release/release_notes.dita b/dita/RTC-NG/release/release_notes.dita index a0719c2741b..9990ffa160a 100644 --- a/dita/RTC-NG/release/release_notes.dita +++ b/dita/RTC-NG/release/release_notes.dita @@ -166,14 +166,14 @@
  • (iOS)
  • 中新增 surfaceTextureenableAlphaMask
  • 中新增 enableAlphaMask
  • -
  • 中新增如下枚举: +
  • 中新增如下枚举:
    • LOCAL_VIDEO_STREAM_REASON_DEVICE_INTERRUPT
    • LOCAL_VIDEO_STREAM_REASON_DEVICE_FATAL_ERROR
  • 中新增 datastreamEncryptionEnabled
  • -
  • 中新增如下枚举: +
  • 中新增如下枚举:
    • @@ -198,12 +198,12 @@
    • (Android, iOS)
    • (Android, iOS)
    • 新增成员 cameraFocalLengthType (Android, iOS)
    • -
    • 新增以下枚举: +
    • 新增以下枚举:
      • (11)
      • (12)
    • -
    • 新增以下成员: +
    • 新增以下成员:
      • publishThirdCameraTrack
      • publishFourthCameraTrack
      • From cdffaf2f9c4b92ffd9db6396c40b7321a82c4f95 Mon Sep 17 00:00:00 2001 From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com> Date: Tue, 16 Apr 2024 16:37:13 +0800 Subject: [PATCH 05/13] 1 --- dita/RTC-NG/release/release_notes.dita | 4 ++-- en-US/dita/RTC-NG/release/release_notes.dita | 2 +- .../RTC 4.x/release-notes/en-US/native/release_ios_ng.md | 4 ++-- .../RTC 4.x/release-notes/en-US/native/release_mac_ng.md | 6 +++--- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/dita/RTC-NG/release/release_notes.dita b/dita/RTC-NG/release/release_notes.dita index a0719c2741b..c37ad2ccefa 100644 --- a/dita/RTC-NG/release/release_notes.dita +++ b/dita/RTC-NG/release/release_notes.dita @@ -19,7 +19,7 @@
        1. 隐私清单文件 (iOS)

          为满足 Apple 对于 App 发布的安全合规要求,SDK 自该版本起新增隐私清单文件 PrivacyInfo.xcprivacy,其中包含 SDK 中需要访问或使用用户数据的 API 调用说明和 SDK 采集的数据类型说明。

          - 如果你需要将集成 v4.3.1 之前的 SDK 版本的 App 发布到苹果应用商店,则需要在 Xcode 工程中手动添加 PrivacyInfo.xcprivacy 文件。详见
        2. + 如果你需要将集成 v4.3.1 之前的 SDK 版本的 App 发布到苹果应用商店,则需要在 Xcode 工程中手动添加 PrivacyInfo.xcprivacy 文件。详见
        3. 人像锁定 (iOS, macOS)

          为提升在线会议、秀场、在线教育等场景中的主播演讲效果,该版本新增 方法开启人像锁定功能。该功能可确保主播无论移动与否,始终位于画面中心,以取得良好的演讲效果。

          在开启人像锁定前,建议你先调用 查询当前设备性能是否支持该功能。支持的设备清单可参考 API 文档

          @@ -58,7 +58,7 @@
          • surfaceTexture:设置一个 Android 原生的 SurfaceTexture 对象作为提供视频图像的容器,然后使用 SDK 外部的方法自行实现 OpenGL 中的纹理绘制。
          • -
          • enableAlphaMask:可以通过该成员设置接收端是否开启 alpha 遮罩渲染。alpha 遮罩渲染功能可以创建具有透明效果的图像或提取视频中的人像。
          • +
          • enableAlphaMask:可以通过该成员设置接收端是否开启 alpha 遮罩渲染。alpha 遮罩渲染功能可以创建具有透明效果的图像或提取视频中的人像。

        4. 视频小流自适应配置 diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index 99659a42277..06d7ebeecee 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -29,7 +29,7 @@
        5. Privacy manifest file (iOS)

          To meet Apple's safety compliance requirements for app publication, the SDK now includes a privacy manifest file, PrivacyInfo.xcprivacy, detailing the SDK's API calls that access or use user data, along with a description of the types of data collected.

          - If you need to publish an app with SDK versions prior to v4.3.1 to the Apple App Store, you must manually add the PrivacyInfo.xcprivacy file to your Xcode project. For more details, see . + If you need to publish an app with SDK versions prior to v4.3.1 to the Apple App Store, you must manually add the PrivacyInfo.xcprivacy file to your Xcode project.
        6. Center stage camera (iOS, macOS)

          To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects.

          diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md index ceef674ae44..fde2a614731 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md @@ -27,7 +27,7 @@ This version is released on 2024 Month x, Day x. To meet Apple's safety compliance requirements for app publication, the SDK now includes a privacy manifest file, `PrivacyInfo.xcprivacy`, detailing the SDK's API calls that access or use user data, along with a description of the types of data collected. - **Note:** If you need to publish an app with SDK versions prior to v4.3.1 to the Apple App Store, you must manually add the `PrivacyInfo.xcprivacy` file to your Xcode project. For more details, see . + **Note:** If you need to publish an app with SDK versions prior to v4.3.1 to the Apple App Store, you must manually add the `PrivacyInfo.xcprivacy` file to your Xcode project. For more details, see [](). 3. 
**Center stage camera** @@ -115,7 +115,7 @@ This version fixed the following issues: - [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` - [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` - [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` -- `AgoraEncryptionErrorType` adds the following enumerations: +- [`AgoraEncryptionErrorType`](/api-ref/rtc/ios/API/enum_encryptionerrortype) adds the following enumerations: - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` - [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay` diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md index aac638c2263..9f4d4f83c62 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md @@ -91,7 +91,7 @@ This version fixed the following issues: - [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) - [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html) -- The following enumerations in `AgoraLocalVideoStreamReason`: +- The following enumerations in [`AgoraLocalVideoStreamReason`](/api-ref/rtc/macos/API/enum_localvideostreamreason): - `AgoraLocalVideoStreamReasonScreenCaptureRecoverFromMinimized` - [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) - [AgoraFaceInfoDelegate](API/class_ifaceinfoobserver.html) @@ -99,7 +99,7 @@ This version fixed the following issues: - [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` - [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` - [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` -- 
`AgoraEncryptionErrorType` adds the following enumerations: +- [`AgoraEncryptionErrorType`](/api-ref/rtc/macos/API/enum_encryptionerrortype) adds the following enumerations: - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` - [AgoraRtcDeviceInfo](API/class_agorartcdeviceinfo.html) adds `deviceTypeName` @@ -241,7 +241,7 @@ This release has optimized the implementation of some functions, involving renam - This release adds the `earMonitorDelay` and `aecEstimatedDelay` members in [AgoraRtcLocalAudioStats](API/class_localaudiostats.html) to report ear monitor delay and acoustic echo cancellation (AEC) delay, respectively. - When using the sound card for recording, it supports capturing audio data in stereo. - The [cacheStats](API/callback_imediaplayersourceobserver_onplayercachestats.html) callback is added to report the statistics of the media file being cached. This callback is triggered once per second after file caching is started. - - The [playbackStats](API/callback_imediaplayersourceobserver_onplayerplaybackstats.html) callback is added to report the statistics of the media file being played. This callback is triggered once per second after the media file starts playing. You can obtain information like the audio and video bitrate of the media file through . + - The [playbackStats](API/callback_imediaplayersourceobserver_onplayerplaybackstats.html) callback is added to report the statistics of the media file being played. This callback is triggered once per second after the media file starts playing. You can obtain information like the audio and video bitrate of the media file through [AgoraMediaPlayerPlaybackStats](API/class_playerplaybackstats.html). 
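Callbacks such as `playbackStats` and `cacheStats` above fire once per second and report averages over that reporting window. A minimal, self-contained model of such a windowed bitrate computation — an illustration of the reporting cadence, not Agora's internal accounting:

```python
def window_bitrate_kbps(bytes_in_window: int, window_ms: int = 1000) -> int:
    """Average bitrate over one reporting window.

    bytes * 8 gives bits; dividing by the window length in milliseconds
    yields bits per millisecond, which is numerically equal to kbit/s.
    """
    if window_ms <= 0:
        raise ValueError("window_ms must be positive")
    return round(bytes_in_window * 8 / window_ms)


# 125,000 bytes received in a 1-second window -> 1000 kbit/s:
print(window_bitrate_kbps(125_000))  # 1000
```

A per-second stats callback would reset its byte counter after each window, so consecutive reports reflect only the most recent second of playback.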
#### Issues fixed From db569c60865ea33a373f6873fc65fc7713e01d58 Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Tue, 16 Apr 2024 16:41:53 +0800 Subject: [PATCH 06/13] fix release notes en --- .../en-US/native/release_android_ng.md | 17 ++++++++--------- .../en-US/native/release_ios_ng.md | 5 ++--- .../en-US/native/release_mac_ng.md | 5 ++--- .../en-US/native/release_windows_ng.md | 3 +-- 4 files changed, 13 insertions(+), 17 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md index a564a053c5e..744342d3710 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md @@ -18,18 +18,17 @@ To ensure parameter naming consistency, this version renames `channelName` to `c The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [`registerFaceInfoObserver`](/api-ref/rtc/android/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) method and [`onFaceInfo`](/api-ref/rtc/android/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. - The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at . + The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). **Attention:** - - The Agora SDK extension extension, MetaKit, simplifies the implementation process of animating avatar with speech, eliminating the need to build your own framework for collection, encoding, and transmission. 
Detailed introduction and integration guidance for MetaKit are available at . - - The speech driven avatar feature is currently in beta testing. To use it, please [technical support](mailto:support@agora.io). + The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). -2. **Wide and ultra-wide cameras (Android, iOS)** +1. **Wide and ultra-wide cameras (Android, iOS)** To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) to check the device's focal length capabilities, and then call [`setCameraCapturerConfiguration`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_setcameracapturerconfiguration) and set `cameraFocalLengthType` to the supported focal length types, including wide and ultra-wide. -3. **Multi-camera capture (Android)** +2. **Multi-camera capture (Android)** This release introduces additional functionalities for Android camera capture: @@ -40,25 +39,25 @@ To ensure parameter naming consistency, this version renames `channelName` to `c - A new parameter `cameraId` is added to [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration). For devices with multiple cameras, where `cameraDirection` cannot identify or access all available cameras, you can obtain the camera ID through Android's native system APIs and specify the desired camera by calling [`startCameraCapture`](/api-ref/rtc/android/API/toc_camera_capture#api_irtcengine_startcameracapture) with the specific `cameraId`. 
- New method [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] supports switching cameras by `cameraId`, allowing apps to dynamically adjust camera usage during runtime based on available cameras. -4. **Data stream encryption** +3. **Data stream encryption** This version adds `datastreamEncryptionEnabled` to [`EncryptionConfig`](/api-ref/rtc/android/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/android/API/toc_network#api_irtcengine_enableencryption). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. -5. **Local Video Rendering** +4. **Local Video Rendering** This version adds the following members to [`VideoCanvas`](/api-ref/rtc/android/API/class_videocanvas) to support more local rendering capabilities: - `surfaceTexture`: Set a native Android `SurfaceTexture` object as the container providing video imagery, then use SDK external methods to perform OpenGL texture rendering. - `enableAlphaMask`: This member enables the receiving end to initiate alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content. -6. **Adaptive configuration for low-quality video streams** +5. **Adaptive configuration for low-quality video streams** This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/android/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. 
- The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. -7. **Other features** +6. **Other features** - New method [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md index fde2a614731..775d1e54d76 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md @@ -16,12 +16,11 @@ This version is released on 2024 Month x, Day x. The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. - The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at . + The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). **Attention:** - - The Agora SDK extension extension, MetaKit, simplifies the implementation process of animating avatar with speech, eliminating the need to build your own framework for collection, encoding, and transmission. 
Detailed introduction and integration guidance for MetaKit are available at . - - The speech driven avatar feature is currently in beta testing. To use it, please [technical support](mailto:support@agora.io). + The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). 2. **Privacy manifest file** diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md index 9f4d4f83c62..dacb83f08cd 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md @@ -10,12 +10,11 @@ This version is released on 2024 Month x, Day x. The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. - The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at . + The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). **Attention:** - - The Agora SDK extension extension, MetaKit, simplifies the implementation process of animating avatar with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at . - - The speech driven avatar feature is currently in beta testing. To use it, please [technical support](mailto:support@agora.io). 
+ The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). 2. **Center stage camera** diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md index e3a99a240fa..660314cf95d 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md @@ -12,8 +12,7 @@ This version is released on 2024 Month x, Day x. **Attention:** - - The Agora SDK extension extension, MetaKit, simplifies the implementation process of animating avatar with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at . - - The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). + The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). 2. **Data stream encryption** From c9c6b0448b0edc7787d42c12cb1ffca9acccc263 Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Tue, 16 Apr 2024 16:53:25 +0800 Subject: [PATCH 07/13] fix android release notes --- .../RTC 4.x/release-notes/en-US/native/release_android_ng.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md index 744342d3710..be331de3bcb 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md @@ -86,7 +86,7 @@ To ensure parameter naming consistency, this version renames `channelName` to `c 4. 
**CPU consumption reduction of in-ear monitoring** - This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1 < 15). For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. + This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1 <<15). For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. 5. **Other improvements** @@ -141,7 +141,7 @@ This version fixed the following issues: - [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraId` (Android) - [`CAMERA_DIRECTION`](/api-ref/rtc/android/API/enum_cameradirection) adds `CAMERA_EXTRA`(2) - [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] -- `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1 < 15) +- `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1 <<15) ## v4.3.0 From eac0ae4dda6b945da9fd4b975a06c69de2ea9eab Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Tue, 16 Apr 2024 17:03:01 +0800 Subject: [PATCH 08/13] update en release --- en-US/dita/RTC-NG/release/release_notes.dita | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index 06d7ebeecee..b06e1958973 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -19,13 +19,8 @@
          1. Speech Driven Avatar

            The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatars. You can access the facial information through the newly added method and callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines.

            -

            The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at .

            - -
              -
            • The Agora SDK extension, MetaKit, simplifies the implementation process of animating avatars with speech, eliminating the need to build your own framework for collection, encoding, and transmission. Detailed introduction and integration guidance for MetaKit are available at .
            • -
            • The speech driven avatar feature is currently in beta testing. To use it, please .
            • -
            -
            +

            The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size]().

            + The speech driven avatar feature is currently in beta testing. To use it, please contact .
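The facial information described above is a set of ARKit-style blend shape coefficients, each a weight in the range [0, 1]. As a minimal sketch of the post-processing step on the receiving side (the name-to-weight map used here is an assumed payload shape for illustration, not the SDK's actual callback format), an app might clamp the weights and pick the dominant shape before handing the data to a 3D renderer:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative post-processing of ARKit-style blend shape coefficients.
// NOTE: the name -> weight map is an assumed shape for this sketch, not the
// SDK's actual facial-information format.
public class BlendShapeSketch {

    // Clamp every coefficient to the ARKit blend shape range [0, 1].
    public static Map<String, Double> clamp(Map<String, Double> raw) {
        Map<String, Double> out = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : raw.entrySet()) {
            out.put(e.getKey(), Math.max(0.0, Math.min(1.0, e.getValue())));
        }
        return out;
    }

    // Name of the strongest shape, e.g. to decide which avatar morph target
    // to emphasize for the current frame.
    public static String dominant(Map<String, Double> weights) {
        String best = null;
        double bestWeight = -1.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            if (e.getValue() > bestWeight) {
                bestWeight = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }
}
```

In a real integration the weights would arrive per frame from the facial-information callback and be applied to the avatar's morph targets by the rendering engine.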
          2. Privacy manifest file (iOS)

            To meet Apple's safety compliance requirements for app publication, the SDK now includes a privacy manifest file, PrivacyInfo.xcprivacy, detailing the SDK's API calls that access or use user data, along with a description of the types of data collected.

            From b9a00d0d901570c1194959b629cb09286bfcf452 Mon Sep 17 00:00:00 2001 From: Suri539 Date: Tue, 16 Apr 2024 17:22:56 +0800 Subject: [PATCH 09/13] Update release_notes.dita --- en-US/dita/RTC-NG/release/release_notes.dita | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/en-US/dita/RTC-NG/release/release_notes.dita b/en-US/dita/RTC-NG/release/release_notes.dita index b06e1958973..eabf34e749a 100644 --- a/en-US/dita/RTC-NG/release/release_notes.dita +++ b/en-US/dita/RTC-NG/release/release_notes.dita @@ -175,7 +175,7 @@
          3. (iOS)
          4. The surfaceTexture and enableAlphaMask members in
          5. The enableAlphaMask member in
          6. -
          7. The following enumerations in : +
          8. The following enumerations in :
            • LOCAL_VIDEO_STREAM_REASON_DEVICE_INTERRUPT
            • LOCAL_VIDEO_STREAM_REASON_DEVICE_FATAL_ERROR
            • @@ -188,7 +188,7 @@
            • adds
            • adds
            • adds datastreamEncryptionEnabled
            • -
            • adds the following enumerations: +
            • adds the following enumerations:
              • @@ -212,12 +212,12 @@
              • (Android, iOS)
              • (Android, iOS)
              • adds a new member cameraFocalLengthType (Android, iOS)
              • -
              • adds the following enumerations: +
              • adds the following enumerations:
                • (11)
                • (12)
              • -
              • adds the following members: +
              • adds the following members:
                • publishThirdCameraTrack
                • publishFourthCameraTrack
                • From 8f098944f4af302b089e90d1f5845fdf47252357 Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Tue, 16 Apr 2024 18:07:47 +0800 Subject: [PATCH 10/13] update android release en --- .../en-US/native/release_android_ng.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md index be331de3bcb..4c45b037748 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md @@ -10,7 +10,7 @@ This version is released on 2024 Month x, Day x. #### Compatibility changes -To ensure parameter naming consistency, this version renames `channelName` to `channelId` and `optionalUid` to `uid` in `joinChannel [1/2]`. You must update your app's code after upgrading to this version to ensure normal project operations. +To ensure parameter naming consistency, this version renames `channelName` to `channelId` and `optionalUid` to `uid` in `joinChannel` [1/2]. You must update your app's code after upgrading to this version to ensure normal project operations. #### New Features @@ -24,11 +24,11 @@ To ensure parameter naming consistency, this version renames `channelName` to `c The speech driven avatar feature is currently in beta testing. To use it, please contact [technical support](mailto:support@agora.io). -1. **Wide and ultra-wide cameras (Android, iOS)** +2. **Wide and ultra-wide cameras** To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. 
You can first call [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) to check the device's focal length capabilities, and then call [`setCameraCapturerConfiguration`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_setcameracapturerconfiguration) and set `cameraFocalLengthType` to the supported focal length types, including wide and ultra-wide. -2. **Multi-camera capture (Android)** +3. **Multi-camera capture** This release introduces additional functionalities for Android camera capture: @@ -39,25 +39,25 @@ To ensure parameter naming consistency, this version renames `channelName` to `c - A new parameter `cameraId` is added to [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration). For devices with multiple cameras, where `cameraDirection` cannot identify or access all available cameras, you can obtain the camera ID through Android's native system APIs and specify the desired camera by calling [`startCameraCapture`](/api-ref/rtc/android/API/toc_camera_capture#api_irtcengine_startcameracapture) with the specific `cameraId`. - New method [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] supports switching cameras by `cameraId`, allowing apps to dynamically adjust camera usage during runtime based on available cameras. -3. **Data stream encryption** +4. **Data stream encryption** This version adds `datastreamEncryptionEnabled` to [`EncryptionConfig`](/api-ref/rtc/android/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/android/API/toc_network#api_irtcengine_enableencryption). 
If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. -4. **Local Video Rendering** +5. **Local Video Rendering** This version adds the following members to [`VideoCanvas`](/api-ref/rtc/android/API/class_videocanvas) to support more local rendering capabilities: - `surfaceTexture`: Set a native Android `SurfaceTexture` object as the container providing video imagery, then use SDK external methods to perform OpenGL texture rendering. - `enableAlphaMask`: This member enables the receiving end to initiate alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content. -5. **Adaptive configuration for low-quality video streams** +6. **Adaptive configuration for low-quality video streams** This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/android/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. -6. **Other features** +7. **Other features** - New method [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. 
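As a back-of-the-envelope illustration of the new `setAudioMixingPlaybackSpeed` option above: assuming the speed argument is a percentage where 100 means original speed (an assumption for this sketch — check the API reference for the exact range and semantics), the effective playback duration scales inversely with the speed value:

```java
// Sketch of the speed-vs-duration relationship for an audio mixing file.
// Assumption: `speed` is a percentage where 100 = original playback speed.
public class PlaybackSpeedSketch {

    // Expected playback duration (ms) of a file at the given speed percentage.
    public static long durationAtSpeed(long originalMs, int speed) {
        if (speed <= 0) {
            throw new IllegalArgumentException("speed must be positive");
        }
        return Math.round(originalMs * 100.0 / speed);
    }
}
```

For example, under this assumption a 60-second file played at speed 200 finishes in about 30 seconds, and at speed 50 it takes about 2 minutes.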
@@ -92,9 +92,9 @@ To ensure parameter naming consistency, this version renames `channelName` to `c This version also includes the following improvements: - - Enhanced performance and stability of the local compositing feature, reducing its CPU usage. (Android) + - Enhanced performance and stability of the local compositing feature, reducing its CPU usage. - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. - - New chorus effect `ROOM_ACOUSTICS_CHORUS` is added to enhances the spatial presence of vocals in chorus scenarios. (Android) + - New chorus effect `ROOM_ACOUSTICS_CHORUS` is added to enhance the spatial presence of vocals in chorus scenarios. - In [`RemoteAudioStats`](/api-ref/rtc/android/API/class_remoteaudiostats), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. #### Issues fixed @@ -102,10 +102,10 @@ To ensure parameter naming consistency, this version renames `channelName` to `c This version fixed the following issues: - Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player. -- After joining a channel and calling [`disableAudio`](/api-ref/rtc/android/API/toc_audio_basic#api_irtcengine_disableaudio), audio playback did not immediately stop. (Android) -- Broadcasters using certain models of devices under speaker mode experienced occasional local audio capture failures when switching the app process to the background and then back to the foreground, causing remote users to not hear the broadcaster's audio. (Android) -- On devices with Android 8.0, enabling screen sharing occasionally caused the app to crash. 
(Android) -- In scenarios using camera capture for local video, when the app was moved to the background and [`disableVideo`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_disablevideo) or [`stopPreview`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_stoppreview)[1/2] was called to stop video capture, camera capture was unexpectedly activated when the app was brought back to the foreground. (Android) +- After joining a channel and calling [`disableAudio`](/api-ref/rtc/android/API/toc_audio_basic#api_irtcengine_disableaudio), audio playback did not immediately stop. +- Broadcasters using certain models of devices under speaker mode experienced occasional local audio capture failures when switching the app process to the background and then back to the foreground, causing remote users to not hear the broadcaster's audio. +- On devices with Android 8.0, enabling screen sharing occasionally caused the app to crash. +- In scenarios using camera capture for local video, when the app was moved to the background and [`disableVideo`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_disablevideo) or [`stopPreview`](/api-ref/rtc/android/API/toc_video_basic#api_irtcengine_stoppreview)[1/2] was called to stop video capture, camera capture was unexpectedly activated when the app was brought back to the foreground. #### API Changes @@ -124,7 +124,7 @@ This version fixed the following issues: - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` - [`RemoteAudioStats`](/api-ref/rtc/android/API/class_remoteaudiostats) adds `e2eDelay` - `ERR_DATASTREAM_DECRYPTION_FAILED` -- `ROOM_ACOUSTICS_CHORUS` is added, enhancing the spatial presence of vocals in chorus scenarios. (Android) +- `ROOM_ACOUSTICS_CHORUS` is added, enhancing the spatial presence of vocals in chorus scenarios. 
- [`getCallIdEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_getcallidex) - [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex) - [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) @@ -138,7 +138,7 @@ This version fixed the following issues: - [`ChannelMediaOptions`](/api-ref/rtc/android/API/class_channelmediaoptions) adds the following members: - `publishThirdCameraTrack` - `publishFourthCameraTrack` -- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraId` (Android) +- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraId` - [`CAMERA_DIRECTION`](/api-ref/rtc/android/API/enum_cameradirection) adds `CAMERA_EXTRA`(2) - [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] - `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1 <<15) From 8c69346e3be2aff60d793014059013b386c16462 Mon Sep 17 00:00:00 2001 From: Suri539 Date: Tue, 16 Apr 2024 18:09:11 +0800 Subject: [PATCH 11/13] Update release_windows_ng.md --- .../en-US/native/release_windows_ng.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md index 660314cf95d..f7e8d22edeb 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md @@ -6,7 +6,7 @@ This version is released on 2024 Month x, Day x. 1. **Speech Driven Avatar** - The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. 
You can access the facial information through the newly added [registerFaceInfoObserver](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. + The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [`registerFaceInfoObserver`](/api-ref/rtc/windows/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) method and [`onFaceInfo`](/api-ref/rtc/windows/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). @@ -16,20 +16,20 @@ This version is released on 2024 Month x, Day x. 2. **Data stream encryption** - This version adds `datastreamEncryptionEnabled` to [EncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. + This version adds `datastreamEncryptionEnabled` to [`EncryptionConfig`](/api-ref/rtc/windows/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/windows/API/toc_network#api_irtcengine_enableencryption). 
If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. 3. **Adaptive configuration for low-quality video streams** - This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode](API/api_irtcengine_setdualstreammode2.html)[2/2], the SDK defaults to the following behaviors: + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/windows/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. 4. **Other features** - - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. - - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files. - - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios. + - New method [`enableEncryptionEx`](/api-ref/rtc/windows/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. 
+ - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/windows/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. + - New method [`getCallIdEx`](/api-ref/rtc/windows/API/toc_network#api_irtcengineex_getcallidex) is introduced for retrieving call IDs in multi-channel scenarios. #### Improvements @@ -53,19 +53,19 @@ This version is released on 2024 Month x, Day x. This version also includes the following improvements: - - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. + - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. - - In [AUDIO_EFFECT_PRESET](API/enum_audioeffectpreset.html), a new enumeration `ROOM_ACOUSTICS_CHORUS` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. - - In [RemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. + - In [`AUDIO_EFFECT_PRESET`](/api-ref/rtc/windows/API/enum_audioeffectpreset), a new enumeration `ROOM_ACOUSTICS_CHORUS` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. + - In [`RemoteAudioStats`](/api-ref/rtc/windows/API/class_remoteaudiostats), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. #### Issues fixed This version fixed the following issues: - Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player. 
-- In screen sharing scenarios, when the app enabled sound card capture with [enableLoopbackRecording](API/api_irtcengine_enableloopbackrecording.html) to capture audio from the shared screen, the transmission of sound card captured audio failed after a local user manually disabled the local audio capture device, causing remote users to not hear the shared screen's audio. -- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [onAudioDeviceStateChanged](API/callback_irtcengineeventhandler_onaudiodevicestatechanged.html) was triggered multiple times. -- During interactions, when a local user set the system default playback device to speakers using [setDevice](API/api_iaudiodevicecollection_setdevice.html), there was no sound from the remote end. +- In screen sharing scenarios, when the app enabled sound card capture with [`enableLoopbackRecording`](/api-ref/rtc/windows/API/toc_audio_capture#api_irtcengine_enableloopbackrecording) to capture audio from the shared screen, the transmission of sound card captured audio failed after a local user manually disabled the local audio capture device, causing remote users to not hear the shared screen's audio. +- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [`onAudioDeviceStateChanged`](/api-ref/rtc/windows/API/toc_audio_device#callback_irtcengineeventhandler_onaudiodevicestatechanged) was triggered multiple times. +- During interactions, when a local user set the system default playback device to speakers using [`setDevice`](/api-ref/rtc/windows/API/toc_audio_device#api_iaudiodevicecollection_setdevice), there was no sound from the remote end. - When sharing an Excel document window, remote users occasionally saw a green screen. - On devices using Intel graphics cards, occasionally there was a performance regression when publishing a small video stream. 
@@ -155,10 +155,10 @@ This release has optimized the implementation of some functions, involving renam - Before v4.3.0, if you call the [disableAudio](API/api_irtcengine_disableaudio.html) method to disable the audio module, audio loopback capturing will not be disabled. - As of v4.3.0, if you call the [disableAudio](API/api_irtcengine_disableaudio.html) method to disable the audio module, audio loopback capturing will be disabled as well. If you need to enable audio loopback capturing, you need to enable the audio module by calling the [enableAudio](API/api_irtcengine_enableaudio.html) method and then call [enableLoopbackRecording](API/api_irtcengine_enableloopbackrecording.html). - + 5. **Log encryption behavior changes** - For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console. + For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console. 
Refer to the following solutions for different needs: From 1471ba5faf8eba9585ce6c842134d1ae3013ad1a Mon Sep 17 00:00:00 2001 From: Nero-Hu Date: Tue, 16 Apr 2024 18:10:20 +0800 Subject: [PATCH 12/13] remove redundant platforms in android release --- .../release-notes/en-US/native/release_android_ng.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md index 4c45b037748..8eb101935d8 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md @@ -128,10 +128,10 @@ This version fixed the following issues: - [`getCallIdEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_getcallidex) - [`enableEncryptionEx`](/api-ref/rtc/android/API/toc_network#api_irtcengineex_enableencryptionex) - [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/android/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) -- [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) (Android, iOS) -- [`AgoraFocalLengthInfo`](/api-ref/rtc/android/API/class_focallengthinfo) (Android, iOS) -- [`CAMERA_FOCAL_LENGTH_TYPE`](/api-ref/rtc/android/API/enum_camerafocallengthtype) (Android, iOS) -- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraFocalLengthType` (Android, iOS) +- [`queryCameraFocalLengthCapability`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) +- [`AgoraFocalLengthInfo`](/api-ref/rtc/android/API/class_focallengthinfo) +- [`CAMERA_FOCAL_LENGTH_TYPE`](/api-ref/rtc/android/API/enum_camerafocallengthtype) +- [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraFocalLengthType` - 
[`VideoSourceType`](/api-ref/rtc/android/API/enum_videosourcetype) adds the following enumerations: - `VIDEO_SOURCE_CAMERA_THIRD`(11) - `VIDEO_SOURCE_CAMERA_FOURTH`(12) From 94d4a19d5b589e300187b989e39543961e7a4087 Mon Sep 17 00:00:00 2001 From: Cilla-luodan <85477033+Cilla-luodan@users.noreply.github.com> Date: Tue, 16 Apr 2024 18:13:17 +0800 Subject: [PATCH 13/13] review comments --- .../en-US/native/release_android_ng.md | 4 +- .../en-US/native/release_ios_ng.md | 80 +++++++++---------- .../en-US/native/release_mac_ng.md | 68 ++++++++-------- .../en-US/native/release_windows_ng.md | 2 +- 4 files changed, 77 insertions(+), 77 deletions(-) diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md index be331de3bcb..6184e34a631 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_android_ng.md @@ -86,7 +86,7 @@ To ensure parameter naming consistency, this version renames `channelName` to `c 4. **CPU consumption reduction of in-ear monitoring** - This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1 <<15). For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. + This release adds an enumerator `EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER` (1<<15). For complex audio processing scenarios, you can specify this option to reuse the audio filter post sender-side processing in in-ear monitoring, thereby reducing CPU consumption. Note that this option may increase the latency of in-ear monitoring, which is suitable for latency-tolerant scenarios requiring low CPU consumption. 5. 
**Other improvements** @@ -141,7 +141,7 @@ This version fixed the following issues: - [`CameraCapturerConfiguration`](/api-ref/rtc/android/API/class_cameracapturerconfiguration) adds a new member `cameraId` (Android) - [`CAMERA_DIRECTION`](/api-ref/rtc/android/API/enum_cameradirection) adds `CAMERA_EXTRA`(2) - [`switchCamera`](/api-ref/rtc/android/API/toc_video_device#api_irtcengine_switchcamera2)[2/2] -- `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1 <<15) +- `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1<<15) ## v4.3.0 diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md index 775d1e54d76..f9a627ee18e 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_ios_ng.md @@ -14,7 +14,7 @@ This version is released on 2024 Month x, Day x. 1. **Speech Driven Avatar** - The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. + The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. You can access the facial information through the newly added [`setFaceInfoDelegate`](/api-ref/rtc/ios/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) method and [`onFaceInfo`](/api-ref/rtc/ios/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) callback. 
This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). @@ -30,40 +30,40 @@ This version is released on 2024 Month x, Day x. 3. **Center stage camera** - To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects. + To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [`enableCameraCenterStage`](/api-ref/rtc/ios/API/toc_center_stage#api_irtcengine_enablecameracenterstage) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects. - Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html). A list of supported devices can be found in the API documentation at [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html). + Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [`isCameraCenterStageSupported`](/api-ref/rtc/ios/API/toc_center_stage#api_irtcengine_iscameracenterstagesupported). A list of supported devices can be found in the API documentation at [`enableCameraCenterStage`](/api-ref/rtc/ios/API/toc_center_stage#api_irtcengine_enablecameracenterstage). 4. 
**Camera stabilization** - To improve video stability in mobile filming, low-light environments, and hand-held shooting scenarios, this version introduces a camera stabilization feature. You can activate this feature and select an appropriate stabilization mode by calling [setCameraStabilizationMode](API/api_irtcengine_setcamerastabilizationmode.html), achieving more stable and clearer video footage. + To improve video stability in mobile filming, low-light environments, and hand-held shooting scenarios, this version introduces a camera stabilization feature. You can activate this feature and select an appropriate stabilization mode by calling [`setCameraStabilizationMode`](/api-ref/rtc/ios/API/toc_camera_capture#api_irtcengine_setcamerastabilizationmode), achieving more stable and clearer video footage. 5. **Wide and ultra-wide cameras** - To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call [queryCameraFocalLengthCapability](API/api_irtcengine_querycamerafocallengthcapability.html) to check the device's focal length capabilities, and then call [setCameraCapturerConfiguration](API/api_irtcengine_setcameracapturerconfiguration.html) and set `cameraFocalLengthType` to the supported focal length types, including wide and ultra-wide. + To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call [`queryCameraFocalLengthCapability`](/api-ref/rtc/ios/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) to check the device's focal length capabilities, and then call [`setCameraCapturerConfiguration`](/api-ref/rtc/ios/API/toc_video_device#api_irtcengine_setcameracapturerconfiguration) and set `cameraFocalLengthType` to the supported focal length types, including wide and ultra-wide. 6. 
**Data stream encryption** - This version adds `datastreamEncryptionEnabled` to [AgoraEncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. + This version adds `datastreamEncryptionEnabled` to [`AgoraEncryptionConfig`](/api-ref/rtc/ios/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/ios/API/toc_network#api_irtcengine_enableencryption). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. 7. **Local Video Rendering** - This version adds the following members to [AgoraRtcVideoCanvas](API/class_videocanvas.html) to support more local rendering capabilities: + This version adds the following members to [`AgoraRtcVideoCanvas`](/api-ref/rtc/ios/API/class_videocanvas) to support more local rendering capabilities: - enableAlphaMask: This member enables the receiving end to initiate alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content. 8. **Adaptive configuration for low-quality video streams** - This version introduces adaptive configuration for low-quality video streams. 
When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode [2/2\]](API/api_irtcengine_setdualstreammode2.html), the SDK defaults to the following behaviors: + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/ios/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. 9. **Other features** - - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. - - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files. - - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios. + - New method [`enableEncryptionEx`](/api-ref/rtc/ios/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. + - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/ios/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. + - New method [`getCallIdEx`](/api-ref/rtc/ios/API/toc_network#api_irtcengineex_getcallidex) is introduced for retrieving call IDs in multi-channel scenarios. #### Improvements @@ -77,7 +77,7 @@ This version is released on 2024 Month x, Day x. 2. 
**Custom audio capture optimization** - To enhance the flexibility of custom audio capture, this release deprecates [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) and introduces [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html). Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. + To enhance the flexibility of custom audio capture, this release deprecates [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/ios/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer)[1/2] and introduces [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/ios/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer2)[2/2]. Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. 3. **CPU consumption reduction of in-ear monitoring** @@ -90,8 +90,8 @@ This version is released on 2024 Month x, Day x. - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. - Improved stability in processing video by the raw video frame observer. - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. - - In [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. 
- - In [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. + - In [`AgoraAudioEffectPreset`](/api-ref/rtc/ios/API/enum_audioeffectpreset), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. + - In [`AgoraRtcRemoteAudioStats`](/api-ref/rtc/ios/API/class_remoteaudiostats), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. #### Issues fixed @@ -103,35 +103,35 @@ This version fixed the following issues: **Added** -- [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) -- [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html) -- [setCameraStabilizationMode](API/api_irtcengine_setcamerastabilizationmode.html) -- [AgoraCameraStabilizationMode](API/enum_camerastabilizationmode.html) -- The `enableAlphaMask` member in [AgoraRtcVideoCanvas](API/class_videocanvas.html) -- [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) -- [AgoraFaceInfoDelegate](API/class_ifaceinfoobserver.html) -- [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) -- [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` -- [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` -- [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` -- [`AgoraEncryptionErrorType`](/api-ref/rtc/ios/API/enum_encryptionerrortype) adds the following enumerations: +- [`enableCameraCenterStage`](/api-ref/rtc/ios/API/toc_center_stage#api_irtcengine_enablecameracenterstage) +- 
[`isCameraCenterStageSupported`](/api-ref/rtc/ios/API/toc_center_stage#api_irtcengine_iscameracenterstagesupported) +- [`setCameraStabilizationMode`](/api-ref/rtc/ios/API/toc_camera_capture#api_irtcengine_setcamerastabilizationmode) +- [`AgoraCameraStabilizationMode`](/api-ref/rtc/ios/API/enum_camerastabilizationmode) +- The `enableAlphaMask` member in [`AgoraRtcVideoCanvas`](/api-ref/rtc/ios/API/class_videocanvas) +- [`setFaceInfoDelegate`](/api-ref/rtc/ios/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) +- [`AgoraFaceInfoDelegate`](/api-ref/rtc/ios/API/class_ifaceinfoobserver) +- [`onFaceInfo`](/api-ref/rtc/ios/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) +- [`AgoraMediaSourceType`](/api-ref/rtc/ios/API/enum_mediasourcetype) adds `AgoraMediaSourceTypeSpeechDriven` +- [`AgoraVideoSourceType`](/api-ref/rtc/ios/API/enum_videosourcetype) adds `AgoraVideoSourceTypeSpeechDriven` +- [`AgoraEncryptionConfig`](/api-ref/rtc/ios/API/class_encryptionconfig) adds `datastreamEncryptionEnabled` +- [`AgoraEncryptionErrorType`](/api-ref/rtc/ios/API/enum_encryptionerrortype) adds the following enumerations: - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` -- [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay` -- [AgoraErrorCode](API/enum_errorcodetype.html) adds `AgoraErrorCodeDatastreamDecryptionFailed` -- [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. 
-- [getCallIdEx](API/api_irtcengineex_getcallidex.html) -- [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) -- [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) -- [queryCameraFocalLengthCapability](API/api_irtcengine_querycamerafocallengthcapability.html) -- [AgoraFocalLengthInfo](API/class_focallengthinfo.html) -- [AgoraFocalLength](API/enum_camerafocallengthtype.html) -- [AgoraCameraCapturerConfiguration](API/class_cameracapturerconfiguration.html) adds a new member `cameraFocalLengthType` -- [AgoraEarMonitoringFilterType](API/enum_earmonitoringfiltertype.html) adds a new enumeration `AgoraEarMonitoringFilterBuiltInAudioFilters`(1 <<15) -- [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html) -- +- [`AgoraRtcRemoteAudioStats`](/api-ref/rtc/ios/API/class_remoteaudiostats) adds `e2eDelay` +- [`AgoraErrorCode`](/api-ref/rtc/ios/API/enum_errorcodetype) adds `AgoraErrorCodeDatastreamDecryptionFailed` +- [`AgoraAudioEffectPreset`](/api-ref/rtc/ios/API/enum_audioeffectpreset) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. 
+- [`getCallIdEx`](/api-ref/rtc/ios/API/toc_network#api_irtcengineex_getcallidex) +- [`enableEncryptionEx`](/api-ref/rtc/ios/API/toc_network#api_irtcengineex_enableencryptionex) +- [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/ios/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) +- [`queryCameraFocalLengthCapability`](/api-ref/rtc/ios/API/toc_video_device#api_irtcengine_querycamerafocallengthcapability) +- [`AgoraFocalLengthInfo`](/api-ref/rtc/ios/API/class_focallengthinfo) +- [`AgoraFocalLength`](/api-ref/rtc/ios/API/enum_camerafocallengthtype) +- [`AgoraCameraCapturerConfiguration`](/api-ref/rtc/ios/API/class_cameracapturerconfiguration) adds a new member `cameraFocalLengthType` +- [`AgoraEarMonitoringFilterType`](/api-ref/rtc/ios/API/enum_earmonitoringfiltertype) adds a new enumeration `AgoraEarMonitoringFilterBuiltInAudioFilters`(1<<15) +- [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/ios/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer2)[2/2] + **Deprecated** -- [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) +- [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/ios/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer)[1/2] ## v4.3.0 diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md index dacb83f08cd..9d8cea28726 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_mac_ng.md @@ -8,7 +8,7 @@ This version is released on 2024 Month x, Day x. 1. **Speech Driven Avatar** - The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate avatar. 
You can access the facial information through the newly added [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) method and [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. + The SDK introduces a speech driven extension that converts speech information into corresponding facial expressions to animate an avatar. You can access the facial information through the newly added [`setFaceInfoDelegate`](/api-ref/rtc/macos/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) method and [`onFaceInfo`](/api-ref/rtc/macos/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) callback. This facial information conforms to the ARKit standard for Blend Shapes (BS), which you can further process using third-party 3D rendering engines. The speech driven extension is a trimmable dynamic library, and details about the increase in app size are available at [reduce-app-size](). @@ -18,32 +18,32 @@ This version is released on 2024 Month x, Day x. 2. **Center stage camera** - To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects. + To enhance the presentation effect in online meetings, shows, and online education scenarios, this version introduces the [`enableCameraCenterStage`](/api-ref/rtc/macos/API/toc_center_stage#api_irtcengine_enablecameracenterstage) method to activate the center stage camera feature. This ensures that presenters, regardless of movement, always remain centered in the video frame, achieving better presentation effects. 
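The ARKit-style Blend Shapes (BS) output described in the speech-driven avatar notes above is simply a set of named weights in the 0.0 to 1.0 range. As a rough, hypothetical sketch of the consuming side (plain Java; the class and method names here are invented for illustration and are not part of the Agora API), a renderer-facing wrapper might clamp each weight before handing it to a 3D engine:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical holder for ARKit-style blend-shape weights, such as those a
// speech-driven callback like onFaceInfo might deliver. Nothing here is the
// Agora API; it only illustrates the name -> weight model the notes describe.
class BlendShapeFrame {
    private final Map<String, Float> weights = new HashMap<>();

    // Clamp each weight into [0, 1] before passing it to a 3D renderer.
    void put(String blendShape, float weight) {
        weights.put(blendShape, Math.max(0f, Math.min(1f, weight)));
    }

    // Unset blend shapes default to a neutral weight of 0.
    float get(String blendShape) {
        Float w = weights.get(blendShape);
        return w == null ? 0f : w;
    }
}
```

A third-party rendering engine would then read the clamped weights each frame and drive the avatar's morph targets from them.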
- Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html). A list of supported devices can be found in the API documentation at [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html). + Before enabling Center Stage, it is recommended to verify whether your current device supports this feature by calling [`isCameraCenterStageSupported`](/api-ref/rtc/macos/API/toc_center_stage#api_irtcengine_iscameracenterstagesupported). A list of supported devices can be found in the API documentation at [`enableCameraCenterStage`](/api-ref/rtc/macos/API/toc_center_stage#api_irtcengine_enablecameracenterstage). 3. **Data stream encryption** - This version adds `datastreamEncryptionEnabled` to [AgoraEncryptionConfig](API/class_encryptionconfig.html) for enabling data stream encryption. You can set this when you activate encryption with [enableEncryption](API/api_irtcengine_enableencryption.html). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. + This version adds `datastreamEncryptionEnabled` to [`AgoraEncryptionConfig`](/api-ref/rtc/macos/API/class_encryptionconfig) for enabling data stream encryption. You can set this when you activate encryption with [`enableEncryption`](/api-ref/rtc/macos/API/toc_network#api_irtcengine_enableencryption). If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` and `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` enumerations. 4. **Adaptive configuration for low-quality video streams** - This version introduces adaptive configuration for low-quality video streams. 
When you activate dual-stream mode and set up low-quality video streams on the sending side using [setDualStreamMode [2/2\]](API/api_irtcengine_setdualstreammode2.html), the SDK defaults to the following behaviors: + This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using [`setDualStreamMode`](/api-ref/rtc/macos/API/toc_dual_stream#api_irtcengine_setdualstreammode2)[2/2], the SDK defaults to the following behaviors: - The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution. - The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification. 5. **Other features** - - New method [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) is added for enabling media stream or data stream encryption in multi-channel scenarios. - - New method [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) is introduced for setting the playback speed of audio files. - - New method [getCallIdEx](API/api_irtcengineex_getcallidex.html) is introduced for retrieving call IDs in multi-channel scenarios. + - New method [`enableEncryptionEx`](/api-ref/rtc/macos/API/toc_network#api_irtcengineex_enableencryptionex) is added for enabling media stream or data stream encryption in multi-channel scenarios. + - New method [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/macos/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) is introduced for setting the playback speed of audio files. + - New method [`getCallIdEx`](/api-ref/rtc/macos/API/toc_network#api_irtcengineex_getcallidex) is introduced for retrieving call IDs in multi-channel scenarios. #### Improvements 1. 
**Optimization of local video status callbacks** - To facilitate understanding of the specific reasons for changes in local video status, this version adds the following enumerations to the [localVideoStateChangedOfState](API/callback_irtcengineeventhandler_onlocalvideostatechanged.html) callback's [AgoraLocalVideoStreamReason](API/enum_localvideostreamreason.html) enumeration class: + To facilitate understanding of the specific reasons for changes in local video status, this version adds the following enumerations to the [`localVideoStateChangedOfState`](/api-ref/rtc/macos/API/toc_video_basic#callback_irtcengineeventhandler_onlocalvideostatechanged) callback's [`AgoraLocalVideoStreamReason`](/api-ref/rtc/macos/API/enum_localvideostreamreason) enumeration class: - `AgoraLocalVideoStreamReasonScreenCaptureRecoverFromMinimized` (27): The window being captured for screen sharing has recovered from a minimized state. @@ -61,7 +61,7 @@ This version is released on 2024 Month x, Day x. 4. **Custom audio capture optimization** - To enhance the flexibility of custom audio capture, this release deprecates [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) and introduces [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html). Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. 
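Because the new push method takes `sampleRate` and `channels` explicitly, callers typically size their PCM buffers from those two values. A minimal sketch of that arithmetic, assuming 16-bit PCM samples and 10 ms frames (both are illustrative assumptions, not values mandated by the SDK):

```java
// Back-of-the-envelope sizing for externally captured audio frames.
// BYTES_PER_SAMPLE and FRAME_MS are illustrative assumptions (16-bit PCM,
// 10 ms frames), not SDK requirements.
class PcmFrameSizer {
    static final int BYTES_PER_SAMPLE = 2; // 16-bit PCM
    static final int FRAME_MS = 10;        // one frame covers 10 ms

    // Bytes per frame = samplesPerFrame * channels * bytesPerSample.
    static int frameBytes(int sampleRate, int channels) {
        int samplesPerFrame = sampleRate * FRAME_MS / 1000;
        return samplesPerFrame * channels * BYTES_PER_SAMPLE;
    }
}
```

For example, 48 kHz stereo yields 480 samples per 10 ms frame, or 1920 bytes per push under these assumptions.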
+ To enhance the flexibility of custom audio capture, this release deprecates [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/macos/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer)[1/2] and introduces [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/macos/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer2)[2/2]. Compared to the deprecated method, the new method adds parameters such as `sampleRate`, `channels`, and `trackId`. These support pushing external CMSampleBuffer audio data to the channel via custom audio tracks, and allow for the setting of sample rates and channel counts for external audio sources. 5. **CPU consumption reduction of in-ear monitoring** @@ -72,48 +72,48 @@ This version is released on 2024 Month x, Day x. This version also includes the following improvements: - Optimization of video encoding and decoding strategies in non-screen sharing scenarios to save system performance overhead. - - For macOS 14 and above, optimization of [getScreenCaptureSourcesWithThumbSize](API/api_irtcengine_getscreencapturesources.html) behavior. From this version onward, the method automatically filters out widget windows from the list of available window resources. + - For macOS 14 and above, optimization of [`getScreenCaptureSourcesWithThumbSize`](/api-ref/rtc/macos/API/toc_screencapture#api_irtcengine_getscreencapturesources) behavior. From this version onward, the method automatically filters out widget windows from the list of available window resources. - Enhanced media player capabilities to handle WebM format videos, including support for rendering alpha channels. - - In [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. 
- - In [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. + - In [`AgoraAudioEffectPreset`](/api-ref/rtc/macos/API/enum_audioeffectpreset), a new enumeration `AgoraAudioEffectPresetRoomAcousticsChorus` (chorus effect) is added, enhancing the spatial presence of vocals in chorus scenarios. + - In [`AgoraRtcRemoteAudioStats`](/api-ref/rtc/macos/API/class_remoteaudiostats), a new `e2eDelay` field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end. #### Issues fixed This version fixed the following issues: - Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player. -- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [stateChanged](API/api_irtcengine_statechanged.html) was triggered multiple times. +- When a user plugged and unplugged a Bluetooth or wired headset once, the audio state change callback [`stateChanged`](/api-ref/rtc/macos/API/toc_common_device#api_irtcengine_statechanged) was triggered multiple times. 
#### API Changes **Added** -- [enableCameraCenterStage](API/api_irtcengine_enablecameracenterstage.html) -- [isCameraCenterStageSupported](API/api_irtcengine_iscameracenterstagesupported.html) -- The following enumerations in [`AgoraLocalVideoStreamReason`](/api-ref/rtc/macos/API/enum_localvideostreamreason): +- [`enableCameraCenterStage`](/api-ref/rtc/macos/API/toc_center_stage#api_irtcengine_enablecameracenterstage) +- [`isCameraCenterStageSupported`](/api-ref/rtc/macos/API/toc_center_stage#api_irtcengine_iscameracenterstagesupported) +- The following enumerations in [`AgoraLocalVideoStreamReason`](/api-ref/rtc/macos/API/enum_localvideostreamreason): - `AgoraLocalVideoStreamReasonScreenCaptureRecoverFromMinimized` -- [setFaceInfoDelegate](API/api_imediaengine_registerfaceinfoobserver.html) -- [AgoraFaceInfoDelegate](API/class_ifaceinfoobserver.html) -- [onFaceInfo](API/callback_ifaceinfoobserver_onfaceinfo.html) -- [AgoraMediaSourceType](API/enum_mediasourcetype.html) adds `AgoraMediaSourceTypeSpeechDriven` -- [AgoraVideoSourceType](API/enum_videosourcetype.html) adds `AgoraVideoSourceTypeSpeechDriven` -- [AgoraEncryptionConfig](API/class_encryptionconfig.html) adds `datastreamEncryptionEnabled` -- [`AgoraEncryptionErrorType`](/api-ref/rtc/macos/API/enum_encryptionerrortype) adds the following enumerations: +- [`setFaceInfoDelegate`](/api-ref/rtc/macos/API/toc_speech_driven#api_imediaengine_registerfaceinfoobserver) +- [`AgoraFaceInfoDelegate`](/api-ref/rtc/macos/API/class_ifaceinfoobserver) +- [`onFaceInfo`](/api-ref/rtc/macos/API/toc_speech_driven#callback_ifaceinfoobserver_onfaceinfo) +- [`AgoraMediaSourceType`](/api-ref/rtc/macos/API/enum_mediasourcetype) adds `AgoraMediaSourceTypeSpeechDriven` +- [`AgoraVideoSourceType`](/api-ref/rtc/macos/API/enum_videosourcetype) adds `AgoraVideoSourceTypeSpeechDriven` +- [`AgoraEncryptionConfig`](/api-ref/rtc/macos/API/class_encryptionconfig) adds `datastreamEncryptionEnabled` +- 
[`AgoraEncryptionErrorType`](/api-ref/rtc/macos/API/enum_encryptionerrortype) adds the following enumerations: - `ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE` - `ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE` -- [AgoraRtcDeviceInfo](API/class_agorartcdeviceinfo.html) adds `deviceTypeName` -- [AgoraRtcRemoteAudioStats](API/class_remoteaudiostats.html) adds `e2eDelay` -- [AgoraErrorCode](API/enum_errorcodetype.html) adds `AgoraErrorCodeDatastreamDecryptionFailed` -- [AgoraAudioEffectPreset](API/enum_audioeffectpreset.html) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. -- [getCallIdEx](API/api_irtcengineex_getcallidex.html) -- [enableEncryptionEx](API/api_irtcengineex_enableencryptionex.html) -- [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) -- [AgoraEarMonitoringFilterType](API/enum_earmonitoringfiltertype.html) adds a new enumeration `AgoraEarMonitoringFilterBuiltInAudioFilters`(1 <<15) -- [pushExternalAudioFrameSampleBuffer [2/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer2.html) +- [`AgoraRtcDeviceInfo`](/api-ref/rtc/macos/API/class_agorartcdeviceinfo) adds `deviceTypeName` +- [`AgoraRtcRemoteAudioStats`](/api-ref/rtc/macos/API/class_remoteaudiostats) adds `e2eDelay` +- [`AgoraErrorCode`](/api-ref/rtc/macos/API/enum_errorcodetype) adds `AgoraErrorCodeDatastreamDecryptionFailed` +- [`AgoraAudioEffectPreset`](/api-ref/rtc/macos/API/enum_audioeffectpreset) adds `AgoraAudioEffectPresetRoomAcousticsChorus`, enhancing the spatial presence of vocals in chorus scenarios. 
+- [`getCallIdEx`](/api-ref/rtc/macos/API/toc_network#api_irtcengineex_getcallidex) +- [`enableEncryptionEx`](/api-ref/rtc/macos/API/toc_network#api_irtcengineex_enableencryptionex) +- [`setAudioMixingPlaybackSpeed`](/api-ref/rtc/macos/API/toc_audio_mixing#api_irtcengine_setaudiomixingplaybackspeed) +- [`AgoraEarMonitoringFilterType`](/api-ref/rtc/macos/API/enum_earmonitoringfiltertype) adds a new enumeration `AgoraEarMonitoringFilterBuiltInAudioFilters`(1<<15) +- [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/macos/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer2)[2/2] **Deprecated** -- [pushExternalAudioFrameSampleBuffer [1/2\]](API/api_irtcengine_pushexternalaudioframesamplebuffer.html) +- [`pushExternalAudioFrameSampleBuffer`](/api-ref/rtc/macos/API/toc_audio_custom_capturenrendering#api_irtcengine_pushexternalaudioframesamplebuffer)[1/2] ## v4.3.0 diff --git a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md index 660314cf95d..95755cc3918 100644 --- a/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md +++ b/markdown/RTC 4.x/release-notes/en-US/native/release_windows_ng.md @@ -101,7 +101,7 @@ This version fixed the following issues: - [setAudioMixingPlaybackSpeed](API/api_irtcengine_setaudiomixingplaybackspeed.html) -- [EAR_MONITORING_FILTER_TYPE](API/enum_earmonitoringfiltertype.html) adds a new enumeration `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1 <<15) +- [EAR_MONITORING_FILTER_TYPE](API/enum_earmonitoringfiltertype.html) adds a new enumeration `EAR_MONITORING_FILTER_BUILT_IN_AUDIO_FILTERS`(1<<15) ## v4.3.0
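The `(1<<15)` value normalized in the hunks above reflects that the ear-monitoring filter types are bit flags intended to be combined with bitwise OR. A small illustrative sketch (only the `1<<15` value comes from these release notes; the other flag value and all names here are invented for illustration, so check the `EAR_MONITORING_FILTER_TYPE` reference for the real enumerators):

```java
// Illustrative bit-flag handling for ear-monitoring filter types.
// FILTER_BUILT_IN_AUDIO_FILTERS uses the 1<<15 value from these notes;
// FILTER_NONE is an invented placeholder flag for the example.
class EarMonitoringFilters {
    static final int FILTER_NONE = 1;                          // illustrative
    static final int FILTER_BUILT_IN_AUDIO_FILTERS = 1 << 15;  // from this release

    // A combined mask contains a flag when the bitwise AND is nonzero.
    static boolean hasFlag(int mask, int flag) {
        return (mask & flag) != 0;
    }
}
```

Writing the enumerator as `1<<15` (rather than a decimal literal) makes the single-bit layout obvious, which is why the release notes normalize the spacing of the expression.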