
Azure SDK Review - [Introduction to Azure AI Vision Face API] #8067

Open
azure-sdk opened this issue Oct 1, 2024 · 0 comments
Labels
needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team.

azure-sdk commented Oct 1, 2024

A new SDK Review meeting has been requested.

Service Name: Azure AI Vision Face API
Review Created By: Nabil Lathiff
Review Date: 10/07/2024 02:05 PM PT

Release Plan: n/a (this is a frontend client SDK that bundles UI components and does not fit any of the existing release plans)
Languages to review: Swift, Kotlin, JavaScript
Hero Scenarios Link: Loop containing sample code and API overview

Solution Architecture Overview:

(solution architecture diagram attached in the original issue)

Core Concepts Doc Link: Tutorial from public documentation with core concepts explained
APIView Links:

  1. Swift
  2. JavaScript
  3. Kotlin (note: a static web site is used instead of APIView, since Kotlin is not yet supported in APIView).

Description:

An introduction to the Azure AI Vision Face Liveness Detection feature, along with demo videos, is available in our release blog1 and blog2.

This feature relies on the frontend application to generate signals from the camera’s field of view to determine whether the person in front of the camera is real or a spoof. Currently, we use techniques such as flashing the screen, asking the user to smile, and having them move their head in random directions to collect the necessary data, which is then processed by AI models on the backend for classification.
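For illustration only, the Kotlin sketch below shows roughly how a host app might drive such a bundled-UI liveness flow. Every name in it (FaceLivenessDetector, startLiveness, LivenessResult) is hypothetical rather than the reviewed API surface, and it assumes the app’s own backend obtains a short-lived session-authorization token from the Face service and hands it to the frontend.

```kotlin
// Hypothetical sketch: these types stand in for whatever the bundled SDK exposes.
data class LivenessResult(val isLive: Boolean, val sessionId: String)

// Hypothetical facade over the bundled UI component: the SDK would drive the camera,
// the screen flashes, and the smile / head-movement prompts itself, and only report
// the final classification back to the host app.
class FaceLivenessDetector(private val endpoint: String) {
    fun startLiveness(
        sessionAuthorizationToken: String,
        onResult: (LivenessResult) -> Unit,
        onError: (Throwable) -> Unit,
    ) {
        // A real implementation would launch the bundled camera UI here; this stub
        // only logs and returns a canned result so the sketch stays self-contained.
        println("Starting liveness session against $endpoint (stub)")
        onResult(LivenessResult(isLive = true, sessionId = "example-session"))
    }
}

fun main() {
    // The session-authorization token is assumed to come from the app's backend,
    // so the Face API key never ships inside the client.
    val detector = FaceLivenessDetector(endpoint = "https://<your-resource>.cognitiveservices.azure.com")
    detector.startLiveness(
        sessionAuthorizationToken = "token-from-backend",
        onResult = { r -> println("Liveness result: isLive=${r.isLive}, session=${r.sessionId}") },
        onError = { e -> println("Liveness check failed: ${e.message}") },
    )
}
```

The intent of the bundled approach is that everything between the start call and the callback is owned by the SDK, which is what keeps the UI behavior consistent with the backend models.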

Previous SDK versions did not bundle the UI, so developers had to build it themselves by following our sample code. However, we’ve seen a lot of variation in how the UI was implemented, which has often led to issues with classification accuracy. Each time, we’ve had to request access to the developer’s app for debugging, which isn’t a scalable solution. Moreover, as we’ve added more complex UI flows that are closely tied to our algorithms, it’s become increasingly difficult for developers to implement them correctly.

To address this, we’ve decided to bundle the UI directly into our SDK. This reduces the complexity of the API and makes it easier and more efficient for developers to integrate. This approach also aligns with what our competitors are offering, as they provide SDKs with built-in UI as well.

Detailed meeting information and the documents provided can be accessed here.
