Deprecated: This Dialogflow client library and Dialogflow API V1 have been deprecated and will be shut down on October 23, 2019. Please migrate to Dialogflow API V2 and the v2 client library.
The api.ai .NET Library makes it easy to integrate the API.AI natural language processing API into your .NET application. API.AI lets you use voice commands and integrate with dialog scenarios defined for a particular agent in API.AI.
The library provides a simple programming interface for making text and voice requests to the API.AI service.
The library can be installed with NuGet:
PM> Install-Package ApiAiSDK
Alternatively, it can be downloaded as source from the Releases page.
It is assumed that you already have an API.AI account and at least one agent configured. If not, please see the documentation on the API.AI website.
First, add the following using directives to your module:
using ApiAiSDK;
using ApiAiSDK.Model;
Then add an ApiAi field to your class:
private ApiAi apiAi;
Now initialize the ApiAi object with the appropriate access token and language:
var config = new AIConfiguration("YOUR_CLIENT_ACCESS_TOKEN", SupportedLanguage.English);
apiAi = new ApiAi(config);
Done! Now you can easily make requests to the API.AI service:
- use the TextRequest method for simple text requests:
  var response = apiAi.TextRequest("hello");
- use the VoiceRequest method for voice binary data in PCM (16000 Hz, mono, signed 16-bit) format:
  var response = apiAi.VoiceRequest(voiceStream);
Also see the unit tests for more examples.
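Putting the pieces together, a minimal console sketch for a text request might look like the following. This is an illustration under assumptions: the response fields read at the end (Result.Action, Result.Fulfillment.Speech) are based on the API.AI V1 response model, so check ApiAiSDK.Model.AIResponse for the exact shape.

```csharp
using System;
using ApiAiSDK;
using ApiAiSDK.Model;

class Program
{
    static void Main()
    {
        // Configure the SDK with your agent's client access token and language.
        var config = new AIConfiguration("YOUR_CLIENT_ACCESS_TOKEN", SupportedLanguage.English);
        var apiAi = new ApiAi(config);

        // Send a simple text query to the agent.
        var response = apiAi.TextRequest("hello");

        // Assumed V1 response fields: the matched intent's action
        // and the agent's text reply.
        Console.WriteLine(response.Result.Action);
        Console.WriteLine(response.Result.Fulfillment.Speech);
    }
}
```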
The Windows Phone version has some additional features, such as system speech recognition, for easy API.AI service integration. After installing the library, you should add permissions for Internet and Sound recording to your app. Currently, speech recognition is performed using the Windows Phone system speech recognition, so make sure the languages you are using are installed on the device (this can be checked on the Settings -> Speech screen of the device).
To use the special features, you need to use the AIService class instead of the ApiAi class.
First, you need to initialize an AIConfiguration object with your keys and desired language.
var config = new AIConfiguration("client access token", SupportedLanguage.English);
Second, create an AIService object using the configuration object.
var aiService = AIService.CreateService(config);
Now add handlers for the OnResult and OnError events:
aiService.OnResult += aiService_OnResult;
aiService.OnError += aiService_OnError;
Finally, call the initialization method:
await aiService.InitializeAsync();
The entire code snippet:
try
{
var config = new AIConfiguration("client access token", SupportedLanguage.English);
aiService = AIService.CreateService(config);
aiService.OnResult += aiService_OnResult;
aiService.OnError += aiService_OnError;
await aiService.InitializeAsync();
}
catch (Exception e)
{
// Some exception processing
}
Now you can use the service to listen and request results from the server; all you need is to call the StartRecognitionAsync method (don't forget the await operator, otherwise you will not be able to catch some processing exceptions):
try
{
await aiService.StartRecognitionAsync();
}
catch (Exception exception)
{
// Some exception processing
}
Results will be passed to the OnResult handler; most errors will be passed to the OnError handler. Don't forget to use the dispatcher when working with the UI, because the handlers can be called from a background thread.
void aiService_OnError(AIServiceException error)
{
Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
// sample error processing
ResultTextBlock.Text = error.Message;
});
}
void aiService_OnResult(ApiAiSDK.Model.AIResponse response)
{
Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
// sample result processing
ResultTextBlock.Text = response.Result.ResolvedQuery;
});
}
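For context, the steps above can be wired into a single page like this. The sketch uses only the calls shown in this README; OnNavigatedTo and ListenButton_Click are hypothetical placements for your app's own initialization point and UI trigger.

```csharp
private AIService aiService;

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    try
    {
        // Create and initialize the service once, when the page loads.
        var config = new AIConfiguration("client access token", SupportedLanguage.English);
        aiService = AIService.CreateService(config);
        aiService.OnResult += aiService_OnResult;
        aiService.OnError += aiService_OnError;
        await aiService.InitializeAsync();
    }
    catch (Exception ex)
    {
        // Initialization error handling
    }
}

private async void ListenButton_Click(object sender, RoutedEventArgs e)
{
    try
    {
        // Start listening; results arrive via the OnResult handler.
        await aiService.StartRecognitionAsync();
    }
    catch (Exception ex)
    {
        // Recognition error handling
    }
}
```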
The UWP version of the library is similar to the Windows Phone version, except for some differences in the API.
After installing the library, you should add the Internet (Client) and Microphone capabilities to your app.
Currently, speech recognition is performed using Windows.Media.SpeechRecognition, so make sure the languages you are using are installed on the device.
The API for this platform uses async/await, so you don't need to set up any callbacks.
To use the special features, you need to use the AIService class instead of the ApiAi class.
First, you need to initialize an AIConfiguration object with your keys and desired language.
var config = new AIConfiguration("client access token", SupportedLanguage.English);
Second, create an AIService object using the configuration object.
var aiService = AIService.CreateService(config);
Finally, call the initialization method:
await aiService.InitializeAsync();
The entire code snippet:
try
{
var config = new AIConfiguration("client access token", SupportedLanguage.English);
aiService = AIService.CreateService(config);
await aiService.InitializeAsync();
}
catch (Exception e)
{
// Some exception processing
}
Now you can use the service to listen and request results from the server; all you need is to call the StartRecognitionAsync method (don't forget the await operator, otherwise you will not be able to catch some processing exceptions):
try
{
var response = await aiService.StartRecognitionAsync();
}
catch (Exception exception)
{
// Some exception processing
}
Results will be in the response variable.
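Because the UWP call returns the response directly, no dispatcher or event handler is needed; you can consume the result right where you awaited it. A minimal sketch, assuming a ResultTextBlock control on the page and the Result.ResolvedQuery field shown in the Windows Phone handler above:

```csharp
try
{
    // Await the recognition result directly on the UI thread.
    var response = await aiService.StartRecognitionAsync();
    if (response != null)
    {
        // ResolvedQuery holds the recognized query text (V1 response model).
        ResultTextBlock.Text = response.Result.ResolvedQuery;
    }
}
catch (Exception exception)
{
    ResultTextBlock.Text = exception.Message;
}
```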
- JSON parsing is implemented using Json.NET.