The ALSpeechRecognition module gives the robot the ability to recognize predefined words or phrases in several languages.
For the complete list of language codes, see: Available languages.
Note
This module is only available on a real robot; it cannot be tested on a simulated robot.
ALSpeechRecognition relies on sophisticated speech recognition technologies provided by a third-party engine.
| Step | Description |
|---|---|
| A | Before starting, ALSpeechRecognition needs to be fed the list of phrases that should be recognized. |
| B | Once started, ALSpeechRecognition places in the key SpeechDetected a boolean that specifies whether a speaker is currently heard. |
| C | If a speaker is heard, the element of the list that best matches what the robot hears is placed in the key WordRecognized. |
| D | If a speaker is heard, the element of the list that best matches what the robot hears, together with the grammar that recognized it, is placed in the key WordRecognizedAndGrammar. |
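The four steps above can be sketched with the NAOqi Python SDK. This is a minimal sketch rather than a complete application: the robot address, the subscriber name "MyApplication", the example vocabulary, and the five-second listening window are all assumptions chosen for illustration.

```python
def best_match(word_recognized):
    """Pick the (phrase, confidence) pair with the highest confidence from
    the flat WordRecognized list [phrase_1, confidence_1, phrase_2, ...]."""
    pairs = list(zip(word_recognized[0::2], word_recognized[1::2]))
    return max(pairs, key=lambda pc: pc[1]) if pairs else None


def listen_once(robot_ip, port=9559):
    """Steps A-D: set a vocabulary, listen briefly, return the best hypothesis."""
    # Imported here so the helper above stays usable without the SDK installed.
    import time
    from naoqi import ALProxy  # requires the NAOqi Python SDK and a real robot

    asr = ALProxy("ALSpeechRecognition", robot_ip, port)
    memory = ALProxy("ALMemory", robot_ip, port)

    # Step A: feed the engine the phrases to recognize before starting it.
    asr.setLanguage("English")
    asr.setVocabulary(["yes", "no", "hello"], False)  # word spotting disabled

    # Step B: start the engine; SpeechDetected is updated while it runs.
    asr.subscribe("MyApplication")
    time.sleep(5)  # give the speaker a few seconds to talk

    # Steps C/D: the best-matching phrases are placed in ALMemory keys.
    result = memory.getData("WordRecognized")
    asr.unsubscribe("MyApplication")
    return best_match(result)
```

Note that listen_once can only run on a machine that can reach a real robot; best_match is plain Python and works anywhere.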
The WordRecognized key is organized as follows:
[phrase_1, confidence_1, phrase_2, confidence_2, ..., phrase_n, confidence_n]
where:
- phrase_i is one of the phrases of the vocabulary,
- confidence_i is an estimate of the probability that phrase_i is actually what has been said.
Note that the hypotheses contained in that key are ordered so that the most likely phrase comes first.
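Because the key is a flat list, client code usually regroups it into (phrase, confidence) pairs and discards low-confidence hypotheses. A small sketch of that pattern (the 0.4 threshold is an arbitrary example value, not a recommendation from the API):

```python
def hypotheses(word_recognized, threshold=0.4):
    """Regroup the flat [phrase_1, confidence_1, ...] list into
    (phrase, confidence) pairs, keeping only confident hypotheses.
    The engine's most-likely-first ordering is preserved."""
    pairs = zip(word_recognized[0::2], word_recognized[1::2])
    return [(p, c) for p, c in pairs if c >= threshold]
```

For example, hypotheses(["yes", 0.72, "no", 0.15]) keeps only ("yes", 0.72).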
The WordRecognizedAndGrammar key is organized as follows:
[phrase_1, confidence_1, grammar_1, phrase_2, confidence_2, grammar_2, ..., phrase_n, confidence_n, grammar_n]
where:
- phrase_i is one of the phrases of the vocabulary,
- confidence_i is an estimate of the probability that phrase_i is actually what has been said,
- grammar_i is the name of the grammar that recognized phrase_i.
Note that the hypotheses contained in that key are ordered so that the most likely phrase comes first.
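The same regrouping idea applies here, except the flat list is read in strides of three. A sketch:

```python
def hypotheses_with_grammar(data):
    """Regroup the flat WordRecognizedAndGrammar list
    [phrase_1, confidence_1, grammar_1, ...] into
    (phrase, confidence, grammar) triples, preserving the ranking."""
    return list(zip(data[0::3], data[1::3], data[2::3]))
```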
The parameter enableWordSpotting of ALSpeechRecognitionProxy::setVocabulary() modifies the content of the returned result:
- If word spotting is disabled (default), the engine expects to hear one of the specified phrases and nothing more; the recognized phrase is returned as-is.
- If word spotting is enabled, the specified phrases can be detected in the middle of a sentence; the recognized phrase is then returned surrounded by garbage markers: "<...> phrase_i <...>".
To discover the basic functions of speech recognition using Choregraphe, see the tutorial: Testing the speech recognition.