NAOqi Audio - ALTextToSpeech - Overview
The ALTextToSpeech module allows the robot to speak. It sends commands to a text-to-speech engine and also allows voice customization. The result of the synthesis is sent to the robot’s loudspeakers.
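As a quick illustration, here is a minimal sketch of driving ALTextToSpeech from the NAOqi Python SDK. The robot address `nao.local` and the helper name `say_hello` are illustrative assumptions; 9559 is the default NAOqi port. The import guard lets the snippet load even on a machine without the SDK.

```python
try:
    from naoqi import ALProxy  # NAOqi Python SDK, shipped with Choregraphe
except ImportError:
    ALProxy = None  # running off-robot, without the SDK installed

def say_hello(ip="nao.local", port=9559):
    """Send a phrase to ALTextToSpeech; returns the phrase that was sent.

    ip/port are placeholders for your robot's address (assumption).
    """
    phrase = "Hello, world!"
    if ALProxy is not None:
        tts = ALProxy("ALTextToSpeech", ip, port)
        tts.setLanguage("English")
        tts.say(phrase)  # blocks until the sentence has been spoken
    return phrase

say_hello()
```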
ALTextToSpeech is based on speech synthesizers, also called speech engines. A specific engine is used depending on the selected language:
Language | Engine used
---|---
Japanese | microAITalk engine, provided by AI, Inc.
Other languages | Depending on the language package, the engine is provided by ACAPELA or Nuance. The engine used is indicated in the description of the package.
The output audio stream can be modified through voice effects. Additional parameters are available for the microAITalk engine. Further information can be found here: ALTextToSpeechProxy::setParameter().
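As a sketch of how such effects can be configured, the snippet below calls setParameter() for each desired setting. The parameter names `pitchShift` and `doubleVoice` match commonly documented effect parameters, but availability depends on the engine, so check the setParameter() reference for your language package; the helper name `apply_voice_effects` and the robot address are assumptions.

```python
try:
    from naoqi import ALProxy  # NAOqi Python SDK
except ImportError:
    ALProxy = None  # no robot/SDK available on this machine

def apply_voice_effects(ip="nao.local", port=9559):
    """Apply a set of voice effects via ALTextToSpeech.setParameter().

    Returns the settings dict so callers can inspect what was requested.
    """
    settings = {
        "pitchShift": 1.1,   # raise the voice pitch by 10%
        "doubleVoice": 1.0,  # enable the double-voice effect
    }
    if ALProxy is not None:
        tts = ALProxy("ALTextToSpeech", ip, port)
        for name, value in settings.items():
            tts.setParameter(name, value)
    return settings

apply_voice_effects()
```

The effects only take effect on a real robot, since the engines are not available on a virtual one.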
To add expressiveness when the robot speaks, it is highly recommended to use “tags” in your text. Tags allow you to change, in the middle of a sentence, the pitch, speed, or volume of a word, add pauses between words, or change the emphasis of a word.
For further details, see: Using tags for voice tuning.
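As an illustrative sketch, the hypothetical helper below prepends some of the documented voice-tuning tags (\rspd for speed, \vct for pitch, \pau for a pause in milliseconds) to a sentence before it is passed to say(). The helper name and default values are assumptions; see the tag documentation for the full list and valid ranges.

```python
def with_tags(text, speed=None, pitch=None, pause_ms=None):
    """Prefix text with NAOqi voice-tuning tags.

    speed and pitch are percentages of the default voice;
    pause_ms inserts a pause, in milliseconds, before the text.
    """
    tags = ""
    if speed is not None:
        tags += "\\rspd=%d\\" % speed      # speaking rate, % of default
    if pitch is not None:
        tags += "\\vct=%d\\" % pitch       # voice pitch, % of default
    if pause_ms is not None:
        tags += "\\pau=%d\\" % pause_ms    # pause length in milliseconds
    return tags + text
```

A typical call would then be `tts.say(with_tags("Hello", speed=80))` to have the word spoken at 80% of the default rate.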
The easiest way to get started with ALTextToSpeech is to use the Say Choregraphe box.
ACAPELA, microAITalk and Nuance engines are only available on the real robot.
When using a virtual robot, the text to be said can be visualized in the Choregraphe Robot View and Dialog panel.