NAOqi Audio - ALAnimatedSpeech API
Namespace : AL
#include <alproxies/alanimatedspeechproxy.h>
Like any module, this module inherits methods from the ALModule API. It also has the following specific methods:
ALAnimatedSpeechProxy::say()
Says the annotated text given as a parameter and animates it with the animations inserted in the text. The current Animated Speech configuration is used.
Parameters: | text – The annotated text to be said; it may embed animation instructions. |
---|
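A minimal sketch of a call, assuming a robot reachable at 127.0.0.1 (the animation path is the one used elsewhere on this page; ^start is the Animated Speech annotation that launches an animation at that point of the text):
from naoqi import ALProxy

animatedSpeech = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# The text may embed animation instructions such as ^start(<animation>);
# the current Animated Speech configuration is applied.
animatedSpeech.say("Hello! ^start(animations/Stand/Gestures/Hey_1) Nice to meet you!")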
ALAnimatedSpeechProxy::say()
Says the annotated text given as a parameter and animates it with the animations inserted in the text. The given configuration is used; any unset parameters keep their default values.
Here are the configuration parameters:
Key | Value type | Default value | Possible values | For further details, see ... |
---|---|---|---|---|
“bodyLanguageMode” | string | “contextual” | “disabled”, “random”, “contextual” | Body language modes |
alanimatedspeech_say_with_configuration.py
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Say a text with a local configuration'''

import argparse
from naoqi import ALProxy


def main(robotIP, PORT=9559):
    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # set the local configuration
    configuration = {"bodyLanguageMode": "contextual"}

    # say the text with the local configuration
    animatedSpeechProxy.say("Hello, I am Nao", configuration)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
ALAnimatedSpeechProxy::setBodyLanguageMode()
Sets the current body language mode from an integer identifier.
Parameters: | bodyLanguageMode – The chosen body language mode. 3 modes exist: 0 (BODY_LANGUAGE_MODE_DISABLED), 1 (BODY_LANGUAGE_MODE_RANDOM) and 2 (BODY_LANGUAGE_MODE_CONTEXTUAL). For further details, see: Body language modes. |
---|
ALAnimatedSpeechProxy::setBodyLanguageModeFromStr()
Sets the current body language mode from a string.
Parameters: | stringBodyLanguageMode – The chosen body language mode: “disabled”, “random” or “contextual”. For further details, see: Body language modes. |
---|
ALAnimatedSpeechProxy::getBodyLanguageMode()
Gets the current body language mode.
Returns: | The current body language mode. 3 modes exist: 0 (BODY_LANGUAGE_MODE_DISABLED), 1 (BODY_LANGUAGE_MODE_RANDOM) and 2 (BODY_LANGUAGE_MODE_CONTEXTUAL). For further details, see: Body language modes. |
---|
ALAnimatedSpeechProxy::getBodyLanguageModeToStr()
Gets a string corresponding to the current body language mode.
Returns: | The current body language mode. 3 modes exist: “disabled”, “random” and “contextual”. For further details, see: Body language modes. |
---|
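A short sketch combining the four accessors above (the robot address is a placeholder; the printed values follow the mode table of Body language modes):
from naoqi import ALProxy

animatedSpeech = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# Select the mode by name, then read it back in both representations.
animatedSpeech.setBodyLanguageModeFromStr("random")
print animatedSpeech.getBodyLanguageMode()       # 1, i.e. BODY_LANGUAGE_MODE_RANDOM
print animatedSpeech.getBodyLanguageModeToStr()  # "random"

# The same setting can be made with the integer identifier.
animatedSpeech.setBodyLanguageMode(2)            # BODY_LANGUAGE_MODE_CONTEXTUAL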
ALAnimatedSpeechProxy::addTagsToWords()
Links some words to some specific animation tags.
Parameters: | tagsToWords – A map of animation tags to lists of words. |
---|
alanimatedspeech_add_links_between_tags_and_words.py
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Add some links between tags and words'''

import argparse
from naoqi import ALProxy


def main(robotIP, PORT=9559):
    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # associate word "hey" with animation tag "hello"
    # associate word "yo" with animation tag "hello"
    # associate word "everybody" with animation tag "everything"
    ttw = {"hello": ["hey", "yo"],
           "everything": ["everybody"]}

    animatedSpeechProxy.addTagsToWords(ttw)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
ALAnimatedSpeechProxy::declareAnimationsPackage()
Allows using the animations contained in the specified package as tagged animations.
Parameters: | animationsPackage – The name of the package containing the animations. |
---|
Note
The animations package has to have the following tree pattern:
Stand/ => root folder for the standing animations
Sit/ => root folder for the sitting animations
SitOnPod/ => root folder for the sitting on pod animations
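A minimal sketch (the package name "myanimlib" is a placeholder for a package already installed on the robot and following the tree pattern above):
from naoqi import ALProxy

animatedSpeech = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# Make the package animations available as tagged animations; an animation
# stored under myanimlib/Stand/... can then be triggered while standing.
animatedSpeech.declareAnimationsPackage("myanimlib")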
ALAnimatedSpeechProxy::declareTagForAnimations()
Dynamically associates tags and animations.
Parameters: | tagsToAnimations – A map of animation tags to lists of animation full names. |
---|
alanimatedspeech_declare_tags_for_animations.py
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Declare tags for animations'''

import argparse
from naoqi import ALProxy


def main(robotIP, PORT=9559):
    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # associate animation tag "myhellotag" with the animation "animations/Stand/Gestures/Hey_1"
    # associate animation tag "myhellotag" with the animation "myanimlib/Sit/HeyAnim"
    # associate animation tag "cool" with the animation "myanimlib/Sit/CoolAnim"
    tfa = {"myhellotag": ["animations/Stand/Gestures/Hey_1", "myanimlib/Sit/HeyAnim"],
           "cool": ["myanimlib/Sit/CoolAnim"]}

    animatedSpeechProxy.declareTagForAnimations(tfa)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
ALAnimatedSpeechProxy::setBodyLanguageEnabled()
Deprecated since version 1.22: use ALAnimatedSpeechProxy::setBodyLanguageMode() instead.
Enables or disables the automatic body language random mode for speech. When enabled, the robot fills any part of the text not annotated with an animation with automatically computed gestures; when disabled, it moves only where the text is annotated with animations.
Parameters: | enable – The boolean value: true to enable, false to disable. |
---|
ALAnimatedSpeechProxy::isBodyLanguageEnabled()
Deprecated since version 1.22: use ALAnimatedSpeechProxy::getBodyLanguageMode() instead.
Indicates whether body language is enabled.
Returns: | The boolean value: true means it is enabled, false means it is disabled. For further details, see: Body language modes. |
---|
ALAnimatedSpeechProxy::setBodyTalkEnabled()
Deprecated since version 1.18: use ALAnimatedSpeechProxy::setBodyLanguageMode() instead.
Enables or disables the automatic body language random mode for speech.
Parameters: | enable – The boolean value: true to enable, false to disable. |
---|
ALAnimatedSpeechProxy::isBodyTalkEnabled()
Deprecated since version 1.18: use ALAnimatedSpeechProxy::getBodyLanguageMode() instead.
Indicates whether body language is enabled.
Returns: | The boolean value: true means it is enabled, false means it is disabled. For further details, see: Body language modes. |
---|
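Since the deprecated setters only toggled the random mode, a sketch of the equivalent calls with the current API (robot address assumed):
from naoqi import ALProxy

animatedSpeech = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# was: animatedSpeech.setBodyLanguageEnabled(True) / setBodyTalkEnabled(True)
animatedSpeech.setBodyLanguageModeFromStr("random")
# was: animatedSpeech.setBodyLanguageEnabled(False) / setBodyTalkEnabled(False)
animatedSpeech.setBodyLanguageModeFromStr("disabled")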
Event: “ALAnimatedSpeech/EndOfAnimatedSpeech”
Raised when an animated speech is done.
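A sketch of a subscriber reacting to this event through ALMemory, using the usual naoqi Python module boilerplate (the broker and module names, the callback name and the robot address are placeholders):
from naoqi import ALBroker, ALModule, ALProxy


class SpeechWatcher(ALModule):
    '''Module whose callback fires at the end of each animated speech.'''

    def __init__(self, name):
        ALModule.__init__(self, name)
        self.memory = ALProxy("ALMemory")
        self.memory.subscribeToEvent("ALAnimatedSpeech/EndOfAnimatedSpeech",
                                     name, "onEndOfAnimatedSpeech")

    def onEndOfAnimatedSpeech(self, eventName, value, subscriberIdentifier):
        print "Animated speech finished"


# A broker is needed so that ALMemory can call back into this process.
broker = ALBroker("watcherBroker", "0.0.0.0", 0, "127.0.0.1", 9559)
# naoqi dispatches callbacks by module name, so the instance must be kept
# in a global variable matching the name given to the module.
SpeechWatcher = SpeechWatcher("SpeechWatcher")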