ALAnimatedSpeech API

Namespace : AL

#include <alproxies/alanimatedspeechproxy.h>

Methods

void ALAnimatedSpeechProxy::say(const std::string& text)

Say the annotated text given in parameter, playing the animations inserted in the text while speaking. The current Animated Speech configuration is used.

Parameters:
  • text

    An annotated text (for example: “Hello. ^start(animations/Stand/Gestures/Hey_1) My name is John Doe. Nice to meet you!”).

    For further details, see: Annotated text.
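
For illustration, here is a minimal Python sketch calling this overload, reusing the annotated text above; the IP and port are placeholders to adapt to your robot:

#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Say an annotated text (minimal sketch)'''

from naoqi import ALProxy

animatedSpeechProxy = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# The ^start(...) annotation launches the given animation while the robot speaks.
animatedSpeechProxy.say("Hello. ^start(animations/Stand/Gestures/Hey_1) "
                        "My name is John Doe. Nice to meet you!")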

void ALAnimatedSpeechProxy::say(const std::string& text, const AL::ALValue& configuration)

Say the annotated text given in parameter, playing the animations inserted in the text while speaking. The given configuration is used; any unset parameters keep their default values.

Here are the configuration parameters:

Key                  Value type   Default value   Possible values                       For further details, see...
“bodyLanguageMode”   string       “contextual”    “disabled”, “random”, “contextual”    Body language modes

alanimatedspeech_say_with_configuration.py

#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Say a text with a local configuration'''

import argparse
from naoqi import ALProxy

def main(robotIP, PORT=9559):

    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # set the local configuration
    configuration = {"bodyLanguageMode":"contextual"}

    # say the text with the local configuration
    animatedSpeechProxy.say("Hello, I am Nao", configuration)



if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
void ALAnimatedSpeechProxy::setBodyLanguageMode(unsigned int bodyLanguageMode)

Set the current body language mode.

Parameters:
  • bodyLanguageMode

    The chosen body language mode.

    3 modes exist:

    0 (BODY_LANGUAGE_MODE_DISABLED)

    1 (BODY_LANGUAGE_MODE_RANDOM)

    2 (BODY_LANGUAGE_MODE_CONTEXTUAL)

    For further details, see: Body language modes.
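
A minimal sketch, assuming the robot is reachable at 127.0.0.1:9559 as in the examples on this page; the mode is passed as a plain integer:

from naoqi import ALProxy

animatedSpeechProxy = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# 2 corresponds to BODY_LANGUAGE_MODE_CONTEXTUAL
animatedSpeechProxy.setBodyLanguageMode(2)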

void ALAnimatedSpeechProxy::setBodyLanguageModeFromStr(const std::string& stringBodyLanguageMode)

Set the current body language mode from a string.

Parameters:
  • stringBodyLanguageMode

    The chosen body language mode.

    3 modes exist:

    “disabled”

    “random”

    “contextual”

    For further details, see: Body language modes.
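
A minimal sketch, with the same placeholder IP and port as above:

from naoqi import ALProxy

animatedSpeechProxy = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# Equivalent to setBodyLanguageMode(2), using the string form.
animatedSpeechProxy.setBodyLanguageModeFromStr("contextual")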

unsigned int ALAnimatedSpeechProxy::getBodyLanguageMode()

Get the current body language mode.

Returns: The current body language mode.

3 modes exist:

0 (BODY_LANGUAGE_MODE_DISABLED)

1 (BODY_LANGUAGE_MODE_RANDOM)

2 (BODY_LANGUAGE_MODE_CONTEXTUAL)

For further details, see: Body language modes.

std::string ALAnimatedSpeechProxy::getBodyLanguageModeToStr()

Get a string corresponding to the current body language mode.

Returns: A string corresponding to the current body language mode.

3 modes exist:

“disabled”

“random”

“contextual”

For further details, see: Body language modes.
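
A minimal sketch reading the mode back in both forms (this also illustrates getBodyLanguageMode() above):

from naoqi import ALProxy

animatedSpeechProxy = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

mode = animatedSpeechProxy.getBodyLanguageMode()          # 0, 1 or 2
modeStr = animatedSpeechProxy.getBodyLanguageModeToStr()  # "disabled", "random" or "contextual"
print(modeStr)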

void ALAnimatedSpeechProxy::addTagsToWords(const AL::ALValue& tagsToWords)

Link some words to some specific animation tags.

Parameters:
  • tagsToWords – Map of tags to words.

alanimatedspeech_add_links_between_tags_and_words.py

#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Add some links between tags and words'''

import argparse
from naoqi import ALProxy

def main(robotIP, PORT=9559):

    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # associate word "hey" with animation tag "hello"
    # associate word "yo" with animation tag "hello"
    # associate word "everybody" with animation tag "everything"
    ttw = { "hello" : ["hey", "yo"],
            "everything" : ["everybody"] }

    animatedSpeechProxy.addTagsToWords(ttw)



if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
void ALAnimatedSpeechProxy::declareAnimationsPackage(const std::string& animationsPackage)

Allows using animations contained in the specified package as tagged animations.

Parameters:
  • animationsPackage – The name of the package containing animations (and only animations).

Note

The animations package has to have the following tree pattern:

Stand/ => root folder for the standing animations

Sit/ => root folder for the sitting animations

SitOnPod/ => root folder for the sitting on pod animations
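
A minimal sketch, assuming a package named "myanimlib" (the name used in the example below) that follows this tree pattern:

from naoqi import ALProxy

animatedSpeechProxy = ALProxy("ALAnimatedSpeech", "127.0.0.1", 9559)

# Make the animations shipped in "myanimlib" usable as tagged animations.
animatedSpeechProxy.declareAnimationsPackage("myanimlib")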

void ALAnimatedSpeechProxy::declareTagForAnimations(const AL::ALValue& tagsToAnimations)

Dynamically associate tags and animations.

Parameters:
  • tagsToAnimations – Map of tag to animations.

alanimatedspeech_declare_tags_for_animations.py

#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''Declare tags to animations'''

import argparse
from naoqi import ALProxy

def main(robotIP, PORT=9559):

    animatedSpeechProxy = ALProxy("ALAnimatedSpeech", robotIP, PORT)

    # associate animation tag "myhellotag" with the animation "animations/Stand/Gestures/Hey_1"
    # associate animation tag "myhellotag" with the animation "myanimlib/Sit/HeyAnim"
    # associate animation tag "cool" with the animation "myanimlib/Sit/CoolAnim"
    tfa = { "myhellotag" : ["animations/Stand/Gestures/Hey_1", "myanimlib/Sit/HeyAnim"],
            "cool" : ["myanimlib/Sit/CoolAnim"] }

    animatedSpeechProxy.declareTagForAnimations(tfa)



if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot ip address")
    parser.add_argument("--port", type=int, default=9559,
                        help="Robot port number")

    args = parser.parse_args()
    main(args.ip, args.port)
void ALAnimatedSpeechProxy::setBodyLanguageEnabled(const bool& enable)

Deprecated since version 1.22: use ALAnimatedSpeechProxy::setBodyLanguageMode() instead.

Enable or disable the automatic body language random mode on the speech. If it is enabled, wherever you have not annotated your text with animations, the robot fills the gaps with automatically computed gestures. If it is disabled, the robot moves only where you have annotated the text with animations.

Parameters:
  • enable

    The boolean value: true to enable, false to disable.

    For further details, see: Body language modes.

bool ALAnimatedSpeechProxy::isBodyLanguageEnabled()

Deprecated since version 1.22: use ALAnimatedSpeechProxy::getBodyLanguageMode() instead.

Indicate whether body language is enabled.

Returns: The boolean value: true means it is enabled, false means it is disabled.

For further details, see: Body language modes.

void ALAnimatedSpeechProxy::setBodyTalkEnabled(const bool& enable)

Deprecated since version 1.18: use ALAnimatedSpeechProxy::setBodyLanguageMode() instead.

Enable or disable the automatic body language random mode on the speech.

Parameters:
  • enable

    The boolean value: true to enable, false to disable.

    For further details, see: Body language modes.

bool ALAnimatedSpeechProxy::isBodyTalkEnabled()

Deprecated since version 1.18: use ALAnimatedSpeechProxy::getBodyLanguageMode() instead.

Indicate whether body language is enabled.

Returns: The boolean value: true means it is enabled, false means it is disabled.

For further details, see: Body language modes.

Events

Event: "ALAnimatedSpeech/EndOfAnimatedSpeech"
callback(std::string eventName, int taskId, std::string subscriberIdentifier)

Raised when an animated speech is done.

Parameters:
  • eventName (std::string) – “ALAnimatedSpeech/EndOfAnimatedSpeech”
  • taskId – The ID of the animated speech which is done.
  • subscriberIdentifier (std::string) –
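
One common way to receive this event from Python is the ALModule/ALBroker pattern shown below. This is an illustrative sketch, not part of the ALAnimatedSpeech API itself: the class name, the module name "speechWatcher" and the IP/port are placeholders.

#! /usr/bin/env python
# -*- encoding: UTF-8 -*-

'''React to the end of an animated speech (illustrative sketch)'''

import time
from naoqi import ALBroker, ALModule, ALProxy

class SpeechWatcher(ALModule):
    '''Module whose method is bound to the event as a callback.'''

    def onEndOfAnimatedSpeech(self, eventName, taskId, subscriberIdentifier):
        print("animated speech %s is done" % str(taskId))

# A local broker is required so that ALMemory can call back into this process.
broker = ALBroker("pythonBroker", "0.0.0.0", 0, "127.0.0.1", 9559)

# The variable name must match the module name given to the constructor.
speechWatcher = SpeechWatcher("speechWatcher")

memory = ALProxy("ALMemory")
memory.subscribeToEvent("ALAnimatedSpeech/EndOfAnimatedSpeech",
                        "speechWatcher", "onEndOfAnimatedSpeech")

time.sleep(30)  # keep the process alive long enough to receive callbacks
broker.shutdown()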