What's new in NAOqi 2.4.3?

NAOqi API & SDKs 2.3.0

From NAOqi Framework to qi Framework

The documentation of the qi Framework, initially published separately, is now merged into the main Aldebaran documentation.

You can now easily discover the new tools that allow you to write more efficient code.

In a few words, you will:

  • create services instead of modules,
  • watch signals instead of subscribing to events.

NAOqi is evolving step by step, creating new services while maintaining the modules you are used to.

For further details, see: qi Framework and qi Framework - ChangeLog.
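
For example, here is a minimal sketch in Python of the two points above: instead of writing a module, you get a service from a session, and instead of subscribing to an event with a module callback, you connect to its qi signal. The application name and robot URL are example values to adapt to your setup:

    import qi

    # Connect to the robot (replace the URL with your robot's address).
    app = qi.Application(["SignalDemo", "--qi-url=tcp://127.0.0.1:9559"])
    app.start()

    # Get a service from the session instead of creating a proxy to a module.
    memory = app.session.service("ALMemory")

    def on_touched(value):
        print("Front tactile sensor: " + str(value))

    # Watch the ALMemory event through its qi signal.
    subscriber = memory.subscriber("FrontTactilTouched")
    subscriber.signal.connect(on_touched)

    app.run()  # keep the application alive to receive the signal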

Core - New services

New service: ALExpressionWatcher

ALExpressionWatcher allows you to be notified of, or to query, the validity of a condition expression.

For further details, see: ALExpressionWatcher.
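
As an illustration only, a sketch of the intended usage pattern could look like the code below. The add() call, its second argument, the expressionChanged signal and the memory key are assumptions of this sketch, not the confirmed API; refer to the ALExpressionWatcher reference for the exact names:

    import qi

    app = qi.Application(["ExpressionDemo", "--qi-url=tcp://127.0.0.1:9559"])
    app.start()

    watcher = app.session.service("ALExpressionWatcher")

    # Hypothetical call: register a condition expression built on an
    # application-specific (hypothetical) ALMemory key.
    expression = watcher.add("('MyApp/Ready' == 1)", 0)

    def on_expression(value):
        print("Condition is now: " + str(value))

    # Assumed signal name; check the ALExpressionWatcher reference.
    expression.expressionChanged.connect(on_expression)

    app.run()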

New service: ALKnowledge

ALKnowledge implements a triplestore, a way to store information as Subject-Predicate-Object statements. It is based on a library called Soprano.

For further details, see: ALKnowledge.
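
To illustrate the representation itself, independently of the exact ALKnowledge API, a triplestore holds facts as Subject-Predicate-Object statements and answers queries by matching patterns on those three positions. The sketch below shows the idea with plain Python data:

    # A triplestore holds facts as (subject, predicate, object) statements.
    # For example, "NAO is a robot" and "NAO speaks English" become:
    triples = [
        ("NAO", "is_a", "robot"),
        ("NAO", "speaks", "English"),
    ]

    # A query is a pattern match on any of the three positions,
    # e.g. "everything NAO speaks":
    spoken = [o for (s, p, o) in triples if s == "NAO" and p == "speaks"]
    print(spoken)  # ['English']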

New service: ALMood

ALMood reads the instantaneous emotion of people as well as the ambiance.

For further details, see: ALMood.

New service: ALUserInfo

ALUserInfo provides a simple API to store and get information about a given user. It is built on top of ALKnowledge.

For further details, see: ALUserInfo.
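
As an illustration only, storing and retrieving a value for a user could look like the sketch below. The set() and get() names, their argument order, the user ID and the key are assumptions of this sketch, not the confirmed API; see the ALUserInfo reference for the exact methods:

    import qi

    app = qi.Application(["UserInfoDemo", "--qi-url=tcp://127.0.0.1:9559"])
    app.start()

    user_info = app.session.service("ALUserInfo")

    uid = 1  # a UserSession user ID (hypothetical value)

    # Hypothetical method names: store a value for this user, then read it back.
    user_info.set(uid, "favorite_color", "blue")
    print(user_info.get(uid, "favorite_color"))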

Core - Improvements

ALUserSession: API finalized

ALUserSession is no longer a “work in progress”.

New methods to get user creation and detection dates:

New methods for user introspection:

New methods for translating a UserSession ID into a PeoplePerception ID and conversely:

API change for user bindings introspection:

The data sources API is deprecated; the following methods should not be used anymore:

Note: use ALUserInfo to store user data.

ALUserSession: Improved user recognition

When ALBasicAwareness focuses on a user, ALUserSession tries to recognize them using only good-quality pictures. This can make the overall process a bit longer, but it reduces the risk of false positives.

ALTabletService: deprecated method

Interaction engines

New: Dialog lexicon

The content of the Dialog lexicon is now published.

For further details, see: Dialog Lexicon.

ALDialog: deprecated methods

  • ALDialogProxy::startApp
  • ALDialogProxy::startUpdate

Trigger conditions: improved

  • New operators and functions for trigger conditions: @, #, bang(), stable(), pref(), int(), float(), string()
  • A non-existing memory key used in a trigger condition now evaluates to false, instead of causing the entire condition to be ignored. Uninitialized memory keys (with ALValue type “Invalid”) also evaluate to false.

For further details, see: Launch trigger conditions.
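
For illustration only, a launch trigger condition combining a comparison with one of the new functions might look like the line below; both memory keys are hypothetical, and the exact semantics of each operator and function are given in the Launch trigger conditions page:

    ('MyApp/Enabled' == 1) && bang('MyApp/Start')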

ALAutonomousLife: new methods

Motion

New service: ALAnimationPlayer

ALAnimationPlayer allows you to run animations. This service is a wrapper around ALBehaviorManager; its goal is to provide an easy way to start animations according to animation tags or to the current posture of the robot.

ALAnimationPlayer is used by ALAnimatedSpeech and brings new features to that module (dynamic selection of animations depending on the robot's position and model).

For further details, see: ALAnimationPlayer.
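
For example, here is a minimal sketch in Python: run one animation by its path, then let the service pick an animation matching a tag and the current posture. The animation path and tag below are example values, and their availability depends on the robot model and the installed packages:

    import qi

    app = qi.Application(["AnimationDemo", "--qi-url=tcp://127.0.0.1:9559"])
    app.start()

    animation_player = app.session.service("ALAnimationPlayer")

    # Run a specific animation by its path (example value).
    animation_player.run("animations/Stand/Gestures/Hey_1")

    # Or let the service choose an animation tagged "hello" that fits
    # the current posture of the robot.
    animation_player.runTag("hello")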

ALRecharge: new error code

The following method now returns an ALValue containing an int error code:

The following methods now return an int error code:

For further details, see: Error code.

Audio

ALTextToSpeech: New API to change the pronunciation of a word

This new API makes it possible to tweak the pronunciation of a word by providing a phonetic transcription.

New methods:

ALSpeechRecognition: VOCON 4.7 update

Vocon is the embedded part of the Nuance Vocon hybrid distribution. It segments and processes speech utterances on the robot without server access.

New features in the 4.7 update:

  • Increased performance

    During our tests, speech recognition latency was reduced by 15%. Nuance claims the reduction can be as much as 20%.

  • Updated Mandarin Chinese acoustic models

    The new acoustic models improve the handling of tonal information for standard Mandarin Chinese and reduce the error rate.

ALTextToSpeech: Increase Speech Volume

Japanese Voice only

When the speech volume is already at its maximum and the hardware gain cannot be increased any further, the speech volume can still be raised with an audio dynamic compressor. This compressor is applied to the voice only (not to the other played sounds) and increases the perceived volume of the voice.

For further details, see: ALTextToSpeechProxy::setParameter

Configuration Required: new version of Japanese language package.

Vision

ALVisionRecognition: New API for database management and for working with files

API for database management:

New methods to learn and detect objects from image files:

New methods to control the number of objects that can be detected at the same time, in the same image:

People Perception

ALFaceCharacteristics: New memory key with the facial features coordinates

This memory key contains, for a given user, the coordinates of all detected facial features.

Sensors

New service: ALTactileGesture

ALTactileGesture is intended to manage tactile gestures on the head sensors.

With ALTactileGesture, you can:

  • Detect tactile gestures performed on the head sensors,
  • Respond to tactile gestures via qi.Signals and ALMemory events,
  • Create new tactile gestures on-the-fly.

For further details, see: ALTactileGesture.
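
For example, here is a minimal sketch in Python reacting to a gesture through an ALMemory event; the event name used below is an assumption of this sketch, so check the ALTactileGesture reference for the exact qi signal and event names:

    import qi

    app = qi.Application(["TactileDemo", "--qi-url=tcp://127.0.0.1:9559"])
    app.start()

    memory = app.session.service("ALMemory")

    def on_gesture(gesture_name):
        print("Tactile gesture detected: " + str(gesture_name))

    # Assumed event name; see the ALTactileGesture documentation.
    subscriber = memory.subscriber("ALTactileGesture/Gesture")
    subscriber.signal.connect(on_gesture)

    app.run()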