ALFaceDetection is now based on a face detection/recognition solution provided by OMRON.
As the two technologies, OMRON and OKI, do not provide the same set of data, and for backward compatibility, the structure of the returned data has not been modified; the pieces of data formerly provided by OKI have been replaced by null values.
For further details, see: FaceDetected Event.
The ALDialog module allows you to endow your robot with conversational skills by using a list of “rules” written and categorized in an appropriate way.
For further details, see: ALDialog.
The ALAnimatedSpeech module allows you to make the robot talk in an expressive way.
For further details, see: ALAnimatedSpeech.
ALFaceTracker and ALRedBallTracker are now deprecated. They are replaced by a new and more generic module: ALTracker.
The ALTracker module allows the robot to track different targets (red ball, face, landmark, etc.) using different means (head only, whole body, move, etc.).
The main goal of this module is to establish a bridge between target detection and motion, so that the robot keeps the target in view, at the center of the camera image.
For further details, see: ALTracker.
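As a minimal, hedged sketch of the tracking workflow described above, assuming the NAOqi Python SDK and an ALTracker proxy (e.g. `ALProxy("ALTracker", robot_ip, 9559)`) exposing `registerTarget()`, `setMode()`, `track()` and `stopTracker()`; the `"RedBall"` target name and diameter parameter follow the ALTracker reference, but check them against your NAOqi version:

```python
# Hedged sketch of ALTracker usage. The "tracker" argument can be any object
# exposing the assumed ALTracker methods registerTarget(), setMode(), track()
# and stopTracker(), e.g. an ALProxy("ALTracker", robot_ip, 9559).

def track_red_ball(tracker, diameter=0.06, mode="Head"):
    """Make the robot follow a red ball of the given diameter (in meters)."""
    tracker.registerTarget("RedBall", diameter)  # target name + physical size
    tracker.setMode(mode)                        # e.g. "Head", "WholeBody" or "Move"
    tracker.track("RedBall")                     # start tracking the target

def stop_tracking(tracker):
    """Stop the tracker so the robot releases the target."""
    tracker.stopTracker()
```

On a real robot, the proxy would come from the NAOqi SDK; the helper keeps the register/mode/track sequence in one place so switching targets or means only changes the arguments.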
ALVisualSpaceHistory keeps track of the head movements of the robot to build a timestamped map of the head positions. This can be useful when exploring the environment to make the robot look in every direction.
For further details, see: ALVisualSpaceHistory.
This module requires a robot with a 3D sensor.
ALSegmentation3D extracts the objects present in the field of view of the robot by doing a segmentation of the depth image (returned by the 3D sensor) in blobs of similar depth.
It also allows you to track, with the head of the robot, a blob at a specified distance or simply track the nearest blob with respect to the camera.
For further details, see: ALSegmentation3D.
ALBasicAwareness is a simple way to make the robot establish and keep eye contact with people.
For further details, see: ALBasicAwareness.
ALPeoplePerception is an extractor which keeps track of the people around the robot and provides basic information about them. It gathers visual information from RGB cameras and a 3D sensor if available.
For further details, see: ALPeoplePerception.
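As a sketch of how the per-person information might be consumed, assuming ALMemory keys of the form `PeoplePerception/VisiblePeopleList` and `PeoplePerception/Person/<ID>/Distance` (key names taken from the ALPeoplePerception reference; treat them as assumptions and check them against your NAOqi version):

```python
# Hedged sketch: reading ALPeoplePerception results from ALMemory.
# "memory" can be any object exposing getData(key), e.g. an
# ALProxy("ALMemory", robot_ip, 9559). The key names are assumptions.

def get_people_distances(memory):
    """Return {person_id: distance_in_meters} for currently visible people."""
    ids = memory.getData("PeoplePerception/VisiblePeopleList")
    return {pid: memory.getData("PeoplePerception/Person/%d/Distance" % pid)
            for pid in ids}
```

The same pattern extends to the other per-person keys the extractor maintains; only the key suffix changes.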
ALEngagementZones allows you to detect if someone is approaching the robot, or moving away, using the concept of engagement zones.
For further details, see: ALEngagementZones.
ALFaceCharacteristics updates every person with some additional information, such as an estimation of age and gender. It also tries to detect whether the person is smiling.
For further details, see: ALFaceCharacteristics.
ALGazeAnalysis allows you to analyze the direction of the gaze of a detected person, in order to know if he/she is looking at the robot.
For further details, see: ALGazeAnalysis.
This module requires a robot with a 3D sensor.
ALSittingPeopleDetection updates every person with the information of whether he or she is sitting (on a chair for example) or standing.
For further details, see: ALSittingPeopleDetection.
This module requires a robot with a 3D sensor.
ALWavingDetection allows you to detect if a person is moving in order to catch the robot’s attention (for example, waving at the robot).
For further details, see: ALWavingDetection.
Autonomous Life facilitates the autonomous launching of Activities, and keeps the robot visually alive at all times.
For further details, see: ALAutonomousLife.
ALAutonomousMoves enables subtle movements that the robot does autonomously.
For further details, see: ALAutonomousMoves.
ALTabletService allows tablet operations. It can be used to:
For further details, see: ALTabletService.
ALUserSession manages the state of active users, and the bindings to their data.
For further details, see: ALUserSession.
ALWorldRepresentation is a module dedicated to the long-term storage of data about generic objects. It allows you to store data persistently, and also to run generic queries on the stored data with intelligent criteria.
For further details, see: ALWorldRepresentation.
Note that the Robot View displays the stored objects.
ALSystem provides primitives that can be used to configure the system and perform operations such as shutting down or rebooting.
For further details, see: ALSystem.
PackageManager lets you install packages on the robot.
For further details, see: PackageManager.
ALPreferenceManager allows you to manage the robot’s preferences. Preferences are used to store, among other things, all the settings for the applications running on the robot.
For further details, see: ALPreferenceManager.
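A hedged sketch of reading an application setting, assuming ALPreferenceManager exposes a `getValue(domain, setting)` call as in the NAOqi reference (the signature, the domain/setting names and the None-when-missing behavior are all assumptions here):

```python
# Hedged sketch: reading an application setting through ALPreferenceManager.
# "prefs" can be any object exposing the assumed getValue(domain, setting)
# method, e.g. an ALProxy("ALPreferenceManager", robot_ip, 9559).

def get_app_setting(prefs, app, setting, default=None):
    """Fetch one preference value, falling back to a default when unset."""
    value = prefs.getValue(app, setting)
    return default if value is None else value
```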
Use ALPreferenceManager instead.
Methods
Event
Methods
For further details, see: ALBehaviorManager API.
Diagnosis effect is a reflex designed to protect the robot and the user in case of a malfunctioning actuator or sensor.
For further details, see: Diagnosis effect.
The aim of External-collision avoidance is to avoid damaging the robot or its environment and, above all, to avoid hurting people.
For further details, see: External-collision avoidance.
For further details, see: Idle.
The Masses as well as the Center of Mass positions taken into account in the robot models have been updated in order to remove the slight asymmetry between the left and right limbs.
For further details, see: Masses.
A new method, getPosture, allows knowing the name of the current posture.
For further details, see: ALRobotPostureProxy::getPosture().
A new predefined posture has been defined: SittingOnChair.
For further details, see: Predefined postures.
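A minimal sketch combining the two additions, assuming an ALRobotPosture proxy and that getPosture() returns the posture name as a string (the `"SittingOnChair"` string is the predefined posture name introduced above):

```python
# Hedged sketch: using the new getPosture() call together with the new
# SittingOnChair predefined posture. "posture_proxy" can be any object
# exposing getPosture(), e.g. an ALProxy("ALRobotPosture", robot_ip, 9559).

def is_sitting_on_chair(posture_proxy):
    """Return True when the robot's current posture is SittingOnChair."""
    return posture_proxy.getPosture() == "SittingOnChair"
```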
Stiffness control API has a new method and a new event:
Method
Events
ALVoiceEmotionAnalysis identifies the emotion expressed by the speaker’s voice, independently of what is being said.
For further details, see: ALVoiceEmotionAnalysis.
ALAudioPlayer can now run on a virtual robot, but with limited functionality:
For further details, see: ALAudioPlayer.
Methods
Grammar files (.bnf) can be compiled and loaded in the speech recognition engine with the following functions:
Event
When the Word Spotting option is activated, WordRecognized() may now contain the following chain: <...>.
ALSpeechRecognitionProxy::setParameter() and ALSpeechRecognitionProxy::getParameter() support a new parameter, NbHypotheses: the number of hypotheses returned by the engine.
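The two additions above can be sketched together in Python: setting NbHypotheses through setParameter(), and parsing the WordRecognized value. The flat `[phrase1, confidence1, phrase2, confidence2, ...]` layout and the `<...>` word-spotting markers follow the ALSpeechRecognition reference; treat the exact strings as assumptions:

```python
# Hedged sketch: configuring the number of recognition hypotheses and parsing
# WordRecognized. "asr" can be any object exposing setParameter(name, value),
# e.g. an ALProxy("ALSpeechRecognition", robot_ip, 9559).

def configure_asr(asr, n_hypotheses=3):
    """Ask the engine to return up to n_hypotheses recognition results."""
    asr.setParameter("NbHypotheses", n_hypotheses)

def parse_word_recognized(value):
    """Turn [phrase1, conf1, phrase2, conf2, ...] into (word, confidence)
    pairs, stripping the "<...>" markers added in word-spotting mode."""
    pairs = []
    for phrase, conf in zip(value[0::2], value[1::2]):
        word = phrase.replace("<...>", "").strip()
        pairs.append((word, conf))
    return pairs
```

With word spotting active, a raw value like `["<...> hello <...>", 0.42]` then parses to the spotted word plus its confidence.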
Due to technical improvements, there is no point in generating a file and playing it afterwards, so ALTextToSpeechProxy::sayToFileAndPlay() has been deprecated.
The module name ALAudioSourceLocalization is deprecated; please use ALSoundLocalization instead (same API).
For further details, see: ALSoundLocalization.
The ‘Sensibility’ parameter has been deprecated and replaced by a ‘Sensitivity’ parameter with a more usable scale.
Added in the documentation:
Due to technical improvements, the inputBufferSize parameter no longer has any effect.
ALBarcodeReader scans an image from the camera and looks for a barcode. If a barcode is found in the image, the module tries to decipher it.
For further details, see: ALBarcodeReader.
This module requires a robot with a 3D sensor.
ALCloseObjectDetection allows you to detect objects that are too close to the robot to be directly detected by the 3D sensor.
For further details, see: ALCloseObjectDetection.
ALColorBlobDetection is a module that provides a fast 2D vision-based color blob detector.
For further details, see: ALColorBlobDetection.
ALLocalization is a module dedicated to the localization of the robot in an indoor environment.
For further details, see: ALLocalization.
ALMovementDetection allows you to detect movement in the field of view of the robot.
The detection uses the best available camera:
Two new methods have been added to ALMovementDetection:
For further details, see: ALMovementDetection.
Two new methods have been added to ALPhotoCapture:
The complete refactoring of the ALVideoDevice module has continued.
For further details, see: ALVideoDevice and ALVideoDevice API.
Method
Events
The ALTouch module is responsible for raising events when the robot is touched.
For further details, see: ALTouch.
Former ALSentinel events SimpleClickOccured, DoubleClickOccured and TripleClickOccured are now ALChestButton events:
For further details, see: ALChestButton.
In addition to Joints and CPU, ALBodyTemperature now monitors Actuator temperatures.
For further details, see: ALBodyTemperature.
New event
Deprecated event
Deprecated Events | Instead, use ...
---|---
HotJointDetected() | HotDeviceDetected()
New methods have been added to ALBattery:
ALLedsProxy::rotateEyes(): the eyes animation is smoother, with better timing.
The ALDiagnosis module allows the robot to detect hardware trouble (mainly electrical connection issues).
For further details, see: ALDiagnosis.
QiMessaging provides JavaScript bindings to use QiMessaging services (modules) in a web browser. It allows you to build HTML5 applications for your robot.
For further details, see: QiMessaging JavaScript.
Python 2.6 is no longer supported.
Moreover, Choregraphe now uses Python 2.7 to interpret its scripts.
Support for Visual Studio 2008 is gone. This will let us focus on supporting new compilers (Visual Studio 2012) and new architectures (Windows 64-bit) in the future.
Ubuntu 10.04 LTS (Lucid) is no longer supported.
This release only supports Ubuntu 12.04 LTS (Precise) and later.
For further details, see: Supported Operating Systems.
qiBuild now has its own release cycle. This means:
This should not change anything, because the latest qiBuild release will always be backward-compatible with the latest Aldebaran C++ SDK.
During the re-factoring of the framework libraries, some minor incompatible changes were introduced.
A BIND_METHOD macro call must now be followed by a semicolon:
// Old:
BIND_METHOD("myMethod", "MyModule", "Documentation for myMethod")
// New:
BIND_METHOD("myMethod", "MyModule", "Documentation for myMethod");
ALCOMMON no longer depends on ALTHREAD, so if you are using it, you must patch your CMake code:
# Old:
qi_use_lib(my_module ALCOMMON)
# New:
qi_use_lib(my_module ALTHREAD ALCOMMON)
Support for multiple robot models was added.
NAOqi is now automatically launched at startup.