NAOqi Vision - Overview | API
ALMovementDetection allows you to detect movement in the field of view of the Aldebaran robots.
The Aldebaran robots use the 3D camera if available; otherwise, they use the RGB camera.
Frames are collected at a regular interval and each new frame is compared with the previous one. The pixels for which the difference is above a threshold are identified as “moving pixels”. Then all the “moving pixels” are clustered using both their physical proximity and their value difference.
For Aldebaran robots without a 3D camera, the threshold can be changed using:
- ALMovementDetectionProxy::setColorSensitivity()

For Aldebaran robots with a 3D camera, the threshold can be changed using:
- ALMovementDetectionProxy::setDepthSensitivity()
Each time movement is detected, the ALMemory key MovementDetection/MovementInfo is updated and an ALMemory event, MovementDetection/MovementDetected, is raised.
The memory key contains the information about the different clusters of “moving” pixels. It is organized as follows:
MovementInfo =
[
TimeStamp,
[ClusterInfo_1, ClusterInfo_2, ... ClusterInfo_n],
CameraPose_InTorsoFrame,
CameraPose_InRobotFrame,
Camera_Id
]
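The layout above can be sketched by unpacking a MovementInfo value in Python. The sample value below is hypothetical illustration data (not real robot output), shown with a single RGB cluster:

```python
# Hypothetical MovementInfo value, following the documented layout.
sample_movement_info = [
    [1389733, 563736],                        # TimeStamp [seconds, microseconds]
    [                                         # list of ClusterInfo entries
        [[0.10, -0.05], [0.05, -0.10, 0.15, 0.00], 0.42],  # one RGB cluster
    ],
    [0.0, 0.0, 0.12, 0.0, 0.0, 0.0],          # CameraPose_InTorsoFrame (Position6D)
    [0.0, 0.0, 0.45, 0.0, 0.0, 0.0],          # CameraPose_InRobotFrame (Position6D)
    0,                                        # Camera_Id
]

# Unpack the five top-level fields.
timestamp, clusters, pose_torso, pose_robot, camera_id = sample_movement_info
print("clusters detected:", len(clusters))
print("camera id:", camera_id)
```

In practice this value would be read from ALMemory with getData("MovementDetection/MovementInfo").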
TimeStamp: this field is the time stamp of the image that was used to perform the detection.
TimeStamp =
[
TimeStamp_Seconds,
TimeStamp_Microseconds
]
ClusterInfo_i: each of these fields contains the description of a “moving” cluster. It has the following structure, depending on the type of camera:
RGB camera
ClusterInfo_i(RGB) =
[
PositionOfCog,
AngularRoi,
ProportionMovingPixels
]
Depth camera
ClusterInfo_i(Depth) =
[
PositionOfCog,
AngularRoi,
ProportionMovingPixels,
MeanDistance,
RealSizeRoi,
PositionOfAssociatedPoint
]
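Since the two cluster layouts differ only by the three extra depth fields, the camera type can be inferred from the number of fields. This is a minimal sketch assuming the layouts documented above; the helper name and sample values are hypothetical:

```python
# Hypothetical helper: tell RGB clusters (3 fields) from depth clusters (6 fields).
def cluster_source(cluster_info):
    if len(cluster_info) == 3:
        return "RGB"
    if len(cluster_info) == 6:
        return "Depth"
    raise ValueError("unexpected ClusterInfo length: %d" % len(cluster_info))

# Illustrative cluster values following the documented field order.
rgb_cluster = [[0.10, -0.05], [0.05, -0.10, 0.15, 0.00], 0.42]
depth_cluster = rgb_cluster + [1.2, [0.30, 0.40], [0.10, 0.20, 1.2]]

print(cluster_source(rgb_cluster))    # RGB camera cluster
print(cluster_source(depth_cluster))  # depth camera cluster
```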
PositionOfCog, AngularRoi and ProportionMovingPixels are provided for all cameras. MeanDistance, RealSizeRoi and PositionOfAssociatedPoint are provided for the depth camera only.
CameraPose_InTorsoFrame: describes the Position6D of the camera at the time the image was taken, in FRAME_TORSO.
CameraPose_InRobotFrame: describes the Position6D of the camera at the time the image was taken, in FRAME_ROBOT.
Camera_Id: gives the Id of the camera used for the detection.
The algorithm used for movement detection only works if the camera is not moving. Therefore, while the robot is moving, detection is automatically disabled: the event is not raised and the memory key is not updated.