NAOqi Vision - ALMovementDetection
ALMovementDetection allows you to detect movement in the field of view of the robot.
The detection uses the best available camera: a depth camera if the robot has one, otherwise an RGB camera.
Frames are collected at a regular interval and each new frame is compared with the previous one.
The comparison method depends on the type of camera: grey levels are compared for an RGB camera, and depth values for a depth camera.
The pixels for which the difference (of grey level or depth) is above a threshold are identified as “moving pixels”. Then all the “moving pixels” are clustered using both their physical proximity and their value difference.
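The per-pixel comparison step can be pictured with a short sketch. This is not the extractor's actual implementation, only an illustration of frame differencing, assuming the two frames are NumPy arrays of grey levels (or depths) and assuming a hypothetical threshold value:

import numpy as np

def moving_pixel_mask(prev_frame, new_frame, threshold=10):
    # Absolute per-pixel difference between two consecutive frames
    diff = np.abs(new_frame.astype(np.int32) - prev_frame.astype(np.int32))
    # Pixels whose difference exceeds the threshold are the "moving pixels"
    return diff > threshold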
The threshold for the detection can be changed with one of the following functions, depending on the camera used: setColorSensitivity() for the RGB camera, or setDepthSensitivity() for the depth camera.
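For example, using the Python qi framework (the robot address is a placeholder, and the sensitivity values shown are illustrative; both are assumed to lie between 0 and 1):

import qi

session = qi.Session()
session.connect("tcp://<robot-ip>:9559")
movement = session.service("ALMovementDetection")
movement.setColorSensitivity(0.6)  # threshold for the RGB camera
movement.setDepthSensitivity(0.3)  # threshold for the depth camera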
Each time some movement is detected, the ALMemory key MovementDetection/MovementInfo is updated and an ALMemory event, MovementDetection/MovementDetected, is raised.
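A minimal subscription sketch in Python with the qi framework (the subscriber name "MyApp" and the callback body are placeholders):

import qi

session = qi.Session()
session.connect("tcp://<robot-ip>:9559")

memory = session.service("ALMemory")
movement = session.service("ALMovementDetection")

def on_movement(value):
    # "value" mirrors the MovementDetection/MovementInfo structure described below
    print("Movement detected:", value)

# React to the event each time it is raised
subscriber = memory.subscriber("MovementDetection/MovementDetected")
subscriber.signal.connect(on_movement)

# Start the extractor so that events are produced
movement.subscribe("MyApp")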
The memory key contains the information about the different clusters of “moving” pixels. It is organized as follows:
MovementInfo =
[
TimeStamp,
[ClusterInfo_1, ClusterInfo_2, ... ClusterInfo_n],
CameraPose_InTorsoFrame,
CameraPose_InRobotFrame,
Camera_Id
]
TimeStamp: this field is the time stamp of the image that was used to perform the detection.
TimeStamp [
TimeStamp_Seconds,
TimeStamp_Microseconds
]
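The two integers can be recombined into a single floating-point time, for example:

def to_seconds(timestamp):
    # timestamp = [TimeStamp_Seconds, TimeStamp_Microseconds]
    seconds, microseconds = timestamp
    return seconds + microseconds * 1e-6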
ClusterInfo_i: each of these fields contains the description of a “moving” cluster. It has the following structure, depending on the type of camera:
RGB camera
ClusterInfo_i(RGB) =
[
PositionOfCog,
AngularRoi,
ProportionMovingPixels
]
Depth camera
ClusterInfo_i(Depth) =
[
PositionOfCog,
AngularRoi,
ProportionMovingPixels,
MeanDistance,
RealSizeRoi,
PositionOfAssociatedPoint
]
PositionOfCog, AngularRoi and ProportionMovingPixels are returned for all cameras; MeanDistance, RealSizeRoi and PositionOfAssociatedPoint are returned by the depth camera only.
CameraPose_InTorsoFrame: describes the Position6D of the camera at the time the image was taken, in FRAME_TORSO.
CameraPose_InRobotFrame: describes the Position6D of the camera at the time the image was taken, in FRAME_ROBOT.
Camera_Id: gives the Id of the camera used for the detection.
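Putting the layout together, here is a hedged parsing sketch, reusing the qi session and ALMemory service from the subscription example above (variable names are chosen for the example; the three depth-only fields are read only when the cluster actually carries them):

info = memory.getData("MovementDetection/MovementInfo")

# Unpack the top-level MovementInfo structure
timestamp, clusters, pose_in_torso, pose_in_robot, camera_id = info

for cluster in clusters:
    # Fields common to all cameras
    position_of_cog, angular_roi, proportion_moving = cluster[:3]
    if len(cluster) > 3:
        # Depth camera clusters carry three extra fields
        mean_distance, real_size_roi, associated_point = cluster[3:6]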
The movement detection algorithm works only if the camera itself is not moving. Therefore, when the robot is moving, the detection is automatically disabled: the event is not raised and the memory key is not updated.