In Kinect for Windows SDK 2.0, the types used for face recognition are defined as follows.
Quoted from Kinect.Face.h of Kinect for Windows SDK 2.0

    enum _FacePointType {
        FacePointType_None             = -1,
        FacePointType_EyeLeft          = 0,
        FacePointType_EyeRight         = 1,
        FacePointType_Nose             = 2,
        FacePointType_MouthCornerLeft  = 3,
        FacePointType_MouthCornerRight = 4,
        FacePointType_Count            = ( FacePointType_MouthCornerRight + 1 )
    };

    enum _FaceProperty {
        FaceProperty_Happy          = 0,
        FaceProperty_Engaged        = 1,
        FaceProperty_WearingGlasses = 2,
        FaceProperty_LeftEyeClosed  = 3,
        FaceProperty_RightEyeClosed = 4,
        FaceProperty_MouthOpen      = 5,
        FaceProperty_MouthMoved     = 6,
        FaceProperty_LookingAway    = 7,
        FaceProperty_Count          = ( FaceProperty_LookingAway + 1 )
    };
Quoted from Kinect.h of Kinect for Windows SDK 2.0

    enum _DetectionResult {
        DetectionResult_Unknown = 0,
        DetectionResult_No      = 1,
        DetectionResult_Maybe   = 2,
        DetectionResult_Yes     = 3
    };
Quoted from Kinect.h of Kinect for Windows SDK 2.0

    typedef struct _PointF {
        float X;
        float Y;
    } PointF;
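These enumerators serve as indexes into the per-person vectors described below. As an illustration only (the helper function and the name table are made up, not part of the SDK), one person's eight face properties could be printed like this:

    // Illustrative helper (hypothetical): prints the eight face properties of one person.
    // DetectionResult and the FaceProperty_* enumerators come from the Kinect SDK headers.
    #include <iostream>
    #include <vector>
    #include <Kinect.h>
    #include <Kinect.Face.h>

    static const char* propertyNames[FaceProperty_Count] = {
        "Happy", "Engaged", "WearingGlasses", "LeftEyeClosed",
        "RightEyeClosed", "MouthOpen", "MouthMoved", "LookingAway"
    };

    void printFaceProperties(const std::vector<DetectionResult>& states) {
        for (int p = 0; p < FaceProperty_Count; p++) {
            std::cout << propertyNames[p] << ": "
                      << (states[p] == DetectionResult_Yes   ? "Yes"   :
                          states[p] == DetectionResult_Maybe ? "Maybe" :
                          states[p] == DetectionResult_No    ? "No"    : "Unknown")
                      << std::endl;
        }
    }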
If you define the USE_FACE constant before including NtKinect.h, the face recognition functions and variables of NtKinect become available.
After calling the setSkeleton() function to recognize skeletons, you can call the setFace() function to recognize human faces (see the sketch after the tables below).
type of return value | function name | descriptions
---|---|---
void | setFace() | version 1.2 or earlier. After calling setSkeleton(), this function can be called to recognize human faces. Values are set in the member variables listed in the next table.
void | setFace(bool isColorSpace = true) | version 1.3 or later. After calling setSkeleton(), this function can be called to recognize human faces. When isColorSpace is true, the results are expressed in ColorSpace coordinates; when it is false, in DepthSpace coordinates. Values are set in the member variables listed in the next table.
type | variable name | descriptions
---|---|---
vector<vector<PointF>> | facePoint | Face part positions. The positions of one person's left eye, right eye, nose, left corner of the mouth, and right corner of the mouth are represented as vector<PointF>. To handle multiple people, the type is vector<vector<PointF>>. (version 1.2 or earlier) coordinates in ColorSpace. (version 1.3 or later) coordinates in ColorSpace or DepthSpace.
vector<cv::Rect> | faceRect | Vector of face bounding boxes. (version 1.2 or earlier) coordinates in ColorSpace. (version 1.3 or later) coordinates in ColorSpace or DepthSpace.
vector<cv::Vec3f> | faceDirection | Vector of face directions (pitch, yaw, roll).
vector<vector<DetectionResult>> | faceProperty | Face states. The states of one person's "happy, engaged, wearing glasses, left eye closed, right eye closed, mouth open, mouth moved, looking away" are represented as vector<DetectionResult>. To handle multiple people, the type is vector<vector<DetectionResult>>.
vector<UINT64> | faceTrackingId | version 1.4 or later. Vector of trackingIds. The trackingId corresponding to the face information faceRect[index] is faceTrackingId[index].
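For example, a minimal sketch of reading these members after setFace() might look as follows; it assumes that NtKinect also provides setRGB() as in the other examples of this series (not documented in this section), and it omits all error handling:

    // Sketch only: assumes a Kinect sensor is connected and that NtKinect
    // provides setRGB() in addition to the face members described above.
    #define USE_FACE
    #include "NtKinect.h"
    #include <iostream>

    void showFaces() {
        NtKinect kinect;
        kinect.setRGB();        // capture an RGB frame
        kinect.setSkeleton();   // skeleton recognition must come first
        kinect.setFace();       // version 1.3 or later; setFace(false) would give DepthSpace coordinates
        for (size_t i = 0; i < kinect.faceRect.size(); i++) {
            cv::Rect r = kinect.faceRect[i];
            PointF nose = kinect.facePoint[i][FacePointType_Nose];
            std::cout << "face " << i
                      << ": box(" << r.x << "," << r.y << "," << r.width << "," << r.height << ")"
                      << " nose(" << nose.X << "," << nose.Y << ")" << std::endl;
        }
    }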
Note that not every person whose skeleton is recognized will necessarily have their face recognized.
If you want to know the face state of a particular skeleton,
such as "Is this skeleton smiling?",
check whether skeletonTrackingId[i] == faceTrackingId[j] and faceProperty[j][FaceProperty_Happy] == DetectionResult_Yes
(version 1.4 or later).
When using NtKinect version 1.3 or earlier, judge it by
comparing the head joint position kinect.skeleton[i][JointType_Head]
with the face rectangle kinect.faceRect[j].
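A minimal sketch of that check (version 1.4 or later) could look like this; the function name is made up, and kinect.skeletonTrackingId is assumed to have been filled by setSkeleton():

    // Sketch: returns true if the face matched to skeleton i is judged "happy".
    #define USE_FACE
    #include "NtKinect.h"

    bool isSkeletonSmiling(NtKinect& kinect, int i) {
        for (size_t j = 0; j < kinect.faceTrackingId.size(); j++) {
            if (kinect.skeletonTrackingId[i] == kinect.faceTrackingId[j]) {
                return kinect.faceProperty[j][FaceProperty_Happy] == DetectionResult_Yes;
            }
        }
        return false;   // no face was recognized for this skeleton
    }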
Right-click on the project name in Solution Explorer and select Properties from the menu.
Select "Additional dependency file" in "Input" of "Linker" in "Configuration Properties" and select "Edit".
Add Kinect20.Face.lib.
xcopy "$(KINECTSDK20_DIR)Redist\Face\x64" "$(OutDir)" /e /y /i /r
Define the USE_FACE constant before including NtKinect.h.
Call the kinect.setSkeleton() function. Although each joint of kinect.skeleton is drawn on the RGB image in this example, you can omit that drawing; what matters is that the skeleton is recognized by calling kinect.setSkeleton() before face recognition.
Then call the kinect.setFace() function. The values of kinect.faceRect, kinect.facePoint, kinect.faceProperty, and kinect.faceDirection are then set, so you can use them as necessary.
main.cpp
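The full listing is distributed in the zip file mentioned below. As a rough sketch only, not the distributed main.cpp, the loop described above might look like this (kinect.setRGB() and kinect.rgbImage are assumed, as in the other examples of this series):

    // Minimal sketch of the flow described above; not the distributed main.cpp.
    #define USE_FACE
    #include "NtKinect.h"

    int main() {
        NtKinect kinect;
        while (true) {
            kinect.setRGB();        // capture the RGB image into kinect.rgbImage (assumed member)
            kinect.setSkeleton();   // skeleton recognition (required before setFace)
            kinect.setFace();       // face recognition
            for (size_t i = 0; i < kinect.faceRect.size(); i++) {
                // bounding box of the face: cyan rectangle
                cv::rectangle(kinect.rgbImage, kinect.faceRect[i], cv::Scalar(255, 255, 0), 2);
                // face parts (eyes, nose, mouth corners): small yellow rectangles
                for (PointF p : kinect.facePoint[i]) {
                    cv::rectangle(kinect.rgbImage,
                                  cv::Rect((int)p.X - 3, (int)p.Y - 3, 6, 6),
                                  cv::Scalar(0, 255, 255), 2);
                }
            }
            cv::imshow("rgb", kinect.rgbImage);
            if (cv::waitKey(1) == 'q') break;   // press 'q' to quit
        }
        cv::destroyAllWindows();
        return 0;
    }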
Recognized face information is displayed on the RGB image.
In this example, the bounding box of the face is drawn as a cyan rectangle, and yellow rectangles are drawn at the positions of the face parts (left eye, right eye, nose, left corner of the mouth, right corner of the mouth).
Since the above zip file may not include the latest "NtKinect.h", download the latest version from here and replace the old one with it.