Kinect V2 can acquire RGB camera images at a resolution of 1920 x 1080. Since OpenCV uses the BGR or BGRA format as its basis, NtKinect adopts the BGRA format.
type of return value | function name | description |
---|---|---|
void | setRGB() | Get the RGB image and set it to the public member variable rgbImage. |
type | variable name | description |
---|---|---|
cv::Mat | rgbImage | Image of the RGB camera. The resolution is 1920 x 1080 and the format is BGRA. The coordinates of the image are positions in the ColorSpace coordinate system.
cv::Vec4b pixel = rgbImage.at<cv::Vec4b>(y, x);
pixel[0] // Blue
pixel[1] // Green
pixel[2] // Red
pixel[3] // Alpha |
Depth (distance) images can be acquired with a resolution of 512 x 424. The measurable distance range is from 500 mm to 8000 mm, but the range to recognize human beings is from 500 mm to 4500 mm.
In Kinect20.lib, IDepthFrameSource has the "get_DepthMinReliableDistance()" and "get_DepthMaxReliableDistance()" functions, which return 500 and 4500 respectively.
In NtKinect, the obtained Depth image is represented by UINT16 (16 bit unsigned integer) for each pixel.
type of return value | function name | descriptions |
---|---|---|
void | setDepth(bool raw = true) |
Set the Depth image to the member variable "depthImage". When this function is called with no argument or with "true" as the first argument, each pixel is set to the distance in mm. When called with "false", each pixel is set to the distance multiplied by 65535/4500; that is, distances from 0 mm to 4500 mm are mapped to the luminance 0 (black) to 65535 (white) of a grayscale image. |
type | variable name | descriptions |
---|---|---|
cv::Mat | depthImage | Depth image. The resolution is 512 x 424 and each pixel is represented by UINT16. The coordinates of the image are positions in the DepthSpace coordinate system.
UINT16 depth = depthImage.at<UINT16>(y, x); |
Since the position and resolution of each sensor is different, the data is obtained as a value expressed in the coordinate system of each sensor. When using data obtained from different sensors at the same time, it is necessary to convert the coordinates to match.
Kinect V2 has 3 coordinate systems, ColorSpace, DepthSpace, and CameraSpace. There are 3 data types ColorSpacePoint, DepthSpacePoint, and CameraSpacePoint representing coordinates in each coordinate system.
Quoted from Kinect.h of Kinect for Windows SDK 2.0 |
---|
typedef struct _ColorSpacePoint {
    float X;
    float Y;
} ColorSpacePoint;

typedef struct _DepthSpacePoint {
    float X;
    float Y;
} DepthSpacePoint;

typedef struct _CameraSpacePoint {
    float X;
    float Y;
    float Z;
} CameraSpacePoint; |
For the RGB image, Depth image, and skeleton information, the coordinate system is different. The coordinate system of the RGB image is ColorSpace, that of the Depth image is DepthSpace, and that of the skeleton information is CameraSpace.
Coordinate system | type of coordinates | Captured Data |
---|---|---|
ColorSpace | ColorSpacePoint | RGB image |
DepthSpace | DepthSpacePoint | depth image, bodyIndex image, infrared image |
CameraSpace | CameraSpacePoint | skeleton information |
CameraSpace coordinate system representing skeleton position |
---|
The CameraSpace is a 3-dimensional coordinate system with the following features: the origin is located at the depth sensor, the positive Y axis points up, the positive Z axis points in the direction the sensor faces, the X axis completes a right-handed coordinate system, and coordinates are expressed in meters.
(2016/11/12 figure changed, and description added). |
"Coordinate system conversion function" held by ICoordinateMapper class of Kinect V2 is as follows.
type of return value | function name | descriptions |
---|---|---|
HRESULT | MapCameraPointToColorSpace( CameraSpacePoint sp , ColorSpacePoint *cp ) |
Convert the coordinates sp in the CameraSpace to the coordinates cp in the ColorSpace. Return value is S_OK or error code. |
HRESULT | MapCameraPointToDepthSpace( CameraSpacePoint sp , DepthSpacePoint *dp ) |
Convert the coordinates sp in the CameraSpace to the coordinates dp in DepthSpace. Return value is S_OK or error code. |
HRESULT | MapDepthPointToColorSpace( DepthSpacePoint dp , UINT16 depth , ColorSpacePoint *cp ) |
Convert the coordinates dp in DepthSpace and distance depth to the coordinates cp in ColorSpace. Return value is S_OK or error code. |
HRESULT | MapDepthPointToCameraSpace( DepthSpacePoint dp , UINT16 depth , CameraSpacePoint *sp ) |
Convert the coordinates dp in DepthSpace and distance depth to the coordinates sp in CameraSpace. Return value is S_OK or error code. |
An instance of ICoordinateMapper class used for mapping coordinate systems in Kinect V2 is held in NtKinect's member variable "coordinateMapper".
type | variable name | descriptions |
---|---|---|
CComPtr<ICoordinateMapper> | coordinateMapper | An instance of ICoordinateMapper used for mapping coordinate systems. |
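As an illustration of the mapping functions above, the following sketch projects a skeleton joint (CameraSpace) onto the RGB image (ColorSpace) through kinect.coordinateMapper. It assumes NtKinect.h and the Kinect for Windows SDK 2.0 are available and a sensor is attached, so it is an illustration rather than a runnable test; the function name is my own.

```cpp
// Sketch: project a CameraSpace point onto the RGB image and mark it.
// Assumes NtKinect.h (which pulls in OpenCV and the Kinect SDK headers).
#include "NtKinect.h"

void drawJointOnColorImage(NtKinect& kinect, const CameraSpacePoint& sp) {
    ColorSpacePoint cp;
    HRESULT hr = kinect.coordinateMapper->MapCameraPointToColorSpace(sp, &cp);
    if (SUCCEEDED(hr)) {
        int x = static_cast<int>(cp.X);
        int y = static_cast<int>(cp.Y);
        // Guard: the mapped point may fall outside the 1920 x 1080 image.
        if (0 <= x && x < 1920 && 0 <= y && y < 1080) {
            cv::circle(kinect.rgbImage, cv::Point(x, y), 5,
                       cv::Scalar(0, 0, 255), 2);  // red circle (BGR order)
        }
    }
}
```

Note that the mapping can fail or return coordinates outside the image, so both the HRESULT and the bounds should be checked before drawing.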
Call the kinect.setDepth() function to set depth (distance) data to kinect.depthImage. Since no argument is specified, each pixel holds the raw value, that is, the distance to the object in millimeters.
main.cpp |
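The main.cpp listing is not reproduced here. The following is a hedged reconstruction of what such a program typically looks like, based only on the description above (kinect.setDepth() with no argument, inside a display loop); it is not the author's exact code and requires NtKinect.h, the Kinect for Windows SDK 2.0, OpenCV, and an attached sensor.

```cpp
// Hypothetical reconstruction of the sample's main.cpp.
#include "NtKinect.h"

int main() {
    NtKinect kinect;
    while (true) {
        kinect.setDepth();  // no argument: each pixel holds distance in mm
        // Example: the raw distance at the image center (512 x 424 image).
        UINT16 d = kinect.depthImage.at<UINT16>(212, 256);
        (void)d;  // use d as needed
        cv::imshow("depth", kinect.depthImage);
        if (cv::waitKey(1) == 'q') break;
    }
    cv::destroyAllWindows();
    return 0;
}
```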
[Caution] Run this program in Debug mode in Visual Studio 2017. In Release mode it may, for unknown reasons, crash while running. Programs built in Debug mode must link opencv_world330d.lib as the OpenCV library.
To tell the truth, the discrete Fourier transform (FFT) is not an appropriate method for computing the breathing period. The reason is as follows.
Looking at the execution example, $N_s = 165$ measurement samples are obtained in $T = 30$ seconds, so the sampling frequency is $\displaystyle f_s = \frac{N_s}{T} = \frac{165}{30} = 5.5$ Hz; that is, sampling is performed 5.5 times per second. (I ran it on a MacBook Pro, but it is pretty slow.) The frequency resolution of this measurement is $\displaystyle \Delta f = \frac{1}{T} = \frac{1}{30} = 0.0333\cdots$ Hz. When the discrete Fourier transform is performed on this data, it is decomposed into waves of frequencies $\Delta f$, $2 \Delta f$, $\cdots$, $\displaystyle \frac{N_s}{2} \Delta f$, that is, $\displaystyle \frac{1}{30}, \frac{2}{30}, \frac{3}{30}, \frac{4}{30}, \cdots$ Hz, whose periods (the reciprocals of the frequencies) are $30, 15, 10, 7.5, 6, 5, \cdots$ seconds. If a breath has a period of about 2 seconds, the nearest bins correspond to periods of about $2.14$, $2.0$, and $1.875$ seconds, so the resolution around the breathing period is coarse; sampling over a longer time (a smaller $\Delta f$) is necessary to obtain meaningful values.
Therefore, a method other than the FFT is considered more appropriate for calculating the breathing period. I will not discuss here which method is appropriate.
Since the above zip file may not include the latest "NtKinect.h", download the latest version from here and replace the old one with it.