C++ Kinect transformation from skeleton data to depth data


I am experimenting with the Kinect API and I am trying (but failing) to achieve the following:

First, I get the skeleton data from the Kinect and compute the distance of the user's right hand from the sensor:

mRightHandPosition = skeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];    
distance = sqrt(pow(mRightHandPosition.x, 2) + pow(mRightHandPosition.y, 2) + pow(mRightHandPosition.z, 2));
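
For context, skeletonFrame above comes from the skeleton stream. A minimal sketch of how it might be obtained (assuming m_pNuiSensor is an initialized INuiSensor with skeleton tracking enabled; the loop and the smoothing call are my assumptions, not from the original):

NUI_SKELETON_FRAME skeletonFrame = {0};
if (SUCCEEDED(m_pNuiSensor->NuiSkeletonGetNextFrame(0, &skeletonFrame)))
{
    //optionally smooth the joint positions before reading them
    m_pNuiSensor->NuiTransformSmooth(&skeletonFrame, NULL);
    for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
    {
        if (skeletonFrame.SkeletonData[i].eTrackingState == NUI_SKELETON_TRACKED)
        {
            mRightHandPosition = skeletonFrame.SkeletonData[i].SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
        }
    }
}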
Then I transform the skeleton data of the right hand to depth data, to get the position of the hand in the (depth/color) image.
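
The question does not show this step explicitly, but it is presumably the same call that appears in the solution below (a sketch; curRightX and curRightY are FLOATs that receive the depth-image coordinates, and cDepthResolution is the 320x240 resolution constant):

FLOAT curRightX = 0.0f, curRightY = 0.0f;
//map the skeleton-space hand position to depth-image coordinates
NuiTransformSkeletonToDepthImage(mRightHandPosition, &curRightX, &curRightY, cDepthResolution);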

After I get the pixel position of the hand, I want to transform that pixel back to skeleton data and compute again the distance of the object (the hand) at that pixel from the Kinect. I would assume that this gives me roughly the same distance as before (with some small error, of course), but it does not. Here is what I do:

//the position of the depth pixel in the mLockedRect.pBits array 
//i have set the depth sensor resolution to 320x240
int pixelPosition = 2 * ((int)curRightX + (int)curRightY * 320);
USHORT p;
//convert the two consecutive bytes to USHORT
p = (((unsigned short)mLockedRect.pBits[pixelPosition]) << 8) | mLockedRect.pBits[pixelPosition + 1];
//get the pixel in skeleton space
pixelInSkeletonSpace = NuiTransformDepthImageToSkeleton(LONG(curRightX), LONG(curRightY), p, cDepthResolution);
//calculate again the distance (which turns out completely wrong)
distance = sqrt(pow(pixelInSkeletonSpace.x, 2) + pow(pixelInSkeletonSpace.y, 2) + pow(pixelInSkeletonSpace.z, 2));

After a lot of searching, I found the problem. Here is the solution, for anyone else trying to do something similar.

First, the best way (that I found) to store the depth data is the following.

In the processDepth() function:

//feed the raw depth data of the frame to the background removal stream
bghr = m_pBackgroundRemovalStream->ProcessDepth(m_depthWidth * m_depthHeight * cBytesPerPixel, LockedRect.pBits, depthTimeStamp);
//interpret the locked buffer as extended depth pixels (player index plus depth in millimeters)
const NUI_DEPTH_IMAGE_PIXEL* pDepth = reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL*>(LockedRect.pBits);
//keep our own copy of the frame so it can be used later, e.g. in ComposeImage()
memcpy(mLockedBits, pDepth, m_depthWidth * m_depthHeight * sizeof(NUI_DEPTH_IMAGE_PIXEL));
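
This assumes mLockedBits points to a buffer large enough to hold one NUI_DEPTH_IMAGE_PIXEL per depth pixel. A minimal sketch of that setup (the member and its allocation are my assumption, sized to match the memcpy above):

//member declaration (e.g. in the class header)
NUI_DEPTH_IMAGE_PIXEL* mLockedBits;

//allocation, once the depth resolution (320x240 here) is known
mLockedBits = new NUI_DEPTH_IMAGE_PIXEL[m_depthWidth * m_depthHeight];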
In the ComposeImage() function (or whichever function you want to use the depth data in):

//transform skeleton data point to depth data
NuiTransformSkeletonToDepthImage(mRightHandPosition, &curRightX, &curRightY, cDepthResolution);

//calculate position of pixel in array
int pixelPosition = (int)curRightX + ((int)curRightY * m_depthWidth);

//get the depth value of the pixel
const USHORT depth = mLockedBits[pixelPosition].depth;

//create a new point in skeleton space using the data we got from the previous transformation
pixelInSkeletonSpace = NuiTransformDepthImageToSkeleton(LONG(curRightX), LONG(curRightY), depth << 3, cDepthResolution);

//calculate estimated distance of right hand from the kinect sensor using our recreated data
FLOAT estimated_distance = sqrt(pow(pixelInSkeletonSpace.x, 2) + pow(pixelInSkeletonSpace.y, 2) + pow(pixelInSkeletonSpace.z, 2));

//calculate the distance of the right hand from the kinect sensor using the skeleton data that we got straight from the sensor
FLOAT actual_distance = sqrt(pow(mRightHandPosition.x, 2) + pow(mRightHandPosition.y, 2) + pow(mRightHandPosition.z, 2));
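
For what it's worth, the original attempt seems to have failed for two reasons: the two bytes of pBits were combined in the wrong (big-endian) order, and NuiTransformDepthImageToSkeleton expects the packed 16-bit depth value, i.e. the millimeter depth shifted left by three bits, which is why the working code passes depth << 3. Skeleton-space coordinates are in meters, so both distances above are in meters, and with the fix they should agree up to a small error. A quick illustrative check (not part of the original code):

//compare the reconstructed distance with the one taken straight from the skeleton data
FLOAT error = fabs(estimated_distance - actual_distance);
printf("estimated: %.3f m  actual: %.3f m  error: %.3f m\n", estimated_distance, actual_distance, error);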