Performance problem getting the player's image from Kinect in XNA


I am developing an XNA game that uses the Kinect. The player you see on screen is the live image of the person playing in front of the Kinect sensor. To remove the background and keep only the player's image, I do the following in kinect.AllFramesReady:

using (ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame())
{
    if (colorVideoFrame != null)
    {
        //Getting the image of the colorVideoFrame to a Texture2D named colorVideo
        //And setting its information on a Color array named colors with GetData
        colorVideo.GetData(colors);
    }
}

using (DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame())
{
    if (depthVideoFrame != null)
    {
        //Copying the image to a DepthImagePixel array
        //Using only the pixels with PlayerIndex > 0 to create a Color array
        //And then setting the colors of this array from the 'colors' array by using the MapDepthPointToColorPoint method provided by the Kinect SDK
        //Finally I use SetData function in order to set the colors to a Texture2D I created before
    }
}
But the performance is very low, which is not surprising: on every frame I have to call GetData on a Color array of length 640*480 = 307200 (because of the ColorImageFormat) and SetData on another Color array of length 320*240 = 76800 (because of the DepthImageFormat).

I would like to know whether there is another way to solve this, possibly an alternative to SetData and GetData, because as far as I know these functions move data between the GPU and the CPU, which is an expensive operation for large amounts of data. Thanks for your help.
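
For concreteness, the per-frame traffic described above looks roughly like this (just a sketch; colorVideo and colors are the objects named earlier, and playerTexture stands in for the Texture2D the final image is written to):

// Allocated once; sizes follow from the stream formats:
Color[] colors = new Color[640 * 480];          // 307200 elements (~1.2 MB)
Color[] playerColors = new Color[320 * 240];    // 76800 elements (~0.3 MB)

// Then, every frame:
colorVideo.GetData(colors);                     // GPU -> CPU read-back, which also forces a GPU/CPU sync
// ...build playerColors from the depth pixels with PlayerIndex > 0,
//    looking each one up in 'colors' via MapDepthPointToColorPoint...
playerTexture.SetData(playerColors);            // CPU -> GPU upload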

The Kinect for Windows Toolkit ships with a "Green Screen-WPF" sample that offers some insight into processing this information. Since you are using XNA there may be some differences, but the overall concept should carry over between the two.

That sample works by extracting multiple players. Here is the business end of its processing function:

private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // in the middle of shutting down, so nothing to do
    if (null == this.sensor)
    {
        return;
    }

    bool depthReceived = false;
    bool colorReceived = false;

    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (null != depthFrame)
        {
            // Copy the pixel data from the image to a temporary array
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);

            depthReceived = true;
        }
    }

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (null != colorFrame)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);

            colorReceived = true;
        }
    }

    // do our processing outside of the using block
    // so that we return resources to the kinect as soon as possible
    if (true == depthReceived)
    {
        this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            DepthFormat,
            this.depthPixels,
            ColorFormat,
            this.colorCoordinates);

        Array.Clear(this.greenScreenPixelData, 0, this.greenScreenPixelData.Length);

        // loop over each row and column of the depth
        for (int y = 0; y < this.depthHeight; ++y)
        {
            for (int x = 0; x < this.depthWidth; ++x)
            {
                // calculate index into depth array
                int depthIndex = x + (y * this.depthWidth);

                DepthImagePixel depthPixel = this.depthPixels[depthIndex];

                int player = depthPixel.PlayerIndex;

                // if we're tracking a player for the current pixel, do green screen
                if (player > 0)
                {
                    // retrieve the depth to color mapping for the current depth pixel
                    ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];

                    // scale color coordinates to depth resolution
                    int colorInDepthX = colorImagePoint.X / this.colorToDepthDivisor;
                    int colorInDepthY = colorImagePoint.Y / this.colorToDepthDivisor;

                    // make sure the depth pixel maps to a valid point in color space
                    // check y > 0 and y < depthHeight to make sure we don't write outside of the array
                    // check x > 0 instead of >= 0 since to fill gaps we set opaque current pixel plus the one to the left
                    // because of how the sensor works it is more correct to do it this way than to set to the right
                    if (colorInDepthX > 0 && colorInDepthX < this.depthWidth && colorInDepthY >= 0 && colorInDepthY < this.depthHeight)
                    {
                        // calculate index into the green screen pixel array
                        int greenScreenIndex = colorInDepthX + (colorInDepthY * this.depthWidth);

                        // set opaque
                        this.greenScreenPixelData[greenScreenIndex] = opaquePixelValue;

                        // compensate for depth/color not corresponding exactly by setting the pixel 
                        // to the left to opaque as well
                        this.greenScreenPixelData[greenScreenIndex - 1] = opaquePixelValue;
                    }
                }
            }
        }
    }

    // do our processing outside of the using block
    // so that we return resources to the kinect as soon as possible
    if (true == colorReceived)
    {
        // Write the pixel data into our bitmap
        this.colorBitmap.WritePixels(
            new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
            this.colorPixels,
            this.colorBitmap.PixelWidth * sizeof(int),
            0);

        if (this.playerOpacityMaskImage == null)
        {
            this.playerOpacityMaskImage = new WriteableBitmap(
                this.depthWidth,
                this.depthHeight,
                96,
                96,
                PixelFormats.Bgra32,
                null);

            MaskedColor.OpacityMask = new ImageBrush { ImageSource = this.playerOpacityMaskImage };
        }

        this.playerOpacityMaskImage.WritePixels(
            new Int32Rect(0, 0, this.depthWidth, this.depthHeight),
            this.greenScreenPixelData,
            this.depthWidth * ((this.playerOpacityMaskImage.Format.BitsPerPixel + 7) / 8),
            0);
    }
}
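
Note that nothing in this sample reads anything back from the GPU: the color frame goes straight into a byte[] with CopyPixelDataTo and the mask is built entirely on the CPU. Since XNA has no WriteableBitmap or OpacityMask, the two WritePixels calls at the end do not carry over directly; a rough equivalent (the texture names are assumptions, not part of the sample) would be:

// colorTexture: a 640x480 Texture2D with SurfaceFormat.Color
// maskTexture:  a 320x240 Texture2D with SurfaceFormat.Color
// Caveat: Kinect color data arrives as BGRA while XNA's Color layout is RGBA,
// so the red and blue channels need to be swapped (on the CPU or in a shader).
colorTexture.SetData(this.colorPixels);          // byte[] filled by CopyPixelDataTo
maskTexture.SetData(this.greenScreenPixelData);  // int[] of 0 / opaquePixelValue

// Draw colorTexture with an effect that multiplies its alpha by maskTexture,
// or pre-mask a single Color[] on the CPU and upload only that texture.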
If you are only interested in a single player, you can first work out from the skeleton frame which player index is being tracked:
using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
{
    if (skeletonFrame != null && skeletonFrame.SkeletonArrayLength > 0)
    {
        if (_skeletons == null || _skeletons.Length != skeletonFrame.SkeletonArrayLength)
        {
            _skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
        }

        skeletonFrame.CopySkeletonDataTo(_skeletons);

        // grab the tracked skeleton and set the playerIndex for use pulling
        // the depth data out for the silhouette.
        this.playerIndex = -1;
        for (int i = 0; i < _skeletons.Length; i++)
        {
            if (_skeletons[i].TrackingState != SkeletonTrackingState.NotTracked)
            {
                this.playerIndex = i+1;
            }
        }
    }
}
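
Then, when copying out the depth data, compare each pixel's player index against the one found above and keep only the matching pixels: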
depthFrame.CopyPixelDataTo(this.pixelData);

for (int i16 = 0, i32 = 0; i16 < pixelData.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
{
    int player = pixelData[i16] & DepthImageFrame.PlayerIndexBitmask;
    if (player == this.playerIndex)
    {
        // the player we are tracking
    }
    else if (player > 0)
    {
        // a player, but not the one we want.
    }
    else
    {
        // background or something else we don't care about
    }
}
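
To turn that classification back into something XNA can draw, one option (a sketch, assuming the 320x240 depth format; maskColors and playerMask are names introduced here, not from the sample) is to fill an alpha mask in the same loop and push it to the GPU with a single SetData per frame:

// Allocated once; 320x240 depth resolution assumed.
Color[] maskColors = new Color[320 * 240];
Texture2D playerMask = new Texture2D(GraphicsDevice, 320, 240);

for (int i16 = 0; i16 < pixelData.Length; i16++)
{
    int player = pixelData[i16] & DepthImageFrame.PlayerIndexBitmask;

    // Opaque where the tracked player is, fully transparent everywhere else.
    maskColors[i16] = (player == this.playerIndex) ? Color.White : Color.Transparent;
}

// The only CPU -> GPU transfer needed for the silhouette: one SetData per frame.
playerMask.SetData(maskColors);

The mask can then be combined with the color texture when drawing, so nothing ever has to come back from the GPU with GetData.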