Java: Selecting points from a point cloud

Google's Project Tango provides a point cloud, i.e. a float buffer containing the xyz positions of a set of points in meters. I want to be able to select one of these points by touching the screen.

What is the best/easiest way to achieve this?

Update

So far I have included the code where, as suggested, I try to get the projection of the points onto the screen. However, after logging the points I found that the resulting values are too small (e.g. 0.5, 0.7, etc.). I am not using Unity but Android Studio, so I do not have the cam.WorldToScreenPoint(m_points[it]) method. I do have a projection matrix, but I guess it is not the right one (probably because we should be going from meters to pixels). What is the correct matrix to achieve this?

private void selectClosestCloundPoint(float x, float y) {
    //Get the current projection matrix
    Matrix4 projMatrix = mRenderer.getCurrentCamera().getProjectionMatrix();

    //Get all the points in the point cloud and store them as 3D points
    FloatBuffer pointsBuffer = mPointCloudManager.updateAndGetLatestPointCloudRenderBuffer().floatBuffer;
    Vector3[] points3D = new Vector3[pointsBuffer.capacity() / 3];

    int j = 0;
    for (int i = 0; i + 2 < pointsBuffer.capacity(); i = i + 3) {
        points3D[j] = new Vector3(
                pointsBuffer.get(i),
                pointsBuffer.get(i + 1),
                pointsBuffer.get(i + 2));
        j++;
    }

    //Get the projection of the points on the screen.
    Vector3[] points2D = new Vector3[points3D.length];
    for (int i = 0; i < points3D.length; i++) {
        Log.v("Points", "X: " + points3D[i].x + "\tY: " + points3D[i].y + "\tZ: " + points3D[i].z);
        points2D[i] = points3D[i].multiply(projMatrix);
        Log.v("Points", "pX: " + points2D[i].x + "\tpY: " + points2D[i].y + "\tpZ: " + points2D[i].z);
    }
}

I use Vector3 because that is the return type, but as far as I understand the third component does not matter.
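
As a side note on the small values reported above: multiplying a camera-space point by the projection matrix gives clip-space coordinates, and after the perspective divide they lie in the normalized device range [-1, 1]; comparing them with touch coordinates still requires a viewport transform to pixels. The following is a minimal sketch of that conversion, assuming the point is already expressed in the camera frame and using android.opengl.Matrix with a column-major float[16] projection matrix; the helper name and the screenWidth/screenHeight parameters are placeholders, not part of the original post.

import android.opengl.Matrix;

public class PointProjectionSketch {

    /**
     * Projects a camera-space point (in meters) to pixel coordinates.
     * Returns null if the point is behind the camera.
     */
    public static float[] cameraToScreen(float[] projMatrix,
                                         float px, float py, float pz,
                                         int screenWidth, int screenHeight) {
        float[] point = {px, py, pz, 1.0f};
        float[] clip = new float[4];

        // Clip-space coordinates: clip = P * point
        Matrix.multiplyMV(clip, 0, projMatrix, 0, point, 0);
        if (clip[3] <= 0.0f) {
            return null; // behind the camera
        }

        // Perspective divide -> normalized device coordinates in [-1, 1]
        float ndcX = clip[0] / clip[3];
        float ndcY = clip[1] / clip[3];

        // Viewport transform -> pixels, with the origin at the top-left
        // corner so the result is comparable to touch coordinates
        float screenX = (ndcX * 0.5f + 0.5f) * screenWidth;
        float screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * screenHeight;

        return new float[]{screenX, screenY};
    }
}

With this, the projected points and the touch position from MotionEvent.getX()/getY() live in the same pixel space and can be compared directly.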

Transform all the points of the 3D point cloud onto the image plane using the camera. Compute the distance between each projected point on the image plane and the touched screen coordinate. Select the 3D point corresponding to the minimum distance, or those within a threshold, of the screen coordinate. A code snippet is given below.

// For each point in the cloud, project it to screen space with the camera
// and keep its index if it falls within sqMaxDist (squared pixels) of the touch.
for (int it = 0; it < m_pointsCount; ++it)
{
    Vector3 screenPos3 = cam.WorldToScreenPoint(m_points[it]);
    Vector2 screenPos = new Vector2(screenPos3.x, screenPos3.y);
    float distSqr = Vector2.SqrMagnitude(screenPos - touchPos);
    if (distSqr > sqMaxDist)
    {
        continue;
    }
    closePoints.Add(it);
}
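
The snippet above is Unity C#; since the question targets Android Java, here is a rough Java equivalent of the same nearest-point search, assuming the cloud points have already been projected to pixel coordinates (for example with a helper like the one sketched earlier). The names projectedPoints, touchX, touchY and maxDistPx are placeholders.

import java.util.ArrayList;
import java.util.List;

public class ClosePointSearchSketch {

    /**
     * Collects the indices of every projected point that lies within
     * maxDistPx pixels of the touch position. Entries may be null if the
     * corresponding point was behind the camera.
     */
    public static List<Integer> findClosePoints(float[][] projectedPoints,
                                                float touchX, float touchY,
                                                float maxDistPx) {
        List<Integer> closePoints = new ArrayList<>();
        float sqMaxDist = maxDistPx * maxDistPx;

        for (int i = 0; i < projectedPoints.length; i++) {
            if (projectedPoints[i] == null) {
                continue; // point was not visible on screen
            }
            float dx = projectedPoints[i][0] - touchX;
            float dy = projectedPoints[i][1] - touchY;
            if (dx * dx + dy * dy <= sqMaxDist) {
                closePoints.add(i);
            }
        }
        return closePoints;
    }
}

Working with squared distances avoids a square root per point, which helps a little given that the loop still has to visit every point in the cloud.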

However, the method I suggested may be computationally expensive.

Hi, thanks for your reply, and sorry for not answering sooner. What you said about the camera intrinsics sounds right, but I do not have a WorldToScreenPoint method. How can I compute a projection matrix that achieves the same thing? The one I get from the camera seems to give wrong values.

@Girauder I calibrated the RGB camera and obtained its intrinsics, then used that matrix to project the 3D points onto the camera plane. That works. The Project Tango Java API will provide a function that you can use directly in your method.

Do you mean the ij of TangoXYZij? I think those are more like indices than positions on the screen. The projection matrix I am using should already contain the camera intrinsics; is multiplying it with a Vector3 point the correct way to project a point onto the screen, or am I missing something?

@Girauder Yes, but that feature is not available yet.
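
For completeness, the pinhole projection mentioned in these comments (using the RGB camera intrinsics rather than the renderer's projection matrix) would look roughly like the sketch below. On Tango the focal lengths and principal point can be read from TangoCameraIntrinsics, but the exact field access and the omission of lens-distortion handling here are assumptions, not something confirmed in the thread.

public class IntrinsicsProjectionSketch {

    /**
     * Projects a point given in the color-camera frame (meters) to pixel
     * coordinates with a simple pinhole model. fx, fy are the focal lengths
     * and cx, cy the principal point, all in pixels (e.g. taken from
     * TangoCameraIntrinsics). Lens distortion is ignored in this sketch.
     * Returns null if the point is behind the camera.
     */
    public static float[] projectWithIntrinsics(double fx, double fy,
                                                double cx, double cy,
                                                double x, double y, double z) {
        if (z <= 0) {
            return null; // behind the camera
        }
        float u = (float) (fx * (x / z) + cx); // pixel column
        float v = (float) (fy * (y / z) + cy); // pixel row
        return new float[]{u, v};
    }
}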