How to correctly use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint

Tags: ios, 3d, swift, scenekit

I'm working on some code that uses SceneKit on iOS, and I want to determine the x and y coordinates on the global z-plane, where z is 0.0 and x and y are determined by a tap gesture. My setup is as follows:

    override func viewDidLoad() {
        super.viewDidLoad()

        // create a new scene
        let scene = SCNScene()

        // create and add a camera to the scene
        let cameraNode = SCNNode()
        let camera = SCNCamera()
        cameraNode.camera = camera
        scene.rootNode.addChildNode(cameraNode)
        // place the camera
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        // create and add an ambient light to the scene
        let ambientLightNode = SCNNode()
        ambientLightNode.light = SCNLight()
        ambientLightNode.light.type = SCNLightTypeAmbient
        ambientLightNode.light.color = UIColor.darkGrayColor()
        scene.rootNode.addChildNode(ambientLightNode)

        let triangleNode = SCNNode()
        triangleNode.geometry = defineTriangle()
        scene.rootNode.addChildNode(triangleNode)

        // retrieve the SCNView
        let scnView = self.view as SCNView

        // set the scene to the view
        scnView.scene = scene

        // configure the view
        scnView.backgroundColor = UIColor.blackColor()
        // add a tap gesture recognizer
        let tapGesture = UITapGestureRecognizer(target: self, action: "handleTap:")
        let gestureRecognizers = NSMutableArray()
        gestureRecognizers.addObject(tapGesture)
        scnView.gestureRecognizers = gestureRecognizers
    }

func handleTap(gestureRecognize: UIGestureRecognizer) {
    // retrieve the SCNView
    let scnView = self.view as SCNView
    // check what nodes are tapped
    let p = gestureRecognize.locationInView(scnView)
    // get the camera
    var camera = scnView.pointOfView.camera

    // screenZ is percentage between z near and far
    var screenZ = Float((15.0 - camera.zNear) / (camera.zFar - camera.zNear))
    var scenePoint = scnView.unprojectPoint(SCNVector3Make(Float(p.x), Float(p.y), screenZ))
    println("tapPoint: (\(p.x), \(p.y)) scenePoint: (\(scenePoint.x), \(scenePoint.y), \(scenePoint.z))")
}

func defineTriangle() -> SCNGeometry {

    // Vertices
    var vertices:[SCNVector3] = [
        SCNVector3Make(-2.0, -2.0, 0.0),
        SCNVector3Make(2.0, -2.0, 0.0),
        SCNVector3Make(0.0, 2.0, 0.0)
    ]

    let vertexData = NSData(bytes: vertices, length: vertices.count * sizeof(SCNVector3))
    var vertexSource = SCNGeometrySource(data: vertexData,
        semantic: SCNGeometrySourceSemanticVertex,
        vectorCount: vertices.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Normals
    var normals:[SCNVector3] = [
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0)
    ]

    let normalData = NSData(bytes: normals, length: normals.count * sizeof(SCNVector3))
    var normalSource = SCNGeometrySource(data: normalData,
        semantic: SCNGeometrySourceSemanticNormal,
        vectorCount: normals.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Indexes
    var indices:[CInt] = [0, 1, 2]
    var indexData  = NSData(bytes: indices, length: sizeof(CInt) * indices.count)
    var indexElement = SCNGeometryElement(
        data: indexData,
        primitiveType: .Triangles,
        primitiveCount: 1,
        bytesPerIndex: sizeof(CInt)
    )

    var geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])

    // material
    var material = SCNMaterial()
    material.diffuse.contents  = UIColor.redColor()
    material.doubleSided = true
    material.shininess = 1.0;
    geo.materials = [material];

    return geo
}
As you can see, I have a triangle that is 4 units tall and 4 units wide, set on the z-plane (z = 0), centered at x, y (0.0, 0.0). The camera is the default SCNCamera, which looks in the negative z direction, and I have placed it at (0, 0, 15). zNear and zFar have their default values of 1.0 and 100.0, respectively. In my handleTap method, I take the x and y screen coordinates of the tap and attempt to find the x and y global scene coordinates where z = 0.0. I am calling unprojectPoint.

The documentation for unprojectPoint indicates:

Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.

While it does not explicitly state that there is a linear relationship between the near and far planes for intermediate points, I made that assumption and computed the value of screenZ as the percentage of the distance between the near and far planes at which the z = 0 plane sits. To check my answer, I can click near the corners of the triangle, because I know where they are in global coordinates.
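As a quick illustration of why that linear assumption breaks down (this sketch is my addition, not part of the original question): under a standard GL-style perspective projection with a [0, 1] depth range, normalized depth is a hyperbolic, not linear, function of eye-space distance, so the linear screenZ above lands far from the true value:

```swift
// Normalized window-space depth in [0, 1] for an eye-space distance d,
// under a standard perspective projection with near plane n and far plane f.
func windowDepth(_ d: Float, near n: Float, far f: Float) -> Float {
    return f * (d - n) / (d * (f - n))
}

let near: Float = 1.0, far: Float = 100.0
let d: Float = 15.0   // camera at z = 15 looking at the z = 0 plane

let linearGuess = (d - near) / (far - near)              // ~0.1414 (the screenZ above)
let actualDepth = windowDepth(d, near: near, far: far)   // ~0.9428
```

With the question's numbers the two values differ by a factor of more than six, which is consistent with the incorrect results described below.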

My problem is that I am not getting correct values, and I am not getting consistent values once I start changing the zNear and zFar clipping planes on the camera. So my question is: how can I do this? Ultimately, I will create a new geometry and place it on the z-plane at the position corresponding to where the user clicked.


Thanks in advance for your help.

The depth buffer used in a typical 3D graphics pipeline is not linear in world space; that is a consequence of the perspective division. So the z coordinate you feed into unprojectPoint isn't actually the one you want.

How, then, do you find the normalized depth coordinate that matches a plane in world space? It helps if that plane is orthogonal to your camera's view direction. Then all you need to do is project a point that lies on that plane:

let projectedOrigin = gameView.projectPoint(SCNVector3Zero)
This gets you the position of the world origin in view + normalized-depth space. To map other points in 2D view space onto this plane, use the z coordinate of this vector:

let vp = gestureRecognizer.locationInView(scnView)
let vpWithZ = SCNVector3(x: vp.x, y: vp.y, z: projectedOrigin.z)
let worldPoint = gameView.unprojectPoint(vpWithZ)
This gets you a point in world space that maps the click/tap location onto the z = 0 plane, suitable for use as a node's position if you want to show that location to the user.


(Note that this approach only works when mapping onto a plane that is perpendicular to the camera's view direction. If you want to map view coordinates onto a differently oriented surface, the normalized depth value in vpWithZ won't be constant.)
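For an arbitrarily oriented plane, one common workaround (my addition, not part of this answer) is to unproject the touch at depths 0 and 1, form a ray through those two points, and intersect the ray with the plane analytically. The geometry, sketched with simd rather than SCNVector3 so it stands alone:

```swift
import simd

// Intersect the ray origin + t * direction with the plane through planePoint
// with normal planeNormal. Returns nil when the ray is parallel to the plane.
func intersectRayPlane(origin: SIMD3<Float>, direction: SIMD3<Float>,
                       planePoint: SIMD3<Float>,
                       planeNormal: SIMD3<Float>) -> SIMD3<Float>? {
    let denom = simd_dot(planeNormal, direction)
    if abs(denom) < 1e-6 { return nil }        // parallel: no single intersection
    let t = simd_dot(planeNormal, planePoint - origin) / denom
    return origin + t * direction
}

// A ray from the camera position (0, 0, 15) toward -z hits z = 0 at the origin.
let hit = intersectRayPlane(origin: [0, 0, 15], direction: [0, 0, -1],
                            planePoint: [0, 0, 0], planeNormal: [0, 0, 1])
```

In practice, origin would be the unprojected point at z = 0 and direction the normalized difference to the unprojected point at z = 1.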

After some experimentation, here is what we developed to project a touch point to an arbitrary depth from a given point in the scene.

The modification you need is to compute the intersection of the Z = 0 plane with this line, and that will be your point.

private func touchPointToScenePoint(recognizer: UIGestureRecognizer) -> SCNVector3 {
    // Get touch point
    let touchPoint = recognizer.locationInView(sceneView)

    // Compute near & far points
    let nearVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 0)
    let nearScenePoint = sceneView.unprojectPoint(nearVector)
    let farVector = SCNVector3(x: Float(touchPoint.x), y: Float(touchPoint.y), z: 1)
    let farScenePoint = sceneView.unprojectPoint(farVector)

    // Compute view vector
    let viewVector = SCNVector3(x: Float(farScenePoint.x - nearScenePoint.x), y: Float(farScenePoint.y - nearScenePoint.y), z: Float(farScenePoint.z - nearScenePoint.z))

    // Normalize view vector
    let vectorLength = sqrt(viewVector.x*viewVector.x + viewVector.y*viewVector.y + viewVector.z*viewVector.z)
    let normalizedViewVector = SCNVector3(x: viewVector.x/vectorLength, y: viewVector.y/vectorLength, z: viewVector.z/vectorLength)

    // Scale the normalized vector and offset it from the near point to find the scene point
    let scale = Float(15)
    let scenePoint = SCNVector3(
        x: nearScenePoint.x + normalizedViewVector.x * scale,
        y: nearScenePoint.y + normalizedViewVector.y * scale,
        z: nearScenePoint.z + normalizedViewVector.z * scale)

    print("2D point: \(touchPoint). 3D point: \(nearScenePoint). Far point: \(farScenePoint). scene point: \(scenePoint)")

    // Return <scenePoint>
    return scenePoint
}
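The Z = 0 intersection mentioned above can be computed directly from the same near/far pair by solving near.z + t * (far.z - near.z) = 0 for t. A sketch (a helper of my own, using plain Swift tuples so it runs without SceneKit):

```swift
// Where the line through the unprojected near and far points crosses z = 0.
// Returns nil when the line is parallel to the z = 0 plane.
func intersectZZero(near: (x: Float, y: Float, z: Float),
                    far: (x: Float, y: Float, z: Float)) -> (x: Float, y: Float, z: Float)? {
    let dz = far.z - near.z
    if abs(dz) < 1e-6 { return nil }
    let t = -near.z / dz
    return (x: near.x + t * (far.x - near.x),
            y: near.y + t * (far.y - near.y),
            z: 0)
}

// Example: a straight-down ray from z = 14 to z = -85 keeps x = 1, y = 1.
let p = intersectZZero(near: (x: 1, y: 1, z: 14), far: (x: 1, y: 1, z: -85))
```

The returned point is exactly the fixed-scale scenePoint above, but with the depth chosen so the point lands on z = 0 regardless of the camera position.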

This is my solution for getting an exact point in 3D space.

// If the object is in 0,0,0 point you can use
float zDepth = [self projectPoint:SCNVector3Zero].z;

// or myNode.position.
//zDepth = [self projectPoint:myNode.position].z;

NSLog(@"2D point: X %f, Y: %f, zDepth: %f", click.x, click.y, zDepth);

SCNVector3 worldPoint = [self unprojectPoint:SCNVector3Make(click.x, click.y,  zDepth)];

SCNVector3 nearVec = SCNVector3Make(click.x, click.y, 0.0);
SCNVector3 nearPoint = [self unprojectPoint:nearVec];

SCNVector3 farVec = SCNVector3Make(click.x, click.y, 1.0);
SCNVector3 farPoint = [self unprojectPoint:farVec];

float z_magnitude = fabs(farPoint.z - nearPoint.z);
float near_pt_factor = fabs(nearPoint.z) / z_magnitude;
float far_pt_factor = fabs(farPoint.z) / z_magnitude;

GLKVector3 nearP = GLKVector3Make(nearPoint.x, nearPoint.y, nearPoint.z);
GLKVector3 farP = GLKVector3Make(farPoint.x, farPoint.y, farPoint.z);

GLKVector3 final_pt = GLKVector3Add(GLKVector3MultiplyScalar(nearP, far_pt_factor), GLKVector3MultiplyScalar(farP, near_pt_factor));

NSLog(@"3D world point = %f, %f, %f", final_pt.x, final_pt.y, final_pt.z);
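To see why the crosswise weighting in final_pt lands on the z = 0 plane: with nearPoint.z and farPoint.z on opposite sides of zero, weighting the near point by |farPoint.z| and the far point by |nearPoint.z| (each divided by |farPoint.z − nearPoint.z|) cancels the z components exactly. A quick numeric check (Swift, with made-up sample points):

```swift
// Mirror of the Objective-C blend above, applied to sample unprojected points.
let nearPoint: (x: Float, y: Float, z: Float) = (x: 2, y: 3, z: 14)
let farPoint:  (x: Float, y: Float, z: Float) = (x: 4, y: 5, z: -86)

let zMagnitude = abs(farPoint.z - nearPoint.z)     // 100
let nearFactor = abs(nearPoint.z) / zMagnitude     // 0.14
let farFactor  = abs(farPoint.z) / zMagnitude      // 0.86

// Near point weighted by the *far* factor, and vice versa.
let finalX = nearPoint.x * farFactor + farPoint.x * nearFactor
let finalZ = nearPoint.z * farFactor + farPoint.z * nearFactor
// finalZ = 14 * 0.86 + (-86) * 0.14, which cancels to 0
```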

Exactly what I wanted. Thanks so much for taking the time to answer this question. Cheers!

I was looking for exactly the same unprojectPoint problem... couldn't find anything... couldn't figure out how this works. Thanks so much, rickster!!!!

What if you want to project the 2D coordinates onto an arbitrary point in front of the camera (say, 15 units in front of the camera)?

Awesome answer! Thanks so much @rickster! For future Googlers: in my case I needed the node at a Z position 10 units in front of the camera. The only adjustment needed was replacing SCNVector3Zero with SCNVector3Make(0, 0, myNode.position.z) (note that I had already placed the node at the desired depth).

Hi @Crashalot, I'm wondering whether this would work for face tracking from a 3D mesh to 2D image points. Could you help?!

Have you ever run into the issue of unprojectPoint effectively returning the same values for the X and Y coordinates?

When I use the scenePoint result above to add more nodes to the screen, node positions become less accurate the farther they are from the touch point. It seems scenePoint starts returning positions closer to previously placed nodes.

If you're going to use it in unprojectPoint, shouldn't zDepth be a float from 0 to 1? Not necessary. I'm currently using the function SceneKit provides; this code is no longer in use.

What is GLKVector3Add?