iOS ARKit Face Texture


I'm running a face-tracking configuration with ARKit and SceneKit. In each frame I can access the camera feed through the snapshot property, or through capturedImage as a pixel buffer. I'm also able to map each face vertex into image-coordinate space and add some UIView helpers (1-point squares) that show all the face vertices on screen in real time, like this:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard node.geometry is ARSCNFaceGeometry,
        let faceAnchor = anchor as? ARFaceAnchor,
        faceAnchor.isTracked
        else { return }

    let vertices = faceAnchor.geometry.vertices

    for (index, vertex) in vertices.enumerated() {
        // Project the model-space vertex into screen coordinates.
        let projected = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let newPosition = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))

        // Here I update the position of the UIView at `index` with the projected
        // vertex position; I keep an array of views matching the vertex count,
        // which is consistent across sessions.
    }
}
Since the UV coordinates are also constant across the session, I'm trying to draw, for each pixel of the face mesh, its corresponding position in the UV texture, so that after a few iterations I can write the person's face texture to a file.
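As an illustration of that per-pixel mapping, scaling a normalized UV coordinate into pixel coordinates of a square target texture (1024x1024 in my case) could look like this. The helper name and the clamping are mine, and I'm assuming a top-left UV origin:

```swift
// Hypothetical helper: map a normalized UV coordinate (assumed top-left origin)
// to a pixel position inside a square texture of the given size.
func uvToPixel(u: Double, v: Double, textureSize: Int) -> (x: Int, y: Int) {
    // Clamp to [0, 1] so slightly out-of-range UVs stay inside the texture.
    let cu = min(max(u, 0.0), 1.0)
    let cv = min(max(v, 0.0), 1.0)
    // Scale to pixel space; u = 1.0 maps to the last column, not one past it.
    let x = min(Int(cu * Double(textureSize)), textureSize - 1)
    let y = min(Int(cv * Double(textureSize)), textureSize - 1)
    return (x, y)
}
```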

I've found some theoretical solutions, like creating CGPaths for each triangle and asking, for each pixel, whether it is contained in that triangle. If it is, create a triangle image by cropping a rectangle and applying a triangle mask built from the triangle's vertices projected into image coordinates. That way I'd get a triangle image that then has to be warped to match the corresponding triangle's transform (e.g. skewed into place), added as a UIImageView subview of a 1024x1024 UIView, and finally that UIView encoded as a PNG. This sounds like a lot of work, especially the part of matching each cropped triangle with its corresponding triangle in the UV texture.

In the demo project there's an image showing what the UV texture looks like: if you edit that image and add some colors, they show up on the face. But I need the opposite direction, creating a texture of your face from what the camera feed sees. In the same demo project there's a sample that does exactly what I need, but with a shader, and with no clue about how to extract the texture to a file. The shader code looks like this:

/*
 <samplecode>
 <abstract>
 SceneKit shader (geometry) modifier for texture mapping ARKit camera video onto the face.
 </abstract>
 </samplecode>
 */

#pragma arguments
float4x4 displayTransform // from ARFrame.displayTransform(for:viewportSize:)

#pragma body

// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;

/**
 * MARK: Post-process special effects
 */
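To sanity-check the shader's math, the clip-space-to-image-space remap can be replicated in plain Swift on the CPU (the function name is mine):

```swift
// CPU replica of the shader's remap: clip-space XY in [-1, 1] becomes
// normalized image coordinates in [0, 1], with Y flipped because image
// coordinates have an upper-left origin.
func clipToImageSpace(clipX: Double, clipY: Double) -> (u: Double, v: Double) {
    let u = clipX * 0.5 + 0.5
    let v = 1.0 - (clipY * 0.5 + 0.5)
    return (u, v)
}
```

If I read the sample right, the displayTransform argument is then applied on top of this result; in SceneKit, a shader-modifier argument like that is bound from Swift by key-value coding on the material (setValue(_:forKey:)).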
But I couldn't make it work.

This is exactly what I need, but they don't seem to want to use ARKit.


Thanks.

Could you tell me how to get the person's face texture into a file? Please...

Hi, did you ever find a solution? The sample you shared doesn't actually capture the full face texture; it only textures the mesh with the current camera frame by re-projecting the 3D face coordinates back to 2D. If you try writing a shader that deforms the face, you'll see it break down. I found it too limited to use in a project, since it isn't a perfect 3D mesh, so the solution probably involves a more complex algorithm that infers a fully realistic face texture. If anyone has a clue about this, please comment.

If you want to get the face texture from the demo project into a file, you're really just talking about capturing the camera texture, so you could simply snapshot the ARSCNView. But I suspect that's not your actual goal. Does this answer your question?
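The "just snapshot the ARSCNView" suggestion can be sketched like this (the function and file name are mine; note this captures the whole rendered frame, camera feed included, not an unwrapped UV face texture):

```swift
import UIKit
import SceneKit

// Hedged sketch: write the current contents of an ARSCNView (an SCNView
// subclass) to a PNG file in the temporary directory.
func saveSnapshot(of sceneView: SCNView, named fileName: String = "face.png") throws -> URL {
    // snapshot() returns a UIImage of the currently rendered frame.
    let image = sceneView.snapshot()
    guard let data = image.pngData() else {
        throw NSError(domain: "Snapshot", code: 1, userInfo: nil)
    }
    let url = FileManager.default.temporaryDirectory.appendingPathComponent(fileName)
    try data.write(to: url)
    return url
}
```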