iOS ARKit scene with tracked images starts jumping
Something strange is happening. I just created a simple app that tracks an image, and I added an SCNPlane on top of that image. However, as soon as the image is viewed at an angle (with some perspective), the SCNPlane starts jumping around. Please watch the video for a better understanding:

So, as you can see, when I detect a simple image like "Bruce Wayne" everything is fine, but when I try to detect "building", it starts jumping. I guess ARKit doesn't know how to process this image, but I don't know how to fix it or where to start. Here is some code:
...
override func viewDidLoad() {
    super.viewDidLoad()
    initScene()
}

func initScene() {
    sceneView.delegate = self
    sceneView.showsStatistics = true
    // Create a new scene
    let scene = SCNScene(named: "art.scnassets/videoScene.scn")!
    self.sceneView.autoenablesDefaultLighting = false
    // Set the scene to the view
    sceneView.scene = scene
}
...
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    openedDetailScreen = false
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImageReference
    configuration.maximumNumberOfTrackedImages = 2
    self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints, ARSCNDebugOptions.showWorldOrigin]
    self.configuration = configuration
    sceneView.session.run(configuration)
    initRecognition()
}
...
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("name of tracked image: \(String(describing: referenceImage.name))")
    addVideoScene(withImage: referenceImage, didAdd: node)
}
...
private func addVideoScene(withImage referenceImage: ARReferenceImage, didAdd node: SCNNode) {
    videoNode = SKVideoNode(avPlayer: videoPlayer)
    let videoScene = SKScene(size: CGSize(width: 480, height: 360))
    // Center the video in the video scene
    videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
    // Invert the video so it does not appear upside down
    videoNode.yScale = -1.0
    videoScene.addChild(videoNode)
    // Create a plane with the same real-world width and height as the detected image
    let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
    // Set the first material's contents to the video scene
    plane.firstMaterial?.diffuse.contents = videoScene
    // Create a node out of the plane
    let planeNode = SCNNode(geometry: plane)
    // The created node is vertical by default, so rotate it around the x axis
    // to lie flat against the detected image
    planeNode.eulerAngles.x = -Float.pi / 2
    node.addChildNode(planeNode)
}
Image tracking quality varies a lot depending on the quality of the marker itself (e.g. how many "feature points" can be recognized in it), the quality of the surroundings (ever wondered why all of Apple's ARKit demos happen on large, textured tables?), and the quality of the printed marker. Since your image is just being displayed on a monitor, the tracking quality is not very good. Try printing the image and then try again.

Give me some time to test) I'll come back. Update: same results; I tested on an iPhone 6s and an XS Max. Maybe we could set some default parameters on the image so that the perspective does not need to be tracked. Update results:
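As a starting point for diagnosing marker quality, ARKit itself can report whether a candidate image has enough detail to be tracked. The sketch below is a minimal example, assuming iOS 13+ and a hypothetical asset name "building"; it creates an ARReferenceImage programmatically with its real-world width and runs ARKit's built-in validation check:

```swift
import ARKit
import UIKit

// Hypothetical helper: build a reference image from a UIImage and
// ask ARKit to validate whether it is a good tracking candidate.
func validateReferenceImage(named assetName: String, physicalWidth: CGFloat) {
    guard let cgImage = UIImage(named: assetName)?.cgImage else { return }

    // physicalWidth is the real-world width of the printed marker in meters;
    // an inaccurate value is a common cause of unstable anchors.
    let reference = ARReferenceImage(cgImage,
                                     orientation: .up,
                                     physicalWidth: physicalWidth)
    reference.name = assetName

    // iOS 13+: ARKit reports low-quality images (few feature points,
    // repetitive texture, low contrast) via the error in this callback.
    reference.validate { error in
        if let error = error {
            print("\(assetName) is a poor tracking candidate: \(error.localizedDescription)")
        } else {
            print("\(assetName) passed validation")
        }
    }
}
```

If "building" fails validation while "Bruce Wayne" passes, the jumping is most likely a property of the image itself (too little contrast or too much repetitive structure) rather than of your scene code.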