Android Camera2 API: picture is captured 4-5 seconds after the request was committed

The goal is simple: just take a single picture with the front camera. The picture should correspond to the moment the photo request is sent. A preview isn't even needed, so the CameraSession is instantiated with a single Surface coming from an ImageReader. The problem is that on some devices the image is only captured 4-5 seconds later. Here are some logs:

Photo was requested at 13:47:29.049

Capture was requested on 13:47:29.062

File was written, sending file to the channel on 13:47:33.313

Photo file was received at 13:47:33.339

Photo was requested at 13:47:39.073

Capture was requested on 13:47:39.074

File was written, sending file to the channel on 13:47:43.199

Photo file was received at 13:47:43.215

The problem is that the picture is captured about 4 seconds later, and autofocus is not supported either (tested on a Xiaomi Mi 5). How can I get rid of such a long delay before the capture, or perform a focus lock? Or is there some other solution that removes the problems described above?

It is worth mentioning the logs from an ASUS tablet:

Photo was requested at 07:07:03.443

Capture was requested on 07:07:03.454

File was written, sending file to the channel on 07:07:03.907

Photo file was received at 07:07:03.944

Photo was requested at 07:07:08.449

Capture was requested on 07:07:08.449

File was written, sending file to the channel on 07:07:08.635

Photo file was received at 07:07:08.651

Here is the code:

The ViewModel:

private fun makePhoto() {
    GlobalScope.launch(Main) {
        Log.i("Photo", "Photo was requested at ${LocalTime.now()}")
        val picture: File = camera.makePhoto()
        Log.i("Photo", "Photo file was received at ${LocalTime.now()}")
        //process the file somehow
    }
}
The camera implementation:

//the method is called in onStart of an Activity or Fragment instance
override suspend fun open() {
    val surfaces = listOf(outputSurface) //surface of an ImageReader instance, comes into object's constructor
    cameraDevice =
        suspendCoroutine { cameraManager.openCamera(specification.id, SuspendingCameraStateCallback(it), handler) } //callback just resumes the coroutine with CameraDevice when onOpened method was called.
    session = suspendCoroutine { cameraDevice.createCaptureSession(surfaces, SuspendSessionCallback(it), handler) } //same, just resumes the continuation with the session that comes into onConfigured method
}

override suspend fun makePhoto(): File {
    return suspendCoroutine {
        session.apply {
            stopRepeating()
            abortCaptures()
            Log.i("Photo", "Capture was requested on ${LocalTime.now()}")
            capture(createCaptureRequest(outputSurface), captureAwaitFactory.createListener(it), handler)
        }
    }
}

private fun createCaptureRequest(target: Surface): CaptureRequest {
    val requestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
    requestBuilder.addTarget(target)
    requestBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
    requestBuilder.set(CaptureRequest.JPEG_ORIENTATION, orientation.rotation)
    return requestBuilder.build()
}
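The SuspendingCameraStateCallback and SuspendSessionCallback classes referenced in open() are not shown; judging by the comments there, they are presumably thin coroutine bridges along these lines (a sketch of what those comments describe, not the question's actual code):

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import kotlin.coroutines.Continuation
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

// Sketch: each callback simply resumes the continuation captured in open().
class SuspendingCameraStateCallback(
    private val continuation: Continuation<CameraDevice>
) : CameraDevice.StateCallback() {
    override fun onOpened(camera: CameraDevice) = continuation.resume(camera)
    override fun onDisconnected(camera: CameraDevice) = camera.close()
    override fun onError(camera: CameraDevice, error: Int) =
        continuation.resumeWithException(IllegalStateException("Camera error $error"))
}

class SuspendSessionCallback(
    private val continuation: Continuation<CameraCaptureSession>
) : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) = continuation.resume(session)
    override fun onConfigureFailed(session: CameraCaptureSession) =
        continuation.resumeWithException(IllegalStateException("Session configuration failed"))
}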
The ImageReader listener, attached with setOnImageAvailableListener:

override fun onImageAvailable(reader: ImageReader) {
    reader.acquireLatestImage().use { image: Image ->
        val byteBuffer = image.planes[0].buffer
        val byteArray = ByteArray(byteBuffer.capacity())
        byteBuffer.get(byteArray)
        val outputFile = createOutputFile()
        FileOutputStream(outputFile).use { stream: FileOutputStream -> stream.write(byteArray) }
        Log.i("Photo", "File was written, sending file to the channel on ${LocalTime.now()}")
        scope.launch {
            fileChannel.send(outputFile)
        }
    }
}

private fun createOutputFile() = //creates a unique file
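The ImageReader itself is created elsewhere and isn't shown; for a no-preview JPEG still capture it would typically be set up roughly like this (the resolution, the maxImages value and the createStillImageReader name are assumptions for illustration, not code from the question):

import android.graphics.ImageFormat
import android.media.ImageReader
import android.os.Handler
import android.view.Surface

// Sketch only: a JPEG ImageReader whose surface becomes the outputSurface used above.
fun createStillImageReader(
    listener: ImageReader.OnImageAvailableListener,
    handler: Handler
): Pair<ImageReader, Surface> {
    val reader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, 2) // a couple of buffers is plenty for single stills
    reader.setOnImageAvailableListener(listener, handler) // 'listener' is the onImageAvailable implementation above
    return reader to reader.surface // reader.surface is what the camera object receives as outputSurface
}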
The factory's createListener implementation:

override fun createListener(continuation: Continuation<File>): CameraCaptureSession.CaptureCallback {
    return CoroutineCaptureCallback(channel, this, continuation)
}
And the CoroutineCaptureCallback itself:

internal class CoroutineCaptureCallback(
    private val channel: ReceiveChannel<File>,
    private val scope: CoroutineScope,
    private val continuation: Continuation<File>
) : CameraCaptureSession.CaptureCallback() {

    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        super.onCaptureCompleted(session, request, result)
        scope.launch {
            continuation.resume(channel.receive())
        }
    }
}

The code that runs when you create the capture session isn't included, so it's hard to say what you are doing at that point.

That said, you should be issuing a repeating capture request so that auto-exposure and auto-focus can converge; otherwise your image capture may use very bad values for them. For that, I'd suggest adding a second Surface target, such as a dummy SurfaceTexture (created with some random texture ID as its argument; just never call updateTexImage on it, and no GL context or anything else is required).

That way, everything is ready to go as soon as you issue your picture request.
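A minimal sketch of that suggestion, reusing the names from the question (cameraDevice, session, handler, outputSurface); the startWarmup helper and the 640x480 dummy size are made up for illustration:

import android.graphics.SurfaceTexture
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.os.Handler
import android.view.Surface

// Dummy output target so a preview-style repeating request can run without any UI.
// As suggested above: an arbitrary texture id, and updateTexImage() is never called on it.
val dummyTexture = SurfaceTexture(/* texName = */ 0).apply { setDefaultBufferSize(640, 480) }
val dummySurface = Surface(dummyTexture)

// The session then has to be created with BOTH surfaces, e.g. in open():
//   cameraDevice.createCaptureSession(listOf(outputSurface, dummySurface), SuspendSessionCallback(it), handler)

// Once the session is configured, keep auto-exposure/auto-focus running until a photo is requested.
fun startWarmup(cameraDevice: CameraDevice, session: CameraCaptureSession, handler: Handler?) {
    val preview = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        .apply { addTarget(dummySurface) }
        .build()
    session.setRepeatingRequest(preview, /* listener = */ null, handler)
}

With this warm-up running, makePhoto() can fire the still capture right away instead of waiting for AE/AF to converge from scratch, which is presumably where the 4-5 second delay comes from. If an explicit focus lock is also wanted (on a camera that actually supports AF), the usual Camera2 pattern is to set CaptureRequest.CONTROL_AF_TRIGGER to CONTROL_AF_TRIGGER_START on one request and wait for CaptureResult.CONTROL_AF_STATE to report FOCUSED_LOCKED before capturing; that detail is standard API usage rather than part of the answer above.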
