iOS: Convert UIImage to an 8-bit grayscale pixel buffer
I can convert a UIImage to an ARGB CVPixelBuffer, but now I'm trying to convert the UIImage to a grayscale, one-component buffer. I thought I had the code working, but the Core ML model complains:

    "Error Domain=com.apple.CoreML Code=1 "Image is not expected type 8-Gray, instead is Unsupported (40)""

This is the grayscale CGContext I have so far:
public func pixelBufferGray(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attributes = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                      kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_8IndexedGray_WhiteIsZero,
                                     attributes as CFDictionary, &pixelBuffer)
    guard status == kCVReturnSuccess, let imageBuffer = pixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let imageData = CVPixelBufferGetBaseAddress(imageBuffer)

    guard let context = CGContext(data: imageData, width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
        return nil
    }

    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)

    UIGraphicsPushContext(context)
    self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return imageBuffer
}
Any help would be greatly appreciated.

Even though the image is called grayscale, the correct pixel format is:

    kCVPixelFormatType_OneComponent8
Hopefully this complete snippet will help someone else:
public func pixelBufferGray(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attributes = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                      kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_OneComponent8,
                                     attributes as CFDictionary, &pixelBuffer)
    guard status == kCVReturnSuccess, let imageBuffer = pixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let imageData = CVPixelBufferGetBaseAddress(imageBuffer)

    guard let context = CGContext(data: imageData, width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
        return nil
    }

    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)

    UIGraphicsPushContext(context)
    self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return imageBuffer
}
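As a minimal usage sketch: this function is meant to be declared inside `extension UIImage { ... }`, so that `self.draw(in:)` renders the image itself into the grayscale context. The asset name "digit" and the 28x28 dimensions below are illustrative, not from the original post:

```swift
import UIKit

// Assumes pixelBufferGray(width:height:) above is wrapped in
// `extension UIImage { ... }`; "digit" and 28x28 are illustrative.
if let image = UIImage(named: "digit"),
   let buffer = image.pixelBufferGray(width: 28, height: 28) {
    // `buffer` is a kCVPixelFormatType_OneComponent8 pixel buffer,
    // suitable as the grayscale image input of a Core ML model, e.g.:
    // let output = try? model.prediction(image: buffer)
}
```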
Comments:

When creating the context in this code, did you try replacing the space parameter with kCVPixelFormatType_8IndexedGray_WhiteIsZero?

Thanks for the help. The correct pixel format for a grayscale UIImage is kCVPixelFormatType_OneComponent8. You can upvote my comment; at least loli just did, cheers!

Glad you got it working... but you don't need to do the pixel-buffer conversion yourself to feed an image to a Core ML model: the Vision framework will do it for you.

I don't see how to use this snippet. Where does the context get the image data from? Is this a UIImage extension? Please provide an example. Thanks.
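As the comment above suggests, Vision can take care of scaling and pixel-format conversion for you. A minimal sketch, assuming a compiled Core ML classification model class named MyGrayscaleModel (hypothetical name):

```swift
import UIKit
import Vision
import CoreML

// Sketch: let Vision convert the image to the model's expected input
// format (including 8-bit grayscale) instead of building the
// CVPixelBuffer by hand. `MyGrayscaleModel` is a hypothetical class
// generated by Xcode from a .mlmodel file.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? MyGrayscaleModel(configuration: .init()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let results = request.results as? [VNClassificationObservation],
           let top = results.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    // Have Vision scale the image to the model's input size.
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

This keeps the model-input plumbing out of your code entirely; the hand-rolled pixelBufferGray approach is still useful when you need direct control over the buffer contents.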