iPhone: How can I rewrite the GLCameraRipple example without using iOS 5.0-specific features?

How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?

I need it to run on iOS 4.x, so I can't use CVOpenGLESTextureCacheCreateTextureFromImage. What should I use instead?

As a follow-up, I'm now using the code below to supply YUV data rather than RGB, but the picture is wrong: the screen comes out green. It looks as though the UV plane isn't working.

CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);

glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, 
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Upload the interleaved CbCr plane as a two-channel luminance-alpha texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, 
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

[self drawFrame];

glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

How can I fix this?

The fast texture upload capabilities of iOS 5.0 make for very fast uploading of camera frames, which is why Apple uses them in their latest sample code. Using these iOS 5.0 texture caches on an iPhone 4S, I've seen 640x480 frame upload times go from 9 ms down to 1.8 ms, and for movie recording I've seen more than a fourfold improvement after switching to them.

That said, you may still want to provide a fallback for the stragglers who haven't yet updated to iOS 5.x. I do this by using a runtime check for the texture upload capability:

+ (BOOL)supportsFastTextureUpload;
{
    return (CVOpenGLESTextureCacheCreate != NULL);
}
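
A minimal sketch of how that check might be dispatched per frame; the two upload methods here are hypothetical placeholders for your own code, and note that comparing CVOpenGLESTextureCacheCreate against NULL only works if CoreVideo is weak-linked when your deployment target is below iOS 5.0:

if ([[self class] supportsFastTextureUpload]) {
    // iOS 5.0 and later: upload via the CVOpenGLESTextureCache fast path
    [self uploadFrameUsingTextureCache:cameraFrame];
} else {
    // iOS 4.x: fall back to the plain glTexImage2D upload shown below
    [self uploadFrameUsingTexImage2D:cameraFrame];
}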
If that returns NO, I use the standard upload process we've had since iOS 4.0:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);

CVPixelBufferLockBaseAddress(cameraFrame, 0);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

// Do your OpenGL ES rendering here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

The one quirk GLCameraRipple has in its upload process is that it uses YUV planar frames (split into Y and UV images) rather than a single BGRA image. I get pretty good performance from my BGRA uploads, so I haven't seen the need to work with YUV data myself. You could either modify GLCameraRipple to use BGRA frames with the above code, or rework what I have above into a YUV planar upload.
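
For the first option, a minimal sketch of requesting BGRA frames from the capture session, assuming videoOutput is the AVCaptureVideoDataOutput that GLCameraRipple configures:

// Ask the capture output for BGRA frames instead of biplanar YUV
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];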

If you switch the pixel format from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA (at line 315 of RippleViewController), then captureOutput:didOutputSampleBuffer:fromConnection: will receive a sample buffer whose image buffer can be uploaded straight to OpenGL via glTexImage2D (or glTexSubImage2D, if you'd like to keep the texture size a power of two). That works because every iOS device to date supports the GL_APPLE_texture_format_BGRA8888 extension, which lets you specify the otherwise non-standard GL_BGRA format.
So you can create a texture somewhere in advance with glGenTextures and replace line 235 with something like:

glBindTexture(GL_TEXTURE_2D, myTexture);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
      GL_BGRA, GL_UNSIGNED_BYTE, 
      CVPixelBufferGetBaseAddress(pixelBuffer));

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
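
The texture itself only needs to be created once, ahead of time; here's a minimal sketch of that one-time setup, assuming a 640x480 BGRA feed (passing NULL to glTexImage2D just allocates storage for the per-frame glTexSubImage2D above to write into):

glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
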
You may want to check whether the result of CVPixelBufferGetBytesPerRow is four times the result of CVPixelBufferGetWidth; from the documentation I'm not sure that's always guaranteed (which, in practice, probably means it isn't), but as long as it's a multiple of four you can supply CVPixelBufferGetBytesPerRow divided by four as your pretend width, given that you're uploading a sub-image anyway.
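
A minimal sketch of that padding check; treating bytesPerRow / 4 as the pretend width is the trick described above, while scaling the S texture coordinate to hide the padding is an assumption about how your own drawing code would handle it:

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
if (bytesPerRow != width * 4 && (bytesPerRow % 4) == 0) {
    // Rows are padded: allocate and upload bytesPerRow / 4 pixels per
    // row, then scale S texture coordinates by width / (bytesPerRow / 4)
    // so the padding on the right-hand edge is never sampled.
    width = bytesPerRow / 4;
}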

EDIT: in answer to the question posted below as a comment, if you want to keep receiving frames and making them available to the GPU as YUV, the code becomes visually ugly, because what you get back is a struct pointing to the various channel components, but you'd want something like this:

// lock the base address, pull out the struct that'll show us where the Y
// and CbCr information is actually held
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *info = CVPixelBufferGetBaseAddress(pixelBuffer);

// okay, upload Y. You'll want to communicate this texture to the
// SamplerY uniform within the fragment shader.
glBindTexture(GL_TEXTURE_2D, yTexture);

uint8_t *yBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoY.offset);
uint32_t yRowBytes = EndianU32_BtoN(info->componentInfoY.rowBytes);

/* TODO: check that yRowBytes is equal to CVPixelBufferGetWidth(pixelBuffer);
   otherwise you'll need to shuffle memory a little */

glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
      GL_LUMINANCE, GL_UNSIGNED_BYTE, 
      yBaseAddress);

// we'll also need to upload the CbCr part of the buffer, as a two-channel
// (ie, luminance + alpha) texture. This texture should be supplied to
// the shader for the SamplerUV uniform.
glBindTexture(GL_TEXTURE_2D, uvTexture);

uint8_t *uvBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoCbCr.offset);
uint32_t uvRowBytes = EndianU32_BtoN(info->componentInfoCbCr.rowBytes);

/* TODO: a check on uvRowBytes, as above */

glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer)/2, CVPixelBufferGetHeight(pixelBuffer)/2,
      GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 
      uvBaseAddress);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
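
For the two textures above to reach the right samplers, the shader's uniforms have to be pointed at the right texture units. A minimal sketch of that wiring, assuming program is GLCameraRipple's compiled shader program:

// One-time setup: tie each sampler uniform to a texture unit
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "SamplerY"), 0);
glUniform1i(glGetUniformLocation(program, "SamplerUV"), 1);

// Per frame, before drawing: bind the textures to those units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, yTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, uvTexture);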

I'm wondering why you want to support iOS 4? Are you still supporting the iPhone 3G? Note that GLCameraRipple uses OpenGL ES 2.0, so I've rewritten your question to make it clear that you're asking about the iOS 5.0-specific fast texture upload capability.