iOS: RTCI420Frame object to an image or texture
I am working on a WebRTC application on iOS. My goal is to record the video coming out of the WebRTC objects. I have a delegate (RTCVideoRenderer) that gives me this method:
-(void)renderFrame:(RTCI420Frame *)frame{
}
My question is: how do I convert an RTCI420Frame object into something useful, so that I can display the image or save it to disk?

RTCI420Frames use the YUV420 format. You can convert them to RGB easily using OpenCV, and then convert that to a UIImage. Make sure you import the OpenCV headers first:

#import &lt;opencv2/opencv.hpp&gt;
You will probably want to do this on a separate thread, especially if you are doing any video processing. Also, remember to give the file a .mm extension so that you can use C++.
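Before converting anything, it helps to know how much data each plane of an I420 frame actually holds. A minimal sketch in plain C (hypothetical helper names; the sizes follow from I420's 2x2 chroma subsampling, assuming tightly packed planes):

```c
#include <assert.h>
#include <stddef.h>

/* Sizes of the three planes in a tightly packed I420 buffer:
 * Y is stored at full resolution, while U and V are each
 * subsampled by 2 both horizontally and vertically. */
static size_t i420_y_size(int width, int height) {
    return (size_t)width * (size_t)height;
}

static size_t i420_chroma_size(int width, int height) {
    /* Round up so odd dimensions still get full chroma coverage. */
    return (size_t)((width + 1) / 2) * (size_t)((height + 1) / 2);
}

static size_t i420_frame_size(int width, int height) {
    /* One Y plane plus one U plane plus one V plane. */
    return i420_y_size(width, height) + 2 * i420_chroma_size(width, height);
}
```

For a 640x480 frame this gives 307200 bytes of Y and 76800 bytes each of U and V, i.e. width * height * 3 / 2 in total, which is why YUV420 is so much cheaper to ship around than RGBA.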
If you don't want to use OpenCV, you can do the conversion manually. My first attempt produced garbled colours and crashed after a few seconds; the version below fixes the two culprits — the chroma lookup now accounts for the 2x2 subsampled U/V planes, and the CoreGraphics objects are released after each frame so memory no longer grows without bound. The RGB results are also clamped to avoid wrap-around in uint8_t.
int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
// I420: Y is full resolution; U and V are subsampled 2x in both directions,
// so one chroma sample covers a 2x2 block of luma samples.
// This assumes tightly packed planes (no row padding).
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int yIndex = row * width + col;
        int uvIndex = (row / 2) * (width / 2) + (col / 2);
        int y = yPlane[yIndex];
        int u = uPlane[uvIndex] - 128;
        int v = vPlane[uvIndex] - 128;
        // BT.601 YUV -> RGB; clamp to [0, 255] before narrowing to uint8_t.
        int r = y + 1.402 * v;
        int g = y - 0.344 * u - 0.714 * v;
        int b = y + 1.772 * u;
        int rgbOffset = yIndex * 4;
        data[rgbOffset]     = (uint8_t)MAX(0, MIN(255, r));
        data[rgbOffset + 1] = (uint8_t)MAX(0, MIN(255, g));
        data[rgbOffset + 2] = (uint8_t)MAX(0, MIN(255, b));
        data[rgbOffset + 3] = UINT8_MAX;
    }
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
// Release everything created above; leaking these per frame exhausts memory fast.
CGImageRelease(cgImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);
free(data);
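The per-pixel arithmetic above can be checked in isolation. A minimal C sketch of the same BT.601 conversion with clamping and the subsampled chroma lookup (hypothetical function names; assumes tightly packed planes):

```c
#include <stdint.h>

/* Clamp an intermediate result into the representable byte range. */
static uint8_t clamp_u8(int v) {
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one pixel of a tightly packed I420 frame to RGB.
 * The U/V planes are subsampled 2x2, so both row and column
 * are halved when indexing into them. */
static void i420_pixel_to_rgb(const uint8_t *yPlane, const uint8_t *uPlane,
                              const uint8_t *vPlane, int width,
                              int row, int col, uint8_t rgb[3]) {
    int y = yPlane[row * width + col];
    int uvIndex = (row / 2) * (width / 2) + (col / 2);
    int u = uPlane[uvIndex] - 128;
    int v = vPlane[uvIndex] - 128;
    rgb[0] = clamp_u8((int)(y + 1.402 * v));              /* R */
    rgb[1] = clamp_u8((int)(y - 0.344 * u - 0.714 * v));  /* G */
    rgb[2] = clamp_u8((int)(y + 1.772 * u));              /* B */
}
```

A quick sanity check: a pixel with Y = 128 and neutral chroma (U = V = 128) must come out mid-grey, and extreme chroma values must clamp instead of wrapping around — the wrap-around is exactly what produces the garbled colours mentioned above.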
Hello, I ended up using [NSData dataWithBytesNoCopy:mRGB.data length:mRGB.elemSize() * mRGB.total()] instead of dataWithBytes. Otherwise I was getting a huge memory leak.
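The difference the comment points at — dataWithBytes: copies the buffer, while dataWithBytesNoCopy: adopts the caller's buffer — can be sketched as a toy model in C (hypothetical wrapper names, not an NSData API):

```c
#include <stdlib.h>
#include <string.h>

/* Toy model of NSData's two constructors: wrap_copy duplicates the
 * bytes (two allocations alive at once, like dataWithBytes:), while
 * wrap_nocopy adopts the caller's buffer (one allocation, like
 * dataWithBytesNoCopy:length:freeWhenDone:YES). */
typedef struct {
    unsigned char *bytes;
    size_t length;
    int owns_copy; /* 1 if we allocated our own storage */
} buffer_t;

static buffer_t wrap_copy(const unsigned char *src, size_t len) {
    buffer_t b = { (unsigned char *)malloc(len), len, 1 };
    memcpy(b.bytes, src, len);
    return b;
}

static buffer_t wrap_nocopy(unsigned char *src, size_t len) {
    buffer_t b = { src, len, 0 }; /* no second allocation */
    return b;
}
```

With a copy per frame at 30 fps, a 640x480 RGBA buffer duplicates about 35 MB of allocations every second if the copies are not released promptly, which matches the "huge memory leak" the commenter saw.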