Creating a CVPixelBuffer from YUV with IOSurface backing

So I am getting raw YUV data in three separate arrays from a network callback (VoIP app). From what I understand, you cannot create an IOSurface-backed pixel buffer with CVPixelBufferCreateWithPlanarBytes, according to here:

Important: You cannot use CVPixelBufferCreateWithBytes() or
CVPixelBufferCreateWithPlanarBytes() with
kCVPixelBufferIOSurfacePropertiesKey. Calling
CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes()
will result in CVPixelBuffers that are not IOSurface-backed

So it seems you have to create the buffer with CVPixelBufferCreate, but how do you get the data from the callback into the CVPixelBufferRef that you create?

- (void)videoCallBack:(uint8_t *)yPlane uPlane:(uint8_t *)uPlane vPlane:(uint8_t *)vPlane
                width:(size_t)width height:(size_t)height
              yStride:(size_t)yStride uStride:(size_t)uStride vStride:(size_t)vStride {
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes), &pixelBuffer);

I'm not sure what to do after that. Eventually I want to turn this into a CIImage so I can render the video with my GLKView. How do people "put" the data into the buffer once it has been created?
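For reference, the usual path from a CIImage to a GLKView is to draw through a CIContext backed by the view's EAGLContext. A minimal sketch, where _glkView, _eaglContext, _ciContext, and _latestFrame are placeholder names rather than anything from the question:

#import <GLKit/GLKit.h>
#import <CoreImage/CoreImage.h>

// One-time setup (e.g. in viewDidLoad): give the GLKView an EAGL context
// and create a CIContext that renders into it.
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_glkView.context = _eaglContext;
_ciContext = [CIContext contextWithEAGLContext:_eaglContext];

// GLKViewDelegate callback: draw the most recent frame's CIImage into the view.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    CGRect destRect = CGRectMake(0, 0, view.drawableWidth, view.drawableHeight);
    [_ciContext drawImage:_latestFrame inRect:destRect fromRect:_latestFrame.extent];
}

After each decoded frame you would store its CIImage in _latestFrame and call -display (or -setNeedsDisplay) on the GLKView.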

Solution

I figured it out, and it was fairly trivial. The full code is below. The only issue is that I get a "BSXPCMessage received error for message: Connection interrupted" warning, and it takes a while for the video to show up.
NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes), &pixelBuffer);
if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Plane 0 is the full-resolution luma (Y) plane.
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);

// Plane 1 is the interleaved chroma (CbCr) plane. uvPlane holds the already
// interleaved U/V samples; numberOfElementsForChroma is width * height / 2 for 4:2:0.
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // success!
CVPixelBufferRelease(pixelBuffer);
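One caveat: the memcpy calls assume the pixel buffer's rows are packed exactly width bytes apart, which CoreVideo does not guarantee (planes can have padded rows). A safer variant of the luma copy, using the yStride parameter from the callback signature and the plane's actual bytes-per-row, might look like this:

// Copy the Y plane row by row so that the source stride (yStride) and the
// destination bytes-per-row can both differ from `width`.
uint8_t *yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < height; row++) {
    memcpy(yDest + row * yDestStride, yPlane + row * yStride, width);
}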

I forgot to include the code that interleaves the two U and V planes into the single uvPlane buffer, but that shouldn't be too bad; a sketch is below.
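A minimal sketch of that interleaving step, assuming 4:2:0 subsampling (the helper name and the idea of writing straight into the destination plane are illustrative, not part of the original answer):

// Packs separate U and V planes into one interleaved CbCr plane
// (NV12 layout: Cb0 Cr0 Cb1 Cr1 ...). Assumes 4:2:0 subsampling, so the
// chroma planes are (width / 2) x (height / 2).
static void InterleaveChromaPlanes(const uint8_t *uPlane, size_t uStride,
                                   const uint8_t *vPlane, size_t vStride,
                                   size_t width, size_t height,
                                   uint8_t *uvDest, size_t uvDestStride)
{
    for (size_t row = 0; row < height / 2; row++) {
        const uint8_t *uRow = uPlane + row * uStride;
        const uint8_t *vRow = vPlane + row * vStride;
        uint8_t *destRow = uvDest + row * uvDestStride;
        for (size_t col = 0; col < width / 2; col++) {
            destRow[2 * col]     = uRow[col]; // Cb (U)
            destRow[2 * col + 1] = vRow[col]; // Cr (V)
        }
    }
}

With something like this, the second memcpy above can be replaced by interleaving directly into uvDestPlane, passing CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) as uvDestStride, which also avoids allocating a temporary uvPlane buffer.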

