Rendering YpCbCr iPhone 4 camera frames to an OpenGL ES 2.0 texture in iOS 4.3

I am trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4, but the texture comes out all black. My camera is configured as follows:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] 
                                                          forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
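
For context, here is a minimal sketch of how such a videoOutput might be wired into a capture session (the session, device, and delegate-queue details are assumptions; the original question only shows the video settings):

#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
[session addInput:input];

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// ... the setVideoSettings: call above goes here ...
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()]; // delegate assumed to be self
[session addOutput:videoOutput];
[session startRunning];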

and I am passing the pixel data to my texture like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
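
One detail that is easy to miss regardless of pixel format: the base address is only valid while the pixel buffer is locked. A sketch of the delegate callback around that upload, assuming a standard AVCaptureVideoDataOutputSampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock before reading any pixel data; unlock when done.
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    size_t bufferWidth = CVPixelBufferGetWidth(cameraFrame);
    size_t bufferHeight = CVPixelBufferGetHeight(cameraFrame);

    // ... glTexImage2D upload as above ...

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}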

My fragment shader is:

varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame; 

void main() { 
    lowp vec4 color; 
    color = texture2D(videoFrame, textureCoordinate);
    lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
    convertedColor += 1.164 * color.g; // Y
    convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
    convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
    gl_FragColor = vec4(convertedColor, 1.0);
}

My vertex shader is:

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}

This works fine when I am using a BGRA image, where my fragment shader only does

gl_FragColor = texture2D(videoFrame, textureCoordinate);
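
For comparison, the working BGRA upload presumably looks something like the following (this exact line is an assumption; the question only shows the shader side of the BGRA path). GL_BGRA comes from the APPLE_texture_format_BGRA8888 extension available on iOS:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));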

What, if anything, am I missing here? Thanks!

Answer

OK, success. The key was passing the Y and the UV planes as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 textureCoordinate;

uniform sampler2D videoFrame; 
uniform sampler2D videoFrameUV;

const mat3 yuv2rgb = mat3(
                            1, 0, 1.2802,
                            1, -0.214821, -0.380589,
                            1, 2.127982, 0
                            );

void main() {    
    vec3 yuv = vec3(
                    1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
                    texture2D(videoFrameUV, textureCoordinate).r - 0.5,
                    texture2D(videoFrameUV, textureCoordinate).a - 0.5
                    );
    vec3 rgb = yuv * yuv2rgb;

    gl_FragColor = vec4(rgb,1.0);
}
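
This works because kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange stores the frame as two planes: plane 0 is full-resolution 8-bit Y, and plane 1 is interleaved CbCr at half resolution in each dimension, which is why the UV texture below is bufferWidth/2 by bufferHeight/2. A quick way to confirm the layout on a real buffer:

size_t planeCount = CVPixelBufferGetPlaneCount(cameraFrame);      // 2 for this format
size_t yWidth   = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);   // full resolution
size_t yHeight  = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t uvWidth  = CVPixelBufferGetWidthOfPlane(cameraFrame, 1);   // yWidth / 2
size_t uvHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 1);  // yHeight / 2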

You need to create the textures like this:

int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
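
One caveat: camera frames are non-power-of-two, and OpenGL ES 2.0 only samples NPOT textures correctly with CLAMP_TO_EDGE wrapping and non-mipmapped filtering. A sketch of the one-time texture setup under that assumption:

GLuint videoFrameTexture;
glGenTextures(1, &videoFrameTexture);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
// NPOT textures in ES 2.0 require CLAMP_TO_EDGE and no mipmaps:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// ... and the same for videoFrameTextureUV ...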

Then pass them in like this:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);

glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
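
The two uniform handles come from the linked shader program; assuming the program object is named program (a name not in the original), they would be fetched once after linking:

GLint videoFrameUniform   = glGetUniformLocation(program, "videoFrame");
GLint videoFrameUniformUV = glGetUniformLocation(program, "videoFrameUV");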

Boy, am I relieved!

P.S. The values for the yuv2rgb matrix come from http://en.wikipedia.org/wiki/YUV, and I copied code from http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
