I explored quite a few rendering approaches before, but unfortunately they were all built on system controls and never touched real OpenGL. There are also few examples online of rendering with pure OpenGL on macOS, so this time I implemented one based on Apple's GLEssentials sample code.
Pure OpenGL requires a fair amount of background knowledge; if you are not familiar with OpenGL, you may want to read my earlier GLFW articles first.
First, we need to create the vertex data:
- (GLuint) buildVAO
{
    // Set up vertex data (and buffers) and attribute pointers.
    // Two triangles forming a full-screen quad; each vertex is
    // x, y, z followed by the texture coordinates s, t.
    GLfloat vertices[] = {
        -1.0f, -1.0f, 0.0f, 1.0f, 1.0f,
         1.0f,  1.0f, 0.0f, 0.0f, 0.0f,
        -1.0f,  1.0f, 0.0f, 1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f, 1.0f, 1.0f,
         1.0f, -1.0f, 0.0f, 0.0f, 1.0f,
         1.0f,  1.0f, 0.0f, 0.0f, 0.0f
    };
    GLuint VBO, VAO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    // Bind the VAO first, then bind and fill the vertex buffer
    // and set the attribute pointers.
    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // Attribute 0: position (3 floats); attribute 1: texture coordinate (2 floats).
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid *)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid *)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);
    // Unbinding the VBO is safe here: glVertexAttribPointer has already
    // recorded it in the VAO.
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // Unbind the VAO to avoid accidentally modifying it later.
    glBindVertexArray(0);
    return VAO;
}
The vertex data describes a rectangle made of two triangles, plus a texture coordinate for each vertex; the rest of the code creates the VAO.
Next come the shaders. Our window is very simple: it just displays a texture.
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
out vec2 TexCoord;
void main()
{
    gl_Position = vec4(position.x, position.y, position.z, 1.0);
    TexCoord = texCoord;
}
TexCoord holds the texture coordinates and is passed on to the fragment shader.
#version 330 core
in vec2 TexCoord;
out vec4 color;
uniform sampler2D ourTexture;
void main()
{
    color = texture(ourTexture, TexCoord);
}
The texture function looks up the color at the given coordinates in the texture.
- (GLuint) buildTexture:(demoImage *)image
{
    GLuint texName;
    if (_characterTexName == 0) {
        // Create a texture object only the first time
        glGenTextures(1, &texName);
    } else {
        texName = _characterTexName;
    }
    glBindTexture(GL_TEXTURE_2D, texName);
    // Set up filter and wrap modes for this texture object
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // Indicate that pixel rows are tightly packed
    // (the default unpack alignment of 4 only suits RGBA or float data)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Allocate and load image data into the texture
    glTexImage2D(GL_TEXTURE_2D, 0, image->format, image->width, image->height, 0,
                 image->format, image->type, image->data);
    // Create mipmaps for this texture for better image quality
    glGenerateMipmap(GL_TEXTURE_2D);
    GetGLError();
    return texName;
}
- (void)setImage:(CVImageBufferRef)pixelBuffer {
    // glDeleteTextures(1, &_characterTexName);  // not needed: the texture object is reused
    // _characterTexName = 0;
    // Lock the base address before reading, otherwise the data may be stale.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    demoImage image = {0};
    image.width = width;
    image.height = height;
    // Assumes tightly packed 24-bit RGB rows; if CVPixelBufferGetBytesPerRow
    // reports padding, the upload must account for it (e.g. GL_UNPACK_ROW_LENGTH).
    image.rowByteSize = width * 3;
    image.format = GL_RGB;
    image.type = GL_UNSIGNED_BYTE;
    image.size = CVPixelBufferGetDataSize(pixelBuffer);
    image.data = CVPixelBufferGetBaseAddress(pixelBuffer);
    _characterTexName = [self buildTexture:&image];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
buildTexture is a helper for creating the texture. The AVCapture project only needs one texture, so we first check whether _characterTexName already exists and create it only when it does not (deleting the texture would not effectively release the memory it occupies, and there is no need to anyway).
The texture is refreshed from a CVImageBufferRef, which is easy to obtain from a CMSampleBuffer. I chose the RGB format this time because OpenGL supports it directly. Before using the pixelBuffer, its base address must be locked; otherwise the data you read may be stale.
- (void) render
{
    // Clear the color buffer
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Draw the textured quad
    glUseProgram(_characterPrgName);
    glBindTexture(GL_TEXTURE_2D, _characterTexName);
    // With a single texture the default texture unit suffices; otherwise:
    // glActiveTexture(GL_TEXTURE0);
    // glBindTexture(GL_TEXTURE_2D, _characterTexName);
    // glUniform1i(glGetUniformLocation(_characterPrgName, "ourTexture"), 0);
    glBindVertexArray(_characterVAOName);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
}
The rendering itself is very simple: bind the texture, then draw the two triangles, and the image appears.
render is driven by a separate CVDisplayLink. The display link refreshes faster than the camera delivers frames, so my approach is to store the latest image first and let render pick it up on each refresh. OpenGL is not thread-safe, so the texture must not be manipulated directly on the capture thread.
In actual testing, CPU usage is on par with CIContext, both at about 7%, so the efficiency is quite good. The next article will cover support for the I420 format; after all, YUV is the dominant format in video. With the current OpenGL framework, supporting I420 is not hard; most of the work is in the shaders. I will write it up when I have time.