Efficient UI refreshing really requires OpenGL/hardware acceleration. While digging into OpenGL recently, my brain got thoroughly fried by its APIs, shaders, GLSL, and so on. Hard to blame it, though: OpenGL was designed for 3D graphics in the first place, so it is bound to feel intimidating at first.
There are plenty of open-source OpenGL ES 2.0 video renderers around, and Apple's own GLCameraRipple sample is excellent, but desktop OpenGL implementations are rare. As it turns out, rendering a texture with OpenGL is something Core Image can do for you.
One of Apple's demos contains a class called VideoCIView, which essentially implements drawing a CIImage into an NSOpenGLView.
VideoCIView subclasses NSOpenGLView mainly to receive the callbacks fired when the display area changes. One more thing to note: an NSOpenGLView cannot have subviews.
+ (NSOpenGLPixelFormat *)defaultPixelFormat
{
    static NSOpenGLPixelFormat *pf;

    if (pf == nil)
    {
        // Making sure that the pixel format of the context does not have
        // a recovery renderer is important. Otherwise CoreImage may not be
        // able to create contexts that share textures with this context.
        static const NSOpenGLPixelFormatAttribute attr[] = {
            NSOpenGLPFAAccelerated,
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAColorSize, 32,
#if MAC_OS_X_VERSION_MAX_ALLOWED > MAC_OS_X_VERSION_10_4
            NSOpenGLPFAAllowOfflineRenderers,
#endif
            0
        };

        pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
    }
    return pf;
}
Every NSOpenGLView needs an NSOpenGLPixelFormat at initialization. Passing a C array of attributes into Objective-C like this is fairly unusual. Attributes come in two kinds: 1. boolean flags; 2. keys followed by an integer value. This method simply configures a handful of OpenGL parameters; most implementations look much the same.
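For illustration, here is a minimal sketch of the two attribute kinds in a hypothetical pixel format (these particular attributes are not the ones used above, just common examples):

// Boolean attributes stand alone; valued attributes are a key
// immediately followed by an integer. The array is zero-terminated.
static const NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFADoubleBuffer,      // boolean: present or absent
    NSOpenGLPFADepthSize, 24,     // valued: key, then integer
    0
};
NSOpenGLPixelFormat *fmt =
    [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];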
- (void)prepareOpenGL
{
    GLint parm = 1;

    // Set the swap interval to 1 to ensure that buffer swaps occur only
    // during the vertical retrace of the monitor.
    [[self openGLContext] setValues:&parm forParameter:NSOpenGLCPSwapInterval];

    // To ensure best performance, disable everything you don't need.
    glDisable (GL_ALPHA_TEST);
    glDisable (GL_DEPTH_TEST);
    glDisable (GL_SCISSOR_TEST);
    glDisable (GL_BLEND);
    glDisable (GL_DITHER);
    glDisable (GL_CULL_FACE);
    glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask (GL_FALSE);
    glStencilMask (0);
    glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
    glHint (GL_TRANSFORM_HINT_APPLE, GL_FASTEST);

    _needsReshape = YES;
}
This method is called once after OpenGL has finished initializing the current context; think of it as the viewDidLoad of the OpenGL world. Apple disables a number of features that are not needed here.
// Called when the user scrolls, moves, or resizes the view.
- (void)reshape
{
    // Resets the viewport on the next draw operation.
    _needsReshape = YES;
}
- (void)updateMatrices
{
    NSRect visibleRect = [self visibleRect];
    NSRect mappedVisibleRect = NSIntegralRect([self convertRect:visibleRect
                                                         toView:[self enclosingScrollView]]);
    [[self openGLContext] update];

    // Install an orthographic projection matrix (no perspective)
    // with the origin in the bottom left and one unit equal to one device pixel.
    glViewport (0, 0, mappedVisibleRect.size.width, mappedVisibleRect.size.height);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (visibleRect.origin.x,
             visibleRect.origin.x + visibleRect.size.width,
             visibleRect.origin.y,
             visibleRect.origin.y + visibleRect.size.height,
             -1, 1);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();

    _needsReshape = NO;
}
This method is invoked when the enclosing window is resized (which is also the reason NSOpenGLLayer was not chosen). Note that on resize the code does not touch glViewport directly; it only sets _needsReshape = YES. This is arguably a common OpenGL design pattern: all drawing happens inside the render function, and the render function is not reentrant (my own speculation, admittedly). OpenGL drawing does not have to happen on the main thread. As for driving render, a display link is the usual approach (CADisplayLink on iOS, CVDisplayLink on macOS); in this project we already have an onCapture callback, so that is unnecessary.
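For completeness, if you have no capture callback, a CVDisplayLink can drive render on macOS. A minimal sketch, assuming a VideoCIView-like class under manual reference counting (the setup location and the view variable are assumptions):

// Sketch: drive -render from a CVDisplayLink. The callback fires on a
// dedicated thread, which is fine since OpenGL drawing need not be on
// the main thread.
static CVReturn displayLinkCallback(CVDisplayLinkRef link,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    VideoCIView *view = (VideoCIView *)context;
    [view render];
    return kCVReturnSuccess;
}

// Somewhere during setup:
CVDisplayLinkRef displayLink;
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &displayLinkCallback, view);
CVDisplayLinkStart(displayLink);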
- (void)render
{
    NSRect frame = [self bounds];

    [[self openGLContext] makeCurrentContext];

    if (_needsReshape)
    {
        [self updateMatrices];
        glClear (GL_COLOR_BUFFER_BIT);
    }

    CGRect imageRect = [_image extent];
    CGRect destRect = NSRectToCGRect(frame);

    [[self ciContext] drawImage:_image inRect:destRect fromRect:imageRect];

    // Flush the OpenGL command stream. If the view is double-buffered,
    // replace this call with [[self openGLContext] flushBuffer].
    glFlush ();
}
- (CIContext *)ciContext
{
    // Allocate a CoreImage rendering context using the view's OpenGL
    // context as its destination if none already exists.
    // You must do this before sending any queries to the CIContext.
    if (_context == nil)
    {
        [[self openGLContext] makeCurrentContext];

        NSOpenGLPixelFormat *pf = [self pixelFormat];
        if (pf == nil)
            pf = [[self class] defaultPixelFormat];

        _context = [[CIContext contextWithCGLContext:CGLGetCurrentContext()
                                         pixelFormat:[pf CGLPixelFormatObj]
                                             options:nil] retain];
    }
    return _context;
}
The actual render does just two things:
- If the visible region changed, update the viewport and the mapping.
- Draw the CIImage via OpenGL.
Before step 2 we first need the CIContext used for Core Image drawing. The drawing itself is a single line: drawImage. Naturally, this drawing is done on the GPU, so it is very fast.
Finally, render is driven through setImage:
- (void)setImage:(CIImage *)image
{
    if (_image != image)
    {
        [_image release];
        _image = [image retain];
    }
    [self render];
}
The last key step is obtaining the CIImage: get a CVImageBuffer from the CMSampleBuffer, then create a CIImage from that CVImageBuffer.
CVImageBufferRef videoFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage* image = [CIImage imageWithCVImageBuffer:videoFrame];
+[CIImage imageWithCVImageBuffer:] does not always succeed; whether it works depends on the pixel format of the captured frames.
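One common way to avoid the incompatible-format case is to ask the capture pipeline for BGRA frames up front. A sketch, assuming the frames come from an AVCaptureVideoDataOutput (how your project actually produces buffers may differ):

// Sketch: request 32BGRA frames so the CVImageBuffer handed to
// +[CIImage imageWithCVImageBuffer:] is in a format Core Image accepts.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};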
The whole NSOpenGLView process is a bit convoluted; GLKView on iOS is much simpler to use.
In my final tests, the CIContext approach used roughly 7% CPU, slightly worse than AVSampleBufferDisplayLayer. The main cost is the CVImageBufferRef -> CIImage conversion, which eats a lot of time; the drawing itself is quite efficient.
I rather like this rendering approach: it takes advantage of hardware acceleration, and CIImage brings plenty of filters to play with. Most importantly, it is fairly simple to implement.