How to Create a CVPixelBufferRef

On iOS, CVPixelBufferRef shows up everywhere. Camera capture delivers a CMSampleBufferRef for each frame, and every CMSampleBufferRef wraps a CVPixelBufferRef; hardware video decoding also returns its output as a CVPixelBufferRef. These buffers are typically in NV12 format, i.e. either kCVPixelFormatType_420YpCbCr8BiPlanarFullRange or kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange. Because it is a Core Foundation-style C object, a CVPixelBufferRef is not managed by ARC: the developer must manage the reference count and the object's lifetime. CVPixelBufferRetain and CVPixelBufferRelease increment and decrement the reference count; they are equivalent to CFRetain and CFRelease, so CFGetRetainCount can be used to inspect the current retain count.

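A minimal sketch of that manual ownership, assuming pixelBuffer was obtained from a capture or decode callback:

// Retain the buffer if it must outlive the callback that delivered it.
CVPixelBufferRetain(pixelBuffer);                        // +1, same effect as CFRetain
NSLog(@"retain count: %ld", CFGetRetainCount(pixelBuffer));
// ... use the buffer asynchronously ...
CVPixelBufferRelease(pixelBuffer);                       // -1, same effect as CFRelease
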
The CVImageBufferRef can be obtained from the CMSampleBufferRef with the following call:

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(videoSample);

From the CVImageBufferRef we can then extract the YUV data as YUV420 (NV12):

// AWVideoEncoder.m
-(NSData *) convertVideoSmapleBufferToYuvData:(CMSampleBufferRef) videoSample{
    // Get the CVImageBufferRef via CMSampleBufferGetImageBuffer;
    // it holds the pointers to the YUV420 (NV12) planes.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(videoSample);

    // Lock the base address before touching the pixel data.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Image width in pixels
    size_t pixelWidth = CVPixelBufferGetWidth(pixelBuffer);
    // Image height in pixels
    size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);
    // Number of bytes in the Y plane
    size_t y_size = pixelWidth * pixelHeight;
    // Number of bytes in the interleaved UV plane
    size_t uv_size = y_size / 2;

    // aw_alloc is this project's malloc-based allocator.
    uint8_t *yuv_frame = aw_alloc(uv_size + y_size);

    // Copy the Y plane (this assumes the plane's bytesPerRow equals pixelWidth,
    // i.e. no row padding).
    uint8_t *y_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yuv_frame, y_frame, y_size);

    // Copy the interleaved UV plane.
    uint8_t *uv_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(yuv_frame + y_size, uv_frame, uv_size);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // NSData takes ownership of yuv_frame and frees it when deallocated.
    return [NSData dataWithBytesNoCopy:yuv_frame length:y_size + uv_size];
}
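
A hedged usage sketch: assuming the encoder object above is set as the camera's AVCaptureVideoDataOutput delegate, the conversion is typically driven from the capture callback (sendYuvData: is a hypothetical placeholder for whatever consumes the bytes):

// Standard AVFoundation capture callback; forwards each frame's NV12 bytes.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSData *nv12Data = [self convertVideoSmapleBufferToYuvData:sampleBuffer];
    [self sendYuvData:nv12Data];   // hypothetical consumer
}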

As its name suggests, CVPixelBufferRef is a pixel-image (pixel buffer) type; the CV prefix indicates it belongs to the Core Video framework.
Conversely, NV12 data can be written into a CVPixelBufferRef, as the following example shows:

-(CVPixelBufferRef)createCVPixelBufferRefFromNV12buffer:(unsigned char *)buffer width:(int)w height:(int)h {
    NSDictionary *pixelAttributes = @{(NSString*)kCVPixelBufferIOSurfacePropertiesKey:@{}};
    
    CVPixelBufferRef pixelBuffer = NULL;
    
    // Use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange instead if the
    // source data is video-range NV12.
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          w,
                                          h,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create cvpixelbuffer %d", result);
        return NULL;
    }
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    // Copy the Y plane of the NV12 data (assumes the plane's bytesPerRow
    // equals w; see the row-by-row variant below for padded buffers).
    unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    unsigned char *y_ch0 = buffer;
    memcpy(yDestPlane, y_ch0, w * h);
    
    // Copy the interleaved UV plane of the NV12 data.
    unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    unsigned char *y_ch1 = buffer + w * h;
    memcpy(uvDestPlane, y_ch1, w * h / 2);
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
    return pixelBuffer;
}
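
This method follows the Create rule, so the caller owns the returned buffer and must release it. A minimal usage sketch (nv12Frame, width and height are assumed to come from the caller):

CVPixelBufferRef pb = [self createCVPixelBufferRefFromNV12buffer:nv12Frame
                                                            width:width
                                                           height:height];
if (pb) {
    // ... use the pixel buffer ...
    CVPixelBufferRelease(pb);   // balance the create
}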

CVPixelBufferGetBaseAddressOfPlane returns the data pointer of each plane. Before reading those addresses you must call CVPixelBufferLockBaseAddress, which hints that a CVPixelBufferRef's backing store is not necessarily ordinary main memory; it may live in other storage such as GPU memory (an IOSurface), so locking is needed to map it to an accessible address. The lock also prevents read/write conflicts while you touch the data; if you only need read access, pass kCVPixelBufferLock_ReadOnly as the lock flag.
When copying row by row, the destination address inside the pixel buffer advances by current_row * bytesPerRowChrominance per iteration, because that is how the pixel buffer lays out its memory (rows may be padded for alignment). My source data, by contrast, is tightly packed with no row alignment, so its offset advances by current_row * _outVideoWidth, i.e. the real frame width, and the number of bytes copied per row is also the real width. For the chroma plane, the width and height are each half of the luma plane's, but every element carries both a U and a V byte, so one chroma row occupies the same number of bytes as one luma row: the per-row copy size works out to _outVideoWidth / 2 * 2 = _outVideoWidth. A sketch of this row-by-row copy follows.
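
A minimal sketch of that row-by-row copy, assuming a tightly packed NV12 source (srcY/srcUV) and hypothetical _outVideoWidth/_outVideoHeight dimensions; the per-plane bytesPerRow values are queried from the destination pixel buffer:

// Copy a tightly packed NV12 frame into a (possibly row-padded) pixel buffer.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

uint8_t *dstY  = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
uint8_t *dstUV = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t bytesPerRowLuma        = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t bytesPerRowChrominance = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

// Luma plane: one row of _outVideoWidth bytes per image row.
for (int row = 0; row < _outVideoHeight; row++) {
    memcpy(dstY + row * bytesPerRowLuma,
           srcY + row * _outVideoWidth,
           _outVideoWidth);
}

// Chroma plane: half as many rows, but U+V interleaved, so each row is still
// _outVideoWidth / 2 * 2 = _outVideoWidth bytes.
for (int row = 0; row < _outVideoHeight / 2; row++) {
    memcpy(dstUV + row * bytesPerRowChrominance,
           srcUV + row * _outVideoWidth,
           _outVideoWidth);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);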

Generating a CVPixelBufferRef from a UIImage

- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img
{
    CGImageRef image = [img CGImage];
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    
    CVPixelBufferRef pxbuffer = NULL;
    
    CGFloat frameWidth = CGImageGetWidth(image);
    CGFloat frameHeight = CGImageGetHeight(image);
    
    // Create an empty 32-bit ARGB pixel buffer that Core Graphics can draw into.
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    
    // Wrap the pixel buffer's memory in a bitmap context, using the buffer's
    // own bytesPerRow so that any row padding is respected.
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    // Draw the UIImage's CGImage into the context, i.e. into the pixel buffer.
    CGContextDrawImage(context, CGRectMake(0,
                                           0,
                                           frameWidth,
                                           frameHeight),
                       image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    
    // Create rule: the caller is responsible for releasing pxbuffer.
    return pxbuffer;
}

Generating a UIImage from a CVPixelBufferRef

The example below first fills a pixel buffer from an NV12 byte buffer (the same steps as above) and then converts it to a UIImage via CIImage and CGImage:

// Builds an NV12 pixel buffer from raw bytes and converts it to a UIImage.
-(UIImage *)createUIImageFromNV12buffer:(unsigned char *)buffer width:(int)w height:(int)h {
    NSDictionary *pixelAttributes = @{(NSString*)kCVPixelBufferIOSurfacePropertiesKey:@{}};
    
    CVPixelBufferRef pixelBuffer = NULL;
    
    // kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange is the alternative for
    // video-range NV12 data.
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          w,
                                          h,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create cvpixelbuffer %d", result);
        return nil;
    }
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    // Y plane of the NV12 data.
    unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, buffer, w * h);
    
    // Interleaved UV plane of the NV12 data.
    unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, buffer + w * h, w * h / 2);
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
    // CVPixelBufferRef -> CIImage -> CGImage
    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:coreImage
                                                   fromRect:CGRectMake(0, 0, w, h)];
    
    // CGImage -> UIImage
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:videoImage
                                                  scale:1.0
                                            orientation:UIImageOrientationRight];
    
    CVPixelBufferRelease(pixelBuffer);
    CGImageRelease(videoImage);
    
    return uiImage;
}

## Cropping a CVPixelBufferRef
![image.png](https://upload-images.jianshu.io/upload_images/1996279-6c1f4c38fb8bf40d.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

If you use vImage, you can operate on the buffer data directly without converting it to any intermediate image format.
outImg receives the cropped and scaled image data; the ratio between outWidth and cropWidth determines the scaling. Setting cropX0 = 0, cropY0 = 0 and cropWidth/cropHeight to the original dimensions means no cropping (the whole source image is used); setting outWidth = cropWidth and outHeight = cropHeight means no scaling. Note that inBuff.rowBytes must always be the bytes-per-row of the full source buffer, not of the cropped region.

// Crop rectangle and output size, set by the caller.
int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;

// sampleBuffer comes from a capture callback delivering 32-bit BGRA/ARGB frames.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
unsigned char *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Describe the crop region of the source buffer.
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow;   // rowBytes of the FULL source buffer

// Byte offset of the crop origin inside the source buffer (4 bytes per pixel).
size_t startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = baseAddress + startpos;

// Destination buffer for the cropped and scaled result.
unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@" error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// free(outImg) when the scaled data is no longer needed.
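
If the cropped and scaled result is needed as a CVPixelBufferRef again, one option is to wrap the malloc'ed output with CVPixelBufferCreateWithBytes. This is a sketch under the assumption that the source frames are 32BGRA; the release callback that frees outImg is a hypothetical helper:

// Hypothetical release callback so Core Video frees the malloc'ed pixels
// once the buffer is no longer referenced.
static void releaseMallocedPixels(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);
}

CVPixelBufferRef scaledBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                            outWidth,
                                            outHeight,
                                            kCVPixelFormatType_32BGRA,   // must match the source pixel layout
                                            outImg,
                                            4 * outWidth,
                                            releaseMallocedPixels,
                                            NULL,
                                            NULL,
                                            &scaledBuffer);
if (ret != kCVReturnSuccess) NSLog(@"CVPixelBufferCreateWithBytes failed: %d", ret);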
