[iOS Development] Using the GPUImage Framework to Integrate the Tillusory Beauty SDK

For basic usage of the Tillusory SDK, see: Integrating the Tillusory beauty SDK into Tencent Cloud live streaming, and the problems encountered along the way

The new project has a beauty-settings screen where users can preconfigure the beauty effects used during video sessions. It is essentially preview + beauty filters, so after some research I chose the GPUImage image-processing framework to integrate the Tillusory SDK.

A simple demo of the effect:


To show a preview, we first need a video stream.

In GPUImage, GPUImageVideoCamera is a subclass of GPUImageOutput: it captures camera frames as textures, hands them to OpenGL ES for processing, and then passes the textures down the filter chain. So the first step is to create a GPUImageVideoCamera object:

@property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
- (GPUImageVideoCamera *)videoCamera {
    if (!_videoCamera) {
        _videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionFront];
        _videoCamera.frameRate = 30; // frame rate
        _videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
        _videoCamera.horizontallyMirrorFrontFacingCamera = YES;
        _videoCamera.horizontallyMirrorRearFacingCamera = NO;
        
        [_videoCamera addAudioInputsAndOutputs];
        _videoCamera.delegate = self;
        
        [_videoCamera addTarget:self.previewView];
        
    }
    return _videoCamera;
}

With the video stream in hand, we still need a view to render it. GPUImageView is a UIView subclass that serves as an endpoint for GPUImage output, so we create a GPUImageView object as well:

@property (nonatomic, strong) GPUImageView *previewView;
- (GPUImageView *)previewView {
    if (!_previewView) {
        _previewView = [[GPUImageView alloc] initWithFrame:self.view.frame];
        _previewView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
    }
    return _previewView;
}

準(zhǔn)備就緒后, 初始化TiSDK,將美顏UI 添加到self. previewView上加載,并設(shè)置代理, 開(kāi)始相機(jī)拍攝:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    
    // beauty setup
    [TiSDK init:kTuoHuanKey CallBack:^(InitStatus callBack) {
        
    }];
    [[TiUIManager shareManager] loadToView:self.previewView forDelegate:self];
    
    [self setUI];
    
    [self.videoCamera startCameraCapture];
}

GPUImageVideoCamera has a delegate protocol, GPUImageVideoCameraDelegate, which is essentially a camera-capture protocol with a single method: - (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer

This method is called for every frame the camera captures, before GPUImage processes it, so it is the hook for manual frame processing. For our purposes, that means letting TiSDK render beauty effects onto each frame:

-(void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    
    // CoreMedia.framework: CMSampleBufferGetImageBuffer
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    
    BOOL isMirror = (self.videoCamera.cameraPosition == AVCaptureDevicePositionFront);
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    
    UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
    TiRotationEnum rotation;
    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotation = CLOCKWISE_90;
            break;
        case UIDeviceOrientationLandscapeLeft:
            rotation = isMirror ? CLOCKWISE_180 : CLOCKWISE_0;
            break;
        case UIDeviceOrientationLandscapeRight:
            rotation = isMirror ? CLOCKWISE_0 : CLOCKWISE_180;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotation = CLOCKWISE_270;
            break;
        default:
            rotation = CLOCKWISE_90;
            break;
    }
    
    // video frame format
    TiImageFormatEnum format;
    switch (CVPixelBufferGetPixelFormatType(pixelBuffer)) {
        case kCVPixelFormatType_32BGRA:
            format = BGRA;
            break;
        case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
        case kCVPixelFormatType_420YpCbCr8BiPlanarFullRange:
            format = NV12;
            break;
        default:
            NSLog(@"Unsupported pixel format!");
            format = BGRA;
            break;
    }
    
    int imageWidth, imageHeight;
    if (format == BGRA) {
        imageWidth = (int)CVPixelBufferGetBytesPerRow(pixelBuffer) / 4;
        imageHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    } else {
        imageWidth = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
        imageHeight = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    }
    
    //todo --- tillusory start ---
    [[TiSDKManager shareManager] renderPixels:baseAddress Format:format Width:imageWidth Height:imageHeight Rotation:rotation Mirror:isMirror];
    //todo --- tillusory end ---
    
//    self.outputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    self.outputImagePixelBuffer = pixelBuffer;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    
}
@property(nonatomic, assign) CVPixelBufferRef outputImagePixelBuffer;

在iOS里,我們經(jīng)常能看到 CVPixelBufferRef 這個(gè)類型,在Camera 采集返回的數(shù)據(jù)里得到一個(gè)CMSampleBufferRef,而每個(gè)CMSampleBufferRef里則包含一個(gè) CVPixelBufferRef,在視頻硬解碼的返回?cái)?shù)據(jù)里也是一個(gè) CVPixelBufferRef。
顧名思義,CVPixelBufferRef 是一種像素圖片類型,由于CV開(kāi)頭,所以它是屬于 CoreVideo 模塊的。
CVPixelBufferRef里包含很多圖片相關(guān)屬性,比較重要的有 width,height,PixelFormatType等。

到這里,預(yù)覽+拓幻美顏的功能基本就實(shí)現(xiàn)了, 當(dāng)設(shè)置了不同的美顏, 就會(huì)實(shí)時(shí)進(jìn)行紋理渲染。

由于拓幻SDK有緩存, 即每次設(shè)置之后都會(huì)做緩存,下次使用時(shí)會(huì)自動(dòng)讀取緩存, 所以我們?cè)O(shè)置完成后,在視頻時(shí)就可以直接使用設(shè)置好的美顏參數(shù)。


A few commonly used GPUImageVideoCamera methods:
Switch cameras: [self.videoCamera rotateCamera];
Start capture: [self.videoCamera startCameraCapture];
Stop capture: [self.videoCamera stopCameraCapture];

That's all.

If this post helped you, please give it a like~

© All rights reserved by the author. Please contact the author for reprints or content collaboration.
Platform statement: this article (including any images or videos) was uploaded and published by the author and represents the author's own views; Jianshu is an information-publishing platform and only provides information-storage services.
