iOS Hardware Encoding

Introduction

  • In the previous article we covered the basic concepts behind video encoding on iOS; now we can start encoding the captured video for real
  • This article focuses on hardware encoding, i.e. how to use the VideoToolbox framework
  • The encoding pipeline: capture --> obtain video frames --> encode each frame --> receive the encoded frame data --> write the encoded data to a file as NAL units

Video Capture

  • Video capture was introduced and studied earlier, so the code is included here as-is; the capture process has only been lightly wrapped for convenience

Video Hardware Encoding

  • Initialize the compression session (VTCompressionSessionRef)

    • VideoToolbox is almost entirely a C-function API
  • After initialization, configure the session with VTSessionSetProperty

    • Codec: H.264
    • Frame rate: how many frames are shown per second
    • Bit rate: how much data is stored per unit of time
    • Keyframe (GOP size) interval: how many frames make up one GOP
    • Parameter reference:
  • Prepare to encode

  • The code:

- (void)setupVideoSession {
    // 1. Track which frame we are on (there will be a lot of frames)
    self.frameID = 0;

    // 2. Width & height of the recorded video
    //    (screen bounds are in points; multiply by the screen scale to get pixels)
    int width = [UIScreen mainScreen].bounds.size.width * [UIScreen mainScreen].scale;
    int height = [UIScreen mainScreen].bounds.size.height * [UIScreen mainScreen].scale;

    // 3. Create the CompressionSession object that will encode the frames
    // kCMVideoCodecType_H264 : encode with H.264
    // didCompressH264 : called back after each frame is encoded; the encoded
    //                   data can be written to a file from there
    VTCompressionSessionCreate(NULL, width, height, kCMVideoCodecType_H264, NULL, NULL, NULL, didCompressH264, (__bridge void *)(self), &_compressionSession);

    // 4. Enable real-time encoding (live streaming must be real time, otherwise it lags)
    VTSessionSetProperty(self.compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);

    // 5. Set the expected frame rate (frames per second; too low and playback stutters)
    int fps = 30;
    CFNumberRef fpsRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &fps);
    VTSessionSetProperty(self.compressionSession, kVTCompressionPropertyKey_ExpectedFrameRate, fpsRef);
    CFRelease(fpsRef);

    // 6. Set the bit rate (a higher bit rate gives a clearer picture, while a
    //    low one causes blocking artifacts; high bit rates preserve the original
    //    picture better but are harder to transmit)
    int bitRate = 800 * 1024;
    CFNumberRef bitRateRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bitRate);
    VTSessionSetProperty(self.compressionSession, kVTCompressionPropertyKey_AverageBitRate, bitRateRef);
    CFRelease(bitRateRef);
    // Hard limit: at most bitRate * 1.5 / 8 bytes per 1-second window
    NSArray *limit = @[@(bitRate * 1.5 / 8), @(1)];
    VTSessionSetProperty(self.compressionSession, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);

    // 7. Set the keyframe (GOP size) interval
    int frameInterval = 30;
    CFNumberRef frameIntervalRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &frameInterval);
    VTSessionSetProperty(self.compressionSession, kVTCompressionPropertyKey_MaxKeyFrameInterval, frameIntervalRef);
    CFRelease(frameIntervalRef);

    // 8. Configuration done; get ready to encode
    VTCompressionSessionPrepareToEncodeFrames(self.compressionSession);
}
  • Encode the incoming frames
    • Convert the CMSampleBufferRef into a CVImageBufferRef
    • Encode the CVImageBufferRef
- (void)encodeSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // 1. Get the imageBuffer out of the sampleBuffer
    CVImageBufferRef imageBuffer = (CVImageBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

    // 2. Build a CMTime presentation timestamp from the current frame number
    //    (a CMTime is value/timescale seconds)
    CMTime presentationTimeStamp = CMTimeMake(self.frameID++, 1000);
    VTEncodeInfoFlags flags;

    // 3. Encode this frame
    OSStatus statusCode = VTCompressionSessionEncodeFrame(self.compressionSession,
                                                          imageBuffer,
                                                          presentationTimeStamp,
                                                          kCMTimeInvalid,
                                                          NULL, (__bridge void * _Nullable)(self), &flags);
    if (statusCode == noErr) {
        NSLog(@"H264: VTCompressionSessionEncodeFrame Success");
    }
}
  • When encoding succeeds, write the encoded bitstream to a file

    • On success, the callback passed in at session creation is invoked
    • 1> First check whether this is a keyframe:
      • If it is, the SPS and PPS NAL units must be written before the keyframe itself
      • Extract the SPS and PPS data, wrap each as a NAL unit, and write them to the file
    • 2> Wrap the I/P/B frames as NAL units and write them to the file
    • After writing, the data is laid out as: start code (00 00 00 01) + NAL unit, repeated for the SPS, the PPS, and each frame
  • The code:

// Encoding-complete callback
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer) {

    // 1. Bail out if the encode failed
    if (status != noErr) {
        return;
    }

    // 2. Recover the encoder object passed in at session creation
    VideoEncoder *encoder = (__bridge VideoEncoder *)outputCallbackRefCon;

    // 3. Check whether this is a keyframe: a sample without the
    //    kCMSampleAttachmentKey_NotSync attachment is a sync sample (IDR)
    CFDictionaryRef attachments = (CFDictionaryRef)CFArrayGetValueAtIndex(CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true), 0);
    bool isKeyframe = !CFDictionaryContainsKey(attachments, kCMSampleAttachmentKey_NotSync);

    // For keyframes, extract the SPS & PPS first
    if (isKeyframe)
    {
        // The parameter sets live in the sample's CMFormatDescriptionRef
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);

        // Get the SPS
        size_t sparameterSetSize, sparameterSetCount;
        const uint8_t *sparameterSet;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sparameterSet, &sparameterSetSize, &sparameterSetCount, NULL);

        // Get the PPS
        size_t pparameterSetSize, pparameterSetCount;
        const uint8_t *pparameterSet;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &pparameterSetSize, &pparameterSetCount, NULL);

        // Wrap the SPS/PPS in NSData so they are easy to write to the file
        NSData *sps = [NSData dataWithBytes:sparameterSet length:sparameterSetSize];
        NSData *pps = [NSData dataWithBytes:pparameterSet length:pparameterSetSize];

        // Write them to the file
        [encoder gotSpsPps:sps pps:pps];
    }

    // Get the block of encoded data
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        size_t bufferOffset = 0;
        // The first four bytes of each NAL unit are not the 00 00 00 01 start
        // code but the unit's length, stored big-endian (AVCC format)
        static const int AVCCHeaderLength = 4;

        // Walk the buffer, extracting one NAL unit at a time
        while (bufferOffset < totalLength - AVCCHeaderLength) {
            uint32_t NALUnitLength = 0;
            // Read the NAL unit length
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);

            // Convert from big-endian to host byte order
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);

            NSData *data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            [encoder gotEncodedData:data isKeyFrame:isKeyframe];

            // Move to the next NAL unit in the block buffer
            bufferOffset += AVCCHeaderLength + NALUnitLength;
        }
    }
}

- (void)gotSpsPps:(NSData *)sps pps:(NSData *)pps
{
    // 1. Build the NAL unit header (the Annex-B start code)
    const char bytes[] = "\x00\x00\x00\x01";
    size_t length = (sizeof bytes) - 1; // drop the string literal's trailing '\0'
    NSData *byteHeader = [NSData dataWithBytes:bytes length:length];

    // 2. Write the NAL header & body to the file
    [self.fileHandle writeData:byteHeader];
    [self.fileHandle writeData:sps];
    [self.fileHandle writeData:byteHeader];
    [self.fileHandle writeData:pps];
}
- (void)gotEncodedData:(NSData *)data isKeyFrame:(BOOL)isKeyFrame
{
    NSLog(@"gotEncodedData %d", (int)[data length]);
    if (self.fileHandle != nil)
    {
        // Prefix every NAL unit with the Annex-B start code before writing
        const char bytes[] = "\x00\x00\x00\x01";
        size_t length = (sizeof bytes) - 1; // string literals have an implicit trailing '\0'
        NSData *byteHeader = [NSData dataWithBytes:bytes length:length];
        [self.fileHandle writeData:byteHeader];
        [self.fileHandle writeData:data];
    }
}

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時(shí)請(qǐng)結(jié)合常識(shí)與多方信息審慎甄別。
平臺(tái)聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點(diǎn),簡(jiǎn)書(shū)系信息發(fā)布平臺(tái),僅提供信息存儲(chǔ)服務(wù)。

相關(guān)閱讀更多精彩內(nèi)容

友情鏈接更多精彩內(nèi)容