Audio Encoding

Audio Fundamentals

The PCM format
PCM is the raw, uncompressed data stream obtained directly from the microphone after recording.
Data size (bytes) = sample rate × bit depth × channel count × duration in seconds / 8
The sample rate is typically 44.1 kHz, the bit depth 8 or 16 bits, and the channel count mono (1) or stereo (2).
PCM is a coding format: it is simply a stream of consecutive sample values, with no header information and no notion of frames. Unless you recorded the audio yourself, there is no way to determine the sample rate or any other parameters from a bare chunk of PCM data.

The AAC format
As a first approximation, an AAC file may have no file header at all and consist entirely of a sequence of frames, each made up of a frame header plus a data section. The frame header carries the sample rate, channel count, frame length, and so on, somewhat like the MP3 format.

AAC Encoding
Initializing the encoding converter

-(BOOL)createAudioConvert{
    if (m_converter != nil) {
        return TRUE;
    }
    AudioStreamBasicDescription inputFormat  =  {0};
    inputFormat.mSampleRate = _configuration.audioSampleRate;
    inputFormat.mFormatID     = kAudioFormatLinearPCM;
    inputFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
    inputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
    inputFormat.mFramesPerPacket = 1; 
    inputFormat.mBitsPerChannel = 16;
    inputFormat.mBytesPerFrame = inputFormat.mBitsPerChannel / 8 * inputFormat.mChannelsPerFrame; 
    inputFormat.mBytesPerPacket = inputFormat.mBytesPerFrame * inputFormat.mFramesPerPacket;

    AudioStreamBasicDescription outputFormat; // the output audio format starts here
    memset(&outputFormat, 0, sizeof(outputFormat)); 
    outputFormat.mSampleRate = inputFormat.mSampleRate; // keep the sample rate unchanged 
    outputFormat.mFormatID = kAudioFormatMPEG4AAC; // AAC encoding (kAudioFormatMPEG4AAC or kAudioFormatMPEG4AAC_HE_V2) 
    outputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
    outputFormat.mFramesPerPacket = 1024; // one AAC packet covers 1024 frames (samples), not 1024 bytes 
    const OSType subtype = kAudioFormatMPEG4AAC; 
    AudioClassDescription requestedCodecs[2] = { 
       {
           kAudioEncoderComponentType, 
           subtype,
           kAppleSoftwareAudioCodecManufacturer 
       }, 
       {
           kAudioEncoderComponentType, 
           subtype,
           kAppleHardwareAudioCodecManufacturer 
        } 
    };
    OSStatus result = AudioConverterNewSpecific(&inputFormat, &outputFormat, 2, requestedCodecs, &m_converter); 

    if(result != noErr) return NO; 
    return YES; 
}

Performing the conversion

static char *aacBuf = NULL; // keep the buffer across calls so the NULL check below works
if (!aacBuf) {
    aacBuf = malloc(inBufferList.mBuffers[0].mDataByteSize);
}
// initialize an output buffer list 
AudioBufferList outBufferList; 
outBufferList.mNumberBuffers = 1; 
outBufferList.mBuffers[0].mNumberChannels = inBufferList.mBuffers[0].mNumberChannels; 
outBufferList.mBuffers[0].mDataByteSize = inBufferList.mBuffers[0].mDataByteSize; // set the buffer size 
outBufferList.mBuffers[0].mData = aacBuf; // set the AAC output buffer 
UInt32 outputDataPacketSize = 1; 
if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr){ 
   return; 
} 
AudioFrame *audioFrame = [AudioFrame new]; 
audioFrame.timestamp = timeStamp;
audioFrame.data = [NSData dataWithBytes:aacBuf length:outBufferList.mBuffers[0].mDataByteSize];
char exeData[2]; 
exeData[0] = _configuration.asc[0]; 
exeData[1] = _configuration.asc[1]; 
audioFrame.audioInfo = [NSData dataWithBytes:exeData length:2];

In iOS, opening the microphone and capturing from it is mostly done with the AVCaptureSession component, which can capture not only audio but video as well.
Here is a quick tidy-up of the code for opening the microphone and capturing audio:

First, we need to define a variable of type AVCaptureSession. It acts as the bridge between the microphone device and the data output, and through it we can conveniently obtain the microphone's raw data in real time.

    AVCaptureSession  *m_capture;

At the same time, define a set of methods for opening and closing the microphone. For the data to be delivered, you also need to implement the AVCaptureAudioDataOutputSampleBufferDelegate protocol:

    -(void)open;  
    -(void)close;  
    -(BOOL)isOpen;  

Below we implement each of the methods above to complete the data capture:

-(void)open {  
    NSError *error;  
    m_capture = [[AVCaptureSession alloc]init];  
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];  
    if (audioDev == nil)  
    {  
        CKPrint("Couldn't create audio capture device");  
        return ;  
    }  
      
    // create mic device  
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];  
    if (error != nil)  
    {  
        CKPrint("Couldn't create audio input");  
        return ;  
    }  
      
      
    // add mic device in capture object  
    if ([m_capture canAddInput:audioIn] == NO)  
    {  
        CKPrint("Couldn't add audio input")  
        return ;  
    }  
    [m_capture addInput:audioIn];  
    // export audio data  
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];  
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];  
    if ([m_capture canAddOutput:audioOutput] == NO)  
    {  
        CKPrint("Couldn't add audio output");  
        return ;  
    }  
    [m_capture addOutput:audioOutput];  
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];  
    [m_capture startRunning];  
    return ;  
} 
-(void)close {  
    if (m_capture != nil && [m_capture isRunning])  
    {  
        [m_capture stopRunning];  
    }  
      
    return;  
}  
-(BOOL)isOpen {  
    if (m_capture == nil)  
    {  
        return NO;  
    }  
      
    return [m_capture isRunning];  
}  

With these three methods in place, all the preparation for microphone capture is done; now we just wait for the data to be delivered to us. For that to happen, one more delegate method still has to be implemented:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {  
        char szBuf[4096];  
        int  nSize = sizeof(szBuf);  
          
    #if SUPPORT_AAC_ENCODER  
        if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)  
        {  
            [g_pViewController sendAudioData:szBuf len:nSize channel:0];  
        }  
    #else //#if SUPPORT_AAC_ENCODER  
        AudioStreamBasicDescription outputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer)));  
        nSize = CMSampleBufferGetTotalSampleSize(sampleBuffer);  
        CMBlockBufferRef databuf = CMSampleBufferGetDataBuffer(sampleBuffer);  
        if (CMBlockBufferCopyDataBytes(databuf, 0, nSize, szBuf) == kCMBlockBufferNoErr)  
        {  
            [g_pViewController sendAudioData:szBuf len:nSize channel:outputFormat.mChannelsPerFrame];  
        }  
    #endif  
    }  

At this point our work is mostly done; the data we have captured is raw PCM.

PCM data is fairly bulky, though, which makes it a poor fit for network transmission, so when the data has to travel over the network it needs to be encoded first. iOS natively supports several audio codecs; here we take AAC as the example and implement a PCM-to-AAC encoding function.

Examples of PCM-to-AAC encoding on iOS are easy to find online, but most are incomplete, and a good share are English-only, which some readers find heavy going. So I have played the good Samaritan and tidied them up into a single, easy-to-use function.

Before encoding, we first need to create a converter object:

AudioConverterRef m_converter;
#if SUPPORT_AAC_ENCODER  
-(BOOL)createAudioConvert:(CMSampleBufferRef)sampleBuffer { // initialize a converter from the format of the input samples  
    if (m_converter != nil)  
    {  
        return TRUE;  
    }  
      
    AudioStreamBasicDescription inputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer))); // input audio format  
    AudioStreamBasicDescription outputFormat; // the output audio format starts here  
    memset(&outputFormat, 0, sizeof(outputFormat));  
    outputFormat.mSampleRate       = inputFormat.mSampleRate; // keep the sample rate unchanged  
    outputFormat.mFormatID         = kAudioFormatMPEG4AAC;    // AAC encoding  
    outputFormat.mChannelsPerFrame = 2;  
    outputFormat.mFramesPerPacket  = 1024;                    // one AAC packet covers 1024 frames (samples), not bytes  
      
    AudioClassDescription *desc = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC fromManufacturer:kAppleSoftwareAudioCodecManufacturer];  
    if (AudioConverterNewSpecific(&inputFormat, &outputFormat, 1, desc, &m_converter) != noErr)  
    {  
        CKPrint(@"AudioConverterNewSpecific failed");  
        return NO;  
    }  
  
    return YES;  
} 
-(BOOL)encoderAAC:(CMSampleBufferRef)sampleBuffer aacData:(char*)aacData aacLen:(int*)aacLen { // encode PCM to AAC  
    if ([self createAudioConvert:sampleBuffer] != YES)  
    {  
        return NO;  
    }  
      
    CMBlockBufferRef blockBuffer = nil;  
    AudioBufferList  inBufferList;  
    if (CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inBufferList, sizeof(inBufferList), NULL, NULL, 0, &blockBuffer) != noErr)  
    {  
        CKPrint(@"CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer failed");  
        return NO;  
    }  
    // initialize an output buffer list  
    AudioBufferList outBufferList;  
    outBufferList.mNumberBuffers              = 1;  
    outBufferList.mBuffers[0].mNumberChannels = 2;  
    outBufferList.mBuffers[0].mDataByteSize   = *aacLen; // set the buffer size  
    outBufferList.mBuffers[0].mData           = aacData; // set the AAC output buffer  
    UInt32 outputDataPacketSize               = 1;  
    if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr)  
    {  
        CKPrint(@"AudioConverterFillComplexBuffer failed");  
        return NO;  
    }  
      
    *aacLen = outBufferList.mBuffers[0].mDataByteSize; // report the encoded AAC size  
    CFRelease(blockBuffer);  
    return YES;  
}  
-(AudioClassDescription*)getAudioClassDescriptionWithType:(UInt32)type fromManufacturer:(UInt32)manufacturer { // look up the matching encoder  
    static AudioClassDescription audioDesc;  
      
    UInt32 encoderSpecifier = type, size = 0;  
    OSStatus status;  
      
    memset(&audioDesc, 0, sizeof(audioDesc));  
    status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size);  
    if (status)  
    {  
        return nil;  
    }  
      
    uint32_t count = size / sizeof(AudioClassDescription);  
    AudioClassDescription descs[count];  
    status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size, descs);  
    for (uint32_t i = 0; i < count; i++)  
    {  
        if ((type == descs[i].mSubType) && (manufacturer == descs[i].mManufacturer))  
        {  
            memcpy(&audioDesc, &descs[i], sizeof(audioDesc));  
            break;  
        }  
    }  
    return &audioDesc;  
}  
OSStatus inputDataProc(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData) { // during AudioConverterFillComplexBuffer, this callback is invoked to supply the input data, i.e. the raw PCM  
    AudioBufferList bufferList = *(AudioBufferList*)inUserData;  
    ioData->mBuffers[0].mNumberChannels = 1;  
    ioData->mBuffers[0].mData           = bufferList.mBuffers[0].mData;  
    ioData->mBuffers[0].mDataByteSize   = bufferList.mBuffers[0].mDataByteSize;  
    return noErr;  
}  
#endif 

And that's it; a single function call takes care of everything. When you need AAC encoding, just call the encoderAAC function (the complete code is above):

    char szBuf[4096];  
    int  nSize = sizeof(szBuf);  
    if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)  
    {  
        // do something   
    }  
最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時(shí)請(qǐng)結(jié)合常識(shí)與多方信息審慎甄別。
平臺(tái)聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點(diǎn),簡(jiǎn)書系信息發(fā)布平臺(tái),僅提供信息存儲(chǔ)服務(wù)。

相關(guān)閱讀更多精彩內(nèi)容

友情鏈接更多精彩內(nèi)容