Extracting PCM data from a CMSampleBufferRef
Pulse-code modulation (PCM) converts an irregular analog signal into a digital one so that it can be stored on physical media.
Sound is itself an analog signal in a particular frequency range (20 Hz–20 kHz), so the same technique lets us digitize it and save it.
PCM is the rawest format in which recorded sound is stored. A WAV file, for example, is simply a PCM stream with a header prepended. WAV is sometimes called a lossless format precisely because it stores the original PCM data (subject, of course, to the sample rate and bit depth used). The familiar formats such as MP3 and AAC are lossy: to save space they compress as aggressively as possible while sacrificing as little audible quality as they can.
Every audio encoder accepts PCM input, and recorded audio is PCM by default, so the next step is to extract the recorded PCM data.
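To make the "WAV = header + PCM" point concrete, here is a minimal C sketch that writes the canonical 44-byte WAV header in front of a 16-bit PCM stream. The function name `write_wav_header` is ours, and the `memcpy` of integer fields assumes a little-endian host (x86/ARM), since WAV fields are little-endian on disk:

```c
#include <stdint.h>
#include <string.h>

/* Sketch: fill a 44-byte canonical WAV header for 16-bit PCM.
   pcm_bytes is the size of the PCM payload that will follow. */
static void write_wav_header(uint8_t *h, uint32_t sample_rate,
                             uint16_t channels, uint32_t pcm_bytes) {
    uint16_t bits = 16;
    uint32_t byte_rate = sample_rate * channels * bits / 8;
    uint16_t block_align = channels * bits / 8;
    uint32_t riff_size = 36 + pcm_bytes;        /* total file size minus 8 */
    uint32_t fmt_size = 16;                     /* PCM "fmt " chunk size */
    uint16_t fmt_pcm = 1;                       /* audio format 1 = PCM */

    memcpy(h,      "RIFF", 4);
    memcpy(h + 4,  &riff_size, 4);              /* little-endian host assumed */
    memcpy(h + 8,  "WAVEfmt ", 8);
    memcpy(h + 16, &fmt_size, 4);
    memcpy(h + 20, &fmt_pcm, 2);
    memcpy(h + 22, &channels, 2);
    memcpy(h + 24, &sample_rate, 4);
    memcpy(h + 28, &byte_rate, 4);
    memcpy(h + 32, &block_align, 2);
    memcpy(h + 34, &bits, 2);
    memcpy(h + 36, "data", 4);
    memcpy(h + 40, &pcm_bytes, 4);
}
```

Write these 44 bytes, then the raw PCM data, and most players will open the result as a .wav file.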
- (NSData *)convertAudioSampleBufferToPcmData:(CMSampleBufferRef)audioSample {
    // Read the stream description (sample rate, channel count, format flags, ...)
    AudioStreamBasicDescription asbd = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(audioSample));
    // Get the CMBlockBufferRef that holds the raw bytes
    CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(audioSample);
    // Size of the PCM payload
    size_t length = CMBlockBufferGetDataLength(blockBufferRef);
    // Allocate a heap buffer and copy the bytes into it
    char *buffer = malloc(length);
    CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, buffer);
    // If the 16-bit samples are big-endian, swap each pair of bytes to little-endian
    if ((asbd.mFormatFlags & kAudioFormatFlagIsBigEndian) == kAudioFormatFlagIsBigEndian) {
        for (size_t i = 0; i + 1 < length; i += 2) {
            char tmp = buffer[i];
            buffer[i] = buffer[i + 1];
            buffer[i + 1] = tmp;
        }
    }
    // Hand the malloc'd buffer to NSData, which frees it when done
    return [NSData dataWithBytesNoCopy:buffer length:length freeWhenDone:YES];
}
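The byte-swap loop in the method above can be exercised on its own. A minimal C sketch (the function name `swap16_buffer` is ours) that converts big-endian 16-bit samples to little-endian in place:

```c
#include <stddef.h>

/* Swap each 16-bit sample's two bytes in place, mirroring the
   big-endian -> little-endian loop in the extraction method. */
static void swap16_buffer(char *buffer, size_t length) {
    for (size_t i = 0; i + 1 < length; i += 2) {
        char tmp = buffer[i];
        buffer[i] = buffer[i + 1];
        buffer[i + 1] = tmp;
    }
}
```

Running it twice restores the original byte order, since the swap is its own inverse.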
Filling a CMSampleBufferRef with PCM
The bit depth tells us how many bytes one sample point occupies; with 16-bit samples each point takes 2 bytes, so the amount of data needed for 200 ms is:
//number of sample points in 200 ms (across all channels)
NSUInteger samples = self->mSampleRate * 200 * self->mChannelsPerFrame / 1000;
//bytes of PCM in 200 ms (2 bytes per 16-bit sample point)
int len = samples * 2;
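Plugging in concrete numbers makes the calculation easy to check. A small C sketch (the helper name `pcm_bytes_for_ms` is ours; 44.1 kHz stereo is an assumed example input):

```c
/* Bytes of 16-bit PCM needed for `ms` milliseconds of audio,
   mirroring the two-line calculation above. */
static unsigned pcm_bytes_for_ms(unsigned sampleRate, unsigned channels, unsigned ms) {
    unsigned samples = sampleRate * ms * channels / 1000;  /* sample points */
    return samples * 2;                                    /* 2 bytes per point */
}
```

For 200 ms of 44.1 kHz stereo this gives 17640 sample points and 35280 bytes.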
Example code for filling a CMSampleBufferRef with PCM:
- (CMSampleBufferRef)createAudioSampleBuffer:(char *)buf withLen:(int)len withASBD:(AudioStreamBasicDescription)asbd {
    // Copy the PCM bytes into a heap buffer for the AudioBufferList to point at
    char *tmp = malloc(len);
    memcpy(tmp, buf, len);

    AudioBufferList audioData;
    audioData.mNumberBuffers = 1;
    audioData.mBuffers[0].mData = tmp;
    audioData.mBuffers[0].mNumberChannels = asbd.mChannelsPerFrame;
    audioData.mBuffers[0].mDataByteSize = len;

    // Build a format description from the ASBD
    CMFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) {
        free(tmp);
        return NULL;
    }

    // Each sample lasts 1/mSampleRate s; the buffer holds len / mBytesPerFrame frames
    CMItemCount frames = len / asbd.mBytesPerFrame;
    CMSampleTimingInfo timing = { CMTimeMake(1, asbd.mSampleRate), kCMTimeZero, kCMTimeInvalid };
    CMSampleBufferRef buff = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, frames, 1, &timing, 0, NULL, &buff);
    if (status != noErr) { // creation failed
        free(tmp);
        CFRelease(format);
        return NULL;
    }

    // Attach the PCM bytes; the sample buffer copies them into its own block buffer
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buff, kCFAllocatorDefault, kCFAllocatorDefault, 0, &audioData);
    free(tmp);
    CFRelease(format);
    if (status != noErr) {
        CFRelease(buff);
        return NULL;
    }
    return buff;
}