Overview
In live-streaming app development we often need to process audio in real time: recording, playback, encoding, and so on. This article covers doing all of the above with AudioQueue.
PCM and AAC are two different audio formats: PCM is raw, lossless audio data, while AAC is compressed, encoded data. Before introducing AudioQueue, let's first get a rough picture of both formats. For audio fundamentals, see the earlier article "Audio Fundamentals".
Contents:
- AAC audio
- Recording raw PCM frames with AudioQueue
- Playing a PCM file with AudioQueue
- Recording PCM with AudioQueue while converting it to AAC and saving it to the sandbox, then inspecting and playing the .aac file with ffmpeg
Sample code:
Code structure:
AAC Audio
Since we will be converting PCM to AAC, let's first get a rough idea of what AAC looks like.
AAC audio files come in two formats: ADIF and ADTS.
ADIF and ADTS
- ADIF (Audio Data Interchange Format). The defining feature of this format is that the start of the audio data can be located deterministically; decoding cannot begin in the middle of the stream but must start at the explicitly defined beginning. This format is therefore typically used for files on disk.
- ADTS (Audio Data Transport Stream). The defining feature of this format is that it is a bitstream with sync words, so decoding can begin at any point in the stream. It is similar in character to the MP3 stream format.
In short, ADTS can be decoded starting from any frame, because every frame carries its own header. ADIF has a single header at the top, so decoding requires the complete data. The two header layouts also differ. Audio streams produced by encoders, or extracted from containers, are generally in ADTS format.
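Because every ADTS frame begins with the 12-bit syncword, a decoder can resynchronize from an arbitrary position simply by scanning for it. A minimal sketch in C (the function name is illustrative; a real parser would also validate the header fields and frame length that follow):

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a byte stream for the next ADTS syncword (twelve 1-bits).
 * Returns the offset of the first header byte, or -1 if none is found.
 * A robust parser would also verify the rest of the header. */
long adts_find_sync(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++) {
        if (buf[i] == 0xFF && (buf[i + 1] & 0xF0) == 0xF0)
            return (long)i;
    }
    return -1;
}
```

This is exactly the property ADIF lacks: with only one header at the start of the file, there is no recurring pattern to resynchronize on.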
ADIF format:
ADTS format:
ADIF and ADTS header information
The ADIF header:
The ADIF header sits at the very beginning of the AAC file, followed by consecutive raw data blocks. The fields that make up the ADIF header are shown below:
The ADTS fixed header:
The ADTS variable header:
- The purpose of frame synchronization is to locate the frame header in the bitstream. ISO/IEC 13818-7 specifies that in an AAC ADTS frame header the syncword is the 12-bit pattern "1111 1111 1111".
- The ADTS header consists of two parts: the fixed header, immediately followed by the variable header. The fields of the fixed header are identical in every frame, while the variable header can change from frame to frame.
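To make the fixed/variable header layout concrete, here is a minimal C sketch that pulls the most useful fields out of a 7-byte (CRC-absent) ADTS header. Bit offsets follow ISO/IEC 13818-7; the struct and function names are my own, not from any library:

```c
#include <stdint.h>

/* Decoded subset of the ADTS fixed + variable header.
 * Layout per ISO/IEC 13818-7; no CRC handling in this sketch. */
typedef struct {
    int mpeg_version;   /* 0 = MPEG-4, 1 = MPEG-2 */
    int profile;        /* 1 = AAC Main, 2 = AAC LC, ... (stored value + 1) */
    int sampling_index; /* index into the sampling-frequency table (3 = 48 kHz) */
    int channel_config; /* 1 = mono, 2 = stereo, ... */
    int frame_length;   /* header + raw data, in bytes (13-bit field) */
} adts_header_t;

/* Parse the first 7 bytes of an ADTS frame.
 * Returns 0 on success, -1 if the syncword does not match. */
int adts_parse_header(const uint8_t h[7], adts_header_t *out)
{
    if (h[0] != 0xFF || (h[1] & 0xF0) != 0xF0)
        return -1;
    out->mpeg_version   = (h[1] >> 3) & 0x1;
    out->profile        = ((h[2] >> 6) & 0x3) + 1;
    out->sampling_index = (h[2] >> 2) & 0xF;
    out->channel_config = ((h[2] & 0x1) << 2) | ((h[3] >> 6) & 0x3);
    out->frame_length   = ((h[3] & 0x3) << 11) | (h[4] << 3) | ((h[5] >> 5) & 0x7);
    return 0;
}
```

Note that frame_length counts the header itself, which is why it sits in the variable header: it changes with every frame's payload size.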
AAC element information
In AAC, a raw data block can be composed of the following element types:
- SCE: Single Channel Element. Consists essentially of a single ICS (individual channel stream). A raw data block may contain at most 16 SCEs.
- CPE: Channel Pair Element. Consists of two ICSs that may share side information, plus some joint-stereo coding information. A raw data block may contain at most 16 CPEs.
- CCE: Coupling Channel Element. Represents joint-stereo information for a block across multiple channels, or dialogue information for multilingual programs.
- LFE: Low Frequency Element. Contains a channel for low-frequency enhancement at a reduced sample rate.
- DSE: Data Stream Element. Contains additional information that is not audio.
- PCE: Program Config Element. Contains the channel configuration; it may appear in the ADIF header.
- FIL: Fill Element. Contains extension information such as SBR and dynamic range control data.
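Within a raw_data_block, each of these elements is introduced by a 3-bit id_syn_ele field. As a quick reference, the mapping can be sketched as a small lookup (values per ISO/IEC 13818-7; ID 7 is the terminator that ends the block):

```c
/* 3-bit id_syn_ele values that introduce each syntactic element in an
 * AAC raw_data_block (ISO/IEC 13818-7). ID 7 (END) terminates the block. */
static const char *aac_element_name(int id_syn_ele)
{
    switch (id_syn_ele) {
    case 0:  return "SCE"; /* single channel element   */
    case 1:  return "CPE"; /* channel pair element     */
    case 2:  return "CCE"; /* coupling channel element */
    case 3:  return "LFE"; /* low frequency element    */
    case 4:  return "DSE"; /* data stream element      */
    case 5:  return "PCE"; /* program config element   */
    case 6:  return "FIL"; /* fill element             */
    case 7:  return "END"; /* terminator               */
    default: return "invalid";
    }
}
```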
Recording PCM Audio with AudioQueue
What AudioQueue is will not be covered here; see Apple's official documentation (or its translation): Audio Queue Services Programming Guide.
Setting the recording audio format
Audio format: PCM; sample rate: 48 kHz; channels per frame: 1; bits per channel sample: 16. The code:
- (void)settingAudioFormat
{
/*** set the sample rate, channel count, and format ID ***/
memset(&dataFormat, 0, sizeof(dataFormat));
UInt32 size = sizeof(dataFormat.mSampleRate);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &dataFormat.mSampleRate);
dataFormat.mSampleRate = kAudioSampleRate;
size = sizeof(dataFormat.mChannelsPerFrame);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &dataFormat.mChannelsPerFrame);
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mChannelsPerFrame = 1;
dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
dataFormat.mBitsPerChannel = 16;
dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
dataFormat.mFramesPerPacket = kAudioFramesPerPacket; // frames per packet; for linear PCM this must be 1
}
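The derived fields set above follow mechanically from the format: bytesPerFrame = (bitsPerChannel / 8) × channelsPerFrame, and multiplying by the sample rate gives the raw data rate. A small standalone check of the 48 kHz / mono / 16-bit values used here (plain C; the names are illustrative, not CoreAudio API):

```c
#include <stdint.h>

/* Derive the linear-PCM layout fields that settingAudioFormat fills in.
 * For packed interleaved PCM, one frame = one sample per channel. */
typedef struct {
    uint32_t bytes_per_frame;
    uint32_t bytes_per_second;
} pcm_layout_t;

pcm_layout_t pcm_layout(uint32_t sample_rate, uint32_t channels,
                        uint32_t bits_per_channel)
{
    pcm_layout_t l;
    l.bytes_per_frame  = (bits_per_channel / 8) * channels;
    l.bytes_per_second = sample_rate * l.bytes_per_frame;
    return l;
}
```

For 48 kHz mono 16-bit PCM this gives 2 bytes per frame and 96,000 bytes per second, which is also the rate at which the recorded file will grow.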
Creating the AudioQueue
extern OSStatus
AudioQueueNewInput( const AudioStreamBasicDescription *inFormat,
AudioQueueInputCallback inCallbackProc,
void * __nullable inUserData,
CFRunLoopRef __nullable inCallbackRunLoop,
CFStringRef __nullable inCallbackRunLoopMode,
UInt32 inFlags,
AudioQueueRef __nullable * __nonnull outAQ);
- inFormat: the format of the audio to record, an instance of AudioStreamBasicDescription (the struct CoreAudio uses to describe an audio format).
- inCallbackProc: a callback invoked whenever a buffer has been filled with recorded data.
- inUserData: an arbitrary pointer handed through to the callback (here we pass self).
- inCallbackRunLoop: the run loop on which to invoke inCallbackProc. Pass NULL to have the callback invoked on one of the audio queue's internal threads; NULL is the usual choice.
- inCallbackRunLoopMode: the run-loop mode; passing NULL is equivalent to kCFRunLoopCommonModes and is also the usual choice.
- inFlags: reserved, pass 0.
- outAQ: on output, the newly created AudioQueue instance. The return value tells you whether creation succeeded (OSStatus == noErr).
The following code creates the recording AudioQueue:
- (void)settingCallBackFunc
{
/*** set up the recording callback ***/
OSStatus status = 0;
status = AudioQueueNewInput(&dataFormat, inputAudioQueueBufferHandler, (__bridge void *)self, NULL, NULL, 0, &mQueue);
if (status != noErr) {
NSLog(@"AppRecordAudio,%s,AudioQueueNewInput failed status:%d ",__func__,(int)status);
}
for (int i = 0 ; i < kQueueBuffers; i++) {
status = AudioQueueAllocateBuffer(mQueue, kAudioPCMTotalPacket * kAudioBytesPerPacket * dataFormat.mChannelsPerFrame, &mBuffers[i]);
status = AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
}
}
The recording callback
In this callback we write the PCM data directly into the sandbox (only the first 800 buffers; once that count is reached, recording stops).
/*!
 @discussion
 AudioQueue recording callback.
 @param inAQ
 The audio queue that invoked the callback.
 @param inBuffer
 An audio queue buffer, newly filled by the audio queue, containing the new data the callback needs to write to the file.
 @param inStartTime
 The reference time of the first sample in the buffer.
 @param inNumberPacketDescriptions
 The number of packet descriptions in inPacketDescs. When recording a VBR (variable bitrate) format, the audio queue supplies this value, which you can pass on to AudioFileWritePackets. CBR (constant bitrate) formats do not use packet descriptions; for CBR recordings the audio queue sets this parameter to 0 and inPacketDescs to NULL.
 */
static void inputAudioQueueBufferHandler(void * __nullable inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription * __nullable inPacketDescs)
{
if (!inUserData) {
NSLog(@"AppRecordAudio,%s,inUserData is null",__func__);
return;
}
NSLog(@"%s, audio length: %d",__func__,inBuffer->mAudioDataByteSize);
static int createCount = 0;
static FILE *fp_pcm = NULL;
if (createCount == 0) {
NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *debugUrl = [paths stringByAppendingPathComponent:@"debug"] ;
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager createDirectoryAtPath:debugUrl withIntermediateDirectories:YES attributes:nil error:nil];
NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_pcm_48k.pcm"] ;
fp_pcm = fopen([audioFile UTF8String], "wb+");
}
createCount++;
MIAudioQueueRecord *miAQ = (__bridge MIAudioQueueRecord *)inUserData;
if (createCount <= 800) {
void *bufferData = inBuffer->mAudioData;
UInt32 buffersize = inBuffer->mAudioDataByteSize;
fwrite((uint8_t *)bufferData, 1, buffersize, fp_pcm);
}else{
fclose(fp_pcm);
NSLog(@"AudioQueue, close PCM file ");
[miAQ stopRecorder];
createCount = 0;
}
if (miAQ.m_isRunning) {
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
}
Starting and stopping recording
Starting:
- (void)startRecorder
{
[self createAudioSession];
[self settingAudioFormat];
[self settingCallBackFunc];
if (self.m_isRunning) {
return;
}
/*** start audioQueue ***/
OSStatus status = AudioQueueStart(mQueue, NULL);
if (status != noErr) {
NSLog(@"AppRecordAudio,%s,AudioQueueStart failed status:%d ",__func__,(int)status);
}
self.m_isRunning = YES;
}
Stopping:
- (void)stopRecorder
{
if (!self.m_isRunning) {
return;
}
self.m_isRunning = NO;
if (mQueue) {
OSStatus stopRes = AudioQueueStop(mQueue, true);
if (stopRes == noErr) {
for (int i = 0; i < kQueueBuffers; i++) {
AudioQueueFreeBuffer(mQueue, mBuffers[i]);
}
}else{
NSLog(@"AppRecordAudio,%s,stop AudioQueue failed. ",__func__);
}
AudioQueueDispose(mQueue, true);
mQueue = NULL;
}
}
Playing the recording with ffplay
At this point, go into the sandbox directory and play the recorded PCM audio with the ffplay command below; later we will play the same PCM file with AudioQueue instead.
ffplay -f s16le -ar 48000 -ac 1 queue_pcm_48k.pcm
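ffplay needs the format spelled out (-f s16le -ar 48000 -ac 1) because raw PCM carries no header; by the same token, the clip's duration is just the file size divided by the byte rate. A quick helper (illustrative C, not part of the sample project):

```c
/* Duration in seconds of a raw PCM file:
 * size / (sample_rate * channels * bytes per sample).
 * Raw PCM has no header, so the file size is all the information there is. */
double pcm_duration_sec(long file_bytes, int sample_rate,
                        int channels, int bits_per_sample)
{
    double byte_rate = (double)sample_rate * channels * (bits_per_sample / 8);
    return file_bytes / byte_rate;
}
```

A 960,000-byte file recorded at 48 kHz / mono / 16-bit is therefore exactly 10 seconds long.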
Playing a PCM File with AudioQueue
In this section we play back the PCM file we just recorded above.
Setting the playback audio format
- (void)settingAudioFormat
{
/*** set the sample rate, channel count, and format ID ***/
memset(&dataFormat, 0, sizeof(dataFormat));
UInt32 size = sizeof(dataFormat.mSampleRate);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &dataFormat.mSampleRate);
dataFormat.mSampleRate = kAudioSampleRate;
size = sizeof(dataFormat.mChannelsPerFrame);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &dataFormat.mChannelsPerFrame);
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mChannelsPerFrame = 1;
dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
dataFormat.mBitsPerChannel = 16;
dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
dataFormat.mFramesPerPacket = kAudioFramesPerPacket; // frames per packet; for linear PCM this must be 1
}
Creating the playback AudioQueue and setting its callback
- (void)settingCallBackFunc
{
/*** set up the playback callback ***/
OSStatus status = 0;
status = AudioQueueNewOutput(&dataFormat,
miAudioPlayCallBack,
(__bridge void *)self,
NULL,
NULL,
0,
&mQueue);
if (status != noErr) {
NSLog(@"AppRecordAudio,%s, AudioQueueNewOutput failed status:%d",__func__,(int)status);
}
for (int i = 0 ; i < kQueueBuffers; i++) {
status = AudioQueueAllocateBuffer(mQueue, kAudioPCMTotalPacket * kAudioBytesPerPacket * dataFormat.mChannelsPerFrame, &mBuffers[i]);
status = AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
}
}
Reading audio data from the PCM file
- (void)initPlayedFile
{
NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_pcm_48k.pcm"] ;
NSFileManager *manager = [NSFileManager defaultManager];
NSLog(@"file exist = %d",[manager fileExistsAtPath:audioFile]);
NSLog(@"file size = %lld",[[manager attributesOfItemAtPath:audioFile error:nil] fileSize]) ;
file = fopen([audioFile UTF8String], "rb");
if(file)
{
fseek(file, 0, SEEK_SET);
pcmDataBuffer = malloc(1024);
}
else{
NSLog(@"failed to open PCM file");
}
synlock = [[NSLock alloc] init];
}
Feeding the PCM data to AudioQueue buffers for playback
- (void)startPlay
{
[self initPlayedFile];
[self createAudioSession];
[self settingAudioFormat];
[self settingCallBackFunc];
AudioQueueStart(mQueue, NULL);
for (int i = 0; i < kQueueBuffers; i++) {
[self readPCMAndPlay:mQueue buffer:mBuffers[i]];
}
}
-(void)readPCMAndPlay:(AudioQueueRef)outQ buffer:(AudioQueueBufferRef)outQB
{
[synlock lock];
int readLength = (int)fread(pcmDataBuffer, 1, 1024, file); // read PCM from the file
NSLog(@"read raw data size = %d",readLength);
outQB->mAudioDataByteSize = readLength;
memcpy(outQB->mAudioData, pcmDataBuffer, readLength);
/*
 Enqueue the filled buffer on the audio queue for playback.
 AudioQueueBufferRef caches the data waiting to be played; its two key fields are
 mAudioDataByteSize, which gives the size of the data, and mAudioData, which holds the data itself.
 */
AudioQueueEnqueueBuffer(outQ, outQB, 0, NULL);
[synlock unlock];
}
Running
As shown above, tapping Play plays back the most recently recorded PCM audio.
Encoding PCM to AAC in Real Time with AudioQueue and Saving It to the Sandbox
In this section we start an AudioQueue recording PCM audio, create a converter that transcodes the PCM to AAC on the fly, and save the resulting AAC to the sandbox. The flow is roughly as follows:
The sample rate and other parameters stay the same as in the recording and playback sections above: 48 kHz.
Setting the input/output encoding parameters and creating the converter
- (void)settingInputAudioFormat
{
/*** set the sample rate, channel count, and format ID ***/
memset(&inAudioStreamDes, 0, sizeof(inAudioStreamDes));
UInt32 size = sizeof(inAudioStreamDes.mSampleRate);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &inAudioStreamDes.mSampleRate);
inAudioStreamDes.mSampleRate = kAudioSampleRate;
size = sizeof(inAudioStreamDes.mChannelsPerFrame);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &inAudioStreamDes.mChannelsPerFrame);
inAudioStreamDes.mFormatID = kAudioFormatLinearPCM;
inAudioStreamDes.mChannelsPerFrame = 1;
inAudioStreamDes.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
inAudioStreamDes.mBitsPerChannel = 16;
inAudioStreamDes.mBytesPerPacket = inAudioStreamDes.mBytesPerFrame = (inAudioStreamDes.mBitsPerChannel / 8) * inAudioStreamDes.mChannelsPerFrame;
inAudioStreamDes.mFramesPerPacket = kAudioFramesPerPacket; // frames per packet; for linear PCM this must be 1
}
- (void)settingDestAudioStreamDescription
{
outAudioStreamDes.mSampleRate = kAudioSampleRate;
outAudioStreamDes.mFormatID = kAudioFormatMPEG4AAC;
outAudioStreamDes.mBytesPerPacket = 0;
outAudioStreamDes.mFramesPerPacket = 1024;
outAudioStreamDes.mBytesPerFrame = 0;
outAudioStreamDes.mChannelsPerFrame = 1;
outAudioStreamDes.mBitsPerChannel = 0;
outAudioStreamDes.mReserved = 0;
AudioClassDescription *des = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
OSStatus status = AudioConverterNewSpecific(&inAudioStreamDes, &outAudioStreamDes, 1, des, &miAudioConvert);
if (status != 0) {
NSLog(@"create convert failed...\n");
}
UInt32 targetSize = sizeof(outAudioStreamDes);
UInt32 bitRate = 64000;
targetSize = sizeof(bitRate);
status = AudioConverterSetProperty(miAudioConvert,
kAudioConverterEncodeBitRate,
targetSize, &bitRate);
if (status != noErr) {
NSLog(@"set bitrate error...");
return;
}
}
Obtaining the codec
/**
 * Obtain the codec.
 * @param type the encoding format
 * @param manufacturer software or hardware codec
 * @return the matching codec description
 */
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
fromManufacturer:(UInt32)manufacturer
{
static AudioClassDescription desc;
UInt32 encoderSpecifier = type;
OSStatus st;
UInt32 size;
// get info about the given property
st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size);
if (st) {
NSLog(@"error getting audio format property info: %d", (int)(st));
return nil;
}
unsigned int count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
// get the given property's data
st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size,
descriptions);
if (st) {
NSLog(@"error getting audio format property: %d", (int)(st));
return nil;
}
for (unsigned int i = 0; i < count; i++) {
if ((type == descriptions[i].mSubType) &&
(manufacturer == descriptions[i].mManufacturer)) {
memcpy(&desc, &(descriptions[i]), sizeof(desc));
return &desc;
}
}
return nil;
}
Filling the converter buffer with PCM
/**
 * Fill PCM into the buffer.
 */
- (size_t) copyPCMSamplesIntoBuffer:(AudioBufferList*)ioData {
size_t originalBufferSize = _pcmBufferSize;
if (!originalBufferSize) {
return 0;
}
ioData->mBuffers[0].mData = _pcmBuffer;
ioData->mBuffers[0].mDataByteSize = (int)_pcmBufferSize;
_pcmBuffer = NULL;
_pcmBufferSize = 0;
return originalBufferSize;
}
Computing the ADTS header
The code below builds the ADTS header for a 48 kHz, mono stream:
- (NSData*)adtsDataForPacketLength:(NSUInteger)packetLength {
int adtsLength = 7;
char *packet = malloc(sizeof(char) * adtsLength);
int profile = 2;  // AAC LC
int freqIdx = 3;  // 48 kHz
int chanCfg = 1;  // MPEG-4 channel configuration: 1 = mono (front-center)
NSUInteger fullLength = adtsLength + packetLength;
// fill in ADTS data
packet[0] = (char)0xFF; // 11111111 = syncword
packet[1] = (char)0xF9; // 1111 1 00 1 = rest of syncword, MPEG-2 ID, layer 00, no CRC
packet[2] = (char)(((profile-1)<<6) + (freqIdx<<2) +(chanCfg>>2));
packet[3] = (char)(((chanCfg&3)<<6) + (fullLength>>11));
packet[4] = (char)((fullLength&0x7FF) >> 3);
packet[5] = (char)(((fullLength&7)<<5) + 0x1F);
packet[6] = (char)0xFC;
NSData *data = [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
return data;
}
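Bit-packing like this is easy to get wrong, so it is worth round-tripping: the C sketch below repeats the same packing (same profile/freqIdx/chanCfg constants as above) and then decodes the 13-bit frame-length field back out of the bytes it produced. Names here are my own, not from the sample project:

```c
#include <stddef.h>
#include <stdint.h>

/* Same ADTS packing as adtsDataForPacketLength: AAC LC (profile 2),
 * 48 kHz (freqIdx 3), 1 channel (chanCfg 1), MPEG-2 ID, no CRC. */
void adts_pack(uint8_t out[7], size_t packet_length)
{
    const int profile = 2, freq_idx = 3, chan_cfg = 1;
    size_t full = packet_length + 7; /* frame length includes the header */
    out[0] = 0xFF;
    out[1] = 0xF9;
    out[2] = (uint8_t)(((profile - 1) << 6) | (freq_idx << 2) | (chan_cfg >> 2));
    out[3] = (uint8_t)(((chan_cfg & 3) << 6) | (full >> 11));
    out[4] = (uint8_t)((full & 0x7FF) >> 3);
    out[5] = (uint8_t)(((full & 7) << 5) | 0x1F);
    out[6] = 0xFC;
}

/* Inverse: recover the 13-bit frame length from a packed header. */
size_t adts_frame_length(const uint8_t h[7])
{
    return ((size_t)(h[3] & 0x3) << 11) | ((size_t)h[4] << 3) | (h[5] >> 5);
}
```

Decoding the length back should always yield packet length + 7, since the ADTS frame length counts the header itself.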
Converting PCM to AAC and writing the AAC to the sandbox
static int initTime = 0;
- (void)encodePCMToAAC:(MIAudioQueueConvert *)convert
{
if (initTime == 0) {
initTime = 1;
[self settingDestAudioStreamDescription];
}
OSStatus status;
memset(_aacBuffer, 0, _aacBufferSize);
AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = outAudioStreamDes.mChannelsPerFrame;
bufferList->mBuffers[0].mData = _aacBuffer;
bufferList->mBuffers[0].mDataByteSize = (int)_aacBufferSize;
AudioStreamPacketDescription outputPacketDescriptions;
UInt32 inNumPackets = 1;
status = AudioConverterFillComplexBuffer(miAudioConvert,
pcmEncodeConverterInputCallback,
(__bridge void *)(self),
&inNumPackets,
bufferList,
&outputPacketDescriptions);
if (status == noErr) {
static int createCount = 0;
static FILE *fp_aac = NULL;
if (createCount == 0) {
NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *debugUrl = [paths stringByAppendingPathComponent:@"debug"] ;
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager createDirectoryAtPath:debugUrl withIntermediateDirectories:YES attributes:nil error:nil];
NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_aac_48k.aac"] ;
fp_aac = fopen([audioFile UTF8String], "wb+");
}
createCount++;
if (createCount <= 800) {
NSData *rawAAC = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
NSData *adtsHeader = [self adtsDataForPacketLength:rawAAC.length];
NSMutableData *fullData = [NSMutableData dataWithData:adtsHeader];
[fullData appendData:rawAAC];
fwrite(fullData.bytes, 1, fullData.length, fp_aac);
}else{
fclose(fp_aac);
NSLog(@"AudioQueue, close aac file ");
[self stopRecorder];
createCount = 0;
}
}
free(bufferList);
}
Playing the result with ffplay
The ffplay command:
ffplay -ar 48000 queue_aac_48k.aac