I've recently been working on real-time voice over UDP. The transport uses GCDAsyncUdpSocket, the audio layer uses AudioUnit, and the record and playback callbacks push samples through a TPCircularBuffer ring buffer. Both the recorded audio and the voice data received from the server are raw PCM at an 8 kHz sample rate, one frame every 20 ms. Raw PCM frames are large and waste bandwidth, so we encode and decode with Opus. After encoding, the data shrinks dramatically: frames that used to be 512 bytes come out at around 20 bytes, and the encoded length varies because Opus packets are not fixed-size. When the app calls a regular phone, the audio the app sends captures not only the speaker's voice but also the far-end audio coming out of the loudspeaker, which the other side then hears as echo. Audio arriving from the phone has already been echo-cancelled by the handset's OS, so we never hear ourselves. The project uses the WebRTC AECM module to cancel echo on our side.
I also changed how the UDP socket is created: the delegate queue used to be dispatch_get_main_queue(), and it is now a serial queue (DISPATCH_QUEUE_SERIAL) that we create ourselves.
// self.udpSocket = [[GCDAsyncUdpSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
self.udpSocket = [[GCDAsyncUdpSocket alloc] initWithDelegate:self delegateQueue:self.socketQueue];
- (dispatch_queue_t)socketQueue {
    if (_socketQueue == nil) {
        _socketQueue = dispatch_queue_create("com.sendSocket", DISPATCH_QUEUE_SERIAL);
    }
    return _socketQueue;
}
With a serial queue as the delegate queue, the send path must dispatch onto that same queue as well; the socket must not be touched from any other thread.
// Send audio data over UDP
- (int)vopc_send_dataA:(const void *)data length:(int)len
{
    DLog(@"=====prepending the protocol header before sending over UDP");
    dispatch_async(self.socketQueue, ^{
        // One 20 ms frame of 8 kHz 16-bit PCM is 320 bytes (160 samples).
        NSData *data1 = [NSData dataWithBytes:data length:320];
        unsigned char outBuf[320];
        short *readBuf = (short *)[data1 bytes];
        // Echo cancellation: readBuf is the near-end (microphone) signal; the
        // far-end reference was already fed in via WebRtcAecm_BufferFarend on
        // the receive path, so it is not passed here.
        int aceProcess = WebRtcAecm_Process(self.waveIO.AecmInst, readBuf, NULL, (short *)outBuf, 160, 60);
        DLog(@"===aceProcess:%d", aceProcess);
        NSData *outData = [NSData dataWithBytes:(Byte *)outBuf length:320];
        // Liu Wenjing: Opus-encode the PCM frame
        outData = [self.waveIO.opusCode encodePCMData:outData];
        // DLog(@"===outData:%@", outData);
        outData = [self getSIMUDPData:outData];
        // DLog(@"===packet with header prepended:%@", outData);
        [self.udpSocket sendData:outData toHost:self.host port:self.hostPort withTimeout:0.0 tag:_udpTag];
        ++_udpTag;
    });
    return true;
}
This moves the data processing that used to run on the main thread onto the serial queue.
On the receive side we use a separate serial queue, receiveQueue.
- (dispatch_queue_t)receiveQueue
{
    if (_receiveQueue == nil) {
        _receiveQueue = dispatch_queue_create("com.udp.receiveQueue", DISPATCH_QUEUE_SERIAL);
    }
    return _receiveQueue;
}
- (void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data
      fromAddress:(NSData *)address
withFilterContext:(id)filterContext
{
    // Hop off the socket's delegate queue so didReceiveData is never blocked.
    dispatch_async(self.receiveQueue, ^{
        @autoreleasepool {
            DLog(@"-===UDP received data====didReceiveData");
            if (data.length <= 4) {
                return; // too short to contain the 4-byte protocol header
            }
            // Strip the 4-byte protocol header.
            NSData *myBlob = [data subdataWithRange:NSMakeRange(4, data.length - 4)];
            // Liu Wenjing: Opus-decode the received payload back to PCM
            myBlob = [self.waveIO.opusCode decodeOpusData:myBlob];
            // DLog(@"===decoded voice data received over UDP:%@", myBlob);
            NSInteger len = myBlob.length;
            if (len <= 0) {
                return;
            }
            if (len % 320 == 0) {
                // Split the decoded PCM into 320-byte (20 ms) frames.
                NSUInteger length = [myBlob length];
                NSUInteger offset = 0;
                do {
                    NSUInteger thisChunkSize = 320;
                    NSData *chunk = [NSData dataWithBytesNoCopy:(char *)[myBlob bytes] + offset
                                                         length:thisChunkSize
                                                   freeWhenDone:NO];
                    // DLog(@"--==decoded chunk:%@", chunk);
                    // Feed the far-end (loudspeaker) signal to the echo canceller.
                    memcpy(refBuf, [chunk bytes], 320);
                    int aecBuffer = WebRtcAecm_BufferFarend(self.waveIO.AecmInst, (short *)refBuf, 160);
                    DLog(@"===aecBuffer:%d", aecBuffer);
                    [self procAUdpPack:chunk];
                    offset += thisChunkSize;
                } while (offset < length);
            } else {
                // Not a whole number of frames; play it without AEC buffering.
                DLog(@"--====cannot handle a partial frame, len=%ld", (long)len);
                [self procAUdpPack:myBlob];
            }
        }
    });
}
After adding Opus encoding and decoding, CPU usage went up noticeably. I've also seen many people use ProtocolBuffer. Why ProtocolBuffer? It is a binary format, which avoids the hassles of text-format conversion, and serialization is very fast. Being cross-platform and language-neutral has made it increasingly popular, to the point of becoming mainstream for network data exchange. The serialized binary payload is much smaller than other wire formats, which saves network bandwidth. It may be worth trying in a later round of optimization.
Related articles
ProtocolBuffer for Objective-C: environment setup and usage
Compressing PCM audio data with Opus on iOS
Building Opus for iOS
Audio in video calls (a detailed introduction)
iOS audio playback (2): AudioSession