I built webrtc_android at branch 97 (M97). Here I'll walk through WebRTC's video publishing pipeline from the Android perspective in four stages: capture, rendering, encoding, and sending.
Capture
For camera capture on Android you can use the VideoCapturer that WebRTC provides; it has implementations wrapping both the Camera1 and Camera2 APIs.
VideoCapturer
public interface VideoCapturer {
  // Binds the capturer to a texture source and to the observer that will
  // receive the captured frames.
  void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext,
      CapturerObserver capturerObserver);
  // Other methods (startCapture, stopCapture, changeCaptureFormat, dispose...) omitted.
}
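For context, a concrete capturer is usually obtained through a CameraEnumerator. A minimal sketch, assuming an eglBase instance (from EglBase.create()) exists in the surrounding app code:

// Sketch: pick the first front-facing camera with Camera2Enumerator and
// create the SurfaceTextureHelper the capturer will draw into.
Camera2Enumerator enumerator = new Camera2Enumerator(context);
VideoCapturer capturer = null;
for (String name : enumerator.getDeviceNames()) {
  if (enumerator.isFrontFacing(name)) {
    capturer = enumerator.createCapturer(name, /* eventsHandler= */ null);
    break;
  }
}
SurfaceTextureHelper helper =
    SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());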
SurfaceTextureHelper wraps a SurfaceTexture. With the Camera2 API, once the camera device has been opened you call CameraDevice.createCaptureSession() on it and pass in a Surface backed by that SurfaceTexture.
private class CameraStateCallback extends CameraDevice.StateCallback {
  // other definitions...
  @Override
  public void onOpened(CameraDevice camera) {
    // method body...
    surfaceTextureHelper.setTextureSize(captureFormat.width, captureFormat.height);
    surface = new Surface(surfaceTextureHelper.getSurfaceTexture());
    try {
      camera.createCaptureSession(
          Arrays.asList(surface), new CaptureSessionCallback(), cameraThreadHandler);
    } catch (CameraAccessException e) {
      // method body...
    }
  }
  // other definitions...
}
The camera then writes its output directly to the bound Surface. Whenever frame data lands in the SurfaceTexture's BufferQueue, CapturerObserver.onFrameCaptured(frame) is called back and the frame is handed to WebRTC's native layer through NativeAndroidVideoTrackSource.
public class VideoSource extends MediaSource {
  // other definitions...
  private final CapturerObserver capturerObserver = new CapturerObserver() {
    // other definitions...
    @Override
    public void onFrameCaptured(VideoFrame frame) {
      // method body...
      final VideoFrame adaptedFrame =
          VideoProcessor.applyFrameAdaptationParameters(frame, parameters);
      if (adaptedFrame != null) {
        nativeAndroidVideoTrackSource.onFrameCaptured(adaptedFrame);
        adaptedFrame.release();
      }
    }
  };
}
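Tying it together in application code: the capturer is initialized with the VideoSource's CapturerObserver, so captured frames flow through exactly this callback. A minimal sketch, assuming a PeerConnectionFactory named factory plus the capturer and helper objects from the earlier sketch:

// Sketch: connect the capturer to a VideoSource so onFrameCaptured() feeds
// NativeAndroidVideoTrackSource, then wrap the source in a VideoTrack.
VideoSource videoSource = factory.createVideoSource(/* isScreencast= */ false);
capturer.initialize(helper, context, videoSource.getCapturerObserver());
capturer.startCapture(/* width= */ 1280, /* height= */ 720, /* framerate= */ 30);
VideoTrack localVideoTrack = factory.createVideoTrack("video0", videoSource);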
Printing the call stack below NativeAndroidVideoTrackSource:
webrtc::jni::AndroidVideoTrackSource::onFrameCaptured
→ rtc::AdaptedVideoTrackSource::OnFrame
→ rtc::VideoBroadcaster::OnFrame
The capture flow is roughly like that.
Rendering
Rendering on Android mainly means displaying the preview picture. We call VideoTrack.addSink(sink) to attach the preview; this is effectively subscribing to the VideoBroadcaster. When data comes in, VideoBroadcaster calls back VideoSink.onFrame(frame), and the frame is then rendered through EGL.
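A minimal sketch of that wiring, assuming the eglBase and localVideoTrack objects from earlier (SurfaceViewRenderer is the stock VideoSink implementation that draws onto a SurfaceView via EGL):

// Sketch: SurfaceViewRenderer implements VideoSink, so addSink() subscribes
// it to the track's VideoBroadcaster.
SurfaceViewRenderer previewView = findViewById(R.id.preview); // hypothetical view id
previewView.init(eglBase.getEglBaseContext(), /* rendererEvents= */ null);
localVideoTrack.addSink(previewView);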
Encoding and sending
Since encoding and sending are mostly implemented in the native layer, I'll just pull out their call stack directly.
webrtc::VideoStreamEncoder::OnFrame
→ webrtc::VideoStreamEncoder::MaybeEncodeVideoFrame
→ webrtc::VideoStreamEncoder::EncodeVideoFrame
→ webrtc::LibvpxVp8Encoder::Encode #1
→ webrtc::LibvpxVp8Encoder::GetEncodedPartitions
→ webrtc::VideoStreamEncoder::OnEncodedImage
→ webrtc::internal::VideoSendStreamImpl::OnEncodedImage
→ webrtc::RtpVideoSender::OnEncodedImage
→ webrtc::RTPSenderVideo::SendEncodedImage
→ webrtc::RTPSenderVideo::SendVideo
→ webrtc::RTPSenderVideo::LogAndSendToNetwork
→ webrtc::RTPSender::EnqueuePackets
→ webrtc::PacedSender::EnqueuePackets
→ webrtc::PacingController::EnqueuePacket
→ webrtc::PacingController::EnqueuePacketInternal
→ webrtc::PacedSender::Process #2
→ webrtc::PacingController::ProcessPackets
→ webrtc::PacedSender::SendRtpPacket
→ webrtc::ModuleRtpRtcpImpl2::TrySendPacket
→ webrtc::RtpSenderEgress::SendPacket
→ webrtc::RtpSenderEgress::SendPacketToNetwork
→ cricket::WebRtcVideoChannel::SendRtp
→ cricket::MediaChannel::SendPacket
→ cricket::MediaChannel::DoSendPacket
→ cricket::VideoChannel::SendPacket
→ webrtc::DtlsSrtpTransport::SendRtpPacket #3
從調(diào)用??梢钥吹骄幋a器是LibvpxVp8Encoder,當(dāng)然可以自己換成H264Encoder.
After that we are in PeerConnection's packet-sending flow.
Aside:
1. There is actually a lot we can do at this point. For example, for recorded playback we can skip the Camera2 source entirely and feed frames read from a local file straight into the pipeline; or add filters on top of the camera; or plug in face recognition, detecting faces on the captured stream and attaching the face information to the stream. A sketch of the file-feeding idea is below.
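As a hypothetical illustration of the first idea: any class implementing VideoCapturer can push frames from anywhere, because the VideoSource only ever sees the CapturerObserver callbacks. Everything here except the VideoCapturer/CapturerObserver interfaces is my own naming, and real frame production is left out:

// Hypothetical sketch: feed locally produced frames (e.g. decoded from a
// file) into the same CapturerObserver path the camera uses.
public class LocalFileCapturer implements VideoCapturer {
  private CapturerObserver observer;

  @Override
  public void initialize(SurfaceTextureHelper helper, Context context,
      CapturerObserver observer) {
    this.observer = observer;
  }

  @Override
  public void startCapture(int width, int height, int framerate) {
    observer.onCapturerStarted(/* success= */ true);
    // A real implementation would decode frames on a worker thread and call
    // observer.onFrameCaptured(videoFrame) for each one.
  }

  @Override
  public void stopCapture() {
    observer.onCapturerStopped();
  }

  @Override public void changeCaptureFormat(int width, int height, int framerate) {}
  @Override public void dispose() {}
  @Override public boolean isScreencast() { return false; }
}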