MediaMuxer cannot split the output into chunk files while transcoding is still in progress, so the FFmpeg muxer is used for container writing instead, allowing the file to be chunked during muxing.
FFmpeg Audio/Video Muxer
Audio/video muxing combines compressed video data (e.g. H.264) and compressed audio data (e.g. AAC) into a single container format (e.g. MP4), as shown in the figure:

Following Lei Xiaohua's blog, the FFmpeg muxing workflow is:

1. Create the AVFormatContext structure.
2. Call avformat_alloc_output_context2() to initialize the AVFormatContext with the output file's properties.
3. avio_open2() - open the output file. The snippet below is from the project's customized FFmpeg build: the BLK_BUF_TAG option, the AVIO_FLAG_BLK_WRITE flag, and the callback_blk/handle fields are additions for chunked writing that do not exist in stock FFmpeg.

```c
AVDictionary *options = NULL;
// Specify the size of each chunk file
av_dict_set(&options, "BLK_BUF_TAG", buff, 0);

// Callback struct; the callback is invoked when a chunk file is produced
AVIOInterruptCB *interrupt_callBack = (AVIOInterruptCB *)malloc(sizeof(AVIOInterruptCB));
memset(interrupt_callBack, 0, sizeof(*interrupt_callBack));
interrupt_callBack->callback_blk = ffmuxer_notify_blk_callback;
interrupt_callBack->opaque = NULL;
interrupt_callBack->handle = (void *)pMuxer;

res = avio_open2(&fmtCtx->pb, out, AVIO_FLAG_BLK_WRITE, interrupt_callBack, &options);
```

4. Create the video & audio streams with avformat_new_stream() and set their configuration.
5. avformat_write_header() - write the file header.
6. av_interleaved_write_frame() - write an AVPacket to the output file.
7. av_write_trailer() - write the file trailer.
That covers the FFmpeg muxing workflow. In short: initialize the relevant structures, configure their properties, write the compressed data, then release everything. The following sections show how this is driven from MediaCodec.
MediaCodec reaches the FFmpeg code through JNI, which is not covered here. The native*** methods below are the native wrappers around the corresponding FFmpeg calls.
-
nativeMuxerOpen
Initialize the FFmpeg muxer: once both the video and audio encoders have reported a change in their output MediaFormat, the encoders are about to emit data, so the FFmpeg muxer can be initialized at this point (steps 1-3 of the muxing workflow above).
-
nativeAddAudioTrack & nativeAddVideoTrack
Add the video & audio streams: called when an encoder outputs compressed data whose flag is BUFFER_FLAG_CODEC_CONFIG (i.e. codec configuration data):

```c
// Add the video stream
AVStream *stream = avformat_new_stream(fmtCtx, avcodec_find_encoder(codec));
stream->codec->width  = info.videoWidth;
stream->codec->height = info.videoHeight;
stream->codec->extradata = (uint8_t *)av_mallocz(info.bufferLen + FF_INPUT_BUFFER_PADDING_SIZE);
memcpy(stream->codec->extradata, info.buffer, info.bufferLen);
stream->codec->extradata_size = info.bufferLen;
```

```c
// Add the audio stream
AVStream *stream = avformat_new_stream(fmtCtx, avcodec_find_encoder(codec));
stream->codec->sample_fmt     = AV_SAMPLE_FMT_S32;
stream->codec->sample_rate    = info.audioSampleRate;
stream->codec->channel_layout = getChannelLayout(info.audioChannel);
stream->codec->channels       = info.audioChannel;
stream->codec->bit_rate       = info.audioBitrate;
stream->codec->extradata = (uint8_t *)av_mallocz(info.bufferLen + FF_INPUT_BUFFER_PADDING_SIZE);
memcpy(stream->codec->extradata, info.buffer, info.bufferLen);
stream->codec->extradata_size = info.bufferLen;
```

-
nativeWriteVideoStream & nativeWriteAudioStream
Write the compressed video & audio data: called when an encoder outputs compressed data whose flag is not BUFFER_FLAG_CODEC_CONFIG:

```c
// Write a video packet
AVPacket packet;
av_init_packet(&packet);
packet.stream_index = pMuxer->videoIndex;
packet.data     = info.buffer;
packet.size     = info.bufferLen;
packet.pts      = rescaleTime(info.stamp, stream->time_base);
packet.duration = 0;
packet.dts      = packet.pts;
packet.pos      = -1;
// Key frame
if (info.flag == BUFFER_FLAG_KEY_FRAME) {
    packet.flags |= AV_PKT_FLAG_KEY;
}
// Write the AVPacket to the output file
int ret = av_interleaved_write_frame(fmtCtx, &packet);
```

```c
// Write an audio packet
AVPacket packet;
av_init_packet(&packet);
packet.stream_index = pMuxer->audioIndex;
packet.data = buffer;
packet.size = bufferLen;
packet.pts  = rescaleTime(stamp, stream->time_base);
packet.dts  = packet.pts;
packet.pos  = -1;
// Write the AVPacket to the output file
int ret = av_interleaved_write_frame(fmtCtx, &packet);
```

-
nativeMuxerClose
Write the file trailer and release the structures: once encoding is complete, nativeMuxerClose is called to clean up:

```c
if (pMuxer->videoStream) {
    avcodec_close(((AVStream *)pMuxer->videoStream)->codec);
    pMuxer->videoStream = 0;
}
if (pMuxer->audioStream) {
    avcodec_close(((AVStream *)pMuxer->audioStream)->codec);
    pMuxer->audioStream = 0;
}
if (pMuxer->pContext) {
    AVFormatContext *fmtCtx = (AVFormatContext *)pMuxer->pContext;
    // Write the file trailer
    av_write_trailer(fmtCtx);
    avio_close(fmtCtx->pb);
    avformat_free_context(fmtCtx);
    pMuxer->pContext = 0;
}
```