[AVFoundation] Export

Source: AVFoundation Programming Guide

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.

Use AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
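For example, an asset writer input fed by a capture output might be configured as in the following minimal sketch (the nil output settings are an illustrative choice that passes capture samples through unchanged):

```
// Sketch: configuring an asset writer input for a real-time source
// such as an AVCaptureOutput.
AVAssetWriterInput *captureInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil];
// Required for real-time sources; leave it NO for file-based sources,
// or the output file will not be interleaved properly.
captureInput.expectsMediaDataInRealTime = YES;
```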

Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but that asset may contain multiple tracks. For this reason, before you begin reading you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset-reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

NSError *outError;

AVAsset *someAsset = <#AVAsset that you want to read#>;

AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];

BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that it was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

Setting Up the Asset Reader Outputs

創(chuàng)建資源讀取器后,設(shè)置至少一個(gè)輸出以接收正在讀取的媒體數(shù)據(jù)。 設(shè)置輸出時(shí),請(qǐng)確保將alwaysCopiesSampleData屬性設(shè)置為NO。 通過這種方式,您可以獲得性能改進(jìn)的好處。 在本章中的所有示例中,此屬性可以并且應(yīng)該設(shè)置為NO。

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, with a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, set up your track output as follows:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a particular asset track in the format in which it was stored, pass nil for the outputSettings parameter.

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

對(duì)于單個(gè)音頻混合輸出,您可以使用AVAudioMix對(duì)象從資源中讀取已混合在一起的多個(gè)音軌。 要指定音軌如何混合,請(qǐng)?jiān)诔跏蓟髮⒒煲舴峙浣oAVAssetReaderAudioMixOutput對(duì)象。 以下代碼展示了如何使用資源中的所有音軌創(chuàng)建音頻混合輸出,將音軌解壓縮為Linear PCM,并將音頻混合對(duì)象分配給輸出。 有關(guān)如何配置音頻混合的詳細(xì)信息,請(qǐng)參閱 Editing。

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;

// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];

// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };

// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];

// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;

// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];

// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };

// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];

// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];
Reading the Asset's Media Data

在設(shè)置所需的所有輸出后,可以在資源讀取器上調(diào)用startReading方法開始讀取。 接下來,使用copyNextSampleBuffer方法從每個(gè)輸出中單獨(dú)檢索媒體數(shù)據(jù)。 要使用單個(gè)輸出啟動(dòng)資源讀取器并讀取其所有媒體數(shù)據(jù),請(qǐng)執(zhí)行以下操作:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];

  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}
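As an example of doing something with each sample buffer, the Linear PCM samples can be accessed through Core Media's audio APIs. The following is a hedged sketch with error handling omitted; it assumes the reader output was configured for Linear PCM as shown earlier:

```
// Sketch: accessing the Linear PCM data inside a CMSampleBuffer
// vended by an asset reader output.
CMBlockBufferRef blockBuffer = NULL;
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList),
    NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    &blockBuffer);
for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++)
{
    AudioBuffer buffer = audioBufferList.mBuffers[i];
    // buffer.mData points to buffer.mDataByteSize bytes of PCM samples,
    // e.g. for building a waveform visualization.
}
CFRelease(blockBuffer);
```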
Writing an Asset

The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format. You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code shows how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&outError];
BOOL success = (assetWriter != nil);
Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];

// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil for the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include certain metadata or specify a different transform for a particular track using the metadata and transform properties, respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before writing begins for them to take effect.
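For instance, a title item might be attached to an input before writing starts. This is an illustrative sketch; the title string is hypothetical:

```
// Sketch: attaching a metadata item to an asset writer input.
// This must be done before calling startWriting.
AVMutableMetadataItem *titleItem = [AVMutableMetadataItem metadataItem];
titleItem.keySpace = AVMetadataKeySpaceCommon;
titleItem.key = AVMetadataCommonKeyTitle;
titleItem.value = @"My Reencoded Movie"; // hypothetical title
assetWriterInput.metadata = @[titleItem];
```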

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
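With the adaptor in place, pixel buffers are typically drawn from its pool and appended with explicit presentation times. The following sketch assumes writing has already started (the adaptor's pool is nil before then) and that presentationTime comes from your own timing logic:

```
// Sketch: allocating a pixel buffer from the adaptor's pool and
// appending it to the output file at a given presentation time.
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                   inputPixelBufferAdaptor.pixelBufferPool,
                                   &pixelBuffer);
if (pixelBuffer)
{
    // Fill pixelBuffer here, e.g. by rendering a CGImage into it.
    [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                          withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}
```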

Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to conclude a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

// Prepare the asset writer for writing.
[self.assetWriter startWriting];

// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the next sample buffer.
          CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
          if (nextSampleBuffer)
          {
               // If it exists, append the next sample buffer to the output file.
               [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
               CFRelease(nextSampleBuffer);
               nextSampleBuffer = nil;
          }
          else
          {
               // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
               [self.assetWriterInput markAsFinished];
               break;
          }
     }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.

Reencoding Assets

You can use an asset reader and asset writer in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet shows how to use a single asset writer input to write media data supplied by a single asset reader output:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the asset reader output's next sample buffer.
          CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
          if (sampleBuffer != NULL)
          {
               // If it exists, append this sample buffer to the output file.
               BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
               CFRelease(sampleBuffer);
               sampleBuffer = NULL;

               // Check for errors that may have occurred when appending the new sample buffer.
               if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
               {
                    NSError *failureError = self.assetWriter.error;
                    //Handle the error.
               }
          }
          else
          {
               // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
               if (self.assetReader.status == AVAssetReaderStatusFailed)
               {
                    NSError *failureError = self.assetReader.error;
                    //Handle the error here.
               }
               else
               {
                    // The asset reader output must have vended all of its samples. Mark the input as finished.
                    [self.assetWriterInput markAsFinished];
                    break;
               }
          }
     }
}];
Putting It All Together: Using an Asset Reader and Writer to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous reading and writing of audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun

注意:為了專注于最相關(guān)的代碼,本示例省略了一個(gè)完整應(yīng)用程序的幾個(gè)方面。要使用AVFoundation,您需要有足夠的Cocoa編程經(jīng)驗(yàn),能夠推斷出缺少的部分。

Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

主串行隊(duì)列用于處理資源讀寫器的啟動(dòng)和停止(取消),其他兩個(gè)串行隊(duì)列用于序列化每個(gè)輸出/輸入組合的讀取和寫入,為了以后可能存在的取消行為。

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;

// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{

     // Once the tracks have finished loading, dispatch the work to the main serialization queue.
     dispatch_async(self.mainSerializationQueue, ^{

          // Due to asynchronous nature, check to see if user has already cancelled.
          if (self.cancelled)
               return;
          BOOL success = YES;
          NSError *localError = nil;

          // Check for success of loading the assets tracks.
          success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
          if (success)
          {
               // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
               NSFileManager *fm = [NSFileManager defaultManager];
               NSString *localOutputPath = [self.outputURL path];

               if ([fm fileExistsAtPath:localOutputPath])
                    success = [fm removeItemAtPath:localOutputPath error:&localError];
          }

          if (success)
               success = [self setupAssetReaderAndAssetWriter:&localError];
          if (success)
               success = [self startAssetReaderAndWriter:&localError];
          if (!success)
               [self readingAndWritingDidFinishSuccessfully:success withError:localError];
     });
}];

When the track loading process finishes, whether it succeeds or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. All that's left now is to implement the cancellation process and the three custom methods at the end of the previous code listing.

Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
     // Create and initialize the asset reader.
     self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
     BOOL success = (self.assetReader != nil);
     if (success)
     {
          // If the asset reader was successfully initialized, do the same for the asset writer.
          self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
          success = (self.assetWriter != nil);
     }

     if (success)
     {
          // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
          AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
          NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];

          if ([audioTracks count] > 0)
               assetAudioTrack = [audioTracks objectAtIndex:0];

          NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];

          if ([videoTracks count] > 0)
               assetVideoTrack = [videoTracks objectAtIndex:0];

          if (assetAudioTrack)
          {
               // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
               NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
               self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
               [self.assetReader addOutput:self.assetReaderAudioOutput];

               // Then, set the compression settings to 128kbps AAC and create the asset writer input.
               AudioChannelLayout stereoChannelLayout = {
                    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                    .mChannelBitmap = 0,
                    .mNumberChannelDescriptions = 0
               };

               NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

               NSDictionary *compressionAudioSettings = @{
                    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                    AVChannelLayoutKey    : channelLayoutAsData,
                    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
               };

               self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
               [self.assetWriter addInput:self.assetWriterAudioInput];
          }

          if (assetVideoTrack)
          {
               // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
               NSDictionary *decompressionVideoSettings = @{
                    (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                    (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
               };

               self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
               [self.assetReader addOutput:self.assetReaderVideoOutput];

               CMFormatDescriptionRef formatDescription = NULL;

               // Grab the video format descriptions from the video track and grab the first one if it exists.
               NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
               if ([videoFormatDescriptions count] > 0)
                    formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
               CGSize trackDimensions = {
                    .width = 0.0,
                    .height = 0.0,
               };

               // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
               if (formatDescription)
                    trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
               else
                    trackDimensions = [assetVideoTrack naturalSize];

               NSDictionary *compressionSettings = nil;

               // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
               if (formatDescription)
               {
                    NSDictionary *cleanAperture = nil;
                    NSDictionary *pixelAspectRatio = nil;
                    CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);

                    if (cleanApertureFromCMFormatDescription)
                    {
                         cleanAperture = @{
                              AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                              AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                              AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                              AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                         };
                    }

                    CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);

                    if (pixelAspectRatioFromCMFormatDescription)
                    {
                         pixelAspectRatio = @{
                              AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                              AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                         };
                    }

                    // Add whichever settings we could grab from the format description to the compression settings dictionary.
                    if (cleanAperture || pixelAspectRatio)
                    {
                         NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                         if (cleanAperture)
                              [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];

                         if (pixelAspectRatio)
                              [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];

                         compressionSettings = mutableCompressionSettings;
                    }
               }

               // Create the video settings dictionary for H.264.
               NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                    AVVideoCodecKey  : AVVideoCodecH264,
                    AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                    AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
               }];

               // Put the compression settings into the video settings dictionary if we were able to grab them.
               if (compressionSettings)
                    [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];

               // Create the asset writer input and add it to the asset writer.
               self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
               [self.assetWriter addInput:self.assetWriterVideoInput];
          }
     }
     return success;
}
Reencoding the Asset

Provided that the asset reader and writer were successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
     BOOL success = YES;

     // Attempt to start the asset reader.
     success = [self.assetReader startReading];

     if (!success)
          *outError = [self.assetReader error];

     if (success)
     {
          // If the reader started successfully, attempt to start the asset writer.
          success = [self.assetWriter startWriting];

          if (!success)
               *outError = [self.assetWriter error];
     }

     if (success)
     {
          // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
          self.dispatchGroup = dispatch_group_create();
          [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
          self.audioFinished = NO;
          self.videoFinished = NO;

          if (self.assetWriterAudioInput)
          {
               // If there is audio to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);

               // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
               [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{

                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.audioFinished)
                         return;

                    BOOL completedOrFailed = NO;

                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next audio sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];

                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }

                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                         BOOL oldFinished = self.audioFinished;

                         self.audioFinished = YES;

                         if (oldFinished == NO)
                         {
                              [self.assetWriterAudioInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          if (self.assetWriterVideoInput)
          {
               // If we had video to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);

               // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
               [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{

                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.videoFinished)
                         return;

                    BOOL completedOrFailed = NO;

                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next video sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];

                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }

                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                         BOOL oldFinished = self.videoFinished;
                         self.videoFinished = YES;

                         if (oldFinished == NO)
                         {
                              [self.assetWriterVideoInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          // Set up the notification that the dispatch group will send when the audio and video work have both finished.
          dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{

               BOOL finalSuccess = YES;

               NSError *finalError = nil;

               // Check to see if the work has finished due to cancellation.
               if (self.cancelled)
               {
                    // If so, cancel the reader and writer.
                    [self.assetReader cancelReading];
                    [self.assetWriter cancelWriting];
               }
               else
               {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                    if ([self.assetReader status] == AVAssetReaderStatusFailed)
                    {
                         finalSuccess = NO;
                         finalError = [self.assetReader error];
                    }

                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                    if (finalSuccess)
                    {
                         finalSuccess = [self.assetWriter finishWriting];
                         if (!finalSuccess)
                              finalError = [self.assetWriter error];
                    }
               }
               // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
               [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
          });
     }

     // Return success here to indicate whether the asset reader and writer were started successfully.
     return success;
}

During reencoding, the audio and video tracks are processed asynchronously on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.
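The enter/notify choreography used in the listing above can be reduced to a minimal standalone sketch. The queue labels and comments below are illustrative, not taken from the guide; only the GCD calls themselves are standard API:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // One serial queue per track, both joined into a single group.
        dispatch_queue_t audioQueue = dispatch_queue_create("com.example.rw.audio", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t videoQueue = dispatch_queue_create("com.example.rw.video", DISPATCH_QUEUE_SERIAL);
        dispatch_group_t group = dispatch_group_create();

        // Enter the group once per unit of work; leave when that work finishes.
        dispatch_group_enter(group);
        dispatch_async(audioQueue, ^{
            // ... append audio sample buffers here ...
            dispatch_group_leave(group);
        });

        dispatch_group_enter(group);
        dispatch_async(videoQueue, ^{
            // ... append video sample buffers here ...
            dispatch_group_leave(group);
        });

        // Runs only after every enter has been balanced by a leave.
        dispatch_group_notify(group, dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
            // Check reader/writer status and call -finishWriting here.
        });

        // Block until both tracks are done (for this sketch only;
        // the guide's code relies on the notify block instead).
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    }
    return 0;
}
```

The key invariant is that `dispatch_group_leave` must be called exactly once for every `dispatch_group_enter`; the guide's `oldFinished` check exists to guarantee that balance when completion and cancellation race.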

Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called with parameters indicating whether the reencoding completed successfully. If the process did not finish successfully, the asset reader and writer are both canceled, and any UI-related tasks are dispatched to the main queue.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
     if (!success)
     {
          // If the reencoding process failed, we need to cancel the asset reader and writer.
          [self.assetReader cancelReading];
          [self.assetWriter cancelWriting];
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to failure.
          });
     }
     else
     {
          // Reencoding was successful, reset booleans.
          self.cancelled = NO;
          self.videoFinished = NO;
          self.audioFinished = NO;
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to success.
          });
     }
}
Handling Cancellation

Using multiple serialization queues, you can allow the users of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method in the following code with a button on your UI.

- (void)cancel
{
     // Handle cancellation asynchronously, but serialize it with the main queue.
     dispatch_async(self.mainSerializationQueue, ^{

          // If we had audio data to reencode, we need to cancel the audio work.
          if (self.assetWriterAudioInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
               dispatch_async(self.rwAudioSerializationQueue, ^{

                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;

                    if (oldFinished == NO)
                    {
                         [self.assetWriterAudioInput markAsFinished];
                    }

                    // Leave the dispatch group since the audio work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          if (self.assetWriterVideoInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the video queue.
               dispatch_async(self.rwVideoSerializationQueue, ^{

                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;

                    if (oldFinished == NO)
                    {
                         [self.assetWriterVideoInput markAsFinished];
                    }

                    // Leave the dispatch group, since the video work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
          self.cancelled = YES;
     });
}
Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H.264 movies that have a number of specific presets. The following example shows how to use the output settings assistant.

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
    [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
    [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];
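The listing above creates the writer and its two inputs but stops short of wiring them together. A hedged continuation might look like the following; the method calls are standard AVAssetWriter API, but where and how you handle errors is up to your app:

```objc
// Attach the inputs to the writer before starting a session.
// -canAddInput: verifies the settings are compatible with the output file type.
if ([assetWriter canAddInput:audioInput])
    [assetWriter addInput:audioInput];
if ([assetWriter canAddInput:videoInput])
    [assetWriter addInput:videoInput];

// Begin writing; sample buffers can now be appended to each input.
if ([assetWriter startWriting]) {
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
} else {
    NSLog(@"Failed to start writing: %@", [assetWriter error]);
}
```

Because the settings dictionaries come from the assistant and the format descriptions are passed as source format hints, the writer can validate the configuration up front rather than failing midway through an export.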