The AVFoundation framework provides a feature-rich set of classes to facilitate the editing of audiovisual assets. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal ordering. Figure 3-1 shows how a new composition is pieced together from a combination of existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom processing on the audio or video tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.
圖3-1 AVMutableComposition assembles assets together

Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks of your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
圖3-2 AVMutableAudioMix performs audio mixing

You can use the AVMutableVideoComposition class to work directly with the video tracks of your composition for the purposes of editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.
圖3-3 AVMutableVideoComposition

To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties, respectively.
圖3-4 Use AVAssetExportSession to combine media elements into an output file
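As a minimal sketch of this wiring (assuming a composition, an audio mix, and a video composition have already been configured as described in the sections that follow, and a placeholder output URL):

```objectivec
// Create the export session from the composition at the highest quality preset.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = <#A file URL to write the movie to#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Attach the custom audio and video processing to the export.
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Inspect exportSession.status here to confirm the export completed.
}];
```

If you only need the default processing, you can leave the audioMix and videoComposition properties unset and the composition's tracks are exported as-is.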

Creating a Composition
To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Options for Initializing a Composition Track
When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
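For example (a sketch using a subtitle track, though the same pattern applies to any media type), you can let the composition generate the identifier and then read it back from the track:

```objectivec
// Ask the composition to generate a unique track ID by passing kCMPersistentTrackID_Invalid.
AVMutableCompositionTrack *subtitleTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:kCMPersistentTrackID_Invalid];
// The automatically assigned identifier can be read back from the track.
CMPersistentTrackID assignedID = subtitleTrack.trackID;
```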
Adding Audiovisual Data to a Composition
Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to a minimal amount of resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there are any composition tracks compatible with your desired asset track:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
注意: Placing multiple video segments on the same composition track can potentially lead to dropped frames at the transitions between video segments, especially on embedded devices. The number of composition tracks to use for your video segments depends entirely on the design of your app and its intended platform.
Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on all of the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Performing Custom Video Processing
As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame duration for your composition's video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.
Changing the Composition's Background Color
All video compositions must have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether post-processing is needed, or apply layer instructions.
The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition.
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Applying Opacity Ramps
Video composition instructions can also be used to apply composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity, and opacity ramps to a given video track within the composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from source tracks should be layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second video:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Incorporating Core Animation Effects
A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video and adding titles or animating overlays. Core Animation can be used in two different ways with video compositions: You can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames in your composition. The following code displays the latter option by adding a watermark to the center of the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll
This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to:
Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
Add time ranges of AVAssetTrack objects to compatible composition tracks
Check the preferredTransform property of a video asset track to determine the video's orientation
Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks within a composition
Set appropriate values for the renderSize and frameDuration properties of a video composition
Use a composition in conjunction with a video composition when exporting to a video file
Save a video file to the camera roll
注意: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Creating the Composition
To piece together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one audio and one video track.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Adding the Assets
An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition.
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
注意: This assumes that you have two assets that each contain at least one video track, and a third asset that contains at least one audio track. The videos can be retrieved from the camera roll, and the audio track can be retrieved from the music library or the videos themselves.
Checking the Video Orientations
Once you add your video and audio tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If your video track was shot in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction's transform is set to the asset track's transform so that the video in the new composition displays properly once you adjust its render size.
Setting the Render Size and Frame Duration
To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (or 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
}
else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
}
else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
Exporting the Composition and Saving It to the Camera Roll
The final step in this process involves exporting the entire composition into a single video file and saving that video to the camera roll. You use an AVAssetExportSession object to create the new video file, and you pass it your desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];