A video is essentially a sequence of still images, so splitting a video means extracting those frames one by one.
Here we use AVAssetImageGenerator for the extraction. This class makes it easy to grab video frames at arbitrary timestamps. Note that frame extraction like this is normally performed on a background thread, while any UI updates must be dispatched back to the main thread.
- Use the following method to grab each frame of the video for processing:
/*!
 @method generateCGImagesAsynchronouslyForTimes:completionHandler:
@abstract Returns a series of CGImageRefs for an asset at or near the specified times.
@param requestedTimes An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
@param handler A block that will be called when an image request is complete.
@discussion Employs an efficient "batch mode" for getting images in time order. The client will receive exactly one handler callback for each requested time in requestedTimes. Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests. The generated image is not retained. Clients should retain the image if they wish it to persist after the completion handler returns.
*/
// Grab an image for each requested time
- (void)generateCGImagesAsynchronouslyForTimes:(NSArray<NSValue *> *)requestedTimes completionHandler:(AVAssetImageGeneratorCompletionHandler)handler;
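Before the full implementation, here is a minimal sketch of calling this API for a single timestamp. The method name `grabSingleFrameFromURL:` and the 2-second mark are illustrative assumptions, not part of the original code:

```objectivec
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Hypothetical helper: grab one frame at the 2-second mark.
// `videoURL` is assumed to point at a local video file.
- (void)grabSingleFrameFromURL:(NSURL *)videoURL {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime time = CMTimeMake(2, 1); // 2 seconds into the video
    [generator generateCGImagesAsynchronouslyForTimes:@[[NSValue valueWithCMTime:time]]
                                    completionHandler:^(CMTime requestedTime,
                                                        CGImageRef _Nullable image,
                                                        CMTime actualTime,
                                                        AVAssetImageGeneratorResult result,
                                                        NSError * _Nullable error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            // The generator does not retain the CGImage; wrap it before the handler returns
            UIImage *frame = [UIImage imageWithCGImage:image];
            NSLog(@"Got frame at %.2fs", CMTimeGetSeconds(actualTime));
        }
    }];
}
```

Note that `actualTime` may differ from `requestedTime` unless the tolerance properties are zeroed, as the core code below does.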
- The core code:
- (void)splitVideo:(NSURL *)fileUrl fps:(float)fps splitCompleteBlock:(void (^)(BOOL success, NSMutableArray *splitimgs))splitCompleteBlock {
    if (!fileUrl) {
        return;
    }
    NSMutableArray *splitImages = [NSMutableArray array];

    // Precise duration/timing is not required here, so opt out for speed
    NSDictionary *optDict = @{AVURLAssetPreferPreciseDurationAndTimingKey : @(NO)};
    AVURLAsset *avasset = [[AVURLAsset alloc] initWithURL:fileUrl options:optDict];

    CMTime cmtime = avasset.duration;                   // video duration as a CMTime
    Float64 durationSeconds = CMTimeGetSeconds(cmtime); // video duration in seconds

    NSMutableArray *times = [NSMutableArray array];
    Float64 totalFrames = durationSeconds * fps;        // total number of frames to extract
    CMTime timeFrame;
    for (int i = 1; i <= totalFrames; i++) {
        timeFrame = CMTimeMake(i, fps);                 // the i-th frame at the given frame rate
        NSValue *timeValue = [NSValue valueWithCMTime:timeFrame];
        [times addObject:timeValue];
    }

    AVAssetImageGenerator *imgGenerator = [[AVAssetImageGenerator alloc] initWithAsset:avasset];
    // Zero tolerance so the returned frames land exactly on the requested times
    imgGenerator.requestedTimeToleranceBefore = kCMTimeZero;
    imgGenerator.requestedTimeToleranceAfter = kCMTimeZero;

    NSInteger timesCount = [times count];
    // Grab every frame asynchronously; the handler fires exactly once per requested time
    [imgGenerator generateCGImagesAsynchronouslyForTimes:times completionHandler:^(CMTime requestedTime, CGImageRef _Nullable image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
        NSLog(@"current-----: %lld", requestedTime.value);
        NSLog(@"timeScale----: %d", requestedTime.timescale); // equals fps here
        BOOL isSuccess = NO;
        switch (result) {
            case AVAssetImageGeneratorCancelled:
                NSLog(@"Cancelled");
                break;
            case AVAssetImageGeneratorFailed:
                NSLog(@"Failed");
                break;
            case AVAssetImageGeneratorSucceeded: {
                // The generator does not retain the CGImage; wrap it in a UIImage to keep it
                UIImage *frameImg = [UIImage imageWithCGImage:image];
                [splitImages addObject:frameImg];
                // CMTimeMake(i, fps) makes requestedTime.value == i,
                // so the last frame is the one whose value equals timesCount
                if (requestedTime.value == timesCount) {
                    isSuccess = YES;
                    NSLog(@"completed");
                }
            }
                break;
        }
        // Note: the block fires for every frame; success is YES only on the final one
        if (splitCompleteBlock) {
            splitCompleteBlock(isSuccess, splitImages);
        }
    }];
}
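As noted above, extraction should run off the main thread, with UI work dispatched back to main. A usage sketch follows; the bundled `demo.mp4` resource and the 10 fps rate are assumptions for illustration:

```objectivec
// Hypothetical call site: split on a background queue, update UI on the main queue
NSURL *videoURL = [[NSBundle mainBundle] URLForResource:@"demo" withExtension:@"mp4"]; // assumed asset
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    [self splitVideo:videoURL fps:10 splitCompleteBlock:^(BOOL success, NSMutableArray *splitimgs) {
        if (!success) {
            return; // the block fires per frame; only the final callback reports success
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // UI updates must happen on the main thread
            NSLog(@"Extracted %lu frames", (unsigned long)splitimgs.count);
        });
    }];
});
```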
GitHub repo: https://github.com/ismilesky/MaxVideo.git