This uses the AVFoundation framework.
Face-detection approach: an AVCaptureSession manages the pipeline: add an input (Input), attach capture outputs (Output), and finally display the result on a layer.
The following classes are used:
1. AVCaptureSession: inherits from NSObject and is the core class of AVFoundation; it manages the video/audio input from AVCaptureInput objects and coordinates the captured output to AVCaptureOutput objects.
2. AVCaptureDeviceInput: captures data from an AVCaptureDevice object.
3. AVCaptureVideoDataOutput: inherits from AVCaptureOutput; attaches to an AVCaptureSession instance and delivers the raw video frames for per-frame processing (e.g. recording or live preview effects).
4. AVCaptureMetadataOutput: inherits from AVCaptureOutput; used here for face detection (it emits AVMetadataFaceObject metadata).
Declare the properties:
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDeviceInput *input;
@property (nonatomic, strong) AVCaptureMetadataOutput *MetadataOutput;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
Initialize each object:
- (void)deviceInit {
    // 1. Get the input device (camera); front/back can be swapped via the position parameter
    NSArray *devices = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack].devices;
    AVCaptureDevice *deviceF = devices.firstObject;
    // 2. Create the input object from the device
    self.input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];
    // 3. Metadata output; its delegate receives the detected-face metadata
    self.MetadataOutput = [[AVCaptureMetadataOutput alloc] init];
    // 4. Video-data output; its delegate receives every raw video frame
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Set the delegate for per-frame rendering/processing of the live video
    [_videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    self.session = [[AVCaptureSession alloc] init];
    // 5. Set the output quality (640x480 preset)
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    // 6. Add the input and outputs to the session
    [self.session beginConfiguration];
    if ([self.session canAddInput:_input]) {
        [self.session addInput:_input];
    }
    if ([self.session canAddOutput:_MetadataOutput]) {
        [self.session addOutput:_MetadataOutput];
    }
    if ([self.session canAddOutput:_videoOutput]) {
        [self.session addOutput:_videoOutput];
    }
    // Metadata types must be set after the output has been added to the session
    [self.MetadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
    [self.MetadataOutput setMetadataObjectsDelegate:self queue:dispatch_queue_create("face", NULL)];
    // rectOfInterest is in normalized (0-1) metadata coordinates, not view points;
    // (0,0,1,1) scans the full frame
    self.MetadataOutput.rectOfInterest = CGRectMake(0, 0, 1, 1);
    [self.session commitConfiguration];
    // 7. Create the preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    _previewLayer.frame = self.view.bounds;
    [self.view.layer insertSublayer:_previewLayer atIndex:0];
    // 8. Start capturing
    [self.session startRunning];
}
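Calling -deviceInit assumes the user has already granted camera access. A minimal sketch of requesting permission first (the call site and the surrounding wiring are assumptions, not part of the original code):

```objc
// Requires an NSCameraUsageDescription entry in Info.plist.
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                         completionHandler:^(BOOL granted) {
    // The completion handler may run on an arbitrary queue;
    // hop to the main queue before touching UIKit or the session setup.
    dispatch_async(dispatch_get_main_queue(), ^{
        if (granted) {
            [self deviceInit];
        } else {
            NSLog(@"Camera access denied; face detection cannot start.");
        }
    });
}];
```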
Two delegate protocols are used:
<AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureMetadataOutputObjectsDelegate>
The delegate methods are as follows:
Called for every captured video frame, at the session's capture frame rate:
AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
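A minimal sketch of implementing this callback; it only reads the frame's pixel dimensions as a placeholder, since the actual per-frame rendering is up to the app:

```objc
// Called on the queue passed to setSampleBufferDelegate:queue: for every frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    // Per-frame processing (filters, ML inference, recording) would go here.
    (void)width; (void)height;
}
```

Note that delivering frames to the main queue, as the setup code above does, is simple but can stall the UI under load; a dedicated serial queue is the usual choice for heavier processing.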
Called only when a face is detected; it returns just the face metadata:
AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection;
Once a face is recognized, metadataObjects contains entries such as:
(
    "<AVMetadataFaceObject: 0x282d44da0, faceID=7, bounds={0.6,0.6 0.2x0.3}, rollAngle=210.0, yawAngle=0.0, time=235358881818541>",
    "<AVMetadataFaceObject: 0x282d45020, faceID=5, bounds={0.2,0.3 0.2x0.3}, rollAngle=210.0, yawAngle=0.0, time=235358881818541>"
)
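The bounds above are in normalized (0-1) metadata coordinates. A sketch of the metadata delegate, assuming the previewLayer property from earlier; AVCaptureVideoPreviewLayer's transformedMetadataObjectForMetadataObject: converts the normalized bounds into layer coordinates suitable for drawing a highlight:

```objc
// Called on the "face" queue whenever faces are detected.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    dispatch_async(dispatch_get_main_queue(), ^{
        for (AVMetadataObject *object in metadataObjects) {
            if (![object isKindOfClass:[AVMetadataFaceObject class]]) {
                continue;
            }
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            // Convert normalized metadata coordinates into previewLayer coordinates.
            AVMetadataFaceObject *transformed = (AVMetadataFaceObject *)
                [self.previewLayer transformedMetadataObjectForMetadataObject:face];
            NSLog(@"faceID=%ld bounds=%@",
                  (long)face.faceID, NSStringFromCGRect(transformed.bounds));
            // Draw or move a highlight view at transformed.bounds here.
        }
    });
}
```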