Face Detection with Core Image on iOS

Preface

A project requirement called for face detection, so I took the chance to tidy this feature up and put together a simple demo. The code is a little messy, and I don't really feel like spending more time cleaning it up, but the layering should be fairly clear. Let's get into it.

1. Import the frameworks and implement a custom camera

1.1 Import the frameworks

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

1.2 Implement a custom camera

1.2.1 Initialize the camera

#pragma mark - Initialize the camera
- (void)getCameraSession
{
    // Create the capture session
    _captureSession = [[AVCaptureSession alloc] init];
    if ([_captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        // Set the capture resolution
        _captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    }
    // Get the input device (the front camera)
    AVCaptureDevice *captureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionFront];
    if (!captureDevice) {
        NSLog(@"Failed to get the front camera.");
        return;
    }
    NSError *error = nil;
    // Create the device input object used to obtain input data
    _captureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    if (error) {
        NSLog(@"Failed to create the device input: %@", error.localizedDescription);
        return;
    }
    // Create the still image output object used to obtain output data
    _captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
    [_captureStillImageOutput setOutputSettings:outputSettings];
    // Add the device input to the session, once and guarded
    if ([_captureSession canAddInput:_captureDeviceInput]) {
        [_captureSession addInput:_captureDeviceInput];
    }
    // Add the device output to the session
    if ([_captureSession canAddOutput:_captureStillImageOutput]) {
        [_captureSession addOutput:_captureStillImageOutput];
    }
    // Create the video preview layer to show the live camera feed
    _captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    CALayer *layer = self.videoMainView.layer;
    layer.masksToBounds = YES;
    _captureVideoPreviewLayer.frame = layer.bounds;
    _captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill mode
    // Insert the preview layer into the view, below the focus cursor
    [layer insertSublayer:_captureVideoPreviewLayer below:self.focusCursor.layer];
}
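The getCameraDeviceWithPosition: helper called above isn't shown in this post; a minimal sketch, assuming the same era-appropriate AVFoundation API as the rest of the code, might look like this:

// Hypothetical helper (not shown in the original post): returns the first
// camera matching the requested position (front or back), or nil.
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}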

2. Get the camera data stream

Because I need to detect faces continuously, the live video data output has to be enabled; this requires setting the sample buffer delegate and conforming to its protocol:

// Conform to the delegate protocol (on the class's @interface)
<AVCaptureVideoDataOutputSampleBufferDelegate>

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Drop frames that arrive while the delegate is still processing
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    // Deliver sample buffers on a private serial queue
    dispatch_queue_t queue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL);
    [captureOutput setSampleBufferDelegate:self queue:queue];

    // Request BGRA pixel buffers so each frame can be wrapped in a CIImage
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *settings = @{key:value};
    [captureOutput setVideoSettings:settings];
    [self.captureSession addOutput:captureOutput];
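
After the output is added, the session must be started before any frames reach the delegate. The post doesn't show this step; a one-line sketch, assuming it runs once setup is complete (e.g. in viewWillAppear:):

    // Start the session so sample buffers begin arriving on the queue above
    [self.captureSession startRunning];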
3. Implement the data-stream delegate method

#pragma mark - Sample Buffer Delegate
// Delegate callback invoked each time a frame is written to the sample buffer
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // The flow this post describes: convert the frame to a UIImage, fix its
    // orientation, then run face detection (all three methods are shown below).
    // Note this runs on the serial capture queue, not the main thread.
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *fixedImage = [self fixOrientation:image];
    NSArray *features = [self detectFaceWithImage:fixedImage];
    NSLog(@"Detected %lu face(s)", (unsigned long)features.count);
}
// This method converts a frame of the data stream into an image.
// In the delegate method, sampleBuffer is a Core Media object,
// which Core Video can be brought in to work with.
// Create a UIImage from the sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Wrap the frame's pixel buffer in a CIImage
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    // Render it into a CGImage
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer))];
    // LeftMirrored compensates for the front camera's sensor orientation
    UIImage *result = [[UIImage alloc] initWithCGImage:videoImage scale:1.0 orientation:UIImageOrientationLeftMirrored];
    CGImageRelease(videoImage);
    return result;
}

4. Process the image

A note here: the images coming out of the method above are rotated and mirrored, so they have to be re-drawn upright before detection.

/**
 *  Redraws an image so that its orientation is upright
 *
 *  @param aImage the source image, possibly rotated or mirrored
 *
 *  @return a new image with orientation UIImageOrientationUp
 */
- (UIImage *)fixOrientation:(UIImage *)aImage
{
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp)
        return aImage;

    CGAffineTransform transform = CGAffineTransformIdentity;

    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;

        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }

    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;

        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }

    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // For left/right orientations, the drawing rect swaps width and height
            CGContextDrawImage(ctx, CGRectMake(0,0,aImage.size.height,aImage.size.width), aImage.CGImage);
            break;

        default:
            CGContextDrawImage(ctx, CGRectMake(0,0,aImage.size.width,aImage.size.height), aImage.CGImage);
            break;
    }

    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}
5. Use CIDetector from Core Image for face detection
/** Detect faces in an image */
- (NSArray *)detectFaceWithImage:(UIImage *)faceImage
{
    // CIDetectorAccuracyHigh is used here; for real-time face detection,
    // use CIDetectorAccuracyLow instead, which is faster
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    CIImage *ciimg = [CIImage imageWithCGImage:faceImage.CGImage];
    NSArray *features = [faceDetector featuresInImage:ciimg];
    return features;
}
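
The returned array holds CIFaceFeature objects. The demo doesn't show how to read them, so here is a minimal sketch; note that the bounds come back in Core Image coordinates (origin at the bottom-left), so they need converting before drawing in UIKit:

// Sketch (not part of the original demo): inspect the detected faces
for (CIFaceFeature *face in features) {
    // bounds are in Core Image coordinates (bottom-left origin)
    NSLog(@"Face at %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasMouthPosition) {
        NSLog(@"Mouth at %@", NSStringFromCGPoint(face.mouthPosition));
    }
}

Also, creating a CIDetector is fairly expensive, so when detecting on every frame it is worth caching the detector in a property instead of recreating it on each call.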
6. Summary

Demo source code
Link: https://github.com/daniel1214/CoreImage_Detector
My approach is to take the data from the camera, convert each frame into an image through the delegate method, and run face detection on that image. It works, but it is very heavy on performance, and for now I don't know of a better way to do it. If you do, please leave a comment and let me know, thanks! If anything I wrote raises questions, leave a comment too; I'll reply as soon as I see it. You can also email me at [email protected]. Thanks!
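
One common mitigation for the per-frame cost, not used in the demo: faces move little between consecutive frames, so detection can be throttled at the top of the delegate callback. A sketch, assuming a hypothetical _frameCounter ivar of type NSUInteger:

// Hypothetical throttle (not in the demo): only run detection on every 10th frame
if (++_frameCounter % 10 != 0) {
    return; // skip this frame cheaply
}
// ...then proceed with imageFromSampleBuffer: and detectFaceWithImage: as above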