
Processing a video stream (real-time beautification, filters) and rendering it with a simple Core Image pipeline

Main idea: capture frames with the camera, obtain the video stream, apply the beautification/filter processing, and then hand the processed frames to Core Image for rendering.

Video capture: the AVFoundation framework (AVFoundation/AVFoundation.h)

Notes:
 AVCaptureDevice is the interface to the camera hardware. It is used to control hardware features such as lens position, exposure, and flash (see the short sketch after this list).

 AVCaptureOutput is an abstract class describing the result of a capture session. Three concrete subclasses are of interest here:

 AVCaptureStillImageOutput captures still images.
 AVCaptureMetadataOutput enables detection of faces and QR codes.
 AVCaptureVideoDataOutput provides raw frames for live preview and per-frame processing.
 AVCaptureSession manages the data flow between inputs and outputs, and generates runtime errors when something goes wrong.
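
As a quick illustration of the AVCaptureDevice hardware controls mentioned above, here is a minimal sketch of locking the device and enabling continuous autofocus/auto-exposure (the error handling is deliberately simplified):

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *configError = nil;
if ([device lockForConfiguration:&configError]) {
    // Focus and exposure modes must be changed while the device is locked.
    if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    if ([device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    }
    [device unlockForConfiguration];
}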

Notes:

 AVCaptureVideoPreviewLayer is a CALayer subclass that automatically displays the live image coming from the camera. It also has several utility methods for converting coordinates on the layer into coordinates on the device. It looks like an output, but it is not: it owns a session (whereas outputs are owned by a session).
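
Those coordinate-conversion helpers are what you would use, for example, to turn a tap on the preview layer into a focus point of interest. A minimal sketch, assuming preLayer is the preview layer already on screen and tapPoint is a point in its coordinate space:

CGPoint devicePoint = [preLayer captureDevicePointOfInterestForPoint:tapPoint];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (device.isFocusPointOfInterestSupported && [device lockForConfiguration:nil]) {
    // The converted point is in the device's normalized (0,0)-(1,1) coordinate space.
    device.focusPointOfInterest = devicePoint;
    device.focusMode = AVCaptureFocusModeAutoFocus;
    [device unlockForConfiguration];
}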


AVCaptureSession controls the flow of video frames from the input device (AVCaptureDeviceInput) into the output buffers (AVCaptureOutput). Once the AVCaptureSession is started, it collects data from its inputs and, at the appropriate moments, delivers it to the output buffers.
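
If a running session later needs to be reconfigured (switching the preset, swapping inputs, and so on), the changes are typically wrapped in a begin/commit pair so they are applied atomically; a minimal sketch:

[session beginConfiguration];
if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
    // Only switch if the new preset is supported with the current inputs.
    session.sessionPreset = AVCaptureSessionPresetHigh;
}
[session commitConfiguration];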


 By default, AVCaptureVideoPreviewLayer displays the raw data coming from the input device. To apply real-time filters, or to draw extra content on top of that layer, you have to pull the frame data out of the video output buffer, process it, and then push the resulting pixels into another layer or into an OpenGL context.
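
Inside the sample-buffer callback, one way to get a Core Image representation of a frame (without first building a UIImage, as the code later in this article does) is to wrap the captured pixel buffer directly:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Wrap the pixel buffer in a CIImage; this avoids the CPU-side CGContext copy.
CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// frame can now be fed straight into a CIFilter chain.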

If you only capture and display the camera feed, the effect looks like this:

[Screenshot: live camera preview]

The code is as follows:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self setupCaptureSession];
}
- (void)setupCaptureSession
{
    // Initialize the GPUView
    _gpuView = [[GPUView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:_gpuView];
    self.view.backgroundColor = [UIColor redColor];

    NSError *error = nil;
    // Initialize the second CIFilter

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output. Frames are delivered on this serial queue.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    // Specify the pixel format
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // If you wish to cap the frame rate to a known value, such as 15 fps,
    // set the device's active frame duration.
    if ([device lockForConfiguration:nil]) {
        device.activeVideoMinFrameDuration = CMTimeMake(1, 15);
        [device unlockForConfiguration];
    }

    // Show the raw camera feed with a preview layer, then start the flow of data
    preLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    preLayer.frame = [UIScreen mainScreen].bounds;
    [self.view.layer addSublayer:preLayer];
    [session startRunning];
}
To run the frames through Core Image and render them on the GPU, adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Each captured frame is delivered through this callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
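
The protocol also declares captureOutput:didDropSampleBuffer:fromConnection:, which reports frames that were dropped rather than delivered. A minimal sketch of using it to spot when the filter chain falls behind (the log message is only illustrative):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"Dropped a frame - processing is not keeping up with the capture rate.");
}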

The sampleBuffer delivered to didOutputSampleBuffer: is then processed.

The effect looks like this:

[Screenshots: camera feed rendered through the CIHueAdjust filter]

The code is as follows:

#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
#import "GPUView.h"
@interface ViewController ()<AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureVideoPreviewLayer *preLayer;
}
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) CIFilter  *ciFilter2;
@property (nonatomic, strong) GPUView   *gpuView;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self setupCaptureSession];
}
- (void)setupCaptureSession
{
    // Initialize the GPUView
    _gpuView = [[GPUView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:_gpuView];
    self.view.backgroundColor = [UIColor redColor];

    NSError *error = nil;
    // Initialize the second CIFilter

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                               defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handling the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };


    // If you wish to cap the frame rate to a known value, such as 15 fps,
    // set the device's active frame duration (minFrameDuration on the output is deprecated).
    if ([device lockForConfiguration:nil]) {
        device.activeVideoMinFrameDuration = CMTimeMake(1, 15);
        [device unlockForConfiguration];
    }
    // Keep a strong reference, otherwise ARC deallocates the session when this method returns.
    self.session = session;
    // Start the session running to start the flow of data

//    preLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
//    preLayer.videoGravity = AVLayerVideoGravityResizeAspect;
//    preLayer.frame = [UIScreen mainScreen].bounds;
//    [self.view.layer addSublayer:preLayer];
    [session startRunning];


}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{

    // Note: this callback runs on the capture queue, not the main thread.
    // Convert the sample buffer to a UIImage, then wrap it in a CIImage.
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    CIImage *ciimage = [[CIImage alloc] initWithImage:image];

    // Apply a hue-rotation filter as the "filter" step and hand the result to the GPU view.
    _ciFilter2 = [CIFilter filterWithName:@"CIHueAdjust"];
    [_ciFilter2 setValue:ciimage forKey:kCIInputImageKey];
    [_ciFilter2 setValue:@1.f forKey:kCIInputAngleKey];
    CIImage *outputImage = [_ciFilter2 outputImage];
    [_gpuView drawCIImage:outputImage];

}

// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

@end
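
The CIHueAdjust filter above only stands in for the "filter" step. For something closer to the real-time beautification mentioned in the title, one simple (and admittedly rough) option is to chain a couple of built-in Core Image filters before handing the image to the GPU view. The helper below is purely illustrative (the method name and parameter values are made up for this sketch); a production beauty filter would involve skin detection and custom kernels:

// Sketch: a rough "lift shadows + slight brighten" chain built from stock CIFilters.
- (CIImage *)roughBeautyPassForImage:(CIImage *)input
{
    CIFilter *shadows = [CIFilter filterWithName:@"CIHighlightShadowAdjust"];
    [shadows setValue:input forKey:kCIInputImageKey];
    [shadows setValue:@0.75 forKey:@"inputHighlightAmount"]; // soften highlights a little
    [shadows setValue:@0.3  forKey:@"inputShadowAmount"];    // lift the shadows

    CIFilter *color = [CIFilter filterWithName:@"CIColorControls"];
    [color setValue:shadows.outputImage forKey:kCIInputImageKey];
    [color setValue:@0.05 forKey:@"inputBrightness"];        // slight brighten
    [color setValue:@1.1  forKey:@"inputSaturation"];        // slight saturation boost
    return color.outputImage;
}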

GPUView: rendering a CIImage into a GLKView through a Core Image context

//
//  GPUView.m
//  OpenGL_ES_1
//
//  Created by fsk-0-1-n on 16/9/8.
//  Copyright © 2016年 Xoxo. All rights reserved.
//

#import "GPUView.h"

@interface GPUView ()

@property (nonatomic, assign)  CGRect     rectInPixels;
@property (nonatomic, strong)  CIContext *context;
@property (nonatomic, strong)  GLKView   *showView;

@end

@implementation GPUView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // Create the OpenGL ES rendering context
        EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

        // Create a GLKView bound to that context and bind its drawable
        _showView = [[GLKView alloc] initWithFrame:frame context:eaglContext];
        [_showView bindDrawable];

        // Add the GLKView as a subview
        [self addSubview:_showView];

        // Create a CIContext backed by the same EAGLContext
        _context = [CIContext contextWithEAGLContext:eaglContext
                                             options:@{kCIContextWorkingColorSpace : [NSNull null]}];

        // Define the drawing area (in pixels, i.e. the drawable size)
        _rectInPixels = CGRectMake(0.0, 0.0, _showView.drawableWidth, _showView.drawableHeight);
    }
    return self;
}

- (void)drawCIImage:(CIImage *)ciImage
{
    // Render the CIImage into the drawable area
    [_context drawImage:ciImage
                 inRect:_rectInPixels
               fromRect:[ciImage extent]];
    // Alternatively, the CIImage could be converted to a UIImage instead:
//    CGImageRef cgimg = [_context createCGImage:ciImage fromRect:[ciImage extent]];
//    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
//    CGImageRelease(cgimg);
    // Present the rendered frame
    [_showView display];
}

@end
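
One caveat about drawCIImage:: drawing the frame into the full _rectInPixels stretches it whenever the frame's aspect ratio differs from the view's. A small tweak (assuming AVFoundation is imported so AVMakeRectWithAspectRatioInsideRect is available) is to compute an aspect-fit rectangle first:

// Sketch: aspect-fit the frame inside the drawable instead of stretching it.
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect([ciImage extent].size, _rectInPixels);
[_context drawImage:ciImage inRect:fitRect fromRect:[ciImage extent]];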