
iOS Live Streaming Tech Share: Audio/Video Capture (Part 1)

1. The iOS Live Streaming Pipeline

       A live streaming pipeline can be broken down roughly into these stages: data capture, image processing (real-time filters), video encoding, packaging, upload, the cloud (transcoding, recording, distribution), and the playback client.

  • Data capture: obtain real-time audio and video data from the camera and microphone;
  • Image processing: apply real-time filters to the captured input stream to produce the beautified video frames;
  • Video encoding: encoding can be done in software or in hardware. H.264 is the codec in general use today; the newer H.265 is said to achieve a better compression ratio, but its algorithm is considerably more complex and it is not yet widely adopted. Software encoding runs on the CPU and is supported on every system version; hardware encoding uses the device's dedicated hardware encoder, and because Apple only opened the hardware-encoding API (VideoToolbox) in iOS 8, it is available only on iOS 8 and later;
  • Packaging: live push streams generally use the FLV format;
  • Upload: the stream is typically pushed to the server over the RTMP protocol;
  • Cloud: transcodes, distributes, and records the stream;
  • Playback client: pulls the stream, decodes it, and plays it back.

The following diagram from Tencent Cloud illustrates the flow described above:

[Figure: live streaming pipeline]

2. Requesting System Permissions

The first step of a live stream is capturing data, both video and audio. Because of iOS's permission model, the app must first be granted access to the camera and the microphone:

Requesting camera access

__weak typeof(self) _self = self;
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
switch (status) {
    case AVAuthorizationStatusNotDetermined: {
        // The permission dialog has not been shown yet; request authorization
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    [_self.session setRunning:YES];
                });
            }
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        // Already authorized; we can continue
        [_self.session setRunning:YES];
        break;
    }
    case AVAuthorizationStatusDenied:
    case AVAuthorizationStatusRestricted:
        // The user explicitly denied access, or the camera is unavailable
        break;
    default:
        break;
}

Requesting microphone access

AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
    switch (status) {
        case AVAuthorizationStatusNotDetermined:{
            // The permission dialog has not been shown yet; request authorization
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
            }];
            break;
        }
        case AVAuthorizationStatusAuthorized:{
            // Already authorized; nothing to do
            break;
        }
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
            // The user denied access, or the microphone is unavailable
            break;
        default:
            break;
    }

3. Configuring Capture Parameters

Audio: configure the bitrate and the sample rate;
Video: configure the resolution, the frame rate, and the bitrate (see the sketch below).
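
As a concrete illustration, here is a minimal sketch of what such a capture configuration object might look like. The class itself and its exact property list are assumptions made for this example; the property names simply mirror the _configuration.… references used in the recording code later in this post.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Illustrative configuration object (hypothetical class, not part of any framework).
// The recording code below reads values such as audioSampleRate, numberOfChannels,
// videoFrameRate and avSessionPreset from an object like this.
@interface LiveCaptureConfiguration : NSObject

// Audio
@property (nonatomic, assign) NSUInteger audioBitrate;      // e.g. 96000 bps
@property (nonatomic, assign) Float64    audioSampleRate;   // e.g. 44100 Hz
@property (nonatomic, assign) NSUInteger numberOfChannels;  // 1 = mono, 2 = stereo

// Video
@property (nonatomic, copy)   NSString  *avSessionPreset;   // e.g. AVCaptureSessionPreset1280x720
@property (nonatomic, assign) NSUInteger videoFrameRate;    // e.g. 24 fps
@property (nonatomic, assign) NSUInteger videoBitrate;      // e.g. 1000 * 1024 bps
@property (nonatomic, assign) UIInterfaceOrientation orientation;

@end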

4. Recording Audio and Video

Audio recording

        // Serial queue for audio capture work
        self.taskQueue = dispatch_queue_create("com.1905.live.audioCapture.Queue", NULL);

        // Configure and activate the shared audio session
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];

        [[NSNotificationCenter defaultCenter] addObserver: self
                                                 selector: @selector(handleRouteChange:)
                                                     name: AVAudioSessionRouteChangeNotification
                                                   object: session];
        [[NSNotificationCenter defaultCenter] addObserver: self
                                                 selector: @selector(handleInterruption:)
                                                     name: AVAudioSessionInterruptionNotification
                                                   object: session];

        NSError *error = nil;

        [session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers error:nil];

        [session setMode:AVAudioSessionModeVideoRecording error:&error];

        if (![session setActive:YES error:&error]) {
            [self handleAudioComponentCreationFailure];
        }

        // Describe the RemoteIO audio unit, which provides access to the microphone input
        AudioComponentDescription acd;
        acd.componentType = kAudioUnitType_Output;
        acd.componentSubType = kAudioUnitSubType_RemoteIO;
        acd.componentManufacturer = kAudioUnitManufacturer_Apple;
        acd.componentFlags = 0;
        acd.componentFlagsMask = 0;

        self.component = AudioComponentFindNext(NULL, &acd);

        OSStatus status = noErr;
        status = AudioComponentInstanceNew(self.component, &_componetInstance);

        if (noErr != status) {
            [self handleAudioComponentCreationFailure];
        }

        // Enable input (the microphone) on the RemoteIO unit's input element (bus 1)
        UInt32 flagOne = 1;

        AudioUnitSetProperty(self.componetInstance, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flagOne, sizeof(flagOne));

        // PCM format we want delivered to the callback: 16-bit signed integer, interleaved
        AudioStreamBasicDescription desc = {0};
        desc.mSampleRate = _configuration.audioSampleRate;
        desc.mFormatID = kAudioFormatLinearPCM;
        desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
        desc.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
        desc.mFramesPerPacket = 1;
        desc.mBitsPerChannel = 16;
        desc.mBytesPerFrame = desc.mBitsPerChannel / 8 * desc.mChannelsPerFrame;
        desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;

        // Apply the stream format to the output scope of the input element and register
        // the callback that will be invoked whenever captured PCM data is available
        AURenderCallbackStruct cb;
        cb.inputProcRefCon = (__bridge void *)(self);
        cb.inputProc = handleInputBuffer;
        status = AudioUnitSetProperty(self.componetInstance, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &desc, sizeof(desc));
        status = AudioUnitSetProperty(self.componetInstance, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &cb, sizeof(cb));

        status = AudioUnitInitialize(self.componetInstance);

        if (noErr != status) {
            [self handleAudioComponentCreationFailure];
        }

        [session setPreferredSampleRate:_configuration.audioSampleRate error:nil];


        [session setActive:YES error:nil];
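
The setup above registers handleInputBuffer as the input callback, but the original snippet does not show its body. Below is a minimal sketch of what such a callback typically looks like: it calls AudioUnitRender to pull the captured PCM samples out of the RemoteIO unit and then hands them to the next stage. The LiveAudioCapture class name and the hand-off at the end are assumptions made for this illustration.

// Minimal sketch of the input callback registered above (not shown in the original).
// "LiveAudioCapture" and the final hand-off are hypothetical.
static OSStatus handleInputBuffer(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    @autoreleasepool {
        LiveAudioCapture *capture = (__bridge LiveAudioCapture *)inRefCon;

        // Prepare an AudioBufferList; AudioUnitRender will fill in the data
        AudioBuffer buffer;
        buffer.mData = NULL;
        buffer.mDataByteSize = 0;
        buffer.mNumberChannels = 1; // mono for simplicity; should match numberOfChannels

        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;
        buffers.mBuffers[0] = buffer;

        // Pull the recorded PCM samples from the RemoteIO unit's input bus
        OSStatus status = AudioUnitRender(capture.componetInstance,
                                          ioActionFlags,
                                          inTimeStamp,
                                          inBusNumber,
                                          inNumberFrames,
                                          &buffers);
        if (status == noErr) {
            // Hand the raw PCM data off to the next stage (e.g. the AAC encoder);
            // the exact delegate/callback used here is up to the implementation.
            // [capture.delegate audioCapture:capture didCaptureData:buffers.mBuffers[0]];
        }
        return status;
    }
}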

Video recording: using GPUImageVideoCamera from GPUImage

_videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:_configuration.avSessionPreset cameraPosition:AVCaptureDevicePositionFront];
_videoCamera.outputImageOrientation = _configuration.orientation;
_videoCamera.horizontallyMirrorFrontFacingCamera = NO;
_videoCamera.horizontallyMirrorRearFacingCamera = NO;
_videoCamera.frameRate = (int32_t)_configuration.videoFrameRate;

_gpuImageView = [[GPUImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
[_gpuImageView setFillMode:kGPUImageFillModePreserveAspectRatioAndFill];
[_gpuImageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
[_gpuImageView setInputRotation:kGPUImageFlipHorizonal atIndex:0];
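
To complete the picture, the camera still has to be wired to the preview view (optionally with a filter in between) and started. The following is a small sketch using the standard GPUImage API; where the preview view is added to the view hierarchy is an assumption about the surrounding view controller.

// Wire camera -> (optional GPUImageFilter for real-time beautification) -> preview view,
// then start capturing. Here the camera feeds the view directly.
[_videoCamera addTarget:_gpuImageView];
[self.view addSubview:_gpuImageView]; // assuming this runs inside a view controller
[_videoCamera startCameraCapture];

For encoding, the raw frames can be taken from GPUImageVideoCamera's delegate callback willOutputSampleBuffer:, which delivers each captured CMSampleBufferRef before it enters the filter chain.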