iOS: recording and playing together with Audio Unit, solving the AudioQueue PCM playback latency problem

Recording and playing back with AudioQueue, or recording with AudioQueue and playing with OpenAL, both suffer from noticeable latency.

Dropping down to the lower-level Audio Unit API improves the latency a great deal; at the very least, recording and playing at the same time now works well. There are quite a few third-party Audio Unit wrappers that I haven't studied in detail; I read up a bit and adapted the code below myself. Note that recording and playback need to be started on their own thread (a caller-side sketch follows this paragraph). I was stuck on a problem for a whole day before it suddenly clicked, so it's easiest to just post the code. The redundant code and variables are left in place, partly as a reference for my future self.
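The post never shows the AVAudioSession setup or the caller side, so here is a minimal sketch of how the session could be configured for simultaneous capture and playback and how the controller could be started off the main thread, as mentioned above. The 5 ms preferred IO buffer duration and the method name setUpAndStart are illustrative assumptions, not taken from the original code.

#import <AVFoundation/AVFoundation.h>
#import "AudioController.h"

- (void)setUpAndStart // hypothetical caller-side helper, not part of the original post
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // PlayAndRecord is needed when a single RemoteIO unit both captures and renders
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    // a small preferred IO buffer keeps the end-to-end latency low (value is a guess)
    [session setPreferredIOBufferDuration:0.005 error:&error];
    [session setActive:YES error:&error];

    // keep the start/stop work off the main thread, as the post suggests
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [[AudioController sharedAudioManager] startAudio];
    });
}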

The .h file:

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1
#define  kSampleRate 8000
#define kFramesPerPacket 1
#define kChannelsPerFrame 1
#define kBitsPerChannel 16
#define BUFFER_SIZE 1024

@interface AudioController : NSObject

@property (readonly) AudioComponentInstance audioUnit;

@property (readonly) AudioBuffer audioBuffer;

@property (strong, readwrite) NSMutableData *mIn;

@property (strong, readwrite) NSMutableData *mOut;

@property (strong, readwrite) NSMutableData *mAllAudioData;

- (void)hasError:(int)statusCode file:(char*)file line:(int)line;

- (void)processBuffer: (AudioBufferList* )audioBufferList;

+ (AudioController *) sharedAudioManager;

-(void)clearDataArray;

-(void)startAudio;

@end


The .m file:

#import "AudioController.h"

static NSMutableData *mIn;
static NSMutableData *mOut;
static NSMutableData *mAllAudioData;
static bool mIsStarted; // audio unit start
static bool mSendServerStart; // send server continue loop
static bool mRecServerStart; // rec server continue loop
static bool mIsTele; // telephone call

static OSStatus recordingCallback(void *inRefCon,
                                  
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    
    // the buffer the rendered input data is written into
    AudioBuffer buffer;
    // variable used to check the status of each call
    OSStatus status;
    // this is a reference to the object that owns the callback
    AudioController *audioProcessor = (__bridge AudioController *)inRefCon;
    /**
     At this point we define the number of channels, which is mono
     on the iPhone. The number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * 2; // 16-bit samples => 2 bytes per frame
    buffer.mNumberChannels = 1; // one channel (mono)
    buffer.mData = malloc(inNumberFrames * 2); // buffer size
    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;
    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
//    [audioProcessor hasError:status file:__FILE__ line:__LINE__];
    // process the bufferlist in the audio processor
    [audioProcessor processBuffer: &bufferList];
    // clean up the buffer
    free(bufferList.mBuffers[0].mData);
    
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [audioProcessor clearDataArray];
    });
    
    return noErr;
    
}

#pragma mark Playback callback

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    
    long len = [mIn length];
    len = len > 1024 ? 1024 : len;
    if (len <= 0)
    {
        // no data queued yet: output silence instead of whatever is left in the buffers
        for (int i = 0; i < ioData->mNumberBuffers; i++)
        {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    
    for (int i = 0; i < ioData->mNumberBuffers; i++)
    {
//        NSLog( @"len:%ld", len);
        AudioBuffer buffer = ioData->mBuffers[i];
        // copy at most one block of pending PCM into the output buffer
        NSData *pcmBlock = [mIn subdataWithRange:NSMakeRange(0, len)];
        UInt32 size = (UInt32)MIN(buffer.mDataByteSize, [pcmBlock length]);
        memcpy(buffer.mData, [pcmBlock bytes], size);
        // drop the consumed bytes from the head of the playback queue
        [mIn replaceBytesInRange:NSMakeRange(0, size) withBytes:NULL length:0];
        buffer.mDataByteSize = size;
    }
    return noErr;
}

@implementation AudioController

@synthesize audioUnit;

@synthesize audioBuffer;

/*
 * Singleton pattern.
 * Flow: init (if no instance exists yet) -> initializeAudioConfig (sets the audio format, IO pipes and callback functions)
 *                                        -> recordingCallback -> processBuffer
 *                                        -> playbackCallback
 */

// Singleton accessor

+ (AudioController *) sharedAudioManager{
    
    static AudioController *sharedAudioManager;
    @synchronized(self)
    {
        if (!sharedAudioManager) {
            sharedAudioManager = [[AudioController alloc] init];
        }
        return sharedAudioManager;
    }
}
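
// Note (not part of the original post): @synchronized works here, but
// dispatch_once is the more common idiom for a thread-safe singleton.
// A minimal equivalent sketch, kept commented out so the class still compiles:
//
// + (AudioController *)sharedAudioManager {
//     static AudioController *shared = nil;
//     static dispatch_once_t onceToken;
//     dispatch_once(&onceToken, ^{
//         shared = [[AudioController alloc] init];
//     });
//     return shared;
// }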

- (AudioController* )init {
    
    self = [super init];
    
    if (self) {
        
        [self initializeAudioConfig];
        
        mIn = [[NSMutableData alloc] init];
        mOut = [[NSMutableData alloc] init];
        mAllAudioData = [[NSMutableData alloc] init];
        
        mIsStarted = false;
        mSendServerStart = false;
        mRecServerStart = false;
        mIsTele = false;

    }
    return self;
}

- (void)initializeAudioConfig {
    
    OSStatus status;
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output; // we want an output unit
    desc.componentSubType = kAudioUnitSubType_RemoteIO; // RemoteIO handles both input and output
    desc.componentFlags = 0; // must be zero
    desc.componentFlagsMask = 0; // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    [self hasError:status file:__FILE__ line:__LINE__];
    // enable IO for recording on the input bus
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,kAudioOutputUnitProperty_EnableIO, // use io
                                  kAudioUnitScope_Input, // scope to input
                                  kInputBus, // select input bus (1)
                                  &flag, // set flag
                                  sizeof(flag));
    [self hasError:status file:__FILE__ line:__LINE__];
    // enable IO for playback on the output bus
    status = AudioUnitSetProperty(audioUnit,
                                kAudioOutputUnitProperty_EnableIO, // use io
                                  kAudioUnitScope_Output, // scope to output
                                  kOutputBus, // select output bus (0)
                                  &flag, // set flag
                                  sizeof(flag));
    [self hasError:status file:__FILE__ line:__LINE__];
    // specify the PCM format we want to work with
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate	= kSampleRate;
    audioFormat.mFormatID	= kAudioFormatLinearPCM;
    audioFormat.mFormatFlags	= kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket	= kFramesPerPacket;
    audioFormat.mChannelsPerFrame	= kChannelsPerFrame;
    audioFormat.mBitsPerChannel	= kBitsPerChannel;
    audioFormat.mBytesPerPacket	= kBitsPerChannel * kChannelsPerFrame * kFramesPerPacket / 8;
    audioFormat.mBytesPerFrame	= kBitsPerChannel * kChannelsPerFrame / 8;
    // set the format on the output scope of the input bus (the data the mic delivers to us)
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    [self hasError:status file:__FILE__ line:__LINE__];
    // set the format on the input scope of the output bus (the data we feed to the speaker)
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    [self hasError:status file:__FILE__ line:__LINE__];
    /**
     We need to define a callback structure which holds
     a pointer to the recordingCallback and a reference to
     the audio processor object
     */
    AURenderCallbackStruct callbackStruct;
    // set recording callback struct
    callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    // set input callback to recording callback on the input bus
    status = AudioUnitSetProperty(audioUnit,kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    [self hasError:status file:__FILE__ line:__LINE__];
    // set playback callback struct
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    // set playbackCallback as callback on our renderer for the output bus
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    [self hasError:status file:__FILE__ line:__LINE__];
    // reset flag to 0
    flag = 0;
    /*
     Tell the audio unit NOT to allocate its own render buffer on the input bus;
     we allocate and hand in our own buffer in recordingCallback.
     */
    status = AudioUnitSetProperty(audioUnit,kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
    /*
     Set the number of channels to mono and allocate a 1024-byte block
     (512 frames * 2 bytes per 16-bit sample).
     */
    audioBuffer.mNumberChannels = kChannelsPerFrame;
    audioBuffer.mDataByteSize = 512 * 2;
    audioBuffer.mData = malloc( 512 * 2 );
    // Initialize the Audio Unit and cross fingers =)
    status = AudioUnitInitialize(audioUnit);
    [self hasError:status file:__FILE__ line:__LINE__];
}

- (void)processBuffer: (AudioBufferList* )audioBufferList {
    
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];
    // check whether the incoming data byte size has changed
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize)
    {
        // free the old buffer
        free(audioBuffer.mData);
        // assign the new byte size and reallocate mData
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }
    // copy the incoming audio data into our audio buffer
    memcpy(audioBuffer.mData, sourceBuffer.mData, sourceBuffer.mDataByteSize);
    NSData *pcmBlock = [NSData dataWithBytes:sourceBuffer.mData length:sourceBuffer.mDataByteSize];
    [mOut appendData: pcmBlock];
}

-(void)clearDataArray
{
    // Move whatever has been recorded (mOut) into the playback queue (mIn),
    // so the captured audio is played back immediately.
    if ([mOut length] <= 0)
    {
        return;
    }
//    [mAllAudioData appendBytes:mOut.bytes length:mOut.length];
    [mIn appendBytes:mOut.bytes length:mOut.length];
    [mOut replaceBytesInRange:NSMakeRange(0, mOut.length) withBytes:NULL length:0];
}

- (void)start
{
    if (mIsStarted)
    {
//        NSLog( @"-- already start --");
        return;
    }
//    NSLog( @"-- start --");
    mIsStarted = true;
    [mIn replaceBytesInRange: NSMakeRange(0, [mIn length]) withBytes: NULL length: 0];
    [mOut replaceBytesInRange: NSMakeRange(0, [mOut length]) withBytes: NULL length: 0];
    OSStatus status = AudioOutputUnitStart(audioUnit);
    [self hasError:status file:__FILE__ line:__LINE__];
}

- (void)stop {
    NSLog( @"-- stop --");
    OSStatus status = AudioOutputUnitStop(audioUnit);
    [self hasError:status file:__FILE__ line:__LINE__];
    mIsStarted = false;
    [mIn replaceBytesInRange: NSMakeRange(0, [mIn length]) withBytes: NULL length: 0];
    [mOut replaceBytesInRange: NSMakeRange(0, [mOut length]) withBytes: NULL length: 0];
}

#pragma mark Error handling

- (void)hasError:(int)statusCode file:(char*)file line:(int)line
{
    if (statusCode)
    {
        NSLog(@"Error Code responded %d in file %s on line %d", statusCode, file, line);
        exit(-1);
    }
}

-(void)startAudio
{
    [self stop];
    [self start];
}

@end
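
In the code above, clearDataArray simply loops the recorded audio (mOut) back into the playback queue (mIn), so you hear yourself with low latency. If the PCM instead comes from or goes to a server, as the mSendServerStart/mRecServerStart flags hint, the receiving code would append to mIn and the sending code would drain mOut instead of looping it back. A rough, hypothetical sketch of two such methods follows; they would live inside the AudioController implementation, the method names are mine, and note that the original code does no locking around mIn/mOut.

// Hypothetical glue code, not part of the original post.

// Receiving side: append PCM received from the network to the playback queue;
// playbackCallback will drain it in blocks of up to 1024 bytes.
- (void)appendPlaybackData:(NSData *)pcmData
{
    [mIn appendData:pcmData];
}

// Sending side: drain whatever has been recorded so far and hand it to the
// network layer (this would replace the loopback done in clearDataArray).
- (NSData *)drainRecordedData
{
    NSData *chunk = [mOut copy];
    [mOut replaceBytesInRange:NSMakeRange(0, mOut.length) withBytes:NULL length:0];
    return chunk;
}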

