
Android 4.4 KitKat AudioRecord Flow Analysis

  The Android architecture is divided into three layers:

  • Bottom layer: the Linux kernel
  • Middle layer: implemented mainly in C++ (roughly 60% of the Android source is C++)
  • Application layer: applications developed mainly in Java

  An application's execution path is roughly as follows: a Java application issues an operation (playing or stopping music, say), which drops through JNI into the middle layer, where C++ code runs. After the middle layer has processed the operation, anything that requires the hardware to act is passed on to the Linux kernel, which invokes the relevant driver to perform the action or further processing; operations that do not touch hardware may be handled entirely in the middle layer and return directly.

  One point worth keeping in mind: Android uses only the Linux kernel. Even common libraries such as pthread are Android's own reimplementations in C/C++/assembly (the bionic C library).

  Because establishing the audio path involves Android IPC and the system service manager, here is a brief overview of those two topics first:

  ① Android IPC uses a Client/Server structure: the client (AudioRecord) calls methods on server objects (AudioFlinger, AudioFlinger::RecordThread, and so on) through an interface (IAudioRecord) and receives the results. AudioRecord.cpp implements the AudioRecord class, and AudioFlinger.cpp implements the AudioFlinger class. In the low-level audio path, AudioRecord can be viewed as the IPC client and AudioFlinger as the server. Once AudioRecord has obtained the server-side interface (mAudioRecord), it can call server-side (AudioFlinger) methods as if they were its own; see the sketch after the snippet below.

  ② At boot, Android creates a service manager process. Every service in the system must be registered with this process. A process obtains the manager interface via sp<IServiceManager> sm = defaultServiceManager() and registers a service with its addService method: sm->addService(String16("media.audio_flinger"), new AudioFlinger()); only a service registered with the manager can be used by other processes:

sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
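
Putting ① and ② together, the publish/lookup pattern looks roughly like this (a sketch; error handling omitted):

// Server side (runs in the media server process): publish the service once.
sp<IServiceManager> sm = defaultServiceManager();
sm->addService(String16("media.audio_flinger"), new AudioFlinger());

// Client side (e.g. inside AudioRecord): look the service up, then call it
// through the IAudioFlinger interface as if it were a local object.
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
sp<IAudioFlinger> af = interface_cast<IAudioFlinger>(binder);
// Calls on 'af' are now marshalled to the AudioFlinger process over Binder.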

When the Android audio system starts, it creates two services, the AudioFlinger service shown above and the AudioPolicyService, and registers both with the service manager; other processes can then use the methods they provide.

Below, AudioFlingerService is abbreviated to AudioFlinger and AudioPolicyService to AudioPolicy.

Core flow:

AudioSystem::getInput(…) -> aps->getInput(…) -> AudioPolicyService::getInput(…) -> mpPolicyManager->getInput(…) -> (AudioPolicyService) mpClientInterface->openInput(…) -> AudioFlinger::openInput(…)

Recording Flow Analysis

Application-Layer Recording

  The main job of the AudioRecord class is to let Java applications manage audio resources so that they can record the sound collected by the platform's audio input hardware. This works by "pulling" (reading) the sound data synchronously out of the AudioRecord object. During recording, all the application needs to do is fetch the AudioRecord object's data in time via a read method. AudioRecord provides three of them: read(byte[], int, int), read(short[], int, int), and read(ByteBuffer, int). Whichever one you choose, you must decide in advance the storage format that is most convenient for handling the sound data.

  When recording starts, an AudioRecord needs an associated sound buffer to hold the incoming audio data. The size of this buffer is specified during construction; it determines how long an AudioRecord object can record before its data is read out (synchronously), i.e. how much sound can be captured in one go. Data is read from the audio hardware in chunks no larger than the whole recording buffer, so it can be read out over several calls, each returning at most the initialized buffer's worth of data. A simple recording flow generally looks like this:

  1. Create a data stream.
  2. Construct an AudioRecord object; the minimum recording buffer size can be obtained from getMinBufferSize. If the buffer is too small, construction fails.
  3. Initialize a buffer at least as large as the one the AudioRecord object writes its sound data into.
  4. Start recording.
  5. Read sound data from the AudioRecord into the buffer, then write the buffer into the data stream.
  6. Stop recording.
  7. Close the data stream.

Example code:

// Parameters used both for getMinBufferSize and the AudioRecord constructor.
int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_IN_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
// Create a DataOutputStream to write the audio data into the saved file.
OutputStream os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
DataOutputStream dos = new DataOutputStream(bos);
// Create a new AudioRecord object to record the audio.
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        frequency, channelConfiguration, audioEncoding, bufferSize);
short[] buffer = new short[bufferSize];
audioRecord.startRecording();
isRecording = true;
while (isRecording) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    for (int i = 0; i < bufferReadResult; i++) {
        dos.writeShort(buffer[i]);
    }
}
audioRecord.stop();
dos.close();

1. getMinBufferSize

   getMinBufferSize was introduced earlier, so only its dispatch is traced here: looking at the source, the implementation calls the JNI function native_get_min_buff_size, which lands in android_media_AudioRecord_get_min_buff_size in frameworks/base/core/jni/android_media_AudioRecord.cpp.

The link from native_get_min_buff_size to android_media_AudioRecord_get_min_buff_size is established by the native-method table in android_media_AudioRecord.cpp:

static JNINativeMethod gMethods[] = {
    // name,               signature,  funcPtr
    {"native_start",         "(II)I",    (void *)android_media_AudioRecord_start},
    {"native_stop",          "()V",    (void *)android_media_AudioRecord_stop},
    {"native_setup",         "(Ljava/lang/Object;IIIII[I)I", (void *)android_media_AudioRecord_setup},
    {"native_finalize",      "()V",    (void *)android_media_AudioRecord_finalize},
    {"native_release",       "()V",    (void *)android_media_AudioRecord_release},
    {"native_read_in_byte_array", "([BII)I", (void *)android_media_AudioRecord_readInByteArray},
    {"native_read_in_short_array",  "([SII)I", (void *)android_media_AudioRecord_readInShortArray},
    {"native_read_in_direct_buffer","(Ljava/lang/Object;I)I", (void *)android_media_AudioRecord_readInDirectBuffer},
    {"native_set_marker_pos","(I)I",   (void *)android_media_AudioRecord_set_marker_pos},
    {"native_get_marker_pos","()I",    (void *)android_media_AudioRecord_get_marker_pos},
    {"native_set_pos_update_period", "(I)I",   (void *)android_media_AudioRecord_set_pos_update_period},
    {"native_get_pos_update_period", "()I",    (void *)android_media_AudioRecord_get_pos_update_period},
    {"native_get_min_buff_size", "(III)I",   (void *)android_media_AudioRecord_get_min_buff_size},
};
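
This table is what ties each Java native method to its C++ implementation; it is handed to the runtime when the library is loaded. A condensed sketch of the registration entry point in the same file (field-ID lookups and error checks elided):

// kClassPathName is defined earlier in the file as "android/media/AudioRecord".
// Called during library load (via the JNI registration list); binds gMethods
// to the android.media.AudioRecord class.
int register_android_media_AudioRecord(JNIEnv *env)
{
    // ... javaAudioRecordFields lookups elided ...
    return AndroidRuntime::registerNativeMethods(env,
            kClassPathName, gMethods, NELEM(gMethods));
}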

  android_media_AudioRecord_get_min_buff_size looks like this:

// ----------------------------------------------------------------------------
// returns the minimum required size for the successful creation of an AudioRecord instance.
// returns 0 if the parameter combination is not supported.
// return -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {
    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
            sampleRateInHertz, nbChannels, audioFormat);
    size_t frameCount = 0;
    // frameCount is filled in through the pointer argument.
    status_t result = AudioRecord::getMinFrameCount(&frameCount, sampleRateInHertz,
            (audioFormat == ENCODING_PCM_16BIT ?
                    AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT),
            audio_channel_in_mask_from_count(nbChannels));
    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
}

  The minimum buffer size is computed from the minimum frame count. The frame is the most common unit in audio: one frame is the byte size of one sample point multiplied by the number of channels. Why introduce frames at all? Because for multi-channel audio the byte size of a single sample point doesn't tell the whole story, since on playback every channel's data must be played. So, for convenience, we speak of how many frames there are per second, which abstracts away the channel count while still conveying the full amount. Once getMinBufferSize returns, we have a buffer size that meets the minimum requirement, giving the caller a sound basis for allocating its buffer.
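
A quick worked example of this arithmetic (all values assumed): at 44100 Hz, stereo, 16-bit PCM, one frame is 2 channels * 2 bytes = 4 bytes, so a 1024-frame buffer holds 4096 bytes, about 23 ms of audio:

#include <cstdint>
#include <cstdio>

int main() {
    // Assumed parameters, purely for illustration.
    const uint32_t sampleRate     = 44100; // Hz
    const uint32_t channelCount   = 2;     // stereo
    const uint32_t bytesPerSample = 2;     // 16-bit PCM

    // Mirrors "frameCount * nbChannels * bytesPerSample" above.
    const uint32_t frameSize  = channelCount * bytesPerSample;  // 4 bytes
    const uint32_t frameCount = 1024;                           // assumed
    std::printf("frameSize=%u B, bufferSize=%u B, ~%u ms of audio\n",
                frameSize, frameCount * frameSize,
                (1000 * frameCount) / sampleRate);
    return 0;
}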

 2. new AudioRecord

  public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
            int bufferSizeInBytes) throws IllegalArgumentException {
        mRecordingState = RECORDSTATE_STOPPED;

        // remember which looper is associated with the AudioRecord instantiation
        // Obtain the calling thread's Looper; see the Looper discussion elsewhere.
        if ((mInitializationLooper = Looper.myLooper()) == null) {
            mInitializationLooper = Looper.getMainLooper();
        }
        audioParamCheck(audioSource, sampleRateInHz, channelConfig, audioFormat);
        audioBuffSizeCheck(bufferSizeInBytes);

        // native initialization
        int[] session = new int[1];
        session[0] = 0;
        //TODO: update native initialization when information about hardware init failure
        //      due to capture device already open is available.
        // Call native_setup, passing in a WeakReference to this object.
        int initResult = native_setup(new WeakReference<AudioRecord>(this),
                mRecordSource, mSampleRate, mChannelMask, mAudioFormat,
                mNativeBufferSizeInBytes, session);
        if (initResult != SUCCESS) {
            loge("Error code " + initResult + " when initializing native AudioRecord object.");
            return; // with mState == STATE_UNINITIALIZED
        }
        mSessionId = session[0];
        mState = STATE_INITIALIZED;
    }

   The implementation calls native_setup and enters android_media_AudioRecord_setup in frameworks/base/core/jni/android_media_AudioRecord.cpp:

static int android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint source, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
    //ALOGV(">> Entering android_media_AudioRecord_setup");
    //ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d",
    //     sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes);

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
        return AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    // popcount counts how many bits of an integer are set to 1.
    uint32_t nbChannels = popcount(channelMask);

    // compare the format against the Java constants
    if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
        ALOGE("Error creating AudioRecord: unsupported audio format.");
        return AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
    }

    int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
    audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;

    if (buffSizeInBytes == 0) {
        ALOGE("Error creating AudioRecord: frameCount is 0.");
        return AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
    }
    int frameSize = nbChannels * bytesPerSample;
    size_t frameCount = buffSizeInBytes / frameSize;

    if ((uint32_t(source) >= AUDIO_SOURCE_CNT) && (uint32_t(source) != AUDIO_SOURCE_HOTWORD)) {
        ALOGE("Error creating AudioRecord: unknown source.");
        return AUDIORECORD_ERROR_SETUP_INVALIDSOURCE;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return AUDIORECORD_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return AUDIORECORD_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // create an uninitialized AudioRecord object
    sp<AudioRecord> lpRecorder = new AudioRecord();

    // create the callback information:
    // this data will be passed with every AudioRecord callback
    audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
    lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioRecord object can be garbage collected.
    lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
    lpCallbackData->busy = false;

    lpRecorder->set((audio_source_t) source,
        sampleRateInHertz,
        format,           // word length, PCM
        channelMask,
        frameCount,
        recorderCallback, // callback_t
        lpCallbackData,   // void* user
        0,                // notificationFrames,
        true,             // threadCanCallJava
        sessionId);

    if (lpRecorder->initCheck() != NO_ERROR) {
        ALOGE("Error creating AudioRecord instance: initialization check failed.");
        goto native_init_failure;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }

    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
    // of the Java object; it is retrieved again later via getAudioRecord.
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, (int)lpCallbackData);

    return AUDIORECORD_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

The key call is lpRecorder->set; tracing its implementation:

status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        int frameCountInt,
        callback_t cbf,
        void* user,
        int notificationFrames,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        audio_input_flags_t flags)
{
    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mTransfer = transferType;

    // FIXME "int" here is legacy and will be replaced by size_t later
    if (frameCountInt < 0) {
        ALOGE("Invalid frame count %d", frameCountInt);
        return BAD_VALUE;
    }
    size_t frameCount = frameCountInt;

    ALOGV("set(): sampleRate %u, channelMask %#x, frameCount %u", sampleRate, channelMask,
            frameCount);

    AutoMutex lock(mLock);

    if (mAudioRecord != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }

    if (inputSource == AUDIO_SOURCE_DEFAULT) {
        inputSource = AUDIO_SOURCE_MIC;
    }
    mInputSource = inputSource;

    if (sampleRate == 0) {
        ALOGE("Invalid sample rate %u", sampleRate);
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    // these below should probably come from the audioFlinger too...
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %d", format);
        return BAD_VALUE;
    }
    // Temporary restriction: AudioFlinger currently supports 16-bit PCM only
    if (format != AUDIO_FORMAT_PCM_16_BIT) {
        ALOGE("Format %d is not supported", format);
        return BAD_VALUE;
    }
    mFormat = format;

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = popcount(channelMask);
    mChannelCount = channelCount;

    // Assumes audio_is_linear_pcm(format), else sizeof(uint8_t)
    mFrameSize = channelCount * audio_bytes_per_sample(format);

    // validate framecount
    size_t minFrameCount = 0;
    status_t status = AudioRecord::getMinFrameCount(&minFrameCount,
            sampleRate, format, channelMask);
    if (status != NO_ERROR) {
        ALOGE("getMinFrameCount() failed; status %d", status);
        return status;
    }
    ALOGV("AudioRecord::set() minFrameCount = %d", minFrameCount);

    if (frameCount == 0) {
        frameCount = minFrameCount;
    } else if (frameCount < minFrameCount) {
        ALOGE("frameCount %u < minFrameCount %u", frameCount, minFrameCount);
        return BAD_VALUE;
    }
    mFrameCount = frameCount;

    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;

    if (sessionId == 0 ) {
        mSessionId = AudioSystem::newAudioSessionId();
    } else {
        mSessionId = sessionId;
    }
    ALOGV("set(): mSessionId %d", mSessionId);

    mFlags = flags;

    // create the IAudioRecord
    status = openRecord_l(0 /*epoch*/);
    if (status) {
        return status;
    }

    if (cbf != NULL) {
        mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
    }

    mStatus = NO_ERROR;

    // Update buffer size in case it has been limited by AudioFlinger during track creation
    mFrameCount = mCblk->frameCount_;

    mActive = false;
    mCbf = cbf;
    mRefreshRemaining = true;
    mUserData = user;
    // TODO: add audio hardware input latency here
    mLatency = (1000*mFrameCount) / sampleRate;
    mMarkerPosition = 0;
    mMarkerReached = false;
    mNewPosition = 0;
    mUpdatePeriod = 0;
    AudioSystem::acquireAudioSessionId(mSessionId);
    mSequence = 1;
    mObservedSequence = mSequence;
    mInOverrun = false;

    return NO_ERROR;
}

Tracing openRecord_l:

// must be called with mLock held
status_t AudioRecord::openRecord_l(size_t epoch)
{
    status_t status;
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
    pid_t tid = -1;

    // Client can only express a preference for FAST.  Server will perform additional tests.
    // The only supported use case for FAST is callback transfer mode.
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if ((mTransfer != TRANSFER_CALLBACK) || (mAudioRecordThread == 0)) {
            ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
        } else {
            trackFlags |= IAudioFlinger::TRACK_FAST;
            tid = mAudioRecordThread->getTid();
        }
    }

    mNotificationFramesAct = mNotificationFramesReq;

    if (!(mFlags & AUDIO_INPUT_FLAG_FAST)) {
        // Make sure that application is notified with sufficient margin before overrun
        if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
            mNotificationFramesAct = mFrameCount/2;
        }
    }

    audio_io_handle_t input = AudioSystem::getInput(mInputSource, mSampleRate, mFormat,
            mChannelMask, mSessionId);
    if (input == 0) {
        ALOGE("Could not get audio input for record source %d", mInputSource);
        return BAD_VALUE;
    }

    int originalSessionId = mSessionId;
    sp<IAudioRecord> record = audioFlinger->openRecord(input,
                                                       mSampleRate, mFormat,
                                                       mChannelMask,
                                                       mFrameCount,
                                                       &trackFlags,
                                                       tid,
                                                       &mSessionId,
                                                       &status);
    ALOGE_IF(originalSessionId != 0 && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (record == 0 || status != NO_ERROR) {
        ALOGE("AudioFlinger could not create record track, status: %d", status);
        AudioSystem::releaseInput(input);
        return status;
    }
    sp<IMemory> iMem = record->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    if (mAudioRecord != 0) {
        mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    mInput = input;
    mAudioRecord = record;
    mCblkMemory = iMem;
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    // FIXME missing fast track frameCount logic
    mAwaitBoost = false;
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_INPUT_FLAG_FAST successful; frameCount %u", mFrameCount);
            mAwaitBoost = true;
            // double-buffering is not required for fast tracks, due to tighter scheduling
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount) {
                mNotificationFramesAct = mFrameCount;
            }
        } else {
            ALOGV("AUDIO_INPUT_FLAG_FAST denied by server; frameCount %u", mFrameCount);
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
                mNotificationFramesAct = mFrameCount/2;
            }
        }
    }

    // starting address of buffers in shared memory
    void *buffers = (char*)cblk + sizeof(audio_track_cblk_t);

    // update proxy
    mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;
}
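
One detail worth calling out in openRecord_l: the control block obtained via record->getCblk() and the PCM data live in a single shared-memory region, with the data area starting immediately after the audio_track_cblk_t header. A self-contained sketch of that layout (cblk_sketch below is a hypothetical stand-in for the real audio_track_cblk_t, which is defined in private/media/AudioTrackShared.h):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for audio_track_cblk_t; only the idea of a
// fixed-size header in front of the ring buffer matters here.
struct cblk_sketch {
    uint32_t frameCount_;  // buffer size in frames (the server may clamp it)
    uint32_t rear;         // illustrative producer index
    uint32_t front;        // illustrative consumer index
};

int main() {
    const size_t frameCount = 1024;  // assumed
    const size_t frameSize  = 4;     // 16-bit stereo: 2 bytes * 2 channels

    // One region: header first, PCM frames immediately after, mirroring
    // "buffers = (char*)cblk + sizeof(audio_track_cblk_t)" above.
    char *region = new char[sizeof(cblk_sketch) + frameCount * frameSize];
    cblk_sketch *cblk = reinterpret_cast<cblk_sketch *>(region);
    cblk->frameCount_ = frameCount;
    void *buffers = region + sizeof(cblk_sketch);

    std::printf("header %zu B, data %zu B, data offset +%td\n",
                sizeof(cblk_sketch), frameCount * frameSize,
                static_cast<char *>(buffers) - region);
    delete[] region;
    return 0;
}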

Tracing AudioSystem::getInput:

audio_io_handle_t AudioSystem::getInput(audio_source_t inputSource,
                                    uint32_t samplingRate,
                                    audio_format_t format,
                                    audio_channel_mask_t channelMask,
                                    int sessionId)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return 0;
    return aps->getInput(inputSource, samplingRate, format, channelMask, sessionId);
}

 The relevant part of AudioSystem.cpp:

// client singleton for AudioPolicyService binder interface
sp<IAudioPolicyService> AudioSystem::gAudioPolicyService;
sp<AudioSystem::AudioPolicyServiceClient> AudioSystem::gAudioPolicyServiceClient;


// establish binder interface to AudioPolicy service
const sp<IAudioPolicyService>& AudioSystem::get_audio_policy_service()
{
    gLock.lock();
    if (gAudioPolicyService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_policy"));
            if (binder != 0)
                break;
            ALOGW("AudioPolicyService not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioPolicyServiceClient == NULL) {
            gAudioPolicyServiceClient = new AudioPolicyServiceClient();
        }
        binder->linkToDeath(gAudioPolicyServiceClient);
        gAudioPolicyService = interface_cast<IAudioPolicyService>(binder);
        gLock.unlock();
    } else {
        gLock.unlock();
    }
    return gAudioPolicyService;
}
// establish binder interface to AudioFlinger service
const sp<IAudioFlinger>& AudioSystem::get_audio_flinger()
{
    Mutex::Autolock _l(gLock);
    if (gAudioFlinger == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_flinger"));
            if (binder != 0)
                break;
            ALOGW("AudioFlinger not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioFlingerClient == NULL) {
            gAudioFlingerClient = new AudioFlingerClient();
        }
        binder->linkToDeath(gAudioFlingerClient);
        gAudioFlinger = interface_cast<IAudioFlinger>(binder);
    }
    return gAudioFlinger;
}