
(2) Audio Subsystem: new AudioRecord()

The previous article, "(1) Audio Subsystem: AudioRecord.getMinBufferSize", covered how AudioRecord obtains the minimum buffer size. This article continues with the implementation behind the new AudioRecord() call in the recording flow. The analysis is based on Android 5.1; for Android 4.4, see here.

 

Function prototype:

   public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes) throws IllegalArgumentException

  Purpose:

    Creates an AudioRecord object.

  Parameters:

    audioSource: the recording source. Here we use MediaRecorder.AudioSource.MIC; see MediaRecorder.AudioSource for the other defined sources, e.g. MediaRecorder.AudioSource.FM_TUNER;

    sampleRateInHz: the sample rate in Hz. Here 44100; 44100 Hz is currently the only rate guaranteed to work on all devices;

    channelConfig: the audio channel configuration. Here AudioFormat.CHANNEL_CONFIGURATION_MONO, which is guaranteed to work on all devices;

    audioFormat: the format the audio data is returned in. Here AudioFormat.ENCODING_PCM_16BIT, which is guaranteed to be supported on all devices;

    bufferSizeInBytes: the total size, in bytes, of the buffer that audio data is written to during recording, i.e. the value returned by getMinBufferSize().

  Throws:

    IllegalArgumentException when a parameter is invalid or unsupported.
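Putting the parameters together, here is a minimal usage sketch (standard android.media API; error handling trimmed to the essentials):

    int sampleRate = 44100;
    int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

    // minimum buffer size, as analyzed in the previous article
    int minBufSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);

    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            sampleRate, channelConfig, audioFormat, minBufSize);
    if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
        // native_setup failed; see the analysis below
        recorder.release();
    }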

 

Now let's step into the framework and look at the concrete implementation.

frameworks/base/media/java/android/media/AudioRecord.java

    public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
            int bufferSizeInBytes)
    throws IllegalArgumentException {
        this((new AudioAttributes.Builder())
                    .setInternalCapturePreset(audioSource)
                    .build(),
                (new AudioFormat.Builder())
                    .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,//0x10
                                        true/*allow legacy configurations*/))
                    .setEncoding(audioFormat)
                    .setSampleRate(sampleRateInHz)
                    .build(),
                bufferSizeInBytes,
                AudioManager.AUDIO_SESSION_ID_GENERATE);
    }
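The "//0x10" comment above is the input channel mask for mono. A simplified sketch of what getChannelMaskFromLegacyConfig() does with the legacy constants (not the complete AOSP method, which also handles the allow-legacy flag and a few more masks):

    private static int getChannelMaskFromLegacyConfig(int inChannelConfig,
            boolean allowLegacyConfig) {
        switch (inChannelConfig) {
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:   // legacy constant
        case AudioFormat.CHANNEL_IN_MONO:              // 0x10
            return AudioFormat.CHANNEL_IN_MONO;
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO: // legacy constant
        case AudioFormat.CHANNEL_IN_STEREO:            // 0x0c
            return AudioFormat.CHANNEL_IN_STEREO;
        default:
            throw new IllegalArgumentException("Unsupported channel configuration.");
        }
    }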

The builders invoked here check the parameters for validity and store them, and the constructor then delegates to another of its own constructors via this():

    public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
            int sessionId) throws IllegalArgumentException {
        mRecordingState = RECORDSTATE_STOPPED;

        if (attributes == null) {
            throw new IllegalArgumentException("Illegal null AudioAttributes");
        }
        if (format == null) {
            throw new IllegalArgumentException("Illegal null AudioFormat");
        }

        // remember which looper is associated with the AudioRecord instanciation
        if ((mInitializationLooper = Looper.myLooper()) == null) {
            mInitializationLooper = Looper.getMainLooper();
        }

        // is this AudioRecord using REMOTE_SUBMIX at full volume?
        if (attributes.getCapturePreset() == MediaRecorder.AudioSource.REMOTE_SUBMIX) {
            final AudioAttributes.Builder filteredAttr = new AudioAttributes.Builder();
            final Iterator<String> tagsIter = attributes.getTags().iterator();
            while (tagsIter.hasNext()) {
                final String tag = tagsIter.next();
                if (tag.equalsIgnoreCase(SUBMIX_FIXED_VOLUME)) {
                    mIsSubmixFullVolume = true;
                    Log.v(TAG, "Will record from REMOTE_SUBMIX at full fixed volume");
                } else { // SUBMIX_FIXED_VOLUME: is not to be propagated to the native layers
                    filteredAttr.addTag(tag);
                }
            }
            filteredAttr.setInternalCapturePreset(attributes.getCapturePreset());
            mAudioAttributes = filteredAttr.build();
        } else {
            mAudioAttributes = attributes;
        }

        int rate = 0;
        if ((format.getPropertySetMask()
                & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)
        {
            rate = format.getSampleRate();
        } else {
            rate = AudioSystem.getPrimaryOutputSamplingRate();
            if (rate <= 0) {
                rate = 44100;
            }
        }

        int encoding = AudioFormat.ENCODING_DEFAULT;
        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0)
        {
            encoding = format.getEncoding();
        }

        audioParamCheck(attributes.getCapturePreset(), rate, encoding);

        mChannelCount = AudioFormat.channelCountFromInChannelMask(format.getChannelMask());
        mChannelMask = getChannelMaskFromLegacyConfig(format.getChannelMask(), false);

        audioBuffSizeCheck(bufferSizeInBytes);

        int[] session = new int[1];
        session[0] = sessionId;
        //TODO: update native initialization when information about hardware init failure
        //      due to capture device already open is available.
        int initResult = native_setup( new WeakReference<AudioRecord>(this),
                mAudioAttributes, mSampleRate, mChannelMask, mAudioFormat, mNativeBufferSizeInBytes,
                session);
        if (initResult != SUCCESS) {
            loge("Error code "+initResult+" when initializing native AudioRecord object.");
            return; // with mState == STATE_UNINITIALIZED
        }

        mSessionId = session[0];
	
        mState = STATE_INITIALIZED;
    }

This constructor does the following:

    1. Set mRecordingState to RECORDSTATE_STOPPED;

    2. Remember the looper associated with this instantiation (the current looper, or the main looper as a fallback);

    3. Check whether the recording source is REMOTE_SUBMIX; interested readers can dig into that branch;

    4. Re-derive the rate and format parameters. The AUDIO_FORMAT_HAS_PROPERTY_X bits decide where each value comes from, and since the builders in the previous constructor set those bits when the parameters were supplied, both values are still the ones we passed in;

    5. Call audioParamCheck to validate the parameters once more;

    6. Derive the channel count and the channel mask; the mono mask is 0x10 and the stereo mask is 0x0c (see the sketch above);

    7. Call audioBuffSizeCheck to verify that the minimum buffer size is valid;

    8. Call the native function native_setup(); note the arguments passed down: a weak reference to this object, the recording source (in mAudioAttributes), rate, channel mask, format, minBuffSize, and session[];

    9. Set mState to STATE_INITIALIZED.

        Note on session IDs:
             A session is a grouping identified by a unique ID; the IDs are ultimately managed inside AudioFlinger.
             One session can be shared by multiple AudioTrack objects and MediaPlayers.
             AudioTracks and MediaPlayers sharing one session share the same AudioEffect instances.
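For instance, on the record side an app can attach a pre-processing effect to the same session. A hedged illustration using the public android.media.audiofx API, continuing the earlier sketch's recorder (effect availability is device-dependent):

    import android.media.audiofx.NoiseSuppressor;

    int sessionId = recorder.getAudioSessionId();
    if (NoiseSuppressor.isAvailable()) {
        NoiseSuppressor ns = NoiseSuppressor.create(sessionId);
        if (ns != null) {
            ns.setEnabled(true); // applies to everything recorded in this session
        }
    }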

We will only follow the native_setup() function here.

frameworks/base/core/jni/android_media_AudioRecord.cpp

static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
    if (jaa == 0) {
        ALOGE("Error creating AudioRecord: invalid audio attributes");
        return (jint) AUDIO_JAVA_ERROR;
    }

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
        return (jint) AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    uint32_t channelCount = audio_channel_count_from_in_mask(channelMask);

    // compare the format against the Java constants
    audio_format_t format = audioFormatToNative(audioFormat);
    if (format == AUDIO_FORMAT_INVALID) {
        ALOGE("Error creating AudioRecord: unsupported audio format %d.", audioFormat);
        return (jint) AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
    }

    size_t bytesPerSample = audio_bytes_per_sample(format);

    if (buffSizeInBytes == 0) {
         ALOGE("Error creating AudioRecord: frameCount is 0.");
        return (jint) AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
    }
    size_t frameSize = channelCount * bytesPerSample;
    size_t frameCount = buffSizeInBytes / frameSize;

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return (jint) AUDIO_JAVA_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return (jint) AUDIO_JAVA_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // create an uninitialized AudioRecord object
    sp<AudioRecord> lpRecorder = new AudioRecord();

    audio_attributes_t *paa = NULL;
    // read the AudioAttributes values
    paa = (audio_attributes_t *) calloc(1, sizeof(audio_attributes_t));
    const jstring jtags =
            (jstring) env->GetObjectField(jaa, javaAudioAttrFields.fieldFormattedTags);
    const char* tags = env->GetStringUTFChars(jtags, NULL);
    // copying array size -1, char array for tags was calloc'd, no need to NULL-terminate it
    strncpy(paa->tags, tags, AUDIO_ATTRIBUTES_TAGS_MAX_SIZE - 1);
    env->ReleaseStringUTFChars(jtags, tags);
    paa->source = (audio_source_t) env->GetIntField(jaa, javaAudioAttrFields.fieldRecSource);
    paa->flags = (audio_flags_mask_t)env->GetIntField(jaa, javaAudioAttrFields.fieldFlags);
    ALOGV("AudioRecord_setup for source=%d tags=%s flags=%08x", paa->source, paa->tags, paa->flags);

    audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE;
    if (paa->flags & AUDIO_FLAG_HW_HOTWORD) {
        flags = AUDIO_INPUT_FLAG_HW_HOTWORD;
    }
    // create the callback information:
    // this data will be passed with every AudioRecord callback
    audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
    lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioRecord object can be garbage collected.
    lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
    lpCallbackData->busy = false;

    const status_t status = lpRecorder->set(paa->source,
        sampleRateInHertz,
        format,        // word length, PCM
        channelMask,
        frameCount,
        recorderCallback,// callback_t
        lpCallbackData,// void* user
        0,             // notificationFrames,
        true,          // threadCanCallJava
        sessionId,
        AudioRecord::TRANSFER_DEFAULT,
        flags,
        paa);

    if (status != NO_ERROR) {
        ALOGE("Error creating AudioRecord instance: initialization check failed with status %d.",
                status);
        goto native_init_failure;
    }
    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }
    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
    // of the Java object
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);

    return (jint) AUDIO_JAVA_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

The main work in this function:

    1. Validate the channel mask, then compute the channel count from it;

    2. Compute the number of frames: the minimum buffer size equals frame count * frame size, and a frame is the bytes occupied by one sample across all channels, so frameCount = buffSizeInBytes / frameSize (see the sketch after this list);

    3. Perform a series of JNI steps for the recording source, and bind a weak reference to the Java AudioRecord into the lpCallbackData callback cookie, so that data and events can be delivered to the upper layer via callbacks;

    4. Call AudioRecord::set(). Note the flags parameter here: its type is audio_input_flags_t, defined in system\core\include\system\audio.h as the audio input flags, and it is set to AUDIO_INPUT_FLAG_NONE:

typedef enum {
    AUDIO_INPUT_FLAG_NONE       = 0x0,  // no attributes
    AUDIO_INPUT_FLAG_FAST       = 0x1,  // prefer an input that supports "fast tracks"
    AUDIO_INPUT_FLAG_HW_HOTWORD = 0x2,  // prefer an input that captures from hw hotword source
} audio_input_flags_t;

    5. Store the lpRecorder object and the lpCallbackData cookie into the corresponding fields of the Java object (via javaAudioRecordFields).
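As referenced in item 2, the frame math for this walkthrough's parameters (mono, 16-bit PCM) works out as follows; the 3584-byte buffer size is a hypothetical getMinBufferSize() result used only for illustration:

    int bytesPerSample = 2;                             // PCM 16-bit
    int channelCount   = 1;                             // mono
    int frameSize      = channelCount * bytesPerSample; // 2 bytes per frame
    int frameCount     = buffSizeInBytes / frameSize;   // e.g. 3584 / 2 = 1792 frames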

Now let's analyze lpRecorder->set():

frameworks\av\media\libmedia\AudioRecord.cpp

status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        audio_input_flags_t flags,
        const audio_attributes_t* pAttributes)
{
    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mTransfer = transferType;

    AutoMutex lock(mLock);

    // invariant that mAudioRecord != 0 is true only after set() returns successfully
    if (mAudioRecord != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }

    if (pAttributes == NULL) {
        memset(&mAttributes, 0, sizeof(audio_attributes_t));
        mAttributes.source = inputSource;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        ALOGV("Building AudioRecord with attributes: source=%d flags=0x%x tags=[%s]",
              mAttributes.source, mAttributes.flags, mAttributes.tags);
    }

    if (sampleRate == 0) {
        ALOGE("Invalid sample rate %u", sampleRate);
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    // these below should probably come from the audioFlinger too...
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %#x", format);
        return BAD_VALUE;
    }
    // Temporary restriction: AudioFlinger currently supports 16-bit PCM only
    if (format != AUDIO_FORMAT_PCM_16_BIT) {
        ALOGE("Format %#x is not supported", format);
        return BAD_VALUE;
    }
    mFormat = format;

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = audio_channel_count_from_in_mask(channelMask);
    mChannelCount = channelCount;

    if (audio_is_linear_pcm(format)) {
        mFrameSize = channelCount * audio_bytes_per_sample(format);
    } else {
        mFrameSize = sizeof(uint8_t);
    }

    // mFrameCount is initialized in openRecord_l
    mReqFrameCount = frameCount;

    mNotificationFramesReq = notificationFrames;
    // mNotificationFramesAct is initialized in openRecord_l

    if (sessionId == AUDIO_SESSION_ALLOCATE) {
        mSessionId = AudioSystem::newAudioUniqueId();
    } else {
        mSessionId = sessionId;
    }
    ALOGV("set(): mSessionId %d", mSessionId);

    mFlags = flags;
    mCbf = cbf;

    if (cbf != NULL) {
        mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
    }

    // create the IAudioRecord
    status_t status = openRecord_l(0 /*epoch*/);

    if (status != NO_ERROR) {
        if (mAudioRecordThread != 0) {
            mAudioRecordThread->requestExit();   // see comment in AudioRecord.h
            mAudioRecordThread->requestExitAndWait();
            mAudioRecordThread.clear();
        }
        return status;
    }

    mStatus = NO_ERROR;
    mActive = false;
    mUserData = user;
    // TODO: add audio hardware input latency here
    mLatency = (1000*mFrameCount) / sampleRate;
    mMarkerPosition = 0;
    mMarkerReached = false;
    mNewPosition = 0;
    mUpdatePeriod = 0;
    AudioSystem::acquireAudioSessionId(mSessionId, -1);
    mSequence = 1;
    mObservedSequence = mSequence;
    mInOverrun = false;

    return NO_ERROR;
}

The main work in this function:

    1. With the arguments passed in from JNI (transferType == TRANSFER_DEFAULT, cbf != NULL, threadCanCallJava == true), mTransfer is set to TRANSFER_SYNC. It decides how data is transferred out of the AudioRecord and is used later (see the read() sketch after this list);

    2. Store the parameters: recording source mAttributes.source, sample rate mSampleRate, sample format mFormat, channel mask mChannelMask, channel count mChannelCount, frame size mFrameSize, requested frame count mReqFrameCount, and notification frame count mNotificationFramesReq. mSessionId is updated here, and the input flags mFlags are still AUDIO_INPUT_FLAG_NONE;

    3. If the cbf data callback is not NULL, start an AudioRecordThread recording thread;

    4. Call openRecord_l(0) to create the IAudioRecord object;

    5. If creation fails, tear the AudioRecordThread down again; otherwise update the remaining member state.
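Since mTransfer ends up as TRANSFER_SYNC here (item 1), the application pulls PCM data itself by calling read() after startRecording(). A minimal sketch continuing the earlier example, where isRecording is an assumed app-side flag:

    byte[] buffer = new byte[minBufSize];
    recorder.startRecording();
    while (isRecording) {
        int n = recorder.read(buffer, 0, buffer.length); // blocks until data arrives
        if (n > 0) {
            // consume n bytes of PCM data, e.g. write them to a file
        }
    }
    recorder.stop();
    recorder.release();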

Next we look at how the IAudioRecord object is created.

status_t AudioRecord::openRecord_l(size_t epoch)
{
    status_t status;
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    // Fast tracks must be at the primary _output_ [sic] sampling rate,
    // because there is currently no concept of a primary input sampling rate
    uint32_t afSampleRate = AudioSystem::getPrimaryOutputSamplingRate();
    if (afSampleRate == 0) {
        ALOGW("getPrimaryOutputSamplingRate failed");
    }

    // Client can only express a preference for FAST.  Server will perform additional tests.
    if ((mFlags & AUDIO_INPUT_FLAG_FAST) && !(
            // use case: callback transfer mode
            (mTransfer == TRANSFER_CALLBACK) &&
            // matching sample rate
            (mSampleRate == afSampleRate))) {
        ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
        // once denied, do not request again if IAudioRecord is re-created
        mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
    }

    IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;

    pid_t tid = -1;
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        trackFlags |= IAudioFlinger::TRACK_FAST;
        if (mAudioRecordThread != 0) {
            tid = mAudioRecordThread->getTid();
        }
    }

    audio_io_handle_t input;
    status = AudioSystem::getInputForAttr(&mAttributes, &input, (audio_session_t)mSessionId,
                                        mSampleRate, mFormat, mChannelMask, mFlags);

    if (status != NO_ERROR) {
        ALOGE("Could not get audio input for record source %d, sample rate %u, format %#x, "
              "channel mask %#x, session %d, flags %#x",
              mAttributes.source, mSampleRate, mFormat, mChannelMask, mSessionId, mFlags);
        return BAD_VALUE;
    }
    {
    // Now that we have a reference to an I/O handle and have not yet handed it off to AudioFlinger,
    // we must release it ourselves if anything goes wrong.

    size_t frameCount = mReqFrameCount;
    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    int originalSessionId = mSessionId;

    // The notification frame count is the period between callbacks, as suggested by the server.
    size_t notificationFrames = mNotificationFramesReq;

    sp<IMemory> iMem;           // for cblk
    sp<IMemory> bufferMem;

	//return recordHandle = new RecordHandle(recordTrack);
	//class RecordHandle : public android::BnAudioRecord
    sp<IAudioRecord> record = audioFlinger->openRecord(input,
                                                       mSampleRate, mFormat,
                                                       mChannelMask,
                                                       &temp,
                                                       &trackFlags,
                                                       tid,
                                                       &mSessionId,
                                                       &notificationFrames,
                                                       iMem,
                                                       bufferMem,
                                                       &status);
    ALOGE_IF(originalSessionId != AUDIO_SESSION_ALLOCATE && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (status != NO_ERROR) {
        ALOGE("AudioFlinger could not create record track, status: %d", status);
        goto release;
    }
    ALOG_ASSERT(record != 0);

    // AudioFlinger now owns the reference to the I/O handle,
    // so we are no longer responsible for releasing it.

    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);

    // Starting address of buffers in shared memory.
    // The buffers are either immediately after the control block,
    // or in a separate area at discretion of server.
    void *buffers;
    if (bufferMem == 0) {
        buffers = cblk + 1;
    } else {
        buffers = bufferMem->pointer();
        if (buffers == NULL) {
            ALOGE("Could not get buffer pointer");
            return NO_INIT;
        }
    }

    // invariant that mAudioRecord != 0 is true only after set() returns successfully
    if (mAudioRecord != 0) {
        mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    mAudioRecord = record;
    mCblkMemory = iMem;
    mBufferMemory = bufferMem;
    IPCThreadState::self()->flushCommands();

    mCblk = cblk;
    // note that temp is the (possibly revised) value of frameCount
    if (temp < frameCount || (frameCount == 0 && temp == 0)) {
        ALOGW("Requested frameCount %zu but received frameCount %zu", frameCount, temp);
    }
    frameCount = temp;

    mAwaitBoost = false;
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_INPUT_FLAG_FAST successful; frameCount %zu", frameCount);
            mAwaitBoost = true;
        } else {
            ALOGV("AUDIO_INPUT_FLAG_FAST denied by server; frameCount %zu", frameCount);
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
        }
    }

    // Make sure that application is notified with sufficient margin before overrun
    if (notificationFrames == 0 || notificationFrames > frameCount) {
        ALOGW("Received notificationFrames %zu for frameCount %zu", notificationFrames, frameCount);
    }
    mNotificationFramesAct = notificationFrames;

    // We retain a copy of the I/O handle, but don't own the reference
    mInput = input;
    mRefreshRemaining = true;

    mFrameCount = frameCount;
    // If IAudioRecord is re-created, don't let the requested frameCount
    // decrease.  This can confuse clients that cache frameCount().
    if (frameCount > mReqFrameCount) {
        mReqFrameCount = frameCount;
    }

    // update proxy
    mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;
    }

release:
    AudioSystem::releaseInput(input, (audio_session_t)mSessionId);
    if (status == NO_ERROR) {
        status = NO_INIT;
    }
    return status;
}

The main work in this function:

    1. Get the IAudioFlinger object, which talks to AudioFlinger over binder, so calling it is effectively calling straight into the AudioFlinger service;

    2. Check the audio input flags and clear AUDIO_INPUT_FLAG_FAST if the fast-track conditions are not met; not relevant here, since the flags remain AUDIO_INPUT_FLAG_NONE;

    3. Call AudioSystem::getInputForAttr to obtain the handle of the input stream (input);

    4. Call audioFlinger->openRecord to create the IAudioRecord object;

    5. Map the IMemory shared memory through which the record data is delivered (the control block cblk, with the data buffers either immediately after it or in a separate server-allocated area);

    6. Update the AudioRecordClientProxy client-side proxy with the new control block and buffers.

Let's focus on steps 3 and 4.

    First, step 3 of AudioRecord.cpp::openRecord_l(0): obtaining the input stream handle (input).

frameworks\av\media\libmedia\AudioSystem.cpp

status_t AudioSystem::getInputForAttr(const audio_attributes_t *attr,
                                audio_io_handle_t *input,
                                audio_session_t session,
                                uint32_t samplingRate,
                                audio_format_t format,
                                audio_channel_mask_t channelMask,
                                audio_input_flags_t flags)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getInputForAttr(attr, input, session, samplingRate, format, channelMask, flags);
}

This obtains the AudioPolicy service and forwards the call to AudioPolicyService:

frameworks\av\services\audiopolicy\AudioPolicyInterfaceImpl.cpp

status_t AudioPolicyService::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }

    // already checked by client, but double-check in case the client wrapper is bypassed
    if (attr->source >= AUDIO_SOURCE_CNT && attr->source != AUDIO_SOURCE_HOTWORD &&
        attr->source != AUDIO_SOURCE_FM_TUNER) {
        return BAD_VALUE;
    }

    if (((attr->source == AUDIO_SOURCE_HOTWORD) && !captureHotwordAllowed()) ||
        ((attr->source == AUDIO_SOURCE_FM_TUNER) && !captureFmTunerAllowed())) {
        return BAD_VALUE;
    }
    sp<AudioPolicyEffects>audioPolicyEffects;
    status_t status;
    AudioPolicyInterface::input_type_t inputType;
    {
        Mutex::Autolock _l(mLock);
        // the audio_in_acoustics_t parameter is ignored by get_input()
        status = mAudioPolicyManager->getInputForAttr(attr, input, session,
                                                     samplingRate, format, channelMask,
                                                     flags, &inputType);
        audioPolicyEffects = mAudioPolicyEffects;

        if (status == NO_ERROR) {
            // enforce permission (if any) required for each type of input
            switch (inputType) {
            case AudioPolicyInterface::API_INPUT_LEGACY:
                break;
            case AudioPolicyInterface::API_INPUT_MIX_CAPTURE:
                if (!captureAudioOutputAllowed()) {
                    ALOGE("getInputForAttr() permission denied: capture not allowed");
                    status = PERMISSION_DENIED;
                }
                break;
            case AudioPolicyInterface::API_INPUT_MIX_EXT_POLICY_REROUTE:
                if (!modifyAudioRoutingAllowed()) {
                    ALOGE("getInputForAttr() permission denied: modify audio routing not allowed");
                    status = PERMISSION_DENIED;
                }
                break;
            case AudioPolicyInterface::API_INPUT_INVALID:
            default:
                LOG_ALWAYS_FATAL("getInputForAttr() encountered an invalid input type %d",
                        (int)inputType);
            }
        }

        if (status != NO_ERROR) {
            if (status == PERMISSION_DENIED) {
                mAudioPolicyManager->releaseInput(*input, session);
            }
            return status;
        }
    }

    if (audioPolicyEffects != 0) {
        // create audio pre processors according to input source
        status_t status = audioPolicyEffects->addInputEffects(*input, attr->source, session);
        if (status != NO_ERROR && status != ALREADY_EXISTS) {
            ALOGW("Failed to add effects on input %d", *input);
        }
    }
    return NO_ERROR;
}

The main work in this function:

    1. For the HOTWORD and FM_TUNER recording sources, check whether the caller holds the corresponding record permission (based on the calling process);

    2. Call into AudioPolicyManager to obtain the input handle and the inputType;

    3. Check whether the app holds the record permission required for that inputType;

    4. Check whether pre-processing effects should be added (audioPolicyEffects); if so, add them with audioPolicyEffects->addInputEffects.

Continuing with step 2:

frameworks\av\services\audiopolicy\AudioPolicyManager.cpp

status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_session_t session,
                                             uint32_t samplingRate,
                                             audio_format_t format,
                                             audio_channel_mask_t channelMask,
                                             audio_input_flags_t flags,
                                             input_type_t *inputType)
{
    *input = AUDIO_IO_HANDLE_NONE;
    *inputType = API_INPUT_INVALID;
    audio_devices_t device;
    // handle legacy remote submix case where the address was not always specified
    String8 address = String8("");
    bool isSoundTrigger = false;
    audio_source_t inputSource = attr->source;
    audio_source_t halInputSource;
    AudioMix *policyMix = NULL;

    if (inputSource == AUDIO_SOURCE_DEFAULT) {
        inputSource = AUDIO_SOURCE_MIC;
    }
    halInputSource = inputSource;

    if (inputSource == AUDIO_SOURCE_REMOTE_SUBMIX &&
            strncmp(attr->tags, "addr=", strlen("addr=")) == 0) {

        device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
        address = String8(attr->tags + strlen("addr="));
        ssize_t index = mPolicyMixes.indexOfKey(address);
        if (index < 0) {
            ALOGW("getInputForAttr() no policy for address %s", address.string());
            return BAD_VALUE;
        }
        if (mPolicyMixes[index]->mMix.mMixType != MIX_TYPE_PLAYERS) {
            ALOGW("getInputForAttr() bad policy mix type for address %s", address.string());
            return BAD_VALUE;
        }
        policyMix = &mPolicyMixes[index]->mMix;
        *inputType = API_INPUT_MIX_EXT_POLICY_REROUTE;
    } else {
        device = getDeviceAndMixForInputSource(inputSource, &policyMix);
        if (device == AUDIO_DEVICE_NONE) {
            ALOGW("getInputForAttr() could not find device for source %d", inputSource);
            return BAD_VALUE;
        }
        if (policyMix != NULL) {
            address = policyMix->mRegistrationId;
            if (policyMix->mMixType == MIX_TYPE_RECORDERS) {
                // there is an external policy, but this input is attached to a mix of recorders,
                // meaning it receives audio injected into the framework, so the recorder doesn't
                // know about it and is therefore considered "legacy"
                *inputType = API_INPUT_LEGACY;
            } else {
                // recording a mix of players defined by an external policy, we're rerouting for
                // an external policy
                *inputType = API_INPUT_MIX_EXT_POLICY_REROUTE;
            }
        } else if (audio_is_remote_submix_device(device)) {
            address = String8("0");
            *inputType = API_INPUT_MIX_CAPTURE;
        } else {
            *inputType = API_INPUT_LEGACY;
        }
        // adapt channel selection to input source
        switch (inputSource) {
        case AUDIO_SOURCE_VOICE_UPLINK:
            channelMask = AUDIO_CHANNEL_IN_VOICE_UPLINK;
            break;
        case AUDIO_SOURCE_VOICE_DOWNLINK:
            channelMask = AUDIO_CHANNEL_IN_VOICE_DNLINK;
            break;
        case AUDIO_SOURCE_VOICE_CALL:
            channelMask = AUDIO_CHANNEL_IN_VOICE_UPLINK | AUDIO_CHANNEL_IN_VOICE_DNLINK;
            break;
        default:
            break;
        }
        if (inputSource == AUDIO_SOURCE_HOTWORD) {
            ssize_t index = mSoundTriggerSessions.indexOfKey(session);
            if (index >= 0) {
                *input = mSoundTriggerSessions.valueFor(session);
                isSoundTrigger = true;
                flags = (audio_input_flags_t)(flags | AUDIO_INPUT_FLAG_HW_HOTWORD);
                ALOGV("SoundTrigger capture on session %d input %d", session, *input);
            } else {
                halInputSource = AUDIO_SOURCE_VOICE_RECOGNITION;
            }
        }
    }

    sp<IOProfile> profile = getInputProfile(device, address,
                                            samplingRate, format, channelMask,
                                            flags);
    if (profile == 0) {
		PLOGV("profile == 0");
        //retry without flags
        audio_input_flags_t log_flags = flags;
        flags = AUDIO_INPUT_FLAG_NONE;
        profile = getInputProfile(device, address,
                                  samplingRate, format, channelMask,
                                  flags);
        if (profile == 0) {
            ALOGW("getInputForAttr() could not find profile for device 0x%X, samplingRate %u,"
                    "format %#x, channelMask 0x%X, flags %#x",
                    device, samplingRate, format, channelMask, log_flags);
            return BAD_VALUE;
        }
    }

    if (profile->mModule->mHandle == 0) {
		PLOGV("getInputForAttr(): HW module %s not opened", profile->mModule->mName);
        ALOGE("getInputForAttr(): HW module %s not opened", profile->mModule->mName);
        return NO_INIT;
    }

    audio_config_t config = AUDIO_CONFIG_INITIALIZER;
    config.sample_rate = samplingRate;
    config.channel_mask = channelMask;
    config.format = format;

    status_t status = mpClientInterface->openInput(profile->mModule->mHandle,
                                                   input,
                                                   &config,
                                                   &device,
                                                   address,
                                                   halInputSource,
                                                   flags);
    // only accept input with the exact requested set of parameters
    if (status != NO_ERROR || *input == AUDIO_IO_HANDLE_NONE ||
        (samplingRate != config.sample_rate) ||
        (format != config.format) ||
        (channelMask != config.channel_mask)) {
        ALOGW("getInputForAttr() failed opening input: samplingRate %d, format %d, channelMask %x",
                samplingRate, format, channelMask);
        if (*input != AUDIO_IO_HANDLE_NONE) {
            mpClientInterface->closeInput(*input);
        }
        return BAD_VALUE;
    }

    sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(profile);
    inputDesc->mInputSource = inputSource;
    inputDesc->mRefCount = 0;
    inputDesc->mOpenRefCount = 1;
    inputDesc->mSamplingRate = samplingRate;
    inputDesc->mFormat = format;
    inputDesc->mChannelMask = channelMask;
    inputDesc->mDevice = device;
    inputDesc->mSessions.add(session);
    inputDesc->mIsSoundTrigger = isSoundTrigger;
    inputDesc->mPolicyMix = policyMix;

    ALOGV("getInputForAttr() returns input type = %d", inputType);

    addInput(*input, inputDesc);
    mpClientInterface->onAudioPortListUpdate();
    return NO_ERROR;
}

The main work in this function:

    1. Call getDeviceAndMixForInputSource to obtain the policyMix and the corresponding audio_devices_t device type. The device types are defined in system\core\include\system\audio.h; since we use the built-in MIC, device is AUDIO_DEVICE_IN_BUILTIN_MIC (see the worked values after this list). If you ever need to add a new kind of audio device, this is where it would be added:

enum {
    AUDIO_DEVICE_NONE                          = 0x0,
    /* reserved bits */
    AUDIO_DEVICE_BIT_IN                        = 0x80000000,
    AUDIO_DEVICE_BIT_DEFAULT                   = 0x40000000,
    /* output devices */
    AUDIO_DEVICE_OUT_EARPIECE                  = 0x1,
    AUDIO_DEVICE_OUT_SPEAKER                   = 0x2,
    AUDIO_DEVICE_OUT_WIRED_HEADSET             = 0x4,
...
    /* input devices */
    AUDIO_DEVICE_IN_COMMUNICATION         = AUDIO_DEVICE_BIT_IN | 0x1,
    AUDIO_DEVICE_IN_AMBIENT               = AUDIO_DEVICE_BIT_IN | 0x2,
    AUDIO_DEVICE_IN_BUILTIN_MIC           = AUDIO_DEVICE_BIT_IN | 0x4,
...
    AUDIO_DEVICE_IN_ALL     = (AUDIO_DEVICE_IN_COMMUNICATION |
                               AUDIO_DEVICE_IN_AMBIENT |
                               AUDIO_DEVICE_IN_BUILTIN_MIC |
                               AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET |
                               AUDIO_DEVICE_IN_WIRED_HEADSET |
                               AUDIO_DEVICE_IN_HDMI |
                               AUDIO_DEVICE_IN_TELEPHONY_RX |
                               AUDIO_DEVICE_IN_BACK_MIC |
                               AUDIO_DEVICE_IN_REMOTE_SUBMIX |
                               AUDIO_DEVICE_IN_ANLG_DOCK_HEADSET |
                               AUDIO_DEVICE_IN_DGTL_DOCK_HEADSET |
                               AUDIO_DEVICE_IN_USB_ACCESSORY |
                               AUDIO_DEVICE_IN_USB_DEVICE |
                               AUDIO_DEVICE_IN_FM_TUNER |
                               AUDIO_DEVICE_IN_TV_TUNER |
                               AUDIO_DEVICE_IN_LINE |
                               AUDIO_DEVICE_IN_SPDIF |
                               AUDIO_DEVICE_IN_BLUETOOTH_A2DP |
                               AUDIO_DEVICE_IN_LOOPBACK |
							   AUDIO_DEVICE_IN_AF |
                               AUDIO_DEVICE_IN_DEFAULT),
    AUDIO_DEVICE_IN_ALL_SCO = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET,
    AUDIO_DEVICE_IN_ALL_USB  = (AUDIO_DEVICE_IN_USB_ACCESSORY |
                                AUDIO_DEVICE_IN_USB_DEVICE),
};

typedef uint32_t audio_devices_t;

    2. Determine the inputType:

    typedef enum {
        API_INPUT_INVALID = -1,
        API_INPUT_LEGACY  = 0,// e.g. audio recording from a microphone
        API_INPUT_MIX_CAPTURE,// used for "remote submix", capture of the media to play it remotely
        API_INPUT_MIX_EXT_POLICY_REROUTE,// used for platform audio rerouting, where mixes are
                                         // handled by external and dynamically installed
                                         // policies which reroute audio mixes
    } input_type_t;

    3. Update channelMask, adapting the channel selection to the input source;

    4. Call getInputProfile, which compares the requested sample rate / format / channel mask against the input profiles the device supports and returns a matching IOProfile. An IOProfile describes the capabilities of an output or input stream; the policy manager uses it to decide whether an output or input is suitable for a given use case, to open/close it accordingly, and to attach/detach audio tracks;

    5. If no profile is found, retry once with AUDIO_INPUT_FLAG_NONE; if that also fails, return BAD_VALUE;

    6. Call mpClientInterface->openInput to open the input stream;

    7. Build an AudioInputDescriptor from the IOProfile, bind it to the input handle, and finally update the audio port list.
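For this walkthrough's use case (built-in MIC), the values from steps 1 and 2 fall out of the definitions above:

    device    = AUDIO_DEVICE_IN_BUILTIN_MIC
              = AUDIO_DEVICE_BIT_IN | 0x4
              = 0x80000000 | 0x4 = 0x80000004
    inputType = API_INPUT_LEGACY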

Let's focus on steps 1 and 6.

   First, step 1 of AudioPolicyManager.cpp::getInputForAttr(): obtaining the policyMix and the corresponding audio_devices_t device type.

audio_devices_t AudioPolicyManager::getDeviceAndMixForInputSource(audio_source_t inputSource,
                                                            AudioMix **policyMix)
{
    audio_devices_t availableDeviceTypes = mAvailableInputDevices.types() &
                                            ~AUDIO_DEVICE_BIT_IN;

    for (size_t i = 0; i < mPolicyMixes.size(); i++) {
        if (mPolicyMixes[i]->mMix.mMixType != MIX_TYPE_RECORDERS) {
            continue;
        }
        for (size_t j = 0; j < mPolicyMixes[i]->mMix.mCriteria.size(); j++) {
            if ((RULE_MATCH_ATTRIBUTE_CAPTURE_PRESET == mPolicyMixes[i]->mMix.mCriteria[j].mRule &&
                    mPolicyMixes[i]->mMix.mCriteria[j].mAttr.mSource == inputSource) ||
               (RULE_EXCLUDE_ATTRIBUTE_CAPTURE_PRESET == mPolicyMixes[i]->mMix.mCriteria[j].mRule &&
                    mPolicyMixes[i]->mMix.mCriteria[j].mAttr.mSource != inputSource)) {
                if (availableDeviceTypes & AUDIO_DEVICE_IN_REMOTE_SUBMIX) {
                    if (policyMix != NULL) {
                        *policyMix = &mPolicyMixes[i]->mMix;
                    }
                    return AUDIO_DEVICE_IN_REMOTE_SUBMIX;
                }
                break;
            }
        }
    }

    return getDeviceForInputSource(inputSource);
}

audio_devices_t AudioPolicyManager::getDeviceForInputSource(audio_source_t inputSource)
{
    uint32_t device = AUDIO_DEVICE_NONE;
    audio_devices_t availableDeviceTypes = mAvailableInputDevices.types() &
                                            ~AUDIO_DEVICE_BIT_IN;

    switch (inputSource) {
    case AUDIO_SOURCE_VOICE_UPLINK:
      if (availableDeviceTypes & AUDIO_DEVICE_IN_VOICE_CALL) {
          device = AUDIO_DEVICE_IN_VOICE_CALL;
          break;
      }
      break;

    case AUDIO_SOURCE_DEFAULT:
    case AUDIO_SOURCE_MIC:
    if (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_A2DP) {
        device = AUDIO_DEVICE_IN_BLUETOOTH_A2DP;
    } else if ((mForceUse[AUDIO_POLICY_FORCE_FOR_RECORD] == AUDIO_POLICY_FORCE_BT_SCO) &&
        (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET)) {
        device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
    } else if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
        device = AUDIO_DEVICE_IN_WIRED_HEADSET;
    } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
        device = AUDIO_DEVICE_IN_USB_DEVICE;
    } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
        device = AUDIO_DEVICE_IN_BUILTIN_MIC;
    }
    break;

    case AUDIO_SOURCE_VOICE_COMMUNICATION:
        // Allow only use of devices on primary input if in call and HAL does not support routing
        // to voice call path.
        if ((mPhoneState == AUDIO_MODE_IN_CALL) &&
                (mAvailableOutputDevices.types() & AUDIO_DEVICE_OUT_TELEPHONY_TX) == 0) {
            availableDeviceTypes = availablePrimaryInputDevices() & ~AUDIO_DEVICE_BIT_IN;
        }

        switch (mForceUse[AUDIO_POLICY_FORCE_FOR_COMMUNICATION]) {
        case AUDIO_POLICY_FORCE_BT_SCO:
            // if SCO device is requested but no SCO device is available, fall back to default case
            if (availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET) {
                device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
                break;
            }
            // FALL THROUGH

        default:    // FORCE_NONE
            if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
                device = AUDIO_DEVICE_IN_WIRED_HEADSET;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
                device = AUDIO_DEVICE_IN_USB_DEVICE;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                device = AUDIO_DEVICE_IN_BUILTIN_MIC;
            }
            break;

        case AUDIO_POLICY_FORCE_SPEAKER:
            if (availableDeviceTypes & AUDIO_DEVICE_IN_BACK_MIC) {
                device = AUDIO_DEVICE_IN_BACK_MIC;
            } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
                device = AUDIO_DEVICE_IN_BUILTIN_MIC;
            }
            break;
        }
        break;

    case AUDIO_SOURCE_VOICE_RECOGNITION:
    case AUDIO_SOURCE_HOTWORD:
        if (mForceUse[AUDIO_POLICY_FORCE_FOR_RECORD] == AUDIO_POLICY_FORCE_BT_SCO &&
                availableDeviceTypes & AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET) {
            device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_WIRED_HEADSET) {
            device = AUDIO_DEVICE_IN_WIRED_HEADSET;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_USB_DEVICE) {
            device = AUDIO_DEVICE_IN_USB_DEVICE;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
            device = AUDIO_DEVICE_IN_BUILTIN_MIC;
        }
        break;
    case AUDIO_SOURCE_CAMCORDER:
        if (availableDeviceTypes & AUDIO_DEVICE_IN_BACK_MIC) {
            device = AUDIO_DEVICE_IN_BACK_MIC;
        } else if (availableDeviceTypes & AUDIO_DEVICE_IN_BUILTIN_MIC) {
            device = AUDIO_DEVICE_IN_BUILTIN_MIC;
        }
        break;
    case AUDIO_SOURCE_VOICE_DOWNLINK:
    case AUDIO_SOURCE_VOICE_CALL:
        if (availableDeviceTypes & AUDIO_DEVICE_IN_VOICE_CALL) {
            device = AUDIO_DEVICE_IN_VOICE_CALL;
        }
        break;
    case AUDIO_SOURCE_REMOTE_SUBMIX:
        if (availableDeviceTypes & AUDIO_DEVICE_IN_REMOTE_SUBMIX) {
            device = AUDIO_DEVICE_IN_REMOTE_SUBMIX;
        }
        break;
     case AUDIO_SOURCE_FM_TUNER:
        if (availableDeviceTypes & AUDIO_DEVICE_IN_FM_TUNER) {
            device = AUDIO_DEVICE_IN_FM_TUNER;
        }
        break;
    default:
        ALOGW("getDeviceForInputSource() invalid input source %d", inputSource);
        break;
    }
    ALOGV("getDeviceForInputSource()input source %d, device %08x", inputSource, device);
    return device;
}

   So the policyMix and the audio_devices_t device type are selected purely from the InputSource; this also gives a good picture of how many categories of audio input devices Android distinguishes.

Now step 6 of AudioPolicyManager.cpp::getInputForAttr(): how mpClientInterface->openInput opens the input stream.

frameworks\av\services\audiopolicy\AudioPolicyClientImpl.cpp

status_t AudioPolicyService::AudioPolicyClient::openInput(audio_module_handle_t module,
                                                          audio_io_handle_t *input,
                                                          audio_config_t *config,
                                                          audio_devices_t *device,
                                                          const String8& address,
                                                          audio_source_t source,
                                                          audio_input_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }

    return af->openInput(module, input, config, device, address, source, flags);
}

This lands in the openInput() function on the AudioFlinger (AF) side:

frameworks\av\services\audioflinger\AudioFlinger.cpp

status_t AudioFlinger::openInput(audio_module_handle_t module,
                                          audio_io_handle_t *input,
                                          audio_config_t *config,
                                          audio_devices_t *device,
                                          const String8& address,
                                          audio_source_t source,
                                          audio_input_flags_t flags)
{
    Mutex::Autolock _l(mLock);

    if (*device == AUDIO_DEVICE_NONE) {
        return BAD_VALUE;
    }

    sp<RecordThread> thread = openInput_l(module, input, config, *device, address, source, flags);

    if (thread != 0) {
        // notify client processes of the new input creation
        thread->audioConfigChanged(AudioSystem::INPUT_OPENED);
        return NO_ERROR;
    }
    return NO_INIT;
}

sp<AudioFlinger::RecordThread> AudioFlinger::openInput_l(audio_module_handle_t module,
                                                         audio_io_handle_t *input,
                                                         audio_config_t *config,
                                                         audio_devices_t device,
                                                         const String8& address,
                                                         audio_source_t source,
                                                         audio_input_flags_t flags)
{
    AudioHwDevice *inHwDev = findSuitableHwDev_l(module, device);
    if (inHwDev == NULL) {
        *input = AUDIO_IO_HANDLE_NONE;
        return 0;
    }

    if (*input == AUDIO_IO_HANDLE_NONE) {
        *input = nextUniqueId();
    }

    audio_config_t halconfig = *config;
    audio_hw_device_t *inHwHal = inHwDev->hwDevice();
    audio_stream_in_t *inStream = NULL;
    // obtain the inStream object from the HAL
    status_t status = inHwHal->open_input_stream(inHwHal, *input, device, &halconfig,
                                        &inStream, flags, address.string(), source);

    // If the input could not be opened with the requested parameters and we can handle the
    // conversion internally, try to open again with the proposed parameters. The AudioFlinger can
    // resample the input and do mono to stereo or stereo to mono conversions on 16 bit PCM inputs.
    if (status == BAD_VALUE &&
            config->format == halconfig.format && halconfig.format == AUDIO_FORMAT_PCM_16_BIT &&
        (halconfig.sample_rate <= 2 * config->sample_rate) &&
        (audio_channel_count_from_in_mask(halconfig.channel_mask) <= FCC_2) &&
        (audio_channel_count_from_in_mask(config->channel_mask) <= FCC_2)) {
        // FIXME describe the change proposed by HAL (save old values so we can log them here)
        ALOGV("openInput_l() reopening with proposed sampling rate and channel mask");
        inStream = NULL;
        status = inHwHal->open_input_stream(inHwHal, *input, device, &halconfig,
                                            &inStream, flags, address.string(), source);
        // FIXME log this new status; HAL should not propose any further changes
    }

    if (status == NO_ERROR && inStream != NULL) {

#ifdef TEE_SINK 
        // Try to re-use most recently used Pipe to archive a copy of input for dumpsys,
        // or (re-)create if current Pipe is idle and does not match the new format
        sp<NBAIO_Sink> teeSink;
        enum {
            TEE_SINK_NO,    // don't copy input
            TEE_SINK_NEW,   // copy input using a new pipe
            TEE_SINK_OLD,   // copy input using an existing pipe
        } kind;
        NBAIO_Format format = Format_from_SR_C(halconfig.sample_rate,
                audio_channel_count_from_in_mask(halconfig.channel_mask), halconfig.format);
        if (!mTeeSinkInputEnabled) {
            kind = TEE_SINK_NO;
        } else if (!Format_isValid(format)) {
            kind = TEE_SINK_NO;
        } else if (mRecordTeeSink == 0) {
            kind = TEE_SINK_NEW;
        } else if (mRecordTeeSink->getStrongCount() != 1) {
            kind = TEE_SINK_NO;
        } else if (Format_isEqual(format, mRecordTeeSink->format())) {
            kind = TEE_SINK_OLD;
        } else {
            kind = TEE_SINK_NEW;
        }
        switch (kind) {
        case TEE_SINK_NEW: {
            Pipe *pipe = new Pipe(mTeeSinkInputFrames, format);
            size_t numCounterOffers = 0;
            const NBAIO_Format offers[1] = {format};
            ssize_t index = pipe->negotiate(offers, 1, NULL, numCounterOffers);
            ALOG_ASSERT(index == 0);
            PipeReader *pipeReader = new PipeReader(*pipe);
            numCounterOffers = 0;
            index = pipeReader->negotiate(offers, 1, NULL, numCounterOffers);
            ALOG_ASSERT(index == 0);
            mRecordTeeSink = pipe;
            mRecordTeeSource = pipeReader;
            teeSink = pipe;
            }
            break;
        case TEE_SINK_OLD:
            teeSink = mRecordTeeSink;
            break;
        case TEE_SINK_NO:
        default:
            break;
        }
#endif
        AudioStreamIn *inputStream = new AudioStreamIn(inHwDev, inStream);
        // Start record thread
        // RecordThread requires both input and output device indication to forward to audio
        // pre processing modules
        sp<RecordThread> thread = new RecordThread(this,
                                  inputStream,
                                  *input,
                                  primaryOutputDevice_l(),
                                  device
#ifdef TEE_SINK
                                  , teeSink
#endif
                                  );
        mRecordThreads.add(*input, thread);
        ALOGV("openInput_l() created record thread: ID %d thread %p", *input, thread.get());
        return thread;
    }

    *input = AUDIO_IO_HANDLE_NONE;
    return 0;
}

The main work in this function:

    1. findSuitableHwDev_l locates the HW module from the module handle carried by the IOProfile and the audio_devices_t device type;

    2. Call the HAL's inHwHal->open_input_stream to open the input stream;

    3. If that fails with BAD_VALUE and AudioFlinger can handle the conversion internally (resampling, or mono/stereo conversion on 16-bit PCM), call it once more with the parameters proposed by the HAL;

    4. Wrap inHwDev and inStream in an AudioStreamIn object; with that, the input stream is established. AudioStreamIn is defined in frameworks\av\services\audioflinger\AudioFlinger.h;

    5. Create a RecordThread and add it to mRecordThreads; note that this server-side thread is distinct from the client-side AudioRecordThread created in AudioRecord.cpp::set().

Let's focus on steps 2 and 5.

First, step 2 of AudioFlinger.cpp::openInput(): opening the input stream.

hardware\aw\audio\tulip\audio_hw.c

static int adev_open_input_stream(struct audio_hw_device *dev,
                                  audio_io_handle_t handle,
                                  audio_devices_t devices,
                                  struct audio_config *config,
                                  struct audio_stream_in **stream_in)
{
    struct sunxi_audio_device *ladev = (struct sunxi_audio_device *)dev;
    struct sunxi_stream_in *in;
    int ret;
    int channel_count = popcount(config->channel_mask);

    *stream_in = NULL;

    if (check_input_parameters(config->sample_rate, config->format, channel_count) != 0)
        return -EINVAL;

    in = (struct sunxi_stream_in *)calloc(1, sizeof(struct sunxi_stream_in));
    if (!in)
        return -ENOMEM;

    in->stream.common.get_sample_rate 	= in_get_sample_rate;
    in->stream.common.set_sample_rate 	= in_set_sample_rate;
    in->stream.common.get_buffer_size 	= in_get_buffer_size;
    in->stream.common.get_channels 		= in_get_channels;
    in->stream.common.get_format 		= in_get_format;
    in->stream.common.set_format 		= in_set_format;
    in->stream.common.standby 			= in_standby;
    in->stream.common.dump 				= in_dump;
    in->stream.common.set_parameters 	= in_set_parameters;
    in->stream.common.get_parameters 	= in_get_parameters;
    in->stream.common.add_audio_effect 	= in_add_audio_effect;
    in->stream.common.remove_audio_effect = in_remove_audio_effect;
    in->stream.set_gain = in_set_gain;
    in->stream.read 	= in_read;
    in->stream.get_input_frames_lost = in_get_input_frames_lost;

    in->requested_rate 	= config->sample_rate;

    // default config
    memcpy(&in->config, &pcm_config_mm_in, sizeof(pcm_config_mm_in));
    in->config.channels = channel_count;
    //in->config.in_init_channels = channel_count;

    in->buffer = malloc(in->config.period_size *
                        audio_stream_frame