
Qualcomm Audio Learning Notes (Bring-up)


Preface: I have recently been studying Qualcomm's audio driver. After working through Qualcomm's audio bring-up and Audio Overview documents, and studying a fairly important blog post found online, I am recording my study notes here.

Four key parts. The Qualcomm audio framework is roughly divided into the following four parts: the audio front end (FE), the audio back end (BE), the DSP, and the audio device (Audio Device).

(Hereafter, the audio front end is abbreviated as FE, the audio back end as BE, and the audio device as Device.) Each audio front end corresponds to one PCM device, and each audio back end corresponds to one DAI port. The DSP sits between the front end and the back end and connects the FE to the BE; all Devices hang off the DAIs.

1. PCM. PCM (Pulse-Code Modulation) is a method of converting an analog signal into a digital signal (my understanding: a PCM data stream is simply the binary stream produced by that conversion). The PCM front ends roughly include the following: deep-buffer (music, video, and other playback that is not latency-sensitive), low-latency (key tones, touch sounds, game background audio, and other low-latency playback), multi-channel, compress-offload (MP3, FLAC, AAC, and other compressed sources), audio-record (normal recording), usb-audio, a2dp-audio, and voice-call (voice calls).

Audio PCM explanation: https://blog.csdn.net/shuyong1999/article/details/7165419
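To make the FE-to-PCM-device relationship concrete, here is a minimal tinyalsa playback sketch. The card and device numbers, the buffer geometry, and the idea that deep-buffer and low-latency front ends differ mainly in period size are illustrative assumptions, not values taken from a specific platform.

#include <string.h>
#include <tinyalsa/asoundlib.h>

/* Each audio front end shows up in user space as a PCM device node
 * (/dev/snd/pcmC<card>D<device>p).  Card/device numbers below are
 * placeholders; check /proc/asound/pcm on the target. */
static int play_two_seconds_of_silence(unsigned int card, unsigned int device)
{
    struct pcm_config cfg = {
        .channels = 2,
        .rate = 48000,
        /* A deep-buffer FE would typically use a larger period than a
         * low-latency FE; 1024 frames x 4 periods is just an example. */
        .period_size = 1024,
        .period_count = 4,
        .format = PCM_FORMAT_S16_LE,
    };
    struct pcm *pcm = pcm_open(card, device, PCM_OUT, &cfg);
    if (!pcm || !pcm_is_ready(pcm))
        return -1;

    char period[1024 * 2 * 2];               /* one period of 16-bit stereo frames */
    memset(period, 0, sizeof(period));
    for (int i = 0; i < 94; i++)             /* roughly 2 s at 48 kHz */
        pcm_write(pcm, period, sizeof(period));

    pcm_close(pcm);
    return 0;
}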

2. DAI. DAI (Digital Audio Interface) is the audio bus interface. The back-end DAIs include: SLIM_BUS, Aux_PCM, Primary_MI2S, Secondary_MI2S, Tertiary_MI2S, and Quaternary_MI2S.
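For reference, a back-end DAI is usually described by a DAI link in the ASoC machine driver. The sketch below uses the older 4.x-style struct snd_soc_dai_link field names found on these platforms, and the cpu/codec/platform strings are illustrative rather than copied from a particular board file.

#include <sound/soc.h>

/* Rough sketch of how a back-end DAI appears as a DAI link in the ASoC
 * machine driver.  Field names follow older 4.x kernels; the cpu, codec,
 * and platform names are illustrative only. */
static struct snd_soc_dai_link quat_mi2s_rx_be = {
    .name           = "Quaternary MI2S Playback",
    .stream_name    = "Quaternary MI2S Playback",
    .cpu_dai_name   = "msm-dai-q6-mi2s.3",   /* back-end CPU DAI */
    .platform_name  = "msm-pcm-routing",     /* aDSP routing platform */
    .codec_name     = "msm-stub-codec.1",    /* stub / external codec */
    .codec_dai_name = "msm-stub-rx",
    .no_pcm         = 1,                     /* back end: no host-visible PCM */
    .dpcm_playback  = 1,
};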

3. Audio Device. The audio devices include: headset, handset, speaker, earpiece, mic, BT, modem, and FM.

They are specifically defined as follows:

/* Playback devices */
SND_DEVICE_MIN,
SND_DEVICE_OUT_BEGIN = SND_DEVICE_MIN,
SND_DEVICE_OUT_HANDSET = SND_DEVICE_OUT_BEGIN,
SND_DEVICE_OUT_SPEAKER,
SND_DEVICE_OUT_SPEAKER_EXTERNAL_1,
SND_DEVICE_OUT_SPEAKER_EXTERNAL_2,
SND_DEVICE_OUT_SPEAKER_REVERSE,
SND_DEVICE_OUT_SPEAKER_WSA,
SND_DEVICE_OUT_SPEAKER_VBAT,
SND_DEVICE_OUT_LINE,
SND_DEVICE_OUT_HEADPHONES,
SND_DEVICE_OUT_HEADPHONES_DSD,
SND_DEVICE_OUT_HEADPHONES_44_1,
SND_DEVICE_OUT_SPEAKER_AND_HEADPHONES,
SND_DEVICE_OUT_SPEAKER_AND_LINE,
SND_DEVICE_OUT_SPEAKER_AND_HEADPHONES_EXTERNAL_1,
SND_DEVICE_OUT_SPEAKER_AND_HEADPHONES_EXTERNAL_2,
SND_DEVICE_OUT_VOICE_HANDSET,
SND_DEVICE_OUT_VOICE_SPEAKER,
SND_DEVICE_OUT_VOICE_SPEAKER_WSA,
SND_DEVICE_OUT_VOICE_SPEAKER_VBAT,
SND_DEVICE_OUT_VOICE_SPEAKER_2,
SND_DEVICE_OUT_VOICE_SPEAKER_2_WSA,
SND_DEVICE_OUT_VOICE_SPEAKER_2_VBAT,
SND_DEVICE_OUT_VOICE_HEADPHONES,
SND_DEVICE_OUT_VOICE_LINE,
SND_DEVICE_OUT_HDMI,
SND_DEVICE_OUT_SPEAKER_AND_HDMI,
SND_DEVICE_OUT_DISPLAY_PORT,
SND_DEVICE_OUT_SPEAKER_AND_DISPLAY_PORT,
SND_DEVICE_OUT_BT_SCO,
SND_DEVICE_OUT_BT_SCO_WB,
SND_DEVICE_OUT_BT_A2DP,
SND_DEVICE_OUT_SPEAKER_AND_BT_A2DP,
SND_DEVICE_OUT_VOICE_TTY_FULL_HEADPHONES,
SND_DEVICE_OUT_VOICE_TTY_VCO_HEADPHONES,
SND_DEVICE_OUT_VOICE_TTY_HCO_HANDSET,
SND_DEVICE_OUT_VOICE_TX,
SND_DEVICE_OUT_AFE_PROXY,
SND_DEVICE_OUT_USB_HEADSET,
SND_DEVICE_OUT_USB_HEADPHONES,
SND_DEVICE_OUT_SPEAKER_AND_USB_HEADSET,
SND_DEVICE_OUT_TRANSMISSION_FM,
SND_DEVICE_OUT_ANC_HEADSET,
SND_DEVICE_OUT_ANC_FB_HEADSET,
SND_DEVICE_OUT_VOICE_ANC_HEADSET,
SND_DEVICE_OUT_VOICE_ANC_FB_HEADSET,
SND_DEVICE_OUT_SPEAKER_AND_ANC_HEADSET,
SND_DEVICE_OUT_ANC_HANDSET,
SND_DEVICE_OUT_SPEAKER_PROTECTED,
SND_DEVICE_OUT_VOICE_SPEAKER_PROTECTED,
SND_DEVICE_OUT_VOICE_SPEAKER_2_PROTECTED,
SND_DEVICE_OUT_SPEAKER_PROTECTED_VBAT,
SND_DEVICE_OUT_VOICE_SPEAKER_PROTECTED_VBAT,
SND_DEVICE_OUT_VOICE_SPEAKER_2_PROTECTED_VBAT,
SND_DEVICE_OUT_SPEAKER_PROTECTED_RAS,
SND_DEVICE_OUT_SPEAKER_PROTECTED_VBAT_RAS,
#ifdef RECORD_PLAY_CONCURRENCY
SND_DEVICE_OUT_VOIP_HANDSET,
SND_DEVICE_OUT_VOIP_SPEAKER,
SND_DEVICE_OUT_VOIP_HEADPHONES,
#endif
SND_DEVICE_OUT_END,

/*
 * Note: IN_BEGIN should be same as OUT_END because total number of devices
 * SND_DEVICES_MAX should not exceed MAX_RX + MAX_TX devices.
 */
/* Capture devices */
SND_DEVICE_IN_BEGIN = SND_DEVICE_OUT_END,
SND_DEVICE_IN_HANDSET_MIC = SND_DEVICE_IN_BEGIN,
SND_DEVICE_IN_HANDSET_MIC_EXTERNAL,
SND_DEVICE_IN_HANDSET_MIC_AEC,
SND_DEVICE_IN_HANDSET_MIC_NS,
SND_DEVICE_IN_HANDSET_MIC_AEC_NS,
SND_DEVICE_IN_HANDSET_DMIC,
SND_DEVICE_IN_HANDSET_DMIC_AEC,
SND_DEVICE_IN_HANDSET_DMIC_NS,
SND_DEVICE_IN_HANDSET_DMIC_AEC_NS,
SND_DEVICE_IN_SPEAKER_MIC,
SND_DEVICE_IN_SPEAKER_MIC_AEC,
SND_DEVICE_IN_SPEAKER_MIC_NS,
SND_DEVICE_IN_SPEAKER_MIC_AEC_NS,
SND_DEVICE_IN_SPEAKER_DMIC,
SND_DEVICE_IN_SPEAKER_DMIC_AEC,
SND_DEVICE_IN_SPEAKER_DMIC_NS,
SND_DEVICE_IN_SPEAKER_DMIC_AEC_NS,
SND_DEVICE_IN_HEADSET_MIC,
SND_DEVICE_IN_HEADSET_MIC_FLUENCE,
SND_DEVICE_IN_VOICE_SPEAKER_MIC,
SND_DEVICE_IN_VOICE_HEADSET_MIC,
SND_DEVICE_IN_HDMI_MIC,
SND_DEVICE_IN_BT_SCO_MIC,
SND_DEVICE_IN_BT_SCO_MIC_NREC,
SND_DEVICE_IN_BT_SCO_MIC_WB,
SND_DEVICE_IN_BT_SCO_MIC_WB_NREC,
SND_DEVICE_IN_CAMCORDER_MIC,
SND_DEVICE_IN_VOICE_DMIC,
SND_DEVICE_IN_VOICE_SPEAKER_DMIC,
SND_DEVICE_IN_VOICE_SPEAKER_QMIC,
SND_DEVICE_IN_VOICE_TTY_FULL_HEADSET_MIC,
SND_DEVICE_IN_VOICE_TTY_VCO_HANDSET_MIC,
SND_DEVICE_IN_VOICE_TTY_HCO_HEADSET_MIC,
SND_DEVICE_IN_VOICE_REC_MIC,
SND_DEVICE_IN_VOICE_REC_MIC_NS,
SND_DEVICE_IN_VOICE_REC_DMIC_STEREO,
SND_DEVICE_IN_VOICE_REC_DMIC_FLUENCE,
SND_DEVICE_IN_VOICE_RX,
SND_DEVICE_IN_USB_HEADSET_MIC,
SND_DEVICE_IN_CAPTURE_FM,
SND_DEVICE_IN_AANC_HANDSET_MIC,
SND_DEVICE_IN_QUAD_MIC,
SND_DEVICE_IN_HANDSET_STEREO_DMIC,
SND_DEVICE_IN_SPEAKER_STEREO_DMIC,
SND_DEVICE_IN_CAPTURE_VI_FEEDBACK,
SND_DEVICE_IN_CAPTURE_VI_FEEDBACK_MONO_1,
SND_DEVICE_IN_CAPTURE_VI_FEEDBACK_MONO_2,
SND_DEVICE_IN_VOICE_SPEAKER_DMIC_BROADSIDE,
SND_DEVICE_IN_SPEAKER_DMIC_BROADSIDE,
SND_DEVICE_IN_SPEAKER_DMIC_AEC_BROADSIDE,
SND_DEVICE_IN_SPEAKER_DMIC_NS_BROADSIDE,
SND_DEVICE_IN_SPEAKER_DMIC_AEC_NS_BROADSIDE,
SND_DEVICE_IN_VOICE_FLUENCE_DMIC_AANC,
SND_DEVICE_IN_HANDSET_QMIC,
SND_DEVICE_IN_SPEAKER_QMIC_AEC,
SND_DEVICE_IN_SPEAKER_QMIC_NS,
SND_DEVICE_IN_SPEAKER_QMIC_AEC_NS,
SND_DEVICE_IN_THREE_MIC,
SND_DEVICE_IN_HANDSET_TMIC,
SND_DEVICE_IN_VOICE_REC_TMIC,
SND_DEVICE_IN_UNPROCESSED_MIC,
SND_DEVICE_IN_UNPROCESSED_STEREO_MIC,
SND_DEVICE_IN_UNPROCESSED_THREE_MIC,
SND_DEVICE_IN_UNPROCESSED_QUAD_MIC,
SND_DEVICE_IN_UNPROCESSED_HEADSET_MIC,

4. Audio DSP

The audio DSP can roughly be equated with the codec. On the Qualcomm MSM8953/MSM8937 platforms, the codec is split into two parts: the digital codec, which sits in the MSM (SoC), and the analog codec, which sits in the PMIC.
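Before moving on to usecases, it is worth noting how the snd_device values listed above are consumed: the audio HAL keeps a table that maps each value to the device name referenced by the mixer-path XML. The sketch below illustrates that idea; the entries are examples, not an excerpt from the actual platform code.

/* SND_DEVICE_* values come from the HAL's platform header (the enum
 * excerpted above).  The strings are illustrative device names of the
 * kind referenced by mixer_paths*.xml. */
static const char * const device_table_example[] = {
    [SND_DEVICE_OUT_HANDSET]    = "handset",
    [SND_DEVICE_OUT_SPEAKER]    = "speaker",
    [SND_DEVICE_OUT_HEADPHONES] = "headphones",
    [SND_DEVICE_OUT_BT_SCO]     = "bt-sco-headset",
    [SND_DEVICE_IN_HANDSET_MIC] = "handset-mic",
    [SND_DEVICE_IN_SPEAKER_MIC] = "speaker-mic",
    [SND_DEVICE_IN_HEADSET_MIC] = "headset-mic",
};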

II. Audio Usecases

A usecase logically corresponds to an audio front end. The usecases are described below; their identifiers are defined later in this section.

1. Playback

Offload playback

A large-sized buffer is sent to the aDSP and the APSS goes to sleep. The aDSP decodes, applies postprocessing effects, and outputs the PCM data to the physical sound device. Before the aDSP decoder input runs out of data, it interrupts the APSS to wake up and send the next set of buffers.
Supported formats – MP3, AC3, EAC3, AAC, 24-bit PCM, 16-bit PCM, FLAC
Sampling rates in kHz – 8, 11.025, 16, 22.05, 32, 44.1, 48, 64, 88.2, 96, 176.4, 192
Flags to be set in AudioTrack – AUDIO_OUTPUT_FLAG_DIRECT | AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD | AUDIO_OUTPUT_FLAG_NON_BLOCKING
Supported channels – 1, 2, 2.1, 4, 5, 5.1, 6, 7.1
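A minimal sketch of how such an offload stream is opened from user space with the tinycompress library (the same path the cplay test tool exercises). The card/device numbers and the codec parameters are placeholders.

#include <string.h>
#include <sound/compress_params.h>
#include <tinycompress/tinycompress.h>

/* Opens a compress-offload stream for MP3; decoding then happens on the
 * aDSP.  Card/device numbers are whatever the platform exposes. */
static struct compress *open_mp3_offload(unsigned int card, unsigned int device)
{
    struct snd_codec codec;
    struct compr_config config;

    memset(&codec, 0, sizeof(codec));
    codec.id = SND_AUDIOCODEC_MP3;
    codec.ch_in = 2;
    codec.ch_out = 2;
    codec.sample_rate = 44100;

    memset(&config, 0, sizeof(config));
    config.codec = &codec;        /* fragment_size/fragments 0 = driver defaults */

    struct compress *c = compress_open(card, device, COMPRESS_IN, &config);
    if (!c)
        return NULL;
    if (!is_compress_ready(c)) {
        compress_close(c);
        return NULL;
    }
    /* Caller then loops over compress_write(), calls compress_start()
     * after the first write, and finishes with compress_drain()/close(). */
    return c;
}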

Deep buffer playback

PCM data is sent to the aDSP, postprocessed, and rendered to an output sound device. Audio effects can also be applied in the ARM or aDSP.
Use cases – Ringtone, audio/video playback, audio streaming, YouTube streaming, and so on
Supported format – PCM
Sampling rates in kHz – 44.1 and 48
Flag to be set in AudioTrack – AUDIO_OUTPUT_FLAG_PRIMARY
Supported channel – Stereo

Low latency

This playback mode is similar to deep buffer, but it uses a smaller buffer size and minimal or no postprocessing in the aDSP, so the PCM stream is rendered to the output sound device with minimal delay.
Use cases – Touch tones, gaming audio, and so on
Supported format – PCM
Sampling rates in kHz – 44.1 and 48
Flag to be set in AudioTrack – AUDIO_OUTPUT_FLAG_FAST
Supported channel – Stereo

Multichannel

Playback mode where the PCM output of the multichannel decoder is sent to the aDSP, postprocessed, and rendered at the output device.
Examples – AAC 5.1 channel, Dolby AC3/eAC3 playback
Supported format – PCM
Sampling rates in kHz – 44.1 and 48
Flag to be set in AudioTrack – AUDIO_OUTPUT_FLAG_DIRECT
Channels supported – 6 (default); changes dynamically
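To tie the four playback modes above together, here is a simplified sketch of the kind of usecase selection the audio HAL performs based on the output flags. It assumes the standard audio_output_flags_t constants from <system/audio.h> and the USECASE_* identifiers listed later in this article; the real selection logic also looks at format, channel mask, and system properties.

#include <system/audio.h>
#include "audio_hw.h"   /* QC HAL header defining audio_usecase_t / USECASE_* */

/* Simplified flag-to-usecase mapping; a sketch, not the HAL's actual code. */
static audio_usecase_t select_playback_usecase(audio_output_flags_t flags)
{
    if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
        return USECASE_AUDIO_PLAYBACK_OFFLOAD;      /* DIRECT|COMPRESS_OFFLOAD|NON_BLOCKING */
    if (flags & AUDIO_OUTPUT_FLAG_FAST)
        return USECASE_AUDIO_PLAYBACK_LOW_LATENCY;  /* touch tones, games */
    if (flags & AUDIO_OUTPUT_FLAG_DIRECT)
        return USECASE_AUDIO_PLAYBACK_MULTI_CH;     /* multichannel PCM */
    return USECASE_AUDIO_PLAYBACK_DEEP_BUFFER;      /* default/primary output */
}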

Playback over A2DP and USB does not go through the aDSP.

FM playback

In FM playback, PCM data from the FM chip is routed to the output sound device via the aDSP.
When a headset is connected and an FM app is launched, the FM app sets the device to AudioSystem.DEVICE_OUT_FM.
fm.c in hardware/qcom/audio/hal/audio_extn implements the code that opens and starts two PCM hostless sessions:
One session for PCM capture
One session for PCM playback
A hostless session is required to keep the AFE ports in the aDSP active while the APSS is asleep.
The audio HAL sends the routing command to enable loopback from the internal FM Tx port to the primary MI2S Rx port, using the mixer control settings in the mixer_paths.xml file.
Example of mixer settings:
<path name="play-fm">
    <ctl name="Internal FM RX Volume" value="1" />
    <ctl name="PRI_MI2S_RX Port Mixer INTERNAL_FM_TX" value="1" />
    <ctl name="MI2S_DL_HL Switch" value="1" />
</path>
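The same play-fm controls can also be set programmatically with the tinyalsa mixer API, which is essentially what tinymix and the HAL do under the hood. The sketch below assumes sound card 0.

#include <tinyalsa/asoundlib.h>

/* Programmatic equivalent of the play-fm mixer path above. */
static int enable_fm_loopback(void)
{
    static const struct { const char *name; int value; } ctls[] = {
        { "Internal FM RX Volume",                 1 },
        { "PRI_MI2S_RX Port Mixer INTERNAL_FM_TX", 1 },
        { "MI2S_DL_HL Switch",                     1 },
    };
    struct mixer *mixer = mixer_open(0);
    if (!mixer)
        return -1;

    for (unsigned int i = 0; i < sizeof(ctls) / sizeof(ctls[0]); i++) {
        struct mixer_ctl *ctl = mixer_get_ctl_by_name(mixer, ctls[i].name);
        if (ctl)
            mixer_ctl_set_value(ctl, 0, ctls[i].value);
    }

    mixer_close(mixer);
    return 0;
}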

2. Recording

Compress mode

Mode of recording where encoded packets are received by the APSS directly from the aDSP; it is supported for the AMR-WB format only.

Nontunnel mode

Mode of recording where PCM data from the mic is preprocessed in the aDSP and received by the APSS, which then encodes the PCM into the required format using a DSP-based or software-based encoder.
Examples include camcorder recording and in-call recording.
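A minimal tinyalsa capture sketch for this non-tunnel path: the APSS reads PCM frames from the FE capture device and would then hand them to an encoder. The card/device numbers and configuration values are placeholders.

#include <stdlib.h>
#include <tinyalsa/asoundlib.h>

/* Reads one buffer of mono 48 kHz PCM from an FE capture device. */
static int capture_some_pcm(unsigned int card, unsigned int device)
{
    struct pcm_config cfg = {
        .channels = 1,
        .rate = 48000,
        .period_size = 1024,
        .period_count = 4,
        .format = PCM_FORMAT_S16_LE,
    };
    struct pcm *pcm = pcm_open(card, device, PCM_IN, &cfg);
    if (!pcm || !pcm_is_ready(pcm))
        return -1;

    unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
    char *buf = malloc(bytes);
    if (buf && pcm_read(pcm, buf, bytes) == 0) {
        /* ...hand buf to a software or DSP-based encoder here... */
    }
    free(buf);
    pcm_close(pcm);
    return 0;
}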

Multichannel mode

Used for capturing more than two channels of the PCM stream and encoding them into a multichannel codec format such as AC3.
Examples include surround-sound camcorder recording and 4-to-6-channel upsampling for 5.1-channel encoding.

FM recording

PCM data from the FM chip is routed to the APSS via the aDSP for recording in the intended format.
Recording occurs as a normal use case from the internal FM Tx port.
Example of mixer settings:
<path name="audio-record capture-fm">
    <ctl name="MultiMedia1 Mixer INTERNAL_FM_TX" value="1" />
</path>

/* Playback usecases */
USECASE_AUDIO_PLAYBACK_DEEP_BUFFER = 0,
USECASE_AUDIO_PLAYBACK_LOW_LATENCY,
USECASE_AUDIO_PLAYBACK_MULTI_CH,
USECASE_AUDIO_PLAYBACK_OFFLOAD,
USECASE_AUDIO_PLAYBACK_OFFLOAD2,
USECASE_AUDIO_PLAYBACK_OFFLOAD3,
USECASE_AUDIO_PLAYBACK_OFFLOAD4,
USECASE_AUDIO_PLAYBACK_OFFLOAD5,
USECASE_AUDIO_PLAYBACK_OFFLOAD6,
USECASE_AUDIO_PLAYBACK_OFFLOAD7,
USECASE_AUDIO_PLAYBACK_OFFLOAD8,
USECASE_AUDIO_PLAYBACK_OFFLOAD9,
USECASE_AUDIO_PLAYBACK_ULL,

/* FM usecase */ USECASE_AUDIO_PLAYBACK_FM,

/* HFP Use case*/ USECASE_AUDIO_HFP_SCO, USECASE_AUDIO_HFP_SCO_WB,

/* Capture usecases */
USECASE_AUDIO_RECORD,
USECASE_AUDIO_RECORD_COMPRESS,
USECASE_AUDIO_RECORD_COMPRESS2,
USECASE_AUDIO_RECORD_COMPRESS3,
USECASE_AUDIO_RECORD_COMPRESS4,
USECASE_AUDIO_RECORD_LOW_LATENCY,
USECASE_AUDIO_RECORD_FM_VIRTUAL,

/* Voice usecase */ USECASE_VOICE_CALL,

/* Voice extension usecases */
USECASE_VOICE2_CALL,
USECASE_VOLTE_CALL,
USECASE_QCHAT_CALL,
USECASE_VOWLAN_CALL,
USECASE_VOICEMMODE1_CALL,
USECASE_VOICEMMODE2_CALL,
USECASE_COMPRESS_VOIP_CALL,

USECASE_INCALL_REC_UPLINK,
USECASE_INCALL_REC_DOWNLINK,
USECASE_INCALL_REC_UPLINK_AND_DOWNLINK,
USECASE_INCALL_REC_UPLINK_COMPRESS,
USECASE_INCALL_REC_DOWNLINK_COMPRESS,
USECASE_INCALL_REC_UPLINK_AND_DOWNLINK_COMPRESS,

USECASE_INCALL_MUSIC_UPLINK, USECASE_INCALL_MUSIC_UPLINK2,

USECASE_AUDIO_SPKR_CALIB_RX, USECASE_AUDIO_SPKR_CALIB_TX,

USECASE_AUDIO_PLAYBACK_AFE_PROXY, USECASE_AUDIO_RECORD_AFE_PROXY,

USECASE_AUDIO_PLAYBACK_EXT_DISP_SILENCE,

III. Audio Path Configuration (Mixer Control Configuration)

Simply put, audio path configuration connects an audio front end (FE), through an audio back end (BE), to an audio device (FE <===> BE <===> Device). A usecase is associated with an audio device through routing. The routing is usually configured in an XML file; on MSM8953, for example, it lives in mixer_paths_qrd_sku3.xml. The settings in this file are also referred to as the mixer control configuration. When bringing up audio on a Qualcomm platform, one normally does not start by editing mixer_paths.xml; instead, the following tools are used first to verify the audio path:

tinymix: configure the audio routing
tinycap: record audio
tinyplay: play audio

1. External speaker (SPK) debugging example

For example, when debugging an external speaker whose audio source is the right channel (HPHR), tinymix + tinyplay can be used to verify the audio path:

tinymix "PRI_MI2S_RX Audio Mixer MultiMedia1" "1"
tinymix "MI2S_RX Channels" "One"
tinymix "RX2 MIX1 INP1" "RX1"
tinymix "RDAC2 MUX" "RX2"
tinymix "HPHR" "Switch"
tinymix "Ext Spk Switch" "On"
tinyplay /sdcard/test.wav

How to understand tinymix "MI2S_RX Channels" "One": this sets the number of channels on the MI2S RX line; the value is One or Two.

How to understand tinymix "RX2 MIX1 INP1" "RX1": the speaker path uses the internal RX2 MIX1 mixer as its input stage, and that mixer in turn takes RX1 as its input.

How to understand tinymix "RDAC2 MUX" "RX2": the right-channel output is driven by the DAC2 digital-to-analog converter, and the RDAC2 mux selects RX2 as its input.

How to understand tinymix "HPHR" "Switch": this controls whether the right headphone channel is on or off; the value is Switch or Zero.

How to understand tinymix "Ext Spk Switch" "On": this enables the external speaker. External speakers generally have a PA enable pin that must be turned on explicitly; the value is On or Off.

The whole external-speaker audio chain can therefore be simplified as:

Rx Mix2 -> DAC2 -> HPHR -> SPK

The complete path for the SPK configuration above is shown in the figure (figure not reproduced here).
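For completeness, the same route can be set from C with the tinyalsa mixer API instead of the tinymix shell commands; enum-type controls (such as "RX2 MIX1 INP1") take string values, while the FE-to-BE mixer control takes an integer. A sketch assuming sound card 0:

#include <tinyalsa/asoundlib.h>

/* Small helpers so a missing control does not crash the sketch. */
static int set_int(struct mixer *m, const char *name, int v)
{
    struct mixer_ctl *ctl = mixer_get_ctl_by_name(m, name);
    return ctl ? mixer_ctl_set_value(ctl, 0, v) : -1;
}

static int set_enum(struct mixer *m, const char *name, const char *v)
{
    struct mixer_ctl *ctl = mixer_get_ctl_by_name(m, name);
    return ctl ? mixer_ctl_set_enum_by_string(ctl, v) : -1;
}

/* Programmatic equivalent of the tinymix sequence above; afterwards the
 * MultiMedia1 FE PCM device can be opened and written to. */
static int route_right_channel_to_ext_spk(void)
{
    struct mixer *m = mixer_open(0);
    if (!m)
        return -1;

    set_int (m, "PRI_MI2S_RX Audio Mixer MultiMedia1", 1);
    set_enum(m, "MI2S_RX Channels", "One");
    set_enum(m, "RX2 MIX1 INP1", "RX1");
    set_enum(m, "RDAC2 MUX", "RX2");
    set_enum(m, "HPHR", "Switch");
    set_enum(m, "Ext Spk Switch", "On");

    mixer_close(m);
    return 0;
}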

2. Single-mic debugging example (primary mic)

tinymix "MultiMedia1 Mixer TERT_MI2S_TX" "1"
tinymix "MI2S_TX Channels" "One"
tinymix "ADC1 Volume" "6"
tinymix "DEC1 MUX" "ADC1"
tinymix "IIR1 INP1 MUX" "DEC1"
tinycap /data/input1.wav -C 1 -R 44100 -T 20

The fields mean much the same as in the SPK case. Note, however, that the audio stream directions for MIC and SPK are opposite, and that different audio devices hang off different DAI ports. This is reflected in the following two commands (for playback the control is named "<BE> Audio Mixer <FE>", for capture "<FE> Mixer <BE>"):

tinymix "PRI_MI2S_RX Audio Mixer MultiMedia1" "1"    // SPK (playback): FE PCM -> DSP -> BE DAI

tinymix "MultiMedia1 Mixer TERT_MI2S_TX" "1"    // MIC (capture): BE DAI -> DSP -> FE PCM

The complete path for the single MIC is shown in the figure (figure not reproduced here).

Once the path is confirmed to be working, import the tinymix parameters into the corresponding mixer controls in mixer_paths.xml, push the modified mixer_paths.xml to the /etc/ directory to replace the original file, then reboot the device and test through a system APK.
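At runtime, the HAL applies these mixer_paths.xml paths through libaudioroute rather than invoking tinymix. Below is a sketch of that flow; the card number, the XML location, and the path name "deep-buffer-playback" are assumptions based on the MSM8953 example above.

#include <audio_route/audio_route.h>

/* Applies a named <path> from the mixer-paths XML and later resets it. */
static void apply_deep_buffer_path(void)
{
    struct audio_route *ar =
        audio_route_init(0, "/etc/mixer_paths_qrd_sku3.xml");
    if (!ar)
        return;

    /* Enables every <ctl> listed under the named <path> in the XML. */
    audio_route_apply_and_update_path(ar, "deep-buffer-playback");

    /* ...open and write to the corresponding FE PCM device here... */

    audio_route_reset_and_update_path(ar, "deep-buffer-playback");
    audio_route_free(ar);
}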

References

1. Qualcomm documents: the related overview, bring-up, and audio debug documents (it is not convenient to disclose the document numbers and titles here).

2. CSDN blog: these notes were revised and extended based on the content of the blog post below.