
AudioRecord audio-capture parameters on Android, and playback with AudioTrack

The API for capturing audio on Android is android.media.AudioRecord.

Playing audio on Android is examined the same way, starting from its API class, android.media.AudioTrack.

The parameters of the constructor are exactly the standard audio-capture parameters.

The meaning of each parameter is explained below.

1. public AudioRecord (int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)

Class constructor.

Parameters

audioSource

the recording source. See MediaRecorder.AudioSource for recording source definitions.

Audio source: where the audio is captured from. Here we are of course capturing from the microphone, so this parameter is set to MIC (MediaRecorder.AudioSource.MIC).

sampleRateInHz

the sample rate expressed in Hertz. Examples of rates are (but not limited to) 44100, 22050 and 11025.

Sample rate: the sampling frequency of the audio, i.e. how many samples are taken per second; the higher the sample rate, the better the sound quality. The examples given are 44100, 22050 and 11025, but the value is not limited to these. For low-quality audio you can use lower rates such as 4000 or 8000.

channelConfig

describes the configuration of the audio channels. See CHANNEL_IN_MONO and CHANNEL_IN_STEREO.

Channel configuration: Android supports two-channel stereo and mono. CHANNEL_IN_MONO is mono, CHANNEL_IN_STEREO is stereo.

audioFormat

Encoding and sample size: the captured data is, of course, PCM-encoded (Pulse Code Modulation: PCM converts a continuously varying analog signal into a digital code through three steps: sampling, quantization and encoding). Android supports 16-bit or 8-bit samples. The larger the sample size, the more information each sample carries and the better the sound quality; 16-bit is the mainstream choice today, while 8-bit is enough for low-quality voice transmission.

bufferSizeInBytes

the total size (in bytes) of the buffer where audio data is written to during the recording. New audio data can be read from this buffer in smaller chunks than this size. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioRecord instance. Using values smaller than getMinBufferSize() will result in an initialization failure.

The size of the buffer needed to hold the captured data; if you do not know the minimum required size, query it with getMinBufferSize().

The captured data is stored in a byte buffer; it can be read out with a stream or saved to a file, as in the sketch below.
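Putting these parameters together, a capture routine might look like the following minimal sketch. It assumes an 8000 Hz, mono, 16-bit configuration; the class name, method name, output path and maxBytes limit are made up for this example, and a real app also needs the RECORD_AUDIO permission.

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class PcmRecorder { // illustrative class name
        public void record(String outputPath, long maxBytes) throws IOException {
            int sampleRate = 8000;                         // low-quality voice capture
            int channel = AudioFormat.CHANNEL_IN_MONO;     // mono input
            int encoding = AudioFormat.ENCODING_PCM_16BIT; // 16-bit PCM samples
            // Ask the framework for the smallest legal buffer for this configuration
            int minBufSize = AudioRecord.getMinBufferSize(sampleRate, channel, encoding);

            AudioRecord recorder = new AudioRecord(
                    MediaRecorder.AudioSource.MIC, // audioSource: capture from the microphone
                    sampleRate,                    // sampleRateInHz
                    channel,                       // channelConfig
                    encoding,                      // audioFormat
                    minBufSize);                   // bufferSizeInBytes: never smaller than getMinBufferSize()

            byte[] buffer = new byte[minBufSize];
            FileOutputStream out = new FileOutputStream(outputPath);
            try {
                recorder.startRecording();
                long written = 0;
                while (written < maxBytes) {
                    // Read captured PCM data out of AudioRecord's buffer in chunks
                    int read = recorder.read(buffer, 0, buffer.length);
                    if (read > 0) {
                        out.write(buffer, 0, read); // raw PCM, no header (not a playable .wav yet)
                        written += read;
                    }
                }
            } finally {
                recorder.stop();
                recorder.release();
                out.close();
            }
        }
    }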

android.media.AudioTrack.AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode) throws IllegalArgumentException

2. public AudioTrack (int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)

Parameters
streamType the type of the audio stream, for example STREAM_MUSIC.
sampleRateInHz the sample rate expressed in Hertz. Examples of rates are (but not limited to) 44100, 22050 and 11025.
channelConfig describes the configuration of the audio channels. See CHANNEL_OUT_MONO and CHANNEL_OUT_STEREO.
audioFormat the format in which the audio data is to be presented to the hardware. See ENCODING_PCM_8BIT and ENCODING_PCM_16BIT.
bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.
mode streaming or static buffer. See MODE_STATIC and MODE_STREAM.


With this class it is very easy to play audio data on an Android system. Below is the source code I wrote myself:
    // The constructor needs a buffer size; query the minimum for this configuration
    int minBufSize = AudioTrack.getMinBufferSize(
                         32000,
                         AudioFormat.CHANNEL_OUT_STEREO,
                         AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack audio = new AudioTrack(
                           AudioManager.STREAM_MUSIC,      // the type of the stream
                           32000,                          // sample rate of the audio data: 32 kHz (44.1 kHz would be 44100)
                           AudioFormat.CHANNEL_OUT_STEREO, // two-channel stereo output; CHANNEL_OUT_MONO is mono
                           AudioFormat.ENCODING_PCM_16BIT, // 8-bit or 16-bit samples; 16 bit here, which almost all audio uses nowadays
                           minBufSize,                     // buffer size in bytes, at least getMinBufferSize()
                           AudioTrack.MODE_STREAM          // streaming mode; the other mode, MODE_STATIC, did not seem to have any effect
                       );
    audio.play(); // start the audio device; from here on the audio data can really be played
    // Opening the mp3 file, reading the data, decoding, etc. are omitted ...
    byte[] buffer = new byte[4096];
    boolean endOfFile = false;
    while (true)
    {
        // The key step: write the decoded data from the buffer into the AudioTrack object
        audio.write(buffer, 0, 4096);
        if (endOfFile) break; // the (omitted) decoding code would set this flag when the file is exhausted
    }
    // Finally, don't forget to stop and release the resources
    audio.stop();
    audio.release();
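On the remark above that MODE_STATIC did not seem to have any effect: in static mode the AudioTrack expects the entire sound to be written into its buffer before play() is called, which is probably why a streaming-style write loop appears to do nothing. A minimal sketch of the static-mode pattern follows; soundData is a hypothetical byte[] that already holds a complete decoded PCM clip in the same 32 kHz / stereo / 16-bit format as above.

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class StaticClipPlayer { // illustrative class name
        /** Plays a short, fully decoded PCM clip (16-bit stereo at 32 kHz). */
        public static AudioTrack playClip(byte[] soundData) {
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    32000,
                    AudioFormat.CHANNEL_OUT_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    soundData.length,        // static mode: the buffer must hold the whole clip
                    AudioTrack.MODE_STATIC);
            track.write(soundData, 0, soundData.length); // write the entire clip first ...
            track.play();                                // ... then start playback
            // To replay later: track.stop(); track.reloadStaticData(); track.play();
            // Call track.release() once the clip is no longer needed.
            return track;
        }
    }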