A Detailed Look at the ijkplayer-android Framework
This article focuses on the core code implemented in C. Wherever platform-specific wrappers or handling come into play, Android is used as the example.
1. FFplay Source Code Flow
Since the bottom layer of ijkplayer is based on ffplay, it helps to first understand ffplay's processing flow. FFplay is the reference player that ships with the FFmpeg project.
1) The flowchart of the FFplay source code is shown in the figure.
2) The overall function-call structure of FFplay is shown in the figure below.
For a detailed analysis of the individual functions, see Lei Xiaohua's blog: http://blog.csdn.net/leixiaohua1020/article/details/39762143
3) ijkplayer rewrites ffplay.c at the native layer. The changes mainly remove the parts of ffplay that play audio and video through the SDL library, and add implementations of mobile hardware decoding, video rendering, and audio output. These differ between Android and iOS, as follows:
Platform | Hardware decoding | Video rendering | Audio output |
iOS | VideoToolbox | OpenGL ES | AudioQueue |
Android | MediaCodec | OpenGL ES, ANativeWindow | OpenSL ES, AudioTrack |
As the table shows, ijkplayer does not currently support hardware audio decoding.
2. ijkplayer Directory Layout
Opening the ijkplayer project, its main directory structure is as follows:
android - upper-layer interface wrappers and platform-specific code for the Android platform
config - configuration files used to build ffmpeg
extra - sources of the dependencies needed to build ijkplayer, such as ffmpeg and openssl
ijkmedia - the core code
ijkj4a - used on Android to let C code call into the Java layer; this folder is auto-generated by jni4android, another Bilibili open-source project
ijkplayer - media data download and decoding
ijksdl - audio and video rendering
ios - upper-layer interface wrappers and platform-specific code for the iOS platform
tools - scripts for initializing the project
3. Initialization Flow
The main job of initialization is creating the player object. In the IjkMediaPlayer class, the initPlayer method calls native_setup to do this work.
private void initPlayer(IjkLibLoader libLoader) {
    loadLibrariesOnce(libLoader);
    initNativeOnce();

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    /*
     * Native setup requires a weak reference to our object. It's easier to
     * create it here than in C++.
     */
    native_setup(new WeakReference<IjkMediaPlayer>(this));
}
native_setup is a native method; it ultimately lands in IjkMediaPlayer_native_setup in ijkplayer_jni.c:
static void
IjkMediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    MPTRACE("%s\n", __func__);
    IjkMediaPlayer *mp = ijkmp_android_create(message_loop);
    JNI_CHECK_GOTO(mp, env, "java/lang/OutOfMemoryError", "mpjni: native_setup: ijkmp_create() failed", LABEL_RETURN);

    jni_set_media_player(env, thiz, mp);
    ijkmp_set_weak_thiz(mp, (*env)->NewGlobalRef(env, weak_this));
    ijkmp_set_inject_opaque(mp, ijkmp_get_weak_thiz(mp));
    ijkmp_android_set_mediacodec_select_callback(mp, mediacodec_select_callback, (*env)->NewGlobalRef(env, weak_this));

LABEL_RETURN:
    ijkmp_dec_ref_p(&mp);
}
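On the Java side, native_setup is declared as a native method; the binding to IjkMediaPlayer_native_setup happens through a JNINativeMethod table registered at library load time. A simplified sketch of that wiring (the real table in ijkplayer_jni.c registers many more methods; treat the details here as illustrative):

#include <jni.h>

static JNINativeMethod g_methods[] = {
    { "native_setup", "(Ljava/lang/Object;)V", (void *) IjkMediaPlayer_native_setup },
};

JNIEXPORT jint JNI_OnLoad(JavaVM *vm, void *reserved)
{
    JNIEnv *env = NULL;
    if ((*vm)->GetEnv(vm, (void **) &env, JNI_VERSION_1_4) != JNI_OK)
        return -1;

    // Bind the Java-declared native methods to their C implementations.
    jclass clazz = (*env)->FindClass(env, "tv/danmaku/ijk/media/player/IjkMediaPlayer");
    (*env)->RegisterNatives(env, clazz, g_methods, sizeof(g_methods) / sizeof(g_methods[0]));
    return JNI_VERSION_1_4;
}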
Here the IjkMediaPlayer struct instance is created via ijkmp_android_create:

IjkMediaPlayer *ijkmp_android_create(int (*msg_loop)(void *))
{
    IjkMediaPlayer *mp = ijkmp_create(msg_loop);
    if (!mp)
        goto fail;

    mp->ffplayer->vout = SDL_VoutAndroid_CreateForAndroidSurface();
    if (!mp->ffplayer->vout)
        goto fail;

    mp->ffplayer->pipeline = ffpipeline_create_from_android(mp->ffplayer);
    if (!mp->ffplayer->pipeline)
        goto fail;

    ffpipeline_set_vout(mp->ffplayer->pipeline, mp->ffplayer->vout);

    return mp;

fail:
    ijkmp_dec_ref_p(&mp);
    return NULL;
}
This function performs three main steps:
1. Create the IjkMediaPlayer object. ijkmp_create allocates it, creates the FFPlayer object through ffp_create, and stores the message-loop handler:
IjkMediaPlayer *ijkmp_create(int (*msg_loop)(void *))
{
    IjkMediaPlayer *mp = (IjkMediaPlayer *) mallocz(sizeof(IjkMediaPlayer));
    ......
    mp->ffplayer = ffp_create();
    ......
    mp->msg_loop = msg_loop;
    ......
    return mp;
}
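For orientation, the fields of IjkMediaPlayer that matter for this article look roughly as follows (abridged from ijkplayer_internal.h; the exact field set varies across versions):

struct IjkMediaPlayer {
    volatile int ref_count;        // released through ijkmp_dec_ref_p()
    pthread_mutex_t mutex;
    FFPlayer *ffplayer;            // the ffplay.c-based core

    int (*msg_loop)(void *arg);    // message loop supplied by the platform layer
    SDL_Thread *msg_thread;        // created later, in ijkmp_prepare_async_l()

    int mp_state;                  // MP_STATE_* lifecycle state
    char *data_source;
    void *weak_thiz;               // global ref to the Java-side WeakReference
};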
2. Create the video rendering object, SDL_Vout (on Android, the SDL_VoutAndroid_CreateForAndroidSurface call above is a thin wrapper around this function):
SDL_Vout *SDL_VoutAndroid_CreateForANativeWindow()
{
    SDL_Vout *vout = SDL_Vout_CreateInternal(sizeof(SDL_Vout_Opaque));
    if (!vout)
        return NULL;

    SDL_Vout_Opaque *opaque = vout->opaque;
    opaque->native_window = NULL;
    if (ISDL_Array__init(&opaque->overlay_manager, 32))
        goto fail;
    if (ISDL_Array__init(&opaque->overlay_pool, 32))
        goto fail;

    opaque->egl = IJK_EGL_create();
    if (!opaque->egl)
        goto fail;

    vout->opaque_class = &g_nativewindow_class;
    vout->create_overlay = func_create_overlay;
    vout->free_l = func_free_l;
    vout->display_overlay = func_display_overlay;

    return vout;

fail:
    func_free_l(vout);
    return NULL;
}
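The assignments at the end of this function are the whole contract: SDL_Vout is a small vtable-style interface that each platform back-end fills in and that the player core later calls through. Abridged from ijksdl_vout.h (the exact field list is version-dependent):

typedef struct SDL_Vout SDL_Vout;
struct SDL_Vout {
    SDL_mutex       *mutex;
    SDL_Class       *opaque_class;  // identifies the back-end (ANativeWindow here)
    SDL_Vout_Opaque *opaque;        // back-end private state
    Uint32           overlay_format;

    SDL_VoutOverlay *(*create_overlay)(int width, int height, int frame_format, SDL_Vout *vout);
    void             (*free_l)(SDL_Vout *vout);
    int              (*display_overlay)(SDL_Vout *vout, SDL_VoutOverlay *overlay);
};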
3. Create the platform-specific IJKFF_Pipeline object, which covers video decoding and audio output:
IJKFF_Pipeline *ffpipeline_create_from_android(FFPlayer *ffp)
{
    ALOGD("ffpipeline_create_from_android()\n");
    IJKFF_Pipeline *pipeline = ffpipeline_alloc(&g_pipeline_class, sizeof(IJKFF_Pipeline_Opaque));
    if (!pipeline)
        return pipeline;

    IJKFF_Pipeline_Opaque *opaque = pipeline->opaque;
    opaque->ffp = ffp;
    opaque->surface_mutex = SDL_CreateMutex();
    opaque->left_volume = 1.0f;
    opaque->right_volume = 1.0f;
    if (!opaque->surface_mutex) {
        ALOGE("ffpipeline-android:create SDL_CreateMutex failed\n");
        goto fail;
    }

    pipeline->func_destroy = func_destroy;
    pipeline->func_open_video_decoder = func_open_video_decoder;
    pipeline->func_open_audio_output = func_open_audio_output;

    return pipeline;

fail:
    ffpipeline_free_p(&pipeline);
    return NULL;
}
This completes ijkplayer's initialization flow. In short: the player object is created, and the groundwork for audio/video decoding and rendering is laid.
4. Core Code Walkthrough
ijkplayer is essentially built on ffplay.c. This chapter follows that file and covers three major areas: data reading, audio/video decoding, and audio/video rendering and synchronization. Basic familiarity with ffmpeg is assumed.
The main call flow in ffplay.c is shown in the figure below:
When the caller starts playback via prepareToPlay, ijkplayer eventually reaches this function in ffplay.c:

int ffp_prepare_async_l(FFPlayer *ffp, const char *file_name)

This is the entry point for starting the player: it sets player options, opens the audio output, and, most importantly, calls stream_open.
static VideoState *stream_open(FFPlayer *ffp, const char *filename, AVInputFormat *iformat)
{
    ......
    /* start video display */
    if (frame_queue_init(&is->pictq, &is->videoq, ffp->pictq_size, 1) < 0)
        goto fail;
    if (frame_queue_init(&is->sampq, &is->audioq, SAMPLE_QUEUE_SIZE, 1) < 0)
        goto fail;
    if (packet_queue_init(&is->videoq) < 0 ||
        packet_queue_init(&is->audioq) < 0)
        goto fail;
    ......
    is->video_refresh_tid = SDL_CreateThreadEx(&is->_video_refresh_tid, video_refresh_thread, ffp, "ff_vout");
    ......
    is->read_tid = SDL_CreateThreadEx(&is->_read_tid, read_thread, ffp, "ff_read");
    ......
}
As the code shows, stream_open mainly does the following:
- creates videoq/audioq, which hold undecoded video/audio packets
- creates pictq/sampq, which hold decoded video/audio frames
- creates the data-reading thread, read_thread
- creates the video rendering thread, video_refresh_thread

Note: subtitles form a stream parallel to video and audio. ffplay supports them too, with the same pair of pre- and post-decode queues, and it starts a subtitle decoding thread when the file contains subtitles.
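For reference, the two queue types behind these four instances are defined in ffplay.c roughly as follows (abridged; ijkplayer's fork additionally keeps a recycle list of packets, omitted here):

/* Compressed packets, one list node per AVPacket. */
typedef struct MyAVPacketList {
    AVPacket pkt;
    struct MyAVPacketList *next;
    int serial;                 // bumped on seek/flush to invalidate stale data
} MyAVPacketList;

typedef struct PacketQueue {
    MyAVPacketList *first_pkt, *last_pkt;
    int nb_packets;
    int size;                   // total bytes, used for buffering decisions
    int64_t duration;
    int abort_request;
    int serial;
    SDL_mutex *mutex;
    SDL_cond *cond;
} PacketQueue;

/* Decoded frames live in a fixed-size ring buffer. */
typedef struct FrameQueue {
    Frame queue[FRAME_QUEUE_SIZE];
    int rindex, windex;         // read/write positions
    int size;
    int max_size;
    int keep_last;
    int rindex_shown;
    SDL_mutex *mutex;
    SDL_cond *cond;
    PacketQueue *pktq;          // paired packet queue, for abort/serial checks
} FrameQueue;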
4.1 Data Reading
The entire reading process is handled inside ffmpeg: once data arrives from the network, ffmpeg demultiplexes it according to its container format, and what we receive is demultiplexed, still-compressed audio and video. The steps are:
a) Create the context struct. This is the top-level struct, representing the input context:

ic = avformat_alloc_context();

b) Set the interrupt callback, so that an error or an exit request can abort immediately:

ic->interrupt_callback.callback = decode_interrupt_cb;
ic->interrupt_callback.opaque = is;

c) Open the file. This mainly probes the protocol type and, for network sources, establishes the connection:

err = avformat_open_input(&ic, is->filename, is->iformat, &ffp->format_opts);

d) Probe the media, which yields the container format, the audio/video codec parameters, and so on:

err = avformat_find_stream_info(ic, opts);

e) Open the video and audio decoders. This opens the matching decoder and spawns the corresponding decoding thread:

stream_component_open(ffp, st_index[AVMEDIA_TYPE_AUDIO]);

f) Read media data; each call returns one demultiplexed, compressed packet:

ret = av_read_frame(ic, pkt);

g) Route audio and video packets into their respective queues:

if (pkt->stream_index == is->audio_stream && pkt_in_play_range) {
    packet_queue_put(&is->audioq, pkt);
} else if (pkt->stream_index == is->video_stream && pkt_in_play_range
           && !(is->video_st && (is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC))) {
    packet_queue_put(&is->videoq, pkt);
    ......
} else {
    av_packet_unref(pkt);
}

Repeating steps f) and g) keeps producing data for playback. The sketch below strings the steps together.
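Leaving out ffplay's interrupt callback, option dictionaries, and queue back-pressure, the read loop reduces to the following self-contained sketch built only on FFmpeg's public API (the function demux_file and its signature are mine, for illustration):

#include <libavformat/avformat.h>

int demux_file(const char *url)
{
    AVFormatContext *ic = avformat_alloc_context();              /* a) */
    AVPacket *pkt = av_packet_alloc();
    int err, audio_idx, video_idx;

    if (!ic || !pkt)
        return AVERROR(ENOMEM);
    /* b) ffplay installs ic->interrupt_callback here so a stop request
     *    can abort blocking network I/O. */
    if ((err = avformat_open_input(&ic, url, NULL, NULL)) < 0)   /* c) */
        goto end;
    if ((err = avformat_find_stream_info(ic, NULL)) < 0)         /* d) */
        goto end;
    audio_idx = av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    video_idx = av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    /* e) ffplay opens the decoders here via stream_component_open(). */
    while ((err = av_read_frame(ic, pkt)) >= 0) {                /* f) */
        /* g) ffplay routes pkt by stream_index into audioq/videoq;
         *    this sketch just drops it. */
        (void) audio_idx; (void) video_idx;
        av_packet_unref(pkt);
    }
end:
    av_packet_free(&pkt);
    avformat_close_input(&ic);
    return err;
}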
4.2 Audio and Video Decoding
For video, ijkplayer supports both software and hardware decoding. The preferred mode can be configured before playback starts and cannot be switched mid-playback. Hardware decoding uses VideoToolbox on iOS and MediaCodec on Android. Audio decoding in ijkplayer is software-only; hardware audio decoding is not yet supported.
4.2.1 Choosing the Video Decoder
In the function that opens the decoders:
static int stream_component_open(FFPlayer *ffp, int stream_index)
{
    ......
    codec = avcodec_find_decoder(avctx->codec_id);
    ......
    if ((ret = avcodec_open2(avctx, codec, &opts)) < 0) {
        goto fail;
    }
    ......
    case AVMEDIA_TYPE_VIDEO:
        ......
        decoder_init(&is->viddec, avctx, &is->videoq, is->continue_read_thread);
        ffp->node_vdec = ffpipeline_open_video_decoder(ffp->pipeline, ffp);
        if (!ffp->node_vdec)
            goto fail;
        if ((ret = decoder_start(&is->viddec, video_thread, ffp, "ff_video_dec")) < 0)
            goto out;
        ......
}
The ffmpeg decoder is opened first, and then ffpipeline_open_video_decoder creates the IJKFF_Pipenode. As described in chapter 3, the pipeline was created by ffpipeline_create_from_android when the IjkMediaPlayer object was built. ffpipeline_open_video_decoder simply dispatches through it:
IJKFF_Pipenode *ffpipeline_open_video_decoder(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    return pipeline->func_open_video_decoder(pipeline, ffp);
}
The func_open_video_decoder function pointer ends up pointing at func_open_video_decoder in ffpipeline_android.c, defined as follows:

static IJKFF_Pipenode *func_open_video_decoder(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    IJKFF_Pipeline_Opaque *opaque = pipeline->opaque;
    IJKFF_Pipenode *node = NULL;

    if (ffp->mediacodec_all_videos || ffp->mediacodec_avc || ffp->mediacodec_hevc || ffp->mediacodec_mpeg2)
        node = ffpipenode_create_video_decoder_from_android_mediacodec(ffp, pipeline, opaque->weak_vout);
    if (!node) {
        node = ffpipenode_create_video_decoder_from_ffplay(ffp);
    }
    return node;
}
The test ffp->mediacodec_all_videos || ffp->mediacodec_avc || ffp->mediacodec_hevc || ffp->mediacodec_mpeg2 decides whether hardware decoding is requested. If so, the MediaCodec decoder is tried first; if opening it fails, the player automatically falls back to ffmpeg software decoding. The values of ffp->mediacodec_all_videos, ffp->mediacodec_avc, ffp->mediacodec_hevc, and ffp->mediacodec_mpeg2 must be configured before playback starts, via:

ijkmp_set_option_int(_mediaPlayer, IJKMP_OPT_CATEGORY_PLAYER, "xxxxx", 1);
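Concretely, this is how MediaCodec could be requested for H.264 and HEVC before prepare. The option strings below are the ones found in ijkplayer's option table (ff_ffplay_options.h); verify them against the version in use:

// Hedged example: enable MediaCodec for AVC and HEVC before prepareAsync.
// The decoding mode cannot be switched once playback has started.
ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "mediacodec", 1);       // H.264 (AVC)
ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "mediacodec-hevc", 1);  // HEVC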
4.2.2 Audio and Video Decoding

The video decoding thread is video_thread, and the audio decoding thread is audio_thread.
Whether video or audio, the basic flow is the same: take a unit of data from the pre-decode buffer, decode it, and put the result into the corresponding post-decode buffer, as the figure below shows:
This article walks through the hardware video decoding path; the audio path can be studied by analogy.
The video decoding thread:
static int video_thread(void *arg)
{
    FFPlayer *ffp = (FFPlayer *) arg;
    int ret = 0;

    if (ffp->node_vdec) {
        ret = ffpipenode_run_sync(ffp->node_vdec);
    }
    return ret;
}
ffpipenode_run_sync invokes func_run_sync on the IJKFF_Pipenode object:

int ffpipenode_run_sync(IJKFF_Pipenode *node)
{
    return node->func_run_sync(node);
}
Which func_run_sync this is depends on the software/hardware decoding choice made before playback. Assuming hardware decoding, the function pointer ends up at func_run_sync in ffpipenode_android_mediacodec_vdec.c, defined as follows:
static int func_run_sync(IJKFF_Pipenode *node)
{
    .......
    opaque->enqueue_thread = SDL_CreateThreadEx(&opaque->_enqueue_thread, enqueue_thread_func, node, "amediacodec_input_thread");
    if (!opaque->enqueue_thread) {
        ALOGE("%s: SDL_CreateThreadEx failed\n", __func__);
        ret = -1;
        goto fail;
    }

    while (!q->abort_request) {
        int64_t timeUs = opaque->acodec_first_dequeue_output_request ? 0 : AMC_OUTPUT_TIMEOUT_US;
        got_frame = 0;
        ret = drain_output_buffer(env, node, timeUs, &dequeue_count, frame, &got_frame);
        .......
        if (got_frame) {
            duration = (frame_rate.num && frame_rate.den ? av_q2d((AVRational){frame_rate.den, frame_rate.num}) : 0);
            pts = (frame->pts == AV_NOPTS_VALUE) ? NAN : frame->pts * av_q2d(tb);
            ret = ffp_queue_picture(ffp, frame, pts, duration, av_frame_get_pkt_pos(frame), is->viddec.pkt_serial);
            ......
        }
    }
}
a) The function first starts an input thread whose entry function is enqueue_thread_func, defined as follows:

static int enqueue_thread_func(void *arg)
{
    ......
    while (!q->abort_request) {
        ret = feed_input_buffer(env, node, AMC_INPUT_TIMEOUT_US, &dequeue_count);
        if (ret != 0) {
            goto fail;
        }
    }
    ......
}
In this loop, feed_input_buffer keeps fetching packets via ffp_packet_queue_get_or_buffering and feeds them to the hardware decoder.
b) After creating the input thread, func_run_sync enters its own while loop and calls drain_output_buffer to collect hardware-decoded output; the last parameter flags whether a complete frame was obtained.
c) When got_frame is true, the frame is pushed into the pictq queue via ffp_queue_picture. In terms of the raw NDK API, the feed/drain pair follows the sketch below.
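A sketch of the classic MediaCodec feed/drain pattern, written against the NDK's AMediaCodec API. This is illustrative only: ijkplayer actually goes through its own SDL_AMediaCodec wrapper, and the helper names feed_once and drain_once are mine:

#include <media/NdkMediaCodec.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

// Feed one input buffer (what feed_input_buffer amounts to).
static void feed_once(AMediaCodec *codec, const uint8_t *data, size_t size, int64_t ptsUs)
{
    ssize_t idx = AMediaCodec_dequeueInputBuffer(codec, 10000 /* timeout, us */);
    if (idx >= 0) {
        size_t cap = 0;
        uint8_t *buf = AMediaCodec_getInputBuffer(codec, (size_t) idx, &cap);
        size_t n = size < cap ? size : cap;
        memcpy(buf, data, n);
        AMediaCodec_queueInputBuffer(codec, (size_t) idx, 0, n, ptsUs, 0);
    }
}

// Drain one output buffer (what drain_output_buffer amounts to); returns 1
// when a frame was dequeued, mirroring got_frame above.
static int drain_once(AMediaCodec *codec)
{
    AMediaCodecBufferInfo info;
    ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
    if (idx >= 0) {
        // true = hand the buffer straight to the bound Surface for display
        AMediaCodec_releaseOutputBuffer(codec, (size_t) idx, true);
        return 1;
    }
    return 0;
}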
4.3 Audio/Video Rendering and Synchronization
4.3.1 Audio Output
On Android, ijkplayer outputs audio through OpenSL ES or AudioTrack; on iOS it uses AudioQueue.
The audio output node is created in ffp_prepare_async_l:

ffp->aout = ffpipeline_open_audio_output(ffp->pipeline, ffp);

ffpipeline_open_audio_output actually calls through the IJKFF_Pipeline object's func_open_audio_output function pointer, which was assigned during initialization in ijkmp_android_create and ultimately points at func_open_audio_output in ffpipeline_android.c:
static SDL_Aout *func_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    SDL_Aout *aout = NULL;
    if (ffp->opensles) {
        aout = SDL_AoutAndroid_CreateForOpenSLES();
    } else {
        aout = SDL_AoutAndroid_CreateForAudioTrack();
    }
    if (aout)
        SDL_AoutSetStereoVolume(aout, pipeline->opaque->left_volume, pipeline->opaque->right_volume);
    return aout;
}
This function checks ffp->opensles to decide whether to play audio through OpenSL ES; the analysis below follows the OpenSL ES path.
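The OpenSL ES path has to be requested before playback starts; AudioTrack is the default. A hedged example (option name as found in ijkplayer's option table; verify against the version in use):

// Choose OpenSL ES output; without this, ffp->opensles stays 0 and
// AudioTrack is used instead.
ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "opensles", 1);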
SDL_AoutAndroid_CreateForOpenSLES is defined as follows; its main job is building the SDL_Aout object:
SDL_Aout *SDL_AoutAndroid_CreateForOpenSLES()
{
    SDLTRACE("%s\n", __func__);
    SDL_Aout *aout = SDL_Aout_CreateInternal(sizeof(SDL_Aout_Opaque));
    if (!aout)
        return NULL;

    SDL_Aout_Opaque *opaque = aout->opaque;
    opaque->wakeup_cond = SDL_CreateCond();
    opaque->wakeup_mutex = SDL_CreateMutex();

    int ret = 0;

    SLObjectItf slObject = NULL;
    ret = slCreateEngine(&slObject, 0, NULL, 0, NULL, NULL);
    CHECK_OPENSL_ERROR(ret, "%s: slCreateEngine() failed", __func__);
    opaque->slObject = slObject;

    ret = (*slObject)->Realize(slObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->Realize() failed", __func__);

    SLEngineItf slEngine = NULL;
    ret = (*slObject)->GetInterface(slObject, SL_IID_ENGINE, &slEngine);
    CHECK_OPENSL_ERROR(ret, "%s: slObject->GetInterface() failed", __func__);
    opaque->slEngine = slEngine;

    SLObjectItf slOutputMixObject = NULL;
    const SLInterfaceID ids1[] = {SL_IID_VOLUME};
    const SLboolean req1[] = {SL_BOOLEAN_FALSE};
    ret = (*slEngine)->CreateOutputMix(slEngine, &slOutputMixObject, 1, ids1, req1);
    CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateOutputMix() failed", __func__);
    opaque->slOutputMixObject = slOutputMixObject;

    ret = (*slOutputMixObject)->Realize(slOutputMixObject, SL_BOOLEAN_FALSE);
    CHECK_OPENSL_ERROR(ret, "%s: slOutputMixObject->Realize() failed", __func__);

    aout->free_l = aout_free_l;
    aout->opaque_class = &g_opensles_class;
    aout->open_audio = aout_open_audio;
    aout->pause_audio = aout_pause_audio;
    aout->flush_audio = aout_flush_audio;
    aout->close_audio = aout_close_audio;
    aout->set_volume = aout_set_volume;
    aout->func_get_latency_seconds = aout_get_latency_seconds;

    return aout;

fail:
    aout_free_l(aout);
    return NULL;
}
Back in ffplay.c: if the file to be played contains audio, then stream_component_open, while opening the decoder, also calls audio_open to open the audio output device:

static int audio_open(FFPlayer *opaque, int64_t wanted_channel_layout, int wanted_nb_channels, int wanted_sample_rate, struct AudioParams *audio_hw_params)
{
    FFPlayer *ffp = opaque;
    VideoState *is = ffp->is;
    SDL_AudioSpec wanted_spec, spec;
    ......
    wanted_nb_channels = av_get_channel_layout_nb_channels(wanted_channel_layout);
    wanted_spec.channels = wanted_nb_channels;
    wanted_spec.freq = wanted_sample_rate;
    wanted_spec.format = AUDIO_S16SYS;
    wanted_spec.silence = 0;
    wanted_spec.samples = FFMAX(SDL_AUDIO_MIN_BUFFER_SIZE, 2 << av_log2(wanted_spec.freq / SDL_AoutGetAudioPerSecondCallBacks(ffp->aout)));
    wanted_spec.callback = sdl_audio_callback;
    wanted_spec.userdata = opaque;
    while (SDL_AoutOpenAudio(ffp->aout, &wanted_spec, &spec) < 0) {
        .....
    }
    ......
    return spec.size;
}
audio_open fills an SDL_AudioSpec with the desired output parameters and hands it to the audio output through:

int SDL_AoutOpenAudio(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained)
{
    if (aout && desired && aout->open_audio)
        return aout->open_audio(aout, desired, obtained);
    return -1;
}

On Android, with the configuration above, the audio output is OpenSL ES.
While running, the OpenSL ES module repeatedly invokes a callback to pull PCM data for playback.
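In raw OpenSL ES terms, that pull loop looks like the sketch below. This is illustrative: ijkplayer's real callback in ijksdl_aout_android_opensles.c signals an audio thread, which produces the next chunk via sdl_audio_callback rather than synthesizing data inline:

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <string.h>

#define PCM_CHUNK_BYTES 4096
static unsigned char pcm_chunk[PCM_CHUNK_BYTES];

// The engine invokes this each time it finishes consuming a buffer; the app
// refills the chunk and re-enqueues it, which is how playback "pulls" PCM.
static void bq_player_callback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    memset(pcm_chunk, 0, sizeof(pcm_chunk));   // placeholder: silence
    (*bq)->Enqueue(bq, pcm_chunk, sizeof(pcm_chunk));
}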
4.3.2 Video Rendering

This section covers only the Android path that renders decoded YUV images with OpenGL. The rendering thread is video_refresh_thread, and the function that finally draws the image is video_image_display2, defined as follows:
static void video_image_display2(FFPlayer *ffp)
{
    VideoState *is = ffp->is;
    Frame *vp;
    Frame *sp = NULL;

    vp = frame_queue_peek_last(&is->pictq);
    ......
    SDL_VoutDisplayYUVOverlay(ffp->vout, vp->bmp);
    ......
}
As the implementation shows, the thread's main work is to:
- call frame_queue_peek_last to read the frame that should currently be displayed from pictq
- call SDL_VoutDisplayYUVOverlay to draw it
int SDL_VoutDisplayYUVOverlay(SDL_Vout *vout, SDL_VoutOverlay *overlay)
{
    if (vout && overlay && vout->display_overlay)
        return vout->display_overlay(vout, overlay);
    return -1;
}
The display_overlay function pointer was covered in the initialization flow: it is assigned to vout_display_overlay in SDL_VoutAndroid_CreateForANativeWindow(), and that function draws the image with OpenGL.
For a player, audio/video synchronization is both a key point and a hard one; how well it works directly determines the player's quality. The usual solution is to pick a reference clock, read the timestamp on each audio/video frame at playback time, and schedule its presentation against the current reference-clock time, as the figure below shows:
If a frame's presentation time is ahead of the reference clock, it is not played right away; playback waits until the reference clock reaches the frame's timestamp. If the timestamp is behind the reference clock, the frame must be played "as soon as possible" or dropped, so that playback catches up with the reference clock.
There are several candidates for the reference clock:
- the video timestamps
- the audio timestamps
- an external clock
Given that people are more sensitive to audio glitches than to video ones, the audio clock is preferred as the master whenever an audio stream is present.
ijkplayer also defaults to the audio clock as the reference. Synchronization happens mostly in the video rendering thread, video_refresh_thread:
static int video_refresh_thread(void *arg)
{
    FFPlayer *ffp = arg;
    VideoState *is = ffp->is;
    double remaining_time = 0.0;

    while (!is->abort_request) {
        if (remaining_time > 0.0)
            av_usleep((int) (int64_t) (remaining_time * 1000000.0));
        remaining_time = REFRESH_RATE;
        if (is->show_mode != SHOW_MODE_NONE && (!is->paused || is->force_refresh))
            video_refresh(ffp, &remaining_time);
    }
    return 0;
}
As the implementation shows, the loop does two things per iteration:
- sleep and wait; remaining_time is computed inside video_refresh
- call video_refresh to refresh the video frame
So the heart of the synchronization lies in video_refresh; let's examine it closely:
lastvp = frame_queue_peek_last(&is->pictq);
vp = frame_queue_peek(&is->pictq);
......
/* compute nominal last_duration */
last_duration = vp_duration(is, lastvp, vp);
delay = compute_target_delay(ffp, last_duration, is);
lastvp is the previous frame and vp is the current frame. last_duration is the previous frame's display duration, derived from the pts of the two frames; compute_target_delay then works out how long to wait before displaying the current frame:

static double compute_target_delay(FFPlayer *ffp, double delay, VideoState *is)
{
    double sync_threshold, diff = 0;

    /* update delay to follow master synchronisation source */
    if (get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER) {
        /* if video is slave, we try to correct big delays by
           duplicating or deleting a frame */
        diff = get_clock(&is->vidclk) - get_master_clock(is);

        /* skip or repeat frame. We take into account the
           delay to compute the threshold. I still don't know
           if it is the best guess */
        sync_threshold = FFMAX(AV_SYNC_THRESHOLD_MIN, FFMIN(AV_SYNC_THRESHOLD_MAX, delay));
        /* -- by bbcallen: replace is->max_frame_duration with AV_NOSYNC_THRESHOLD */
        if (!isnan(diff) && fabs(diff) < AV_NOSYNC_THRESHOLD) {
            if (diff <= -sync_threshold)
                delay = FFMAX(0, delay + diff);
            else if (diff >= sync_threshold && delay > AV_SYNC_FRAMEDUP_THRESHOLD)
                delay = delay + diff;
            else if (diff >= sync_threshold)
                delay = 2 * delay;
        }
    }
    .....
    return delay;
}
In compute_target_delay, if the master clock source is not video, the difference between the current video clock and the master clock is computed:

diff = get_clock(&is->vidclk) - get_master_clock(is)

A note on this computation (adapted from another write-up): the final video/audio difference expands to

diff = (pre_video_pts - pre_audio_pts) + (video_system_time_diff - audio_system_time_diff)

that is, the difference between the last pts stored in each clock, plus the difference between the times elapsed since each clock was last updated.
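Both terms of the diff come from get_clock, which extrapolates a clock's last stored pts to the present instant; this is exactly what the expansion above expresses. ffplay's implementation, for reference:

static double get_clock(Clock *c)
{
    if (*c->queue_serial != c->serial)
        return NAN;
    if (c->paused) {
        return c->pts;
    } else {
        double time = av_gettime_relative() / 1000000.0;
        // pts_drift == pts - last_updated, so for speed == 1 this is
        // pts + (time elapsed since the clock was last updated)
        return c->pts_drift + time - (time - c->last_updated) * (1.0 - c->speed);
    }
}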
- If the video frame lags the master clock, the wait before the next frame is reduced.
- If the video frame is ahead and the previous frame's display duration exceeds the frame-duplication threshold, the next frame's delay becomes the lead time added to the previous frame's display duration (delay + diff).
- If the video frame is ahead but the previous frame's display duration is below that threshold, the delay is simply doubled.
Back in video_refresh:
time = av_gettime_relative() / 1000000.0;
if (isnan(is->frame_timer) || time < is->frame_timer)
    is->frame_timer = time;
if (time < is->frame_timer + delay) {
    *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
    goto display;
}
frame_timer is effectively the previous frame's presentation time, so frame_timer + delay is the current frame's presentation time. If the system time has not reached it yet, the code jumps straight to display; since is->force_refresh is 0 at that point, the current frame is not drawn, and video_refresh_thread goes into its next iteration and sleeps.

is->frame_timer += delay;
if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
    is->frame_timer = time;

SDL_LockMutex(is->pictq.mutex);
if (!isnan(vp->pts))
    update_video_pts(is, vp->pts, vp->pos, vp->serial);
SDL_UnlockMutex(is->pictq.mutex);

if (frame_queue_nb_remaining(&is->pictq) > 1) {
    Frame *nextvp = frame_queue_peek_next(&is->pictq);
    duration = vp_duration(is, vp, nextvp);
    if (!is->step && (ffp->framedrop > 0 || (ffp->framedrop && get_master_sync_type(is) != AV_SYNC_VIDEO_MASTER)) && time > is->frame_timer + duration) {
        frame_queue_next(&is->pictq);
        goto retry;
    }
}
If the current frame's presentation time has already passed, and it trails the system time by more than AV_SYNC_THRESHOLD_MAX, frame_timer is reset to the system time so that subsequent frames are re-timed from there. Then, if more frames are buffered and the current time is already past the current frame's display window, the current frame is dropped and the next one is tried:

{
    frame_queue_next(&is->pictq);
    is->force_refresh = 1;
    SDL_LockMutex(ffp->is->play_mutex);
    ......

display:
    /* display picture */
    if (!ffp->display_disable && is->force_refresh && is->show_mode == SHOW_MODE_VIDEO && is->pictq.rindex_shown)
        video_display2(ffp);
Otherwise the normal display path runs and video_display2 is called to render the current frame.

5. Event Handling
During playback, certain completions and state changes, such as prepare finishing or rendering starting, must be reported to the outside as events so that the upper layer can react with its own business logic.
ijkplayer supports a fairly large set of events, defined in ijkplayer/ijkmedia/ijkplayer/ff_ffmsg.h:
#define FFP_MSG_FLUSH 0
#define FFP_MSG_ERROR 100 /* arg1 = error */
#define FFP_MSG_PREPARED 200
#define FFP_MSG_COMPLETED 300
#define FFP_MSG_VIDEO_SIZE_CHANGED 400 /* arg1 = width, arg2 = height */
#define FFP_MSG_SAR_CHANGED 401 /* arg1 = sar.num, arg2 = sar.den */
#define FFP_MSG_VIDEO_RENDERING_START 402
#define FFP_MSG_AUDIO_RENDERING_START 403
#define FFP_MSG_VIDEO_ROTATION_CHANGED 404 /* arg1 = degree */
#define FFP_MSG_BUFFERING_START 500
#define FFP_MSG_BUFFERING_END 501
#define FFP_MSG_BUFFERING_UPDATE 502 /* arg1 = buffering head position in time, arg2 = minimum percent in time or bytes */
#define FFP_MSG_BUFFERING_BYTES_UPDATE 503 /* arg1 = cached data in bytes, arg2 = high water mark */
#define FFP_MSG_BUFFERING_TIME_UPDATE 504 /* arg1 = cached duration in milliseconds, arg2 = high water mark */
#define FFP_MSG_SEEK_COMPLETE 600 /* arg1 = seek position, arg2 = error */
#define FFP_MSG_PLAYBACK_STATE_CHANGED 700
#define FFP_MSG_TIMED_TEXT 800
#define FFP_MSG_VIDEO_DECODER_OPEN 10001
5.1 Initializing Message Reporting

In IjkMediaPlayer's initialization:
static void
IjkMediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    MPTRACE("%s\n", __func__);
    IjkMediaPlayer *mp = ijkmp_android_create(message_loop);
    ......
}
When the player is created, the address of message_loop is passed into ijkmp_android_create. Following the code, that address ultimately lands in the msg_loop function pointer of IjkMediaPlayer:

IjkMediaPlayer *ijkmp_create(int (*msg_loop)(void *))
{
    ......
    mp->msg_loop = msg_loop;
    ......
}
When playback starts, a message thread is spawned:

static int ijkmp_prepare_async_l(IjkMediaPlayer *mp)
{
    ......
    mp->msg_thread = SDL_CreateThreadEx(&mp->_msg_thread, ijkmp_msg_loop, mp, "ff_msg_loop");
    ......
}
What ijkmp_msg_loop invokes is precisely mp->msg_loop.
5.2 Message Reporting Flow
When the player core reports an event, it simply puts the message into a message queue; a separate thread keeps taking messages off the queue and delivering them upward. The flow is roughly as shown in the figure below:
Take the prepared event as an example to trace the reporting path. When ffplay.c reports that PREPARED has completed, it calls:

ffp_notify_msg1(ffp, FFP_MSG_PREPARED);
ffp_notify_msg1 is implemented as:

inline static void ffp_notify_msg1(FFPlayer *ffp, int what) {
    msg_queue_put_simple3(&ffp->msg_queue, what, 0, 0);
}
msg_queue_put_simple3 wraps the event and its arguments in an AVMessage object:

inline static void msg_queue_put_simple3(MessageQueue *q, int what, int arg1, int arg2)
{
    AVMessage msg;
    msg_init_msg(&msg);
    msg.what = what;
    msg.arg1 = arg1;
    msg.arg2 = arg2;
    msg_queue_put(q, &msg);
}
which finally calls msg_queue_put_private to append the message to the queue.
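The message and queue types themselves are small; abridged from ff_ffmsg_queue.h (treat the exact field list as version-dependent):

typedef struct AVMessage {
    int what;
    int arg1;
    int arg2;
    void *obj;                     // optional payload
    void (*free_l)(void *obj);     // payload destructor
    struct AVMessage *next;
} AVMessage;

typedef struct MessageQueue {
    SDL_mutex *mutex;
    SDL_cond *cond;
    AVMessage *first_msg, *last_msg;
    int nb_messages;
    int abort_request;
    AVMessage *recycle_msg;        // freelist so puts avoid a malloc per message
} MessageQueue;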
As noted in section 5.1, the message_loop function address is passed in when the player is created and ends up running on a dedicated thread. Its core, message_loop_n, is implemented as follows:

static void message_loop_n(JNIEnv *env, IjkMediaPlayer *mp)
{
    jobject weak_thiz = (jobject) ijkmp_get_weak_thiz(mp);
    JNI_CHECK_GOTO(weak_thiz, env, NULL, "mpjni: message_loop_n: null weak_thiz", LABEL_RETURN);

    while (1) {
        AVMessage msg;

        int retval = ijkmp_get_msg(mp, &msg, 1);
        if (retval < 0)
            break;

        // block-get should never return 0
        assert(retval > 0);

        switch (msg.what) {
        case FFP_MSG_FLUSH:
            MPTRACE("FFP_MSG_FLUSH:\n");
            post_event(env, weak_thiz, MEDIA_NOP, 0, 0);
            break;
        case FFP_MSG_ERROR:
            MPTRACE("FFP_MSG_ERROR: %d\n", msg.arg1);
            post_event(env, weak_thiz, MEDIA_ERROR, MEDIA_ERROR_IJK_PLAYER, msg.arg1);
            break;
        case FFP_MSG_PREPARED:
            MPTRACE("FFP_MSG_PREPARED:\n");
            post_event(env, weak_thiz, MEDIA_PREPARED, 0, 0);
            break;
        ......
        }
    }
}
message_loop keeps calling ijkmp_get_msg to fetch messages and forwards each one to the Java layer via post_event, implemented as:

inline static void post_event(JNIEnv *env, jobject weak_this, int what, int arg1, int arg2)
{
    // MPTRACE("post_event(%p, %p, %d, %d, %d)", (void*)env, (void*) weak_this, what, arg1, arg2);
    J4AC_IjkMediaPlayer__postEventFromNative(env, weak_this, what, arg1, arg2, NULL);
    // MPTRACE("post_event()=void");
}
post_event calls J4AC_IjkMediaPlayer__postEventFromNative (the C-to-Java glue generated under the j4a folder); calling it is effectively calling the postEventFromNative method of IjkMediaPlayer.java:
@CalledByNative
private static void postEventFromNative(Object weakThiz, int what,
        int arg1, int arg2, Object obj) {
    if (weakThiz == null)
        return;

    @SuppressWarnings("rawtypes")
    IjkMediaPlayer mp = (IjkMediaPlayer) ((WeakReference) weakThiz).get();
    if (mp == null) {
        return;
    }

    if (what == MEDIA_INFO && arg1 == MEDIA_INFO_STARTED_AS_NEXT) {
        // this acquires the wakelock if needed, and sets the client side
        // state
        mp.start();
    }
    if (mp.mEventHandler != null) {
        Message m = mp.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        mp.mEventHandler.sendMessage(m);
    }
}
This function wraps the message details in a Message and posts it to the EventHandler's queue. EventHandler.handleMessage is responsible for processing it:

public void handleMessage(Message msg) {
    IjkMediaPlayer player = mWeakPlayer.get();
    if (player == null || player.mNativeMediaPlayer == 0) {
        DebugLog.w(TAG,
                "IjkMediaPlayer went away with unhandled events");
        return;
    }
    switch (msg.what) {
    case MEDIA_PREPARED:
        player.notifyOnPrepared();
        return;
    ......
}
For the MEDIA_PREPARED message, IjkMediaPlayer's notifyOnPrepared() is invoked; it is defined in the base class AbstractMediaPlayer.java:
protected final void notifyOnPrepared() {
    if (mOnPreparedListener != null)
        mOnPreparedListener.onPrepared(this);
}
notifyOnPrepared() invokes onPrepared on the member mOnPreparedListener, which is assigned through its setter:
public final void setOnPreparedListener(OnPreparedListener listener) {
    mOnPreparedListener = listener;
}
In the examples project, openVideo in IjkVideoView.java calls this setter to assign mOnPreparedListener:
private void openVideo() {
    if (mUri == null || mSurfaceHolder == null) {
        // not ready for playback just yet, will try again later
        return;
    }
    // we shouldn't clear the target state, because somebody might have
    // called start() previously
    release(false);

    AudioManager am = (AudioManager) mAppContext.getSystemService(Context.AUDIO_SERVICE);
    am.requestAudioFocus(null, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

    try {
        mMediaPlayer = createPlayer(mSettings.getPlayer());

        // TODO: create SubtitleController in MediaPlayer, but we need
        // a context for the subtitle renderers
        final Context context = getContext();
        // REMOVED: SubtitleController
        // REMOVED: mAudioSession
        mMediaPlayer.setOnPreparedListener(mPreparedListener);
        mMediaPlayer.setOnVideoSizeChangedListener(mSizeChangedListener);
        ......
}
Parts 3, 4, and 5 of this article draw mainly on the following article:
https://www.jianshu.com/p/daf0a61cc1e0