Android 7.0 Camera Architecture Source Code Analysis

Before Android 7.0, CameraService was registered inside the mediaserver process. Let's first look at the Android 6.0 code:

    //path: frameworks\av\media\mediaserver\main_mediaserver.cpp
    int main()
    {
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        ResourceManagerService::instantiate();
        //Initialize the camera service
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        SoundTriggerHwService::instantiate();
        RadioService::instantiate();
        registerExtensions();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
  • Now look at main_mediaserver.cpp in Android 7.0: CameraService::instantiate() is gone, which means that from Android 7.0 onward CameraService is no longer registered in main_mediaserver.cpp.
//No CameraService::instantiate() here (a few other services are gone as well); we only care about CameraService
int main(int argc __unused, char **argv __unused)
{
    signal(SIGPIPE, SIG_IGN);
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm(defaultServiceManager());
    ALOGI("ServiceManager: %p", sm.get());
    InitializeIcuOrDie();
    MediaPlayerService::instantiate();
    ResourceManagerService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
  • Looking at the framework-layer Camera code (frameworks\av\camera), there is a new cameraserver folder. Inside it, main_cameraserver.cpp is where CameraService::instantiate() has moved to.
int main(int argc __unused, char** argv __unused)
{
    signal(SIGPIPE, SIG_IGN);

    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    //Initialize the CameraService service
    CameraService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
  • From this we can guess that cameraserver runs as a standalone process. Let's look at the other file in the cameraserver folder, cameraserver.rc:
#This tells init to start a process named cameraserver, with the binary at /system/bin/cameraserver; it is indeed a standalone process
service cameraserver /system/bin/cameraserver
    #class specifies the category; processes in the same class (here main) are started and stopped together
    class main
    #user name and groups
    user cameraserver
    group audio camera input drmrpc
    ioprio rt 4
    writepid /dev/cpuset/camera-daemon/tasks /dev/stune/top-app/tasks
  • cameraserver.rc is packaged to its install location by the Android.mk file:
LOCAL_PATH:= $(call my-dir)

include $(CLEAR_VARS)

#Source files
LOCAL_SRC_FILES:= \
    main_cameraserver.cpp

LOCAL_SHARED_LIBRARIES := \
    libcameraservice \
    libcutils \
    libutils \
    libbinder \
    libcamera_client

#Module name
LOCAL_MODULE:= cameraserver
LOCAL_32_BIT_ONLY := true

LOCAL_CFLAGS += -Wall -Wextra -Werror -Wno-unused-parameter

#LOCAL_INIT_RC installs cameraserver.rc into the /system/etc/init/ directory; scripts in that directory are picked up and run by the init process.
LOCAL_INIT_RC := cameraserver.rc

include $(BUILD_EXECUTABLE)
  • So when does cameraserver.rc get executed? To answer that we need a rough understanding of the Android boot process, which consists of two major parts: Linux kernel startup and Android framework startup.

1. Linux kernel startup

  • BootLoader stage

Boot begins by loading the BootLoader; after it finishes, the kernel runs and the kthreadd kernel process comes up, which is the parent of all kernel processes.

  • Loading the Linux kernel

The kernel then initializes drivers, mounts the root file system, and so on, and finally starts the first user-space process, init, which is the parent of all user-space processes. This brings us to the Android framework startup stage.

2. Android framework startup

After init starts, it parses the init.rc script (system\core\rootdir\init.rc). When it executes the mount_all command to mount the partitions, it also loads all of the rc scripts under /{system,vendor,odm}/etc/init; this is what starts the cameraserver process, along with zygote (the first Java-layer process and the parent of all Java-layer processes), ServiceManager, mediaserver (the multimedia service process), surfaceflinger (the display composition process), and so on. zygote then forks system_server, and all of the framework services (ActivityManagerService, WindowManagerService, etc.) are started by system_server; we won't go into detail on that here.

So why did the camera service run inside the mediaserver process before Android 7.0, and why was it split out into a standalone cameraserver process starting with Android 7.0? The reason is robustness and security: mediaserver hosts many other services, such as AudioFlinger and MediaPlayerService, and if any one of them crashes the whole mediaserver process restarts. If the camera happened to be in use at that moment it would die along with it, which makes for a poor user experience.

Now that we know how the cameraserver process gets started, let's analyze its startup. The entry point is the main function in frameworks\av\camera\cameraserver\main_cameraserver.cpp:

int main(int argc __unused, char** argv __unused)
{
    signal(SIGPIPE, SIG_IGN);
    //Obtain a ProcessState to talk to the Binder driver
    sp<ProcessState> proc(ProcessState::self());
    //Obtain the ServiceManager so we can register this service
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    CameraService::instantiate();
    //Thread pool management
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
  • We will mainly focus on CameraService::instantiate(); the remaining lines belong to the Binder mechanism, which I assume you are already familiar with. If not, brush up on Binder first, otherwise the rest will be hard to follow.

CameraService inherits from the BinderService and BnCameraService classes. CameraService::instantiate() calls a method of its parent class BinderService.

class CameraService :
    public BinderService<CameraService>,
    public ::android::hardware::BnCameraService,
    public IBinder::DeathRecipient,
    public camera_module_callbacks_t
{
......
}

//Template class
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        publish(allowIsolated);
        joinThreadPool();
    }

    static void instantiate() { publish(); }
......
}
  • BinderService is a class template; when CameraService::instantiate() is used for initialization, SERVICE is CameraService. After substituting the template parameter, CameraService::instantiate() effectively becomes:
static void instantiate() { publish(); }

static status_t publish() {
    sp<IServiceManager> sm(defaultServiceManager());
    //getServiceName() returns the string "media.camera"
    return sm->addService(
         String16(CameraService::getServiceName()), new CameraService(), false);
}
  • We know defaultServiceManager returns a BpServiceManager; let's look at its addService function:
    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        //Write the interface name, here "android.os.IServiceManager"
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        //Write the service name
        data.writeString16(name);
        //Write the service instance
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        //remote() is actually BpBinder(0)
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

The service here is CameraService; let's look at its onFirstRef() function:

//The original code is fairly long; only the more important parts are kept here
void CameraService::onFirstRef()
{
    BnCameraService::onFirstRef();

    //camera_module_t and CAMERA_HARDWARE_MODULE_ID are defined in hardware\libhardware\include\hardware\camera_common.h
    camera_module_t *rawModule;
    //Note the cast here: &rawModule (a camera_module_t**) is passed as (const hw_module_t **), i.e. the returned hw_module_t is treated as a camera_module_t; why this works is explained below
    int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
            (const hw_module_t **)&rawModule);

    //Create the CameraModule object and initialize it
    mModule = new CameraModule(rawModule);
    err = mModule->init();

    //Get the number of cameras
    mNumberOfCameras = mModule->getNumberOfCameras();
    mNumberOfNormalCameras = mNumberOfCameras;

    int latestStrangeCameraId = INT_MAX;
    for (int i = 0; i < mNumberOfCameras; i++) {
        String8 cameraId = String8::format("%d", i);
        //Fetch each camera's info and initialize its state
    }

    //Because CameraService inherits camera_module_callbacks_t (defined in camera_common.h), the callbacks here mainly listen for camera_device_status_change and torch_mode_status_change
    if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_1) {
        mModule->setCallbacks(this);
    }
    //Connect to the CameraServiceProxy service, i.e. the "media.camera.proxy" service, which is registered with ServiceManager by SystemServer
    CameraService::pingCameraServiceProxy();
}
  • camera_module_t (hardware\libhardware\include\hardware\camera_common.h) is declared as follows:
typedef struct camera_module {
    //Note that common must be the first member of camera_module, so a camera_module_t address can be cast directly to a hw_module_t
    hw_module_t common;
    int (*get_number_of_cameras)(void);
    ......
} camera_module_t;
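
Why this first-member layout matters can be shown with a tiny, self-contained sketch (illustrative stand-in types, not the real AOSP headers): because common sits at offset 0, a pointer to the full module and a pointer to its common field are the same address, which is exactly what makes the (const hw_module_t **)&rawModule cast in onFirstRef() work.

//Illustrative sketch only: simplified stand-ins for hw_module_t / camera_module_t
#include <cstdio>

struct my_hw_module_t {              //stands in for hw_module_t
    const char *id;
};

struct my_camera_module_t {          //stands in for camera_module_t
    my_hw_module_t common;           //must be the first member
    int (*get_number_of_cameras)(void);
};

static int fake_get_number_of_cameras() { return 2; }

//The loader (hw_get_module/dlsym) only hands back the generic header type...
static my_hw_module_t *fake_hw_get_module() {
    static my_camera_module_t module = {
        { "camera" },
        fake_get_number_of_cameras,
    };
    return &module.common;           //same address as &module
}

int main() {
    my_hw_module_t *raw = fake_hw_get_module();
    //...but the caller can recover the full camera_module_t, just like
    //(const hw_module_t **)&rawModule in CameraService::onFirstRef()
    my_camera_module_t *cam = reinterpret_cast<my_camera_module_t *>(raw);
    std::printf("number of cameras: %d\n", cam->get_number_of_cameras());
    return 0;
}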
  • The hw_get_module() function is defined in hardware\libhardware\hardware.c:
int hw_get_module(const char *id, const struct hw_module_t **module)
{
    return hw_get_module_by_class(id, NULL, module);
}

int hw_get_module_by_class(const char *class_id, const char *inst,
                           const struct hw_module_t **module)
{
    ......
    //First look up the shared library path via the ro.hardware.<class_id>.<inst> property; if it can be found, jump straight to the found label
    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
    if (property_get(prop_name, prop, NULL) > 0) {
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    //Otherwise search all the variant property keys for the module's shared library path
    for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    return -ENOENT;

found:
    //Load the shared library at that path and hand the hardware module struct's address back through module
    return load(class_id, path, module);
}
static int load(const char *id,
        const char *path,
        const struct hw_module_t **pHmi)
{
    int status = -EINVAL;
    void *handle = NULL;
    struct hw_module_t *hmi = NULL;

    //Open the .so file
    handle = dlopen(path, RTLD_NOW);

    //Get the address of the hal_module_info structure (the HMI symbol)
    const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
    hmi = (struct hw_module_t *)dlsym(handle, sym);

    ......
    //(error checks elided; status is set to 0 on success) hand the module back to the caller
    *pHmi = hmi;

    return status;
}

//hardware.h
/**
 * Name of the hal_module_info
 */
#define HAL_MODULE_INFO_SYM         HMI

/**
 * Name of the hal_module_info as a string
 */
#define HAL_MODULE_INFO_SYM_AS_STR  "HMI"
  • From the code above, dlsym looks up the address of HMI, and HMI is just the macro expansion of HAL_MODULE_INFO_SYM, so ultimately it is the address of HAL_MODULE_INFO_SYM that gets looked up.

Searching the Android 7.0 source, HAL_MODULE_INFO_SYM is implemented in hardware/qcom/camera/QCamera2/QCamera2Hal.cpp (if your 7.0 source tree doesn't include the HAL code, you can browse it online at http://androidxref.com/).

static hw_module_t camera_common = {
    .tag                    = HARDWARE_MODULE_TAG,
    .module_api_version     = CAMERA_MODULE_API_VERSION_2_4,
    .hal_api_version        = HARDWARE_HAL_API_VERSION,
    .id                     = CAMERA_HARDWARE_MODULE_ID,
    .name                   = "QCamera Module",
    .author                 = "Qualcomm Innovation Center Inc",
    .methods                = &qcamera::QCamera2Factory::mModuleMethods,
    .dso                    = NULL,
    .reserved               = {0}
};

camera_module_t HAL_MODULE_INFO_SYM = {
    .common                 = camera_common,
    .get_number_of_cameras  = qcamera::QCamera2Factory::get_number_of_cameras,
    .get_camera_info        = qcamera::QCamera2Factory::get_camera_info,
    .set_callbacks          = qcamera::QCamera2Factory::set_callbacks,
    .get_vendor_tag_ops     = qcamera::QCamera3VendorTags::get_vendor_tag_ops,
    .open_legacy            = qcamera::QCamera2Factory::open_legacy,
    .set_torch_mode         = qcamera::QCamera2Factory::set_torch_mode,
    .init                   = NULL,
    .reserved               = {0}
};
  • So in the end rawModule points to HAL_MODULE_INFO_SYM, and that is how CameraService gets connected to the camera HAL layer.

Back in the CameraService::onFirstRef() function:

    //Create the CameraModule object and initialize it
    mModule = new CameraModule(rawModule);
    err = mModule->init();

    //Get the number of cameras
    mNumberOfCameras = mModule->getNumberOfCameras();
    mNumberOfNormalCameras = mNumberOfCameras;
CameraModule::CameraModule(camera_module_t *module) {
    //Initialize mModule
    mModule = module;
}

int CameraModule::init() {
    //mModule->init points to HAL_MODULE_INFO_SYM's init, which is NULL here
    if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 &&
            mModule->init != NULL) {
        ATRACE_BEGIN("camera_module->init");
        res = mModule->init();
        ATRACE_END();
    }
    //Call getNumberOfCameras()
    mCameraInfoMap.setCapacity(getNumberOfCameras());
}

int CameraModule::getNumberOfCameras() {
    int numCameras;
    //Call HAL_MODULE_INFO_SYM's get_number_of_cameras() function
    numCameras = mModule->get_number_of_cameras();
    return numCameras;
}

//Now look at the HAL-layer code

int QCamera2Factory::get_number_of_cameras()
{
    int numCameras = 0;
    //If gQCamera2Factory is null, create one
    if (!gQCamera2Factory) {
        gQCamera2Factory = new QCamera2Factory();
        if (!gQCamera2Factory) {
            LOGE("Failed to allocate Camera2Factory object");
            return 0;
        }
    }

    if(gQCameraMuxer)
        numCameras = gQCameraMuxer->get_number_of_cameras();
    else
        //Get the number of cameras
        numCameras = gQCamera2Factory->getNumberOfCameras();
    return numCameras;
}

QCamera2Factory::QCamera2Factory()
{
    mHalDescriptors = NULL;
    mCallbacks = NULL;
    //get_num_of_cameras() lives in mm_camera_interface.c; from there we enter the Linux kernel, which we won't follow any further
    mNumOfCameras = get_num_of_cameras();
    mNumOfCameras_expose = get_num_of_cameras_to_expose();
    ......
}
  • In summary, onFirstRef looks up and loads the HAL module's shared library by ID, creates and initializes a CameraModule object, obtains the number of cameras, and then installs a callback interface into the HAL layer.

Back to the addService function; let's follow what it does in detail:

    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        //Write the interface name, here "android.os.IServiceManager"
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        //Write the service name
        data.writeString16(name);
        //Write the service instance
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        //remote() is actually BpBinder(0), i.e. handle 0 in the Binder driver, which refers to the ServiceManager proxy object
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        //Use the IPCThreadState object to send the add-service request to the Binder driver; note that mHandle is 0
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        //Package the data into a binder_transaction_data structure and write it into mOut
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } 
    return err;
}

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    //Package the data
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } 
    ......
    //Write it into the mOut variable
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
  • Up to this point the data has only been packaged, not yet sent. Next look at the waitForResponse function:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        //Talk to the Binder driver; this is the core part, explained below
        if ((err=talkWithDriver()) < NO_ERROR) break;
        //Read the command returned by the Binder driver
        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

//doReceive defaults to true when no argument is passed
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    //mProcess is the ProcessState object held by this IPCThreadState; its mDriverFD is the file descriptor of the Binder driver
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    //Package the data in mOut into a binder_write_read structure
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        //This is where the command and data are written to the Binder driver
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
#else
        err = INVALID_OPERATION;
#endif
    } while (err == -EINTR);
    ......
    return err;
}
//Paths:
linux/drivers/android/binder.c
linux/include/uapi/linux/android/binder.h
  • First, the binder_init function creates the /dev/binder node:
static int __init binder_init(void)
{
    ......
    while ((device_name = strsep(&device_names, ","))) {
        //Initialize the Binder device
        ret = init_binder_device(device_name);
    }
    return ret;
}

static int __init init_binder_device(const char *name)
{
    int ret;
    struct binder_device *binder_device;

    binder_device = kzalloc(sizeof(*binder_device), GFP_KERNEL);
    if (!binder_device)
        return -ENOMEM;

    //File operations structure
    binder_device->miscdev.fops = &binder_fops;
    binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
    //Device name, here "binder", so user space can operate on it via the /dev/binder node
    binder_device->miscdev.name = name;

    binder_device->context.binder_context_mgr_uid = INVALID_UID;
    binder_device->context.name = name;
    //Register the misc device with the kernel
    ret = misc_register(&binder_device->miscdev);
    if (ret < 0) {
        kfree(binder_device);
        return ret;
    }

    hlist_add_head(&binder_device->hlist, &binder_devices);

    return ret;
}

//The file operation functions are specified here
static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    //Since Linux kernel 2.6.36 the ioctl function pointer has been removed and replaced by unlocked_ioctl
    .unlocked_ioctl = binder_ioctl,
    .compat_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release,
};
  • Back to IPCThreadState::talkWithDriver:
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
    err = NO_ERROR;

//ioctl corresponds to the kernel's binder_ioctl function
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    ......
    //Look up the binder_thread in binder_proc; if the current thread is already in proc's thread list it is returned directly, otherwise a binder_thread is created and the current thread is added to the proc
    thread = binder_get_thread(proc);
    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
}

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto out;
    }
    //Copy the user-space binder_write_read data into the kernel-space bwr
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }

    if (bwr.write_size > 0) {
        //If there is data in the write buffer, perform the Binder write
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        //If the Binder write fails, copy bwr back to user space and return
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        //If there is data in the read buffer, perform the Binder read
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        //If the Binder read fails, copy bwr back to user space and return
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }

    //Copy bwr back to user space and return
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
  • Let's look at the binder_thread_write write path:
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
         ......
         //The command we sent earlier is BC_TRANSACTION
         case BC_TRANSACTION:
         case BC_REPLY: {
            struct binder_transaction_data tr;
            //Copy the user-space data into the kernel-space tr
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
    }
    return 0;
}

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ......
    if (reply) {
        ......
    } else {
        //The tr->target.handle we passed is 0, which means the ServiceManager process
        if (tr->target.handle) {
            ......
        } else {
            //Directly take ServiceManager's binder_node target node inside the Binder driver
            target_node = context->binder_context_mgr_node;
        }
        //Get ServiceManager's process
        target_proc = target_node->proc;
    }
    ......
    if (target_thread) {
        e->to_thread = target_thread->pid;
        //Get ServiceManager's todo queue and wait queue
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    }
    //Allocate memory for the two structures
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
    ......
    //Allocate a buffer from target_proc
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));
    ......
    //Add a BINDER_WORK_TRANSACTION work item to target_list, i.e. ServiceManager's todo queue
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    //Add a BINDER_WORK_TRANSACTION_COMPLETE work item to the current thread's todo queue
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    //If the ServiceManager process is waiting, wake it up
    if (target_wait) {
        if (reply || !(t->flags & TF_ONE_WAY))
            wake_up_interruptible_sync(target_wait);
        else
            wake_up_interruptible(target_wait);
    }
    return;
} 
  • (Binder architecture diagram referenced below; image not included)

Comparing against the Binder architecture diagram: the Client is the cameraserver process and the Server is the servicemanager process. The whole service-registration flow is: cameraserver sends a BC_TRANSACTION command into the Binder driver, and the Binder driver delivers a BR_TRANSACTION command to the servicemanager process, which registers the service. This generates a handle that identifies the "media.camera" service, and the service is added to the global svclist linked list. servicemanager then returns the result to the Binder driver with a BC_REPLY command, and the driver passes the result back to the cameraserver process with BR_REPLY.
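
To make the servicemanager side a bit more concrete, below is a heavily simplified sketch of what it does with the request, loosely based on frameworks/native/cmds/servicemanager/service_manager.c (the names are abridged and the permission/duplicate checks are omitted, so treat it as illustration rather than the exact implementation): the (name, handle) pair is recorded in the global svclist so that later lookups of "media.camera" return the same Binder handle.

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>

//Simplified stand-in for servicemanager's svcinfo entry
struct svcinfo_sketch {
    svcinfo_sketch *next;   //singly linked global list: svclist
    uint32_t handle;        //Binder handle identifying the remote service
    size_t len;             //length of the UTF-16 service name
    uint16_t name[32];      //"media.camera" stored inline (fixed size here for simplicity)
};

static svcinfo_sketch *svclist = nullptr;

//Roughly what happens when the add-service request arrives via BR_TRANSACTION
static int add_service_sketch(const uint16_t *s, size_t len, uint32_t handle) {
    //(uid/SELinux permission checks and duplicate handling elided)
    if (len == 0 || len >= 32) return -1;
    svcinfo_sketch *si = (svcinfo_sketch *)calloc(1, sizeof(*si));
    if (!si) return -1;
    si->handle = handle;
    si->len = len;
    memcpy(si->name, s, len * sizeof(uint16_t));
    si->next = svclist;     //prepend to the global list
    svclist = si;
    return 0;
}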

At this point the CameraService service name and instance are registered with ServiceManager, and from now on other processes can obtain the CameraService service across process boundaries through ServiceManager. In addition, through onFirstRef() CameraService has been hooked up to the camera HAL layer.
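
Once registration has completed, the client-side flow is just the mirror image of addService. Below is a minimal sketch of that lookup (the getService call and the String16 name match what was described above; the mention of interface_cast<ICameraService> is an assumption about how a typed proxy would be obtained and is not covered in this article):

#include <binder/IServiceManager.h>
#include <binder/IBinder.h>
#include <utils/String16.h>

using namespace android;

sp<IBinder> getCameraServiceBinder() {
    //defaultServiceManager() returns the BpServiceManager proxy (wrapping BpBinder(0))
    sp<IServiceManager> sm = defaultServiceManager();
    //getService() asks servicemanager to look "media.camera" up in svclist
    //and returns a BpBinder wrapping the handle it finds
    sp<IBinder> binder = sm->getService(String16("media.camera"));
    //a typed proxy could then be obtained with interface_cast<ICameraService>(binder)
    return binder;
}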