【ECCV2018】Modeling Varying Camera-IMU Time Offset in Optimization-Based Visual-Inertial Odometry

This post is about: modeling the varying camera-IMU time offset in optimization-based visual-inertial odometry.

(A note to self: from now on, when reading a paper I should skim the whole thing first and only then organize my notes, just like an English reading-comprehension exercise: get a quick global picture before digging into details.)
——————————————————————————————————————————————————————————————————

I. Basic concepts and background I didn't know or wasn't clear on:

A record of the things I didn't understand while reading the paper; it is also a way to expand my own knowledge:

1. Global-shutter and rolling-shutter cameras:

First, the shutter: this is what we call 快門 in Chinese, the mechanism that controls the effective exposure time of the camera's light-sensitive sensor.

- Short exposure time needed: global shutter, suited to fast-moving subjects (action shots), but noisier.
- Long exposure time needed: rolling shutter, suited to silky night-time light trails of traffic, with less noise.

Having compared the two common shutter types, here are the details. Global shutter: the entire scene is exposed at the same time. All sensor pixels collect light simultaneously: light collection starts when the exposure begins, the collection circuit is cut off when it ends, and the sensor values are then read out as one image. CCDs work in this global-shutter fashion, with all pixels exposed at once. Rolling shutter:

Unlike a global shutter, it exposes the sensor row by row: when the exposure starts, the sensor scans and exposes one row at a time until every pixel has been exposed. All of this happens within a very short interval, but different rows are exposed at different times.
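The row-by-row exposure described above is why rolling-shutter images effectively need a timestamp per row rather than per frame. A minimal sketch (the function name and parameters are my own illustration, not from the paper):

```python
# Sketch (my own illustration, not from the paper): timestamp of each row
# in a rolling-shutter frame. Row r of an H-row image is captured at
# t_frame_start + r * (readout_time / H), so different rows see the scene
# at slightly different times; a global shutter has a single timestamp
# for the whole frame.
def row_timestamp(t_frame_start, row, num_rows, readout_time):
    """Capture time (seconds) of one row in a rolling-shutter frame."""
    return t_frame_start + row * (readout_time / num_rows)

# Example: a 480-row frame with a 30 ms readout.
t_first = row_timestamp(0.0, 0, 480, 0.030)     # first row
t_last = row_timestamp(0.0, 479, 480, 0.030)    # last row, almost 30 ms later
```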

————————————————————————————————————————————————————————————————————

II. Information that may be useful to me:

1.【Papers on calibration:】
[1] Furgale, P., Rehder, J., Siegwart, R.: Unified temporal and spatial calibration for multi-sensor systems. In: Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst. (2013)
Online calibration:
[2] Weiss, S., Achtelik, M.W., Lynen, S., Chli, M., Siegwart, R.: Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In: Proc. of the IEEE Intl. Conf. on Robot. and Autom. (2012)
[3] Yang, Z., Shen, S.: Monocular visual-inertial fusion with online initialization and camera-IMU calibration. In: Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst. (2015)

2.【Papers on time offsets:】
First, what should be the earliest work introducing time offsets: Jacovitti et al. estimate the time offset by searching for the peak that maximizes the correlation between different sensors' measurements:
[1] Jacovitti, G., Scarano, G.: Discrete time techniques for time delay estimation. IEEE Transactions on Signal Processing 41 (1993)
The limitation of this approach: it cannot estimate time-varying time offsets.
Li et al. then adopted a different approach. They assume constant velocity along local trajectories; the time offset is included in the estimator's state vector and optimized together with the other state variables within an EKF framework:
[2] Li, M., Mourikis, A.I.: 3D motion estimation and online temporal calibration for camera-IMU systems. In: Proc. of the IEEE Intl. Conf. on Robot. and Autom. (2013)
Later, instead of explicitly optimizing the time offset, Guo et al. proposed an interpolation model to account for the pose displacements caused by time offsets; see:
[3] Hesch, J.A., Kottas, D.G., Bowman, S.L., Roumeliotis, S.I.: Consistency analysis and improvement of vision-aided inertial navigation. IEEE Trans. Robot. 30(1), 158–176 (Feb 2014)
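The correlation-peak idea from Jacovitti et al. can be sketched in a few lines. This is my own illustration of the technique, not code from the paper; the signal shapes and names are assumptions:

```python
import numpy as np

# Hedged sketch of correlation-based time-delay estimation (Jacovitti &
# Scarano style): the constant offset between two sensor streams is the
# lag that maximizes their cross-correlation.
def estimate_offset(sig_ref, sig_delayed, dt):
    """Return how much sig_delayed lags sig_ref, in seconds."""
    a = sig_delayed - sig_delayed.mean()
    b = sig_ref - sig_ref.mean()
    corr = np.correlate(a, b, mode="full")   # correlation at all integer lags
    lag = np.argmax(corr) - (len(b) - 1)     # lag index of the peak
    return lag * dt

# Example: a 1.5 Hz sinusoid sampled at 100 Hz, delayed by 5 samples.
t = np.arange(0, 2, 0.01)
sig_a = np.sin(2 * np.pi * 1.5 * t)
sig_b = np.roll(sig_a, 5)                    # circular delay of 0.05 s
offset = estimate_offset(sig_a, sig_b, 0.01)
```

As the notes above say, a single correlation peak can only recover one constant offset; that is exactly the limitation that motivated the later online, state-estimation-based approaches.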

Next is the part I am most interested in, Section 4 of the paper: Modeling Varying Camera-IMU Time Offset. I had originally hoped to do some work of my own on time offsets, but after reading this I realized my current ability is still far from that level. Keep practicing, step by step!
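The constant-velocity assumption mentioned above for Li et al. can be written down concretely. A hedged sketch in my own notation (not the paper's code): if the image timestamp is off by td, the pose at the true exposure time is approximated to first order by propagating the estimate with the current velocity.

```python
import numpy as np

# Hedged sketch of the constant-velocity idea used for online temporal
# calibration: position at the true capture time (t + td) is approximated
# by a first-order propagation of the estimate at the stamped time t.
def compensate_position(p, v, td):
    """Position at (t + td) ~= p(t) + v(t) * td under constant velocity."""
    return p + v * td

p = np.array([1.0, 0.0, 0.0])    # estimated position at stamped time t (m)
v = np.array([0.0, 2.0, 0.0])    # estimated velocity (m/s)
p_corr = compensate_position(p, v, 0.01)   # td = 10 ms time offset
```

Because td appears linearly in this model, it can be appended to the state vector and estimated alongside the other variables, which is what makes the EKF treatment in [2] work.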

1.【On the nonlinear optimization framework:】
The following two papers cover the concrete details; to read later when I have time:
[1] Shen, S., Michael, N., Kumar, V.: Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs. In: Proc. of the IEEE Intl. Conf. on Robot. and Autom., Seattle, WA (May 2015)
[2] Leutenegger, S., Furgale, P., Rabaud, V., Chli, M., Konolige, K., Siegwart, R.: Keyframe-based visual-inertial SLAM using nonlinear optimization. In: Proc. of Robot.: Sci. and Syst. (2013)

————————————————————————————————————————————————————————————————————