
CVPR2018 Paper Notes (6): TotalCapture, Part 2

Last week I did not get the blog updated on Friday due to health reasons, and I also did not have much time to read the paper, but I will still post what I have. This part covers the last paragraph of the Introduction.

Introduction

To overcome this sensing challenge, we present a novel generative body deformation model that has the ability to express the motion of each principal body part. In particular, we describe a procedure to build an initial body model, named “Frank”, by seamlessly consolidating available part template models [34, 15] into a single skeleton hierarchy. To fit this model to data, we leverage keypoint detection (e.g., faces [20], bodies [58, 16, 36], and hands [44]) in multiple views to obtain 3D keypoints which are robust to multiple people and object interactions. We fit the “Frank” model to a capture of 70 people, and learn a new deformation model, named “Adam”, capable of additionally capturing variations of hair and clothing with a simplified parameterization. We present a method to capture the total body motion of multiple people with the 3D deformable model. Finally, we demonstrate the performance of our method on various sequences of social behavior and person-object interactions, where the combination of face, limb, and finger motion emerges naturally.


To overcome this sensing challenge, we propose a novel generative body deformation model that can express the motion of each principal part of the body. In particular, we describe a procedure for building an initial body model, named "Frank", by seamlessly merging the available part template models into a single skeleton hierarchy. To fit this model to data, we use keypoint detection (e.g., of faces, bodies, and hands) in multiple views to obtain 3D keypoints that are robust to interactions between multiple people and objects. We fit the Frank model to a capture of 70 people and learn a new deformation model, named "Adam", which can additionally capture variations in hair and clothing with a simplified parameterization. We then present a method for capturing the total body motion of multiple people with this 3D deformable model. Finally, we demonstrate the performance of our method on various sequences of social behavior and person-object interaction, in which the combination of face, limb, and finger motion emerges naturally.
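
The paragraph only states that 2D keypoint detections from multiple views are lifted to 3D keypoints. As a minimal sketch of how that step can work, here is a standard linear (DLT) triangulation of one keypoint from several calibrated cameras. The function name `triangulate_keypoint`, the NumPy implementation, and the toy camera setup are my own illustration under that assumption, not code from the paper; the authors' pipeline additionally deals with detector confidences and association across multiple people.

```python
import numpy as np

def triangulate_keypoint(projections, points_2d):
    """Linear (DLT) triangulation of one keypoint observed in several views.

    projections : list of 3x4 camera projection matrices (one per view).
    points_2d   : list of (x, y) detections of the same keypoint in each view.
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous point X:
        #   x * (P[2] @ X) - (P[0] @ X) = 0
        #   y * (P[2] @ X) - (P[1] @ X) = 0
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

if __name__ == "__main__":
    # Toy setup: two calibrated cameras observing the 3D point (1, 2, 5).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at the origin
    P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])    # camera shifted along x
    detections = [(1 / 5, 2 / 5), (0.0, 2 / 5)]                   # ideal 2D projections
    print(triangulate_keypoint([P1, P2], detections))             # ~ [1. 2. 5.]
```

In a real multi-view capture each detection comes with a confidence score, so low-confidence views would typically be dropped or down-weighted before triangulating; that filtering is omitted here for brevity.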