SoftRas is one of the mainstream differentiable renderers for triangle meshes.
Differentiable rendering computes derivatives of the rendering process, making it increasingly practical to learn 3D structure from a single image. It is now widely used in 3D reconstruction, notably body reconstruction, face reconstruction, and 3D attribute estimation.
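The core idea can be illustrated without any renderer: hard rasterization is a step function of a pixel's signed distance to a triangle, so its gradient is zero almost everywhere, while SoftRas replaces it with a sigmoid that carries a usable gradient. A minimal 1D sketch (the `sigma` sharpness parameter and the signed-distance setup here are illustrative assumptions, not the repo's API):

```python
import math

def hard_coverage(d):
    # Hard rasterization: binary inside/outside test.
    # The gradient w.r.t. d is zero almost everywhere.
    return 1.0 if d > 0 else 0.0

def soft_coverage(d, sigma=0.01):
    # SoftRas-style soft coverage: sigmoid of signed distance,
    # scaled by a sharpness parameter sigma (illustrative value).
    return 1.0 / (1.0 + math.exp(-d / sigma))

def numeric_grad(f, d, eps=1e-6):
    # Central finite difference, standing in for autodiff.
    return (f(d + eps) - f(d - eps)) / (2 * eps)

d = 0.005  # a pixel slightly inside the triangle
print(numeric_grad(hard_coverage, d))  # 0.0 — no learning signal
print(numeric_grad(soft_coverage, d))  # positive — geometry can be optimized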
Installation
conda
Set up the PyTorch environment:
conda create -n torch python=3.8 -y
conda activate torch
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -y
Verify the versions:
python - <<-EOF
import platform
import torch
print(f"Python : {platform.python_version()}")
print(f"PyTorch: {torch.__version__}")
print(f" CUDA : {torch.version.cuda}")
EOF
Python : 3.8.10
PyTorch: 1.9.0
CUDA : 11.1
Fetch the code and install:
git clone https://github.com/ShichenLiu/SoftRas.git
cd SoftRas
python setup.py install
A mirror for `setup.py` can be configured:
cat <<-EOF > ~/.pydistutils.cfg
[easy_install]
index_url = http://mirrors.aliyun.com/pypi/simple
EOF
Usage
Install a model viewer:
snap install ogre-meshviewer
# or
snap install meshlab
Rendering an Object
Run the render demo:
CUDA_VISIBLE_DEVICES=0 python examples/demo_render.py
Render result:
Compare the models before and after rendering:
ogre-meshviewer data/obj/spot/spot_triangulated.obj
ogre-meshviewer data/results/output_render/saved_spot.obj
Mesh Reconstruction
Download the dataset:
bash examples/recon/download_dataset.sh
Train the model:
$ CUDA_VISIBLE_DEVICES=0 python examples/recon/train.py -eid recon
Loading dataset: 100%|██████████████████████████| 13/13 [00:35<00:00, 2.74s/it]
Iter: [0/250000] Time 1.189 Loss 0.655 lr 0.000100 sv 0.000100
Iter: [100/250000] Time 0.464 Loss 0.405 lr 0.000100 sv 0.000100
...
Iter: [250000/250000] Time 0.450 Loss 0.128 lr 0.000030 sv 0.000030
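The `Loss` printed above is, per the SoftRas paper, dominated by a silhouette IoU term: a soft intersection-over-union between the rendered and ground-truth silhouettes. A minimal sketch of such a loss, assuming flattened soft silhouettes (the function name and example values are illustrative, not the repo's code):

```python
def neg_iou_loss(pred, target):
    # Soft silhouette IoU loss (sketch): 1 - |P ∩ T| / |P ∪ T|,
    # with soft intersection p*t and soft union p + t - p*t,
    # so it stays differentiable in the predicted probabilities.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return 1.0 - inter / union

pred   = [0.9, 0.8, 0.1, 0.0]   # predicted soft silhouette (flattened)
target = [1.0, 1.0, 0.0, 0.0]   # ground-truth binary silhouette
print(round(neg_iou_loss(pred, target), 4))  # 0.1905
```

The loss reaches 0 only when the rendered silhouette matches the target exactly, which is what drives the decrease from 0.655 to 0.128 over training.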
Test the model:
$ CUDA_VISIBLE_DEVICES=0 python examples/recon/test.py -eid recon \
-d 'data/results/models/recon/checkpoint_0250000.pth.tar'
Loading dataset: 100%|██████████████████████████| 13/13 [00:03<00:00, 3.25it/s]
Iter: [0/97] Time 0.419 IoU 0.697
=================================
Mean IoU: 65.586 for class Airplane
Iter: [0/43] Time 0.095 IoU 0.587
=================================
Mean IoU: 49.798 for class Bench
Iter: [0/37] Time 0.089 IoU 0.621
=================================
Mean IoU: 68.975 for class Cabinet
Iter: [0/179] Time 0.088 IoU 0.741
Iter: [100/179] Time 0.083 IoU 0.772
=================================
Mean IoU: 74.224 for class Car
Iter: [0/162] Time 0.086 IoU 0.565
Iter: [100/162] Time 0.085 IoU 0.522
=================================
Mean IoU: 52.933 for class Chair
Iter: [0/26] Time 0.094 IoU 0.681
=================================
Mean IoU: 60.553 for class Display
Iter: [0/55] Time 0.087 IoU 0.526
=================================
Mean IoU: 45.751 for class Lamp
Iter: [0/38] Time 0.086 IoU 0.580
=================================
Mean IoU: 65.626 for class Loudspeaker
Iter: [0/56] Time 0.090 IoU 0.783
=================================
Mean IoU: 68.683 for class Rifle
Iter: [0/76] Time 0.092 IoU 0.647
=================================
Mean IoU: 68.111 for class Sofa
Iter: [0/204] Time 0.090 IoU 0.405
Iter: [100/204] Time 0.087 IoU 0.435
Iter: [200/204] Time 0.086 IoU 0.567
=================================
Mean IoU: 46.206 for class Table
Iter: [0/25] Time 0.097 IoU 0.901
=================================
Mean IoU: 82.261 for class Telephone
Iter: [0/46] Time 0.087 IoU 0.503
=================================
Mean IoU: 61.019 for class Watercraft
=================================
Mean IoU: 62.287 for all classes
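The per-class "Mean IoU" above averages intersection-over-union between the reconstructed shape and the ground truth, then averages again over classes. A toy illustration on flattened binary occupancy masks (the masks here are made up for the example):

```python
def iou(pred, gt):
    # Binary-mask IoU: |pred ∩ gt| / |pred ∪ gt|.
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union

pred = [1, 1, 0, 1, 0]  # predicted occupancy (flattened)
gt   = [1, 0, 0, 1, 1]  # ground-truth occupancy
print(iou(pred, gt))  # 2 / 4 = 0.5
```

Reported scores like 62.287 are this ratio in percent, averaged over the test set.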
Reconstruct a mesh:
# Put `softras_recon.py` into `examples/recon/`
# https://github.com/ikuokuo/start-3d-recon/blob/master/samples/softras_recon.py
# Comment out `iou` so it returns 0 directly, in `evaluate_iou()` of `examples/recon/models.py`
# Reconstruct a 3D mesh from a 2D image
CUDA_VISIBLE_DEVICES=0 python examples/recon/softras_recon.py \
-s '.' \
-d 'data/results/models/recon/checkpoint_0250000.pth.tar' \
-img 'data/car_64x64.png'
ogre-meshviewer data/car_64x64.obj
Image to reconstruct:
Reconstruction result:
Or reconstruct images from the ShapeNet dataset:
# mesh recon images of ShapeNet dataset
CUDA_VISIBLE_DEVICES=0 python examples/recon/softras_recon.py \
-s '.' \
-d 'data/results/models/recon/checkpoint_0250000.pth.tar' \
-imgs 'data/datasets/02958343_test_images.npz'
Or use the models pretrained by SoftRas:
- SoftRas trained with silhouettes supervision (62+ IoU): google drive
- SoftRas trained with shading supervision (64+ IoU, test with `--shading-model` arg): google drive
- SoftRas reconstructed meshes with color (random sampled): google drive
More
GoCoding shares experience from hands-on practice; follow the official account!