Abstract:
We introduce Gaussian Wardrobe, a novel framework for digitizing compositional 3D neural avatars from multi-view videos. Existing methods for 3D neural avatars typically treat the human body and clothing as an inseparable entity. However, this paradigm fails to capture the dynamics of complex free-form garments and limits the reuse of clothing across different individuals. To overcome these problems, we develop a novel, compositional 3D Gaussian representation that builds avatars from multiple layers of free-form garments. The core of our method is decomposing neural avatars into bodies and layers of shape-agnostic neural garments. To achieve this, our framework learns to disentangle each garment layer from multi-view videos and canonicalizes it into a shape-independent space. In experiments, our method models photorealistic avatars with high-fidelity dynamics, achieving new state-of-the-art performance on novel pose synthesis benchmarks. In addition, we demonstrate that the learned compositional garments contribute to a versatile digital wardrobe, enabling a practical virtual try-on application where clothing can be freely transferred to new subjects.
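The compositional idea above — an avatar made of a body layer plus detachable garment layers, each with its own set of Gaussians — can be illustrated with a toy data structure. This is a simplified sketch for intuition only, not the paper's implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical, simplified stand-in for one 3D Gaussian layer:
# each layer owns its own Gaussian parameters (here just means and opacities).
@dataclass
class GaussianLayer:
    name: str
    positions: List[Tuple[float, float, float]]  # Gaussian means
    opacities: List[float]

    def num_gaussians(self) -> int:
        return len(self.positions)

# A compositional avatar: one body layer plus ordered garment layers,
# so garments can be removed or swapped without touching the body.
@dataclass
class CompositionalAvatar:
    body: GaussianLayer
    garments: List[GaussianLayer] = field(default_factory=list)

    def all_layers(self) -> List[GaussianLayer]:
        # Composition order: body first, then garments from inner to outer.
        return [self.body] + self.garments

    def num_gaussians(self) -> int:
        return sum(layer.num_gaussians() for layer in self.all_layers())

body = GaussianLayer("body", [(0.0, 0.0, 0.0)] * 3, [1.0] * 3)
shirt = GaussianLayer("shirt", [(0.0, 0.1, 0.0)] * 2, [0.9] * 2)
avatar = CompositionalAvatar(body, [shirt])
print(avatar.num_gaussians())  # 5
```

Because each garment is its own layer, transferring clothing to a new subject amounts to attaching that layer to a different body — the basis of the virtual try-on application.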
Installation:
- Clone this repo.
- Install the environment:
```bash
# install requirements
pip install -r requirements.txt

# install diff-gaussian-rasterization-depth-alpha
cd gaussians/diff_gaussian_rasterization_depth_alpha
python setup.py install
cd ../..

# install styleunet
cd network/styleunet
python setup.py install
cd ../..

# install pytorch3d
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d && pip install -e .
```
- Download the SMPL-X model and place the pkl files in ./smpl_files/smplx.
- Download the LPIPS weights and place the pth files in ./network/lpips/weights/v0.1.
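After installation, a quick sanity check can confirm the downloaded assets are in place before training. This checker is a hypothetical convenience script, not part of the repository; the paths follow the download steps above.

```python
from pathlib import Path

# Asset locations from the setup steps above: SMPL-X .pkl files and
# LPIPS v0.1 .pth weights.
REQUIRED_DIRS = [
    "smpl_files/smplx",            # should contain the SMPL-X .pkl files
    "network/lpips/weights/v0.1",  # should contain the LPIPS .pth files
]

def missing_assets(root: str) -> list:
    """Return the required asset directories that are absent or empty."""
    missing = []
    for rel in REQUIRED_DIRS:
        d = Path(root) / rel
        if not d.is_dir() or not any(d.iterdir()):
            missing.append(rel)
    return missing

if __name__ == "__main__":
    for rel in missing_assets("."):
        print(f"missing or empty: {rel}")
```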
Dataset preparation:
We have experimented with the 4D-Dress and ActorsHQ datasets. Prepare the data following GEN_DATA.md.
Note for the ActorsHQ dataset:
1. SMPL-X registration: we used the SMPL-X registrations provided by Animatable Gaussians.
Training:
Take subject 00134 from 4D-Dress as an example:
0. Prepare the training dataset using the instructions from the previous step.
1. Download its checkpoint or start from scratch.
2. Set the corresponding data_dir and net_ckpt_dir in the train section of ./configs/4d_dress/avatar.yaml.
3. Run:
```bash
python main_avatar.py -c configs/4d_dress/avatar.yaml --mode=train
```
Testing:
Take subject 00134 from 4D-Dress as an example:
0. Download the checkpoint for the subject.
1. Prepare the testing dataset according to GEN_DATA.md.
2. Set the corresponding data_dir and prev_ckpt in the test section of ./configs/4d_dress/avatar.yaml.
3. Run:
```bash
python main_avatar.py -c configs/4d_dress/avatar.yaml --mode=test
```
Some example test animations are provided under the assets folder, e.g. 156_test.mp4 and 185_ours.mp4.
Cloth exchange:
Take subjects 00134 and 00140 from 4D-Dress as an example. We provide a script, generate_pos_script.py, for generating the exchange dataset for 4D-Dress subjects:
0. Update the macros in generate_pos_script.py.
1. Run generate_pos_script.py for the target combination.
2. Run:
```bash
python main_avatar.py -c configs/4d_dress/exchange.yaml --mode=exchange_cloth
```
Some example exchange results are provided under the assets folder, e.g. 185_134_full.mp4 and 127_134_outer.mp4.
For other combinations, please follow the format in the configs/4d_dress/exchange.yaml configuration file.
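Conceptually, cloth exchange takes a garment layer learned from one subject and attaches it to another subject's body. A toy sketch of that operation, with avatars modeled as plain dicts and all names hypothetical (subject IDs borrowed from the examples above):

```python
import copy

# Toy avatars: a body plus named garment layers, inner to outer.
# Purely illustrative; the actual pipeline operates on 3D Gaussian layers.
src = {"body": "body_00185",
       "garments": {"inner": "shirt_00185", "outer": "coat_00185"}}
dst = {"body": "body_00134",
       "garments": {"inner": "shirt_00134", "outer": "coat_00134"}}

def exchange_garment(source, target, layer):
    """Return a copy of `target` wearing `source`'s garment at `layer`."""
    result = copy.deepcopy(target)  # leave the original target untouched
    result["garments"][layer] = source["garments"][layer]
    return result

# Swap only the outer layer, as in the 127_134_outer example.
swapped = exchange_garment(src, dst, "outer")
print(swapped["garments"])  # {'inner': 'shirt_00134', 'outer': 'coat_00185'}
```

Because garments live in a shape-independent canonical space, the swapped layer can be re-posed on the new body rather than copied pixel-for-pixel.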
Evaluation:
We provide evaluation metrics in eval/eval_metrics.py:
- Generate the testing pose images.
- Update the data_dir macros in eval/eval_metrics.py.
- Run:
```bash
python eval/eval_metrics.py
```
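eval/eval_metrics.py produces the benchmark numbers; for reference, PSNR — a standard metric for novel pose synthesis — can be computed as below. This is a minimal sketch assuming images are flat lists of pixel intensities in [0, 1]; it does not reproduce the repo's exact evaluation code.

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given as flat lists of pixel intensities in [0, max_val]."""
    assert len(img_a) == len(img_b) and img_a
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a uniform error of 0.1 gives MSE 0.01, i.e. PSNR 20 dB.
pred = [0.1] * 16
gt = [0.0] * 16
print(round(psnr(pred, gt), 2))  # 20.0
```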
Our code is based on the following repos: