
CSDN: Cross-modal Shape-transfer Dual-refinement Network for Point Cloud Completion

This repository contains the PyTorch implementation of the paper:

CSDN: Cross-modal Shape-transfer Dual-refinement Network for Point Cloud Completion, TVCG 2024

Zhe Zhu, Liangliang Nan, Haoran Xie, Honghua Chen, Jun Wang, Mingqiang Wei, Jing Qin.

Abstract

How would you repair a physical object with missing parts? You might imagine its original shape from previously captured images, first recover its overall (global) but coarse shape, and then refine its local details. We are motivated to imitate this physical repair procedure to address point cloud completion. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with full-cycle participation of images, for high-quality point cloud completion. CSDN mainly consists of "shape fusion" and "dual-refinement" modules to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into the completion process. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relations between the novel and the input points via graph convolution, and the global constraint unit utilizes the input image to fine-tune the generated offsets. Unlike most existing approaches, CSDN not only explores the complementary information from images but also effectively exploits cross-modal data throughout the whole coarse-to-fine completion procedure. Experimental results indicate that CSDN performs favorably against twelve competitors on the cross-modal benchmark.
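
To make the IPAdaIN idea concrete, below is a minimal, illustrative sketch of AdaIN-style conditioning, in which a global image feature predicts per-channel scale and shift parameters that modulate normalized per-point features. This is not the authors' implementation; the class name, dimensions, and layer choices are assumptions for illustration only.

import torch
import torch.nn as nn

class IPAdaINSketch(nn.Module):
    """AdaIN-style conditioning sketch: a global image feature predicts
    per-channel scale and shift that modulate normalized per-point features."""
    def __init__(self, feat_dim: int = 256, cond_dim: int = 1024):
        super().__init__()
        self.norm = nn.InstanceNorm1d(feat_dim, affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, feat_dim * 2)

    def forward(self, point_feats, global_cond):
        # point_feats: (B, feat_dim, N) per-point features of the partial cloud
        # global_cond: (B, cond_dim) global feature, e.g. from an image encoder
        scale, shift = self.to_scale_shift(global_cond).chunk(2, dim=1)
        normalized = self.norm(point_feats)
        return normalized * (1 + scale.unsqueeze(-1)) + shift.unsqueeze(-1)

# Example: condition 2048 per-point features on a 1024-dim image embedding.
feats = torch.randn(2, 256, 2048)
img_embedding = torch.randn(2, 1024)
out = IPAdaINSketch()(feats, img_embedding)  # (2, 256, 2048)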

Get Started

Environment and Installation

The code has been tested on Ubuntu 20.04 with GCC 9.4.0, Python 3.6, PyTorch 1.8.2, CUDA 11.1, and cuDNN 8.1.0.

Install the PointNet++ ops and build the Chamfer Distance extension (note that GitHub no longer serves the unauthenticated git:// protocol, so the package is installed over HTTPS):

pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"

cd ../Chamfer3D

python setup.py install
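
As a quick sanity check that the PointNet++ ops built correctly, you can run furthest point sampling on a random cloud (this snippet assumes a CUDA-capable GPU and is not part of the repository):

import torch
from pointnet2_ops import pointnet2_utils

# Sample 256 points from a random cloud with furthest point sampling.
xyz = torch.rand(1, 1024, 3).cuda()
idx = pointnet2_utils.furthest_point_sample(xyz, 256)
print(idx.shape)  # torch.Size([1, 256]) -- indices of the sampled points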

Dataset

Download the ShapeNetViPC dataset from ViPC and specify the data path in Train.py.

Training

python Train.py
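
For orientation, the sketch below shows the kind of objective such a coarse-to-fine completion network typically minimizes: Chamfer distance on both the coarse and the refined outputs. It uses a plain-PyTorch Chamfer distance rather than the compiled Chamfer3D extension, and all names and weights are illustrative, not the repository's actual Train.py.

import torch

def chamfer_l2(a, b):
    # a: (B, N, 3), b: (B, M, 3); symmetric mean of squared nearest-neighbor distances
    dist = torch.cdist(a, b)  # (B, N, M) pairwise Euclidean distances
    return dist.min(dim=2).values.pow(2).mean() + dist.min(dim=1).values.pow(2).mean()

def completion_loss(coarse, refined, gt, alpha=1.0):
    # Supervise both stages so the refinement only has to correct residual errors.
    return chamfer_l2(coarse, gt) + alpha * chamfer_l2(refined, gt)

# Example with random tensors standing in for network outputs and ground truth.
coarse = torch.rand(2, 1024, 3)
refined = torch.rand(2, 2048, 3)
gt = torch.rand(2, 2048, 3)
loss = completion_loss(coarse, refined, gt)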

Citation

If you use CSDN in your research, please consider citing our paper:

@article{zhu2023csdn,
  title={CSDN: Cross-modal Shape-transfer Dual-refinement Network for Point Cloud Completion},
  author={Zhu, Zhe and Nan, Liangliang and Xie, Haoran and Chen, Honghua and Wang, Jun and Wei, Mingqiang and Qin, Jing},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}

Acknowledgement

The code is based on ViPC, and parts of it are borrowed from other open-source projects.

The point clouds are visualized with Easy3D.

We thank the authors for their great work.

License

This project is open-sourced under the MIT license.
