Learning to Generate 3D Shapes
from a Single Example

SIGGRAPH Asia 2022 (Journal Track)

Rundi Wu (Columbia University)

Changxi Zheng (Columbia University)

Paper (39.3 MB)
arXiv (5.5 MB)
Pretrained models


Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate a deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid the large memory and computational cost of operating on 3D volumes, we build our generator atop the tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes, possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales while retaining the global structure of the reference shape. Through extensive qualitative and quantitative evaluation, we demonstrate that our model can generate 3D shapes of various types.
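To make the tri-plane idea concrete, the sketch below shows how a 3D point can be decoded from three axis-aligned 2D feature planes: each plane is sampled bilinearly at the point's projection, and the three feature vectors are summed. This is a minimal illustration of the general tri-plane querying scheme, not the paper's implementation; all function names, the plane resolution, and the channel count are hypothetical, and the small decoder that would map the summed feature to occupancy or SDF values is omitted.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (C, H, W) feature plane at continuous
    coordinates (u, v) in [0, 1]^2."""
    C, H, W = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[:, y0, x0]
            + wx * (1 - wy) * plane[:, y0, x1]
            + (1 - wx) * wy * plane[:, y1, x0]
            + wx * wy * plane[:, y1, x1])

def query_triplane(planes, p):
    """Query a tri-plane representation at 3D point p = (x, y, z) in [0, 1]^3.

    `planes` is a dict of three (C, H, W) feature arrays keyed "xy", "xz",
    "yz" (hypothetical layout). The point is projected onto each axis-aligned
    plane, the three sampled features are summed, and a decoder (not shown)
    would map the result to an occupancy or SDF value.
    """
    x, y, z = p
    return (bilinear_sample(planes["xy"], x, y)
            + bilinear_sample(planes["xz"], x, z)
            + bilinear_sample(planes["yz"], y, z))

# Toy example: random 8-channel feature planes at 32x32 resolution.
rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((8, 32, 32)) for k in ("xy", "xz", "yz")}
feat = query_triplane(planes, (0.5, 0.25, 0.75))
print(feat.shape)  # (8,)
```

Because the planes are 2D, memory grows as O(3CN^2) rather than O(CN^3) for a dense feature volume, which is the efficiency argument made in the abstract.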

A Gallery of Examples

(A GUI demo is available in our git repo!)


@article{wu2022learning,
  title={Learning to Generate 3D Shapes from a Single Example},
  author={Wu, Rundi and Zheng, Changxi},
  journal={ACM Transactions on Graphics (TOG)},
  year={2022},
  publisher={ACM New York, NY, USA}
}