DeepCAD: A Deep Generative Network for Computer-Aided Design Models

ICCV 2021

Rundi Wu, Chang Xiao, Changxi Zheng

Columbia University



Deep generative models of 3D shapes have received a great deal of research interest. Yet, almost all of them generate discrete shape representations, such as voxels, point clouds, and polygon meshes. We present the first 3D generative model for a drastically different shape representation — describing a shape as a sequence of computer-aided design (CAD) operations. Unlike meshes and point clouds, CAD models encode the user creation process of 3D shapes, widely used in numerous industrial and engineering design tasks. However, the sequential and irregular structure of CAD operations poses significant challenges for existing 3D generative models. Drawing an analogy between CAD operations and natural language, we propose a CAD generative network based on the Transformer. We demonstrate the performance of our model for both shape autoencoding and random shape generation. To train our network, we create a new CAD dataset consisting of 178,238 models and their CAD construction sequences. We have made this dataset publicly available to promote future research on this topic.
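To make the "CAD operations as a language" analogy concrete, the sketch below (not the authors' code; the command names, fields, and token format are illustrative assumptions) shows how a CAD model might be flattened into a discrete token sequence that a Transformer could consume, much like words in a sentence:

```python
# Illustrative sketch: a CAD model as a sequence of construction commands,
# serialized into a token stream analogous to a sentence.
# Command names and parameter layouts here are simplified assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CADCommand:
    op: str                       # e.g. "Line", "Arc", "Circle", "Extrude"
    params: List[float] = field(default_factory=list)

def to_token_sequence(commands: List[CADCommand]) -> List[str]:
    """Flatten commands into a discrete token stream, bracketed by
    start/end tokens as in natural-language modeling."""
    tokens = ["<SOS>"]
    for cmd in commands:
        tokens.append(cmd.op)
        tokens.extend(f"{p:g}" for p in cmd.params)
    tokens.append("<EOS>")
    return tokens

# A square profile swept into a solid: four line segments, then an extrusion.
model = [
    CADCommand("Line", [1, 0]),
    CADCommand("Line", [1, 1]),
    CADCommand("Line", [0, 1]),
    CADCommand("Line", [0, 0]),
    CADCommand("Extrude", [1]),
]
seq = to_token_sequence(model)
```

Because the result is an ordered, variable-length sequence of discrete symbols, standard sequence models apply directly, which is the structural parallel to natural language the paper exploits.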

A Gallery of Generated Results


@InProceedings{Wu_2021_ICCV,
	author    = {Wu, Rundi and Xiao, Chang and Zheng, Changxi},
	title     = {DeepCAD: A Deep Generative Network for Computer-Aided Design Models},
	booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
	month     = {October},
	year      = {2021},
	pages     = {6772-6782}
}