Crumpling Sound Synthesis

Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng

ACM Transactions on Graphics (SIGGRAPH Asia 2016), 35(6)

Crumpling a thin sheet produces a characteristic sound composed of distinct clicking sounds, each corresponding to a buckling event. We propose a physically based algorithm that automatically synthesizes crumpling sounds for a given thin shell animation. The resulting sound is a superposition of individually synthesized clicking sounds corresponding to visually significant and insignificant buckling events. We identify visually significant buckling events on the dynamically evolving thin surface mesh, and instantiate visually insignificant buckling events via a stochastic model that seeks to mimic the power-law distribution of buckling energies observed in many materials.
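As a rough illustration of the stochastic model's last ingredient, power-law-distributed event energies can be drawn with standard inverse-CDF sampling. The exponent and energy cutoff below are illustrative placeholders, not values from the paper:

```python
import random

def sample_buckling_energy(e_min, alpha, rng=random):
    """Draw one buckling energy E >= e_min from a power-law
    density p(E) ~ E**(-alpha), alpha > 1, via inverse-CDF sampling.

    `e_min` and `alpha` are hypothetical parameters for illustration,
    not constants taken from the paper.
    """
    u = rng.random()  # uniform in [0, 1)
    return e_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# A batch of energies for stochastically instantiated
# (visually insignificant) buckling events.
energies = [sample_buckling_energy(1e-6, 2.5) for _ in range(1000)]
```

Because the density is heavy-tailed, most sampled events carry near-minimal energy while a few rare events are much larger, which matches the qualitative picture of many faint clicks punctuated by occasional loud ones.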

In either case, the synthesis of a buckling sound employs linear modal analysis of the deformed thin shell. Because different buckling events in general occur at different deformed configurations, the question arises whether the calculation of linear modes can be reused. We amortize the cost of the linear modal analysis by dynamically partitioning the mesh into nearly rigid pieces: the modal analysis of a rigidly moving piece is retained over time, and the modal analysis of the assembly is obtained via Component Mode Synthesis (CMS). We illustrate our approach through a series of examples and a perceptual user study, demonstrating the utility of the sound synthesis method in producing realistic sounds at practical computation times.
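The modal-synthesis step can be sketched in miniature: each click is a sum of damped sinusoids, and the final sound superposes clicks at their event times. The mode frequencies, dampings, and amplitudes below are made-up stand-ins for what the paper's modal analysis (and its CMS reassembly) would actually supply:

```python
import math

def synth_click(modes, sr=44100, dur=0.05):
    """Render one buckling 'click' as a superposition of damped
    sinusoids a_i * exp(-d_i * t) * sin(2*pi*f_i * t).

    `modes` is a list of (freq_hz, damping, amplitude) triples --
    hypothetical placeholders for the eigenmodes of the deformed shell.
    """
    n = int(sr * dur)
    out = [0.0] * n
    for f, d, a in modes:
        for k in range(n):
            t = k / sr
            out[k] += a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
    return out

def mix_clicks(clicks_at, total_dur, sr=44100):
    """Superpose pre-rendered clicks at given onset times (seconds)."""
    out = [0.0] * int(sr * total_dur)
    for t0, click in clicks_at:
        start = int(t0 * sr)
        for k, s in enumerate(click):
            if start + k < len(out):
                out[start + k] += s
    return out

# Two-mode click, reused at two event times.
click = synth_click([(1200.0, 80.0, 0.6), (3100.0, 150.0, 0.3)])
track = mix_clicks([(0.0, click), (0.12, click)], total_dur=0.25)
```

Reusing the same pre-rendered click at several onsets loosely mirrors the paper's amortization idea: when a piece of the mesh moves rigidly, its modal data need not be recomputed between events.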

Paper / Paper (low resolution)
Perceptual Study: GitHub / live demo / supplemental analysis
Recorded and Simulated Sounds
Video (100MB) / YouTube

bibtex citation
@article{cirio2016crumpling,
  title={Crumpling Sound Synthesis},
  author={Cirio, Gabriel and Li, Dingzeyu and Grinspun, Eitan and Otaduy, Miguel A. and Zheng, Changxi},
  journal={ACM Trans. Graph.},
  volume={35},
  number={6},
  year={2016}
}


We thank the anonymous reviewers for their feedback, as well as Anne-Hélène Olivier, Julien Pettré, Alec Jacobson for insightful discussions, and Breannan Smith for help with the submission. This work was supported in part by the National Science Foundation (CAREER-1453101, IIS-13-19483, IIS-14-09286, IIS-12-08153, and IIS-17257), the Spanish Ministry of Economy (TIN2015-70799-R) and the European Research Council (ERC Starting Grant no. 280135 Animetrics). The work of Gabriel Cirio was supported in part by the Spanish Ministry of Science and Education through a Juan de la Cierva Fellowship, as well as the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 706708. We are grateful for generous support from Pixar, Intel, Disney, Altair, and Adobe. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or others.