Interactive Acoustic Transfer Approximation for Modal Sound

Dingzeyu Li, Yun Fei, Changxi Zheng

ACM Transactions on Graphics, 35(1), 2015 (presented at SIGGRAPH 2016)



abstract
Current linear modal sound models are tightly coupled with their frequency content. Both the modal vibration of object surfaces and the resulting sound radiation depend on the vibration frequency. Whenever the user tweaks modal parameters to adjust frequencies, the modal sound model changes completely, necessitating expensive recomputation of modal vibration and sound radiation.
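For intuition (using notation introduced here, not taken from the paper), each mode's acoustic transfer field p solves an exterior Helmholtz problem whose wavenumber is set by the modal frequency:

\nabla^2 p + k^2 p = 0, \qquad k = \frac{\omega}{c},

with a Neumann boundary condition \partial p / \partial n = -\,i\,\omega\,\rho\, v_n driven by the modal surface velocity. Both the wavenumber k and the boundary data depend on the modal frequency \omega, so editing a frequency would normally force the transfer field to be recomputed from scratch.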

We propose a new method for interactive, continuous editing and exploration of modal sound parameters. We start by sampling a number of key points around a vibrating object, and then devise a compact, low-memory representation of the frequency-varying acoustic transfer values at each key point using Prony series. We efficiently precompute these series using an adaptive frequency sweeping algorithm and a volume-velocity-preserving mesh simplification. At runtime, we approximate acoustic transfer values using standard multipole expansions: given user-specified modal frequencies, we solve a small least-squares system to estimate the expansion coefficients, and thereby quickly compute the resulting sound pressure at arbitrary listening locations. We demonstrate the numerical accuracy and runtime performance of our method on a set of comparisons and examples, and evaluate the resulting sound quality with user perception studies.
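A minimal sketch of the runtime step, assuming precomputed Prony-series coefficients at the key points and a low-order multipole expansion centered at the object; the names, data layouts, and truncation order below are illustrative and not the authors' implementation:

import numpy as np
from scipy.special import spherical_jn, spherical_yn, sph_harm

SPEED_OF_SOUND = 343.0  # m/s in air (assumed constant)

def prony_eval(amplitudes, exponents, omega):
    # Prony series: p(omega) ~ sum_j a_j * exp(s_j * omega), with complex a_j, s_j.
    return np.sum(amplitudes * np.exp(exponents * omega))

def multipole_basis(points, k, order):
    # Outgoing multipole basis about the origin:
    #   h_n^(2)(k r) * Y_n^m(azimuth, polar),  n = 0..order,  m = -n..n.
    x, y, z = np.asarray(points, dtype=float).T
    r = np.sqrt(x * x + y * y + z * z)
    polar = np.arccos(np.clip(z / r, -1.0, 1.0))
    azimuth = np.arctan2(y, x)
    cols = []
    for n in range(order + 1):
        hankel = spherical_jn(n, k * r) - 1j * spherical_yn(n, k * r)
        for m in range(-n, n + 1):
            # SciPy's sph_harm takes (m, n, azimuthal angle, polar angle).
            cols.append(hankel * sph_harm(m, n, azimuth, polar))
    return np.stack(cols, axis=1)  # shape: (#points, (order + 1)^2)

def fit_expansion(key_points, prony_terms, omega, order=2):
    # 1. Evaluate each key point's precomputed Prony series at the edited frequency.
    rhs = np.array([prony_eval(a, s, omega) for (a, s) in prony_terms])
    # 2. Fit the multipole coefficients with a small least-squares solve.
    k = omega / SPEED_OF_SOUND
    coeffs, *_ = np.linalg.lstsq(multipole_basis(key_points, k, order), rhs, rcond=None)
    return coeffs

def pressure_at(listener, coeffs, omega, order=2):
    # Evaluate the fitted expansion at an arbitrary listening position.
    k = omega / SPEED_OF_SOUND
    return (multipole_basis(np.atleast_2d(listener), k, order) @ coeffs)[0]

In this sketch the basis matrix has only (order + 1)^2 columns, so the least-squares solve stays small enough to rerun interactively whenever a modal frequency is edited.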

downloads
Paper / Paper (low resolution)
Slides: keynote (160MB) / pdf (25MB)
Video (354MB) / YouTube
Parameter Space Demo

bibtex citation
@article{Li:2015:transfer,
  title={Interactive Acoustic Transfer Approximation for Modal Sound},
  author={Li, Dingzeyu and Fei, Yun and Zheng, Changxi},
  journal={ACM Trans. Graph.},
  volume={35},
  number={1},
  year={2015},
  doi={10.1145/2820612},
}


acknowledgements
We thank the anonymous reviewers for their feedback. We also thank Jeff Chadwick for sharing the code of Harmonic Shells, Jie Tan for sharing the iJump animation data, Timothy Sun for adding the voice in the video, Breannan Smith for sharing the RoSI code, and Henrique Maia for helping revise an early draft. This research was supported in part by the National Science Foundation (CAREER-1453101) and generous donations from Intel. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of funding agencies or others.