Computer Graphics

Rundi Wu, Chang Xiao, and Changxi Zheng
DeepCAD: A Deep Generative Network for Computer-Aided Design Models.
International Conference on Computer Vision (ICCV) 2021

Paper Project Page Abstract
Deep generative models of 3D shapes have received a great deal of research interest. Yet, almost all of them generate discrete shape representations, such as voxels, point clouds, and polygon meshes. We present the first 3D generative model for a drastically different shape representation--describing a shape as a sequence of computer-aided design (CAD) operations. Unlike meshes and point clouds, CAD models encode the user creation process of 3D shapes, widely used in numerous industrial and engineering design tasks. However, the sequential and irregular structure of CAD operations poses significant challenges for existing 3D generative models. Drawing an analogy between CAD operations and natural language, we propose a CAD generative network based on the Transformer. We demonstrate the performance of our model for both shape autoencoding and random shape generation. To train our network, we create a new CAD dataset consisting of 178,238 models and their CAD construction sequences. We have made this dataset publicly available to promote future research on this topic.
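To make the language analogy concrete, a toy autoregressive Transformer over tokenized CAD operations might look as follows. This is a minimal sketch, not the DeepCAD architecture; the vocabulary size, model dimensions, and random token stream are illustrative assumptions.

import torch
import torch.nn as nn

class ToyCADSeqModel(nn.Module):
    # Toy autoregressive Transformer over tokenized CAD operations.
    def __init__(self, vocab_size=256, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        n = tokens.shape[1]
        # Causal mask: each CAD operation attends only to earlier ones.
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)   # next-token logits at every position

tokens = torch.randint(0, 256, (2, 16))   # two random 16-op "CAD sequences"
print(ToyCADSeqModel()(tokens).shape)     # torch.Size([2, 16, 256])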
Wei Li, Yixin Chen, Mathieu Desbrun, Changxi Zheng, Xiaopei Liu
Fast and Scalable Turbulent Flow Simulation with Two-Way Coupling.
ACM Transactions on Graphics (SIGGRAPH 2020)

Paper (PDF) Project Page Abstract Video
Despite their cinematic appeal, turbulent flows involving fluid-solid coupling remain a computational challenge in animation. At the root of this current limitation is the numerical dispersion from which most accurate Navier-Stokes solvers suffer: proper coupling between fluid and solid often generates artificial dispersion in the form of local, parasitic trains of velocity oscillations, eventually leading to numerical instability. While successive improvements over the years have led to conservative and detail-preserving fluid integrators, the dispersive nature of these solvers is rarely discussed despite its dramatic impact on fluid-structure interaction. In this paper, we introduce a novel low-dissipation and low-dispersion fluid solver that can simulate two-way coupling in an efficient and scalable manner, even for turbulent flows. In sharp contrast with most current CG approaches, we construct our solver from a kinetic formulation of the flow derived from statistical mechanics. Unlike existing lattice Boltzmann solvers, our approach leverages high-order moment relaxations as a key to controlling both dissipation and dispersion of the resulting scheme. Moreover, we combine our new fluid solver with the immersed boundary method to easily handle fluid-solid coupling through time adaptive simulations. Our kinetic solver is highly parallelizable by nature, making it ideally suited for implementation on single- or multi-GPU computing platforms. Extensive comparisons with existing solvers on synthetic tests and real-life experiments are used to highlight the multiple advantages of our work over traditional and more recent approaches, in terms of accuracy, scalability, and efficiency.
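As background for the kinetic formulation, the classic single-relaxation-time (BGK) D2Q9 lattice Boltzmann update is sketched below in NumPy. The paper's solver replaces BGK with high-order moment relaxations and adds immersed-boundary coupling, both of which this sketch omits; grid size and relaxation time are illustrative.

import numpy as np

# D2Q9 lattice: discrete velocities e_q and quadrature weights w_q.
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    eu = np.einsum('qd,xyd->qxy', e, u)
    uu = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*uu)

def bgk_step(f, tau=0.6):
    rho = f.sum(axis=0)                                   # density moment
    u = np.einsum('qxy,qd->xyd', f, e) / rho[..., None]   # velocity moment
    f = f - (f - equilibrium(rho, u)) / tau               # collide (relax to f_eq)
    for q in range(9):                                    # stream along e_q
        f[q] = np.roll(f[q], e[q], axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))   # quiescent start
f = bgk_step(f)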
Yun (Raymond) Fei, Christopher Batty, Eitan Grinspun, and Changxi Zheng
A Multi-Scale Model for Coupling Strands with Shear-Dependent Liquid.
ACM Transactions on Graphics (SIGGRAPH Asia 2019)

Paper (PDF) Project Page Abstract Video
We propose a framework for simulating the complex dynamics of strands interacting with compressible, shear-dependent liquids, such as oil paint, mud, cream, melted chocolate, and pasta sauce. Our framework contains three main components: the strands modeled as discrete rods, the bulk liquid represented as a continuum (material point method), and a reduced-dimensional flow of liquid on the surface of the strands with detailed elastoviscoplastic behavior. These three components are tightly coupled together. To enable discrete strands interacting with continuum-based liquid, we develop models that account for the volume change of the liquid as it passes through strands and the momentum exchange between the strands and the liquid. We also develop an extended constraint-based collision handling method that supports cohesion between strands. Furthermore, we present a principled method to preserve the total momentum of a strand and its surface flow, as well as an analytic plastic flow approach for Herschel-Bulkley fluid that enables stable semi-implicit integration at larger time steps. We explore a series of challenging scenarios, involving splashing, shaking, and agitating the liquid which causes the strands to stick together and become entangled.
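The Herschel-Bulkley relation underlying the shear-dependent behavior is compact: below a yield stress the material resists like a solid, and above it the stress grows as a power of the shear rate, tau = tau_y + k * gamma_dot^n. A direct evaluation, with illustrative parameters:

import numpy as np

def herschel_bulkley_stress(gamma_dot, tau_y=10.0, k=2.0, n=0.5):
    # tau_y: yield stress, k: consistency, n: flow index
    # (n < 1 gives shear thinning, as in many of the liquids above).
    return tau_y + k * np.power(gamma_dot, n)

rates = np.array([0.1, 1.0, 10.0, 100.0])   # shear rates (1/s)
print(herschel_bulkley_stress(rates))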
Cheng Zhang, Lifan Wu, Changxi Zheng, Ioannis Gkioulekas, Ravi Ramamoorthi, and Shuang Zhao
A Differential Theory of Radiative Transfer.
ACM Transactions on Graphics (SIGGRAPH Asia 2019)

Paper (PDF) Project Page Abstract
Physics-based differentiable rendering is the task of estimating the derivatives of radiometric measures with respect to scene parameters. The ability to compute these derivatives is necessary for enabling gradient-based optimization in a diverse array of applications: from solving analysis-by-synthesis problems to training machine learning pipelines incorporating forward rendering processes. Unfortunately, physics-based differentiable rendering remains challenging, due to the complex and typically nonlinear relation between pixel intensities and scene parameters.

We introduce a differential theory of radiative transfer, which shows how individual components of the radiative transfer equation (RTE) can be differentiated with respect to arbitrary differentiable changes of a scene. Our theory encompasses the same generality as the standard RTE, allowing differentiation while accurately handling a large range of light transport phenomena such as volumetric absorption and scattering, anisotropic phase functions, and heterogeneity. To numerically estimate the derivatives given by our theory, we introduce an unbiased Monte Carlo estimator supporting arbitrary surface and volumetric configurations. Our technique differentiates path contributions symbolically and uses additional boundary integrals to capture geometric discontinuities such as visibility changes.

We validate our method by comparing our derivative estimations to those generated using the finite-difference method. Furthermore, we use a few synthetic examples inspired by real-world applications in inverse rendering, non-line-of-sight (NLOS) and biomedical imaging, and design, to demonstrate the practical usefulness of our technique.
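The finite-difference validation strategy is easy to replicate on a toy quantity: differentiate the volumetric transmittance T(sigma) = exp(-sigma * d) analytically with respect to the extinction coefficient sigma and compare against central differences. A minimal sketch (the transmittance example is ours, not the paper's estimator):

import numpy as np

def transmittance(sigma, d=1.5):
    return np.exp(-sigma * d)

def d_transmittance(sigma, d=1.5):
    return -d * np.exp(-sigma * d)   # analytic derivative w.r.t. sigma

sigma, h = 0.8, 1e-5
fd = (transmittance(sigma + h) - transmittance(sigma - h)) / (2 * h)
print(fd, d_transmittance(sigma))    # should agree to ~1e-10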
Zahra Montazeri, Chang Xiao, Yun (Raymond) Fei, Changxi Zheng, and Shuang Zhao
Mechanics-Aware Modeling of Cloth Appearance.
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2019

Paper (PDF) Abstract Video
Micro-appearance models have brought unprecedented fidelity and details to cloth rendering. Yet, these models neglect fabric mechanics: when a piece of cloth interacts with the environment, its yarn and fiber arrangement usually changes in response to external contact and tension forces. Since subtle changes of a fabric's microstructures can greatly affect its macroscopic appearance, mechanics-driven appearance variation of fabrics has remained a phenomenon yet to be captured. We introduce a mechanics-aware model that adapts the microstructures of cloth yarns in a physics-based manner. Our technique works on two distinct physical scales: using physics-based simulations of individual yarns, we capture the rearrangement of yarn-level structures in response to external forces. These yarn structures are further enriched to obtain appearance-driving fiber-level details. The cross-scale enrichment is made practical through a new parameter fitting algorithm for simulation, an augmented procedural yarn model coupled with a custom-designed regression neural network. We train the network using a dataset generated by joint simulations at both the yarn and the fiber levels. Through several examples, we demonstrate that our model is capable of synthesizing photorealistic cloth appearance in a mechanically plausible way.
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, and Changxi Zheng
LayerCode: Optical Barcodes for 3D Printed Shapes.
ACM Transactions on Graphics (SIGGRAPH 2019)

Paper (PDF) Project Page Abstract Video
With the advance of personal and customized fabrication techniques, the capability to embed information in physical objects becomes ever more crucial. We present LayerCode, a tagging scheme that embeds a carefully designed barcode pattern in 3D printed objects as a deliberate byproduct of the 3D printing process. The LayerCode concept is inspired by the structural resemblance between the parallel black and white bars of the standard barcode and the universal layer-by-layer approach of 3D printing. We introduce an encoding algorithm that enables the 3D printing layers to carry information without altering the object geometry. We also introduce a decoding algorithm that reads the LayerCode tag of a physical object by just taking a photo. The physical deployment of LayerCode tags is realized on various types of 3D printers, including Fused Deposition Modeling printers as well as Stereolithography based printers. Each offers its own advantages and tradeoffs. We show that LayerCode tags can work on complex, nontrivial shapes, on which all previous tagging mechanisms may fail. To evaluate LayerCode thoroughly, we further stress test it with a large dataset of complex shapes using virtual rendering. Among 4,835 tested shapes, we successfully encode and decode on more than 99% of the shapes.
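One hypothetical way to picture the encoding: each bit selects the thickness of a layer group whose color alternates with its neighbors, so the object's side view reads like a barcode. The sketch below round-trips a bit string through such thickness sequences; the thin/thick values are made-up parameters, and this is not the paper's encoding algorithm.

def encode_bits(bits, thin=0.6, thick=1.2):
    # Map bits to alternating-color layer groups; the bit value picks the
    # group thickness (thin/thick are hypothetical, in millimeters).
    return [(i % 2, thick if b else thin) for i, b in enumerate(bits)]

def decode_layers(layers, threshold=0.9):
    # Classify each measured group thickness back into a bit.
    return [1 if t > threshold else 0 for _, t in layers]

bits = [1, 0, 1, 1, 0, 0, 1]
layers = encode_bits(bits)
assert decode_layers(layers) == bits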
Chang Xiao, Karl Bayer, Changxi Zheng, and Shree K. Nayar
Vidgets: Modular Mechanical Widgets for Mobile Devices.
ACM Transactions on Graphics (SIGGRAPH 2019)

Paper (PDF) Project Page Abstract Video
We present Vidgets, a family of mechanical widgets, specifically push buttons and rotary knobs, that augment mobile devices with tangible user interfaces. When these widgets are attached to a mobile device and a user interacts with them, the widgets' nonlinear mechanical response shifts the device slightly and quickly, and this subtle motion can be detected by the accelerometer commonly built into mobile devices. We propose a physics-based model to understand the nonlinear mechanical response of widgets. This understanding enables us to design tactile force profiles of these widgets so that the resulting accelerometer signals become easy to recognize. We then develop a lightweight signal processing algorithm that analyzes the accelerometer signals and recognizes how the user interacts with the widgets in real time. Vidgets are low-cost, compact, reconfigurable, and power efficient. They can form a diverse set of physical interfaces that enrich users' interactions with mobile devices in various practical scenarios.
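A greatly simplified stand-in for the recognition step: high-pass filter the accelerometer magnitude and threshold for impulsive events. The filter constant, threshold, and refractory period below are illustrative; the paper's classifier is more sophisticated.

import numpy as np

def detect_clicks(accel, fs=400.0, alpha=0.95, threshold=0.5):
    # accel: (n, 3) accelerometer samples. Returns sample indices of
    # impulsive events via a one-pole high-pass filter plus a threshold.
    mag = np.linalg.norm(accel, axis=1)
    hp = np.empty_like(mag)
    prev_x, prev_y = mag[0], 0.0
    for i, x in enumerate(mag):
        prev_y = alpha * (prev_y + x - prev_x)   # one-pole high-pass
        prev_x = x
        hp[i] = prev_y
    idx = np.flatnonzero(np.abs(hp) > threshold)
    # Keep only the first sample of each burst (40 ms refractory period).
    keep, last = [], -int(0.04 * fs)
    for i in idx:
        if i - last >= int(0.04 * fs):
            keep.append(i)
        last = i
    return keep

trace = np.zeros((1000, 3))
trace[300] = [0, 0, 2.0]; trace[700] = [0, 0, 1.5]   # two synthetic clicks
print(detect_clicks(trace))                          # ~[300, 700]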
Yichen (Peter) Chen, Jonathan Blutinger, Yorán Meijers, Changxi Zheng, Eitan Grinspun, and Hod Lipson
Visual Modeling of Laser-induced Dough Browning.
Journal of Food Engineering, 2018

Paper Abstract
A data-driven model that predictively generates photorealistic RGB images of dough surface browning is proposed. This model was validated in a practical application using a CO2 laser dough browning pipeline, thus confirming that it can be employed to characterize the visual appearance of browned samples, such as surface color and patterns. A supervised deep generative network takes laser speed, laser energy flux, and dough moisture as input and outputs an image (of 64x64 pixel size) of laser-browned dough. Image generation is achieved by nonlinearly interpolating high-dimensional training data. The proposed prediction framework contributes to the development of computer-aided design (CAD) software for food processing techniques by creating more accurate photorealistic models.
Ye Yuan, Changxi Zheng, and Stelian Coros
Computational Design of Transformables.
ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), 2018

Paper (PDF) Abstract Video
We present a computational approach to designing transformables, physical characters that can shape-shift to take on vastly different forms. The design process begins with a morphological description of an input character and a target object that it should transform into. Guided by a set of objectives that model the core attributes of desirable transformable designs, optimized embeddings are interactively generated. Intuitively, embeddings represent tightly folded character configurations that fit within the target object. From any feasible embedding, skin meshes are then generated for each body part of the character. The process for generating these 3D models is based on a segmentation of the target object, which is achieved through a growth-based model applied to a multiple level set representation of the transformable. A set of transformation-aware post-processing algorithms ensure the feasibility of the final designs. Building on this technical core, our computational design system provides many opportunities for users to inject their intuition and personal preferences into the process of creating transformables, while shielding them from tasks that are challenging and tedious. As a result, they can intuitively explore the vast space of design possibilities. We demonstrate the effectiveness of our computational approach by creating a variety of transformable designs, three of which we fabricated.
Dingzeyu Li, Timothy Langlois, and Changxi Zheng
Scene-Aware Audio for 360° Videos.
ACM Transactions on Graphics (SIGGRAPH 2018)

Paper (PDF) Project Page Abstract Video Bibtex
Although 360° cameras ease the capture of panoramic footage, it remains challenging to add realistic 360° audio that blends into the captured scene and is synchronized with the camera motion. We present a method for adding scene-aware spatial audio to 360° videos in typical indoor scenes, using only a conventional mono-channel microphone and a speaker. We observe that the late reverberation of a room's impulse response is usually diffuse spatially and directionally. Exploiting this fact, we propose a method that synthesizes the directional impulse response between any source and listening locations by combining a synthesized early reverberation part and a measured late reverberation tail. The early reverberation is simulated using a geometric acoustic simulation and then enhanced using a frequency modulation method to capture room resonances. The late reverberation is extracted from a recorded impulse response, with a carefully chosen time duration that separates out the late reverberation from the early reverberation. In our validations, we show that our synthesized spatial audio matches closely with recordings using ambisonic microphones. Lastly, we demonstrate the strength of our method in several applications.
@article{Li2018360audio,
  title={Scene-Aware Audio for 360\textdegree{} Videos},
  author={Li, Dingzeyu and Langlois, Timothy R. and Zheng, Changxi},
  journal={ACM Trans. Graph.},
  volume={37},
  number={4},
  year={2018},
  publisher = {ACM},
  address = {New York, NY, USA},
}
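The early/late split described in the abstract can be mimicked directly on a sampled impulse response: keep the measured tail beyond a chosen mixing time and crossfade it onto a synthesized early part. A sketch with synthetic stand-in signals; the 80 ms split and 10 ms crossfade are illustrative choices, not the paper's carefully chosen separation.

import numpy as np

fs = 16000
t = np.arange(fs) / fs

# Stand-ins: a synthesized early reflection pattern and a measured IR.
early = np.zeros(fs); early[[0, 400, 900]] = [1.0, 0.5, 0.3]
measured = np.random.randn(fs) * np.exp(-3 * t)   # decaying-noise "recording"

split = int(0.08 * fs)                       # 80 ms mixing time (illustrative)
fade = np.linspace(0, 1, int(0.01 * fs))     # 10 ms crossfade

ir = early.copy()
ir[split:] = measured[split:]                # measured late reverberation tail
ir[split:split + len(fade)] = (1 - fade) * early[split:split + len(fade)] \
                            + fade * measured[split:split + len(fade)]

dry = np.random.randn(fs // 4)               # stand-in dry recording
wet = np.convolve(dry, ir)                   # auralize with the combined IR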
Yun (Raymond) Fei, Christopher Batty, Eitan Grinspun, and Changxi Zheng
A Multi-Scale Model for Simulating Liquid-Fabric Interactions.
ACM Transactions on Graphics (SIGGRAPH 2018)

Paper (PDF) Project Page Abstract Video Bibtex
We propose a method for simulating the complex dynamics of partially and fully saturated woven and knit fabrics interacting with liquid, including the effects of buoyancy, nonlinear drag, pore (capillary) pressure, dripping, and convection-diffusion. Our model evolves the velocity fields of both the liquid and solid relying on mixture theory, as well as tracking a scalar saturation variable that affects the pore pressure forces in the fluid. We consider the porous microstructure implied by the fibers composing individual threads, and use it to derive homogenized drag and pore pressure models that faithfully reflect the anisotropy of fabrics. In addition to the bulk liquid and fabric motion, we derive a quasi-static flow model that accounts for liquid spreading within the fabric itself. Our implementation significantly extends standard numerical cloth and fluid models to support the diverse behaviors of wet fabric, and includes a numerical method tailored to cope with the challenging nonlinearities of the problem. We explore a range of fabric-water interactions to validate our model, including challenging animation scenarios involving splashing, wringing, and collisions with obstacles, along with qualitative comparisons against simple physical experiments.
@article{Fei2018MMS,
 author = {Fei, Yun (Raymond) and Batty, Christopher and Grinspun, Eitan and Zheng, Changxi},
 title = {A Multi-scale Model for Simulating Liquid-fabric Interactions},
 journal = {ACM Trans. Graph.},
 issue_date = {Aug 2018},
 volume = {37},
 number = {4},
 month = aug,
 year = {2018},
 pages = {51:1--51:16},
 articleno = {51},
 numpages = {16},
 publisher = {ACM},
 address = {New York, NY, USA},
}
Gabriel Cirio, Ante Qu, George Drettakis, Eitan Grinspun, and Changxi Zheng
Multi-Scale Simulation of Nonlinear Thin-Shell Sound with Wave Turbulence.
ACM Transactions on Graphics (SIGGRAPH 2018)

Paper (PDF) Project Page Abstract Video Bibtex
Thin shells -- solids that are thin in one dimension compared to the other two -- often emit rich nonlinear sounds when struck. Strong excitations can even cause chaotic thin-shell vibrations, producing sounds whose energy spectrum diffuses from low to high frequencies over time -- a phenomenon known as wave turbulence. It is all these nonlinearities that grant shells such as cymbals and gongs their characteristic "glinting" sound. Yet, simulation models that efficiently capture these sound effects remain elusive.

We propose a physically based, multi-scale reduced simulation method to synthesize nonlinear thin-shell sounds. We first split nonlinear vibrations into two scales, with a small low-frequency part simulated in a fully nonlinear way, and a high-frequency part containing many more modes approximated through time-varying linearization. This allows us to capture interesting nonlinearities in the shells' deformation, tens of times faster than previous approaches. Furthermore, we propose a method that enriches simulated sounds with wave turbulent sound details through a phenomenological diffusion model in the frequency domain, and thereby sidestep the expensive simulation of chaotic high-frequency dynamics. We show several examples of our simulations, illustrating the efficiency and realism of our model.
@article{Cirio2018MSN,
 author = {Cirio, Gabriel and Qu, Ante and Drettakis, George and Grinspun, Eitan and Zheng, Changxi},
 title = {Multi-scale Simulation of Nonlinear Thin-shell Sound with Wave Turbulence},
 journal = {ACM Trans. Graph.},
 volume = {37},
 number = {4},
 month = jul,
 year = {2018},
 pages = {110:1--110:14},
 articleno = {110},
 numpages = {14},
 url = {http://www.cs.columbia.edu/cg/waveturb/},
 address = {New York, NY, USA},
}
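The phenomenological enrichment can be pictured as diffusion of spectral energy from low to high frequencies. A toy explicit 1D diffusion over an energy spectrum, with illustrative (uncalibrated) diffusivity, bin count, and step count:

import numpy as np

def diffuse_spectrum(E, D=0.2, steps=200):
    # Explicit diffusion of energy E over frequency bins (stable for D <= 0.5).
    E = E.copy()
    for _ in range(steps):
        lap = np.roll(E, 1) - 2 * E + np.roll(E, -1)
        lap[0] = lap[-1] = 0.0        # keep the spectrum boundaries fixed
        E += D * lap
    return E

E0 = np.zeros(128); E0[:8] = 1.0      # energy injected at low frequencies
print(diffuse_spectrum(E0)[:24].round(3))   # energy spreads upward in frequency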
Chang Xiao, Cheng Zhang, and Changxi Zheng
FontCode: Embedding Information in Text Documents using Glyph Perturbation.
ACM Transactions on Graphics, 2018 (Presented at SIGGRAPH 2018)

Paper (PDF) Project Page Abstract Video Bibtex
We introduce FontCode, an information embedding technique for text documents. Provided a text document with specific fonts, our method embeds user-specified information in the text by perturbing the glyphs of text characters while preserving the text content. We devise an algorithm to choose unobtrusive yet machine-recognizable glyph perturbations, leveraging a recently developed generative model that alters the glyphs of each character continuously on a font manifold. We then introduce an algorithm that embeds a user-provided message in the text document and produces an encoded document whose appearance is minimally perturbed from the original document. We also present a glyph recognition method that recovers the embedded information from an encoded document stored as a vector graphic or pixel image, or even on a printed paper. In addition, we introduce a new error-correction coding scheme that rectifies a certain number of recognition errors. Lastly, we demonstrate that our technique enables a wide array of applications, using it as a text document metadata holder, an unobtrusive optical barcode, a cryptographic message embedding scheme, and a text document signature.
@article{Xiao2018FEI,
 author = {Xiao, Chang and Zhang, Cheng and Zheng, Changxi},
 title = {FontCode: Embedding Information in Text Documents Using Glyph Perturbation},
 journal = {ACM Trans. Graph.},
 issue_date = {May 2018},
 volume = {37},
 number = {2},
 month = feb,
 year = {2018},
 pages = {15:1--15:16},
 articleno = {15},
 numpages = {16},
 doi = {10.1145/3152823},
}
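Stripped of error correction, embedding an integer message across glyphs is a change of radix: character i offers some number r_i of distinguishable glyph variants, and the message's mixed-radix digits select one variant per character. A round-trip sketch with hypothetical variant counts:

def message_to_glyph_indices(m, radices):
    # Write integer m in mixed radix; digit i selects a glyph
    # variant for character i (radices[i] variants available).
    digits = []
    for r in radices:
        m, d = divmod(m, r)
        digits.append(d)
    assert m == 0, "message too large for this character sequence"
    return digits

def glyph_indices_to_message(digits, radices):
    m = 0
    for d, r in zip(reversed(digits), reversed(radices)):
        m = m * r + d
    return m

radices = [5, 3, 6, 4, 5]            # hypothetical variants per character
m = 731
digits = message_to_glyph_indices(m, radices)
assert glyph_indices_to_message(digits, radices) == m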
Yun (Raymond) Fei, Henrique Teles Maia, Christopher Batty, Changxi Zheng, and Eitan Grinspun
A Multi-Scale Model for Simulating Liquid-Hair Interactions.
ACM Transactions on Graphics (SIGGRAPH 2017), 36(4)

Paper (PDF) Project Page Abstract Video Bibtex
The diverse interactions between hair and liquid are complex and span multiple length scales, yet are central to the appearance of humans and animals in many situations. We therefore propose a novel multi-component simulation framework that treats many of the key physical mechanisms governing the dynamics of wet hair. The foundations of our approach are a discrete rod model for hair and a particle-in-cell model for fluids. To treat the thin layer of liquid that clings to the hair, we augment each hair strand with a height field representation. Our contribution is to develop the necessary physical and numerical models to evolve this new system and the interactions among its components. We develop a new reduced-dimensional liquid model to solve the motion of the liquid along the length of each hair, while accounting for its moving reference frame and influence on the hair dynamics. We derive a faithful model for surface tension-induced cohesion effects between adjacent hairs, based on the geometry of the liquid bridges that connect them. We adopt an empirically-validated drag model to treat the effects of coarse-scale interactions between hair and surrounding fluid, and propose new volume-conserving dripping and absorption strategies to transfer liquid between the reduced and particle-in-cell liquid representations. The synthesis of these techniques yields an effective wet hair simulator, which we use to animate hair flipping, an animal shaking itself dry, a spinning car wash roller brush dunked in liquid, and intricate hair coalescence effects, among several additional scenarios.
@article{Fei:2017:liquidhair,
    title={A Multi-Scale Model for Simulating Liquid-Hair Interactions},
    author={Fei, Yun (Raymond) and Maia, Henrique Teles and Batty, Christopher and Zheng, Changxi 
and Grinspun, Eitan},
    journal={ACM Trans. Graph.},
    volume={36},
    number={4},
    year={2017},
}
Adriana Schulz, Jie Xu, Bo Zhu, Changxi Zheng, Eitan Grinspun, and Wojciech Matusik
Interactive Design Space Exploration and Optimization for CAD Models.
ACM Transactions on Graphics (SIGGRAPH 2017), 36(4)

Paper (PDF) Project Page Abstract Video Bibtex
Computer Aided Design (CAD) is a multi-billion dollar industry used by almost every mechanical engineer in the world to create practically every existing manufactured shape. CAD models are not only widely available but also extremely useful in the growing field of fabrication-oriented design because they are parametric by construction and capture the engineer's design intent, including manufacturability. Harnessing this data, however, is challenging, because generating the geometry for a given parameter value requires time-consuming computations. Furthermore, the resulting meshes have different combinatorics, making the mesh data inherently discontinuous with respect to parameter adjustments. In our work, we address these challenges and develop tools that allow interactive exploration and optimization of parametric CAD data. To achieve interactive rates, we use precomputation on an adaptively sampled grid and propose a novel scheme for interpolating in this domain where each sample is a mesh with different combinatorics. Specifically, we extract partial correspondences from CAD representations for local mesh morphing and propose a novel interpolation method for adaptive grids that is both continuous/smooth and local (i.e., the influence of each sample is constrained to the local regions where mesh morphing can be computed). We show examples of how our method can be used to interactively visualize and optimize objects with a variety of physical properties.
@article{Schulz:2017, 
  author = {Schulz, Adriana and Xu, Jie and Zhu, Bo and Zheng, Changxi 
            and Grinspun, Eitan and Matusik, Wojciech}, 
  title	= {Interactive Design Space Exploration and Optimization for CAD Models}, 
  journal = {ACM Transactions on Graphics}, 
  year = {2017}, 
  month = jul, 
  volume = {36}, 
  number = {4}, 
}
Shuang Zhao, Frédo Durand, and Changxi Zheng
Inverse Diffusion Curves using Shape Optimization.
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2017

Paper (PDF) Abstract Video Bibtex
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
@article{Zhao:2017:IDC,
  title={Inverse Diffusion Curves using Shape Optimization},
  author={Shuang Zhao and Fredo Durand and Changxi Zheng},
  journal={IEEE Transactions on Visualization and Computer Graphics (TVCG)},
  volume={PP},
  number={99},
  year={2017},
}
Xiang Chen, Changxi Zheng, and Kun Zhou
Example-Based Subspace Stress Analysis for Interactive Shape Design.
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2016

Paper (PDF) Abstract Video Bibtex
Stress analysis is a crucial tool for designing structurally sound shapes. However, the expensive computational cost has hampered its use in interactive shape editing tasks. We augment the existing example-based shape editing tools, and propose a fast subspace stress analysis method to enable stress-aware shape editing. In particular, we construct a reduced stress basis from a small set of shape exemplars and possible external forces. This stress basis is automatically adapted to the current user edited shape on the fly, and thereby offers reliable stress estimation. We then introduce a new finite element discretization scheme to use the reduced basis for fast stress analysis. Our method runs up to two orders of magnitude faster than the full-space finite element analysis, with average L2 estimation errors less than 2% and maximum L2 errors less than 6%. Furthermore, we build an interactive stress-aware shape editing tool to demonstrate its performance in practice.
@article{Chen:2016:stress,
  title={Example-Based Subspace Stress Analysis for Interactive Shape Design},
  author={Xiang Chen and Changxi Zheng and Kun Zhou},
  journal={IEEE Transactions on Visualization and Computer Graphics (TVCG)},
  volume={PP},
  number={99},
  year={2016},
}
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, and Changxi Zheng
Crumpling Sound Synthesis.
ACM Transactions on Graphics (SIGGRAPH Asia 2016), 35(6)

Paper (PDF) Project Page Abstract Video Bibtex
Crumpling a thin sheet produces a characteristic sound, comprised of distinct clicking sounds corresponding to buckling events. We propose a physically based algorithm that automatically synthesizes crumpling sounds for a given thin shell animation. The resulting sound is a superposition of individually synthesized clicking sounds corresponding to visually-significant and -insignificant buckling events. We identify visually significant buckling events on the dynamically evolving thin surface mesh, and instantiate visually insignificant buckling events via a stochastic model that seeks to mimic the power-law distribution of buckling energies observed in many materials.

In either case, the synthesis of a buckling sound employs linear modal analysis of the deformed thin shell. Because different buckling events in general occur at different deformed configurations, the question arises whether the calculation of linear modes can be reused. We amortize the cost of the linear modal analysis by dynamically partitioning the mesh into nearly rigid pieces: the modal analysis of a rigidly moving piece is retained over time, and the modal analysis of the assembly is obtained via Component Mode Synthesis (CMS). We illustrate our approach through a series of examples and a perceptual user study, demonstrating the utility of the sound synthesis method in producing realistic sounds at practical computation times.
@article{Cirio:2016:crumpling_sound_synthesis,
  title={Crumpling Sound Synthesis},
  author={Cirio, Gabriel and Li, Dingzeyu and Grinspun, Eitan and Otaduy, Miguel A. 
and Zheng, Changxi},
  journal={ACM Trans. Graph.},
  volume={35},
  number={6},
  year={2016},
  url = {http://www.cs.columbia.edu/cg/crumpling/}
}
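The stochastic model can be sketched directly: draw buckling energies from a power-law distribution by inverse-transform sampling and superpose one exponentially decaying sinusoid per event. The exponent, frequency range, and decay rate below are illustrative, not the paper's measured material parameters.

import numpy as np

rng = np.random.default_rng(0)
fs, dur = 44100, 1.0
audio = np.zeros(int(fs * dur))

def sample_power_law(alpha=1.6, e_min=1e-3, e_max=1.0):
    # Inverse-transform sample of p(E) ~ E^-alpha on [e_min, e_max].
    u = rng.random()
    a = 1.0 - alpha
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

for _ in range(60):                           # 60 buckling events
    t0 = rng.random() * (dur - 0.1)
    E = sample_power_law()
    f = rng.uniform(800, 6000)                # click frequency (Hz)
    n = int(0.05 * fs)                        # 50 ms click
    t = np.arange(n) / fs
    click = np.sqrt(E) * np.sin(2 * np.pi * f * t) * np.exp(-60 * t)
    i = int(t0 * fs)
    audio[i:i + n] += click                   # superpose into the output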
Tianjia Shao, Dongping Li, Yuliang Rong, Changxi Zheng, and Kun Zhou
Dynamic Furniture Modeling Through Assembly Instructions.
ACM Transactions on Graphics (SIGGRAPH Asia 2016), 35(6)

Paper (PDF) Abstract Video Bibtex
We present a technique for parsing widely used furniture assembly instructions, and reconstructing the 3D models of furniture components and their dynamic assembly process. Our technique takes as input a multi-step assembly instruction in a vector graphic format and starts by grouping the vector graphic primitives into semantic elements representing individual furniture parts, mechanical connectors (e.g., screws, bolts and hinges), arrows, visual highlights, and numbers. To reconstruct the dynamic assembly process depicted over multiple steps, our system identifies previously built 3D furniture components when parsing a new step, and uses them to address the challenge of occlusions while generating new 3D components incrementally. With a wide range of examples covering a variety of furniture types, we demonstrate the use of our system to animate the 3D furniture assembly process and, beyond that, the semantic-aware furniture editing as well as the fabrication of personalized furniture.
@article{Shao:2016:furnitures,
  title={Dynamic Furniture Modeling Through Assembly Instructions},
  author={Tianjia Shao and Dongping Li and Yuliang Rong and Changxi Zheng and Kun Zhou},
  journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2016)},
  volume={35},
  number={6},
  year={2016}
}
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, and Changxi Zheng
Acoustic Voxels: Computational Optimization of Modular Acoustic Filters.
ACM Transactions on Graphics (SIGGRAPH 2016), 35(4)

Paper (PDF) Project Page Abstract Video Bibtex
Acoustic filters have a wide range of applications, yet customizing them with desired properties is difficult. Motivated by recent progress in additive manufacturing that allows for fast prototyping of complex shapes, we present a computational approach that automates the design of acoustic filters with complex geometries. In our approach, we construct an acoustic filter comprised of a set of parameterized shape primitives, whose transmission matrices can be precomputed. Using an efficient method of simulating the transmission matrix of an assembly built from these underlying primitives, our method is able to optimize both the arrangement and the parameters of the acoustic shape primitives in order to satisfy target acoustic properties of the filter. We validate our results against industrial laboratory measurements and high-quality off-line simulations. We demonstrate that our method enables a wide range of applications including muffler design, musical wind instrument prototyping, and encoding imperceptible acoustic information into everyday objects.
@article{Li:2016:acoustic_voxels,
  title={Acoustic Voxels: Computational Optimization of Modular Acoustic Filters},
  author={Li, Dingzeyu and Levin, David I.W. and Matusik, Wojciech and Zheng, Changxi},
  journal = {ACM Transactions on Graphics (SIGGRAPH 2016)},
  volume={35},
  number={4},
  year={2016},
  url = {http://www.cs.columbia.edu/cg/lego/}
}
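Composing primitives in series reduces, under plane-wave assumptions, to multiplying their 2x2 four-pole (transmission) matrices in order. A sketch using uniform duct segments as the primitives; the geometry and frequency are illustrative, and the paper's primitives and coupling are richer than straight ducts.

import numpy as np

def duct_segment(freq, length, area, c=343.0, rho=1.2):
    # Plane-wave four-pole (transmission) matrix of a uniform duct.
    k = 2 * np.pi * freq / c
    z0 = rho * c / area                       # characteristic impedance
    return np.array([[np.cos(k * length),        1j * z0 * np.sin(k * length)],
                     [1j * np.sin(k * length) / z0,  np.cos(k * length)]])

def assemble(freq, segments):
    # Chain primitives in series: multiply their matrices in order.
    T = np.eye(2, dtype=complex)
    for length, area in segments:
        T = T @ duct_segment(freq, length, area)
    return T

# An expansion-chamber-like assembly: narrow / wide / narrow (meters, m^2).
T = assemble(1000.0, [(0.05, 1e-4), (0.10, 8e-4), (0.05, 1e-4)])
print(np.round(T, 3))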
Timothy R. Langlois, Changxi Zheng, and Doug L. James
Toward Animating Water with Complex Acoustic Bubbles.
ACM Transactions on Graphics (SIGGRAPH 2016), 35(4)

Paper (PDF) Project Page Abstract Video Bibtex
This paper explores methods for synthesizing physics-based bubble sounds directly from two-phase incompressible simulations of bubbly water flows. By tracking fluid-air interface geometry, we identify bubble geometry and topological changes due to splitting, merging and popping. A novel capacitance-based method is proposed that can estimate volume-mode bubble frequency changes due to bubble size, shape, and proximity to solid and air interfaces. Our acoustic transfer model is able to capture cavity resonance effects due to near-field geometry, and we also propose a fast precomputed bubble-plane model for cheap transfer evaluation. In addition, we consider a bubble forcing model that better accounts for bubble entrainment, splitting, and merging events, as well as a Helmholtz resonator model for bubble popping sounds. To overcome frequency bandwidth limitations associated with coarse resolution fluid grids, we simulate micro-bubbles in the audio domain using a power-law model of bubble populations. Finally, we present several detailed examples of audiovisual water simulations and physical experiments to validate our frequency model.
@article{Langlois:2016:Bubbles,
  author = {Timothy R. Langlois and Changxi Zheng and Doug L. James},
  title = {Toward Animating Water with Complex Acoustic Bubbles},
  journal = {ACM Transactions on Graphics (SIGGRAPH 2016)},
  year = {2016},
  volume = {35},
  number = {4},
  month = Jul,
  url = {http://www.cs.cornell.edu/projects/Sound/bubbles}
}
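The classical baseline for volume-mode bubble frequency is the Minnaert resonance, f = (1/(2 pi R)) * sqrt(3 gamma p0 / rho), which the paper's capacitance-based method then corrects for bubble shape and proximity to interfaces. The formula itself, evaluated for a few air bubbles in water:

import numpy as np

def minnaert_frequency(radius, p0=101325.0, rho=1000.0, gamma=1.4):
    # Resonant frequency (Hz) of a spherical air bubble in water.
    return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius)

for r_mm in (0.5, 1.0, 2.0, 4.0):
    print(f"R = {r_mm} mm  ->  {minnaert_frequency(r_mm * 1e-3):.0f} Hz")
# A 1 mm bubble rings near 3.3 kHz; smaller bubbles ring higher.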
Changxi Zheng, Timothy Sun and Xiang Chen
Deployable 3D Linkages with Collision Avoidance.
ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), July, 2016 (Best Paper Award)

Paper (PDF) Project Page Abstract Video Bibtex
We present a pipeline that allows ordinary users to create deployable scissor linkages in arbitrary 3D shapes, whose mechanisms are inspired by Hoberman's Sphere. From an arbitrary 3D model and a few user inputs, our method can generate a fabricable scissor linkage resembling that shape that aims to save as much space as possible in its most contracted state. Self-collisions are the primary obstacle in this goal, and these are not addressed in prior work. One key component of our algorithm is a succinct parameterization of these types of linkages. The fast continuous collision detection that arises from this parameterization serves as the foundation for the discontinuous optimization procedure that automatically improves joint placement for avoiding collisions. While linkages are usually composed of straight bars, we consider curved bars as a means of improving the contractibility. To that end, we describe a continuous optimization algorithm for locally deforming the bars.
@inproceedings{Zheng16:Deployable,
    author = {Changxi Zheng and Timothy Sun and Xiang Chen},
    title  = {Deployable 3D Linkages with Collision Avoidance}, 
    booktitle = {Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation},
    series = {SCA '16},
    year = {2016},
    month  = Jul,
    url = {http://www.cs.columbia.edu/cg/deployable}
}
Menglei Chai, Changxi Zheng, and Kun Zhou
Adaptive Skinning for Interactive Hair-Solid Simulation.
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2016

Paper (PDF) Abstract Video Bibtex
Reduced hair models have proven successful for interactively simulating a full head of hair strands, building upon a fundamental assumption that only a small set of guide hairs are needed for explicit simulation, and the rest of the hair moves coherently and thus can be interpolated using guide hairs. Unfortunately, hair-solid interaction is a pathological case for traditional reduced hair models, as the motion coherence between hair strands can be arbitrarily broken by interacting with solids.

In this paper, we propose an adaptive hair skinning method for interactive hair simulation with hair-solid collisions. We precompute many eligible sets of guide hairs and the corresponding interpolation relationships that are represented using a compact strand-based hair skinning model. At runtime, we simulate only guide hairs; for interpolating every other hair, we adaptively choose its guide hairs, taking into account motion coherence and potential hair-solid collisions. Further, we introduce a two-way collision correction algorithm to allow sparsely sampled guide hairs to resolve collisions with solids that can have small geometric features. Our method enables interactive simulation of more than 150K hair strands interacting with complex solid objects, using 400 guide hairs. We demonstrate the efficiency and robustness of the method with various hairstyles and user-controlled arbitrary hair-solid interactions.
@article{chai2016adaptive,
  title={Adaptive Skinning for Interactive Hair-Solid Simulation},
  author={Chai, Menglei and Zheng, Changxi and Zhou, Kun},
  year={2016},
  journal={IEEE Transactions on Visualization and Computer Graphics (TVCG)}
}
Gaurav Bharaj, David Levin, James Tompkin, Yun Fei, Hanspeter Pfister, Wojciech Matusik and Changxi Zheng
Computational Design of Metallophone Contact Sounds.
ACM Transactions on Graphics (SIGGRAPH Asia 2015), 34(6)

Paper (PDF) Project Page Abstract Video Bibtex
Metallophones such as glockenspiels produce sounds in response to contact. Building these instruments is a complicated process, limiting their shapes to well-understood designs such as bars. We automatically optimize the shape of arbitrary 2D and 3D objects through deformation and perforation to produce sounds when struck which match user-supplied frequency and amplitude spectra. This optimization requires navigating a complex energy landscape, for which we develop Latin Complement Sampling to both speed up finding minima and provide probabilistic bounds on landscape exploration. Our method produces instruments which perform similarly to those that have been professionally-manufactured, while also expanding the scope of shape and sound that can be realized, e.g., single object chords. Furthermore, we can optimize sound spectra to create overtones and to dampen specific frequencies. Thus our technique allows even novices to design metallophones with unique sound and appearance.
@article{Bharaj:2015:CDM,
  author = {Bharaj, Gaurav and Levin, David I. W. and Tompkin, James and Fei, Yun and 
            Pfister, Hanspeter and Matusik, Wojciech and Zheng, Changxi},
  title = {Computational Design of Metallophone Contact Sounds},
  journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2015)},
  volume = {34},
  number = {6},
  year = {2015},
  pages = {223:1--223:13},
  publisher = {ACM},
  address = {New York, NY, USA},
}
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian and Changxi Zheng,
Expediting Precomputation for Reduced Deformable Simulation.
ACM Transactions on Graphics (SIGGRAPH Asia 2015), 34(6)

Paper (PDF) Project Page Abstract Video Bibtex
Model reduction has popularized itself for simulating elastic deformation in graphics applications. While these techniques enjoy orders-of-magnitude speedups at runtime, the efficiency of precomputing reduced subspaces remains largely overlooked. We present a complete precomputation pipeline as a faster alternative to the classic linear and nonlinear modal analysis. We identify three bottlenecks in traditional model reduction precomputation, namely modal matrix construction, cubature training, and training dataset generation, and accelerate each of them. Even with complex deformable models, our method achieves orders-of-magnitude speedups over the traditional precomputation steps, while retaining comparable runtime simulation quality.
@article{Yang:2015:EPR,
  author = {Yang, Yin and Li, Dingzeyu and Xu, Weiwei and Tian, Yuan and Zheng, Changxi},
  title = {Expediting Precomputation for Reduced Deformable Simulation},
  journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2015)},
  volume = {34},
  number = {6},
  month = oct,
  year = {2015},
  pages = {243:1--243:13},
  publisher = {ACM},
  address = {New York, NY, USA},
}
Dingzeyu Li, Yun Fei and Changxi Zheng
Interactive Acoustic Transfer Approximation for Modal Sound.
ACM Transactions on Graphics, 35(1), 2015 (presented at SIGGRAPH 2016)

Paper (PDF) Project Page Abstract Video Bibtex
Current linear modal sound models are tightly coupled with their frequency content. Both the modal vibration of object surfaces and the resulting sound radiation depend on the vibration frequency. Whenever the user tweaks modal parameters to adjust frequencies, the modal sound model changes completely, necessitating expensive recomputation of modal vibration and sound radiation.

We propose a new method for interactive and continuous editing as well as exploration of modal sound parameters. We start by sampling a number of key points around a vibrating object, and then devise a compact, low-memory representation of frequency-varying acoustic transfer values at each key point using Prony series. We efficiently precompute these series using an adaptive frequency sweeping algorithm and volume-velocity-preserving mesh simplification. At runtime, we approximate acoustic transfer values using standard multipole expansions. Given user-specified modal frequencies, we solve a small least-squares system to estimate the expansion coefficients, and thereby quickly compute the resulting sound pressure value at arbitrary listening locations. We demonstrate the numerical accuracy, the runtime performance of our method on a set of comparisons and examples, and evaluate sound quality with user perception studies.
@article{Li:2015:IAT,
  author = {Li, Dingzeyu and Fei, Yun and Zheng, Changxi},
  title = {Interactive Acoustic Transfer Approximation for Modal Sound},
  journal = {ACM Trans. Graph.},
  volume = {35},
  number = {1},
  month = dec,
  year = {2015},
  pages = {2:1--2:16},
  articleno = {2},
  numpages = {16},
  publisher = {ACM},
  address = {New York, NY, USA},
}
Timothy Sun and Changxi Zheng
Computational Design of Twisty Joints and Puzzles.
ACM Transactions on Graphics (SIGGRAPH 2015), 34(4)

Paper (PDF) Project Page Abstract Video Bibtex
We present the first computational method that allows ordinary users to create complex twisty joints and puzzles inspired by the Rubik's Cube mechanism. Given a user-supplied 3D model and a small subset of rotation axes, our method automatically adjusts those rotation axes and adds others to construct a "non-blocking" twisty joint in the shape of the 3D model. Our method outputs the shapes of pieces which can be directly 3D printed and assembled into an interlocking puzzle. We develop a group-theoretic approach to representing a wide class of twisty puzzles by establishing a connection between non-blocking twisty joints and the finite subgroups of the rotation group SO(3). The theoretical foundation enables us to build an efficient system for automatically completing the set of rotation axes and fast collision detection between pieces. We also generalize the Rubik's Cube mechanism to a large family of twisty puzzles.
@article{Sun15:TP,
    author = {Timothy Sun and Changxi Zheng},
    title  = {Computational Design of Twisty Joints and Puzzles},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015)},
    year = {2015},
    volume = {34},
    number = {4},
    month  = Aug,
    url = {http://www.cs.columbia.edu/cg/twisty}
}
Yizhong Zhang, Chunji Yin, Changxi Zheng, and Kun Zhou
Computational Hydrographic Printing.
ACM Transactions on Graphics (SIGGRAPH 2015), 34(4)

Paper (PDF) Project Page Abstract Video Bibtex
Hydrographic printing is a well-known technique in industry for transferring color inks on a thin film to the surface of a manufactured 3D object. It enables high-quality coloring of object surfaces and works with a wide range of materials, but suffers from the inability to accurately register color texture to complex surface geometries. Thus, it is hardly usable by ordinary users with customized shapes and textures.

We present computational hydrographic printing, a new method that inherits the versatility of traditional hydrographic printing, while also enabling precise alignment of surface textures to possibly complex 3D surfaces. In particular, we propose the first computational model for simulating the hydrographic printing process. This simulation enables us to compute a color image to feed into our hydrographic system for precise texture registration. We then build a physical hydrographic system upon off-the-shelf hardware, integrating virtual simulation, object calibration and controlled immersion. To overcome the difficulty of handling complex surfaces, we further extend our method to enable multiple immersions, each with a different object orientation, so the combined colors of individual immersions form a desired texture on the object surface. We validate the accuracy of our computational model through physical experiments, and demonstrate the efficacy and robustness of our system using a variety of objects with complex surface textures.
@article{Zhang15:CHP,
    author = {Yizhong Zhang and Chunji Yin and Changxi Zheng and Kun Zhou},
    title  = {Computational Hydrographic Printing},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015)},
    year = {2015},
    volume = {34},
    number = {4},
    month  = Aug,
    url = {http://www.cs.columbia.edu/cg/hydrographics}
}
Yonghao Yue, Breannan Smith, Christopher Batty, Changxi Zheng, and Eitan Grinspun
Continuum Foam: A Material Point Method for Shear-Dependent Flows.
ACM Transactions on Graphics, 2015 (presented at SIGGRAPH Asia 2015)

Paper (PDF) Project Page Abstract Video Bibtex
We consider the simulation of dense foams composed of microscopic bubbles, such as shaving cream and whipped cream. We represent foam not as a collection of discrete bubbles, but instead as a continuum. We employ the Material Point Method (MPM) to discretize a hyperelastic constitutive relation augmented with the Herschel-Bulkley model of non-Newtonian plastic flow, which is known to closely approximate foam behavior. Since large shearing flows in foam can produce poor distributions of material points, a typical MPM implementation can produce non-physical internal holes in the continuum. To address these artifacts, we introduce a particle resampling method for MPM. In addition, we introduce an explicit tearing model to prevent regions from shearing into artificially-thin, honey-like threads. We evaluate our method's efficacy by simulating a number of dense foams, and we validate our method by comparing to real-world footage of foam.
@article{foam15,
    author = {Yonghao Yue and Breannan Smith and Christopher Batty and 
              Changxi Zheng and Eitan Grinspun},
    title = {Continuum Foam: A Material Point Method for Shear-Dependent Flows},
    journal = {ACM Transactions on Graphics},
    year = {2015},
    volume = {34},
    number = {5}
}
Zexiang Xu, Hsiang-Tao Wu, Lvdi Wang, Changxi Zheng, Xin Tong and Yue Qi
Dynamic Hair Capture using Spacetime Optimization.
ACM Transactions on Graphics (SIGGRAPH Asia 2014), 33(6)

Paper (PDF) Project Page Abstract Video Bibtex
Dynamic hair strands have complex structures and experience intricate collisions and occlusion, posing significant challenges for high-quality reconstruction of their motions. We present a comprehensive dynamic hair capture system for reconstructing realistic hair motions from multiple synchronized video sequences. To recover hair strands' temporal correspondence, we propose a motion-path analysis algorithm that can robustly track local hair motions in input videos. To ensure the spatial and temporal coherence of the dynamic capture, we formulate the global hair reconstruction as a spacetime optimization problem solved iteratively. Demonstrated using a range of real-world hairstyles driven by different wind conditions and head motions, our approach is able to reconstruct complex hair dynamics matching closely with video recordings both in terms of geometry and motion details.
@article{Xu14:HairCap,
 author = {Zexiang Xu and Hsiang-Tao Wu and Lvdi Wang and Changxi Zheng and Xin Tong and Yue Qi},
 title  = {Dynamic Hair Capture using Spacetime Optimization},
 journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2014)},
 year = {2014},
 Month = Dec,
 volume = {33},
 number = {6},
}
Timothy Sun, Papoj Thamjaroenporn and Changxi Zheng
Fast Multipole Representation of Diffusion Curves and Points.
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4)

Paper (PDF) Project Page Abstract Video Bibtex
We propose a new algorithm for random-access evaluation of diffusion curve images (DCIs) using the fast multipole method. Unlike all previous methods, our algorithm achieves real-time performance for rasterization and texture-mapping DCIs of up to millions of curves. After precomputation, computing the color at a single pixel takes nearly constant time. We also incorporate Gaussian radial basis functions into our fast multipole representation using the fast Gauss transform. The fast multipole representation is not only a data structure for fast color evaluation, but also a framework for vector graphics analogues of bitmap editing operations. We exhibit this capability by devising new tools for fast diffusion curve Poisson cloning and composition with masks.
@article{Sun14:FMR,
    author = {Timothy Sun and Papoj Thamjaroenporn and Changxi Zheng},
    title  = {Fast Multipole Representation of Diffusion Curves and Points},
    journal = {ACM Transactions on Graphics (SIGGRAPH 2014)},
    year = {2014},
    volume = {33},
    number = {4},
    month  = Aug,
    url = {http://www.cs.columbia.edu/cg/fmr}
}
Menglei Chai, Changxi Zheng, and Kun Zhou
A Reduced Model for Interactive Hairs.
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4)

Paper (PDF) Project Page Abstract Video Bibtex
Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computation cost hinders similar fidelity in realtime simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model to optimally represent hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, utilizing such a reduced model, we only simulate guide hairs that capture the general hair motion and interpolate all remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus captures motion details. Our hair simulation method enables a simulation of a full head of hair with over 150K strands in realtime. We demonstrate the efficacy and robustness of our method with various hairstyles and driving motions (e.g., head movement and wind force), and compare against full simulation results that do not appear in the training data.
@article{Cai14:ARMI,
    author = {Menglei Chai and Changxi Zheng and Kun Zhou},
    title  = {A Reduced Model for Interactive Hairs},
    journal = {ACM Transactions on Graphics (SIGGRAPH 2014)},
    year = {2014},
    volume = {33},
    number = {4},
    month  = Aug,
}
Xiang Chen, Changxi Zheng, Weiwei Xu and Kun Zhou
An Asymptotic Numerical Method for Inverse Elastic Shape Design.
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4)

Paper (PDF) Abstract Video Bibtex
Inverse shape design for elastic objects greatly eases the design efforts by letting users focus on desired target shapes without thinking about elastic deformations. Solving this problem using classic iterative methods (e.g., Newton-Raphson methods), however, often suffers from slow convergence toward a desired solution. In this paper, we propose an asymptotic numerical method that exploits the underlying mathematical structure of specific nonlinear material models, and thus runs orders of magnitude faster than traditional Newton-type methods. We apply this method to compute rest shapes for elastic fabrication, where the rest shape of an elastic object is computed such that after physical fabrication the real object deforms into a desired shape. We illustrate the performance and robustness of our method through a series of elastic fabrication experiments.
@article{Chen14:ANM,
    author = {Xiang Chen and Changxi Zheng and Weiwei Xu and Kun Zhou},
    title  = {An Asymptotic Numerical Method for Inverse Elastic Shape Design},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014)},
    year = {2014},
    volume = {33},
    number = {4},
    month  = Aug,
}
Zherong Pan, Jin Huang, Yiying Tong, Changxi Zheng, and Hujun Bao
Interactive Localized Liquid Motion Editing.
ACM Transactions on Graphics (SIGGRAPH Asia 2013), 32(6)

Paper (PDF) Abstract Video Bibtex
Animation techniques for controlling liquid simulation are challenging: they commonly require carefully setting initial and boundary conditions or performing a costly numerical optimization scheme against user-provided keyframes or animation sequences. Either way, the whole process is laborious and computationally expensive.
We introduce a novel method to provide intuitive and interactive control of liquid simulation. Our method enables a user to locally edit selected keyframes and automatically propagates the editing in a nearby temporal region using geometric deformation. We formulate our local editing techniques as a small-scale nonlinear optimization problem which can be solved interactively. With this unified formulation, we propose three editing metaphors, including (i) sketching local fluid features using a few user strokes, (ii) dragging a local fluid region, and (iii) controlling a local shape with a small mesh patch. Finally, we use the edited liquid animation to guide an offline high-resolution simulation to recover more surface details. We demonstrate the intuitiveness and efficacy of our method in various practical scenarios.
@article{Pan:2013,
    author = {Zherong Pan and Jin Huang and Yiying Tong and Changxi Zheng and Hujun Bao},
    title  = {Interactive Localized Liquid Motion Editing},
    journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2013)},
    year = {2013},
    month  = Nov,
    volume  = {32},
    number  = {6},
}
Changxi Zheng
One-to-Many: Example-Based Mesh Animation Synthesis.
ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), July, 2013

Paper (PDF) Project Page Abstract Video Bibtex
We propose an example-based approach for synthesizing diverse mesh animations. Provided a short clip of deformable mesh animation, our method synthesizes a large number of different animations of arbitrary length. Combining an automatically inferred linear blending skinning (LBS) model with a PCA-based model reduction, our method identifies possible smooth transitions in the example sequence. To create smooth transitions, we synthesize reduced deformation parameters based on a set of characteristic key vertices on the mesh. Furthermore, by analyzing cut nodes on a graph built upon the LBS model, we are able to decompose the mesh into independent components. Motions of these components are synthesized individually and assembled together. The complexity of our method is independent of mesh resolution, enabling efficient generation of arbitrarily long animations without tedious parameter tuning and heavy computation. We evaluate our method on various animation examples, and demonstrate that numerous diverse animations can be generated from each single example.
@inproceedings{Zheng12:O2M,
    author = {Changxi Zheng},
    title  = {One-to-Many: Example-Based Mesh Animation Synthesis},
    booktitle = {Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation},
    series = {SCA '13},
    year = {2013},
    month  = jul,
    url = {http://www.cs.columbia.edu/~cxz/OneToMany}
}
IMG
Changxi Zheng and Doug L. James
Energy-based Self-Collision Culling for Arbitrary Mesh Deformations.
ACM Transactions on Graphics (SIGGRAPH 2012), 31(4), August 2012

Paper (PDF) Project Page Abstract Video Bibtex
In this paper, we accelerate self-collision detection (SCD) for a deforming triangle mesh by exploiting the idea that a mesh cannot self-collide unless it deforms enough. Unlike prior work on subspace self-collision culling, which is restricted to low-rank deformation subspaces, our energy-based approach supports arbitrary mesh deformations while still being fast. Given a bounding volume hierarchy (BVH) for a triangle mesh, we precompute Energy-based Self-Collision Culling (ESCC) certificates on bounding-volume-related sub-meshes, which indicate the minimum deformation energy required for each sub-mesh to self-collide. After updating energy values at runtime, many bounding-volume self-collision queries can be culled using the ESCC certificates. We propose an affine-frame Laplacian-based energy definition that admits a highly optimized certificate preprocess and fast runtime energy evaluation. The latter is performed hierarchically to amortize Laplacian energy and affine-frame estimation computations. ESCC supports both discrete and continuous SCD, as well as detailed and nonsmooth geometry. We demonstrate significant culling on various examples, with SCD speed-ups of up to 26×.
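The culling logic itself is compact; the recursive Python sketch below (field and function names assumed) shows how a certificate turns an energy value into a conservative go/no-go test per BVH node:

    def self_collision_candidates(node, energy_of):
        # If the sub-mesh's current deformation energy is below its precomputed
        # ESCC certificate (the minimum energy needed for any self-collision),
        # the whole subtree is certified collision-free and is culled.
        if energy_of(node) < node.certificate:
            return []
        if node.is_leaf:
            return [node]          # small enough: test its triangles exactly
        return (self_collision_candidates(node.left, energy_of)
                + self_collision_candidates(node.right, energy_of))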
@article{ZHENG12:ESCC,
    author = {Changxi Zheng and Doug L. James},
    title  = {Energy-based Self-Collision Culling for Arbitrary Mesh Deformations},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2012)},
    year = {2012},
    volume = {31},
    number = {4},
    month  = aug,
    url = {http://www.cs.cornell.edu/projects/escc}
}
IMG
Jeffrey N. Chadwick, Changxi Zheng, and Doug L. James
Precomputed Acceleration Noise for Improved Rigid-Body Sound.
ACM Transactions on Graphics (SIGGRAPH 2012), 31(4), August 2012

Paper (PDF) Project Page Abstract Video Bibtex
We introduce an efficient method for synthesizing acceleration noise due to rigid-body collisions using standard data provided by rigid-body solvers. We accomplish this in two main steps. First, we estimate continuous contact force profiles from rigid-body impulses using a simple model based on Hertz contact theory. Next, we compute solutions to the acoustic wave equation due to short acceleration pulses in each rigid-body degree of freedom. We introduce an efficient representation for these solutions - Precomputed Acceleration Noise - which allows us to accurately estimate sound due to arbitrary rigid-body accelerations. We find that the addition of acceleration noise significantly complements the standard modal sound algorithm, especially for small objects.
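For background, a common closed form for a Hertz-type impact is a half-sine profile raised to the 3/2 power, with its amplitude fit so the time integral reproduces the solver's impulse; the paper's exact profile and fitting may differ, so read this as a sketch:

    % Illustrative Hertz-type contact force profile (assumed form):
    f(t) \approx f_{\max}\left[\sin\!\left(\frac{\pi t}{\tau}\right)\right]^{3/2},
    \qquad 0 \le t \le \tau,
    % where \tau is the contact duration and f_{\max} is chosen so that
    % \int_0^\tau f(t)\,dt matches the impulse reported by the rigid-body solver.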
@article{Chadwick12,
    author = {Jeffrey N. Chadwick and Changxi Zheng and Doug L. James},
    title  = {Precomputed Acceleration Noise for Improved Rigid-Body Sound},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2012)},
    year = {2012},
    volume = {31},
    number = {4},
    month  = aug,
    url = {http://www.cs.cornell.edu/projects/Sound/impact}
}
IMG
Jeffrey N. Chadwick, Changxi Zheng, and Doug L. James
Faster Acceleration Noise for Multibody Animations using Precomputed Soundbanks.
SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), 2012

Paper (PDF) Project Page Abstract Bibtex
We introduce an efficient method for synthesizing rigid-body acceleration noise for complex multibody scenes. Existing acceleration noise synthesis methods for animation require object-specific precomputation, which is prohibitively expensive for scenes involving rigid-body fracture or other sources of small, procedurally generated debris. We avoid precomputation by introducing a proxy-based method for acceleration noise synthesis in which precomputed acceleration noise data is only generated for a small set of ellipsoidal proxies and stored in a proxy soundbank. Our proxy model is shown to be effective at approximating acceleration noise from scenes with lots of small debris (e.g., pieces produced by rigid-body fracture). This approach is not suitable for synthesizing acceleration noise from larger objects with complicated non-convex geometry; however, it has been shown in previous work that acceleration noise from objects such as these tends to be largely masked by modal vibration sound. We manage the cost of our proxy soundbank with a new wavelet-based compression scheme for acceleration noise and use our model to significantly improve sound synthesis results for several multibody animations.
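One way to picture the proxy soundbank is as a nearest-neighbor lookup over ellipsoid shapes; everything in this Python sketch (the matching criterion, the bank layout) is a hypothetical illustration, not the paper's data structure:

    import numpy as np

    def nearest_proxy(semi_axes, proxy_bank):
        # semi_axes: (3,) ellipsoid semi-axes fitted to a debris piece.
        # proxy_bank: list of (proxy_semi_axes, precomputed_noise_data) pairs.
        # Sort axes so the match is invariant to axis ordering, then pick the
        # proxy with the closest shape and reuse its acceleration-noise data.
        key = np.sort(np.asarray(semi_axes, dtype=float))
        best = min(proxy_bank,
                   key=lambda p: np.linalg.norm(np.sort(np.asarray(p[0], dtype=float)) - key))
        return best[1]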
@inproceedings{Chadwick12:SCA,
    author = {Jeffrey N. Chadwick and Changxi Zheng and Doug L. James},
    title  = {Faster Acceleration Noise for Multibody Animations using Precomputed Soundbanks},
    booktitle = {Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation},
    series = {SCA '12},
    year = {2012},
    month  = jul,
    url = {http://www.cs.cornell.edu/projects/Sound/proxy}
}
IMG
Changxi Zheng and Doug L. James
Toward High-Quality Modal Contact Sound.
ACM Transactions on Graphics (SIGGRAPH 2011), 30(4), August 2011

Paper (PDF) Project Page Abstract Video Bibtex
Contact sound models based on linear modal analysis are commonly used with rigid body dynamics. Unfortunately, treating vibrating objects as "rigid" during collision and contact processing fundamentally limits the range of sounds that can be computed, and contact solvers for rigid body animation can be ill-suited for modal contact sound synthesis, producing various sound artifacts. In this paper, we resolve modal vibrations in both the collision and frictional contact processing stages, thereby enabling non-rigid sound phenomena such as micro-collisions, vibrational energy exchange, and chattering. We propose a frictional multibody contact formulation and a modified Staggered Projections solver that is well-suited to sound rendering and avoids the noise artifacts, associated with spatial and temporal contact-force fluctuations, that plague prior methods. To enable practical animation and sound synthesis of numerous bodies with many coupled modes, we propose a novel asynchronous integrator with model-level adaptivity built into the frictional contact solver. Vibrational contact damping is modeled to approximate contact-dependent sound dissipation. Results are provided that demonstrate high-quality contact resolution with sound.
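For readers new to modal sound synthesis, the underlying vibration model is a bank of independent damped oscillators; the equation below is the standard background model (notation assumed), and the paper's contribution lies in how the contact forces f_i(t) are resolved:

    % Standard modal oscillator bank (textbook background, notation assumed):
    \ddot{q}_i + 2\,\xi_i\,\omega_i\,\dot{q}_i + \omega_i^2\,q_i = f_i(t),
    % with modal coordinate q_i, natural frequency \omega_i, damping ratio \xi_i,
    % and f_i(t) the contact force projected onto the i-th mode shape; the
    % radiated sound is a weighted sum of the q_i(t).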
@article{ZHENG11,
    author = {Changxi Zheng and Doug L. James},
    title  = {Toward High-Quality Modal Contact Sound},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2011)},
    year = {2011},
    volume = {30},
    number = {4},
    month  = aug,
    url = {http://www.cs.cornell.edu/projects/Sound/mc}
}
IMG
Changxi Zheng and Doug L. James
Rigid-Body Fracture Sound with Precomputed Soundbanks.
ACM Transactions on Graphics (SIGGRAPH 2010), 29(3), July 2010

Paper (PDF) Project Page Abstract Video Bibtex
We propose a physically based algorithm for synthesizing sounds synchronized with brittle fracture animations. Motivated by laboratory experiments, we approximate brittle fracture sounds using time-varying rigid-body sound models. We extend methods for fracturing rigid materials by proposing a fast quasistatic stress solver to resolve near-audio-rate fracture events, energy-based fracture pattern modeling, and estimation of "crack"-related fracture impulses. Multipole radiation models provide scalable sound radiation for complex debris and level-of-detail control. To reduce sound-model generation costs for complex fracture debris, we propose Precomputed Rigid-Body Soundbanks composed of precomputed ellipsoidal sound proxies. Examples and experiments are presented that demonstrate plausible and affordable brittle fracture sounds.
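The multipole radiation models mentioned above are truncated exterior expansions of solutions to the Helmholtz equation; in standard (assumed) notation they take the form:

    % Truncated multipole expansion for outgoing radiation (textbook form):
    p(r, \theta, \phi) \approx \sum_{l=0}^{L} \sum_{m=-l}^{l}
        c_{lm}\, h_l^{(2)}(k r)\, Y_l^m(\theta, \phi),
    % with wavenumber k, spherical Hankel functions h_l^{(2)} (outgoing waves
    % under the e^{+i\omega t} convention), spherical harmonics Y_l^m, and
    % per-object (or per-proxy) coefficients c_{lm}.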
@article{ZHENG10,
    author = {Changxi Zheng and Doug L. James},
    title  = {Rigid-Body Fracture Sound with Precomputed Soundbanks},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2010)},
    year = {2010},
    volume = {29},
    number = {3},
    month  = jul,
    url = {http://www.cs.cornell.edu/projects/fracturesound/}
}
IMG
Changxi Zheng and Doug L. James
Harmonic Fluids.
ACM Transactions on Graphics (SIGGRAPH 2009), 28(3), August 2009

Paper (PDF) Project Page Abstract Video Bibtex
Fluid sounds, such as splashing and pouring, are ubiquitous and familiar, but we lack physically based algorithms to synthesize them in computer animation or interactive virtual environments. We propose a practical method for automatic procedural synthesis of synchronized harmonic bubble-based sounds from 3D fluid animations. To avoid audio-rate time-stepping of compressible fluids, we acoustically augment existing incompressible fluid solvers with particle-based models for bubble creation, vibration, advection, and radiation. Sound radiation from harmonic fluid vibrations is modeled using a time-varying linear superposition of bubble oscillators. We weight each oscillator by its bubble-to-ear acoustic transfer function, which is modeled as a discrete Green's function of the Helmholtz equation. To solve potentially millions of 3D Helmholtz problems, we propose a fast dual-domain multipole boundary-integral solver, with cost linear in the complexity of the fluid domain's boundary. Enhancements are proposed for robust evaluation, noise elimination, acceleration, and parallelization. Examples of harmonic fluid sounds are provided for water drops, pouring, babbling, and splashing phenomena, often with thousands of acoustic bubbles and hundreds of thousands of transfer-function solves.
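The bubble oscillators at the heart of this model are classically characterized by the Minnaert resonance, the textbook result below (notation assumed here), which sets the dominant frequency of each bubble's sound:

    % Minnaert resonance frequency of a spherical bubble (classical result):
    f_0 = \frac{1}{2\pi r_0} \sqrt{\frac{3\gamma p_0}{\rho}},
    % with bubble radius r_0, gas heat-capacity ratio \gamma (about 1.4 for air),
    % ambient pressure p_0, and liquid density \rho; e.g., a 1 mm-radius bubble
    % resonates near 3 kHz.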
@article{ZHENG09,
    author = {Changxi Zheng and Doug L. James},
    title  = {Harmonic Fluids},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2009)},
    year = {2009},
    volume = {28},
    number = {3},
    month  = aug,
    url = {http://www.cs.cornell.edu/projects/HarmonicFluids/}
}