Columbia researchers presenting eight papers at this year’s SIGGRAPH


Columbia University researchers are presenting eight papers at this year’s SIGGRAPH, held July 24-28 at the Anaheim Convention Center. High-level descriptions are given below, with links provided to the papers and companion videos.

Surface-Only Liquids
Computational Design of Reconfigurables
Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Interactive Acoustic Transfer Approximation for Modal Sound
Mesh Arrangements for Solid Geometry
DisCo: Display-Camera Communication Using Rolling Shutter Sensors
Rig Animation with a Tangible and Modular Input Device
Toward Animating Water with Complex Acoustic Bubbles


Category: Fluids simulation
Tuesday, 26 July, 3:45 pm – 5:35 pm, Ballroom D

Surface-Only Liquids

Fang Da, Columbia University
David Hahn, Institute of Science and Technology Austria (IST Austria)
Christopher Batty, University of Waterloo
Chris Wojtan, IST Austria
Eitan Grinspun, Columbia University
An interview with Eitan Grinspun.
The paper describes simulating splashes and other liquid behaviors by modeling only the surface. Where did the idea come from?
When you look at a glass of water, what is it that you see? There is a volume, and light travels through it, but the only geometry you see is the geometry of the surface. So at a philosophical level—on a minimalist level—you can ask, is it necessary to explicitly represent every point on the interior of the water if we see only the exterior surface? Our paper shows that in many cases, even complex cases, the answer is no.

It’s important to point out that while we are not representing the interior explicitly, we do so implicitly. We don’t view the surface as a membrane with just air inside. That would never work. We treat the interior as filled with solid water, though we do so in an implicit way. And we make certain assumptions about the interior that allow us to reduce away all the representation of the liquid on the interior.

This paper is the first to describe a surface-based treatment of liquids. Was such a treatment known to be a hard problem?
I think it was assumed to be impossible. Five or six years ago, I asked physicist colleagues, who said that a surface-only treatment would likely miss the physics of an emerging drop or crown splash, because such a method wouldn’t consider the swirling vortices inside the liquid, which are just as important as, or more important than, anything happening on the surface.
When we started this project, we had to really think hard about what we needed to know about the interior of the water given only a surface representation. What can you say about some particular point in the interior if your only representation is on the boundary?

The outside cannot possibly summarize everything that’s happening on the inside unless you make certain assumptions. The assumption we made is that the interior is in some sense “as uninteresting as possible” given what’s happening on the boundary. It has no extra swirls. Only the swirls that can be seen or inferred from the boundary.

Were you surprised that your method worked as well as it did?
Yes, we were surprised; I think everyone was surprised.
Fortunately, there is a physical justification for it. The motion of liquids includes an effect called baroclinicity, which means that swirls can form suddenly only at locations where the density changes. In the interior, where the density is constant, there can be no sudden formation of a swirl. At the boundary, there is a jump in density, and there a swirl can form suddenly.

Now, does a swirl formed at the boundary migrate to the interior? It depends; the migration or spreading out of swirls is caused by viscosity. Honey, with its high viscosity, will have a lot of migration, but water, having low viscosity, will have little. Effectively, we assume that our water is inviscid, that it has zero viscosity. Real water does have some viscosity (the physicist Feynman would have said that we are working with “the flow of dry water”), but it’s a good enough assumption.
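
For readers who want the physics spelled out, the standard textbook vorticity transport equation (not taken from the paper itself) makes both points concrete:

\[
\frac{\partial \boldsymbol{\omega}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\omega}
= (\boldsymbol{\omega}\cdot\nabla)\mathbf{u}
+ \frac{1}{\rho^{2}}\,\nabla\rho \times \nabla p
+ \nu\,\nabla^{2}\boldsymbol{\omega}
\]

Here \(\boldsymbol{\omega}\) is the vorticity (the “swirl”), \(\mathbf{u}\) the velocity, \(\rho\) the density, \(p\) the pressure, and \(\nu\) the kinematic viscosity. The baroclinic term \(\frac{1}{\rho^{2}}\nabla\rho\times\nabla p\) vanishes in the interior, where \(\rho\) is constant, so new vorticity appears only at the surface; and the inviscid assumption drops the diffusion term \(\nu\nabla^{2}\boldsymbol{\omega}\), so vorticity born at the surface does not migrate inward.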

Would you say that you’re contributing new knowledge to the study of water dynamics?
I think we’re contributing new questions. Some previous assumptions about the complexity of what is going on inside these droplets and jets must be reexamined; otherwise, our algorithm shouldn’t work. Apparently a more complex view of the behavior of the interior of a volume of water is not needed to explain a splash, since we can simulate that splash while factoring out that complex behavior.

No one has been able to see exactly what the interior flow is during a crown splash. So by virtue of making the assumptions we did about what the flow can and cannot be, we’re effectively adding a data point that says the flow is actually pretty simple in the interior; our assumptions must be sufficiently close to the truth to produce the results that we did.


Category: Deformable Surface Design
Wednesday, 27 July, 9:00 am – 10:30 am, Ballroom E

Computational Design of Reconfigurables


Akash Garg, Columbia University
Alec Jacobson, Columbia University
Eitan Grinspun, Columbia University


An interview with Eitan Grinspun

The reconfigurables paper—where you have hard, rigid surfaces and planes and the possibility of collision—seems very different from the subject of surface-only liquids. What is the connection?

The theory behind the two papers is absolutely different. The connection is geometry. For surface-only liquids, it was helpful to focus on the geometry of the surface.

What’s interesting for me when looking at reconfigurables—whether it’s a bicycle that folds up or an extremely efficient kitchen in a small space—is that your attention is focused on one configuration; you then make a geometric change that works great for that configuration but interferes with functionality in another, or causes two parts to collide.

I like to look for abstractions, and one of the general areas where designing is hard is when an object can be in multiple configurations. So we wanted a CAD program where these transitions or different states were not an afterthought, but primary to the entire process.

This project seems to have more actual application for a wider range of people than simulating liquid motion.

I think you are right. We actually make a conscious effort to have different projects in the lab spanning the spectrum from more conceptual to more applied. Our idea was to create a tool that would aid designers by alerting them when and where parts might collide while also offering suggestions and edits on how to resolve collisions. Manually making adjustments through trial and error can be very tedious; our method makes the process more automatic and fluid.
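
To make the idea of configuration-aware collision alerts concrete, here is a minimal sketch (my own illustration, not the authors’ code) that flags overlapping part pairs across a set of configurations. The coarse bounding-box test and all names are assumptions of the sketch; the paper’s engine is far more precise and interactive.

```python
# Minimal sketch of configuration-aware collision alerts (illustrative only;
# the paper's engine is far more precise than bounding-box tests).
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of an (n, 3) array of vertices."""
    return points.min(axis=0), points.max(axis=0)

def boxes_overlap(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amin <= bmax) and np.all(bmin <= amax))

def collision_alerts(parts, configurations):
    """Report which part pairs overlap in which configuration.

    parts: dict name -> (n, 3) vertex array in a rest pose
    configurations: dict config name -> {part name: 4x4 transform}
    """
    alerts = []
    for cname, transforms in configurations.items():
        posed = {}
        for pname, verts in parts.items():
            homog = np.c_[verts, np.ones(len(verts))]   # homogeneous coords
            posed[pname] = aabb((homog @ transforms[pname].T)[:, :3])
        names = sorted(posed)
        for i, p in enumerate(names):
            for q in names[i + 1:]:
                if boxes_overlap(posed[p], posed[q]):
                    alerts.append((cname, p, q))
    return alerts
```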

How did you three authors divide up the work on this paper?

At first, Akash focused primarily on the underlying collision detection “engine” that drives the interactive collision notifications of the software, while Alec focused on the human experience, including assistive tools such as a “smart” camera that automatically selected the best viewpoint for observing problem areas. But these two branches of work quickly merged, and pretty soon Alec was also working on collisions, helping to formulate the mathematics of a new “spacetime collision resolver” that automatically fixed subtle penetrations, while Akash was reciprocally contributing to the human experience, for example with a “smart picture-in-picture” that popped up automatically to highlight unintended side-effects of the present editing operation. So in the end, it’s harder to tease apart the roles. While I pitched the original project vision, the project really took shape when we as a team identified more and more domain examples where reconfigurables arise, from a folding bicycle to a kitchen or a burr puzzle.

From your perspective, what most distinguishes this project from the surface-only liquids paper?
Since one project is about water and the other is about picnic benches and kitchens, I think at first glance they are completely different. So let me answer instead the question of what is the least obvious commonality between these two projects. As I said before, on a technical level, I like them both because they are both inherently geometric approaches. But I think on a scholarly level there is a bird’s-eye commonality that I can share. I think of both projects somewhat as “conceptual pieces,” like concept cars. You build a concept car not because you think everyone should be driving one tomorrow, but because it provides inspiration and vision for where to go. Both the surface-only liquids and reconfigurables papers provide this kind of vision.
On the liquids side, we are really breaking the ice on decades of ongoing research on liquid simulation where the entire volume is discretized, and we are saying, hey, people have thought that maybe you can just simulate on the surface, but now it’s not just a pipe dream, here’s this avenue we can begin to pursue.

On the reconfigurables side, we don’t pretend to have built a computer aided design (CAD) tool that is feature-rich like commercial tools; rather, we feel that we are calling attention to a broad and practical class of design problems—reconfigurables—for which current CAD tools do not provide sufficient support. We hope that the kinds of questions (and maybe some answers) that came up in how to support the design of reconfigurables can drive the next set of features in commercial CAD packages.

Are there concrete plans to distribute your method so that designers will be able to use it?

We are definitely interested in disseminating the code. Alec has already publicly released his popular libIGL mesh processing library. Reconfigurables will be a separate code, but we hope that it will be useful for others. On a more entrepreneurial front, we are also reaching out to design and engineering firms to find out how the technologies that we have developed match up against their realities.


Category: Computational Design of Structures, Shapes, and Sound
Wednesday, 27 July, 9:00 am – 10:30 am, Ballroom D

Acoustic Voxels: Computational Optimization of Modular Acoustic Filters


Dingzeyu Li, Columbia University
David I.W. Levin, Disney Research
Wojciech Matusik, MIT CSAIL
Changxi Zheng, Columbia University


A trumpet and a muffler may seem dissimilar, but inside each is an acoustic filter that modifies sound; as sound waves pass through the filter’s hollow chamber, they get reflected back and forth, which boosts or suppresses certain frequencies. Changing the chamber’s shape changes the sound, but predicting the shape-sound relationship is not intuitive; for this reason the chamber’s shape is almost always a simple tube whose acoustics are relatively simple to understand, and simple also to manufacture. But now, with computational methods that accurately simulate sound wave propagation, researchers can design—and fabricate with 3D printers—more complex chambers to gain more control over the acoustics. Freed from traditional constraints, the researchers re-imagined acoustic filters, building them out of small primitives called acoustic voxels. It’s a general approach that works for both wind instruments and mufflers. And, in an unexpected and propitious twist, it led the researchers in a completely new direction: acoustic tagging for uniquely identifying an object, and acoustic encoding for implanting information (think copyright) into an object’s very form.
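
The paper’s voxel assemblies are driven by full acoustic wave simulation, but the classical idea they generalize can be sketched with the textbook transfer-matrix model of chained tube segments. The sketch below is illustrative only; the 1D approximation, segment sizes, and constants are my assumptions, not the paper’s method.

```python
# Illustrative transfer-matrix model of a chain of tube segments
# (a textbook 1D approximation, not the paper's voxel-based solver).
import numpy as np

RHO, C = 1.2, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def tube_matrix(length, area, freq):
    """2x2 acoustic transfer matrix of a uniform tube segment."""
    k = 2 * np.pi * freq / C          # wavenumber
    Z = RHO * C / area                # characteristic impedance
    kl = k * length
    return np.array([[np.cos(kl), 1j * Z * np.sin(kl)],
                     [1j * np.sin(kl) / Z, np.cos(kl)]])

def transmission_loss(segments, freq, port_area):
    """Transmission loss (dB) of chained segments between equal ports."""
    T = np.eye(2, dtype=complex)
    for length, area in segments:
        T = T @ tube_matrix(length, area, freq)
    Z0 = RHO * C / port_area
    A, B, Cc, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20 * np.log10(abs(A + B / Z0 + Cc * Z0 + D) / 2)

# Expansion-chamber "muffler": narrow-wide-narrow suppresses frequencies
# set by the chamber length; changing the geometry changes the sound.
segments = [(0.05, 1e-3), (0.20, 5e-3), (0.05, 1e-3)]
for f in (125, 250, 500, 1000):
    print(f, round(transmission_loss(segments, f, 1e-3), 1), "dB")
```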

Category: Sound, Fluids, and Boundaries
Wednesday, 27 July, 10:45 am – 12:15 pm, Ballroom D

Interactive Acoustic Transfer Approximation for Modal Sound

Different sounds for metal, porcelain, and wood

Dingzeyu Li, Columbia University
Yun Fei, Columbia University
Changxi Zheng, Columbia University

An interview with Dingzeyu Li
What is the problem you’re solving?
Sound in animation is very important, but it’s hard to get right. Artists are usually given a mesh animation and from that they try to imagine what sounds might result from two objects colliding. But it’s hard, especially in the virtual world where anything can happen, like a cartoon car character crashing into something. There’s no real-world counterpart, and artists must rely on their imagination.

They end up recording a sound and integrating it into the animation so it aligns with the action—which takes time to get right—but often the sound is not quite what they want or they decide to change something about an object’s characteristics, and they must start all over again. It might take hours or days for a single sequence.

But you’re not using prerecorded sound?
That’s correct. Ours is a physics-based, computational approach where sound is automatically generated from vibrations produced by the collision of two rigid objects. While there are existing physics-based methods—sound from vibrations is a well-studied area—our method focuses on how that sound propagates and how it would sound at various angles and locations.
We also looked for ways to precompute many of the calculations that go into simulating a sound so that if something changes in the animation, we can quickly recompute the sound in a highly accurate, realistic way. These precomputations—which have to take into account the geometry, the sound frequency, and many other variables—can take several hours, but once they are done, they are done and stored to be immediately available if needed.

Existing methods are not as flexible. If something changes, the entire process of recomputing everything has to be redone from the beginning.

What makes your method fast?
Previous methods relied on multipole coefficients, called moments, which are volatile, apt to change from one sample to another, and therefore very hard to interpolate smoothly. Accurately approximating the sound propagation behavior that way would require many samples.
We look at a very smooth function that we can interpolate easily; specifically we use the acoustic pressure value, which describes how acoustic pressure propagates in space. Because the pressure value changes smoothly in the frequency domain, we don’t have to take many samples to get a faithful approximation. It’s these pressure values that are being precomputed at a sparse set of frequencies. At runtime, the moments can be recovered from these smooth pressure values efficiently.

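As a hedged sketch of that precompute-then-interpolate workflow (the function names, the toy solver, and the linear interpolation are illustrative assumptions, not the paper’s algorithm):

```python
# Minimal sketch of the precompute-then-interpolate idea (illustrative;
# the paper's transfer evaluation is considerably more involved).
import numpy as np

def precompute_pressure(solve_pressure, sparse_freqs, listener_pos):
    """Offline: run the (expensive) acoustic solver at a sparse set of
    frequencies, storing the smooth pressure values at the listener."""
    return np.array([solve_pressure(f, listener_pos) for f in sparse_freqs])

def pressure_at(freq, sparse_freqs, samples):
    """Runtime: recover the pressure at an arbitrary frequency by
    interpolating the smooth sampled values (linear, for illustration)."""
    re = np.interp(freq, sparse_freqs, samples.real)
    im = np.interp(freq, sparse_freqs, samples.imag)
    return re + 1j * im

# Toy stand-in for the expensive wave solve; smooth in frequency.
toy_solver = lambda f, x: np.exp(-f / 2000.0) * np.exp(1j * f / 343.0)
freqs = np.linspace(100.0, 4000.0, 16)        # sparse frequency samples
samples = precompute_pressure(toy_solver, freqs, None)
p = pressure_at(1234.0, freqs, samples)       # fast runtime query
```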

You’re also a coauthor on the acoustic voxel paper. Was there much overlap?
One interesting link between the two projects is that the method of precomputing the pressure values can also be used to precompute size changes in the voxel primitives, and so accelerate the acoustic voxel precomputation even further.
The acoustic transfer method you describe is for the modal sounds produced by rigid objects. Is it possible to apply it to deformable objects?
In the paper, we showed one extension to handle deformable body sound propagation. Currently we assume the modal shapes remain unchanged in the animation.

For animations involving large deformations, the computation is more challenging since the modal shapes are no longer constant. We are currently working on simulating sounds for deformable objects.


Category: Geometry
Monday, 25 July, 9:00 am – 10:30 am, Ballroom E

Mesh Arrangements for Solid Geometry


Qingnan Zhou, New York University
Eitan Grinspun, Columbia University
Denis Zorin, New York University
Alec Jacobson, Columbia University


An interview with Alec Jacobson
Why are meshes important in computer graphics?
In mathematics we prefer to represent 3D objects in terms of continuous functions; however, to work with 3D objects on the computer we need a discrete, finite representation. Memory is not infinite, so we can’t store all of the points on an object’s surface. Instead, we look to approximations. Meshes are surfaces formed by connecting many small polygons, often triangles. This is one of the most basic ways to represent a very large class of objects. One particular advantage for traditional computer graphics is the ease with which one can display or render meshes on screen. Meshes are called an explicit representation because it is easy to march along the surface and trace out lines or areas.

This comes at a cost compared to implicit representations, which can easily answer whether, or how far, any query point is from the surface. Implicit surfaces make solid geometry tasks like taking the union or difference of two objects very easy. Explicit surfaces are much trickier, and these tasks require great care.
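
To make the contrast concrete, here is a small illustrative sketch (not from the paper): with implicit representations such as signed distance functions, solid booleans reduce to pointwise min/max, which is exactly the ease the explicit mesh setting lacks.

```python
# With implicit surfaces (signed distance functions), solid booleans are
# one-liners; the hard part this paper tackles is doing the same robustly
# for explicit triangle meshes. Illustrative sketch only.
import numpy as np

def sphere_sdf(center, radius):
    return lambda p: np.linalg.norm(p - center, axis=-1) - radius

def union(f, g):        # inside if inside either solid
    return lambda p: np.minimum(f(p), g(p))

def difference(f, g):   # inside f but not g
    return lambda p: np.maximum(f(p), -g(p))

a = sphere_sdf(np.array([0.0, 0.0, 0.0]), 1.0)
b = sphere_sdf(np.array([0.8, 0.0, 0.0]), 1.0)
u = union(a, b)
print(u(np.array([0.4, 0.0, 0.0])))  # negative: point is inside the union
```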

Can you describe at a high level the main innovation of your paper?

Once one or many 3D objects are represented as meshes, our method enables conducting certain operations on them robustly. For example, many physical objects we use are designed by merging multiple 3D shapes together: a pawn chess piece is a sphere merged with a cone merged with a flat disk. With our method we can achieve these types of operations on meshes—a common format for surfaces in computer graphics. Previous methods either required unrealistically high-quality inputs or produced flaws in their output.

What was your motivation in addressing this particular problem in computer graphics?
My original motivation for solving this problem was curiosity and a desire for an easy-to-use, open-source implementation of these tools. Solid geometry operations are fundamental and a very powerful tool to have in one’s tool chest. I had read a recent work on peeling off layers of self-intersecting meshes to reveal their outermost layer. Initially I thought this method could be extended to more general operations on solid objects. Eventually our method matured from that idea into its current form.

What makes your method fast? Your paper describes how a cascading operation that previously would have taken weeks to compute can be completed in only a few seconds with your method.

Our method is faster in certain scenarios because we avoid a “domino effect.” Faced with a series of operations, previous methods would resolve them one by one. In the worst case this can lead to an explosion in the number of new elements created after each operation. Our method resolves all operations simultaneously and the number of new elements is no more than what’s necessary to represent the output.

Simple and complex variadic operations cost the same using our mesh arrangements. Converting variadic operations to a cascade of binary operations is worst-case exponential in time.
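
As a hedged illustration of that difference (the function names below are hypothetical stand-ins, not the paper’s API): a cascade applies n-1 binary booleans in sequence, while a variadic operation arranges all inputs at once and extracts the result, for instance by winding number.

```python
# Hypothetical API sketch (names are illustrative, not the paper's code)
# contrasting a cascade of binary booleans with one variadic operation.
from functools import reduce

def cascaded_union(meshes, binary_union):
    """n-1 sequential booleans; each step resolves intersections against an
    ever-larger intermediate result, which is worst-case exponential."""
    return reduce(binary_union, meshes)

def variadic_union(meshes, build_arrangement):
    """One arrangement of all inputs at once: every intersection is resolved
    simultaneously, then cells inside at least one solid are kept."""
    arrangement = build_arrangement(meshes)
    return arrangement.extract_cells(lambda winding_number: winding_number >= 1)
```
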
Do you plan further enhancements?

There are many directions I would like to take this work in the future. Specific to this project I would like to further improve our performance. Beyond solid operations, this work is one step toward a larger goal of making all parts of the geometry processing pipeline more robust.


Category: User Interfaces
Thursday, 28 July, 2:00 pm – 3:30 pm, Ballroom D

DisCo: Display-Camera Communication Using Rolling Shutter Sensors

Kensei Jo, Columbia University
Mohit Gupta, Columbia University
Shree Nayar, Columbia University


DisCo is a novel human-computer interface based on display-camera communication. It uses fast temporal modulation of displays to transmit messages, and rolling shutter sensors to receive them. The messages are imperceptible to humans, allowing displays to serve the dual purpose of displaying images to humans while simultaneously conveying messages to cameras. (b) A scene comprising a display. (c) Image captured by a rolling shutter camera. Due to rolling shutter, temporal modulation of the display is converted into a spatial flicker pattern. The flicker pattern is superimposed on the displayed pattern. By using a sensor that can capture two exposures simultaneously, we can separate the flicker and the display pattern, and thus recover both the message (d) and the flicker-free scene image (e) from a single captured image.
Abstract:

We present DisCo, a novel display-camera communication system that enables displays to send short messages to digital sensors while simultaneously displaying images for human consumption. Existing display-camera communication methods are largely based on spatial-domain steganography, where the information is encoded as an imperceptible spatial signal (e.g., a QR code). These methods, while simple to implement, are prone to errors due to common causes of image degradation such as occlusions, the display being outside the sensor’s field of view, defocus blur, and perspective distortion. Due to these limitations, steganography-based techniques have not been widely adopted, especially in uncontrolled settings involving consumer cameras and public displays.

DisCo overcomes these limitations by embedding messages in temporal signals instead of spatial signals. We draw inspiration from the emerging field of visible light communication (VLC), where information is transmitted between a light source (transmitter) and a sensor (receiver) via high-frequency temporally modulated light. Most of these techniques require specialized high-speed cameras or photo-diodes as signal receivers [Elgala et al. 2009; Vucic et al. 2010; Sarkera et al. 2009]. Recently, a method was proposed for using low-cost rolling shutter sensors as receivers. This method, however, places strong restrictions on the transmitter; only light sources (e.g., LEDs) or surfaces with constant brightness [Danakis et al. 2012] can be used. These systems do not work with displays that need to show arbitrary images. The goal of this paper is to design systems that can use a broad range of signal transmitters, especially displays showing arbitrary images, as well as objects that are illuminated with temporally modulated light. The objects can have arbitrary textures.

DisCo builds upon the method proposed in [Danakis et al. 2012] and uses rolling shutter cameras as signal receivers. In rolling shutter sensors, different rows of pixels are exposed in rapid succession, thereby sampling the incident light at different time instants. This converts the temporally modulated light coming from the display into a spatial flicker pattern in the captured image. The flicker encodes the transmitted signal. However, the flicker pattern is superimposed with the (unknown) display pattern. In order to extract the message, the flicker and the display pattern must be separated. Our key contribution is to show that the two components can be separated by capturing images at two different camera exposures. We also show that the flicker component is invariant to the display pattern and other common imaging degradations (e.g., defocus blur, occlusion, camera rotation and variable display size). The effect of all these degradations can be absorbed in the display pattern component. Since the display pattern is separated from the flicker component before signal recovery, the imaging degradations do not adversely affect the communication process.
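
A heavily simplified toy model of the two-exposure idea (my own sketch, not the paper’s algorithm): suppose the short-exposure image is roughly the display pattern times a per-row flicker, while a long exposure averages the flicker away; a row-wise ratio then recovers the flicker signal regardless of what is displayed.

```python
# Toy model of two-exposure flicker/display separation (illustrative only;
# the paper derives a principled separation, this just shows the idea).
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 240, 320
display = rng.uniform(0.2, 1.0, size=(rows, cols))   # unknown display pattern
bits = rng.integers(0, 2, size=rows)                  # message bits, one per row
flicker = 0.8 + 0.4 * bits                            # per-row modulation

short_exposure = display * flicker[:, None]           # flicker survives
long_exposure = display * flicker.mean()              # flicker averaged out

ratio = short_exposure.mean(axis=1) / long_exposure.mean(axis=1)
decoded = (ratio > ratio.mean()).astype(int)          # threshold per row
print("bit errors:", int(np.sum(decoded != bits)))    # 0 in this toy setup
```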


Category: User Interfaces
Thursday, 28 July, 2:00 pm – 3:30 pm, Ballroom D

Rig Animation with a Tangible and Modular Input Device

Oliver Glauser, ETH Zurich
Wan-Chun Ma, ETH Zurich
Daniele Panozzo, New York University & ETH Zurich
Alec Jacobson, Columbia University
Otmar Hilliges, ETH Zurich
Olga Sorkine-Hornung, ETH Zurich


Abstract:

We propose a novel approach to digital character animation, combining the benefits of tangible input devices and sophisticated rig animation algorithms. A symbiotic software and hardware approach facilitates the animation process for novice and expert users alike. We overcome limitations inherent to all previous tangible devices by allowing users to directly control complex rigs using only a small set (5-10) of physical controls. This avoids oversimplification of the pose space and excessively bulky device configurations. Our algorithm derives a small device configuration from complex character rigs, often containing hundreds of degrees of freedom, and a set of sparse sample poses. Importantly, only the most influential degrees of freedom are controlled directly, yet detailed motion is preserved based on a pose interpolation technique. We designed a modular collection of joints and splitters, which can be assembled to represent a wide variety of skeletons. Each joint piece combines a universal joint and two twisting elements, allowing it to accurately sense its configuration. The mechanical design provides a smooth inverse kinematics-like user experience and is not prone to gimbal lock. We integrate our method with the professional 3D software Autodesk Maya® and discuss a variety of results created with characters available online. Comparative user experiments show significant improvements over the closest state-of-the-art in terms of accuracy and time in a keyframe posing task.
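
As a hedged illustration of the pose-interpolation idea mentioned in the abstract (my own sketch, not the authors’ algorithm): a handful of sensed device angles can drive many rig parameters by blending artist-provided sample poses, here with simple inverse-distance weights.

```python
# Toy pose interpolation: map a few sensed device angles to a full rig pose
# by blending sample poses (illustrative; the paper uses a more careful scheme).
import numpy as np

def blend_pose(device_angles, sample_inputs, sample_poses, eps=1e-8):
    """device_angles: (d,) sensed controls; sample_inputs: (m, d) control
    values per sample; sample_poses: (m, n) full rig parameters per sample."""
    d2 = np.sum((sample_inputs - device_angles) ** 2, axis=1)
    w = 1.0 / (d2 + eps)               # inverse-distance weights
    w /= w.sum()
    return w @ sample_poses            # (n,) interpolated rig pose

# Three sample poses pairing 2 device controls with a 5-DOF rig.
inputs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
poses = np.array([[0, 0, 0, 0, 0], [1, 2, 0, 1, 0], [0, 1, 3, 0, 2]], float)
print(blend_pose(np.array([0.5, 0.5]), inputs, poses))
```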


Category: Sound, Fluids, & Boundaries
Wednesday, 27 July, 10:45 am – 12:15 pm, Ballroom D

Toward Animating Water with Complex Acoustic Bubbles

Timothy Langlois, Cornell University
Changxi Zheng, Columbia University
Doug James, Stanford University


Abstract:
This paper explores methods for synthesizing physics-based bubble sounds directly from two-phase incompressible simulations of bubbly water flows. By tracking fluid-air interface geometry, we identify bubble geometry and topological changes due to splitting, merging and popping. A novel capacitance-based method is proposed that can estimate volume-mode bubble frequency changes due to bubble size, shape, and proximity to solid and air interfaces. Our acoustic transfer model is able to capture cavity resonance effects due to near-field geometry, and we also propose a fast precomputed bubble-plane model for cheap transfer evaluation. In addition, we consider a bubble forcing model that better accounts for bubble entrainment, splitting, and merging events, as well as a Helmholtz resonator model for bubble popping sounds. To overcome frequency bandwidth limitations associated with coarse resolution fluid grids, we simulate micro-bubbles in the audio domain using a power-law model of bubble populations. Finally, we present several detailed examples of audiovisual water simulations and physical experiments to validate our frequency model.
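
The “volume-mode bubble frequency” mentioned above has a classical starting point, the Minnaert resonance. As a hedged aside, this is the standard textbook formula, not the paper’s capacitance-based estimator, which corrects it for bubble shape and nearby boundaries:

```python
# Minnaert resonance of a spherical air bubble in water (textbook baseline;
# the paper's capacitance-based method handles shape and boundary effects).
import math

def minnaert_frequency(radius_m, pressure_pa=101325.0,
                       gamma=1.4, rho=1000.0):
    """f0 = (1 / (2*pi*r)) * sqrt(3 * gamma * p / rho), in Hz."""
    return math.sqrt(3 * gamma * pressure_pa / rho) / (2 * math.pi * radius_m)

# A bubble with 1 mm radius rings at roughly 3.3 kHz.
print(round(minnaert_frequency(1e-3)))
```
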
Posted 7/21/2016