Papers presented at SIGGRAPH 2015

Columbia University researchers are presenting six papers at this year’s SIGGRAPH. In addition, Changxi Zheng is session chair for Wave-Particle Fluidity (Monday, August 10, 3:45-5:35 pm), and Keenan Crane (now Assistant Professor at Carnegie Mellon University) is session chair for Simsquishal Geometry (Tuesday, August 11, 9-10:30 am).
High-level descriptions of each paper are given below with links provided to the papers and companion videos.

Phasor imaging for resolving the problem of multi-path interference

Category: Computational Illumination, Monday, August 10, 9-10:30 am
Phasor Imaging: A Generalization of Correlation-Based Time-of-Flight Imaging
Phasor imaging

Mohit Gupta, Columbia University
Shree Nayar, Columbia University
Matthias B. Hullin, Rheinische Friedrich-Wilhelms-Universität Bonn
Jaime Martin, Rheinische Friedrich-Wilhelms-Universität Bonn

An interview with Mohit Gupta.
Can you briefly describe how time-of-flight imaging systems work, and their applications?
Time-of-flight cameras measure the 3D shape of objects by measuring the time it takes light to travel between the camera and the object. In recent years, they have quickly become the preferred 3D imaging method, especially in consumer settings. Microsoft Kinect, one of the fastest-selling consumer devices in history, is based on time-of-flight. Going forward, these cameras can potentially have a broad range of applications, including autonomous cars, industrial and surgical robots, human-computer interfaces, and augmented reality.
How does your approach differ from traditional time-of-flight imaging systems?
Conventional ToF systems assume that light travels only directly between the scene and the camera. While this idealized model is valid in laboratory settings, in almost any real-world setting light bounces and scatters multiple times within the scene before reaching the camera. This phenomenon, called multi-path interference, poses a fundamental challenge to conventional ToF imaging systems. Imagine a robot sorting machine parts or performing surgery inside the human body, an autonomous car driving in strong fog or a dust storm, or a robot navigating inside a cave or a room. In all these scenarios, light bounces and scatters multiple times, which results in large errors in the recovered 3D shape. Because these errors are highly structured and not random, they cannot be removed by computational post-processing alone.
Phasor imaging is a new ToF imaging technique that largely avoids the multi-path interference problem by redesigning the image capture process itself. We show that by capturing images at multiple high temporal frequencies that are slightly different, the captured images are largely free of multi-path errors to begin with. From these images, it becomes possible to recover highly accurate 3D shape even in challenging scenarios with strong multi-path interference (including fog and smoke).
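To make the two-frequency idea concrete, here is a minimal sketch (an illustration with assumed names and interfaces, not the authors’ implementation) of recovering depth from phases measured at two nearby modulation frequencies:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def depth_from_phase_pair(phi1, phi2, f1, f2):
    """Recover depth from phases phi1, phi2 (radians) measured at two
    nearby high modulation frequencies f1, f2 (Hz). The phase
    difference behaves like a measurement at the low beat frequency
    f2 - f1, giving a large unambiguous range, while each individual
    frequency stays high -- the regime phasor imaging exploits to
    suppress multi-path interference."""
    dphi = (phi2 - phi1) % (2.0 * np.pi)         # phase at the beat frequency
    return C * dphi / (4.0 * np.pi * (f2 - f1))  # round trip -> one-way depth
```

For instance, modulation frequencies of 100 MHz and 101 MHz yield a 1 MHz beat frequency and an unambiguous range of roughly 150 m, far beyond the 1.5 m that either frequency provides on its own.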
What interested you about this particular problem? What were the difficult or unexpected challenges?
Multi-path interference (MPI) is an important problem that can severely limit the applicability of ToF cameras. By solving it, we can significantly increase the range of applications in which ToF cameras can be used. From a theoretical standpoint as well, MPI is one of the most elegant problems in computer vision, widely researched over the past few decades. Existing solutions, however, are still restricted to a limited class of objects and scenes. The problem is challenging because, in addition to the errors being large (up to 50% in some settings), the structure of the errors depends on the 3D shape itself, making it a ‘chicken-and-egg problem.’ We could remove the errors if we knew the shape, but to measure the shape, we must remove the errors first.
How efficient is your approach?
The phasor imaging approach is the first to solve the MPI problem for a very general class of objects and scenes. The solution is based on a first-principles analysis of the physics of how light travels in and interacts with the physical world. The approach is highly efficient—it requires capturing only one extra image (four vs. three for conventional ToF systems) and lowers MPI errors by 1-2 orders of magnitude.
Why present your paper at SIGGRAPH?
SIGGRAPH has recently emerged as one of the most interdisciplinary venues in all of computer science, which promises to give our paper wide visibility among researchers and practitioners in a broad range of fields.

Synthesizing stripe patterns on triangulated surfaces

Stripe patterns on surfaces
Category: Geometry Field Trip, Monday, August 10, 9-10:30 am
Stripe Patterns on Surfaces
Felix Knöppel, Technische Universität Berlin
Keenan Crane, Columbia University
Ulrich Pinkall, Technische Universität Berlin
Peter Schröder, California Institute of Technology
Watch the video.
An interview with Keenan Crane.
What motivated you and your co-authors to work on a new method for computationally simulating stripes?
Although the current paper focuses on stripes, the motivation actually comes from a fundamental problem in geometry that we are all still thinking about: how do you map a curved surface to the flat plane with as little distortion as possible? This problem was originally raised by cartographers trying to make flat maps of the round Earth. What you discover very quickly is that there is no perfect way to do it: either areas get stretched out (like Greenland becoming bigger than Australia in a Mercator projection) or angles get distorted (for instance, North and East may no longer be at right angles). The route we are pursuing with “stripe patterns” suggests a third possibility: wherever map making gets hard, insert singularities that allow you to do a good job of preserving lengths and angles everywhere else. As you add more and more singularities, the map gets closer and closer to being perfect (or “isometric”) away from these isolated points. The current paper is one step in that direction, where we just focus on one of two coordinates, i.e., just longitude rather than longitude and latitude. As you can see from the pictures, our longitude lines are already looking pretty good!
Globe vs. bunny
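As a toy illustration of the longitude-only idea above (and only that; the paper’s algorithm works on arbitrary triangulated surfaces and places singularities automatically), a stripe pattern on a sphere can be written as the level sets of a periodic function of longitude alone:

```python
import numpy as np

def longitude_stripes(points, n_stripes=16):
    """Color points on the unit sphere into stripes that follow lines
    of longitude. points is an (N, 3) array of unit vectors (an
    assumption of this sketch); the result is +1 or -1 per point,
    alternating between adjacent stripes."""
    lon = np.arctan2(points[:, 1], points[:, 0])  # longitude in (-pi, pi]
    return np.sign(np.sin(n_stripes * lon))
```

The two poles, where every stripe converges, are exactly the kind of singularities the interview describes: on a sphere no pattern can avoid them entirely, so the question becomes where to put them.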

Can you describe at a high level the main innovation of your paper? How much of a departure is your method from previous methods on generating stripes?

The method builds on a lot of great work done in the past decade or so by folks in geometry processing and discrete differential geometry. From a technical point of view, perhaps the biggest point of departure is simply in recognizing that for many problems singularities are actually a desirable feature, rather than a nasty artifact. At a computational level, this means we can relax some of the constraints that previously made solutions very difficult or expensive to produce—the current method is about ten times faster than anything that came before it. The paper also adds some new insights to the problem by linking it back to ideas from smooth differential geometry, which allows us to make additional guarantees that we didn’t have before (for instance, that there are no ugly “breaks” in the pattern as you walk around the surface).
What insight prodded you and your co-authors to think new singularities would render more realistic stripe patterns?
The connection to stripe patterns actually happened by chance: originally we were working on the map making problem, and noticed that our solutions bear a striking resemblance to natural phenomena like spikes on a cactus or kernels on an ear of corn. If you dig a bit further, you discover that this phenomenon (known variously as “edge dislocations” or “topological defects”) shows up at all scales in nature, from the crystal structure of atoms to sand dunes in the desert. At some point I contacted a few good friends from grad school who provided all sorts of beautiful examples from the physical sciences, some of which appear in the paper. The fact that we can model the basic behavior of so many different phenomena using one simple, geometric formulation is very satisfying. In the future it would be fun to explore how our algorithm might also apply to problems in physical simulation—for instance, how does a piece of metal like a spoon “remember” its shape when you bend it? This has everything to do with how singularities get distributed throughout the metal, which in turn has to do with minimizing distortion (as in the map making problem).
Your image of a rabbit with embedded cacti stripes made the cover of this month’s ACM Transactions on Graphics. Of all the possible images to be seen at SIGGRAPH this month, why do you think yours was chosen?
I couldn’t tell you! But the SIGGRAPH community produces some truly beautiful work every year, and I’m very flattered for our work to be recognized in this way.

A method for interactively deforming shapes

Choosing handles in interactive time
Category: Simsquishal Geometry, Tuesday, August 11, 9-10:30 am
Linear Subspace Design for Real-Time Shape Deformation
Yu Wang, University of Pennsylvania
Alec Jacobson, Columbia University, ETH Zürich
Jernej Barbič, University of Southern California
Ladislav Kavan, University of Pennsylvania
An interview with Alec Jacobson.
Watch the video.
What is the practical application of your work?
In computer graphics, a common task is to change the shape or pose of an object or character. The normal process is for the user to pick a point, or handle, from which to control movement. As the user moves the handle, changes propagate outward, affecting the nearby area. The question is always, how far does the influence of the handle extend and how quickly does the influence fall off? Previously the user decided, typically by using a paintbrush-type tool to define the area and the rate of fall-off.
Over the past four years, we have been working to automate this interpolation process by developing an optimization that calculates how the influence of a handle should fall off with distance, and does so in a way that ensures smooth, natural motions.
This makes the deformation task easier for the user who no longer has to figure out the interpolations. But the calculations take time; every time the user inserts a handle, it takes a few minutes for the calculations to complete, interfering with the desired interactivity. Our new paper fixes this problem by speeding up the calculations to interactive time.
What is the technical innovation that allowed you to reduce the time of calculations?
There are several; I will mention two.
We examined the algebraic properties of our optimization for computing correspondences and discovered that if we were more careful about the cost function, we could speed up the process. Previously the optimization would consider all possible weighting functions between 0 and 1 and choose the smoothest one. We found that by carefully modeling the idea of smoothness, it suddenly became possible to reuse much of the computation for finding the weights of one handle across all other handles. That was a big time savings.
Another came from removing the non-negativity constraint on the weighting function, a constraint that seems very intuitive to have: if a weight is negative and its handle gets moved right, the affected region responds in the opposite direction and goes left. A non-negativity constraint also means optimizing not over all functions, only over those that are non-negative. But the constraint makes the optimization much harder. It turns out to be easier and faster to optimize over a generic function set than over a very specific one.
What we realized, to our surprise, was that we could either have very smooth deformations—without bumps at each of the handles—or we could have non-negative weights, but we could not have both. It was a tradeoff we were willing to make; we could put up with a bit of non-intuitiveness for better results. Further, without that constraint, the optimization is much faster.
In the end, we were able to speed things up to the point that a user can plop down a handle, decide it is not right, and immediately choose another handle and continue working. The design process is now easy and fluid.
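The flavor of these speedups can be pictured with a toy sketch (a harmonic-style weight solve on a Laplacian-like matrix; an illustration, not the paper’s exact smoothness energy): without a non-negativity constraint, every handle’s weight function satisfies the same linear system with different boundary values, so one sparse solve serves all handles at once.

```python
import numpy as np

def handle_weights(L, handles):
    """Toy sketch: smooth weight functions over n mesh vertices from a
    Laplacian-like matrix L (n x n), one weight function per handle,
    constrained to 1 at its own handle vertex and 0 at the others.
    The system matrix depends only on which vertices are handles, so
    the expensive part of the solve is shared by every handle."""
    handles = np.asarray(handles)
    n = L.shape[0]
    free = np.setdiff1d(np.arange(n), handles)
    W = np.zeros((n, len(handles)))
    W[handles, np.arange(len(handles))] = 1.0   # constraints at the handles
    A = L[np.ix_(free, free)]                   # one matrix for all handles
    B = -L[np.ix_(free, handles)] @ W[handles]  # one right-hand side per handle
    W[free] = np.linalg.solve(A, B)             # all weight functions at once
    return W
```

On a path graph, for example, these weights reduce to piecewise-linear “tent” functions between handles; on a mesh Laplacian they fall off smoothly with distance, which is the behavior the interview describes.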
When can people in industry expect to be able to interactively deform shapes using your method?
It might be a while. Studios have well-established procedures, and introducing something new into that can take time. Plus new features have to be adapted for the different packages where the underlying geometries may be hidden by additional layers.
It is the research community that will make immediate use of new projects and features. We have made our software available online, and we expect that the first users will be researchers whose work depends on deformations that perhaps they were not able to do previously. Now instead of having to rely on artists or modelers, researchers themselves can explore new territory.

Transforming almost any object into a Rubik’s Cube

Transforming a 3D object into a twisty puzzle

Category: Fabrication & Function, Wednesday, August 12, 10:45 am to 12:15 pm
Computational Design of Twisty Joints and Puzzles
Timothy Sun, Columbia University
Changxi Zheng, Columbia University

Watch-video
The popular Rubik’s Cube is a 3D puzzle composed of 26 separate interlocking pieces that rotate about six rotation axes, each of which can be turned independently. While the most well-known version is a cube, others are constructed using different geometric shapes (tetrahedron, octahedron, dodecahedron, icosahedron); together they compose a class of twisty puzzles.
Now two Columbia researchers have greatly expanded what else can be included in this class of puzzles. Timothy Sun and Changxi Zheng have developed an algebraic method that makes it possible for ordinary users to create twisty puzzles out of almost any 3D object. Given rotation axes defined by the user, the method automatically adjusts the supplied axes while adding others to ensure all parts rotate freely. So that rotating pieces do not block or collide with one another, all possible configurations of pairs of pieces are enumerated (by defining a group structure on the pieces); a toy version of this enumeration is sketched below. Should a possible collision be detected, the pieces are automatically deformed—while retaining as much of the original form as possible—to ensure free rotation of all parts and yield a workable twisty joint.
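Here is that sketch (a toy under an assumed representation in which each axis rotation is a permutation of piece slots; the paper’s actual machinery is more involved): a breadth-first search over pairs of slots finds every joint configuration two pieces can reach, which is exactly the set of cases a collision test must cover.

```python
from collections import deque

def pair_orbits(generators, n):
    """For a puzzle whose n piece slots are permuted by each axis
    rotation (generators[k][i] = where rotation k sends slot i),
    compute, for every ordered pair of slots, all pairs of positions
    the two pieces occupying them can jointly reach. Because
    permutations of a finite set generate a group, forward closure
    under the generators already yields the full orbit."""
    orbits = {}
    for a in range(n):
        for b in range(n):
            if a == b or (a, b) in orbits:
                continue
            seen = {(a, b)}
            queue = deque([(a, b)])
            while queue:
                u, v = queue.popleft()
                for p in generators:
                    nxt = (p[u], p[v])
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            for pair in seen:   # every pair in an orbit shares the same set
                orbits[pair] = seen
    return orbits
```

Enumerating orbits of pairs, rather than the full (potentially astronomical) group, keeps the collision analysis tractable while still covering every relative placement two pieces can ever occupy.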
Holding all pieces together is an internal mechanism where the centers (which rotate in place) of each rotation axis are connected together. This internal mechanism is generated so that all pieces, once assembled, snap together and interlock with one another to form a working puzzle. No screws or springs are needed, thereby facilitating 3D printing of the snapping mechanism.
The work is both original—representing a new theoretical result—and sophisticated, drawing on algebraic methods to determine the best places to put rotation axes.

Precisely aligning colors and patterns to arbitrary 3D objects

Computational hydrographic printing
Category: Computational Printing, Thursday, August 13, 9-10:30 am
Computational Hydrographic Printing
Yizhong Zhang, Zhejiang University
Chunji Yin, Zhejiang University
Changxi Zheng, Columbia University
Kun Zhou, Zhejiang University
As 3D printing gains in popularity, the demand will grow for the ability to decorate objects by painting them or applying patterns. One common method to do so today is hydrographic (or water transfer) printing, which works as follows: the pattern to be applied to the object is printed onto a piece of PVA (polyvinyl alcohol) film. The film is then floated on water and sprayed with an activator chemical to soften it and make it sticky. The object, from a suspended position over the floating film, is slowly lowered and pushed through the film, which wraps around the object while the ink on the film transfers to the object.
The one problem is that it’s not possible to precisely predict where ink will adhere to a complex object due to the distortion that occurs as the object presses against and stretches the film. For this reason, hydrographic printing is limited to using repeating or abstract patterns where alignment is not critical.
But now researchers from Columbia and Zhejiang University (China) have developed a way to model and simulate the distortion and then compute a pixel image that takes it into account. The key innovation is modeling the film as a viscous sheet. Building on previous work on fluid and viscous sheets done at the Columbia Computer Graphics Group (see “Discrete Viscous Sheets” by Batty, C., Uribe, A., Audoly, B., and Grinspun, E., 2012), the researchers model the floating color film as a thin viscous sheet to simulate its distortion as an object is immersed; from this simulation, they can establish a mapping from points on the color film to their eventual locations on the object and compute a new, precompensated pixel image.
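In code terms, the precompensation step amounts to an inverse lookup (a hedged sketch with assumed names and interfaces, not the authors’ code; the film-to-object map is what the viscous-sheet simulation provides):

```python
import numpy as np

def precompensate_film_image(target_color, film_to_object, film_res):
    """Build the image to print on the film so that the desired
    pattern appears undistorted on the object. target_color(p) gives
    the RGB color wanted at object surface point p; film_to_object
    maps a film pixel (i, j) to the 3D point where it lands, as
    predicted by the simulation (both interfaces are assumptions of
    this sketch)."""
    h, w = film_res
    image = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            # Print at each film pixel the color its landing point needs.
            image[i, j] = target_color(film_to_object[(i, j)])
    return image
```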
To validate the concept, the researchers built an apparatus using off-the-shelf components ordered online in China. It incorporates a 3D vision system for precisely measuring and controlling the dipping process. A video of the experiments has been viewed over half a million times.
The method is low cost (less than 40 US cents per print) and works for complex surface geometries and a wide range of materials, including plastic, wood, and porcelain.

Vortex sheets: A faster, more efficient method for simulating bubbles

Double bubbles
Category: Simulating With Surfaces, Thursday, August 13, 2-3:30 pm
Double Bubbles Sans Toil and Trouble: Discrete Circulation-Preserving Vortex Sheets for Soap Films and Foams
Fang Da, Columbia University
Christopher Batty, University of Waterloo
Chris Wojtan, Institute of Science and Technology Austria
Eitan Grinspun, Columbia University
Watch the video.
While people tend to see a bubble as a thin soap film enveloping a volume of air, the most commonly used computer graphics methods simulate bubbles by considering not the visible surface of the bubble but the volume of air inside and outside it. Such methods measure velocity, circulation, and other variables at certain points in the volume of air and infer the location and shape of the bubble surface from the information collected at those points.
This approach of taking measurements at non-moving points within a flowing fluid, termed Eulerian in fluid dynamics, is computationally intensive when used to simulate bubbles because it requires computing velocities for points within the entire 3D air volume—even though the visual appearance of the bubbles is completely dominated by the thin liquid surface. Behind this difficulty are the equations governing the air flow, known as the Navier-Stokes equations, which are expensive to solve over the air space.
Wanting something simpler and more computationally efficient, the paper’s authors speculated that bubbles might be simulated by tracking only surface points, which requires far fewer calculations than Eulerian methods. (Measuring velocities at points that move with the fluid is termed Lagrangian.) The authors realized that a vortex sheet—used for modeling the interface between two fluids moving at different speeds—would be ideal for representing such a surface.
In a vortex sheet method, circulation points are placed on a surface (a non-manifold mesh). Circulation is measured at each point, describing the velocity just below and just above the sheet at that point.
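The payoff of keeping circulation on the surface is that the velocity anywhere in space can be reconstructed from it alone. A rough illustration (a regularized Biot-Savart sum over vortex segments; the discretization and smoothing here are assumptions of this toy, not the paper’s scheme):

```python
import numpy as np

def induced_velocity(x, seg_starts, seg_ends, circulations, eps=1e-3):
    """Velocity induced at point x (a 3-vector) by vortex segments
    with endpoints seg_starts[k], seg_ends[k] carrying circulation
    circulations[k], using a smoothed Biot-Savart kernel. No
    volumetric grid is involved: the surface data alone determine the
    flow, which is the economy the vortex-sheet approach exploits."""
    v = np.zeros(3)
    for a, b, gamma in zip(seg_starts, seg_ends, circulations):
        dl = b - a                 # segment vector (direction times length)
        r = x - 0.5 * (a + b)      # offset from the segment midpoint
        r2 = r @ r + eps * eps     # regularized squared distance
        v += gamma * np.cross(dl, r) / (4.0 * np.pi * r2 ** 1.5)
    return v
```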
While vortex sheets are often used to represent smoke, there was no precedent in computer graphics for using vortex sheets to represent liquids, and the authors couldn’t know how well they would work at simulating bubbles and their associated dynamic effects. That work was left to Fang Da.
The main technical challenge was handling the topological complexity presented by a surface-only approach. In particular, Da would have to define the surface curvature when two or more bubbles join together in a cluster and form a non-manifold network of films whose surfaces are connected but not smooth. A simple and general strategy would be essential, given the difficulty of enumerating the countless cases of non-manifold junctions that occur in three dimensions (i.e., foams).
Finding no precedent for vortex sheets used in a non-manifold fashion, Da resolved the problem by recasting it: rather than looking at the thin film itself, he looked at the liquid-air interface, along which the junction becomes a mathematical line with no branching.
Two ways to represent the triple junction of soap films: as a non-manifold intersection (left) and as a mathematical line with no branching (right).
Starting from the Navier-Stokes equations, Da was able to derive a very simple and efficient equation tailored for vortex sheets to describe the motion of soap films.
In the end, Da’s formulation proved not only mathematically accurate; it also supported a range of bubble behaviors, with no need to rely on physically unprincipled assumptions or to avoid hard-to-handle bubble dynamics. A video accompanying the paper gives the visual proof, showing how the vortex sheet method enables the interaction of soap films with wires, as well as foam rearrangement, catenoid collapse, blowing bubbles, and double bubbles being pulled apart. According to Da, “It’s really a great match between technique and problem: the vortex-sheet technique is specialized but fits this particular and challenging problem of bubble dynamic behavior very well.”
Posted 8/25/2015