In Memoriam: Joseph F. Traub

 

Joseph F. Traub
Let me tell you how I got hooked on computing. For my thesis I worked for six months starting from a mathematical model of the helium atom and writing a program to compute the energy and other parameters of the atom. I took the cards from the IBM 650 and loaded them on the printer. The printer started spewing out approximations to the ground state energy of helium. I was using a variational principle, which means I was converging down to the ground state energy of the helium atom. Watching, after the six months of work, the numbers rolling off the printer, and seeing that the initial numbers approximated the experimentally measured ground state energy of the helium atom to four places. That was the moment.
Joseph F. Traub, a pioneering computer scientist and founder of the Computer Science department at Columbia University, died Monday, August 24, 2015, in Santa Fe, NM. He was 83. Most recently the Edwin Howard Armstrong Professor of Computer Science, Traub began working in computing years before computer science existed as a discipline, and he did much to shape the field.
Traub was most known for his work on optimal algorithms and computational complexity applied to continuous scientific problems. In collaboration with Henryk Woźniakowski, he created the field of information-based complexity, where the goal is to understand the cost of solving problems when information is partial, contaminated, or priced. Applications for information-based complexity are diverse and include differential and integral equations, continuous optimization, path integrals, high-dimensional integration and approximation, and low-discrepancy sequences.
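To give a flavor of the kind of question information-based complexity studies, here is a minimal sketch (an illustrative example, not drawn from Traub's own work) comparing plain Monte Carlo with a low-discrepancy Halton sequence for approximating a high-dimensional integral from a limited number of function samples.

import numpy as np

def halton(n, dim):
    # First n points of the Halton low-discrepancy sequence in [0,1]^dim.
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    pts = np.empty((n, dim))
    for j in range(dim):
        base = primes[j]
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            pts[i, j] = x
    return pts

def f(x):
    # Test integrand whose exact integral over [0,1]^d equals 1.
    return np.prod((np.pi / 2) * np.sin(np.pi * x), axis=-1)

d, n = 6, 4096
rng = np.random.default_rng(0)
mc  = f(rng.random((n, d))).mean()   # plain Monte Carlo estimate
qmc = f(halton(n, d)).mean()         # quasi-Monte Carlo (Halton) estimate
print(f"Monte Carlo: {mc:.4f}, Halton: {qmc:.4f} (exact value: 1)")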
Understanding the role of information about a problem was a unifying theme of Traub’s contributions to a number of diverse areas of computing. Often collaborating with others, he created significant new algorithms, including the Jenkins-Traub algorithm for polynomial zeros, the Kung-Traub algorithm for computing the expansion of an algebraic function, and the Shaw-Traub algorithm for the fast evaluation of polynomials and their derivatives. He authored or edited ten monographs and some 120 papers in computer science, mathematics, physics, computational finance, and quantum computing.
Apart from his scientific research, he had a major role in building and leading organizations that promoted computer science. In 1971, at the age of 38, he was appointed chair of the computer science department at Carnegie Mellon University (CMU), overseeing its expansion from fewer than 10 professors to 50, and making it one of the strongest computer science departments in the country. Based on his achievements at CMU, Columbia University in 1979 extended an offer to Traub to found the University’s Computer Science department. He accepted the offer and chose to locate Computer Science within the Engineering School, which at the time had a single computer and only three tenured faculty members teaching computer science, despite huge demand for computing classes.
After securing a $600,000 gift from IBM (which later provided another $4 million), he was able to add faculty and attract top students. Within a year the department was awarding bachelor’s and master’s degrees as well as PhDs. He would chair the department until 1989.
In 1982 he oversaw the construction of the Computer Science Building, working closely with architects to come up with a final design that would later win awards.
Traub liked building things from scratch. In 1985 while still chair of the Computer Science department, he became the founding editor-in-chief of the Journal of Complexity (a position he held at the time of his death). In 1986, he founded the Computer Science and Technology Board (CSTB) of the National Research Council, serving as its chair from 1986 until 1992 and again from 2005 to 2009.
His awards and honors are many and include election to the National Academy of Engineering in 1985, the 1991 Emanuel R. Piore Gold Medal from IEEE, and the 1992 Distinguished Service Award from the Computing Research Association (CRA). He was a Fellow of the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), the Society for Industrial and Applied Mathematics (SIAM), and the New York Academy of Sciences (NYAS). He was selected by the Accademia Nazionale dei Lincei in Rome to present the 1993 Lezione Lincee, a cycle of six lectures. Traub received the 1999 Mayor’s Award for Excellence in Science and Technology, an award presented by Mayor Rudy Giuliani.
In 2012, his 80th birthday was commemorated by a symposium at Columbia’s Davis Auditorium to celebrate his research and contributions to computer science.
Traub’s “contributions to Columbia’s Computer Science Department have been instrumental in establishing the strong foundation of excellence of our Computer Science department today, enabling our ongoing frontier leadership in this field,” said Dean Mary C. Boyce. “Joe will be sorely missed by all of us at Columbia and by the computer science community across the globe.”
A life of science and discovery
Traub always described himself as lucky: Lucky in his early life that his parents were able to flee Nazi Germany in 1939 and settle in New York City; that he had a knack for math and problem-solving just when those skills were needed; that a fellow student’s prescient suggestion led him to visit IBM’s Watson Laboratories where he first encountered computers. And lucky to be among the first to enter a new, unexplored field when he had the ambition to make new discoveries and a hunger to do something significant. In an interview recalling his life, he once said “I’m almost moved to tears but who could have expected such a wonderful life and such a wonderful career.”
That he returned to New York City to found Columbia’s computer science department is entirely appropriate. He attended both Bronx High School of Science and City College of New York (earning degrees in math and physics) before entering Columbia University in 1954 intent on a PhD in theoretical physics. That plan changed when he discovered computers, not at Columbia—which had no computers—but at the IBM Watson lab then located in Casa Hispanica, just off campus at 612 W. 116th Street. He was hired there as a fellow, gaining the perk of unlimited computer time.
In 1959 he earned his PhD under the Committee of Applied Mathematics at Columbia. After his first choice to work on a chess problem was rejected, he proposed instead a quantum problem that involved six months of programming to calculate the ground-state energy of a helium atom, correct to four decimal places.
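As an illustration of the kind of calculation involved, the sketch below uses the textbook one-parameter variational estimate for helium (an assumed illustration, not a reconstruction of Traub's IBM 650 program): scan an effective nuclear charge and keep the lowest energy, which by the variational principle is the best upper bound on the true ground-state energy.

import numpy as np

Z = 2.0  # helium nuclear charge

def energy(zeff):
    # Expectation value of the helium Hamiltonian (in Hartree atomic units)
    # for the trial wavefunction exp(-zeff * (r1 + r2)).
    return zeff**2 - 2.0 * Z * zeff + (5.0 / 8.0) * zeff

zeffs = np.linspace(1.0, 2.0, 2001)
energies = energy(zeffs)
best = energies.argmin()
print(f"best Zeff ~ {zeffs[best]:.4f}, E ~ {energies[best]:.4f} Hartree "
      f"(experiment: about -2.9037 Hartree)")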
After graduating from Columbia, Traub went to work at Bell Labs, then in its “golden ’60s,” when researchers were given wide latitude to choose projects and conduct pure research. It was there that a colleague one day walked into his office with a problem: could Traub find the zero of a function that involved an integral? Mulling over the problem led to two observations: one, it was expensive to compute the function; and two, there were many ways of solving it. His thinking about how to select the optimal algorithm culminated in his 1964 monograph Iterative Methods for the Solution of Equations. It was the start of his career, with many publications to come.
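The setup he describes, an expensive-to-evaluate function whose zero is wanted in as few evaluations as possible, can be sketched generically; the integrand below is assumed for illustration and is not the Bell Labs problem itself.

import numpy as np

def f(x, n=2000):
    # f(x) = integral_0^x exp(-t^2) dt - 0.5, via a composite trapezoid rule;
    # the quadrature is what makes each evaluation "expensive".
    t = np.linspace(0.0, x, n)
    y = np.exp(-t * t)
    h = t[1] - t[0]
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) - 0.5

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    # Secant iteration: derivative-free, superlinear convergence,
    # one new function evaluation per step.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(f"zero near x = {secant(f, 0.5, 1.0):.6f}")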
His luck extended to his personal life. He was married to Pamela McCorduck, a noted author who also taught science writing at Columbia. He enjoyed skiing, tennis, hiking, travel, and good food.
He regularly spent his summers in Santa Fe, where he was an External Professor at the Santa Fe Institute and played a variety of roles over the years, often organizing workshops to bring together those working in science and math. It was in Santa Fe where he died Monday morning, unexpectedly and quickly, after having made plans to travel to Germany, Poland, and CMU. He is survived by his wife Pamela and two daughters, Claudia Traub-Cooper and Hillary Spector.
Joseph Traub was an important and valued member of the Computer Science department he founded. He will be missed by faculty, staff, and students.
Posted 8/25/2015
Linda Crane

Vishal Misra goes before Indian Parliament Committee to present views on net neutrality

The contentious issue of net neutrality, which roiled the US earlier this year, is now playing out in India, with Vishal Misra an active participant. Last week he appeared before Parliament to present his views on net neutrality, and subsequently contributed articles and interviews for leading media outlets (including in The Hindu and in The Financial Express).
His opinion is sought both for his deep grasp of the technical issues surrounding net neutrality and for his experience building Internet-based businesses. An associate professor of computer science at Columbia, Misra has been researching Internet economics issues for the past eight years, particularly how profit-motivated decisions by Internet service providers are eroding the long-standing principle of net neutrality. (His research is summarized in Net neutrality is all good and fine; the real problem is elsewhere.) He is also an early and still active Internet entrepreneur, having co-founded CricInfo, the world’s most popular single-sport website (exceeding even nfl.com and mlb.com in popularity) and now ESPN’s most popular property (ESPN acquired CricInfo in 2007). More recently, Misra founded the data center storage startup Infinio.
Misra’s appearance before Parliament came at the invitation of the Standing Committee on Information Technology, which is considering a recent government report (prepared by the Department of Telecommunications, or DoT) on net-neutrality policy formulations. The report was made available for public comments while a small group of interested parties—including Misra, Google, Facebook, Microsoft, and a small number of local companies and public interest groups—were asked by the committee to present their views on the report in person.
The DoT report for the most part enforces net neutrality by banning blocking, throttling, and prioritizing content. However, it still allows for—subject to regulatory approval—the practice of zero rating.
In zero rating, consumers access certain apps and sites for free, without having to pay data charges. Those charges are either absorbed by the service provider (as a way to attract customers) or are underwritten by content providers through payments to the service provider. Facebook for instance might pay Airtel (India’s largest telecom operator) an agreed-upon amount to cover data charges for Facebook customers on Airtel. For Facebook customers on networks other than Airtel, Facebook would separately negotiate with those other providers.
On the surface zero rating seems like a good deal for consumers, and Facebook has been especially active in promoting zero rating, pitching it as an accessibility effort to make Internet access more affordable for the large percentage of Indian consumers who otherwise couldn’t afford to pay for that access. (It is estimated that only around 19% of India’s population has Internet access.)
But as critics point out, zero rating creates an uneven playing field that benefits large, well-funded companies that can afford to subsidize their customers’ data charges. Their packets are “free” while packets from companies unable to afford zero-rating payments are not. It’s a pricing structure that discriminates against mainly small companies and startups, even if they offer superior services or innovative features.
While the exact definition of net neutrality is often debated (Misra himself addresses this issue in What Constitutes Net Neutrality?), net neutrality is universally understood to prohibit service providers from discriminating among packets based on who the content is for. Enacting the DoT’s rule as it is written now gives an opening to Indian service providers to do just that.
Facebook, Google, and other large, well-funded companies not surprisingly tend to support zero rating. From public statements, it can be assumed Facebook voiced its support for the policy before the Standing Committee on Information Technology. (Google declined its invitation to testify.)
Testimony before the committee is confidential, but Misra’s support for net neutrality practices is well-known not only from articles and statements he has made in the past, but through his extensive research examining network neutrality from engineering, networking, and economic perspectives. (See On Cooperative Settlement Between Content, Transit and Eyeball Internet Service Providers and The Public Option: A Non-regulatory Alternative to Network Neutrality.) This research in particular highlights the danger of anti-competitive, non-neutral practices. Whether it’s ISPs favoring certain packets over others (as in zero rating) or a lack of competition in the last-mile connection, the effect is the same: innovation suffers, and service providers lose the incentive to improve service and keep prices low.
The view that zero rating should be banned appears to be shared by a growing segment of the Indian population. The deadline for the public to comment on the government report, originally set for August 15, had to be extended by five days to accommodate the huge number of comments, almost all of which favor banning zero rating.
The government’s response to the DoT report and public comments is expected in two to three months.
About Vishal Misra

Vishal Misra is an Associate Professor in the Department of Computer Science at Columbia University. He is credited with inventing live-microblogging at Cricinfo, a company he co-founded while a graduate student at UMass Amherst, thus predating Twitter by 10 years. Cricinfo was later acquired by ESPN and is still the world’s most popular sports portal.
Dr. Misra has received a National Science Foundation CAREER Award, a Department of Energy CAREER Award, and Google and IBM Faculty Awards. His research emphasis is on mathematical modeling of networking systems, bridging the gap between practice and analysis.
He served as the Vice-Chair of the Computer Science Department at Columbia University from 2009 to 2011, and in 2011 he spun out Infinio, a company in the area of datacenter storage. After raising $24 million from top-tier venture capital firms and recruiting a professional executive team, he has transitioned to the role of Chief Scientist of the company. Infinio is based in Boston and employs over 50 people.
Misra received his B. Tech. from IIT Bombay (1992) and an MS (1996) and PhD (2000) from UMass Amherst, which just awarded him a Junior Alumni Award for extraordinary effort and notable early-career success.
He has been interested in net neutrality for several years. In 2008, he coauthored On Cooperative Settlement Between Content, Transit and Eyeball Internet Service Providers, which predicted the rise of paid peering by looking at the profit-motivated decisions of the ISPs. In 2012, he and Richard T. B. Ma authored The Public Option: A Non-regulatory Alternative to Network Neutrality to understand how the introduction of a public-option ISP would change the behavior of ISPs.
Misra also sometimes blogs on the subject at Peer unreviewed and tweets about it at @vishalmisra. For more information, see his Columbia website. He also maintains a personal blog, A little corner of a foreign field, where the subject is often cricket.

Henning Schulzrinne named recipient of 2016 IEEE Internet Award

Henning Schulzrinne
Henning Schulzrinne, the Julian Clarence Levi Professor of Mathematical Methods and Computer Science at The Fu Foundation School of Engineering and Applied Science at Columbia University, has been named the recipient of the 2016 IEEE Internet Award for exceptional contributions to the advancement of Internet technology.
Schulzrinne was recognized “for formative contributions to the design and standardization of Internet multimedia protocols and applications.” Schulzrinne is particularly known for his contributions in developing the Session Initiation Protocol (SIP) and Real-Time Transport Protocol (RTP), the key protocols that enable Voice-over-IP (VoIP) and other multimedia applications. Each is now an Internet standard and together they have had an immense impact on telecommunications, both by greatly reducing consumer costs and by providing a flexible alternative to the traditional and expensive public-switched telephone network.
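To give a concrete sense of what RTP standardizes, here is a minimal sketch that decodes the 12-byte fixed RTP header defined in RFC 3550; the sample packet bytes are made up for the example, and this is an illustration rather than code from Schulzrinne's projects.

import struct

def parse_rtp_header(packet: bytes) -> dict:
    # Fixed RTP header (RFC 3550): two flag bytes, sequence number,
    # timestamp, and synchronization source identifier (SSRC).
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":      b0 >> 6,        # should be 2 for RTP
        "padding":      (b0 >> 5) & 1,
        "extension":    (b0 >> 4) & 1,
        "csrc_count":   b0 & 0x0F,
        "marker":       b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence":     seq,
        "timestamp":    ts,
        "ssrc":         ssrc,
    }

# Hypothetical packet: version 2, payload type 0 (PCMU audio), seq 1, ts 160.
sample = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0x12345678)
print(parse_rtp_header(sample))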
“This award also recognizes the work by my students and visitors in the Columbia IRT lab as well as all the other colleagues who contributed to making Internet-based multimedia possible,” says Schulzrinne, in referring to the Internet Real-Time (IRT) Lab, which he directs and which conducts research in the areas of Internet and multimedia services.
The Internet Award follows on the heels of two other honors recently accorded Schulzrinne. In January, he was named an ACM Fellow, and in December 2014 he received an Outstanding Service Award from the Internet Technical Committee (ITC), of which he was the founding chair. In 2013, Schulzrinne was inducted into the Internet Hall of Fame. Other notable awards include the New York City Mayor’s Award for Excellence in Science and Technology and the VON Pioneer Award.
Schulzrinne, whose research interests include applied network engineering, wireless networks, security, quality of service, and performance evaluation, continues to work on VoIP and other multimedia applications and is currently investigating an overall architecture for the Internet of Things as well as ways to make it easier to diagnose network problems. He is also active in designing technology solutions to limit phone spam (“robocalls”) and recently testified on this topic before the Senate Special Committee on Aging.
In addition to his research, Schulzrinne is active in public policy and in serving the broader technology community. From 2012 until 2014, he was the Chief Technology Officer for the Federal Communications Commission, where he guided the FCC’s work on technology and engineering issues and played a major role in the FCC’s decision to require mobile carriers to support customers’ abilities to contact 911 using text messages. He continues to serve as a technical advisor to the FCC.
Schulzrinne is a past member of the Board of Governors of the IEEE Communications Society and a current vice chair of ACM SIGCOMM. He has served on the editorial boards of several key publications, chaired important conferences, and published more than 250 journal and conference papers and more than 86 Internet Requests for Comments.
Schulzrinne received his undergraduate degree in economics and electrical engineering from the Darmstadt University of Technology, Germany, his MSEE degree as a Fulbright scholar from the University of Cincinnati, Ohio, and his PhD from the University of Massachusetts Amherst.
Posted 6/30/2015
-Linda Crane

Papers presented at SIGGRAPH 2015

Columbia University researchers are presenting six papers at this year’s SIGGRAPH. In addition, Changxi Zheng is session chair for Wave-Particle Fluidity (Monday, August 10, 3:45-5:35 pm), and Keenan Crane (now Assistant Professor at Carnegie Mellon University) is session chair for Simsquishal Geometry (Tuesday, August 11, 9-10:30 am).
High-level descriptions of each paper are given below with links provided to the papers and companion videos.

Phasor imaging for resolving the problem of multi-path interference

Category: Computational Illumination, Monday, 10 August, 9-10:30 am
Phasor Imaging: A Generalization of Correlation-Based Time-of-Flight Imaging
Phasor imaging

Mohit Gupta, Columbia University
Shree Nayar, Columbia University
Matthias B. Hullin, Rheinische Friedrich-Wilhelms-Universität Bonn
Jaime Martin, Rheinische Friedrich-Wilhelms-Universität Bonn

An interview with Mohit Gupta.
Can you briefly describe how time-of-flight imaging systems work, and their applications?
Time-of-flight cameras measure the 3D shape of objects by measuring the time it takes light to travel between the camera and the object. In recent years, they have fast become the preferred 3D imaging method, especially in consumer settings. Microsoft Kinect, one of the fastest-selling consumer devices in history, is based on time-of-flight. Going forward, these cameras can potentially have a broad range of applications, including autonomous cars, industrial and surgical robots, human-computer interfaces, and augmented reality.
How does your approach differ from traditional time-of-flight imaging systems?
Conventional ToF systems assume that light travels only directly between the scene and the camera. While this idealized model is valid in laboratory settings, in almost any real-world setting light bounces and scatters multiple times within the scene before reaching the camera. This phenomenon, called multi-path interference, poses a fundamental challenge to conventional ToF imaging systems. Imagine a robot sorting machine parts or performing surgery inside the human body, an autonomous car driving in strong fog or a dust storm, or a robot navigating inside a cave or a room. In all these scenarios, light bounces and scatters multiple times, which results in large errors in the 3D shape. Because these errors are highly structured and not random, they cannot be removed just by using computational algorithms.
Phasor imaging is a new ToF imaging technique that avoids the multipath interference problem to a large extent by re-designing the image capture process itself. We show that by capturing images at multiple high temporal frequencies that are slightly different, the captured images are error free to start with. From these images, it becomes possible to recover highly accurate 3D shape even in challenging scenarios with strong multi-path interference (including fog and smoke).
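To illustrate the general flavor of the multi-frequency idea, here is a toy dual-frequency depth-recovery sketch (a generic illustration with assumed frequencies and depth, not the paper's phasor-imaging algorithm): phases measured at two slightly different modulation frequencies each wrap ambiguously on their own, but their difference recovers depth over a much larger unambiguous range.

import numpy as np

c = 3e8                      # speed of light, m/s
f1, f2 = 100e6, 105e6        # two slightly different high modulation frequencies, Hz
d_true = 7.3                 # assumed true depth in meters

def phase(d, f):
    # Measured phase for round-trip distance 2*d, wrapped into [0, 2*pi).
    return (4 * np.pi * f * d / c) % (2 * np.pi)

p1, p2 = phase(d_true, f1), phase(d_true, f2)
beat = (p2 - p1) % (2 * np.pi)            # phase at the beat frequency f2 - f1
d_est = beat * c / (4 * np.pi * (f2 - f1))
print(f"unambiguous range: {c / (2 * (f2 - f1)):.1f} m, "
      f"recovered depth: {d_est:.2f} m (true {d_true} m)")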
What interested you about this particular problem? What were the difficult or unexpected challenges?
Multi-path interference (MPI) is an important problem that can severely limit the applicability of ToF cameras. By solving this problem, we can significantly increase the range of applications in which ToF cameras can be used. From a theoretical standpoint as well, MPI is one of the most elegant problems in computer vision and has been widely researched over the past few decades. Existing solutions, however, are still restricted to a limited class of objects and scenes. The problem is challenging because, in addition to the errors being large (up to 50% in some settings), the structure of the errors depends on the 3D shape itself, making it a ‘chicken-and-egg problem.’ We could remove the errors if we knew the shape, but to measure the shape, we must remove the errors first.
How efficient is your approach?
The phasor imaging approach can, for the first time, solve the MPI problem for a very general class of objects and scenes. The solution is based on a first-principles analysis of the physics of how light travels in and interacts with the physical world. The approach is highly efficient—it requires capturing only one extra image (four vs. three for conventional ToF systems) and lowers MPI errors by one to two orders of magnitude.
Why present your paper at SIGGRAPH?
SIGGRAPH has recently emerged as one of the most inter-disciplinary venues among all of computer science, which promises to give our paper wide visibility among researchers and practitioners in a broad range of fields.

Synthesizing stripe patterns on triangulated surfaces

Stripe patterns on surfaces
 
Category: Geometry Field Trip, Monday, 10 August, 9-10:30 am
Stripe Patterns on Surfaces
Felix Knöppel, Technische Universität Berlin
Keenan Crane, Columbia University
Ulrich Pinkall, Technische Universität Berlin
Peter Schröder, California Institute of Technology
An interview with Keenan Crane.
What motivated you and your co-authors to work on a new method for computationally simulating stripes?
Although the current paper focuses on stripes, the motivation actually comes from a fundamental problem in geometry that we are all still thinking about: how do you map a curved surface to the flat plane with as little distortion as possible? This problem was originally raised by cartographers trying to make flat maps of the round Earth. What you discover very quickly is that there is no perfect way to do it: either areas get stretched out (like Greenland becoming bigger than Australia in a Mercator projection) or angles get distorted (for instance, North and East may no longer be at right angles). The route we are pursuing with “stripe patterns” suggests a third possibility: wherever map making gets hard, insert singularities that allow you to do a good job of preserving lengths and angles everywhere else. As you add more and more singularities, the map gets closer and closer to being perfect (or “isometric”) away from these isolated points. The current paper is one step in that direction, where we just focus on one of two coordinates, i.e., just longitude rather than longitude and latitude. As you can see from the pictures, our longitude lines are already looking pretty good!

Can you describe at a high level the main innovation of your paper? How much of a departure is your method from previous methods on generating stripes?

The method builds on a lot of great work done in the past decade or so by folks in geometry processing and discrete differential geometry. From a technical point of view, perhaps the biggest point of departure is simply in recognizing that for many problems singularities are actually a desirable feature, rather than a nasty artifact. At a computational level, this means we can relax some of the constraints that previously made solutions very difficult or expensive to produce—the current method is about ten times faster than anything that came before it. The paper also adds some new insights to the problem by linking it back to ideas from smooth differential geometry, which allows us to make additional guarantees that we didn’t have before (for instance, that there are no ugly “breaks” in the pattern as you walk around the surface).
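As a toy illustration of why a singularity can be a desirable feature rather than an artifact, the sketch below is a simple 2D "edge dislocation" (an assumed example, not the paper's algorithm for surfaces): adding a point singularity to the phase lets one stripe terminate while the pattern stays smooth everywhere else.

import numpy as np

n, wavelength = 21, 0.35
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
# Linear stripe phase plus a point singularity at the origin.
phase = 2 * np.pi * x / wavelength + np.arctan2(y, x)
stripes = np.cos(phase)

# Crude ASCII rendering: '#' on stripe crests, '.' elsewhere.
for row in stripes:
    print("".join("#" if v > 0 else "." for v in row))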
What insight prodded you and your co-authors to think new singularities would render more realistic stripe patterns?
The connection to stripe patterns actually happened by chance: originally we were working on the map making problem, and noticed that our solutions bear a striking resemblance to natural phenomena like spikes on a cactus or kernels on an ear of corn. If you dig a bit further, you discover that this phenomenon (known variously as “edge dislocations” or “topological defects”) shows up at all scales in nature, from the crystal structure of atoms to sand dunes in the desert. At some point I contacted a few good friends from grad school who provided all sorts of beautiful examples from the physical sciences, some of which appear in the paper. The fact that we can model the basic behavior of so many different phenomena using one simple, geometric formulation is very satisfying. In the future it would be fun to explore how our algorithm might also apply to problems in physical simulation—for instance, how does a piece of metal like a spoon “remember” its shape when you bend it? This has everything to do with how singularities get distributed throughout the metal, which in turn has to do with minimizing distortion (as in the map making problem).
Your image of a rabbit with embedded cacti stripes made the cover of this month’s ACM Transactions on Graphics. Of all the possible images to be seen at SIGGRAPH this month, why do you think yours was chosen?
I couldn’t tell you! But the SIGGRAPH community produces some truly beautiful work every year, and I’m very flattered for our work to be recognized in this way.

A method for interactively deforming shapes

Choosing handles in interactive time
Category: Simsquishal Geometry, Tuesday, 11 August, 9-10:30 am
Linear Subspace Design for Real-Time Shape Deformation
Yu Wang, University of Pennsylvania
Alec Jacobson, Columbia University, ETH Zürich
Jernej Barbič, University of Southern California
Ladislav Kavan, University of Pennsylvania
An interview with Alec Jacobson
What is the practical application of your work?
In computer graphics, a common task is to change the shape or pose of an object or character. The normal process is for the user to pick a point, or handle, from which to control movement. As the user moves the handle, changes propagate outward, affecting the nearby area. The question is always, how far does the influence of the handle extend and how quickly does the influence fall off? Previously the user decided, typically by using a paintbrush-type tool to define the area and the rate of fall-off.
Over the past four years, we have been working to automate the interpolation process by developing an optimization that calculates the correspondence between distance and how much weight to assign the influence of the handle, and does so in a way that ensures smooth, natural motions.
This makes the deformation task easier for the user who no longer has to figure out the interpolations. But the calculations take time; every time the user inserts a handle, it takes a few minutes for the calculations to complete, interfering with the desired interactivity. Our new paper fixes this problem by speeding up the calculations to interactive time.
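A minimal sketch of the handle-and-weights idea described above (a toy with hand-rolled Gaussian falloff weights, assumed here for illustration and not the paper's optimized subspace weights): each vertex moves by a weighted combination of handle translations, with smooth distance-based weights normalized to sum to one.

import numpy as np

def deform(vertices, handles, translations, falloff=0.3):
    # Squared distances from every vertex to every handle.
    d2 = ((vertices[:, None, :] - handles[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * falloff**2))      # smooth radial falloff per handle
    w /= w.sum(axis=1, keepdims=True)       # partition of unity over handles
    return vertices + w @ translations      # weighted sum of handle motions

# A small 2D example: two handles, the right one pulled further right.
verts   = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
handles = np.array([[0.0, 0.0], [1.0, 0.0]])
moves   = np.array([[0.0, 0.0], [0.4, 0.0]])
print(deform(verts, handles, moves))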
What is the technical innovation that allowed you to reduce the time of calculations?
There are several; I will mention two.
We examined the algebraic properties of our optimization for computing correspondences and discovered that if we were more careful about the cost function, we could speed up the process. Previously the weighting function would consider all possible functions between 0 and 1, and choose the smoothest one. We found that by carefully modeling the idea of smoothness, it suddenly became possible to re-use much of the computation for finding the weights of one handle and use it for all other handles. That was a big time savings.
Another came from removing the non-negativity constraint from the weighting function, a constraint that seems very intuitive to have: if a weight is negative, then moving a handle to the right causes part of the shape to respond in the opposite direction and move left. A non-negativity constraint also means the optimization considers only non-negative functions rather than all possible functions. But the constraint makes the optimization much harder. It turns out to be easier and faster to optimize over a generic function set than over a very specific one.
What we realized, to our surprise, was that we could either have very smooth deformations—without bumps at each of the handles—or we could have non-negative weights, but we could not have both. It was a tradeoff we were willing to make; we could put up with a bit of non-intuitiveness for better results. Further, without that constraint, the optimization is much faster.
In the end, we were able to speed things up to the point that a user can plop down a handle, decide it is not right, and immediately choose another handle and continue working. The design process is now easy and fluid.
When can people in industry expect to be able to interactively deform shapes using your method?
It might be a while. Studios have well-established procedures, and introducing something new into that can take time. Plus new features have to be adapted for the different packages where the underlying geometries may be hidden by additional layers.
It is the research community that will make immediate use of new projects and features. We have made our software available online, and we expect that the first users will be researchers whose work depends on deformations that perhaps they were not able to do previously. Now instead of having to rely on artists or modelers, researchers themselves can explore new territory.

Transforming almost any object into a Rubik’s Cube

Transforming a 3D object into a twisty puzzle
 

Category: Fabrication & Function, Wednesday, August 12, 10:45 am to 12:15 pm
Computational Design of Twisty Joints and Puzzles
Timothy Sun, Columbia University
Changxi Zheng, Columbia University

The popular Rubik’s Cube is a 3D puzzle composed of 26 separate interlocking pieces that rotate about six rotation axes, each of which can be turned independently. While the most well-known Rubik’s Cube is a cube, others are constructed using different geometric shapes (tetrahedron, octahedron, dodecahedron, icosahedron); together they compose a class of twisty joint puzzles.
Now two Columbia researchers have greatly expanded what can be included in this class of puzzles. Timothy Sun and Changxi Zheng have developed an algebraic method that makes it possible for ordinary users to create twisty puzzles out of almost any 3D object. Given rotation axes defined by the user, the method automatically adjusts the supplied axes while adding others to ensure all parts rotate freely. So that rotating pieces do not block or collide with one another, all possible configurations of pairs of pieces are enumerated (by defining a group structure on the pieces). Should a possible collision be detected, the pieces are automatically deformed—while retaining as much of the original form as possible—to ensure free rotation of all parts and yield a workable twisty joint.
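As a toy illustration of the group-structure idea (an assumed example on five abstract pieces, not the paper's geometric construction), the sketch below represents each twist as a permutation of piece slots and enumerates all reachable configurations by breadth-first search.

from collections import deque

def compose(p, q):
    # Apply permutation q first, then p: (p o q)[i] = p[q[i]].
    return tuple(p[i] for i in q)

def enumerate_group(generators, n):
    # BFS over compositions of the generating twists.
    identity = tuple(range(n))
    seen, frontier = {identity}, deque([identity])
    while frontier:
        g = frontier.popleft()
        for gen in generators:
            h = compose(gen, g)
            if h not in seen:
                seen.add(h)
                frontier.append(h)
    return seen

# Two overlapping 3-cycles on 5 pieces, standing in for two rotation axes.
gen_a = (1, 2, 0, 3, 4)   # maps piece 0 -> 1 -> 2 -> 0
gen_b = (0, 1, 3, 4, 2)   # maps piece 2 -> 3 -> 4 -> 2
print(len(enumerate_group([gen_a, gen_b], 5)), "reachable configurations")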
Holding all pieces together is an internal mechanism where the centers (which rotate in place) of each rotation axis are connected together. This internal mechanism is generated so that all pieces, once assembled, snap together and interlock with one another to form a working puzzle. No screws or springs are needed, thereby facilitating 3D printing of the snapping mechanism.
The work is both original—representing a new theoretical result—and sophisticated, drawing on algebraic methods to determine the best places to put rotation axes.

Precisely aligning colors and patterns to arbitrary 3D objects

Computational hydrographic printing
 
Category: Computational Printing, Thursday, August 13, 9-10:30 am
Computational Hydrographic Printing
Yizhong Zhang, Zhejiang University
Chunji Yin, Zhejiang University
Changxi Zheng, Columbia University
Kun Zhou, Zhejiang University
As 3D printing gains in popularity, the demand will grow for the ability to decorate objects by painting them or applying patterns. One common method to do so today is hydrographic (or water transfer) printing, which works as follows: the pattern to be applied to the object is printed onto a piece of PVA (polyvinyl alcohol) film. The film is then floated on water and sprayed with an activator chemical to soften it and make it sticky. The object, from a suspended position over the floating film, is slowly lowered and pushed through the film, which wraps around the object while the ink on the film transfers to the object.
The one problem is that it’s not possible to precisely predict where ink will adhere to a complex object due to the distortion that occurs as the object presses against and stretches the film. For this reason, hydrographic printing is limited to using repeating or abstract patterns where alignment is not critical.
But now researchers from Columbia and Zhejiang University (China) have developed a way to model and simulate the distortion and then compute a pixel image that takes the distortion into account. The key innovation is the use of viscous sheets. Building on previous work on fluid and viscous sheets done at Columbia’s Computer Graphics Group (see Discrete Viscous Sheets by Batty, C., Uribe, A., Audoly, B., and Grinspun, E., 2012), the researchers model the floating color film as a thin viscous sheet to simulate the distortion of the film as an object is immersed; from this information, they can establish a mapping between points on the color film and their eventual locations on the object and compute a new pixel image.
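A minimal sketch of the inverse-mapping step (the film-to-object correspondence below is a made-up stand-in for the viscous-sheet simulation, which is not reproduced here): each film pixel is colored with the texture value desired at the object point it will eventually land on.

import numpy as np

def desired_object_texture(u, v):
    # Hypothetical target pattern defined on the object's surface parameters.
    return (np.floor(8 * u) + np.floor(8 * v)) % 2   # checkerboard

def film_to_object(x, y):
    # Stand-in for the simulated distortion map: film point (x, y) -> surface (u, v).
    return x + 0.1 * np.sin(3 * np.pi * y), y + 0.1 * np.sin(3 * np.pi * x)

n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
u, v = film_to_object(x, y)
film_image = desired_object_texture(u, v)   # pre-distorted image to print on the film
print(film_image.shape, film_image.min(), film_image.max())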
To validate the concept, the researchers built an apparatus using off-the-shelf components ordered online in China. It incorporates a 3D vision system for precisely measuring and controlling the dipping process. A video of the experiments has been viewed more than half a million times.
The method is low cost (less than 40 US cents per printing) and works for complex surface geometries and a wide range of materials, including plastic, wood, and porcelain.

Vortex sheets: A faster, more efficient method for simulating bubbles

Double bubbles
 
Category: Simulating With Surfaces, Thursday, August 13, 2-3:30 pm
Double Bubbles Sans Toil and Trouble: Discrete Circulation-Preserving Vortex Sheets for Soap Films and Foams
Fang Da, Columbia University
Christopher Batty, University of Waterloo
Chris Wojtan, Institute of Science and Technology Austria
Eitan Grinspun, Columbia University
While people tend to see a bubble as a thin soap film enveloping a volume of air, the most commonly used computer graphics methods simulate bubbles by considering, not the visible surface of the bubble, but the volume of air inside and outside the bubble. Such methods measure velocity, circulation, and other variables at certain points in the volume of air and infer the location and shape of the bubble surface from information collected about the points.
This approach of taking measurements at non-moving points within a flowing fluid, termed Eulerian in fluid dynamics, is computationally intensive when used to simulate bubbles because it requires computing velocities for points within the entire 3D air volume—even though the visual appearance of the bubbles is completely dominated by the thin liquid surface. Behind this difficulty are the equations governing the air flow, known as the Navier-Stokes equations, which are expensive to solve over the air space.
Wanting something simpler and more computationally efficient, the paper’s authors speculated that bubbles might be simulated by measuring only surface points, an approach that requires far fewer calculations than Eulerian methods. (Measuring velocities at surface points that move with the fluid is termed Lagrangian.) The authors realized that a vortex sheet—used for modeling the interface between two fluids moving at different speeds—would be ideal for representing such a surface.
In a vortex sheet method, circulation points are put on a surface (a non-manifold mesh). Circulation is measured at each point, providing a description of the velocity under and over that point.
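A generic sketch of this Lagrangian idea (a regularized Biot-Savart sum over assumed vortex elements, not the paper's discrete circulation-preserving scheme): sample points on the sheet carry vortex-strength vectors, and the velocity anywhere is obtained by summing their induced contributions.

import numpy as np

def biot_savart(x, sheet_pts, strengths, eps=1e-2):
    # Velocity at point x induced by vortex elements (positions, strength vectors),
    # using a regularized Biot-Savart kernel to avoid singularities.
    r = x - sheet_pts                               # (n, 3) offsets
    dist2 = (r * r).sum(axis=1) + eps**2            # regularized squared distances
    kernel = np.cross(strengths, r) / dist2[:, None]**1.5
    return kernel.sum(axis=0) / (4 * np.pi)

# Hypothetical ring of vortex elements standing in for a small soap-film patch.
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
tangents = np.stack([-np.sin(theta), np.cos(theta), np.zeros_like(theta)], axis=1)
strengths = 0.1 * tangents                          # circulation directed along the ring
print("velocity at ring center:", biot_savart(np.zeros(3), pts, strengths))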
While vortex sheets are often used to represent smoke, there is no precedent in computer graphics for using vortex sheets to represent liquids, and the authors couldn’t know how well they would work at simulating bubbles and their associated dynamic effects. That work was left to Fang Da.
The main technical challenge was handling the topological complexity presented by a surface-only approach. In particular, it required Da to define the surface curvature when two or more bubbles join together in a cluster and form a non-manifold network of films where surfaces are connected but not smooth. A simple and general strategy would be essential given the difficulty of enumerating the countless cases of non-manifold junctions that occur in three dimensions (i.e., foams).
Finding no precedent for using vortex sheets in a non-manifold setting, Da resolved the problem by recasting it: rather than looking at the thin film itself, he looked at the liquid-air interface, where the junction becomes a mathematical line with no branching.
Two ways to represent the triple junction of soap films: as a non-manifold intersection (left) and as a mathematical line with no branching (right).
Starting from the Navier-Stokes equations, Da was able to derive a very simple and efficient equation tailored for vortex sheets to describe the motion of soap films.
In the end, Da’s formulation proved not only to be mathematically accurate but also to support a range of bubble behaviors, with no need to rely on physically unprincipled assumptions or to avoid hard-to-handle bubble dynamics. A video accompanying the paper gives the visual proof, showing how the vortex sheet method enables the interaction of soap films with wires, as well as foam rearrangement, catenoid collapse, blowing bubbles, and double bubbles being pulled apart. According to Da, “It’s really a great match between technique and problem: the vortex-sheet technique is specialized but fits this particular and challenging problem of bubble dynamic behavior very well.”
Posted 8/25/2015