SIAM News honors David Johnson

Mihalis Yannakakis and Cliff Stein, colleagues and friends of Johnson, cite his fundamental contributions to NP-completeness theory, optimization and approximation, and other areas of computer science.

As providers roll out zero-rated services, it’s time to redefine net neutrality, says Vishal Misra

In February 2015, the FCC ruled to enforce net neutrality by banning paid prioritization, throttling, and blocking. The decision was hailed as a victory for net neutrality since it prevented Internet service providers (ISPs) from discriminating against certain services and apps by relegating them to a lower grade of service. However, the FCC did not ban zero rating, the practice of exempting certain websites from data caps. Zero rating is also discriminatory, because it favors some apps and websites by making them cheaper and more attractive than competing services.

What the FCC failed to do, regulators in India succeeded in doing. In February 2016, India banned differential data pricing and zero rating, a major defeat for Facebook, which had lobbied hard for its zero-rated Free Basics service. The decision came after Internet and consumer activists rallied the public around the danger that discriminatory practices pose to net neutrality. Crucial to this effort was a new definition of net neutrality proposed by Vishal Misra, a professor of computer science at Columbia whose research emphasizes mathematical modeling of networking systems. In this interview he explains the need to redefine net neutrality.

[Photo: Vishal Misra]

Why is it necessary to change the definition of net neutrality?

Though the term network neutrality was coined by Tim Wu in a 2003 paper—which discusses how a neutral network treats all content, sites, and platforms equally—the common, or folk, definition of net neutrality today usually means that all packets on the Internet should be treated exactly the same.

Today, however, when the Internet is more about services, treating all packets equally is neither possible nor desirable. Voice services like Skype, gaming apps, and live streaming depend on packets being delivered in real time. ISPs reasonably give higher priority to these time-sensitive packets than to emails and file transfers, which tolerate small delays. Emergency services and health-monitoring apps are also prioritized.

So for practical reasons, it doesn’t make sense to treat all packets equally. But Wu’s original sentiment, that there should be no discrimination on the Internet, needs to be reiterated in the context of how people use the Internet today.

What is the definition you propose?

“The Internet is a platform where ISPs provide no competitive advantage to specific apps or services, either through pricing or quality of service.”

This definition shifts the focus from individual packets to services and apps, and it explicitly prevents companies from using either quality of service (fast lanes, throttling) or pricing (zero rating, reduced pricing) to favor some sites over others. The FCC banned one type of discrimination but not the other; this definition closes a major loophole.

It’s important to point out that this definition allows differentiation between services but not discrimination within a service. It is not discriminatory to give all email traffic one level of service and all voice traffic another; that is service differentiation.

What is discriminatory is treating some voice traffic differently from other voice traffic, such as when an ISP gives special treatment to its own voice-call service but does not extend it to competing services like Skype.
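To make the distinction concrete, here is a small, purely illustrative Python sketch (an editorial addition, not code from Misra). The app names, traffic classes, and priority numbers are invented; real networks implement class-based treatment with mechanisms such as DSCP marking and weighted queues.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    app: str            # e.g., "Skype", "ISPVoice", "Gmail"
    traffic_class: str  # e.g., "voice", "email"

# Service differentiation (allowed): priority depends only on the class of
# traffic, so every voice app receives the same treatment.
CLASS_PRIORITY = {"voice": 0, "video": 1, "email": 2}  # 0 = highest

def neutral_priority(pkt: Packet) -> int:
    return CLASS_PRIORITY[pkt.traffic_class]

# Discrimination within a service (ruled out by the proposed definition):
# the ISP's own voice app jumps ahead of competing voice apps.
def discriminatory_priority(pkt: Packet) -> int:
    if pkt.app == "ISPVoice":   # the ISP favors its own service
        return -1               # ahead of all other voice traffic
    return CLASS_PRIORITY[pkt.traffic_class]

if __name__ == "__main__":
    queue = [Packet("Gmail", "email"), Packet("Skype", "voice"),
             Packet("ISPVoice", "voice")]
    # Neutral: Skype and ISPVoice tie at priority 0; email goes last.
    print(sorted(queue, key=neutral_priority))
    # Discriminatory: ISPVoice beats Skype even though both are voice.
    print(sorted(queue, key=discriminatory_priority))
```

The neutral scheduler never inspects the app name, only the traffic class; that is the whole distinction.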

The “no competitive advantage” clause makes explicit the notion that net neutrality is not about how we treat packets but about how we treat competition.

The lack of competition in the last mile is the real crux of the net neutrality issue. In my research applying economic modeling to Internet dynamics, one recurring motif is the stranglehold ISPs have over their customers and content providers alike. [For a recent article by Misra on this subject, see Routing Money, Not Packets.] This stranglehold allows ISPs to discriminate against services that compete with their own or with those of their “partners”—companies that pay extra for special treatment.

Though ISPs can no longer degrade the service of their competitors—as Comcast was doing when it throttled Netflix traffic a couple of years ago—they can use zero rating to exempt some services from data caps and so make them more attractive to customers. If people can choose between a video that doesn’t count toward their data cap and one on a non-zero-rated service, it’s pretty clear which video service they will choose.

Competition is important if the Internet is to remain a platform for innovation. Many popular services—Instagram, Uber, Snapchat, Netflix, even Facebook and Google at one point—started off small but were able to grow because a ready-made platform allowed them to reach users. If ISPs are allowed to discriminate against fledgling startups in favor of their own services, or if the Internet is reduced to a pay-to-play market that favors rich companies over startups and not-for-profit sites, we lose an important engine of new services and innovations.

Why do you think the Indian public adopted your definition of net neutrality?

Because the issue of zero rating posed an immediate challenge to net neutrality. In early 2015, Facebook introduced Free Basics to let people access certain websites and services without paying data charges.

On the face of it, Free Basics seems like a good thing for India, where many people can’t afford Internet service, and only about 35% of the population is currently online. But in giving competitive advantages to some websites and not others—Facebook had a list of 30 or so sites—Free Basics violates net neutrality’s guarantee of a free and open Internet where everyone has equal access to all sites. Worse, Facebook would essentially be a gatekeeper to the Internet, the arbiter of what sites or services Free Basics customers can access. People using Free Basics wouldn’t actually be on the Internet, only within a walled garden of sites chosen by Facebook.

There are other ways to give people free access to the Internet—ad-supported or time-limited free access—but Facebook chose a way that put the company in control of customers’ Internet experiences.

Nikhil Pahwa, an entrepreneur and journalist, put together the Save The Internet coalition in March 2015 to rally support for net neutrality and oppose Free Basics. I started working with the group at an early stage, sharing my research and helping refine the definition of net neutrality so that it took differential pricing into account.

Save the Internet energized an army of supporters, with over one million people emailing regulators to oppose Free Basics and other zero-rated or differentially priced services. It was an unprecedented outpouring of public sentiment, and it got regulators’ attention.

Why do you think Save the Internet was successful?

It articulated very well the importance of equal access to the Internet, tying it to India’s ability to grow and develop its entrepreneurial culture. An open Internet is crucial to that.

But even with this clear ruling by Indian regulators, companies will keep trying to game the market to favor their own content. Facebook lost this round, but it’s easy to imagine that it and other companies will adjust tactics and find some other way to favor their content. Maintaining a neutral Internet will likely be a constant battle. Shortly after the ruling, TRAI [Telecom Regulatory Authority of India] contacted me to help clarify the ruling and identify loopholes that might be exploited.

Do you think this new definition of net neutrality will take hold in the US, where zero rating isn’t a big issue?

People may not realize it, but zero rating is becoming a big issue in the US. When the FCC failed to ban zero rating, ISPs took it as a green light to promote their zero-rated services more aggressively. T-Mobile has Binge On, Comcast has Stream TV, and Verizon and AT&T offer sponsored data services in which companies pay to subsidize the bandwidth their content consumes.

ISPs and the cable companies prefer to talk about giving customers free data, but digging down only a little shows how zero-rating schemes distort the market and undermine net neutrality.

Zero rating has other bad aspects. In Angola, hackers have figured out how to use Wikipedia Zero [a zero-rated version of Wikipedia] to illegally distribute pornography, all for free. I personally know of a hack that tricks T-Mobile into thinking any type of data is zero-rated video; anyone using it has a way to get unlimited data.

So yes, I hope this new definition of net neutrality takes hold in the US. But it will take the public getting involved to put pressure on the FCC. The ISPs, for their part, will be lobbying hard to maintain the status quo.

Posted: 5/6/2016

Peter Allen and Yaniv Erlich awarded provost grants

[Photo: Peter Allen]

Peter Allen and Yaniv Erlich have been awarded provost grants through the Hybrid Learning Course Redesign and Delivery program, which supports faculty in developing innovative and technology-rich learning strategies.

Allen will use the grant to expand and revamp Computational Aspects of Robotics (COMS 4733), an introductory robotics applications course in which students learn about robotics (and future research directions in the field) by building and programming real robots. The class, which serves upper-level undergraduates and master’s students, is an elective in Vision, Interaction, Graphics and Robotics (VIGR), one of the most popular Computer Science tracks.

[Photo: GoPiGo robot]

Demand is growing rapidly as robotics becomes increasingly important across society and the economy. Last semester, 100 students signed up for COMS 4733, though only 60 could be accommodated because of the limited number of robots.

With the provost funding, Allen will buy GoPiGo robotic kits: inexpensive, easy-to-build, fully capable mobile robots with cameras and sensors. Students will work with the robots in groups of three, and the added hardware will increase the number of syllabus topics that can be taught “hands-on.” The GoPiGo robots provide full network connectivity, vision cameras, ultrasonic sensors, and on-board processing with a Raspberry Pi computer. Topics that previously had to be covered through lectures can now be learned directly through the hands-on laboratory assignments favored by students.

“Feedback from actually running an algorithm on a real physical system, with noise, is something students cannot gain from a simulation or lecture,” says Allen. “Introducing the full-capability GoPiGo robots will let students get ‘hands-on’ experience in all parts of the course syllabus, truly bringing it all together in a coherent package.”
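For a sense of what those hands-on assignments can look like, here is a hedged sketch of a simple obstacle-avoidance exercise. It assumes the Dexter Industries gopigo Python module that ships with these kits; the function names (fwd, stop, right_rot, us_dist) follow that library’s documented API, but the ultrasonic pin number and exact signatures are assumptions to check against the installed version.

```python
import time
from gopigo import set_speed, fwd, stop, right_rot, us_dist

ULTRASONIC_PIN = 15     # analog port the distance sensor plugs into (assumed)
SAFE_DISTANCE_CM = 20   # how close an obstacle may get before we turn

set_speed(120)          # moderate motor speed

try:
    while True:
        if us_dist(ULTRASONIC_PIN) > SAFE_DISTANCE_CM:
            fwd()               # path is clear: drive forward
        else:
            stop()              # obstacle ahead: rotate in place
            right_rot()
            time.sleep(0.5)     # turn for half a second, then re-check
            stop()
        time.sleep(0.1)         # poll the sensor ~10 times per second
except KeyboardInterrupt:
    stop()                      # always halt the motors on exit
```

Running even a loop this small on a physical robot exposes the sensor noise and timing issues Allen describes, which never show up in a simulator.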

The small size of the new robots will also change the nature of the class. Students can take GoPiGos anywhere, “flipping” the laboratory so that learning once confined to the lab can happen anywhere, freeing up valuable lab space in the process.

[Photo: Yaniv Erlich]

Erlich’s provost funding will allow him to scale up his class Ubiquitous Genomics (COMSE 6998). The class, which debuted in Fall 2015, is structured around mobile DNA sequencing, an emerging technology made possible by small, hand-held, low-cost devices that plug into a computer’s USB port to sequence DNA in real time.

[Photo: MinION sequencer]

Five such devices, called MinIONs, were loaned to Erlich by the UK-based manufacturer Oxford Nanopore Technologies for the first iteration of the class.

The 20 Columbia students who took that class were among the first to use MinIONs as part of their educational training. The class has generated interest in the community growing up around the MinION and was the subject of numerous articles, including a recent one in eLife, one of the most prestigious journals in biology.

Students used the devices to handle and sequence DNA samples themselves rather than sending them out to a lab for sequencing. Two hackathons, together worth 50% of the grade, tested their ability to design and implement a workable sequencing pipeline: in the first, student teams identified food ingredients from unlabeled DNA samples; in the second, students sequenced human DNA to identify the donor. In-class presentations gave each team the opportunity to explain its methodology and take questions and peer feedback.
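As a rough illustration of what such a pipeline involves, here is a minimal, hypothetical Python sketch (not a student submission): it reads MinION output in FASTQ format and votes on the likeliest source by counting shared k-mers. The reference sequences and input file name are placeholders; real pipelines would align reads against full genome indexes with a tool such as BLAST.

```python
def read_fastq(path):
    """Yield the sequence line of each four-line FASTQ record."""
    with open(path) as f:
        for i, line in enumerate(f):
            if i % 4 == 1:           # line 2 of every record is the sequence
                yield line.strip()

def kmers(seq, k=12):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Toy stand-ins for reference signatures; a real pipeline would build these
# from genome databases rather than hard-coded snippets.
REFERENCES = {
    "tomato":  kmers("ATGGCTTCTACTTCAGCTGCTCCATCTTCACTTGCAGGAACTT"),
    "chicken": kmers("ATGGCCAACCACTCCCAACTAGGCTTCCAAGACGCCTCATCAT"),
}

def classify(path):
    """Vote for the reference that shares the most k-mers with the reads."""
    votes = {name: 0 for name in REFERENCES}
    for seq in read_fastq(path):
        read_kmers = kmers(seq)
        for name, ref in REFERENCES.items():
            votes[name] += len(read_kmers & ref)
    return max(votes, key=votes.get), votes

if __name__ == "__main__":
    best, votes = classify("sample_reads.fastq")   # hypothetical input file
    print(best, votes)
```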

The provost funding will allow Erlich to buy the MinIONs (ensuring future iterations of the class), increase class size from 20 to 40 students (accommodating medical and biology students who want to take the class), and expand the class goals in two ways. One new goal is for students to implement early prototypes of the Internet of Living Things, in which autonomous sensors collect and analyze DNA information. A second is to incorporate an updated library-preparation procedure that will give students more opportunity to design their own experiments from material they choose, whether to identify plants in Central Park, analyze the microbiome of the city’s water fountains, or verify that fish served in a restaurant is what the menu advertises.

A bigger class with expanded goals will require making sure students get the guidance they need. To this end, Erlich is adding a TA and incorporating ideas from the flipped-classroom approach. Student-produced YouTube videos will replace the time-consuming in-class hackathon presentations, and students will be enlisted to help one another, both through an online forum and via short videos alerting others to common pitfalls of analyzing MinION data.

Posted 5/12/2016
Linda Crane

Rocco Servedio awarded NSF grant to study limits of computing

[Photo: Rocco Servedio]

Rocco Servedio, Associate Professor of Computer Science, and his former student Li-Yang Tan (PhD ’14) are recipients of a four-year, $1.2M National Science Foundation (NSF) award for their proposal to use random projections to prove lower bounds on Boolean circuits. The award will allow Servedio and Tan, now on the faculty of the Toyota Technological Institute at Chicago, to continue work they started last year in their paper “An average-case depth hierarchy theorem for Boolean circuits.” Named Best Paper at FOCS 2015, it resolved a conjecture that had been open for close to 30 years.

“I’m grateful to receive this award and excited to continue working on circuit lower bounds,” says Servedio. “A big part of computer science aims at inventing new algorithms to extend what computers can do, but there are limits to what tasks computers can perform efficiently, and it’s important to understand those limits. This grant will allow us to more thoroughly study this question while giving graduate students the opportunity to be involved in this research at an early stage.”

Just as physicists probe the laws of nature to learn what is possible in the physical world, theoretical computer scientists working in the field of computational complexity aim to understand the inherent limitations of efficient computing. The ultimate goal of this line of work is to prove mathematically that computers cannot efficiently solve certain problems no matter how cleverly they are programmed. Proving lower bounds on Boolean circuits (a mathematical abstraction of the digital circuits from which computers are built) is an important topic in computational complexity.

After an early burst of progress on circuit lower bounds in the 1980s, research advances in this area have been few and far between. The main challenge facing researchers is establishing a negative: how is it possible to rule out the existence of some clever, unexpected approach to solving a computational problem (a small Boolean circuit capturing some as-yet-unthought-of algorithmic insight)?

Servedio and Tan will employ the method of random projections introduced in their FOCS paper. Random projections are an extension of random restrictions, a method of proving circuit lower bounds that was responsible for much of the progress made in the 1980s. Roughly speaking, random restrictions work by randomly fixing some input variables to randomly chosen constant values. If C is a circuit that is supposed to compute some function f, it is sometimes possible to show that (i) applying a random restriction to C simplifies C “a lot,” while (ii) applying the same random restriction to f only causes it to simplify “a little.” This can lead to a contradiction: it may be possible to show that a “greatly simplified” circuit cannot compute a “still-complex” function, in which case the original circuit C could not actually have computed the function f correctly, which gives the desired circuit lower bound.
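The classic contrast behind this argument can be made concrete in a few lines of code. The Python sketch below is an editorial illustration, not the paper’s construction: under a random restriction, OR collapses to a constant the moment any variable is fixed to 1, while PARITY remains parity on whatever variables survive.

```python
import random

def random_restriction(n, p):
    """Each variable is left free with probability p, else fixed to a
    uniformly random bit. Returns a dict: index -> None (free) or 0/1."""
    return {i: None if random.random() < p else random.randint(0, 1)
            for i in range(n)}

def restrict_or(rho):
    """What OR(x_1, ..., x_n) becomes under the restriction rho."""
    fixed = [b for b in rho.values() if b is not None]
    if 1 in fixed:
        return "constant 1"                       # OR collapsed completely
    free = sum(1 for b in rho.values() if b is None)
    return f"OR of {free} free variables"

def restrict_parity(rho):
    """What PARITY(x_1, ..., x_n) becomes under the restriction rho."""
    free = sum(1 for b in rho.values() if b is None)
    offset = sum(b for b in rho.values() if b is not None) % 2
    # Parity never collapses: it is still parity (possibly negated) on
    # every surviving free variable.
    return f"PARITY of {free} free variables (plus constant {offset})"

if __name__ == "__main__":
    random.seed(0)
    rho = random_restriction(n=20, p=0.3)
    print(restrict_or(rho))      # almost surely "constant 1"
    print(restrict_parity(rho))  # still parity on the surviving variables
```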

The random projection method shows some promise in revitalizing this line of research. The method works similarly to random restrictions but further generalizes the approach by identifying certain sets of variables in a careful way and setting them all to the same value (i.e., “projecting” a set of distinct input variables to a common new variable). These variable identifications enable a further level of “control” that can be useful in arguing that the function f “stays complex” after the random projection (while the circuit C “still simplifies a lot”). Random projections were at the heart of the circuit lower bound established by Servedio and Tan in their FOCS paper; supported by this grant, they will search for other applications of random projections.
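Continuing the sketch above, the projection step itself is just a map from old variables to new ones. The block structure below is arbitrary and purely illustrative; in the Servedio-Tan proof the blocks are chosen carefully to match the structure of the target function.

```python
def project(assignment_new, blocks):
    """Expand an assignment to the new variables into one on the old
    variables: every old variable in blocks[j] takes new variable j's value."""
    old = {}
    for j, block in enumerate(blocks):
        for i in block:
            old[i] = assignment_new[j]
    return old

def parity(bits):
    return sum(bits) % 2

if __name__ == "__main__":
    # Identify old variables {0,1,2} with new variable 0 and {3,4} with 1.
    blocks = [[0, 1, 2], [3, 4]]
    y = [1, 0]                 # an assignment to the two new variables
    x = project(y, blocks)     # the induced assignment to five old variables
    # PARITY on the old variables becomes (3*y0 + 2*y1) mod 2 = y0, still a
    # parity of the new variables -- the function "stays complex."
    print(parity(x.values()))  # prints 1, which equals y[0]
```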

Circuit lower bounds obtained via this work may help guide algorithm development away from dead ends. They may also deepen our fundamental understanding of computation.

“Sometimes work on circuit lower bounds can double back and help us to develop new algorithms and models,” says Servedio. “To establish limits on the power of different types of circuits, you have to really understand what those types of circuits can do, and this understanding can be helpful for solving other problems and developing new algorithms.”

Posted 4/25/2016