The journal also publishes a list of a small number of Physical Review A papers that the editors and referees find of particular interest, importance, or clarity. These Editors' Suggestion papers are listed prominently on http://pra.aps.org/ and marked with a special icon in the print and online Tables of Contents and in online searches.
"Measures of quantum computing speedup" introduces the concept of strong quantum speedup. It is shown that approximating the ground-state energy of an instance of the time-independent Schrodinger equation with d degrees of freedom and d large enjoys strong exponential quantum speedup. It can be easily solved on a quantum computer. Some researchers in QMA theory believe that quantum computation is not effective for eigenvalue problems. One of the goals of this paper is to explain this dissonance.
The first is entitled "Collection, Analysis, and Uses of Parallel Block Vectors." Authored by PhD student Melanie Kambadur, undergraduate Kui Tang, and Assistant Professor Martha Kim, this research establishes a novel perspective from which to reason about the correctness and performance of parallel software. In addition, it describes the design and implementation of an open-source tool that automatically instruments a program to gather the necessary runtime information.
The second paper is titled "A Quantitative, Experimental Approach to Measuring Processor Side-Channel Security." The authors are John Demme, Robert Martin, Adam Waksman, and Simha Sethumadhavan. This paper describes a quantitative method to identify bad hardware design decisions that weaken security. The methodology can be used in the early processor design stages, when security vulnerabilities can be easily fixed. The paper marks the beginning of a quantitative approach to securing computer architectures.
"Learning and Testing Classes of Distributions" as part of the
Algorithmic Foundations program.
A long and successful line of work in theoretical computer science has
focused on understanding the ability of computationally efficient
algorithms to learn and test membership in various classes of Boolean
functions. This proposal advocates an analogous focus on developing
efficient algorithms for learning and testing natural and important
classes of probability distributions over extremely large domains. The
research is motivated by the ever-increasing availability of large
amounts of raw unlabeled data from a wide range of problem domains
across the natural and social sciences. Efficient algorithms for these
learning and testing problems can provide useful modelling tools in
data-rich environments and may serve as a theoretically grounded
"computational substrate" on which large-scale machine learning applications
for real-world unsupervised learning problems can be developed.
One specific goal of the project is to develop efficient algorithms to
learn and test univariate probability distributions that satisfy
different natural kinds of "shape constraints" on the underlying
probability density function. Preliminary results suggest that dramatic
improvements in efficiency may be possible for algorithms that are
designed to exploit this type of structure. Another goal is to develop
efficient algorithms for learning and testing complex distributions that
result from the aggregation of many independent simple sources of
randomness.
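As a concrete (and entirely illustrative) example of how a shape constraint can be exploited, the sketch below learns a monotone non-increasing distribution over a small domain by projecting the empirical histogram onto the set of non-increasing sequences with a pool-adjacent-violators step. The code and names are ours, not the proposal's.

```python
import random

# Minimal sketch (our illustration, not from the proposal) of exploiting a
# "shape constraint": learn a monotone non-increasing distribution over
# {0, ..., n-1} by taking the empirical histogram and projecting it onto the
# non-increasing sequences via the pool-adjacent-violators (PAV) step.
def learn_monotone(samples, n):
    m = len(samples)
    counts = [0.0] * n
    for s in samples:
        counts[s] += 1.0 / m
    # PAV: scan left to right, merging adjacent blocks whenever their
    # averages violate the non-increasing constraint.
    blocks = []                          # each block is [average, width]
    for c in counts:
        blocks.append([c, 1])
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            b = blocks.pop()
            a = blocks.pop()
            avg = (a[0] * a[1] + b[0] * b[1]) / (a[1] + b[1])
            blocks.append([avg, a[1] + b[1]])
    est = []
    for avg, w in blocks:
        est.extend([avg] * w)
    return est

random.seed(0)
n = 10
true = [2.0 * (n - i) / (n * (n + 1)) for i in range(n)]   # decreasing "triangle"
samples = random.choices(range(n), weights=true, k=2000)
est = learn_monotone(samples, n)
tv = 0.5 * sum(abs(e - t) for e, t in zip(est, true))
print(round(tv, 3))   # small total-variation error
```

The projection can only help when the true density really is monotone, which is the sense in which algorithms "designed to exploit this type of structure" can beat generic histogram learners.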
Abella is currently the executive director of the Innovative Devices and Services Research Department at AT&T Labs, managing a multi-disciplinary technical staff specializing in human-computer interaction. Abella is an award-winning advocate for encouraging minorities and women to pursue careers in science and engineering. She earned her Ph.D. and master’s degree from Columbia, graduating in 1995, under the guidance of Prof. John Kender.
Timothy Sun is being recognized for his complete set of undergraduate research projects, which include his paper
"On Milgram's construction and the Duke embedding conjectures."
Timothy was advised by Prof. Jonathan Gross.
For a video of the ARM see Engadget.
Kui Tang also worked with Prof. Tony Jebara on Tractable Inference in Graphical Models and published the paper "Bethe Bounds and Approximating the Global Optimum" at the Sixteenth International Conference on Artificial Intelligence and Statistics, 2013.
Congratulations to Kui Tang, Martha Kim, and Tony Jebara!
Today, data on customers is what makes a company profitable. Tomorrow, data about citizens can make our society successful. But how can this progress be reconciled with privacy? Analytics, the science of identifying individual types and collective trends, currently runs behind closed doors, on your data and outside your control. Prof. Chaintreau aims to show that a more socially efficient alternative exists: managing personal data should be made transparent and easy for each of us. With his NSF CAREER award, he will develop algorithms that run analytics on data reclaimed by users, while leveraging information on their social context. Moreover, incentive mechanisms will be designed to make privacy not only a choice, but one that leads to a socially efficient outcome. Demonstrating this concept will start in the classroom: not only engineers but also the future journalists who inform our citizens will take part in a new program on the management of personal data, since enabling privacy raises technical, economic, and societal challenges. The ultimate goal of this work is to improve how the web treats information about our lives without the high cost of top-down regulation.
the 45th ACM Symposium on the Theory of Computing, for his
single-authored paper titled "Maintaining Shortest Paths Under
Deletions in Weighted Directed Graphs." The work is on maintaining
distance information in a network that is changing over time.
STOC is one of the most prestigious conferences in theoretical
computer science. Two papers shared the award at STOC 2013. Before
this, Aaron was also the sole winner of the Best Student Paper Award
at SODA (the ACM-SIAM Symposium on Discrete Algorithms) 2012. A
third-year PhD student, Aaron focuses on the design and analysis of
efficient algorithms. He has made significant contributions to this
area and has already published seven papers in STOC, FOCS, and SODA.
The work presents novel algorithms that cope with the growing complexity of designing systems-on-chip by simplifying the integration of heterogeneous components and enabling the reuse of predesigned components. It was the only Best Paper awarded at DATE 2012, which received some 950 paper submissions, more than 50% of them from outside Europe. The best-paper selection was made by an award committee based on the results of the reviewing process, the quality of the final paper, and the quality of the presentation, which was given by Hung-Yi.
The award was announced at the 2013 edition of the conference, which was held in March 2013 in Grenoble, France.
"Computer engineering research is intrinsically an interdisciplinary effort, and the complex challenges of developing future embedded systems require a vertically integrated approach to innovation that spans from circuit design to application software," says Professor Carloni, Principal Investigator on the program. "We are excited about this award, which recognizes the continuous progress of Columbia Engineering faculty in leading interdisciplinary and multi-institution research programs."
In the framework of the PERFECT program, the ESP Team will investigate a variety of scalable innovations in circuits, architecture, software, and computer-aided design (CAD) methods, including: scalable 3D-stacked voltage regulators for integrated fine-grain power management; highly-resilient near-threshold-voltage circuit operation; seamless integration of programmable cores and specialized accelerators into a scalable system-on-chip (SoC) architecture; efficient network-on-chip infrastructure for both message-passing communications and distributed power control; static and dynamic scheduling of on-chip resources driven by performance profiling; and an integrated CAD environment for full-system simulation and application-driven optimization.
On October 28 at Stony Brook University, team members Long Chen, Gang Hu, and Xinhao Yuan participated in a grueling five-hour competition, winning first place. The team is coached by Xiaorui Sun.
ACM ICPC is an annual programming contest among the universities of the world. The contest helps students enhance their programming skills and enables contestants to test their ability to perform under pressure. ACM ICPC is the oldest, largest, and most prestigious programming contest in the world. Each year, more than 5,000 teams from about 2,000 universities all over the world compete at the regional level, and about 100 teams participate in the World Finals.
Congratulations, team!
The Internet Will Literally Kill You by 2014, Predicts Security Firm
SecurityWatch
Dec 20, 2012
http://securitywatch.pcmag.com/none/306223-the-internet-will-literally-kill-you-by-2014-predicts-security-firm
Can Your Cisco VoIP Phone Spy On You?
SecurityWatch
Dec 19, 2012
http://securitywatch.pcmag.com/none/306172-can-your-cisco-voip-phone-spy-on-you
Security researchers find vulnerability in Cisco VoIP phones
PhysOrg
Dec 19, 2012
http://phys.org/news/2012-12-vulnerability-cisco-voip.html
Cisco phone exploit allows attackers to listen in on phone calls
The Verge
Jan 10, 2013
http://www.theverge.com/2013/1/10/3861316/cisco-phone-exploit-discretely-enables-microphone
Your worst office nightmare: Hack makes Cisco phone spy on you
ExtremeTech
Jan 10, 2013
http://www.extremetech.com/computing/145371-your-worst-office-nightmare-hack-makes-cisco-phone-spy-on-you
Cisco VoIP Phone Flaw Could Plant Bugs In Your Cubicle
Readwrite Hack
Jan 11, 2013
http://readwrite.com/2013/01/10/cisco-voip-phone-flaw-could-plant-bugs-in-your-cubicle
Hack turns Cisco desk phones into remote listening devices
Slashgear
Jan 11, 2013
http://www.slashgear.com/hack-turns-cisco-desk-phones-into-remote-listening-devices-11264898/
Cisco IP Phone Vulnerability Enables Remote Eavesdropping
Tekcert
Jan 10, 2013
http://tekcert.com/blog/2013/01/10/cisco-ip-phone-vulnerability-enables-remote-eavesdropping
Cisco issues advisory to plug security hole in VoIP phone
FierceEnterprise Communications
Jan 10, 2013
http://www.fierceenterprisecommunications.com/story/cisco-issues-advisory-plug-security-hole-voip-phones/2013-01-10
Hack Turns Cisco's Desk Phone into a Spying Device
Itstruck.me
Jan 11, 2013
http://itstruck.me/hack-turns-ciscos-desk-phone-into-a-spying-device/
Hack Turns Cisco’s Desk Phone Into a Spying Device
Gizmodo
Jan 10, 2013
http://gizmodo.com/5974814/hack-turns-ciscos-desk-phone-into-a-spying-device
Warning: That Cisco phone on your desk may be spying on you
BetaNews
Jan 10, 2013
http://betanews.com/2013/01/10/warning-that-cisco-phone-on-your-desk-may-be-spying-on-you/
Hack turns the Cisco phone on your desk into a remote bugging device
Arstechnica
Jan 10, 2013
http://arstechnica.com/security/2013/01/hack-turns-the-cisco-phone-on-your-desk-into-a-remote-bugging-device/
Cisco VoIP phone vulnerability allow eavesdropping remotely
IOtechie
Jan 9, 2013
http://hackersvalley.iotechie.com/hacks/cisco-voip-phone-vulnerability-allow-eavesdropping-remotely/
Malware leaves Cisco VoIP phones "open to call tapping"
PC Pro
Jan 8, 2013
http://www.pcpro.co.uk/news/security/379129/malware-leaves-cisco-voip-phones-open-to-call-tapping
Researcher exposes VoIP phone vulnerability
Business Wire for Security InfoWatch
Dec 13, 2012
http://www.securityinfowatch.com/news/10842240/researcher-exposes-voip-phone-vulnerability
Cisco IP Phones Vulnerable
IEEE Spectrum
Dec 18, 2012
http://spectrum.ieee.org/computing/embedded-systems/cisco-ip-phones-vulnerable
Cisco IP phones buggy
NetworkWorld
Dec 12, 2012
http://www.networkworld.com/community/node/82046
Researchers Identify Security Vulnerabilities In VoIP Phones
Red Orbit
Jan 8, 2013
http://www.redorbit.com/news/technology/1112759485/voip-phones-security-vulnerability-software-symbiote-010813/
Security Researcher Compromises Cisco VoIP Phones With Vulnerability
Darkreading
Dec 13, 2012
http://www.darkreading.com/threat-intelligence/167901121/security/attacks-breaches/240144378/security-researcher-compromises-cisco-voip-phones-with-vulnerability.html
Remotely listen in via hacked VoIP phones: Cisco working on eavesdropping patch
Computerworld
Jan 8, 2013
http://blogs.computerworld.com/cybercrime-and-hacking/21600/remotely-listen-hacked-voip-phones-cisco-working-eavesdropping-patch
Cisco IP Phones Hacked
Fast Company
Dec 19, 2012
http://www.fastcompany.com/3004163/cisco-ip-phones-hacked
Cisco rushing to fix broken VoIP patch
IT World Canada
Jan 8, 2013
http://www.itworldcanada.com/news/cisco-rushing-to-fix-broken-voip-patch/146562
Cisco working to fix broken patch for VoIP phones
IDG News Service for CSO Online
Jan 7, 2013
http://www.csoonline.com/article/725788/cisco-working-to-fix-broken-patch-for-voip-phones
Your Cisco phone is listening to you: 29C3 talk on breaking Cisco phones
Boing Boing
Dec 29, 2012
http://boingboing.net/2012/12/29/your-cisco-phone-is-listening.html
Yet another eavesdrop vulnerability in Cisco phones
The Register
Dec 13, 2012
http://www.theregister.co.uk/2012/12/13/cisco_voip_phones_vulnerable/
Cisco VoIP Phones Affected By On Hook Security Vulnerability
Forbes
Dec 6, 2012
http://www.forbes.com/sites/robertvamosi/2012/12/06/off-hook-voip-phone-security-vulnerability-affects-some-cisco-models/
Discovered vulnerabilities in Cisco VoIP phones
KO IT (in Russian)
Jan 8, 2013
http://ko.com.ua/obnaruzheny_uyazvimosti_v_telefonah_cisco_voip_70011
http://forums.cnet.com/7726-6132_102-5409269.html
http://www.xsnet.com/blog/bid/112454/Jenn%20Cano
http://news.softpedia.com/news/Kernel-Vulnerability-in-Cisco-Phones-Can-Be-Exploited-for-Covert-Surveillance-Video-320168.shtml
http://www.securelist.com/en/advisories/51768
http://accublog.wordpress.com/2013/01/10/eavesdropping-on-your-phone-from-anywhere-in-the-world/
http://geekapolis.fooyoh.com/geekapolis_gadgets_wishlist/8247285
http://eddydemland.blogspot.com/2013/01/hack-turns-ciscos-desk-phone-into.html
http://www.onenewspage.us/n/Technology/74vnp9j0m/Kernel-Vulnerability-in-Cisco-Phones-Can-Be-Exploited.htm
http://technology.automated.it/2013/01/10/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://www.i4u.com/2013/01/youtube/warning-your-be-you-desk-may-spying-phone-cisco
http://www.shafaqna.com/english/other-services/featured/itemlist/tag/cisco.html
http://www.ieverythingtech.com/2013/01/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://dailyme.com/story/2013011000002065/hack-turns-cisco-s-desk-phone-into-a-spying-device
http://truthisscary.com/2013/01/video-hacked-phones-could-be-listening-to-everything-you-say/
http://www.smokey-services.eu/forums/index.php?topic=227209.0
http://technewstube.com/theverge/154392/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://finance.yahoo.com/news/security-researcher-demonstrates-enterprise-voip-130000432.html
The program is designed for students interested in the intersection of the two departments. In particular, its focus is on computer systems, combining skills in both hardware and software in areas including digital design, computer architecture (both sequential and parallel), embedded systems, computer-aided design, and networking.
To learn more about this program, please see http://www.compeng.columbia.edu
Vasilis Pappas, Michalis Polychronakis, and Angelos D. Keromytis. IEEE Security & Privacy, May 2012.
Photo/announcement:
https://www.facebook.com/photo.php?fbid=491657707521734&set=a.157827437571431.30830.157394210948087&type=1&theater
Details of the competition:
http://www.poly.edu/csaw2012/csaw-kaspersky
He now advances to the International Round in London.
For more please see http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1218222
Smartphones are increasingly ubiquitous. Many users are
inconveniently forced to carry multiple smartphones for
work, personal, and geographic mobility needs.
This research is developing Cells, a lightweight virtualization
architecture for enabling multiple virtual smartphones to run
simultaneously on the same physical cellphone device in a securely
isolated manner. Cells introduces a new device namespace mechanism
and novel device proxies that efficiently and securely multiplex phone
hardware devices across multiple virtual phones while providing native
hardware device performance to all applications. Virtual phone
features include fully-accelerated graphics for gaming, complete power
management features, easy-to-use security and safety mechanisms that
can transparently and dynamically control the availability of phone
features, and full telephony functionality with separately assignable
telephone numbers and caller ID support. Cells is being implemented in
Android, the most widely used smartphone platform, to transparently
support multiple Android virtual phones on the same phone hardware.
While the primary focus of this research is smartphone devices, the
development of these ideas will also be explored in the context of
tablet devices.
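The device-proxy idea can be sketched in a few lines. The classes and names below are hypothetical illustrations of the concept, not code from Cells: one physical device is multiplexed across several virtual phones, and only the foreground phone's calls reach the real hardware.

```python
# Hypothetical sketch of the device-proxy concept (names are ours, not Cells'):
# a proxy sits between virtual phones and a physical device, passing through
# only the foreground phone's requests.
class PhysicalAudio:
    """Stands in for the real audio hardware; records what reaches it."""
    def __init__(self):
        self.log = []
    def play(self, vp, sound):
        self.log.append((vp, sound))

class DeviceProxy:
    def __init__(self, device):
        self.device = device
        self.foreground = None        # which virtual phone owns the device
    def switch_to(self, vp):
        self.foreground = vp
    def play(self, vp, sound):
        # Background phones are silently virtualized; only the foreground
        # phone's calls are forwarded to the physical device.
        if vp == self.foreground:
            self.device.play(vp, sound)

audio = PhysicalAudio()
proxy = DeviceProxy(audio)
proxy.switch_to("work-phone")
proxy.play("work-phone", "ring")      # reaches the hardware
proxy.play("personal-phone", "ring")  # multiplexed away in the background
print(audio.log)                      # [('work-phone', 'ring')]
```

The real system does this at the kernel level for graphics, power, telephony, and other devices, while preserving native performance; this sketch only conveys the multiplexing structure.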
The results of this research are providing a foundation for future
innovations in smartphone computing, enabling new uses and
applications and transforming the way the devices can be used. This
includes not only greater system security, but greater user safety
especially for young people. Integrating this research with the CS
curriculum provides students with hands-on learning through
programming projects on smartphone devices, enabling them to become
contributors to the workforce as smartphones become an increasingly
dominant computing platform.
While races in multithreaded programs have drawn huge attention from the
research community, little has been done for API races, a class
of errors as dangerous and as difficult to debug as traditional thread
races. An API race occurs when multiple activities, whether they be
threads or processes, access a shared resource via an application
programming interface (API) without proper synchronization. Detecting
API races is an important and difficult problem as existing race
detectors are unlikely to work well with API races.
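A minimal illustration of an API race (our example, not one from this study) is the classic check-then-create pattern on the filesystem API: the check and the action are each correct alone, but the pair is not atomic across processes.

```python
import os
import tempfile

# Illustrative API race (ours, not RacePro's): two activities share the
# filesystem via its API without synchronization.  The check-then-create
# pair below is not atomic, so another process can create `path` between
# the two calls and this code will silently clobber it.
def racy_create(path):
    if not os.path.exists(path):      # check ...
        with open(path, "w") as f:    # ... then act: not atomic as a pair
            f.write("initialized")

# The race-free version asks the API for a single atomic operation instead:
def safe_create(path):
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)  # atomic create
    with os.fdopen(fd, "w") as f:
        f.write("initialized")

d = tempfile.mkdtemp()
safe_create(os.path.join(d, "cfg"))
try:
    safe_create(os.path.join(d, "cfg"))   # a second atomic create fails loudly
except FileExistsError:
    print("second create rejected")
```

Detecting such races is hard precisely because each individual API call looks correct; only certain interleavings of calls from different processes expose the bug.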
Software reliability increasingly affects everyone, whether or not
they personally use computers. This research studies and
automatically detects for the first time an important class of races
that has a significant impact on software reliability. The study
quantitatively demonstrates how API races are numerous, difficult to
debug, and a real threat to software reliability. To address this
problem, this research is developing RacePro, a new system to
automatically detect API races in deployed systems. RacePro checks
deployed systems in vivo by recording live executions and then
deterministically replaying and checking them later. This approach
increases checking coverage beyond the configurations or executions
covered by software vendors or beta testing sites. RacePro records
multiple processes and threads, detects races in the recording among
API methods that may concurrently access shared objects, then explores
different execution orderings of such API methods to determine which races
are harmful and result in failures. Technologies developed will help
application developers detect insidious software defects, enabling
more robust, reliable, and secure software infrastructure.
The grant will support the work of Profs. Sethumadhavan (CS), Seok (EE), and Tsividis (EE) on Hybrid Continuous-Discrete Computers for Cyber-Physical Systems, aiming at specialized single-chip computers with improved power and performance.
Professors Tsividis (EE), Seok (EE), and Sethumadhavan (CS), together with their collaborators in the Department of Mechanical Engineering at the University of Texas at Austin, have been awarded a three-year, $1.1M NSF grant under the agency’s Cyber-Physical Systems program for research in Hybrid Continuous-Discrete Computers for Cyber-Physical Systems.
The research augments the today-ubiquitous discrete (digital) model of computation with continuous (analog) computing, which is well-suited to the continuous natural variables involved in cyber-physical systems, and to the error-tolerant nature of computation in such systems. The result is a computing platform on a single silicon chip, with higher energy efficiency, higher speed, and better numerical convergence than is possible with purely discrete computation. The research has thrusts in hardware, architecture, microarchitecture, and applications.
For more on DaMoN see http://fusion.hpl.hp.com/damon2012/program.html
The paper is currently highlighted on the journal home page and will be available to the public for free for about six months (http://www.computer.org/cal).
Congratulations to the authors for this recognition of their research!
Heterogeneous SoC architectures, which combine a variety of programmable components and special-function accelerators, are emerging as a fundamental computing platform for many systems from computer servers in data centers to embedded systems and mobile devices.
Design productivity for SoC platforms depends on creating and maintaining reusable components at higher levels of abstraction and on hierarchically combining them to form optimized subsystems. While the design of a single component is important, the critical challenges are in the integration and management of many heterogeneous components. The goal of this project is to establish Supervised Design-Space Exploration as the foundation for a new component-based design environment in which hardware-accelerator developers, software programmers and system architects can interact effectively while they each pursue their specific goals.
For more details:
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1219001
The work will focus on low-power and high-performance interconnection
networks, targeted to both shared-memory parallel processors and
systems-on-chip for consumer electronics. The aim is to develop a new
class of dynamically adaptable on-chip digital networks that continually
self-reconfigure, at very fine granularity, to customize their operation
to actual observed traffic patterns. Prediction and learning techniques
will be explored to optimally reconfigure the on-chip networks.
The use of asynchronous networks supports the seamless integration of multiple synchronous processors and memories operating at different
clock rates. The ultimate goal is a significant breakthrough in system latency, power, area and reliability, over synchronous approaches.
Kristen Parton, Nizar Habash, and Kathy McKeown won a Best Paper Award at EAMT 2012 (the Conference of the European Association for Machine Translation) for their paper entitled "Can Automatic Post-Editing Make MT More Meaningful?" This paper presents research done by Kristen Parton for her dissertation.
The paper is currently highlighted on the CACM website (http://cacm.acm.org/research?date=year&subject=11).
Congratulations to the authors for this recognition of their research!
http://www.nytimes.com/2011/05/29/nyregion/immersed-in-nature-eyes-on-the-screen-app-city.html
http://www.nytimes.com/2011/09/01/technology/personaltech/mobile-apps-make-it-easy-to-point-and-identify.html?pagewanted=all
http://www.nytimes.com/2012/04/05/garden/new-gardening-apps.html?pagewanted=2&_r=2
http://www.nytimes.com/2011/08/14/fashion/this-life-a-plugged-in-summer.html?pagewanted=all
http://intransit.blogs.nytimes.com/2011/10/05/leaf-peeping-theres-an-app-for-that/
http://query.nytimes.com/gst/fullpage.html?res=9504E4DB143EF934A25750C0A9679D8B63&pagewanted=all
http://www.nytimes.com/2011/06/09/technology/personaltech/09PHONES.html?pagewanted=all
http://query.nytimes.com/gst/fullpage.html?res=9B01E2DF1530F93AA35753C1A9679D8B63
Fortunately, many such applications reflect 'metamorphic properties' that define a relationship between pairs of inputs and outputs, such that for any previous input i with its already known output o, one can easily derive a test input i' and predict the expected output o'. If the actual output o'' differs from o', then there must be an error in the code. This project investigates a methodology for determining the metamorphic properties of software and for devising good test cases from which the secondary tests can be derived. The project extends the inputs/outputs considered in previous work on metamorphic testing to focus on application state, before and after, rather than just functional parameters and results. The research also extends the pairwise relations implied by metamorphic properties to 'semantic similarity' for nondeterministic applications, applied to profiles from numerous executions, since an exact relation cannot be expected to hold for a single pair of test executions. These extensions enable treatment of more sophisticated properties that preliminary experiments have shown to reveal defects that were not detected otherwise. Read more.
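The pattern can be sketched in a few lines. This is a hedged illustration, not the project's actual tooling: the function under test, the permutation property, and the helper `metamorphic_check` are all hypothetical stand-ins chosen for simplicity.

```python
# A minimal metamorphic test: permuting the input must not change the
# result, so a follow-up test (i', o') can be derived from any prior
# input/output pair (i, o) without knowing the "correct" answer.
import random

def mean(values):
    """Function under test; stands in for code with no easy oracle."""
    return sum(values) / len(values)

def metamorphic_check(values, tolerance=1e-9):
    o = mean(values)                  # original output o
    follow_up = values[:]
    random.shuffle(follow_up)         # derived test input i'
    o2 = mean(follow_up)              # actual output o''
    # The expected output o' equals o under the permutation property;
    # a tolerance absorbs floating-point reordering effects.
    return abs(o - o2) <= tolerance

assert metamorphic_check([3.0, 1.0, 2.0, 5.0])
```

The point of the technique is that the check never consults a known-good answer; it only compares two executions against each other.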
This project aims to improve the programmability and efficiency of distributed memory systems, a key issue in the execution of parallel algorithms. While it is fairly easy to put, say, thousands of independent adders on a single chip, it is far more difficult to supply them with useful data to add, a task that falls to the memory system. This research will develop compiler optimization algorithms that configure and orchestrate parallel memory systems capable of feeding such parallel computational resources.
To make more than incremental progress, this project departs from existing practice in two important ways. First, its techniques will be applied only to algorithms expressed in the functional style, a more abstract, mathematically sound representation that enables precise reasoning about parallel algorithms and very aggressive optimizations. Second, it targets field-programmable gate arrays (FPGAs) rather than existing parallel computing platforms. FPGAs provide a highly flexible platform that enables exploring parallel architectures far different from today's awkward solutions, which are largely legacy sequential architectures glued together. While FPGAs are far too flexible and power-hungry to be the long-term "solution" to the parallel computer architecture question, their use grounds this project in physical reality and will produce useful hardware synthesis algorithms as a side effect.
Judicious and efficient data movement is the linchpin of parallel computing. This project attacks that challenge head on, establishing the constructs and algorithms necessary for hardware and software to efficiently manipulate data together. This research will lay the groundwork for the next generation of storage and instruction set architectures, compilers, and programming paradigms -- the bedrock of today's mainstream computing.
The PI's Intrusion Detection Systems (IDS) Lab will investigate and evaluate techniques to detect and defend against advanced malware threats to the Internet routing infrastructure. A recent study published by the IDS Lab demonstrates that a vast number of unsecured embedded systems on the Internet, primarily routers, are trivially vulnerable to exploitation with little to no effort. As of December 2011, 1.4 million trivially vulnerable devices are in easy reach of even the most unsophisticated attacker. The IDS Lab will fully develop and deploy an experimental system that injects intrusion detection functionality within the firmware of a (legacy) router, sensing unauthorized modification of the router firmware. The technology may be developed and deployed as a sensor in an Early Attack Warning System, but it may also be implemented to prevent firmware modifications. The IDS Lab will demonstrate the highest levels of protection that can be achieved with this novel technology in a range of embedded system device types. This is the thesis research of PhD student Ang Cui and a team of project students.
See https://academicjobs.columbia.edu/applicants/jsp/shared/frameset/Frameset.jsp?time=1332985141422 for application details and a fuller description of the fellowships.
For more on the fellowship, please see
http://www.capitalnewyork.com/article/media/2012/01/5160427/helen-gurley-brown-gives-transformative-18-million-columbias-journalis
For more on our joint Journalism + CS Masters program, please see
http://www.wired.com/epicenter/2010/04/will-columbia-trained-code-savvy-journalists-bridge-the-mediatech-divide/
Congratulations to Jeremy and his advisor Jason!
The Air Force YIP supports scientists and engineers who show exceptional ability and promise for conducting basic research.
Junfeng will investigate concurrency attacks and defenses. Today's multithreaded programs are plagued with subtle but serious concurrency vulnerabilities such as race conditions. Just as vulnerabilities in sequential programs can lead to security exploits, concurrency vulnerabilities can also be exploited by
attackers to gain privileges, steal information, inject arbitrary code, etc. Concurrency attacks targeting these vulnerabilities are an impending threat (see CVE http://www.cvedetails.com/vulnerability-list/cweid-362/vulnerabilities.html), yet few existing defense techniques can deal with concurrency vulnerabilities. In fact, many traditional defense techniques are rendered unsafe by concurrency vulnerabilities.
The objective of this project is to take a holistic approach to creating novel program analysis/protection techniques and a system called DASH to secure multithreaded programs and harden traditional defense techniques in a concurrent environment. The greatest impact of our project will be drastically improved software security and reliability, benefiting the Nation’s cyber infrastructure.
For more on this award, see http://www.wpafb.af.mil/library/factsheets/factsheet.asp?id=9332
The demo titled "Organic Solar Cell-equipped Energy Harvesting Active Networked Tag (EnHANT) Prototypes" was developed by 10 students (Gerald Stanje, Paul Miller, Jianxun Zhu, Alexander Smith, Olivia Winn, Robert Margolies, Maria Gorlatova, John Sarik, Marcin Szczodrak, and Baradwaj Vigraham) from the groups of Professors Carloni (CS), Kinget, Kymissis, and Zussman.
The EnHANTs Project is an interdisciplinary project that focuses on developing small, flexible, and energetically self-reliant devices. These devices can be attached to objects that are traditionally not networked (e.g., books, furniture, walls, doors, toys, keys, clothing, and produce), thereby providing the infrastructure for various novel tracking applications. Examples of these applications include locating misplaced items, continuous monitoring of objects (e.g., items in a store and boxes in transit), and determining locations of disaster survivors.
The SenSys demo showcased EnHANT prototypes that are integrated with novel custom-developed organic solar cells and with novel custom Ultra-Wideband (UWB) transceivers, and demonstrated various network adaptations to environmental energy conditions. A video of the demo will soon be available on the EnHANTs website.
In 2009, the project won first place in the Vodafone Americas Foundation Wireless Innovation Competition; in 2011, it received the IEEE Communications Society Award for Outstanding Paper on New Communication Topics. The project has been supported by the National Science Foundation, the Department of Energy, the Department of Homeland Security, Google, and Vodafone.
Recently, concepts and methodologies from game theory and economics have found numerous successful applications in the study of the Internet and e-commerce. The main goal of this proposal is to bridge the algorithmic gap between these disciplines. The PI will work to develop efficient algorithms for some of the fundamental models and solution concepts and to understand the computational difficulties inherent within them, with the aim of inspiring and enabling next-generation e-commerce systems. The proposed research will contribute to a more solid algorithmic and complexity-theoretic foundation for the interdisciplinary field of Algorithmic Game Theory.
Details of the event are at http://www.kaspersky.com/educational-events/it_security_conference_2012_usa
Cells: A Virtual Mobile Smartphone Architecture
by Jeremy Andrus, Christoffer Dall, Alex Van’t Hof, Oren Laadan, Jason Nieh
Smartphones are increasingly ubiquitous, and many users carry multiple phones to accommodate work, personal, and geographic mobility needs. The authors created Cells, a virtualization architecture for enabling multiple virtual smartphones to run simultaneously on the same physical cellphone in an isolated, secure manner. Cells introduces a usage model of having one foreground virtual phone and multiple background virtual phones. This model enables a new device namespace mechanism and novel device proxies that integrate with lightweight operating system virtualization to multiplex phone hardware across multiple virtual phones while providing native hardware device performance. Cells virtual phone features include fully accelerated 3D graphics, complete power management features, and full telephony functionality with separately assignable telephone numbers and caller ID support. They have implemented a prototype of Cells that supports multiple Android virtual phones on the same phone. Their performance results demonstrate that Cells imposes only modest runtime and memory overhead, works seamlessly across multiple hardware devices including Google Nexus 1 and Nexus S phones, and transparently runs Android applications at native speed without any modifications.
Presented in Basel, Switzerland, "Augmented Reality in the Psychomotor Phase of a Procedural Task" reports on a key part of Steve Henderson's spring 2011 dissertation, and was coauthored by Dr. Henderson and his advisor, Prof. Steve Feiner. It presents the design and evaluation of a prototype augmented reality user interface designed to assist users in performing an aircraft maintenance assembly task. The prototype tracks the user and multiple physical task objects, and provides dynamic, prescriptive, overlaid instructions on a tracked, see-through, head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete aspects of the assembly task in which they physically manipulated task objects significantly faster and with significantly greater accuracy when using augmented reality than when using 3D-graphics-based assistance presented on a stationary LCD panel.
He is a member of the CryptoLab at Columbia University, where he is advised by Tal Malkin. Congratulations to Aaron and to the CryptoLab for this outstanding research contribution!
"All of the winning applications have applied advanced networking technology to enable significant progress in research, teaching, learning or collaboration to increase the impact of next-generation networks around the world,” said Tom Knab, chair of the IDEA award judging committee and chief information officer, Case Western Reserve University’s College of Arts & Sciences. “The winning submissions were from an exceptionally strong nominations pool and represent a cross-section of the wide-ranging innovation that is occurring within the Internet2 member community. Also, for the first time, we added a category for applications developed by students and those were remarkable for their creativity and relevance.”
Kyung-Hwa Kim’s project, DYSWIS, is a collaborative network fault diagnosis system with a complete framework for fault detection, user collaboration, and fault diagnosis for advanced networks. As application complexity increases, so does the need for network fault diagnosis for end-users; however, existing failure diagnosis techniques do little to help end-users when applications and services become inaccessible. The key idea of DYSWIS is to have end-users collaborate to diagnose a network fault in real time, collecting diverse information from different parts of the network and inferring the cause of the failure.
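The inference step can be illustrated with a toy sketch. This is not DYSWIS code: the probe locations and the simple majority-style inference rule below are hypothetical, chosen only to show how diverse vantage points narrow down a failure's cause.

```python
# Toy sketch of collaborative diagnosis: peers in different parts of
# the network report whether a service is reachable, and the failure
# cause is inferred from the pattern of observations.
def infer_cause(observations):
    """observations: {peer_location: reachable (bool)}"""
    failures = [loc for loc, ok in observations.items() if not ok]
    if not failures:
        return "local problem (only this user affected)"
    if len(failures) == len(observations):
        return "server or service outage (all peers affected)"
    # Some peers succeed, some fail: suspect the network near the failures.
    return "network problem near: " + ", ".join(sorted(failures))

obs = {"isp-A": True, "isp-B": False, "campus": True}
assert infer_cause(obs) == "network problem near: isp-B"
```

A single end-user sees only one vantage point; combining several is what makes the cause distinguishable at all.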
Internet2, owned by U.S. research universities, is the world’s most advanced networking consortium for global researchers and scientists who develop breakthrough Internet technologies and applications and spark tomorrow’s essential innovations. Internet2 consists of more than 350 U.S. universities; corporations; government agencies; laboratories; institutions of higher learning; and other major national, regional, and state research and education networks and organizations representing more than 50 countries. Internet2 is a registered trademark.
Kyung-Hwa Kim is a Ph.D. student in the Internet Real-Time Lab, headed by Prof. Henning Schulzrinne.
Congratulations to Kyung-Hwa Kim, and his advisor, Henning Schulzrinne!
Press release: http://www.internet2.edu/news/pr/2011.10.04.idea.html
MEERKATS includes partners at George Mason University and Symantec Research Labs.
operations. While the need for data protection is clear, the queries must be protected as well, since they may reveal insights into the requester's interests, agenda, mode of operation, etc. The PIs will develop an efficient and secure system for database access, which allows execution of complex queries and guarantees protection to both server and client. The PIs will build on their existing successful solution, which relies on encrypted Bloom Filters (BF) and novel reroutable encryption to achieve simple keyword searches. The PIs will expand and enhance this system to handle far more complicated queries, support verifiable and private compliance checking, and maintain high performance even for very large databases. First, the PIs will design novel BF population and matching algorithms, which will allow for secure querying based on combinations of basic keywords. Then, the PIs will design and apply various heuristics and data-representation and tokenization techniques to extend this power to range, wildcard, and other query types. Some of the subprotocols will be implemented using Yao's Garbled Circuit (GC) technique, combined with techniques for seamless integration of BF- and GC-based secure computations. In particular, this will prove useful in secure query compliance checking. Finally, the PIs will investigate efficient solutions that eliminate all third helper parties, through the application of (and enhancements to) proxy re-encryption schemes. Using this tool, the (single) server in possession of the searchable encrypted database will be able to perform search and to re-encrypt the obtained result for decryption with the client's key.
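The Bloom filter substrate underlying this line of work can be sketched in plain (unencrypted) form. This is only an illustration of the data structure, setting aside the encryption and reroutable-encryption layers that the project actually contributes; sizes, hash choices, and the sample keywords are arbitrary.

```python
# A plain Bloom filter: each record's keywords are hashed into a fixed
# bit vector, and a keyword query only tests bit positions, never the
# keywords themselves. (The real system operates on encrypted filters.)
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, word):
        # k independent positions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits[p] = 1

    def might_contain(self, word):
        # May return a false positive, but never a false negative.
        return all(self.bits[p] for p in self._positions(word))

record = BloomFilter()
for kw in ["alice", "wire-transfer", "2011-12"]:
    record.add(kw)
assert record.might_contain("alice")
```

Because matching touches only bit positions, the same mechanics carry over once the filter is encrypted, which is what makes BF-based designs attractive for private keyword search.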
The results of Paskov and Traub were obtained through computer experimentation. Theoretical explanation of these results continues to be an active research area; there is as yet no generally accepted explanation.
Read the article on the American Scientist website.
For more information, visit http://www.sigmm.org/news/sigmm-award-2011. Read more.
This past March, Julia Hirschberg also received the IEEE James L. Flanagan Speech and Audio Processing Award.
Julia Hirschberg has been a fellow of the American Association for Artificial Intelligence since 1994 and a fellow of the International Speech Communication Association (ISCA) since 2008. She served as president of ISCA from 2005 to 2007, as editor-in-chief of Computational Linguistics from 1993 to 2003, and as co-editor-in-chief of Speech Communication from 2003 to 2005, and she received a Columbia Engineering School Alumni Association (CESAA) Distinguished Faculty Teaching Award in 2009.
This research poses questions whose answers have consequences at several levels of the traditional system stack: Can programmers be freed from hardware-specific optimization of communication without degrading performance? What abstractions are needed to allow hardware to adapt to the programmer, rather than the other way around? Can communication efficiency be improved when running on an application-specific communication platform? The project answers these questions by exploring abstractions and algorithms to profile a parallel program's communication, synthesize a custom network design, and implement it in a configurable network architecture substrate. The research methods center around the X10 language, and include compiler instrumentation passes, offline communication profile analyses, development of a portable network intermediate representation, and network place and route software algorithms.
The research activities span three fields of computer science: Hardware system and architecture research is carried out in software simulation. This portion of the research explores multiple aspects of the hardware system including efficient implementations of software-style polymorphism and mechanisms to enforce data encapsulation. The project is grounded in a specific, performance-critical, real-world problem of database query processing. This component of the research identifies target types for hardware acceleration that are used in common, complex database operations such as range partitioning. Performance results will be obtained both by direct measurement and by simulation. Finally, the compiler segment of the project develops compiler techniques to link high-level languages to the accelerators available on the target hardware system. The compiler adapts software at runtime to best utilize the available accelerators and to partition code among general-purpose and specialized processing cores.
This project addresses programming challenges posed by the new trend in multicore computing. Multithreaded programs are difficult to write, test, and debug. They often contain numerous insidious concurrency errors, including data races, atomicity violations, and order violations, which we broadly define to be races. A good deal of prior research has focused on race detection. However, little progress has been made to help developers fix races, because existing systems for fixing races work only with a small, fixed set of race patterns and, for the most part, do not work with simple order violations, a common type of concurrency error.
The research objective of this project, LOOM: a Language and System for Bypassing and Diagnosing Concurrency Errors, is to create effective systems and technologies to help developers fix races. A preliminary study revealed a key challenge yet to be addressed in fixing races: how to help developers immediately protect deployed programs from known races. Even with a correct diagnosis of a race, fixing it in a deployed program is complicated and time consuming. This delay leaves large vulnerability windows, potentially compromising reliability and security.
To address these challenges, the LOOM project is creating an intuitive, expressive synchronization language and a system called LOOM for bypassing races in live programs. The language enables developers to write declarative, succinct execution filters to describe their synchronization intents on code. To fix races, LOOM installs these filters in live programs for immediate protection against races, until a software update is available and the program can be restarted.
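The flavor of an execution filter can be conveyed with a small sketch. This is illustrative only: LOOM's actual filter language and its live-update machinery are far more sophisticated, and the helper `mutual_exclusion` below is a hypothetical stand-in for declaring a synchronization intent over code regions.

```python
# Sketch: a "filter" declaring that two functions must be mutually
# exclusive, applied by wrapping both with one shared lock. The real
# system installs such intents into live programs without restarting.
import threading

def mutual_exclusion(*functions):
    lock = threading.Lock()
    def wrap(fn):
        def guarded(*args, **kwargs):
            with lock:                     # enforce the declared intent
                return fn(*args, **kwargs)
        return guarded
    return [wrap(fn) for fn in functions]

balance = {"value": 0}
def deposit(n):  balance["value"] += n
def withdraw(n): balance["value"] -= n

# Install the filter: the racy pair now runs under one lock.
deposit, withdraw = mutual_exclusion(deposit, withdraw)
deposit(10)
withdraw(4)
assert balance["value"] == 6
```

The appeal of the declarative form is that the developer states *what* must not interleave, leaving *how* to the runtime.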
The greatest impact of this project will be a new, effective language and system and novel technologies to improve the reliability of multithreaded programs, benefiting business, government, and individuals.
The PI intends to explore the theoretical underpinnings of the cryptographic challenges that arise in this context. The proposed directions of research touch on the following questions:
-- How can we safely allow others to perform computation on our encrypted data while maintaining its privacy?
-- How can we verify that outsourced computation was done correctly?
-- What stronger security models are needed in this new, highly interactive environment?
We will address the theoretical aspects of these problems, including modeling, protocol design, and negative results. As part of our investigations, we will study the powerful cryptographic primitives of fully homomorphic encryption and functional encryption (in particular, the relationship between them and outsourced and verifiable computations), as well as the area of leakage-resilient cryptography.
The research will develop a novel foundation for creating and exploiting a critical intermediate representation layer between low-level audio-visual features and high-level human events. Signal-based information will be abstracted and represented by "unit models", each of which is trained from small samples of exemplar data, in a sub-space selected from the larger intersection of semantic concepts with image and sound features. These resulting individual discriminators are then leveraged for higher-level ensemble modeling and detection. This middle layer of hundreds of thousands of models provides several advantages: models are trained and reused across many humanly meaningful categories; they each carry a machine-derived unit of semantic information; and they all are trained and applied in an easily parallelized fashion. The work will directly impact the key issues of accuracy, robustness, scalability, and responsiveness of video analysis systems.
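The ensemble step can be sketched compactly. This is a hedged illustration, not the project's method: the unit-model names, scores, weights, and the simple weighted-vote combiner below are all hypothetical, meant only to show how many small semantic discriminators feed one higher-level event decision.

```python
# Sketch: each "unit model" emits a score for its semantic concept;
# a higher-level event detector combines them with a weighted vote.
def detect_event(unit_scores, weights, threshold=0.5):
    num = sum(weights[name] * score for name, score in unit_scores.items())
    den = sum(weights[name] for name in unit_scores)
    return (num / den) >= threshold

# Hypothetical unit-model outputs for one video segment.
scores  = {"crowd": 0.9, "cheering": 0.8, "grass-field": 0.2}
weights = {"crowd": 1.0, "cheering": 2.0, "grass-field": 0.5}
# weighted mean = (0.9 + 1.6 + 0.1) / 3.5, which clears the threshold
assert detect_event(scores, weights)
```

Each unit score is cheap to compute and reusable across many event categories, which is where the scalability of the middle layer comes from.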
The work originated with a CS undergraduate's insight and initiative, and Prof. Bellovin's class assignment dealing with privacy. On her own initiative, undergraduate Michelle Madejski wrote a Facebook app, found subjects, and did a preliminary version of the study. Based on early promising results, Prof. Bellovin and Ph.D. student Maritza Johnson teamed up with Madejski to carry out the full-scale study.
Read the paper here: https://mice.cs.columbia.edu/getTechreport.php?techreportID=1459
For more on the conference see http://docs.law.gwu.edu/facweb/dsolove/PLSC/
Leafsnap is the first in a series of electronic field guides being developed by researchers from Columbia University, the University of Maryland, and the Smithsonian Institution. This free mobile app uses visual recognition software to help identify tree species from photographs of their leaves. Leafsnap contains beautiful high-resolution images of leaves, flowers, fruit, petiole, seeds, and bark. Leafsnap currently includes the trees of New York City and Washington, D.C., and will soon grow to include the trees of the entire continental United States.
Leafsnap turns users into citizen scientists, automatically sharing images, species identifications, and geo-coded stamps of species locations with a community of scientists who will use the stream of data to map and monitor the ebb and flow of flora nationwide.
The Leafsnap family of electronic field guides aims to leverage digital applications and mobile devices to build an ever-greater awareness of and appreciation for biodiversity.
The genesis of Leafsnap was the realization that many techniques used for face recognition developed by Professor Peter Belhumeur and Professor David Jacobs, of the Computer Science departments of Columbia University and the University of Maryland, respectively, could be applied to automatic species identification.
Professors Jacobs and Belhumeur approached Dr. John Kress, Chief Botanist at the Smithsonian, to start a collaborative effort for designing and building such a system for plant species. Columbia and the University of Maryland designed and implemented the visual recognition system used for automatic identification. In addition, Columbia University designed and wrote the iPhone, iPad, and Android apps, the leafsnap.com website, and wrote the code that powers the recognition servers. The Smithsonian was instrumental in collecting the datasets of leaf species and supervising the curation efforts throughout the course of the project. As part of this effort, the Smithsonian contracted the not-for-profit nature photography group Finding Species, which collected and photographed the high-quality photos available in the apps and the website.
The IEEE Communications Society Award for Outstanding Paper on New Communication Topics is given to "outstanding papers that open new lines of work, envision bold approaches to communication, formulate new problems to solve, and essentially enlarge the field of communications engineering." It is given to a paper published in any IEEE Communications Society publication in the previous calendar year.
The award will be presented at the 2011 IEEE International Conference on Communications (ICC'2011) award ceremony.
More information about the EnHANTs project can be found at http://enhants.ee.columbia.edu/. Read more.
This award is highly visible due to its media-oriented backing. One of 13 awardees, Dana will receive $750k in direct costs over three years for her project, "A Systems Approach to Understanding Tumor Specific Drug Response." Pe’er’s research focuses on elucidating tumor-specific molecular networks, working towards personalized cancer care. The project will develop and use machine learning approaches for the integration and analysis of high-throughput data toward understanding the tumor regulatory network and its response to drugs, as well as the genetic determinants of this response.
Please see:
http://www.standup2cancer.org/node/4782
http://www.youtube.com/watch?v=9lDh1iiO9KA&feature=player_embedded
Read more at http://www.engineering.columbia.edu/nae-elects-prof-yannakakis-member
The build-and-learn aspect of BigShot has a lot of appeal, says Margaret Honey, CEO of the New York Hall of Science in Queens, N.Y., a hands-on, family-oriented science and technology museum. "I've seen lots of technology and engineering projects throughout my career, and I was really taken with this," she says. "The strategy of engineering this device so that kids can fairly easily put this together without starting from scratch is incredibly smart. I love that kids end up with a working camera and that the assembly of the project is just the beginning."
Multithreaded programs are becoming increasingly critical, driven by the
rise of multicore hardware and the coming storm of cloud computing.
Unfortunately, these programs remain difficult to write, test, and debug.
A key reason for this difficulty is nondeterminism: different runs of a
multithreaded program may show different behaviors depending on how the
threads interleave. Nondeterminism complicates almost every development
step of multithreaded programs. For instance, it weakens testing because
the schedules tested may not be the ones run in the field; it complicates
debugging because reproducing a buggy schedule is hard.
In the past three decades, researchers have developed many techniques to
address nondeterminism. Despite these efforts, it remains an open
challenge to achieve both efficiency and determinism for general
multithreaded programs on commodity multiprocessors.
This project aims to address this fundamental challenge. Its key insight
is that one can reuse a small number of schedules to process a large
number of inputs. Based on this insight, it takes an approach called
schedule memoization that memoizes past schedules and, when possible,
reuses them for future runs. This approach amortizes the high overhead of
making one schedule deterministic over many reuses and makes a program
repeat familiar behaviors whenever possible. A real-world analogy to this
approach is animals' natural tendencies to follow familiar routes to avoid
hazards and discovery overhead of unknown routes.
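The idea can be shown in miniature. This toy sketch is not the project's system: real schedule memoization constrains interleavings at synchronization points, whereas the stand-in below simply replays a memoized task order, the degenerate case of a schedule.

```python
# Toy schedule memoization: record the order used for an input the
# first time, then replay that same order for future runs that match.
memoized = {}   # input signature -> schedule (ordered list of task ids)

def run(tasks, signature):
    if signature in memoized:
        order = memoized[signature]      # reuse a familiar schedule
    else:
        order = sorted(tasks)            # choose one schedule...
        memoized[signature] = order      # ...and memoize it for reuse
    log = []
    for task_id in order:                # deterministic replay
        log.append(tasks[task_id]())
    return log

tasks = {1: lambda: "a", 2: lambda: "b"}
first  = run(tasks, signature="len=2")
second = run(tasks, signature="len=2")   # hits the memoized schedule
assert first == second
```

The payoff mirrors the route-following analogy: the cost of making one schedule deterministic is paid once and amortized over every later run that reuses it.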
The greatest impact of this project will be a novel approach and new,
effective systems and technologies for improving software reliability, thus
benefiting every business, government, and individual.
Harmon, who was an NSF Graduate Research Fellow during his doctoral studies at the Columbia University School of Engineering & Applied Science, completed his PhD thesis in 2010, as a member of the Columbia Computer Graphics Group directed by Prof. Eitan Grinspun. He has worked at both Walt Disney Animation Studios (makers of Snow White through Tangled) and Weta Digital (makers of The Lord of the Rings through Avatar), applying research technologies to problems in digital special effects. His work on contact algorithms for the motion of fabric is used in films such as Disney's Tangled.
The conference byline is "UIST (ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces." The conference has been held yearly for the past 22 years. This is the eighth year of Lasting Impact awards.
and sensor networks need to process large volumes of updates while
supporting on-line analytic queries. With large amounts of RAM, single
machines are potentially able to manage hundreds of millions of
items. With multiple hardware threads, as many as 64 on modern
commodity multicore chips, many operations can be processed
concurrently.
Processing queries and updates concurrently can cause
interference. Queries need to see a consistent database state, meaning
that at least some of the time, updates will need to wait for queries
to complete. To address this problem, a RAM-resident snapshot of the database is taken at
various points in time. Analytic queries operate over the snapshot,
eliminating interference, but allowing answers to be slightly out of
date. Several different snapshot creation methods are being developed
and studied, with the goal of being able to create snapshots
rapidly (e.g., in fractions of a second) while minimizing the overhead
on update processing.
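A copy-on-write variant of the idea can be sketched briefly. This is a hedged, single-threaded illustration, not one of the snapshot methods under study: the class and its preserve-on-first-write rule are hypothetical simplifications.

```python
# Sketch: queries read a frozen snapshot while updates proceed against
# the live store; the first write to a key after the snapshot preserves
# the key's old value for readers (copy-on-write).
class SnapshotStore:
    def __init__(self):
        self.live = {}
        self.frozen = None            # key -> value as of snapshot time

    def snapshot(self):
        self.frozen = {}              # cheap: copy nothing up front

    def put(self, key, value):
        if self.frozen is not None and key not in self.frozen:
            self.frozen[key] = self.live.get(key)   # preserve old value
        self.live[key] = value

    def query(self, key):
        if self.frozen is not None and key in self.frozen:
            return self.frozen[key]   # consistent snapshot view
        return self.live.get(key)

s = SnapshotStore()
s.put("x", 1)
s.snapshot()
s.put("x", 2)             # update after the snapshot
assert s.query("x") == 1  # analytic query still sees the snapshot
assert s.live["x"] == 2   # updates continue on the live store
```

Snapshot creation here is nearly free, with the copying cost deferred to the updates that actually conflict, one of the trade-offs such methods must balance against update overhead.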
These problems are studied both for traditional server machines, as
well as for multicore mobile devices. By keeping personalized, up to
date data on a user's mobile device, a wide range of potential new
applications could be supported while avoiding the privacy concerns of
widely distributing one's location. The research focus is on how to
efficiently utilize the many processing cores available on modern
machines, both traditional and mobile devices. A primary goal is to
allow performance to scale as additional cores become available in
newer generations of hardware.
More information can be found at http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1049898
from the Department of Computer Science. This award is in recognition of
his dedication to teaching and his efforts to make Computer Science
accessible to all students.
Google’s goal for the award is to encourage women to excel in computing and technology and become active role models and leaders in the field. The company will sponsor the award recipients to the Grace Hopper Celebration of Women in Computing to be held in Atlanta in September. According to Google, “Anita Borg devoted her adult life to revolutionizing the way we think about technology and dismantling barriers that keep women and minorities from entering computing and technology fields. Her combination of technical expertise and fearless vision continues to inspire and motivate countless women to become active participants and leaders in creating technology.”
Another Columbia Engineering student, Zeinab Abbassi (Ph.D. candidate in Computer Science), was a finalist for the scholarship. Her adviser is Vishal Misra, associate professor. Read more.
The proposal, entitled "Power-Adaptive, Event-Driven Data Conversion and Signal Processing Using Asynchronous Digital Techniques", addresses the increasing demand for ultra low-power and high-quality microelectronic systems that continuously acquire and process information, as soon as it becomes available. In these applications, new information is generated infrequently, at irregular and unpredictable intervals. This event-based nature of the information calls for a drastic re-thinking of how these signals are monitored and processed.
Traditional synchronous (i.e. clocked) digital techniques, which use fixed-rate operation to evaluate data whether or not it has changed, are a poor match for the above applications, and often lead to excessive power consumption. This research aims instead to provide viable "event-based" systems: controlled not by a clock but rather by the arrival of each event. Asynchronous (i.e. clock-less) digital logic techniques, which are ideally suited for this work, are combined with continuous-time digital signal processing, to make this task possible. Such continuous-time data acquisition and processing promises significant power and energy reduction, flexible support for a variety of signal processing protocols and encodings, high-quality output signals, and graceful scalability to future microelectronic technologies. A series of silicon chips will be designed and fully evaluated, culminating in a fully programmable, event-driven data acquisition and signal processing system, which can be used as a testbed for a wide variety of real-world applications.
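The contrast between clocked and event-driven operation can be illustrated with a toy level-crossing sampler (the signal and threshold below are invented, and real continuous-time techniques operate on analog inputs rather than arrays):

```python
def clocked_samples(signal):
    """A clocked system evaluates every sample, whether or not it changed."""
    return list(signal)

def event_driven_samples(signal, delta):
    """An event-driven system does work only when the input has moved
    by more than delta since the last emitted value."""
    events = []
    last = None
    for i, x in enumerate(signal):
        if last is None or abs(x - last) > delta:
            events.append((i, x))
            last = x
    return events

# A mostly idle signal: new information arrives infrequently and irregularly.
signal = [0.0] * 50 + [1.0] * 50 + [1.02] * 50
```

On this signal the clocked version processes all 150 samples while the event-driven version emits only two events, which is the kind of power saving the proposal targets.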
The ONR Young Investigator Program invests in academic scientists and engineers who show exceptional promise for creative study. In 2010 the ONR selected 17 award recipients from 211 proposal submissions.
Read more about this on the Columbia SEAS
news webpage.
For more information on the ONR Young Investigator Program please see
the official press release.
the premier Human-Computer Interaction conference. The paper,
"Designing Patient-Centric Information Displays for Hospitals,"
proposes a design for in-room, patient-centric information displays,
based on iterative design with physicians and a study with emergency
department patients at Washington Hospital Center, a large urban
hospital. The research was conducted by Wilcox during a summer
internship at Microsoft Research with Dan Morris and Desney Tan of
Microsoft Research, in collaboration with Justin Gatewood of MedStar
Institute for Innovation. The study included the presentation of
real-time information to patients based on their medical records,
during their visit to an Emergency Department. Subjective responses to
in-room displays were overwhelmingly positive, and the study elicited
guidelines (regarding specific information types, privacy, use cases,
and information presentation techniques) that could be used
for a fully-automatic implementation of the design.
The Goldwater Scholarship funds and supports outstanding undergraduate scholars in the sciences, mathematics, and engineering to pursue a Ph.D. in those fields.
were awarded the Best Paper Award at
the 2010 International Conference on Computational Photography (ICCP), for
their paper titled "Spectral Focal Sweep: Extended Depth of Field from
Chromatic Aberrations." The paper described a new technique for capturing
photographs with very wide depth of field. The conference was held at
MIT on March 28-30.
with the goal of improving their energy-efficiency, comfort, and safety. Traditional buildings account for about 40% of the total energy consumed in the
United States. A central theme of the proposed research is to model a future high-performance building as a cyber-physical system whose complex dynamics arise from the interaction among its physical attributes, the operating equipment (such as sensors, embedded processors, and HVAC components), and the
behavior of its occupants. Emphasis is placed on the development of methods to make the distributed embedded system robust to uncertainty and adaptive to change.
More details: http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0931870
the winners of this year’s Anita Borg Women of Vision Awards. Three
leaders in technology – Kristina M. Johnson, Under Secretary for Energy,
Department of Energy, Kathleen R. McKeown, Henry and Gertrude Rothschild
Professor of Computer Science, Columbia University, and Lila Ibrahim,
General Manager, Emerging Markets Platform Group, Intel Corporation will
be honored for their accomplishments and contributions as women in
technology at ABI’s fifth annual Women of Vision Awards Banquet at the
Mission City Ballroom, Santa Clara, California on May 12, 2010. Read more.
The DEPS portfolio ranges from disciplinary boards such as mathematics, physics, computer science, and astronomy to boards and standing committees serving
each of the major military services as well as the intelligence community and the Department of Homeland Security.
After 10 years of service, Traub has stepped down as Chair of the Computer Science and Telecommunications Board (CSTB). He served as founding chair from 1986 to 1992 and served again from 2005 to 2009.
The Tech Awards, presented by Applied Materials, is a signature program of The Tech Museum. Established in 2001, The Tech Awards recognizes Laureates in five categories: environment, economic development, education, equality, and health. These Laureates have developed new technological solutions or innovative ways to use existing technologies to significantly improve the lives of people around the world. Dr. White received one of the three Intel Environment Awards. Read more.
Team Columbia 1 (ranked 2nd):
- Jingyue Wu (PhD, computer science)
- Varun Jalan (MS, computer science)
- Zifeng Yuan (PhD, civil engineering)
Team Columbia 2 (ranked 6th):
- Chen Chen (PhD, IEOR)
- Huzaifa Neralwala (MS, computer science)
- Jiayang Jiang (Junior, mathematics)
Due to their performance, team Columbia 1 was also selected to be one of 100 teams (chosen from over 7,000 around the world) to advance to the world finals competition, to be held in Harbin, China from February 1--6. The teams were led by coach John Zhang (PhD student, computer science).
media.
enhance routing protocols so that they can compute high-performance
routes in a computationally efficient manner without leaking
information that might reveal the location of participating nodes.
This allows users to send and receive high-bandwidth, low-latency
transmissions such as video and audio feeds without revealing their
location. Potential applications include celebrity multimedia
twitter-like feeds, and network-supported action gaming.
Many important scientific and engineering problems involve a large number of variables. Equivalently, they are said to be high dimensional. Examples of such problems occur in quantum mechanics, molecular biology, and economics. For example, the Schrödinger equation for p particles has dimension d = 3p; systems with a large number of particles are of great interest in physics and chemistry. This problem can only be solved numerically. In decades of work, scientists have found that the problems get increasingly hard as p increases. The investigators believe this does not stem from a failure to create good numerical methods--the difficulty is intrinsic. The investigators believe solving the Schrödinger equation suffers the curse of dimensionality on a classical computer. That is, the time to solve this problem must grow exponentially with p. (A classical computer is any machine not based on the principles of quantum mechanics--all machines in use today are classical computers.) The investigators hope to show this problem is tractable on a quantum computer. Success in this research would mark the first instance of a proven exponential quantum speedup for an important non-artificial problem.
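The arithmetic behind the curse is simple: a grid-based classical method with m points per coordinate needs m^(3p) grid points in total. A quick sketch, with m = 10 chosen purely for illustration:

```python
def grid_points(p, m=10):
    """Grid points needed for p particles with m points per coordinate."""
    d = 3 * p       # each particle contributes three spatial coordinates
    return m ** d

# Even modest particle counts are far beyond any classical machine:
# p = 1  -> 10^3 grid points
# p = 10 -> 10^30 grid points
```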
the workings of the camera. The website also allows young photographers from around
the world to share their pictures. “The idea here was not to create a device that
was an inexpensive toy,” says Nayar. “The idea was to create something that
could be used as a platform for education across many societies.”
Visit the Bigshot website. Read more about the Bigshot project. Read more.
November 2-5, 2009, Newark Liberty International Airport Marriott
Newark (NYC Metropolitan Area), New Jersey, USA
Learning from Data using Matchings and Graphs (pdf version)
Tony Jebara
Columbia University
Many machine learning problems on data can naturally be formulated as problems on graphs. For example, dimensionality reduction and visualization are related to graph embedding. Given a sparse graph between n high-dimensional data nodes, how do we faithfully embed it in low dimension? We present an algorithm that improves dimensionality reduction by extending semidefinite embedding methods. But, given only a dataset of n samples, how do we construct a graph in the first place? The space to explore is daunting with 2^(n^2) graphs to choose from yet two interesting subfamilies are tractable: matchings and b-matchings. By placing distributions over matchings and using loopy belief propagation, we can efficiently and optimally infer maximum weight subgraphs. Matching not only has intriguing combinatorial properties but it also leads to improvements in graph reconstruction, graph embedding, graph transduction, and graph partitioning. We will show applications on text, network and image data. Time permitting, we will also show results on location data from millions of tracked mobile phone users which lets us discover patterns of human behavior, networks of places and networks of people. Read more.
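As a concrete toy version of the b-matching subfamily mentioned in the abstract, the sketch below finds a maximum-weight subgraph in which every node has exactly b incident edges, by brute force over edge subsets. This is only to illustrate the objective; the talk describes efficient inference via loopy belief propagation, not enumeration.

```python
from itertools import combinations

def b_matching(weights, b):
    """Brute-force max-weight subgraph with exactly b edges per node.
    weights: dict mapping node pairs (i, j) with i < j to edge weight.
    Feasible only for tiny graphs (2^(n^2) candidate subgraphs in general)."""
    edges = list(weights)
    nodes = {n for e in edges for n in e}
    best, best_w = None, float("-inf")
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            degree = {}
            for i, j in subset:
                degree[i] = degree.get(i, 0) + 1
                degree[j] = degree.get(j, 0) + 1
            # Keep only subgraphs where every node has exactly b edges.
            if all(degree.get(n, 0) == b for n in nodes):
                w = sum(weights[e] for e in subset)
                if w > best_w:
                    best, best_w = set(subset), w
    return best, best_w
```

With b = 1 this reduces to ordinary maximum-weight perfect matching; larger b yields the balanced graphs useful for the graph-construction step the abstract describes.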
Alumni Achievement Award, which recognizes an individual
for exceptional accomplishments that have brought honor to
the recipient and to Carnegie Mellon. He is being recognized
for his "pioneering research contributions and teaching in
the field of computer vision." Read more.
the premier conference in its field. The paper,
"Evaluating the Benefits of Augmented Reality for Task Localization in
Maintenance of an Armored Personnel Carrier Turret," was coauthored
by Steve Henderson and Prof. Steve Feiner. It presents the design,
implementation, and user testing of a prototype augmented reality
application to support military mechanics conducting routine
maintenance tasks inside an armored vehicle turret. The prototype
uses a tracked head-worn display to augment a mechanic's view with
text, labels, arrows, and animated sequences documenting tasks to
perform. A formal human subject experiment with military mechanics
showed that the augmented reality condition allowed them to locate
tasks more quickly than when using two baseline conditions (an untracked
head-worn display, and a stationary display representing an improved
version of existing electronic technical manuals).
checking is unquestionably crucial to improve software reliability,
but the checking coverage of most existing techniques is severely
hampered by where they are applied: a software product is typically
checked only at the site where it is developed, thus the number of
different states checked is throttled by those sites' resources (e.g.,
machines, testers/users, software/hardware configurations).
To address this fundamental problem, we will investigate mechanisms
that will enable software vendors to continue checking for bugs after
a product is deployed, thus checking a drastically more diverse set of
states. Our research contributions will include the investigation,
development, and deployment of: (1) a wide-area autonomic software
checking infrastructure to support continuous checking of deployed
software in a transparent, efficient, and scalable manner; (2) a
simple yet general and powerful checking interface to facilitate
creation of new checking techniques and combination of existing
techniques into more powerful means to find subtle bugs that are often
not found during conventional pre-deployment testing; (3) lightweight
isolation, checkpoint, migration, and deterministic replay mechanisms
that enable replication of application processes as checking launch
points, isolation of replicas from users, migration of replicas across
hosts, and replay of identified bugs without need for the original
execution environment; and (4) distributed computing mechanisms for
efficiently and scalably leveraging geographically dispersed idle
resources to determine where and when replicas should be executed to
improve the speed and coverage of software checking, thereby
converting available hardware cycles into improved software
reliability.
Prof. Carloni was also named a Senior Member of the Association for Computing Machinery (ACM) on July 21, 2009. According to the ACM website, "the Senior Member grade recognizes those ACM members with at least 10 years of professional experience and 5 years of continuous Professional Membership who have demonstrated performance that sets them apart from their peers."
posture of large enterprises. The project is intended to devise metrics and
measurement methods, and test and evaluate these in a real institution, to
evaluate how human users behave in a security context.
To develop computer security as a science and engineering discipline,
metrics need to be defined to evaluate the safety and security of
alternative system designs. Security policies are often specified by large
organizations but there are no direct means to evaluate how well these
policies are followed by human users. The proposed project explores
fundamental means of measuring the security posture of large enterprises.
Risk management and risk mitigation require measurement to assess
alternative outcomes in any decision process.
Financial institutions in particular require significant controls over the
handling of confidential financial information and employees must adhere to
these policies to protect assets, which are subject to continual adversarial
attack by thieves and fraudsters. Hence, financial institutions are the
primary focus of the measurement work. User actions that may violate
security policy are measured in a non-intrusive manner. The
measurement system uses specially crafted decoy documents and
email messages that signal when they have been opened or copied by a user in
violation of policy. The project will develop collaborations with financial
experts to devise risk models associated with users of information
technology within large enterprises. This line of work extends traditional
research in computer security by opening up a new area focused on the human
aspect of security.
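The decoy idea can be sketched as follows (the token scheme and class names are invented for illustration; the actual system is considerably more sophisticated, and a real beacon would trigger a remote fetch when the document is opened):

```python
import uuid

def make_decoy(title):
    """Create a decoy document body carrying a unique beacon token."""
    token = uuid.uuid4().hex
    body = f"{title}\nCONFIDENTIAL\nbeacon:{token}\n"
    return token, body

class BeaconMonitor:
    """Server-side registry: any sighting of a registered token signals
    that a decoy was opened or copied in violation of policy."""

    def __init__(self):
        self.registered = set()
        self.alerts = []

    def register(self, token):
        self.registered.add(token)

    def report(self, token):
        # Called when a beacon fires; unknown tokens are ignored.
        if token in self.registered:
            self.alerts.append(token)
            return True
        return False
```

Because legitimate users have no reason to open the decoys, every alert is a measurable policy violation, which is what makes the approach quantitative.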
To survive and flourish, people must interact with their environment in an organized fashion. To do so, they need to learn, imagine, and perform an assortment of transformations on and in the world. Primary among these are manipulation of objects and navigation in space. This project integrates research in computer science and cognitive science to develop and evaluate augmented reality tools to create effective dynamic explanations that enhance manipulation and navigation, in conjunction with identification and visualization. Augmented reality refers to user interfaces in which virtual material is integrated with and overlaid on the user's experience of the real world; for example, by using tracked head-worn and hand-held displays. Dynamic explanations are task-appropriate sequences of actions, presented interactively, with appropriate added information. The tools will be created in collaboration with subject matter experts for exploratory use in indoor and outdoor real world domains: navigating and identifying landmarks in a wooded park area, assembling a piece of furniture, and navigating and visualizing for planning the site of a new urban campus. Cognitive science research will determine the best ways to convey explanations and information to people. Computer science research will address the design and implementation of systems that embody the best candidate approaches for identifying objects and locations, specifying actions, and adding non-visible information. In situ experiments will be used to assess and refine the systems.
Manipulation, navigation, identification, and visualization are representative of important things that people do every day, ranging from fixing broken equipment to reaching a desired destination in an unfamiliar environment. The ways in which we perform these tasks could potentially be improved significantly through augmented reality systems designed using the principles to be developed by this project. Both the cognitive principles and the augmented reality tools will have broad applicability. The systems developed will inform the design of future systems that can aid the general public, for educational and recreational ends, as well as systems that can assist people with auditory, visual, or physical impairments.
State-of-the-art desktop search tools are valuable for searching various forms of individual user documents, interpreted broadly and including user files, email messages, web pages, and chat sessions. Unfortunately, focusing on individual, relatively static documents in isolation is often insufficient for important search scenarios, where the history and patterns of access to all information on a desktop, static or otherwise, are themselves of value and, in fact, critical to answer certain queries effectively. We propose to design, implement, and evaluate new mechanisms for enabling users to search all information that has been displayed on their desktops, preserving and exploiting the same personal context and display layout as in the original desktop computing experience. Our next-generation desktop search system will rely on a virtualization record-and-play architecture that enables both display and application execution on a desktop to be recorded (and, in fact, replayed) efficiently without user-perceived degradation of application performance. Our system will capture and index all activity on the desktop, and will exploit this aggregate desktop information to produce effective, display-centric search results.
The project will develop and experimentally evaluate novel techniques for conducting fine-grained tracking of information of interest (as defined by the system operator or, in the future, by end-users, in a flexible, context-sensitive manner) toward mapping the paths that such information takes through the enterprise and providing a means for enforcing information flow and access control policies. Prof. Keromytis' hypothesis is that it is possible to create efficient fine-grained information tracking and access control mechanisms that operate throughout an enterprise legacy computing infrastructure through appropriate use of hypervisors and distributed tag propagation protocols.
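The tag-propagation idea can be sketched at the level of single values (a drastic simplification of hypervisor-level, enterprise-wide tracking; the class and tag names here are invented):

```python
class Tagged:
    """Value carrying a set of information-flow tags; any operation on
    tagged data propagates the union of the operands' tags."""

    def __init__(self, value, tags=frozenset()):
        self.value = value
        self.tags = frozenset(tags)

    def __add__(self, other):
        # Derived data inherits the tags of everything it was computed from.
        if isinstance(other, Tagged):
            return Tagged(self.value + other.value, self.tags | other.tags)
        return Tagged(self.value + other, self.tags)

def check_egress(data, forbidden):
    """Policy gate: refuse to release data derived from forbidden sources."""
    return not (data.tags & forbidden)
```

The enforcement point is the gate: data that merely passed near sensitive sources is fine, while anything computationally derived from them carries their tags and is blocked.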
This research analytically and experimentally investigates defensive infrastructure addressing vulnerabilities in open cellular operating systems and telecommunications networks. We are exploring the requirements and design of such defenses in three coordinated efforts: a) extending and applying formal policy models for telecommunication systems, and providing tools for phone manufacturer, provider, developer, and end-user policy compliance verification; b) building a security-conscious distribution of the open-source Android operating system; and c) exploring the needs and designs of overload controls in telecommunications networks needed to absorb changes in mobile phone behavior, traffic models, and the diversity of communication end-points.
This research symbiotically supports educational goals at the constituent institutions by supporting graduate and undergraduate student research, and is integral to the security and network curricula. This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This project aims to develop and evaluate a new family of user-controllable policy learning techniques capable of leveraging user feedback and presenting users with incremental, user-understandable suggestions on how to improve their security or privacy policies. In contrast to traditional machine learning techniques, which are generally configured as “black boxes” that take over from the user, user-controllable policy learning aims to ensure that users continue to understand their policies and remain in control of policy changes. As a result, this family of policy learning techniques offers the prospect of empowering lay and expert users to more effectively configure a broad range of security and privacy policies.
The techniques to be developed in this project will be evaluated and refined in the context of two important domains, namely privacy policies in social networks and firewall policies. In the process, work to be conducted in this project is also expected to lead to a significantly deeper understanding of (1) the difficulties experienced by users as they try to specify and refine security and privacy policies and of (2) what it takes to overcome these challenges (e.g., better understanding of policy modifications that users can relate to, better understanding of how many policy modifications users can realistically be expected to handle, and how these issues relate to the expressiveness of underlying policy languages, modes of interactions with the user, and the topologies across which policies are deployed).
Textually-generated 3D scenes will have a profound, paradigm-shifting effect in human computer interaction, giving people unskilled in graphical design the ability to directly express intentions and constraints in natural language -- bypassing standard low-level direct-manipulation techniques. This research will open up the world of 3D scene creation to a much larger group of people and a much wider set of applications. In particular, the research will target middle-school age students who need to improve their communicative skills, including those whose first language is not English or who have learning difficulties: a field study in a New York after-school program will test whether use of the system can improve literacy skills. The technology also has the potential for interesting a more diverse population in computer science at an early age, as interactions with K-12 teachers have indicated.
According to the award web page, "Established in 1996, the presidential awards honor the best of Columbia's teachers for the influence they have on the development of their students and their part in maintaining the University's longstanding reputation for educational excellence."
David Elson is working on his dissertation in natural language understanding, advised by Prof. Kathleen McKeown. Read more.
Michael Rand (CC) was awarded the Computer Science Department Award for Scholastic Achievements as acknowledgment of his contributions to the Department of Computer Science and to the university as a whole.
Brian Smith (SEAS) garnered the Computer Science Department Scholarship Award, awarded to an undergraduate Computer Science degree candidate who demonstrated scholastic excellence through projects or class contributions.
Peter Tsonev (SEAS) was awarded the Computer Engineering Award of Excellence, for demonstrating scholastic excellence.
The Andrew P. Kosoresow Memorial Award for Excellence in Teaching and Service is awarded to students who demonstrated outstanding teaching and exemplary service. This year, it was given to Tristan Naumann (SEAS), Dokyun Lee (CC), Jae Woo Lee (GSAS), Paul Etienne Vouga (GSAS), and Oren Laadan (GSAS).
The Russell C. Mills Award for Excellence in Computer Science recognizes academic excellence in the area of Computer Science and went to Joshua Weinberg (GS) and Eliane Stampfer (CC).
The Theodore R. Bashkow Award for Excellence in Independent Projects is awarded to Computer Science seniors who have excelled in independent projects. This year, Adam Waksman (CC) and Kimberly Manis (SEAS) were recognized.
The Paul Charles Michelman Memorial Award recognizes PhD students in Computer Science who have performed exemplary service to the department, devoting time and effort beyond the call to further the department's goal, and went to Matei Ciocarlie (GSAS) and Chris Murphy (GSAS).
The Certificate of Distinction for Academic Excellence is given at graduation to Computer Science and Computer Engineering majors who have an overall cumulative GPA in the top 10% among graduating seniors in CS and CE:
Michael Rand (CC), Brian Smith (SEAS), Daniel Weiner (GS), Peter Tsonev (SEAS), Adam Waksman (CC), Eliane Stampfer (CC).
The Computer Science Service Award is awarded to PhD students who were selected to be in the top 10% in service contribution to the Department: Hila Becker, Matei Ciocarlie, Gabriella Cretu-Ciocarlie, Kevin Egan, David Elson, Jin Wei Gu, David Harmon, Bert Huang, Maritza Johnson, Gurunandan Krishnan, Chris Murphy, Kristen Parton, Paul Etienne Vouga, John Zhang and Hang Zhao.
Taking data from GPS-equipped taxis and other vehicles, cell phones and other devices, Jebara's Citysense can tell you, in real time, where the action is. Read more.
Sixty-seven researchers were honored on December 19 in a ceremony presided over by Dr. John H. Marburger III, Science Advisor to the President and Director of the White House Office of Science and Technology Policy.
"The Presidential Early Career Awards for Scientists and Engineers, established in 1996, honors the most promising researchers in the Nation within their fields. Nine federal departments and agencies annually nominate scientists and engineers who are at the start of their independent careers and whose work shows exceptional promise for leadership at the frontiers of scientific knowledge. Participating agencies award these talented scientists and engineers with up to five years of funding to further their research in support of critical government missions." Read more.
"Beating out more than 400 entrants from across the country, StackSafe was awarded first prize after a rigorous assessment by an online panel of over 300 venture capitalists, angel investors, and university judges." Read more.
Ian Vo wrote a paper titled "Quality Assurance of Software Applications Using the In Vivo Testing Approach", which has been accepted for publication at ICST 2009, the 2nd IEEE International Conference on Software Testing, Verification and Validation. According to its abstract, "software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in hopes of finding any remaining flaws." The authors present a testing methodology they call in vivo testing, in which tests are continuously executed in the deployment environment. They discuss the approach and the prototype testing framework for Java applications called Invite and provide the results of case studies that demonstrate Invite's effectiveness and efficiency. Invite found real bugs in OSCache, Apache JCS and Apache Tomcat, with about 5% overhead. The project was supervised by Prof. Kaiser.
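The in vivo idea can be sketched with a simple Python decorator (the real Invite framework targets Java and differs in design; the names and sampling interface below are hypothetical):

```python
import random

failures = []

def in_vivo(test, rate=0.05):
    """Decorator: run `test` against live arguments on a small fraction
    of real invocations, recording failures instead of raising them."""
    def wrap(func):
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if random.random() < rate:
                try:
                    test(*args, **kwargs)   # checked in the deployed state
                except AssertionError as err:
                    failures.append(err)    # report; never disturb the user
            return result
        return wrapper
    return wrap

# Hypothetical production function with an in vivo invariant check.
def non_negative(x):
    assert x >= 0, "negative input reached production"

@in_vivo(non_negative, rate=1.0)   # rate=1.0 only to make the demo deterministic
def double(x):
    return x * 2
```

Sampling only a fraction of invocations is what keeps the overhead low (about 5% in the paper's case studies) while still exercising states that never arise in the lab.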
The CRA honored a total of 22 female and 44 male students in this year's competition. Read more.
system design, or related research.
Charles Han is a doctoral student in the Columbia Computer Graphics Group, co-advised by Profs. Eitan Grinspun and Ravi Ramamoorthi. His research focuses on finding principled representations and efficient algorithms that operate well across a wide range of visual scales. There are many instances in graphics where one would like to render the same object at different scales: for example, an architect
designing a building may want to preview the entire structure at once or may want to zoom in on individual parts; characters and terrain in computer games may be seen at extremely close distances or as distant pixels on the horizon. Current techniques in computer graphics are generally tailored to perform well at a particular physical scale, and often do not translate well to coarser or finer scales.
In work presented at SIGGRAPH 2007, Han presented a solution to the long-standing problem of normal map filtering. By reinterpreting normal mapping in the frequency-domain as a convolution of geometry and BRDF, this work has enabled accurate multiscale rendering of normal maps at speeds orders of magnitude faster than previously possible. More recently, Han has developed a framework for the efficient example-based synthesis of very large textures, with features spanning a wide (or infinite) range of physical scales. He continues to extend this work to add further expressive power and intuitive user control.
Bergou's work builds on the ideas of Discrete Differential Geometry (DDG), whose goal is to identify the root from which the desirable properties of a continuous system stem and then to build discrete models using an appropriate discrete version of that root. This led to his work on discrete models for cloth and elastic rods. His work on artistic control of a physical system builds on constrained Lagrangian mechanics, in which constraints define the allowable states that a system may be in. Within the context of directing a physical simulation, this framework can be used to define constraints that allow for entirely physical motions for the system being simulated while still closely obeying the intent of the user controlling the
simulation.
Miklós is a Ph.D. candidate in the Columbia Computer Graphics Group, advised by Prof. Eitan Grinspun.
in Qubit Complexity".
tangible user interfaces for augmented reality" was coauthored by Steve Henderson and Steve Feiner. It presents a class of interaction techniques, called opportunistic controls, in which naturally occurring physical artifacts in a task domain are used to provide input to a user interface through simple vision-based processing. Tactile feedback from an opportunistic control can make possible eyes-free interaction. For example, a ridged surface can be used as a slider or a spinning washer as a rotary pot.
According to the IARPA mission statement, "The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk/high-payoff research that has the potential to provide our nation with an overwhelming intelligence advantage over future adversaries."
The classical complexity of many continuous problems is known due to information-theoretic arguments. This may be contrasted with discrete problems such as integer factorization, where one has to settle for conjectures about the complexity hierarchy. Among the issues the investigators will study are the following:
- For the foreseeable future the number of qubits will be a crucial computational resource. The investigators have shown that modifying the standard definition of quantum algorithms to permit randomized queries leads to an exponential improvement in the qubit complexity of path integration. The investigators propose to exploit the power of the randomized query setting. For example, are there exponential improvements in the query complexity for other important problems?
- A basic problem in physics and chemistry is to compute the ground state
energy of a system. The ground state energy is given by the smallest eigenvalue of the time-independent Schrödinger equation. If the number of particles in the system is p, the number of variables is d = 3p. In the worst case classical setting, the problem we study suffers the curse of dimensionality. The curse is broken in the quantum setting. The investigators want to determine if the randomized classical setting suffers the curse of dimensionality. If it does, a quantum computer enjoys exponential speedup for this problem. This would mark the first example of proven exponential quantum speedup for an important problem.
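Concretely (with units normalized; the precise setting studied by the investigators may differ), the ground-state energy is the smallest eigenvalue $E$ of the time-independent Schrödinger equation for a $p$-particle system:

```latex
% Time-independent Schrödinger equation in d = 3p variables (normalized units);
% the ground-state energy E_0 is the smallest eigenvalue E.
-\tfrac{1}{2}\,\Delta \Psi(x) + V(x)\,\Psi(x) = E\,\Psi(x),
\qquad x \in \mathbb{R}^{d}, \quad d = 3p .

% "Curse of dimensionality": in the worst-case classical setting the cost of
% computing an \varepsilon-approximation of E_0 grows exponentially in d,
\mathrm{cost}(\varepsilon, d) \;\sim\; \varepsilon^{-d},
% which is what a quantum algorithm is hoped (and, in this line of work,
% partly shown) to break.
```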
- The Schrödinger equation is fundamental to quantum physics and quantum chemistry. Solving this equation for quantum systems with a large number of variables would have a huge payoff for many applications. The investigators propose to study algorithms and initiate the study of the computational complexity of the Schrödinger equation in the worst case and randomized settings on a classical computer and in the quantum setting.
The Pe'er-Bussemaker Lab is using high-throughput genomics data to infer a universal protein-DNA recognition code. Shown are the positions of protein side-chains contacting a Watson-Crick base-pair in a variety of protein-DNA complexes. The data is the result of research efforts such as the Human Genome Project and revolutionary sequencing technologies that are capable of reading over 100 billion letters of DNA in just a few days. Such technologies include high-density microarrays, which measure and analyze the activity within a cell and are capable of quantifying the levels of more than a million unique RNAs in a single experiment, and multi-laser flow cytometry, which measures the abundance of multiple signaling molecules in over 100,000 individual cells in just a few minutes.
"Vast amounts of data are being produced at super-exponential rates; novel ground-breaking technologies are being invented so much faster than the rate at which scientists can understand and leverage them to gain biological insights," adds Pe'er. "It's like buying a whole pie, eating a tiny piece and throwing the rest away. Most of the data is only looked at on the very, very surface. And most of the data is only scarcely being used, leaving the rest untouched."
Professors Pe'er and Bussemaker say their new lab reflects Columbia's support for computational biology, a commitment Pe'er says can be seen in the Center for Computational Biology and Bioinformatics (C2B2), established in 2006 at the Medical campus.
"Columbia has seen a very dramatic elevation in status in systems and computational biology with the initiation of the C2B2, which is fast becoming one of the best computational centers around," said Pe'er. "The activity between the uptown medical campus and here on Morningside makes Columbia one of the top five computational biology centers in the world."
(from the University press release of July 25, 2008) Read more.
Henning Schulzrinne. It describes and evaluates mechanisms so that VoIP servers can continue to operate at full capacity even under severe overload. Such overload may occur during natural disasters or mass call-in events, such as voting for TV game show contestants. Without these measures, servers are likely to suffer from congestion collapse. Read more.
While the current reality is that the jury is still out on how the processor-of-the-future will look, one clear certainty is that it will be parallel. All major commercial processor vendors are now committed to increasing the number of processors (i.e., cores) that fit on a single chip. However, existing synchronous design methodologies face major obstacles in power consumption, performance, and scalability. This proposal focuses on a particular existing easy-to-program and easy-to-teach multi-core architecture. It then identifies the interconnection network, connecting multiple cores and memories, as the critical bottleneck to achieving lower overall power consumption. The target is to substantially improve the power, robustness and scalability of the system by designing and fabricating a high-speed asynchronous communication mesh.
The resulting parallel architecture will be globally-asynchronous locally-synchronous (i.e., GALS-style): it gracefully accommodates synchronous cores and memories operating at arbitrary, unrelated clock rates, while providing robustness to timing variability and support for plug-and-play (i.e., scalable) system design. Unlike most prior GALS architectures, this one will have significant performance and power requirements in a complex pipelined topology. In addition, computer-aided design (i.e., CAD) tools will be developed to support the design of this new mesh, as well as simulation, timing verification and performance analysis tools to be applied to the entire parallel architecture. This
work will be performed in collaboration with a separate NSF CPA proposal under Prof. Ken Stevens (University of Utah). The two proposals will be linked together into a larger framework: the Utah group will coordinate to provide and refine their commercial-based physical design tool development and support, while the Columbia/Maryland group will provide a new substantial test case for their asynchronous tool applications.
The work is expected to have broad impact. First, while it is targeted to one parallel architecture, several other architectures will benefit from this work, since the interconnection network can be applied to them as well. Second, the work is expected to demonstrate the benefits and role of
asynchronous design for complex high-performance systems. Finally, the outcome of the work could mark a step in the paradigm shift from serial to parallel that the field is now undergoing; the resulting first-of-its-kind partly-asynchronous high-end massively-parallel on-chip computer could push the level of scalability beyond what is currently possible and have a broad impact in supporting parallel applications in much of computer science and engineering.
The Academic Alliance Seed Fund was established in 2007 to provide members of NCWIT’s Academic Alliance with startup funds (up to $15,000 per project) to develop and implement projects for recruiting and retaining women in computing and information technology. Funding for the Seed Fund is provided by Microsoft Research.
The NCWIT Academic Alliance includes more than 75 computer science and IT departments across the country — including research universities, community colleges, women’s colleges, and minority-serving institutions — dedicated to gender equity and institutional change in higher education computing and information technology.
The honorary doctorate (Dr. rer. nat. h.c.) cited Prof. Wozniakowski's foundational contributions to numerical methods, particularly the deep insights arising from the new discipline of information-based complexity and his work on the "curse of dimensionality," which helps determine which high-dimensional problems are solvable.
The Friedrich-Schiller University in Jena was founded in 1588.
chemical and biochemical processes and advanced functional materials, informatics and telecommunications, land, sea and air transportation, agrobiotechnology and food engineering,
environmentally friendly technologies for solid fuels and alternative energy sources, as well as
biomedical informatics, biomedical engineering, biomolecular medicine and pharmacogenetics." Read more.
This research project aims to harness the recent extraordinary advances in nanoscale silicon photonic technologies for developing optical interconnection networks that address the critical bandwidth and power challenges of future CMP-based systems. The insertion of photonic interconnection networks essentially changes the power scaling rules: once a photonic path is established, the data are transmitted end-to-end without the need for repeating, regeneration or buffering. This means that the energy for generating and receiving the data is only expended once per communication transaction anywhere across the computing system. The PIs will investigate the complete cohesive design of an on-chip optical interconnection network that employs nanoscale CMOS photonic devices and enables seamless off-chip communications to other CMP computing nodes and to external memory. System-wide optical interconnection network architectures will be specifically studied in the context of stream processing models of computation. Read more.
Professor Ross has been selected as one of the two recipients of this year's Columbia Engineering School Alumni Association (CESAA) Distinguished Faculty Teaching Awards. Mr. Lee presented the award to Professor Ross at Class Day ceremonies on Monday, May 19.
"The Columbia Engineering School Alumni Association created this award more than a decade ago to recognize the exceptional commitment of members of the SEAS faculty to undergraduate education," said Mr. Lee. "This year, I am pleased to present these awards to two senior faculty members, a testament to their continuing faithfulness to the central mission of teaching undergraduates."
The awardees were selected by a Committee of the Alumni Association chaired by Eric Schon '68, with representation from the student body, and based on nominations from the students themselves. The Board of Managers of the Columbia Engineering School Alumni Association voted unanimously to approve the selection.
Students enthusiastically wrote that courses taught by these professors were the best they had taken at Columbia. The qualities both professors share, and the ones most frequently mentioned by students, are their enthusiasm for the subject matter, caring attitude, approachability, responsiveness to student concerns, and ability to make complex subject matter understandable. Read more.
Ryan Overbeck's research, advised by Prof. Ravi Ramamoorthi, focuses on real-time ray tracing. Ray tracing is the core of many physically-based algorithms for rendering 3D scenes with global illumination (shadows, reflections, refractions, indirect illumination, and other effects), but until recently it has not been fast enough for interactive rendering on commodity computers. He develops algorithms to ray trace 3D scenes with high-quality shadows, reflections, and refractions, bringing a higher degree of realism to interactive content.
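The innermost loop of any ray tracer is a ray-primitive intersection test. As background, here is a textbook ray-sphere intersection sketch (this is generic illustrative code, not taken from Overbeck's system; the function name and interface are hypothetical):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest intersection of the ray
    origin + t*direction with the sphere, or None if the ray misses.
    `direction` is assumed to be a unit vector."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t:
    # a quadratic t^2 + b t + c = 0 (a == 1 for a unit direction).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None     # intersections behind the origin don't count
```

A full renderer repeats this test across millions of rays per frame, which is why acceleration structures and SIMD traversal (the focus of this line of research) matter so much.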
Prof. Allen will be investigating semantically searchable dynamic 3D databases, developing
new methods to take an unstructured set of 3D models and organize them into a database that can be intelligently and efficiently queried. The database will be searchable, tagged and dynamic, and will be able to support queries based on whole object and partial object geometries.
In the project titled "Safe Browsing Through Web-based Application Communities", Profs. Keromytis and Stolfo will investigate the use of collaborative software monitoring, anomaly detection, and software self-healing to enable groups of users to browse safely. The project seeks to counter the increasingly virulent class of web-borne malware by exchanging information among users about detected attacks and countermeasures when browsing unknown websites or even specific pages.
In the project "Privacy and Search: Having it Both Ways in Web Services", Prof. Keromytis will investigate techniques for addressing the privacy and confidentiality concerns of businesses and individuals while allowing for the use of hosted, web-based applications such as Google Docs and Gmail. Specifically, the project will combine data confidentiality mechanisms with Private Information Matching and Retrieval protocols, to develop schemes that offer different tradeoffs between stored-data confidentiality/privacy and legitimate business and user needs.
Rocco Servedio was awarded a Google Research Award to develop improved martingale ranking algorithms. Martingale ranking is an extension of martingale boosting, a provably noise-tolerant boosting algorithm from learning theory which was jointly developed by Rocco and Phil Long, a researcher at Google. Rocco will work to design adaptive and noise-tolerant martingale rankers that perform well 'at the top of the list' of items being ranked, which is where accurate rankings are most important.
these attacks actually occurring, and the uncertainties surrounding assumptions about these risks.
contributions to the engineering literature," and to the "pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education."
The National Academy of Engineering (NAE) has elected a total of 65 new members and nine foreign associates spanning all disciplines of engineering and applied sciences.
Members are elected to the NAE by their peers (current NAE members). All members have distinguished themselves in technical positions, as university faculty, and as leaders in government and business organizations. They serve as "advisers to the nation on science, engineering, and medicine," and perform an unparalleled public service by addressing the scientific and technical aspects of some of society’s
most pressing problems. The NAE was established in 1964 as an independent, nonprofit organization and is one of four United States National Academies. Read more.
"Julia Hirschberg, professor in computer science, is active within the area of speech communications at Columbia University, USA. She is among the leading researchers in this field, having performed research in both industry and academia. In her work at AT&T, she contributed to the development of several voice-controlled telephone services. Julia Hirschberg has performed leading research on a variety of topics related to human-to-human and human-to-machine interaction. Specifically, within the area of prosody, she studied how people use means other than speech to communicate focus, turn-taking and emotions in a dialogue. She has also studied how this knowledge can be applied to various speech-based services. Julia Hirschberg has been president of the International Speech Communication Association (ISCA) since 2005. As such she is responsible for the yearly conference Interspeech, which attracts more than 1000 attendees each year."
current needs of the consumers, such that the system is truly autonomic. The project proposes to modularize the ASM into separate components, and then to design the various components using both novel control-theoretic and scheduling analyses. Read more.
According to the citation, "Prof. Yechiam Yemini is that rare individual who embodies excellence in research, innovation and entrepreneurship. He was already a successful entrepreneur before he joined CATT. He then started System Management Arts or SMARTS, a company with over 150 employees that developed network management solutions. This company was acquired by EMC Corporation. He is now working on yet another start up called Arootz. In all his ventures he brings technological innovation and an unerring vision of the market."
Henning Schulzrinne was cited as a pioneer in the development of Voice over IP technology, which is supplanting circuit-switched voice, the basis of phone service since the days of Alexander Graham Bell. He is a co-inventor of the Session Initiation Protocol (SIP) and the Real-time Transport Protocol (RTP), which form the basis of VoIP, as well as additional standards for multimedia transport over the Internet.
In addition, Verizon Communication was honored for a joint project conducted with the lab of Prof. Schulzrinne.
The Center for Advanced Technology in Telecommunications and Distributed Information Systems (CATT) is a research and education group at Polytechnic University, long-recognized as one of the best engineering schools in the country. CATT researchers are leaders in the fields of electrical engineering and computer science. The Center also draws on the expertise of key researchers at Columbia University. Read more.
With the NIH award funding, Pe’er and her team will seek to understand the general underlying principles governing how cells process signals, how molecular networks compute, and how genetic variations alter cellular functioning. Specifically, she wants to understand how changes in DNA codes modify a cell's response to its internal and external cues, which then leads to changes throughout the entire body. These changes, or malfunctions, can cause anything from autoimmune disease to cancer. (Columbia News) Read more.
The first is the exploration and refinement of a novel, highly efficient machine learning technique for data-rich domains, which selects small and fast subsets of multimedia features that are most indicative of a given high-level concept. Speed-ups of three orders of magnitude are possible.
The second is the development of new methods and tools for refining user concepts and domain ontologies for video retrieval, based on statistical analyses of their collocation and temporal behavior. The goals are the determination of video synonyms and hypernyms, the verification of temporal shot patterns such as repetition and alternation, and the exploitation of a newly recognized power-law decay of the recurrence of content.
The third is the demonstration of a customizable user interface, the first of its kind, to navigate a library of videos of unedited and relatively unstructured student presentations, using visual, speech, facial, auditory, textual, and other features. These features are shown to be more accurately and quickly derived using the results of the first investigation, and more compactly and saliently presented using the results of the second.
The first main goal of the project is to obtain new cryptographic results based on the presumed hardness of various problems in computational learning theory. Work along these lines will include constructing and applying cryptographic primitives such as public-key cryptosystems and pseudorandom generators from learning problems that are widely believed to be hard, and exploring the average-case learnability of well-studied concept classes such as decision trees and DNF formulas. The second main goal of the project is to obtain new learning results via cryptography. The PIs will work to develop privacy-preserving learning algorithms; to establish computational hardness of learning various Boolean function classes using tools from cryptography; to obtain computational separations between pairs of well-studied learning models; and to explore the minimal hardness assumptions required to prove hardness of learning.
including multiple bounces of light (global illumination), material changes and spatially-varying local lighting. Computer graphics is also increasingly used to prototype or design illumination and material
properties, for industries as diverse as animation, entertainment, automobile design, and architecture. A lighting designer on a movie set wants to pre-visualize the scene lit by the final illumination and with
objects having their final material properties, be they paint, velvet or glass. An architect wants to visualize the reflectance properties of building materials in their natural setting. In many applications, much
greater realism and faithfulness can be obtained if the lighting or material designer could interactively specify these properties. The project will develop the theoretical foundations and next generation
practical algorithms for high quality real-time rendering and lighting/material design.
reliability both during and after maintenance while imposing little management overhead. The contributions stem primarily from a virtualization architecture that decouples application instances from operating system instances, enabling either to be independently updated. The results, disseminated via web download, will improve availability of legacy applications, with no source code access,
modification, recompilation, relinking or application-specific semantic knowledge, and perform efficiently and securely on commodity operating systems and hardware.
SIGMETRICS promotes research in performance analysis techniques as well as the advanced and innovative use of known methods and tools. It sponsors conferences, such as its own annual conference (SIGMETRICS), publishes a newsletter (Performance Evaluation Review), and operates a network bulletin board and web site.
This project will investigate a new communication paradigm, named PacketSpread, which makes feasible the use of capability-like mechanisms on the current Internet, without requiring architectural modifications to networks or hosts. The high-level hypothesis of the research is that practical network capability schemes can be constructed through the use of end-point traffic-redirection mechanisms that use a spread-spectrum-like communication paradigm enabled by an overlay network. To test this hypothesis, the project will prototype and experimentally validate the resistance of such a scheme against attacks launched by realistic adversaries, while minimizing the impact of the approach to end-to-end communication latency and throughput.
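The abstract does not spell out PacketSpread's mechanism, but the spread-spectrum analogy suggests the following general idea: endpoints sharing a secret derive a pseudorandom "hopping" sequence of overlay relay nodes, so an adversary who cannot predict the sequence cannot concentrate attack traffic on the path. The sketch below is a hypothetical illustration of that idea only (function and node names are made up, and the real scheme surely differs):

```python
import hmac
import hashlib

def hop_sequence(secret: bytes, flow_id: bytes, overlay_nodes, length: int):
    """Derive a pseudorandom sequence of overlay relay nodes from a shared
    secret, in the spirit of spread-spectrum frequency hopping.  Only parties
    who know `secret` can predict (and thus target or follow) the path."""
    seq = []
    for i in range(length):
        # Keyed hash of (flow id, hop index) selects the next relay node.
        digest = hmac.new(secret, flow_id + i.to_bytes(4, "big"),
                          hashlib.sha256).digest()
        idx = int.from_bytes(digest[:4], "big") % len(overlay_nodes)
        seq.append(overlay_nodes[idx])
    return seq
```

Both endpoints compute the same sequence independently; an attacker flooding any fixed relay only disrupts a small fraction of hops.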
The results of this research will enable a better understanding of how network-capability schemes can be deployed and used to provide robust and secure communications both under normal operation and in times of crisis. Improvements in the security and reliability of large-scale systems on which society, business, government, and individuals depend will have a positive impact on society.
W. Bradford Paley.
W. Bradford Paley, an Adjunct Associate Professor in the Department of Computer Science, worked with two collaborators to produce an illustration that seems itself to have become news. Working with Kevin Boyack (of Sandia National Labs) and Dick Klavans (of SciTech Strategies, Inc.), he developed a way of visualizing the relationships among 776 different scientific paradigms--labelling each node with ten unique descriptive phrases--on a small two-foot square print. The image (originally four feet square) was part of an "Illuminated Diagram," a visual display technique Mr. Paley first presented
at IEEE InfoVis 2002. It was part of an exhibit called "Places and Spaces: Mapping Science" installed in the New York Public Library Science Industry and Business Library, then the New York Hall of Science; it is now travelling worldwide.
The journal Nature noticed the image in that exhibit and opened its annual "Brilliant Images" image gallery of 2006 with a very reduced version. It was picked up by both SEED and Discover magazines and has been mentioned in dozens of news sites and blogs, including Slashdot, Reddit, Complexity Digest, Education Futures, and StumbleUpon.
Mr. Paley's site (didi.com/brad) describes his new label layout algorithm, as well as the rest of the project.
algorithms on their inputs, without revealing any additional information. For example, consider a client holding data that he would like classified by a server (e.g., applying a face detection algorithm). However, the client does not want to reveal any information about his data to the server, and the server does not want to reveal any information to the client beyond the classification result. While general cryptographic techniques for secure multiparty computation may be applied, these often entail a performance overhead that is prohibitive for the real-world applications addressed here. Prof. Malkin and her team will work to design efficient privacy-preserving protocols for common information classifiers including density estimation using Parzen windows, K-NN classification, neural networks, and support vector machines. The team will also design privacy-preserving protocols for other useful vision and learning problems, such as oblivious matching protocols, allowing two parties to find out whether they are holding an image of the same object without disclosing any additional information about their images.
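For context, here is the plain (non-private) computation behind the first classifier mentioned, a Parzen-window classifier: it is exactly this comparison of kernel density estimates that a privacy-preserving protocol would have to compute obliviously, without either party seeing the other's data. (The code is a 1-D textbook sketch with hypothetical names, not part of the project itself.)

```python
import math

def parzen_density(x, samples, h):
    """Parzen-window (kernel) density estimate at point x, using a
    Gaussian kernel with bandwidth h, for 1-D samples."""
    n = len(samples)
    return sum(math.exp(-((x - s) ** 2) / (2.0 * h * h)) for s in samples) / (
        n * h * math.sqrt(2.0 * math.pi))

def parzen_classify(x, class_samples, h):
    """Assign x to the class whose estimated density at x is largest.
    A privacy-preserving protocol would perform this comparison without
    revealing x (client input) or the samples (server input)."""
    return max(class_samples,
               key=lambda c: parzen_density(x, class_samples[c], h))
```

In the secure version, only the winning class label is revealed; the densities themselves stay hidden.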
Details about the methodology can be found at
http://chronicle.com/stats/productivity/page.php?primary=4&secondary=34&bycat=Go Read more.
The images that objects produce are heavily influenced by the interplay between natural lighting conditions, complex materials with non-diffuse reflectance, and shadows cast by and on the object.
Modeling these effects, which are omnipresent in natural environments, is critical for image understanding and machine perception. For example, to deploy face recognition systems in airport security or in the outdoors, we must account for uncontrolled illumination, developing lighting-insensitive recognition methods. Recognizing and tracking vehicles requires understanding the bright highlights produced by metallic car bodies. Robotic helpers that provide assistance to the infirm must interpret highlights and shadows from household objects. Unmanned automated vehicles surveying battle scenarios can also benefit from improved image interpretation algorithms, allowing them to understand and build 3D models of their environs.
Therefore, compact mathematical models of illumination and reflectance are essential, to develop robust vision and image interpretation systems for uncontrolled conditions. We will pursue two main avenues. First, we analyze the frequency-domain properties of lighting and reflectance, extending our previous results to specular objects, describing a theory of frequency domain identities, analogous to
classical spatial domain results like reflectance ratios. Second, we analyze a general light transport operator that by definition includes arbitrary reflectance and shadowing. We develop a locally low-dimensional representation, even for high-frequency highlights and intricate shadows. This enables a new level of accuracy in appearance-based lighting-insensitive recognition and other applications.
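The frequency-domain analysis referred to above builds on a standard background result (due to Ramamoorthi and Hanrahan, among others): for a convex object with a radially symmetric BRDF under distant lighting, reflection acts as a convolution, which becomes a simple product in the spherical-harmonic basis. The project's proposed identities extend beyond this, but the basic relation is:

```latex
% Reflection as convolution: reflected-light coefficients B_{lm} are the
% lighting coefficients L_{lm} filtered by the BRDF coefficients \hat{\rho}_l.
B_{lm} \;=\; \hat{\rho}_l \, L_{lm}.
% For a Lambertian surface \hat{\rho}_l decays rapidly with l, so reflected
% light is well approximated by the nine harmonics with l <= 2 -- one source
% of the locally low-dimensional appearance structure exploited here.
```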
pose more likely hypotheses, and give artists better control over the process of computer animation. Physical simulations have already achieved remarkable goals, enabling the prediction of systems that are too costly or dangerous to study empirically; however, current simulation technologies are built for precision, not intuition.
The investigators will develop simulation techniques that address the vision of a rapid, interactive design cycle, with a specific focus on the physical simulation of thin shells--flexible surfaces such as air bags, biological membranes, and textiles, with pervasive applications in automotive design, biomedical device optimization, and feature film production. The work will focus on qualitatively-accurate, but not precise, simulation. The research will yield novel methods that quickly but coarsely resolve the physics, skipping over irrelevant data to capture only the coarse variables that drive design decisions. The project will train young scientists with a deep understanding of computation, mathematics, and application domain areas; despite being in high demand, this combination of skills remains rare.
A technical goal of this project is to develop a principled, methodical approach to coarsening an existing discrete geometric model of a mechanical system, using adaptive, multiresolution
decompositions. Whereas adaptivity is commonly studied in the context of error estimators for mesh refinement, interactivity suggests a focus on how best to give up precision in a simulation. Therefore,
this research will (i) build on early work in the field of discrete differential geometry to formulate coarse geometric representations of physical systems that preserve key geometric and physical invariants,
(ii) investigate the convergence, resolution- and meshing-dependence of discrete differential operators, and (iii) contribute toward a software platform for interactive design space exploration with
concrete applications in automotive, biomedical, and feature-film engineering.
The PIs propose to apply these techniques to the problem of detecting new web-borne malware (e.g., malicious attachments or active content) through a collaborative method that utilizes (a) the users' actions (to drive the browsers and "explore" new pages, in a manner similar to but more comprehensive and less error-prone than other proposed work that uses automated web-crawlers to scan suspicious web sites), (b) new detectors that are either already running on the users' systems (e.g., a host-based anomaly detector) or are easily deployable over the web, (c) a browser extension that communicates with Google to send information about locally found anomalies and to receive information about the threat-level ("maliciousness") of content downloaded or about-to-be downloaded from the web, and (d) Google itself, as the broker of said information. In addition, Google or a third party can act as the "validator" of alerts, using techniques the PIs have developed for protection of servers, albeit applied to the desktop/browser environment.
Steady advances in such enabling technologies as semiconductor circuits, wireless networking, and microelectromechanical systems (MEMS) are making possible the design of complex distributed (networked) embedded systems that could benefit several application areas such as public
infrastructure, industrial automation, automotive industry, and consumer electronics. However, the heterogeneous and distributed nature of many such systems requires design teams with a composite skill set spanning automatic control, communication networks, and hardware/software
computational systems. Computer-aided design, a traditionally interdisciplinary research area, will be instrumental in making these systems feasible and in enhancing the productivity of the design process.
The grant will allow the PI to develop new modeling techniques, optimization algorithms, communication protocols and interface processes that, combined, will yield a novel "design automation flow for distributed embedded-control applications" such as automotive "X-by-wire" systems and integrated buildings. The goal is to enable the integrated design and validation of these systems while assisting the typically multidisciplinary engineering teams that are building them. Intermediate contributions include methods for the robust deployment of real-time embedded software on distributed architectures and for the synthesis of a distributed implementation of an embedded control application where performance requirements are met while the usage of communication and computational resources is well-balanced. The education plan is motivated by the belief that the academic curricula for both computer and electrical engineers need to be updated in order to overcome the artificial and historical boundaries among those disciplines in electrical engineering and computer science that lie at the core of embedded computing. Read more.
for Future Single-Chip Parallel Processors". The goal is to design a high-throughput, flexible and low-power digital fabric for future desktop parallel processors, e.g., those with 64+ processors
per chip. The fabric will be designed using high-speed asynchronous pipelines, handling the communication between synchronous processor cores and distributed memory. The asynchrony of the fabric will facilitate lower power, handling of heterogeneous interfaces, and high access rates (with fine-grained pipelining). This work is in collaboration with the parallel processing and CAD groups at the University of Maryland, including Prof. Uzi Vishkin.
The Department of Computer Science is seeking applicants for two
tenure-track positions at either the junior or senior level, one each in
computer engineering and software systems. Applicants should have a Ph.D. in a relevant field, and have demonstrated excellence in research and
the potential for leadership in the field. Senior applicants should
also have demonstrated excellence in teaching and continued
strong leadership in research.
Our department of 32 tenure-track faculty and 2 lecturers attracts excellent
Ph.D. students, virtually all of whom are fully supported by research
grants. The department has close ties to the nearby research laboratories
of AT&T, IBM (T.J. Watson), Lucent, NEC, Siemens, Telcordia Technologies
and Verizon, as well as to a number of major companies including financial
companies of Wall Street. Columbia University is one of the leading research
universities in the United States, and New York City is one of the
cultural, financial, and communications capitals of the
world. Columbia's tree-lined campus is located in Morningside Heights
on the Upper West Side.
Applicants should submit summaries of research and teaching interests,
CV, email address, and the names and email addresses of at least three
references by filing an online application at
www.cs.columbia.edu/recruit. Review of applications will begin on January 1, 2007.
Columbia University is an Equal Opportunity/Affirmative Action
Employer. We encourage applications from women and minorities.
specific focus on natural incorporation of existing simulation, solver, and domain-specific codes.
Prof. Eitan Grinspun (Columbia) brings expertise in adaptive multiresolution methods for physical simulation, working as part of a team led by NYU. Prof. Vijay Karamcheti (NYU) offers expertise in application-aware mechanisms for parallel computing, and Prof. Denis Zorin (NYU) provides expertise in interactive geometric modeling and simulation. Finally, Prof. Steve Parker (Utah) brings his expertise in the development of the SCIRun and SCIRun2 platforms for scientific computing.
and Model-Based Reranking".
Strong Detection to reveal bounds on the kinds of errors that these classes of routing protocols can detect. Hence, the research will identify complexity classes of routing protocols in terms of their self-monitoring abilities.
cross-cultural scalability, faster than real-time performance, and the exploitation of the temporal evolutionary aspects of video contents. It will build a retrieval workbench with video mining, topic tracking, and cross-linking capabilities, along with other video understanding services.
“sharing” resources across the consumers they support. However, research that explores how to share resources generally derives point solutions, where different resource/consumer configurations require
separately-designed sharing mechanisms. For instance, a scheduler often implements a single policy (e.g., FCFS, PS, FBPS, SPRT) optimized for a particular load setting, and cannot easily
be switched to another policy when the situation changes.
This project seeks to develop and analyze Adaptive Sharing Mechanisms (ASMs) in which the mechanism used to share resources adapts dynamically to both the set of available resources and the current
needs of the consumers, such that the system is truly autonomic. We initiate our study with a modularization of the ASM into separate components, and then study the various components using novel control-theoretic and scheduling analyses. The study concludes with prototyping and testing ASMs within a server-farm environment.
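To make the idea of an adaptive sharing mechanism concrete, here is a minimal sketch (not the project's actual design) of one ASM component: a scheduler whose sharing policy is swapped at run time as the load changes. The two policies, the queue-length trigger, and the job representation are all assumptions made for this illustration.

```python
import heapq
from collections import deque

class FCFS:
    """First-come-first-served: serve jobs in arrival order."""
    def __init__(self):
        self.q = deque()
    def add(self, job):                     # job = (arrival_time, size)
        self.q.append(job)
    def next_job(self):
        return self.q.popleft() if self.q else None

class SRPT:
    """Shortest-remaining-processing-time: serve the smallest job first."""
    def __init__(self):
        self.q = []
    def add(self, job):
        heapq.heappush(self.q, (job[1], job))   # keyed on job size
    def next_job(self):
        return heapq.heappop(self.q)[1] if self.q else None

class AdaptiveScheduler:
    """Switch policy when the queue length crosses a threshold
    (a stand-in for the control-theoretic trigger an ASM would use)."""
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.policy = FCFS()
        self.pending = []

    def add(self, job):
        self.pending.append(job)
        self.policy.add(job)

    def next_job(self):
        # Under heavy load, favor small jobs to keep mean response time low.
        want = SRPT if len(self.pending) > self.threshold else FCFS
        if not isinstance(self.policy, want):
            rebuilt = want()                 # migrate pending jobs to new policy
            for j in self.pending:
                rebuilt.add(j)
            self.policy = rebuilt
        job = self.policy.next_job()
        if job is not None:
            self.pending.remove(job)
        return job
```

The point of the modularization is visible even in this toy: the policies and the switching trigger are separate components, so either can be replaced without touching the other.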
The grant extends over three years and is part of the NSF Computer Systems Research (CSR) program. Only approximately 10% of all grant applications were funded.
powerful and unexpected attacks become possible. The talk took place in December 2005.
identification of users who exhibit potential insider threats.
The award initiates research in the IDS lab that has also been proposed to other agencies for joint support with two companies, Symantec and Secure Decisions, Inc.
The project starts in June 2006 and lasts for 6 months.
The grant was awarded in January of 2006.
The Disruptive Technology Office (DTO, formerly ARDA) awarded the grant, while AFRL provides grant administration. The grant duration is 18 months.
explore the limits of what is possible to achieve, for several types of strong and realistic attacks, including chosen ciphertext attack, key tampering attacks, and key exposure attacks.
known secure servers and exposing common weaknesses and pitfalls. In the process, the project will also develop and release a toolkit for probing and testing the security of these servers.
The paper puts text-searching and crawling on a sound foundation. Text is ubiquitous and, not surprisingly, many important applications
rely on textual data for a variety of tasks. As a notable example,
information extraction applications derive structured relations from
unstructured text; as another example, focused crawlers explore the
web to locate pages about specific topics. Execution plans for
text-centric tasks follow two general paradigms for processing a text
database: either they scan, or "crawl," the text database or,
alternatively, they exploit search engine indexes and retrieve the
documents of interest via carefully crafted queries constructed in
task-specific ways. The choice between crawl- and query-based
execution plans can have a substantial impact on both execution time
and output "completeness" (e.g., in terms of recall). Nevertheless,
this choice is typically ad-hoc and based on heuristics or plain
intuition. This paper presents fundamental building blocks to make the
choice of execution plans for text-centric tasks in an informed,
cost-based way. Towards this goal, the paper shows how to analyze
query- and crawl-based plans in terms of both execution time and
output completeness. The paper adapts results from random-graph theory
and statistics to develop a rigorous cost model for the execution
plans. This cost model reflects the fact that the performance of the
plans depends on fundamental task-specific properties of the
underlying text databases. The paper identifies these properties and
presents efficient techniques for estimating the associated parameters
of the cost model. Overall, the paper's approach helps predict the
most appropriate execution plans for a task, resulting in significant
efficiency and output completeness benefits.
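The paper's core idea can be illustrated with a toy cost-based chooser: estimate time and recall for a crawl-based and a query-based plan, then pick the cheapest plan that meets a recall target. The linear cost formulas and all parameter values below are invented for this sketch; the paper's actual estimates come from random-graph theory and statistics over task-specific database properties.

```python
def crawl_plan_cost(db_size, time_per_doc):
    """Scanning the whole database: full recall, time linear in |D|."""
    return {"time": db_size * time_per_doc, "recall": 1.0}

def query_plan_cost(num_queries, docs_per_query, useful_fraction,
                    db_size, time_per_query, time_per_doc):
    """Querying a search index retrieves only a subset of the documents."""
    retrieved = min(num_queries * docs_per_query, db_size)
    return {
        "time": num_queries * time_per_query + retrieved * time_per_doc,
        "recall": useful_fraction * retrieved / db_size,
    }

def choose_plan(crawl, query, min_recall):
    """Cheapest plan whose estimated recall meets the target."""
    candidates = [(name, c)
                  for name, c in [("crawl", crawl), ("query", query)]
                  if c["recall"] >= min_recall]
    return min(candidates, key=lambda nc: nc[1]["time"])[0]
```

Even this caricature shows why the choice matters: a query-based plan can be orders of magnitude faster, but only a cost model that also predicts recall reveals when crawling is the right answer.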
of carefully-engineered links are expected to replace traditional on-chip communication schemes by providing higher bandwidth with lower power dissipation. Further, on-chip networks offer the opportunity to mitigate the complexity of system-on-chip design by facilitating the assembly of
multiple processing cores through the emergence of standards for communication protocols and network access points. This project will investigate the design of low-power scalable on-chip networks for multi-core systems-on-chip by combining new low-latency, low-energy current-mode signaling techniques with the design of latency-insensitive protocols extended to support fault-tolerant mechanisms.
The project is funded by the NSF Foundations of Computing Processes and Artifacts (CPA) Cluster. In 2005 the NSF CPA cluster received 532 proposals and funded approximately 10% of them.
The NSF CPA cluster supports research and education projects to advance formalisms and methodologies pertaining to the artifacts and processes for building computing and communication systems. Areas of interest include: topics in software engineering such as software design methodologies, tools for software testing, analysis, synthesis, and verification; semantics, design, and implementation of programming languages; software systems and tools for reliable and high performance computing; computer architectures including memory and I/O subsystems,
micro-architectural techniques, and application-specific architectures; system-on-a-chip; performance metrics and evaluation tools; VLSI electronic design and pertinent analysis, synthesis and simulation
algorithms; architecture and design for mixed media or future media (e.g., MEMs and nanotechnology); computer graphics and visualization techniques. Read more.
The SHIM model of computation provides deterministic concurrency with reliable communication, simplifying validation because behavior is reproducible. Based on asynchronous concurrent processes that communicate through rendezvous channels, SHIM can handle control, multi- and variable-rate dataflow, and data-dependent decisions. The components consist of a high-level language based on SHIM, an efficient simulator for SHIM, a software synthesis system that generates C, a formal analysis tool for SHIM, and libraries for the SHIM environment.
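The rendezvous channels at the heart of the model can be sketched in a few lines (this is an illustrative toy, not the SHIM language or runtime): a send blocks until a matching receive occurs, so for a fixed process structure the communication order, and hence the observable behavior, is reproducible.

```python
import threading

class Rendezvous:
    """Synchronous (unbuffered) channel: send and recv meet in a rendezvous."""
    def __init__(self):
        self._cv = threading.Condition()
        self._slot = None
        self._full = False
        self._taken = False

    def send(self, value):
        with self._cv:
            while self._full:            # at most one pending send at a time
                self._cv.wait()
            self._slot, self._full, self._taken = value, True, False
            self._cv.notify_all()
            while not self._taken:       # block until the receiver takes it
                self._cv.wait()

    def recv(self):
        with self._cv:
            while not self._full:        # block until a sender arrives
                self._cv.wait()
            value = self._slot
            self._full, self._taken = False, True
            self._cv.notify_all()
            return value
```

Because neither side proceeds until the other has synchronized, a producer and consumer connected by such a channel always exchange values in the same order, which is exactly the reproducibility that makes validation tractable.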
second language (L2) learners rarely learn. Topic shifts, contrastive
focus, and even simple question/statement distinctions, cannot be
recognized or produced in many languages without an understanding of
their prosody. However, 'translating' between the prosody of one
language and that of another is a little-studied phenomenon. This
research addresses the 'prosody translation' problem for Mandarin
Chinese and English L2 learners by identifying correspondences between
prosodic phenomena in each language that convey similar meanings. The
work is based on comparisons of L1 and L2 prosodic phenomena and the
meanings they convey. Computational models of prosodic variation
suitable for representing these phenomena in each language are
constructed from data collected in the laboratory, with results tested
on L1 and L2 subjects. The models are tested in an interactive
tutoring system which takes an adaptive, self-paced approach to
prosody tutoring. This system modifies training and testing examples
automatically by incremental enhancement of distinctive prosodic
features in response to student performance. The success of the
system is evaluated via longitudinal studies of L2 students of both
languages to see whether the new techniques improve students' ability
to recognize and produce L2 prosodic variation. By providing a method
and computational support for prosody tutoring, this work will not
only enable students to attain more native-like fluency but it will
provide a model for training students in other pragmatic language
phenomena --- beyond learning the words and the syntax of a new
language.
requests access, it provides its pre-computed egress behavior model to
another node who may grant it access to some service. The receiver
compares the requestor's egress model to its own ingress model to
determine whether the new device conforms to its expected
behavior. Access rights are thus granted or denied based upon the
level of agreement between the two models, and the level of risk the
recipient is willing to manage. The second use of the exchanged models
is to validate active communication after access has been granted.
As a result, MANET nodes will have greater confidence that a new node is not malicious; if an already-admitted node starts misbehaving, other MANET nodes will quickly detect and evict it.
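The admission decision described above can be sketched as follows. Representing a behavior model as a normalized histogram over traffic features, and scoring agreement with an L1 distance against a risk tolerance, are assumptions made for this sketch, not the project's actual model format.

```python
def normalize(counts):
    """Turn raw feature counts into a probability histogram."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def model_distance(egress, ingress):
    """L1 distance between two normalized histograms (0 = identical)."""
    keys = set(egress) | set(ingress)
    return sum(abs(egress.get(k, 0.0) - ingress.get(k, 0.0)) for k in keys)

def grant_access(egress_counts, ingress_counts, risk_tolerance=0.5):
    """Grant access when the requestor's egress model agrees with the
    receiver's ingress model within the receiver's risk tolerance."""
    d = model_distance(normalize(egress_counts), normalize(ingress_counts))
    return d <= risk_tolerance
```

The same comparison can be re-run after admission against live traffic, which is the second use of the exchanged models: validating active communication and evicting a node whose behavior drifts from its declared model.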
and permutation within statistical learning. These research tools have
applications in national security as a way to identify and match people
from text and multimedia and discover links between them. More
specifically, this proposal addresses the following key application areas:
- Matching authors: permutational clustering methods and permutationally
invariant kernels are used to compute the likelihood the same person wrote
a given publication or text.
- Matching text and multimedia documents: permutational algorithms and
permutationally invariant kernels to perform text, image and word
matchings of descriptions of people to known individuals in a database.
- Matching social networks and graphs: social network matching tools from
permutational algorithms which find a subnetwork in a larger network that
has a desired topology.
As stated by the IBM Ph.D. Fellowship Program, "Award Recipients are selected based on their overall potential for research excellence, the degree to which their technical interests align with those of IBM, and their progress to-date, as evidenced by publications and endorsements from their faculty advisor and department head."
the 19th Large Installation System Administration Conference (LISA
2005) held last week in San Diego, CA for their paper titled:
"Reducing Downtime Due to System Maintenance and Upgrades". Read more.
for Internet multimedia." Read more.
foundations of computer science is awarded every 1.5 years by the ACM
Special Interest Group on Algorithms and Computing Theory (SIGACT) and
the IEEE Technical Committee on the Mathematical Foundations of
Computing. The Prize includes a $5000 award and a $1000 travel stipend
(for travel to the award ceremony) paid by ACM SIGACT and IEEE
TCMFC. The Prize is awarded for major research accomplishments and
contributions to the foundations of computer science over an extended
period of time.
The Prize is named in honor and recognition of the extraordinary
accomplishments of Prof. Donald Knuth, Emeritus at Stanford
University. Prof. Knuth is best known for his ongoing multivolume
series, The Art of Computer Programming, which played a critical role
in establishing and defining Computer Science as a rigorous,
intellectual discipline. Prof. Knuth has also made fundamental
contributions to the subfields of analysis of algorithms, compilers,
string matching, term rewriting systems, literate programming, and
typography. His TeX and MF systems are widely accepted as standards
for electronic typesetting. Prof. Knuth's work is distinguished by its
integration of theoretical analyses and practical real-world
concerns. In his work, theory and practice are not separate components
of Computer Science, but rather he shows them to be inexorably linked
branches of the same whole. Read more.
$425K NIH Exploratory/Developmental Research Grant for Insertable
Imaging and Effector Platforms for Surgery. The grant is to construct
small, mobile, multi-function platforms that can be placed inside a
body cavity to perform robotic minimal access surgery. The robot will
be based upon an existing prototype device developed at the Columbia
Robotics Lab. Read more.
Note that the computer engineering position has a starting date of January 2007.
Applicants should submit summaries of research and teaching interests, CV, email address, and the names and email addresses of at least three references by filing an online application at
www.cs.columbia.edu/recruit. Review of applications will begin on December 1, 2005.
Columbia University is an Equal Opportunity/Affirmative Action Employer. We encourage applications from women and minorities. Read more.
Details about MAGNet can be found at http://magnet.c2b2.columbia.edu/index.html Read more.
"The most pertinent is a project undertaken by Dr. Tal Malkin and her team in the Computer Science Department at Columbia University, in partnership with researchers from IBM, related to the cryptographic security of Internet servers. Cryptography is an essential component of modern electronic commerce. With the explosion of transactions being conducted over the Internet, ensuring the security of data transfer is critically important. Considerable amounts of money are being exchanged over the Internet, whether through shopping sites (e.g. Amazon, Buy.com), auction sites (eBay), online banking (Citibank, Chase), stock trading (Schwab), or even the government (irs.gov).
Dr. Malkin and her team made a systematic study of the cryptographic strength of thousands of "secure" servers on the Internet. Servers are computers that “host” the main functions of the Internet, such as Web sites (Web servers), email (mail servers), and other functions. Communication with these sites is secured by a protocol known as the Secure Sockets Layer (SSL) or its variant, Transport Layer Security (TLS). These protocols provide authentication, privacy, and integrity. A key component of the security of SSL/TLS is the cryptographic strength of the underlying algorithms used by the protocol. Dr. Malkin’s study probed 25,000 secure Web servers to determine if SSL was being properly configured and whether it was employed in the most secure way. Improper configuration can lead to attacks on servers, stolen data, identity theft, break-ins, etc. Dr. Malkin’s project is the most extensive study of actually existing server security on the Internet.
The team’s findings, relevant to these hearings, included some serious weaknesses in how Web servers, including eCommerce servers employed by financial service companies, are currently being configured.
The most prevalent is that an old, outdated version of SSL, known as SSL 2.0, is still being supported on over 93% of these “secure” servers. SSL 2.0 has many flaws, including a vulnerability to “man in the middle” attacks, which are commonly used for identity theft. While most of these servers also employ a more advanced version of SSL, the incoming communication can choose to use Version 2.0 and thus breach the defenses of the server.
Another serious problem is the use of 512 bit “public keys” (1,024 bits are recommended), which can be broken readily, thus compromising all of the data on the server using this key length. Over 5% of the “secure” servers are using this key length.
These security shortcomings are quite serious, and pose risks both to the consumers and the providers in the financial services industry. Financial server security can be increased both by popularizing the correct configurations and, possibly, by greater government oversight in this area."
Geometry".
Physical phenomena such as the crushing of a car or the evolution of a
storm system are governed by effects ranging from very small to very
large scales. Accurately predicting these by resolving the finest
scales in a computer simulation is prohibitively expensive. The
investigators study how fine scale information impacts coarse scale
behavior and vice versa. In effect "summarizing" these relationships
allows the researchers to model coarse scale effects accurately and
efficiently without the need to explicitly resolve the finest scales
in a computation. A key to this study lies in the careful transfer of
structures present in the mathematical models of these phenomena
(which in essence have infinite resolution) to the computational realm
with its finite resolution and finite computational resources. The
methods being developed will allow rapid assessment of overall effects
with the ability "to drill down" computationally where additional
detail is required.
Physical systems are typically described by a set of continuous
equations using tools from geometric mechanics and differential
geometry to analyze and capture their properties. For purposes of
computation one must derive discrete (in space and time)
representations of the underlying equations. Theories which are
discrete from the start (rather than discretized after the fact), with
key geometric properties built in, can more readily yield robust
numerical simulations which are true to the underlying continuous
systems: they exactly preserve invariants of the continuous systems in
the discrete computational realm. So far these methods have not
accounted for effects across scales. Yet both physics and numerics
require such multiresolution strategies. This research project is
developing a multiresolution theory for discrete variational methods
and discrete differential geometry to apply it to applications in
thin-shell and fluid modeling. Its innovative aspect lies in tools to
conserve symmetries across computational scales.
participants. The main goal of the Association is "to promote Speech Communication Science and Technology, both in the industrial and Academic areas", covering all the aspects of Speech Communication (Acoustics, Phonetics, Phonology, Linguistics, Natural Language Processing, Artificial Intelligence, Cognitive Science, Signal Processing, Pattern Recognition, etc.). Read more.
"For her doctoral dissertation at Columbia University, computer scientist Regina Barzilay led the development of Newsblaster, which does what no computer program could do before: recognize stories from different news services as being about the same basic subject, and then paraphrase elements from all of the stories to create a summary." Read more.
computer science and telecommunications. Projects include cybersecurity research, biometrics, IT to enhance disaster management, and building certifiably dependable systems. For more information, visit www.cstb.org.
Prof. Traub's appointment marks his return to the CSTB, as he was also its founding chair. "In 1986, along with Marjory Blumenthal, Joe's vision and dedication established the model that has made CSTB one of the strongest boards at the Academies. At this particular point in CSTB's history, I could not think of another person better suited to assume the chair and to guide CSTB to new heights," said Bill Wulf, President of the National Academy of Engineering. Read more.
Dora the Explorer will appear from 12 - 1:00, followed by a Harry
Potter Magician from 1:00 - 2:00.
and two smaller academic efforts. The two goals of the project are
to build a large-scale asynchronous demonstration chip (for Boeing) and design an
asynchronous CAD tool for use in future asynchronous designs.
Prof. Nowick and his former PhD student Montek Singh (currently an assistant
professor at UNC), will play a key role in transferring
their high-speed asynchronous pipeline style, MOUSETRAP, to the
Philips commercial asynchronous tool flow, and providing optimizations
for several of the other CAD tools.
of collaborative tools for student groups. In addition, the
introduction of lecture videos into the online curriculum has drawn
attention to the disparity in the network resources used by students.
The paper presents an e-Learning architecture and adaptation model called
AI^2TV (Adaptive Internet Interactive Team Video), which
allows virtual students, possibly some or all disadvantaged in network
resources, to collaboratively view a video in synchrony. AI^2TV upholds the invariant that each student will view semantically equivalent content at all times. Video player actions, like play, pause and stop, can be initiated by any student and their results are seen by all the other students. These features
allow group members to review a lecture video in tandem, facilitating
the learning process. Experimental trials show that AI^2TV can successfully synchronize video for distributed students while, at the same time, optimizing the video quality, given fluctuating bandwidth, by adaptively adjusting the quality level for each student.
The grant was awarded to a team led by SRI and consisting of researchers at Columbia University, University of Massachusetts Amherst, University of California San Diego, University of California Berkeley, University of Washington, Technical University Aachen (Germany), and Systran.
The research to be conducted at the Center for Computational Learning
Systems (CCLS) will center on building natural language processing tools for
Arabic and its dialects, concentrating on leveraging linguistic knowledge
when few resources (annotated corpora or even unannotated corpora) are
available. Mona Diab, Nizar Habash, and Owen Rambow will build on work
accomlished under an existing NSF grant. In addition, Nizar Habash will
continue his work on generation-heavy hybrid machine translation.
collaborative, cross-domain security technologies to detect and prevent
the exploitation of network-based computer systems. The core concept is to
deploy a number of strategically placed sensors across a number of
participating networks that collaborate by sharing information in
real-time to defend the entire network and each of its members. A novel
content-based anomaly detector, PAYL, identifies likely new exploits
targeting vulnerable systems. The Worminator project has developed a new
generation of scalable, collaborative, cross-domain security systems that
exchange alert information including profiled behaviors of attacks and
privacy-preserving anomalous content alerts to detect severe zero-day
security events. The work is a joint collaboration with CounterStorm, a
New York City based company spun out from the DHS and DARPA-sponsored
Columbia IDS lab, headed by Prof. Sal Stolfo. Read more.
Emerging Models and Technologies for Computation (EMT). The EMT cluster
seeks to advance the fundamental capabilities of computer and information
sciences and engineering by capitalizing on advances and insights from
areas such as biological systems, quantum phenomena, nanoscale science and
engineering, and other novel computing concepts. The award will support
Rocco's research on connections between quantum computation and
computational learning theory. Rocco's research in this area will focus
on the fundamental abilities and limitations of quantum learning
algorithms from an information-theoretic perspective, as well as on
developing computationally efficient quantum learning algorithms.
Suhit Gupta, Prof. Gail Kaiser and Prof. Sal Stolfo, all from the Department of Computer Science at Columbia University, won the Best Student Poster Award at WWW 2005 in Japan. Read more.
University on April 15, 2005 to bring together researchers in database and
information retrieval. More than 120 researchers and students from
academic and research institutions across the greater New York area
attended this inaugural workshop, making it a very successful event.
The program consisted of three technical keynote lectures from Alon Halevy
(University of Washington), Craig Nevill-Manning (Google Inc.) and Michael
Stonebraker (MIT), and a poster session for graduate students to present
their latest research. The event was sponsored by IBM research, with additional
funding from Columbia's Graduate Student Advisory Council.
clockless) circuits and systems. The symposium
typically has 100-120 attendees, and over 60 submitted papers.
This year, the symposium will be hosted at Columbia
University in Davis Auditorium, with Prof. Nowick as general
co-chair. Invited speakers include Turing award-winner
Ivan Sutherland with Robert Drost (Sun Microsystems Lab),
Bob Colwell (the former Intel manager of several Pentium
projects), and a tutorial on high-speed clocking with
Prof. Ken Shepard (EE Department) and Phil Restle (IBM
T.J. Watson). Read more.
A proposal from the Columbia Robotics Lab was chosen as one of ten
winners for the CanestaVision 3D sensing design competition. Columbia
Ph.D. student Matei Ciocarlie and Research Scientist Andrew Miller
headed the proposal which focuses on developing an "Eye-in-Hand" range
sensor for robotic grasping.
Each of the winners will receive a $7,500 development kit that
consists of a CanestaVision 3-D sensor chip, a USB interface, and
application program interface (API) software. These hardware and
software development kits will be used to actually build the
applications, and enter them in the "implementation" phase of the
contest which boasts a $10,000 first prize for best use of the technology.
Stay tuned for the Phase II winners in June! Read more.
world. People are captivated by the effects of natural lighting and
shading patterns, such as the soft shadows from the leaves of a tree
in skylight, the glints of sunlight in ocean waves, or the shiny
reflections from a velvet cushion. In computer graphics, it is
important to be able to accurately reproduce these appearance effects,
to create realistic images for applications like video games, vehicle
and flight simulators, or architectural design of interior spaces.
However, it is still very difficult to accurately model complex
illumination and reflection effects in interactive applications like
games, in image-based rendering applications like e-commerce, or in
computer vision applications like face recognition. In the past, the
above applications have been addressed separately, by devising
particular algorithms for specific problems. In this project, the
research focuses on the mathematical and computational fundamentals of
visual appearance, seeking to understand the intrinsic computational
structure of illumination, reflection and shadowing, and develop a
unified approach to many problems in graphics and vision.
The main thrust of the research will be to develop appropriate
mathematical representations for appearance, along with computational
algorithms and signal-processing techniques such as Clebsch-Gordan
expansions, wavelet methods with triple product expansions, and radial
basis functions. A major advantage of this approach is that the same
representations, analysis and computation tools can then be applied to
many application domains, such as real-time and image-based rendering,
Monte Carlo sampling and lighting-insensitive recognition. This
research philosophy builds on the investigator's dissertation, where
he developed a signal-processing framework for reflection, leading to
new frequency domain algorithms for both forward and inverse rendering.
permutation into learning algorithms and statistical data
representations. This includes statistical modeling of images,
text and networks while matching their subcomponents (pixels,
words or nodes). Permutation algorithms are combined with
learning algorithms to more accurately model realistic data.
Experiments focus on face and identity recognition problems.
awareness of four pillars of Trustworthy Computing: security, privacy,
reliability, and business/societal integrity. The project will
develop a new course on Trustworthy Computing, integrate relevant material
into COMS W3157, COMS W4156, and other courses as appropriate, and develop a
student programming competition specifically focused on trustworthy computing.
The overarching aim is to create a multi-year, integrated curriculum on
Trustworthy Computing.
The winners will receive their award at an upcoming CRA conference.
The CRA noted: "This year's nominees were a very impressive group. A number of them
were commended for making significant contributions to more than one
research project, several were authors or coauthors on multiple papers,
others had made presentations at major conferences, and some had
produced software artifacts that were in widespread use. Many of our
nominees had been involved in successful summer research or internship
programs, many had been teaching assistants, tutors, or mentors, and a
number had significant involvement in community volunteer efforts. It is
quite an honor to be selected as one of the top members of this group." Read more.
Ricardo Baratto, Shaya Potter, Gong Su, and Jason Nieh received the
Best Student Paper Award at the 10th International Conference on Mobile
Computing and Networking (MobiCom 2004) held this week in Philadelphia,
PA for their paper titled: "MobiDesk: Mobile Virtual Desktop
Computing". The PC Chairs noted that the paper was also the highest rated
paper of the conference as per the original review scores.
MobiCom is the top conference in the field of mobile computing and
networking with a typical acceptance rate of less than 10%. This year
the conference received 326 submissions, of which 26 papers were
accepted. 65% of the accepted papers had a student as first author.
such as news reporting, intelligence information gathering, and
criminal investigation. However, with the advent of the digital age,
the trustworthiness of pictures can no longer be taken for granted.
This project will develop a completely blind and passive system for
detecting digital photograph tampering. We take an innovative
approach integrating techniques from signal-processing and computer
graphics. The signal processing method involves effective use of
higher-order signal statistics to identify tampering artifacts at the
signal level, while the computer graphics approach includes novel
techniques for 3D geometry estimation, illumination field recovery and
relighting, and scene reconstruction to detect inconsistencies at the
scene level like shadows, shading and geometry.
The three-year project was funded at $740,000 as part of the NSF CyberTrust program.
There are many types of sensor networks, covering different
geographical areas, using devices with a variety of energy
constraints, and implementing an assortment of applications. One
driving application is the reporting of conditions within a region
where the environment abruptly changes due to an anomalous event, such
as an earthquake, terrorist attack, flood, or fire. During and
immediately following these events, sensor networks can provide
scientists, rescue workers, and even victims with crucial information
such as exit routes, danger spots, and areas that demand additional
rescue and recovery resources. This will facilitate and expedite
recovery procedures and identify the source of the problem.
This proposal focuses specifically on sensor systems that are to be
designed to efficiently deliver information during and immediately
following an event that triggers an abrupt change. The novelty
of this proposal is its focus on sensor networks that must deal with a
sudden impulse of data. The impulse will move the sensor network
almost instantaneously from a state with a light load to a state with
an overloading body of data to report. This data needs to be
delivered through the sensor network quickly to a relatively small
number of sink points that attach to the regular
communication infrastructure. The flow of data out of the network has
similarities to the flow of people out of a large arena after a
sporting event completes: this large impulse of data that is
suddenly on the move must be funneled out through what is typically a
small number of collection sink points.
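The "arena exit" effect can be made concrete with a toy drain model (illustrative only; real sensor networks add multi-hop routing, contention, and loss):

```python
from collections import deque

def simulate_impulse(burst_size, num_sinks, sink_rate):
    """Toy model of the arena-exit effect: a sudden burst of readings
    must funnel out through a small number of sink points.
    Returns the number of time steps until the network is drained."""
    queue = deque(range(burst_size))   # impulse: all data arrives at once
    steps = 0
    while queue:
        steps += 1
        # each sink drains sink_rate items per time step
        for _ in range(num_sinks * sink_rate):
            if queue:
                queue.popleft()
    return steps
```

Even this crude model captures the key design lever: with the burst size fixed, evacuation time scales inversely with the number of sink points and their drain rate.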
The project was funded for $750,000 over three years.
The Secure Remote Computing Services (SRCS) project will develop
critical information technology (IT) infrastructure. SRCS will move
all application logic and data from insecure end-user devices, which
attackers can easily corrupt, steal and destroy, to autonomic server
farms in physically secure, remote data centers that can rapidly adapt
to computing demands especially in times of crisis. Users can then
access their computing state from anywhere, anytime, using simple,
stateless Internet-enabled devices. SRCS builds on the hypothesis
that a combination of lightweight process migration, remote display
technology, overlay-based security and trust-management access control
mechanisms, driven by an autonomic management utility, can result in a
significant improvement in overall system reliability and security.
The results of this proposed effort will enable SRCS implementations
to provide a myriad of benefits, including persistence and continuity
of business logic, minimizing the cost of localized computing
failures, robust protection against attacks, and transparent user
mobility with global computing access. SRCS in time of crisis
specifically addresses a major concern of national and homeland
security. The substantially lowered total cost of ownership of
applications running on SRCS is anticipated to dramatically reduce the
gap between IT haves and have-nots.
The proposal was funded at $1,200,000 over three years.
You may have noticed some changes in the undergraduate curriculum
for Computer Science majors, as published in the SEAS bulletin.
This year is a transition year, as the CS department is phasing in
the new curriculum, so please bear with us.
How does this affect you now?
Please read this message to find out!
Note that the changes will affect ALL COMPUTER SCIENCE MAJORS, MINORS
and CONCENTRATORS, in all schools, not just SEAS. The bulletins for
Columbia College (CC), General Studies (GS) and Barnard will not
reflect the changes until 2005-06, so please refer to the Computer
Science department web pages for the most up-to-date information.
The new sequence of programming courses is as follows:
- CS-I (COMS W1004): Introduction to Programming
- (for computer science and other science and engineering majors who
have little or no programming experience.) This course introduces
basic computer science concepts underlying modern information
technology along with algorithmic problem-solving techniques using
Java. This course (or AP Computer Science) becomes a prerequisite for
COMS W1007 starting in Spring 2005.
- CS-II (COMS W1007): Introduction to Computer Science
- (for students who have programmed before and/or taken AP Computer
Science in high school). This course is taught in Java and covers
computer science concepts and intermediate programming skills.
- CS-III (COMS W3157): Tools and Techniques for Advanced Programming.
Pre-requisite: COMS W1007. This course covers C, C++, Internet
programming skills, and Unix utilities.
- CS-IV (COMS W3137): Data Structures and Algorithms.
- Pre-requisite: COMS W3157.
Pre- or co-requisite: COMS W3203 (Discrete Math).
Introduction to classic data structures and algorithms.
Taught in C/C++ (starting in Spring 2005).
This semester (Fall 2004) will be the last semester that Data
Structures (3137) is taught in Java. Starting in Spring 2005, it will
be taught in C/C++. For this reason, Advanced Programming (3157) is
now a pre-requisite for Data Structures.
Due to errors in scheduling, there unfortunately has been a time conflict
between Discrete Math (3203) and Advanced Programming (3157).
If you are currently enrolled in Discrete Math (3203) but have not
already taken COMS W3157, it is advised that you take COMS W3157 this term.
To work around the time conflict, we have added a second section of
3157, which meets on Monday and Wednesday mornings. (Note that the
Wednesday session is a lab, which will appear on the registrar's web
site on Tuesday of next week.)
If this second section of 3157 also conflicts with your schedule, the
department recommends that you drop 3203 for this term, pick up
section 1 of 3157, and take 3203 in the Spring.
Also note that if you took Introduction to Computer Science (1007) last
year, you have the option of taking Data Structures (3137) this term
in Java or taking Advanced Programming (3157) now and then taking
Data Structures in C/C++ in the Spring.
For questions, please contact
Prof. Elizabeth Sklar (sklar@cs.columbia.edu),
Prof. Alfred Aho (aho@cs.columbia.edu), or
Simon Bird (birds@cs.columbia.edu).
your classmates, for hearing about the latest research and activities
of CS alums, and for catching up on news. The events are open to all friends of the Department, including students and alumni, current and former staff members, current and former faculty and research colleagues. Read more.
field of Computer Vision. This year the conference received
873 submissions, of which 59 papers were accepted as
oral presentations and 200 papers were accepted as posters.
She plans to expand the traditional cryptographic foundations so as to
withstand attacks by stronger, more realistic adversaries. In
particular, we will study security in a complex Internet-like
environment with multiple protocol executions, and will address
security against attackers who can obtain or tamper with the secret
keys.
The IBM Faculty Award is highly competitive: in 2002 IBM granted about 50 such awards across the mathematics and computer science disciplines.
Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made "important contributions to engineering theory and practice, including significant contributions to the literature of engineering theory and practice," and those who have demonstrated accomplishment in "the pioneering of new fields of engineering, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education." Read more.
effective and efficient algorithms for well-defined computational learning
problems. The two main goals are:
* To develop algorithms which can efficiently learn rich classes of
Boolean functions in well-studied models of computational learning.
Anticipated research directions here include learning DNF formulas,
learning various classes of Boolean circuits, and learning in the presence
of irrelevant information.
* To develop and analyze new well-motivated models for computational
learning, and to design efficient learning algorithms for these new
models. Anticipated research directions here include developing
average-case learning algorithms, developing a theory of learning from
nonmalicious random examples, and studying the role of quantum computation
in learning theory.
An important aspect of the proposed research methodology is to explore and
exploit connections between learning problems and complexity-theoretic
structural questions about Boolean functions.
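The flavor of efficient algorithms for learning Boolean function classes can be seen in the textbook learner for monotone conjunctions, a much simpler class than the DNF formulas and Boolean circuits targeted here (sketch for context only):

```python
def learn_conjunction(positive_examples, n):
    """Classic PAC-style learner for monotone conjunctions over n bits:
    start with the conjunction of all n variables, then delete every
    variable that is 0 in some positive example. The surviving variables
    form the hypothesis conjunction."""
    hypothesis = set(range(n))
    for x in positive_examples:
        hypothesis -= {i for i in hypothesis if x[i] == 0}
    return hypothesis  # indices of variables kept in the conjunction

# Target x0 AND x2: every positive example has bits 0 and 2 set,
# so the learner retains exactly {0, 2}.
```

The algorithm runs in time linear in the number of examples, which is the kind of efficiency guarantee the proposal seeks for far richer classes.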
WASHINGTON, D.C. - February 3, 2004 - Internet2(R) today announced that its
Presence and Integrated Communications (PIC) Working Group successfully
completed an experimental communications trial during the advanced
networking Joint Techs Workshop in Hawaii last week. The trial
demonstrated SIP-based (Session Initiation Protocol) voice, video, and
instant messaging over wireless fidelity (WiFi), and SIP voice conferencing
- all in the context of rich presence derived from WiFi location service and
enterprise calendaring.
"The rich presence efforts at Internet2 point the way towards
next-generation communication services, reaching far beyond the limited
presence and phone systems in use today," said Henning Schulzrinne,
professor in the Departments of Computer Science and Electrical Engineering
at Columbia University. "Beyond the old goal of reachable anywhere,
anytime, rich presence gives control back to users, so that communications
becomes planned and desired instead of disruptive and haphazard."
Participants downloaded and installed one of several integrated
communications clients onto their laptops allowing them to initiate voice,
instant messaging, and video calls to other participants - using the
receiver's email address as a single, converged electronic identity.
With the inclusion of rich presence services, participants were able to see
not only which of their buddies were online or offline, but also, for each
buddy, a current location, activity, and expected call quality. As
participants used the meeting's wireless LAN infrastructure and moved from
one meeting room to another, their locations were tracked by WiFi location
technology from HP. "The open-source SIP Express Router (SER) provided a
solid base for this demo," said Jiri Kuthan, member of the Internet2 PIC
Working Group and director of engineering at iptel.org. "We were able to
extend SER to perform as a SIP presence agent serving rich location,
calendar, and expected call quality presence to clients."
"Location services can add enormous value to integrated communications
applications and can provide life-saving location information to emergency
responders," said Ben Teitelbaum, Internet2 program manager for voice and
integrated communications. "Internet2 is working to ensure that these
technologies are designed and deployed to protect users' privacy and allow
users to control and filter what information about them is published."
Participants were also able to experience placing SIP voice calls to any
user at a SIP.edu-enabled institution (http://voip.internet2.edu/SIP.edu/)
and were able to eavesdrop on meeting sessions by calling special "room
buddies."
"The result of this experiment, as well as the results of future
experiments, is a critical means of helping to determine what presence and
integrated communications means to the end user," said Jamey Hicks, member
of the Internet2 PIC Working Group and principal member of the technical
staff, HP Labs. "Our goal is to develop an improved mode of communication
with a focus on location-based services using 802.11 - for people constantly
on the go and requiring constant contact, such as healthcare providers or
those in the business community."
The individuals who contributed to the success of this experiment are from
the following Internet2 member institutions (in alphabetical order):
+ Columbia University
+ Ford Motor Company
+ HP
+ University of Hawaii
+ University of Pennsylvania
+ Wave Three Software
+ Yale University
# # #
About the Internet2 Presence and Integrated Communications Working Group
The Presence and Integrated Communications (PIC) working group will foster
the deployment of network-based communication technologies through
demonstrations, tutorials, and initiatives in collaboration with both the
private sector and open-source initiatives. This growing area will have an
effect on nearly every individual within higher education and also have the
potential to be a significant driver for network design, security, and
middleware. For more information, visit: http://pic.internet2.edu.
About Columbia University's IRT Laboratory
The Internet Real-Time Lab (IRT) in the Department of Computer Science at
Columbia University conducts research in the areas of:
+ Internet telephony;
+ Streaming Internet media;
+ Internet quality of service;
+ Network measurements and reliability;
+ Service location;
+ Ad-hoc wireless networks;
+ Scalable content distribution; and
+ Ubiquitous and context-aware computing and communication.
About HP Labs Cambridge
HP Labs Cambridge (HPLC) is the primary advanced research facility for HP on
the East Coast. For more information on HP Labs, please visit
http://www.hpl.hp.com.
About iptel.org
Based in Berlin, Germany, iptel.org is a leading innovation organization in
SIP technology. iptel.org is a consultant to vendors and network operators
and is known for having created a unique open-source SIP server with premium
service in flexibility and high performance. iptel.org's server, SIP
Express Router, has been powering public VoIP services of numerous providers
around the world. For more information, visit http://www.iptel.org/.
About Internet2(R)
Led by more than 200 U.S. universities, working with industry and
government, Internet2 develops and deploys advanced network applications and
technologies for research and higher education, accelerating the creation of
tomorrow's Internet. Internet2 recreates the partnerships among academia,
industry, and government that helped foster today's Internet in its infancy.
For more information about Internet2, visit: http://www.internet2.edu/. Read more.
Prof. Angelos D. Keromytis focuses on computer security, cryptography, and networking; Prof. Vishal Misra works on communication networks; and Prof. Elizabeth Sklar's interests lie in human and machine learning.
Prof. Misra holds a joint appointment with Electrical Engineering.
2004, which is awarded to one person in the physical sciences and
engineering once every two years. More information is available at sigmaxi.org. Read more.
The journal also publishes a list of a small number of Physical Review A papers that the editors and referees find of particular interest, importance, or clarity. These Editors' Suggestion papers are listed prominently on http://pra.aps.org/ and marked with a special icon in the print and online Tables of Contents and in online searches.
"Measures of quantum computing speedup" introduces the concept of strong quantum speedup. It is shown that approximating the ground-state energy of an instance of the time-independent Schrodinger equation with d degrees of freedom and d large enjoys strong exponential quantum speedup. It can be easily solved on a quantum computer. Some researchers in QMA theory believe that quantum computation is not effective for eigenvalue problems. One of the goals of this paper is to explain this dissonance.
The first is entitled "Collection, Analysis, and Uses of Parallel Block Vectors." Authored by PhD student Melanie Kambadur, undergraduate Kui Tang, and Assistant Professor Martha Kim, this research establishes a novel perspective from which to reason about the correctness and performance of parallel software. In addition, it describes the design and implementation of an open-source tool that automatically instruments a program to gather the necessary runtime information.
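The core idea, pairing each executed basic block with the degree of parallelism at the moment it runs, can be sketched as follows (hypothetical hooks for illustration; the paper's tool inserts such instrumentation automatically):

```python
import threading

_lock = threading.Lock()
_active = 0      # number of application threads currently live
profile = {}     # basic-block id -> set of thread counts seen while it ran

def thread_started():
    global _active
    with _lock:
        _active += 1

def thread_finished():
    global _active
    with _lock:
        _active -= 1

def block_executed(block_id):
    """Hook a compiler pass might insert at each static basic block:
    records the degree of parallelism in effect when the block runs."""
    with _lock:
        profile.setdefault(block_id, set()).add(_active)
```

Aggregating these records yields, per static block, a vector of thread counts under which it executed, which is what lets developers reason about where a program actually runs in parallel.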
The second paper is titled "A Quantitative, Experimental Approach to Measuring Processor Side-Channel Security." The authors are John Demme, Robert Martin, Adam Waksman, and Simha Sethumadhavan. This paper describes a quantitative method to identify hardware design decisions that weaken security. The methodology can be used in the early processor design stages, when security vulnerabilities can still be easily fixed. The paper marks the beginning of a quantitative approach to securing computer architectures.
"Learning and Testing Classes of Distributions" as part of the
Algorithmic Foundations program.
A long and successful line of work in theoretical computer science has
focused on understanding the ability of computationally efficient
algorithms to learn and test membership in various classes of Boolean
functions. This proposal advocates an analogous focus on developing
efficient algorithms for learning and testing natural and important
classes of probability distributions over extremely large domains. The
research is motivated by the ever-increasing availability of large
amounts of raw unlabeled data from a wide range of problem domains
across the natural and social sciences. Efficient algorithms for these
learning and testing problems can provide useful modelling tools in
data-rich environments and may serve as a theoretically grounded
"computational substrate" on which large-scale machine learning applications
for real-world unsupervised learning problems can be developed.
One specific goal of the project is to develop efficient algorithms to
learn and test univariate probability distributions that satisfy
different natural kinds of "shape constraints" on the underlying
probability density function. Preliminary results suggest that dramatic
improvements in efficiency may be possible for algorithms that are
designed to exploit this type of structure. Another goal is to develop
efficient algorithms for learning and testing complex distributions that
result from the aggregation of many independent simple sources of
randomness.
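As an illustration of how a shape constraint helps, here is a sketch in the spirit of Birgé's classical decomposition for monotone (non-increasing) distributions: geometrically growing buckets let an algorithm describe the distribution with roughly log(N) parameters instead of N (parameter names and the bucket-growth rate are illustrative):

```python
from collections import Counter

def geometric_buckets(domain_size, eps=0.5):
    """Partition {0, ..., domain_size-1} into intervals whose lengths
    grow geometrically; O(log(N)/eps) buckets suffice for monotone
    distributions (Birge-style decomposition, sketch only)."""
    buckets, start, width = [], 0, 1.0
    while start < domain_size:
        end = min(start + max(1, int(width)), domain_size)
        buckets.append((start, end))
        start, width = end, width * (1 + eps)
    return buckets

def learn_monotone(samples, domain_size, eps=0.5):
    """Flatten the empirical mass within each bucket. Exploiting the
    shape constraint is what cuts the sample complexity from roughly
    linear in N down to roughly logarithmic."""
    counts = Counter(samples)
    n = len(samples)
    density = [0.0] * domain_size
    for (a, b) in geometric_buckets(domain_size, eps):
        mass = sum(counts[i] for i in range(a, b)) / n
        for i in range(a, b):
            density[i] = mass / (b - a)
    return density
```

The returned estimate is a proper distribution (it sums to one) and, for genuinely monotone sources, is close to the truth with far fewer samples than an unconstrained histogram would need.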
Abella is currently the executive director of the Innovative Devices and Services Research Department at AT&T Labs, managing a multi-disciplinary technical staff specializing in human-computer interaction. Abella is an award-winning advocate for encouraging minorities and women to pursue careers in science and engineering. She earned her Ph.D. and master’s degree from Columbia, graduating in 1995, under the guidance of Prof. John Kender.
Timothy Sun is being recognized for his complete set of undergraduate research projects, which include his paper
On Milgram's construction and the Duke embedding conjectures.
Timothy was advised by Prof. Jonathan Gross.
For a video of the ARM see Engadget.
Kui Tang also worked with Prof. Tony Jebara on Tractable Inference in Graphical Models and published the paper Bethe Bounds and Approximating the Global Optimum. Sixteenth International Conference on Artificial Intelligence and Statistics, 2013.
Congratulations to Kui Tang, Martha Kim, and Tony Jebara!
Today, data on customers is what makes a company profitable. Tomorrow, data about citizens can make our society successful. But how can this progress be reconciled with privacy? Analytics, the science of identifying individual types and collective trends, now runs behind closed doors on your data and outside your control. Prof. Chaintreau aims to show that a more socially efficient alternative exists: managing personal data should be made transparent and easy for each of us. In his NSF CAREER award, he will develop algorithms that run analytics on data regained by users, while leveraging information on their social context. Moreover, incentive mechanisms will be designed to make privacy not only a choice, but one that leads to a socially efficient outcome. Demonstrating this concept will start in the classroom: not only engineers but also the future journalists informing our citizens will be involved in a new program on the management of personal data, as enabling privacy raises technical, economic, and societal challenges. The ultimate goal of this work is to improve how the web treats information about our lives without the high cost of top-down regulation.
the 45th ACM Symposium on the Theory of Computing, for his
single-authored paper titled "Maintaining Shortest Paths Under
Deletions in Weighted Directed Graphs." The work is on maintaining
distance information in a network that is changing over time.
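For context, the naive baseline simply re-runs a shortest-path computation from scratch after every edge deletion; the paper's contribution is precisely avoiding that recomputation. The sketch below shows the baseline, not the paper's algorithm:

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra over an adjacency map {u: [(v, weight), ...]}.
    Returns shortest distances from source to all reachable nodes."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def delete_edge(graph, u, v):
    """Remove edge u -> v; the baseline then recomputes everything."""
    graph[u] = [(x, w) for x, w in graph.get(u, []) if x != v]
```

In a network with many deletions, paying the full Dijkstra cost per update is exactly the overhead that dynamic shortest-path algorithms are designed to beat.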
STOC is one of the most prestigious conferences in theoretical
computer science. Two papers shared the award at STOC 2013. Before
this, Aaron was also the sole winner of the Best Student Paper Award
at SODA (ACM-SIAM Symposium on Discrete Algorithms) 2012. As a
third-year PhD student, Aaron's research interest lies in the design
and analysis of efficient algorithms. He has made significant
contribution to this area, and has already published seven papers in
STOC, FOCS and SODA.
The work presents novel algorithms to cope with the growing complexity of designing Systems-on-Chip by simplifying heterogeneous component integration and enabling reuse of predesigned components. It was the only best-paper award given at DATE 2012, which received some 950 submissions, more than 50% of them from outside Europe. The best-paper selection was performed by an award committee based on the results of the reviewing process, the quality of the final paper, and the quality of the presentation, which was given by Hung-Yi.
The award was announced at the 2013 edition of the conference, held in March 2013 in Grenoble, France.
''Computer engineering research is intrinsically an interdisciplinary effort and the complex challenges of developing future embedded systems require a vertically integrated approach to innovation that spans from circuit design to application software,'' says Professor Carloni, Principal Investigator on the program. ''We are excited with this award which recognizes the continuous progress of Columbia Engineering faculty in leading interdisciplinary and multi-institution research programs.''
In the framework of the PERFECT program, the ESP Team will investigate a variety of scalable innovations in circuits, architecture, software, and computer-aided design (CAD) methods, including: scalable 3D-stacked voltage regulators for integrated fine-grain power management; highly-resilient near-threshold-voltage circuit operation; seamless integration of programmable cores and specialized accelerators into a scalable system-on-chip (SoC) architecture; efficient network-on-chip infrastructure for both message-passing communications and distributed power control; static and dynamic scheduling of on-chip resources driven by performance profiling; and an integrated CAD environment for full-system simulation and application-driven optimization.
On October 28 at Stony Brook University, team members Long Chen, Gang Hu, and Xinhao Yuan participated in a grueling five-hour competition, winning first place. The team is coached by Xiaorui Sun.
ACM ICPC is an annual competitive programming competition among the universities of the world. The contest helps students enhance their programming skills, and enables contestants to test their ability to perform under pressure. ACM ICPC is the oldest, largest, and most prestigious programming contest in the world. Each year, more than 5,000 teams from about 2,000 universities all over the world compete at the regional level, and about 100 teams participate in the World Finals.
Congratulations, team!
SecurityWatch
Dec 20, 2012
http://securitywatch.pcmag.com/none/306223-the-internet-will-literally-kill-you-by-2014-predicts-security-firm
Can Your Cisco VoIP Phone Spy On You?
SecurityWatch
Dec 19, 2012
http://securitywatch.pcmag.com/none/306172-can-your-cisco-voip-phone-spy-on-you
Security researchers find vulnerability in Cisco VoIP phones
PhysOrg
Dec 19, 2012
http://phys.org/news/2012-12-vulnerability-cisco-voip.html
Cisco phone exploit allows attackers to listen in on phone calls
The Verge
Jan 10, 2013
http://www.theverge.com/2013/1/10/3861316/cisco-phone-exploit-discretely-enables-microphone
Your worst office nightmare: Hack makes Cisco phone spy on you
ExtremeTech
Jan 10, 2013
http://www.extremetech.com/computing/145371-your-worst-office-nightmare-hack-makes-cisco-phone-spy-on-you
Cisco VoIP Phone Flaw Could Plant Bugs In Your Cubicle
Readwrite Hack
Jan 11, 2013
http://readwrite.com/2013/01/10/cisco-voip-phone-flaw-could-plant-bugs-in-your-cubicle
Hack turns Cisco desk phones into remote listening devices
Slashgear
Jan 11, 2013
http://www.slashgear.com/hack-turns-cisco-desk-phones-into-remote-listening-devices-11264898/
Cisco IP Phone Vulnerability Enables Remote Eavesdropping
Tekcert
Jan 10, 2013
http://tekcert.com/blog/2013/01/10/cisco-ip-phone-vulnerability-enables-remote-eavesdropping
Cisco issues advisory to plug security hole in VoIP phone
FierceEnterprise Communications
Jan 10, 2013
http://www.fierceenterprisecommunications.com/story/cisco-issues-advisory-plug-security-hole-voip-phones/2013-01-10
Hack Turns Cisco's Desk Phone into a Spying Device
Istruck.me
Jan 11, 2013
http://itstruck.me/hack-turns-ciscos-desk-phone-into-a-spying-device/
Hack Turns Cisco’s Desk Phone Into a Spying Device
Gizmodo
Jan 10, 2013
http://gizmodo.com/5974814/hack-turns-ciscos-desk-phone-into-a-spying-device
Warning: That Cisco phone on your desk may be spying on you
BetaNews
Jan 10, 2013
http://betanews.com/2013/01/10/warning-that-cisco-phone-on-your-desk-may-be-spying-on-you/
Hack turns the Cisco phone on your desk into a remote bugging device
Arstechnica
Jan 10, 2013
http://arstechnica.com/security/2013/01/hack-turns-the-cisco-phone-on-your-desk-into-a-remote-bugging-device/
Cisco VoIP phone vulnerability allow eavesdropping remotely
IOtechie
Jan 9, 2013
http://hackersvalley.iotechie.com/hacks/cisco-voip-phone-vulnerability-allow-eavesdropping-remotely/
Malware leaves Cisco VoIP phones "open to call tapping"
PC Pro
Jan 8, 2013
http://www.pcpro.co.uk/news/security/379129/malware-leaves-cisco-voip-phones-open-to-call-tapping
Researcher exposes VoIP phone vulnerability
Business Wire for Security InfoWatch
Dec 13, 2012
http://www.securityinfowatch.com/news/10842240/researcher-exposes-voip-phone-vulnerability
Cisco IP Phones Vulnerable
IEEE Spectrum
Dec 18, 2012
http://spectrum.ieee.org/computing/embedded-systems/cisco-ip-phones-vulnerable
Cisco IP phones buggy
NetworkWorld
Dec 12, 2012
http://www.networkworld.com/community/node/82046
Researchers Identify Security Vulnerabilities In VoIP Phones
Red Orbit
Jan 8, 2013
http://www.redorbit.com/news/technology/1112759485/voip-phones-security-vulnerability-software-symbiote-010813/
Security Researcher Compromises Cisco VoIP Phones With Vulnerability
Darkreading
Dec 13, 2012
http://www.darkreading.com/threat-intelligence/167901121/security/attacks-breaches/240144378/security-researcher-compromises-cisco-voip-phones-with-vulnerability.html
Remotely listen in via hacked VoIP phones: Cisco working on eavesdropping patch
Computerworld
Jan 8, 2013
http://blogs.computerworld.com/cybercrime-and-hacking/21600/remotely-listen-hacked-voip-phones-cisco-working-eavesdropping-patch
Cisco IP Phones Hacked
Fast Company
Dec 19, 2012
http://www.fastcompany.com/3004163/cisco-ip-phones-hacked
Cisco rushing to fix broken VoIP patch
IT World Canada
Jan 8, 2013
http://www.itworldcanada.com/news/cisco-rushing-to-fix-broken-voip-patch/146562
Cisco working to fix broken patch for VoIP phones
IDG News Service for CSO Online
Jan 7, 2013
http://www.csoonline.com/article/725788/cisco-working-to-fix-broken-patch-for-voip-phones
Your Cisco phone is listening to you: 29C3 talk on breaking Cisco phones
Boing Boing
Dec 29, 2012
http://boingboing.net/2012/12/29/your-cisco-phone-is-listening.html
Yet another eavesdrop vulnerability in Cisco phones
The Register
December 13, 2012
http://www.theregister.co.uk/2012/12/13/cisco_voip_phones_vulnerable/
Cisco VoIP Phones Affected By On Hook Security Vulnerability
Forbes
Dec 6, 2012
http://www.forbes.com/sites/robertvamosi/2012/12/06/off-hook-voip-phone-security-vulnerability-affects-some-cisco-models/
Discovered vulnerabilities in Cisco VoIP phones
KO IT (RUSSIAN)
Jan 8, 2013
http://ko.com.ua/obnaruzheny_uyazvimosti_v_telefonah_cisco_voip_70011
http://forums.cnet.com/7726-6132_102-5409269.html
http://www.xsnet.com/blog/bid/112454/Jenn%20Cano
http://news.softpedia.com/news/Kernel-Vulnerability-in-Cisco-Phones-Can-Be-Exploited-for-Covert-Surveillance-Video-320168.shtml
http://www.securelist.com/en/advisories/51768
http://accublog.wordpress.com/2013/01/10/eavesdropping-on-your-phone-from-anywhere-in-the-world/
http://geekapolis.fooyoh.com/geekapolis_gadgets_wishlist/8247285
http://eddydemland.blogspot.com/2013/01/hack-turns-ciscos-desk-phone-into.html
http://www.onenewspage.us/n/Technology/74vnp9j0m/Kernel-Vulnerability-in-Cisco-Phones-Can-Be-Exploited.htm
http://technology.automated.it/2013/01/10/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://www.i4u.com/2013/01/youtube/warning-your-be-you-desk-may-spying-phone-cisco
http://www.shafaqna.com/english/other-services/featured/itemlist/tag/cisco.html
http://www.ieverythingtech.com/2013/01/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://dailyme.com/story/2013011000002065/hack-turns-cisco-s-desk-phone-into-a-spying-device
http://truthisscary.com/2013/01/video-hacked-phones-could-be-listening-to-everything-you-say/
http://www.smokey-services.eu/forums/index.php?topic=227209.0
http://technewstube.com/theverge/154392/cisco-phone-exploit-allows-attackers-to-listen-in-on-phone-calls/
http://finance.yahoo.com/news/security-researcher-demonstrates-enterprise-voip-130000432.html
The program is designed for students interested in the intersection between the two departments. In particular, its focus is on computer systems, combining skills in both hardware and software, including the areas of: digital design, computer architecture (both sequential and parallel), embedded systems, computer-aided design and networking.
To learn more about this program, please see http://www.compeng.columbia.edu
Vasilis Pappas, Michalis Polychronakis, and Angelos D. Keromytis, IEEE Security & Privacy, May 2012
Photo/announcement:
https://www.facebook.com/photo.php?fbid=491657707521734&set=a.157827437571431.30830.157394210948087&type=1&theater
Details of the competition:
http://www.poly.edu/csaw2012/csaw-kaspersky
He now goes to the International Round, in London.
For more please see http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1218222
Smartphones are increasingly ubiquitous. Many users are
inconveniently forced to carry multiple smartphones for
work, personal, and geographic mobility needs.
This research is developing Cells, a lightweight virtualization
architecture for enabling multiple virtual smartphones to run
simultaneously on the same physical cellphone device in a securely
isolated manner. Cells introduces a new device namespace mechanism
and novel device proxies that efficiently and securely multiplex phone
hardware devices across multiple virtual phones while providing native
hardware device performance to all applications. Virtual phone
features include fully-accelerated graphics for gaming, complete power
management features, easy-to-use security and safety mechanisms that
can transparently and dynamically control the availability of phone
features, and full telephony functionality with separately assignable
telephone numbers and caller ID support. Cells is being implemented in
Android, the most widely used smartphone platform, to transparently
support multiple Android virtual phones on the same phone hardware.
While the primary focus of this research is smartphone devices, the
development of these ideas will also be explored in the context of
tablet devices.
The results of this research are providing a foundation for future
innovations in smartphone computing, enabling new uses and
applications and transforming the way the devices can be used. This
includes not only greater system security, but greater user safety
especially for young people. Integrating this research with the CS
curriculum provides students with hands-on learning through
programming projects on smartphone devices, enabling them to become
contributors to the workforce as smartphones become an increasingly
dominant computing platform.
While races in multithreaded programs have drawn huge attention from the
research community, little has been done for API races, a class
of errors as dangerous and as difficult to debug as traditional thread
races. An API race occurs when multiple activities, whether they be
threads or processes, access a shared resource via an application
programming interface (API) without proper synchronization. Detecting
API races is an important and difficult problem as existing race
detectors are unlikely to work well with API races.
Software reliability increasingly affects everyone, whether or not
they personally use computers. This research studies, and for the
first time automatically detects, an important class of races that
has a significant impact on software reliability. The study
quantitatively demonstrates how API races are numerous, difficult to
debug, and a real threat to software reliability. To address this
problem, this research is developing RacePro, a new system to
automatically detect API races in deployed systems. RacePro checks
deployed systems in vivo by recording live executions and then
deterministically replaying and checking them later. This approach
increases checking coverage beyond the configurations or executions
covered by software vendors or beta testing sites. RacePro records
multiple processes and threads, detects races in the recording among
API methods that may concurrently access shared objects, then explores
different execution orderings of such API methods to determine which races
are harmful and result in failures. Technologies developed will help
application developers detect insidious software defects, enabling
more robust, reliable, and secure software infrastructure.
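The following toy Python sketch (illustrative only, not part of RacePro) shows the shape of such a race: a check-then-act pattern in which the existence check and the file creation are separate API calls with no synchronization between them.

```python
import os
import tempfile
import threading

def unsafe_create(path, results, idx):
    # API race: the existence check and the file creation are two
    # separate API calls with no synchronization between them.
    if not os.path.exists(path):
        # Another thread may create the file right here, so one
        # writer can silently clobber the other's contents.
        with open(path, "w") as f:
            f.write(f"owner-{idx}")
        results[idx] = True
    else:
        results[idx] = False

path = os.path.join(tempfile.mkdtemp(), "shared.txt")
results = {}
threads = [threading.Thread(target=unsafe_create, args=(path, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Typically one thread "wins" and the other sees the file, but under an
# unlucky interleaving both pass the check and both write the file.
print(sorted(results.values()))
```

A race detector in the spirit of RacePro would flag the unsynchronized pair of API calls rather than waiting for the bad interleaving to occur in the field.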
The grant will support Profs. Sethumadhavan (CS), Seok and Tsividis' (EE) work on Hybrid Continuous-Discrete Computers for Cyber-Physical Systems, aiming at specialized single-chip computers with improved power/performance.
Professors Tsividis (EE), Seok (EE), Sethumadhavan (CS) and their collaborators in the Department of Mechanical Engineering at the University of Texas at Austin have been awarded a three-year, $1.1M NSF grant under the agency’s Cyber-Physical Systems program, for research in Hybrid Continuous-Discrete Computers for Cyber-Physical Systems.
The research augments the today-ubiquitous discrete (digital) model of computation with continuous (analog) computing, which is well-suited to the continuous natural variables involved in cyber-physical systems, and to the error-tolerant nature of computation in such systems. The result is a computing platform on a single silicon chip, with higher energy efficiency, higher speed, and better numerical convergence than is possible with purely discrete computation. The research has thrusts in hardware, architecture, microarchitecture, and applications.
For more on DaMoN see http://fusion.hpl.hp.com/damon2012/program.html
The paper is currently highlighted on the journal home page and will be available to the public for free for about six months (http://www.computer.org/cal).
Congratulations to the authors for this recognition of their research!
Heterogeneous SoC architectures, which combine a variety of programmable components and special-function accelerators, are emerging as a fundamental computing platform for many systems from computer servers in data centers to embedded systems and mobile devices.
Design productivity for SoC platforms depends on creating and maintaining reusable components at higher levels of abstraction and on hierarchically combining them to form optimized subsystems. While the design of a single component is important, the critical challenges are in the integration and management of many heterogeneous components. The goal of this project is to establish Supervised Design-Space Exploration as the foundation for a new component-based design environment in which hardware-accelerator developers, software programmers and system architects can interact effectively while they each pursue their specific goals.
For more details:
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1219001
The work will focus on low-power and high-performance interconnection
networks, targeted to both shared-memory parallel processors and
systems-on-chip for consumer electronics. The aim is to develop a new class of dynamically adaptable on-chip digital networks that continually self-reconfigure, at very fine granularity, to customize their operation to actual observed traffic patterns.
Prediction and learning techniques will be explored to optimally
reconfigure the on-chip networks.
The use of asynchronous networks supports the seamless integration of multiple synchronous processors and memories operating at different
clock rates. The ultimate goal is a significant breakthrough in system latency, power, area and reliability, over synchronous approaches.
Kristen Parton, Nizar Habash and Kathy McKeown won a best paper award at EAMT 12 (Conference of the European Association for Machine Translation) for their paper entitled "Can Automatic Post-Editing make MT more Meaningful?". This paper presents research done by Kristen Parton for her dissertation.
The paper is currently highlighted on the CACM website (http://cacm.acm.org/research?date=year&subject=11).
Congratulations to the authors for this recognition of their research!
http://www.nytimes.com/2011/05/29/nyregion/immersed-in-nature-eyes-on-the-screen-app-city.html
http://www.nytimes.com/2011/09/01/technology/personaltech/mobile-apps-make-it-easy-to-point-and-identify.html?pagewanted=all
http://www.nytimes.com/2012/04/05/garden/new-gardening-apps.html?pagewanted=2&_r=2
http://www.nytimes.com/2011/08/14/fashion/this-life-a-plugged-in-summer.html?pagewanted=all
http://intransit.blogs.nytimes.com/2011/10/05/leaf-peeping-theres-an-app-for-that/
http://query.nytimes.com/gst/fullpage.html?res=9504E4DB143EF934A25750C0A9679D8B63&pagewanted=all
http://www.nytimes.com/2011/06/09/technology/personaltech/09PHONES.html?pagewanted=all
http://query.nytimes.com/gst/fullpage.html?res=9B01E2DF1530F93AA35753C1A9679D8B63
Fortunately, many such applications reflect 'metamorphic properties' that define a relationship between pairs of inputs and outputs, such that for any previous input i with its already known output o, one can easily derive a test input i' and predict the expected output o'. If the actual output o'' is different from o', then there must be an error in the code. This project investigates a methodology for determining the metamorphic properties of software and for devising good test cases from which the secondary tests can be derived. The project extends the inputs/outputs considered in previous work on metamorphic testing to focus on application state, before and after, rather than just functional parameters and results. The research also extends the pairwise relations implied by metamorphic properties to 'semantic similarity' for nondeterministic applications, applied to profiles from numerous executions, since an exact relation cannot be expected to hold for a single pair of test executions. These extensions enable treatment of more sophisticated properties that preliminary experiments have shown to reveal defects that were not detected otherwise.
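For intuition, here is a minimal, hypothetical instance of this pattern in Python, using the identity sin(x) = sin(pi - x) as the metamorphic property: the follow-up input pi - x is derived from the original input x, and its expected output is predicted without ever knowing the "true" answer.

```python
import math

def check_sine_property(f, x, tol=1e-9):
    # Metamorphic property of sine: sin(x) == sin(pi - x).
    # From a previous input x with output o, derive the follow-up
    # input x' = pi - x and predict its expected output o' = o.
    o = f(x)
    o_prime = f(math.pi - x)
    return abs(o - o_prime) <= tol

# A faithful implementation satisfies the property...
print(check_sine_property(math.sin, 0.7))   # True

# ...while a (deliberately) buggy one violates it, exposing the defect
# with no oracle for the exact value of sin(0.7) required.
buggy_sin = lambda x: math.sin(x) + (0.01 if x > 1.0 else 0.0)
print(check_sine_property(buggy_sin, 0.7))  # False
```

The project's extensions replace this exact equality with relations over application state and, for nondeterministic programs, with semantic similarity across many executions.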
This project aims to improve the programmability and efficiency of distributed memory systems, a key issue in the execution of parallel algorithms. While it is fairly easy to put, say, thousands of independent adders on a single chip, it is far more difficult to supply them with useful data to add, a task that falls to the memory system. This research will develop compiler optimization algorithms that configure and orchestrate parallel memory systems capable of feeding such parallel computational resources.
To make more than incremental progress, this project departs from existing hegemony in two important ways. First, its techniques will be applied only to algorithms expressed in the functional style, a more abstract, mathematically sound representation that enables precise reasoning about parallel algorithms and very aggressive optimizations. Second, it targets field-programmable gate arrays (FPGAs) rather than existing parallel computing platforms. FPGAs provide a highly flexible platform that enables exploring parallel architectures far different than today's awkward solutions, which are largely legacy sequential architectures glued together. While FPGAs are far too flexible and power-hungry to be the long-term "solution" to the parallel computer architecture question, their use grounds this project in physical reality and will produce useful hardware synthesis algorithms as a side-effect.
Judicious and efficient data movement is the linchpin of parallel computing. This project attacks that challenge head on, establishing the constructs and algorithms necessary for hardware and software to efficiently manipulate data together. This research will lay the groundwork for the next generation of storage and instruction set architectures, compilers, and programming paradigms -- the bedrock of today's mainstream computing.
The PI's Intrusion Detection Lab (IDS Lab) will investigate and evaluate techniques to detect and defend against advanced malware threats to the internet routing infrastructure. A recent study published by the IDS Lab demonstrates that there are a vast number of unsecured embedded systems on the internet, primarily routers, that are trivially vulnerable to exploitation with little to no effort. As of December 2011, 1.4 million trivially vulnerable devices are in easy reach of even the most unsophisticated attacker. The IDS Lab will fully develop and deploy an experimental system that injects intrusion detection functionality within the firmware of a (legacy) router that senses the unauthorized modification of router firmware. The technology may be developed and deployed as a sensor in an Early Attack Warning System, but it may also be implemented to prevent firmware modifications. The IDS Lab will demonstrate the highest levels of protection that can be achieved with this novel technology in a range of embedded system device types. This is the thesis research of Ph.D. student Ang Cui, carried out with a team of project students.
See https://academicjobs.columbia.edu/applicants/jsp/shared/frameset/Frameset.jsp?time=1332985141422 for application details and a fuller description of the fellowships.
For more on the fellowship, please see
http://www.capitalnewyork.com/article/media/2012/01/5160427/helen-gurley-brown-gives-transformative-18-million-columbias-journalis
For more on our joint Journalism + CS Masters program, please see
http://www.wired.com/epicenter/2010/04/will-columbia-trained-code-savvy-journalists-bridge-the-mediatech-divide/
Congratulations to Jeremy and his advisor Jason!
The Air Force YIP supports scientists and engineers who show exceptional ability and promise for conducting basic research.
Junfeng will investigate concurrency attacks and defenses. Today's multithreaded programs are plagued with subtle but serious concurrency vulnerabilities such as race conditions. Just as vulnerabilities in sequential programs can lead to security exploits, concurrency vulnerabilities can also be exploited by
attackers to gain privilege, steal information, inject arbitrary code, etc. Concurrency attacks targeting these vulnerabilities are an impending threat (see the CVE list at http://www.cvedetails.com/vulnerability-list/cweid-362/vulnerabilities.html), yet few existing defense techniques can deal with concurrency vulnerabilities. In fact, many of the traditional defense techniques are rendered unsafe by concurrency vulnerabilities.
The objective of this project is to take a holistic approach to creating novel program analysis/protection techniques and a system called DASH to secure multithreaded programs and harden traditional defense techniques in a concurrent environment. The greatest impact of our project will be drastically improved software security and reliability, benefiting the Nation’s cyber infrastructure.
For more on this award, see http://www.wpafb.af.mil/library/factsheets/factsheet.asp?id=9332
The demo titled "Organic Solar Cell-equipped Energy Harvesting Active Networked Tag (EnHANT) Prototypes" was developed by 10 students (Gerald Stanje, Paul Miller, Jianxun Zhu, Alexander Smith, Olivia Winn, Robert Margolies, Maria Gorlatova, John Sarik, Marcin Szczodrak, and Baradwaj Vigraham) from the groups of Professors Carloni (CS), Kinget, Kymissis, and Zussman.
The EnHANTs Project is an interdisciplinary project that focuses on developing small, flexible, and energetically self-reliant devices. These devices can be attached to objects that are traditionally not networked (e.g., books, furniture, walls, doors, toys, keys, clothing, and produce), thereby providing the infrastructure for various novel tracking applications. Examples of these applications include locating misplaced items, continuous monitoring of objects (e.g., items in a store and boxes in transit), and determining locations of disaster survivors.
The SenSys demo showcased EnHANT prototypes that are integrated with novel custom-developed organic solar cells and with novel custom Ultra-Wideband (UWB) transceivers, and demonstrated various network adaptations to environmental energy conditions. A video of the demo will soon be available on the EnHANTs website.
In 2009, the project won first place in the Vodafone Americas Foundation Wireless Innovation Competition; in 2011, it received the IEEE Communications Society Award for Outstanding Paper on New Communication Topics. The project has been supported by the National Science Foundation, the Department of Energy, the Department of Homeland Security, Google, and Vodafone.
Recently concepts and methodologies from game theory and economics have found numerous successful applications in the study of the Internet and e-commerce. The main goal of this proposal is to bridge the algorithmic gap between these three disciplines. The PI will work to develop efficient algorithms for some of the fundamental models and solution concepts and to understand the computational difficulties inherent within them, with the aim to inspire and enable the next-generation e-commerce systems. The proposed research will contribute to a more solid algorithmic and complexity-theoretic foundation for the interdisciplinary field of Algorithmic Game Theory.
Details of the event are at http://www.kaspersky.com/educational-events/it_security_conference_2012_usa
Cells: A Virtual Mobile Smartphone Architecture
by Jeremy Andrus, Christoffer Dall, Alex Van’t Hof, Oren Laadan, Jason Nieh
Smartphones are increasingly ubiquitous, and many users carry multiple phones to accommodate work, personal, and geographic mobility needs. The authors created Cells, a virtualization architecture for enabling multiple virtual smartphones to run simultaneously on the same physical cellphone in an isolated, secure manner. Cells introduces a usage model of having one foreground virtual phone and multiple background virtual phones. This model enables a new device namespace mechanism and novel device proxies that integrate with lightweight operating system virtualization to multiplex phone hardware across multiple virtual phones while providing native hardware device performance. Cells virtual phone features include fully accelerated 3D graphics, complete power management features, and full telephony functionality with separately assignable telephone numbers and caller ID support. They have implemented a prototype of Cells that supports multiple Android virtual phones on the same phone. Their performance results demonstrate that Cells imposes only modest runtime and memory overhead, works seamlessly across multiple hardware devices including Google Nexus 1 and Nexus S phones, and transparently runs Android applications at native speed without any modifications.
Presented in Basel, Switzerland, "Augmented Reality in the Psychomotor Phase of a Procedural Task" reports on a key part of Steve Henderson's spring 2011 dissertation, and was coauthored by Dr. Henderson and his advisor, Prof. Steve Feiner. It presents the design and evaluation of a prototype augmented reality user interface designed to assist users in performing an aircraft maintenance assembly task. The prototype tracks the user and multiple physical task objects, and provides dynamic, prescriptive, overlaid instructions on a tracked, see-through, head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete aspects of the assembly task in which they physically manipulated task objects significantly faster and with significantly greater accuracy when using augmented reality than when using 3D-graphics-based assistance presented on a stationary LCD panel.
He is a member of the CryptoLab at Columbia University, where he is advised by Tal Malkin. Congratulations to Aaron and to the CryptoLab for this outstanding research contribution!
"All of the winning applications have applied advanced networking technology to enable significant progress in research, teaching, learning or collaboration to increase the impact of next-generation networks around the world,” said Tom Knab, chair of the IDEA award judging committee and chief information officer, Case Western Reserve University’s College of Arts & Sciences. “The winning submissions were from an exceptionally strong nominations pool and represent a cross-section of the wide-ranging innovation that is occurring within the Internet2 member community. Also, for the first time, we added a category for applications developed by students and those were remarkable for their creativity and relevance.”
Kyung-Hwa Kim’s project, DYSWIS, is a collaborative network fault diagnosis system, with a complete framework for fault detection, user collaboration and fault diagnosis for advanced networks. With the increase in application complexity, the need for network fault diagnosis for end-users has increased. However, existing failure diagnosis techniques fail to assist end-users in accessing applications and services. The key idea of DYSWIS is to have end-users collaborate to diagnose a network fault in real time, collecting diverse information from different parts of the network and inferring the cause of failure.
Internet2, owned by U.S. research universities, is the world’s most advanced networking consortium for global researchers and scientists who develop breakthrough Internet technologies and applications and spark tomorrow’s essential innovations. Internet2 consists of more than 350 U.S. universities; corporations; government agencies; laboratories; institutions of higher learning; and other major national, regional and state research and education networks; and organizations representing more than 50 countries. Internet2 is a registered trademark.
Kyung-Hwa Kim is a Ph.D. student in the Internet Real-Time Lab, headed by Prof. Henning Schulzrinne.
Congratulations to Kyung-Hwa Kim, and his advisor, Henning Schulzrinne!
Press release: http://www.internet2.edu/news/pr/2011.10.04.idea.html
MEERKATS includes partners at George Mason University and Symantec Research Labs.
operations. While the need for data protection is clear, the queries must be protected as well, since they may reveal the requester's interests, agenda, mode of operation, etc. The PIs will develop an efficient and secure system for database access, which allows execution of complex queries and guarantees protection to both server and client. The PIs will build on their existing successful solution, which relies on encrypted Bloom filters (BFs) and novel reroutable encryption to achieve simple keyword searches. The PIs will expand and enhance this system to handle far more complicated queries, support verifiable and private compliance checking, and maintain high performance even for very large databases. First, the PIs will design novel BF population and matching algorithms, which will allow for secure querying based on combinations of basic keywords. Then, the PIs will design and apply various heuristics and data-representation and tokenization techniques to extend this power to range, wildcard, and other query types. Some of the subprotocols will be implemented using Yao's Garbled Circuit (GC) technique, combined with techniques for seamless integration of BF- and GC-based secure computations. In particular, this will prove useful in secure query compliance checking. Finally, the PIs will investigate efficient solutions that eliminate all third helper parties, through the application of (and enhancements to) proxy re-encryption schemes. Using this tool, the (single) server in possession of the searchable encrypted database will be able to perform search and to re-encrypt the obtained result for decryption with the client's key.
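To ground the terminology, a Bloom filter answers approximate set-membership queries with no false negatives (and a tunable false-positive rate). In the system described above the filter contents are encrypted; the plain population and matching logic can be sketched in Python as follows (all names here are illustrative, not the PIs' implementation):

```python
import hashlib

class BloomFilter:
    """A plain (unencrypted) Bloom filter over keywords."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k   # m bits, k hash functions
        self.bits = 0

    def _positions(self, word):
        # Derive k bit positions by salting one hash with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits |= 1 << p

    def might_contain(self, word):
        # All k positions set => probably present (false positives
        # possible); any position clear => definitely absent.
        return all(self.bits >> p & 1 for p in self._positions(word))

bf = BloomFilter()
for token in ["alice", "wire-transfer", "zurich"]:
    bf.add(token)

print(bf.might_contain("alice"))                 # True
# A conjunctive keyword query: every keyword must (probably) match.
print(all(bf.might_contain(w) for w in ["alice", "zurich"]))  # True
```

The research described above layers encryption over this structure so that the server can run the matching step without learning the keywords themselves.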
The results of Paskov and Traub were obtained through computer experimentation. Theoretical explanations of these results continue to be an active research area. There is no generally accepted explanation.
Read the article on the American Scientist website.
For more information, visit http://www.sigmm.org/news/sigmm-award-2011.
This past March, Julia Hirschberg also received the IEEE James L. Flanagan Speech and Audio Processing Award.
Julia Hirschberg has been a fellow of the American Association for Artificial Intelligence since 1994 and a fellow of the International Speech Communication Association (ISCA) since 2008. She served as president of ISCA from 2005 to 2007, editor-in-chief of Computational Linguistics from 1993 to 2003, and co-editor-in-chief of Speech Communication from 2003 to 2005, and received a Columbia Engineering School Alumni Association (CESAA) Distinguished Faculty Teaching Award in 2009.
This research poses questions whose answers have consequences at several levels of the traditional system stack: Can programmers be freed from hardware-specific optimization of communication without degrading performance? What abstractions are needed to allow hardware to adapt to the programmer, rather than the other way around? Can communication efficiency be improved when running on an application-specific communication platform? The project answers these questions by exploring abstractions and algorithms to profile a parallel program's communication, synthesize a custom network design, and implement it in a configurable network architecture substrate. The research methods center around the X10 language, and include compiler instrumentation passes, offline communication profile analyses, development of a portable network intermediate representation, and network place and route software algorithms.
The research activities span three fields of computer science: Hardware system and architecture research is carried out in software simulation. This portion of the research explores multiple aspects of the hardware system including efficient implementations of software-style polymorphism and mechanisms to enforce data encapsulation. The project is grounded in a specific, performance-critical, real-world problem of database query processing. This component of the research identifies target types for hardware acceleration that are used in common, complex database operations such as range partitioning. Performance results will be obtained both by direct measurement and by simulation. Finally, the compiler segment of the project develops compiler techniques to link high-level languages to the accelerators available on the target hardware system. The compiler adapts software at runtime to best utilize the available accelerators and to partition code among general-purpose and specialized processing cores.
This project addresses programming challenges posed by the new trend in multicore computing. Multithreaded programs are difficult to write, test, and debug. They often contain numerous insidious concurrency errors, including data races, atomicity violations, and order violations, which we broadly define to be races. A good deal of prior research has focused on race detection. However, little progress has been made to help developers fix races because existing systems for fixing races work only with a small, fixed set of race patterns and, for the most part, do not work with simple order violations, a common type of concurrency error.
The research objective of this project, LOOM: a Language and System for Bypassing and Diagnosing Concurrency Errors, is to create effective systems and technologies to help developers fix races. A preliminary study revealed a key challenge yet to be addressed in fixing races: how to help developers immediately protect deployed programs from known races. Even with the correct diagnosis of a race, fixing it in a deployed program is complicated and time consuming. This delay leaves large vulnerability windows, potentially compromising reliability and security.
To address these challenges, the LOOM project is creating an intuitive, expressive synchronization language and a system called LOOM for bypassing races in live programs. The language enables developers to write declarative, succinct execution filters to describe their synchronization intents on code. To fix races, LOOM installs these filters in live programs for immediate protection against races, until a software update is available and the program can be restarted.
The greatest impact of this project will be a new, effective language and system and novel technologies to improve the reliability of multithreaded programs, benefiting business, government, and individuals.
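To illustrate the flavor of an execution filter (this is a hypothetical Python analogue, not LOOM's actual filter language, which targets live native programs), one can impose a mutual-exclusion constraint on a racy method at runtime, without modifying the program's source:

```python
import threading

def install_mutual_exclusion_filter(obj, method_name):
    """Execution-filter-style fix (illustrative): wrap a method so
    its invocations are mutually exclusive, bypassing a known race
    until a proper software update can be installed."""
    lock = threading.Lock()
    original = getattr(obj, method_name)

    def guarded(*args, **kwargs):
        with lock:
            return original(*args, **kwargs)

    setattr(obj, method_name, guarded)

class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):
        # Unsynchronized read-modify-write: a race under concurrency.
        v = self.n
        self.n = v + 1

c = Counter()
install_mutual_exclusion_filter(c, "bump")  # declare the sync intent

threads = [threading.Thread(target=lambda: [c.bump() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.n)  # 4000: no lost updates once the filter is installed
```

In LOOM the analogous filter would be expressed declaratively and installed into a running program, which is the hard part this project addresses.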
The PI intends to explore the theoretical underpinnings of the cryptographic challenges that arise in this context. The proposed directions of research touch on the following questions:
-- How can we safely allow others to perform computation on our encrypted data while maintaining its privacy?
-- How can we verify that outsourced computation was done correctly?
-- What stronger security models are needed in this new, highly interactive environment?
We will address the theoretical aspects of these problems, including modeling, protocol design, and negative results. As part of our investigations, we will study the powerful cryptographic primitives of fully homomorphic encryption and functional encryption (in particular, the relationship between them and outsourced and verifiable computations), as well as the area of leakage-resilient cryptography.
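As a small concrete example of computing on encrypted data, textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. The toy Python sketch below (tiny, insecure parameters, for illustration only) is far weaker than fully homomorphic encryption, which supports arbitrary computation, but it conveys the core idea:

```python
# Toy RSA with tiny primes -- wildly insecure, illustration only.
p, q, e = 61, 53, 17
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 7, 6
# The server multiplies ciphertexts only; it never sees m1 or m2...
c_prod = (encrypt(m1) * encrypt(m2)) % n
# ...yet the client recovers the product of the plaintexts:
# (m1^e * m2^e) mod n == (m1*m2)^e mod n.
print(decrypt(c_prod))         # 42
```

Fully homomorphic schemes extend this from a single operation to both addition and multiplication, and hence to arbitrary circuits over encrypted data.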
The research will develop a novel foundation for creating and exploiting a critical intermediate representation layer between low-level audio-visual features and high-level human events. Signal-based information will be abstracted and represented by "unit models", each of which is trained from small samples of exemplar data, in a sub-space selected from the larger intersection of semantic concepts with image and sound features. These resulting individual discriminators are then leveraged for higher-level ensemble modeling and detection. This middle layer of hundreds of thousands of models provides several advantages: models are trained and reused across many humanly meaningful categories; they each carry a machine-derived unit of semantic information; and they all are trained and applied in an easily parallelized fashion. The work will directly impact the key issues of accuracy, robustness, scalability, and responsiveness of video analysis systems.
The work originated with a CS undergraduate's insight and initiative, and Prof. Bellovin's class assignment dealing with privacy. On her own initiative, undergraduate Michelle Madejski wrote a Facebook app, found subjects, and did a preliminary version of the study. Based on early promising results, Prof. Bellovin and Ph.D. student Maritza Johnson teamed up with Madejski to carry out the full-scale study.
Read the paper here: https://mice.cs.columbia.edu/getTechreport.php?techreportID=1459
For more on the conference see http://docs.law.gwu.edu/facweb/dsolove/PLSC/
Leafsnap is the first in a series of electronic field guides being developed by researchers from Columbia University, the University of Maryland, and the Smithsonian Institution. This free mobile app uses visual recognition software to help identify tree species from photographs of their leaves. Leafsnap contains beautiful high-resolution images of leaves, flowers, fruit, petiole, seeds, and bark. Leafsnap currently includes the trees of New York City and Washington, D.C., and will soon grow to include the trees of the entire continental United States.
Leafsnap turns users into citizen scientists, automatically sharing images, species identifications, and geo-coded stamps of species locations with a community of scientists who will use the stream of data to map and monitor the ebb and flow of flora nationwide.
The Leafsnap family of electronic field guides aims to leverage digital applications and mobile devices to build an ever-greater awareness of and appreciation for biodiversity.
The genesis of Leafsnap was the realization that many techniques used for face recognition developed by Professor Peter Belhumeur and Professor David Jacobs, of the Computer Science departments of Columbia University and the University of Maryland, respectively, could be applied to automatic species identification.
Professors Jacobs and Belhumeur approached Dr. John Kress, Chief Botanist at the Smithsonian, to start a collaborative effort for designing and building such a system for plant species. Columbia and the University of Maryland designed and implemented the visual recognition system used for automatic identification. In addition, Columbia University designed and wrote the iPhone, iPad, and Android apps and the leafsnap.com website, and wrote the code that powers the recognition servers. The Smithsonian was instrumental in collecting the datasets of leaf species and supervising the curation efforts throughout the course of the project. As part of this effort, the Smithsonian contracted the not-for-profit nature photography group Finding Species, which produced the high-quality photographs available in the apps and on the website.
The IEEE Communications Society Award for Outstanding Paper on New Communication Topics is given to "outstanding papers that open new lines of work, envision bold approaches to communication, formulate new problems to solve, and essentially enlarge the field of communications engineering." It is given to a paper published in any IEEE Communications Society publication in the previous calendar year.
The award will be presented at the 2011 IEEE International Conference on Communications (ICC'2011) award ceremony.
More information about the EnHANTs project can be found at http://enhants.ee.columbia.edu/
This award is highly visible due to its media-oriented backing. One of 13 awardees, Dana will receive $750k in direct costs over three years for her project on "A Systems Approach to Understanding Tumor Specific Drug Response." Pe’er’s research is focused on elucidating tumor-specific molecular networks, working towards personalized cancer care. The project will develop and use machine learning approaches for the integration and analysis of high-throughput data toward understanding the tumor regulatory network and its response to drugs, as well as the genetic determinants of this response.
Please see:
http://www.standup2cancer.org/node/4782
http://www.youtube.com/watch?v=9lDh1iiO9KA&feature=player_embedded
Read more on http://www.engineering.columbia.edu/nae-elects-prof-yannakakis-member.
The build-and-learn aspect of BigShot has a lot of appeal, says Margaret Honey, CEO of the New York Hall of Science in Queens, N.Y., a hands-on, family-oriented science and technology museum. "I've seen lots of technology and engineering projects throughout my career, and I was really taken with this," she says. "The strategy of engineering this device so that kids can fairly easily put this together without starting from scratch is incredibly smart. I love that kids end up with a working camera and that the assembly of the project is just the beginning."
Multithreaded programs are becoming increasingly critical, driven by the
rise of multicore hardware and the coming storm of cloud computing.
Unfortunately, these programs remain difficult to write, test, and debug.
A key reason for this difficulty is nondeterminism: different runs of a
multithreaded program may show different behaviors depending on how the
threads interleave. Nondeterminism complicates almost every development
step of multithreaded programs. For instance, it weakens testing because
the schedules tested may not be the ones run in the field; it complicates
debugging because reproducing a buggy schedule is hard.
In the past three decades, researchers have developed many techniques to
address nondeterminism. Despite these efforts, it remains an open
challenge to achieve both efficiency and determinism for general
multithreaded programs on commodity multiprocessors.
This project aims to address this fundamental challenge. Its key insight
is that one can reuse a small number of schedules to process a large
number of inputs. Based on this insight, it takes an approach called
schedule memoization that memoizes past schedules and, when possible,
reuses them for future runs. This approach amortizes the high overhead of
making one schedule deterministic over many reuses and makes a program
repeat familiar behaviors whenever possible. A real-world analogy to this
approach is animals' natural tendencies to follow familiar routes to avoid
hazards and the overhead of discovering unknown routes.
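The schedule-memoization idea above can be sketched in a few lines. This is an illustrative toy, not the project's actual system: the class name, the input-signature scheme, and the recorded schedule format are all assumptions made for the example.

```python
import hashlib

class ScheduleMemoizer:
    """Toy sketch of schedule memoization: remember the thread schedule used
    for an input, and reuse it deterministically when a similar input recurs."""

    def __init__(self):
        self.cache = {}  # input signature -> recorded schedule

    def signature(self, inputs):
        # Hash a canonical representation of the inputs.
        return hashlib.sha256(repr(sorted(inputs.items())).encode()).hexdigest()

    def get_or_record(self, inputs, record_fn):
        key = self.signature(inputs)
        if key in self.cache:
            # Reuse: replay a known schedule instead of paying the
            # cost of making a fresh schedule deterministic.
            return self.cache[key], True
        schedule = record_fn(inputs)  # first run: record a schedule
        self.cache[key] = schedule
        return schedule, False

memo = ScheduleMemoizer()
sched, reused = memo.get_or_record({"n": 4}, lambda inp: ["t0", "t1", "t0", "t1"])
sched2, reused2 = memo.get_or_record({"n": 4}, lambda inp: None)
print(reused, reused2)  # False True
```

The point of the sketch is the amortization: the expensive recording step runs once per distinct input signature, and every later run with the same signature replays the memoized schedule.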
The greatest impact of this project will be a novel approach and new,
effective systems and technologies for improving software reliability, thus
benefiting every business, government, and individual.
Harmon, who was an NSF Graduate Research Fellow during his doctoral studies at the Columbia University School of Engineering & Applied Science, completed his PhD thesis in 2010, as a member of the Columbia Computer Graphics Group directed by Prof. Eitan Grinspun. He has worked at both Walt Disney Animation Studios (makers of Snow White through Tangled) and Weta Digital (makers of The Lord of the Rings through Avatar), applying research technologies to problems in digital special effects. His work on contact algorithms for the motion of fabric is used in films such as Disney's Tangled.
The conference byline is "UIST (ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces." The conference has been held yearly for the past 22 years. This is the eighth year of Lasting Impact awards.
and sensor networks need to process large volumes of updates while
supporting on-line analytic queries. With large amounts of RAM, single
machines are potentially able to manage hundreds of millions of
items. With multiple hardware threads, as many as 64 on modern
commodity multicore chips, many operations can be processed
concurrently.
Processing queries and updates concurrently can cause
interference. Queries need to see a consistent database state, meaning
that at least some of the time, updates will need to wait for queries
to complete. To address this problem, a RAM-resident snapshot of the database is taken at
various points in time. Analytic queries operate over the snapshot,
eliminating interference, but allowing answers to be slightly out of
date. Several different snapshot creation methods are being developed
and studied, with the goal of being able to create snapshots
rapidly (e.g., in fractions of a second) while minimizing the overhead
on update processing.
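The snapshot idea can be illustrated with a minimal sketch. This is an assumed design for the example, not the project's actual implementation (a real system would use a much cheaper snapshot mechanism than a full copy, e.g. copy-on-write):

```python
import threading

class SnapshotStore:
    """Minimal sketch: updates hit the live table; analytic queries read an
    immutable snapshot taken at some earlier point, so they never block updates."""

    def __init__(self):
        self.live = {}
        self.snapshot = {}
        self.lock = threading.Lock()

    def update(self, key, value):
        # Updates never wait for long-running queries.
        with self.lock:
            self.live[key] = value

    def take_snapshot(self):
        # The brief pause here is the overhead a real system must minimize.
        with self.lock:
            self.snapshot = dict(self.live)

    def query_sum(self):
        # Reads a consistent, possibly slightly stale, state.
        return sum(self.snapshot.values())

store = SnapshotStore()
store.update("a", 1)
store.update("b", 2)
store.take_snapshot()
store.update("c", 100)      # arrives after the snapshot was taken
print(store.query_sum())    # 3: the query does not see the later update
```

This shows the trade-off described above: queries see a consistent state without interfering with updates, at the cost of answers being slightly out of date until the next snapshot.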
These problems are studied both for traditional server machines and
for multicore mobile devices. By keeping personalized, up-to-date
data on a user's mobile device, a wide range of potential new
applications could be supported while avoiding the privacy concerns of
widely distributing one's location. The research focus is on how to
efficiently utilize the many processing cores available on modern
machines, both traditional and mobile devices. A primary goal is to
allow performance to scale as additional cores become available in
newer generations of hardware.
More information can be found at http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1049898
from the Department of Computer Science. This award is in recognition of
his dedication to teaching and his efforts to make Computer Science
accessible to all students.
Google’s goal for the award is to encourage women to excel in computing and technology and become active role models and leaders in the field. The company will sponsor the award recipients to the Grace Hopper Celebration of Women in Computing to be held in Atlanta in September. According to Google, “Anita Borg devoted her adult life to revolutionizing the way we think about technology and dismantling barriers that keep women and minorities from entering computing and technology fields. Her combination of technical expertise and fearless vision continues to inspire and motivate countless women to become active participants and leaders in creating technology.”
Another Columbia Engineering student – Zeinab Abbassi Ph.D. Computer Science – was a finalist for the scholarship. Her adviser is Vishal Misra, associate professor. Read more.
The proposal, entitled "Power-Adaptive, Event-Driven Data Conversion and Signal Processing Using Asynchronous Digital Techniques", addresses the increasing demand for ultra low-power and high-quality microelectronic systems that continuously acquire and process information, as soon as it becomes available. In these applications, new information is generated infrequently, at irregular and unpredictable intervals. This event-based nature of the information calls for a drastic re-thinking of how these signals are monitored and processed.
Traditional synchronous (i.e. clocked) digital techniques, which use fixed-rate operation to evaluate data whether or not it has changed, are a poor match for the above applications, and often lead to excessive power consumption. This research aims instead to provide viable "event-based" systems: controlled not by a clock but rather by the arrival of each event. Asynchronous (i.e. clock-less) digital logic techniques, which are ideally suited for this work, are combined with continuous-time digital signal processing, to make this task possible. Such continuous-time data acquisition and processing promises significant power and energy reduction, flexible support for a variety of signal processing protocols and encodings, high-quality output signals, and graceful scalability to future microelectronic technologies. A series of silicon chips will be designed and fully evaluated, culminating in a fully programmable, event-driven data acquisition and signal processing system, which can be used as a testbed for a wide variety of real-world applications.
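The contrast between clocked and event-driven evaluation can be caricatured in software. This sketch is only a conceptual illustration of why event-driven operation saves work on sparsely changing signals; it says nothing about the actual asynchronous circuits the research designs:

```python
def clocked_samples(signal, period):
    """Fixed-rate (clocked) evaluation: process every tick, changed or not."""
    return [signal[t] for t in range(0, len(signal), period)]

def event_driven_samples(signal):
    """Event-driven evaluation: do work only when the value actually changes."""
    events, last = [], object()  # sentinel that compares unequal to any sample
    for t, v in enumerate(signal):
        if v != last:
            events.append((t, v))
            last = v
    return events

# A signal that changes only twice after its initial value.
sig = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
print(len(clocked_samples(sig, 1)))    # 10 evaluations
print(len(event_driven_samples(sig)))  # 3 events, at t = 0, 3, 7
```

When new information arrives infrequently and irregularly, as in the applications described above, the event-driven path does a small fraction of the work of the clocked one.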
The ONR Young Investigator Program invests in academic scientists and engineers who show exceptional promise for creative study. In 2010 the ONR selected 17 award recipients from 211 proposal submissions.
Read more about this on the Columbia SEAS
news webpage.
For more information on the ONR Young Investigator Program please see
the official press release.
the premier Human-Computer Interaction conference. The paper,
"Designing Patient-Centric Information Displays for Hospitals,"
proposes a design for in-room, patient-centric information displays,
based on iterative design with physicians and a study with emergency
department patients at Washington Hospital Center, a large urban
hospital. The research was conducted by Wilcox during a summer
internship at Microsoft Research with Dan Morris and Desney Tan of
Microsoft Research, in collaboration with Justin Gatewood of MedStar
Institute for Innovation. The study included the presentation of
real-time information to patients based on their medical records,
during their visit to an Emergency Department. Subjective responses to
in-room displays were overwhelmingly positive, and the study elicited
guidelines (regarding specific information types, privacy, use cases,
and information presentation techniques) that could be used
for a fully-automatic implementation of the design.
The Goldwater Scholarship funds and supports outstanding undergraduate scholars in the sciences, mathematics, and engineering to pursue a Ph.D. in those fields.
were awarded the Best Paper Award at
the 2010 International Conference on Computational Photography (ICCP), for
their paper titled "Spectral Focal Sweep: Extended Depth of Field from
Chromatic Aberrations." The paper described a new technique for capturing
photographs with very wide depth of field. The conference was held at
MIT on March 28-30.
with the goal of improving their energy-efficiency, comfort, and safety. Traditional buildings account for about 40% of the total energy consumed in the
United States. A central theme of the proposed research is to model a future high-performance building as a cyber-physical system whose complex dynamics arise from the interaction among its physical attributes, the operating equipment (such as sensors, embedded processors, and HVAC components), and the
behavior of its occupants. Emphasis is laid on the development of methods to make the distributed embedded system robust to uncertainty and adaptive to change.
More details: http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0931870
the winners of this year’s Anita Borg Women of Vision Awards. Three
leaders in technology – Kristina M. Johnson, Under Secretary for Energy,
Department of Energy, Kathleen R. McKeown, Henry and Gertrude Rothschild
Professor of Computer Science, Columbia University, and Lila Ibrahim,
General Manager, Emerging Markets Platform Group, Intel Corporation will
be honored for their accomplishments and contributions as women in
technology at ABI’s fifth annual Women of Vision Awards Banquet at the
Mission City Ballroom, Santa Clara, California on May 12, 2010. Read more.
The DEPS portfolio ranges from disciplinary boards such as mathematics, physics, computer science, and astronomy to boards and standing committees serving
each of the major military services as well as the intelligence community and the Department of Homeland Security.
After 10 years of service Traub has stepped down as Chair of the Computer Science and Telecommunications Board (CSTB). He served as founding chair 1986-1992 and served again 2005-2009.
The Tech Awards, presented by Applied Materials, is a signature program of The Tech Museum. Established in 2001, The Tech Awards recognizes Laureates in five categories: environment, economic development, education, equality, and health. These Laureates have developed new technological solutions or innovative ways to use existing technologies to significantly improve the lives of people around the world. Dr. White received one of the three Intel Environment Awards. Read more.
Team Columbia 1 (ranked 2nd):
- Jingyue Wu (PhD, computer science)
- Varun Jalan (MS, computer science)
- Zifeng Yuan (PhD, civil engineering)
Team Columbia 2 (ranked 6th):
- Chen Chen (PhD, IEOR)
- Huzaifa Neralwala (MS, computer science)
- Jiayang Jiang (Junior, mathematics)
Due to their performance, team Columbia 1 was also selected as one of 100 teams (chosen from over 7,000 around the world) to advance to the world finals competition, to be held in Harbin, China, from February 1-6. The teams were led by coach John Zhang (PhD student, computer science).
media.
enhance routing protocols so that they can compute high-performance
routes in a computationally efficient manner without leaking
information that might reveal the location of participating nodes.
This allows users to send and receive high-bandwidth, low-latency
transmissions such as video and audio feeds without revealing their
location. Potential applications include celebrity multimedia
Twitter-like feeds and network-supported action gaming.
Many important scientific and engineering problems involve a large number of variables; equivalently, they are said to be high dimensional. Examples of such problems occur in quantum mechanics, molecular biology, and economics. For instance, the Schrödinger equation for p particles has dimension d = 3p; systems with a large number of particles are of great interest in physics and chemistry. This problem can only be solved numerically. Over decades of work, scientists have found that such problems get increasingly hard as p increases. The investigators believe this does not stem from a failure to create good numerical methods; the difficulty is intrinsic. They believe that solving the Schrödinger equation suffers the curse of dimensionality on a classical computer, that is, the time to solve the problem must grow exponentially with p. (A classical computer is any machine not based on the principles of quantum mechanics; all machines in use today are classical computers.) The investigators hope to show that this problem is tractable on a quantum computer. Success in this research would mark the first instance of a proven exponential quantum speedup for an important non-artificial problem.
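Schematically, the dichotomy the investigators aim to establish can be written as follows; the cost functions and constants here are illustrative notation for this summary, not the paper's exact definitions:

```latex
\[
  \underbrace{\mathrm{cost}_{\mathrm{classical}}(\varepsilon, d) \;\ge\; C\,c^{\,d}}_{\text{curse of dimensionality},\; c > 1}
  \qquad\text{versus}\qquad
  \underbrace{\mathrm{cost}_{\mathrm{quantum}}(\varepsilon, d) \;\le\; \mathrm{poly}\!\left(d,\, \varepsilon^{-1}\right)}_{\text{tractable on a quantum computer}}
\]
```

Here $\varepsilon$ is the error tolerance and $d = 3p$ the dimension; an exponential lower bound for every classical algorithm, paired with a polynomial quantum algorithm, is what "strong exponential quantum speedup" would mean.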
the workings of the camera. The website also allows young photographers from around
the world to share their pictures. “The idea here was not to create a device that
was an inexpensive toy,” says Nayar. “The idea was to create something that
could be used as a platform for education across many societies.
Visit the Bigshot website. Read more about the Bigshot project. Read more.
November 2-5, 2009, Newark Liberty International Airport Marriott
Newark (NYC Metropolitan Area), New Jersey, USA
Learning from Data using Matchings and Graphs (pdf version)
Tony Jebara
Columbia University
Many machine learning problems on data can naturally be formulated as problems on graphs. For example, dimensionality reduction and visualization are related to graph embedding. Given a sparse graph between n high-dimensional data nodes, how do we faithfully embed it in low dimension? We present an algorithm that improves dimensionality reduction by extending semidefinite embedding methods. But, given only a dataset of n samples, how do we construct a graph in the first place? The space to explore is daunting with 2^(n^2) graphs to choose from yet two interesting subfamilies are tractable: matchings and b-matchings. By placing distributions over matchings and using loopy belief propagation, we can efficiently and optimally infer maximum weight subgraphs. Matching not only has intriguing combinatorial properties but it also leads to improvements in graph reconstruction, graph embedding, graph transduction, and graph partitioning. We will show applications on text, network and image data. Time permitting, we will also show results on location data from millions of tracked mobile phone users which lets us discover patterns of human behavior, networks of places and networks of people. Read more.
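A maximum-weight matching, the combinatorial object at the heart of the talk, can be illustrated with a brute-force toy. This is only a demonstration of what is being optimized; the O(n!) search below is for intuition, whereas the talk's loopy-belief-propagation approach infers the optimum efficiently:

```python
from itertools import permutations

def max_weight_matching(weights):
    """Brute-force maximum-weight bipartite matching on an n x n weight matrix.
    Returns (best total weight, perm) where perm[i] is the column matched to row i."""
    n = len(weights)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(weights[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

# Toy affinity matrix between 3 data points and 3 candidate neighbors.
W = [[3, 1, 0],
     [1, 4, 2],
     [0, 2, 5]]
score, match = max_weight_matching(W)
print(score, match)  # 12 (0, 1, 2)
```

In graph construction for learning, such matchings (and their b-matching generalization, where each node gets exactly b neighbors) replace the usual k-nearest-neighbor heuristic with a globally optimal, degree-constrained subgraph.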
Alumni Achievement Award, which recognizes an individual
for exceptional accomplishments that have brought honor to
the recipient and to Carnegie Mellon. He is being recognized
for his "pioneering research contributions and teaching in
the field of computer vision." Read more.
the premier conference in its field. The paper,
"Evaluating the Benefits of Augmented Reality for Task Localization in
Maintenance of an Armored Personnel Carrier Turret," was coauthored
by Steve Henderson and Prof. Steve Feiner. It presents the design,
implementation, and user testing of a prototype augmented reality
application to support military mechanics conducting routine
maintenance tasks inside an armored vehicle turret. The prototype
uses a tracked head-worn display to augment a mechanic's view with
text, labels, arrows, and animated sequences documenting tasks to
perform. A formal human subject experiment with military mechanics
showed that the augmented reality condition allowed them to locate
tasks more quickly than when using two baseline conditions (an untracked
head-worn display, and a stationary display representing an improved
version of existing electronic technical manuals).
checking is unquestionably crucial for improving software reliability,
but the checking coverage of most existing techniques is severely
hampered by where they are applied: a software product is typically
checked only at the site where it is developed, so the number of
different states checked is throttled by that site's resources (e.g.,
machines, testers/users, software/hardware configurations).
To address this fundamental problem, we will investigate mechanisms
that will enable software vendors to continue checking for bugs after
a product is deployed, thus checking a drastically more diverse set of
states. Our research contributions will include the investigation,
development, and deployment of: (1) a wide-area autonomic software
checking infrastructure to support continuous checking of deployed
software in a transparent, efficient, and scalable manner; (2) a
simple yet general and powerful checking interface to facilitate
creation of new checking techniques and combination of existing
techniques into more powerful means to find subtle bugs that are often
not found during conventional pre-deployment testing; (3) lightweight
isolation, checkpoint, migration, and deterministic replay mechanisms
that enable replication of application processes as checking launch
points, isolation of replicas from users, migration of replicas across
hosts, and replay of identified bugs without need for the original
execution environment; and (4) distributed computing mechanisms for
efficiently and scalably leveraging geographically dispersed idle
resources to determine where and when replicas should be executed to
improve the speed and coverage of software checking, thereby
converting available hardware cycles into improved software
reliability.
Prof. Carloni was also named a Senior Member of the Association for Computing Machinery (ACM) on July 21, 2009. According to the ACM website, "the Senior Member grade recognizes those ACM members with at least 10 years of professional experience and 5 years of continuous Professional Membership who have demonstrated performance that sets them apart from their peers."
posture of large enterprises. The project is intended to devise metrics and
measurement methods, and test and evaluate these in a real institution, to
evaluate how human users behave in a security context.
To develop computer security as a science and engineering discipline,
metrics need to be defined to evaluate the safety and security of
alternative system designs. Security policies are often specified by large
organizations but there are no direct means to evaluate how well these
policies are followed by human users. The proposed project explores
fundamental means of measuring the security posture of large enterprises.
Risk management and risk mitigation require measurement to assess
alternative outcomes in any decision process.
Financial institutions in particular require significant controls over the
handling of confidential financial information and employees must adhere to
these policies to protect assets, which are subject to continual adversarial
attack by thieves and fraudsters. Hence, financial institutions are the
primary focus of the measurement work. The technical means of measuring user
actions that may violate security policy is performed in a non-intrusive
manner. The measurement system uses specially crafted decoy documents and
email messages that signal when they have been opened or copied by a user in
violation of policy. The project will develop collaborations with financial
experts to devise risk models associated with users of information
technology within large enterprises. This line of work extends traditional
research in computer security by opening up a new area focused on the human
aspect of security.
To survive and flourish, people must interact with their environment in an organized fashion. To do so, they need to learn, imagine, and perform an assortment of transformations on and in the world. Primary among these are manipulation of objects and navigation in space. This project integrates research in computer science and cognitive science to develop and evaluate augmented reality tools to create effective dynamic explanations that enhance manipulation and navigation, in conjunction with identification and visualization. Augmented reality refers to user interfaces in which virtual material is integrated with and overlaid on the user's experience of the real world; for example, by using tracked head-worn and hand-held displays. Dynamic explanations are task-appropriate sequences of actions, presented interactively, with appropriate added information. The tools will be created in collaboration with subject matter experts for exploratory use in indoor and outdoor real world domains: navigating and identifying landmarks in a wooded park area, assembling a piece of furniture, and navigating and visualizing for planning the site of a new urban campus. Cognitive science research will determine the best ways to convey explanations and information to people. Computer science research will address the design and implementation of systems that embody the best candidate approaches for identifying objects and locations, specifying actions, and adding non-visible information. In situ experiments will be used to assess and refine the systems.
Manipulation, navigation, identification, and visualization are representative of important things that people do every day, ranging from fixing broken equipment to reaching a desired destination in an unfamiliar environment. The ways in which we perform these tasks could potentially be improved significantly through augmented reality systems designed using the principles to be developed by this project. Both the cognitive principles and the augmented reality tools will have broad applicability. The systems developed will inform the design of future systems that can aid the general public, for educational and recreational ends, as well as systems that can assist people with auditory, visual, or physical impairments.
State-of-the-art desktop search tools are valuable for searching various forms of individual user documents, interpreted broadly to include user files, email messages, web pages, and chat sessions. Unfortunately, focusing on individual, relatively static documents in isolation is often insufficient for important search scenarios, where the history and patterns of access to all information on a desktop, static or otherwise, are themselves of value and, in fact, critical to answering certain queries effectively. We propose to design, implement, and evaluate new mechanisms for enabling users to search all information that has been displayed on their desktops, preserving and exploiting the same personal context and display layout as in the original desktop computing experience. Our next-generation desktop search system will rely on a virtualization record-and-play architecture that enables both display and application execution on a desktop to be recorded (and, in fact, replayed) efficiently without user-perceived degradation of application performance. Our system will capture and index all activity on the desktop, and will exploit this aggregate desktop information to produce effective, display-centric search results.
The project will develop and experimentally evaluate novel techniques for conducting fine-grained tracking of information of interest (as defined by the system operator or, in the future, by end-users, in a flexible, context-sensitive manner) toward mapping the paths that such information takes through the enterprise and providing a means for enforcing information flow and access control policies. Prof. Keromytis' hypothesis is that it is possible to create efficient fine-grained information tracking and access control mechanisms that operate throughout an enterprise legacy computing infrastructure through appropriate use of hypervisors and distributed tag propagation protocols.
This research analytically and experimentally investigates defensive infrastructure addressing vulnerabilities in open cellular operating systems and telecommunications networks. We are exploring the requirements and design of such defenses through three coordinated efforts: a) extending and applying formal policy models for telecommunication systems and providing tools for phone manufacturer, provider, developer, and end-user policy compliance verification; b) building a security-conscious distribution of the open-source Android operating system; and c) exploring the needs and designs of overload controls in telecommunications networks needed to absorb changes in mobile phone behavior, traffic models, and the diversity of communication end-points.
This research symbiotically supports educational goals at the constituent institutions by supporting graduate and undergraduate student research, and is integral to the security and network curricula. This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
This project aims to develop and evaluate a new family of user-controllable policy learning techniques capable of leveraging user feedback and presenting users with incremental, user-understandable suggestions on how to improve their security or privacy policies. In contrast to traditional machine learning techniques, which are generally configured as “black boxes” that take over from the user, user-controllable policy learning aims to ensure that users continue to understand their policies and remain in control of policy changes. As a result, this family of policy learning techniques offers the prospect of empowering lay and expert users to more effectively configure a broad range of security and privacy policies.
The techniques to be developed in this project will be evaluated and refined in the context of two important domains, namely privacy policies in social networks and firewall policies. In the process, work to be conducted in this project is also expected to lead to a significantly deeper understanding of (1) the difficulties experienced by users as they try to specify and refine security and privacy policies and of (2) what it takes to overcome these challenges (e.g., better understanding of policy modifications that users can relate to, better understanding of how many policy modifications users can realistically be expected to handle, and how these issues relate to the expressiveness of underlying policy languages, modes of interactions with the user, and the topologies across which policies are deployed).
Textually-generated 3D scenes will have a profound, paradigm-shifting effect in human computer interaction, giving people unskilled in graphical design the ability to directly express intentions and constraints in natural language -- bypassing standard low-level direct-manipulation techniques. This research will open up the world of 3D scene creation to a much larger group of people and a much wider set of applications. In particular, the research will target middle-school age students who need to improve their communicative skills, including those whose first language is not English or who have learning difficulties: a field study in a New York after-school program will test whether use of the system can improve literacy skills. The technology also has the potential for interesting a more diverse population in computer science at an early age, as interactions with K-12 teachers have indicated.
According to the award web page, "Established in 1996, the presidential awards honor the best of Columbia's teachers for the influence they have on the development of their students and their part in maintaining the University's longstanding reputation for educational excellence."
David Elson is working on his dissertation in natural language understanding, advised by Prof. Kathleen McKeown. Read more.
Michael Rand (CC) was awarded the Computer Science Department Award for Scholastic Achievements as acknowledgment of his contributions to the Department of Computer Science and to the university as a whole.
Brian Smith (SEAS) garnered the Computer Science Department Scholarship Award, awarded to an undergraduate Computer Science degree candidate who demonstrated scholastic excellence through projects or class contributions
Peter Tsonev (SEAS) was awarded the Computer Engineering Award of Excellence, for demonstrating scholastic excellence.
The Andrew P. Kosoresow Memorial Award for Excellence in Teaching and service is awarded to students who demonstrated outstanding teaching and exemplary service. This year, it was given to Tristan Naumann (SEAS), Dokyun Lee (CC), Jae Woo Lee (GSAS), Paul Etienne Vouga (GSAS), and Oren Laadan (GSAS).
The Russell C. Mills Award for Excellence in Computer Science recognizes academic excellence in the area of Computer Science and went to Joshua Weinberg (GS) and Eliane Stampfer (CC).
The Theodore R. Bashkow Award for Excellence in Independent Projects is awarded to Computer Science seniors who have excelled in independent projects. This year, Adam Waksman (CC) and Kimberly Manis (SEAS) were recognized.
The Paul Charles Michelman Memorial Award recognizes PhD students in Computer Science who have performed exemplary service to the department, devoting time and effort beyond the call to further the department's goal, and went to Matei Ciocarlie (GSAS) and Chris Murphy (GSAS).
The Certificate of Distinction for Academic Excellence is given at graduation to Computer Science and Computer Engineering majors who have an overall cumulative GPA in the top 10% among graduating seniors in CS and CE:
Michael Rand (CC), Brian Smith (SEAS), Daniel Weiner (GS), Peter Tsonev (SEAS), Adam Waksman (CC), Eliane Stampfer (CC).
The Computer Science Service Award is awarded to PhD students who were selected to be in the top 10% in service contribution to the Department: Hila Becker, Matei Ciocarlie, Gabriella Cretu-Ciocarlie, Kevin Egan, David Elson, Jin Wei Gu, David Harmon, Bert Huang, Maritza Johnson, Gurunandan Krishnan, Chris Murphy, Kristen Parton, Paul Etienne Vouga, John Zhang and Hang Zhao.
Taking data from GPS-equipped taxis and other vehicles, cell phones and other devices, Jebara's Citysense can tell you, in real time, where the action is. Read more.
Sixty-seven researchers were honored on December 19 in a ceremony presided over by Dr. John H. Marburger III, Science Advisor to the President and Director of the White House Office of Science and Technology Policy.
"The Presidential Early Career Awards for Scientists and Engineers, established in 1996, honors the most promising researchers in the Nation within their fields. Nine federal departments and agencies annually nominate scientists and engineers who are at the start of their independent careers and whose work shows exceptional promise for leadership at the frontiers of scientific knowledge. Participating agencies award these talented scientists and engineers with up to five years of funding to further their research in support of critical government missions." Read more.
"Beating out more than 400 entrants from across the country, StackSafe was awarded first prize after a rigorous assessment by an online panel of over 300 venture capitalists, angel investors, and university judges." Read more.
Ian Vo wrote a paper titled "Quality Assurance of Software Applications Using the In Vivo Testing Approach", which has been accepted for publication at ICST 2009, the 2nd IEEE International Conference on Software Testing, Verification and Validation. According to its abstract, "software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in hopes of finding any remaining flaws." The authors present a testing methodology they call in vivo testing, in which tests are continuously executed in the deployment environment. They discuss the approach and the prototype testing framework for Java applications called Invite and provide the results of case studies that demonstrate Invite's effectiveness and efficiency. Invite found real bugs in OSCache, Apache JCS and Apache Tomcat, with about 5% overhead. The project was supervised by Prof. Kaiser.
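The idea behind in vivo testing can be illustrated with a minimal sketch (hypothetical names; the actual Invite framework is a Java tool and forks process state so tests do not perturb the running application): a unit-style test runs probabilistically inside the deployed application, checking invariants against the application's *current* state rather than a fixed lab fixture.

```python
import random

class InVivoHarness:
    """Toy sketch of in vivo testing: with some probability, run a test
    against the application's current state in the deployed environment."""

    def __init__(self, rate=0.05):
        self.rate = rate        # fraction of calls that also run the test
        self.failures = []      # failures recorded for offline analysis

    def maybe_test(self, test, state):
        if random.random() < self.rate:
            try:
                assert test(state), "in vivo test failed"
            except AssertionError as e:
                # A real framework would log the failure (and the state
                # that triggered it) rather than raise it to the user.
                self.failures.append((test.__name__, str(e)))

# Example invariant checked against live state: a cache stays bounded.
def cache_size_within_bounds(cache):
    return len(cache) <= 100

harness = InVivoHarness(rate=1.0)   # rate=1.0 only for demonstration
harness.maybe_test(cache_size_within_bounds, {"k": "v"})
print(len(harness.failures))  # → 0: the invariant holds on this state
```

The low sampling rate is what keeps the reported ~5% overhead plausible: most calls skip the check entirely.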
The CRA honored a total of 22 female and 44 male students in this year's competition. Read more.
system design, or related research.
Charles Han is a doctoral student in the Columbia Computer Graphics Group, co-advised by Profs. Eitan Grinspun and Ravi Ramamoorthi. His research focuses on finding principled representations and efficient algorithms that operate well across a wide range of visual scales. There are many instances in graphics where one would like to render the same object at different scales: for example, an architect
designing a building may want to preview the entire structure at once or may want to zoom in on individual parts; characters and terrain in computer games may be seen at extremely close distances or as distant pixels on the horizon. Current techniques in computer graphics are generally tailored to perform well at a particular physical scale, and often do not translate well to coarser or finer scales.
In work presented at SIGGRAPH 2007, Han presented a solution to the long-standing problem of normal map filtering. By reinterpreting normal mapping in the frequency-domain as a convolution of geometry and BRDF, this work has enabled accurate multiscale rendering of normal maps at speeds orders of magnitude faster than previously possible. More recently, Han has developed a framework for the efficient example-based synthesis of very large textures, with features spanning a wide (or infinite) range of physical scales. He continues to extend this work to add further expressive power and intuitive user control.
Bergou's work builds on the ideas of Discrete Differential Geometry (DDG), whose goal is to identify the root from which the desirable properties of a continuous system stem and then to build discrete models using an appropriate discrete version of that root. This led to his work on discrete models for cloth and elastic rods. His work on artistic control of a physical system builds on constrained Lagrangian mechanics, in which constraints define the allowable states that a system may be in. Within the context of directing a physical simulation, this framework can be used to define constraints that allow for entirely physical motions for the system being simulated while still closely obeying the intent of the user controlling the
simulation.
Miklós is a Ph.D. candidate in the Columbia Computer Graphics Group, advised by Prof. Eitan Grinspun.
in Qubit Complexity".
tangible user interfaces for augmented reality" was coauthored by Steve Henderson and Steve Feiner. It presents a class of interaction techniques, called opportunistic controls, in which naturally occurring physical artifacts in a task domain are used to provide input to a user interface through simple vision-based processing. Tactile feedback from an opportunistic control can make possible eyes-free interaction. For example, a ridged surface can be used as a slider or a spinning washer as a rotary pot.
According to the IARPA mission statement, "The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk/high-payoff research that has the potential to provide our nation with an overwhelming intelligence advantage over future adversaries."
The classical complexity of many continuous problems is known thanks to information-theoretic arguments. This may be contrasted with discrete problems such as integer factorization, where one has to settle for conjectures about the complexity hierarchy. Among the issues the investigators will study are the following:
- For the foreseeable future the number of qubits will be a crucial computational resource. The investigators have shown that modifying the standard definition of quantum algorithms to permit randomized queries leads to an exponential improvement in the qubit complexity of path integration. The investigators propose to exploit the power of the randomized query setting. For example, are there exponential improvements in the query complexity for other important problems?
- A basic problem in physics and chemistry is to compute the ground state
energy of a system. The ground state energy is given by the smallest eigenvalue of the time-independent Schrödinger equation. If the number of particles in the system is p, the number of variables is d = 3p. In the worst case classical setting, the problem we study suffers the curse of dimensionality. The curse is broken in the quantum setting. The investigators want to determine if the randomized classical setting suffers the curse of dimensionality. If it does, a quantum computer enjoys exponential speedup for this problem. This would mark the first example of proven exponential quantum speedup for an important problem.
- The Schrödinger equation is fundamental to quantum physics and quantum chemistry. Solving this equation for quantum systems with a large number of variables would have a huge payoff for many applications. The investigators propose to study algorithms and initiate the study of the computational complexity of the Schrödinger equation in the worst case and randomized settings on a classical computer and in the quantum setting.
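The eigenvalue problem at the heart of the two items above can be written compactly; this is the standard formulation (the specific potential and boundary conditions are not given in the text):

```latex
% Ground-state energy as the smallest eigenvalue E_0 of the
% time-independent Schrödinger equation:
\left(-\tfrac{1}{2}\Delta + V(x)\right)\psi(x) = E\,\psi(x),
\qquad x \in \mathbb{R}^{d}, \quad d = 3p,
% where \Delta is the d-dimensional Laplacian, V is the potential,
% and p is the number of particles. The "curse of dimensionality"
% means the classical worst-case cost of approximating E_0 to
% accuracy \varepsilon grows exponentially in d.
```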
The Pe'er-Bussemaker Lab is using high-throughput genomics data to infer a universal protein-DNA recognition code. Shown are the positions of protein side-chains contacting a Watson-Crick base-pair in a variety of protein-DNA complexes. The data is the result of research efforts such as the Human Genome Project and revolutionary sequencing technologies that are capable of reading over 100 billion letters of DNA in just a few days. Such technologies include high-density microarrays, which measure and analyze the activity within a cell and are capable of quantifying the levels of more than a million unique RNAs in a single experiment, and multi-laser flow cytometry, which measures the abundance of multiple signaling molecules in over 100,000 individual cells in just a few minutes.
"Vast amounts of data are being produced in super-exponential rates; novel ground-breaking technologies are being invented so much faster than the rate at which scientists can understand and leverage them to gain biological insights," adds Pe'er. "It's like buying a whole pie, eating a tiny piece and throwing the rest away. Most of the data is only looked at on the very, very surface. And most of the data is only scarcely being used, leaving the rest untouched."
Professors Pe'er and Bussemaker say their new lab reflects Columbia's support for computational biology, a commitment Pe'er says can be seen in the Center for Computational Biology and Bioinformatics (C2B2), established in 2006 at the Medical campus.
"Columbia has seen a very dramatic elevation in status in systems and computational biology with the initiation of the C2B2, which is fast becoming one of the best computational centers around," said Pe'er. "The activity between the uptown medical campus and here on Morningside makes Columbia one of the top five computational biology centers in the world."
(from the University press release of July 25, 2008) Read more.
Henning Schulzrinne. It describes and evaluates mechanisms so that VoIP servers can continue to operate at full capacity even under severe overload. Such overload may occur during natural disasters or mass call-in events, such as voting for TV game show contestants. Without these measures, servers are likely to suffer from congestion collapse. Read more.
While the jury is still out on what the processor of the future will look like, one clear certainty is that it will be parallel. All major commercial processor vendors are now committed to increasing the number of processors (i.e., cores) that fit on a single chip. However, existing synchronous design methodologies face major obstacles in power consumption, performance and scalability. This proposal focuses on a particular existing easy-to-program and easy-to-teach multi-core architecture. It then identifies the interconnection network, connecting multiple cores and
memories, as the critical bottleneck to achieving lower overall power consumption. The target is to substantially improve the power, robustness and scalability of the system by designing and fabricating a high-speed asynchronous communication mesh.
The resulting parallel architecture will be globally-asynchronous locally-synchronous (i.e., GALS-style), which gracefully accommodates synchronous cores and memories operating at arbitrary unrelated clock
rates, while providing robustness to timing variability and support for plug-and-play (i.e. scalable) system design. Unlike most prior GALS architectures, this one will have significant performance and power requirements in a complex pipelined topology. In addition, computer-aided design (i.e., CAD) tools will be developed to support the design of this new mesh, as well as simulation, timing verification and performance analysis tools to be applied to the entire parallel architecture. This
work will be performed in collaboration with a separate NSF CPA proposal under Prof. Ken Stevens (University of Utah). The two proposals will be linked together into a larger framework: the Utah group will coordinate to provide and refine their commercial-based physical design tool development and support, while the Columbia/Maryland group will provide a new substantial test case for their asynchronous tool applications.
The work is expected to have broad impact. First, while it is targeted to one parallel architecture, several other architectures will benefit from this work, since the interconnection network can be applied to them as well. Second, the work is expected to demonstrate the benefits and role of
asynchronous design for complex high-performance systems. Finally, the outcome of the work could make a step in the paradigm shift from serial to parallel that the field is now undergoing; the resulting first-of-its-kind partly-asynchronous high-end massively-parallel on-chip computer could push
the level of scalability beyond what is currently possible and have a broad impact in supporting parallel applications in much of computer science and engineering.
The Academic Alliance Seed Fund was established in 2007 to provide members of NCWIT’s Academic Alliance with startup funds (up to $15,000 per project) to develop and implement projects for recruiting and retaining women in computing and information technology. Funding for the Seed Fund is provided by Microsoft Research.
The NCWIT Academic Alliance includes more than 75 computer science and IT departments across the country — including research universities, community colleges, women’s colleges, and minority-serving institutions — dedicated to gender equity and institutional change in higher education computing and information technology.
The honorary doctorate (Dr. rer. nat. h.c.) cited Prof. Wozniakowski's foundational contributions to numerical methods, particularly the deep insights arising from the new discipline of information-based complexity and the work on the "curse of dimensionality," which helps determine which high-dimensional problems are solvable.
The Friedrich-Schiller University in Jena was founded in 1588.
chemical and biochemical processes and advanced functional materials, informatics and telecommunications, land, sea and air transportation, agrobiotechnology and food engineering,
environmentally friendly technologies for solid fuels and alternative energy sources, as well as
biomedical informatics, biomedical engineering, biomolecular medicine and pharmacogenetics." Read more.
This research project aims to harness the recent extraordinary advances in nanoscale silicon photonic technologies for developing optical interconnection networks that address the critical bandwidth and power challenges of future CMP-based systems. The insertion of photonic interconnection networks essentially changes the power scaling rules: once a photonic path is established, the data are transmitted end-to-end without the need for repeating, regeneration or buffering. This means that the energy for generating and receiving the data is only expended once per communication transaction anywhere across the computing system. The PIs will investigate the complete cohesive design of an on-chip optical interconnection network that employs nanoscale CMOS photonic devices and enables seamless off-chip communications to other CMP computing nodes and to external memory. System-wide optical interconnection network architectures will be specifically studied in the context of stream processing models of computation. Read more.
Professor Ross has been selected as one of the two recipients of this year's Columbia Engineering School Alumni Association (CESAA) Distinguished Faculty Teaching Awards. Mr. Lee presented the award to Professor Ross at Class Day ceremonies on Monday, May 19.
"The Columbia Engineering School Alumni Association created this award more than a decade ago to recognize the exceptional commitment of members of the SEAS faculty to undergraduate education," said Mr. Lee. "This year, I am pleased to present these awards to two senior faculty members, a testament to their continuing faithfulness to the central mission of teaching undergraduates."
The awardees were selected by a Committee of the Alumni Association chaired by Eric Schon '68, with representation from the student body, and based on nominations from the students themselves. The Board of Managers of the Columbia Engineering School Alumni Association voted unanimously to approve the selection.
Students enthusiastically wrote that courses taught by these professors were the best they have taken at Columbia. The qualities that both professors share and the ones most frequently mentioned by students are their enthusiasm for the subject matter, caring attitude, approachability, responsiveness to student concerns, and the ability to make complex subject matter understandable. Read more.
Ryan Overbeck's research, advised by Prof. Ravi Ramamoorthi, focuses on real-time ray tracing. Ray tracing is the core of many physically-based algorithms for rendering 3D scenes with global illumination (shadows, reflections, refractions, indirect illumination, and other effects), but has not been fast enough for interactive rendering on commodity computers until recently. He develops algorithms to ray trace 3D scenes with high quality shadows, reflections, and refractions providing a higher degree of realism to interactive content.
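At its core, ray tracing repeatedly intersects rays with scene geometry. A minimal sketch of the most basic such test, ray-sphere intersection, is below (illustrative only, not the optimized algorithms developed in this research):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance t >= 0 along the ray to the nearest hit,
    or None on a miss. `direction` is assumed to be a unit vector."""
    # Solve |origin + t*direction - center|^2 = radius^2, quadratic in t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c           # a = 1 for a unit direction
    if disc < 0:
        return None                  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None     # hits behind the origin don't count

# A ray down the z-axis hits a unit sphere centered at z=5 at t=4.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Interactive ray tracers perform millions of such tests per frame, which is why acceleration structures and tight inner loops dominate the engineering effort.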
Prof. Allen will be investigating semantically searchable dynamic 3D databases, developing
new methods to take an unstructured set of 3D models and organize them into a database that can be intelligently and efficiently queried. The database will be searchable, tagged and dynamic, and will be able to support queries based on whole object and partial object geometries.
In the project titled "Safe Browsing Through Web-based Application Communities", Profs. Keromytis and Stolfo will investigate the use of collaborative software monitoring, anomaly detection, and software self-healing to enable groups of users to browse safely. The project seeks to counter the increasingly virulent class of web-borne malware by exchanging information among users about detected attacks and countermeasures when browsing unknown websites or even specific pages.
In the project "Privacy and Search: Having it Both Ways in Web Services", Prof. Keromytis will investigate techniques for addressing the privacy and confidentiality concerns of businesses and individuals while allowing for the use of hosted, web-based applications such as Google Docs and Gmail. Specifically, the project will combine data confidentiality mechanisms with Private Information Matching and Retrieval protocols, to develop schemes that offer different tradeoffs between stored-data confidentiality/privacy and legitimate business and user needs.
Rocco Servedio was awarded a Google Research Award to develop improved martingale ranking algorithms. Martingale ranking is an extension of martingale boosting, a provably noise-tolerant boosting algorithm from learning theory which was jointly developed by Rocco and Phil Long, a researcher at Google. Rocco will work to design adaptive and noise-tolerant martingale rankers that perform well 'at the top of the list' of items being ranked, which is where accurate rankings are most important.
these attacks actually occurring, and the uncertainties surrounding assumptions about these risks.
contributions to the engineering literature," and to the "pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education."
The National Academy of Engineering (NAE) has elected a total of 65 new members and nine foreign associates spanning all disciplines of engineering and applied sciences.
Members are elected to the NAE by their peers (current NAE members). All members have distinguished themselves in technical positions, as university faculty, and as leaders in government and business organizations. They serve as "advisers to the nation on science, engineering, and medicine," and perform an unparalleled public service by addressing the scientific and technical aspects of some of society’s
most pressing problems. The NAE was established in 1964 as an independent, nonprofit organization and is one of four United States National Academies. Read more.
"Julia Hirschberg, professor in computer science, is active in the area of speech communication at Columbia University, USA. She is among the leading researchers in this field, having performed research in both industry and academia. In her work at AT&T, she contributed to the development of several voice-controlled telephone services. Julia Hirschberg has performed leading research on a variety of topics related to human-to-human and human-to-machine interaction. Specifically, within the area of prosody, she studied how people use other means than speech to communicate focus, turn-taking and emotions in a dialogue. She has also studied how this knowledge can be applied to various speech-based services. Julia Hirschberg has been president of the International Speech Communication Association (ISCA) since 2005. As such she is responsible for
the yearly conference Interspeech that attracts more than 1000 attendees each year."
current needs of the consumers, such that the system is truly autonomic. The project proposes to modularize the ASM into separate components, and then design the various components using both cutting edge novel control theoretic and scheduling analyses. Read more.
According to the citation, "Prof. Yechiam Yemini is that rare individual who embodies excellence in research, innovation and entrepreneurship. He was already a successful entrepreneur before he joined CATT. He then started System Management Arts or SMARTS, a company with over 150 employees that developed network management solutions. This company was acquired by EMC Corporation. He is now working on yet another start up called Arootz. In all his ventures he brings technological innovation and an unerring vision of the market."
Henning Schulzrinne was cited a pioneer in the development of Voice over IP technology that is supplanting circuit-switched voice, which has been the basis of phone service since the days of Alexander Graham Bell. He is a co-inventor of the Session Initiation Protocol (SIP) and the Real-Time Transport Protocol (RTP), which form the basis of VOIP, and additional standards for multimedia transport over the Internet.
In addition, Verizon Communication was honored for a joint project conducted with the lab of Prof. Schulzrinne.
The Center for Advanced Technology in Telecommunications and Distributed Information Systems (CATT) is a research and education group at Polytechnic University, long-recognized as one of the best engineering schools in the country. CATT researchers are leaders in the fields of electrical engineering and computer science. The Center also draws on the expertise of key researchers at Columbia University. Read more.
With the NIH award funding, Pe’er and her team will seek to understand the general underlying principles governing how cells process signals, how molecular networks compute, and how genetic variations alter cellular functioning. Specifically, she wants to understand how changes in DNA codes modify a cell's response to its internal and external cues, which then leads to changes throughout the entire body. These changes, or malfunctions, can cause anything from autoimmune disease to cancer." (Columbia News) Read more.
The first is the exploration and refinement of a novel, highly efficient machine learning technique for data-rich domains, which selects small and fast subsets of multimedia features that are most indicative of a given high-level concept. Speed-ups of three decimal orders of magnitude are possible.
The second is the development of new methods and tools for refining user concepts and domain ontologies for video retrieval, based on statistical analyses of their collocation and temporal behavior. The goals are the determination of video synonyms and hypernyms, the verification of temporal shot patterns such as repetition and alternation, and the exploitation of a newly recognized power-law decay of the recurrence of content.
The third is the demonstration of a customizable user interface, the first of its kind, to navigate a library of videos of unedited and relatively unstructured student presentations, using visual, speech, facial, auditory, textual, and other features. These features are shown to be more accurately and quickly derived using the results of the first investigation, and more compactly and saliently presented using the results of the second.
The first main goal of the project is to obtain new cryptographic results based on the presumed hardness of various problems in computational learning theory. Work along these lines will include constructing and applying cryptographic primitives such as public-key cryptosystems and pseudorandom generators from learning problems that are widely believed to be hard, and exploring the average-case learnability of well-studied concept classes such as decision trees and DNF formulas. The second main goal of the project is to obtain new learning results via cryptography. The PIs will work to develop privacy-preserving learning algorithms; to establish computational hardness of learning various Boolean function classes using tools from cryptography; to obtain computational separations between pairs of well-studied learning models; and to explore the foundational assumptions of what are the minimal hardness assumptions required to prove hardness of learning.
including multiple bounces of light (global illumination), material changes and spatially-varying local lighting. Computer graphics is also increasingly used to prototype or design illumination and material
properties, for industries as diverse as animation, entertainment, automobile design, and architecture. A lighting designer on a movie set wants to pre-visualize the scene lit by the final illumination and with
objects having their final material properties, be they paint, velvet or glass. An architect wants to visualize the reflectance properties of building materials in their natural setting. In many applications, much
greater realism and faithfulness can be obtained if the lighting or material designer could interactively specify these properties. The project will develop the theoretical foundations and next generation
practical algorithms for high quality real-time rendering and lighting/material design.
reliability both during and after maintenance while imposing little management overhead. The contributions stem primarily from a virtualization architecture that decouples application instances from operating system instances, enabling either to be independently updated. The results, disseminated via web download, will improve availability of legacy applications, with no source code access,
modification, recompilation, relinking or application-specific semantic knowledge, and perform efficiently and securely on commodity operating systems and hardware.
SIGMETRICS promotes research in performance analysis techniques as well as the advanced and innovative use of known methods and tools. It sponsors conferences, such as its own annual conference (SIGMETRICS), publishes a newsletter (Performance Evaluation Review), and operates a network bulletin board and web site.
This project will investigate a new communication paradigm, named PacketSpread, which makes feasible the use of capability-like mechanisms on the current Internet, without requiring architectural modifications to networks or hosts. The high-level hypothesis of the research is that practical network capability schemes can be constructed through the use of end-point traffic-redirection mechanisms that use a spread-spectrum-like communication paradigm enabled by an overlay network. To test this hypothesis, the project will prototype and experimentally validate the resistance of such a scheme against attacks launched by realistic adversaries, while minimizing the impact of the approach to end-to-end communication latency and throughput.
The results of this research will enable a better understanding of how network-capability schemes can be deployed and used to provide robust and secure communications under both normal operation and in times of crisis. Improvements in the security and reliability of large-scale systems on which society, business, government, and individuals depend will have a positive impact on society.
W. Bradford Paley.
W. Bradford Paley, an Adjunct Associate Professor in the Department of Computer Science, worked with two collaborators to produce an illustration that seems itself to have become news. Working with Kevin Boyack (of Sandia National Labs) and Dick Klavans (of SciTech Strategies, Inc.), he developed a way of visualizing the relationships among 776 different scientific paradigms--labelling each node with ten unique descriptive phrases--on a small two-foot square print. The image (originally four feet square) was part of an "Illuminated Diagram," a visual display technique Mr. Paley first presented
at IEEE InfoVis 2002. It was part of an exhibit called "Places and Spaces: Mapping Science" installed in the New York Public Library Science Industry and Business Library, then the New York Hall of Science; it is now travelling worldwide.
The journal Nature noticed the image in that exhibit and opened its annual "Brilliant Images" image gallery of 2006 with a very reduced version. It was picked up by both SEED and Discover magazines and has been mentioned in dozens of news sites and blogs, including Slashdot, Reddit, Complexity Digest, Education Futures, and StumbleUpon.
Mr. Paley's site (didi.com/brad) describes his new label layout algorithm, as well as the rest of the project.
algorithms on their inputs, without revealing any additional information. For example, consider a client holding data which he would like classified by a server (e.g., applying a face detection algorithm). However, the client does not want to reveal any information on his data to the server, and the server does not want to reveal any information to the client, beyond the classification result. While general cryptographic techniques for secure multiparty computation may be applied, these often entail a performance overhead that is prohibitive for the real-world applications we address. Prof. Malkin and her team will work to design efficient privacy preserving protocols for common information classifiers including density estimation using Parzen windows, K-NN classification, neural networks, and support vector machines. We will also design privacy preserving protocols for other useful vision and learning problems, such as oblivious matching protocols, allowing two parties to find whether they are holding an
image of the same object or not, without disclosing any additional information on their images.
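The flavor of such protocols can be conveyed with a toy additive-secret-sharing sketch (a generic cryptographic building block, not the lab's actual protocols): each share individually reveals nothing about a value, yet parties can compute on shares locally and reconstruct only the final result.

```python
import random

P = 2**61 - 1   # a public prime modulus (illustrative choice)

def share(x):
    """Split x into two additive shares mod P; each share alone is
    uniformly random and so reveals nothing about x."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s1, s2):
    return (s1 + s2) % P

# Two values are shared between two parties; each party adds its own
# shares locally, and reconstruction yields the sum without either
# party ever seeing the other's input in the clear.
a1, a2 = share(20)
b1, b2 = share(22)
print(reconstruct((a1 + b1) % P, (a2 + b2) % P))  # → 42
```

Real privacy-preserving classifiers compose many such steps (plus multiplication and comparison subprotocols), which is exactly where the performance overhead the project aims to reduce comes from.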
Details about the methodology can be found at
http://chronicle.com/stats/productivity/page.php?primary=4&secondary=34&bycat=Go Read more.
The images that objects produce are heavily influenced by the interplay between natural lighting conditions, complex materials with non-diffuse reflectance, and shadows cast by and on the object.
Modeling these effects, which are omnipresent in natural environments, is critical for image understanding and machine perception. For example, to deploy face recognition systems in airport security or in the outdoors, we must account for uncontrolled illumination, developing lighting-insensitive recognition methods. Recognizing and tracking vehicles requires understanding the bright highlights produced by metallic car bodies. Robotic helpers that provide assistance to the infirm must interpret highlights and shadows from household objects. Unmanned automated vehicles surveying battle scenarios can also benefit from improved image interpretation algorithms, allowing them to understand and build 3D models of their environs.
Therefore, compact mathematical models of illumination and reflectance are essential, to develop robust vision and image interpretation systems for uncontrolled conditions. We will pursue two main avenues. First, we analyze the frequency-domain properties of lighting and reflectance, extending our previous results to specular objects, describing a theory of frequency domain identities, analogous to
classical spatial domain results like reflectance ratios. Second, we analyze a general light transport operator that by definition includes arbitrary reflectance and shadowing. We develop a locally low-dimensional representation, even for high-frequency highlights and intricate shadows. This enables a new level of accuracy in appearance-based lighting-insensitive recognition and other applications.
poise more likely hypotheses, and give artists better control over the process of computer animation. Physical simulations have already achieved remarkable goals, enabling the prediction of systems that are too costly or dangerous to study empirically; however, current simulation technologies are built for precision, not intuition.
The investigators will develop simulation techniques that address the vision of a rapid, interactive design cycle, with a specific focus on the physical simulation of thin shells--flexible surfaces such as air bags, biological membranes, and textiles, with pervasive applications in automotive design, biomedical device optimization, and feature film production. The work will focus on qualitatively accurate, but not precise, simulation. The research will yield novel methods that quickly but coarsely resolve the physics, skipping over irrelevant data to capture only the coarse variables that drive design decisions. The project will train young scientists with a deep understanding of computation, mathematics, and application domain areas; despite being in high demand, this combination of skills remains rare.
A technical goal of this project is to develop a principled, methodical approach to coarsening an existing discrete geometric model of a mechanical system, using adaptive, multiresolution
decompositions. Whereas adaptivity is commonly studied in the context of error estimators for mesh refinement, interactivity suggests a focus on how best to give up precision in a simulation. Therefore,
this research will (i) build on early work in the field of discrete differential geometry to formulate coarse geometric representations of physical systems that preserve key geometric and physical invariants,
(ii) investigate the convergence, resolution- and meshing-dependence of discrete differential operators, and (iii) contribute toward a software platform for interactive design space exploration with
concrete applications in automotive, biomedical, and feature-film engineering.
The PIs propose to apply these techniques to the problem of detecting new web-borne malware (e.g., malicious attachments or active content) through a collaborative method that utilizes (a) the users' actions (to drive the browsers and "explore" new pages, in a manner similar to but more comprehensive and less error-prone than other proposed work that uses automated web-crawlers to scan suspicious web sites), (b) new detectors that are either already running on the users' systems (e.g., a host-based anomaly detector) or are easily deployable over the web, (c) a browser extension that communicates with Google to send information about locally found anomalies and to receive information about the threat-level ("maliciousness") of content downloaded or about-to-be downloaded from the web, and (d) Google itself, as the broker of said information. In addition, Google or a third party can act as the "validator" of alerts, using techniques the PIs have developed for protection of servers, albeit applied to the desktop/browser environment.
Steady advances in such enabling technologies as semiconductor circuits, wireless networking, and microelectromechanical systems (MEMS) are making possible the design of complex distributed (networked) embedded systems that could benefit several application areas such as public
infrastructure, industrial automation, automotive industry, and consumer electronics. However, the heterogeneous and distributed nature of many such systems requires design teams with a composite skill set spanning automatic control, communication networks, and hardware/software
computational systems. Computer-aided design, a traditionally interdisciplinary research area, will be instrumental in making these systems feasible and in enhancing the productivity of the design process.
The grant will allow the PI to develop new modeling techniques, optimization algorithms, communication protocols and interface processes that combined will yield a novel "design automation flow for distributed embedded-control applications" such as automotive "X-by-wire" systems and integrated buildings. The goal is to enable the integrated design and validation of these systems while assisting the typically multidisciplinary engineering teams that are building them. Intermediate contributions include methods for the robust deployment of real-time embedded software on distributed architectures and for the synthesis of a distributed implementation of an embedded control application where performance requirements are met while the usage of communication and computational resources is well-balanced. The education plan is motivated by the belief that the academic curricula for both computer and electrical engineers need to be updated in order to
overcome the artificial and historical boundaries among those disciplines in electrical engineering and computer science that lie at the core of embedded computing. Read more.
for Future Single-Chip Parallel Processors". The goal is to design a high-throughput, flexible and low-power digital fabric for future desktop parallel processors, e.g., those with 64+ processors
per chip. The fabric will be designed using high-speed asynchronous pipelines, handling the communication between synchronous processor cores and distributed memory. The asynchrony of the fabric will facilitate lower power, handling of heterogeneous interfaces, and high access rates (with fine-grained pipelining). This work is in collaboration with the parallel processing and CAD groups at the University of Maryland, including Prof. Uzi Vishkin.
The Department of Computer Science is seeking applicants for two
tenure-track positions at either the junior or senior level, one each in
computer engineering and software systems. Applicants should have a Ph.D. in a relevant field, and have demonstrated excellence in research and
the potential for leadership in the field. Senior applicants should
also have demonstrated excellence in teaching and continued
strong leadership in research.
Our department of 32 tenure-track faculty and 2 lecturers attracts excellent
Ph.D. students, virtually all of whom are fully supported by research
grants. The department has close ties to the nearby research laboratories
of AT&T, IBM (T.J. Watson), Lucent, NEC, Siemens, Telcordia Technologies
and Verizon, as well as to a number of major companies including financial
companies of Wall Street. Columbia University is one of the leading research
universities in the United States, and New York City is one of the
cultural, financial, and communications capitals of the
world. Columbia's tree-lined campus is located in Morningside Heights
on the Upper West Side.
Applicants should submit summaries of research and teaching interests,
CV, email address, and the names and email addresses of at least three
references by filing an online application at
www.cs.columbia.edu/recruit. Review of applications will begin on January 1, 2007.
Columbia University is an Equal Opportunity/Affirmative Action
Employer. We encourage applications from women and minorities.
specific focus on natural incorporation of existing simulation, solver, and domain-specific codes.
Prof. Eitan Grinspun (Columbia) brings expertise in adaptive multiresolution methods for physical simulation, working as part of a team led by NYU. Prof. Vijay Karamcheti (NYU) offers expertise in application-aware mechanisms for parallel computing, and Prof. Denis Zorin (NYU) provides expertise in interactive geometric modeling and simulation. Finally, Prof. Steve Parker (Utah) brings his expertise in the development of the SCIRun and SCIRun2 platforms for scientific computing.
and Model-Based Reranking".
Strong Detection to reveal bounds on the kinds of errors that these classes of routing protocols can detect. Hence, the research will identify complexity classes of routing protocols in terms of their self-monitoring abilities.
cross-cultural scalability, faster than real-time performance, and the exploitation of the temporal evolutionary aspects of video contents. It will build a retrieval workbench with video mining, topic tracking, and cross-linking capabilities, along with other video understanding services.
“sharing” resources across the consumers they support. However, research that explores how to share resources generally derives point solutions, where different resource/consumer configurations require
separately-designed sharing mechanisms. For instance, a scheduler often implements a single policy (e.g., FCFS, PS, FBPS, SRPT) optimized for a particular load setting, and cannot easily
be switched to another policy when the situation changes.
This project seeks to develop and analyze Adaptive Sharing Mechanisms (ASMs) in which the mechanism used to share resources adapts dynamically to both the set of available resources and the current
needs of the consumers, such that the system is truly autonomic. We initiate our study with a modularization of the ASM into separate components, and then study the various components using novel control-theoretic and scheduling analyses. The study concludes with prototyping and testing ASMs within a server-farm environment.
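The idea of an ASM swapping its sharing policy as conditions change can be sketched as follows; the two policies and the load threshold are illustrative assumptions, not the project's actual mechanism:

```python
def fcfs(jobs):
    """First-come, first-served: run jobs in arrival order."""
    return sorted(jobs, key=lambda j: j["arrival"])

def srpt(jobs):
    """Shortest remaining processing time first."""
    return sorted(jobs, key=lambda j: j["remaining"])

def adaptive_schedule(jobs, load):
    """A toy ASM-style dispatcher: under heavy load favor short jobs
    (SRPT) to keep response times low; under light load keep the
    simpler FCFS ordering. The 0.8 threshold is a placeholder."""
    policy = srpt if load > 0.8 else fcfs
    return [j["id"] for j in policy(jobs)]

jobs = [{"id": "a", "arrival": 0, "remaining": 9},
        {"id": "b", "arrival": 1, "remaining": 2},
        {"id": "c", "arrival": 2, "remaining": 5}]
print(adaptive_schedule(jobs, load=0.3))   # arrival order
print(adaptive_schedule(jobs, load=0.95))  # shortest-remaining first
```

A real ASM would of course adapt continuously and reason about the available resources as well as the load, but the modular separation of "policy" from "dispatcher" mirrors the componentization the project proposes.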
The grant extends over three years and is part of the NSF Computer Systems Research (CSR) program. Only approximately 10% of all grant applications were funded.
powerful and unexpected attacks become possible. The talk took place in December 2005.
identification of users who exhibit potential insider threats.
The award initiates research in the IDS lab that has also been proposed to other agencies for joint support with two companies, Symantec and Secure Decisions, Inc.
The project starts in June 2006 and lasts for 6 months.
The grant was awarded in January of 2006.
The Disruptive Technology Office (DTO, formerly ARDA) awarded the grant, while AFRL provides grant administration. The grant duration is 18 months.
explore the limits of what is possible to achieve, for several types of strong and realistic attacks, including chosen ciphertext attack, key tampering attacks, and key exposure attacks.
known secure servers and exposing common weaknesses and pitfalls. In the process, the project will also develop and release a toolkit for probing and testing the security of these servers.
The paper puts text-searching and crawling on a sound foundation. Text is ubiquitous and, not surprisingly, many important applications
rely on textual data for a variety of tasks. As a notable example,
information extraction applications derive structured relations from
unstructured text; as another example, focused crawlers explore the
web to locate pages about specific topics. Execution plans for
text-centric tasks follow two general paradigms for processing a text
database: either they scan, or "crawl," the text database or,
alternatively, they exploit search engine indexes and retrieve the
documents of interest via carefully crafted queries constructed in
task-specific ways. The choice between crawl- and query-based
execution plans can have a substantial impact on both execution time
and output "completeness" (e.g., in terms of recall). Nevertheless,
this choice is typically ad-hoc and based on heuristics or plain
intuition. This paper presents fundamental building blocks to make the
choice of execution plans for text-centric tasks in an informed,
cost-based way. Towards this goal, the paper shows how to analyze
query- and crawl-based plans in terms of both execution time and
output completeness. The paper adapts results from random-graph theory
and statistics to develop a rigorous cost model for the execution
plans. This cost model reflects the fact that the performance of the
plans depends on fundamental task-specific properties of the
underlying text databases. The paper identifies these properties and
presents efficient techniques for estimating the associated parameters
of the cost model. Overall, the paper's approach helps predict the
most appropriate execution plans for a task, resulting in significant
efficiency and output completeness benefits.
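The core trade-off the cost model captures can be illustrated with a toy calculation; the cost formulas and parameter values below are illustrative assumptions, not the paper's actual model:

```python
def crawl_cost(db_size, time_per_doc):
    """Scanning ("crawling") processes every document in the database."""
    return db_size * time_per_doc

def query_cost(num_queries, time_per_query, docs_per_query, time_per_doc):
    """Query-based plans pay per query issued plus per document retrieved."""
    return num_queries * (time_per_query + docs_per_query * time_per_doc)

def choose_plan(db_size, time_per_doc,
                num_queries, time_per_query, docs_per_query):
    """Pick the cheaper execution plan given estimated task parameters."""
    c = crawl_cost(db_size, time_per_doc)
    q = query_cost(num_queries, time_per_query, docs_per_query, time_per_doc)
    return ("crawl", c) if c <= q else ("query", q)

# Small database: scanning everything is cheap, so crawling wins.
print(choose_plan(10_000, 0.1, 500, 1.0, 20))
# Huge database: a bounded number of targeted queries wins.
print(choose_plan(10_000_000, 0.1, 500, 1.0, 20))
```

The paper's contribution is precisely the hard part this sketch glosses over: deriving the cost parameters rigorously (via random-graph theory and statistics) and estimating them efficiently for a given database.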
of carefully-engineered links are expected to replace traditional on-chip communication schemes by providing higher bandwidth with lower power dissipation. Further, on-chip networks offer the opportunity to mitigate the complexity of system-on-chip design by facilitating the assembly of
multiple processing cores through the emergence of standards for communication protocols and network access points. This project will investigate the design of low-power scalable on-chip networks for multi-core systems-on-chip by combining new low-latency, low-energy current-mode signalling techniques with the design of latency-insensitive protocols extended to support fault-tolerant mechanisms.
The project is funded by the NSF Foundations of Computing Processes and Artifacts (CPA) Cluster. In 2005 the NSF CPA cluster received 532 proposals and funded approximately 10% of them.
The NSF CPA cluster supports research and education projects to advance formalisms and methodologies pertaining to the artifacts and processes for building computing and communication systems. Areas of interest include: topics in software engineering such as software design methodologies, tools for software testing, analysis, synthesis, and verification; semantics, design, and implementation of programming languages; software systems and tools for reliable and high performance computing; computer architectures including memory and I/O subsystems,
micro-architectural techniques, and application-specific architectures; system-on-a-chip; performance metrics and evaluation tools; VLSI electronic design and pertinent analysis, synthesis and simulation
algorithms; architecture and design for mixed media or future media (e.g., MEMs and nanotechnology); computer graphics and visualization techniques. Read more.
The SHIM model of computation provides deterministic concurrency with reliable communication, simplifying validation because behavior is reproducible. Based on asynchronous concurrent processes that communicate through rendezvous channels, SHIM can handle control, multi- and variable-rate dataflow, and data-dependent decisions. The components consist of a high-level language based on SHIM, an efficient simulator for SHIM, a software synthesis system that generates C, a formal analysis tool for SHIM, and libraries for the SHIM environment.
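The rendezvous-channel discipline at the heart of SHIM can be sketched in a few lines; this is a simplified illustration in Python, not the actual SHIM runtime:

```python
import threading

class Rendezvous:
    """A SHIM-style rendezvous channel: send() blocks until a matching
    recv() consumes the value, so every communication is also a
    synchronization point and interleavings cannot change the data seen."""

    def __init__(self):
        self._lock = threading.Lock()          # serializes senders
        self._item_ready = threading.Semaphore(0)
        self._item_taken = threading.Semaphore(0)
        self._item = None

    def send(self, value):
        with self._lock:
            self._item = value
            self._item_ready.release()   # wake one waiting receiver
            self._item_taken.acquire()   # block until the value is consumed

    def recv(self):
        self._item_ready.acquire()       # block until a sender arrives
        value = self._item
        self._item_taken.release()       # unblock the sender
        return value
```

Because a sender cannot proceed until its receiver has taken the value, a producer and consumer connected by such a channel always observe the same sequence of values, which is the reproducibility property that makes validation tractable.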
second language (L2) learners rarely learn. Topic shifts, contrastive
focus, and even simple question/statement distinctions, cannot be
recognized or produced in many languages without an understanding of
their prosody. However, 'translating' between the prosody of one
language and that of another is a little-studied phenomenon. This
research addresses the 'prosody translation' problem for Mandarin
Chinese and English L2 learners by identifying correspondences between
prosodic phenomena in each language that convey similar meanings. The
work is based on comparisons of L1 and L2 prosodic phenomena and the
meanings they convey. Computational models of prosodic variation
suitable for representing these phenomena in each language are
constructed from data collected in the laboratory, with results tested
on L1 and L2 subjects. The models are tested in an interactive
tutoring system which takes an adaptive, self-paced approach to
prosody tutoring. This system modifies training and testing examples
automatically by incremental enhancement of distinctive prosodic
features in response to student performance. The success of the
system is evaluated via longitudinal studies of L2 students of both
languages to see whether the new techniques improve students' ability
to recognize and produce L2 prosodic variation. By providing a method
and computational support for prosody tutoring, this work will not
only enable students to attain more native-like fluency but it will
provide a model for training students in other pragmatic language
phenomena, beyond learning the words and the syntax of a new
language.
requests access, it provides its pre-computed egress behavior model to
another node that may grant it access to some service. The receiver
compares the requestor's egress model to its own ingress model to
determine whether the new device conforms to its expected
behavior. Access rights are thus granted or denied based upon the
level of agreement between the two models, and the level of risk the
recipient is willing to manage. The second use of the exchanged models
is to validate active communication after access has been granted.
As a result, MANET nodes will have greater confidence that a new node is not malicious; if an already admitted node starts misbehaving, other MANET nodes will quickly detect and evict it.
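The model-agreement check described above might look roughly like a distance test between two behavior profiles; the feature representation, distance measure, and threshold here are illustrative assumptions, not the project's actual models:

```python
def model_distance(egress, ingress):
    """Compare two behavior models represented as frequency
    distributions over traffic event types, using L1 distance."""
    events = set(egress) | set(ingress)
    return sum(abs(egress.get(e, 0.0) - ingress.get(e, 0.0))
               for e in events)

def grant_access(egress, ingress, risk_tolerance=0.3):
    """Admit the requesting node only if its pre-computed egress model
    is close enough to the receiver's ingress model; the tolerance
    encodes how much risk the recipient is willing to manage."""
    return model_distance(egress, ingress) <= risk_tolerance

ingress = {"route_req": 0.5, "data": 0.4, "hello": 0.1}
print(grant_access({"route_req": 0.45, "data": 0.45, "hello": 0.1}, ingress))  # True
print(grant_access({"flood": 0.9, "data": 0.1}, ingress))                      # False
```

The same comparison can be rerun against live traffic after admission, which is the second use of the exchanged models: validating active communication and evicting nodes whose behavior drifts from what they declared.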
and permutation within statistical learning. These research tools have
applications in national security as a way to identify and match people
from text and multimedia and discover links between them. More
specifically, this proposal addresses the following key application areas:
- Matching authors: permutational clustering methods and permutationally
invariant kernels are used to compute the likelihood the same person wrote
a given publication or text.
- Matching text and multimedia documents: permutational algorithms and
permutationally invariant kernels to perform text, image and word
matchings of descriptions of people to known individuals in a database.
- Matching social networks and graphs: social network matching tools from
permutational algorithms which find a subnetwork in a larger network that
has a desired topology.
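As a rough illustration of permutational invariance (the project's actual kernels are more sophisticated), a kernel over feature sets can be made insensitive to input order by sorting both multisets before an elementwise comparison:

```python
import math

def rbf(x, y, gamma=1.0):
    """Standard Gaussian (RBF) similarity between two scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def perm_invariant_kernel(a, b, gamma=1.0):
    """A kernel over feature multisets that ignores input order:
    sorting both sides canonicalizes away any permutation, so
    k(a, b) is unchanged when either argument is reordered."""
    if len(a) != len(b):
        return 0.0
    return sum(rbf(x, y, gamma)
               for x, y in zip(sorted(a), sorted(b))) / len(a)

# Reordering the features does not change the kernel value.
k1 = perm_invariant_kernel([3.0, 1.0, 2.0], [1.0, 2.0, 3.0])
k2 = perm_invariant_kernel([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
assert k1 == k2 == 1.0
```

This is the basic property that lets such kernels compare unordered collections, such as the words of two documents or the neighborhoods of two network nodes, without committing to an arbitrary ordering.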
As stated by the IBM Ph.D. Fellowship Program, "Award Recipients are selected based on their overall potential for research excellence, the degree to which their technical interests align with those of IBM, and their progress to-date, as evidenced by publications and endorsements from their faculty advisor and department head."
the 19th Large Installation System Administration Conference (LISA
2005) held last week in San Diego, CA for their paper titled:
"Reducing Downtime Due to System Maintenance and Upgrades". Read more.
for Internet multimedia." Read more.
foundations of computer science is awarded every 1.5 years by the ACM
Special Interest Group on Algorithms and Computing Theory (SIGACT) and
the IEEE Technical Committee on the Mathematical Foundations of
Computing. The Prize includes a $5000 award and a $1000 travel stipend
(for travel to the award ceremony) paid by ACM SIGACT and IEEE
TCMFC. The Prize is awarded for major research accomplishments and
contributions to the foundations of computer science over an extended
period of time.
The Prize is named in honor and recognition of the extraordinary
accomplishments of Prof. Donald Knuth, Emeritus at Stanford
University. Prof. Knuth is best known for his ongoing multivolume
series, The Art of Computer Programming, which played a critical role
in establishing and defining Computer Science as a rigorous,
intellectual discipline. Prof. Knuth has also made fundamental
contributions to the subfields of analysis of algorithms, compilers,
string matching, term rewriting systems, literate programming, and
typography. His TeX and MF systems are widely accepted as standards
for electronic typesetting. Prof. Knuth's work is distinguished by its
integration of theoretical analyses and practical real-world
concerns. In his work, theory and practice are not separate components
of Computer Science, but rather he shows them to be inextricably linked
branches of the same whole. Read more.
$425K NIH Exploratory/Developmental Research Grant for Insertable
Imaging and Effector Platforms for Surgery. The grant is to construct
small, mobile, multi-function platforms that can be placed inside a
body cavity to perform robotic minimal access surgery. The robot will
be based upon an existing prototype device developed at the Columbia
Robotics Lab. Read more.
Note that the computer engineering position has a starting date of January 2007.
Applicants should submit summaries of research and teaching interests, CV, email address, and the names and email addresses of at least three references by filing an online application at
www.cs.columbia.edu/recruit. Review of applications will begin on December 1, 2005.
Columbia University is an Equal Opportunity/Affirmative Action Employer. We encourage applications from women and minorities. Read more.
Details about MAGNet can be found at http://magnet.c2b2.columbia.edu/index.html Read more.
"The most pertinent is a project undertaken by Dr. Tal Malkin and her team in the Computer Science Department at Columbia University, in partnership with researchers from IBM, related to the cryptographic security of Internet servers. Cryptography is an essential component of modern electronic commerce. With the explosion of transactions being conducted over the Internet, ensuring the security of data transfer is critically important. Considerable amounts of money are being exchanged over the Internet, whether through shopping sites (e.g. Amazon, Buy.com), auction sites (eBay), online banking (Citibank, Chase), stock trading (Schwab), or even the government (irs.gov).
Dr. Malkin and her team made a systematic study of the cryptographic strength of thousands of "secure" servers on the Internet. Servers are computers that “host” the main functions of the Internet, such as Web sites (Web servers), email (mail servers), and other functions. Communication with these sites is secured by a protocol known as the Secure Sockets Layer (SSL) or its variant, Transport Layer Security (TLS). These protocols provide authentication, privacy, and integrity. A key component of the security of SSL/TLS is the cryptographic strength of the underlying algorithms used by the protocol. Dr. Malkin’s study probed 25,000 secure Web servers to determine if SSL was being properly configured and whether it was employed in the most secure way. Improper configuration can lead to attacks on servers, stolen data, identity theft, break-ins, etc. Dr. Malkin’s project is the most extensive study to date of deployed server security on the Internet.
The team’s findings, relevant to these hearings, included some serious weaknesses in how Web servers, including eCommerce servers employed by financial service companies, are currently being configured.
The most prevalent is that an old, outdated version of SSL, known as SSL 2.0, is still being supported on over 93% of these “secure” servers. SSL 2.0 has many flaws, including a vulnerability to “man in the middle” attacks, which are commonly used for identity theft. While most of these servers also employ a more advanced version of SSL, an incoming connection can choose to use version 2.0 and thus breach the defenses of the server.
Another serious problem is the use of 512-bit “public keys” (1,024 bits are recommended), which can be broken readily, thus compromising all of the data on the server using this key length. Over 5% of the “secure” servers are using this key length.
These security shortcomings are quite serious, and pose risks both to the consumers and the providers in the financial services industry. Financial server security can be increased both by popularizing the correct configurations and, possibly, by greater government oversight in this area."
Geometry".
Physical phenomena such as the crushing of a car or the evolution of a
storm system are governed by effects ranging from very small to very
large scales. Accurately predicting these by resolving the finest
scales in a computer simulation is prohibitively expensive. The
investigators study how fine scale information impacts coarse scale
behavior and vice versa. In effect "summarizing" these relationships
allows the researchers to model coarse scale effects accurately and
efficiently without the need to explicitly resolve the finest scales
in a computation. A key to this study lies in the careful transfer of
structures present in the mathematical models of these phenomena
(which in essence have infinite resolution) to the computational realm
with its finite resolution and finite computational resources. The
methods being developed will allow rapid assessment of overall effects
with the ability "to drill down" computationally where additional
detail is required.
Physical systems are typically described by a set of continuous
equations using tools from geometric mechanics and differential
geometry to analyze and capture their properties. For purposes of
computation one must derive discrete (in space and time)
representations of the underlying equations. Theories which are
discrete from the start (rather than discretized after the fact), with
key geometric properties built in, can more readily yield robust
numerical simulations which are true to the underlying continuous
systems: they exactly preserve invariants of the continuous systems in
the discrete computational realm. So far these methods have not
accounted for effects across scales. Yet both physics and numerics
require such multiresolution strategies. This research project is
developing a multiresolution theory for discrete variational methods
and discrete differential geometry, and applying it to thin-shell
and fluid modeling. Its innovative aspect lies in tools to
conserve symmetries across computational scales.
participants. The main goal of the Association is "to promote Speech Communication Science and Technology, both in the industrial and Academic areas", covering all the aspects of Speech Communication (Acoustics, Phonetics, Phonology, Linguistics, Natural Language Processing, Artificial Intelligence, Cognitive Science, Signal Processing, Pattern Recognition, etc.). Read more.
"For her doctoral dissertation at Columbia University, computer scientist Regina Barzilay led the development of Newsblaster, which does what no computer program could do before: recognize stories from different news services as being about the same basic subject, and then paraphrase elements from all of the stories to create a summary." Read more.
computer science and telecommunications. Projects include cybersecurity research, biometrics, IT to enhance disaster management, and building certifiably dependable systems. For more information, visit www.cstb.org.
Prof. Traub's appointment marks his return to the CSTB, as he was also its founding chair. "In 1986, along with Marjory Blumenthal, Joe's vision and dedication established the model that has made CSTB one of the strongest boards at the Academies. At this particular point in CSTB's history, I could not think of another person better suited to assume the chair and to guide CSTB to new heights," said Bill Wulf, President of the National Academy of Engineering. Read more.
Dora the Explorer will appear from 12 - 1:00, followed by a Harry
Potter Magician from 1:00 - 2:00.
and two smaller academic efforts. The two goals of the project are
to build a large-scale asynchronous demonstration chip (for Boeing) and design an
asynchronous CAD tool for use in future asynchronous designs.
Prof. Nowick and his former PhD student Montek Singh (currently an assistant
professor at UNC) will play a key role in transferring
their high-speed asynchronous pipeline style, MOUSETRAP, to the
Philips commercial asynchronous tool flow, and providing optimizations
for several of the other CAD tools.
of collaborative tools for student groups. In addition, the
introduction of lecture videos into the online curriculum has drawn
attention to the disparity in the network resources used by students.
The paper presents an e-Learning architecture and adaptation model called
AI^2TV (Adaptive Internet Interactive Team Video), which
allows virtual students, some or all of whom may be disadvantaged in network
resources, to collaboratively view a video in synchrony. AI^2TV upholds the invariant that each student views semantically equivalent content at all times. Video player actions, like play, pause, and stop, can be initiated by any student, and the results are seen by all the other students. These features
allow group members to review a lecture video in tandem, facilitating
the learning process. Experimental trials show that AI^2TV can successfully synchronize video for distributed students while, at the same time, optimizing the video quality, given fluctuating bandwidth, by adaptively adjusting the quality level for each student.
The grant was awarded to a team led by SRI and consisting of researchers at Columbia University, University of Massachusetts Amherst, University of California San Diego, University of California Berkeley, University of Washington, Technical University Aachen (Germany), and Systran.
The research to be conducted at the Center for Computational Learning
Systems (CCLS) will center on building natural language processing tools for
Arabic and its dialects, concentrating on leveraging linguistic knowledge
when few resources (annotated corpora or even unannotated corpora) are
available. Mona Diab, Nizar Habash, and Owen Rambow will build on work
accomplished under an existing NSF grant. In addition, Nizar Habash will
continue his work on generation-heavy hybrid machine translation.
collaborative, cross-domain security technologies to detect and prevent
the exploitation of network-based computer systems. The core concept is to
deploy a number of strategically placed sensors across a number of
participating networks that collaborate by sharing information in
real-time to defend the entire network and each of its members. A novel
content-based anomaly detector, PAYL, identifies likely new exploits
targeting vulnerable systems. The Worminator project has developed a new
generation of scalable, collaborative, cross-domain security systems that
exchange alert information including profiled behaviors of attacks and
privacy-preserving anomalous content alerts to detect severe zero-day
security events. The work is a joint collaboration with CounterStorm, a
New York City based company spun out from the DHS and DARPA-sponsored
Columbia IDS lab, headed by Prof. Sal Stolfo. Read more.
Emerging Models and Technologies for Computation (EMT). The EMT cluster
seeks to advance the fundamental capabilities of computer and information
sciences and engineering by capitalizing on advances and insights from
areas such as biological systems, quantum phenomena, nanoscale science and
engineering, and other novel computing concepts. The award will support
Rocco's research on connections between quantum computation and
computational learning theory. Rocco's research in this area will focus
on the fundamental abilities and limitations of quantum learning
algorithms from an information-theoretic perspective, as well as on
developing computationally efficient quantum learning algorithms.
Suhit Gupta, Prof. Gail Kaiser and Prof. Sal Stolfo, all from the Department of Computer Science at Columbia University, won the Best Student Poster Award at WWW 2005 in Japan. Read more.
University on April 15, 2005 to bring together researchers in database and
information retrieval. More than 120 researchers and students from
academic and research institutions across the greater New York area
attended this inaugural workshop, making it a very successful event.
The program consisted of three technical keynote lectures from Alon Halevy
(University of Washington), Craig Nevill-Manning (Google Inc.) and Michael
Stonebraker (MIT), and a poster session for graduate students to present
their latest research. The event was sponsored by IBM research, with additional
funding from Columbia's Graduate Student Advisory Council.
clockless) circuits and systems. The symposium
typically has 100-120 attendees, and over 60 submitted papers.
This year, the symposium will be hosted at Columbia
University in Davis Auditorium, with Prof. Nowick as general
co-chair. Invited speakers include Turing award-winner
Ivan Sutherland with Robert Drost (Sun Microsystems Lab),
Bob Colwell (the former Intel manager of several Pentium
projects), and a tutorial on high-speed clocking with
Prof. Ken Shepard (EE Department) and Phil Restle (IBM
T.J. Watson). Read more.
A proposal from the Columbia Robotics Lab was chosen as one of ten
winners for the CanestaVision 3D sensing design competition. Columbia
Ph.D. student Matei Ciocarlie and Research Scientist Andrew Miller
led the proposal, which focuses on developing an "Eye-in-Hand" range
sensor for robotic grasping.
Each of the winners will receive a $7,500 development kit that
consists of a CanestaVision 3-D sensor chip, a USB interface, and
application program interface (API) software. These hardware and
software development kits will be used to build the
applications and enter them in the "implementation" phase of the
contest, which boasts a $10,000 first prize for the best use of the technology.
Stay tuned for the Phase II winners in June! Read more.
world. People are captivated by the effects of natural lighting and
shading patterns, such as the soft shadows from the leaves of a tree
in skylight, the glints of sunlight in ocean waves, or the shiny
reflections from a velvet cushion. In computer graphics, it is
important to be able to accurately reproduce these appearance effects,
to create realistic images for applications like video games, vehicle
and flight simulators, or architectural design of interior spaces.
However, it is still very difficult to accurately model complex
illumination and reflection effects in interactive applications like
games, in image-based rendering applications like e-commerce, or in
computer vision applications like face recognition. In the past, the
above applications have been addressed separately, by devising
particular algorithms for specific problems. In this project, the
research focuses on the mathematical and computational fundamentals of
visual appearance, seeking to understand the intrinsic computational
structure of illumination, reflection and shadowing, and develop a
unified approach to many problems in graphics and vision.
The main thrust of the research will be to develop appropriate
mathematical representations for appearance, along with computational
algorithms and signal-processing techniques such as Clebsch-Gordan
expansions, wavelet methods with triple product expansions, and radial
basis functions. A major advantage of this approach is that the same
representations, analysis and computation tools can then be applied to
many application domains, such as real-time and image-based rendering,
Monte Carlo sampling and lighting-insensitive recognition. This
research philosophy builds on the investigator's dissertation, where
he developed a signal-processing framework for reflection, leading to
new frequency domain algorithms for both forward and inverse rendering.
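The frequency-domain view of reflection mentioned above can be illustrated, for the special case of Lambertian surfaces, by the well-known nine-coefficient spherical-harmonic irradiance formula: irradiance is a spherical convolution of the lighting with the clamped-cosine kernel, so only the first three spherical-harmonic bands of the environment matter. The sketch below is a minimal illustration under that assumption; the lighting coefficients used in the example are hypothetical values, not data from the research.

```python
# Sketch of the frequency-domain view of Lambertian reflection:
# irradiance as a quadratic form in the surface normal, using the
# nine spherical-harmonic lighting coefficients L_{lm} for l <= 2.

# Convolution constants for the first three SH bands (clamped cosine).
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(normal, L):
    """Irradiance at a surface point with unit `normal`, given the
    9 SH lighting coefficients L = {(l, m): value} for l <= 2."""
    x, y, z = normal
    return (c1 * L[(2, 2)] * (x * x - y * y)
            + c3 * L[(2, 0)] * z * z
            + c4 * L[(0, 0)]
            - c5 * L[(2, 0)]
            + 2 * c1 * (L[(2, -2)] * x * y + L[(2, 1)] * x * z + L[(2, -1)] * y * z)
            + 2 * c2 * (L[(1, 1)] * x + L[(1, -1)] * y + L[(1, 0)] * z))

# Hypothetical purely ambient lighting (only the DC term is nonzero):
# the convolution view predicts identical irradiance for every normal.
ambient = {(l, m): 0.0 for l in range(3) for m in range(-l, l + 1)}
ambient[(0, 0)] = 1.0
print(irradiance((0.0, 0.0, 1.0), ambient))  # equals c4 for any normal
```

Because the convolution attenuates high frequencies so sharply, this tiny representation suffices for diffuse shading, which is one concrete payoff of analyzing reflection in the frequency domain.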
permutation into learning algorithms and statistical data
representations. This includes statistical modeling of images,
text and networks while matching their subcomponents (pixels,
words or nodes). Permutation algorithms are combined with
learning algorithms to more accurately model realistic data.
Experiments focus on face and identity recognition problems.
awareness of four pillars of Trustworthy Computing: security, privacy,
reliability, and business/societal integrity. The project will
develop a new course on Trustworthy Computing, integrate relevant material
into COMS W3157, COMS W4156, and other courses as appropriate, and develop a
student programming competition specifically focused on trustworthy computing.
The overarching aim is to create a multi-year, integrated curriculum on
Trustworthy Computing.
The winners will receive their award at an upcoming CRA conference.
The CRA noted: "This year's nominees were a very impressive group. A number of them
were commended for making significant contributions to more than one
research project, several were authors or coauthors on multiple papers,
others had made presentations at major conferences, and some had
produced software artifacts that were in widespread use. Many of our
nominees had been involved in successful summer research or internship
programs, many had been teaching assistants, tutors, or mentors, and a
number had significant involvement in community volunteer efforts. It is
quite an honor to be selected as one of the top members of this group." Read more.
Ricardo Baratto, Shaya Potter, Gong Su, and Jason Nieh received the
Best Student Paper Award at the 10th International Conference on Mobile
Computing and Networking (MobiCom 2004) held this week in Philadelphia,
PA for their paper titled: "MobiDesk: Mobile Virtual Desktop
Computing". The PC Chairs noted that paper was also the highest rated
paper of the conference as per the original review scores.
MobiCom is the top conference in the field of mobile computing and
networking with a typical acceptance rate of less than 10%. This year
the conference received 326 submissions, of which 26 papers were
accepted. 65% of the accepted papers had a student as first author.
such as news reporting, intelligence information gathering, and
criminal investigation. However, with the advent of the digital age,
the trustworthiness of pictures can no longer be taken for granted.
This project will develop a completely blind and passive system for
detecting digital photograph tampering. We take an innovative
approach integrating techniques from signal-processing and computer
graphics. The signal processing method involves effective use of
higher-order signal statistics to identify tampering artifacts at the
signal level, while the computer graphics approach includes novel
techniques for 3D geometry estimation, illumination field recovery and
relighting, and scene reconstruction to detect inconsistencies at the
scene level like shadows, shading and geometry.
The three-year project was funded at $740,000 as part of the NSF CyberTrust program.
There are many types of sensor networks, covering different
geographical areas, using devices with a variety of energy
constraints, and implementing an assortment of applications. One
driving application is the reporting of conditions within a region
where the environment changes abruptly due to an anomalous event, such
as an earthquake, terrorist attack, flood, or fire. During and
immediately following these events, sensor networks can provide
scientists, rescue workers, and even victims with crucial information
such as exit routes, danger spots, and areas that demand additional
rescue and recovery resources. This will facilitate and expedite
recovery procedures and identify the source of the problem.
This proposal focuses specifically on sensor systems designed to
deliver information efficiently during and immediately
following an event that triggers an abrupt change. The novelty
of this proposal is its focus on sensor networks that must deal with a
sudden impulse of data. The impulse moves the sensor network
almost instantaneously from a lightly loaded state to one with
an overwhelming volume of data to report. This data needs to be
delivered through the sensor network quickly to a relatively small
number of sink points that attach to the regular
communication infrastructure. The flow of data out of the network has
similarities to the flow of people out of a large arena after a
sporting event completes: this large impulse of data that is
suddenly on the move must be funneled out through what is typically a
small number of collection sink points.
The project was funded for $750,000 over three years.
The Secure Remote Computing Services (SRCS) project will develop
critical information technology (IT) infrastructure. SRCS will move
all application logic and data from insecure end-user devices, which
attackers can easily corrupt, steal and destroy, to autonomic server
farms in physically secure, remote data centers that can rapidly adapt
to computing demands especially in times of crisis. Users can then
access their computing state from anywhere, anytime, using simple,
stateless Internet-enabled devices. SRCS builds on the hypothesis
that a combination of lightweight process migration, remote display
technology, overlay-based security and trust-management access control
mechanisms, driven by an autonomic management utility, can result in a
significant improvement in overall system reliability and security.
The results of this proposed effort will enable SRCS implementations
to provide a myriad of benefits, including persistence and continuity
of business logic, reduced cost of localized computing
failures, robust protection against attacks, and transparent user
mobility with global computing access. SRCS in times of crisis
specifically addresses a major concern of national and homeland
security. The substantially lowered total cost of ownership of
applications running on SRCS is anticipated to dramatically reduce the
gap between IT haves and have-nots.
The proposal was funded at $1,200,000 over three years.
You may have noticed some changes in the undergraduate curriculum
for Computer Science majors, as published in the SEAS bulletin.
This year is a transition year, as the CS department is phasing in
the new curriculum, so please bear with us.
How does this affect you now?
Please read this message to find out!
Note that the changes will affect ALL COMPUTER SCIENCE MAJORS, MINORS
and CONCENTRATORS, in all schools, not just SEAS. The bulletins for
Columbia College (CC), General Studies (GS) and Barnard will not
reflect the changes until 2005-06, so please refer to the Computer
Science department web pages for the most up-to-date information.
The new sequence of programming courses is as follows:
- CS-I (COMS W1004): Introduction to Programming
- (for computer science and other science and engineering majors who
have little or no programming experience.) This course introduces
basic computer science concepts underlying modern information
technology along with algorithmic problem-solving techniques using
Java. This course or AP Computer Science becomes a prerequisite for COMS W1007
starting in Spring 2005.
- CS-II (COMS W1007): Introduction to Computer Science
- (for students who have programmed before and/or taken AP Computer
Science in high school). This course is taught in Java and covers
computer science concepts and intermediate programming skills.
- CS-III (COMS W3157): Tools and Techniques for Advanced Programming.
Pre-requisite: COMS W1007. This course covers C, C++, Internet
programming skills, and Unix utilities.
- CS-IV (COMS W3137): Data Structures and Algorithms.
- Pre-requisite: COMS W3157.
Pre- or co-requisite: COMS W3203 (Discrete Math).
Introduction to classic data structures and algorithms.
Taught in C/C++ (starting in Spring 2005).
This semester (Fall 2004) will be the last semester that Data
Structures (3137) is taught in Java. Starting in Spring 2005, it will
be taught in C/C++. For this reason, Advanced Programming (3157) is
now a pre-requisite for Data Structures.
Due to errors in scheduling, there unfortunately has been a conflict
between Discrete Math (3203) and Advanced Programming (3157).
If you are currently enrolled in Discrete Math (W3203) but have not
already taken COMS W3157, it is advised that you take COMS W3157 this term.
To work around the time conflict, we have added a second section of
3157, which meets on Monday and Wednesday mornings. (Note that the
Wednesday is a lab section which will appear on the registrar's web
site on Tuesday next week.)
If this second section of 3157 is a conflict for you as well, then
it is recommended by the department that you drop 3203 for this term
and pick up section 1 of 3157; and take 3203 in the Spring.
Also note that if you took Introduction to Computer Science (1007) last
year, you have the option of taking Data Structures (3137) this term
in Java or taking Advanced Programming (3157) now and then taking
Data Structures in C/C++ in the Spring.
For questions, please contact
Prof. Elizabeth Sklar (sklar@cs.columbia.edu) or
Prof. Alfred Aho (aho@cs.columbia.edu) or
Simon Bird (birds@cs.columbia.edu).
your classmates, for hearing about the latest research and activities
of CS alums, and for catching up on news. The events are open to all friends of the Department, including students and alumni, current and former staff members, current and former faculty and research colleagues. Read more.
field of Computer Vision. This year the conference received
873 submissions, of which 59 papers were accepted as
oral presentations and 200 papers were accepted as posters.
She plans to expand the traditional cryptographic foundations so as to
withstand attacks by stronger, more realistic adversaries. In
particular, she will study security in a complex Internet-like
environment with multiple protocol executions, and will address
security against attackers who can obtain or tamper with the secret
keys.
The IBM Faculty Award is highly competitive: in 2002, IBM granted about 50 such awards across the mathematics and computer science disciplines.
Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made "important contributions to engineering theory and practice, including significant contributions to the literature of engineering theory and practice," and those who have demonstrated accomplishment in "the pioneering of new fields of engineering, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education." Read more.
effective and efficient algorithms for well-defined computational learning
problems. The two main goals are:
* To develop algorithms which can efficiently learn rich classes of
Boolean functions in well-studied models of computational learning.
Anticipated research directions here include learning DNF formulas,
learning various classes of Boolean circuits, and learning in the presence
of irrelevant information.
* To develop and analyze new well-motivated models for computational
learning, and to design efficient learning algorithms for these new
models. Anticipated research directions here include developing
average-case learning algorithms, developing a theory of learning from
nonmalicious random examples, and studying the role of quantum computation
in learning theory.
An important aspect of the proposed research methodology is to explore and
exploit connections between learning problems and complexity-theoretic
structural questions about Boolean functions.
WASHINGTON, D.C. - February 3, 2004 - Internet2(R) today announced that its
Presence and Integrated Communications (PIC) Working Group successfully
completed an experimental communications trial during the advanced
networking Joint Techs Workshop in Hawaii last week. The trial
demonstrated SIP-based (Session Initiation Protocol) voice, video, and
instant messaging over wireless fidelity (WiFi), and SIP voice conferencing
- all in the context of rich presence derived from WiFi location service and
enterprise calendaring.
"The rich presence efforts at Internet2 point the way towards
next-generation communication services, reaching far beyond the limited
presence and phone systems in use today," said Henning Schulzrinne,
professor in the Departments of Computer Science and Electrical Engineering
at Columbia University. "Beyond the old goal of reachable anywhere,
anytime, rich presence gives control back to users, so that communications
becomes planned and desired instead of disruptive and haphazard."
Participants downloaded and installed one of several integrated
communications clients onto their laptops allowing them to initiate voice,
instant messaging, and video calls to other participants - using the
receiver's email address as a single, converged electronic identity.
With the inclusion of rich presence services, participants were able to see
not only which of their buddies were online or offline, but also, for each
buddy, a current location, activity, and expected call quality. As
participants used the meeting's wireless LAN infrastructure and moved from
one meeting room to another, their locations were tracked by WiFi location
technology from HP. "The open-source SIP Express Router (SER) provided a
solid base for this demo," said Jiri Kuthan, member of the Internet2 PIC
Working Group and director of engineering at iptel.org. "We were able to
extend SER to perform as a SIP presence agent serving rich location,
calendar, and expected call quality presence to clients."
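The rich presence served by such a SIP presence agent is typically carried as a PIDF document (RFC 3863). The sketch below shows a minimal, illustrative PIDF payload and how a client might extract the basic status from it; the entity, tuple id, and note text are invented for illustration and are not the actual data from the trial.

```python
# A minimal, hypothetical PIDF presence document (RFC 3863) of the
# kind a SIP presence agent serves to subscribed clients, parsed with
# the standard library's ElementTree.
import xml.etree.ElementTree as ET

PIDF_NS = "urn:ietf:params:xml:ns:pidf"

pidf = f"""<?xml version="1.0" encoding="UTF-8"?>
<presence xmlns="{PIDF_NS}" entity="pres:alice@example.edu">
  <tuple id="laptop-1">
    <status><basic>open</basic></status>
    <note>Joint Techs Workshop, meeting room B</note>
  </tuple>
</presence>"""

root = ET.fromstring(pidf)
# ElementTree uses {namespace}tag syntax for qualified names.
basic = root.find(f"{{{PIDF_NS}}}tuple/{{{PIDF_NS}}}status/{{{PIDF_NS}}}basic")
note = root.find(f"{{{PIDF_NS}}}tuple/{{{PIDF_NS}}}note")
print(basic.text, "-", note.text)  # open - Joint Techs Workshop, meeting room B
```

Location, activity, and call-quality hints like those demonstrated in the trial are conveyed by extending documents of this shape with additional namespaced elements.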
"Location services can add enormous value to integrated communications
applications and can provide life-saving location information to emergency
responders," said Ben Teitelbaum, Internet2 program manager for voice and
integrated communications. "Internet2 is working to ensure that these
technologies are designed and deployed to protect users' privacy and allow
users to control and filter what information about them is published."
Participants were also able to experience placing SIP voice calls to any
user at a SIP.edu-enabled institution (http://voip.internet2.edu/SIP.edu/)
and were able to eavesdrop on meeting sessions by calling special "room
buddies."
"The result of this experiment, as well as the results of future
experiments, is a critical means of helping to determine what presence and
integrated communications means to the end user," said Jamey Hicks, member
of the Internet2 PIC Working Group and principal member of the technical
staff, HP Labs. "Our goal is to develop an improved mode of communication
with a focus on location-based services using 802.11 - for people constantly
on the go and requiring constant contact, such as healthcare providers or
those in the business community."
The individuals who contributed to the success of this experiment are from
the following Internet2 member institutions (in alphabetical order):
+ Columbia University
+ Ford Motor Company
+ HP
+ University of Hawaii
+ University of Pennsylvania
+ Wave Three Software
+ Yale University
# # #
About the Internet2 Presence and Integrated Communications Working Group
The Presence and Integrated Communications (PIC) working group will foster
the deployment of network-based communication technologies through
demonstrations, tutorials, and initiatives in collaboration with both the
private sector and open-source initiatives. This growing area will have an
effect on nearly every individual within higher education and also have the
potential to be a significant driver for network design, security, and
middleware. For more information, visit: http://pic.internet2.edu.
About Columbia University's IRT Laboratory
The Internet Real-Time Lab (IRT) in the Department of Computer Science at
Columbia University conducts research in the areas of:
+ Internet telephony;
+ Streaming Internet media;
+ Internet quality of service;
+ Network measurements and reliability;
+ Service location;
+ Ad-hoc wireless networks;
+ Scalable content distribution; and
+ Ubiquitous and context-aware computing and communication.
About HP Labs Cambridge
HP Labs Cambridge (HPLC) is the primary advanced research facility for HP on
the East Coast. For more information on HP Labs, please visit
http://www.hpl.hp.com.
About iptel.org
Based in Berlin, Germany, iptel.org is a leading innovation organization in
SIP technology. iptel.org is a consultant to vendors and network operators
and is known for having created a unique open-source SIP server with premium
service in flexibility and high performance. iptel.org's server, SIP
Express Router, has been powering public VoIP services of numerous providers
around the world. For more information, visit http://www.iptel.org/.
About Internet2(R)
Led by more than 200 U.S. universities, working with industry and
government, Internet2 develops and deploys advanced network applications and
technologies for research and higher education, accelerating the creation of
tomorrow's Internet. Internet2 recreates the partnerships among academia,
industry, and government that helped foster today's Internet in its infancy.
For more information about Internet2, visit: http://www.internet2.edu/. Read more.
Prof. Angelos D. Keromytis focuses on computer security, cryptography, and networking; Prof. Vishal Misra works on communication networks; and Prof. Elizabeth Sklar's interests lie in human and machine learning.
Prof. Misra has a joint appointment with Electrical Engineering.
2004, which is awarded to one person in the physical sciences and
engineering once every two years. More information is available at sigmaxi.org. Read more.