AI Learns to Predict Human Behavior from Videos

Assistant Professor Carl Vondrick, Dídac Surís, and Ruoshi Liu developed a computer vision algorithm that predicts human interactions and body language in video, a capability with potential applications in assistive technology, autonomous vehicles, and collaborative robots.

Robot Displays a Glimmer of Empathy to a Partner Robot

A Columbia Engineering robot has learned to predict its partner robot's future actions and goals from just a few initial video frames. The study is part of a broader effort to endow robots with the ability to understand and anticipate the goals of other robots purely from visual observations.