NSF Grant: Multimodal Brain-Computer Interface for Human-Robot Interaction

Peter Allen, Paul Sajda*, Joe Francis** (PIs)
Department of Computer Science, Columbia University
Department of Biomedical Engineering, Columbia University*
Department of Biomedical Engineering, University of Houston**
Human-Robot Interaction (HRI) is a research area that is key to making robots part of our everyday life. The idea of co-robotics, as articulated by NSF, explicitly acknowledges humans as part of a larger robotic control system. Current interface modalities such as video, keyboard, tactile, audio, and speech can all contribute to an HRI interface. An emerging alternative, however, is the use of Brain-Computer Interfaces (BCIs) for communication and information exchange between humans and robots. BCIs provide another channel of communication with more direct access to physiological changes in the brain, and they vary widely in their capabilities, particularly with respect to spatial resolution, temporal resolution, and noise.

This project is aimed at exploring the use of multimodal BCIs for HRI. Multimodal BCIs, also referred to as hybrid BCIs (hBCIs), have been shown to improve performance over single-modality interfaces. The project focuses on using a novel suite of sensors (electroencephalography (EEG), eye tracking, pupillary size, and functional near-infrared spectroscopy (fNIRS)) to improve current HRI systems. These sensing modalities reinforce and complement one another, and when used together they can address a major shortcoming of current BCIs: determining the user's state, or situational awareness (SA). SA is a necessary component of any complex interaction between agents, since each agent has its own expectations and assumptions about the environment. Traditional BCI systems have difficulty recognizing state and context, and accordingly can become confusing and unreliable.

This project will develop techniques to recognize state from multiple modalities, and will also allow the robot and human to learn about each other's state and expectations using the hBCI we are developing. The goal is to build a usable hBCI for real physical robot environments, with noise, real-time constraints, and added complexity. Human subject operators will test the hBCI and quantify its utility in complex task environments.
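
As a purely illustrative aside (not part of the grant text), combining EEG, eye-tracking, pupillometry, and fNIRS evidence into a single estimate of user state is often realized as decision-level ("late") fusion: each modality produces a per-window score, and a simple learned combination turns those scores into one probability. The sketch below uses hypothetical scores and weights and does not reflect the project's actual model.

import math

# Hypothetical late-fusion sketch. Each modality (EEG, eye tracking, pupil
# diameter, fNIRS) is assumed to emit a per-window score in [0, 1] reflecting
# evidence about the operator's state (e.g., whether a task-relevant event was
# noticed). A weighted logistic combination maps those scores to one probability.
# All names, weights, and scores are illustrative placeholders.

MODALITIES = ("eeg", "eye", "pupil", "fnirs")

def fuse_scores(scores, weights, bias=0.0):
    """Combine per-modality scores into one probability with a logistic model."""
    z = bias + sum(weights[m] * scores[m] for m in MODALITIES)
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    # One example analysis window: strong EEG and pupil evidence, weaker gaze/fNIRS.
    window_scores = {"eeg": 0.8, "eye": 0.4, "pupil": 0.7, "fnirs": 0.3}
    # Placeholder weights; in practice these would be learned from labeled sessions.
    weights = {"eeg": 2.0, "eye": 1.0, "pupil": 1.5, "fnirs": 0.5}
    p = fuse_scores(window_scores, weights, bias=-2.0)
    print(f"Estimated probability the operator attended to the event: {p:.2f}")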

Links:

Task Level Hierarchical System for BCI-enabled Shared Autonomy
Workspace Aware Online Grasp Planning