Most people take for granted that when they speak, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in automatic speech recognition (ASR; a.k.a. speech-to-text) technologies, these interfaces can be inaccessible for those with speech impairments. Further, applications that rely on speech recognition as input for text-to-speech synthesis (TTS) can exhibit word substitution, deletion, and insertion errors. Critically, in today’s technological environment, limited access to speech interfaces, such as digital assistants that depend on directly understanding one’s speech, means being excluded from state-of-the-art tools and experiences, widening the gap between what those with and without speech impairments can access.