Wu gave a lightning talk, “Life of a Machine Learning Dataset”, at the conference last July. She was selected out of 60 applicants to deliver a 15-minute talk to 464 attendees from over 18 North American office locations.
My internship at Google came about as a result of independent research I did during my five-month study abroad program. Two days after I returned to the United States on December 14, 2017, I delivered a 40-minute talk on bias in AI and hiring algorithms at Google headquarters to 453 attendees.
In the talk, I presented neural network experiments demonstrating bias in predictive image and video machine learning systems. A Google employee came up to me afterwards and asked me to send him my resume. Taking a chance, I submitted it on the last day summer internship applications were open. Two weeks later, I had an offer.
At Google, I had three major responsibilities. First, I identified key issues across the machine learning data curation process, leveraging consulting skills to scope, prioritize, and design technical solutions to major pain points. Second, I collaborated with Google’s machine learning engineers and Google’s 700+ member Operations team in Gurgaon, India to eliminate training inefficiencies by building a training platform. Third, I created presentations with compelling messages tailored to different audiences, including directors.
Katie Girskis, the organizer of the Google Women Engineers conference, wanted to highlight intern projects from across Google and selected me as one of the lightning talk speakers. On the first day of the conference, I presented “Life of a Machine Learning Dataset” at Google’s annual women engineers summit, focusing on how we are now putting a price on units of human judgment and expertise more directly than ever. For the talk, I collaborated with Google’s Business and Strategy Operations team to calculate data curation costs, producing new metrics for evaluating ML training costs.
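To give a flavor of what "putting a price on units of human judgment" can look like, here is a minimal sketch of a cost-per-labeled-example calculation. All function names and numbers below are invented for illustration; they are not the actual metrics or figures from the talk.

```python
# Hypothetical sketch of a data-curation cost metric.
# Every name and number here is illustrative, not Google's actual data.

def cost_per_labeled_example(hourly_rate, seconds_per_judgment, judgments_per_example):
    """Cost (in the currency of hourly_rate) to fully label one example."""
    cost_per_judgment = hourly_rate * seconds_per_judgment / 3600
    return cost_per_judgment * judgments_per_example

def dataset_curation_cost(num_examples, hourly_rate, seconds_per_judgment,
                          judgments_per_example):
    """Total human-labeling cost for a dataset of num_examples items."""
    return num_examples * cost_per_labeled_example(
        hourly_rate, seconds_per_judgment, judgments_per_example
    )

# Example: 100,000 images, raters paid $15/hour, 30 seconds per judgment,
# and 3 independent judgments per image for quality control.
total = dataset_curation_cost(100_000, 15.0, 30, 3)
print(f"${total:,.2f}")  # → $37,500.00
```

Even a back-of-the-envelope metric like this makes trade-offs visible, e.g. how much a third redundant judgment per example adds to total training-data cost.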
By the end of the internship, my coworkers had awarded me one kudos and two peer bonuses, and I received a return offer. At Google, I learned to write code using the internal technology stack, collaborate with departments at a larger scale, and communicate more effectively. Beyond working with highly skilled people, I enjoyed making friends with the other interns in Mountain View and having lunch with Jeff Dean. Most importantly, I loved that my code went into production and that my work is being continued even after my summer internship ended.