
Inside Google’s Effort To Use AI To Make ASL Accessible To All


Earlier this year, Bay Area-based Google put together a competition intended to use artificial intelligence to decode sign language in real time. According to Google, the goal of the competition is to “classify isolated American Sign Language signs” using a TensorFlow Lite model.
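To give a concrete (and much simplified) sense of what classifying signs with a TensorFlow Lite model looks like in practice, here is a minimal sketch using the TensorFlow Lite Python interpreter. The file name model.tflite, the fixed input shape, the landmark-style features, and the per-sign score output are all assumptions for illustration, not Google’s or the competition’s actual pipeline.

```python
# A minimal sketch (not Google's actual pipeline): running a hypothetical
# TensorFlow Lite sign classifier, "model.tflite", on landmark-style input.
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a batch of hand/pose landmark features shaped to the model's expected
# input (assumes a fixed input shape; the real feature layout is an assumption).
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

# Assume the output is one score per ASL sign class; pick the most likely sign.
scores = interpreter.get_tensor(output_details[0]["index"])
predicted_sign = int(np.argmax(scores))
print("predicted sign class:", predicted_sign)
```

In a real system, the dummy input would be replaced by landmarks extracted from camera frames, and the predicted class index would be mapped back to a sign label.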

There’s a video on YouTube about the “first-ever” ASL competition. Google announced the competition’s next phase earlier this month at its annual I/O developer conference, held at the company’s Mountain View headquarters.

In an exclusive interview ahead of I/O, Sam Sepah, Google’s lead accessibility research product manager, told me via email the company’s inspiration for the ASL project comes from a sobering place. He cited a statistic that 33 babies are born in the United States every day with permanent hearing loss; 90% of these children are born to hearing parents, many of whom don’t know sign language nor have the resources to learn it. “Without sign language, deaf babies are at risk for having language deprivation syndrome,” Sepah said of lacking linguistic stimuli. “This syndrome is characterized by a lack of access to naturally occurring language acquisition during their critical language-learning years. It can cause serious impacts on different aspects of their lives, such as their abilities to have healthy relationships, be able to understand and access a full education, and their level of employability.”

Beyond the technical goal of leveraging AI to build an assistive technology, Sepah said the practical goal is to help minimize the communication barrier between hearing parents and their deaf babies by teaching parents words in ASL while giving them real-time feedback. Moreover, in an effort to bring these technologies to more people, Google has created a Kaggle fingerspelling competition that, Sepah explained to me, focuses on “translating finger-spelled words to English text.”

Distilled, the work is about the foundational part of typing: the alphabet.

When asked why the focus on fingerspelling, Sepah told me it has to do with what people use smartphones for most commonly. He explained many in the Deaf community are able to fingerspell significantly faster—“more than 200% faster,” according to Sepah—than they could type on a virtual keyboard. (As a CODA whose first language is ASL, I can confirm this. I’m much faster at fingerspelling than I am typing on my iPhone.)

“Fingerspelling is beneficial in specific use cases, such as quickly texting someone or queries that do not require many signs, such as when ordering a coffee or spelling the name of a person,” Sepah said. “Fingerspelling communicates words using hand shapes that represent individual letters. We are focusing on fingerspelling because, while fingerspelling is only a part of a sign language, it is often used for communicating names, addresses, phone numbers, and other information that is commonly entered on a mobile phone.”

At a technical level, sign language recognition AI technology currently “lags far behind” voice-to-text recognition AI, Sepah said. To help remedy this, Google Research and the Deaf Professional Arts Network have been collaborating on an initiative to develop what Sepah described as a “massive fingerspelling dataset” that he said will be open-sourced in an effort to iterate on the technology and move it forward. An overarching goal is to advance the language learning model such that people who cannot use their voice to summon digital assistants and other tech—many in the Deaf and hard-of-hearing communities fall into this category—have access to said products. Many signing users are excluded from using voice-first technologies because the interface paradigms don’t take non-verbal people into account, thus restricting access to many devices. “These AI solutions will allow the Deaf and hard of hearing community to more easily communicate with technologies, including Google products, and the hearing community face-to-face, by signing on their phone as opposed to typing or speaking,” Sepah said.

Sepah mentioned the PopSign app, available on Android and on iOS, as one piece of software that uses an AI sign language model. The app, developed by Google in partnership with the Rochester Institute of Technology and Georgia Institute of Technology, aims to, he said, “improve language acquisitions, enabling hearing parents who have Deaf children to learn sign language vocabularies with real time feedback.”

In terms of feedback, Sepah told me it’s been “overwhelmingly positive” across the entire organization, from engineers to senior executives. He added many Googlers (the colloquial name for Google employees) spanning various teams have “recognized the value” of all the work being done in this realm, saying it represents an important step towards making Google more accessible to Deaf and hard-of-hearing people everywhere. Indeed, Sepah said the software Google is making will go a long way in helping deaf and hard-of-hearing people “communicate more effectively with people around the world, and, most importantly, provide them with access to the world’s information.”

He continued: “We [at Google] share a common goal: to ensure that sign language is provided as a universal language option for Deaf and hard of hearing people when using our products. Deaf users are empowered when they have the choice to use their native language, have access to information, and can learn, grow, and make informed decisions about their lives. They can also connect with others and build relationships.”

Broadly, Sepah explained to me the sign language technology fits squarely with Google’s institutional mantra of collecting and organizing the world’s information in an accessible and useful manner. Furthermore, he said the company’s principles around AI also support this mission, as the ASL project is “directly useful” to members of the Deaf and hard-of-hearing communities. To paraphrase Google’s ethos, the ASL project is empowering for current and future generations using technology for the common good. “With Google’s sign language tools, I believe that deaf people will continue to achieve great things by getting more education and real-world content through AI technologies,” Sepah said of Google’s contributions to this cause. “I believe the Deaf community will continue to grow and thrive, including being able to escape poverty. I’m excited to see what the future holds for deaf people.”

Sepah was keen to emphasize the sociocultural importance of Google’s work, which extends far beyond the technological merits. Sign languages the world over, he said, are “beautiful and expressive languages” that have their own grammatical rules and vocabularies. “They are not simply a visual representation of a spoken language,” Sepah said.

More than being linguistically unique, Sepah said recognizing sign languages is recognizing the human rights of those who use them.

“Sign language is inseparable from deaf people’s human rights,” he said. “Respecting the individual rights of Deaf people to use sign language is essential for ensuring they have equal access to education, employment, and all other aspects of life. When deaf people are able to communicate in their own language, they are able to participate fully in society.”
