Who doesn’t appreciate closed captioning on televisions in loud sports bars or busy airport terminals? It’s an easy way to follow the action on the field or the anchor on the morning news, without trying too hard to listen. It’s so convenient.
What if you took away the clatter and the noise forever? Would life be less irritating? Now, what if you took away the conversation itself, or the sound of a car horn, the cry of a baby or even the sound of a door closing? What is it like to be deaf or hard of hearing, and to have to depend on technologies such as closed captioning?
That’s the reality for Illeen Moore and Matt Kochie. Nick DiChiara, a software engineering senior involved in undergraduate research, met both of them at the Alabama Institute for the Deaf and Blind (AIDB) in Talladega. Kochie works there as an assistive technology instructor for the deaf, while Moore is completing an internship in vocational rehabilitation in Montgomery. DiChiara spoke to them through Carolyn Jones, an academic instructor at AIDB’s E.H. Gentry campus, who interpreted the conversation in American Sign Language (ASL).
DiChiara was meeting with them for his undergraduate research under Daniella Marghitu, a faculty member in the Department of Computer Science and Software Engineering whose own interests include helping those with disabilities. He is a ‘Glass Explorer,’ one of an estimated 8,000 software developers nationwide who were granted early access to Google Glass to explore the development of applications on the new platform. He is the only student at Auburn whose proposal to Google was accepted.
Google Glass isn’t so much a pair of eyeglasses as it is a wearable computer. Essentially a smartphone paired to eyeglass frames, it is activated by gestures or swipes across a touchpad on the temple piece, and by voice. “Okay, Glass,” DiChiara would say, “navigate to the nearest drugstore.” A Google map materializes. Or, “Okay, Glass, take a picture.” A frame floats in space, right in front of your eye, on a tiny screen that is almost unnoticeable. The picture – or video, if you want – is captured.
The irony of assisting the deaf with a device whose primary commands are spoken is not lost on DiChiara.
“What we are trying to do is to apply this technology in ways that can improve the day-to-day life of someone with a disability, in this instance, with those who are deaf or hard of hearing,” notes DiChiara. “This is where the research comes in, because Glass is really designed as a mainstream consumer product. We are looking at pushing the boundaries to develop applications that can provide new solutions to existing challenges. Another venue – one that is down the road for me – would be to develop apps for other disabilities, for example, visual impairment, or physical disabilities such as paralysis.”
At the institute, he is trying to assess the challenges of the deaf through dialogue with them. In this case, it is done through Jones, who interprets Kochie’s ASL into the spoken word, and DiChiara’s responses back into ASL. The give and take is not unlike translating a foreign language, because ASL is, in fact, another language; some say, almost another culture.
It also points to one of the primary concerns of the deaf: assistance is often highly centralized, and because it involves other people, often professionals acting as go-betweens, it can also be expensive. This, DiChiara explains, is why he is intrigued by the possibilities inherent in Glass.
“One of the solutions we are developing involves an app that can ‘listen in’ on a conversation and provide on-screen closed captioning in real time,” he explains. “The end goal here is to develop a tool that can help close the gap in communication – for example, in situations such as closing a sale at a store counter, visiting a doctor’s office or going to a church service. These are all places where hearing people simply take these interactions for granted . . . but they offer real challenges to the deaf and hard of hearing.”
DiChiara points out that some of that technology is already here and simply needs to be adapted to assist those with disabilities.
“My phone can discriminate between unique voices, which makes possible a configuration that restricts commands to only those that I give,” DiChiara notes. “This discernment could allow Glass to attribute the voices in a conversation to different speakers.”
In addition to these kinds of captioning functions, DiChiara points out that it is not out of the question to develop apps that can listen to spoken language and display computer-generated hands performing ASL. Another app he envisions would allow parents to use Glass as a baby monitor that alerts them through visual cues, rather than the one they most likely react to now – the baby crying.
“The visit to the Alabama Institute for the Deaf and Blind has opened my eyes and given me a good sense of direction,” DiChiara points out. “The research that I conducted before making the visit was instructive as well, but it really hits home when you’re introduced to the people who would use it. That is really the bottom line in what I am trying to do . . . a desire to understand the kinds of frustrations that I am unaware of as a hearing person.”
DiChiara believes that this is an essential part of the Auburn Engineering experience. He points to the Education and Assistive Technology Laboratory that Marghitu heads as an important part of the learning process. It houses the Access STEM program, which provides students with research opportunities to assist people with disabilities.
“Other developers in this industry are racing to create the next Angry Birds or other computer game,” he points out. “At the same time, there are mothers in this world who can’t hear the sound of their newborn babies in the room next door. Trying to tackle these kinds of problems really drives me in my work and gives me a sense of purpose.”
For more information on the project, visit www.nickdichiara.com.