Roozbeh Jafari is an Associate Professor of Biomedical Engineering at Texas A&M University, and he may be on the verge of a breakthrough that could help give a voice to people who are deaf or hard of hearing.
Jafari and a team of engineers are developing and prototyping a wearable wrist sensor that can translate sign language into text. It isn't the first device of its kind; several past efforts have shared the same goal, but none has proven precise enough for reliable everyday use.
What Makes It Different?
Many previous devices relied on cameras or other visual technology, which forced users to face a certain direction or wrangle bulky equipment. Jafari's prototype is far more practical for everyday use.
Here’s how it works:
Smartphones have motion sensors that track how the phone is being held, so the screen can rotate into the best orientation for reading. Jafari's wrist sensor works on the same principle, but it pairs that motion sensing with measurements of the electrical activity in the user's muscles (electromyography). By analyzing the two signals together, the system can read each intricate movement of the user's arm, wrist and hand.
“We decode the muscle activities we are capturing from the wrist. Some of it is coming from the fingers indirectly because if I happen to keep my fist like this versus this, the muscle activation is going to be a little different,” Jafari said.
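To make that concrete, here is a minimal sketch, not the team's actual code, of how windows of motion and muscle-activity readings might be turned into features and fed to an off-the-shelf classifier. The sensor shapes, the synthetic data, and the four-word vocabulary are all placeholders.

```python
# Minimal sketch (not the team's implementation) of fusing motion and
# muscle-activity readings into one feature vector per signed word.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
VOCAB = ["hello", "thank_you", "please", "help"]  # stand-in for the real 40-word set

def extract_features(motion, emg):
    """Summarize one ~1 s window: motion is (n, 6) accel+gyro, emg is (n, 4) muscle channels."""
    feats = []
    for sig in (motion, emg):
        feats.extend(sig.mean(axis=0))                           # average level per channel
        feats.extend(sig.std(axis=0))                            # variability per channel
        feats.extend(np.abs(np.diff(sig, axis=0)).mean(axis=0))  # rough movement "energy"
    return np.array(feats)

def fake_window(word_idx):
    """Synthetic stand-in for one recorded signing window."""
    motion = rng.normal(loc=word_idx, scale=1.0, size=(100, 6))
    emg = rng.normal(loc=word_idx * 0.5, scale=1.0, size=(100, 4))
    return motion, emg

# Build a labeled training set: each window is tagged with the word that was signed.
X, y = [], []
for idx, word in enumerate(VOCAB):
    for _ in range(30):
        X.append(extract_features(*fake_window(idx)))
        y.append(word)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Recognize a "new" window captured from the wrist sensor.
print(clf.predict([extract_features(*fake_window(2))])[0])  # expected: "please"
```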
The wrist sensor then uses Bluetooth to transmit its output, the letters and words the user has signed, to a nearby computer or mobile device. Eventually, Jafari and his team want to shrink the sensor so it can be worn like a watch or a bracelet. Along with the size reduction, they also hope to add a voice speaker so the device can turn recognized sentences into real, audible words.
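As a rough illustration only, and assuming the prototype exposed a standard Bluetooth serial (SPP) link that sends each recognized word as a line of text, the receiving side could look something like this; the port name and protocol are placeholders, not details published by the team.

```python
# Hypothetical receiver: read recognized words from a Bluetooth serial link.
# "/dev/rfcomm0" is a placeholder port; the real device's protocol is not public.
import serial  # pip install pyserial

with serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=1) as link:
    while True:
        word = link.readline().decode("utf-8", errors="ignore").strip()
        if word:
            print("Signed:", word)  # later, this is where speech synthesis could hook in
```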
Video: "This Gadget Translates Sign Language On The Fly" (posted by Vocativ, December 1, 2015): http://voc.tv/1P6L9zh
Constraints & Challenges
Jafari has made tremendous progress, but several obstacles remain, chief among them the differences in physiology from person to person. No two people sign exactly alike, so the device has to be trained to recognize each wearer's muscular structure and signing tendencies.
Ideally, after refining the prototype, Jafari would like it to adapt to each user automatically. He is also focused on upgrading the sensor's signal-processing techniques so the device can keep pace with real conversation. In its present state, the sensor reads only one word at a time, which isn't practical for users who need real-time communication.
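One way to picture that per-user training step: prompt a new wearer to sign each word a few times, then fit a classifier on just their recordings. The sketch below is purely illustrative and reuses the placeholder extract_features() and fake_window() helpers from the earlier example; nothing here comes from the team's codebase.

```python
# Hypothetical per-user calibration: collect a short labeled session from the
# wearer, then train on their data alone so the model matches their physiology.
from sklearn.ensemble import RandomForestClassifier

def calibrate_for_user(vocab, record_window, repetitions=5):
    """record_window(word) should return one live (motion, emg) capture."""
    X, y = [], []
    for word in vocab:
        for _ in range(repetitions):
            motion, emg = record_window(word)        # wearer signs the prompted word
            X.append(extract_features(motion, emg))  # same features as before
            y.append(word)
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Example run with the synthetic "recorder" from the earlier sketch.
user_clf = calibrate_for_user(VOCAB, lambda w: fake_window(VOCAB.index(w)))
```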
Initially, Jafari and his team programmed the sensor to recognize the words people use most often in everyday conversation. Right now, the device recognizes 40 words of American Sign Language with 96% accuracy. One of their top priorities is expanding that vocabulary to include less frequently used words.
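For context on what a figure like that typically means in practice, recognition accuracy is usually measured by holding out some recorded signing windows and scoring the trained classifier on them, roughly as in this sketch, which again reuses the synthetic X and y from the first example.

```python
# Rough sketch of measuring word-recognition accuracy on held-out recordings.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.0%}")
```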
The Outlook
Jafari still has a lot of work to do before he can realize his ultimate vision, but he's optimistic. The prototype was built in just two weeks by a pair of graduate students, and many of its current constraints stem from that quick development cycle.
With the time and backing to develop the idea further, Jafari is confident these obstacles can be overcome. Until then, the wearable wrist sensor stands as a remarkable piece of forward thinking and ambition, and it could be the breakthrough that finally bridges the communication gap between deaf signers and people who don't know ASL.
“It might essentially be,” Jafari said, “the right step in the right direction.”