Sunday 3 January 2016

Wearable Sensors May Be Able To Translate Sign Language Into English


Wearable sensors may one day interpret the gestures in sign language and translate them into English, providing a high-tech solution to communication problems between deaf people and those who don’t know sign language.

Engineers at Texas A&M University are developing a wearable device that senses movement and muscle activity in a person's arms.
The device works by mapping out the gestures a person makes, using two different sensors: one responsive to the motion of the wrist and the other to the muscular movements in the arm. A program wirelessly receives the information and converts the data into an English translation.
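To get a rough sense of how such a pipeline might fit together, here is a minimal Python sketch. It is not the team's actual software; the feature choices, template values and word list are purely illustrative. It combines a window of wrist-motion readings with a window of muscle-activity readings and matches the result against stored gesture templates.

# Minimal illustrative sketch: fuse motion and muscle-activity readings into a
# feature vector and match it to the nearest stored gesture template.
# All names, numbers and words below are assumptions for illustration only.
import numpy as np

def extract_features(motion_window, emg_window):
    """Summarize raw sensor windows into one feature vector."""
    return np.concatenate([
        motion_window.mean(axis=0),   # average wrist motion per axis
        motion_window.std(axis=0),    # how much the wrist moved around
        [np.abs(emg_window).mean()],  # overall muscle activation
        [np.abs(emg_window).max()],   # peak muscle activation
    ])

# Hypothetical per-word templates, as they might look after training.
templates = {
    "hello":  np.array([0.1, 0.8, 0.0, 0.3, 0.2, 0.1, 0.25, 0.6]),
    "thanks": np.array([0.5, 0.1, 0.2, 0.1, 0.4, 0.2, 0.40, 0.9]),
}

def classify(motion_window, emg_window):
    """Return the word whose template is closest to the observed gesture."""
    features = extract_features(motion_window, emg_window)
    return min(templates, key=lambda word: np.linalg.norm(templates[word] - features))

# Example: 50 samples of 3-axis wrist motion plus 50 muscle-activity samples.
rng = np.random.default_rng(0)
print(classify(rng.normal(size=(50, 3)), rng.normal(size=50)))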

In reviewing previous research, the engineers found other devices that tried to translate sign language into text, but those designs were not as intricate.
"Most of the technology ... was based on vision- or camera-based solutions," said study lead researcher Roozbeh Jafari, an associate professor of biomedical engineering at Texas A&M.
These existing designs, Jafari said, are not enough, because when someone is communicating in sign language, they often combine hand gestures with specific finger movements.

"I thought maybe we should look into bringing together motion sensors and muscle activation," Jafari told Live Science. "And the idea here was to build a wearable device."
The researchers built a prototype system that recognizes the words people use most commonly in their daily conversations. Jafari said that once the team starts expanding the program, the engineers will include less frequently used words in order to build up a more substantial vocabulary.


One setback of the prototype is that the system has to be "trained" to respond to each individual who wears the device, Jafari said. This training process involves asking the user to repeat each hand gesture a couple of times, which can take up to 30 minutes to complete.
"If I'm wearing it and you're wearing it — our bodies are different … our muscle structures are different," Jafari said.

But Jafari thinks the issue is largely the result of time constraints the team faced in building the prototype. It took two graduate students just two weeks to build the device, so Jafari said he is confident that the device will become more advanced during the next steps of development.

The researchers intend to reduce the training time of the device, or even eliminate it altogether, so that the wearable device responds automatically to the user. Jafari also wants to improve the effectiveness of the system's sensors so that the device will be more useful in real-life conversations. Currently, when a person gestures in sign language, the device can only read words one at a time.
This, however, is not how people speak. "When we're speaking, we put all the words in a single sentence," Jafari said. "The transition from one word to another word is seamless and it's actually immediate."

"We need to build signal-processing techniques that would help us to identify and understand a complete sentence," he added.
Jafari's ultimate vision is to use new technology, such as the wearable sensor, to develop innovative user interfaces between humans and computers.
For instance, people are already comfortable with using keyboards to issue commands to electronic devices, but Jafari thinks typing on devices like smartwatches is not practical because they tend to have small screens.

"We need to have a new user interface (UI) and a UI modality that helps us to communicate with these devices," he said. "Devices like [the wearable sensor] might help us to get there. It might essentially be the right step in the right direction."
Jafari presented this research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference in June 2015.
