“There is a reason that we like to use our cell phones. People prefer to talk.”
The technology, Hemami continued, is about much more than convenience. It allows deaf people “untethered communication in their native language” – exactly the same connectivity available to hearing people, she said.
Since the project, Mobile ASL (American Sign Language), started four years ago, the researchers have published several academic papers on their technology and given talks around the world. The first phone prototypes were created last year and are now in the hands of about 25 deaf people in the Seattle area.
Standard videoconferencing is used widely in academia and industry, for example, in distance-learning courses. But the Mobile ASL team designed their video compression software specifically with ASL users in mind, with the goal of sending clear, understandable video over existing limited bandwidth networks. They also faced such constraints as phones’ battery life and their ability to process real-time video at enough frames per second. They solved the battery life problem by writing software smart enough to vary the frames per second based on whether the user is signing or watching the other person sign.
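The power-saving idea described above can be sketched in a few lines. This is an illustrative example, not the MobileASL code itself; the motion threshold, frame rates, and activity measure are assumptions standing in for whatever detector the real system uses.

```python
# Illustrative sketch of activity-based frame-rate adaptation: lower the
# capture rate when the user is only watching, raise it while they sign.

FRAMES_SIGNING = 10   # assumed target rate while the user is signing
FRAMES_IDLE = 1       # assumed reduced rate while merely watching

def choose_frame_rate(motion_score, threshold=0.2):
    """Pick a frame rate from a simple per-frame motion estimate.

    motion_score: fraction of pixels changed since the last frame,
    a stand-in for the real system's signing-activity detector.
    """
    return FRAMES_SIGNING if motion_score > threshold else FRAMES_IDLE

# A burst of motion switches to full rate; stillness saves battery.
rates = [choose_frame_rate(s) for s in (0.05, 0.4, 0.6, 0.1)]
print(rates)  # [1, 10, 10, 1]
```

Because the camera, encoder, and radio all do less work at one frame per second than at ten, even this crude switch translates directly into longer battery life.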
Because ASL depends on capturing motion clearly, the researchers had to build video compression software that could deliver about 10 frames per second. They also had to work within the standard 2G wireless network, which allows video transmission at only about 15-20 kilobits per second.
This is a relatively small amount of information when compared with a YouTube video, which travels at about 600 kilobits per second. For further comparison, high-definition digital television images come in at 6-10 megabits per second.
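A quick back-of-the-envelope calculation shows how tight that budget is: at the 2G rates and frame rate quoted above, each frame gets only a couple of kilobits, compared with roughly forty times that for a web-video stream.

```python
# Per-frame bit budget implied by the figures in the article:
# 15-20 kbps on 2G at ~10 frames per second, versus ~600 kbps
# for a typical web video stream.

def bits_per_frame(kbps, fps):
    """Bits available to encode a single video frame."""
    return kbps * 1000 / fps

print(bits_per_frame(15, 10))   # 1500.0 bits per frame on a 2G link
print(bits_per_frame(600, 10))  # 60000.0 bits per frame for web video
```

Every frame of the ASL stream must therefore make do with about 1/40th of the data a typical online video spends, which is why the compression has to be tailored so carefully to what viewers actually look at.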
Researching how ASL developed gave the team clues on how people use it, said Frank Ciaramello, a graduate student working on the project. They learned that deaf people often use only one hand to sign, depending on the situation, and that they’re very good at adapting as needed.
And they found that when two people are talking to each other, they spend almost the entire time focused on the other person’s face.
“The facial expressions are really important in ASL, because they add a lot of information,” Ciaramello said. They concluded that their cell phone video would have to be clearest in the face and hands, while they could spare some detail in the torso and in the background. Studies with deaf people who rated different videos on an intelligibility scale helped the researchers home in on the best areas to focus in their video.
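The region-of-interest idea can be illustrated with a small sketch: spend more of the bit budget on the face and hands by quantizing those regions more finely than the torso and background. The region names, weights, and quantizer math here are hypothetical illustrations, not MobileASL's actual values.

```python
# Hypothetical region-of-interest bit allocation: a smaller quantizer
# step means finer detail, so more bits are spent on that region.
# Weights are illustrative only.

REGION_WEIGHTS = {
    "face": 1.0,       # finest detail: facial expressions carry grammar
    "hands": 0.9,      # nearly as fine: handshapes must stay legible
    "torso": 0.4,      # coarser encoding is acceptable
    "background": 0.1  # almost all detail can be discarded
}

def quantizer_step(region, base_step=32):
    """Map a region's importance weight to a quantizer step size."""
    weight = REGION_WEIGHTS.get(region, 0.1)
    return max(1, round(base_step * (1.0 - weight)))

for region in REGION_WEIGHTS:
    print(region, quantizer_step(region))
# face 1, hands 3, torso 19, background 29
```

The intelligibility ratings mentioned above are what would tune weights like these in practice, trading background fidelity for clarity where signers actually look.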
The researchers are now perfecting their intelligibility metrics while also looking for ways to bring down the cost of integrating the software into the phones. Making the phones as user friendly as possible is a key goal of the project, Hemami said.
“We don’t want people to use the technology and say, ‘This is annoying,’” Hemami said. “We want it to be really technology transparent. We want them to call their mother and have a nice conversation.”
This research is funded by the National Science Foundation.