Techniques from natural-language processing enable computers to efficiently search video for actions
With the commodification of digital cameras, digital video has become so easy to produce that human beings can have trouble keeping up with it. Among the tools that computer scientists are developing to make the profusion of video more useful are algorithms for activity recognition — or determining what the people on camera are doing when.
At the Conference on Computer Vision and Pattern Recognition in June, Hamed Pirsiavash, a postdoc at MIT, and his former thesis advisor, Deva Ramanan of the University of California at Irvine, will present a new activity-recognition algorithm that has several advantages over its predecessors.
One is that the algorithm's execution time scales linearly with the size of the video file it's searching. That means that if one file is 10 times the size of another, the new algorithm will take 10 times as long to search it — not 1,000 times as long, as some earlier algorithms would.
Another is that the algorithm is able to make good guesses about partially completed actions, so it can handle streaming video. Partway through an action, it will issue a probability that the action is of the type that it's looking for. It may revise that probability as the video continues, but it doesn't have to wait until the action is complete to assess it.
Finally, the amount of memory the algorithm requires is fixed, regardless of how many frames of video it's already reviewed. That means that, unlike many of its predecessors, it can handle video streams of any length (or files of any size).
The grammar of action
Enabling all of these advances is the adaptation of a type of algorithm used in natural-language processing, the computer science discipline that develops techniques for parsing and interpreting sentences written in ordinary language.
"One of the challenging problems they try to solve is, if you have a sentence, you want to basically parse the sentence, saying what is the subject, what is the verb, what is the adverb," Pirsiavash says. "We see an analogy here, which is, if you have a complex action — like making tea or making coffee — that has some subactions, we can basically stitch together these subactions and look at each one as something like verb, adjective, and adverb."
On that analogy, the rules defining relationships between subactions are like rules of grammar. When you make tea, for instance, it doesn't matter whether you first put the teabag in the cup or put the kettle on the stove. But it's essential that you put the kettle on the stove before pouring the water into the cup. Similarly, in a given language, it could be the case that nouns can either precede or follow verbs, but that adjectives must always precede nouns.
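One way to picture such rules, purely for illustration, is as a small set of precedence constraints between named subactions. The sketch below encodes the tea example in that form; the subaction names and the encoding are invented here and are not taken from the paper.

```python
# Toy ordering rules for making tea: pouring the water requires that the
# kettle is already on the stove, while the teabag and the kettle may
# come in either order.
MUST_PRECEDE = {
    # subaction   : subactions that must already have happened
    "pour_water": {"kettle_on_stove"},
}

def obeys_grammar(sequence, rules=MUST_PRECEDE):
    """Return True if every subaction's prerequisites appear earlier in the sequence."""
    seen = set()
    for subaction in sequence:
        if not rules.get(subaction, set()) <= seen:
            return False
        seen.add(subaction)
    return True

# Teabag first or kettle first: both orders are fine ...
assert obeys_grammar(["teabag_in_cup", "kettle_on_stove", "pour_water"])
assert obeys_grammar(["kettle_on_stove", "teabag_in_cup", "pour_water"])
# ... but pouring before the kettle is on the stove is not.
assert not obeys_grammar(["teabag_in_cup", "pour_water", "kettle_on_stove"])
```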
For any given action, Pirsiavash and Ramanan's algorithm must thus learn a new "grammar." And the mechanism that it uses is the one that many natural-language-processing systems rely on: machine learning. Pirsiavash and Ramanan feed their algorithm training examples of videos depicting a particular action, and specify the number of subactions that the algorithm should look for. But they don't give it any information about what those subactions are, or what the transitions between them look like.
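The paper's learning procedure is a structured, discriminative one and is not reproduced here. As a much simplified illustration of one piece of the idea, that a fixed number of contiguous subactions can be recovered from a sequence without any subaction labels, the following toy dynamic program splits a one-dimensional sequence of frame features into k segments so that the frames within each segment are as similar as possible. The feature values and function names are invented for the example.

```python
def segment(seq, k):
    """Split a 1-D sequence of frame features into k contiguous segments,
    minimizing the within-segment sum of squared deviations. No labels
    about where the segments are, or what they mean, are needed."""
    n = len(seq)
    prefix, prefix_sq = [0.0], [0.0]
    for x in seq:
        prefix.append(prefix[-1] + x)
        prefix_sq.append(prefix_sq[-1] + x * x)

    def cost(i, j):
        # Sum of squared deviations of seq[i:j] from its own mean.
        s, sq, m = prefix[j] - prefix[i], prefix_sq[j] - prefix_sq[i], j - i
        return sq - s * s / m

    INF = float("inf")
    # best[j][c]: minimal cost of splitting the first j frames into c segments.
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for c in range(1, min(k, j) + 1):
            for i in range(c - 1, j):
                val = best[i][c - 1] + cost(i, j)
                if val < best[j][c]:
                    best[j][c], back[j][c] = val, i
    # Walk back through the table to recover the segment boundaries.
    bounds, j, c = [], n, k
    while c > 0:
        i = back[j][c]
        bounds.append((i, j))
        j, c = i, c - 1
    return list(reversed(bounds))

# A toy one-dimensional "feature" per frame (say, overall amount of motion):
frames = [0.1, 0.2, 0.1, 2.0, 2.2, 1.9, 5.1, 5.0]
print(segment(frames, 3))   # [(0, 3), (3, 6), (6, 8)]
```

Real frame features are of course high-dimensional, and the algorithm additionally learns how the discovered subactions may follow one another; the point of the sketch is only that the number of subactions is specified in advance while their content is inferred.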
The rules relating subactions are the key to the algorithm's efficiency. As a video plays, the algorithm constructs a set of hypotheses about which subactions are being depicted where, and it ranks them according to probability. It can't limit itself to a single hypothesis, as each new frame could require it to revise its probabilities. But it can eliminate hypotheses that don't conform to its grammatical rules, which dramatically limits the number of possibilities it has to canvass.
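A standard way to organize such a search, and a convenient way to see why the running time stays linear and the memory footprint fixed, is to keep only the best-scoring hypothesis ending in each grammar state and to update those scores once per frame, in the spirit of Viterbi decoding. The sketch below illustrates that idea on the toy tea grammar; the per-frame scores and the transition table are invented, and this is not the paper's actual model.

```python
import math

SUBACTIONS = ["teabag_in_cup", "kettle_on_stove", "pour_water"]
ALLOWED_NEXT = {                      # toy grammar: which subaction may follow which
    "start":           {"teabag_in_cup", "kettle_on_stove"},
    "teabag_in_cup":   {"teabag_in_cup", "kettle_on_stove"},
    "kettle_on_stove": {"kettle_on_stove", "teabag_in_cup", "pour_water"},
    "pour_water":      {"pour_water"},
}

def stream_best_scores(frame_scores):
    """Viterbi-style streaming update over a grammar of subactions.

    frame_scores: an iterable of dicts, one per frame, mapping each
    subaction to how well that frame matches it (in a real system these
    would come from learned appearance and motion models).

    Only one best score per grammar state is kept, so memory is fixed,
    and each frame is touched exactly once, so the total work grows
    linearly with the length of the video.
    """
    best = {"start": 0.0, **{s: -math.inf for s in SUBACTIONS}}
    for t, scores in enumerate(frame_scores):
        new_best = {"start": -math.inf}
        for s in SUBACTIONS:
            # Hypotheses whose transition would violate the grammar are dropped.
            preceding = [best[p] for p in best if s in ALLOWED_NEXT[p]]
            new_best[s] = max(preceding, default=-math.inf) + scores[s]
        best = new_best
        # A provisional answer is available after every frame and may be
        # revised as more of the video arrives.
        yield t, max(best, key=best.get)

frames = [
    {"teabag_in_cup": 2.0, "kettle_on_stove": 0.5, "pour_water": 1.0},
    {"teabag_in_cup": 0.2, "kettle_on_stove": 2.5, "pour_water": 0.3},
    {"teabag_in_cup": 0.1, "kettle_on_stove": 0.4, "pour_water": 3.0},
]
for t, current_guess in stream_best_scores(frames):
    print(t, current_guess)   # the running guess can change as frames arrive
```

The article's point about probabilities carries over directly: if the raw scores are replaced with log-probabilities, the running maximum becomes the most probable partial interpretation so far, which the detector can report and later revise.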
The researchers tested their algorithm on eight different types of athletic endeavor — such as weightlifting and bowling — with training videos culled from YouTube. They found that, according to metrics standard in the field of computer vision, their algorithm identified new instances of the same activities more accurately than its predecessors.
Pirsiavash is particularly interested in possible medical applications of action detection. The proper execution of physical-therapy exercises, for instance, could have a grammar that's distinct from improper execution; similarly, the return of motor function in patients with neurological damage could be identified by its unique grammar. Action-detection algorithms could also help determine whether, for instance, elderly patients remembered to take their medication — and issue alerts if they didn't.
Abby Abazorius | newswise