Research identifies key weakness in modern computer vision systems

31.07.2018

Computer vision algorithms have come a long way in the past decade. They've been shown to be as good as or better than people at tasks like categorizing dog or cat breeds, and they have the remarkable ability to identify specific faces out of a sea of millions.

But research by Brown University scientists shows that computers fail miserably at a class of tasks that even young children have no problem with: determining whether two objects in an image are the same or different. In a paper presented last week at the annual meeting of the Cognitive Science Society, the Brown team sheds light on why computers are so bad at these types of tasks and suggests avenues toward smarter computer vision systems.


Computers are great at categorizing images by the objects found within them, but they're surprisingly bad at figuring out when two objects in a single image are the same or different from each other. New research helps to show why that task is so difficult for modern computer vision algorithms.

Credit: Serre lab / Brown University

"There's a lot of excitement about what computer vision has been able to achieve, and I share a lot of that," said Thomas Serre, associate professor of cognitive, linguistic and psychological sciences at Brown and the paper's senior author. "But we think that by working to understand the limitations of current computer vision systems as we've done here, we can really move toward new, much more advanced systems rather than simply tweaking the systems we already have."

For the study, Serre and his colleagues used state-of-the-art computer vision algorithms to analyze simple black-and-white images containing two or more randomly generated shapes. In some cases the objects were identical; sometimes they were the same but with one object rotated in relation to the other; sometimes the objects were completely different. The computer was asked to identify the same-or-different relationship.
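To make the stimuli concrete, here is a minimal sketch, in Python with NumPy, of the kind of image described above: two randomly generated shapes on a black-and-white canvas, labeled according to whether they are identical. This is not the authors' stimulus code; the canvas size, shape size and side-by-side placement are illustrative assumptions.

import numpy as np

def random_shape(size=12, rng=None):
    # A random binary blob standing in for the paper's generated shapes.
    rng = rng if rng is not None else np.random.default_rng()
    return (rng.random((size, size)) > 0.5).astype(np.uint8)

def make_example(canvas=64, shape_size=12, rng=None):
    # Returns (image, label); label 1 means the two shapes are identical.
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((canvas, canvas), dtype=np.uint8)
    first = random_shape(shape_size, rng)
    same = bool(rng.integers(0, 2))
    second = first.copy() if same else random_shape(shape_size, rng)
    # Place one shape in each half of the canvas so they never overlap.
    y1, y2 = rng.integers(0, canvas - shape_size, size=2)
    x1 = rng.integers(0, canvas // 2 - shape_size)
    x2 = rng.integers(canvas // 2, canvas - shape_size)
    img[y1:y1 + shape_size, x1:x1 + shape_size] = first
    img[y2:y2 + shape_size, x2:x2 + shape_size] = second
    return img, int(same)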

The study showed that, even after hundreds of thousands of training examples, the algorithms were no better than chance at recognizing the appropriate relationship. The question, then, was why these systems are so bad at this task.
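For readers who want a picture of the setup, the following toy sketch (written with PyTorch, an assumption; the paper's actual state-of-the-art networks are not reproduced here) shows a purely feed-forward convolutional classifier that emits "same" or "different" logits for a single 64-by-64 image containing both objects. It is the kind of one-way pipeline the study probed, not any specific model from the paper.

import torch
import torch.nn as nn

class FeedForwardSameDifferent(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for "different" vs. "same"
        )

    def forward(self, x):
        # Information flows strictly one way through the stack of layers.
        return self.layers(x)

# Usage on a batch of 64x64 single-channel images:
logits = FeedForwardSameDifferent()(torch.zeros(8, 1, 64, 64))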

Serre and his colleagues had a suspicion that it has something to do with the inability of these computer vision algorithms to individuate objects. When computers look at an image, they can't actually tell where one object in the image stops and the background, or another object, begins. They just see a collection of pixels that have similar patterns to collections of pixels they've learned to associate with certain labels. That works fine for identification or categorization problems, but falls apart when trying to compare two objects.

To show that this was indeed why the algorithms were breaking down, Serre and his team performed experiments that relieved the computer from having to individuate objects on its own. Instead of showing the computer two objects in the same image, the researchers showed the computer the objects one at a time in separate images. The experiments showed that the algorithms had no problem learning the same-or-different relationship as long as they didn't have to view the two objects in the same image.
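One plausible way to realize that control, sketched below in PyTorch, is a two-stream setup: a shared convolutional encoder is applied to each image separately, and a small head compares the two embeddings. This is an illustrative assumption about the setup, not the paper's exact implementation; the point is simply that the network never has to carve two objects out of one image.

import torch
import torch.nn as nn

class TwoStreamSameDifferent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, img_a, img_b):
        # Each object arrives in its own image, so the network never has to
        # separate the two itself; it only compares two pre-segmented inputs.
        embeddings = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return self.head(embeddings)

# Usage on a batch of image pairs:
logits = TwoStreamSameDifferent()(torch.zeros(8, 1, 64, 64), torch.zeros(8, 1, 64, 64))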

The source of the problem in individuating objects, Serre says, is the architecture of the machine learning systems that power the algorithms. The algorithms use convolutional neural networks -- layers of connected processing units that loosely mimic networks of neurons in the brain. A key difference from the brain is that the artificial networks are exclusively "feed-forward" -- meaning information has a one-way flow through the layers of the network. That's not how the visual system in humans works, according to Serre.

"If you look at the anatomy of our own visual system, you find that there are a lot of recurring connections, where the information goes from a higher visual area to a lower visual area and back through," Serre said.

While it's not clear exactly what that feedback does, Serre says, it likely has something to do with our ability to pay attention to certain parts of our visual field and to form mental representations of objects in our minds.

"Presumably people attend to one object, building a feature representation that is bound to that object in their working memory," Serre said. "Then they shift their attention to another object. When both objects are represented in working memory, your visual system is able to make comparisons like same-or-different."

Serre and his colleagues hypothesize that the reason computers can't do anything like that is that feed-forward neural networks don't allow for the kind of recurrent processing required for this individuation and mental representation of objects. It could be, Serre says, that making computer vision smarter will require neural networks that more closely approximate the recurrent nature of human visual processing.
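As a purely hypothetical illustration of what "recurrent" could mean in this context, the sketch below adds a top-down feedback pathway that repeatedly modulates early features over several time steps. It is not a model from the paper, only a toy contrast with the strictly one-way networks described above.

import torch
import torch.nn as nn

class TinyRecurrentVision(nn.Module):
    def __init__(self, steps=4):
        super().__init__()
        self.steps = steps
        self.bottom_up = nn.Conv2d(1, 16, 3, padding=1)
        self.top_down = nn.Conv2d(16, 16, 3, padding=1)  # feedback pathway
        self.readout = nn.Linear(16, 2)

    def forward(self, x):
        feedback = torch.zeros(x.size(0), 16, x.size(2), x.size(3), device=x.device)
        for _ in range(self.steps):
            # Early features are recomputed each step, modulated by feedback
            # from the previous step's activity: a crude stand-in for top-down
            # connections from higher to lower visual areas.
            features = torch.relu(self.bottom_up(x) + feedback)
            feedback = torch.relu(self.top_down(features))
        return self.readout(features.mean(dim=(2, 3)))

# Usage on a batch of 64x64 single-channel images:
logits = TinyRecurrentVision()(torch.zeros(8, 1, 64, 64))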

###

Serre's co-authors on the paper were Junkyung Kim and Matthew Ricci. The research was supported by the National Science Foundation (IIS-1252951, 1644760) and DARPA (YFA N66001-14-1-4037).

Media Contact

Kevin Stacey
kevin_stacey@brown.edu
401-863-3766

 @brownuniversity

http://news.brown.edu/ 

Kevin Stacey | EurekAlert!
Further information:
https://news.brown.edu/articles/2018/07/same-different
