This AI birdwatcher lets you 'see' through the eyes of a machine

01.11.2019

New research aims to open the 'black box' of computer vision

It can take years of birdwatching experience to tell one species from the next. But using an artificial intelligence technique called deep learning, Duke University researchers have trained a computer to identify up to 200 species of birds from just a photo.


A Duke team trained a computer to identify up to 200 species of birds from just a photo. Given a photo of a mystery bird (top), the A.I. spits out heat maps showing which parts of the image are most similar to typical species features it has seen before.

Credit: Chaofan Chen, Duke University

The real innovation, however, is that the A.I. tool also shows its thinking, in a way that even someone who doesn't know a penguin from a puffin can understand.

The team trained their deep neural network -- an algorithm loosely modeled on the way the brain works -- by feeding it 11,788 photos of 200 bird species to learn from, ranging from swimming ducks to hovering hummingbirds.
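For readers who want a concrete picture of what that training step looks like in practice, here is a minimal sketch, not the Duke team's actual code, of fine-tuning a standard convolutional network on a folder of labeled bird photos with PyTorch. The directory path, backbone choice and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune a pretrained CNN on a folder of bird photos,
# one subfolder per species (hypothetical "cub200/train" layout).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # standard input size for ImageNet-pretrained CNNs
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("cub200/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights="IMAGENET1K_V1")      # pretrained backbone (torchvision >= 0.13)
model.fc = nn.Linear(model.fc.in_features, 200)       # new head for 200 bird species
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                         # one pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A conventional classifier like this can name the species but says nothing about which parts of the image drove the decision; that missing explanation is the gap the Duke model is built to fill.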

The researchers never told the network "this is a beak" or "these are wing feathers." Given a photo of a mystery bird, the network is able to pick out important patterns in the image and hazard a guess by comparing those patterns to typical species traits it has seen before.

Along the way it spits out a series of heat maps that essentially say: "This isn't just any warbler. It's a hooded warbler, and here are the features -- like its masked head and yellow belly -- that give it away."
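Those heat maps come from comparing small patches of the network's convolutional feature map against learned "prototype" patches of typical species features, the idea described in the team's paper. The sketch below illustrates that comparison in PyTorch; the function name, tensor shapes and the log-based similarity score are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of prototype-style "this looks like that" scoring: compare every
# patch of a conv feature map to one learned prototype and upsample the
# similarity map into an image-sized heat map.
import torch
import torch.nn.functional as F

def prototype_heatmap(features, prototype, image_size):
    """features: (D, H, W) conv feature map; prototype: (D,) learned patch vector."""
    d, h, w = features.shape
    patches = features.reshape(d, -1).T                # (H*W, D) patch vectors
    dist = ((patches - prototype) ** 2).sum(dim=1)     # squared distance to the prototype
    sim = torch.log((dist + 1) / (dist + 1e-4))        # large where a patch matches closely
    sim_map = sim.reshape(1, 1, h, w)
    heat = F.interpolate(sim_map, size=image_size,     # coarse map -> full-resolution heat map
                         mode="bilinear", align_corners=False)
    score = sim.max()                                  # evidence that this feature appears anywhere
    return heat.squeeze(), score

# Example use (hypothetical inputs):
# heat, score = prototype_heatmap(conv_features, hooded_warbler_prototype, (224, 224))
```

Upsampling the coarse similarity map back to image resolution is what produces the heat-map overlay, and the maximum similarity tells the classifier how strongly that prototypical feature, a masked head or a yellow belly, say, shows up in the photo.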

Duke computer science Ph.D. student Chaofan Chen and undergraduate Oscar Li led the research, along with other team members of the Prediction Analysis Lab directed by Duke professor Cynthia Rudin.

They found their neural network is able to identify the correct species up to 84% of the time -- on par with some of its best-performing counterparts, which don't reveal how they are able to tell, say, one sparrow from the next.

Rudin says their project is about more than naming birds. It's about visualizing what deep neural networks are really seeing when they look at an image.

Similar technology is used to tag people on social networking sites, spot suspected criminals in surveillance footage, and train self-driving cars to detect things like traffic lights and pedestrians.

The problem, Rudin says, is that most deep learning approaches to computer vision are notoriously opaque. Unlike traditional software, deep learning software learns from the data without being explicitly programmed. As a result, exactly how these algorithms 'think' when they classify an image isn't always clear.

Rudin and her colleagues are trying to show that A.I. doesn't have to be that way. She and her lab are designing deep learning models that explain the reasoning behind their predictions, making it clear exactly why and how they came up with their answers. When such a model makes a mistake, its built-in transparency makes it possible to see why.

For their next project, Rudin and her team are using their algorithm to classify suspicious areas in medical images like mammograms. If it works, their system won't just help doctors detect lumps, calcifications and other symptoms that could be signs of breast cancer. It will also show which parts of the mammogram it's homing in on, revealing which specific features most resemble the cancerous lesions it has seen before in other patients.

In that way, Rudin says, their network is designed to mimic the way doctors make a diagnosis. "It's case-based reasoning," Rudin said. "We're hoping we can better explain to physicians or patients why their image was classified by the network as either malignant or benign."

###

The team is presenting a paper on their findings at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019) in Vancouver on December 12.

Other authors of this study include Daniel Tao and Alina Barnett of Duke and Jonathan Su at MIT Lincoln Laboratory.

CITATION: "This Looks Like That: Deep Learning for Interpretable Image Recognition," Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Jonathan Su and Cynthia Rudin. Electronic Proceedings of the Neural Information Processing Systems Conference. December 12, 2019.

Media Contact

Robin Ann Smith
ras10@duke.edu
919-681-8057

 @DukeU

http://www.duke.edu

Robin Ann Smith | EurekAlert!
Further information:
https://today.duke.edu/2019/10/ai-birdwatcher-lets-you-see-through-eyes-machine
