Robots and humans focus on different things, indicating a neural gap

The GLP aggregated and excerpted this blog/article to reflect the diversity of news, opinion and analysis.

Researchers at Facebook and Virginia Tech in Blacksburg got humans and machines to look at pictures and answer simple questions about them – a visual question answering task that neural-network-based artificial intelligence can handle.

But the researchers weren’t interested in the answers. They wanted to map human and AI attention, in order to shed a little light on the differences between us and them. What they found is that humans and machines don’t pay attention to the same things when they look at pictures – not at all.
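
One way such a comparison can be made concrete is to score the agreement between a human attention map and a machine attention map for the same image with a rank correlation. The sketch below is a minimal illustration of that idea; the 14×14 grid size, the random stand-in data and the variable names are assumptions for demonstration, not details taken from the study.

```python
# Hypothetical sketch: compare where a human and a machine "look"
# by rank-correlating two attention maps laid over the same image grid.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-ins for real data (assumption): each map assigns an
# importance score to every cell of a coarse grid over the image.
human_attention = rng.random((14, 14))
machine_attention = rng.random((14, 14))

# Flatten both maps and ask how similarly they rank image regions.
rho, _ = spearmanr(human_attention.ravel(), machine_attention.ravel())
print(f"Rank correlation of attention maps: {rho:.3f}")
# Near 1: both focus on the same regions.
# Near 0: the regions each weights most heavily are essentially unrelated.
```

A low correlation on real data is the kind of signal behind the finding quoted below.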

“Machines do not seem to be looking at the same regions as humans, which suggests that we do not understand what they are basing their decisions on,” says Dhruv Batra at Virginia Tech.

This gap between humans and machines could be a useful source of inspiration for researchers looking to tweak their neural nets. “Can we make them more human-like, and will that translate to higher accuracy?” Batra asks. However, some researchers advise that we shouldn’t necessarily rush to build systems that exactly mimic humans.

Read full, original post: Robot eyes and humans fix on different things to decode a scene
