A team of scientists has found a striking similarity between how human brains and artificial neural networks perceive the world.
In the human brain, visual information passes through several cortical areas, each interpreting different aspects of an image that ultimately combine into our perception of the world around us. A new study published Thursday in the journal Current Biology found that aspects of 3D shape – such as bumps and curves – are interpreted early in that process. And, it turns out, the same thing happens in artificial neural networks.
It may not seem too shocking that neural networks – a kind of artificial intelligence architecture explicitly modeled on the brain – interpret information in a similar way. But the scientists didn't expect the similarity to emerge so early in visual processing.
“I was surprised to see strong, clear signals of three-dimensional shape as early as V4,” said Johns Hopkins University neuroscientist and study author Ed Connor in a press release, referring to a specific area of the visual cortex. “But I would never have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
The unexpected parallel hints that neural networks can teach us about our brains, just as we use what we know about the brain to develop new neural networks.
“Artificial networks are the most promising modern models for understanding the brain,” Connor said. “Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence.”
READ MORE: Researchers find an eerie similarity in how brains and computers see [Johns Hopkins University]
More about neural networks: Physicist: The entire universe may be a neural network