How we get directions wrong
According to this research from Kyoto University, the confusion may come from the way our brains maintain our preconceived ideas. Those ideas are hard to shake, so we end up mixing our preconceptions in with the instructions we're given. Result? Confusion.
So how does this apply to writers? Suppose you're writing a murder mystery, and your bad guy has an accomplice who is given specific instructions on setting up the bad guy's alibi. Your detective can't break the bad guy. But the accomplice? Did he do exactly as instructed? Or did he make a mistake?
Or perhaps you're developing an action-adventure plot. The main character must rely on others to accomplish something. Given this research, it's natural and normal for someone to intermix instructions with their own preconceived ideas. Result? The plan goes wrong.
Here's the report with a link to the full study in the attribution.
* * * * *
Mazes and brains: When preconception trumps logic
Regions in brain that may lead to new communication tools found
Researchers reconstruct what we see in our minds when
we navigate -- and explain how we get directions wrong.
The regions of the brain responsible for preconception have been found by researchers who have decoded what scenes people picture in their minds. The discovery helps researchers reconstruct what we see in our minds when we navigate -- and explains how we get directions wrong.
The brain helps us navigate by continually generating, rationalizing, and analyzing great amounts of information. For example, this innate GPS-like function helps us find our way in cities, follow directions to a specific destination, or go to a particular restaurant to satisfy a craving.
"When people try to get from one place to another, they 'foresee' the upcoming landscape in their minds," said study author Yumi Shikauchi. "We wanted to decode prior belief in the brain, because it's so crucial for spatial navigation."
Using virtual three-dimensional mazes together with functional magnetic resonance imaging (fMRI), the researchers investigated whether a person's preconceptions could be represented in brain activity.
Participants were led through each maze, memorizing a sequence of scenes by receiving directions for each move. Then, while being imaged using fMRI, they were asked to navigate through the maze by choosing the upcoming scene from two options. In contrast to methods in previous studies, the researchers focused on the underpinnings of expectation and prediction, crucial cognitive processes in everyday decision making.
Twelve decoders deciphered brain activity from the fMRI scans by associating signals with output variables. With these, the researchers were ultimately able to reconstruct what scene the participants pictured in their minds as they progressed through the maze.
They also discovered that the human sense of objectivity may sometimes be overpowered by preconception, which includes biases arising from external cues and prior knowledge.
"We found that the activity patterns in the parietal regions reflect participants' expectations even when they are wrong, demonstrating that subjective belief can override objective reality," said senior author Shin Ishii.
Shikauchi and Ishii hope that this research will contribute to the development of new communication tools that make use of brain activity.
"There are a lot of things that can't be communicated just by words and language. As we were able to decipher virtual expectations both right and wrong, this could contribute to the development of a new type of tool that allows people to communicate non-linguistic information," said Ishii. "We now need to be able to decipher scenes that are more complicated than simple mazes."
Story Source: Materials provided by Kyoto University. Yumi Shikauchi and Shin Ishii. "Decoding the view expectation during learned maze navigation from human fronto-parietal network." Scientific Reports, 2015.