The human brain recalls visual features in the reverse order from how it detects them
Columbia University Medical Center Oct 18, 2017
Columbia study challenges the traditional hierarchy of brain decoding, offering insight into how the brain makes perceptual judgments.
Scientists at Columbia's Zuckerman Institute have contributed to solving a paradox of perception, upending models of how the brain constructs interpretations of the outside world. When observing a scene, the brain first processes details (spots, lines and simple shapes) and uses that information to build internal representations of more complex objects, like cars and people. But when recalling that information, the brain remembers those larger concepts first and then reconstructs the details, a reverse order of processing. The research, which involved people and employed mathematical modeling, could shed light on phenomena ranging from eyewitness testimony to stereotyping to autism.
The study was published in the journal Proceedings of the National Academy of Sciences.
"The order by which the brain reacts to, or encodes, information about the outside world is very well understood," said Ning Qian, PhD, a neuroscientist and a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute. "Encoding always goes from simple things to the more complex. But recalling, or decoding, that information is trickier to understand, in large part because there was no method, aside from mathematical modeling, to relate the activity of brain cells to a person's perceptual judgment."
Without any direct evidence, researchers have long assumed that decoding follows the same hierarchy as encoding: you start from the ground up, building up from the details. The main contribution of this work with Misha Tsodyks, PhD, the paper's co-senior author, who performed this work while at Columbia and is now at the Weizmann Institute of Science in Israel, "is to show that this standard notion is wrong," Dr. Qian said. "Decoding actually goes backward, from high levels to low."
Dr. Qian cites last year's presidential election as an analogy for this reversed decoding.
"As you observed the things one candidate said and did over time, you may have formed a categorical negative or positive impression of that person. From that moment forward, the way in which you recalled the candidate's words and actions was colored by that overall impression," said Dr. Qian. "Our findings revealed that higher-level categorical decisions ('this candidate is trustworthy') tend to be stable. But lower-level memories ('this candidate said this or that') are not as reliable. Consequently, high-level decoding constrains low-level decoding."
To explore this decoding hierarchy, Drs. Qian and Tsodyks and their team conducted an experiment deliberately simple in design, so that the results would have a clear interpretation. They asked 12 people to perform a series of similar tasks. In the first, participants viewed a line angled at 50 degrees on a computer screen for half a second. Once it disappeared, they repositioned two dots on the screen to match what they remembered to be the angle of the line, and then repeated this task 50 more times. In a second task, the researchers changed the angle of the line to 53 degrees. And in a third task, the participants were shown both lines at the same time, and then had to orient pairs of dots to match each angle.
Prevailing models of decoding predicted that in the two-line task, people would first decode the individual angle of each line (a lower-level feature) and then use that information to decode the two lines' relationship (a higher-level feature).
"Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line," said Dr. Qian.
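The logic of that prediction can be illustrated with a small simulation. This is only a sketch of the contrast the article describes, not the study's actual model: the noise level, the Gaussian noise mechanism, and the reordering rule are all illustrative assumptions.

```python
import random

random.seed(0)

TRUE_A, TRUE_B = 50.0, 53.0   # the two line angles used in the study
NOISE_SD = 5.0                # assumed memory noise in degrees (not from the paper)
N_TRIALS = 10_000

def recall(angle):
    """A noisy, independent memory of one angle."""
    return random.gauss(angle, NOISE_SD)

reversals_low_first = 0   # traditional (low-to-high) decoding
reversals_high_first = 0  # relation-first (high-to-low) decoding

for _ in range(N_TRIALS):
    # Traditional decoding: each angle is recalled independently, so on many
    # trials the noisy 50-degree recall exceeds the noisy 53-degree recall.
    a, b = recall(TRUE_A), recall(TRUE_B)
    if a > b:
        reversals_low_first += 1
    # Relation-first decoding: the coarse relation ("line A is shallower than
    # line B") is decoded first and constrains the detailed recalls, here by
    # simply enforcing the remembered ordering, so reversals cannot occur.
    a2, b2 = min(a, b), max(a, b)
    if a2 > b2:
        reversals_high_first += 1

print(f"independent decoding:    {reversals_low_first / N_TRIALS:.1%} order reversals")
print(f"relation-first decoding: {reversals_high_first / N_TRIALS:.1%} order reversals")
```

With this (assumed) noise level, independent decoding reverses the two lines' order on a substantial fraction of trials, while decoding the relation first eliminates reversals entirely, which is the signature the experiment was designed to test.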
But that is not what happened.