A brain-system that builds confidence in what we see, hear and touch
Ecole Polytechnique Fédérale de Lausanne News Oct 02, 2017
A series of experiments at EPFL provides conclusive evidence that the brain uses a single mechanism (supramodality) to estimate confidence in different senses such as audition, touch, or vision.
The study was published in the Journal of Neuroscience.
Behavioral scientists and psychologists use the term "metacognition" to describe our ability to access, report and regulate our own mental states: "thinking about thinking", "knowing about knowing", "being aware about being aware" are all higher-order cognitive skills that fit this category.
Specifically, metacognition enables the brain to compute a degree of confidence when we perceive events from the external world, such as a sound, light, or touch. The accuracy of confidence estimates is crucial in daily life, for instance when hearing a baby crying, or smelling a gas leak. Confidence estimates also need to combine input from multiple senses simultaneously, for instance when buying a violin based on how it sounds, feels, and looks.
From a neuroscience point of view, how metacognition operates across different senses, and for combinations of senses, is still a mystery: does metacognition apply the same rules to visual, auditory, and tactile stimuli, or does it rely on distinct components for each sensory domain? The first of these two ideas, i.e. the "common rules", is known as "supramodality", and it has proven controversial among neuroscientists.
A series of experiments by Olaf Blanke's lab at EPFL now provides evidence in favor of supramodality. The study, led by researcher Nathan Faivre, tested human volunteers using three different experimental techniques: behavioral psychophysics, computational modeling, and electrophysiological recordings.
The behavioral part of the study found that participants with high metacognitive performance for one sense (e.g. vision) were likely to perform well in other senses (e.g. audition or touch). "In other words," explains Faivre, "those of us who are good at knowing what they see are also good at knowing what they hear and what they touch."
The computational modeling indicated that the confidence estimates we build when seeing an image or hearing a sound can be efficiently compared to one another. This implies that they share the same format.
Finally, the electrophysiological recordings revealed similar characteristics when the volunteers reported confidence in their responses to audio or audiovisual stimuli. This suggests that visual and audiovisual metacognition is based on similar neural mechanisms.
"These results make a strong case in favor of the supramodality hypothesis," says Faivre. "They show that there is a common currency for confidence in different sensory domains; in other words, that confidence in a signal is encoded with the same format in the brain no matter where the signal comes from. This gives metacognition a central status, whereby the monitoring of perceptual processes occurs through a common neural mechanism."
The study is an important step towards a mechanistic understanding of human metacognition. It tells us something about how we perceive the world and become aware of our surroundings, and can potentially lead to ways of treating several neurological and psychiatric disorders where metacognition is impaired.