|16:00 – 16:05||Opening Remarks and announcements|
|16:05 – 16:20||Talk 1: Jiajia Yang|
|16:20 – 16:35||Talk 2: Naoko Koide-Majima|
|16:35 – 17:20||Lecture: Martin Hebart|
Free discussion between speakers and attendees
Okayama University, Okayama, Japan
Laminar-specific predictive processing in the human somatosensory system
Human sensory processing is typically considered to occur within a hierarchical framework. It is often regarded as a series of discrete processing stages across cortical columns throughout the brain, and the hierarchy is known to be bidirectional at each stage rather than strictly bottom-up. A mechanistic model of this framework is the predictive processing principle. According to this principle, bottom-up feedforward signals are thought to encode basic perceptual dimensions, while top-down feedback signals, such as predictions, modulate the responsiveness of early sensory cortex, thereby enhancing perceptual sensitivity to expected stimulus features. Recently, we have employed both conventional whole-brain fMRI at 3T and high-resolution laminar fMRI at 7T to uncover predictive processing in the human somatosensory system. In this talk, I will present our recent findings on this topic, including how a lower-level sensory area (the primary somatosensory cortex) and a higher-level area (the midcingulate cortex) contribute to tactile predictive processing.
Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka, Japan
Human cortical representation of a rich variety of emotion categories
We experience a rich variety of emotions in daily life, and a central topic of affective neuroscience is to reveal how these emotions are represented in the brain. Recent psychological studies suggest high-dimensional representational structures for the diverse emotions of daily life. However, little is known about how such diverse emotions are represented in the human brain. To address this, we measured fMRI responses while subjects watched emotion-inducing audiovisual movies. Each one-second movie scene was rated with respect to 80 emotion categories. First, we quantified canonical correlations between the emotion ratings and the BOLD responses, and found that around 25 distinct dimensions of the emotion ratings contribute statistically to the emotion representation in the brain. Then, to show how the emotion categories are represented in the brain, we visualized a continuous semantic space of the emotion representation and mapped it onto the cortical surface. We found that emotion representation transitioned from unimodal to transmodal regions across the cortical surface. In this study, we present a cortical representation of a rich variety of emotion categories, covering many of the emotions we experience in daily life.
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Revealing interpretable representations in artificial and biological vision
Recognizing the objects around us seems like a trivial task, yet despite enormous recent progress with computational models of vision, we still do not know how humans achieve this ability. To understand object recognition, we need to understand the computations at different stages of visual processing and how representations at one stage lead to representations at the next. This raises several key issues. First, how can we capture the complexity of real-world object recognition, given the thousands of objects around us? Second, how can we gain a meaningful understanding from these measured representations? In this talk, I will present work from our lab that aims to resolve these key issues. I will lay out our pathway toward a large-scale, comprehensive sampling of the representational space of objects and our approach to revealing the core representational dimensions of objects in the brain, behavior, and artificial neural networks. Along the way, I will introduce several novel methodological approaches for comparing and interpreting representations across systems, individuals, and species.