Connectionists: Information Structure of Brain and Qualia Symposium

Nao Tsuchiya naotsugu.tsuchiya at monash.edu
Sun May 9 22:01:58 EDT 2021


Dear all,


Please see below the announcement of the Zoom-based Information Structure
of Brain and Qualia Symposium.


Date and Time:

8:00 - 10:10 (JST), Wednesday, May 12th. Convert to your time zone:
<http://www.worldtimebuddy.com/event?lid=1850147%2C21%2C30&h=1850147&sts=27012420&sln=8-10&a=show&euid=f09d52c6-d6a2-e084-b97b-24a8dc5000c6>


Title: Information Structure of Brain and Qualia Symposium

We are excited to announce our Information Structure of Brain and Qualia
Symposium!

Our two speakers are Dr. Marieke Mur <https://murlab.org/> from Western
University in Canada and Dr. Shinji Nishimoto
<https://cinet.jp/english/people/20141086/> from CiNet in Japan. All
details can be found below.

Please fill in the registration form here
<https://docs.google.com/forms/d/e/1FAIpQLSe9rmt4DA-nbLod2W04WJzrm22AA928S7cNwGIp-hEvlDPW5A/viewform?usp=sf_link>
to receive the Zoom connection details. The symposium will also be
live-streamed on YouTube and archived.


Schedule:

8:00-8:10 Introduction

8:10-8:40 Talk 1: Dr. Marieke Mur, “Predicting perceptual representations
from brain activity”

8:40-9:00 Panel Discussion Q&A

9:00-9:15 Break

9:15-9:45 Talk 2: Dr. Shinji Nishimoto, “Encoding and decoding of
cognitive functions”

9:45-10:00 Panel Discussion Q&A

10:00-10:10 Closing


Regards

Nao Tsuchiya

Predicting perceptual representations from brain activity

Marieke Mur

Object representations in human high-level visual cortex are at the
interface between perception and cognition. What is the nature of these
representations, and how are they computed? Furthermore, can they predict
human perception?

I will address these questions using representational similarity analysis,
an experimental and data-analytical framework for relating brain activity
data, behaviour, and computational models. I will focus on experimental
data acquired in healthy human participants while they were viewing object
images from a wide range of natural categories, including faces and places.
The data consist of functional magnetic resonance imaging (fMRI) data from
visual cortex and object-similarity judgments, which were acquired outside
the fMRI scanner.
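For readers unfamiliar with the framework, here is a minimal sketch of the core of representational similarity analysis under stated assumptions: it builds a representational dissimilarity matrix (RDM) from placeholder fMRI response patterns and correlates it with a placeholder behavioural dissimilarity matrix. The array names and toy data are illustrative only, not the speaker's actual pipeline.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_voxels = 96, 500

# Placeholder data: fMRI response patterns (one row per image) and
# behavioural dissimilarity judgments (one entry per image pair).
voxel_patterns = rng.standard_normal((n_images, n_voxels))
judged_dissimilarity = squareform(pdist(rng.standard_normal((n_images, 10))))

# Brain RDM: 1 - Pearson correlation between the response patterns
# evoked by each pair of images.
brain_rdm = squareform(pdist(voxel_patterns, metric="correlation"))

# Compare RDMs with a Spearman correlation over the lower triangle,
# which excludes the zero diagonal and duplicate upper-triangle entries.
tri = np.tril_indices(n_images, k=-1)
rho, p = spearmanr(brain_rdm[tri], judged_dissimilarity[tri])
print(f"Brain-behaviour RDM correlation: rho={rho:.3f}, p={p:.3g}")

In practice the brain RDM would be computed from estimated response patterns in a region of interest, and the comparison is typically repeated across participants and candidate models.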

I will show that object representations in high-level visual cortex are at
once categorical and continuous, and can be explained similarly well by
category labels, visual features of intermediate complexity, and deep
convolutional neural networks. Among the visual features, it is those
correlated with category membership that explain the high-level visual
object representation. I will further show that the high-level object
representation predicts human object-similarity judgments reasonably well,
but fails to capture evolutionarily more recent category divisions present
in the judgments. These more recent divisions are human-related, reflecting
the distinctions between humans and nonhuman animals and between man-made
and natural objects.

Together, these findings suggest that high-level visual cortex has
developed feature detectors that distinguish between categories of
long-standing evolutionary relevance, and that other brain systems might
adaptively read out or introduce category divisions that serve current
behavioural goals.

Encoding and decoding of cognitive functions

Shinji Nishimoto

Predictive modeling of brain activity has been used to reveal how the brain
represents diverse perceptual phenomena, including visual, auditory,
semantic, emotional, and linguistic experiences. These studies have
provided the representational structure of perceptual contents, the
macroscopic functional structure across the brain, and the quantitative
frameworks to decode experiences from brain activity. However, many of
these studies focused on passive experiences, and relatively little was
known about how such studies might be generalized to explain more active
cognitive experiences. Recently, we have extended the modeling approach to
cognitive functions. We recorded brain activity while human participants
performed 103 cognitive tasks, including audiovisual recognition, memory
formation and recall, logical judgment, introspection, time perception,
prediction, decisions on ethics and beauty, and motor control. We built
encoding and decoding models of the evoked brain activity using latent
cognitive features. These models revealed internal structures and
fine-scale cortical mappings of cognitive features, and decoded brain
activity in a way that generalizes even to novel tasks. Our framework
provides a powerful step toward the comprehensive and quantitative
understanding of human perceptual and cognitive experiences.
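As a rough illustration of this style of encoding/decoding modeling, the sketch below fits a ridge-regression encoding model that maps hypothetical latent cognitive features to simulated voxel responses and scores voxel-wise prediction accuracy on held-out data. All names and data here are placeholders, not the actual 103-task dataset or the authors' models.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 1000, 40, 200

# Placeholder latent cognitive features (one row per fMRI time point)
# and simulated voxel responses generated from an unknown linear mapping.
X = rng.standard_normal((n_samples, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.5 * rng.standard_normal((n_samples, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

# Encoding model: predict each voxel's response from the cognitive features.
encoder = Ridge(alpha=10.0).fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

# Voxel-wise prediction accuracy: correlation between held-out and
# predicted responses for each simulated voxel.
voxel_r = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
           for v in range(n_voxels)]
print(f"Median voxel-wise prediction r = {np.median(voxel_r):.3f}")

Decoding corresponds to fitting the reverse mapping, from brain responses back to the feature space, so that the feature representation of a novel task can be estimated from activity alone.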




-- 
Professor Nao (Naotsugu) Tsuchiya, PhD

School of Psychological Sciences
Turner Institute for Brain and Mental Health
Brain Mapping and Modelling Program
Monash University

770 Blackburn Road, Monash Biomedical Imaging facility, Clayton, VIC 3168
Australia

T: +61 3 9905 4564
E: naotsugu.tsuchiya at monash.edu
W: homepage <https://sites.google.com/monash.edu/tlab/home?authuser=0>
Tw: @conscious_tlab <https://twitter.com/conscious_tlab>
YouTube: neural basis of consciousness
<https://www.youtube.com/channel/UCvRuQWqbKbHJFCC4xabOI4g>
Visiting Researcher, Department of Dynamic Brain Imaging,
Advanced Telecommunications Research (ATR), Japan
Visiting Researcher, CiNet, Osaka University, Japan
orcid.org/0000-0003-4216-8701