<div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div class="gmail_quote"><div dir="ltr"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap">Dear all, </span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap">Please see the announcement of the zoom-based symposium on </span>Information Structure of Brain and Qualia Symposium. </p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;font-size:11pt;white-space:pre-wrap">Date and Time: </span><br></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">8:00 - 10:10 (JST) Wednesday, May 12th </span><a href="http://www.worldtimebuddy.com/event?lid=1850147%2C21%2C30&h=1850147&sts=27012420&sln=8-10&a=show&euid=f09d52c6-d6a2-e084-b97b-24a8dc5000c6" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;font-family:Arial;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">Translate to your time zone.</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Title: Information Structure of Brain and Qualia Symposium</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">We are excited to announce our Information Structure of Brain and Qualia Symposium! 
Our two speakers are Dr. Marieke Mur (https://murlab.org/) from Western University in Canada and Dr. Shinji Nishimoto (https://cinet.jp/english/people/20141086/) from CiNet in Japan. All details can be found below.

Please fill in the registration form (https://docs.google.com/forms/d/e/1FAIpQLSe9rmt4DA-nbLod2W04WJzrm22AA928S7cNwGIp-hEvlDPW5A/viewform?usp=sf_link) to receive the Zoom access details. The symposium will also be live-streamed on YouTube and archived there.
Schedule:
8:00-8:10 Introduction
8:10-8:40 Talk 1: Dr. Marieke Mur, “Predicting perceptual representations from brain activity”
8:40-9:00 Panel discussion and Q&A
9:00-9:15 Break
9:15-9:45 Talk 2: Dr. Shinji Nishimoto, “Encoding and decoding of cognitive functions”
9:45-10:00 Panel discussion and Q&A
10:00-10:10 Closing

Regards,
Nao Tsuchiya


Predicting perceptual representations from brain activity

Marieke Mur

Object representations in human high-level visual cortex sit at the interface between perception and cognition. What is the nature of these representations, and how are they computed? Furthermore, can they predict human perception?

I will address these questions using representational similarity analysis, an experimental and data-analytical framework for relating brain-activity data, behaviour, and computational models. I will focus on experimental data acquired in healthy human participants while they viewed object images from a wide range of natural categories, including faces and places. The data consist of functional magnetic resonance imaging (fMRI) data from visual cortex and object-similarity judgments, the latter acquired outside the fMRI scanner.
I will show that object representations in high-level visual cortex are at once categorical and continuous, and can be explained similarly well by category labels, visual features of intermediate complexity, and deep convolutional neural networks. Among the visual features, it is those correlated with category membership that explain the high-level visual object representation. I will further show that the high-level object representation predicts human object-similarity judgments reasonably well, but fails to capture evolutionarily more recent category divisions present in the judgments. These more recent category divisions are human-related and reflect the distinctions between humans and nonhuman animals, and between man-made and natural objects.

Together, these findings suggest that high-level visual cortex has developed feature detectors that distinguish between categories of long-standing evolutionary relevance, and that other brain systems might adaptively read out or introduce category divisions that serve current behavioural goals.
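For readers unfamiliar with representational similarity analysis, the Python sketch below illustrates the core comparison the abstract refers to: brain-activity patterns and a model's features are each summarized as a representational dissimilarity matrix (RDM), and the two RDMs are compared. It uses synthetic data and illustrative variable names throughout; it is a minimal sketch of the framework, not the analysis pipeline used in the work above.

# Minimal RSA sketch with synthetic data; all names are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_voxels, n_model_features = 96, 500, 100

# Synthetic stand-ins for measured fMRI response patterns (one row per image)
# and for a model's feature representation of the same images.
brain_patterns = rng.standard_normal((n_images, n_voxels))
model_features = rng.standard_normal((n_images, n_model_features))

# RDM: pairwise dissimilarity (1 - Pearson correlation) between image patterns.
# pdist with metric="correlation" returns the condensed upper triangle directly.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Compare RDMs with a Spearman rank correlation, a common choice because it
# assumes only a monotonic relation between brain and model dissimilarities.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho = {rho:.3f} (p = {p:.3g})")

With real data, brain_patterns would hold per-image response estimates from a visual-cortex region of interest, and model_rdm could come from category labels, intermediate visual features, a deep network layer, or behavioural similarity judgments.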
Encoding and decoding of cognitive functions

Shinji Nishimoto

Predictive modeling of brain activity has been used to reveal how the brain represents diverse perceptual phenomena, including visual, auditory, semantic, emotional, and linguistic experiences. These studies have characterized the representational structure of perceptual contents and the macroscopic functional structure across the brain, and have provided quantitative frameworks for decoding experiences from brain activity. However, many of these studies focused on passive experiences, and relatively little is known about how well they generalize to more active cognitive experiences. Recently, we extended the modeling approach to cognitive functions. We recorded brain activity while human participants performed 103 cognitive tasks, including audiovisual recognition, memory formation and recall, logical judgement, introspection, time perception, prediction, decisions on ethics and beauty, and motor control. We built encoding and decoding models of the evoked brain activity using latent cognitive features. These models revealed the internal structure and fine-scale cortical mapping of cognitive features, and decoded brain activity in a way that generalizes even to novel tasks. Our framework provides a powerful step toward a comprehensive and quantitative understanding of human perceptual and cognitive experiences.
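To make the encoding/decoding distinction concrete, here is a minimal sketch in the same spirit: an encoding model maps latent cognitive task features to voxel responses, and a decoding model inverts that mapping, with both evaluated on held-out trials. The data are synthetic, the names are illustrative, and plain ridge regression stands in for whatever estimators the actual study used.

# Minimal encoding/decoding sketch with synthetic data; names are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_features, n_voxels = 400, 20, 300  # e.g. trials pooled across tasks

# Latent cognitive features per trial and a ground-truth linear encoding.
task_features = rng.standard_normal((n_trials, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
brain_activity = task_features @ true_weights \
    + 0.5 * rng.standard_normal((n_trials, n_voxels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Encoding: features -> voxels. Decoding: voxels -> features.
W_enc = ridge_fit(task_features[:300], brain_activity[:300])
W_dec = ridge_fit(brain_activity[:300], task_features[:300])

# Evaluate on held-out trials, mimicking generalization to unseen data.
pred_activity = task_features[300:] @ W_enc
pred_features = brain_activity[300:] @ W_dec
enc_r = np.corrcoef(pred_activity.ravel(), brain_activity[300:].ravel())[0, 1]
dec_r = np.corrcoef(pred_features.ravel(), task_features[300:].ravel())[0, 1]
print(f"held-out encoding r = {enc_r:.2f}, decoding r = {dec_r:.2f}")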
--
Professor Nao (Naotsugu) Tsuchiya, PhD

1. School of Psychological Sciences
   Turner Institute for Brain and Mental Health
   Brain Mapping and Modelling Program
   Monash University
   770 Blackburn Rd, Monash Biomedical Imaging facility, Clayton, VIC 3168, Australia
2. Visiting Researcher at Department of Dynamic Brain Imaging, Advanced Telecommunications Research (ATR), Japan
3. Visiting Researcher at CiNet, Osaka University, Japan

T: +61 3 9905 4564
E: naotsugu.tsuchiya@monash.edu
W: https://sites.google.com/monash.edu/tlab/home?authuser=0
Tw: @conscious_tlab (https://twitter.com/conscious_tlab)
YouTube: neural basis of consciousness (https://www.youtube.com/channel/UCvRuQWqbKbHJFCC4xabOI4g)
ORCID: orcid.org/0000-0003-4216-8701