<div dir="ltr">Dear colleagues,<div>We've now posted the schedule for the workshop (<a href="https://goo.gl/yNyBHT">https://goo.gl/yNyBHT</a>). Deadline for posters is May 15!</div><div><h3 style="font-size:18px;font-family:"lucida sans","lucida grande","lucida sans unicode",helvetica,arial,sans-serif;line-height:normal;margin:0px 0px 10px;padding:0px 0px 10px;border-width:0px 0px 1px;border-top-style:initial;border-right-style:initial;border-bottom-style:solid;border-left-style:initial;border-top-color:initial;border-right-color:initial;border-bottom-color:rgb(204,204,204);border-left-color:initial;outline:0px;color:rgb(0,102,204);overflow:hidden;text-align:center"><br></h3></div><div><div style="font-size:12.8px"><h2 style="font-size:15px;font-family:"lucida sans","lucida grande","lucida sans unicode",helvetica,arial,sans-serif;line-height:normal;margin:0px;padding:0px 0px 15px;border:0px;outline:0px;color:rgb(0,102,204);text-align:center"><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px;font-size:14px"><span style="font-family:tahoma,geneva,sans-serif;line-height:normal;margin:0px;padding:0px;border:0px;outline:0px"><i style="font-family:"lucida sans","lucida grande","lucida sans unicode",helvetica,arial,sans-serif;line-height:normal;margin:0px;padding:0px;border:0px;outline:0px"><span id="gmail-m_2990000452291981879gmail-m_8863046576946452199gmail-docs-internal-guid-c97c7708-c75b-c4bb-9e7b-e1e132d00d3b" style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px;font-style:normal;color:rgb(0,0,0);background-color:transparent;vertical-align:baseline"><span class="gmail-il">Hierarchical</span> <span class="gmail-il">Multisensory</span> Integration: Theory and Experiments</span></i></span></span></h2><h3 style="font-size:18px;font-family:"lucida sans","lucida grande","lucida sans unicode",helvetica,arial,sans-serif;line-height:normal;margin:0px 0px 10px;padding:0px 0px 10px;border-width:0px 0px 1px;border-top-style:initial;border-right-style:initial;border-bottom-style:solid;border-left-style:initial;border-top-color:initial;border-right-color:initial;border-bottom-color:rgb(204,204,204);border-left-color:initial;outline:0px;color:rgb(0,102,204);overflow:hidden;text-align:center"><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px;font-size:14px"><span style="font-family:tahoma,geneva,sans-serif;line-height:normal;margin:0px;padding:0px;border:0px;outline:0px"><span style="font-family:"lucida sans","lucida grande","lucida sans unicode",helvetica,arial,sans-serif;line-height:normal;margin:0px;padding:0px;border:0px;outline:0px"><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px">Barcelona, Spain, June 18-19, 2017</span></span></span></span></h3><div style="font-size:12.8px"><font face="arial, helvetica, sans-serif"><font style="color:rgb(0,0,0);text-align:justify;font-size:small"><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px"><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px">The ability to map sensory inputs to meaningful semantic labels, i.e., to recognize objects, is foundational to cognition, and the human brain excels at object recognition tasks across sensory domains. </span></span><span style="line-height:normal;margin:0px;padding:0px;border:0px;outline:0px">Examples include perceiving spoken speech, reading written words, even recognizing tactile Braille patterns. 
In each sensory modality, processing appears to be realized by multi-stage hierarchies in which tuning complexity grows gradually, from simple features in primary sensory areas to complex representations in higher-level areas that ultimately interface with task-related circuits in prefrontal/premotor cortices.

Crucially, real-world stimuli usually do not have sensory signatures in just one modality but activate representations in different sensory domains, and successfully integrating these different hierarchical representations appears to be of key importance for cognition. Prior theoretical work has mostly tackled multisensory integration at isolated processing stages, and the computational functions and benefits of hierarchical multisensory interactions are still unclear. For instance, what characteristics of the input determine at which levels of two linked sensory processing hierarchies cross-sensory integration occurs? Can these connections form through unsupervised learning, based just on temporal coincidence? Which stages are connected? For instance, is there selective audio-visual integration only at a low level of the hierarchy, e.g., to enable letter-by-letter reading, or even earlier, at the level of primary sensory cortices, with multisensory selectivity at higher hierarchical levels then resulting from feedforward processing within each hierarchy? Or are there selective connections at multiple hierarchical levels? What are the computational advantages of different cross-sensory connection schemes? What are the roles of "top-down" vs. "lateral" inputs in learning cross-hierarchical connections? And what are computationally efficient ways to leverage prior learning in one modality when learning hierarchical representations in a new modality?

The workshop will gather a small group of experts to informally exchange the latest ideas and findings, both experimental and theoretical, in the field of multisensory integration. It will consist of two days packed with talks by invited speakers as well as discussions, and there will also be a poster session.
Researchers, postdocs, and graduate students interested in multisensory integration and hierarchical processing are all invited to apply. For more information on the event and to register, see http://eventum.upf.edu/event_detail/8963/detail/pire-workshop_summer-school-2017.html.

This event is jointly organized by the Center for Brain and Cognition at the Universitat Pompeu Fabra in Barcelona and Georgetown University, with funding from the National Science Foundation and the Spanish Ministry of Economy, Industry and Competitiveness.

-- 
Maximilian Riesenhuber
Lab for Computational Cognitive Neuroscience
Department of Neuroscience
Georgetown University Medical Center
Research Building Room WP-12
3970 Reservoir Rd., NW
Washington, DC 20007

phone: 202-687-9198 * email: max.riesenhuber@georgetown.edu
http://maxlab.neuro.georgetown.edu
public key ID 0x8696063709CCE3BB