<div dir="ltr"><span id="gmail-docs-internal-guid-6c72b359-7fff-fdeb-8477-dd84d2fb3e7f"><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt;padding:0pt 0pt 11pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Computational Memorability of Imagery</b></span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Special Session at CBMI 2023</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">20-22 September 2023</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Orleans, France</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><a href="https://cbmi2023.org/" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://cbmi2023.org</span></a></p><br><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt;padding:0pt 0pt 11pt"><span style="font-size:11pt;font-family:Arial;color:rgb(60,72,88);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The subject of memorability has seen an influx in interest since the likelihood of images being recognised upon subsequent viewing was found to be consistent across individuals. Driven primarily by the MediaEval Media Memorability tasks which has just completed its 5th annual iteration, recent research has extended beyond static images, pivoting to the more dynamic and multi-modal medium of video memorability.</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt;padding:0pt 0pt 11pt"><span style="font-size:11pt;font-family:Arial;color:rgb(60,72,88);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The memorability of a video or an image is an abstract concept and like other features such as aesthetics and beauty, is an intrinsic feature of imagery. 
There are many applications for predicting image and video memorability: in marketing, where some part of a video advertisement should strive to be the most memorable; in education, where key parts of educational content should be memorable; in other areas of content creation, such as video summaries of longer events like movies or wedding photography; and in cinematography, where a director may want to make some parts of a movie or TV program more, or less, memorable than the rest.

For computing video memorability, researchers have used a variety of approaches, including video vision transformers as well as more conventional machine learning, text features derived from captions, a range of ensemble methods, and even the generation of surrogate videos using stable diffusion. The performance of these approaches tells us that we are now close to the best memorability prediction for video and for images that current techniques allow, and that many research groups can achieve that level of performance.
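To make the flavour of these approaches concrete, here is a minimal sketch of the "more conventional machine learning" route: frozen, pre-trained CNN features mean-pooled over sampled video frames and fed to a support-vector regressor, scored with Spearman's rank correlation, the usual metric in the MediaEval task. The helper names and the frame-path interface are illustrative assumptions, not any benchmark's actual API.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVR
from scipy.stats import spearmanr

# Frozen ImageNet backbone used as a generic feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def video_feature(frame_paths):
    """Mean-pool CNN features over frames sampled from one video (hypothetical interface)."""
    frames = torch.stack([preprocess(Image.open(p).convert("RGB"))
                          for p in frame_paths])
    return backbone(frames).mean(dim=0).numpy()

def evaluate(train_feats, train_scores, test_feats, test_scores):
    """Fit a regressor on video-level features and report Spearman's rho."""
    reg = SVR(kernel="rbf").fit(train_feats, train_scores)
    rho, _ = spearmanr(reg.predict(test_feats), test_scores)
    return rho
```

Swapping in a video transformer or an ensemble changes only how the per-video feature (or prediction) is produced; the rank-correlation evaluation stays the same.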
We believe that image and video memorability is now ready for the spotlight and for researchers to be drawn to using video memorability prediction in creative ways. We invite submissions from researchers who wish to extend their reported techniques and/or apply those techniques to real-world applications such as marketing, education, or other areas of content production. We hope the outcome of this special session will be a community-wide realisation of the potential of video memorability prediction, and increased uptake in both research on, and applications of, the topic.

The topics of the special session include, but are not limited to:

- Development and interpretation of single- or multi-modal models for Computational Memorability
- Transfer learning and transferability for Computational Memorability
- Computational Memorability applications
- Extending work from the MediaEval Predicting Media Memorability task
- Cross- and multilingual aspects in Computational Memorability
style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Evaluation and resources for Computational Memorability</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(60,72,88);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:23pt" role="presentation"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Computational memorability prediction based on physiological data (e.g.: EEG data)</span></p></li></ul><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The contributions to this special session are r</span><span style="font-size:10.5pt;font-family:Roboto,sans-serif;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">egular short papers (only) as 4 pages, plus additional pages for the list of references.</span><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> The review process is single-blind meaning authors do not have to anonymise their submissions. </span></p><b><br></b><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Important dates</b></span></p><br><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Paper submission: April 12, 2023</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Notification of acceptance: June 1, 2023</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Camera ready paper: June 15, 2023</span></p><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Conference dates: September 20-22, 2023</span></p><b><br></b><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Organisers</b></span></p><br><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" 
style="list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(60,72,88);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Alba García Seco de Herrera, University of Essex (<a href="mailto:alba.garcia@essex.ac.uk">alba.garcia@essex.ac.uk</a>)</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:13.5pt;font-family:Roboto,sans-serif;color:rgb(60,72,88);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Gabi Constantin, University Politehnica of Bucharest (<a href="mailto:mihai.constantin84@upb.ro">mihai.constantin84@upb.ro</a>)</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:13.5pt;font-family:Roboto,sans-serif;color:rgb(60,72,88);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt" role="presentation"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Alan Smeaton, Dublin City University (<a href="mailto:alan.smeaton@dcu.ie">alan.smeaton@dcu.ie</a>)</span></p></li></ul></span></div>