<div dir="ltr"><div><p dir="ltr" style="line-height:1.2;margin-top:0pt;margin-bottom:0pt">***************************************************************<br>CALL FOR PAPERS<br>IEEEWCCI24 IEEE World Congress on Computational Intelligence: <a href="https://2024.ieeewcci.org/" target="_blank">https://2024.ieeewcci.org/</a><br></p><p style="line-height:1.2;margin-top:0pt;margin-bottom:0pt">More details on the special session "<b>Deep Learning for Graphs</b>"<b>: </b><a href="https://sites.google.com/view/dl4g-2024/home" target="_blank">https://sites.google.com/view/dl4g-2024/home</a></p><p style="line-height:1.2;margin-top:0pt;margin-bottom:0pt">Yokohama, Japan, 30 June–5 July 2024<br>***************************************************************<br></p></div><div><br></div><div><font size="4">*** Call For Papers ***</font></div><div>The research field of deep learning for graphs studies the application of well-known deep learning concepts, such as convolution operators on images, to the processing of graph-structured data. Graphs are abstract objects that naturally represent interacting systems of entities, where interactions denote functional and/or structural dependencies between them. Molecular compounds and social networks are the most common examples of such graphs: on the one hand, a molecule is seen as a system of interacting atoms, whose bonds depend, e.g., on their inter-atomic distance; on the other hand, a social network represents a vastly heterogeneous set of user-user interactions, as well as between users and items, like, pictures, movies and songs. Besides, graph representations are extremely useful in far more domains, for instance to encode symmetries and constraints of combinatorial optimization problems as a proxy of our a-priori knowledge. For these reasons, learning how to properly map graphs and their nodes to values of interest poses extremely important, yet challenging, research questions. This special session on graph learning will solicit recent advances that exploit various topics to benefit the solving of real-world problems. The special session is an excellent opportunity for the machine learning community to gather together and host novel ideas, showcase potential applications, and discuss the new directions of this remarkably successful research field. 
*** Important Dates ***
Paper Submissions (EXTENDED): January 29, 2024
Paper Acceptance Notifications: March 15, 2024
Conference: June 30 - July 5, 2024

*** Topics ***
This session covers the broad spectrum of machine learning methods for structured and relational data, with an emphasis on deep representation learning. Theoretical and methodological papers are welcome from any of the following areas, including but not limited to:
- Graph representation learning
- Graph generation (probabilistic models, variational autoencoders, adversarial learning, etc.)
- Graph learning and relational inference
- Graph coarsening and pooling in graph neural networks
- Graph kernels and distances
- Theory of graph neural networks (e.g., expressive power, learnability, negative results)
- Learning on complex graphs (e.g., dynamic graphs and heterogeneous graphs)
- Deep learning for dynamic graphs and spatio-temporal data
- Anomaly and change detection in graph data
- Reservoir computing and randomized neural networks for graphs
- Recurrent, recursive, and contextual models
- Neural algorithmic reasoning
- Relational reinforcement learning
- Automatic graph machine learning
- Scalability, data efficiency, and training techniques of graph neural networks
- Tensor methods for structured data
- Graph datasets and benchmarks

We also encourage application papers focused on but not limited to:
- Bioinformatics (e.g., drug discovery and protein folding)
style="box-sizing:border-box;color:rgb(0,0,0)">Cybersecurity (e.g., fraud detection)<br></span></li><li style="margin-left:15px"><span style="box-sizing:border-box;color:rgb(0,0,0)">Transportation Systems (e.g., traffic forecasting)<br></span></li><li style="margin-left:15px"><span style="box-sizing:border-box;color:rgb(0,0,0)">Recommender Systems (e.g., dynamic link prediction)</span></li><li style="margin-left:15px"><span style="box-sizing:border-box;color:rgb(0,0,0)">Graph Machine Learning Platforms and Systems</span></li><li style="margin-left:15px"><span style="box-sizing:border-box;color:rgb(0,0,0)">Computer Vision (e.g. point clouds)</span></li><li style="margin-left:15px"><span style="box-sizing:border-box;color:rgb(0,0,0)">Natural Language Processing<br></span></li></ul><p></p><p dir="ltr" style="line-height:2.00004;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:large;text-align:start">*** Submission Instructions ***</span></p><ol><li style="margin-left:15px">Go to the<a href="https://2024.ieeewcci.org/"> IEEE WCCI 2024 website</a> and click on "Submit your paper".<br></li><li style="margin-left:15px">You will be redirected to EDAS. Log into the system.</li><li style="margin-left:15px">Select "IJCNN 2024 Special Session Papers"<br></li><li style="margin-left:15px">Insert details of your paper and select the topic "Special Session: Deep Learning for Graphs"<br></li><li style="margin-left:15px">Click on "Register Paper". Good Luck!!</li></ol><div><br></div></div><div><font size="4">*** Session Organisers ***</font></div><div><ul><li style="margin-left:15px"><a href="https://sites.google.com/view/nicknavarin" target="_blank">Nicolò Navarin</a> (University of Padua)</li><li style="margin-left:15px"><a href="https://pages.di.unipi.it/bacciu/" target="_blank">Davide Bacciu</a> (University of Pisa)</li><li style="margin-left:15px"><a href="https://dzambon.github.io/" target="_blank">Daniele Zambon</a> (Swiss AI Lab IDSIA, Università della Svizzera italiana)</li><li style="margin-left:15px"><a href="https://diningphil.github.io/" target="_blank">Federico Errica</a> (NEC Laboratories Europe)</li><li style="margin-left:15px"><a href="https://danielecastellana22.github.io/" target="_blank">Daniele Castellana</a> (University of Florence)</li><li style="margin-left:15px"><a href="https://sites.google.com/view/lpasa-math-unipd/" target="_blank">Luca Pasa</a> (University of Padua)</li><li style="margin-left:15px"><a href="https://www.drigoni.it/" target="_blank">Davide Rigoni</a> (University of Padua)</li><li style="margin-left:15px"><a href="https://sites.google.com/view/filippombianchi/home" target="_blank">Filippo Maria Bianchi</a> (UiT the Arctic University of Norway)</li></ul></div></div></div></div>