<div dir="ltr"><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Call for Papers<u></u><u></u></span></b></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><u></u> <u></u></span></b></p><h1 style="margin:0cm;break-after:avoid;font-size:16pt;font-family:"Calibri Light",sans-serif;color:rgb(47,84,150);font-weight:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Journal:  Cognitive Computation, Springer (IF: 5.42)<u></u><u></u></span></b></h1><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><u></u> <u></u></span></b></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Special Issue Title:  <u></u></span>Advances in Multi-modal Deep and Shallow Neural Networks for Neuroimaging Applications</b></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><u></u> <u></u></span></b></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><span style="box-sizing:inherit;font-weight:bolder"></span></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">We welcome your submissions.</span></b></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><br></span></b></p><p class="MsoNormal"><b><span lang="EN-IN" style="font-size:11pt;font-family:Arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><br></span></b></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica 
Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><span style="box-sizing:inherit;font-weight:bolder">Aim and Motivation:</span></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Over the past few decades, there has been an exponential increase in the volume, veracity and variety of multi-modal Big data generated from medical imaging applications. This has led to growing challenges for machine learning researchers to effectively extract hidden features and reduce artifacts automatically from images, in order to enhance disease classification, diagnosis, prognosis, segmentation and risk assessment (such as ionizing radiation exposure and side effect of contrast agents). Most existing solutions to these problems are suboptimal owing to risks associated with model training that often lead to inaccurate image acquisition and analysis on account of e.g. overfitting, noise in image, class imbalance and inappropriate features selection. Hence, automated and reliable quality control in medical imaging is a crucial factor for future widespread clinical deployment of machine learning based solutions.</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">In the context of neuroimaging applications, neuroimaging scans are being increasingly and contextually used, along with social, clinical and laboratory data, to detect and diagnose neurological diseases, such as Alzheimer’s disease, Multiple sclerosis, Parkinson’s disease etc.  The sources of neuroimaging modalities are from a wide variety of clinical settings, including electrocardiography (ECG), electroencephalography (EEG), magnetic resonance imaging (MRI), functional MRI (fMRI), positron emission tomography (PET). Recent literature has shown the potential of deep and shallow neural network-based multimodal learning algorithms to address a range of neuroimaging challenges on account of their automatic multimodal feature selection, learning and generalisation capabilities. 
</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">This timely special issue aims to bring together world-leading cutting-edge research (from both academia and industry) on multimodal neural network algorithms, including integrated deep and shallow models, that can increase the diagnosis and prognosis accuracy in analysis of neuroimaging Big data.<br style="box-sizing:inherit"> </p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><span style="box-sizing:inherit;font-weight:bolder">Topics: </span>Topics include but are not limited to:</p><ul style="box-sizing:inherit;margin-top:0px;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><li style="margin-left:15px;box-sizing:inherit">Multimodal deep neural networks</li><li style="margin-left:15px;box-sizing:inherit">Multimodal shallow neural networks</li><li style="margin-left:15px;box-sizing:inherit">Integrated deep and shallow models for multimodal learning</li><li style="margin-left:15px;box-sizing:inherit">Real-time segmentation, clustering and classification</li><li style="margin-left:15px;box-sizing:inherit">Sparse, interpretable and privacy preserving data analytics</li><li style="margin-left:15px;box-sizing:inherit">Real-time Image acquisition, resolution, registration and production</li><li style="margin-left:15px;box-sizing:inherit">Automated multimodal artifacts reduction in neuroimaging Automated quality assessment and clinical validation models</li><li style="margin-left:15px;box-sizing:inherit">Emerging multimodal neuroimaging applications</li></ul><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><br style="box-sizing:inherit"> </p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><span style="box-sizing:inherit;font-weight:bolder">Deadlines:</span></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Submissions deadline: April 30, 2022</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">First notification of acceptance: June 30, 2022</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Submission of revised papers: August 30, 2022</p><p 
style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Final notification to authors: October 30, 2022</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Rolling publication of special issue: late 2022/early 2023</p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><br></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)"><span style="box-sizing:inherit;font-weight:bolder">Guest Editors:</span></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">M. Tanveer (Coordinator), Indian Institute of Technology Indore, India, Email: <a href="mailto:mtanveer@iiti.ac.in" target="_blank" style="color:rgb(0,75,131);box-sizing:inherit;background-color:transparent">mtanveer@iiti.ac.in</a></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Chin-Teng Lin, University of Technology Sydney, Australia, Email: <a href="mailto:Chin-Teng.Lin@uts.edu.au" target="_blank" style="color:rgb(0,75,131);box-sizing:inherit;background-color:transparent">Chin-Teng.Lin@uts.edu.au</a></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Yu-dong Zhang, University of Leicester (UK), Email: <a href="mailto:yudongzhang@ieee.org" target="_blank" style="color:rgb(0,75,131);box-sizing:inherit;background-color:transparent">yudongzhang@ieee.org</a></p><p style="box-sizing:inherit;padding:0px;margin:0px 0px 1.5em;color:rgb(51,51,51);font-family:-apple-system,"system-ui","Segoe UI",Roboto,Oxygen-Sans,Ubuntu,Cantarell,"Helvetica Neue",sans-serif;font-size:18px;background-color:rgb(252,252,252)">Kaizhu Huang, Xi’an Jiaotong-Liverpool University, China, Email: <a href="mailto:kh476@duke.edu" target="_blank" style="color:rgb(0,75,131);box-sizing:inherit;background-color:transparent">kh476@duke.edu</a></p><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>----------------------------------------------------------<br>Dr. M. 
----------------------------------------------------------
Dr. M. Tanveer (General Chair - ICONIP 2022)
Associate Professor and Ramanujan Fellow
Department of Mathematics
Indian Institute of Technology Indore
Email: mtanveer@iiti.ac.in
Mobile: +91-9413259268
Homepage: http://iiti.ac.in/people/~mtanveer/

Associate Editor: IEEE TNNLS (IF: 10.45).
Associate Editor: Pattern Recognition, Elsevier (IF: 7.74).
Action Editor: Neural Networks, Elsevier (IF: 8.05).
Board of Editors: Engineering Applications of AI, Elsevier (IF: 6.21).
Associate Editor: Neurocomputing, Elsevier (IF: 5.72).
Editorial Board: Applied Soft Computing, Elsevier (IF: 6.72).
Associate Editor: Cognitive Computation, Springer (IF: 5.42).
Associate Editor: International Journal of Machine Learning & Cybernetics (IF: 4.012).