From g.goodhill at wustl.edu Sun Jan 1 11:21:18 2023 From: g.goodhill at wustl.edu (Goodhill, Geoffrey) Date: Sun, 1 Jan 2023 16:21:18 +0000 Subject: Connectionists: Postdoc Fellowships in new Center at Washington University in St Louis Message-ID: Dear Connectionists, Washington University in St Louis has now launched a new Center for Theoretical and Computational Neuroscience, ctcn.wustl.edu. This is a joint initiative between the Schools of Medicine, Engineering, and Arts and Sciences, and provides a hub for neuroscientists to collaborate with mathematicians, physicists and engineers to find creative solutions to some of the most difficult problems currently facing neuroscience and artificial intelligence. The CTCN is now recruiting a cohort of outstanding Postdoctoral Fellows to work at the interface between theoretical and experimental labs and help forge new collaborations. Washington University in St Louis is ranked in the top 10 worldwide for Neuroscience and Behavior. Salary for CTCN Fellows is significantly above standard NIH postdoc rates, and funds for conference travel are included. In addition, WashU offers excellent benefits and comprehensive access to career development, professional and personal support. Applications from women and under-represented minorities are particularly welcome. The collaborative environment at WashU is a reflection of the critical importance placed on diversity, equity and inclusion and creating a welcoming place where postdocs can thrive. The St Louis metropolitan area has a population of almost 3M and is rich in culture, green spaces and thriving music and arts scenes, as well as being quite affordable. For more details on this prestigious Fellowship opportunity, including how to apply, please see ctcn.wustl.edu. Review of applications will begin on Jan 30th and continue until the positions are filled. 
Professor Geoffrey J Goodhill Departments of Developmental Biology and Neuroscience Affiliate appointments: Physics, Biomedical Engineering, Computer Science and Engineering, and Electrical and Systems Engineering Washington University School of Medicine 660 S. Euclid Avenue St. Louis, MO 63110 g.goodhill at wustl.edu https://neuroscience.wustl.edu/people/geoffrey-goodhill-phd -------------- next part -------------- An HTML attachment was scrubbed... URL: From brainandmorelab at gmail.com Sun Jan 1 16:41:36 2023 From: brainandmorelab at gmail.com (Brain and More Lab) Date: Sun, 1 Jan 2023 22:41:36 +0100 Subject: Connectionists: Guest speakers presenting their papers at our lab meeting online Message-ID: Dear all, Happy New Year! We have a YouTube channel where we post our journal club sessions and also guest talks on neuroimaging or computational pathology (one example of each): "Brain structure-function coupling provides signatures for task decoding & individual fingerprinting" Preti et al. Neuroimage https://youtu.be/v7bUdsoLzx4 and "AI-based assessment of cardiac allograft rejections" Lipkova et al. Nature Medicine https://youtu.be/60rYpebjEKs We organize these talks on Friday early afternoons (CET); if you want to propose a paper, write to a.crimi at sano.science and agree upon an available time slot. Talks are streamed in real time on Teams and later made available on YouTube. Best, BAMlab From suashdeb at gmail.com Sun Jan 1 21:16:53 2023 From: suashdeb at gmail.com (Suash Deb) Date: Mon, 2 Jan 2023 07:46:53 +0530 Subject: Connectionists: Extension of deadline, ISMSI 2023 Message-ID: Dear friends and esteemed colleagues, Happy New Year to all.
In response to numerous requests, the deadline for submission of manuscripts for the 7th ISMSI 2023, an annual event of the India International Congress on Computational Intelligence (www.iicci.in), has been extended to January 30, 2023 (http://www.ismsi.org). We hope this helps, and we look forward to receiving your manuscripts, if not already submitted, in the coming days. Thanks and kind regards, Suash Deb General Chair, ISMSI 2023 From malini.vinita.samarasinghe at ini.rub.de Mon Jan 2 02:13:38 2023 From: malini.vinita.samarasinghe at ini.rub.de (Vinita Samarasinghe) Date: Mon, 2 Jan 2023 08:13:38 +0100 Subject: Connectionists: Postdoc position - Cheng Lab - Application deadline extended Message-ID: <2f2fc783-8c43-2131-80aa-b2dde5187de3@ini.rub.de> *The application deadline for this position has been extended until 31.01.2023* *Postdoctoral Position in Computational Neuroscience at the Institute for Neural Computation, Faculty of Computer Science* Prof. Sen Cheng, Institute for Neural Computation, Faculty of Computer Science at the Ruhr University Bochum, invites applications for a full-time (currently 39.83 hours/week) *Postdoctoral position (TV-L E13) in Computational Neuroscience.* The position starts as soon as possible and is funded until 30.06.2025. *Job description* The position is part of the Collaborative Research Center "Extinction Learning" (SFB 1280) and studies the principles underlying spatial learning and its extinction with reinforcement learning models. A particular focus is the role of episodic-like memory in learning and extinction processes. The research group is highly dynamic and uses diverse computational modeling approaches including biological neural networks, cognitive modeling, and machine learning to investigate learning and memory in humans and animals. For further information see www.rub.de/cns.
*Your Profile:* Candidates must have * a doctoral degree in neuroscience, physics, mathematics, electrical/biomedical engineering or a closely related field, * relevant experience in mathematical modeling, * excellent programming skills (e.g., Python, C/C++, Matlab), * excellent communication skills in English, * the ability to work well in a team. Research experience in neuroscience would be a further asset. The Ruhr University Bochum is home to a vibrant research community in neuroscience and cognitive science. The Institute for Neural Computation combines different areas of expertise ranging from experimental and theoretical neuroscience to machine learning and robotics. Please send your application, including CV, transcripts and research statement electronically, as a *single PDF file*, to samarasinghe at ini.rub.de. In addition, at least two academic references must be sent independently to the above email address. The deadline for applications is January 31st, 2023. Travel costs for interviews will not be reimbursed. The Ruhr University Bochum is committed to equal opportunity. We strongly encourage applications from qualified women and persons with disabilities. We are committed to providing a supportive work environment for female researchers, in particular those with young children. Our university provides mentoring and coaching opportunities specifically aimed at women in research. We have a strong research network with female role models and will provide opportunities to network with them. Wherever possible, events will be scheduled during regular childcare hours. Special childcare will be arranged if events have to be scheduled outside of regular hours, in case of sickness and during school or daycare closures. Where childcare is not an option, parents will be offered a home office solution. Contact person: Vinita Samarasinghe, samarasinghe at ini.rub.de -- Vinita Samarasinghe M.Sc., M.A.
Science Manager Arbeitsgruppe Computational Neuroscience Institut für Neuroinformatik Ruhr-Universität Bochum, NB 3/73 Postfachnummer 110 Universitätsstr. 150 44801 Bochum Tel: +49 (0) 234 32 27996 Email: samarasinghe at ini.rub.de From aapo.hyvarinen at helsinki.fi Mon Jan 2 04:55:57 2023 From: aapo.hyvarinen at helsinki.fi (Hyvärinen Aapo) Date: Mon, 2 Jan 2023 11:55:57 +0200 Subject: Connectionists: Open-rank faculty position in (computational) statistics, U Helsinki Message-ID: Dear All, The University of Helsinki has an open faculty position in Statistics, with emphasis on computational statistics and related theory. All levels are considered: assistant, associate, and full professor. Please see: https://jobs.helsinki.fi/job/Professor-or-AssistantAssociate-Professor-of-Statistics-%28Mathematical-and-Computational-Statistics%29/761124902/ Aapo Hyvärinen From dwang at cse.ohio-state.edu Mon Jan 2 09:54:24 2023 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Mon, 2 Jan 2023 14:54:24 +0000 Subject: Connectionists: NEURAL NETWORKS, Jan. 2023 Message-ID: Neural Networks - Volume 157, January 2023 https://www.journals.elsevier.com/neural-networks Editorial: Another bumper year Taro Toyoizumi, DeLiang Wang On the role of feedback in image recognition under noise and adversarial attacks: A predictive coding perspective Andrea Alamia, Milad Mozafari, Bhavin Choksi, Rufin VanRullen Free energy model of emotional valence in dual-process perceptions Hideyoshi Yanagisawa, Xiaoxiang Wu, Kazutaka Ueda, Takeo Kato A Reinforcement Meta-Learning framework of executive function and information demand Massimo Silvetti, Stefano Lasaponara, Nabil Daddaoua, Mattias Horan, Jacqueline Gottlieb Gateway identity and spatial remapping in a combined grid and place cell attractor Tristan Baumann, Hanspeter A.
Mallot Meta-HGT: Metapath-aware HyperGraph Transformer for heterogeneous information network embedding Jie Liu, Lingyun Song, Guangtao Wang, Xuequn Shang Multi-Aspect enhanced Graph Neural Networks for recommendation Chenyan Zhang, Shan Xue, Jing Li, Jia Wu, ... Jun Chang Robustness meets accuracy in adversarial training for graph autoencoder Xianchen Zhou, Kun Hu, Hongxia Wang Inverse free reduced universum twin support vector machine for imbalanced data classification Hossein Moosaei, M.A. Ganaie, Milan Hladík, M. Tanveer Maximum Decentral Projection Margin Classifier for High Dimension and Low Sample Size problems Zhiwang Zhang, Jing He, Jie Cao, Shuqing Li Pairwise learning problems with regularization networks and Nyström subsampling approach Cheng Wang, Ting Hu, Siyang Jiang More refined superbag: Distantly supervised relation extraction with deep clustering Suizhu Yang, Yanxia Liu, Yuantong Jiang, Zhiqiang Liu ExpGCN: Review-aware Graph Convolution Network for explainable recommendation Tianjun Wei, Tommy W.S. Chow, Jianghong Ma, Mingbo Zhao Adversarial style discrepancy minimization for unsupervised domain adaptation Xin Luo, Wei Chen, Zhengfa Liang, Chen Li, Yusong Tan DAFA-BiLSTM: Deep Autoregression Feature Augmented Bidirectional LSTM network for time series prediction Heshan Wang, Yiping Zhang, Jing Liang, Lili Liu Improving malicious email detection through novel designated deep-learning architectures utilizing entire email Trivikram Muralidharan, Nir Nissim Event-triggered adaptive dynamic programming for decentralized tracking control of input constrained unknown nonlinear interconnected systems Qiuye Wu, Bo Zhao, Derong Liu, Marios M. Polycarpou Discriminative and Geometry-Preserving Adaptive Graph Embedding for dimensionality reduction Jianping Gou, Xia Yuan, Ya Xue, Lan Du, ...
Yi Zhang Neurodynamics-driven holistic approaches to semi-supervised feature selection Yadi Wang, Jun Wang Temperature guided network for 3D joint segmentation of the pancreas and tumors Qi Li, Xiyu Liu, Yiming He, Dengwang Li, Jie Xue Revisiting graph neural networks from hybrid regularized graph signal reconstruction Jiaxing Miao, Feilong Cao, Hailiang Ye, Ming Li, Bing Yang BASeg: Boundary aware semantic segmentation for autonomous driving Xiaoyang Xiao, Yuqian Zhao, Fan Zhang, Biao Luo, ... Chunhua Yang Pinning synchronization of stochastic neutral memristive neural networks with reaction-diffusion terms Xiang Wu, Shutang Liu, Huiyu Wang Multiple asymptotical -periodicity of fractional-order delayed neural networks under state-dependent switching Jingxuan Ci, Zhenyuan Guo, Han Long, Shiping Wen, Tingwen Huang Practical synchronization of neural networks with delayed impulses and external disturbance via hybrid control Shiyu Dong, Xinzhi Liu, Shouming Zhong, Kaibo Shi, Hong Zhu Tropical support vector machines: Evaluations and extension to function spaces Ruriko Yoshida, Misaki Takamori, Hideyuki Matsumoto, Keiji Miura Inferring the location of neurons within an artificial network from their activity Alexander J. Dyer, Lewis D. Griffin Finite-time consensus control for multi-agent systems with full-state constraints and actuator failures Jianhui Wang, Yancheng Yan, Zhi Liu, C.L. Philip Chen, ... Kairui Chen Neurodynamics-driven portfolio optimization with targeted performance criteria Jun Wang, Xin Gan Rutting prediction and analysis of influence factors based on multivariate transfer entropy and graph neural networks Jinren Zhang, Jinde Cao, Wei Huang, Xinli Shi, Xingye Zhou Image-based time series forecasting: A deep convolutional neural network approach Artemios-Anargyros Semenoglou, Evangelos Spiliotis, Vassilios Assimakopoulos Classification-based prediction of network connectivity robustness Yang Lou, Ruizi Wu, Junli Li, Lin Wang, ... 
Guanrong Chen Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach Jie Pan, Jingwei Huang, Gengdong Cheng, Yong Zeng Improved Residual Network based on norm-preservation for visual recognition Bharat Mahaur, K.K. Mishra, Navjot Singh Stacked attention hourglass network based robust facial landmark detection Ying Huang, He Huang Attention-enabled gated spiking neural P model for aspect-level sentiment classification Yanping Huang, Hong Peng, Qian Liu, Qian Yang, ... Mario J. Pérez-Jiménez From erdi.peter at wigner.mta.hu Mon Jan 2 16:49:39 2023 From: erdi.peter at wigner.mta.hu (Érdi Péter) Date: Mon, 2 Jan 2023 22:49:39 +0100 (CET) Subject: Connectionists: study abroad program for undergraduates Message-ID: <279ffa47-98db-ca41-16ca-39de5abd283a@rmki.kfki.hu> Dear All: You may or may not remember that there is a study abroad program, Budapest Semester in Cognitive Science (for details, see https://www.bscs-us.org/). We opened it in 2004, and after a three-year break, we are enthusiastically reopening it in Fall 2023. After the pandemic, we are making a fresh start by integrating traditional values with unique aspects. Cognitive science lies in the overlapping area of the hard and social sciences. Its main goal is the interdisciplinary study of the mind. By understanding how your own and others' minds work, you may have a selective advantage in managing your life in an uncertain and competitive world. The program is open and appropriate for undergraduates, typically in their junior year. Many students come from biology, psychology, computer science, and philosophy, but we have also had chemistry, physics, and history students who benefited from participating in BSCS. A transcript is provided by Eötvös Loránd University, the most historic university in Hungary. Budapest is known as one of the most beautiful cities in the world.
It contains the flat Pest and the hilly Buda, separated by the Danube (Duna in Hungarian). Cheap flights densely connect the city to many European cities, so students can visit many of them during the weekends. I am happy to answer any questions you may have. Peter Erdi Henry Luce Professor of Complex Systems Studies, Kalamazoo College Founding Co-Director of the BSCS, Budapest perdi at kzoo.edu From jenny.benois-pineau at u-bordeaux.fr Mon Jan 2 13:29:14 2023 From: jenny.benois-pineau at u-bordeaux.fr (jbenoisp) Date: Mon, 2 Jan 2023 19:29:14 +0100 Subject: Connectionists: CBMI'2023. Orléans, France. Call for Special Sessions References: Message-ID: > Dear Colleagues, > Happy New Year, > We apologize if you receive this information several times. > ======================Call for SS Proposals at CBMI'2023================== > > CBMI'2023 http://cbmi2023.org/ is calling for high-quality Special Sessions addressing innovative research in content-based multimedia indexing and its related broad fields. The main scope of the conference is the analysis and understanding of multimedia content, including > > Multimedia information retrieval (image, audio, video, text) > Mobile media retrieval > Event-based media retrieval > Affective/emotional interaction or interfaces for multimedia retrieval > Multimedia data mining and analytics > Multimedia retrieval for multimodal analytics and visualization > Multimedia recommendation > Multimedia verification (e.g., multimodal fact-checking, deep fake analysis) > Large-scale multimedia database management > Summarization, browsing, and organization of multimedia content > Evaluation and benchmarking of multimedia retrieval systems > Explanations of decisions of AI in Multimedia > Application domains: health, sustainable cities, ecology, culture... > and all this in the era of Artificial Intelligence for analysis and indexing of multimedia and multimodal information.
> > A special oral session will contain oral presentations of long research papers; short papers will be presented as posters during poster sessions, with special mention of the SS. > > - Long research papers should present complete work with evaluations on topics related to the Conference. > > - Short research papers should present preliminary results or more focused contributions. > > An SS proposal has to contain > > - Name, title, affiliation and a short bio of the SS chairs; > > - The rationale; > > - A list of at least 5 potential contributions with a provisional title, authors and affiliation. > > The deadline for SS proposals is approaching: 23rd of January > > Please submit your proposals to the SS chairs > > jenny.benois-pineau at u-bordeaux.fr > mourad.oussalah at oulu.fi > adel.hafiane at insa-cvl.fr > > Jenny Benois-Pineau, PhD, HDR, > Professor of Computer Science, > Chair of International Relations > School of Sciences and Technologies > University of Bordeaux > 351, crs de la Libération > 33405 Talence > France > tel.: +33 (0) 5 40 00 84 24 From hocine.cherifi at gmail.com Tue Jan 3 11:32:12 2023 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Tue, 3 Jan 2023 17:32:12 +0100 Subject: Connectionists: CFP FRCCS 2023 – 3rd French Regional Conference on Complex Systems May 30-June 02 Le Havre Message-ID: *Third French Regional Conference on Complex Systems* May 31 –
June 02, 2023 Le Havre, France *FRCCS 2023* You are cordially invited to submit your contribution until *February 22, 2023.* The *F*rench *R*egional *C*onference on *C*omplex *S*ystems (FRCCS) is an annual international conference organized in France since 2021. After Dijon (2021) and Paris (2022), Le Havre hosts its third edition (FRCCS 2023). It promotes interdisciplinary exchanges between researchers from various scientific disciplines and backgrounds (sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science). FRCCS 2023 is an opportunity to exchange and promote the cross-fertilization of ideas by presenting recent research work, industrial developments, and original applications. Special attention is given to research topics with a high societal impact from the complexity science perspective. *Keynote Speakers* Luca Maria Aiello ITU Copenhagen Denmark Ginestra Bianconi Queen Mary University UK Víctor M. Eguíluz University of the Balearic Islands Spain Adriana Iamnitchi Maastricht University Netherlands Rosario N. Mantegna Palermo University Italy Céline Rozenblat University of Lausanne Switzerland *Submission Guidelines* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: - *Full paper* about *original research* - *Extended Abstract* about published or unpublished research; extended abstracts should be 3-4 pages and must not exceed four pages. o Submissions must follow the Springer publication format available in the journal Applied Network Science under the Instructions for Authors entry.
o All contributions should be submitted in *pdf format* via *EasyChair*. *Publication* *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) of the journals:* o Applied Network Science, edited by Springer o Complexity, edited by Hindawi *Topics include, but are not limited to:* - *Foundations of complex systems* - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks* - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning.
- *Neuroscience, Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, Narrative complexity - *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment - *Infrastructure, planning, and environment* - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences - *Biological and (bio)medical complexity* - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging - *Social complexity* o social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation - *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, and culture - *Organisms and populations* o Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution - *Engineering systems and systems of systems* - bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology - *Complexity in physics and chemistry* - quantum computing, quantum synchronization, quantum chaos, random matrix theory *GENERAL CHAIRS* Cyrille Bertelle LITIS,
Normastic, Le Havre Roberto Interdonato CIRAD, UMR TETIS, Montpellier Join us at COMPLEX NETWORKS 2022 Palermo Italy *-------------------------* Hocine CHERIFI University of Burgundy Franche-Comté Deputy Director LIB EA N° 7534 Editor in Chief Applied Network Science Editorial Board member PLOS One, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity From stefan.wermter at uni-hamburg.de Tue Jan 3 13:11:07 2023 From: stefan.wermter at uni-hamburg.de (Stefan Wermter) Date: Tue, 3 Jan 2023 19:11:07 +0100 Subject: Connectionists: [jobs] Doctoral Network Researchers - Transparent Interpretable Neural Robots Message-ID: <67087a98-224d-d85e-bec5-d80aff995170@uni-hamburg.de> At the University of Hamburg, Dept of Informatics, Knowledge Technology, we are looking for applications for 2 Doctoral Candidate Researchers (PhD students) in neural network technology, intelligent robotics and transparency/explainability in artificial intelligence. Project: TRAIL: TRAnsparent InterpretabLe robots (Doctoral Network, EU MSCA Project) Start date: 1 March 2023 or later as agreed, for 3 years Salary level: salary in accordance with the attractive EU MSCA Doctoral Network regulations for doctoral candidates Application deadline: 31 January 2023 The Doctoral Network TRAIL consists of 15 institutions including universities, research institutions and industry partners. The research area of TRAIL is neural network technology, intelligent robotics and transparency in artificial intelligence, and it is coordinated by Universität Hamburg. The focus of the research at Universität Hamburg is on neural networks and explainability for interactive social robotics.
There are two Doctoral Candidate research positions at Universität Hamburg which will be filled in the context of transparency and interpretation of neural networks for interactive cognitive robots. The researchers will focus on the following topics: Researcher 1: Interpretable latent representations in deep learning architectures Researcher 2: Automated interpretation of neural class activation mapping The successful candidate will receive an attractive salary in accordance with the MSCA regulations for Early-Stage Researchers. The exact (net) salary will be confirmed upon appointment and is dependent on local tax regulations and on the country correction factor (to allow for the difference in cost of living in different EU Member States). The salary includes a living allowance, a mobility allowance and a family allowance (if applicable). Furthermore, TRAIL will offer the opportunity to take advantage of joint scientific research training, transferable skills workshops, and international conferences. For more information about the project and consortium see https://www.inf.uni-hamburg.de/en/inst/ab/wtm/research/trail.html Requirements: An MSc or equivalent in Artificial Intelligence, Computer Science or Engineering with a focus on Intelligent Systems, Intelligent Robotics or Neural Networks is required. Excellent programming skills (Python, C++, PyTorch, machine learning frameworks, ROS, etc.) are needed, and expertise in at least one of neural networks or intelligent robotics is required for these positions. The posts involve substantial traveling within Europe. Each PhD student will be expected to spend secondments at other partner sites and travel to international conferences. According to EU mobility rules, the doctoral researchers must not have resided or carried out their main activity (work, studies, etc.) in Germany for longer than 12 months in the 3 years immediately prior to their recruitment. Applicants who already have a doctoral degree are not eligible.
Excellent English communication skills are an important requirement. Application: Upload your complete application documents (cover letter, curriculum vitae with details about where you resided or carried out your main activity [work, studies, etc.] in the three years immediately prior to the application deadline, copies of degree certificate[s], names and email contacts for two referees who would be willing to write a letter of support for your application, any English language certificates) via the online application form only. For more details on how to apply please go to: https://www.uni-hamburg.de/stellenangebote/ausschreibung.html?jobID=6cdcc9a72e39e9343ed64dd562fa2ea6c4647574 *********************************************** Professor Dr. Stefan Wermter Director of Knowledge Technology Department of Informatics University of Hamburg Vogt-Koelln-Str. 30 22527 Hamburg, Germany Email: stefan dot wermter AT uni-hamburg.de https://www.informatik.uni-hamburg.de/WTM/ *********************************************** From antona at alleninstitute.org Tue Jan 3 19:32:02 2023 From: antona at alleninstitute.org (Anton Arkhipov) Date: Wed, 4 Jan 2023 00:32:02 +0000 Subject: Connectionists: Allen Institute Modeling Software Workshop Message-ID: Happy New Year, everyone! Please join us for the Allen Institute Modeling Software Workshop in beautiful Seattle, on July 13-14, 2023! https://alleninstitute.org/what-we-do/brain-science/events-training/2023-modeling-workshop/ Sponsored by the NIH BRAIN program, this workshop is organized by the Allen Institute and our collaborators at the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign. This 2-day in-person workshop will consist of interactive seminars and hands-on computational work. It will focus on teaching the skills for building and simulating complex and heterogeneous network models grounded in real biological data. 
The tools covered by the workshop include: * The SONATA file format for multiscale neuronal network models and simulation output. * The Brain Modeling ToolKit (BMTK) – a Python-based software package for building and simulating network models at multiple levels of resolution. * Visual Neuronal Dynamics (VND) – a program for displaying, animating, and analyzing neural network models using 3D graphics and built-in scripting. Workshop Topics: * Building heterogeneous neural networks at different levels of resolution * Simulating networks of biophysically detailed, multi-compartmental neuronal models * Simulating networks of point-neuron models * Providing realistic spiking inputs to the neural networks * Simulating perturbations * Simulating extracellular electric fields * Using and sharing models in the SONATA format * Visualizing network models' structure and dynamics in 3D We are eager to host a diverse audience, and we encourage trainees, scientists, and PIs to apply. Prior experience in modeling is not required or expected. The Allen Institute strives to make training opportunities available on a fair and equitable basis. Some travel funding is available for participants with financial need. Please indicate on your application whether such support is needed. Applications will be evaluated on a need-blind basis. There is no fee to participate in the workshop. Applications are due on February 15. All applicants will be notified of the status of their application by April 15. See more information and apply here: https://alleninstitute.org/what-we-do/brain-science/events-training/2023-modeling-workshop/ Anton Arkhipov Associate Investigator T: 206.548.8414 E: antona at alleninstitute.org alleninstitute.org brain-map.org
From J.Bowers at bristol.ac.uk Wed Jan 4 05:59:33 2023 From: J.Bowers at bristol.ac.uk (Jeffrey Bowers) Date: Wed, 4 Jan 2023 10:59:33 +0000 Subject: Connectionists: Senior Research Associate position at Bristol Message-ID: Job opening for a Senior Research Associate (post-doc) working on the EPSRC New Horizons project entitled "Exploring the multiple loci of learning and computation in simple artificial neural networks". The project assesses the adaptive value of learning outside of synapses in spiking networks. The position is based in Bristol, working with Jeffrey Bowers (https://jeffbowers.blogs.bristol.ac.uk/publications/psycho-neuro/) and Benjamin Evans (https://profiles.sussex.ac.uk/p555479-benjamin-evans). Daniel Goodman (https://www.imperial.ac.uk/people/d.goodman) will also be collaborating on the project. Deadline for applying: January 18th. For more details see: https://www.bristol.ac.uk/jobs/find/details/?nPostingId=140994&nPostingTargetId=299159&id=Q50FK026203F3VBQBV7V77V83&LG=UK&mask=newuobext From srodrigues at bcamath.org Wed Jan 4 08:46:01 2023 From: srodrigues at bcamath.org (Serafim Rodrigues) Date: Wed, 4 Jan 2023 14:46:01 +0100 Subject: Connectionists: Open PhD positions in Mathematical, Computational and Experimental Neuroscience Message-ID: Dear All, Up to 11 PhD positions are open at BCAM - Basque Centre for Applied Mathematics, Bilbao (Basque Country, Spain) under the Severo Ochoa program. We invite students in Engineering, Mathematics, Physics (or Biophysics), Computer Science, Neuroscience or related fields to apply for projects within the Mathematical, Computational and Experimental Neuroscience (MCEN) research group led by Ikerbasque Professor Serafim Rodrigues.
MCEN was created with the mission to bridge the gap between mathematics and experimental neuroscience and to crucially enable mathematicians to come closer to neuroscientific experiments and understand the mathematical challenges in neuroscience. We are an interdisciplinary research group that also includes an experimental lab named NeuroMath. It is equipped with state-of-the-art electrophysiology and a setup for the design and implementation of neuromorphic analog circuits. MCEN promotes multidisciplinary projects combining insights from neuroscience, mathematics, physics and computation. The PhD student will work in one of our research lines, namely: - Developing mathematics for neuroscience to explain neurophysiological observations of both normal and pathological brain states (e.g. epilepsy, Alzheimer's Disease), as well as determining the brain's computational principles [1,2]. - Developing models at the interface between statistical physics and computer science to determine theoretical principles of intelligent agents' perception and behavior in complex environments [3,4]. - Developing advanced data-science methods based on topological and geometrical data analysis to determine invariants of neuroscientific data [4]. - Developing closed-loop machine-brain interfaces for alternative clinical therapies (e.g. deep-brain stimulators for epilepsy) and analog neuromorphic circuits [5]. - Finding drug targets via molecular simulations of protein complexes [6]. This involves understanding protein interfaces, and docking of small therapeutic molecules. The Candidate will be co-supervised by Prof. Serafim Rodrigues, Dr. Miguel Aguilera and Dr. Rodrigo Azevedo. Contracts will be 4 years long and will also cover doctoral course tuition fees and predoctoral mobility (up to €6,860 in total). 
The candidate will profit from MCEN's large network of collaborators in Europe (to name a few: France, Germany, The Netherlands, Portugal, UK) and also outside of Europe (to name a few: Brazil, Canada, Japan, Turkey, USA). Applications are open until 26 January 2023 and incorporation is possible between Sept 2023 and Sept 2024 (more information here ). Interested candidates, who have already completed their MSc degree or are in the process of completing it, should contact Prof. Rodrigues: srodrigues at bcamath.org, Dr. Aguilera: sci at maguilera.net, Dr. Azevedo: razevedo at bcamath.org *References* [1] M Desroches, J Rinzel and S Rodrigues, *Classification of bursting patterns: A tale of two ducks*, PLoS Comput Biol *18*(2): e1009752, 2022. [2] S Rodrigues, M Desroches, M Krupa, JM Cortes, TJ Sejnowski and AB Ali, *Time-coded neurotransmitter release at excitatory and inhibitory synapses*, PNAS *113*(8): E1108-E1115, 2016. [3] M Aguilera, SA Moosavi and H Shimazaki, *A unifying framework for mean-field theories of asymmetric kinetic Ising systems*, Nat Commun *12*(1): 1-12, 2021. [4] M Aguilera and MG Bedia, *Adaptation to criticality through organizational invariance in embodied agents*, Scientific Reports *8*(1): 1-11, 2018. [5] A Guidolin, M Desroches, JD Victor, KP Purpura and S Rodrigues, *Geometry of spiking patterns in early visual cortex: a topological data analytic approach*, J Roy Soc Interface *19*(196): 20220677, 2022. [6] V Salari, S Rodrigues, E Saglamyurek, C Simon and D Oblak, *Are Brain-Computer Interfaces Feasible With Integrated Photonic Chips?*, Front Neurosci *15*: 1710, 2022. [7] Z Liu, RA Moreira, A Dujmović, H Liu, B Yang, AB Poma and MA Nash, *Mapping mechanostable pulling geometries of a therapeutic anticalin/CTLA-4 protein complex*, Nano Lett *22*(1): 179-187, 2021. 
-- Serafim Rodrigues Ikerbasque Research Professor Mathematical, Computational and Experimental Neuroscience (MCEN) Group Leader *BCAM - *Basque Center for Applied Mathematics Alameda de Mazarredo, 14 E-48009 Bilbao, Basque Country - Spain Tel. +34 946 567 842 srodrigues at bcamath.org | www.bcamath.org/srodrigues | www.ikerbasque.net/serafim-rodrigues Old Mathematicians never die They just "tend to infinity" -Anonymous *(matematika mugaz bestalde)* -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian at togelius.com Wed Jan 4 13:12:09 2023 From: julian at togelius.com (Julian Togelius) Date: Wed, 4 Jan 2023 13:12:09 -0500 Subject: Connectionists: Industrial Postdoctoral Position in Foundation Models of Behavior for Video Games Message-ID: We have an open Industrial Postdoc position (2 years) with modl.ai Malta and the Institute of Digital Games that PhD graduates from your group/Uni might find interesting. Could you please help us disseminate this opportunity? ---- Industrial Postdoc Call --- Do you have a PhD in AI/machine learning and wish to experience a top-tier industry-based research environment as an industrial postdoc fellow? Apply and join us in our modl.ai Malta office and the Institute of Digital Games - University of Malta and be part of a research team (feat. Julian Togelius, Sebastian Risi and Georgios Yannakakis, among others) that develops the next generation of tools for AI-based testing via large-scale foundation models. We are looking for excellent candidates with experience training large deep learning models such as transformers and a deep understanding of modern machine learning techniques. Moreover, a good grasp of as many of the following areas as possible will be considered advantageous: human-computer interfaces, player modelling, behaviour cloning/imitation learning, procedural content generation, generative systems, game analytics. 
Apply here: (Closing date: Wednesday, 25th January 2023) https://lnkd.in/d-CcuUtv LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:7011973158803005440/ -- Julian Togelius Associate Professor, New York University Department of Computer Science and Engineering mail: julian at togelius.com, web: http://julian.togelius.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rayyes.r at gmail.com Thu Jan 5 04:17:44 2023 From: rayyes.r at gmail.com (Rania Rayyes) Date: Thu, 5 Jan 2023 10:17:44 +0100 Subject: Connectionists: Two PhD Positions in AI & Robotics at KIT, Karlsruhe, Germany Message-ID: Two positions for PhD Researchers in the AI & Robotics Department at the Karlsruhe Institute of Technology (KIT) in Germany. Full Time, up to 5 years, 100% TV-L, E 13. For the department "Robotics and AI" of the Institute of Material Handling and Logistics (IFL) at the Karlsruhe Institute of Technology (KIT), and in cooperation with "InnovationsCampus Mobilität der Zukunft" (ICM), we are looking for full-time academic staff (100 %) available immediately. The doctoral thesis will be on one of the following research topics: - Human-Robot Learning, imitation learning, grasping, teleoperation - Efficient online learning for robot manipulators - Deep Learning for object recognition, detection, and segmentation - Curious robotics: supervised learning, intrinsic motivation, autonomous exploration, robot model learning - Mobile robotics: localization and navigation, SLAM *Application:* Please send your application in the form of a single PDF (certificate, transcripts, CV, state any previous experience) by e-mail to: *application-air at ifl.kit.edu * *Questions: * application-air at ifl.kit.edu *Deadline: 31.01.2023* *More details:* https://www.ifl.kit.edu/download/02-01-23englisch-Akademischer%20Mitarbeiter%20Rayyes%20englisch.pdf https://www.ifl.kit.edu/english/5026_5690.php --- Jun.-Prof. Dr.-Ing. 
Rania Rayyes Head AI & Robotics Group Karlsruher Institut für Technologie (KIT) Institut für Fördertechnik und Logistiksysteme (IFL) InnovationsCampus Mobilität der Zukunft (ICM) Gebäude 50.38, Gotthard-Franz-Straße 8, 76131 Karlsruhe https://www.ifl.kit.edu/mitarbeiter_5688.php -------------- next part -------------- An HTML attachment was scrubbed... URL: From irina.illina at loria.fr Thu Jan 5 09:26:52 2023 From: irina.illina at loria.fr (Irina Illina) Date: Thu, 5 Jan 2023 15:26:52 +0100 (CET) Subject: Connectionists: Post-doctoral and engineer positions at Loria (France) : Automatic speech recognition for non-native speakers in a noisy environment In-Reply-To: <1353687245.10848194.1667998030622.JavaMail.zimbra@loria.fr> References: <1847762572.10846716.1667997869456.JavaMail.zimbra@loria.fr> <522882403.10847309.1667997929778.JavaMail.zimbra@loria.fr> <1353687245.10848194.1667998030622.JavaMail.zimbra@loria.fr> Message-ID: <1105620611.17344686.1672928812973.JavaMail.zimbra@loria.fr> Dear all, Please, could you post it on your lists? Thank you. Best regards, Irina Illina Automatic speech recognition for non-native speakers in a noisy environment Post-doctoral and engineer positions Starting date: beginning of 2023 Duration: 24 months for a post-doc position and 12 months for an engineer position Supervisors: Irina Illina, Associate Professor, HDR Lorraine University LORIA-INRIA Multispeech Team, [ mailto:illina at loria.fr | illina at loria.fr ] Context When a person has their hands busy performing a task like driving a car or piloting an airplane, voice is a fast and efficient way to achieve interaction. In aeronautical communications, the English language is most often compulsory. Unfortunately, a large part of the pilots are not native English speakers; they speak with an accent dependent on their native language and are therefore influenced by the pronunciation mechanisms of this language. 
Inside an aircraft cockpit, the non-native voice of the pilots and the surrounding noises are the most difficult challenges to overcome in order to have efficient automatic speech recognition (ASR). The problems of non-native speech are numerous: incorrect or approximate pronunciations, errors of agreement in gender and number, use of non-existent words, missing articles, grammatically incorrect sentences, etc. The acoustic environment adds a disturbing component to the speech signal. Much of the success of speech recognition relies on the ability to take different accents and ambient noises into account in the models used by ASR. Automatic speech recognition has made great progress thanks to the spectacular development of deep learning. In recent years, end-to-end automatic speech recognition, which directly optimizes the probability of the output character sequence based on the input acoustic characteristics, has made great progress [Chan et al., 2016; Baevski et al., 2020; Gulati et al., 2020]. Objectives The recruited person will have to develop methodologies and tools to obtain high-performance non-native automatic speech recognition in the aeronautical context and more specifically in a (noisy) aircraft cockpit. This project will be based on an end-to-end automatic speech recognition system [Shi et al., 2021] using wav2vec 2.0 [Baevski et al., 2020]. This model is one of the most efficient in the current state of the art. The wav2vec 2.0 model enables self-supervised learning of representations from raw audio data (without transcription). How to apply: Interested candidates are encouraged to contact Irina Illina (illina at loria.fr) with the required documents (CV, transcripts, motivation letter, and recommendation letters). Requirements & skills: - M.Sc. or Ph.D. 
degree in speech/audio processing, computer vision, machine learning, or in a related field, - ability to work independently as well as in a team, - solid programming skills (Python, PyTorch), and deep learning knowledge, - good level of written and spoken English. References [Baevski et al., 2020] A. Baevski, H. Zhou, A. Mohamed, and M. Auli. Wav2vec 2.0: A framework for self-supervised learning of speech representations, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020. [Chan et al., 2016] W. Chan, N. Jaitly, Q. Le and O. Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4960-4964, 2016. [Chorowski et al., 2017] J. Chorowski, N. Jaitly. Towards better decoding and language model integration in sequence to sequence models. Interspeech, 2017. [Houlsby et al., 2019] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, S. Gelly. Parameter-efficient transfer learning for NLP. International Conference on Machine Learning, PMLR, pp. 2790-2799, 2019. [Gulati et al., 2020] A. Gulati, J. Qin, C.-C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu, and R. Pang. Conformer: Convolution-augmented transformer for speech recognition. Interspeech, 2020. [Shi et al., 2021] X. Shi, F. Yu, Y. Lu, Y. Liang, Q. Feng, D. Wang, Y. Qian, and L. Xie. The accented English speech recognition challenge 2020: open datasets, tracks, baselines, results and methods. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6918-6922, 2021. 
-- Best regards, Irina Illina Associate Professor, HDR Lorraine University LORIA-INRIA Multispeech Team office C147 Building C 615 rue du Jardin Botanique 54600 Villers-les-Nancy Cedex Tel: +33 3 54 95 84 90 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Stevensequeira92 at hotmail.com Thu Jan 5 10:00:14 2023 From: Stevensequeira92 at hotmail.com (steven gouveia) Date: Thu, 5 Jan 2023 15:00:14 +0000 Subject: Connectionists: [2nd and Final Call for Abstracts] - 4th International Conference on Philosophy of Mind (Portugal) Message-ID: Dear All, The Call for Abstracts for the 4th International Conference on Philosophy of Mind: 4E's Approach to the Mind/Brain, taking place from 6 to 8 March 2023 at the Faculty of Philosophy and Social Sciences of the Portuguese Catholic University (Braga, Portugal), is still open. The Conference will have 6 Keynote Speakers: - Shaun Gallagher (Memphis Uni. | USA) - Adriana Sampaio (Uni. Minho | PT) - Karl Friston (Uni. College London | UK) - Anna Ciaunica (Uni. Lisbon | PT) - Peter Gärdenfors (Lund Uni. | SW) - Dirk Geeraerts (Leuven Uni. | BL) An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information, and a short bio. Files should be submitted in Word doc(x) format. Please indicate in the subject of the message the following structure: "4th Inter. Conf. First Name Last Name - title of abstract." Final Deadline: January 15, 2023. All info related to the conference can be found here: https://4confphilmind.weebly.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cgf at isep.ipp.pt Thu Jan 5 12:17:05 2023 From: cgf at isep.ipp.pt (Carlos) Date: Thu, 5 Jan 2023 17:17:05 +0000 Subject: Connectionists: Discovery Science (DS 2023) - CFP Message-ID: -------------------------------------------------------------------------------- Please distribute (Apologies for cross posting) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- CALL FOR PAPERS DS 2023 Discovery Science Conference Website link: https://ds2023.inesctec.pt/ October 9-11, 2023, Porto, Portugal -------------------------------------------------------------------------------- Special Issue -------------------------------------------------------------------------------- The authors of a number of selected papers presented at DS 2023 will be invited to submit extended versions of their papers for possible inclusion in a special issue of the Machine Learning journal (published by Springer) on Discovery Science. Fast-track processing will be used to have them reviewed and published. -------------------------------------------------------------------------------- Award -------------------------------------------------------------------------------- There will be a Best Student Paper Award worth 555 EUR, sponsored by Springer. -------------------------------------------------------------------------------- Aims and Scope -------------------------------------------------------------------------------- The Discovery Science 2023 conference provides an open forum for intensive discussions and exchange of new ideas among researchers working in the area of Discovery Science. The conference focus is on the use of artificial intelligence methods in science. 
Its scope includes the development and analysis of methods for discovering scientific knowledge, coming from machine learning, data mining, intelligent data analysis, and big data analytics, as well as their application in various domains. Possible topics include, but are not limited to: -Artificial intelligence (machine learning, knowledge representation and reasoning, natural language processing, statistical methods, etc.) applied to science -Machine learning: supervised learning (including ranking, multi-target prediction and structured prediction), unsupervised learning, semi-supervised learning, active learning, reinforcement learning, online learning, transfer learning, etc. -Knowledge discovery and data mining -Causal modeling -AutoML, meta-learning, planning to learn -Machine learning and high-performance computing, grid and cloud computing -Literature-based discovery -Ontologies for science, including the representation and annotation of datasets and domain knowledge -Explainable AI, interpretability of machine learning and deep learning models -Process discovery and analysis -Computational creativity -Anomaly detection and outlier detection -Data streams, evolving data, change detection, concept drift, model maintenance -Network analysis -Time-series analysis -Learning from complex data -Graphs, networks, linked and relational data -Spatial, temporal and spatiotemporal data -Unstructured data, including textual and web data -Multimedia data -Data and knowledge visualization -Human-machine interaction for knowledge discovery and management -Evaluation of models and predictions in discovery setting -Machine learning and cybersecurity -Applications of the above techniques in scientific domains, such as -Physical sciences (e.g., materials sciences, particle physics) -Life sciences (e.g., systems biology/systems medicine) -Environmental sciences -Natural and social sciences -------------------------------------------------------------------------------- 
Important Dates -------------------------------------------------------------------------------- Abstract submission (deadline): May 27, 2023 Full paper submission (deadline): Jun 3, 2023 Notification of acceptance: July 21, 2023 Camera ready version, author registration: August 6, 2023 All dates are specified as 23:59:59 SST (Standard Samoa Time / Anywhere on Earth) -------------------------------------------------------------------------------- Submission procedure -------------------------------------------------------------------------------- Contributions, written in English, must be formatted according to the guidelines of the Lecture Notes in Computer Science (LNCS) series by Springer-Verlag, which are available together with templates here: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines. We strongly recommend using the LNCS template for LaTeX. The page limit for any contribution, including figures, title pages, references, and appendices, is 10-12 pages in LNCS format. Submission of the camera-ready version of the paper has to include the authors' consent to publish on the above Springer LNCS website. Authors may not submit any paper which is under review elsewhere or which has been accepted for publication in a journal or another conference; neither will they submit their papers elsewhere during the review period of DS 2023. Submission System link: https://cmt3.research.microsoft.com/DS2023 -------------------------------------------------------------------------------- Venue -------------------------------------------------------------------------------- DS 2023 will be held on October 9-11, 2023 in Porto, Portugal. The conference will take place at the Sheraton Hotel, Porto, Portugal. 
-------------------------------------------------------------------------------- Organizing Committee -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- General Chairs -------------------------------------------------------------------------------- João Gama - University of Porto, Portugal Pedro Henriques Abreu - University of Coimbra, Portugal -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Program Chairs -------------------------------------------------------------------------------- Albert Bifet - University of Waikato, New Zealand Ana Carolina Lorena - Aeronautics Institute of Technology, Brazil Rita P. Ribeiro - University of Porto, Portugal -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Steering Committee Chair -------------------------------------------------------------------------------- Saso Dzeroski - Jozef Stefan Institute, Slovenia -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Publicity Chairs -------------------------------------------------------------------------------- Carlos Abreu Ferreira - Polytechnic Institute of Porto, Portugal Ricardo Cerri - Federal University of São Carlos, Brazil Wenbin Zhang - Michigan Technological University, USA -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Local Organization Committee -------------------------------------------------------------------------------- Bruno Veloso - University Portucalense, Portugal Joana Cristo Santos - 
University of Coimbra, Portugal José Pereira Amorim - University of Coimbra, Portugal Miriam Seoane Santos - University of Coimbra, Portugal Ricardo Cardoso Pereira - University of Coimbra, Portugal -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Contacts -------------------------------------------------------------------------------- Organizing Committee Contact Person: Pedro Henriques Abreu - University of Coimbra, Portugal - pha at dei.uc.pt -------------------------------------------------------------------------------- Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From demian.battaglia at univ-amu.fr Thu Jan 5 12:46:16 2023 From: demian.battaglia at univ-amu.fr (BATTAGLIA Demian) Date: Thu, 5 Jan 2023 17:46:16 +0000 Subject: Connectionists: DEADLINE EXTENDED - Postdoc opening in computational analysis of electrophysiological data (FunSy team - Strasbourg) Message-ID: <764ccc01841143b0a0fd118e08c8c59e@univ-amu.fr> We are currently inviting applications for a full-time postdoc position under the joint co-mentoring of Dr. Demian Battaglia and Dr. Romain Goutagny (University of Strasbourg, France; Functional Systems Dynamics team - FunSy, https://funsyteam.org). The position starts as soon as possible and can last up to two years. This job offer is funded by the French ANR "HippoComp" project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. 
In our joint FunSy team, we perform electrophysiological recordings in hippocampus and cortex during spatial navigation and memory tasks in mice (wild type and mutants developing various neuropathologies) and have access to vast data through local and international cooperation. Furthermore, we use a large spectrum of computational tools ranging from time-series and network analyses, information theory and machine learning to multi-scale computational modeling. A good idea of some of the approaches used in the lab can be found in this recent preprint: Douchamps et al. 2022. Hippocampal gamma oscillations form complex ensembles modulated by behavior and learning, bioRxiv (https://doi.org/10.1101/2022.10.17.512498). We are seeking candidates with various profiles, with main expertise in either computational or experimental neuroscience. Computational project components would deal with numerically intensive data analyses (dimensionality reduction, encoding/decoding, design of discriminative features...). Experimental project components would focus primarily on simultaneous fiber photometry and local field potential recordings, together with the pre-processing of these data. Interested candidates could have hybrid experimental/computational projects. The ideal candidate will have a doctorate in systems or computational neuroscience, with previous experience in the analysis of rich datasets. In addition, we expect programming skills (e.g., Python, C/C++, Matlab), good communication skills in English (oral and written), the ability to work well in an international team, curiosity and open-mindedness. Strasbourg University is home to an emerging and active community in neuroscience, from molecules and cells to whole-brain imaging and behavior, with links to the clinic. The proximity to Germany and Switzerland is an asset allowing interactions with transnational computational and neuroscience teams. 
Strasbourg, home of important European institutions, is a small, fascinating city with a rich student and cultural life, well connected via high-speed train to important European cities (e.g. less than two hours from Paris). EXTENDED DEADLINE FOR APPLICATION: January 22nd 2023 Please send your application, including CV, motivation letter and representative publications (preprints accepted) electronically, to both PIs (Romain Goutagny and Demian Battaglia). In addition, contacts of at least two academic references must be communicated (contact email or recommendation letter attachments). Pre-selected candidates will be interviewed over Zoom or invited to give a talk if possible. We are committed to equal opportunity hiring. We strongly encourage applications from qualified women or candidates from any country. Contact persons: Demian Battaglia, dbattaglia AT unistra.fr Romain Goutagny, goutagny AT unistra.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From tt at cs.dal.ca Thu Jan 5 10:05:29 2023 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Thu, 5 Jan 2023 11:05:29 -0400 Subject: Connectionists: Donald Hill Postdoctoral Fellow - Individualized Support and Training Optimization for Limbic Rehabilitation Message-ID: Dear Colleagues, A Donald Hill Post-Doctoral Fellow position is open at Dalhousie University, Canada. The Post-Doctoral Fellow will be co-supervised by Dr. Ya-Jun Pan at the Advanced Control and Mechatronics Lab in the Dept. of Mechanical Engineering in the Faculty of Engineering, and Dr. Thomas Trappenberg at the Laboratory for Hierarchical Anticipatory Learning in the Faculty of Computer Sciences. The candidate will conduct research on the development of a supporting system that would be used in clinics to support physio- and occupational therapists, and potentially even at the home of patients. 
The research will investigate a sensor system including a proprioceptive sensor to measure muscle movements, a single electroencephalogram (EEG) electrode over the motor cortex, and a vision system to measure arm movements. Multiple sensors and the vision system will generate rich data for analysis in the rehabilitation process. The resulting data are analysed with specialized software to produce a training plan for each patient. The research will involve system design, data analysis, software development, extensive simulation, and experimental studies. The successful candidate should have a Ph.D. degree (or close to completion) in any of the following programs: Computer Sciences, Mechanical Engineering, Electrical and Computer Engineering, Biomedical Engineering, or a related field. The candidate should have a solid background in mechatronics, systems and control, robotics, measurements, machine learning/deep learning, software development and data analysis. Expertise in rehabilitation, perception, assistive robotics, and intelligent systems would be an asset. The candidate should have strong communication skills, a publication record, technical writing skills, programming skills in Solidworks, Matlab/Simulink, Python, C/C++, Java, and user interface design. Applicants who are currently registered in a Doctoral program are expected to complete their doctorate by the start date of the Fellowship. Applicants who have already completed their doctorate must have done so within the past 3 years. As a rule, individuals who hold a permanent academic position to which they will return will not be considered in this competition. Exceptions may be made in the case of individuals who, for example, are substantially changing their field of research. The Donald Hill Post-Doctoral Fellowships have been created to accelerate the careers of recent doctoral graduates engaged in leading-edge research who have also demonstrated an interest in the impact of technology on broader society. 
There is an expectation that fellows will become engaged and appreciate the necessity and benefits of interfacing with a wide diversity of disciplines, knowledge, and cultures to recognize and solve emerging challenges. Interested parties should submit a cover letter, current CV and contact information of 2 references, statement of motivation and research interests (up to two pages), transcripts of all obtained degrees (in English), and a minimum of 3 sample publications. Due to the terms of the funding arrangement, fellowships are restricted to Canadian citizens and permanent residents. Dalhousie University commits to achieving inclusive excellence through continually championing equity, diversity, inclusion, and accessibility. The university encourages applications from Indigenous persons (especially Mi'kmaq), persons of Black/African descent (especially African Nova Scotians), and members of other racialized groups, persons with disabilities, women, and persons identifying as members of 2SLGBTQ+ communities, and all candidates who would contribute to the diversity of our community. For more information, please visit *www.dal.ca/hiringfordiversity* . -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry.gouk at gmail.com Thu Jan 5 12:25:46 2023 From: henry.gouk at gmail.com (Henry Gouk) Date: Thu, 5 Jan 2023 17:25:46 +0000 Subject: Connectionists: ICLR 2023 Workshop on Domain Generalization Message-ID: ICLR 2023 Workshop: What do we need for successful domain generalization? Website: https://domaingen.github.io/ The real challenge for any machine learning system is to be reliable and robust in any situation, even if it is different compared to training conditions. Existing general-purpose approaches to domain generalization (DG), a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time, 
have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: *what do we need for successful domain generalization?* We conjecture that additional information of some form is required for general-purpose learning methods to be successful in the DG setting. The purpose of this workshop is to identify possible sources of such information, and demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift. Specific topics of interest include, but are not limited to: * Leveraging domain-level meta-data * Exploiting multiple modalities to achieve robustness to distribution shift * Frameworks for specifying known invariances/domain knowledge * Causal modeling and how it can be robust to distribution shift * Empirical analysis of existing domain generalization methods and their underlying assumptions * Theoretical investigations into the domain generalization problem and potential solutions Submissions are accepted via OpenReview: https://openreview.net/group?id=ICLR.cc/2023/Workshop/DG Submission deadline: February 3, 2023 Author notifications: March 3, 2023 Meeting: May 5, 2023 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From v.nowack at imperial.ac.uk Fri Jan 6 05:55:12 2023 From: v.nowack at imperial.ac.uk (Nowack, Vesna) Date: Fri, 6 Jan 2023 10:55:12 +0000 Subject: Connectionists: CFP - Genetic Improvement Workshop GI @ ICSE 2023 - DEADLINE approaching (13 Jan 2023) Message-ID: The 12th International Workshop on Genetic Improvement Co-located with the 45th IEEE/ACM International Conference on Software Engineering, ICSE 2023, Melbourne, Australia and online, 14--20 May 2023 http://geneticimprovementofsoftware.com/events/icse2023 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Important Dates: ~~~~~~~~~~~~~ Submission: 13 Jan 2023 Notification: 24 Feb 2023 Camera-ready: 17 Mar 2023 Workshop: 14-20 May 2023 (one day) Submission: ~~~~~~~~~~ Submit an anonymised (double-blind) PDF in IEEE conference proceedings format https://www.ieee.org/conferences/publishing/templates.html Research and Position Papers: ~~~~~~~~~~~~~~~~~~~~~~~~ * Research papers (limit eight pages) * Position papers (limit two pages) Topics of interest: ~~~~~~~~~~~~~~ Research and applications include, but are not limited to, using Genetic Improvement to: * Improve efficiency * Decrease memory consumption * Decrease energy consumption * Transplant new functionality * Specialise software * Translate between programming languages * Generate multiple versions of software * Improve low-level or binary code * Improve software engineering artifacts, e.g., documentation, specification, training and educational tools, materials and techniques * Repair bugs * GI techniques in industrial settings Keynote speaker: ~~~~~~~~~~~~~ Dr. Myra B. Cohen, Iowa State University (USA) Student travel awards: ~~~~~~~~~~~~~~~~~~ Up to five student travel awards 2 prizes: ~~~~~~~ * a prize will be awarded to the presenter who gives the best presentation at the workshop. * a prize will be given for the best paper.
Both prizes will be awarded at the workshop, which will be held in hybrid mode, both in Melbourne, Australia and virtually on the Internet, as part of the 45th International Conference on Software Engineering, ICSE 2023. Workshop Chairs: ~~~~~~~~~~~~~~ Vesna Nowack v.nowack at imperial.ac.uk Markus Wagner markus.wagner at adelaide.edu.au Gabin An agb94 at kaist.ac.kr Aymeric Blot aymeric.blot at univ-littoral.fr Justyna Petke j.petke at ucl.ac.uk The full call for submissions: ~~~~~~~~~~~~~~~~~~~~~~ We invite submissions that discuss recent developments in all areas of research on, and applications of, Genetic Improvement. GI is the premier workshop in the field and provides an opportunity for researchers interested in automated program repair and software optimisation to disseminate their work, exchange ideas and discover new research directions. Topics of interest include both the theory and practice of Genetic Improvement. Applications include, but are not limited to, using GI to: * Improve efficiency * Decrease memory consumption * Decrease energy consumption * Transplant new functionality * Specialise software * Translate between programming languages * Generate multiple versions of software * Improve low-level or binary code * Improve software engineering artifacts, e.g., documentation, specification, training and educational tools, materials and techniques * Repair bugs * GI techniques in industrial settings Keynote: The invited keynote will be given by Dr. Myra B. Cohen, a full professor at Iowa State University (USA), where she holds the Lanh and Oanh Nguyen Chair in Software Engineering in the Department of Computer Science.
She is head of Iowa State's LaVA-Ops, the Laboratory for Variability-Aware Assurance and Testing of Organic Programs. As well as genetic improvement, her research covers software testing of highly-configurable software, SBSE, applications of combinatorial designs (CIT), and the synergy between systems and synthetic biology and software engineering. She has served on many software engineering conferences, including this year as the Technical Briefings track chair of ICSE 2023. Hybrid Event: Due to the continued COVID-19 pandemic, as with GI 2020-22, the workshop may be held online, with recordings available on YouTube, e.g. https://youtube.com/playlist?list=PLI8fiFpB7BoIHgl5CsdtjfWvHlE5N6pje Student travel awards: We are pleased to announce that we will offer up to 5 awards of up to $250 (USD) each to partially reimburse travel costs for students whose work is presented at the GI at ICSE 2023 workshop. Priority will be given based on the student's need and submission quality. Students applying for an award should submit a first-author regular full paper to the Genetic Improvement workshop. Moreover, their supervisor should send a one-paragraph note of recommendation to Dr. Vesna Nowack v.nowack at imperial.ac.uk listing: * the student's area of work. * the supervisor's support of the student's application. These awards are available thanks to the EPSRC grant on "Automated Software Specialisation Using Genetic Improvement". We encourage authors to submit early and in-progress work. The workshop emphasises interaction and discussion. All papers should be submitted electronically via HotCRP https://gi-at-icse2023-workshop.hotcrp.com/ as double-blind PDFs in the ICSE conference format. All accepted papers must be presented at GI 2023 and will appear in the ICSE workshops volume. The official publication date of the workshop proceedings is the date the proceedings are made available by IEEE. This date may be up to two weeks prior to the first day of ICSE 2023.
The official publication date affects the deadline for any patent filings related to published work. Details about the Genetic Improvement workshop can be found via: http://geneticimprovementofsoftware.com/events/icse2023 The webpage contains GI papers, blogs, success stories, a living survey, tools, benchmarks, people, GitHub, as well as past events, tutorials and workshops. Details about ICSE 2023 can be found at the web page: https://conf.researchr.org/home/icse-2023 Kind regards, Vesna Nowack On behalf of the GI at ICSE'23 Workshop chairs -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at cs.ucy.ac.cy Sat Jan 7 04:55:07 2023 From: george at cs.ucy.ac.cy (George A. Papadopoulos) Date: Sat, 7 Jan 2023 11:55:07 +0200 Subject: Connectionists: UMAP '23: 31st ACM Conference on User Modeling, Adaptation and Personalization: Last Call for Papers Message-ID: <50R8FKEK-TUX1-WEIX-4L7E-X7INGWWQKULK@cs.ucy.ac.cy> *** Last Call for Papers *** UMAP '23: 31st ACM Conference on User Modeling, Adaptation and Personalization June 26 - 29, 2023, St. Raphael Resort, Limassol, Cyprus https://www.um.org/umap2023/ ACM UMAP is the premier international conference for researchers and practitioners working on systems that adapt to individual users or groups of users, and that collect, represent, and model user information. ACM UMAP is sponsored by ACM SIGCHI and SIGWEB. User Modeling Inc., as the core Steering Committee, oversees the conference organization. The proceedings, published by ACM, will be part of the ACM Digital Library. The theme of UMAP 2023 is "Personalization in Times of Crisis".
Specifically, we welcome submissions that highlight the impact that critical periods (such as the COVID-19 pandemic, ongoing wars, and climate change, to name a few) can have on user modeling, personalization, and adaptation of (intelligent) systems; the focus is on investigations that capture how these trying times may have influenced user behavior and whether new models are required. While we encourage submissions related to this theme, the scope of the conference is not limited to the theme only. As always, contributions from academia, industry, and other organizations discussing open challenges or novel research approaches are expected to be supported by rigorous evidence appropriate to the claims (e.g., user study, system evaluation, computational analysis). IMPORTANT DATES - Paper Abstracts: January 19, 2023 (mandatory) - Full paper: January 26, 2023 - Notification: April 11, 2023 - Camera-ready: May 2, 2023 - Conference: June 26 - 29, 2023 Note: The submission deadlines are at 11:59 pm AoE time (Anywhere on Earth) CONFERENCE TOPICS We welcome submissions related to user modeling, personalization, and adaptation of (intelligent) systems targeting a broad range of users and domains. For detailed descriptions and the suggested topics for each track please visit the UMAP 2023 website. Personalized Recommender Systems This track invites works from researchers and practitioners on recommender systems. In addition to mature research works addressing technical aspects of recommendations, we welcome research contributions that address questions related to user perception, decision-making, and the business value of recommender systems.
Knowledge Graphs, Semantics, Social and Adaptive Web This track welcomes works focused on the use of knowledge representations (e.g., novel knowledge bases), graph algorithms (e.g., graph embedding techniques), and social network analysis in the service of addressing all aspects of personalization, user model building, and personal experience in online social systems. Moreover, this track invites works in adaptive hypermedia, as well as the semantic and social web. Intelligent User Interfaces This track invites works exploring how to make the interaction between computers and people smarter and more productive, leveraging solutions from human-computer interaction, data mining, natural language processing, information visualization, and knowledge representation and reasoning. Personalizing Learning Experiences through User Modeling This track invites researchers, developers, and practitioners from various disciplines to submit their innovative learning solutions, share acquired experiences, and discuss their modeling challenges for personalized adaptive learning. Responsibility, Compliance, and Ethics Researchers, developers, and practitioners have a social responsibility to account for the impact that technologies have on individuals (users, providers, and other stakeholders) and society. This track invites works related to the science of building, maintaining, evaluating, and studying adaptive systems that are fair, transparent, respectful of users' privacy, and beneficial to society. Personalization for Persuasive and Behavior Change Systems This track invites submissions focused on personalization and tailoring for persuasive technologies, including but not limited to personalization models, user models, computational personalization, design, and evaluation methods. It also welcomes work that brings attention to the user experience and designing personalized and adaptive behavior change technologies.
Virtual Assistants, Conversational Interactions, and Personalized Human-robot Interaction This track invites works investigating new models and techniques for adapting synthetic companions (e.g., virtual assistants, chatbots, social robots) to individual users. With the conversational modality so in vogue across disciplines, this track welcomes work highlighting the modeling and deployment of synthetic companions driven by conversational search and recommendation paradigms. Research Methods and Reproducibility This track invites submissions on methodologies to evaluate personalized systems, benchmarks, and measurement scales, with particular attention to the reproducibility of results and techniques. Furthermore, the track looks for submissions that report new insights from reproducing existing works. SUBMISSION AND REVIEW PROCESS Submissions for any of the aforementioned tracks should have a maximum length of *14 pages* (excluding references) in the ACM new single-column format (https://www.acm.org/publications/proceedings-template). (Papers of any length up to 14 pages are encouraged; reviewers will comment on whether the size is appropriate for the contribution.) The submission link is: https://easychair.org/conferences/?conf=umap23 . Accepted papers will be included in the conference proceedings and presented at the conference. At least one author should register for the conference by the early registration cut-off date. UMAP uses a *double-blind* review process. Authors must omit their names and affiliations from their submissions; they should also avoid obvious identifying statements. For instance, citations to the authors' prior work should be in the third person. Submissions not abiding by the anonymity requirements will be desk rejected. UMAP has a *no dual submission* policy, which is why full paper submissions should not be currently under review at another publication venue.
Further, UMAP operates under the ACM Conference Code of Conduct (https://www.acm.org/about-acm/policy-against-harassment) as well as the ACM Publication Policies and Procedures (https://www.acm.org/publications/policies). PROGRAM CHAIRS - Julia Neidhardt, TU Wien, Austria - Sole Pera, TU Delft, The Netherlands TRACK CHAIRS Personalized Recommender Systems - Noemi Mauro (University of Torino, Italy) - Olfa Nasraoui (University of Louisville, USA) - Marko Tkalcic (University of Primorska, Slovenia) Knowledge Graphs, Semantics, Social and Adaptive Web - Daniela Godoy (ISISTAN - CONICET/UNICEN University, Argentina) - Cataldo Musto (University of Bari, Italy) Intelligent User Interfaces - Bart Knijnenburg (Clemson University, USA) - Katrien Verbert (KU Leuven, Belgium) - Wolfgang Wörndl (TU Munich, Germany) Personalizing Learning Experiences through User Modeling - Oleksandra Poquet (TU Munich, Germany) - Olga C. Santos (UNED, Spain) Responsibility, Compliance, and Ethics - Michael Ekstrand (Boise State University, USA) - Peter Knees (TU Wien, Austria) Personalization for Persuasive and Behavior Change Systems - Federica Cena (University of Torino, Italy) - Rita Orji (Dalhousie University, Canada) - Jun Zhao (Oxford University, England) Virtual Assistants, Conversational Interactions, and Personalized Human-Robot Interaction - Li Chen (Hong Kong Baptist University, Hong Kong) - Yi Zhang (University of California Santa Cruz, USA) - Ingrid Zukerman (Monash University, Australia) Research Methods and Reproducibility - Dietmar Jannach (University of Klagenfurt, Austria) - Alan Said (University of Gothenburg, Sweden) Contact information: umap2023-program at um.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From b.mirzasoleiman at gmail.com Sat Jan 7 01:46:32 2023 From: b.mirzasoleiman at gmail.com (Baharan Mirzasoleiman) Date: Fri, 6 Jan 2023 22:46:32 -0800 Subject: Connectionists: The 3rd Sparse NN workshop [at ICLR'23] - Call For Papers! Message-ID: Dear all, We are excited to announce the third iteration of our workshop "Sparsity in Neural Networks: On practical limitations and tradeoffs between sustainability and efficiency" at ICLR'23, building on its successful 2021 and 2022 editions. The workshop will take place in a hybrid manner on May 5th, 2023 in Kigali, Rwanda. The goal of the workshop is to bring together members of many communities working on neural network sparsity to share their perspectives and the latest cutting-edge research. We have assembled an incredible group of speakers, and we are seeking contributed work from the community. For more information and to submit your paper, please visit the workshop website: https://www.sparseneural.net/ Important Dates - February 3rd, 2023 [AOE]: Submit an abstract and supporting materials - March 3rd, 2023: Notification of acceptance - May 5, 2023: Workshop Topics (including but not limited to) - Algorithms for Sparsity - Pruning both for post-training inference, and during training - Algorithms for fully sparse training (fixed or dynamic), including biologically inspired algorithms - Algorithms for ephemeral (activation) sparsity - Sparsely activated expert models - Scaling laws for sparsity - Sparsity in deep reinforcement learning - Systems for Sparsity - Libraries, kernels, and compilers for accelerating sparse computation - Hardware with support for sparse computation - Theory and Science of Sparsity - When is overparameterization necessary (or not) - Optimization behavior of sparse networks - Representation ability of sparse networks - Sparsity and generalization - The stability of sparse models - Forgetting owing to sparsity, including fairness, privacy and bias concerns - Connecting
neural network sparsity with traditional sparse dictionary modeling - Applications for Sparsity - Resource-efficient learning at the edge or the cloud - Data-efficient learning for sparse models - Communication-efficient distributed or federated learning with sparse models - Graph and network science applications This workshop is non-archival, and it will not have proceedings. Submissions will receive one of three possible decisions: - Accept (Spotlight Presentation). The authors will be invited to present the work during the main workshop, with live Q&A. - Accept (Poster Presentation). The authors will be invited to present their work as a poster during the workshop's interactive poster sessions. - Reject. The paper will not be presented at the workshop. Eligible Work - The latest research innovations at all stages of the research process, from work-in-progress to recently published papers, where "recent" refers to work presented within one year of the workshop, e.g., the manuscript is first publicly available on arXiv or elsewhere no earlier than February 3, 2022. We permit under-review or concurrent submissions. - Position or survey papers on any topics relevant to this workshop (see above) Required materials 1. One mandatory abstract (250 words or fewer) describing the work 2. Up to 8 pages in length excluding the references and the appendix, for both technical and position papers. We encourage work-in-progress submissions and expect most submissions to be approximately 4 pages. Papers can be submitted in any of the ICLR, NeurIPS or ICML conference formats. We hope you will join us in attendance! Best Regards, On behalf of the organizing team (Aleksandra, Atlas, Baharan, Decebal, Elena, Ghada, Trevor, Utku, Zahra) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sebastien.destercke at hds.utc.fr Fri Jan 6 10:20:15 2023 From: sebastien.destercke at hds.utc.fr (Sebastien Destercke) Date: Fri, 6 Jan 2023 16:20:15 +0100 Subject: Connectionists: Next sipta seminar: 13th january (Friday) at 3pm Paris time Message-ID: <1EED993B-2FD6-4A9E-82B4-5105C3A31D8F@hds.utc.fr> Dear colleagues, We are delighted to announce our sixth SIPTA online seminar on imprecise probabilities (IP), and the first of this new year. These monthly events are open to anyone interested in IP, and will be followed by a Q&A and open discussion. They also provide an occasion for the community to meet, keep in touch and exchange between in-person events. The sixth seminar will take place on the 13th of January (Friday), at 3pm Paris time. The zoom link is https://utc-fr.zoom.us/j/89529523919 This sixth seminar will consist of a joint talk aiming to provide an overview of recent advances and challenges in a given field. For this, we are very happy to have three speakers who will each give their view on the topic of engineering in IP (during a 1-hour talk in total): Alice Cicirello, founder and Head of the Data, Vibration and Uncertainty Group at TU Delft; Matthias Faes, Chair for Reliability Engineering at TU Dortmund; and Edoardo Pattelli, Professor in Risk and Uncertainty Quantification and Head of the Centre for Intelligent Infrastructure at the Department of Civil and Environmental Engineering, Strathclyde University. All of them have made impressive contributions on how one can deal with imprecision and uncertainty in various engineering problems, with very rich expertise in both theoretical and applied problems. We are very much looking forward to hearing their thoughts about the state of IP in engineering! Curious? Then check out the abstract on the webpage of the SIPTA seminars: sipta.org/events/sipta-seminars.
The zoom link for attending the seminar can also be found on that same page shortly before the event. So please mark your calendars for the 13th of January, at 15:00 Paris time, and join us for the occasion. And for those who missed the previous seminar and want to catch up, or simply want to see it again and again, it is now online at https://www.youtube.com/watch?v=sdsFlLudLjo. See you at the seminar! Sébastien, Enrique and Jasper From e.neftci at fz-juelich.de Sat Jan 7 08:16:40 2023 From: e.neftci at fz-juelich.de (Emre Neftci) Date: Sat, 7 Jan 2023 14:16:40 +0100 Subject: Connectionists: Job Openings at Forschungszentrum Juelich (Location: Aachen) Message-ID: <1f347427-2b5b-4449-b8f0-8cfa8b002916@app.fastmail.com> Dear All, We have several openings in neuromorphic engineering at Forschungszentrum Jülich in the new Neuromorphic Software Ecosystems institute (PGI-15) led by Emre Neftci. Our institute performs research in cutting-edge algorithms for neuromorphic hardware in close collaboration with materials and circuits researchers. PhD Position - Continual Learning with Metaplastic Neural Networks and Nanodevices https://www.fz-juelich.de/en/careers/jobs/2022D-178 PhD openings - Neuromorphic Hardware for Event-based Computing at the Edge https://www.fz-juelich.de/en/careers/jobs/2022D-092 Please share and apply if interested! Kind regards, Emre -- Prof. Dr. Emre Neftci Head of Peter Grünberg Institute 15 - Neuromorphic Software Ecosystems Forschungszentrum Jülich www.fz-juelich.de/pgi/PGI-15 www.nmi-lab.org ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Volker Rieke Geschaeftsfuehrung: Prof. Dr.-Ing.
Wolfgang Marquardt (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Dr. Ir. Pieter Jansens, Prof. Dr. Astrid Lambrecht, Prof. Dr. Frauke Melchior ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ From david at irdta.eu Sat Jan 7 06:23:36 2023 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 7 Jan 2023 12:23:36 +0100 (CET) Subject: Connectionists: DeepLearn 2023 Winter: regular registration January 13 Message-ID: <170689241.321608.1673090616063@webmail.strato.com> ****************************************************************** 8th INTERNATIONAL SCHOOL ON DEEP LEARNING DeepLearn 2023 Winter Bournemouth, UK January 16-20, 2023 https://irdta.eu/deeplearn/2023wi/ *********** Co-organized by: Department of Computing and Informatics Bournemouth University Institute for Research Development, Training and Advice - IRDTA Brussels/London ****************************************************************** Regular registration: January 13, 2023 ****************************************************************** SCOPE: DeepLearn 2023 Winter will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria and Luleå.
Deep learning is a branch of artificial intelligence covering a spectrum of current exciting research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, bioinformatics, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be covered, and the main challenges identified, through 20 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully live remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2023 Winter is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.
VENUE: DeepLearn 2023 Winter will take place in Bournemouth, a coastal resort town on the south coast of England. The venue will be: Talbot Campus Bournemouth University https://www.bournemouth.ac.uk/about/contact-us/directions-maps/directions-our-talbot-campus STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Yi Ma (University of California, Berkeley), CTRL: Closed-Loop Data Transcription via Rate Reduction Daphna Weinshall (Hebrew University of Jerusalem), Curriculum Learning in Deep Networks Eric P. Xing (Carnegie Mellon University), It Is Time for Deep Learning to Understand Its Expense Bills PROFESSORS AND COURSES: Matias Carrasco Kind (University of Illinois, Urbana-Champaign), [intermediate] Anomaly Detection Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Graph Representation Learning Sumit Chopra (New York University), [intermediate] Deep Learning for Healthcare Luc De Raedt (KU Leuven), [introductory/intermediate] From Statistical Relational to Neuro-Symbolic Artificial Intelligence Marco Duarte (University of Massachusetts, Amherst), [introductory/intermediate] Explainable Machine Learning João Gama (University of Porto), [introductory] Learning from Data Streams: Challenges, Issues, and Opportunities Claus Horn (Zurich University of Applied Sciences), [intermediate] Deep Learning for Biotechnology Zhiting Hu (University of California, San Diego) & Eric P.
Xing (Carnegie Mellon University), [intermediate/advanced] A "Standard Model" for Machine Learning with All Experiences Nathalie Japkowicz (American University), [intermediate/advanced] Learning from Class Imbalances Gregor Kasieczka (University of Hamburg), [introductory/intermediate] Deep Learning Fundamental Physics: Rare Signals, Unsupervised Anomaly Detection, and Generative Models Karen Livescu (Toyota Technological Institute at Chicago), [intermediate/advanced] Speech Processing: Automatic Speech Recognition and beyond David McAllester (Toyota Technological Institute at Chicago), [intermediate/advanced] Information Theory for Deep Learning Dhabaleswar K. Panda (Ohio State University), [intermediate] Exploiting High-performance Computing for Deep Learning: Why and How? Fabio Roli (University of Genova), [introductory/intermediate] Adversarial Machine Learning Bracha Shapira (Ben-Gurion University of the Negev), [introductory/intermediate] Recommender Systems Kunal Talwar (Apple), [introductory/intermediate] Foundations of Differentially Private Learning Tinne Tuytelaars (KU Leuven), [introductory/intermediate] Continual Learning in Deep Neural Networks Lyle Ungar (University of Pennsylvania), [intermediate] Natural Language Processing using Deep Learning Bram van Ginneken (Radboud University Medical Center), [introductory/intermediate] Deep Learning for Medical Image Analysis Yu-Dong Zhang (University of Leicester), [introductory/intermediate] Convolutional Neural Networks and Their Applications to COVID-19 Diagnosis OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by January 8, 2023. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. 
Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by January 8, 2023. EMPLOYER SESSION: Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by January 8, 2023. ORGANIZING COMMITTEE: Rashid Bakirov (Bournemouth, local co-chair) Marcin Budka (Bournemouth) Vegard Engen (Bournemouth) Nan Jiang (Bournemouth, local co-chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, organization chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2023wi/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will close and the on-line registration tool will be disabled once the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same.
ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2023wi/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Bournemouth University Rovira i Virgili University Institute for Research Development, Training and Advice - IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed... URL: From EPNSugan at ntu.edu.sg Sat Jan 7 07:48:47 2023 From: EPNSugan at ntu.edu.sg (Ponnuthurai Nagaratnam Suganthan) Date: Sat, 7 Jan 2023 12:48:47 +0000 Subject: Connectionists: IJCNN 2023 SS on "Randomization-Based Deep and Shallow Learning Algorithms" In-Reply-To: References: Message-ID: PDF CFP available from: https://github.com/P-N-Suganthan/CFP To submit to this special session, please use this link: https://edas.info/newPaper.php?c=30081&track=116093 International Joint Conference on Neural Networks 2023 Call for Papers for Special Session on Randomization-Based Deep and Shallow Learning Algorithms Randomization-based learning algorithms have received considerable attention from academics, researchers, and domain workers because randomization-based neural networks can be trained by non-iterative approaches possessing closed-form solutions. Those methods are generally computationally faster than iterative solutions and less sensitive to parameter settings. Even though randomization-based non-iterative methods have attracted much attention in recent years, their deep structures have not been sufficiently developed nor benchmarked. This special session aims to bridge this gap. The first target of this special session is to present the recent advances in randomization-based learning methods. Randomization-based neural networks usually offer non-iterative closed-form solutions.
Secondly, the focus is on promoting the concepts of non-iterative optimization relative to iterative counterparts, such as gradient-based methods and derivative-free iterative optimization techniques. Besides the dissemination of the latest research results on randomization-based and/or non-iterative algorithms, it is also expected that this special session will cover some practical applications, present some new ideas and identify directions for future studies. Original contributions, as well as unbiased comparative studies between randomization-based and non-randomization-based methods with thorough literature reviews, are welcome. Original contributions having biomedical applications with or without randomization algorithms are also welcome. Typical deep/shallow paradigms include (but are not limited to) random vector functional link (RVFL / ensemble deep RVFL), randomized recurrent networks (RRN), kernel ridge regression (KRR) with randomization, extreme learning machines (ELM), random forests (RF), stochastic configuration network (SCN), broad learning system (BLS), convolution neural networks (CNN) with randomization, and so on. Topics: The topics of the special session include (with randomization-based methods), but are not limited to: - Randomized convolutional neural networks - Randomized internal representation learning - Regression, classification, and time series analysis by randomization-based methods - Kernel methods such as kernel ridge regression, kernel adaptive filters, etc. with randomization - Feedforward, recurrent, multilayer, deep and other structures with randomization - Ensemble deep learning with randomization such as the edRVFL - Moore-Penrose pseudoinverse, SVD and other solution procedures. 
- Gaussian process regression - Randomization-based methods using novel fuzzy approaches - Randomization-based methods for large-scale problems with and without kernels - Theoretical analysis of randomization-based methods - Comparative studies with competing methods without randomization - Deep randomized convolutional neural networks - Random/Rotation forests, oblique random forest, and XGBoost based methods - Applications of randomized methods in areas such as biomedicine, finance, economics, signal processing, big data and all other relevant areas Organizers: P. N. Suganthan, Qatar University. p.n.suganthan at qu.edu.qa M. Tanveer, Indian Institute of Technology Indore, India. mtanveer at iiti.ac.in Yudong Zhang, University of Leicester, UK. yudong.zhang at le.ac.uk Important Dates: * Jan 31, 2023 - Paper submission deadline (extension may be offered) * March 31, 2023 - Paper acceptance notification * June 18-23, 2023 - Conference (Gold Coast Convention Centre, Queensland, Australia) Paper Submission: Papers submitted to this Special Session are reviewed according to the same rules as the submissions to the regular sessions of IJCNN 2023. Authors who submit papers to this session are invited to mention it in the submission form. Submissions to regular and special sessions follow identical format, instructions, deadlines, and review procedures. For further information and news, please refer to the IJCNN website: https://2023.ijcnn.org/ To submit to this special session, please use this link: https://edas.info/newPaper.php?c=30081&track=116093 ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at bu.edu Sat Jan 7 12:17:12 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Sat, 7 Jan 2023 17:17:12 +0000 Subject: Connectionists: 2022 PROSE book award in Neuroscience In-Reply-To: References: , Message-ID: Dear Connectionists colleagues, I feel happy and honored to report that my Magnum Opus Conscious Mind, Resonant Brain: How Each Brain Makes a Mind https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 has won the 2022 PROSE book award in Neuroscience from the Association of American Publishers. Best wishes for the New Year! Steve Grossberg Stephen Grossberg http://en.wikipedia.org/wiki/Stephen_Grossberg http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en https://youtu.be/9n5AnvFur7I https://www.youtube.com/watch?v=_hBye6JQCh4 https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Wang Professor of Cognitive and Neural Systems Director, Center for Adaptive Systems Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University sites.bu.edu/steveg steve at bu.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marinella.petrocchi at iit.cnr.it Sun Jan 8 09:25:35 2023 From: marinella.petrocchi at iit.cnr.it (Marinella Petrocchi) Date: Sun, 08 Jan 2023 15:25:35 +0100 Subject: Connectionists: [Last CFP][Updated deadlines!][ECIR 2023][ROMCIR 2023] Message-ID: <3035293b76868bbf0842b9fb426e03d5@iit.cnr.it> [Last CFP][Updated deadlines!][Apologies for multiple postings][ECIR 2023][ROMCIR 2023] ******************************************************************************************************************** ROMCIR 2023: The 3rd International Workshop on Reducing Online Misinformation through Credible Information Retrieval ***UPDATED IMPORTANT DATES*** - Abstract Submission Deadline: *January 15, 2023* - Paper Submission Deadline: *January 22, 2023* - Decision Notifications: *February 26, 2023* - Workshop day: *April 2, 2023* Conference website: https://romcir.disco.unimib.it Submission link: https://easychair.org/conferences/?conf=romcir2023 ******************************************************************************************************************** ***GENERAL DESCRIPTION*** The third edition of the ROMCIR Workshop aims at studying how to provide users with access to (topically) relevant and genuine information, to mitigate the information disorder phenomenon across distinct domains. By "information disorder" we mean all forms of communication pollution, from misinformation made out of ignorance, to the intentional sharing of false content. In this context, all approaches that can help assess the genuineness of information circulating online, and in social media in particular, find their place. 
Given that the problem in recent years has been addressed from various points of view (e.g., fake news detection, bot detection, information genuineness assessment, ...), the purpose of this Workshop proposed at ECIR 2023 is to consider these issues in the context of Information Access and Retrieval, also considering related Artificial Intelligence fields such as Natural Language Processing (NLP), Natural Language Understanding (NLU), Computer Vision, Machine and Deep Learning, etc. ***THEMES*** The themes of interest include, but are not limited to, the following: - Access to genuine information - Bias detection - Bot/spam/troll detection - Computational fact-checking - Crowdsourcing for information genuineness assessment - Deep fakes - Disinformation/misinformation detection - Evaluation strategies to assess information genuineness - Fake news/review detection - Harassment/bullying/hate speech detection - Information polarization in online communities, echo chambers - Propaganda identification/analysis - Retrieval of genuine information - Security, privacy, and information genuineness - Sentiment/emotional analysis - Societal reaction to misinformation - Stance detection - Trust and reputation Data-driven approaches, supported by publicly available datasets, are more than welcome. ***CONTRIBUTIONS*** The workshop solicits two types of contributions relevant to the workshop and suitable for generating discussion: - Original, unpublished contributions (pre-prints submitted to ArXiv are eligible) that will be included in an open-access post-proceedings volume of CEUR Workshop Proceedings (http://ceur-ws.org/), indexed by both Scopus and DBLP. - Already published or preliminary work that will not be included in the post-proceedings volume. All submissions will undergo SINGLE-BLIND peer review by the program committee. 
Submissions are to be done electronically through EasyChair at: https://easychair.org/conferences/?conf=romcir2023 ***SUBMISSION INSTRUCTIONS*** Submissions must be: - no more than 10 pages long (regular papers) - between 5 and 9 pages long (short papers) We recommend that authors use the new CEUR-ART style for writing papers to be published: - An Overleaf page for LaTeX users is available at: https://www.overleaf.com/read/gwhxnqcghhdt - An offline version with the style files including DOCX template files is available at: http://ceur-ws.org/Vol-XXX/CEURART.zip - The paper must contain, as the name of the conference: ROMCIR 2023: The 3rd Workshop on Reducing Online Misinformation through Credible Information Retrieval, held as part of ECIR 2023: the 45th European Conference on Information Retrieval, April 2, 2023, Dublin, Ireland - The title of the paper should follow the regular capitalization of English - Please choose the single-column template - According to CEUR-WS policy, the papers will be published under a CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/deed.en If the paper is accepted, authors will be asked to sign (by hand) an author agreement with CEUR: - In case you do not employ Third-Party Material (TPM) in your draft, sign the document at http://ceur-ws.org/ceur-author-agreement-ccby-ntp.pdf?ver=2020-03-02 - If you do use TPM, the agreement can be found at http://ceur-ws.org/ceur-author-agreement-ccby-tp.pdf?ver=2020-03-02 ***ORGANIZERS*** The following people contribute to the workshop in various capacities and roles: *Workshop Chairs* - Marinella Petrocchi (https://www.iit.cnr.it/en/marinella.petrocchi/), IIT-CNR, Pisa, Italy - Marco Viviani (https://ikr3.disco.unimib.it/people/marco-viviani/), University of Milano-Bicocca, Milan, Italy *Publicity and Proceedings Chair* - Rishabh Upadhyay (https://en.unimib.it/rishabh-gyanendra-upadhyay), University of Milano-Bicocca, Milan, Italy *Program Committee* - Rino Falcone, Institute for 
Cognitive Sciences and Technologies - National Research Council, Italy - Carlos A. Iglesias, Technical University of Madrid, Spain - Petr Knoth, Open University, UK - Udo Kruschwitz, University of Regensburg, Germany - Yelena Mejova, ISI Foundation, Italy - Preslav Nakov, Hamad Bin Khalifa University, Qatar - Symeon Papadopoulos, Centre for Research and Technology, Greece - Marinella Petrocchi, Institute of Informatics and Telematics - National Research Council, Italy - Francesco Pierri, Polytechnic University of Milan - Manuel Pratelli, IMT School for Advanced Studies Lucca, Italy - Fabio Saracco, Enrico Fermi Study and Research Center (CREF), Italy - Marco Viviani, University of Milano-Bicocca, Italy - Arkaitz Zubiaga, Queen Mary University of London, UK - Other PC members will be communicated -- Marinella Petrocchi Senior Researcher @Institute of Informatics and Telematics (IIT) National Research Council (CNR) Pisa (Italy) Mobile: +39 348 8260773 Skype: m_arinell_a Web: https://www.iit.cnr.it/en/marinella.petrocchi/ 'Luck is a matter of geography' (Bandabardò) From amartino at luiss.it Mon Jan 9 02:15:39 2023 From: amartino at luiss.it (Alessio Martino) Date: Mon, 9 Jan 2023 07:15:39 +0000 Subject: Connectionists: Call for Papers Special Issue on Deep Learning for Anomaly Detection Message-ID: Dear Colleagues, I am contacting you in my capacity as Guest Editor for a Special Issue titled: "Deep Learning for Anomaly Detection" to appear in the "Algorithms" MDPI journal: https://www.mdpi.com/journal/algorithms/special_issues/Y072QR9GTI With this call for papers, I invite you and/or your co-authors to submit an original research paper, or a focused review, for our special issue. Deadline for manuscript submissions: 30 August 2023. Submitted papers will be peer reviewed and, upon acceptance, the paper will be published in open access form soon after professional editing. 
Thank you in advance for your consideration, and I sincerely hope that you will accept this invitation to contribute to this Special Issue. If you believe that you will be able to submit a manuscript, I would also greatly appreciate it if you could respond to this invitation at your earliest convenience. "Algorithms" (ISSN 1999-4893) is an EI-, Scopus- and ESCI-indexed, Open Access journal published online monthly by MDPI. Best Regards ________________________________________ Alessio Martino, PhD Assistant Professor of Computer Science LUISS Guido Carli University Department of Business and Management Viale Romania 32, 00197 Rome, Italy (Room 539) Phone: (+39) 06-85225957 E-mail: amartino at luiss.it Web: -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik at oist.jp Mon Jan 9 03:22:33 2023 From: erik at oist.jp (Erik De Schutter) Date: Mon, 9 Jan 2023 08:22:33 +0000 Subject: Connectionists: Apply for Okinawa/OIST Computational Neuroscience Course 2023 till end of the month Message-ID: <706B25D4-3ADE-4FE7-8619-2A587CBA9338@oist.jp> OKINAWA/OIST COMPUTATIONAL NEUROSCIENCE COURSE 2023 Methods, Neurons, Networks and Behaviors June 19 to July 6, 2023 Okinawa Institute of Science and Technology Graduate University, Japan https://groups.oist.jp/ocnc The aim of the Okinawa/OIST Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to have hands-on experience in computational modeling. We invite graduate students and postgraduate researchers to participate in the course, held from June 19th through July 6th, 2023 at an oceanfront seminar house of the Okinawa Institute of Science and Technology Graduate University. Applications are through the course web page (https://groups.oist.jp/ocnc ) only; January 1 - January 31, 2023. Applicants will receive confirmation of acceptance by the end of March. 
As in preceding years, the 18th OCNC will be a comprehensive three-week course covering single neurons, networks, and behaviors with ample time for student projects. The first week will focus exclusively on methods with hands-on tutorials during the afternoons, while the second and third weeks will have lectures by international and local experts. The course has a strong hands-on component based on student-proposed modeling or data analysis projects, which are further refined with the help of a dedicated tutor. Applicants are required to propose their project at the time of application. There is no tuition fee. The sponsor will provide lodging and meals during the course and may provide partial travel support. We hope that this course will be a good opportunity for theoretical and experimental neuroscientists to meet each other and to explore the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan. Invited faculty: - Upinder Bhalla (NCBS, India) - Claudia Clopath (Imperial College London, UK) - Erik De Schutter (OIST) - Kenji Doya (OIST) - Tomoki Fukai (OIST) - Izumi Fukunaga (OIST) - Yukiko Goda (OIST) - Mike Häusser (University College London, UK) - Bernd Kuhn (OIST) - Jinny Kim (KIST, South Korea) - Rosalyn Moran (King's College London, UK) - Steve Prescott (University of Toronto, Canada) - Sam Reiter (OIST) - Greg Stephens (OIST) - Kazumasa Tanaka (OIST) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
URL: From julia.vogt at inf.ethz.ch Mon Jan 9 04:22:38 2023 From: julia.vogt at inf.ethz.ch (Vogt Julia) Date: Mon, 9 Jan 2023 09:22:38 +0000 Subject: Connectionists: Workshop on time series representation for medical applications (TSRL4H) at ICLR 2023 Message-ID: Dear all, We are happy to announce that our workshop on time series representation for medical applications (TSRL4H) will be hosted at ICLR 2023! The workshop will include talks from leading researchers and pioneers in ML, as well as spotlight presentations and poster sessions for accepted papers. The submission deadline is February 3, and we encourage submissions on representation learning for time series that could bring value to medical applications, such as robustness, interpretability, and multimodality, to name a few. More details can be found here: https://sites.google.com/view/tsrl4h-iclr2023 Best Regards Julia Vogt ------------------------------------------------------ Prof. Julia Vogt Medical Data Science Institute for Machine Learning Department of Computer Science ETH Zurich ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jferrer at uma.es Mon Jan 9 05:45:59 2023 From: jferrer at uma.es (javi Ferrer Urbano) Date: Mon, 9 Jan 2023 11:45:59 +0100 Subject: Connectionists: [CFP OLA'23][Extension] Optimization & Learning @Malaga (Spain) Message-ID: <81d65f12-89e0-ed26-86cf-bd8cc6225f90@uma.es> Apologies for cross-posting. We would appreciate it if you could distribute this CFP to your network. ****************************************************************************************
OLA'2023
International Conference on Optimization and Learning
3-5 May 2023
Malaga, Spain
http://ola2023.sciencesconf.org/
SCOPUS Springer Proceedings
****************************************************************************************
OLA is a conference focusing on the future challenges of optimization and/or machine learning methods and their applications. The conference OLA'2023 will provide an opportunity to the international research community in optimization and learning to discuss recent research results and to develop new ideas and collaborations in a friendly and relaxed atmosphere. OLA'2023 welcomes presentations that cover any aspects of optimization and/or machine learning research such as big optimization and learning, optimization for learning, learning for optimization, optimization and learning under uncertainty, deep learning, new high-impact applications, parameter tuning, 4th industrial revolution, computer vision, hybridization issues, optimization-simulation, meta-modeling, high-performance computing, parallel and distributed optimization and learning, surrogate modeling, multi-objective optimization ... Paper submission: We will accept two different types of submissions: - S1: Extended abstracts of work-in-progress and position papers of a maximum of 3 pages - S2: Original research contributions of a maximum of 12 pages Important dates: =============== Paper submission deadline: Jan 27, 2023 [Extended] Notification of acceptance: Feb 17, 2023 Proceedings: Accepted papers in categories S1 and S2 will be published in the proceedings. A SCOPUS- and DBLP-indexed Springer book will be published for accepted long papers. Proceedings will be available at the conference. 
From Stevensequeira92 at hotmail.com Mon Jan 9 11:59:54 2023 From: Stevensequeira92 at hotmail.com (steven gouveia) Date: Mon, 9 Jan 2023 16:59:54 +0000 Subject: Connectionists: [New Book] Thinking the New World: Conversations on Artificial Intelligence Message-ID: Dear All, I am very happy to announce the official publication of my new edited book "Thinking the New World: Conversations on Artificial Intelligence". The book gathers together interviews with 13 experts on Ethics and Artificial Intelligence, and its goal is to introduce to the general public - in an informal but rigorous language - some of the ethical problems of the "new world", the world of AI. The book is published in 3 formats via Amazon (if you are in Europe, you can choose the specific Amazon that is closest to you): (1) Paperback b/w (19.99$) cf. https://www.amazon.com/dp/B0BQY2F1SK ; (2) Hardback colour (59.99$) cf. https://www.amazon.com/dp/B0BQXW28NB ; (3) PDF format (9.99$) cf. (via email: stevensequeira92 @ gmail . com / private message). List of Contributors: - Peter Singer (Princeton, USA / Melbourne, Australia); - Paul Thagard (Waterloo University, Canada); - Shoji Nagataki (Chukyo University, Japan); - Hajo Greif (Technical University of Munich, Germany); - David Harris Smith (McMaster University, Canada); - Pii Telakivi (University of Helsinki, Finland); - Sabina Leonelli (University of Exeter, England); - Francesca Minerva (University of Milan, Italy); - Fabio Fossa (Politecnico di Milano, Italy); - Wulf Loh (University of Tübingen, Germany); - Shawn Kaplan (Adelphi University, USA); - Radu Uszkai (Bucharest University of Economic Studies, Romania); - Joshua Jowitt (Newcastle University, England). Share it widely so we can create greater ethical awareness regarding Artificial Intelligence in general. Steven S. Gouveia Ph.D. 
(University of Minho) ex-PostDoc Research Fellow (University of Ottawa) Researcher of the CEFH (Portuguese Catholic University) https://stevensgouveia.weebly.com (Books, papers, talks, etc) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdm at zhaw.ch Mon Jan 9 12:05:44 2023 From: stdm at zhaw.ch (Stadelmann Thilo (stdm)) Date: Mon, 9 Jan 2023 17:05:44 +0000 Subject: Connectionists: [Professorship] Neuro-symbolic AI for industrial use cases (3 more weeks) Message-ID: Dear colleagues, 3 more weeks (until end of Jan) to apply as Prof. for neuro-symbolic AI at one of the fastest-growing AI research centres in Zurich: https://www.zhaw.ch/en/jobs/vacant-positions/job-details/job/detail/2940394/ The package includes generous base funding from the Rieter Foundation and Rieter, something very special in the applied research landscape. Looking forward to getting to know you! Thilo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Stevensequeira92 at hotmail.com Mon Jan 9 11:57:11 2023 From: Stevensequeira92 at hotmail.com (steven gouveia) Date: Mon, 9 Jan 2023 16:57:11 +0000 Subject: Connectionists: Philosophy & Neuroscience | Survey Message-ID: Philosophy & Neuroscience | Survey Within the scope of the Center for Philosophical and Humanistic Studies (CEFH) from the Portuguese Catholic University (Braga, Portugal), we would like to request your collaboration by answering this survey, whose objective is to evaluate how philosophy and neuroscience can cooperate. This study was approved by the ethics committee of the center and was designed based on the Declaration of Helsinki. All reported data will be treated jointly, anonymously and confidentially. No data that identify the participants will be requested. 
So, if you are over 18 years old, if you work on topics related to Philosophy of Mind, Cognitive Science, Neuroscience (empirical and theoretical), Psychology, and related fields, and if you graduated in any of those fields (Master, PhD, etc.), we appreciate your collaboration. Any questions that arise can be addressed to the principal investigator at this email: stevensequeira92 @ gmail.com The Survey can be found here: https://forms.gle/46yQ5uCE9gSZoMjn9 Many thanks for your valuable contribution. Steven S. Gouveia Ph.D. (University of Minho) Researcher of the CEFH (Portuguese Catholic University) https://stevensgouveia.weebly.com (Books, papers, talks, etc) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stevensequeira92 at hotmail.com Mon Jan 9 12:05:25 2023 From: stevensequeira92 at hotmail.com (steven gouveia) Date: Mon, 9 Jan 2023 17:05:25 +0000 Subject: Connectionists: [LAST Call for Abstracts] - 4th International Conference on Philosophy of Mind (Portugal) In-Reply-To: References: Message-ID: Dear All, The Call for Abstracts for the 4th International Conference on Philosophy of Mind: 4E's Approach to the Mind/Brain, taking place from 6 to 8 March 2023, at the Faculty of Philosophy and Social Sciences of the Portuguese Catholic University (Braga, Portugal), is still open. The Conference will have 6 Keynote Speakers: - Shaun Gallagher (Memphis Uni. | USA) - Adriana Sampaio (Uni. Minho | PT) - Karl Friston (Uni. College London | UK) - Anna Ciaunica (Uni. Lisbon | PT) - Peter Gärdenfors (Lund Uni. | SW) - Dirk Geeraerts (Leuven Uni. | BL) An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information, and a short bio. Files should be submitted as Word doc(x) files. Please indicate in the subject of the message the following structure: "4th Inter. Conf. First Name Last Name - title of abstract." Final Deadline: January 15, 2023. 
All info related to the conference can be found here: https://4confphilmind.weebly.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From SchockaertS1 at cardiff.ac.uk Tue Jan 10 11:37:07 2023 From: SchockaertS1 at cardiff.ac.uk (Steven Schockaert) Date: Tue, 10 Jan 2023 16:37:07 +0000 Subject: Connectionists: Postdoctoral position at Cardiff University Message-ID: Location: Cardiff, UK Deadline for applications: 31st January 2023 Start date: as soon as possible Duration: 30 months Keywords: natural language processing, neurosymbolic AI, graph neural networks, commonsense reasoning Details about the post Applications are invited for a Research Associate post in the Cardiff University School of Computer Science & Informatics, to work on the EPSRC Open Fellowship project ReStoRe (Reasoning about Structured Story Representations), which is focused on story-level language understanding. The overall aim of this project is to develop methods for learning graph-structured representations of stories. For this post, the specific focus will be on developing common sense reasoning strategies, based on graph neural networks, to fill the gap between what is explicitly stated in a story and what a human reader would infer by "reading between the lines". More details about the post and instructions on how to apply are available here: https://www.jobs.ac.uk/job/CWM298/research-associate Background about the ReStoRe project When we read a story as a human, we build up a mental model of what is described. Such mental models are crucial for reading comprehension. They allow us to relate the story to our earlier experiences, to make inferences that require combining information from different sentences, and to interpret ambiguous sentences correctly. Crucially, mental models capture more information than what is literally mentioned in the story. 
They are representations of the situations that are described, rather than the text itself, and they are constructed by combining the story text with our commonsense understanding of how the world works. The field of Natural Language Processing (NLP) has made rapid progress in the last few years, but the focus has largely been on sentence-level representations. Stories, such as news articles, social media posts or medical case reports, are essentially modelled as collections of sentences. As a result, current systems struggle with the ambiguity of language, since the correct interpretation of a word or sentence can often only be inferred by taking its broader story context into account. They are also severely limited in their ability to solve problems where information from different sentences needs to be combined. As a final example, current systems struggle to identify correspondences between related stories (e.g. different news articles about the same event), especially if they are written from a different perspective. To address these fundamental challenges, we need a method to learn story-level representations that can act as an analogue to mental models. Intuitively, there are two steps involved in learning such story representations: first we need to model what is literally mentioned in the story, and then we need some form of commonsense reasoning to fill in the gaps. In practice, however, these two steps are closely interrelated: interpreting what is mentioned in the story requires a model of the story context, but constructing this model requires an interpretation of what is mentioned. The solution that is proposed in this fellowship is based on representations called story graphs. These story graphs encode the events that occur, the entities involved, and the relationships that hold between these entities and events. 
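To make the story-graph idea concrete, such a graph could be represented as a small typed graph of event and entity nodes with labeled relation edges. The sketch below is purely illustrative: the classes, node kinds, and relation names are our assumptions, not the ReStoRe data model.

```python
# Hypothetical story-graph sketch: typed nodes for events and entities,
# labeled edges for the relations between them. All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    kind: str    # "event" or "entity" (assumed taxonomy)
    label: str

@dataclass
class StoryGraph:
    nodes: dict = field(default_factory=dict)   # id -> Node
    edges: list = field(default_factory=list)   # (source_id, relation, target_id)

    def add_node(self, node):
        self.nodes[node.id] = node

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

# "Mary handed the book to John": one event node, three entity nodes.
g = StoryGraph()
g.add_node(Node("e1", "event", "hand_over"))
for nid, label in [("p1", "Mary"), ("p2", "John"), ("o1", "book")]:
    g.add_node(Node(nid, "entity", label))
g.add_edge("e1", "agent", "p1")
g.add_edge("e1", "recipient", "p2")
g.add_edge("e1", "theme", "o1")
```

In this picture, commonsense completion amounts to adding nodes and edges that the text never states explicitly, for example that John holds the book after the event.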
A story can then be viewed as an incomplete specification of a story graph, similar to how a symbolic knowledge base corresponds to an incomplete specification of a possible world. The proposed framework will allow us to reason about textual information in a principled way. It will lead to significant improvements in NLP tasks where a commonsense understanding is required of the situations that are described, or where information from multiple sentences or documents needs to be combined. It will furthermore enable a step change in applications that directly rely on structured text representations, such as situational understanding, information retrieval systems for the legal, medical and news domains, and tools for inferring business insights from news stories and social media feeds. -------------- next part -------------- An HTML attachment was scrubbed... URL: From interdonatos at gmail.com Tue Jan 10 11:16:39 2023 From: interdonatos at gmail.com (Roberto Interdonato) Date: Tue, 10 Jan 2023 17:16:39 +0100 Subject: Connectionists: CfP FRCCS 2023 - Third French Regional Conference on Complex Systems, May 31 Le Havre, France, June 02, 2023 Message-ID: *Third F*rench* R*egional* C*onference on* C*omplex* S*ystems May 31 ? June 02, 2023 Le Havre, France *FRCCS 2023* You are cordially invited to submit your contribution until *February 22, 2023.* *FRCCS 2023 (F*rench *R*egional* C*onference on *C*omplex *S*ystems 2023) is the Third edition of the French Regional Conference on Complex Systems. It promotes interdisciplinary exchanges between regional researchers from various scientific disciplines and backgrounds (sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science). FRCCS 2023 is an opportunity to exchange and promote the cross-fertilization of ideas by presenting recent research work, industrial developments, and original applications. 
Special attention is given to research topics with a high societal impact from the complexity science perspective. *Keynote Speakers* Luca Maria Aiello ITU Copenhagen Denmark Ginestra Bianconi Queen Mary University UK V?ctor M. Egu?luz University of the Balearic Islands Spain Adriana Iamnitchi Maastricht University Netherlands Rosario N. Mantegna Palermo University Italy C?line Rozenblat University of Lausanne Switzerland *Submission Guidelines* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Full paper* about *original research* ? *Extended Abstract* about published or unpublished research. It is recommended to be between 3-4 pages. They should not exceed four pages. o Submissions must follow the Springer publication format available in the journal Applied Network Science in the Instructions for Authors' instructions entry. o All contributions should be submitted in *pdf format* via *EasyChair .* *Publication* *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Complexity, edited by Hindawi *Topics include, but are not limited to:* ? *Foundations of complex systems* - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks* - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. 
- *Neuroscience, Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, narrative complexity - *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment - *Infrastructure, planning, and environment* - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences - *Biological and (bio)medical complexity* - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging - *Social complexity* - social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation - *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, and culture - *Organisms and populations* - Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution - *Engineering systems and systems of systems* - bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology - *Complexity in physics and chemistry* - quantum computing, quantum synchronization, quantum chaos, random matrix theory *GENERAL CHAIRS* Cyrille Bertelle LITIS,
Normastic, Le Havre Roberto Interdonato CIRAD, UMR TETIS, Montpellier -------------- next part -------------- An HTML attachment was scrubbed... URL: From vito.trianni at istc.cnr.it Tue Jan 10 11:51:43 2023 From: vito.trianni at istc.cnr.it (Vito Trianni) Date: Tue, 10 Jan 2023 17:51:43 +0100 Subject: Connectionists: [jobs] Research position in data visualisation, user experience design and human-computer interaction Message-ID: <3602682F-B14D-4CB0-9445-F6CBA62B5E85@istc.cnr.it> -- Apologies for multiple posting -- A research position is available within the context of the European project HACID (http://www.hacid-project.eu), at the Institute of Cognitive Sciences and Technologies (ISTC) of the Italian National Research Council in Rome. We are seeking a motivated researcher with expertise in Data Visualisation, UX and Human-Computer Interaction. The contract is for two years, with the possibility of renewal. It is also possible to link the research to a PhD program (e.g., at Sapienza University, Computer Engineering). http://www.hacid-project.eu/jobs/ DEADLINE FOR APPLICATIONS: JANUARY THE 20TH, 2023 ------------------------------------- WHAT WE DO ------------------------------------- HACID studies hybrid human-artificial collective intelligence for open-ended domains such as medical diagnostics and decision support for climate change adaptation policies. Through well-designed knowledge graphs and information aggregation algorithms, the project aims to improve decision making in situations where knowledge fragmentation and information overload can strike. Related to the open position, the research program consists of the development of a dashboard for visualization of, and interaction with, knowledge graphs in the context of both medical diagnostics and climate services. The dashboard must enable domain experts to select concepts relevant to the case study, presenting these concepts in a dynamic and usable way.
The activities include the experimental validation of the dashboard to test its usability and consistency, in interaction with the HACID team for the different case studies. ------------------------------------- WHO WE'RE LOOKING FOR ------------------------------------- The following skills are required: - Knowledge of User Experience Design (UXD), human-machine interaction (HMI) and user interface design (UI); - Experience in (i) data visualisation and (ii) design and implementation of interactive data dashboards, using open-source libraries and frameworks (e.g. D3, Kibana, Tableau, etc.); - Experience with programming languages: Python and/or Java. ------------------------------------- HOW TO APPLY ------------------------------------- Applications must be sent via email. The deadline for applications is January the 20th, 2023. Italian applicants must submit their applications through a certified email (Posta Elettronica Certificata - PEC) to the address protocollo.istc at pec.cnr.it Foreign applicants must submit their applications through standard email to the address protocollo.roma at istc.cnr.it For all details about the application process, please check the notice of selection available at the following links: http://www.hacid-project.eu/jobs/index.html https://www.istc.cnr.it/en/content/assegno-di-ricerca-3562022-progettazione-e-validazione-sperimentale-di-strumenti For any inquiry, feel free to contact Vito Trianni: vito.trianni at istc.cnr.it ------------------------------------- WHO WE ARE ------------------------------------- The Institute for Cognitive Sciences and Technologies (ISTC) is an interdisciplinary institute, featuring integration among laboratories and across research topics.
ISTC laboratories share objectives aimed at the analysis, representation, simulation, interpretation and design of cognitive and social processes in humans, animals and machines, spanning the physiological, phenomen ======================================================================== Vito Trianni, Ph.D. vito.trianni@(no_spam)istc.cnr.it ISTC-CNR http://www.istc.cnr.it/people/vito-trianni Via San Martino della Battaglia 44 Tel: +39 06 44595277 00185 Roma Fax: +39 06 44595243 Italy ======================================================================== From xavier.hinaut at inria.fr Wed Jan 11 13:49:15 2023 From: xavier.hinaut at inria.fr (Xavier Hinaut) Date: Wed, 11 Jan 2023 19:49:15 +0100 Subject: Connectionists: [internship] Animal vocalization analysis and annotation tool (Bordeaux, France) Message-ID: <2A5362C5-8C58-465B-B860-83982918CB7F@inria.fr> **AI internship offer at Inria and Bordeaux Neurocampus (France) on Canapy: an Animal vocalization analysis and annotation tool** Application and more info: https://github.com/neuronalX/internships/blob/main/2022-2023_MSc-or-BSc_Trouvain-Leblois-Hinaut_Canapy_Songbird-GUI_EN.pdf The main objectives of the internship will be: 1. to develop a graphical interface to train vocalization annotation models, to visualize their performance and to re-annotate parts of the dataset accordingly (in a fashion similar to semi-supervised learning); 2. to develop the corresponding software backend: data management (audio and annotations), serving and local persistence of the models (MLOps); 3. to collaborate with the project members to define the needs, establish the specifications or integrate pre-existing tools. This objective also implies collaborating with international researchers, and making an open-source tool available to the public. The development will be incremental: a first prototype will allow users to train models and to view their evaluation on the interface.
A second prototype will offer advanced editing possibilities for the dataset (re-annotation of parts of the audio according to the results of the model), and the final version will integrate advanced analysis tools (dataset error detection, spectrogram dimensionality reduction for visualization and/or clustering, syntactic analysis of song sequences, ...) The student will have to develop an interface, preferably web-based, in javascript/typescript (React...) or directly in Python (bokeh/panel/holoviz...). The software backend will serve Machine Learning models defined in Python (scikit-learn/reservoirpy-style at first, eventually tensorflow/pytorch-style). The tool could be inspired by or integrated with the VocalPy initiative [3]. The student will be encouraged to collaborate with the project collaborators. For example, the data could follow the convention defined by the VocalPy crowsetta package. Once complete, the tool will be made public, on Github, along with its documentation. The goal is to impact a large international community, like ReservoirPy [4], a library already developed in the Mnemosyne team for the ML community. [1] N. Trouvain and X. Hinaut, "Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs", in ICANN 2021 - 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, Sept. 2021, vol. 12895, pp. 71-82. doi: 10/gq43sk. [2] Y. Cohen, D. A. Nicholson, A. Sanchioni, E. K. Mallaber, V. Skidanova, and T. J. Gardner, "Automated annotation of birdsong with a neural network that segments spectrograms", eLife, vol. 11, p. e63853, Jan. 2022, doi: 10/gq43sd. [3] "VocalPy". https://github.com/vocalpy [4] "ReservoirPy".
https://github.com/reservoirpy/reservoirpy Best regards, Xavier Hinaut Inria Research Scientist www.xavierhinaut.com -- +33 5 33 51 48 01 Mnemosyne team, Inria, Bordeaux, France -- https://team.inria.fr/mnemosyne & LaBRI, Bordeaux University -- https://www4.labri.fr/en/formal-methods-and-models & IMN (Neurodegenerative Diseases Institute) -- http://www.imn-bordeaux.org/en From nguyensmai at gmail.com Wed Jan 11 18:57:43 2023 From: nguyensmai at gmail.com (Nguyen, Sao Mai) Date: Thu, 12 Jan 2023 00:57:43 +0100 Subject: Connectionists: [Jobs] M2 internship position in Reinforcement Learning In-Reply-To: References: Message-ID: Dear all, could you please share this with anybody who might be interested in the following internship position? --- ENSTA, IP Paris is looking to hire a talented master's student in machine learning on a collaborative project with Ecole Polytechnique Laboratory: U2IS, ENSTA Paris (http://u2is.ensta-paris.fr/) & LIX, Ecole Polytechnique The intern will be part of the laboratory U2IS of ENSTA Paris and will collaborate with LIX, Ecole Polytechnique Duration: 6 months, flexible dates Contact: NGUYEN Sao Mai: nguyensmai at gmail.com Context: Fully autonomous robots have the potential to impact real-life applications, like assisting elderly people. Autonomous robots must deal with uncertain and continuously changing environments, where it is not possible to pre-program the robot's tasks. Instead, the robot must continuously learn new tasks and how to perform more complex tasks by combining simpler ones (i.e., a task hierarchy). This problem is called lifelong learning of hierarchical tasks. Summary: Hierarchical Reinforcement Learning (HRL) is a recent approach for learning to solve long and complex tasks by decomposing them into simpler subtasks.
HRL can be regarded as an extension of the standard Reinforcement Learning (RL) setting, as it features high-level agents selecting subtasks to perform and low-level agents learning actions or policies to achieve them. We recently proposed an HRL algorithm, GARA (Goal Abstraction via Reachability Analysis), that aims to learn an abstract model of the subgoals of the hierarchical task. However, HRL can still be limited when faced with high-dimensional state spaces and real-world open-ended environments. Introducing a human teacher into Reinforcement Learning algorithms has been shown to bootstrap learning performance. Moreover, active imitation learners such as in [1] have shown that they can strategically choose the most useful questions to ask a human teacher: they can choose whom, when and what to ask for demonstrations [2,3]. This internship's goal is to explore how active imitation can improve the algorithm GARA. The intuition in this context is that human demonstrations can be used to determine the structure of the task (i.e., which subtasks need to be achieved) as well as a planning strategy to solve it (i.e., the order in which to achieve the subtasks). During this internship we will: - Study the relevant state of the art and make a research hypothesis about the usefulness of introducing human demonstrations into the considered HRL algorithm. - Design and implement a component to learn from human demonstrations in GARA. - Conduct an experimental evaluation to assess the research hypothesis. The intern is also expected to collaborate with a PhD student whose work is closely related to this topic. References: [1] Cakmak, M., DePalma, N., Thomaz, A. L., and Arriaga, R. (2009). Effects of Social Exploration Mechanisms on Robot Learning. (IEEE) International Symposium on Robot and Human Interactive Communication (128-134). [2] Duminy, N., Nguyen, S. M., and Duhaut, D. (2019).
Learning a Set of Interrelated Tasks by Using a Succession of Motor Policies for a Socially Guided Intrinsically Motivated Learner. Frontiers in Neurorobotics, 12(87). [3] Nguyen, S. M. and Oudeyer, P.-Y. (2012). Active choice of teachers, learning strategies and goals for a socially guided intrinsic motivation learner. Paladyn Journal of Behavioural Robotics, 3(3), 136-146. SP Versita. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyensmai at gmail.com Wed Jan 11 19:09:29 2023 From: nguyensmai at gmail.com (Nguyen, Sao Mai) Date: Thu, 12 Jan 2023 01:09:29 +0100 Subject: Connectionists: [Jobs] M2 internship position in Robot Learning Message-ID: Dear all, could you please share this with anybody who might be interested in the following internship position? --- ENSTA, IP Paris is looking to hire a talented master's student in machine learning on a collaborative project with Ecole Polytechnique Laboratory: U2IS, ENSTA Paris (http://u2is.ensta-paris.fr/) & LIX, Ecole Polytechnique The intern will be part of the laboratory U2IS of ENSTA Paris and will collaborate with LIX, Ecole Polytechnique Duration: 6 months, flexible dates Contact: NGUYEN Sao Mai: nguyensmai at gmail.com Context: Fully autonomous robots have the potential to impact real-life applications, like assisting elderly people. Autonomous robots must deal with uncertain and continuously changing environments, where it is not possible to pre-program the robot's tasks. Instead, the robot must continuously learn new tasks and how to perform more complex tasks by combining simpler ones (i.e., a task hierarchy). This problem is called lifelong learning of hierarchical tasks [5]. Hierarchical Reinforcement Learning (HRL) is a recent approach for learning to solve long and complex tasks by decomposing them into simpler subtasks.
HRL can be regarded as an extension of the standard Reinforcement Learning (RL) setting, as it features high-level agents selecting subtasks to perform and low-level agents learning actions or policies to achieve them. Summary: This internship studies the applications of Hierarchical Reinforcement Learning methods in robotics: deploying autonomous robots in real-world environments typically introduces multiple difficulties, among which are the size of the observable space and the length of the required tasks. Reinforcement Learning typically helps agents solve decision-making problems by autonomously discovering successful behaviours and learning them. But these methods are known to struggle with long and complex tasks. Hierarchical Reinforcement Learning extends this paradigm to decompose these problems into easier subproblems, with high-level agents determining which subtasks need to be accomplished, and low-level agents learning to achieve them. During this internship, the intern will: - Get acquainted with the state of the art in Hierarchical Reinforcement Learning, including the most notable algorithms [1, 2, 3], the challenges they solve and their limitations. - Reimplement some of these approaches and validate their results in simulated robotics environments such as iGibson [4]. - Establish an experimental comparison of these methods with respect to a research hypothesis. The intern is also expected to collaborate with a PhD student whose work is closely related to this topic. References: [1] Nachum, O.; Gu, S.; Lee, H.; and Levine, S. 2018. Data-Efficient Hierarchical Reinforcement Learning. In Bengio, S.; Wallach, H. M.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, 3307-3317. [2] Kulkarni, T. D.; Narasimhan, K.; Saeedi, A.; and Tenenbaum, J. 2016.
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. In Lee, D.; Sugiyama, M.; Luxburg, U.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc. [3] Vezhnevets, A. S.; Osindero, S.; Schaul, T.; Heess, N.; Jaderberg, M.; Silver, D.; and Kavukcuoglu, K. 2017. FeUdal Networks for Hierarchical Reinforcement Learning. CoRR, abs/1703.01161. [4] Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, and Silvio Savarese. iGibson 2.0: Object-centric simulation for robot learning of everyday household tasks, 2021. URL https://arxiv.org/abs/2108.0327 [5] Nguyen, S. M., Duminy, N., Manoury, A., Duhaut, D., and Buche, C. (2021). Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning. KI - Künstliche Intelligenz, 35(81-90). Nguyen Sao Mai nguyensmai at gmail.com Researcher in Cognitive Developmental Robotics https://doi.org/10.1155/2022/5667223 http://nguyensmai.free.fr | Youtube | Twitter | ResearchGate | Hal -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhammer at techfak.uni-bielefeld.de Thu Jan 12 02:48:37 2023 From: bhammer at techfak.uni-bielefeld.de (Barbara Hammer) Date: Thu, 12 Jan 2023 08:48:37 +0100 Subject: Connectionists: JAII lecture by Kenneth D. Forbus Message-ID: Dear colleagues, I am happy to announce the next lecture organized by JAII, the Joint Artificial Intelligence Institute of Bielefeld University and University of Paderborn: Lecture by Kenneth D.
Forbus (Northwestern University) on January 19, 16:00-17:30 CET, "Qualitative Representations and Analogical Learning for Human-like AI Systems" While there has been substantial progress in AI, we are still far away from systems that can learn incrementally from small amounts of data while producing results that are understandable by human partners. Our hypothesis is that qualitative representations and analogical learning are central in human cognition, and that these ideas provide the basis for new technologies that will help us create more human-like AI systems. We illustrate using examples from vision, language, and reasoning. These advances should support building software social organisms that interact with people as collaborators rather than tools, which ultimately could revolutionize how AI systems are built and used. The Zoom link is directly available at the JAII Homepage -- Prof. Dr. Barbara Hammer Machine Learning Group, CITEC https://hammer-lab.techfak.uni-bielefeld.de/ Bielefeld University D-33594 Bielefeld Phone: +49 521 / 106 12115 -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.dovrolis at cyi.ac.cy Thu Jan 12 04:08:11 2023 From: c.dovrolis at cyi.ac.cy (Constantine Dovrolis) Date: Thu, 12 Jan 2023 09:08:11 +0000 Subject: Connectionists: Post-Doc position in Cyprus -- The Cyprus Institute Message-ID: Summary: * Post-Doc position in Cyprus * The Cyprus Institute (www.cyi.ac.cy) * Focus on fundamental research in developing efficient and interpretable deep nets, continual learning, neuro-inspired ML, self-supervised learning, and other cutting-edge topics * Mentors: Constantine Dovrolis and Mihalis Nicolaou * 2 years, can be extended * Closing date: 31/1/2023 ---- The Cyprus Institute (CyI) is a non-profit science and technology educational and research institution based in Cyprus and led by an acclaimed Board of Trustees.
The research agenda of CyI falls within the following four research centers: The Computation-based Science and Technology Research Center (CaSToRC); the Science and Technology in Archaeology and Culture Research Center (STARC); the Energy, Environment and Water Research Center (EEWRC); and the Climate and Atmosphere Research Center (CARE-C). Considerable cross-center interaction is a characteristic feature of the Institute's culture. The Cyprus Institute invites applications for a Post-Doctoral Fellow to pursue research in Machine Learning. The successful candidate will be actively engaged in cutting-edge research on core problems in ML and AI, such as developing efficient and interpretable deep nets, continual learning, neuro-inspired ML, self-supervised learning, and other cutting-edge topics. The candidate should have a deep understanding of machine learning fundamentals (e.g., linear algebra, probability theory, optimization) as well as broad knowledge of the state of the art in AI and machine learning. Additionally, the candidate should have extensive experience with ML programming frameworks (e.g., PyTorch). The candidate will be working primarily with two PIs: Prof. Constantine Dovrolis (recently moved to CyI from Georgia Tech -- see http://www.cc.gatech.edu/~dovrolis/) and Prof. Mihalis Nicolaou (see http://mihalisan.cyi.ac.cy/). This position offers a unique opportunity for fundamental research, and its exact focus will also be determined based on the interests and skills of the successful candidate. Furthermore, the candidate will have the opportunity to link his/her work with ML applications of critical societal impact, favored by the interdisciplinary setting of the Institute. The successful candidate will also work closely with the PIs in writing relevant grant proposals. The appointment is for a period of 2 years, with the option of renewal subject to performance and the availability of funds.
An internationally competitive remuneration package will be offered, which is commensurate with the level of experience of the successful candidate. Responsibilities/activities to be involved in: * Conducting analytical and experimental research in machine/deep learning algorithms and frameworks * Writing research papers in collaboration with the PIs, aiming to publish at top-tier conferences and journals * Writing relevant grant proposals in collaboration with the PIs * Dissemination of results at scientific conferences, workshops, and seminars * Assisting in the supervision of graduate students * Contribution to relevant Research Center activities at CaSToRC of CyI Required Qualifications * A Ph.D. degree in one of the following: computer science, electrical and computer engineering, applied math, statistics, or a degree in a similar area. * At least 3 publications in the areas of Artificial Intelligence and/or Machine Learning. * Strong programming skills (preferably in Python), and experience with deep learning frameworks such as PyTorch. * Ability to work as part of an interdisciplinary team while showing initiative and independence. * Excellent knowledge of the English language (written and verbal). * High level of organizational, analytical and problem-solving skills. * Strong presentation skills. * High level of communication and interpersonal skills. Preferred Qualifications (not mandatory) * Publications at top-tier peer-reviewed conferences such as ICML, NeurIPS, ICLR, etc., will be considered a very strong qualification. Application For full consideration, interested applicants should submit their application via The Cyprus Institute Exelsys Platform (https://bit.ly/3HDLRTw) based on the instructions given. Applicants should submit a curriculum vitae including a short letter of interest, and a list of three (3) referees (including contact information) (all documentation should be in English and in PDF format).
For further information, please contact Prof Constantine Dovrolis (c.dovrolis at cyi.ac.cy). Please note that applications which do not fulfill the required qualifications or do not follow the announcement's guidelines will not be considered. Recruitment will continue until the position is filled. The Cyprus Institute is an Equal Opportunity employer certified by the Cypriot Ministry of Labor and also an HRS4R accredited Institution that adheres to the European Commission's "Charter & Code" principles for recruitment and selection. Best regards, Constantine Dovrolis ---------------------------- Professor and Director of CaSToRC - The Cyprus Institute - https://www.cyi.ac.cy/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.gleeson at ucl.ac.uk Thu Jan 12 07:36:45 2023 From: p.gleeson at ucl.ac.uk (Padraig Gleeson) Date: Thu, 12 Jan 2023 12:36:45 +0000 Subject: Connectionists: NeuroDataShare 2023 workshop and hackathon - Sainsbury Wellcome Centre London, Feb 20-23 Message-ID: <7ea3795c-960f-b678-a1cf-7af0cd00a7bf@ucl.ac.uk> (Apologies for cross posting) *NeuroDataShare 2023: Exploring and sharing multi-scale neuroscience data* Sainsbury Wellcome Centre, London, UK http://www.neurodatashare.org Modern experimental neuroscience is producing huge amounts of data at a rapidly increasing pace, with population recordings using multielectrode arrays and imaging, simultaneous behavioural data, transcriptomics and anatomical reconstructions. While these data are useful to those who obtain them in the original studies, their value is magnified when they are shared in accessible formats with the wider community for use in new studies, and to investigate brain function from different perspectives. To facilitate this, many groups are developing tools, standardised languages and databases to help specify, analyse, visualise and share such data sets.
*This meeting will bring together experimentalists, those offering infrastructure solutions for sharing data in neuroscience and those who wish to reuse, reanalyse and gain new insight into publicly available datasets.* The meeting will consist of a *2-day workshop of scientific presentations (Mon 20th & Tues 21st Feb 2023)* at the Sainsbury Wellcome Centre from leading neuroscientists who are generating data of different types across multiple scales, and who are faced with the question of how to disseminate their output to other researchers. Scientific talks will be complemented by presentations from those developing the infrastructure to standardise and share data, and there will be discussions on the challenges and opportunities of greater data sharing in neuroscience. The second part of the meeting will be a smaller, more focussed *2-day hackathon (Wed 22nd & Thurs 23rd Feb 2023)* where PhD students, postdocs and PIs will get hands-on demonstrations to get their data into standardised formats, including Neurodata Without Borders, as well as help with sharing the data on the Open Source Brain platform. We look forward to welcoming you to London! NeuroDataShare organisers Padraig Gleeson, Angus Silver and Isaac Bianco. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fabio.bellavia at unifi.it Thu Jan 12 10:27:19 2023 From: fabio.bellavia at unifi.it (Fabio Bellavia) Date: Thu, 12 Jan 2023 16:27:19 +0100 Subject: Connectionists: CALL FOR PAPERS - CVPR 2023 Workshop on Image Matching: Local Features and Beyond Message-ID: <30b67d9e-ccaf-1a6c-7fa8-693880b0e941@unifi.it> CALL FOR PAPERS *CVPR 2023 Workshop on Image Matching: Local Features and Beyond* Workshop website: https://image-matching-workshop.github.io Challenge website: https://www.cs.ubc.ca/research/image-matching-challenge/ CMT website for paper submission: https://cmt3.research.microsoft.com/IMW2023 *OVERVIEW* We are happy to announce that the Fifth Workshop on Image Matching: Local Features and Beyond will be held at CVPR 2023 on June 19, 2023 (morning, Pacific Time) in Vancouver, Canada. Its goal is to encourage and highlight novel strategies for image matching that deviate from and advance traditional formulations, with a focus on large-scale, wide-baseline matching for 3D reconstruction and pose estimation. This can be achieved by applying new technologies to sparse feature matching, or doing away with keypoints and descriptors entirely, such as with dense solutions. *We will also hold the fifth edition of the Image Matching Challenge*, co-located with the workshop. Details will be announced in the coming weeks. *TOPICS* Workshop topics include (but are not limited to): - Formulations of keypoint extraction and matching pipelines with deep networks. - Application of geometric constraints into the training of deep networks. - Leveraging additional cues such as semantics and mono-depth estimates. - Methods addressing adversarial conditions where current methods fail (weather changes, day versus night, etc.). - Attention mechanisms to match salient image regions. - Integration of differentiable components into 3D reconstruction frameworks. - Connecting local descriptors/image matching with global descriptors/image retrieval. 
- Matching across different data modalities such as aerial versus ground. - Large-scale evaluation of classical and modern methods for image matching, by means of our open challenge. - New perception devices such as event-based cameras. - Other topics related to image matching, structure from motion, mapping, and re-localization, such as privacy-preserving representations. *SUBMISSION* We invite paper submissions up to 8 pages, excluding references and acknowledgements. They should use the CVPR template and be submitted to the CMT site. Submissions must contain novel work and will be indexed in IEEE Xplore/CVF. They will receive at least two double-blind reviews. *IMPORTANT DATES* - Paper submission deadline: March 19, 2023. - Notification to authors: April 4, 2023. - Camera-ready deadline: April 6, 2023 (hard deadline on April 8!). - Workshop date: June 19, 2023, afternoon (exact schedule TBA). (All dates are at 11:59PM, Pacific Time, unless stated otherwise.) *ORGANIZERS* - Vassileios Balntas, Scape Technologies - Fabio Bellavia, University of Palermo - Vincent Lepetit, École des Ponts ParisTech - Jiri Matas, Czech Technical University in Prague - Dmytro Mishkin, Czech Technical University in Prague/HOVER Inc. - Luca Morelli, University of Trento/Bruno Kessler Foundation - Fabio Remondino, Bruno Kessler Foundation - Weiwei Sun, University of British Columbia - Eduard Trulls, Google - Kwang Moo Yi, University of British Columbia From d.kollias at qmul.ac.uk Thu Jan 12 13:21:34 2023 From: d.kollias at qmul.ac.uk (Dimitrios Kollias) Date: Thu, 12 Jan 2023 18:21:34 +0000 Subject: Connectionists: Two funded PhDs in AI, ML, DL for Affective Computing, London, UK at QMUL Message-ID: Dear All, I am currently an Assistant Professor in Artificial Intelligence at the School of Electronic Engineering & Computer Science, Queen Mary University of London (QMUL), UK. I have two open Ph.D.
positions in my lab and am looking for brilliant candidates with a background and/or strong passion in Artificial Intelligence, Machine and Deep Learning for Affective Computing. 1) A fully funded 3-year PhD studentship is available for UK home candidates (i.e., candidates with British citizenship). The PhD studentship will cover tuition fees and offer a London stipend of £19,668 per year. International candidates (i.e., with nationalities other than British) can apply; they receive a reduced international tuition fee and the stipend, but the reduced fee plus the stipend amount to almost the same as the total international tuition fees to be paid (so international candidates will need to cover their own living expenses; there will be options to raise some money for living expenses). 2) A fully funded 4-year PhD studentship is available for Chinese candidates. This studentship is co-funded by the China Scholarship Council (CSC). CSC is offering a monthly stipend of £1,350 (tax free) to cover living expenses, and QMUL is waiving fees and hosting the student (eligibility criteria and details about CSC can be found here). The application deadline for both positions is 31 January 2023 and the expected start date is September 2023. Both projects will engage with key industrial R&D partners. About you: * A high-quality (ideally first-class) undergraduate and/or master's (ideally distinction) degree, held or about to be obtained, in a relevant discipline. Candidates who are interested and/or want further information should contact me directly at d.kollias at qmul.ac.uk and include their CV in the email.
Kind Regards, Dimitris ======================================================================== Dr Dimitrios Kollias, PhD, MIEEE, FHEA Lecturer (Assistant Professor) in Artificial Intelligence Member of Multimedia and Vision (MMV) research group Member of Queen Mary Computer Vision Group Associate Member of Centre for Advanced Robotics (ARQ) Academic Fellow of Digital Environment Research Institute (DERI) School of EECS Queen Mary University of London ======================================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From cchrist at ucy.ac.cy Thu Jan 12 20:36:18 2023 From: cchrist at ucy.ac.cy (Chris Christodoulou) Date: Fri, 13 Jan 2023 03:36:18 +0200 Subject: Connectionists: BioSystems - Special Issue on Neural Coding 2021 Message-ID: <83f85179-2a99-fc75-dd36-04cd73779eef@ucy.ac.cy> Dear Colleagues, We would like to announce a /BioSystems/ Special Issue on Neural Coding: /BioSystems/ Special Issue: Selected Papers from the 14th International Neural Coding Workshop, Seattle, Washington Available online to download from: https://www.sciencedirect.com/journal/biosystems/special-issue/10M83VBSBZQ The Table of Contents of this special issue can be seen at the end of this email. Best wishes for a happy New Year, Guest Editors Chris Christodoulou, Giuseppe D'Onofrio, Michael Stiber and Alessandro Villa ------------------------------------------------------------------------------------------------------------- /BioSystems/ - Contents Selected Papers from the 14th International Neural Coding Workshop, Seattle, Washington, USA, 2021 Available online to download from: https://www.sciencedirect.com/journal/biosystems/special-issue/10M83VBSBZQ Editorial: Selected papers from the 14th international neural coding workshop, Seattle, Washington Chris Christodoulou, Giuseppe D'Onofrio, Michael Stiber and Alessandro E. P.
Villa Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles Robin Gutzen, Sonja Grün, Michael Denker Getting the news in milliseconds: The role of early novelty detection in active electrosensory exploration Angel A. Caputi, Alejo Rodríguez-Cattáneo, Joseph C. Waddell, Ana Carolina Pereira, Pedro A. Aguilera Spike frequency adaptation facilitates the encoding of input gradient in insect olfactory projection neurons Hayeong Lee, Lubomir Kostal, Ryohei Kanzaki, Ryota Kobayashi A simple model of the electrosensory electromotor loop in /Gymnotus omarorum/ Angel A. Caputi, Joseph C. Waddell, Pedro A. Aguilera A simple neuronal model with intrinsic saturation of the firing frequency Rimjhim Tomar, Charles E. Smith, Petr Lansky Non-monotone cellular automata: Order prevails over chaos Henrik Ekström, Tatyana Turova From chaos to clock in recurrent neural net. Case study A. Vidybida, O. Shchur Phase offset determines alpha modulation of gamma phase coherence and hence signal transmission Priscilla E. Greenwood, Lawrence M. Ward -------------- next part -------------- An HTML attachment was scrubbed... URL: From ioannakoroni at csd.auth.gr Thu Jan 12 12:38:52 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Thu, 12 Jan 2023 19:38:52 +0200 Subject: Connectionists: Call for Free Participation in the Computational Politics e-symposium, 1st March 2023 References: <20220901201642.Horde.u4-j5yXCcTM6p4EeySUJVSG@webmail.auth.gr> <007e01d8beb7$d721f890$8565e9b0$@csd.auth.gr> <20220902153113.Horde.BOQWz8UnX4eMQdCA8Raisqx@webmail.auth.gr> <00f201d905a9$e24ed790$a6ec86b0$@csd.auth.gr> <018401d92695$bc253d40$346fb7c0$@csd.auth.gr> Message-ID: <066201d926ac$bd70b190$385214b0$@csd.auth.gr> Dear Computer scientists, Political scientists, students and enthusiasts, you are welcome to attend for free the "Computational Politics e-symposium on 1 March 2023".
Its exciting program can be found at: https://icarus.csd.auth.gr/ai-mellontology-symposium-2023/ Participation is through the Zoom link (passcode: 867064) also posted on this web page. No registration is needed. The aim of this e-symposium is to define Computational Politics as a discipline lying at the intersection of Political science and Computer science. Politics (in Greek: πολιτικά, "city-state affairs") refers to activities associated with decision-making in social groups (including states), or other forms of power relations among individuals and/or social strata. It is essentially the art or science of government. Therefore, politics requires both the analysis of political, social and financial data, decision making and decision execution/monitoring. As all these political activities concern both information analysis and control of societal processes, they can be greatly assisted by Information Technologies (IT), notably Data Analytics, Artificial Intelligence and Systems Theory (Cybernetics). Computational Politics refers exactly to the use of AI and IT in politics and Political Science. Computational Politics has various subtopics, e.g.: * Political system modeling and design * Community and citizen modeling * Information flow * Political discourse analysis * Election campaigns * Political history * Politics and Economics. The e-symposium contains 11 lectures overviewing most of the above topics, as well as underlying technological tools, e.g.: * Natural Language Processing * Text sentiment analysis * Time series prediction. They will be delivered by both well-known scientists and qualified junior researchers. This symposium is the third edition of the "AI Mellontology symposium" series. It is organized by the Horizon2020 AI4media R&D project and it is sponsored by the International AI Doctoral Academy (AIDA) and the LITHME COST action. Organizational contact: Ms. Ioanna Koroni koroniioanna at csd.auth.gr For the organizing committee Prof.
Ioannis Pitas Computational Politics e-symposium chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From ioannakoroni at csd.auth.gr Fri Jan 13 02:44:16 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Fri, 13 Jan 2023 09:44:16 +0200 Subject: Connectionists: AIDA Short Course: "Nvidia DLI - Accelerating Data Engineering Pipelines", 13th January 2023 Message-ID: <0b2401d92722$d721fb00$8565f100$@csd.auth.gr> Nvidia DLI and the University of Debrecen organize an online AIDA short course on "Accelerating Data Engineering Pipelines" offered through the International Artificial Intelligence Doctoral Academy (AIDA). The purpose of this course is to overview the foundations and the current state of the art in GPU-accelerated data science in Python. This short course will cover the following topics: * Data on the Hardware Level (60 mins), * ETL with NVTabular (120 mins), * Data Visualization (120 mins), * Final Project: Data Detective (60 mins) The targeted applications will be in GPU-accelerated ETL data processing. LECTURER: - Dr. Laszlo Kovacs, Assistant Professor Nvidia Deep Learning Institute Certified Instructor and Ambassador, email: kovacs.laszlo at inf.unideb.hu HOST INSTITUTION/ORGANIZER: Nvidia Deep Learning Institute, Faculty of Informatics, University of Debrecen, Hungary REGISTRATION: Free of charge for university students and staff WHEN: January 13, 2023 from 09:00 to 17:00 CET WHERE: Online HOW TO REGISTER and ENROLL: Both AIDA and non-AIDA students are encouraged to participate in this short course.
If you are an AIDA Student* already, please: Step (a) register in the course by following the Course Link: Nvidia Deep Learning Institute | University of Debrecen (unideb.hu) AND Step (b) enroll in the same course in the AIDA system using the enrollment button in the AIDA course page Nvidia DLI - Accelerating Data Engineering Pipelines - AIDA - AI Doctoral Academy (i-aida.org), so that this course enters your AIDA Course Attendance Certificate. If you are not an AIDA Student, do only step (a). *AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong only to the AIDA Members listed on this page: https://www.i-aida.org/about/members/) Dr. Laszlo Kovacs, Assistant Professor Nvidia Deep Learning Institute Certified Instructor and Ambassador Email kovacs.laszlo at inf.unideb.hu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 280974 bytes Desc: not available URL: From max.garagnani at gmail.com Thu Jan 12 13:47:50 2023 From: max.garagnani at gmail.com (Max Garagnani) Date: Thu, 12 Jan 2023 18:47:50 +0000 Subject: Connectionists: PhD position in Computational Cognitive Neuroscience at Goldsmiths, London (UK) Message-ID: Apologies for cross-posting. The Department of Computing at Goldsmiths, University of London, has a vacancy for a 3-year full-time PhD student position in Computational Cognitive Neuroscience. The position will be supervised by Dr. Max Garagnani (https://www.gold.ac.uk/computing/people/garagnani-max/). The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning.
Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour. ELIGIBILITY --------------- Applicants should hold an Honours degree in Computer Science or related fields (Mathematics, Engineering, Physics, etc.), or have completed (or be in the process of completing) a Master's degree in a relevant discipline, such as Computational / Cognitive Neuroscience, Cognitive Robotics, Artificial Intelligence, Data Science, or similar. Advanced programming skills (in one or more of the programming languages Python, Java, C/C++) are essential. Knowledge of cognitive neuroscience and/or brain mechanisms of dopamine-modulated learning would be a plus. Applicants must possess excellent written and spoken communication skills in English and be able to demonstrate a strong motivation to conduct research. BENEFITS ------------- The studentship covers tuition fees (Home and Overseas) and provides, in addition, a tax-free yearly stipend of 18,000 GBP. Funding is available for three years, for full-time, on-campus studies only (relocation to the London area is required). Studentships are conditional on satisfactory progress, which is reviewed each year. HOW TO APPLY --------------------- Candidates are strongly encouraged to make informal enquiries with Dr Max Garagnani (email provided below) prior to submitting an application. Please submit your application by visiting https://lnkd.in/eAWkz3ZT and clicking on the 'Apply Now' button, and include the following documents in your application: - Motivation letter (max. 1 page) describing your research interests, reasons for choosing this project, and relevance of your background to the project -
CV (include information about qualifications and any relevant professional and/or research experience) - Copies of transcripts, certificates and diplomas - Publications (if any), and, where applicable, a copy of your Master's thesis (as a PDF document) - Contact details of two referees. You should ensure that two supporting letters of reference reach M.Garagnani at gold.ac.uk by the submission deadline, i.e., *** WEDNESDAY 15 FEBRUARY 2023 ***. START DATE: ---------------- The desirable start date is 1st April 2023, although a later start date may be possible depending on the candidate's circumstances. THE COMPUTING DEPARTMENT AND GOLDSMITHS -------------------------------------------------------------------- Goldsmiths University of London is a world-leading centre of educational excellence where ground-breaking research meets innovative teaching and thinking. We are looking for inspiring, talented people to help Goldsmiths build on its global reputation as we expand our capabilities as a learning organisation. As a college we are working to tackle inequality in all its forms and are working to promote equality on grounds of race, disability, age, sex, gender identity, sexual orientation, religion and belief, marriage and civil partnership, pregnancy and maternity, and caring responsibilities. We are keen to attract candidates from diverse backgrounds who share our commitment to creating an inclusive culture in which all students and staff can thrive. For further information about Goldsmiths and the Computing Department, please visit: https://www.gold.ac.uk/computing/ For any specific questions, please get in touch. Max Garagnani, PhD.
-- Senior Lecturer in Computer Science Joint Programme Leader, MSc in Computational Cognitive Neuroscience https://www.gold.ac.uk/pg/msc-computational-cognitive-neuroscience/ Department of Computing Goldsmiths, University of London Lewisham Way, New Cross London SE14 6NW, UK https://www.gold.ac.uk/computing/people/garagnani-max/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.linton at columbia.edu Thu Jan 12 12:47:50 2023 From: paul.linton at columbia.edu (Paul Linton) Date: Thu, 12 Jan 2023 12:47:50 -0500 Subject: Connectionists: Event: NEW APPROACHES TO 3D VISION - Online + In Person [New York], 15 Feb 2023 Message-ID: <6E363C1C-1D57-4EF2-A38A-BF05485F4F14@columbia.edu> Launch event for the Royal Society volume NEW APPROACHES TO 3D VISION 15th February 2023, 4:30pm-6:00pm, Eastern Time (USA) Online + In Person: Zuckerman Institute, Columbia University, New York REGISTER (Online + In Person): https://www.eventbrite.com/e/new-approaches-to-3d-vision-tickets-491862653437 EVENT DESCRIPTION: https://scienceandsociety.columbia.edu/events/new-approaches-3d-vision With talks on: ARTIFICIAL INTELLIGENCE - Ida Momennejad (Microsoft Research) Ida Momennejad explores the ways in which neuroscience, behavioral research, and AI inform one another, using AI navigation in 3D computer games as a key example. ANIMAL NAVIGATION - Kate Jeffery (University of Glasgow) Kate Jeffery explores how animals' "cognitive maps" of their environment reflect the possibilities for movement rather than the environment's physical geometry. HUMAN VISION - Fulvio Domini (Brown University) Fulvio Domini argues 3D vision isn't trying to reconstruct the true 3D layout of the world, but instead the most stable 3D percept across viewing conditions. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: NewApproachesTo3DVision.pdf Type: application/pdf Size: 94569 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabio.bellavia at unifi.it Thu Jan 12 11:04:55 2023 From: fabio.bellavia at unifi.it (Fabio Bellavia) Date: Thu, 12 Jan 2023 17:04:55 +0100 Subject: Connectionists: LAST CALL for the IET Image Processing special issue on "Advancements in Fine Art Pattern Extraction and Recognition" (deadline 31 January 2023) Message-ID: __apologies for multiple posting, please distribute among interested parties__ Call for Papers *Special Issue of IET Image Processing on Advancements in Fine Art Pattern Extraction and Recognition* ___________ Aim & Scope Cultural heritage, especially fine arts, plays an invaluable role in the cultural, historical and economic growth of our societies. Fine arts are primarily developed for aesthetic purposes and are mainly expressed through painting, sculpture and architecture. In recent years, thanks to technological improvements and drastic cost reductions, a large-scale digitization effort has been made, which has led to an increasing availability of large digitized fine art collections. This availability, coupled with recent advances in pattern recognition and computer vision, has disclosed new opportunities, especially for researchers in these fields, to assist the art community with automatic tools to further analyze and understand fine arts. Among other benefits, a deeper understanding of fine arts has the potential to make them more accessible to a wider population, both in terms of fruition and creation, thus supporting the spread of culture. 
This special issue aims to offer the opportunity to present advancements in the state-of-the-art, innovative research, ongoing projects, and academic and industrial reports on the application of visual pattern extraction and recognition for a better understanding and fruition of fine arts, soliciting contributions from pattern recognition, computer vision, artificial intelligence and image processing research areas. The special issue will be linked to the 2nd International Workshop on Fine Art Pattern Extraction and Recognition (FAPER2022). Authors of selected conference papers will be invited to extend and improve their contributions for this special issue, and authors are also invited to submit new contributions (non-conference papers). _______________________________________ Topics include, but are not limited to: - Applications of machine learning and deep learning to cultural heritage and digital humanities - Computer vision and multimedia data processing for fine arts - Generative adversarial networks for artistic data - Augmented and virtual reality for cultural heritage - 3D reconstruction of historical artifacts - Point cloud segmentation and classification for cultural heritage - Historical document analysis - Content-based retrieval in visual art domain - Digitally enriched museum visits - Smart interactive experiences in cultural sites - Project, products or prototypes for cultural heritage _______________________________________________ Submission Deadline (extended): 31 January 2023 Submissions must be made through ScholarOne: https://mc.manuscriptcentral.com/theiet-ipr see the PDF call for paper for more information: https://ietresearch.onlinelibrary.wiley.com/pb-assets/assets/17519667/Special%20Issues/IPR%20SI%20CFP_AFAPER-1651107571727.pdf ___________ Open Access From January 2021, The IET began an Open Access publishing partnership with Wiley. 
As a result, all submissions that are accepted for this Special Issue will be published under the Gold Open Access Model and subject to the Article Processing Charge (APC) of $2,300. *APC can be covered in FULL, i.e. FREE OF CHARGE*, or in part by your institution *CHECK YOUR ELIGIBILITY HERE* https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/affiliation-policies-payments/institutional-funder-payments.html _______________ Editor-in-Chief Prof. Farzin Deravi, University of Kent, UK _____________ Guest Editors Giovanna Castellano, Universita' di Bari, Italy Gennaro Vessio, Universita' di Bari, Italy Fabio Bellavia, Universita' di Palermo, Italy Sinem Aslan, Università Ca' Foscari Venezia, Italy From juergen at idsia.ch Fri Jan 13 03:13:43 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Fri, 13 Jan 2023 08:13:43 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning Message-ID: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://arxiv.org/abs/2212.11279 https://people.idsia.ch/~juergen/deep-learning-history.html This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen From gualtiero.volpe at unige.it Fri Jan 13 05:04:02 2023 From: gualtiero.volpe at unige.it (Gualtiero Volpe) Date: Fri, 13 Jan 2023 11:04:02 +0100 Subject: Connectionists: ICMI 2023 - Call for Papers Message-ID: <037d01d92736$5d19cef0$174d6cd0$@unige.it> 25th ACM International Conference on Multimodal Interaction (ICMI 2023) 9-13 October 2023, Paris, France The 25th International Conference on Multimodal Interaction (ICMI 2023) will be held in Paris, France.
ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data and systems. The study of social interactions encompasses both human-human interactions and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature which values both scientific discoveries and technical modeling achievements, with an eye towards impactful applications for the good of people and society. ICMI 2023 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits, doctoral consortium, and late-breaking papers. The conference will also feature tutorials, workshops and grand challenges. The proceedings of all ICMI 2023 papers, including Long and Short Papers, will be published by ACM as part of their series of International Conference Proceedings and Digital Library, and the adjunct proceedings will feature the workshop papers. Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2023 will need to be novel along one of the two dimensions: * Scientific Novelty: Papers should bring new scientific knowledge about human social interactions, including human-computer interactions. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children's interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper. * Technical Novelty: Papers should propose novelty in their computational approach for recognizing, generating or modeling multimodal data.
Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach. Please see the Submission Guidelines for Authors https://icmi.acm.org/ for detailed submission instructions. Commitment to ethical conduct is required and submissions must adhere to ethical standards, in particular when human-derived data are employed. Authors are encouraged to read the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/). ICMI 2023 conference theme: The theme for this year's conference is "Science of Multimodal Interactions". As the community grows, it is important to understand the main scientific pillars involved in deep understanding of multimodal social interactions. As a first step, we want to acknowledge key discoveries and contributions that the ICMI community enabled over the past 20+ years. As a second step, we reflect on the core principles, foundational methodologies and scientific knowledge involved in studying and modeling multimodal interactions. This will help establish a distinctive research identity for the ICMI community while at the same time embracing its multidisciplinary collaborative nature. This research identity and long-term agenda will enable the community to develop future technologies and applications while maintaining commitment to world-class scientific research. Additional topics of interest include but are not limited to: * Affective computing and interaction * Cognitive modeling and multimodal interaction * Gesture, touch and haptics * Healthcare, assistive technologies * Human communication dynamics * Human-robot/agent multimodal interaction * Human-centered A.I.
and ethics * Interaction with smart environment * Machine learning for multimodal interaction * Mobile multimodal systems * Multimodal behaviour generation * Multimodal datasets and validation * Multimodal dialogue modeling * Multimodal fusion and representation * Multimodal interactive applications * Novel multimodal datasets * Speech behaviours in social interaction * System components and multimodal platforms * Visual behaviours in social interaction * Virtual/augmented reality and multimodal interaction Important Dates Paper Submission: May 1, 2023 Rebuttal period: June 26-29, 2023 Paper notification: July 21, 2023 Camera-ready paper: August 14, 2023 Presenting at main conference: October 9-13, 2023 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ioannakoroni at csd.auth.gr Fri Jan 13 05:09:02 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Fri, 13 Jan 2023 12:09:02 +0200 Subject: Connectionists: AIDA Short Course: "Nvidia DLI - Fundamentals of Accelerated Data Science with RAPIDS", 19th January 2023 Message-ID: <0f0a01d92737$103ae500$30b0af00$@csd.auth.gr> Nvidia DLI and the University of Debrecen organize an online AIDA short course on "Fundamentals of Accelerated Data Science with RAPIDS" offered through the International Artificial Intelligence Doctoral Academy (AIDA). The purpose of this course is to overview the foundations and the current state of the art in GPU-accelerated data science in Python. This short course will cover the following topics: * GPU-Accelerated Data Manipulation (120 mins), * GPU-Accelerated Machine Learning (120 mins), * Project: Data Analysis to Save the UK (120 mins) The targeted applications will be in graph-based data analytics. LECTURER: - Dr.
Laszlo Kovacs, Assistant Professor Nvidia Deep Learning Institute Certified Instructor and Ambassador, email: kovacs.laszlo at inf.unideb.hu HOST INSTITUTION/ORGANIZER: Nvidia Deep Learning Institute, Faculty of Informatics, University of Debrecen, Hungary REGISTRATION: Free of charge for university students and staff WHEN: January 19, 2023 from 09:00 to 17:00 CET WHERE: Online HOW TO REGISTER and ENROLL: Both AIDA and non-AIDA students are encouraged to participate in this short course. If you are an AIDA Student* already, please: Step (a) register in the course by following the Course Link: Nvidia Deep Learning Institute | University of Debrecen (unideb.hu) AND Step (b) enroll in the same course in the AIDA system using the enrollment button in the AIDA course page Nvidia DLI - Fundamentals of Accelerated Data Science with RAPIDS - AIDA - AI Doctoral Academy (i-aida.org), so that this course enters your AIDA Course Attendance Certificate. If you are not an AIDA Student, do only step (a). *AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong only to the AIDA Members listed on this page: https://www.i-aida.org/about/members/) Dr. Laszlo Kovacs, Assistant Professor Nvidia Deep Learning Institute Certified Instructor and Ambassador Email kovacs.laszlo at inf.unideb.hu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 229541 bytes Desc: not available URL: From andreas.wichert at tecnico.ulisboa.pt Fri Jan 13 06:40:37 2023 From: andreas.wichert at tecnico.ulisboa.pt (Andrzej Wichert) Date: Fri, 13 Jan 2023 11:40:37 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> Message-ID: <4F70FCBD-BF38-4924-A523-826087B53460@tecnico.ulisboa.pt> Dear Juergen, You make the same mistake that was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning, etc., as if it were not part of AI anymore (as suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal > On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > https://arxiv.org/abs/2212.11279 > > https://people.idsia.ch/~juergen/deep-learning-history.html > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > Happy New Year!
> > Jürgen > > > From sahar.moghimi at u-picardie.fr Fri Jan 13 07:31:15 2023 From: sahar.moghimi at u-picardie.fr (Sahar Moghimi) Date: Fri, 13 Jan 2023 13:31:15 +0100 Subject: Connectionists: Two 3-year post-doc and two PhD positions in neurodevelopment and rhythm processing in Canada and France Message-ID: <20230113133115.Horde.S6I35AiekC1UyRdrRjDN4mc@webmail.u-picardie.fr> We are looking for two 3-year post-doc researchers and two PhD students for the projects PreMusic and BabyMusic funded by the French National Research Agency ANR and the Fondation pour l'Audition. The deadline for applications is March 31st, 2023. Applications will be evaluated as they come in, and the positions will be open until filled. The consortium of the projects aims to evaluate the development of rhythm perception starting from the third trimester of gestation into infancy, and the impact of early musical interventions in the NICU on preterm infants' development. In these cross-sectional and longitudinal studies, we will evaluate the development of auditory rhythm processing capacities with EEG and behavioral protocols. The project consortium involves four academic partners with complementary expertise in early neurodevelopment, cognitive neurosciences of music, neural data processing (in particular EEG), and music analysis. The aim is to put together a cross-disciplinary team that together covers the following methods: protocol design and implementation, EEG signal processing, behavioral studies, video analysis, statistics, machine learning. The positions will be with Laurel Trainor in Hamilton, Canada (Institute for the Music and the Mind), and with Sahar Moghimi in Amiens (INSERM U1105), in collaboration with Barbara Tillmann in Dijon (LEAD-UMR5022).
1) Hamilton (https://livelab.mcmaster.ca/mcmaster-institute-for-music-the-mind-mimm/) The positions include developing auditory stimuli and experimental protocols, extracting the neural response from EEG signals, as well as behavioral results during experimental protocols. The candidates will conduct the experiments on the infants in conjunction with graduate students and technicians in the lab. Required: PhD (MSc for PhD applications) in neuroscience, biomedical engineering, computer science, psychology, or related fields, strong background and research expertise in EEG signal processing, advanced skills with scripting languages, such as Matlab or Python, knowledge in the field of music cognition, neuroscience of music and/or auditory perception, high verbal and written communication skills Preferable: Expertise in perceptual development and in sound measurement and analysis 2) Amiens (https://gramfc.u-picardie.fr/) The post-doc/PhD will be fully dedicated to extracting the EEG correlates of rhythm processing in the course of development, aiming to extract the neural response to different rhythmic characteristics, and to evaluate the impact of musical interventions on neurodevelopment. Required: PhD (MSc for PhD applications) in neuroscience, biomedical engineering, computer science, or related fields, strong background in neural signal processing, advanced skills with scripting languages, such as Matlab or Python, research experience in EEG signal processing/modeling, high verbal and written communication skills Preferable: knowledge in the field of neurosciences of music and/or auditory perception, French fluency All applications should include a CV, a cover letter specifying research interests and motivation, and contact details for two referees. Applications should be sent to either Laurel Trainor ljt at mcmaster.ca (positions in Hamilton) or Sahar Moghimi sahar.moghimi at u-picardie.fr (positions in Amiens). 
-------------- next part -------------- An embedded message was scrubbed... From: Sahar Moghimi Subject: Two 3-year post-doc and two PhD positions in neurodevelopment and rhythm processing in Canada and France Date: Fri, 13 Jan 2023 13:30:27 +0100 Size: 23600 URL: From m.fairbank at essex.ac.uk Fri Jan 13 07:54:34 2023 From: m.fairbank at essex.ac.uk (Fairbank, Michael H) Date: Fri, 13 Jan 2023 12:54:34 +0000 Subject: Connectionists: PhD Scholarships in data-science at University of Essex, UK Message-ID: We have two fully funded PhD scholarships in data science, with an emphasis on neural network applications/research, available at our Institute for Analytics and Data Science. The deadline for applications is 10th February 2023. Dr Michael Fairbank Computer Science and Electronic Engineering University of Essex -------------- next part -------------- An HTML attachment was scrubbed... URL: From shu-chen.li at tu-dresden.de Fri Jan 13 08:26:26 2023 From: shu-chen.li at tu-dresden.de (Shu-Chen Li) Date: Fri, 13 Jan 2023 13:26:26 +0000 Subject: Connectionists: 4-year postdoc position in fMRI research of value-based decision making and brain stimulation (application deadline Feb 3, 2023) In-Reply-To: <60448D18-D195-4462-B0FC-E71989ACF567@tu-dresden.de> References: <1D5761D1-8882-4079-A012-9169BED5B6C9@contoso.com> <60448D18-D195-4462-B0FC-E71989ACF567@tu-dresden.de> Message-ID: <46B57280-E0C8-49B4-9EB7-7714CFB642E3@tu-dresden.de> Faculty of Psychology (TU Dresden, Germany) At the Chair of Lifespan Developmental Neuroscience offers a project position as Research Associate / Postdoc (m/f/x) (subject to personal qualification employees are remunerated according to salary group E 13 TV-L) starting as soon as possible. The position is limited for 4 years, with a possibility of extension subject to the availability of resources. The period of employment is governed by ? 2 (2) Fixed Term Research Contracts Act (Wissenschaftszeitvertragsgesetz - WissZeitVG). 
The Chair of Lifespan Developmental Neuroscience investigates neurocognitive mechanisms underlying perceptual, cognitive, and motivational development across the lifespan. The main themes of our research are neurofunctional mechanisms underlying lifespan development of episodic and spatial memory, cognitive control, reward processing, decision making, perception and action. We also pursue applied research to study effects of behavioral intervention, non-invasive brain stimulation, or digital technologies in enhancing functional plasticity for individuals of different ages. We utilize a broad range of neurocognitive (e.g., EEG, fNIRS, fMRI, tDCS) and computational methods. Our lab has several testing rooms and is equipped with multiple EEG (64-channel and 32-channel) and fNIRS systems, as well as eye-tracking and virtual-reality devices. The MRI scanner (3T) and TMS device can be accessed through the university's NeuroImaging Center. TU Dresden is a university of excellence supported by the DFG, which offers outstanding research opportunities. Researchers in this chair are involved in large research consortia and clusters, such as the DFG SFB 940 "Volition and Cognitive Control" and DFG EXC 2050 "Tactile Internet with Human-in-the-Loop". The position announced here is embedded in a newly established research group funded by the DFG (FOR5429), with a focus on modulating brain networks for memory and learning by using focalized transcranial electrical stimulation (tES). The subproject with which this position is associated will study effects of focalized tES on value-based sequential learning at the behavioral and brain levels in adults. Within the research group we closely collaborate with the project sites at the Center for Cognitive Neuroscience of the Freie Universität Berlin (Free University of Berlin) and the Department of Neurology at the University Medicine Greifswald and other partner institutions. 
The data collection for this subproject will mainly be carried out at the Berlin site (Center for Cognitive Neuroscience, FU Berlin). Tasks: conduct project-related research (data collection and analyses); develop own research ideas in the areas of value-based learning and neurocognitive aging; publish scientific articles. Requirements: university and PhD degree (e.g. Dr. rer. nat. or PhD) in Psychology, Neuroscience or related fields; experience with cognitive neuroscience methods (i.e., fMRI); excellent language skills in English. Language skills in German are not required but will be welcomed. Prior experience with tES is not required but will be an advantage. Interest and experience in computational neuroscience will be highly welcomed. Please contact Prof. Shu-Chen Li (shu-chen.li at tu-dresden.de) for questions about the position. Applications from women are particularly welcome. The same applies to people with disabilities. Please submit your application materials (cover letter, research interests, CV, degree certificates and names of 3 referees) by February 3, 2023 (stamped arrival date of the university central mail service applies) with the subject heading: Postdoc-Brain Stimulation to: TU Dresden, Fakultät Psychologie, Institut für Pädagogische Psychologie und Entwicklungspsychologie, Professur für Entwicklungspsychologie und Neurowissenschaft der Lebensspanne, Frau Prof. Dr. Shu-Chen Li, Helmholtzstr. 10, 01069 Dresden or via the TU Dresden SecureMail Portal https://securemail.tu-dresden.de by sending it as a single pdf document to shu-chen.li at tu-dresden.de. Please submit copies only, as your application materials will not be returned to you. Expenses incurred in attending the interviews cannot be reimbursed. 
___________________ **Reference to data protection: Your data protection rights, the purpose for which your data will be processed, as well as further information about data protection is available to you on the website: https://tu-dresden.de/karriere/datenschutzhinweis. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sahar.moghimi at u-picardie.fr Fri Jan 13 09:19:43 2023 From: sahar.moghimi at u-picardie.fr (Sahar Moghimi) Date: Fri, 13 Jan 2023 15:19:43 +0100 Subject: Connectionists: A PhD position on the development of functional networks in very premature neonates in EEG and MEG Message-ID: <20230113151943.Horde.rnyUG3HvXQNZwKL8YKFVCg9@webmail.u-picardie.fr> A PhD position is available at GRAMFC (Inserm U1105) under the co-supervision of Fabrice Wallois (Inserm U1105, Amiens) and Olivier David (ILCB, CNRS, Aix-Marseille) funded by Inserm. The deadline for applications is March 31st, 2023. Applications will be evaluated as they come in, and the position will be open until filled. Description The main objective of this project is to characterize the endogenous generators underlying the emergence of sensory capacities and to characterize their associated functional connectivity. This will be done retrospectively on our High Resolution EEG database in premature neonates from 24 weeks of gestational age, which is the largest database worldwide. We will also use the OPM pediatric MEG, which is being set up in Amiens. This study will allow us to characterize the establishment of sensory networks before the modulation of cortical activity by external sensory information. The PhD candidate will concentrate on developing advanced signal processing approaches using the already available HR EEG and MEG datasets, for the characterization of spontaneous neural oscillations and the analysis of functional connectivity. 
Skills Required: MSc in neuroscience, biomedical engineering, computer science, or related fields, strong background and research expertise in EEG/MEG signal processing/modeling, advanced skills with scripting languages, such as Matlab or Python, statistical modeling, high English verbal and written communication skills Preferable: French fluency All applications should include a CV, a cover letter specifying research interests and motivation, and contact details for two referees. Applications should be sent to Fabrice Wallois fabrice.wallois at u-picardie.fr From jeanpascal.pfister at unibe.ch Fri Jan 13 09:53:10 2023 From: jeanpascal.pfister at unibe.ch (jeanpascal.pfister at unibe.ch) Date: Fri, 13 Jan 2023 14:53:10 +0000 Subject: Connectionists: Open PhD positions at the University of Bern Message-ID: <952D3523-EB45-4ED1-BC03-2DB185795420@unibe.ch> Applications are invited for three PhD student positions at the University of Bern. The positions are funded by a grant from the Swiss National Science Foundation which is entitled "Why Spikes?". This project aims at answering an almost 100-year-old question in Neuroscience: "What are spikes good for?". Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are, when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication as well as the computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels. The PhD students and post-doc will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The PhD candidates (resp. post-doc candidate) should hold a Master (resp. 
PhD) degree in Physics, Mathematics, Computer Science, Computational Neuroscience, Neuroscience or a related field. She/he should have keen interests in developing theories that can be tested experimentally. Preference will be given to candidates with strong mathematical and programming skills. Expertise in stochastic dynamical systems, point processes, control theory and nonlinear Bayesian filtering will be a plus. The applicant should submit a CV (including contacts of two referees), a statement of research interests, and marks obtained for the Master to Jean-Pascal Pfister (jeanpascal.pfister at unibe.ch). The position is offered for a period of three years and can be extended. Deadline for application is the 31st of January 2023 or until the position is filled. Salary scale is provided by the Swiss National Science Foundation. (http://www.snf.ch/SiteCollectionDocuments/allg_doktorierende_e.pdf). --------------------------------- Prof. Jean-Pascal Pfister Theoretical Neuroscience Group Physiology Department, University of Bern 5 Bühlplatz, CH-3012 Bern, CH e-mail: jeanpascal.pfister at unibe.ch URL: http://www.physio.unibe.ch/~pfister/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From valle at ime.unicamp.br Fri Jan 13 09:29:09 2023 From: valle at ime.unicamp.br (Marcos Eduardo Valle (IMECC-Unicamp)) Date: Fri, 13 Jan 2023 11:29:09 -0300 Subject: Connectionists: [CFP] Special Session on COMPLEX VALUED AND QUATERNIONIC NEURAL NETWORKS: THEORY AND APPLICATIONS at IJCNN 2023 Message-ID: Special Session: COMPLEX VALUED AND QUATERNIONIC NEURAL NETWORKS: THEORY AND APPLICATIONS International Joint Conference on Neural Networks (IJCNN 2023) June 18 - June 23, 2023, Gold Coast Convention and Exhibition Centre Queensland, Australia. 
Webpages (for further information): IJCNN 2023: https://2023.ijcnn.org/ Special Session: https://www.eis.t.u-tokyo.ac.jp/~ahirose/tmp/IJCNN2023SPECIALSESSION_GRM_AH.pdf Aim and Scope: Innovations in Artificial Neural Networks (ANNs) have led to a rich and elegant theory of Complex Valued Neural Networks (CVNNs) and Quaternionic Neural Networks (QNNs), along with interesting applications. In the past decade, research efforts in these areas have accelerated, leading to new research directions related to Hypercomplex-Valued Neural Networks (HVNNs) (particularly based on geometric and algebraic properties related to hypercomplex numbers, including quaternions). CVNNs naturally arise in applications dealing with electromagnetic waves, quantum waves and other wave phenomena. Also, quaternionic neural networks have found many applications in modeling three- and four-dimensional data, processing of colour and polarimetric SAR images, etc. In spite of the development of a large body of knowledge (theory and applications), new research problems naturally arise, such as the generalization of real-valued ANN architectures and training algorithms to CVNNs and QNNs. Furthermore, applications of CVNNs and QNNs in research areas such as pattern recognition, classification, nonlinear filtering, brain-computer interfaces, time-series prediction, intelligent image processing, bio-informatics, robotics, etc. are emerging naturally. This special session is aimed at providing a forum for organized and comprehensive exchange of ideas, presentation of research results and discussion of novel trends in CVNNs and QNNs. We fondly hope that this special session will attract renowned speakers and experienced/young research scholars who aspire to contribute to the CVNN/QNN community. We expect the session to inspire and benefit computational intelligence researchers, as well as specialists who require the latest tools related to ANNs. List of Topics: The special session invites research papers dealing with all aspects of CVNNs and QNNs. 
Theoretical advances as well as applied contributions are welcome. Also, interdisciplinary contributions from related areas overlapping with the scope of the special session are welcome. Topics include, but are not limited to - Complex Valued Neural Networks (CVNNs) with multivalued neurons - Novel complex valued, quaternionic activation functions - Complex Valued, Quaternionic Deep Neural Networks - Complex Valued, nonlinear adaptive filters - Complex Valued, Quaternionic Recurrent Neural Networks - Theoretical research efforts related to CVNNs, QNNs - Novel Learning algorithms for CVNNs, QNNs - Classification, Pattern Recognition and time series prediction using CVNNs, QNNs - CVNNs, QNNs applied to Classification of Spatio-Temporal Data - CVNNs, QNNs operating on Frequency Domain Data - Applications of CVNNs, QNNs in Speech, Image, Video processing and Bio-informatics (e.g. genomics) - Quantum Neural Networks - CVNNs, QNNs in Human Computer Interaction, Robotics Important Dates: - Paper Submission: January 31, 2023 - Notification of Acceptance: March 31, 2023 Organizers: Garimella Rama Murthy (Mahindra University, Hyderabad, India) Akira Hirose (University of Tokyo, Japan) Danilo Mandic (Imperial College, London, UK) Igor Aizenberg (Manhattan College, USA) -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.fairbank at essex.ac.uk Fri Jan 13 10:16:16 2023 From: m.fairbank at essex.ac.uk (Fairbank, Michael H) Date: Fri, 13 Jan 2023 15:16:16 +0000 Subject: Connectionists: PhD Scholarships in data-science at University of Essex, UK In-Reply-To: References: Message-ID: And... 
Here is the link for it: https://www.essex.ac.uk/scholarships/institute-for-analytics-and-data-science-phd-scholarship ________________________________ From: Fairbank, Michael H Sent: 13 January 2023 12:54 To: connectionists at mailman.srv.cs.cmu.edu Subject: PhD Scholarships in data-science at University of Essex, UK We have two fully funded PhD scholarships in data science, with an emphasis on neural network applications/research, available at our Institute for Analytics and Data Science. The deadline for applications is 10th February 2023. Dr Michael Fairbank Computer Science and Electronic Engineering University of Essex -------------- next part -------------- An HTML attachment was scrubbed... URL: From martinagonzalezvilas at gmail.com Fri Jan 13 12:10:51 2023 From: martinagonzalezvilas at gmail.com (Martina G. Vilas) Date: Fri, 13 Jan 2023 18:10:51 +0100 Subject: Connectionists: The Algonauts Project 2023 Challenge is now live: How the Human Brain Makes Sense of Natural Scenes Message-ID: Dear colleagues, We announce the 2023 challenge sponsored by The Algonauts Project : How the Human Brain Makes Sense of Natural Scenes. We invite you to join us in a competition to develop the best computer model of how the human brain responds to natural scenes. Participants whose models most closely match recorded brain data (top 3) will receive monetary prizes and be invited to present their work on Aug. 24th at this year?s Conference on Cognitive Computational Neuroscience (CCN). Submission deadline: July 26, 2023 The Algonauts Project: http://algonauts.csail.mit.edu/ CCN Workshop: How the Human Brain Makes Sense of Natural Scenes Aug. 24, 2023 Organizers will summarize the outcome and provide hands-on tutorials. Challenge winners are invited to present the results of their work. Panel discussion about challenges in NeuroAI. We look forward to seeing you at the workshop in August, and we hope you will consider joining the challenge! 
Best wishes, The Algonauts Project Team *Algonauts 2023 Project Team:* Radoslaw Cichy, Freie Universität Berlin Kendrick Kay, University of Minnesota Aude Oliva, Massachusetts Institute of Technology Gemma Roig, Goethe University Frankfurt Alessandro Gifford, Freie Universität Berlin Benjamin Lahner, Massachusetts Institute of Technology Alex Lascelles, Massachusetts Institute of Technology Sari Saba-Sadiya, Goethe University Frankfurt & FIAS Martina Vilas, Goethe University Frankfurt & ESI -------------- next part -------------- An HTML attachment was scrubbed... URL: From vivekatcube at gmail.com Fri Jan 13 09:55:45 2023 From: vivekatcube at gmail.com (Vivek Sharma) Date: Fri, 13 Jan 2023 15:55:45 +0100 Subject: Connectionists: RA / PhD / PostDoc Position available at Radboud University (SPECS Research Group) Message-ID: Dear all, We are looking for a potential candidate for the Python-based development of the software BrainX3. The proposal is attached below. Feel free to circulate. BrainX3 (https://www.brainx3.com/) is an interactive platform for the 3D visualisation, analysis and simulation of human neuroimaging data. In particular, we focus on volumetric MRI data, DTI/DSI tractography data, EEG/MEG and semantic corpora from available text databases. BrainX3 provides a means to organise and visualise how the above data types could be combined to extract meaningful insights about brain structures and pathways that might be useful for the scientific and clinical communities. We are looking for a highly motivated individual for a Research Assistant / PhD / PostDoc position to work on software development and further improve the functionality in line with Neuroscience. We're looking for individuals with a background in computer science or with experience with Python libraries for visualisation, e.g. VTK and QT5. Who We Are: https://specs-lab.com/ The SPECS research group is located at Donders Centre of Neuroscience, Radboud University, Nijmegen, Netherlands. 
Applicants should submit their resume and a one-page cover letter specifying their eligibility for the position to paul.verschure at ru.nl and paul.verschure at donders.ru.nl. -- Vivek -------------- next part -------------- An HTML attachment was scrubbed... URL: From claudio.gallicchio at unipi.it Fri Jan 13 09:48:58 2023 From: claudio.gallicchio at unipi.it (Claudio Gallicchio) Date: Fri, 13 Jan 2023 14:48:58 +0000 Subject: Connectionists: [Call for Papers] Special Session on Reservoir Computing - IJCNN 2023 - Gold Coast Australia In-Reply-To: <32bd95296ca44890af0a97e8d8eef33f@unipi.it> References: <5B6DEE5D-129B-4B08-8ED2-D549567C1D6D@unipi.it>, <32bd95296ca44890af0a97e8d8eef33f@unipi.it> Message-ID: [Apologies for any cross-postings] [GetFileAttachment.png] Special Session on Reservoir Computing: theory, models, and applications 18 - 23rd June 2023, Gold Coast Convention and Exhibition Centre Queensland, Australia Papers submission deadline: 31 January 2023 More info at: IEEE Task Force on Reservoir Computing - IJCNN 2023 - Special Session Paper submission Guidelines: International Joint Conference on Neural Networks 2023 (ijcnn.org) Organisers Andrea Ceni (University of Pisa, Italy), Claudio Gallicchio (University of Pisa, Italy), Gouhei Tanaka (University of Tokyo, Japan). Description Reservoir Computing (RC) is a popular approach for efficiently training Recurrent Neural Networks (RNNs), based on (i) constraining the recurrent hidden layers to develop stable dynamics, and (ii) restricting the training algorithms to operate solely on an output (readout) layer. Over the years, the field of RC has attracted a lot of research attention, for several reasons. 
Indeed, besides the striking efficiency of training algorithms, RC neural networks are distinctively amenable to hardware implementations (including neuromorphic unconventional substrates, like those studied in photonics and material sciences), enable clean mathematical analysis (rooted, e.g., in the field of random matrix theory), and find natural engineering applications in resource-constrained contexts, such as edge AI systems. Moreover, in the broader picture of Deep Learning development, RC is a breeding ground for testing innovative ideas, e.g. biologically plausible training algorithms beyond gradient back-propagation. Notably, although established in the Machine Learning field, RC lends itself naturally to interdisciplinarity, where ideas and inspirations coming from diverse areas such as computational neuroscience, complex systems and non-linear physics can lead to further developments and new applications. This special session is intended to be a hub for discussion and collaboration within the Neural Networks community, and therefore invites contributions on all aspects of RC, from theory, to new models, to emerging applications. We invite researchers to submit papers on all aspects of RC research, targeting contributions on theory, models, and applications. 
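[Editorial aside: the RC recipe described in this call (a fixed random recurrent "reservoir" kept stable by rescaling its spectral radius, with training restricted to a linear readout) can be sketched minimally as an Echo State Network. The toy task, network sizes, and hyperparameters below are illustrative assumptions, not taken from the announcement.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 1, 100, 1, 1000

# Fixed random input and reservoir weights; the reservoir is rescaled so its
# spectral radius is below 1, a common sufficient condition for stable dynamics
# (the "echo state property"). These weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Toy task: drive the reservoir with a sine wave and read out its
# phase-shifted counterpart, which requires the reservoir's memory.
u = np.sin(np.arange(T) * 0.1).reshape(T, n_in)
y = np.cos(np.arange(T) * 0.1).reshape(T, n_out)

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train only the linear readout, via ridge regression, discarding an
# initial washout period while the reservoir state settles.
washout, lam = 100, 1e-6
S, Y = states[washout:], y[washout:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ Y)
pred = states @ W_out
```

The readout solve is the only training step; everything recurrent stays fixed, which is what makes RC training cheap compared with backpropagation through time.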
Topics of Interests A list of relevant topics for this session includes, without being limited to, the following: * New Reservoir Computing models and architectures, including Echo State Networks and Liquid State Machines * Hardware, physical and neuromorphic implementations of Reservoir Computing systems * Learning algorithms in Reservoir Computing * Reservoir Computing in Computational Neuroscience * Reservoir Computing on the edge systems * Novel learning algorithms rooted in Reservoir Computing concepts * Novel applications of Reservoir Computing, e.g., to images, video and structured data * Federated and Continual Learning in Reservoir Computing * Deep Reservoir Computing neural networks * Theory of complex and dynamical systems in Reservoir Computing * Extensions of the Reservoir Computing framework, such as Conceptors Important Dates * Papers submission deadline: January 31, 2023 * Decision notification: March 31, 2023 Submission Guidelines and Instructions Papers submission for this Special Session follows the same process as for the regular sessions of IJCNN 2023, which uses EDAS as submission system. The review process for IJCNN 2023 will be double-blind. For prospected authors, it is therefore mandatory to anonymize their manuscripts. Each paper should have 6 to MAXIMUM 8 pages, including figures, tables and references. Please refer to the Submission Guidelines at https://2023.ijcnn.org/authors/paper-submission for full information. Submit your paper at the following link https://edas.info/N30081 and choose the track "Special Session: Reservoir Computing: theory, models, and applications", or use the direct link: https://edas.info/newPaper.php?c=30081&track=116064. Note that anonymizing your paper is mandatory, and papers that explicitly or implicitly reveal the authors' identities may be rejected. Sincerely, The Organizing Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GetFileAttachment.png Type: image/png Size: 87966 bytes Desc: GetFileAttachment.png URL: From juergen at idsia.ch Fri Jan 13 13:02:46 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Fri, 13 Jan 2023 18:02:46 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <4F70FCBD-BF38-4924-A523-826087B53460@tecnico.ulisboa.pt> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <4F70FCBD-BF38-4924-A523-826087B53460@tecnico.ulisboa.pt> Message-ID: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec. 6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec. 
15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA Jürgen > On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: > > Dear Juergen, > > You make the same mistake as was done in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". > > Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). > > Best, > > Andreas > > -------------------------------------------------------------------------------------------------- > Prof. Auxiliar Andreas Wichert > > http://web.tecnico.ulisboa.pt/andreas.wichert/ > - > https://www.amazon.com/author/andreaswichert > > Instituto Superior Técnico - Universidade de Lisboa > Campus IST-Taguspark > Avenida Professor Cavaco Silva Phone: +351 214233231 > 2744-016 Porto Salvo, Portugal > >> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >> >> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >> >> https://arxiv.org/abs/2212.11279 >> >> https://people.idsia.ch/~juergen/deep-learning-history.html >> >> This was already reviewed by several deep learning pioneers and other experts. 
Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >> >> Happy New Year! >> >> Jürgen >> >> >> > From vladan at temple.edu Sat Jan 14 14:28:56 2023 From: vladan at temple.edu (Vladan Radosavljevic) Date: Sat, 14 Jan 2023 19:28:56 +0000 Subject: Connectionists: CfP - Workshop on Machine Learning for Streaming Media (ML4SM) at The WebConf2023 Message-ID: Call for Papers: Workshop on Machine Learning for Streaming Media at The WebConf2023 Austin, Texas, USA, Sunday, April 30, 2023 https://ml4streamingmedia-workshop.github.io/www/index.html Streaming media have been seeing massive year-over-year growth in terms of consumption hours recently. For many people, streaming services have become part of everyday life and accessing and consuming media content via streaming is now the norm for people of all ages. Powered by Machine Learning (ML) algorithms, streaming services are becoming one of the most visible and impactful applications of ML that directly interact with people and influence their lives. Despite the rapid growth of streaming services, the research discussions around ML for streaming media remain fragmented across different conferences and workshops. Also, the gap between academic research and constraints and requirements in industry limits the broader impact of many contributions from academia. Therefore, we believe that there is an urgent need to: (i) build connections and bridge the gap by bringing together researchers and practitioners from both academia and industry working on these problems, (ii) attract ML researchers from other areas to streaming media problems, and (iii) bring up the pain points and battle scars in industry to which academia researchers can pay more attention. With this motivation in mind, we are organizing a workshop on Machine Learning for Streaming Media in conjunction with the WebConf 2023. 
We invite quality research contributions, including original research, preliminary research results, and proposals for new work, to be submitted. All submitted papers will be peer reviewed by the program committee and judged for their relevance to the workshop, especially to the topics identified below, and their potential to generate discussion. Accepted submissions will be presented at the workshop and will be published in the companion (workshop) proceedings of the WebConf 2023. We welcome research that has been previously published or is under review elsewhere. Such articles should be clearly identified at the time of submission and will not be published in the proceedings. Workshop Topics The main topics we would like to consider for this workshop are * Content Understanding * Multimodal representation * Feature extraction for audio, video, and image content * Knowledge Graph generation for streaming media * Semi-supervised learning for content understanding * Metadata enrichment for music, podcast, video catalog * Search and recommendation for streaming media * Named entity recognition (e.g. identifying celebrities, hosts, artists) * Conversational systems * Reward modeling and shaping * Item cold start problems and challenges * Designing scalable ML systems * Heterogeneous content recommendation * Learning to rank * Transfer learning * Explainable recommendations * Representation learning * Graph learning algorithms for streaming media * Measurement, Metrics & Evaluation * Evaluation methodologies for streaming media search and recommendations * Methodologies for valuation of content * Measuring business impact of recommendation systems * Life-time value modeling * Churn prediction & retention modeling * User Studies & Human-In the Loop * User studies on real-world recommenders * 
Human-In the loop recommendations * Mixed methods research * User studies on preference elicitation * Trust, Safety & Algorithmic Fairness * Identifying misinformation and disinformation * Algorithmic fairness in recommendations * Hate-speech and fake news detection * Content moderation * Societal impact of recommendation systems for streaming media * Machine learning to optimize streaming quality of experience Important Dates * Submission deadline: 6th of February 2023 * Author notification: 6th of March 2023 * Camera-ready version deadline: 20th of March 2023 * Workshop: Either 1st of May OR 2nd of May 2023 All deadlines are 11:59 pm, Anywhere on Earth (AoE). Submission Instructions Submission link: https://easychair.org/conferences/conf=thewebconf2023iwpd Formatting Instructions Submissions should not exceed six pages in length (including appendices and references). Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, selecting the generic "sigconf" sample. The PDF files must have all non-standard fonts embedded. Workshop papers must be self-contained and in English. Registration and Attendance Further, at least one author of each accepted workshop paper has to register for the main conference. Workshop attendance is only granted for registered participants. Workshop Organizers * Sudarshan Lamkhede - Manager, Machine Learning - Search and Recommendations, Netflix Research. * Praveen Chandar - Staff Research Scientist, Spotify * Vladan Radosavljevic - Machine Learning Engineering Manager, Spotify * Amit Goyal - Senior Applied Scientist, Amazon Music * Lan Luo - Associate Professor of Marketing, University of Southern California If you have any questions please do not hesitate to reach out to the workshop organizers via organizers-ml4sm at googlegroups dot com -------------- next part -------------- An HTML attachment was scrubbed... 
From publicity at acsos.org Sun Jan 15 11:20:39 2023 From: publicity at acsos.org (ACSOS Publicity Chairs) Date: Sun, 15 Jan 2023 16:20:39 +0000 Subject: Connectionists: ACSOS 2023: First Joint Call for Contributions Message-ID:

*** ACSOS 2023 - First Joint Call for Contributions ***
4th IEEE International Conference on Autonomic Computing and Self-Organizing Systems
25-29 September 2023 - Toronto, Canada
https://2023.acsos.org/
https://twitter.com/ACSOSconf
*******************************************************

The world is increasingly embracing autonomous systems: in robotics, manufacturing, software engineering, vehicles, data center systems, and precision agriculture, to name just a few areas. These systems are bringing autonomy to a whole new level of dynamic decision-making under uncertainty, requiring autonomic behavior (e.g., control theory, cybernetics) and self-reference, leading to a range of self-* properties (e.g., self-awareness, self-adaptation, self-organization), and an approach in which system implementation and its environment are holistically considered. Despite this rise in autonomic and self-* systems, there remains a wide range of fundamental challenges in understanding how to design, control, reason about, and trust such systems. The IEEE ACSOS conference solicits novel research on these topics, in fundamentals and methods as well as applications for autonomic and self-* systems. ACSOS is particularly proud of its long-standing academic breadth and innovative industry contributions, and regularly features work from computational biologists through to operating systems researchers, united by the common theme of autonomous systems. Now in its 4th edition, ACSOS was founded in 2020 as a merger of the IEEE International Conference on Autonomic Computing (ICAC) and the IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO).
Beyond the main track for papers, ACSOS 2023 will include tracks for research artifacts, posters and demos, tutorials, workshops, and the doctoral symposium. Further details on these tracks, as well as links to their detailed calls, are available below. All submissions are required to be formatted according to the standard IEEE Computer Society Press proceedings style guide: https://www.ieee.org/conferences/publishing/templates.html Papers are submitted electronically in PDF format through the ACSOS 2023 conference management system: https://easychair.org/conferences/?conf=acsos2023 All times below are in the Anywhere on Earth (AoE) timezone.

*** Main Track ***
Abstract Submission Deadline: April 30th, 2023
Paper Submission Deadline: May 5th, 2023
Notification to Authors: July 5th, 2023
Registration Deadline: August 5th, 2023
Camera Ready Deadline: August 5th, 2023
Further details: https://2023.acsos.org/track/acsos-2023-papers

*** Artifacts ***
Paper Submission Deadline: July 14th, 2023
Notification to Authors: July 28th, 2023
Registration Deadline: August 5th, 2023
Camera Ready Deadline: August 5th, 2023
Further details: https://2023.acsos.org/track/acsos-2023-artifacts

*** Doctoral Symposium ***
Paper Submission Deadline: June 2nd, 2023
Notification to Authors: July 12th, 2023
Registration Deadline: August 5th, 2023
Camera Ready Deadline: August 5th, 2023
Further details: https://2023.acsos.org/track/acsos-2023-doctoral-symposium

*** Posters and Demos ***
Submission Deadline: July 9th, 2023
Notification to Authors: July 23rd, 2023
Registration Deadline: August 5th, 2023
Camera Ready Deadline: August 5th, 2023
Further details: https://2023.acsos.org/track/acsos-2023-posters-and-demos

*** Tutorials ***
Submission Deadline: July 10th, 2023
Notification to Authors: July 20th, 2023
Registration Deadline: August 5th, 2023
Camera Ready Deadline: August 5th, 2023
Further details: https://2023.acsos.org/track/acsos-2023-tutorials

*** Workshops ***
Proposal submission
deadline: March 15th, 2023
Acceptance notification: April 7th, 2023
Call for papers online: April 28th, 2023
Workshop dates: TBD (expected Sept. 25th and Sept. 29th, 2023)
Proposals will be evaluated on an ongoing basis after their submission (i.e., workshops might be accepted before the acceptance notification).
Further details: https://2023.acsos.org/track/acsos-2023-workshops

From marcin at amu.edu.pl Sat Jan 14 05:47:58 2023 From: marcin at amu.edu.pl (Marcin Paprzycki) Date: Sat, 14 Jan 2023 11:47:58 +0100 Subject: Connectionists: CFP -- SOMET -- Naples, September 2023 In-Reply-To: <55c94d51-ed88-be01-960c-d324df0db0db@pti.org.pl> References: <55c94d51-ed88-be01-960c-d324df0db0db@pti.org.pl> Message-ID: <93fb5c49-e0b4-3d02-5769-3853f3256a76@amu.edu.pl>

===== Call for Papers ===== SOMET 2023 in Naples ==========================

The 22nd International Conference on Intelligent Software Methodologies, Tools and Techniques
Parthenope Congress Center, Naples, Italy
September 20-22, 2023
http://www.impianti.unina.it/somet2023/index.html

Venue
Parthenope Congress Center
36, Partenope Street
80121 - Naples, Italy

Authors' Schedule
Full paper submission: April 1, 2023
Notification deadline: May 15, 2023
Final paper submission: June 15, 2023

For paper submission please visit: http://www.impianti.unina.it/somet2023/paper_submission.html

You are invited to participate in SoMeT_23 to help build a forum for exchanging ideas, experiences and applications to foster new directions in software development methodologies and related tools and techniques. The conference is focused on, but not limited to, the following areas:
* Modeling and Simulation in Operations Management
* Software application in Logistics and Supply Chain Management
* Requirement engineering, especially for high-assurance systems, and requirement elicitation
* Software methodologies and tools for robust, reliable, non-fragile software design
* Software development techniques for legacy systems
* Automatic software generation versus reuse, and legacy systems, source code analysis and manipulation
* Software quality and process assessment for business enterprise models
* Intelligent software systems design, and software evolution techniques
* Agile Software and Lean Methods
* Software optimization and formal methods for software design
* Static and dynamic analysis of software performance models, software maintenance, and program understanding and visualization
* Software security tools and techniques, and related Software Engineering models
* End-user programming environments, user-centered Adoption-Centric Reengineering techniques
* Ontology engineering, semantic web
* Software design through interaction, and precognitive software techniques for interactive software entertainment applications
* Business oriented software application models
* Software Engineering models, and formal techniques for software representation, software testing and validation
* Artificial Intelligence techniques in Software Engineering and Requirement Engineering
* Object-oriented, aspect-oriented, component-based and generic programming, multi-agent technology
* Creativity and art in software design principles
* Axiomatic based principles on software design
* Model Driven Development (MDD), code centric to model centric software engineering
* New aspects of digital libraries, collections and archives, Web publishing, and Knowledge-based engineering
* Medical Informatics and bioinformatics, software methods and applications for biomedicine and bioinformatics
* Emergency Management Informatics, software methods and applications for supporting Civil Protection, First Response and Disaster Recovery
* Other software engineering disciplines

The conference program also includes several invited talks reviewed by the program committee members.
Those invited technical papers will describe innovative and significant work in the research and practice of software science.

From gary.marcus at nyu.edu Sat Jan 14 07:04:28 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 14 Jan 2023 04:04:28 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning Message-ID: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu>

Dear Juergen,

You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI do not in any way make this a thorough and balanced exercise with respect to the field as a whole.

I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do.

My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here.

Gary

> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote:
>
> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec.
19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents:
>
> Sec. 1: Introduction
> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
> Sec. 6: 1965: First Deep Learning
> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
> Sec. 18: It's the Hardware, Stupid!
> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
> Sec. 20: The Broader Historic Context from Big Bang to Far Future
> Sec. 21: Acknowledgments
> Sec.
22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])
>
> Tweet: https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_SchmidhuberAI_status_1606333832956973060-3Fcxt-3DHHwWiMC8gYiH7MosAAAA&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=nWCXLKazOjmixYrJVR0CMlR12PasGbAd8bsS6VZ10bk&e=
>
> Jürgen
>
>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote:
>> Dear Juergen,
>> You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning".
>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title).
>> Best,
>> Andreas
>> --------------------------------------------------------------------------------------------------
>> Prof. Auxiliar Andreas Wichert
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__web.tecnico.ulisboa.pt_andreas.wichert_&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=h5Zy9Hk2IoWPt7me1mLhcYHEuJ55mmNOAppZKcivxAk&e=
>> -
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.amazon.com_author_andreaswichert&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=w1RtYvs8dwtfvlTkHqP_P-74ITvUW2IiHLSai7br25U&e=
>> Instituto Superior Técnico - Universidade de Lisboa
>> Campus IST-Taguspark
>> Avenida Professor Cavaco Silva Phone: +351 214233231
>> 2744-016 Porto Salvo, Portugal
>>
>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote:
>>> Machine learning is the science of credit assignment.
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_2212.11279&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=6E5_tonSfNtoMPw1fvFOm8UFm7tDVH7un_kbogNG_1w&e=
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__people.idsia.ch_-7Ejuergen_deep-2Dlearning-2Dhistory.html&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=XPnftI8leeqoElbWQIApFNQ2L4gDcrGy_eiJv2ZPYYk&e=
>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements.
>>> Happy New Year!
>>> Jürgen

From david at irdta.eu Sat Jan 14 09:29:18 2023 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 14 Jan 2023 15:29:18 +0100 (CET) Subject: Connectionists: DeepLearn 2023 Spring: early registration February 1st Message-ID: <1441900827.1010095.1673706558554@webmail.strato.com>

******************************************************************
9th INTERNATIONAL SCHOOL ON DEEP LEARNING
DeepLearn 2023 Spring
Bari, Italy
April 3-7, 2023
https://irdta.eu/deeplearn/2023sp/
***********
Co-organized by:
Department of Computer Science
University of Bari "Aldo Moro"
Institute for Research Development, Training and Advice - IRDTA
Brussels/London
******************************************************************
Early registration: February 1st, 2023
******************************************************************

SCOPE: DeepLearn 2023 Spring will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of deep learning.
Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria, Luleå and Bournemouth. Deep learning is a branch of artificial intelligence covering a spectrum of current exciting research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, bioinformatics, geographic information systems, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be covered, and the main challenges identified, through 23 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face to face interaction and networking will be main ingredients of the event. It will also be possible to participate fully live, remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses.
Overall, DeepLearn 2023 Spring is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE: DeepLearn 2023 Spring will take place in Bari, an important economic centre on the Adriatic Sea. The venue will be:
Department of Computer Science
University of Bari "Aldo Moro"
via Edoardo Orabona, 4
70125 Bari

STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face to face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:
Vipin Kumar (University of Minnesota), Knowledge Guided Deep Learning: A Framework for Accelerating Scientific Discovery
William S. Noble (University of Washington), Deep Learning Applications in Mass Spectrometry Proteomics and Single-Cell Genomics
Emma Tolley (Swiss Federal Institute of Technology Lausanne), Physics-Informed Deep Learning

PROFESSORS AND COURSES:
Babak Ehteshami Bejnordi (Qualcomm AI Research), [intermediate/advanced] Conditional Computation for Efficient Deep Learning with Applications to Computer Vision, Multi-Task Learning, and Continual Learning
Patrick Gallinari (Sorbonne University), [intermediate] Physics Aware Deep Learning for Modeling Dynamical Systems
Sergei V.
Gleyzer (University of Alabama), [introductory/intermediate] Machine Learning Fundamentals and Their Applications to Very Large Scientific Data: Rare Signal and Feature Extraction, End-to-End Deep Learning, Uncertainty Estimation and Realtime Machine Learning Applications in Software and Hardware
Jacob Goldberger (Bar-Ilan University), [introductory/intermediate] Calibration Methods for Neural Networks
Christoph Lampert (Institute of Science and Technology Austria), [intermediate] Training with Fairness and Robustness Guarantees
Yingbin Liang (Ohio State University), [intermediate/advanced] Bilevel Optimization and Applications in Deep Learning
Miaoyuan Liu (Purdue University), [introductory/intermediate] Edge of the Future: AI in Real Time Systems of Scientific Instruments
Xiaoming Liu (Michigan State University), [intermediate] Deep Learning for Trustworthy Biometrics
Michael Mahoney (University of California Berkeley), [intermediate] Practical Neural Network Theory
Liza Mijovic (University of Edinburgh), [introductory/intermediate] Deep Learning & the Higgs Boson: Classification with Fully Connected and Adversarial Networks
Bhiksha Raj (Carnegie Mellon University), [introductory] An Introduction to Quantum Neural Networks [with Rita Singh and Daniel Justice]
Holger Rauhut (RWTH Aachen University), [intermediate] Gradient Descent Methods for Learning Neural Networks: Convergence and Implicit Bias
Bart ter Haar Romeny (Eindhoven University of Technology), [intermediate/advanced] Explainable Deep Learning from First Principles
Tara Sainath (Google), [advanced] E2E Speech Recognition
Martin Schultz (Research Centre Jülich), [introductory/intermediate] Deep Learning for Air Quality, Weather and Climate
Hao Su (University of California San Diego), [intermediate/advanced] Neural Representation for 3D Capturing
Adi Laurentiu Tarca (Wayne State University), [intermediate] Machine Learning for Cross-Sectional and Longitudinal Omics Studies
Zhi Tian (George Mason
University), [intermediate] Communication-Efficient and Robust Distributed Learning
Michalis Vazirgiannis (Polytechnic Institute of Paris), [intermediate/advanced] Graph Machine Learning with GNNs and Applications
Atlas Wang (University of Texas Austin), [intermediate] Sparse Neural Networks: From Practice to Theory
Guo-Wei Wei (Michigan State University), [introductory/advanced] Discovering the Mechanisms of SARS-CoV-2 Evolution and Transmission
Lei Xing (Stanford University), [intermediate] Deep Learning for Medical Imaging and Genomic Data Processing: from Data Acquisition, Analysis, to Biomedical Applications
Xiaowei Xu (University of Arkansas Little Rock), [intermediate/advanced] Deep Learning Language Models and Causal Inference

OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by March 26, 2023.

INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by March 26, 2023.

EMPLOYER SESSION: Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by March 26, 2023.
ORGANIZING COMMITTEE:
Giuseppina Andresini (Bari, local co-chair)
Graziella De Martino (Bari, local co-chair)
Corrado Loglisci (Bari, local co-chair)
Donato Malerba (Bari, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Paolo Mignone (Bari, local co-chair)
Sara Morales (Brussels)
Gianvito Pio (Bari, local co-chair)
Francesca Prisciandaro (Bari, local co-chair)
David Silva (London, organization chair)
Gennaro Vessio (Bari, local co-chair)

REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2023sp/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled once the capacity of the venue is exhausted. It is highly recommended to register prior to the event.

FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines, and fees depend on the registration deadline. The fees for on-site and for online participation are the same.

ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2023sp/accommodation/

CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION: david at irdta.eu

ACKNOWLEDGMENTS:
University of Bari "Aldo Moro"
Rovira i Virgili University
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
From bogdanlapi at gmail.com Sun Jan 15 15:39:44 2023 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Sun, 15 Jan 2023 22:39:44 +0200 Subject: Connectionists: ImageCLEF 2023 Multimedia Retrieval in CLEF Lab Message-ID: [Apologies for multiple postings] ImageCLEF 2023 Multimedia Retrieval in CLEF http://www.imageclef.org/2023/ https://www.facebook.com/ImageClef/ https://twitter.com/imageclef/ *** CALL FOR PARTICIPATION *** ImageCLEF 2023 is an evaluation campaign that is being organized as part of the CLEF (Conference and Labs of the Evaluation Forum) labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org) and are presented in the CLEF conference. Selected contributions among the participants will be invited for submission to a special section "Best of CLEF'23 Labs" in the Springer Lecture Notes in Computer Science (LNCS) of CLEF'23, together with the annual lab overviews. Target communities involve (but are not limited to): information retrieval (text, vision, audio, multimedia, social media, sensor data, etc.), machine learning, deep learning, data mining, natural language processing, image and video processing, computer vision, with special attention to the challenges of multi-modality, multi-linguality, and interactive search.
*** 2023 TASKS ***
- medical dialogue topic classification and summarization
- visual question answering and generation
- traceability of training data in synthetic medical image generation
- concept detection and caption prediction
- recommendations of articles and editorials from Europeana data
- classification of photographic user profiles in unintended scenarios
- late fusion mechanisms and ensembling

#ImageCLEFmedMEDIQA-Sum (new) https://www.imageclef.org/2023/medical/mediqa
Clinical notes are documents that are routinely created by clinicians after every patient encounter. They are used to record a patient's health conditions as well as past or planned tests and treatments. The task tackles the automatic generation of clinical notes summarizing clinician-patient encounter conversations through dialogue to topic classification, dialogue to note summarization, and full-encounter dialogue to note summarization.
Organizers: Wen-wai Yim, and Asma Ben Abacha (Microsoft, USA), Neal Snider (Microsoft/Nuance, USA), Griffin Adams (Columbia University, USA), Meliha Yetisgen (University of Washington, USA).

#ImageCLEFmedVQA (new) https://www.imageclef.org/2023/medical/vqa
Identifying lesions in colonoscopy images is one of the most popular applications of artificial intelligence in medicine. Until now, the research has focused on single-image or video analysis. The main focus of the task will be on visual question answering and visual question generation. The goal is that through the combination of text and image data the output of the analysis gets easier to use by medical experts.
Organizers: Michael A. Riegler, Steven A. Hicks, Vajira Thambawita, Andrea Storås, and Pål Halvorsen (SimulaMet, Norway), Thomas de Lange, Nikolaos Papachrysos, and Johanna Schöler (Sahlgrenska University Hospital, Sweden), Debesh Jha (Norway & Northwestern University, USA).
#ImageCLEFmedGANs (new) https://www.imageclef.org/2023/medical/gans
The task is focused on examining the existing hypothesis that GANs are generating medical images that contain the "fingerprints" of the real images used for generative network training. If the hypothesis is correct, artificial biomedical images may be subject to the same sharing and usage limitations as real sensitive medical data. On the other hand, if the hypothesis is wrong, GANs may potentially be used to create rich datasets of biomedical images that are free of ethical and privacy regulations.
Organizers: Serge Kozlovski, and Vassili Kovalev (Belarusian Academy of Sciences, Minsk, Belarus), Ihar Filipovich (Belarus State University, Minsk, Belarus), Alexandra Andrei, Ioan Coman, and Bogdan Ionescu (Politehnica University of Bucharest, Romania), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland).

#ImageCLEFmedicalCaption (7th edition) https://www.imageclef.org/2023/medical/caption
Interpreting and summarizing the insights gained from medical images such as radiology output is a time-consuming task that involves highly trained experts and often represents a bottleneck in clinical diagnosis pipelines. The task addresses the need for automatic methods that can approximate this mapping from visual information to condensed textual descriptions. The more image characteristics are known, the more structured the radiology scans are and, hence, the more efficient the radiologists are in their interpretation.
Organizers: Johannes Rückert (University of Applied Sciences and Arts Dortmund, Germany), Asma Ben Abacha (Microsoft, USA), Alba García Seco de Herrera (University of Essex, UK), Christoph M.
Friedrich (University of Applied Sciences and Arts Dortmund, Germany), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland), Louise Bloch, Raphael Brüngel, Ahmad Idrissi-Yaghir, and Henning Schäfer (University of Applied Sciences and Arts Dortmund, Germany).

#ImageCLEFrecommending (new) https://www.imageclef.org/2023/recommending
In recent years cultural heritage organisations have made considerable efforts to digitise their collections, and this trend is expected to continue due to organisational goals and national cultural policies. Thus media archives have not only exponentially increased in size, but now hold contents in various modalities (video, image, text). Even when structured metadata is available it is still difficult to discover the contents of media archives and allow users to navigate multiperspectivity in media collections. The task addresses the content-based recommendation of meaningful articles and editorials for specific topics from Europeana data.
Organizers: Alexandru Stan, and George Ioannidis (IN2 Digital Innovations, Germany), Bogdan Ionescu (Politehnica University of Bucharest, Romania), Hugo Manguinhas (Europeana Foundation, Netherlands).

#ImageCLEFaware (3rd edition) https://www.imageclef.org/2023/aware
The images available on social networks can be exploited in ways users are unaware of when initially shared, including situations that have serious consequences for the users' real lives. For instance, it is common practice for prospective employers to search online for information about their future employees. This task addresses the development of algorithms which raise the users' awareness about the real-life impact of online image sharing by classifying user profiles in a list of common unintended use-cases.
Organizers: Jérôme Deshayes-Chossart, and Adrian Popescu (CEA LIST, France), Bogdan Ionescu (Politehnica University of Bucharest, Romania).
#ImageCLEFfusion (2nd edition) https://www.imageclef.org/2023/fusion
Despite the current advances in knowledge discovery, single learners do not produce satisfactory performance when dealing with complex data issues such as class imbalance, high dimensionality, concept drift, noisy data, multimodal data, etc. The task aims to fill this gap by exploiting novel and innovative late fusion techniques for producing a powerful learner based on the expertise of the pool of classifiers it integrates. The task requires participants to develop aggregation mechanisms for the outputs of the supplied systems and generate ensemble predictions with significantly higher performance than the individual systems.
Organizers: Liviu-Daniel Stefan, Mihai Gabriel Constantin, Mihai Dogariu, and Bogdan Ionescu (Politehnica University of Bucharest, Romania).

*** IMPORTANT DATES *** (may vary depending on the task)
- Run submission: May 10, 2023
- Working notes submission: June 5, 2023
- CLEF 2023 conference: September 18-21, 2023, Thessaloniki, Greece

*** REGISTRATION ***
Follow the instructions here: https://www.imageclef.org/2023.

*** OVERALL COORDINATION ***
Bogdan Ionescu, Politehnica University of Bucharest, Romania
Henning Müller, HES-SO, Sierre, Switzerland
Ana-Maria Dragulinescu, Politehnica University of Bucharest, Romania

*** ENDORSEMENT ***
The campaign is supported under the H2020 AI4Media "A European Excellence Centre for Media, Society and Democracy" project, contract #951911 https://www.ai4media.eu/.
On behalf of the organizers, Bogdan Ionescu https://www.aimultimedialab.ro/ From juergen at idsia.ch Sun Jan 15 16:04:11 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Sun, 15 Jan 2023 21:04:11 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> Message-ID: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Thanks for these thoughts, Gary! 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI) which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. 
When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. 2. Here a few relevant paragraphs from the intro: "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. 
A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" Juergen > On 14. Jan 2023, at 15:04, Gary Marcus wrote: > > Dear Juergen, > > You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. > > I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. > > Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. > > My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. 
> Historians looking back on this paper will see too little about the roots of that trend documented here. > > Gary > >> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >> >> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >> >> Sec. 1: Introduction >> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) >> Sec. 6: 1965: First Deep Learning >> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >> Sec. 18: It's the Hardware, Stupid! >> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >> Sec. 
20: The Broader Historic Context from Big Bang to Far Future >> Sec. 21: Acknowledgments >> Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >> >> Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA >> >> Jürgen >> >> >> >> >> >>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>> Dear Juergen, >>> You make the same mistake that was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning" >>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). >>> Best, >>> Andreas >>> -------------------------------------------------------------------------------------------------- >>> Prof. 
Auxiliar Andreas Wichert >>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>> - >>> https://www.amazon.com/author/andreaswichert >>> Instituto Superior Técnico - Universidade de Lisboa >>> Campus IST-Taguspark >>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>> 2744-016 Porto Salvo, Portugal >>>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>>> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>>> https://arxiv.org/abs/2212.11279 >>>> https://people.idsia.ch/~juergen/deep-learning-history.html >>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>>> Happy New Year! 
>>>> Jürgen From EPNSugan at ntu.edu.sg Sun Jan 15 22:25:52 2023 From: EPNSugan at ntu.edu.sg (Ponnuthurai Nagaratnam Suganthan) Date: Mon, 16 Jan 2023 03:25:52 +0000 Subject: Connectionists: IJCNN 2023 SS on "Advances in deep and shallow machine learning algorithms for biomedical data and imaging" In-Reply-To: References: Message-ID: PDF CFP available from: https://github.com/P-N-Suganthan/CFP To submit to this special session, please use this link: https://edas.info/newPaper.php?c=30081&track=116093 International Joint Conference on Neural Networks 2023 Call for Papers for Special Session on Advances in deep and shallow machine learning algorithms for biomedical data and imaging Aim and Scope: Deep learning is one of the most important revolutions in the field of artificial intelligence over the last decade. It has achieved great success in different tasks in computer vision, image processing, biomedical analysis and related fields. Researchers in deep and shallow machine learning, including those working in computer vision, image processing, biomedical analysis and related fields, can, together with experienced clinicians, play a significant role in understanding and working on complex medical data, which ultimately improves patient care. Developing novel deep or shallow machine learning algorithms specific to medical data is a pressing challenge. Healthcare and biomedical sciences have become data-intensive fields, with a strong need for sophisticated data mining methods to extract knowledge from the available information. Biomedical data presents several challenges for analysis, including high dimensionality, class imbalance and low numbers of samples. Although the current research in this field has shown promising results, several research issues need to be explored, as follows. 
There is a need to explore novel feature selection methods to improve predictive performance along with interpretation, and to explore large scale data in biomedical sciences. This special session aims to bring together the current research progress (from both academia and industry) on novel machine learning methods to address the challenges of complex biomedical data. Special attention will be devoted to handling feature selection, class imbalance, and data fusion in biomedical and machine learning applications. It will attract medical experts who have access to interesting sources of data but lack expertise in using machine learning techniques effectively. Topics: The topics relevant to the special session include (but are not limited to) the following: * Computer aided detection and diagnosis * Machine learning methods applied to biomedical data * Deep learning for neuroimaging * Biomedical image classification * Evolutionary computing in bioinformatics * Pattern recognition for imaging and genomics * Big data analytics on biomedical applications * Improved algorithms for multimodality neuroimaging data fusion systems * Clustering and classification algorithms for healthcare. Guest Editors: Mohammad Tanveer, Indian Institute of Technology Indore, India, Email: mtanveer at iiti.ac.in, Homepage: http://people.iiti.ac.in/~mtanveer/ Yu-dong Zhang, University of Leicester, UK Email: yudong.zhang at le.ac.uk, Homepage: https://le.ac.uk/people/yudong-zhang P. N. Suganthan, Qatar University. p.n.suganthan at qu.edu.qa Important Dates * Jan 31, 2023 - First Paper submission deadline (Extension may be offered) * March 31, 2023 - Paper acceptance notification * June 18-23, 2023 - Gold Coast Convention Centre, Queensland, Australia Paper Submission Papers submitted to this Special Session are reviewed according to the same rules as the submissions to the regular sessions of IJCNN 2023. 
Authors who submit papers to this session are invited to mention it in the form during the submission. Submissions to regular and special sessions follow identical format, instructions, deadlines, and review procedures. For further information and news, please refer to the IJCNN website: https://2023.ijcnn.org/ To submit to this special session, please use this link: https://edas.info/newPaper.php?c=30081&track=116093 ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhammer at techfak.uni-bielefeld.de Sun Jan 15 14:13:52 2023 From: bhammer at techfak.uni-bielefeld.de (Barbara Hammer) Date: Sun, 15 Jan 2023 20:13:52 +0100 Subject: Connectionists: AI-starter program in North-Rhine Westfalia (Germany) Message-ID: Dear colleagues, I would like to draw your attention to the AI-starter program, a funding line for postdocs in any topic related to core AI who are considering moving to a university in North-Rhine-Westphalia, Germany. The deadline is the end of March. Please find the call at this link https://www.ptj.de/lw_resource/datapool/systemfiles/cbox/5383/live/lw_bekdoc/ki-starter_call-for-project-funding-application-5th6th.pdf and further information at this link: https://www.ptj.de/lw_resource/datapool/systemfiles/cbox/5382/live/lw_bekdoc/ki-starter_notes-on-application-5th6th.pdf Forms and contact information in case of questions are available at this page: https://www.ptj.de/ki-starter Best wishes Barbara Hammer -- Prof. Dr. Barbara Hammer Machine Learning Group, CITEC Bielefeld University D-33594 Bielefeld Phone: +49 521 / 106 12115 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Menno.VanZaanen at nwu.ac.za Mon Jan 16 02:12:23 2023 From: Menno.VanZaanen at nwu.ac.za (Menno Van Zaanen) Date: Mon, 16 Jan 2023 07:12:23 +0000 Subject: Connectionists: CfP 4th workshop on Resources for African Indigenous Language (RAIL) @ EACL Message-ID: <883c3b232b6bb8300abb62ee6300175d4c4618d5.camel@nwu.ac.za> First call for papers Fourth workshop on Resources for African Indigenous Languages (RAIL) https://bit.ly/rail2023 The 4th RAIL (Resources for African Indigenous Languages) workshop will be co-located with EACL 2023 in Dubrovnik, Croatia. The Resources for African Indigenous Languages (RAIL) workshop is an interdisciplinary platform for researchers working on resources (data collections, tools, etc.) specifically targeted towards African indigenous languages. In particular, it aims to create the conditions for the emergence of a scientific community of practice that focuses on data, as well as computational linguistic tools specifically designed for or applied to indigenous languages found in Africa. Previous workshops showed that the presented problems (and solutions) are not only applicable to African languages. Many issues are also relevant to other low-resource languages, such as different scripts and properties like tone. As such, these languages share similar challenges. This allows for researchers working on these languages with such properties (including non-African languages) to learn from each other, especially on issues pertaining to language resource development. The RAIL workshop has several aims. First, it brings together researchers working on African indigenous languages, forming a community of practice for people working on indigenous languages. Second, the workshop aims to reveal currently unknown or unpublished existing resources (corpora, NLP tools, and applications), resulting in a better overview of the current state-of-the-art, and also allows for discussions on novel, desired resources for future research in this area. 
Third, it enhances sharing of knowledge on the development of low-resource languages. Finally, it enables discussions on how to improve the quality as well as availability of the resources. The workshop has "Impact of impairments on language resources" as its theme, but submissions on any topic related to properties of African indigenous languages (including non-African languages) may be accepted. Suggested topics include (but are not limited to) the following: Digital representations of linguistic structures Descriptions of corpora or other data sets of African indigenous languages Building resources for (under resourced) African indigenous languages Developing and using African indigenous languages in the digital age Effectiveness of digital technologies for the development of African indigenous languages Revealing unknown or unpublished existing resources for African indigenous languages Developing desired resources for African indigenous languages Improving quality, availability and accessibility of African indigenous language resources Submission requirements: We invite papers on original, unpublished work related to the topics of the workshop. Submissions, presenting completed work, may consist of up to eight (8) pages of content plus additional pages of references. The final camera-ready version of accepted long papers is allowed one additional page of content (so up to 9 pages) so that reviewers' feedback can be incorporated. Submissions need to use the EACL stylesheets. These can be found at https://2023.eacl.org/calls/styles. Submission is electronic in PDF through the START system (link will be provided once available). Reviewing is double-blind, so make sure to anonymize your submission (e.g., do not provide author names, affiliations, project names, etc.). Limit the amount of self-citations (anonymized citations should not be used). Accepted papers will be published in the ACL workshop proceedings. 
Important dates: Submission deadline 13 February 2023 Date of notification 13 March 2023 Camera ready deadline 27 March 2023 RAIL workshop 2 or 6 May 2023 Organising Committee Rooweither Mabuya, South African Centre for Digital Language Resources (SADiLaR), South Africa Don Mthobela, Cam Foundation Mmasibidi Setaka, South African Centre for Digital Language Resources (SADiLaR), South Africa Menno van Zaanen, South African Centre for Digital Language Resources (SADiLaR), South Africa -- Prof Menno van Zaanen menno.vanzaanen at nwu.ac.za Professor in Digital Humanities South African Centre for Digital Language Resources https://www.sadilar.org ________________________________ NWU PRIVACY STATEMENT: http://www.nwu.ac.za/it/gov-man/disclaimer.html DISCLAIMER: This e-mail message and attachments thereto are intended solely for the recipient(s) and may contain confidential and privileged information. Any unauthorised review, use, disclosure, or distribution is prohibited. If you have received the e-mail by mistake, please contact the sender or reply e-mail and delete the e-mail and its attachments (where appropriate) from your system. ________________________________ From ioannakoroni at csd.auth.gr Mon Jan 16 03:19:23 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Mon, 16 Jan 2023 10:19:23 +0200 Subject: Connectionists: 20th International Conference on Content-based Multimedia Indexing, sponsored by AI4Media References: <0d6301d87b40$d50c34e0$7f249ea0$@loba.pt> <14ba01d8bdf8$5f450ed0$1dcf2c70$@loba.pt> <0dbc01d8de2c$5fa732f0$1ef598d0$@loba.pt> <0d1b01d8e931$e5447d90$afcd78b0$@loba.pt> <019901d8fe5f$0117b9f0$03472dd0$@loba.pt> <0e8901d915ee$b76c0610$26441230$@loba.pt> <029501d9273f$15ae7d00$410b7700$@loba.pt> <029c01d92740$23d2ad60$6b780820$@loba.pt> Message-ID: <193501d92983$3e33d860$ba9b8920$@csd.auth.gr> The 20th International Conference on Content-based Multimedia Indexing #CBMI2023 will take place in Orleans, France from September 20th to 23rd, 2023. 
Sponsored by the AI4Media project, CBMI2023 aims at bringing together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualization and analytics. Call for special session proposals CBMI'2023 is calling for high quality Special Sessions addressing innovative research related to Artificial Intelligence for analysis and indexing of multimedia and multimodal information. The main scope of the conference is in analysis and understanding of multimedia contents including: 1. Multimedia information retrieval (image, audio, video, text) 2. Mobile media retrieval 3. Event-based media retrieval 4. Affective / emotional interaction or interfaces for multimedia retrieval 5. Multimedia data mining and analytics 6. Multimedia retrieval for multi-modal analytics and visualization 7. Multimedia recommendation 8. Multimedia verification (e.g., multi-modal fact-checking, deep fake analysis) 9. Summarization, browsing, and organization of multimedia content 10. Evaluation and benchmarking of multimedia retrieval systems Explanations 11. Application domains: health, sustainable cities, ecology, ... More information HERE > http://cbmi2023.org/call-for-special-session/ Call for regular papers Authors are encouraged to submit previously unpublished research papers in the broad field of content-based multimedia indexing and applications. The organisers highlight contributions addressing the main problem of search and retrieval. This call also includes artificial intelligence in multimedia analysis, user interaction, social media indexing and retrieval. In addition, special sessions on specific technical aspects or application domains are planned, such as Multimedia for Healthcare, Explainability of AI tools in Multimedia, Physical models in Multimedia mining... The CBMI proceedings are traditionally indexed and distributed by ACM DL. 
Authors of the best papers will be invited to submit extended versions of their contributions to a special issue of a leading journal in the field. More information HERE > http://cbmi2023.org/call-for-regular-papers/ Important Dates Deadline call for special session: 23 January 2023 Deadline call for regular papers: 12 April 2023 More information about the event at http://cbmi2023.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alban.gallard at u-picardie.fr Mon Jan 16 04:22:34 2023 From: alban.gallard at u-picardie.fr (Alban Gallard) Date: Mon, 16 Jan 2023 10:22:34 +0100 Subject: Connectionists: Internship for the characterisation of the bursts in EEG and fMEG Message-ID: <20230116102234.Horde.Al43bemUvngCyL-w1A3wO_4@webmail.u-picardie.fr> An internship position is available at GRAMFC, in Amiens, France. It is a laboratory focused on the analysis of cognitive development and cerebral dysfunction in newborns. Description: The brain activity of premature infants and fetuses is composed of periods of rest and bursts. These bursts can be measured using EEG for premature infants and MEG for fetuses. It has already been determined that the proportion of bursts changes with the gestational age of the child. The objective of the internship is to compare the bursts of activity between premature babies and fetuses by performing the following tasks: - Bibliographic analysis of bursts of EEG activity in premature babies and MEG in fetuses - Feature extraction of bursts and inter-bursts - Analysis and comparison of the characteristics obtained Profile: We are looking for a master's student for an end-of-study internship. The student needs to have skills in signal processing and must be interested in the field of health. It is preferable if the student speaks French. All applications should include a CV and a cover letter specifying research interests and motivation. 
Applications should be sent to Alban Gallard: alban.gallard at u-picardie.fr From kai.sauerwald at fernuni-hagen.de Mon Jan 16 03:28:35 2023 From: kai.sauerwald at fernuni-hagen.de (Kai Sauerwald) Date: Mon, 16 Jan 2023 09:28:35 +0100 Subject: Connectionists: Call for Papers: 21st International Workshop on Nonmonotonic Reasoning (NMR) Message-ID: * Apologies if you receive multiple copies of this call * ============================== Call for Papers NMR 2023 September 2-4, 2023 Rhodes, Greece * Deadlines: 2 June & 9 June 2023* ============================== The 21st International Workshop on Nonmonotonic Reasoning (NMR) http://nmr.krportal.org/2023/ September 2-4, 2023, Rhodes, Greece NMR 2023 is part of the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), https://kr.org/KR2023/. NMR is the premier forum for results in the area of nonmonotonic reasoning. Its aim is to bring together active researchers in this broad field within knowledge representation and reasoning (KRR), including belief revision, uncertain reasoning, reasoning about actions, planning, logic programming, preferences, deontic reasoning, argumentation, causality, and many other related topics including systems and applications (see NMR page, https://nmr.cs.tu-dortmund.de/). NMR has a long history - it started in 1984 and has been held every two years until 2020 and then every year. Recent previous NMR workshops were held in Haifa (2022), Hanoi (virtual, 2021), in Rhodes (virtual, 2020), Tempe (2018) and Cape Town (2016). Since 2020 NMR is being held annually. NMR workshops are usually co-located with the KR conferences (kr.org). As in previous editions, NMR 2023 aims to foster connections between the different subareas of nonmonotonic reasoning and provide a forum for emerging topics. We especially invite papers on systems and applications, as well as position papers and papers addressing benchmark issues. 
The workshop will be structured into topical sessions matching the scope of the accepted papers. The workshop will be held in Rhodes, Greece, on September 2-4, 2023. Workshop activities will include invited talks and presentations of technical papers. -- Submission Information -- There are two types of submissions: Full papers. Full papers should be at most 10 pages including references, figures and appendices, if any. Papers already published or accepted for publication at other conferences are also welcome, provided that the original publication is mentioned in a footnote on the first page and the submission at NMR falls within the authors' rights. In the same vein, papers under review for other conferences can be submitted with a similar indication on their front page. Extended Abstracts. Extended abstracts should be at most 3 pages. The abstracts should introduce work that has recently been published or is under review, or ongoing research at an advanced stage. We highly encourage authors to attach to the submission a preprint/postprint or a technical report. Such extra material will be read at the discretion of the reviewers. Submitting already published material may require permission from the copyright holder. All submissions should be formatted in CEUR style (2-column style) without enabled header and footer. The author kit can be found at http://ceur-ws.org/Vol-XXX/CEURART.zip. Papers must be submitted in PDF only. Submission will be through the EasyChair conference system. Please submit via EasyChair to: https://easychair.org/my/conference?conf=nmr2023 -- Workshop Proceedings -- The accepted papers will be made available electronically in the CEUR Workshop Proceedings series (http://ceur-ws.org/). The copyright of papers remains with the authors. -- Important Dates -- All dates are 'Anywhere on Earth', namely 23:59 UTC-12. 
- Paper registration deadline: 2 June 2023 - Paper submission deadline: 9 June 2023 - Notification to authors: 17 July 2023 - Camera-ready version: 4 August 2023 - Workshop dates: 2-4 September 2023 -- Workshop Co-Chairs -- - Kai Sauerwald, FernUniversität in Hagen, Germany - Matthias Thimm, FernUniversität in Hagen, Germany -- Further Information -- Please visit the workshop website (http://nmr.krportal.org/2023/) for further information and regular updates. NMR 2023 will follow the same contingency plans as KR 2023 with regard to the effects of the global pandemic on international travel. See the KR 2023 website (https://kr.org/KR2023/) for the latest news. From andreas.wichert at tecnico.ulisboa.pt Mon Jan 16 04:35:36 2023 From: andreas.wichert at tecnico.ulisboa.pt (Andrzej Wichert) Date: Mon, 16 Jan 2023 09:35:36 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Dear Jurgen, Again, you missed symbolic AI in your description, names like Douglas Hofstadter. Many of today's applications are driven by symbol manipulation, like diagnostic systems, route planning (GPS navigation), timetable planning, object-oriented programming, symbolic integration and the solution of equations (Mathematica). What are the DL applications today in the industry besides some nice demos? You do not indicate open problems in DL. DL is highly biologically implausible (back propagation, LSTM), requires a lot of energy (computing power), and requires huge training sets. Consider the black-art approach of DL, the failure of self-driving cars, and the question of why a deep NN gives better results than a shallow NN. Maybe the biggest mistake was to replace the biologically motivated algorithm of the Neocognitron by back propagation without understanding what a Neocognitron is doing. 
The Neocognitron performs invariant pattern recognition; a CNN does not. Transformers are biologically implausible and resulted from an engineering requirement. My point is that when I was a student in the late eighties, I wanted to do a master's thesis on NNs, and I was told that NNs do not belong to AI (not even to computer science). Today, if a student comes and says that he wants to investigate problem solving by production systems, or biologically motivated ML, he will be told that this is not AI, since according to you (the title of your review) AI is today DL. In my view, DL stops the progress in AI and NNs in the same way LISP and Prolog did in the eighties. Best, Andrzej -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal > On 15 Jan 2023, at 21:04, Schmidhuber Juergen wrote: > > Thanks for these thoughts, Gary! > > 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI) which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). 
> > Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: > > "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. > > Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. > > When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, …" > > Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. > > 2. Here are a few relevant paragraphs from the intro: > > "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83].
This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. > > Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. > > A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. > > A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD'], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." > > 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" > > Juergen > > >> On 14. Jan 2023, at 15:04, Gary Marcus wrote: >> >> Dear Juergen, >> >> You have made a good case that the history of deep learning is often misrepresented.
But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. >> >> I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. >> >> Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. >> >> My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. >> Historians looking back on this paper will see too little about the roots of that trend documented here. >> >> Gary >> >>> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >>> >>> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >>> >>> Sec. 1: Introduction >>> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >>> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >>> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >>> Sec.
5: 1958: Multilayer Feedforward NN (without Deep Learning) >>> Sec. 6: 1965: First Deep Learning >>> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >>> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >>> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >>> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >>> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >>> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >>> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >>> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >>> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >>> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >>> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >>> Sec. 18: It's the Hardware, Stupid! >>> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >>> Sec. 20: The Broader Historic Context from Big Bang to Far Future >>> Sec. 21: Acknowledgments >>> Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >>> >>> Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA >>> >>> Jürgen >>> >>> >>> >>> >>> >>>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>>> Dear Juergen, >>>> You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning"
>>>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it were not part of AI anymore (suggested by your title). >>>> Best, >>>> Andreas >>>> -------------------------------------------------------------------------------------------------- >>>> Prof. Auxiliar Andreas Wichert >>>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>>> - >>>> https://www.amazon.com/author/andreaswichert >>>> Instituto Superior Técnico - Universidade de Lisboa >>>> Campus IST-Taguspark >>>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>>> 2744-016 Porto Salvo, Portugal >>>>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>>>> Machine learning is the science of credit assignment.
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>>>> https://arxiv.org/abs/2212.11279 >>>>> https://people.idsia.ch/~juergen/deep-learning-history.html >>>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>>>> Happy New Year! >>>>> Jürgen > > From michel.verleysen at uclouvain.be Mon Jan 16 07:30:09 2023 From: michel.verleysen at uclouvain.be (Michel Verleysen) Date: Mon, 16 Jan 2023 12:30:09 +0000 Subject: Connectionists: ESANN 2023 call for papers Message-ID: ESANN 2023 - 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning Bruges (Belgium) and online, 4-5-6 October 2023 https://www.esann.org Call for papers The call for papers is available at https://www.esann.org. Deadline for submissions: May 2, 2023. The ESANN conference addresses machine learning, artificial neural networks, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered. ESANN 2023 builds upon a successful series of conferences organized each year since 1993. ESANN has become a major scientific event in the machine learning, computational intelligence and artificial neural networks fields over the years. The conference will be organized in hybrid mode.
In-person participation is preferred; however, online participation is possible for those who prefer not to travel. The physical conference will be organized in Bruges, one of the most beautiful medieval towns in Europe. Designated as the "Venice of the North", the city has preserved all the charms of its medieval heritage. Its centre, which is inscribed on the Unesco World Heritage list, is in itself a real open-air museum. We hope to receive your submission to ESANN 2023 and to see you in Bruges or online! ======================================================== ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning http://www.esann.org/ * For submissions of papers, reviews, registrations: Michel Verleysen UCLouvain - Machine Learning Group 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at uclouvain.be * Conference secretariat d-side conference services 24 av. L. Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at uclouvain.be ======================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jan 16 08:18:55 2023 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Mon, 16 Jan 2023 13:18:55 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> Message-ID: Gary, "vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here" As usual you are distorting the point here. What Juergen is chronicling is about WORKING AI--(the big bang aside for a moment) and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point.
But AI from the 70s frankly failed, including NN. Expert systems, the apex application...couldn't even suggest decent wines. Language understanding, planning etc.. please point us to the working systems you are talking about. These things are broken. Why would we try to blend broken systems with a classifier that has human to super-human classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (see comments on this topic with Yann and me in the recent WIAS series!) Frankly, the last gasp of AI in the 70s was the US gov 5th generation response in Austin Texas--MCC (launched in the early 80s).. after shaking down 100s of companies 1M$ a year.. and plowing all the monies into reasoning, planning and NL KRep.. oh yeah.. Doug Lenat.. who predicted every year we went down there that CYC would become intelligent in 2001! maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year.. there was nothing. What survived of CYC, and NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. So here we are where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs.. ignoring LLM for the moment.. which I believe are likely to crash in on themselves. Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless. Best, Steve On 1/14/23 07:04, Gary Marcus wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented.
But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec.
6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA Jürgen On 13.
Jan 2023, at 14:40, Andrzej Wichert wrote: Dear Juergen, You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning" Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it were not part of AI anymore (suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus
IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://arxiv.org/abs/2212.11279 https://people.idsia.ch/~juergen/deep-learning-history.html This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José
Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bu.edu Mon Jan 16 09:14:53 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Mon, 16 Jan 2023 14:14:53 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Neural models of attention, learning, and prediction for AI In-Reply-To: References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Dear Andrzej, Juergen et al., While other approaches to AI than Deep Learning are being discussed, I would like to mention neural network models of attention, learning, classification, and prediction, as well as of perception, cognition, emotion, and goal-oriented action, that could profitably be considered part of modern AI. My Magnum Opus about how our brains make our minds: Conscious MIND, Resonant BRAIN: How Each Brain Makes a Mind https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 provides a self-contained overview of many of these contributions. I am happy and honored to report that the book has won the 2022 PROSE book award in Neuroscience from the Association of American Publishers. Among other things, the book explains how, where in our brains, and why from a deep computational perspective, we can consciously see, hear, feel, and know things about the world, and use these conscious representations to effectively plan and act to acquire valued goals. These results hereby offer a rigorous solution of the classical mind-body problem. More generally, the book provides a self-contained and non-technical synthesis of many of the processes whereby our brains make our minds, in both health and disease. 
These neural models equally well can be combined to design autonomous adaptive intelligence into algorithms and mobile robots for engineering, technology, and AI, and many of them have already found their way into large-scale applications, notably applications that require fast incremental learning and prediction of non-stationary databases. The book shows that Adaptive Resonance Theory, or ART, is the currently most advanced cognitive and neural theory that explains how humans learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events. ART overcomes serious foundational problems of back propagation and Deep Learning, including the fact that they are unreliable (because they can experience catastrophic forgetting) and untrustworthy (because they are not explainable). In particular, even if Deep Learning makes a successful prediction in one situation, one does not know why it did so, and cannot depend upon it making a successful prediction in related situations. It should therefore never be used in applications with life-or-death consequences, such as medical and financial applications. Why should anyone believe in ART? There are several kinds of reasons: All the foundational hypotheses of ART have been supported by subsequent psychological and neurobiological experiments. ART has also provided principled and unifying explanations and predictions of hundreds of other experimental facts. ART can, moreover, be derived from a THOUGHT EXPERIMENT about how ANY system can learn to autonomously correct predictive errors in a changing world that is filled with unexpected events. The hypotheses on which this thought experiment is based are familiar facts that we all know about from daily life. They are familiar because they represent ubiquitous evolutionary pressures on the evolution of our brains. When a few such familiar facts are applied together, these mutual constraints lead uniquely to ART. 
Nowhere during the thought experiment are the words mind or brain mentioned. ART hereby proposes a UNIVERSAL class of solutions of the problem of autonomous error correction and prediction in a changing world that is filled with unexpected events. The CogEM (Cognitive-Emotional-Motor) model of how cognition and emotion interact can also be derived from a thought experiment. CogEM proposes explanations of many data about cognitive-emotional interactions. Combining ART and CogEM shows how knowledge and value-based costs can be combined to focus attention upon knowledge and actions that have a high probability of realizing valued goals. Remarkably, the combination of ART and CogEM also leads to the results on consciousness, because they naturally emerge from an analysis of how we can quickly LEARN about a changing world without experiencing catastrophic forgetting. I have called this a solution of the stability-plasticity dilemma; namely how we learn quickly (plasticity) without experiencing catastrophic forgetting (stability). Back propagation and Deep Learning cannot solve any of these problems. One reason is that they are defined by a feedforward adaptive filter. They include no cell activations, or short-term memory (STM) traces, and no top-down learning and attentive matching. In contrast, a good enough match of an ART learned top-down expectation with a bottom-up feature pattern triggers a bottom-up and top-down resonance that chooses and focuses attention upon the CRITICAL FEATURE PATTERNS that are sufficient to predict valued outcomes, while suppressing predictively irrelevant features. These critical features are the ones that are learned by bottom-up adaptive filters and top-down expectations. The selectivity of attention and learning is how the stability-plasticity dilemma is solved. The learned top-down expectations obey the ART Matching Rule. 
They are embodied by a top-down, modulatory on-center, off-surround network whose cells obey mass action, or shunting, laws. These laws model the membrane equations of neurophysiology. The ART Matching Rule has been supported by psychological, anatomical, neurophysiological, biophysical, and even biochemical data in multiple species, including bats. The above resonance is called a feature-category resonance. My book summarizes six different resonances, with different functions, that occur in different parts of our brains:

TYPE OF RESONANCE            TYPE OF CONSCIOUSNESS
surface-shroud               see visual object or scene
feature-category             recognize visual object or scene
stream-shroud                hear auditory object or stream
spectral-pitch-and-timbre    recognize auditory object or stream
item-list                    recognize speech and language
cognitive-emotional          feel emotion and know its source

As the above Table suggests, the book also summarizes many results about speech and language learning, cognitive planning, and performance, notably the role of the prefrontal cortex in choosing, storing, learning, and controlling the event sequences that provide predictive contexts for realizing many of the higher-order processes that together realize human intelligence. When we compare AI with Natural Intelligence (NI), we might also hope that NI will shed some light on deeper aspects of the human condition. To this end, the book summarizes the following kinds of results: The models clarify how normal brain dynamics can break down in specific and testable ways to cause behavioral symptoms of multiple mental disorders, including Alzheimer's disease, autism, amnesia, schizophrenia, PTSD, ADHD, visual and auditory agnosia and neglect, and disorders of slow-wave sleep.
Its exposition of how our brains consciously see enables the book to explain how many visual artists, including Matisse, Monet, and Seurat, as well as the Impressionists and Fauvists in general, achieved the aesthetic effects in their paintings, and how humans consciously see these paintings. The book goes beyond such purely scientific topics to clarify how our brains support such vital human qualities as creativity, morality, and religion, and how so many people can persist in superstitious, irrational, and self-defeating behaviors in certain social environments. Many other topics are discussed in the book's Preface and 17 chapters: https://academic.oup.com/book/40038 That makes the book a flexible resource in many kinds of courses and seminars, as some of its reviewers have noted. In case the above comments may interest some of you in learning more, let me add that I wrote the book to be self-contained and non-technical, in a conversational style, so that even people who know no science can enjoy reading parts of it, no less than students and researchers in multiple disciplines. In fact, friends of mine who know no science have been reading it, including a rabbi, pastor, visual artist, gallery owner, social worker, and lawyer. I also priced it to be affordable. Given that it is an almost 800 double-column page book with over 600 color figures, the book could have cost well over $100. Instead, the book costs around $33 for the hard copy and around $19 for the Kindle version, because I subsidized the cost with thousands of dollars of my personal funds. I did that so that faculty and students who might want to read it could afford to do so. For people who want all the bells and whistles of this line of work up to the present time, there are videos of several of my keynote lectures and around 560 downloadable archival articles on my web page sites.bu.edu/steveg .
If any of you do read parts of the book or research articles, please feel free to send along any comments or questions that may arise when you do. Best wishes to all in the New Year, Steve ________________________________ From: Connectionists on behalf of Andrzej Wichert Sent: Monday, January 16, 2023 4:35 AM To: Schmidhuber Juergen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Dear Jürgen, Again, you missed symbolic AI in your description, and names like Douglas Hofstadter. Many of today's applications are driven by symbol manipulation, like diagnostic systems, route planning (GPS navigation), timetable planning, object-oriented programming, and symbolic integration and solution of equations (Mathematica). What are the DL applications today in industry besides some nice demos? You do not indicate open problems in DL. DL is highly biologically implausible (back propagation, LSTM), requires a lot of energy (computing power), and requires huge training sets. Add to this the black-art character of DL, the failure of self-driving cars, and the open question of why a deep NN gives better results than a shallow NN. Maybe the biggest mistake was to replace the biologically motivated algorithm of the Neocognitron by back propagation without understanding what a Neocognitron is doing. The Neocognitron performs invariant pattern recognition; a CNN does not. Transformers are biologically implausible and resulted from an engineering requirement. My point is that when I was a student, I wanted to do a master's thesis on NNs in the late eighties, and I was told that NNs do not belong to AI (not even to computer science). Today, if a student comes and says that he wants to investigate problem solving by production systems, or biologically motivated ML, he will be told that this is not AI, since according to you (the title of your review) AI today is DL. In my view, DL stops the progress in AI and NN in the same way LISP and Prolog did in the eighties.
Best, Andrzej -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal > On 15 Jan 2023, at 21:04, Schmidhuber Juergen wrote: > > Thanks for these thoughts, Gary! > > 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI), which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). > > Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: > > "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. > > Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. 
> > When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." > > Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. > > 2. Here are a few relevant paragraphs from the intro: > > "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. > > Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34, when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. > > A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. 
> > A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." > > 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" > > Juergen > > >> On 14. Jan 2023, at 15:04, Gary Marcus wrote: >> >> Dear Juergen, >> >> You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI do not in any way make this a thorough and balanced exercise with respect to the field as a whole. >> >> I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. >> >> Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. >> >> My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. 
>> Historians looking back on this paper will see too little about the roots of that trend documented here. >> >> Gary >> >>> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >>> >>> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI, from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >>> >>> Sec. 1: Introduction >>> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >>> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >>> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >>> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) >>> Sec. 6: 1965: First Deep Learning >>> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >>> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >>> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >>> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >>> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >>> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >>> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >>> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >>> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >>> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >>> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >>> Sec. 18: It's the Hardware, Stupid! >>> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >>> Sec. 
20: The Broader Historic Context from Big Bang to Far Future >>> Sec. 21: Acknowledgments >>> Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >>> >>> Tweet: https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_SchmidhuberAI_status_1606333832956973060-3Fcxt-3DHHwWiMC8gYiH7MosAAAA&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=nWCXLKazOjmixYrJVR0CMlR12PasGbAd8bsS6VZ10bk&e= >>> >>> Jürgen >>> >>> >>> >>> >>> >>>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>>> Dear Juergen, >>>> You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". >>>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). >>>> Best, >>>> Andreas >>>> -------------------------------------------------------------------------------------------------- >>>> Prof. 
Auxiliar Andreas Wichert >>>> https://urldefense.proofpoint.com/v2/url?u=http-3A__web.tecnico.ulisboa.pt_andreas.wichert_&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=h5Zy9Hk2IoWPt7me1mLhcYHEuJ55mmNOAppZKcivxAk&e= >>>> - >>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.amazon.com_author_andreaswichert&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=w1RtYvs8dwtfvlTkHqP_P-74ITvUW2IiHLSai7br25U&e= >>>> Instituto Superior Técnico - Universidade de Lisboa >>>> Campus IST-Taguspark >>>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>>> 2744-016 Porto Salvo, Portugal >>>>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>>>> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_2212.11279&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=6E5_tonSfNtoMPw1fvFOm8UFm7tDVH7un_kbogNG_1w&e= >>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__people.idsia.ch_-7Ejuergen_deep-2Dlearning-2Dhistory.html&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=XPnftI8leeqoElbWQIApFNQ2L4gDcrGy_eiJv2ZPYYk&e= >>>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>>>> Happy New Year! >>>>> Jürgen > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Mon Jan 16 12:28:28 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 16 Jan 2023 09:28:28 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> References: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Hi, Juergen, Thanks for your reply. Restricting your title to "modern" AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISENSE, through talking with you and Bas Steunebrink, there's quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). Likewise, there is plenty of e.g. symbolic planning in modern navigation systems, most robots, etc.; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc.; and AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. In the end, it's really a question of balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there! As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of a Turing machine. True, but that doesn't carry you far enough in most real-world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. 
My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed. Which is maybe the one place where you (e.g. https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (e.g. https://openreview.net/forum?id=BZ5a1r-kVsf), and I (e.g. https://arxiv.org/abs/2002.06177) are most in agreement. Best, Gary > On Jan 15, 2023, at 23:04, Schmidhuber Juergen wrote: > > [...] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bogdanlapi at gmail.com Mon Jan 16 15:22:30 2023 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Mon, 16 Jan 2023 22:22:30 +0200 Subject: Connectionists: Call-for-papers: 2nd ACM Int. Workshop on Multimedia AI against Disinformation @ ACM ICMR 2023 Message-ID: [Apologies for multiple postings] 2nd ACM International Workshop on Multimedia AI against Disinformation MAD'23 ACM International Conference on Multimedia Retrieval ICMR'23 Thessaloniki, Greece, June 12-15, 2023 https://mad2023.idmt.fraunhofer.de/ https://easychair.org/my/conference?conf=icmr20230 *** Call for papers *** * Paper submission due: February 28, 2023 * Acceptance notification: March 31, 2023 * Camera-ready papers due: April 20, 2023 * Workshop @ACM ICMR 2023: June 12, 2023 (TBD) Modern communication no longer relies solely on classic media like newspapers or television, but rather takes place over social networks, in real time, and with live interactions among users. The speedup in the amount of information available, however, has also led to an increased amount and quality of misleading content, disinformation and propaganda. Conversely, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to avoid the risk of citizens' opinions being distorted, has become even more crucial and demanding, especially for sensitive topics such as politics, health and religion. Disinformation campaigns are leveraging, among others, market-ready AI-based tools for content creation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of "deepfakes", undermining the perceived credibility of media content. 
It is, therefore, even more crucial to counter these advances by devising new analysis tools able to detect the presence of synthetic and manipulated content, accessible to journalists and fact-checkers, robust and trustworthy, and possibly based on AI to reach greater performance. Future multimedia disinformation detection research relies on the combination of different modalities and on the adoption of the latest advances of deep learning approaches and architectures. These raise new challenges and questions that need to be addressed in order to reduce the effects of disinformation campaigns. The workshop, in its second edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation. Topics of interest include but are not limited to: - Disinformation detection in multimedia content (e.g., video, audio, texts, images) - Multimodal verification methods - Synthetic and manipulated media detection - Multimedia forensics - Disinformation spread and effects in social media - Analysis of disinformation campaigns in societally-sensitive domains - Robustness of media verification against adversarial attacks and real-world complexities - Fairness and non-discrimination of disinformation detection in multimedia content - Explaining disinformation /disinformation detection technologies for non-expert users - Temporal and cultural aspects of disinformation - Dataset sharing and governance in AI for disinformation - Datasets for disinformation detection and multimedia verification - Open resources, e.g., datasets, software tools - Multimedia verification systems and applications - System fusion, ensembling and late fusion techniques - Benchmarking and evaluation frameworks *** Submission guidelines *** When preparing your submission, please adhere strictly to the ACM ICMR 2023 instructions, to ensure the appropriateness of the reviewing process and inclusion in the ACM Digital Library proceedings. 
The instructions are available here https://icmr2023.org/paper-submissions/. *** Organizing committee *** Luca Cuccovillo, Fraunhofer IDMT, Germany Bogdan Ionescu, Politehnica University of Bucharest, Romania Giorgos Kordopatis-Zilos, Centre for Research and Technology Hellas, Thessaloniki, Greece Symeon Papadopoulos, Centre for Research and Technology Hellas, Thessaloniki, Greece Adrian Popescu, CEA LIST, Saclay, France The workshop is supported under the H2020 project AI4Media "A European Excellence Centre for Media, Society and Democracy" https://www.ai4media.eu/, and the Horizon Europe project vera.ai "VERification Assisted by Artificial Intelligence" https://www.veraai.eu/. On behalf of the organizers, Bogdan Ionescu https://www.aimultimedialab.ro/ From gary.marcus at nyu.edu Mon Jan 16 12:37:50 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 16 Jan 2023 09:37:50 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: <82AEB876-1C07-41EE-8290-66FF4A3234E3@nyu.edu> An HTML attachment was scrubbed... URL: From d.kollias at qmul.ac.uk Mon Jan 16 17:52:59 2023 From: d.kollias at qmul.ac.uk (Dimitrios Kollias) Date: Mon, 16 Jan 2023 22:52:59 +0000 Subject: Connectionists: (CfP) CVPR 2023: 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) Message-ID: Dear Colleagues, Please find below the invitation to contribute to the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) to be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2023. 
(1): The Competition is split into the below four Challenges: * Valence-Arousal Estimation Challenge * Expression Classification Challenge * Action Unit Detection Challenge * Emotional Reaction Intensity Estimation Challenge The first 3 Challenges are based on an augmented version of the Aff-Wild2 database, which is an audiovisual in-the-wild database of 594 videos of 584 subjects of around 3M frames; it contains annotations in terms of valence-arousal, expressions and action units. The last Challenge is based on the Hume-Reaction dataset, which is a multimodal dataset of about 75 hours of video recordings of 2222 subjects; it contains continuous annotations for the intensity of 7 emotional experiences. Participants are invited to participate in at least one of these Challenges. There will be one winner per Challenge; the top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop; the accepted papers will be part of the CVPR 2023 proceedings; all other teams are also encouraged to submit paper(s) describing their solutions and final results; the accepted papers will be part of the CVPR 2023 proceedings. More information about the Competition can be found here. 
Important Dates: * Call for participation announced, team registration begins, data available: 13 January, 2023 * Final submission deadline: 18 March, 2023 * Winners Announcement: 19 March, 2023 * Final paper submission deadline: 24 March, 2023 * Review decisions sent to authors; Notification of acceptance: 3 April, 2023 * Camera ready version deadline: 8 April, 2023 Chairs: Dimitrios Kollias, Queen Mary University of London, UK Stefanos Zafeiriou, Imperial College London, UK Panagiotis Tzirakis, Hume AI Alice Baird, Hume AI Alan Cowen, Hume AI (2): The Workshop solicits contributions on the recent progress of recognition, analysis, generation and modelling of face, body, and gesture, while embracing the most advanced systems available for face and gesture analysis, particularly, in-the-wild (i.e., in unconstrained environments) and across modalities like face to voice. In parallel, this Workshop will solicit contributions towards building fair models that perform well on all subgroups and improve in-the-wild generalisation. 
Original high-quality contributions, including: - databases or - surveys and comparative studies or - Artificial Intelligence / Machine Learning / Deep Learning / AutoML / (Data-driven or physics-based) Generative Modelling Methodologies (either Uni-Modal or Multi-Modal; Uni-Task or Multi-Task ones) are solicited on the following topics: i) "in-the-wild" facial expression or micro-expression analysis, ii) "in-the-wild" facial action unit detection, iii) "in-the-wild" valence-arousal estimation, iv) "in-the-wild" physiological-based (e.g.,EEG, EDA) affect analysis, v) domain adaptation for affect recognition in the previous 4 cases vi) "in-the-wild" face recognition, detection or tracking, vii) "in-the-wild" body recognition, detection or tracking, viii) "in-the-wild" gesture recognition or detection, ix) "in-the-wild" pose estimation or tracking, x) "in-the-wild" activity recognition or tracking, xi) "in-the-wild" lip reading and voice understanding, xii) "in-the-wild" face and body characterization (e.g., behavioral understanding), xiii) "in-the-wild" characteristic analysis (e.g., gait, age, gender, ethnicity recognition), xiv) "in-the-wild" group understanding via social cues (e.g., kinship, non-blood relationships, personality) xv) subgroup distribution shift analysis in affect recognition xvi) subgroup distribution shift analysis in face and body behaviour xvii) subgroup distribution shift analysis in characteristic analysis Accepted workshop papers will appear at CVPR 2023 proceedings. 
Important Dates:
* Paper Submission Deadline: 24 March, 2023
* Review decisions sent to authors; Notification of acceptance: 3 April, 2023
* Camera ready version deadline: 8 April, 2023

Chairs:
Dimitrios Kollias, Queen Mary University of London, UK
Stefanos Zafeiriou, Imperial College London, UK
Panagiotis Tzirakis, Hume AI
Alice Baird, Hume AI
Alan Cowen, Hume AI

In case of any queries, please contact d.kollias at qmul.ac.uk

Kind Regards, Dimitrios Kollias, on behalf of the organising committee
========================================================================
Dr Dimitrios Kollias, PhD, MIEEE, FHEA
Lecturer (Assistant Professor) in Artificial Intelligence
Member of Multimedia and Vision (MMV) research group
Member of Queen Mary Computer Vision Group
Associate Member of Centre for Advanced Robotics (ARQ)
Academic Fellow of Digital Environment Research Institute (DERI)
School of EECS
Queen Mary University of London
========================================================================
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jan 16 13:18:10 2023 From: jose at rubic.rutgers.edu (=?utf-8?B?U3RlcGhlbiBKb3PDqSBIYW5zb24=?=) Date: Mon, 16 Jan 2023 18:18:10 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <82AEB876-1C07-41EE-8290-66FF4A3234E3@nyu.edu> References: <82AEB876-1C07-41EE-8290-66FF4A3234E3@nyu.edu> Message-ID: <6aa804e3-7776-9826-a011-31330b0b218d@rubic.rutgers.edu> On 1/16/23 12:37, Gary Marcus wrote: Dear Stephen, Despite our agreement that LLMs are overhyped, I still have to take exception to this one. Well it's a start. Working AI is not and has never been all neural networks. Unfortunately recently it has been. Although NN per se failed in the 70s.. and soldiers like Grossberg kept the lamp burning through the 70s and early 80s, until Boltzmann and Backprop appeared in 1984-6. Relatively few commercial applications are pure neural nets, even today. 
Empirically there is a lot of "working" hybrid and even symbolic AI out there already. Google Search, one of the most widely used commercial products, is and has been for a long time a combination of symbolic AI and deep learning. Really, I thought it was inverse FFTs? Oh maybe you consider any non-DL algorithms symbolic? I mean your definition here has always been slippery.. especially when you give talks. Vehicle navigation systems, also very widely used commercially, are largely or perhaps entirely symbolic. Hmm except for Musk's attempts, which probably were hybrid or completely symbolic (but his cars hit people -right?), I thought Waymo was completely DL, with driver help; Many of the nascent efforts to turn LLMs into search engines appear to be hybrid systems, combining classical search with LLMs, and so on. And as mentioned, Juergen's own company, trying to reshape industrial AI, is neurosymbolic in its fundamental design. Really? I don't think so.. Juergen.. are you doing Neurosymbolic things? Oh wait is LSTM symbolic Gary? Neurosymbolic AI is also an active area of research; DeepMind, for example, just released a fascinating paper that translates symbolic code into Transformer-like neural networks. DeepMind is engaging in such research precisely because they recognize its potential importance. There are now also whole conferences devoted to Neurosymbolic AI, like this one next week: https://ibm.github.io/neuro-symbolic-ai/events/ns-workshop2023/ Yeah there are also Flat Earth International Conferences too: https://www.youtube.com/watch?v=4ylYvNnP1r Pretending none of this exists is just silly. You can place your bets as you like, but there seems to be a lot that you seem unaware of, both in terms of research and commercial application. Actually, I'm pretty much up to date on what's happening. Pretending something exists that doesn't and probably never will is delusional. Steve Gary On Jan 16, 2023, at 05:19, Stephen José Hanson wrote: 
Gary, "vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here" As usual you are distorting the point here. What Juergen is chronicling is about WORKING AI--(the big bang aside for a moment) and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point. But AI from the 70s frankly failed, including NN. Expert systems, the apex application...couldn't even suggest decent wines. Language understanding, planning etc.. please point us to the working systems you are talking about? These things are broken. Why would we try to blend broken systems with a classifier that has human to super-human classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (see comments on this topic with Yann and me in the recent WIAS series!) Frankly, the last gasp of AI in the 70s was the US gov 5th generation response in Austin Texas--MCC (launched in the early 80s).. after shaking down 100s of companies 1M$ a year.. and plowing all the monies into reasoning, planning and NL KRep.. oh yeah.. Doug Lenat.. who predicted every year we went down there that CYC would become intelligent in 2001! maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year.. there was nothing. What survived of CYC, and NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. So here we are where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs.. ignoring LLM for the moment.. which I believe are likely to crash in on themselves. Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. 
there will be no merging of symbols and NN in the near or distant future, because it would be useless. Best, Steve On 1/14/23 07:04, Gary Marcus wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 
2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec. 6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 
22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA Jürgen On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: Dear Juergen, You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof. 
Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: Machine learning is the science of credit assignment. 
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://arxiv.org/abs/2212.11279 https://people.idsia.ch/~juergen/deep-learning-history.html This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -- Stephen José 
Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jan 16 13:47:17 2023 From: jose at rubic.rutgers.edu (=?utf-8?B?U3RlcGhlbiBKb3PDqSBIYW5zb24=?=) Date: Mon, 16 Jan 2023 18:47:17 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <15464C76-D960-4CBF-99FA-3F04851AEC06@nyu.edu> References: <6aa804e3-7776-9826-a011-31330b0b218d@rubic.rutgers.edu> <15464C76-D960-4CBF-99FA-3F04851AEC06@nyu.edu> Message-ID: <413617aa-e66c-8bd3-e09b-b8bb70443d69@rubic.rutgers.edu> On 1/16/23 13:30, Gary Marcus wrote: Stephen, Making up opinionated stuff without any kind of reference is not helpful. I agree. But your references need to be unpacked a bit more than you do. Also your definition of "symbolic" appears to be pretty adaptable. Wrt navigation, I was talking about turn-by-turn directions. Is there anyone who can do that with a neural network? (Turn-by-turn is basically a solved problem, largely solved by graph-theoretic techniques drawn from classical AI; full autonomy simply isn't solved, by any technique) I believe Waymo was already doing that, but happy to be corrected. Commercial stuff could certainly be polluted with ad hoc symbolic hacks. The last published work that I know of disclosing Google Search's internal structure, a while ago, to be sure, placed RankBrain as just one cue (ranking third at the time) among many; there may be more recent work, but I have heard no indication that they have gone over to a fully neural net solution. Certainly they incorporate things like BERT, and more broadly embeddings. Have they kicked away all the classical AI they once used? Not so far as I am aware. What classical AI..? Alpha-beta pruning? Machine learning using propositional logic? Minimum covering DNFs? What are you talking about? 
I gave you a link to what Juergen's team is doing; I spoke with them in late 2022. They were 100% clear that it was neurosymbolic. Really.. ok. Something to talk with Juergen about that isn't historical.. But again.. what do you think is symbolic in any algorithm you are talking about? Rather than accusing me of being opinionated.. (I suppose I am).. I think your arguments and evidence slide off the table most of the time. Steve

--
Stephen José Hanson
Professor, Psychology Department
Director, RUBIC (Rutgers University Brain Imaging Center)
Member, Executive Committee, RUCCS
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Jan 16 13:30:09 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 16 Jan 2023 10:30:09 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <6aa804e3-7776-9826-a011-31330b0b218d@rubic.rutgers.edu> References: <6aa804e3-7776-9826-a011-31330b0b218d@rubic.rutgers.edu> Message-ID: <15464C76-D960-4CBF-99FA-3F04851AEC06@nyu.edu> An HTML attachment was scrubbed... URL: From patrick.gallinari at sorbonne-universite.fr Mon Jan 16 12:52:45 2023 From: patrick.gallinari at sorbonne-universite.fr (patrick Gallinari) Date: Mon, 16 Jan 2023 18:52:45 +0100 Subject: Connectionists: =?utf-8?q?_PhD_positions_=C2=AB_Physics_Aware_Dee?= =?utf-8?q?p_Learning_=C2=BB_Sorbonne_University=2C_Paris_Fr=2E_Dead_line_?= =?utf-8?q?January_31st=2E?= Message-ID: Dear all,

3 PhD positions on "Physics Aware Deep Learning" are available at Sorbonne University, Paris, France:
* Machine Learning augmented model design of unsteady reactive flows - application to a scramjet combustor
* Domain generalization and transfer learning for deep learning data coherence of climate data sets
* Physics Based Deep Learning of Surrogate Models for Fluid Flow Simulation. Application to sustainable space missions.

Apply here (link) before January 31st, 2023! Additional PhD topics are available here.

Conditions: Candidates must already be in possession of a master's degree, but must not have completed a doctoral degree. They should also have an excellent level of English, and must not have carried out their main activity in France for more than 12 months over the past 36 months. 
--
Prof. Patrick Gallinari
Sorbonne Universite - ISIR
4 place Jussieu, 75252 Paris Cedex 05, France
Tel: 33144277370
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rbianchi at fei.edu.br Mon Jan 16 16:09:25 2023 From: rbianchi at fei.edu.br (Reinaldo A. C. Bianchi) Date: Mon, 16 Jan 2023 21:09:25 +0000 Subject: Connectionists: Call for Participation - RoboCup 2023 Humanoid Research Demonstration Message-ID: <93522BBA-55CC-4567-8CF3-D68FAAE6ABF4@fei.edu.br>

* Call for Participation *
RoboCup 2023 Humanoid Research Demonstration
https://humanoid.robocup.org/
July 06 - 09, 2023, Bordeaux, France
===========================================================

The RoboCup Humanoid League invites the research community to apply for showcasing the latest research and development results that are relevant for humanoid robots. Researchers are invited to submit their demonstrations independently of whether they participate in the RoboCup competitions, symposium or have a RoboCup team. Contributions will be evaluated for scientific and technical excellence.

Topics of Interest
---------------------------------------
We welcome demonstrations containing new ideas, concepts, practical studies, and experiment demonstrations relevant to the field of Humanoid Robotics. Topics of interest include, but are not limited to:
* Components, joints and mechanics;
* Soft robotics;
* Anthropomorphic vs. non-anthropomorphic;
* Walking, running, jumping and other humanoid locomotion;
* Adaptability and scalability;
* Sensors and perception;
* Control and stability;
* Dealing with falling;
* Reflexes and learning;
* Energy supply and efficiency;
* Robot design and robotic kits;
* Virtual robots and simulation;
* Benchmarking;
* Bipedal robots applied to real problems;
* Education with and for humanoid robots. 
Procedure --------------------------------------- The Humanoid Research Demonstration will take place in one or several sessions, and participating teams are asked to show a demonstration of their system live in Bordeaux or stream their demonstration from their lab during the session. As a back-up solution, each team needs to provide a video demonstration prior to the beginning of the tournament. If the demonstration is performed in Bordeaux, the Humanoid League will provide a humanoid league soccer playing field for the demonstration. However, the members of the Technical Committee of the Humanoid League understand that for some demonstrations this setup may not be ideal and will try to accommodate all teams. If a team requires other arrangements for the demonstration, it must submit a request to the Technical Committee at least 3 months before the competition to allow sufficient time for alternative arrangements. Application --------------------------------------- We invite teams to apply to participate in the Humanoid Research Demonstration by submitting the following material: Demonstration Data * Demonstration title; * URL of the group's home page; * Name of the contact person; * E-mail address of the contact person; * Postal address of the contact person. Demonstration Video The first part of the material is a video of your robot or robotic part demonstrating its skills, or a brief overview of the software demonstration if your demonstration does not involve a physical embodiment. The video must be supplied as a YouTube link. The maximum duration of the video is 3 minutes. The proponent is responsible for ensuring that the video adheres to YouTube's TOS (especially in regard to music copyright) to prevent the video from being blocked for the reviewers.
Hardware Specification If the demonstration includes any type of hardware to be showcased, a one-page specification (PDF) for each different type of humanoid robot/mechanism used that includes the following: * Robot/mechanism picture; * Robot/mechanism name; * Size of the humanoid robot/mechanism; * Weight of the robot/mechanism; * Robot's/mechanism's joint specification; * Type of sensors used (incl. type of camera(s)); * Computing unit(s); * Other specifications. Short Paper A short paper describing the robot, robot part or software and its task and required environment, limited to four (4) pages including text, references, tables, and figures. The short paper must follow the LNCS format, which can be downloaded from http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0. Please pay special attention to the "Author guidelines" that you'll be able to find there. Plagiarism ======================================= Plagiarism, loosely the unattributed use of other people's words, code, and ideas, is not tolerated in the RoboCup community. See the point "Publishing Ethics" at https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines for a more detailed description. Teams and team members that plagiarize other people's work and present it as their own will be disqualified. For a first offense, the team and team members will be banned from RoboCup competition for two years (usually the current and next year). Harsher penalties will be applied to repeat offenders or extremely serious cases of plagiarism. A team may be disqualified at any time for plagiarism, even after the competition has started. RoboCup will not reimburse teams for any expenses related to their disqualification.
Online Submission ======================================= All qualification material must be submitted online at https://submission.robocuphumanoid.com Important Dates ======================================= Humanoid Research Demonstration * Submission system open: November 20th, 2022 * Submission deadline: April 2nd, 2023 Publication ======================================= Please note that after the announcement of the qualified teams, the qualified teams' submitted material will be made publicly available on the Humanoid League website. Teams applying for participation therefore implicitly grant the Humanoid League the right to publish their qualification material. Visa Process ======================================= If you are a citizen of a country that needs a visa for traveling to France, please start the visa process as soon as you receive your notification of qualification. If you are not sure whether you are eligible for a visa exemption, please consult the official website of the French government at https://france-visas.gouv.fr/en/web/france-visas/welcome-page. With best regards, Technical Committee of RoboCup Humanoid League 2022 This message, together with any other attached information, is confidential and protected by law. Only its intended recipients are authorized to use it. If you are not the intended recipient, please inform the sender and then delete the message, noting that there is no authorization to use, copy, store, forward, print, or take any action based on its content. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From c.dovrolis at cyi.ac.cy Tue Jan 17 01:53:02 2023 From: c.dovrolis at cyi.ac.cy (Constantine Dovrolis) Date: Tue, 17 Jan 2023 06:53:02 +0000 Subject: Connectionists: Post-Doc position in Cyprus -- Climate Science and ML Message-ID: Summary: * Post-Doc position in Cyprus * The Cyprus Institute (www.cyi.ac.cy) * Focus on fundamental research in Climate Science using Machine Learning methods. * Mentors: Constantine Dovrolis, Theo Christoudias, Johannes Lelieveld. * 2 years - can be extended * Closing date: 15/2/2023 ---- The Cyprus Institute (CyI) is a European non-profit science and technology oriented educational and research institution based in Cyprus and led by an acclaimed Board of Trustees. The research agenda of the CyI is pursued at its four Research Centres: the Computation-based Science and Technology Research Centre (CaSToRC), the Science and Technology in Archaeology and Culture Research Centre (STARC), the Energy Environment Water Research Centre (EEWRC) and the Climate and Atmosphere Research Centre (CARE-C). Considerable cross-centre interaction is a characteristic of the Institute's culture. The Cyprus Institute invites applications from highly qualified and motivated individuals to join the Institute as a Postdoctoral Research Fellow in Machine Learning and Data Science for Climate Science in CaSToRC. The successful candidate will apply Machine Learning to investigate key processes of the Earth System, including (but not limited to) the following: * Extreme event (weather, temperature, precipitation, etc.) risk detection * Data-driven and hybrid modeling of the Earth system * Using machine learning to develop new parameterizations for climate models * Causal inference in the context of climate change * Machine learning in support of air quality modelling for exposure mapping, super-resolution, short-term forecasts and long-term projections The candidate will be working primarily with Prof.
Constantine Dovrolis (recently moved to CyI from Georgia Tech -- see http://www.cc.gatech.edu/~dovrolis/) as well as with Prof. Theo Christoudias (http://christoudias.cyi.ac.cy/) and Prof. Johannes Lelieveld (https://www.cyi.ac.cy/index.php/care-c/about-the-center/care-c-our-people/itemlist/user/67-jos-lelieveld.html) from CARE-C. This position offers a unique opportunity for fundamental research, and its exact focus will be determined based also on the interests and skills of the successful candidate. The successful candidate will also work closely with the PIs in writing relevant grant proposals. The appointment is for a period of 2 years, with the possibility of renewal subject to performance and the availability of funds. An internationally competitive remuneration package will be offered, commensurate with the level of experience of the successful candidate. Responsibilities/activities to be involved in: 1. Development of novel Machine Learning/AI models for the analysis of climate phenomena 2. Collection and statistical analysis of observational and model data relevant to atmospheric and climate change, with a special focus on the Eastern Mediterranean and the Middle East (EMME) 3. Writing research papers in collaboration with the PIs, aiming to publish at top-tier conferences and journals 4. Presentation of results at conferences and meetings and participation in journal publications 5. Contribution to research proposal preparation, project (scientific) reporting and project management 6. Supervision and guidance of Research Assistants and Students Required Qualifications 1. PhD in computer science, geosciences, physics or a related field (such as computational science, applied math or climate science) at the time of the appointment (The candidate must hold a PhD degree from a recognized higher education institution before the deadline of the opening.
Candidates who have successfully defended their doctoral thesis but who have not yet formally been awarded the doctoral degree will also be considered eligible to apply, provided that they can document their successful defense of the thesis) 2. Publications in the areas of Artificial Intelligence and/or Machine Learning (either developing new methods in these areas, or applying existing methods in climate science) 3. 3 years of research experience (including PhD research) 4. Understanding of fundamental concepts in climate science 5. Proficient programming skills (preferably in Python) and experience with deep learning frameworks such as PyTorch 6. Ability to work as part of an interdisciplinary team while showing initiative and independence 7. Excellent knowledge of the English language (written and verbal) Preferred Qualifications * Experience with statistical methods for climate datasets * Background in atmospheric dynamics, climate modeling Application For full consideration, interested applicants should submit their application via The Cyprus Institute Exelsys Platform (https://bit.ly/3HDLRTw), following the instructions given. Applicants should submit a curriculum vitae including a short letter of interest, a list of publications and a list of three (3) referees (including contact information); all documentation should be in English and in PDF format. For further information, please contact Prof. Constantine Dovrolis (c.dovrolis at cyi.ac.cy). Please note that applications which do not fulfill the required qualifications or do not follow the announcement's guidelines will not be considered. Recruitment will continue until the position is filled. The Cyprus Institute is an Equal Opportunities Employer certified by the Cypriot Ministry of Labor and an HRS4R accredited institution that adheres to the European Commission's "Charter & Code" principles for recruitment and selection. Contact person: Prof. C.
Dovrolis Reference number: CaSToRC_PDF_22_15 Best regards, Constantine Dovrolis ---------------------------- Professor and Director of CaSToRC - The Cyprus Institute - https://www.cyi.ac.cy/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stmanion at gmail.com Tue Jan 17 00:28:11 2023 From: stmanion at gmail.com (Sean Manion) Date: Tue, 17 Jan 2023 00:28:11 -0500 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Thank you all for a great discussion, and of course Jürgen for your work on the annotated history that has kicked it off. For reasons tangential to all of this, I have recently been reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked 2-day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." I thought some might find this of interest. Cheers! Sean On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus wrote: > Hi, Juergen, > > Thanks for your reply. Restricting your title to "modern" AI as you did > is a start, but I think still not enough. For example, from what I > understand about NNAISENSE, through talking with you and Bas Steunebrink, > there's quite a bit of hybrid AI in what you are doing at your company, not > well represented in the review. The related open-access book certainly > draws heavily on both traditions ( > https://link.springer.com/book/10.1007/978-3-031-08020-3). > > Likewise, there is plenty of e.g. symbolic planning in modern navigation > systems, most robots etc.; still plenty of use of symbolic trees in game > playing; lots of people still use taxonomies and inheritance, etc., and > AFAIK nobody has built a trustworthy virtual assistant, even in a narrow > domain, with only deep learning. And so on.
> > In the end, it's really a question about balance, which is what I think > Andrzej was getting at; you go miles deep on the history of deep learning, > which I respect, but just give relatively superficial pointers (not none!) > outside that tradition. Definitely better, to be sure, in having at least a > few pointers than in having none, and I would agree that the future is > uncertain. I think you strike the right note there! > > As an aside, saying that everything can be formulated as RL is maybe no > more helpful than saying that everything we (currently) know how to do can > be formulated in terms of a Turing machine. True, but doesn't carry you far > enough in most real world applications. I personally see RL as part of an > answer, but most useful in (and here we might partly agree) the context of > systems with rich internal models of the world. > > My own view is that we will get to more reliable AI only once the field > more fully embraces the project of articulating how such models work and > how they are developed. > > Which is maybe the one place where you (e.g. > https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (e.g. > https://openreview.net/forum?id=BZ5a1r-kVsf), and I (e.g. > https://arxiv.org/abs/2002.06177) are most in agreement. > > Best, > Gary > > On Jan 15, 2023, at 23:04, Schmidhuber Juergen wrote: > > Thanks for these thoughts, Gary! > > 1. Well, the survey is about the roots of "modern AI" (as opposed to all > of AI), which is mostly driven by "deep learning." Hence the focus on the > latter and the URL "deep-learning-history.html." On the other hand, many of > the most famous modern AI applications actually combine deep learning and > other cited techniques (more on this below).
> > Any problem of computer science can be formulated in the general > reinforcement learning (RL) framework, and the survey points to ancient > relevant techniques for search & planning, now often combined with NNs: > > "Certain RL problems can be addressed through non-neural techniques > invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) > [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution > (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], > control theory and system identification (1950s) [KAL59][GLA85], > stochastic gradient descent (SGD, 1951) [STO51-52], and universal search > techniques (1973) [AIT7]. > > Deep FNNs and RNNs, however, are useful tools for _improving_ certain > types of RL. In the 1980s, concepts of function approximation and NNs were > combined with system identification [WER87-89][MUN87][NGU89], DP and its > online variant called Temporal Differences [TD1-3], artificial evolution > [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on > this can be found in Sec. 6 of the 2015 survey [DL1]. > > When there is a Markovian interface [PLAN3] to the environment such that > the current input to the RL machine conveys all the information required to > determine a next optimal action, RL with DP/TD/MC-based FNNs can be very > successful, as shown in 1994 [TD2] (master-level backgammon player) and the > 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For > more complex cases without Markovian interfaces, ..." > > Theoretically optimal planners/problem solvers based on algorithmic > information theory are mentioned in Sec. 19. > > 2. Here are a few relevant paragraphs from the intro: > > "A history of AI written in the 1980s would have emphasized topics such as > theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert > systems, and heuristic search [FEI63,83][LEN83].
This would be in line with > topics of a 1956 conference in Dartmouth, where the term "AI" was coined by > John McCarthy as a way of describing an old area of research seeing renewed > interest. > > Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo > built the first working chess end game player [BRU1-4] (back then chess was > considered as an activity restricted to the realms of intelligent > creatures). AI theory dates back at least to 1931-34 when Kurt Gödel > identified fundamental limits of any type of computation-based AI > [GOD][BIB3][GOD21,a,b]. > > A history of AI written in the early 2000s would have put more emphasis on > topics such as support vector machines and kernel methods [SVM1-4], > Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) > reasoning [BAY1-8][FI22] and other concepts of probability theory and > statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble > methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation > [EVO1-7][TUR1]. Why? Because back then such techniques drove many > successful AI applications. > > A history of AI written in the 2020s must emphasize concepts such as the > even older chain rule [LEI07] and deep nonlinear artificial neural networks > (NNs) trained by gradient descent [GD?], in particular, feedback-based > recurrent networks, which are general computers whose programs are weight > matrices [AC90]. Why? Because many of the most famous and most commercial > recent AI applications depend on them [DL4]." > > 3. Regarding the future, you mentioned your hunch on neurosymbolic > integration. While the survey speculates a bit about the future, it also > says: "But who knows what kind of AI history will prevail 20 years from > now?" > > Juergen > > > On 14. Jan 2023, at 15:04, Gary Marcus wrote: > > > Dear Juergen, > > > You have made a good case that the history of deep learning is often > misrepresented.
But, by parity of reasoning, a few pointers to a tiny > fraction of the work done in symbolic AI do not in any way make this a > thorough and balanced exercise with respect to the field as a whole. > > > I am 100% with Andrzej Wichert in thinking that vast areas of AI such as > planning, reasoning, natural language understanding, robotics and knowledge > representation are treated very superficially here. A few pointers to > theorem proving and the like do not solve that. > > > Your essay is a fine if opinionated history of deep learning, with a > special emphasis on your own work, but of somewhat limited value beyond a > few terse references in explicating other approaches to AI. This would be > ok if the title and aspiration didn't aim for AI as a whole; if you really > want the paper to reflect the field as a whole, and the ambitions of the > title, you have more work to do. > > > My own hunch is that in a decade, maybe much sooner, a major emphasis of > the field will be on neurosymbolic integration. Your own startup is heading > in that direction, and the commercial desire to make LLMs reliable and > truthful will also push in that direction. > > Historians looking back on this paper will see too little about the roots > of that trend documented here. > > > Gary > > > On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen > wrote: > > > Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI, > from theorem proving (e.g., Zuse 1948) to later surveys of expert systems > and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much > further in time (not even speaking of Sec. 20). The survey also explains > why AI histories written in the 1980s/2000s/2020s differ. Here again is the > table of contents: > > > Sec. 1: Introduction > > Sec. 2: 1676: The Chain Rule For Backward Credit Assignment > > Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow > Learning > > Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture.
~1972: First > Learning RNNs > > Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) > > Sec. 6: 1965: First Deep Learning > > Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent > > Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. > > Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) > > Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More > RNNs / Etc > > Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity > / NN Online Planners > > Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command > > Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with > Linearized Self-Attention > > Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. > Distilling NNs > > Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding > Gradients > > Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / > ResNets > > Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher > > Sec. 18: It's the Hardware, Stupid! > > Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer > Science > > Sec. 20: The Broader Historic Context from Big Bang to Far Future > > Sec. 21: Acknowledgments > > Sec. 22: 555+ Partially Annotated References (many more in the > award-winning survey [DL1]) > > > Tweet: > https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA > > > Jürgen > > > > > > > On 13. Jan 2023, at 14:40, Andrzej Wichert < > andreas.wichert at tecnico.ulisboa.pt> wrote: > > Dear Juergen, > > You make the same mistake as was made in the early 1970s.
You identify > deep learning with modern AI; the paper should instead be called "Annotated > History of Deep Learning." > > Otherwise, you ignore symbolic AI, like search, production systems, > knowledge representation, planning etc., as if it is not part of AI > anymore (as suggested by your title). > > Best, > > Andreas > > > -------------------------------------------------------------------------------------------------- > > Prof. Auxiliar Andreas Wichert > > > http://web.tecnico.ulisboa.pt/andreas.wichert/ > > - > > > https://www.amazon.com/author/andreaswichert > > Instituto Superior Técnico - Universidade de Lisboa > > Campus IST-Taguspark > > Avenida Professor Cavaco Silva Phone: +351 214233231 > > 2744-016 Porto Salvo, Portugal > > On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: > > Machine learning is the science of credit assignment.
My new survey > credits the pioneers of deep learning and modern AI (supplementing my > award-winning 2015 survey): > > > https://arxiv.org/abs/2212.11279 > > > https://people.idsia.ch/~juergen/deep-learning-history.html > > This was already reviewed by several deep learning pioneers and other > experts. Nevertheless, let me know under juergen at idsia.ch if you can spot > any remaining error or have suggestions for improvements. > > Happy New Year! > > Jürgen > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Wiener Aiken Von Neumann invite 04 Dec 1944.jpg Type: image/jpeg Size: 97769 bytes Desc: not available URL: From J.Spencer at uea.ac.uk Tue Jan 17 06:36:49 2023 From: J.Spencer at uea.ac.uk (John Spencer (PSY - Staff)) Date: Tue, 17 Jan 2023 11:36:49 +0000 Subject: Connectionists: Dynamic Friday Tutorials on Feb. 3rd and March 3rd... Message-ID: <6C6307B8-5F68-4FBF-8CF2-AF2F379909FF@uea.ac.uk> Greetings, The next Dynamic Friday Tutorials on February 3rd and March 3rd will discuss the following sequence of papers: February 3: 1. Lipinski, J., Schneegans, S., Sandamirskaya, Y., Spencer, J. P., & Schöner, G. (2012). A Neuro-Behavioral Model of Flexible Spatial Language Behaviors. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(6), 1490-1511. https://dynamicfieldtheory.org/upload/file/1470692845_fe09e17da927514823a1/LipinskiEtAl2011.pdf 2.
Richter, M., Lins, J., Schneegans, S., Sandamirskaya, Y., & Schöner, G. (2014). Autonomous Neural Dynamics to Test Hypotheses in a Model of Spatial Language. In P. Bello, Guarini, M., McShane, M., & Scassellati, B. (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 2847-2852). Austin, TX: Cognitive Science Society. https://dynamicfieldtheory.org/upload/file/1470692845_573a9c7ffe8e21330360/RichterEtAl2014.pdf March 3: Sabinasz, D., & Schöner, G. (2022). A Neural Dynamic Model Perceptually Grounds Nested Noun Phrases. Topics in Cognitive Science. http://doi.org/10.1111/tops.12630 Details are on-line: https://dynamicfieldtheory.org/events/dynamic_friday_tutorials_dft/ You can register on our website if interested. Cheers, John Spencer John P. Spencer, PhD Professor Developmental Dynamics Lab https://www.facebook.com/DDPSYUEA https://ddlabs.uea.ac.uk School of Psychology, Room 0.09 Lawrence Stenhouse Building, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ United Kingdom Telephone 01603 593968 UK 14th for Research Quality in Psychology, Psychiatry, and Neuroscience (Times Higher Education rankings for the Research Excellence Framework 2021) World Top 200 (Times Higher Education World University Rankings 2022) UK Top 30 (The Times/Sunday Times 2022 and Complete University Guide 2022) UK Top 20 for research quality (Times Higher Education Rankings for the Research Excellence Framework 2021) World Top 50 for research citations (Times Higher Education World University Rankings 2022) World Top 50 (Times Higher Education Impact Rankings 2022) Athena SWAN Silver Award Holder (since 2019) in recognition of advancement towards gender equality for
all (Advance HE) Any personal data exchanged as part of this email conversation will be processed by the University in accordance with current UK data protection law and in line with the relevant UEA Privacy Notice. This email is confidential and may be privileged. If you are not the intended recipient please accept my apologies; please do not disclose, copy or distribute information in this email or take any action in reliance on its contents: to do so is strictly prohibited and may be unlawful. Please inform me that this message has gone astray before deleting it. Thank you for your co-operation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From laure.berti at ird.fr Tue Jan 17 07:19:36 2023 From: laure.berti at ird.fr (Laure Berti) Date: Tue, 17 Jan 2023 13:19:36 +0100 Subject: Connectionists: Special Issue: Deep Learning for Environmental Remote Sensing In-Reply-To: <6C6307B8-5F68-4FBF-8CF2-AF2F379909FF@uea.ac.uk> References: <6C6307B8-5F68-4FBF-8CF2-AF2F379909FF@uea.ac.uk> Message-ID: <28eef8df-eb6c-3af3-d4e8-07e0e33c0fbe@ird.fr> We invite contributions to the special issue on *Deep Learning for Environmental Remote Sensing* in the journal *Sensors* (IF: 3.847). https://www.mdpi.com/journal/sensors/special_issues/474QRK5XS0 The submission deadline is July 26, 2023.
Submitted papers are expected to be aligned with one or more of the relevant topics of the special issue, including (but not limited to): - deep learning models - explainable deep learning models - multimodal and multiscale remote sensing data fusion - uncertainty quantification of deep learning in environmental and Earth observation applications - mapping, monitoring, and characterization of land cover changes with time series - robust parameter retrieval for forestry and agricultural applications - hybrid deep learning and physical models in environmental applications - physical interpretation of deep learning models - application of deep learning to environmental science, agroecology, agroforestry, water management, biodiversity assessment and restoration, forest disturbances, natural resources mapping, disaster management, using Earth observation data Please consider contributing to and/or forwarding this CFP to potentially interested people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Tue Jan 17 08:13:01 2023 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Tue, 17 Jan 2023 13:13:01 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Sean, What a wonderful find! I believe this is most likely a precursor to the Macy meetings of 1946-1953, which were run by McCulloch. It included the wonderful list from Wiener of the areas that "have obtained a degree of intimacy". These meetings came to be called CYBERNETICS by the attendees, and are of course the precursor to AI and neural networks, computational neuroscience, etc. https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html thanks for sharing!
Steve On 1/17/23 00:28, Sean Manion wrote: Thank you all for a great discussion, and of course J?rgen for your work on the annotated history that has kicked it off. For reasons tangential to all of this, I have been recently reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked 2 day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." I thought some might find this of interest. Cheers! Sean On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus > wrote: Hi, Juergen, Thanks for your reply. Restricting your title to ?modern? AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISANCE, through talking with you and Bas Steunebrink, there?s quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). Likewise, there is plenty of eg symbolic planning in modern navigation systems, most robots etc; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc., an AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. In the end, it?s really a question about balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but just give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there! 
As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of a Turing machine. True, but that doesn't carry you far enough in most real-world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed. Which is maybe the one place where you (e.g. https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (e.g. https://openreview.net/forum?id=BZ5a1r-kVsf), and I (e.g. https://arxiv.org/abs/2002.06177) are most in agreement. Best, Gary On Jan 15, 2023, at 23:04, Schmidhuber Juergen > wrote: Thanks for these thoughts, Gary! 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI), which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL.
In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. 2. Here are a few relevant paragraphs from the intro: "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with the topics of the 1956 Dartmouth conference, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess endgame player [BRU1-4] (back then chess was considered an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34, when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b].
A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD'], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" Juergen On 14. Jan 2023, at 15:04, Gary Marcus > wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI do not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI.
This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen > wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI, from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents:

Sec. 1: Introduction
Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
Sec. 6: 1965: First Deep Learning
Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
Sec. 18: It's the Hardware, Stupid!
Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
Sec. 20: The Broader Historic Context from Big Bang to Far Future
Sec. 21: Acknowledgments
Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])

Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA Jürgen On 13. Jan 2023, at 14:40, Andrzej Wichert > wrote: Dear Juergen, You make the same mistake that was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning." Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning, etc., as if it were not part of AI anymore (as suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen > wrote: Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://arxiv.org/abs/2212.11279 https://people.idsia.ch/~juergen/deep-learning-history.html This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From frothga at sandia.gov Tue Jan 17 12:12:02 2023 From: frothga at sandia.gov (Rothganger, Fredrick) Date: Tue, 17 Jan 2023 17:12:02 +0000 Subject: Connectionists: Cybernetics In-Reply-To: References: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Despite its long history, cybernetics remains the single best framework for our field. It is broad enough to encompass biology, neuroscience, artificial/natural intelligence (possibly including all mental phenomena), computer science, politics and economics. Yet it is specific enough to implement things that actually work. I believe that if we ever achieve anything resembling AGI, its generality will come from goal-oriented behavior (a kind of feedback control). ________________________________ From: Connectionists on behalf of Stephen José Hanson Sent: Tuesday, January 17, 2023 6:13 AM To: Sean Manion ; Gary Marcus Cc: connectionists at cs.cmu.edu Subject: [EXTERNAL] Re: Connectionists: Annotated History of Modern AI and Deep Learning -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arbib at usc.edu Tue Jan 17 13:32:15 2023 From: arbib at usc.edu (Michael Arbib) Date: Tue, 17 Jan 2023 18:32:15 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: Now that Cybernetics has been brought into the conversation, and since I may be the only person who was both a PhD student of Norbert Wiener (for a while) and an RA for Warren McCulloch, I take the liberty of drawing attention to a memoir I wrote: Arbib, M. A. (2018). From cybernetics to brain theory, and more: A memoir. Cognitive Systems Research, 50, 83-145. A preprint is available on ResearchGate (just enter "Arbib ResearchGate Memoir" in your browser). There are ideas in there whose solution I still await... ************************************ From: Connectionists On Behalf Of Stephen José Hanson Sent: Tuesday, January 17, 2023 5:13 AM To: Sean Manion ; Gary Marcus Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning
From gary.marcus at nyu.edu Tue Jan 17 13:35:00 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 17 Jan 2023 10:35:00 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: <26E5D41A-623F-491D-A8A2-98B7750C0333@nyu.edu> An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Jan 17 14:03:41 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 17 Jan 2023 11:03:41 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <693a1f0d-402d-e55d-703f-ef19320fcaa8@rubic.rutgers.edu> References: <693a1f0d-402d-e55d-703f-ef19320fcaa8@rubic.rutgers.edu> Message-ID: <87231C68-748E-4203-94FF-BB3A3FFBD9CE@nyu.edu> An HTML attachment was scrubbed...
URL: From jose at rubic.rutgers.edu Tue Jan 17 13:45:42 2023 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Tue, 17 Jan 2023 18:45:42 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <26E5D41A-623F-491D-A8A2-98B7750C0333@nyu.edu> References: <26E5D41A-623F-491D-A8A2-98B7750C0333@nyu.edu> Message-ID: <693a1f0d-402d-e55d-703f-ef19320fcaa8@rubic.rutgers.edu> Michael, I agree, and am looking forward to reading it. I think the cybernetics book, with the transcriptions of the discussion, is a nice bit of time travel. Best, Steve On 1/17/23 13:35, Gary Marcus wrote: Wow. Chills down spine, in a good way. I did not know that and look forward to reading!
https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html thanks for sharing! Steve On 1/17/23 00:28, Sean Manion wrote: Thank you all for a great discussion, and of course J?rgen for your work on the annotated history that has kicked it off. For reasons tangential to all of this, I have been recently reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked 2 day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." I thought some might find this of interest. Cheers! Sean On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus > wrote: Hi, Juergen, Thanks for your reply. Restricting your title to ?modern? AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISANCE, through talking with you and Bas Steunebrink, there?s quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). Likewise, there is plenty of eg symbolic planning in modern navigation systems, most robots etc; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc., an AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. In the end, it?s really a question about balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but just give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there! 
As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of Turing machine. True, but doesn?t carry you far enough in most real world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed. Which is maybe the one place where you (eg https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (eg https://openreview.net/forum?id=BZ5a1r-kVsf), and I (eg https://arxiv.org/abs/2002.06177) are most in agreement. Best, Gary On Jan 15, 2023, at 23:04, Schmidhuber Juergen > wrote: ?Thanks for these thoughts, Gary! 1. Well, the survey is about the roots of ?modern AI? (as opposed to all of AI) which is mostly driven by ?deep learning.? Hence the focus on the latter and the URL "deep-learning-history.html.? On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. 
In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. 2. Here are a few relevant paragraphs from the intro: "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. 
A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" Juergen On 14. Jan 2023, at 15:04, Gary Marcus wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. 
This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec. 6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec. 
15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_SchmidhuberAI_status_1606333832956973060-3Fcxt-3DHHwWiMC8gYiH7MosAAAA&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=nWCXLKazOjmixYrJVR0CMlR12PasGbAd8bsS6VZ10bk&e= Jürgen On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: Dear Juergen, You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning." Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning, etc., as if it were not part of AI anymore (suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof. 
Auxiliar Andreas Wichert https://urldefense.proofpoint.com/v2/url?u=http-3A__web.tecnico.ulisboa.pt_andreas.wichert_&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=h5Zy9Hk2IoWPt7me1mLhcYHEuJ55mmNOAppZKcivxAk&e= - https://urldefense.proofpoint.com/v2/url?u=https-3A__www.amazon.com_author_andreaswichert&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=w1RtYvs8dwtfvlTkHqP_P-74ITvUW2IiHLSai7br25U&e= Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_2212.11279&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=6E5_tonSfNtoMPw1fvFOm8UFm7tDVH7un_kbogNG_1w&e= https://urldefense.proofpoint.com/v2/url?u=https-3A__people.idsia.ch_-7Ejuergen_deep-2Dlearning-2Dhistory.html&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=XPnftI8leeqoElbWQIApFNQ2L4gDcrGy_eiJv2ZPYYk&e= This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Jan 17 20:16:15 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 17 Jan 2023 17:16:15 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: apple-touch-icon-1582743354193.png Type: image/png Size: 2569 bytes Desc: not available URL: From steve at bu.edu Tue Jan 17 18:00:51 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Tue, 17 Jan 2023 23:00:51 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <26E5D41A-623F-491D-A8A2-98B7750C0333@nyu.edu> References: <26E5D41A-623F-491D-A8A2-98B7750C0333@nyu.edu> Message-ID: Dear Gary, Michael, and other Connectionists colleagues, Michael's reminiscences remind me of related memories. I was traveling from Stanford in 1964 to MIT to study with Norbert Wiener when I heard that Wiener had just died. I was a PhD student at Stanford then and was hoping to find a more congenial place at MIT to continue my work on neural networks. Instead, I got my PhD at the Rockefeller Institute for Medical Research in New York, where Abe Pais, who worked for Niels Bohr and was a colleague of Albert Einstein at the Institute for Advanced Study, was on the faculty. Abe regaled us with rather colorful stories about both Bohr and Einstein. Later, when I was an assistant professor at MIT, I got to know Norman Levinson and his wife, Fagi, very well. 
Norman was Wiener's most famous student: https://en.wikipedia.org/wiki/Norman_Levinson Norman and Fagi told me lots of stories about Wiener's many famous idiosyncrasies. While I was a young professor at MIT, Norman and Fagi generously treated me as their scientific godson and took me under their wing both at their home and at many scientific conferences. After Norman's death, Fagi became our daughter's god grandmother and shared many happy family celebrations with us. For an interesting and heartwarming story about Fagi's impact on the mathematics community to which she was connected through Norman, and thus Wiener, see: https://news.mit.edu/2010/obit-levinson I also had the good luck to meet Warren McCulloch https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch and Jerry Lettvin https://en.wikipedia.org/wiki/Jerome_Lettvin when I got to MIT. I remember wandering about Warren's lab at the Research Lab of Electronics with my friend, Stu Kauffman, who was then working with McCulloch: https://en.wikipedia.org/wiki/Stuart_Kauffman. I met Stu at Dartmouth, where I began my work in neural networks as a freshman in 1957. We have remained close friends to the present time. Jerry and his wife, Maggie, also looked after me and invited me to dinner parties at their home. Maggie became quite a famous person in her own right and was a major role model for exercise, health, and women in general: https://en.wikipedia.org/wiki/Maggie_Lettvin I will never forget the generosity and kindness of these incredibly talented people. My Magnum Opus includes discussions of Bohr, Einstein, and McCulloch, among other great scientists: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best, Steve ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Tuesday, January 17, 2023 1:35 PM To: Michael Arbib Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Wow. 
Chills down spine, in a good way. I did not know that and look forward to reading! On Jan 17, 2023, at 10:32, Michael Arbib wrote: Now that Cybernetics has been brought into the conversation, and since I may be the only person who was both a PhD student of Norbert Wiener (for a while) and an RA for Warren McCulloch, I take the liberty of drawing attention to a memoir I wrote: Arbib, M. A. (2018). From cybernetics to brain theory, and more: A memoir. Cognitive Systems Research, 50, 83-145. A preprint is available on ResearchGate; just enter "Arbib ResearchGate Memoir" in your browser. There are ideas in there whose solution I still await... ************************************ From: Connectionists On Behalf Of Stephen José Hanson Sent: Tuesday, January 17, 2023 5:13 AM To: Sean Manion ; Gary Marcus Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Sean, What a wonderful find! I believe this is most likely a precursor to the Macy meetings (1946-1953), which were run by McCulloch. It included the wonderful list from Wiener of the areas that "have obtained a degree of intimacy". These meetings came to be called CYBERNETICS by the attendees, and cybernetics is of course the precursor to AI, neural networks, computational neuroscience, etc. https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html thanks for sharing! Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiaxiangzhang at gmail.com Tue Jan 17 18:07:25 2023 From: jiaxiangzhang at gmail.com (Jiaxiang ZHANG) Date: Tue, 17 Jan 2023 23:07:25 +0000 Subject: Connectionists: Fully-funded PhD studentship on neurocognitive fingerprinting at Swansea University Message-ID: [Apologies for cross-posting] Dear all, We are seeking applicants for a PhD studentship to join the Department of Computer Science at Swansea University. This project will combine sensitive behavioural paradigms, hierarchical generative models, and cutting-edge neuroimaging. We will use computational models to quantify human behavioural and electrophysiological data. From model inferences, we will then construct an individual's digital fingerprints. The project will then estimate the uniqueness and robustness of human digital fingerprints. 
We welcome enthusiastic applicants from a wide range of backgrounds, including computing, neuroscience, psychology, engineering, and physics. We will provide training in advanced statistical analyses, computational modelling and brain signal analysis. The PhD candidate will join a collaborative and multidisciplinary AI research group at the Swansea University Computational Foundry. The Computational Foundry is a £32.5 million world-class facility that hosts the School of Mathematics and Computer Science, and the PhD candidate will be a member of this vibrant research community. Job specs and application details are available online (https://www.swansea.ac.uk/postgraduate/scholarships/research/computer-science-epsrc-su-phd-digital-2023-rs204.php). The application deadline is January 25, 2023. This studentship is available to UK and international applicants. For informal inquiries about the project, please contact Professor Jiaxiang Zhang (Jiaxiang.zhang at swansea.ac.uk) with your CV. --- Jiaxiang Zhang Department of Computer Science, Swansea University & Cardiff University Brain Imaging Centre http://ccbrain.org https://www.swansea.ac.uk/staff/jiaxiang.zhang Jiaxiang.zhang at swansea.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Jan 17 18:31:30 2023 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 17 Jan 2023 15:31:30 -0800 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From steve at bu.edu Tue Jan 17 19:35:12 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Wed, 18 Jan 2023 00:35:12 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: Dear Gary et al., Your reminiscence about McCarthy and LISP reminds me of a story about BASIC and computer time-sharing. 
Both were introduced by two of my math professors at Dartmouth, John Kemeny and Tom Kurtz, a few years after I was an undergraduate student there: https://en.wikipedia.org/wiki/John_G._Kemeny Kemeny had a profound influence on my life in science. Just after I made some of my first discoveries about how to model mind and brain, I took his course in the philosophy of science as a sophomore. Until that point, essentially all of my science courses were taught by the book, with no hint of the passions and meandering pathways that often led to discoveries. I felt that my head exploded with ideas when I made my first discoveries, but I had no idea how to "do" science. His course was incredibly liberating and instructive for me. Kemeny was an eloquent lecturer who made mathematics live. He believed that good mathematics students should go into the social sciences. He put his money where his mouth was by writing, with another of my math professors, J. Laurie Snell, the book Mathematical Models in the Social Sciences, which is still in print today: https://www.amazon.com/Mathematical-Models-Social-Sciences-Press/dp/0262610302 Kemeny was one of my most important mentors who encouraged my early work. I became Dartmouth's first joint major in mathematics and psychology with his full support. Returning to a theme of my earlier email, Kemeny was Einstein's last assistant at Princeton before being hired as a full professor at Dartmouth at age 27 and becoming chairman of the mathematics department a couple of years later. Another set of lucky circumstances that helped me to find my own path. 
Not surprisingly, I also discuss Kemeny in my Magnum Opus https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve ________________________________ From: Gary Marcus Sent: Tuesday, January 17, 2023 6:31 PM To: Grossberg, Stephen Cc: Michael Arbib ; connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning I am just old enough to appreciate all this and young enough not to have met any of them. My late father took Fortran (on punch cards) from John McCarthy at MIT, and McCarthy (who I did later meet) left in the middle of the semester, having just invented LISP. On Jan 17, 2023, at 15:01, Grossberg, Stephen wrote: ? Dear Gary, Michael, and other Connectionists colleagues, Michael's reminiscences remind me of related memories. I was traveling from Stanford in 1964 to MIT to study with Norbert Wiener when I heard that Wiener had just died. I was a PhD student at Stanford then and was hoping to find a more congenial place at MIT to continue my work on neural networks. Instead, I got my PhD at the Rockefeller Institute for Medical Research in New York, where Abe Pais, who worked for Niels Bohr and was a colleague of Albert Einstein at the Institute for Advanced Study, was on the faculty. Abe regaled us with rather colorful stories about both Bohr and Einstein. Later, when I was an assistant professor at MIT, I got to know Norman Levinson and his wife, Fagi, very well. Norman was Wiener's most famous student: https://en.wikipedia.org/wiki/Norman_Levinson Norman and Fagi told me lots of stories about Wiener's many famous idiosyncrasies. While I was a young professor at MIT, Norman and Fagi generously treated me as their scientific godson and took me under their wing both at their home and at many scientific conferences. After Norman's death, Fagi became our daughter's god grandmother and shared many happy family celebrations with us. 
For an interesting and heartwarming story about Fagi's impact on the mathematics community to which she was connected through Norman, and thus Wiener, see: https://news.mit.edu/2010/obit-levinson I also had the good luck to meet Warren McCulloch https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch and Jerry Lettvin https://en.wikipedia.org/wiki/Jerome_Lettvin when I got to MIT. I remember wandering about Warren's lab at the Research Lab of Electronics with my friend, Stu Kauffman, who was then working with McCulloch: https://en.wikipedia.org/wiki/Stuart_Kauffman. I met Stu at Dartmouth, where I began my work in neural networks as a freshman in 1957. We have remained close friends to the present time. Jerry and his wife, Maggie, also looked after me and invited me to dinner parties at their home. Maggie became quite a famous person in her own right and was a major role model for exercise, health, and women in general: https://en.wikipedia.org/wiki/Maggie_Lettvin I will never forget the generosity and kindness of these incredibly talented people. My Magnum Opus includes discussions of Bohr, Einstein, and McCulloch, among other great scientists: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best, Steve ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Tuesday, January 17, 2023 1:35 PM To: Michael Arbib Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Wow. Chills down spine, in a good way. I did not know that and look forward to reading! On Jan 17, 2023, at 10:32, Michael Arbib wrote: Now that Cybernetics has been brought into the conversation, and since I may be the only person who was both a PhD student of Norbert Wiener (for a while) and an RA for Warren McCulloch, I take the liberty of drawing attention to a memoir I wrote: Arbib, M. A. (2018). From cybernetics to brain theory, and more: A memoir. Cognitive Systems Research, 50, 83-145. 
A preprint is available on ResearchGate; just enter "Arbib ResearchGate Memoir" in your browser. There are ideas in there whose solution I still await. ************************************ From: Connectionists On Behalf Of Stephen José Hanson Sent: Tuesday, January 17, 2023 5:13 AM To: Sean Manion ; Gary Marcus Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Sean, What a wonderful find! I believe this is most likely a precursor to the Macy meetings 1946-1953, which were run by McCulloch. It included the wonderful list from Wiener of the areas that "have obtained a degree of intimacy". These meetings became called by the attendees -- CYBERNETICS -- and of course are the precursor to AI and Neural Networks, computational neuroscience, etc. https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html thanks for sharing! Steve On 1/17/23 00:28, Sean Manion wrote: Thank you all for a great discussion, and of course Jürgen for your work on the annotated history that has kicked it off. For reasons tangential to all of this, I have been recently reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked two-day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." I thought some might find this of interest. Cheers! Sean On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus wrote: Hi, Juergen, Thanks for your reply. Restricting your title to "modern" AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISENSE, through talking with you and Bas Steunebrink, there's quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). 
Likewise, there is plenty of eg symbolic planning in modern navigation systems, most robots etc; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc., and AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. In the end, it's really a question about balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but just give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there! As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of a Turing machine. True, but doesn't carry you far enough in most real world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed. Which is maybe the one place where you (eg https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (eg https://openreview.net/forum?id=BZ5a1r-kVsf), and I (eg https://arxiv.org/abs/2002.06177) are most in agreement. Best, Gary On Jan 15, 2023, at 23:04, Schmidhuber Juergen wrote: Thanks for these thoughts, Gary! 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI) which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). 
Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. 2. Here are a few relevant paragraphs from the intro: "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. 
This would be in line with topics of a 1956 conference at Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" Juergen On 14. Jan 2023, at 15:04, Gary Marcus wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented. 
But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for AI as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec. 
6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://urldefense.proofpoint.com/v2/url?u=https-3A__twitter.com_SchmidhuberAI_status_1606333832956973060-3Fcxt-3DHHwWiMC8gYiH7MosAAAA&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=nWCXLKazOjmixYrJVR0CMlR12PasGbAd8bsS6VZ10bk&e= Jürgen On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: Dear Juergen, You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it were not part of AI anymore (suggested by your title). 
Best, Andreas -------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert https://urldefense.proofpoint.com/v2/url?u=http-3A__web.tecnico.ulisboa.pt_andreas.wichert_&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=h5Zy9Hk2IoWPt7me1mLhcYHEuJ55mmNOAppZKcivxAk&e= - https://urldefense.proofpoint.com/v2/url?u=https-3A__www.amazon.com_author_andreaswichert&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=w1RtYvs8dwtfvlTkHqP_P-74ITvUW2IiHLSai7br25U&e= Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_2212.11279&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=6E5_tonSfNtoMPw1fvFOm8UFm7tDVH7un_kbogNG_1w&e= https://urldefense.proofpoint.com/v2/url?u=https-3A__people.idsia.ch_-7Ejuergen_deep-2Dlearning-2Dhistory.html&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=oGn-OID5YOewbgo3j_HjFjI3I2N3hx-w0hoIfLR_JJsn8q5UZDYAl5HOHPY-87N5&s=XPnftI8leeqoElbWQIApFNQ2L4gDcrGy_eiJv2ZPYYk&e= This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José 
Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From suashdeb at gmail.com Wed Jan 18 05:09:11 2023 From: suashdeb at gmail.com (Suash Deb) Date: Wed, 18 Jan 2023 15:39:11 +0530 Subject: Connectionists: ISMSI 2023 in virtual mode Message-ID: Dear esteemed colleagues, Warmest greetings. Hope all is well. This is to share with you that on account of the current spurt of Coronavirus cases in China and its neighborhood, and considering the fact that Malaysia is not very far from China, the Core committee of IICCI recently met and unanimously decided to hold ISMSI 2023 in full-fledged virtual mode. Albeit disappointing once more, I hope you will continue to extend support and submit your precious research findings for possible presentations at 2023 7th ISMSI. As in the previous years, besides getting your (accepted and registered) manuscripts published in online proceedings of ACM (ICPS), there will be scope for publication of the extended versions of a few conference papers in NCAA, a Springer Publication (SCIE Indexed). For more information, please visit the conference website http://www.ismsi.org/index.html I will look forward to receiving (if not already received) your manuscripts in the coming days. With kind regards, Suash Deb General Chair, ISMSI 2023 
URL: From pierre-yves.oudeyer at inria.fr Wed Jan 18 04:23:17 2023 From: pierre-yves.oudeyer at inria.fr (Pierre-Yves Oudeyer) Date: Wed, 18 Jan 2023 10:23:17 +0100 Subject: Connectionists: Call for Abstracts: Curiosity, Creativity and Complexity, Columbia University, May 23-25, 2023 References: Message-ID: <9E1C5712-3E6A-400B-8BFC-4F0B64D2E4FA@inria.fr> Abstract submission is open for an interdisciplinary conference on Curiosity, Creativity and Complexity at Columbia University in New York, USA on May 23-25, 2023. The conference covers Neuroscience and Psychology (neural mechanisms of cognitive control, exploration, decision-making, information demand, memory and creativity), Computer Science (artificial intelligence of curiosity and intrinsic motivation) and Economics (decision making and information demand). For more information and a speakers list, please see: https://zuckermaninstitute.columbia.edu/ccc-event The conference will include poster sessions showcasing ongoing research. Eligible students and trainees can apply for travel awards. Please use this form to contribute an abstract and apply for a travel award. Deadline: February 15, 2023. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avellido at cs.upc.edu Wed Jan 18 03:50:42 2023 From: avellido at cs.upc.edu (Alfredo Vellido) Date: Wed, 18 Jan 2023 09:50:42 +0100 Subject: Connectionists: [CFP] Special Session IJCNN 2023 - The Coming of Age of Explainable AI (XAI) and ML Message-ID: <88881ffb-adee-636d-b354-73ee3f6f467d@cs.upc.edu> Apologies for cross-posting =================== 1st CALL FOR PAPERS =================== IEEE IJCNN 2023 Special Session on The Coming of Age of Explainable AI (XAI) and Machine Learning June 18-23, 2023, Queensland, Australia www.cs.upc.edu/~avellido/research/conferences/IJCNN2023-XAIcomingofage.html https://2023.ijcnn.org/authors/paper-submission Aims & Scope ------------ Much of current research on Machine Learning (ML) is dominated by methods of the Deep Learning family. The more complex their architectures, the more difficult the interpretation or explanation of how and why a particular network prediction is obtained, or the elucidation of which components of the complex system contributed essentially to the obtained decision. This raises concerns about the interpretability and non-transparency of complex models, especially in high-stakes application areas such as healthcare, national security, industry or public governance, to name a few, in which decision making processes may affect citizens. This is, for instance, made especially relevant by rapid developments in the field of autonomous systems, from cars that drive themselves to partner robots and robotic drones. 
DARPA (Defense Advanced Research Projects Agency), a research agency of the US Department of Defense, was the first to start a research program on Explainable AI (https://www.darpa.mil/program/explainable-artificial-intelligence) with the goal "to create a suite of machine learning techniques that (1) Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and (2) Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners." Research on Explainable AI (XAI) is now supported worldwide by a variety of public institutions and legal regulations, such as the European Union's General Data Protection Regulation (GDPR) and the forthcoming Artificial Intelligence Act. Similar concerns about transparency and interpretability are being raised by governments and organizations worldwide. The lack of transparency (interpretability and explainability) of many ML approaches in the light of regulations may end up limiting ML to niche applications, and poses a significant risk of costly mistakes that cannot be mitigated without a sound understanding of the flow of information in the model. For this special session, we invite papers that address many of the challenges of XAI in the context of ML models and algorithms. We are interested in papers on efficient and innovative algorithmic approaches to XAI and their actual applications in all areas. This session also aims to explore the performance-versus-explanation trade-off space for high-stakes applications of ML in light of all types of AI regulation. Comprehensive survey papers on existing technologies for XAI are also welcome. We aim to bring together researchers from different fields to discuss key issues related to the research and applications of XAI methods and to share their experiences of solving problems in high-stakes applications in all domains. 
Topics that are of interest to this session include but are not limited to:
- New neural network architectures and algorithms for XAI
- Interpretability by design
- Rule extraction algorithms for deep neural networks
- Augmentations of AI methods to increase interpretability and transparency
- Innovative applications of XAI
- Verification of AI performance
- Regulation-compliant XAI methods
- Explanation-generation methods for high-stakes applications
- Stakeholder-specific XAI methods for high-stakes applications
- XAI methods auditing in specific domains
- Human-in-the-loop ML: bridging the gap between data scientists and end-users
- XAI through Data Visualization
- Interpretable ML pipelines
- Query Interfaces for DL
- Active and Transfer learning with transparency
- Relevance and Metric Learning
- Deep Neural Reasoning
- Interfaces with Rule-Based Reasoning, Fuzzy Logic and Natural Language Processing
Important Dates --------------- Paper submission: January 31, 2023 (likely to be extended) Paper decision notification: March 31, 2023 Session Chairs -------------- Qi Chen, Victoria University of Wellington, New Zealand José M Juárez, Universidad de Murcia, Spain Paulo Lisboa, Liverpool John Moores University, U.K. Asim Roy, Arizona State University, U.S.A. Alfredo Vellido, Universitat Politècnica de Catalunya, Spain From malini.vinita.samarasinghe at ini.rub.de Wed Jan 18 05:42:52 2023 From: malini.vinita.samarasinghe at ini.rub.de (Vinita Samarasinghe) Date: Wed, 18 Jan 2023 11:42:52 +0100 Subject: Connectionists: Update: Women in Memory Research Message-ID: <1c985f0e-6de0-87f7-818d-1dc527188dcc@ini.rub.de> An updated version of our WiMR 2023 program: The event below is for women who are interested in pursuing a PhD in memory research in neuroscience or philosophy. Please be kind enough to pass on this message to interested students. 
------------------------------------------------------------------------ Women in Memory Research 2023 The research unit "Constructing scenarios of the past" seeks to promote women in memory research. The program WiMR 2023 is made possible through a grant from the German Research Foundation (DFG) and collaboration with the Ruhr University Bochum. Come and learn what an academic career looks like and discover its advantages. During your week at the Ruhr University Bochum (RUB) you will participate in GEM 2023, where you will hear about the latest research in generative episodic memory, be given the opportunity to present your research and meet with female scientists from the field. Before and after GEM you'll be introduced to support structures, funding measures and the FOR 2812 labs. Our participating senior scientists include: Ali Boyle, London School of Economics; Pernille Hemmer, Rutgers University; Peggy St. Jacques, University of Alberta; Kristina Liefke, Ruhr University Bochum; Maria Wimber, University of Glasgow. Sounds exciting? *Who can apply:* Women: masters students in their final year of study and recently graduated masters students who are looking into an academic career in the area of memory research/neuroscience. Applicants must have excellent grades and be able to communicate in English. Selection of participants is competitive. We only have 12 spots! *How to apply:* Send your application including a one-page letter of motivation, a current CV, masters transcripts and a letter of recommendation from one of your professors. Your application should be sent, as a single PDF document, to Vinita Samarasinghe @ for2812+gem at rub.de by March 15, 2023 with the subject line "WiMR - application". If you need child care or any other support please note this in your application. Applications will be evaluated and ranked after the deadline has passed and notifications will be sent out by 31.03.2023. If you accept, you must register for GEM 2023. 
*What to expect from us:* We will provide bed and breakfast (single occupancy), cover travel costs (restrictions apply) and provide some meals. The program is offered in English. *What we expect from you:* That you are present for all of the program and that you present your current research in the form of a poster at GEM 2023.
*Program:*
12.06.2023 AM: Welcome to Bochum; PM: GEM 2023
13.06.2023 GEM 2023
14.06.2023 AM: GEM 2023; PM: Fireside chat with senior memory researchers
15.06.2023 AM: Lab visits; PM: Funding and support structures
16.06.2023 Close-up meetings (optional)
*Questions?* Visit our website at https://for2812.rub.de/wimr2023 for more information. Please feel free to get in touch with Vinita Samarasinghe (for2812 at rub.de) if you have any questions. -- Vinita Samarasinghe M.Sc., M.A. Science Manager Arbeitsgruppe Computational Neuroscience Institut für Neuroinformatik Ruhr-Universität Bochum, NB 3/73 Postfachnummer 110 Universitätstr. 150 44801 Bochum Tel: +49 (0) 234 32 27996 Email: samarasinghe at ini.rub.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonizhong at msn.com Wed Jan 18 05:57:43 2023 From: jonizhong at msn.com (Joni Zhong) Date: Wed, 18 Jan 2023 10:57:43 +0000 Subject: Connectionists: [CFP] IJCNN 2023 Special Session - Human-in-the-loop Algorithms and System Designs Message-ID: IJCNN 2023 Special Session - Human-in-the-loop Algorithms and System Designs Submission link: https://edas.info/N30081 Submission Deadline: Jan 31, 2023 Aims The term human-in-the-loop (HIL) can be used in different ways in different research fields. Human-in-the-loop learning algorithms refer to algorithms that include human feedback into the training loop of the machine learning models to improve the quality of training and to augment the functions of the model. 
In hardware applications, human-in-the-loop learning devices or robots refer to the transfer of skills from the users to the robots, where a human's skills can be actively learnt by the robots' model to enhance its intelligence or skill level in an active and autonomous way. In system design and implementation processes, "human-in-the-loop" refers to the active integration of human inputs with different physiological inputs, such as BCI. The "human-in-the-loop" methodologies in these different research scopes differ, but their central approach can stay unified, and interdisciplinary research and development can be foreseen. The special session aims to reduce the gaps between the robotic systems' techniques and settings and the users' practical needs through the users' inputs and optimization. Main Topics: Themes of interest to this special session include, but are not limited to:
- HIL in Human-robot interaction
- HIL in Human-robot collaboration
- HIL in Assistive Technologies
- Human-guided Reinforcement Learning
- Interactive Reinforcement Learning
- Active Learning and Continuous Learning
- Human-centric Design for Assistive Technologies
- Interpretable Machine Learning
- Human Factors in Robots and Devices
- Conversation Systems between Human and Robots
- Ergonomics
- etc.
Important dates: Paper Submission Deadline January 31, 2023 Paper acceptance notification date March 31, 2023 Conference June 18-23, 2023 Submission Guidelines: Organisers: Junpei Zhong, The Hong Kong Polytechnic University Ding Ding, Southeast University Francisco Cruz, New South Wales University Nicolás Navarro-Guerrero, German Research Centre for Artificial Intelligence Contact: Dr. Joni Zhong, joni.zhong at polyu.edu.hk Please submit your paper at the following link https://edas.info/N30081 and choose the track "Special Session: Human-in-the-loop Algorithms and System Designs". Please follow the standard submission guidelines of IJCNN 2023. 
Authors of selected papers will also be invited to submit an extended and improved version to special issues published in MDPI Applied Sciences (ISSN 2076-3417) or Robotics (ISSN 2218-6581). -------------- next part -------------- An HTML attachment was scrubbed... URL: From graduateprograms at bccn-berlin.de Wed Jan 18 07:43:47 2023 From: graduateprograms at bccn-berlin.de (Graduate Programs) Date: Wed, 18 Jan 2023 13:43:47 +0100 Subject: Connectionists: Info Day - International Master & PhD in Computational Neuroscience @ BCCN Berlin - 25.01.23 Message-ID: <96772f4c-6010-f47f-1f04-29047530ce72@bccn-berlin.de> Dear Connectionists, Please share the following event information with anyone who may be interested. We at the Bernstein Center for Computational Neuroscience (BCCN) Berlin will hold a hybrid "Info Day" to discuss our International Master & Doctoral Programs in Computational Neuroscience next Wednesday, January 25th, at 3pm (CET). Attendees are welcome to join in person at the BCCN Berlin or virtually via Zoom. The event will consist of talks by: * Prof. Dr. Klaus Obermayer (head of the programs) * Lisa Velenosi (teaching coordinator) * Current master and doctoral students Attendees will also have the opportunity to meet & discuss with current students and ask any questions they may have. See our website for a more detailed schedule and registration link. Best regards and happy new year, Lisa Velenosi -- Lisa Velenosi Teaching Coordinator of the SFB1315 & BCCN Berlin Humboldt-Universität zu Berlin Philippstraße 13, Haus 6; 10115 Berlin; Germany Tel: +49 (0)30 2093 6773 Mondays - Thursdays -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nemanja at temple.edu Wed Jan 18 09:09:26 2023 From: nemanja at temple.edu (Nemanja Djuric) Date: Wed, 18 Jan 2023 14:09:26 +0000 Subject: Connectionists: CfP: The 5th Workshop on "Precognition: Seeing through the Future" @ CVPR 2023 Message-ID: Call for Workshop Papers The 5th Workshop on "Precognition: Seeing through the Future" in conjunction with The 36th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023) Vancouver, June 18th-22nd, 2023 https://sites.google.com/view/ieeecvf-cvpr2023-precognition ================= Despite its potential and relevance for real-world applications, visual forecasting or precognition has not been in the focus of new theoretical studies and practical applications as much as detection and recognition problems. Through the organization of this workshop we aim to facilitate further discussion and interest within the research community regarding this nascent topic. The workshop will discuss recent approaches and research trends not only in anticipating human behavior from videos, but also precognition in multiple other visual applications, such as: medical imaging, health-care, human face aging prediction, early event prediction, autonomous driving forecasting, and so on. In addition, this workshop will give an opportunity for the community in both academia and industry to meet and discuss future work and research directions. It will bring together researchers from different fields and viewpoints to discuss existing major research problems and identify opportunities in further research directions in both research topics and industrial applications. This is the fifth Precognition workshop organized at CVPR. It follows very successful workshops organized since 2019, which featured talks from researchers across a number of industries, insightful presentations, and large attendance. 
For full programs, slides, posters, and other resources, please visit the websites of earlier Precognition workshops, linked at the workshop website. ================= Topics: The workshop focuses on several important aspects of visual forecasting. The topics of interest for this workshop include, but are not limited to: - Early event prediction - Activity and trajectory forecasting - Multi-agent forecasting - Human behavior and pose prediction - Human face aging prediction - Predicting frames and features in videos and other sensors in autonomous driving - Traffic congestion anomaly prediction - Automated Covid-19 prediction in medical imaging - Visual DeepFake prediction - Short- and long-term prediction and diagnoses in medical imaging - Prediction of agricultural parameters from satellite imagery - Databases, evaluation and benchmarking in precognition ================= Submission Instructions: All submitted work will be assessed based on its novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. For each accepted submission, at least one author must attend the workshop and present the paper. There are two ways to contribute submissions to the workshop: - Extended abstract submissions are single-blind peer-reviewed, and author names and affiliations should be listed. Extended abstract submissions are limited to a total of four pages. Extended abstracts of already published works can also be submitted. Accepted abstracts will be presented at the poster session, and will not be included in the printed proceedings of the workshop. - Full paper submissions are double-blind peer-reviewed. The submissions are limited to eight pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed (additional information about formatting and style files is available at the website).
Accepted papers will be presented at the poster session, with selected papers also being presented in an oral session. All accepted papers will be published by CVPR in the workshop proceedings. Submission website: https://cmt3.research.microsoft.com/PRECOGNITION2023 ================= Important Deadlines: Submission : March 19th, 2023 Decisions : April 3rd, 2023 Camera-ready : April 8th, 2023 Workshop : June 18th, 2023 (subject to change by the CVPR organizers) ================= Program Committee Chairs: - Dr. Khoa Luu (University of Arkansas) - Dr. Kris Kitani (Carnegie Mellon University) - Dr. Hien Van Nguyen (University of Houston) - Dr. Nemanja Djuric (Aurora Innovation) - Dr. Utsav Prabhu (Google) For further questions, please contact a member of the organizing committee at precognition.organizers at gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stevensequeira92 at hotmail.com Wed Jan 18 11:20:33 2023 From: stevensequeira92 at hotmail.com (steven gouveia) Date: Wed, 18 Jan 2023 16:20:33 +0000 Subject: Connectionists: Philosophy & Neuroscience | Survey In-Reply-To: References: Message-ID: Philosophy & Neuroscience | Survey Within the scope of the Center for Philosophical and Humanistic Studies (CEFH) of the Portuguese Catholic University (Braga, Portugal), we would like to request your collaboration by answering this survey, whose objective is to evaluate how philosophy and neuroscience can cooperate. This study was approved by the ethics committee of the center and was designed based on the Declaration of Helsinki. All reported data will be treated jointly, anonymously and confidentially. No data that identify the participants will be requested.
So, if you are over 18 years old, if you work on topics related to Philosophy of Mind, Cognitive Science, Neuroscience (empirical and theoretical), Psychology, and related fields, and if you graduated in any of those fields (Master, PhD, etc.), we appreciate your collaboration. Any questions that arise can be addressed to the principal investigator by this email stevensequeira92 @ gmail.com The Survey can be found here: https://forms.gle/46yQ5uCE9gSZoMjn9 Many thanks for your valuable contribution. Steven S. Gouveia Ph.D. (University of Minho) Researcher of the CEFH (Portuguese Catholic University) https://stevensgouveia.weebly.com (Books, papers, talks, etc) -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Wed Jan 18 09:28:41 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Wed, 18 Jan 2023 14:28:41 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: Thanks to all for these great comments on cybernetics! In fact, as mentioned in the survey, "what's now called deep learning is conceptually closer to the old field of cybernetics than to what's been called AI since 1956 (e.g., expert systems and logic programming)." https://arxiv.org/abs/2212.11279 The survey also mentions the Macy conferences and Norbert Wiener's encounter with Leonardo Torres y Quevedo at the 1951 AI conference in Paris: "In 1914, the Spaniard Leonardo Torres y Quevedo became the 20th century's first AI pioneer when he built the first working chess end game player (back then chess was considered as an activity restricted to the realms of intelligent creatures). The machine was still considered impressive decades later when another AI pioneer, Norbert Wiener [WI48], played against it at the 1951 Paris AI conference 'on calculating machines and human thought', now often viewed as the first conference on AI [AI51][BRO21][BRU4]."
Jürgen > On 18 Jan 2023, at 3:35 AM, Grossberg, Stephen wrote: > > Dear Gary et al., > > Your reminiscence about McCarthy and LISP reminds me of a story about BASIC and computer time-sharing. The latter were both introduced by two of my math professors at Dartmouth, John Kemeny and Tom Kurtz, a few years after I was an undergraduate student there: https://en.wikipedia.org/wiki/John_G._Kemeny > > Kemeny had a profound influence on my life in science. Just after I made some of my first discoveries about how to model mind and brain, I took his course in the philosophy of science as a sophomore. Until that point, essentially all of my science courses were taught by the book, with no hint of the passions and meandering pathways that often led to discoveries. > > I felt that my head exploded with ideas when I made my first discoveries, but I had no idea how to "do" science. His course was incredibly liberating and instructive for me. > > Kemeny was an eloquent lecturer who made mathematics live. He believed that good mathematics students should go into the social sciences. He put his money where his mouth is by writing, with another of my math professors, J. Laurie Snell, the book Mathematical Models in the Social Sciences, which is still in print today: https://www.amazon.com/Mathematical-Models-Social-Sciences-Press/dp/0262610302 > > Kemeny was one of my most important mentors who encouraged my early work. I became Dartmouth's first joint major in mathematics and psychology with his full support. > > Returning to a theme of my earlier email, Kemeny was Einstein's last assistant at Princeton before being hired as a full professor at Dartmouth at age 27 and becoming chairman of the mathematics department a couple of years later. > > Another set of lucky circumstances that helped me to find my own path.
> > Not surprisingly, I also discuss Kemeny in my Magnum Opus > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > Best again, > > Steve > From: Gary Marcus > Sent: Tuesday, January 17, 2023 6:31 PM > To: Grossberg, Stephen > Cc: Michael Arbib ; connectionists at cs.cmu.edu > Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning > > I am just old enough to appreciate all this and young enough not to have met any of them. My late father took Fortran (on punch cards) from John McCarthy at MIT, and McCarthy (who I did later meet) left in the middle of the semester, having just invented LISP. > >> On Jan 17, 2023, at 15:01, Grossberg, Stephen wrote: >> >> Dear Gary, Michael, and other Connectionists colleagues, >> >> Michael's reminiscences remind me of related memories. >> >> I was traveling from Stanford in 1964 to MIT to study with Norbert Wiener when I heard that Wiener had just died. I was a PhD student at Stanford then and was hoping to find a more congenial place at MIT to continue my work on neural networks. >> >> Instead, I got my PhD at the Rockefeller Institute for Medical Research in New York, where Abe Pais, who worked for Niels Bohr and was a colleague of Albert Einstein at the Institute for Advanced Study, was on the faculty. Abe regaled us with rather colorful stories about both Bohr and Einstein. >> >> Later, when I was an assistant professor at MIT, I got to know Norman Levinson and his wife, Fagi, very well. Norman was Wiener's most famous student: >> https://en.wikipedia.org/wiki/Norman_Levinson >> Norman and Fagi told me lots of stories about Wiener's many famous idiosyncrasies. >> >> While I was a young professor at MIT, Norman and Fagi generously treated me as their scientific godson and took me under their wing both at their home and at many scientific conferences. After Norman's death, Fagi became our daughter's god grandmother and shared many happy family celebrations with us.
>> >> For an interesting and heartwarming story about Fagi's impact on the mathematics community to which she was connected through Norman, and thus Wiener, see: >> https://news.mit.edu/2010/obit-levinson >> >> I also had the good luck to meet Warren McCulloch >> https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch >> and Jerry Lettvin https://en.wikipedia.org/wiki/Jerome_Lettvin >> when I got to MIT. >> >> I remember wandering about Warren's lab at the Research Lab of Electronics with my friend, Stu Kauffman, who was then working with McCulloch: >> https://en.wikipedia.org/wiki/Stuart_Kauffman. >> I met Stu at Dartmouth, where I began my work in neural networks as a freshman in 1957. We have remained close friends to the present time. >> >> Jerry and his wife, Maggie, also looked after me and invited me to dinner parties at their home. Maggie became quite a famous person in her own right and was a major role model for exercise, health, and women in general: >> https://en.wikipedia.org/wiki/Maggie_Lettvin >> >> I will never forget the generosity and kindness of these incredibly talented people. >> >> My Magnum Opus includes discussions of Bohr, Einstein, and McCulloch, among other great scientists: >> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 >> >> Best, >> >> Steve >> >> From: Connectionists on behalf of Gary Marcus >> Sent: Tuesday, January 17, 2023 1:35 PM >> To: Michael Arbib >> Cc: connectionists at cs.cmu.edu >> Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning >> >> Wow. Chills down spine, in a good way. I did not know that and look forward to reading! >> >>> On Jan 17, 2023, at 10:32, Michael Arbib wrote: >>> >>> Now that Cybernetics has been brought into the conversation, and since I may be the only person who was both a PhD student of Norbert Wiener (for a while) and an RA for Warren McCulloch, I take the liberty of drawing attention to a memoir I wrote: >>> >>> Arbib, M. A. (2018).
From cybernetics to brain theory, and more: A memoir. Cognitive Systems Research, 50, 83-145. >>> >>> A preprint is available on ResearchGate: just enter "Arbib ResearchGate Memoir" in your browser. There are ideas in there whose solution I still await... >>> >>> >>> ************************************ >>> >>> From: Connectionists On Behalf Of Stephen José Hanson >>> Sent: Tuesday, January 17, 2023 5:13 AM >>> To: Sean Manion ; Gary Marcus >>> Cc: connectionists at cs.cmu.edu >>> Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning >>> >>> Sean, >>> What a wonderful find! >>> I believe this is most likely a precursor to the Macy meetings 1946-1953, which were run by McCulloch. >>> It included the wonderful list from Wiener of the areas that "have obtained a degree of intimacy". >>> These meetings came to be called CYBERNETICS by the attendees, and of course are the precursor to AI and neural networks, computational neuroscience, etc. >>> https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html >>> thanks for sharing! >>> Steve >>> On 1/17/23 00:28, Sean Manion wrote: >>> Thank you all for a great discussion, and of course Jürgen for your work on the annotated history that has kicked it off. >>> >>> For reasons tangential to all of this, I have been recently reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked 2-day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." >>> >>> I thought some might find this of interest. >>> >>> Cheers! >>> >>> Sean > > >> On Jan 16, 2023, at 05:19, Stephen José Hanson wrote: >> >> Gary, >> >> "vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here" >> >> >> As usual you are distorting the point here.
What Juergen is chronicling is about WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point. >> >> But AI from the 70s frankly failed, including NN. Expert systems, the apex application... couldn't even suggest decent wines. >> Language understanding, planning etc.. please point us to the working systems you are talking about. These things are broken. Why would we try to blend broken systems with a classifier that has human to superhuman classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (see comments on this topic with Yann and me in the recent WIAS series!) >> >> Frankly, the last gasp of AI in the 70s was the US gov 5th generation response in Austin Texas--MCC (launched in the early 80s).. after shaking down 100s of companies 1M$ a year.. and plowing all the monies into reasoning, planning and NL KRep.. oh yeah.. Doug Lenat.. who predicted every year we went down there that CYC would become intelligent in 2001! maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year.. there was nothing. What survived of CYC, and NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. >> >> So here we are where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs.. ignoring LLMs for the moment.. which I believe are likely to crash in on themselves. >> >> Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.
>> >> Best, >> >> Steve >> On 1/14/23 07:04, Gary Marcus wrote: >>> >>> >>> On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus wrote: >>> Hi, Juergen, >>> >>> Thanks for your reply. Restricting your title to "modern" AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISENSE, through talking with you and Bas Steunebrink, there's quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). >>> >>> Likewise, there is plenty of eg symbolic planning in modern navigation systems, most robots etc; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc., and AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. >>> >>> In the end, it's really a question about balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but just give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there! >>> >>> As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of a Turing machine. True, but it doesn't carry you far enough in most real world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. >>> >>> My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed.
>>> >>> Which is maybe the one place where you (eg https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (eg https://openreview.net/forum?id=BZ5a1r-kVsf), and I (eg https://arxiv.org/abs/2002.06177) are most in agreement. >>> >>> Best, >>> Gary >>> >>> >>> On Jan 15, 2023, at 23:04, Schmidhuber Juergen wrote: >>> >>> Thanks for these thoughts, Gary! >>> >>> 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI) which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). >>> >>> Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: >>> >>> "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. >>> >>> Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1].
>>> >>> When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." >>> >>> Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. >>> >>> 2. Here a few relevant paragraphs from the intro: >>> >>> "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. >>> >>> Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. >>> >>> A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications.
>>> >>> A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD'], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." >>> >>> 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" >>> >>> Juergen >>> >>> >>> >>> On 14. Jan 2023, at 15:04, Gary Marcus wrote: >>> >>> Dear Juergen, >>> >>> You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. >>> >>> I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. >>> >>> Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. >>> >>> My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction.
>>> Historians looking back on this paper will see too little about the roots of that trend documented here. >>> >>> Gary >>> >>> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >>> >>> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >>> >>> Sec. 1: Introduction >>> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >>> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >>> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >>> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) >>> Sec. 6: 1965: First Deep Learning >>> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >>> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >>> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >>> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >>> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >>> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >>> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >>> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >>> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >>> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >>> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >>> Sec. 18: It's the Hardware, Stupid! >>> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >>> Sec.
20: The Broader Historic Context from Big Bang to Far Future >>> Sec. 21: Acknowledgments >>> Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >>> >>> Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA >>> >>> Jürgen >>> >>> >>> >>> >>> >>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>> Dear Juergen, >>> You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning". >>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). >>> Best, >>> Andreas >>> -------------------------------------------------------------------------------------------------- >>> Prof.
Auxiliar Andreas Wichert >>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>> - >>> https://www.amazon.com/author/andreaswichert >>> Instituto Superior Técnico - Universidade de Lisboa >>> Campus IST-Taguspark >>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>> 2744-016 Porto Salvo, Portugal >>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>> https://arxiv.org/abs/2212.11279 >>> https://people.idsia.ch/~juergen/deep-learning-history.html >>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>> Happy New Year! >>> Jürgen >>> >>> -- >>> Stephen José
Hanson >>> Professor, Psychology Department >>> Director, RUBIC (Rutgers University Brain Imaging Center) >>> Member, Executive Committee, RUCCS From vito.trianni at istc.cnr.it Wed Jan 18 10:20:21 2023 From: vito.trianni at istc.cnr.it (Vito Trianni) Date: Wed, 18 Jan 2023 16:20:21 +0100 Subject: Connectionists: [jobs] research position in data visualisation, user experience and human-computer interaction: few days left to apply Message-ID: <1C2E5608-A5D8-44C9-9CFC-F1B006BFCA26@istc.cnr.it> There are still a couple of days left to apply to a researcher position in data visualisation, user experience and human-computer interaction: DEADLINE FOR APPLICATIONS IS JANUARY THE 20TH, 2023 The research position is open within the context of the European project HACID (http://www.hacid-project.eu). The position is for full-time research at the Institute of Cognitive Sciences and Technologies (ISTC) of the Italian National Research Council in Rome. We are seeking expertise in Data Visualisation, UX and Human-Computer Interaction. The contract is for two years, with possibility of renewal. It is also possible to link the research to a PhD program (e.g., at Sapienza University, Computer Engineering). See the notice of selection with instructions for the application here: - http://www.hacid-project.eu/jobs/index.html - https://www.istc.cnr.it/en/content/assegno-di-ricerca-3562022-progettazione-e-validazione-sperimentale-di-strumenti ------------------------------------- WHAT WE DO ------------------------------------- HACID aims at the study of hybrid human-artificial collective intelligence for open-ended domains like medical diagnostics and decision support for climate change adaptation policies. Thanks to well designed knowledge graphs and information aggregation algorithms, the project aims at improving decision making in situations where knowledge fragmentation and information overload can strike. 
Related to the open position, the research program consists of the development of a dashboard for visualization and interaction with knowledge graphs in the context of both medical diagnostics and climate services. The dashboard will have to enable the selection by domain experts of concepts relevant to the case study, presenting these concepts in a dynamic and usable way. The activities include the experimental validation of the dashboard to test its usability and consistency, in interaction with the HACID team for the different case studies. ---------------------------------------- WHO WE'RE LOOKING FOR ---------------------------------------- The following skills are requested: - Knowledge of User Experience Design (UXD), human-machine interaction (HMI) and user interface design (UI); - Experience in (i) data visualisation and (ii) design and implementation of interactive data dashboards, using open-source libraries and frameworks (e.g. D3, Kibana, Tableau, etc.); - Experience in the use of programming languages: Python and/or Java; ------------------------------------- HOW TO APPLY ------------------------------------- Applications must be sent via email. Deadline for applications is January the 20th, 2023. Italian applicants must submit their applications through a certified email (Posta Elettronica Certificata -
PEC) to the address protocollo.istc at pec.cnr.it Foreign applicants must submit their applications through standard email to the address protocollo.roma at istc.cnr.it For all details about the application process, please check the notice of selection available at the following links: http://www.hacid-project.eu/jobs/index.html https://www.istc.cnr.it/en/content/assegno-di-ricerca-3562022-progettazione-e-validazione-sperimentale-di-strumenti For any inquiry, feel free to contact Vito Trianni: vito.trianni at istc.cnr.it ------------------------------------- WHO WE ARE ------------------------------------- The Institute for Cognitive Sciences and Technologies (ISTC) is an interdisciplinary institute, featuring integration among laboratories and across research topics. ISTC laboratories share objectives aimed at the analysis, representation, simulation, interpretation and design of cognitive and social processes in humans, animals and machines, spanning the physiological, phenomen ======================================================================== Vito Trianni, Ph.D. vito.trianni@(no_spam)istc.cnr.it ISTC-CNR http://www.istc.cnr.it/people/vito-trianni Via San Martino della Battaglia 44 Tel: +39 06 44595277 00185 Roma Fax: +39 06 44595243 Italy ======================================================================== From steve at bu.edu Wed Jan 18 12:58:43 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Wed, 18 Jan 2023 17:58:43 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: Dear Gary, I feel very lucky to have met Sue Carey and Henry and Lila Gleitman over the years. I met Sue as a colleague at MIT and as a fellow member of the Society of Experimental Psychologists. As I recall, I met both Henry and Lila when I lectured at the University of Pennsylvania over the years. 
Some of my own recent work is about a topic that interested both of them: How children learn language meanings through their interpersonal interactions with caregivers in real time. Best, Steve ________________________________ From: Gary Marcus Sent: Tuesday, January 17, 2023 8:16 PM To: Grossberg, Stephen Cc: Michael Arbib ; connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Luckily, by the time I came on to the scene, more doors were open to women. Two of my greatest mentors, Susan Carey and Lila Gleitman, wrote powerful intellectual memoirs of their own. Neither worked specifically on AI, but the lessons I learned from both have been central to my own thinking, as I have transitioned from studying natural intelligence into studying artificial intelligence. Both memoirs are delightfully written and well worth reading: Becoming a Cognitive Scientist (annualreviews.org) Recollecting What We Once Knew: My Life in Psycholinguistics (annualreviews.org) On Jan 17, 2023, at 4:35 PM, Grossberg, Stephen wrote: Dear Gary et al., Your reminiscence about McCarthy and LISP reminds me of a story about BASIC and computer time-sharing. The latter were both introduced by two of my math professors at Dartmouth, John Kemeny and Tom Kurtz, a few years after I was an undergraduate student there: https://en.wikipedia.org/wiki/John_G._Kemeny Kemeny had a profound influence on my life in science. Just after I made some of my first discoveries about how to model mind and brain, I took his course in the philosophy of science as a sophomore. Until that point, essentially all of my science courses were taught by the book, with no hint of the passions and meandering pathways that often led to discoveries. I felt that my head exploded with ideas when I made my first discoveries, but I had no idea how to "do" science.
His course was incredibly liberating and instructive for me. Kemeny was an eloquent lecturer who made mathematics live. He believed that good mathematics students should go into the social sciences. He put his money where his mouth is by writing, with another of my math professors, J. Laurie Snell, the book Mathematical Models in the Social Sciences, which is still in print today: https://www.amazon.com/Mathematical-Models-Social-Sciences-Press/dp/0262610302 Kemeny was one of my most important mentors who encouraged my early work. I became Dartmouth's first joint major in mathematics and psychology with his full support. Returning to a theme of my earlier email, Kemeny was Einstein's last assistant at Princeton before being hired as a full professor at Dartmouth at age 27 and becoming chairman of the mathematics department a couple of years later. Another set of lucky circumstances that helped me to find my own path. Not surprisingly, I also discuss Kemeny in my Magnum Opus https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve ________________________________ From: Gary Marcus Sent: Tuesday, January 17, 2023 6:31 PM To: Grossberg, Stephen Cc: Michael Arbib ; connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning I am just old enough to appreciate all this and young enough not to have met any of them. My late father took Fortran (on punch cards) from John McCarthy at MIT, and McCarthy (who I did later meet) left in the middle of the semester, having just invented LISP. On Jan 17, 2023, at 15:01, Grossberg, Stephen wrote: ? Dear Gary, Michael, and other Connectionists colleagues, Michael's reminiscences remind me of related memories. I was traveling from Stanford in 1964 to MIT to study with Norbert Wiener when I heard that Wiener had just died. I was a PhD student at Stanford then and was hoping to find a more congenial place at MIT to continue my work on neural networks. 
Instead, I got my PhD at the Rockefeller Institute for Medical Research in New York, where Abe Pais, who worked for Niels Bohr and was a colleague of Albert Einstein at the Institute for Advanced Study, was on the faculty. Abe regaled us with rather colorful stories about both Bohr and Einstein. Later, when I was an assistant professor at MIT, I got to know Norman Levinson and his wife, Fagi, very well. Norman was Wiener's most famous student: https://en.wikipedia.org/wiki/Norman_Levinson Norman and Fagi told me lots of stories about Wiener's many famous idiosyncrasies. While I was a young professor at MIT, Norman and Fagi generously treated me as their scientific godson and took me under their wing both at their home and at many scientific conferences. After Norman's death, Fagi became our daughter's god grandmother and shared many happy family celebrations with us. For an interesting and heartwarming story about Fagi's impact on the mathematics community to which she was connected through Norman, and thus Wiener, see: https://news.mit.edu/2010/obit-levinson I also had the good luck to meet Warren McCulloch https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch and Jerry Lettvin https://en.wikipedia.org/wiki/Jerome_Lettvin when I got to MIT. I remember wandering about Warren's lab at the Research Lab of Electronics with my friend, Stu Kauffman, who was then working with McCulloch: https://en.wikipedia.org/wiki/Stuart_Kauffman. I met Stu at Dartmouth, where I began my work in neural networks as a freshman in 1957. We have remained close friends to the present time. Jerry and his wife, Maggie, also looked after me and invited me to dinner parties at their home. Maggie became quite a famous person in her own right and was a major role model for exercise, health, and women in general: https://en.wikipedia.org/wiki/Maggie_Lettvin I will never forget the generosity and kindness of these incredibly talented people. 
My Magnum Opus includes discussions of Bohr, Einstein, and McCulloch, among other great scientists: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best, Steve ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Tuesday, January 17, 2023 1:35 PM To: Michael Arbib Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Wow. Chills down spine, in a good way. I did not know that and look forward to reading! On Jan 17, 2023, at 10:32, Michael Arbib wrote: Now that Cybernetics has been brought into the conversation, and since I may be the only person who was both a PhD student of Norbert Wiener (for a while) and an RA for Warren McCulloch, I take the liberty of drawing attention to a memoir I wrote: Arbib, M. A. (2018). From cybernetics to brain theory, and more: A memoir. Cognitive Systems Research, 50, 83-145. A preprint is available on ResearchGate: just enter "Arbib ResearchGate Memoir" in your browser. There are ideas in there whose solution I still await... ************************************ From: Connectionists On Behalf Of Stephen José Hanson Sent: Tuesday, January 17, 2023 5:13 AM To: Sean Manion ; Gary Marcus Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Sean, What a wonderful find! I believe this is most likely a precursor to the Macy meetings of 1946-1953, which were run by McCulloch. It included the wonderful list from Wiener of the areas that "have obtained a degree of intimacy". These meetings came to be called CYBERNETICS by the attendees, and are of course the precursor to AI, neural networks, computational neuroscience, etc. https://press.uchicago.edu/ucp/books/book/distributed/C/bo23348570.html thanks for sharing!
Steve On 1/17/23 00:28, Sean Manion wrote: Thank you all for a great discussion, and of course Jürgen for your work on the annotated history that has kicked it off. For reasons tangential to all of this, I have been recently reviewing some of the MIT Archives and found this invitation from Wiener, von Neumann, and Aiken to several individuals for a sometimes historically overlooked two-day meeting that was held at Princeton in January 1945 on a "...field of effort, which as yet is not even named." I thought some might find this of interest. Cheers! Sean On Mon, Jan 16, 2023 at 11:51 PM Gary Marcus wrote: Hi, Juergen, Thanks for your reply. Restricting your title to "modern" AI as you did is a start, but I think still not enough. For example, from what I understand about NNAISENSE, through talking with you and Bas Steunebrink, there's quite a bit of hybrid AI in what you are doing at your company, not well represented in the review. The related open-access book certainly draws heavily on both traditions (https://link.springer.com/book/10.1007/978-3-031-08020-3). Likewise, there is plenty of e.g. symbolic planning in modern navigation systems, most robots, etc.; still plenty of use of symbolic trees in game playing; lots of people still use taxonomies and inheritance, etc., and AFAIK nobody has built a trustworthy virtual assistant, even in a narrow domain, with only deep learning. And so on. In the end, it's really a question about balance, which is what I think Andrzej was getting at; you go miles deep on the history of deep learning, which I respect, but just give relatively superficial pointers (not none!) outside that tradition. Definitely better, to be sure, in having at least a few pointers than in having none, and I would agree that the future is uncertain. I think you strike the right note there!
As an aside, saying that everything can be formulated as RL is maybe no more helpful than saying that everything we (currently) know how to do can be formulated in terms of a Turing machine. True, but that doesn't carry you far enough in most real-world applications. I personally see RL as part of an answer, but most useful in (and here we might partly agree) the context of systems with rich internal models of the world. My own view is that we will get to more reliable AI only once the field more fully embraces the project of articulating how such models work and how they are developed. Which is maybe the one place where you (e.g. https://arxiv.org/pdf/1803.10122.pdf), Yann LeCun (e.g. https://openreview.net/forum?id=BZ5a1r-kVsf), and I (e.g. https://arxiv.org/abs/2002.06177) are most in agreement. Best, Gary On Jan 15, 2023, at 23:04, Schmidhuber Juergen wrote: Thanks for these thoughts, Gary! 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI), which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html." On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL.
In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. 2. Here are a few relevant paragraphs from the intro: "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b].
A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" Juergen On 14. Jan 2023, at 15:04, Gary Marcus wrote: Dear Juergen, You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI.
This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here. Gary On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: Sec. 1: Introduction Sec. 2: 1676: The Chain Rule For Backward Credit Assignment Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) Sec. 6: 1965: First Deep Learning Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs Sec.
15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher Sec. 18: It's the Hardware, Stupid! Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science Sec. 20: The Broader Historic Context from Big Bang to Far Future Sec. 21: Acknowledgments Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA Jürgen On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: Dear Juergen, You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning." Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning, etc., as if it is not part of AI anymore (suggested by your title). Best, Andreas -------------------------------------------------------------------------------------------------- Prof.
Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): https://arxiv.org/abs/2212.11279 https://people.idsia.ch/~juergen/deep-learning-history.html This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year! Jürgen -- Stephen José
Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS From tiako at ieee.org Wed Jan 18 14:34:43 2023 From: tiako at ieee.org (Pierre F. Tiako) Date: Wed, 18 Jan 2023 13:34:43 -0600 Subject: Connectionists: CFP: CAIS 2023 Automated and Intelligent Systems, Oct 2-5, Oklahoma City, USA & Online Message-ID: [Apologies for cross-posting] --- Call for Abstracts and Papers ------------- 2023 OkIP International Conference on Automated and Intelligent Systems (CAIS) Downtown Oklahoma City, OK, USA & Online October 2-5, 2023 https://eventutor.com/e/CAIS003 Submission Deadline: May 23, 2023 Extended versions of the best papers will be considered for publication in the inaugural volume of the International Journal of Automated and Intelligent Systems.
*** Contribution Types (Two-Column IEEE Format Style): - Full Paper: Accomplished research results (6 pages) - Short Paper: Work in progress/fresh developments (3 pages) - Extended Abstract/Poster/Journal First: Displayed/Oral presented (1 page) *** Areas: >> AI, Machine Learning (ML), and Applications - General ML | Active/Supervised Learning - Clustering/Unsupervised Learning - Online Learning | Learning to rank - Reinforcement Learning | Deep Learning(DL) - Semi/Self Supervised Learning - Time Series Analysis | Prediction/Forecasting - DL Architectures/Generative-Models - Deep Reinforcement Learning - Computational Learning Theory - Bandit/Game/Statistical-Learning Theory - Optimization Methods and Techniques - Convex/Non-Convex Optimization - Matrix/Tensor Methods - Stochastic/Online Optimizations - Non-Smooth/Composite Optimization - Probabilistic Inference | Graphical Models - Bayesian/Monte-Carlo Methods - Trustworthy Machine Learning - ML Accountability/Causality - ML Fairness/Privacy/Robustness - Healthcare/DNA/Transportation - Digital-Economy | Ecommerce Security - Sustainability | Energy | Green Technology - Language | Image - Recommendation Systems >> Agent-based, Automated, and Distributed Supports - Multi-Agent Systems | Software Agents - Decentralized/Distributed Intelligence - Context-Aware Computing - Group Decision Support Systems - Intelligent Structures/Networks - Design/Automation Approaches - Sensor Networks Architectures - Complex Manufacturing Processes - Analytical Models | Path Planning - Multistage Assembly Line - Automated Inspection >> Intelligent Systems and Applications - Medical Nanorobotics | - Sensory/Embedded Systems - Embedded Systems | Digital Manufacturing - Optimization/Evolutionary Algorithms - Bioinformatics/Biotechnology Applications - Computer-Vision Applications - Sensor-Networks Applications - Intelligent Design | Fuzzy Systems - Soft/Ubiquitous Computing - Pervasive/Wearable Computing - Intelligence Manufacturing | 
Microsatellite - Cyber-physical Systems | Kinematics >> Knowledge-based and Control Supports - Expert/Complex Systems - Decision-Support Systems - Intelligent Control/Supervision Systems - Knowledge Engineering - Neural Networks | Structural Optimization - Intelligent Teleoperation - Intelligent Shopfloor - Collision Avoidance | Fault Diagnosis - Object Detection and Tracking | Path Planning - Position/Quality/Motion Control - Predictive Control - Preventive Maintenance | Defect Detection >> Robotics and Vehicles - Unmanned Vehicles/Robots - Autonomous Vehicles/Robots - Human-Robot Interfaces - Human-Robot Interactions - Intelligent Telerobotics | Service Robots - Robotic Manipulators/Arms - Robotic Applications - Self-Driving Vehicles | Cloud-based Driving - Vehicular ad hoc Networks |Traffic Detection - Vehicle-to-Vehicle Communication - Vehicle Platooning | Steering Systems - Vehicle dynamics | Traffic Computing >> Important Dates: - Abstract or Paper Submission: May 23, 2023 - Author Notification: June 30, 2023 - Camera Ready Paper Submission, Registration: July 16, 2023 - Conference Date: October 2-5, 2023 >> Technical Program Committee https://eventutor.com/event/33/page/134-committee Please feel free to contact us for any inquiries at: info at okipublishing.com -------- Pierre Tiako General Chair -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bisant at umbc.edu Wed Jan 18 16:48:17 2023 From: bisant at umbc.edu (David B) Date: Wed, 18 Jan 2023 16:48:17 -0500 Subject: Connectionists: Deadline: FLAIRS-36, Neural Networks and Data Mining Special Track, Clearwater Beach, Fl, May 14-17, 2023 In-Reply-To: References: Message-ID: FLAIRS-36 Clearwater Beach, Florida, USA, May 14-17, 2023 Neural Networks and Data Mining Special Track Abstract submission deadline: February 6, 2023 Paper submission deadline: February 13, 2023 Notifications: March 13, 2023 Camera ready version due: April 10, 2023 The Florida Artificial Intelligence Research Symposium (FLAIRS) is a medium-sized interdisciplinary AI conference which is noted for its double-blind reviewing, free tutorials, and beautiful venues. This special track will be devoted to neural networks and data mining with the aim of presenting new and important contributions in these areas. Papers and contributions are encouraged for any work related to neural networks, data mining, or the intersection thereof. Topics of interest may include (but are in no way limited to): 1. Applications such as Pattern Recognition, Control and Process Monitoring, Biomedical Applications, Robotics, Text Mining, Diagnostic Problems, Telecommunications, Power Systems, Signal Processing; Intelligence analysis, medical and health applications, text, video, and multi-media mining, E-commerce and web data, financial data analysis, cyber security, remote sensing, earth sciences, bioinformatics, and astronomy. 2. Algorithms such as new developments in Back Propagation, SVM, Deep Learning, Ensemble Methods, Kernel Approaches; hybrid approaches such as Neural Networks/Genetic Algorithms, Neural Network/Expert Systems, Causal Nets trained with Backpropagation, and Neural Network/Fuzzy Logic. 3. 
Modeling algorithms such as hidden Markov models, decision trees, neural networks, statistical methods, or probabilistic methods; case studies in areas of application, or over different algorithms and approaches. 4. Graph modeling, pattern discovery, and anomaly detection. 5. Feature extraction and selection. 6. Post-processing techniques such as visualization, summarization, or trending. 7. Preprocessing and data reduction. 8. Knowledge engineering or warehousing. Papers dealing with Cloud-based unstructured data or tool suites, such as TensorFlow, PyTorch, Mahout, or Apache Spark, are also encouraged. Submission Guidelines Submitted papers must be original, and not submitted concurrently to a journal or another conference. Double-blind reviewing will be provided, so submitted papers must use fake author names and affiliations. Papers must follow the FLAIRS template guidelines ( https://www.flairs-36.info/call-for-papers) and be submitted as a PDF through the EasyChair conference system. (Do NOT use a fake name for your EasyChair login; your EasyChair account information is hidden from reviewers.) FLAIRS will not accept any paper which, at the time of submission, is under review for or has already been published or accepted for publication in a journal or another conference. Authors are also required not to submit their papers elsewhere during FLAIRS's review period. These restrictions apply only to journals and conferences, not to workshops and similar specialized presentations with a limited audience and without archival proceedings. Authors will be required to confirm that their submissions conform to these requirements at the time of submission. Conference Proceedings Papers will be refereed and all accepted papers will appear in the conference proceedings. 
Program committee: Track Co-Chairs: David Bisant, Central Security Svcs, bisant at umbc.edu Steven Gutstein, Army Research Laboratory, s.m.gutstein at gmail.com William Eberle, Tennessee Tech University, weberle at tntech.edu Program Committee Members: Martin Atzmueller (University of Kassel, Germany) Juan Banda (Montana State University, USA) Lori Bogren (Verizon Business Solutions, USA) Sergei Dolenko (D.V.Skobeltsyn Inst of Nuc Phys, M.V.Lomonosov Moscow St Univ) Yoni Fridman (Verizon Business Solutions, USA) Olac Fuentes (University of Texas at El Paso, USA) Hyoil Han (Marshall University, USA) Sheikh Rabiul Islam (University of Hartford, USA) Mike James (iProgrammer, United Kingdom) Jacek Kukluk (Dana-Farber/Harvard Cancer Center, USA) Katrina Kutchko (Verizon Business Solutions, USA) Prabin Lamichhane (Mastercard, USA) Lenin Mookiah (eBay, USA) Ramesh Paudel (George Washington University, USA) Roberto Santana (University of Basque Country, Spain) Hujun Yin (University of Manchester, UK) Further Information Questions regarding the Data Mining Special Track should be addressed to the track co-chairs: David Bisant, Central Security Svcs, bisant at umbc.edu Steven Gutstein, Army Research Laboratory, s.m.gutstein at gmail.com William Eberle, Tennessee Tech University, weberle at tntech.edu Invited Speakers To be announced Conference Web Sites Paper submission site: https://www.flairs-36.info/submissions FLAIRS-36 conference web page: https://www.flairs-36.info/ Florida AI Research Society (FLAIRS): https://www.flairs.com From ASIM.ROY at asu.edu Wed Jan 18 22:50:10 2023 From: ASIM.ROY at asu.edu (Asim Roy) Date: Thu, 19 Jan 2023 03:50:10 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: Message-ID: Responding to Stephen José
Hanson's comment: "Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless." Steve, even a simple CNN that recognizes just Cats and Dogs sends an output signal that is symbolic. So, a basic NN classifier is a neuro-symbolic system. Such systems are there already, contrary to your statement above. One of the next steps is to extract more symbolic information from these systems. That's what you find in Hinton's GLOM approach: finding parts of objects. Once you find those parts, which essentially correspond to certain abstractions (e.g. a leg, an eye of a cat), you can then transmit that information in symbolic form to whoever is receiving the whole object information. Beyond GLOM, there are many other methods in computer vision that are trying to do the same thing, that is, extract part information. I can send you references if you want. So neuro-symbolic stuff is already happening, contrary to what you are saying. Gary's reference to the IBM conference indicates this is an emerging topic in AI. And, of course, you also have the GLOM-type work. In addition, DARPA's conception of Explainable AI (Explainable Artificial Intelligence (darpa.mil)) was also neuro-symbolic, as shown in the figure below. The idea is to identify objects based on their parts. So, the figure below says that it's a cat because it has fur, whiskers, and claws plus an unlabeled visual feature. Below are also two figures from Doran et al. 2017 that explain how a neuro-symbolic system would work. The second figure, Fig. 5, shows how a reasoning system might work. And that's very similar to how we reason in our heads. Hope this helps. The way forward is indeed neuro-symbolic, as Gary said, and it's happening now, with perhaps Hinton's GLOM showing the way. Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean?
A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.

Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy
Asim Roy | iSearch (asu.edu)

[Three embedded images: a timeline figure and two figures from Doran et al. 2017]

======================================================================================================================================================

Gary,

"vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here"

As usual you are distorting the point here. What Juergen is chronicling is WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point. But AI from the 70s frankly failed, including NN. Expert systems, the apex application, couldn't even suggest decent wines. Language understanding, planning etc.: please point us to what working systems you are talking about? These things are broken. Why would we try to blend broken systems with a classifier that has human to superhuman classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck with that! (See comments on this topic with Yann and me in the recent WIAS series!)

Frankly, the last gasp of AI in the 70s was the US government's fifth-generation response in Austin, Texas: MCC (launched in the early 80s).. after shaking down 100s of companies for $1M a year and plowing all the monies into reasoning, planning and NL knowledge representation.. oh yeah, Doug Lenat, who predicted every year we went down there that CYC would become intelligent in 2001! Maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year.. there was nothing.
What survived of CYC, and of the NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. So here we are where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs.. ignoring LLMs for the moment.. which I believe are likely to crash in on themselves.

Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.

Best,
Steve

On 1/14/23 07:04, Gary Marcus wrote:

Dear Juergen,

You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do.

My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. Historians looking back on this paper will see too little about the roots of that trend documented here.
Gary

On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote:

Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents:

Sec. 1: Introduction
Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
Sec. 6: 1965: First Deep Learning
Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
Sec. 18: It's the Hardware, Stupid!
Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
Sec. 20: The Broader Historic Context from Big Bang to Far Future
Sec. 21: Acknowledgments
Sec.
22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])

Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060

Jürgen

On 13. Jan 2023, at 14:40, Andrzej Wichert wrote:

Dear Juergen,

You make the same mistake as was made back in the early 1970s: you identify deep learning with modern AI. The paper should instead be called "Annotated History of Deep Learning". Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (as suggested by your title).

Best,
Andreas

--------------------------------------------------------------------------------------------------
Prof.
Auxiliar Andreas Wichert
http://web.tecnico.ulisboa.pt/andreas.wichert/
https://www.amazon.com/author/andreaswichert
Instituto Superior Técnico - Universidade de Lisboa
Campus IST-Taguspark
Avenida Professor Cavaco Silva
2744-016 Porto Salvo, Portugal
Phone: +351 214233231

On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote:

Machine learning is the science of credit assignment.
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):

https://arxiv.org/abs/2212.11279
https://people.idsia.ch/~juergen/deep-learning-history.html

This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. Happy New Year!

Jürgen

--
Stephen José Hanson
Professor, Psychology Department
Director, RUBIC (Rutgers University Brain Imaging Center)
Member, Executive Committee, RUCCS

-------------- next part --------------
An HTML attachment was scrubbed...
From gualtiero.volpe at unige.it Thu Jan 19 03:29:26 2023
From: gualtiero.volpe at unige.it (Gualtiero Volpe)
Date: Thu, 19 Jan 2023 09:29:26 +0100
Subject: Connectionists: ICMI 2023 Call for Workshops
Message-ID: <02fb01d92be0$24bb8cc0$6e32a640$@unige.it>

25th ACM International Conference on Multimodal Interaction (9-13 October 2023)
ICMI 2023 Call for Workshops: https://icmi.acm.org/2023/call-for-workshops/

The International Conference on Multimodal Interaction (ICMI 2023) will be held in Paris on October 9-13, 2023. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models, and applications.
Examples of recent workshops include:
* Media Analytics for Societal Trends
* International Workshop on Automated Assessment of Pain (AAP)
* Multi-sensorial Approaches to Human-Food Interaction
* Multimodal e-Coaches
* Modeling Cognitive Processes from Multimodal Data
* Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
* Investigating Social Interactions with Artificial Agents
* Insights on Group & Team Dynamics
* Face and Gesture Analysis for Health Informatics
* Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
* Bridging Social Sciences and AI for Understanding Child Behavior

We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include gender, ethnic background, institutions, years of experience, geography, etc.).

The content of accepted workshops is under the control of the workshop organizers. Workshops may be half-day or one-day in duration. Workshop organizers will be expected to manage the workshop content, solicit submissions, be present to moderate the discussion and panels, invite experts in the domain, conduct the reviewing process, and maintain a website for the workshop. Workshop papers will be indexed by the ACM Digital Library in an adjunct proceedings, and a short workshop summary by the organizers will be published in the main conference proceedings.

Submission

Prospective workshop organizers are invited to submit proposals in PDF format (max. 3 pages). Please email proposals to the workshop chairs, Giovanna Varni and Theodora Chaspari (icmi2023-workshop-chairs at acm.org).
The proposal should include the following:
* Workshop title
* List of organizers including affiliation, email address, and short biographies
* Workshop motivation, expected outcomes and impact
* Tentative list of keynote speakers
* Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
* Planned advertisement means, website hosting, and estimated participation
* Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
* Paper submission and acceptance deadlines
* Special space and equipment requests, if any

Important dates
Workshop proposal submission: February 5, 2023
Notification of acceptance: February 17, 2023
Workshop papers due: July 23, 2023
Workshop dates: 9-13 October 2023

From Alon.Korngreen at biu.ac.il Thu Jan 19 05:14:11 2023
From: Alon.Korngreen at biu.ac.il (Alon Korngreen)
Date: Thu, 19 Jan 2023 10:14:11 +0000
Subject: Connectionists: NeuroData INFO session
Message-ID:

Dear Connectionists,

Please share the following event information with anyone who may be interested. The Gonda Multidisciplinary Brain Research Center at Bar-Ilan University will hold a Zoom info session on the newly opened International Erasmus Mundus Joint Master in Brain and Data Science "NeuroData" on Monday, January 23, at 09:00 GMT+2 (Tel-Aviv time). The NeuroData Master (https://www.neurodata-master.org/) is a collaboration between six leading universities in Israel and Europe: Bar-Ilan University in Israel, Instituto Superior Técnico, University of Lisbon, in Portugal, University of Jyväskylä in Finland, University of Padua in Italy, Vrije Universiteit Amsterdam in The Netherlands, and University of Zagreb in Croatia, and is coordinated by Bar-Ilan University. At the event Prof.
Alon Korngreen, the head of the Gonda Multidisciplinary Brain Research Center and program coordinator, will present the program and answer questions from the audience. To receive the Zoom link, register for the event here: https://www.neurodata-master.org/event-details-registration/neurodata-info-meeting-3

Best regards,
Alon Korngreen

Prof. Alon Korngreen
The Mina and Everard Goodman Faculty of Life Sciences
Head, The Gonda Brain Research Center
Bar-Ilan University
Ramat-Gan, 5290002 Israel
Phone: 00972-3-5318224
Business WhatsApp: https://wa.me/message/R3WHVUATR3QBI1

"Progress is impossible without change, and those who cannot change their minds cannot change anything." George Bernard Shaw

From vinxlemons at gmail.com Thu Jan 19 09:07:05 2023
From: vinxlemons at gmail.com (Vincenzo Lomonaco)
Date: Thu, 19 Jan 2023 15:07:05 +0100
Subject: Connectionists: [CfP] 2nd Conference on Lifelong Learning Agents (CoLLAs 2023)
Message-ID:

Dear all,

We are pleased to announce the *2nd edition* of the *International Conference on Lifelong Learning Agents* (*CoLLAs*), which will take place in Montreal, Canada in August 2023! More information about the *Call for Papers* can be found below. We are looking forward to receiving your manuscript and meeting you in Montreal!

CoLLAs official website: https://lifelong-ml.cc/

--------------------

Call for Papers (Conference track)

Machine learning has relied heavily on a traditional view of the learning process, whereby observations are assumed to be i.i.d., typically given as a dataset split into a training and validation set with the explicit focus to maximize performance on the latter. While this view proved to be immensely beneficial for the field, it represents just a fraction of the realistic scenarios of interest.
Over the past few decades, increased attention has been given to alternative paradigms that help explore different aspects of the learning process, from lifelong learning, continual learning, and meta-learning to transfer learning, multi-task learning and out-of-distribution generalization, to name just a few. The Conference on Lifelong Learning Agents (CoLLAs) focuses on these learning paradigms, which aim to move beyond the traditional, single-distribution machine learning setting and to allow learning to be more robust, more efficient in terms of compute and data, more versatile in terms of being able to handle multiple problems, and well-defined and well-behaved in more realistic non-stationary settings compared to the traditional view.

We invite submissions to the 2nd edition of CoLLAs that describe applications, new theories, methodology or new insights into existing algorithms and/or benchmarks. Accepted papers will be published in the Proceedings of Machine Learning Research (PMLR). Topics of submission may include, but are not limited to, Reinforcement Learning, Supervised Learning or Unsupervised Learning approaches for:
- Lifelong Learning / Continual Learning
- Meta-Learning
- Multi-Task Learning
- Transfer Learning
- Curriculum Learning
- Domain Adaptation
- Few-Shot Learning
- Out-Of-Distribution Generalization
- Online Learning
- Active Learning

The conference also welcomes submissions at the intersection of machine learning and neuroscience and applications of the topics of interest to real-world problems. Submitted papers will be evaluated based on their novelty, technical quality, and potential impact. Experimental methods and results are expected to be reproducible, and authors are strongly encouraged to make code and data available. We also encourage submissions of proof-of-concept research that puts forward novel ideas and demonstrates potential, as well as in-depth analysis of existing methods and concepts.
Key Dates

The planned dates are as follows:
- Abstract deadline: March 03, 2023, 11:59 pm (Anywhere on Earth, AoE)
- Submission deadline: March 06, 2023, 11:59 pm (AoE)
- Reviews released: April 10, 2023
- Author rebuttal due: April 18, 2023
- Notification of decision: May 10, 2023
- Resubmission deadline: June 10, 2023
- Decision of resubmissions: July 01, 2023
- Main Conference: August 2023

Review Process

Papers will be selected via a double-blind peer-review process. All accepted papers will be presented at the conference as contributed talks or as posters and will be published in the proceedings (PMLR). Additionally, there is a non-archival workshop track, which will also go through the review process. The review process will be hosted on OpenReview, with submissions and reviews being private until a decision is made. Reviews and discussions of the accepted papers will be made available after acceptance.

In addition to accept/reject, a paper can be marked for conditional acceptance. In this case, the authors have a fixed amount of time to incorporate a clear list of demands from the Program Chairs, and if these updates are present the paper will automatically be accepted. Rejected papers that initially received a conditional acceptance (where authors decided not to add the required modifications) can be presented in the workshop track if the authors so choose. The authors will still be able to present a poster on their work as part of this track. This system is aimed at producing a fairer treatment of borderline papers and at saving the time spent going through the entire reviewing process from scratch when resubmitting to a future edition of the conference or a different relevant conference.

During the rebuttal period, authors are allowed to update their papers once. All updates should be clearly marked using the macros provided in the LaTeX style files. However, reviewers are not required to read the new version.
Physical and Virtual Attendance

CoLLAs 2023 will be mainly an in-person event in Montreal, Canada. We believe that in-person interactions are important to grow the community. However, we recognize that participating in person might not be possible for everyone for various reasons, including health concerns around COVID. Therefore, participants will have the option to participate virtually in the conference and present their work virtually by providing a prerecorded video. However, this will not be a fully hybrid event, and not all elements will be available to virtual participants. More information about the organization soon.

Formatting and Supplementary Material

Submissions should have a recommended length of 9 single-column CoLLAs-formatted pages, plus unlimited pages for references and appendices. We enforce a maximum length of 10 pages, where the 10th page can be used if it helps with the formatting of the paper. The camera-ready version will have a strict 10-page limit, so please do not use the entire 10th page during the initial submission. The appendices should be within the same PDF file as the main publication; however, an additional zip file can be submitted that can include multiple files of different formats (e.g., videos or code). Note that reviewers are under no obligation to examine the appendix and the supplementary material.

Please format the paper using the official LaTeX style files that can be found on Overleaf here or on GitHub here. We do not support submissions in formats other than LaTeX. Please do not modify the layout given by the style file. For any questions, you can reach us at: con... at lifelong-ml.cc. Submissions will be through OpenReview.

Abstract and Title

Authors should include a full title for their paper, as well as a complete abstract, by the abstract submission deadline.
Submission titles should not be modified after the abstract submission deadline, and abstracts should not be modified by more than 50% after the abstract submission deadline. Submissions violating these rules may be deleted after the paper submission deadline without review. The author list can be updated until the paper submission deadline. Only the ordering of the authors can be changed when submitting the camera-ready version of the paper.

Anonymization Requirements

All submissions must be anonymized and may not contain any information with the intention or consequence of violating the double-blind reviewing policy, including (but not limited to) citing previous works of the authors or sharing links in a way that can reveal any author's identity or institution, or actions that reveal the identities of the authors to potential reviewers. Authors are allowed to post versions of their work on preprint servers such as arXiv. They are also allowed to give talks to restricted audiences on the work(s) submitted to CoLLAs during the review period. If you have posted or plan to post a non-anonymized version of your paper online before the CoLLAs decisions are made, the submitted version must not refer to the non-anonymized version. CoLLAs strongly discourages advertising the preprint on social media or in the press while under submission to CoLLAs. Under no circumstances should your work be explicitly identified as a CoLLAs submission at any time during the review period, i.e., from the time you submit the abstract to the communication of the accept/reject decisions.

Dual Submissions

It is not appropriate to submit papers that are identical (or substantially similar) to versions that have been previously published, accepted for publication, or submitted in parallel to other conferences or journals. Such submissions violate our dual submission policy, and the organizers have the right to reject such submissions or to remove them from the proceedings.
Code of Conduct and Ethics

All participants in CoLLAs, including authors, will be required to adhere to the CoLLAs code of conduct and ethics. Plagiarism in any form is strictly forbidden, as is any unethical use of privileged information by reviewers, such as sharing it or using it for any purpose other than the reviewing process. All suspected unethical behaviours will be investigated, and individuals found violating the rules may face sanctions. Further details about the CoLLAs code of conduct, ethics and reproducibility can be found on the website.

-----------

*Vincenzo Lomonaco*, University of Pisa, Italy
on behalf of the *2023 CoLLAs Organizing Committee*

From laure.berti at ird.fr Thu Jan 19 07:20:34 2023
From: laure.berti at ird.fr (Laure Berti)
Date: Thu, 19 Jan 2023 13:20:34 +0100
Subject: Connectionists: Postdoc offer at Aix-Marseille University Luminy in Learning from Multimodal Healthcare Data
Message-ID: <09c66ccd-224b-4a84-22c5-945fb059be6b@ird.fr>

We are recruiting a postdoctoral research associate for 12 months at Aix-Marseille University, Marseille, LIS Lab (CNRS UMR 7020), Luminy site. This project will develop new algorithms for efficient deep learning using multimodal healthcare data. The start date is flexible but would ideally be as soon as possible in early 2023. The salary depends on the candidate's seniority and varies from 1,980 to 2,320 euros net per month.

A CV, list of publications and inquiries about this position can be sent to laure.berti @ ird.fr and noel.novelli @ lis-lab.fr

-------------- next part --------------
An HTML attachment was scrubbed...
From sebastian.otte at uni-tuebingen.de Thu Jan 19 10:16:38 2023
From: sebastian.otte at uni-tuebingen.de (Sebastian Otte)
Date: Thu, 19 Jan 2023 16:16:38 +0100
Subject: Connectionists: ICANN 2023 - Call for Papers, Special Sessions & Workshops
Message-ID: <10239bb5-cb33-2cb8-6fcb-7dec35feecbd@uni-tuebingen.de>

32nd International Conference on Artificial Neural Networks
ICANN 2023
https://e-nns.org/icann2023
Dates: 26th to 29th of September 2023
=======================================================================

The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). In 2023 the School of Engineering of Democritus University of Thrace, Greece, will organize ICANN 2023. It will be held at the Astoria Capsis Hotel in Heraklion, Crete, Greece, from the 26th to the 29th of September 2023. The conference will be organized as a HYBRID event.

CONFERENCE TOPICS

ICANN 2023 is a conference featuring tracks in Brain-Inspired Computing, Machine Learning, and Artificial Neural Networks, with strong cross-disciplinary interactions and applications.
A non-exhaustive list of topics includes:

Machine Learning: Deep Learning, Neural Network Theory, Neural Network Models, Graphical Models, Bayesian Networks, Kernel Methods, Generative Models, Information Theoretic Learning, Reinforcement Learning, Relational Learning, Dynamical Models, Recurrent Networks, Ethics of AI

Brain Inspired Computing: Cognitive Models, Computational Neuroscience, Self-Organization, Neural Control and Planning, Hybrid Neural-Symbolic Architectures, Neural Dynamics, Cognitive Neuroscience, Brain Informatics, Perception and Action, Spiking Neural Networks

Neural Applications for: Bioinformatics, Biomedicine, Intelligent Robotics, Neurorobotics, Language Processing, Speech Processing, Image Processing, Sensor Fusion, Pattern Recognition, Data Mining, Neural Agents, Brain-Computer Interaction, Neuromorphic Computing and Edge AI, Evolutionary Neural Networks

CALL FOR CONTRIBUTED SCIENTIFIC COMMUNICATIONS

All scientific communications presented at ICANN 2023 will be reviewed and scientifically evaluated by a panel of experts. The conference will feature two categories of communications:
- oral communications (15'+5')
- poster communications

CALL FOR SPECIAL SESSIONS AND WORKSHOPS

ICANN 2023 organizers cordially invite internationally recognized experts to organize Special Sessions and Workshops within the general scope of the conference. Please submit your Workshop / Special Session proposal by the deadline (see SUBMISSION DEADLINES) to the ICANN 2023 organizing board by email at liliadis AT civil.duth.gr (Prof. Lazaros Iliadis, ICANN 2023 General co-Chair)

CALL FOR PAPERS

Authors willing to present original contributions in either oral or poster category may submit:
- A full paper of maximum 12 pages (including references) to be published in the Springer-Verlag Lecture Notes in Computer Science (LNCS) series with individual DOI.
- An extended abstract of maximum 4 pages to be published in the Springer-Verlag Lecture Notes in Computer Science (LNCS) series, without indexing.

In case the number of requested oral presentations is larger than the available slots, the ICANN scientific committee will select which papers will be reassigned to a poster session. This selection will be based on the coherence of the program and is totally independent of the category of submission.

SUBMISSION DEADLINES
- Special session and workshop proposal submission: Feb. 28, 2023
- Full paper and extended abstract submission: Apr. 9, 2023

BEST PAPER AWARDS
ENNS will sponsor a maximum of four best paper awards. All awards will be presented during the final ceremony.

TRAVEL GRANTS
The European Neural Network Society sponsors a number of Student Travel Grants covering part of the costs for attending ICANN. Details will be provided on the conference website.

ORGANIZATION

General Chairs
Iliadis Lazaros, Democritus University of Thrace, Greece
Plamen Angelov, Lancaster University, UK

Program Chairs
Antonios Papaleonidas, Democritus University of Thrace, Greece
Elias Pimenidis, UWE Bristol, UK
Chrisina Jayne, Teesside University, UK

Honorary Chairs
Stefan Wermter, University of Hamburg, Germany
Vera Kurkova, Czech Academy of Sciences, Czech Republic
Nikola Kasabov, Auckland University of Technology, New Zealand

Organizing Chairs
Antonios Papaleonidas, Democritus University of Thrace, Greece
Anastasios Panagiotis Psathas, Democritus University of Thrace, Greece
George Magoulas, University of London, Birkbeck College, UK
Haralambos Mouratidis, University of Essex, UK

Award Chairs
Stefan Wermter, University of Hamburg, Germany
Chukiong Loo, University of Malaysia, Malaysia

Communication Chairs
Sebastian Otte, University of Tübingen, Germany
Anastasios Panagiotis Psathas, Democritus University of Thrace, Greece

From srodrigues at bcamath.org Thu Jan 19 15:58:49 2023
From: srodrigues at bcamath.org (Serafim Rodrigues)
Date: Thu,
19 Jan 2023 21:58:49 +0100 Subject: Connectionists: Ikerbasque Research Fellows 2023 call Message-ID: Dear All *ikerbasque, * the Basque Foundation for Science, would like to inform you that they have launched *a new international call* to reinforce research and scientific careers in the Basque Country: *20 positions for experienced postdoctoral researchers* *Ikerbasque Research Fellows* - 5-year contracts; during the last year the researcher can be assessed to obtain a permanent contract - An initial moving allowance of up to 4,000 € is provided for international moves. - A start-up funding of 10,000 € is provided for the initial research costs. - PhD degree between Jan 2012-Dec 2020 - A support letter from BCAM is mandatory - Applications from women are especially welcome - Deadline: March 15th, 2023 at 13:00 CET The procedure for the pre-selection of the BCAM (Basque Center for Applied Mathematics) candidates will be as follows: - *Potential candidates interested in Mathematical, Computational and Experimental Neuroscience* - Please send email to Serafim Rodrigues - Please send before *Feb 14th 2023, * the following information: - *Updated CV* to Serafim Rodrigues and recruitment at bcamath.org - *Two reference letters*. The reference letters shall be submitted directly by the referees to recruitment at bcamath.org - The BCAM Selection Committee will pre-select the candidates and will have an *interview with them* around *Feb 15th - 28th, 2023*. - After that, the BCAM Selection Committee will select the BCAM candidacies in this call by the end of February and will prepare the BCAM support letter for the selected candidates.
For further information, please visit *calls.ikerbasque.net*, or contact us: recruitment at bcamath.org -- Serafim Rodrigues Ikerbasque Research Professor Mathematical, Computational and Experimental Neuroscience (MCEN) Group Leader *BCAM - *Basque Center for Applied Mathematics Alameda de Mazarredo, 14 E-48009 Bilbao, Basque Country - Spain Tel. +34 946 567 842 srodrigues at bcamath.org | www.bcamath.org/srodrigues | www.ikerbasque.net/serafim-rodrigues Old Mathematicians never die They just "tend to infinity" -Anonymous *(**matematika mugaz bestalde)* From bogdanlapi at gmail.com Thu Jan 19 16:21:56 2023 From: bogdanlapi at gmail.com (Bogdan Ionescu) Date: Thu, 19 Jan 2023 23:21:56 +0200 Subject: Connectionists: Call-for-Papers: ACM TOMM SI on Realistic Synthetic Data: Generation, Learning, Evaluation Message-ID: [Apologies for multiple postings] ACM Transactions on Multimedia Computing, Communications, and Applications Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation Impact Factor 4.094 https://mc.manuscriptcentral.com/tomm Submission deadline: 31 March 2023 *** CALL FOR PAPERS *** [Guest Editors] Bogdan Ionescu, Universitatea Politehnica din Bucuresti, România Ioannis Patras, Queen Mary University of London, UK Henning Muller, University of Applied Sciences Western Switzerland, Switzerland Alberto Del Bimbo, Università degli Studi di Firenze, Italy [Scope] In the current context of Machine Learning (ML) and Deep Learning (DL), data, and especially high-quality data, are central for ensuring proper training of the networks. It is well known that DL models require a large quantity of annotated data to be able to reach their full potential. Annotating content for models is traditionally done by human experts or at least by typical users, e.g., via crowdsourcing.
This is a tedious task that is time-consuming and expensive -- massive resources are required, content has to be curated, and so on. Moreover, there are specific domains where data confidentiality makes this process even more challenging, e.g., the medical domain, where patient data cannot easily be made publicly available. With the advancement of neural generative models such as Generative Adversarial Networks (GANs) or, more recently, diffusion models, a promising way of solving or alleviating the problems associated with the need for domain-specific annotated data is to go toward realistic synthetic data generation. These data are generated by learning specific characteristics of different classes of target data. The advantage is that these networks allow for infinite variations within those classes while producing realistic outcomes, typically hard to distinguish from the real data. These data have no proprietary or confidentiality restrictions and seem a viable solution to generate new datasets or augment existing ones. Existing work shows very promising results for signal generation, image generation, etc. Nevertheless, there are some limitations that need to be overcome so as to advance the field. For instance, how can one control/manipulate the latent codes of GANs, or the diffusion process, so as to produce in the output the desired classes and the desired variations, like real data? In many cases, results are not of high quality and selection must be made by the user, which amounts to manual annotation. Bias may intervene in the generation process due to bias in the input dataset. Are the networks trustworthy? Is the generated content violating data privacy? In some cases one can predict, from a generated image, the actual data source used for training the network. Would it be possible to train the networks to produce new classes and learn causality of the data? How do we objectively assess the quality of the generated data?
These are just a few open research questions. [Topics] In this context, the special issue is seeking innovative algorithms and approaches addressing, but not limited to, the following topics: - Synthetic data for various modalities, e.g., signals, images, volumes, audio, etc. - Controllable generation for learning from synthetic data. - Transfer learning and generalization of models. - Causality in data generation. - Addressing bias, limitations, and trustworthiness in data generation. - Evaluation measures/protocols and benchmarks to assess quality of synthetic content. - Open synthetic datasets and software tools. - Ethical aspects of synthetic data. [Important Dates] - Submission deadline: 31 March 2023 - First-round review decisions: 30 June 2023 - Deadline for revised submissions: 31 July 2023 - Notification of final decisions: 30 September 2023 - Tentative publication: December 2023 [Submission Information] Prospective authors are invited to submit their manuscripts electronically through the ACM TOMM online submission system (see https://mc.manuscriptcentral.com/tomm) while adhering strictly to the journal guidelines (see https://tomm.acm.org/authors.cfm). For the article type, please select the Special Issue denoted SI: Realistic Synthetic Data: Generation, Learning, Evaluation. Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere. If the submission is an extended work of a previously published conference paper, please include the original work and a cover letter describing the new content and results that were added. According to ACM TOMM publication policy, previously published conference papers can be eligible for publication provided that at least 40% new material is included in the journal version. [Contact] For questions and further information, please contact Bogdan Ionescu / bogdan.ionescu at upb.ro.
[Acknowledgement] The Special Issue is endorsed by the AI4Media "A Centre of Excellence delivering next generation AI Research and Training at the service of Media, Society and Democracy" H2020 ICT-48-2020 project https://www.ai4media.eu/. On behalf of the Guest Editors, Bogdan Ionescu https://www.aimultimedialab.ro/ From hugo.o.sousa at inesctec.pt Fri Jan 20 04:25:04 2023 From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa) Date: Fri, 20 Jan 2023 09:25:04 +0000 Subject: Connectionists: Text2Story'23 Deadline Extension Message-ID: <084650fd8d964a799a87d3c00f821bfb@inesctec.pt> *** Apologies for cross-posting *** ++ DEADLINE EXTENSION ++ **************************************************************************** Sixth International Workshop on Narrative Extraction from Texts (Text2Story'23) Held in conjunction with the 45th European Conference on Information Retrieval (ECIR'23) April 2nd, 2023 - Dublin, Ireland Website: https://text2story23.inesctec.pt **************************************************************************** ++ Important Dates ++ - Submission Deadline Extension: January 30th, 2023 - Acceptance Notification Date: March 3rd, 2023 - Camera-ready copies: March 17th, 2023 - Workshop: April 2nd, 2023 ++ Overview ++ Recent years have brought a continuously evolving stream of information, making it unmanageable and time-consuming for an interested reader to track, process, and keep up with all the essential information and the various aspects of a story. Automated narrative extraction from text offers a compelling approach to this problem. It involves identifying the subset of interconnected raw documents, extracting the critical narrative story elements, and representing them in an adequate final form (e.g., timelines) that conveys the key points of the story in an easy-to-understand format.
Although information extraction and natural language processing have made significant progress towards an automatic interpretation of texts, the problem of automated identification and analysis of the different elements of a narrative present in a document (set) still presents significant unsolved challenges. ++ List of Topics ++ In the sixth edition of the Text2Story workshop, we aim to bring to the forefront the challenges involved in understanding the structure of narratives and in incorporating their representation in well-established models, as well as in modern architectures (e.g., transformers) which are now common and form the backbone of almost every IR and NLP application. It is hoped that the workshop will provide a common forum to consolidate the multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task. In this regard, we encourage the submission of high-quality and original submissions covering the following topics: * Narrative Representation Models * Story Evolution and Shift Detection * Temporal Relation Identification * Temporal Reasoning and Ordering of Events * Causal Relation Extraction and Arrangement * Narrative Summarization * Multi-modal Summarization * Automatic Timeline Generation * Storyline Visualization * Comprehension of Generated Narratives and Timelines * Big Data Applied to Narrative Extraction * Personalization and Recommendation of Narratives * User Profiling and User Behavior Modeling * Sentiment and Opinion Detection in Texts * Argumentation Analysis * Bias Detection and Removal in Generated Stories * Ethical and Fair Narrative Generation * Misinformation and Fact Checking * Bots Influence * Narrative-focused Search in Text Collections * Event and Entity Importance Estimation in Narratives * Multilinguality: Multilingual and Cross-lingual Narrative Analysis * Evaluation Methodologies for Narrative Extraction * Resources and Dataset Showcase * Dataset Annotation
for Narrative Generation/Analysis * Applications in Social Media (e.g. narrative generation during a natural disaster) * Language Models and Transfer Learning in Narrative Analysis * Narrative Analysis in Low-resource Languages * Text Simplification ++ Dataset ++ We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) under the scope and purposes of the text2story workshop. tls-covid19 consists of a number of curated topics related to the Covid-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage about the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19. The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33 ++ Submission Guidelines ++ We invite two kinds of submissions: * Full papers (up to 7 pages + references): Original and high-quality unpublished contributions on the theory and practical aspects of the narrative extraction task. Full papers should introduce existing approaches, and describe the methodology and the experiments conducted in detail. Negative-result papers that highlight tested hypotheses that did not get the expected outcome are also welcome.
* Work in progress, demos and dissemination papers (up to 4 pages + references): unpublished short papers describing work in progress; demo and resource papers presenting research/industrial prototypes, datasets or software packages; position papers introducing a new point of view, a research vision or a reasoned opinion on the workshop topics; and dissemination papers describing project ideas, ongoing research lines, case studies or summarized versions of previously published papers in high-quality conferences/journals that are worth sharing with the Text2Story community, but where novelty is not a fundamental issue. Submissions will be peer-reviewed by at least two members of the programme committee. The accepted papers will appear in the proceedings published as CEUR Workshop Proceedings (indexed in Scopus and DBLP) as long as they don't conflict with previous publication rights. ++ Workshop Format ++ Authors of accepted papers will be given 15 minutes for oral presentations. ++ Invited Speakers ++ Structured Summarisation of News at Scale Speaker: Georgiana Ifrim, University College Dublin, Ireland Abstract: Facilitating news consumption at scale is still quite challenging. Some research efforts have focused on coming up with useful structures for facilitating news navigation for humans, but benchmarks and objective evaluation of such structures are not common. One area that has progressed recently is news timeline summarisation. In this talk, we present some of our work on long-range, large-scale news timeline summarisation. Timelines present the most important events of a topic linearly in chronological order and are commonly used by news editors to organise long-ranging topics for news consumers. Tools for automatic timeline summarisation can address the cost of manual effort and the infeasibility of manually covering many topics, over long time periods and massive news corpora.
In this talk, we first compare different high-level approaches to timeline summarisation, identify the modules and features important for this task, and present new state-of-the-art results with a simple new method. We provide several examples of automatic timelines and present both a quantitative and qualitative analysis of these structured news summaries. Most of our tools and datasets are available online on github. Bio: Dr. Georgiana Ifrim is an Associate Professor at the School of Computer Science, UCD, co-lead of the SFI Centre for Research Training in Machine Learning (ML-Labs) and SFI Funded Investigator with the Insight Centre for Data Analytics and VistaMilk SFI Centre. Dr. Ifrim holds a PhD and MSc in Machine Learning, from Max-Planck Institute for Informatics, Germany, and a BSc in Computer Science, from University of Bucharest, Romania. Her research focuses on effective approaches for large-scale sequence learning, time series classification, and text mining. She has published more than 50 peer-reviewed articles in top-ranked international journals and conferences and regularly holds senior positions in the program committees for IJCAI, AAAI, and ECML-PKDD, as well as being a member of the editorial board of the Machine Learning Journal, Springer. Creating and Visualising Semantic Story Maps Speaker: Valentina Bartalesi, CNR-ISTI, Italy Abstract: A narrative is a conceptual basis of collective human understanding. Humans use stories to represent characters' intentions, feelings and the attributes of objects, and events. A widely-held thesis in psychology to justify the centrality of narrative in human life is that humans make sense of reality by structuring events into narratives. Therefore, narratives are central to human activity in cultural, scientific, and social areas. Story maps are computer science realizations of narratives based on maps. 
They are online interactive maps enriched with text, pictures, videos, and other multimedia information, whose aim is to tell a story over a territory. This talk presents a semi-automatic workflow that, using a CRM-based ontology and the Semantic Web technologies, produces semantic narratives in the form of story maps (and timelines as an alternative representation) from textual documents. An expert user first assembles one territory-contextual document containing text and images. Then, automatic processes use natural language processing and Wikidata services to (i) extract entities and geospatial points of interest associated with the territory, (ii) assemble a logically-ordered sequence of events that constitute the narrative, enriched with entities and images, and (iii) openly publish online semantic story maps and an interoperable Linked Open Data-compliant knowledge base for event exploration and inter-story correlation analyses. Once the story maps are published, the users can review them through a user-friendly web tool. Overall, our workflow complies with Open Science directives of open publication and multi-discipline support and is appropriate to convey "information going beyond the map" to scientists and the large public. As demonstrations, the talk will show workflow-produced story maps to represent (i) 23 European rural areas across 16 countries, their value chains and territories, (ii) a Medieval journey, (iii) the history of the legends, biological investigations, and AI-based modelling for habitat discovery of the giant squid Architeuthis dux. Bio: Valentina Bartalesi Lenzi is a researcher at the CNR-ISTI and external professor of Semantic Web in the Computer Science master's degree course at the University of Pisa. She earned her PhD in Information Engineering from the University of Pisa and graduated in Digital Humanities from the University of Pisa. 
Her research fields mainly concern Knowledge Representation, Semantic Web technologies, and the development of formal ontologies for representing textual content and narratives. She has participated in several European and national research projects, including MINGEI, PARTHENOS, E-RIHS PP, and IMAGO. She is the author of over 50 peer-reviewed articles in national and international conferences and scientific journals. ++ Organizing committee ++ Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal) Alípio M. Jorge (INESC TEC; University of Porto, Portugal) Adam Jatowt (University of Innsbruck, Austria) Sumit Bhatia (Media and Data Science Research Lab, Adobe) Marina Litvak (Shamoon Academic College of Engineering, Israel) ++ Proceedings Chair ++ João Paulo Cordeiro (INESC TEC & Universidade da Beira do Interior) Conceição Rocha (INESC TEC) ++ Web and Dissemination Chair ++ Hugo Sousa (INESC TEC & University of Porto) Behrooz Mansouri (Rochester Institute of Technology) ++ Program Committee ++ Álvaro Figueira (INESC TEC & University of Porto) Andreas Spitz (University of Konstanz) Antoine Doucet (Université de La Rochelle) António Horta Branco (University of Lisbon) Arian Pasquali (Faktion AI) Bart Gajderowicz (University of Toronto) Begoña Altuna (Universidad del País Vasco) Brenda Santana (Federal University of Rio Grande do Sul) Bruno Martins (IST & INESC-ID, University of Lisbon) Daniel Loureiro (Cardiff University) Dennis Aumiller (Heidelberg University) Dhruv Gupta (Norwegian University of Science and Technology) Dyaa Albakour (Signal UK) Evelin Amorim (INESC TEC) Henrique Cardoso (INESC TEC & University of Porto) Ismail Altingovde (Middle East Technical University) João Paulo Cordeiro (INESC TEC & University of Beira Interior) Kiran Bandeli (Walmart Inc.) Luca Cagliero (Politecnico di Torino) Ludovic Moncla (INSA Lyon) Marc Finlayson (Florida International University) Marc Spaniol (Université de Caen Normandie) Moreno La Quatra (Politecnico di Torino) Nuno Guimarães (INESC TEC & University of Porto) Pablo Gamallo (University of Santiago de Compostela) Pablo Gervás (Universidad Complutense de Madrid) Paulo Quaresma (Universidade de Évora) Paul Rayson (Lancaster University) Raghav Jain (Indian Institute of Technology, Patna) Ross Purves (University of Zurich) Satya Almasian (Heidelberg University) Sérgio Nunes (INESC TEC & University of Porto) Simra Shahid (Adobe's Media and Data Science Research Lab) Sriharsh Bhyravajjula (University of Washington) Udo Kruschwitz (University of Regensburg) Veysel Kocaman (John Snow Labs & Leiden University) ++ Contacts ++ Website: https://text2story23.inesctec.pt For general inquiries regarding the workshop, reach the organizers at: text2story2023 at easychair.org From georgios.yannakakis at um.edu.mt Fri Jan 20 06:57:00 2023 From: georgios.yannakakis at um.edu.mt (Georgios N Yannakakis) Date: Fri, 20 Jan 2023 12:57:00 +0100 Subject: Connectionists: AI and Games - Postdoc/PhD/Developer posts In-Reply-To: References: Message-ID: If you wish to join our AI research group at the Institute of Digital Games - University of Malta, we have a number of research posts (research associates, PhD students and postdoctoral fellows) open currently! Apply and be part of a research team that builds the next generation of AI algorithms that play, feel and design games. We are looking for excellent candidates with a good grasp of as many of the following areas as possible: deep/shallow learning, affect annotation and modelling, human-computer interaction, computer vision, behaviour cloning, procedural content generation, generative systems. Apply here by *Jan 31, 2023*: https://lnkd.in/dfWpxVbP -- Georgios N. Yannakakis | Professor Director | Editor in Chief, IEEE Trans.
on Games Institute of Digital Games 20 Triq L-Esperanto +356 2340 3510 From benoit.frenay at unamur.be Fri Jan 20 07:40:05 2023 From: benoit.frenay at unamur.be (Benoît Frenay) Date: Fri, 20 Jan 2023 12:40:05 +0000 Subject: Connectionists: ML applied to Sign Language: Special Session at ESANN'23 Message-ID: Call for papers: special session on "Machine Learning Applied to Sign Language" at ESANN 2023 European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023). 4-5 October 2023, Bruges, Belgium. http://www.esann.org DESCRIPTION: Deep learning has led to spectacular advances in many fields dealing with unstructured data such as computer vision, natural language processing, and data generation. Recently, sign languages have drawn the attention of machine learning practitioners, as sign language recognition, translation, and synthesis raise interesting technical challenges and have a clear societal impact. The overarching domain of sign language processing is related to computer vision, natural language processing, computer graphics, and human-computer interaction. It brings together computer scientists and linguists to tackle interdisciplinary problems. This special session aims to highlight recent advances made in sign language recognition, translation, and synthesis, as well as new datasets. Topics of interest include, but are not limited to: - Sign language recognition models - Sign language translation models (from signed to spoken languages and vice versa) - Sign language synthesis and virtual signing avatars - Data collection efforts related to sign language processing All papers will be submitted to a peer review process. Accepted papers will be presented as either talks or posters, in order to favour interaction with the ESANN attendees.
There is no difference in quality between talks and posters, and all papers will be published in the conference proceedings. At least one author is expected to register for the conference and pay the registration fee. SUBMISSION: Prospective authors must submit their paper through the ESANN portal following the instructions provided on https://www.esann.org/node/6. Author guidelines are available on https://www.esann.org/author_guidelines. Each paper will undergo a peer reviewing process for its acceptance. Authors should send an e-mail with the tentative title of their contribution to the special session organizers as soon as possible. IMPORTANT DATES: Paper submission deadline: 2 May 2023 Notification of acceptance: 16 June 2023 The ESANN 2023 conference: 4-5 October 2023 SPECIAL SESSION ORGANISERS: Joni Dambre, Ghent University (Belgium) joni.dambre at ugent.be Mathieu De Coster, Ghent University (Belgium) mathieu.decoster at ugent.be Jérôme Fink, Université de Namur (Belgium) jerome.fink at unamur.be Benoît Frénay, Université de Namur (Belgium) benoit.frenay at unamur.be From juergen at idsia.ch Fri Jan 20 11:28:00 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Fri, 20 Jan 2023 16:28:00 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> Message-ID: <701C6FFF-0525-426E-8DE9-0FBB6A6EE866@supsi.ch> Dear Andrzej, thanks again! A few answers: 1. "What are the DL applications today in the industry besides some nice demos?" Alas, there are so many, on your smartphone and billions of other devices, some by the most famous companies, some of them mentioned in the survey. For example, it's fair to say that DL has revolutionised image processing and world-wide communication across numerous languages. See, e.g., Sec.
16, or reference [DL4]: https://people.idsia.ch/~juergen/impact-on-most-valuable-companies.html 2. "Why does a deep NN give better results than a shallow NN?" At least for RNNs that's very clear. Compare Sec. 14 ff. https://people.idsia.ch/~juergen/deep-learning-history.html#unsupdl 3. "Neocognitron performs invariant pattern recognition, a CNN does not." CNNs and Neocognitrons (1979) have the same basic architecture with alternating convolutional and downsampling layers. Not sure why someone changed the name! See Sec. 9: https://people.idsia.ch/~juergen/deep-learning-history.html#cnn 4. "according to you (the title of your review) AI is today DL." No, it isn't, as also mentioned in more detail in my reply to Gary: "many of the most famous modern AI applications actually combine deep learning and other cited techniques." 5. "biologically plausible" deep learning: Sec. 10 mentions some of the proposals since the 1980s, plus recent work, but only briefly, because so far the impact on modern AI has been negligible. 6. "you missed symbolical AI." Covered by many citations since the 1940s and famous surveys since the 1960s. Many successful modern RL applications actually combine NNs and old "symbolic" techniques such as Monte Carlo Tree Search and Planning. See also Sec. 17 and my answer to Gary on the "modern AI" focus. Generally speaking, I have never really understood the distinction between "symbolic" and "subsymbolic" AI. Our team is probably best known for "subsymbolic" deep learning and NNs, but for many decades we have also published stuff that many would consider "symbolic." For example, the Gödel Machine (GM, 2003, https://arxiv.org/abs/cs/0309048) is a self-referential universal problem solver making provably optimal self-improvements.
It will rewrite any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. Of course, any NN code can be injected into the GM's initial self-referential code. Does that make the GM "subsymbolic"? Likewise, a GM can be implemented on a recurrent NN. Does that make the RNN "symbolic"? This also ties in with what Steve and Gary have been discussing. 7. "Open problems" and future work: see, e.g., Sec 17: "The future of RL will be about learning/composing/planning with compact spatio-temporal abstractions of complex input streams -- about commonsense reasoning [MAR15] and learning to think [PLAN4-5]. How can NNs learn to represent percepts and action plans in a hierarchical manner, at multiple levels of abstraction, and multiple time scales [LEC]? We published first answers to these questions in 1990-91: self-supervised neural history compressors [UN][UN0-3] learn to represent percepts at multiple levels of abstraction and multiple time scales (see above), while end-to-end differentiable NN-based subgoal generators [HRL3][MIR] learn hierarchical action plans through gradient descent (see above). More sophisticated ways of learning to think in abstract ways were published in 1997 [AC97][AC99][AC02] and 2015-18 [PLAN4-5]." Nevertheless, much remains to be done! Cheers, Jürgen > On 16. Jan 2023, at 12:35, Andrzej Wichert wrote: > > Dear Jurgen, > > Again, you missed symbolical AI in your description, names like Douglas Hofstadter. Many of today's applications are driven by symbol manipulation, like diagnostic systems, route planning (GPS navigation), timetable planning, object-oriented programming, symbolic integration and solutions of equations (Mathematica). > What are the DL applications today in the industry besides some nice demos?
> > You do not indicate open problems in DL. DL is highly biologically implausible (back propagation, LSTM), requires a lot of energy (computing power), and requires huge training sets. The black art approach of DL, the failure of self-driving cars, the question why a deep NN gives better results than a shallow NN? Maybe the biggest mistake was to replace the biologically motivated algorithm of the Neocognitron by back propagation without understanding what a Neocognitron is doing. Neocognitron performs invariant pattern recognition, a CNN does not. Transformers are biologically implausible and resulted from an engineering requirement. > > My point is that when I was a student, I wanted to do a master thesis in NN in the late eighties, and I was told that NN do not belong to AI (not even to computer science). Today if a student comes and asks that he wants to investigate problem solving by production systems, or a biologically motivated ML, he will be told that this is not AI since according to you (the title of your review) AI is today DL. In my view, DL stops the progress in AI and NN in the same way LISP and Prolog did in the eighties. > > Best, > > Andrzej > > -------------------------------------------------------------------------------------------------- > Prof. Auxiliar Andreas Wichert > > http://web.tecnico.ulisboa.pt/andreas.wichert/ > - > https://www.amazon.com/author/andreaswichert > > Instituto Superior Técnico - Universidade de Lisboa > Campus IST-Taguspark > Avenida Professor Cavaco Silva Phone: +351 214233231 > 2744-016 Porto Salvo, Portugal > >> On 15 Jan 2023, at 21:04, Schmidhuber Juergen wrote: >> >> Thanks for these thoughts, Gary! >> >> 1. Well, the survey is about the roots of "modern AI" (as opposed to all of AI) which is mostly driven by "deep learning." Hence the focus on the latter and the URL "deep-learning-history.html."
On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). >> >> Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: >> >> "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. >> >> Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. >> >> When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ..." >> >> Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. >> >> 2.
Here are a few relevant paragraphs from the intro: >> >> "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. >> >> Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34, when Kurt Gödel identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. >> >> A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. >> >> A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." >> >> 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?"
>> >> Juergen >> > On 16. Jan 2023, at 16:18, Stephen José Hanson wrote: > > Gary, > > "vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here" > > As usual you are distorting the point here. What Juergen is chronicling is WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point. > > But AI from the 70s frankly failed, including NN. Expert systems, the apex application... couldn't even suggest decent wines. > Language understanding, planning etc... please point us to what working systems you are talking about? These things are broken. Why would we try to blend broken systems with a classifier that has human to super-human classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (see comments on this topic with Yann and me in the recent WIAS series!) > > Frankly, the last gasp of AI in the 70s was the US gov 5th-generation response in Austin, Texas: MCC (launched in the early 80s)... after shaking down 100s of companies for $1M a year... and plowing all the monies into reasoning, planning and NL KRep... oh yeah... Doug Lenat, who predicted every year we went down there that CYC would become intelligent in 2001! Maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year... there was nothing. What survived of CYC, and NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. > > So here we are, where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs... ignoring LLMs for the moment... which I believe are likely to crash in on themselves.
> > Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen... there will be no merging of symbols and NN in the near or distant future, because it would be useless. > > Best, > > Steve >> >>> On 14. Jan 2023, at 15:04, Gary Marcus wrote: >>> >>> Dear Juergen, >>> >>> You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI do not in any way make this a thorough and balanced exercise with respect to the field as a whole. >>> >>> I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. >>> >>> Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. >>> >>> My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. >>> Historians looking back on this paper will see too little about the roots of that trend documented here. >>> >>> Gary >>> >>>> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >>>> >>>> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI, from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec.
19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >>>> >>>> Sec. 1: Introduction >>>> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >>>> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >>>> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >>>> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) >>>> Sec. 6: 1965: First Deep Learning >>>> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >>>> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >>>> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >>>> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >>>> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >>>> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >>>> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >>>> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >>>> Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >>>> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >>>> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >>>> Sec. 18: It's the Hardware, Stupid! >>>> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >>>> Sec. 20: The Broader Historic Context from Big Bang to Far Future >>>> Sec. 21: Acknowledgments >>>> Sec. 
22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >>>> >>>> Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA >>>> >>>> Jürgen >>>> >>>> >>>> >>>> >>>> >>>>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>>>> Dear Juergen, >>>>> You make the same mistake that was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning." >>>>> Otherwise, you ignore symbolic AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). >>>>> Best, >>>>> Andreas >>>>> -------------------------------------------------------------------------------------------------- >>>>> Prof. Auxiliar Andreas Wichert >>>>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>>>> - >>>>> https://www.amazon.com/author/andreaswichert >>>>> Instituto Superior Técnico - Universidade de Lisboa >>>>> Campus IST-Taguspark >>>>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>>>> 2744-016 Porto Salvo, Portugal >>>>>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>>>>> Machine learning is the science of credit assignment.
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>>>>> https://arxiv.org/abs/2212.11279 >>>>>> https://people.idsia.ch/~juergen/deep-learning-history.html >>>>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>>>>> Happy New Year! >>>>>> Jürgen >> >> > From loizos at ouc.ac.cy Sat Jan 21 04:07:28 2023 From: loizos at ouc.ac.cy (Loizos Michael) Date: Sat, 21 Jan 2023 09:07:28 +0000 Subject: Connectionists: Multiple Research Positions on ML+KRR at CYENS Center of Excellence, Cyprus (applications reviewed on a rolling basis) Message-ID: <1674292049355.82675@ouc.ac.cy> Dear colleagues, The CYENS Center of Excellence is offering multiple fully-funded (post-doc and junior researcher) positions for pure and applied research on: - integrated learning and reasoning, - cognitive computing, - personal assistants, - explainable and trustworthy AI, - machine learning / learning theory, - preference and policy elicitation, - neural-symbolic integration, - conversational AI, - natural language understanding / generation, - formal argumentation, - knowledge-based systems. Interested candidates should apply as soon as possible. Applications are reviewed on a rolling basis until positions are filled.
https://www.cyens.org.cy/en-gb/vacancies/job-listings/research-associates/research-associates-for-the-socially-competent-ro/ Regards, Loizos -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at irdta.eu Sun Jan 22 12:35:55 2023 From: david at irdta.eu (David Silva - IRDTA) Date: Sun, 22 Jan 2023 18:35:55 +0100 (CET) Subject: Connectionists: BigDat 2023 Summer: early registration February 9 Message-ID: <1295697508.325130.1674408955348@webmail.strato.com> *********************************************** 7th INTERNATIONAL SCHOOL ON BIG DATA BigDat 2023 Summer Las Palmas de Gran Canaria, Spain July 17-21, 2023 https://bigdat.irdta.eu/2023su *********************************************** Co-organized by: University of Las Palmas de Gran Canaria Institute for Research Development, Training and Advice - IRDTA Brussels/London *********************************************** Early registration: February 9, 2023 *********************************************** FRAMEWORK: BigDat 2023 Summer is part of a multi-event called Deep&Big 2023 consisting also of DeepLearn 2023 Summer. BigDat 2023 Summer participants will have the opportunity to attend lectures in the program of DeepLearn 2023 Summer as well if they are interested. SCOPE: BigDat 2023 Summer will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of big data. Previous events were held in Tarragona, Bilbao, Bari, Timisoara, Cambridge and Ancona. Big data is a broad field covering a large spectrum of current exciting research and industrial innovation with an extraordinary potential for a huge impact on scientific discoveries, health, engineering, business models, and society itself. Renowned academics and industry pioneers will lecture and share their views with the audience. 
Most big data subareas will be covered, namely foundations, infrastructure, management, search and mining, analytics, security and privacy, as well as applications to biology and medicine, business, finance, transportation, online social networks, etc. Major challenges of analytics, management and storage of big data will be identified through 16 four-and-a-half-hour courses and 2 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully remotely, in real time. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and employment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, BigDat 2023 Summer is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: BigDat 2023 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be: Institución Ferial de Canarias Avenida de la Feria, 1 35012 Las Palmas de Gran Canaria https://www.infecar.es/ STRUCTURE: 2 courses will run in parallel during the whole event.
Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Also, if interested, participants will be able to attend courses developed in DeepLearn 2023 Summer, which will be held in parallel and at the same venue. Full live online participation will be possible. The organizers highlight, however, the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Valerie Daggett (University of Washington), Dynameomics: From Atomistic Simulations of All Protein Folds to the Discovery of a New Protein Structure to the Design of a Diagnostic Test for Alzheimer's Disease Sander Klous (University of Amsterdam), How to Audit an Analysis on a Federative Data Exchange PROFESSORS AND COURSES: (to be completed) Paolo Addesso (University of Salerno), [introductory/intermediate] Data Fusion for Remotely Sensed Data Marcelo Bertalmío (Spanish National Research Council), [introductory] The Standard Model of Vision and Its Limitations: Implications for Imaging, Vision Science and Artificial Neural Networks Gianluca Bontempi (Free University of Brussels), [intermediate/advanced] Big Data Analytics in Fraud Detection and Churn Prevention: from Prediction to Causal Inference Altan Çakir (Istanbul Technical University), [introductory/intermediate] Introduction to Distributed Deep Learning with Apache Spark Ian Fisk (Flatiron Institute), tba Ravi Kumar (Google), [intermediate/advanced] Differential Privacy Wladek Minor (University of Virginia), [introductory/advanced] Big Data in Biomedical Sciences José M.F.
Moura (Carnegie Mellon University), [introductory/intermediate] Graph Signal Processing and Geometric Learning Panos Pardalos (University of Florida), [intermediate/advanced] Data Analytics for Massive Networks Ramesh Sharda (Oklahoma State University), [introductory/intermediate] Network-Based Health Analytics Steven Skiena (Stony Brook University), [introductory/intermediate] Word and Graph Embeddings for Machine Learning Mayte Suarez-Farinas (Icahn School of Medicine at Mount Sinai), tba Ana Trisovic (Harvard University), [introductory/advanced] Reproducible Research, Best Practices and Big Data Management Sebastián Ventura (University of Córdoba), [intermediate] Supervised Descriptive Pattern Mining OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 9, 2023. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of big data in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 9, 2023. EMPLOYER SESSION: Organizations searching for personnel well skilled in big data will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the organization and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 9, 2023.
ORGANIZING COMMITTEE: Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, organization chair) REGISTRATION: It has to be done at https://bigdat.irdta.eu/2023su/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish, as well as, possibly, courses in DeepLearn 2023 Summer. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed and the on-line registration tool disabled once the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same. ACCOMMODATION: Accommodation suggestions will be available in due time at https://bigdat.irdta.eu/2023su/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. Participants will be recognized 2 ECTS credits by University of Las Palmas de Gran Canaria. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Cabildo de Gran Canaria Universidad de Las Palmas de Gran Canaria - Fundación Parque Científico Tecnológico Rovira i Virgili University Institute for Research Development, Training and Advice - IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed...
URL: From juergen at idsia.ch Sun Jan 22 04:42:37 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Sun, 22 Jan 2023 09:42:37 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> <701C6FFF-0525-426E-8DE9-0FBB6A6EE866@supsi.ch> Message-ID: <72AA7AF8-65D9-461B-98B0-007584A4EC9A@supsi.ch> Dear Andrzej, thanks again! Let me focus here on one issue you have raised: "What is a concept? From this key question, many others follow. How do fluid boundaries come about? How do they give rise to generalization? What makes something similar to something else? For example, what makes an uppercase letter 'A' recognizable as such. What is the essence of 'A'-ness? DL tries to find regularities from a labeled data set but cannot answer these questions." However, DL has provided very good answers to such questions! A central insight was: if parts of the _un_labelled data are predictable from other parts, then those parts tend to "belong together" and the data stream is compressible and can be decomposed into abstract objects. To the best of my knowledge, the first neural sequence-processing system of this kind was the 1991 RNN-based sequence chunker or history compressor discussed in Sec. 14 of the survey. It learns to represent percepts in a hierarchical manner, at multiple levels of abstraction, and multiple time scales [LEC]. This greatly improved DL back then: https://people.idsia.ch/~juergen/deep-learning-history.html#unsupdl The survey also mentions lots of additional work on NNs learning to extract abstract objects from unlabelled raw data, e.g., references [OBJ1-5]. Best, Jürgen > On 21. Jan 2023, at 14:28, Andrzej Wichert wrote: > > Dear Juergen, > > Thank you for your email. Some small remarks... > > "3. "Neocognitron performs invariant pattern recognition, a CNN does not."
CNNs and Neocognitrons (1979) have the same basic architecture with alternating convolutional and downsampling layers. Not sure why someone changed the name! See Sec. 9: https://people.idsia.ch/~juergen/deep-learning-history.html#cnn" > > The Neocognitron has maybe the same basic architecture as a CNN, but it performs a different task; see > Luis Sa-Couto and Andreas Wichert, Simple convolutional based models: are they learning the task or the data?, Neural Computation, 1(17), 2021 https://doi.org/10.1162/neco_a_01446 > > 6. "Generally speaking, I have never really understood the distinction between 'symbolic' and 'subsymbolic' AI." > All mathematics, the calculations, are related to symbol manipulation and logic. > > However, I think the most important question in AI (modern AI) was posed by Douglas Hofstadter (Gödel, Escher, Bach and Fluid Concepts and Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought): > - "What is a concept?" From this key question, many others follow. How do fluid boundaries come about? How do they give rise to generalization? What makes something similar to something else? For example, what makes an uppercase letter 'A' recognizable as such. What is the essence of 'A'-ness? DL tries to find regularities from a labeled data set but cannot answer these questions. > > > Best Wishes, > > Andrzej > --------------------------------------------------------------------------------------------------- > Prof. Auxiliar Andreas Wichert > > http://web.tecnico.ulisboa.pt/andreas.wichert/ > - > https://www.amazon.com/author/andreaswichert > > Instituto Superior Técnico - Universidade de Lisboa > Campus IST-Taguspark > Avenida Professor Cavaco Silva Phone: +351 214233231 > 2744-016 Porto Salvo, Portugal > > > > > >> On 20 Jan 2023, at 16:28, Schmidhuber Juergen wrote: >> >> Dear Andrzej, thanks again! A few answers: >> >> 1. "What are the DL applications today in the industry besides some nice demos?"
Alas, there are so many, on your smartphone and billions of other devices, some by the most famous companies, some of them mentioned in the survey. For example, it's fair to say that DL has revolutionised image processing and world-wide communication across numerous languages. See, e.g., Sec. 16, or reference [DL4]: https://people.idsia.ch/~juergen/impact-on-most-valuable-companies.html >> >> 2. "Why does a deep NN give better results than a shallow NN?" At least for RNNs that's very clear. Compare Sec. 14 ff. https://people.idsia.ch/~juergen/deep-learning-history.html#unsupdl >> >> 3. "Neocognitron performs invariant pattern recognition, a CNN does not." CNNs and Neocognitrons (1979) have the same basic architecture with alternating convolutional and downsampling layers. Not sure why someone changed the name! See Sec. 9: https://people.idsia.ch/~juergen/deep-learning-history.html#cnn >> >> 4. "according to you (the title of your review) AI is today DL." No, it isn't, as also mentioned in more detail in my reply to Gary: "many of the most famous modern AI applications actually combine deep learning and other cited techniques." >> >> 5. "biologically plausible" deep learning: Sec. 10 mentions some of the proposals since the 1980s, plus recent work, but only briefly, because so far the impact on modern AI has been negligible. >> >> 6. "you missed symbolical AI." Covered by many citations since the 1940s and famous surveys since the 1960s. Many successful modern RL applications actually combine NNs and old "symbolic" techniques such as Monte Carlo Tree Search and Planning. See also Sec. 17 and my answer to Gary on the "modern AI" focus. >> >> Generally speaking, I have never really understood the distinction between "symbolic" and "subsymbolic" AI. Our team is probably best known for "subsymbolic" deep learning and NNs, but for many decades we have also published stuff that many would consider "symbolic."
For example, the Gödel Machine (GM, 2003, https://arxiv.org/abs/cs/0309048) is a self-referential universal problem solver making provably optimal self-improvements. It will rewrite any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. >> >> Of course, any NN code can be injected into the GM's initial self-referential code. Does that make the GM "subsymbolic"? Likewise, a GM can be implemented on a recurrent NN. Does that make the RNN "symbolic"? This also ties in with what Steve and Gary have been discussing. >> >> 7. "Open problems" and future work: see, e.g., Sec. 17: "The future of RL will be about learning/composing/planning with compact spatio-temporal abstractions of complex input streams, about commonsense reasoning [MAR15] and learning to think [PLAN4-5]. How can NNs learn to represent percepts and action plans in a hierarchical manner, at multiple levels of abstraction, and multiple time scales [LEC]? We published first answers to these questions in 1990-91: self-supervised neural history compressors [UN][UN0-3] learn to represent percepts at multiple levels of abstraction and multiple time scales (see above), while end-to-end differentiable NN-based subgoal generators [HRL3][MIR] learn hierarchical action plans through gradient descent (see above). More sophisticated ways of learning to think in abstract ways were published in 1997 [AC97][AC99][AC02] and 2015-18 [PLAN4-5]." Nevertheless, much remains to be done! >> >> Cheers, >> Jürgen >> >> >> >> >>> On 16. Jan 2023, at 12:35, Andrzej Wichert wrote: >>> >>> Dear Juergen, >>> >>> Again, you missed symbolic AI in your description, names like Douglas Hofstadter.
Many of today's applications are driven by symbol manipulation, like diagnostic systems, route planning (GPS navigation), timetable planning, object-oriented programming, symbolic integration and solutions of equations (Mathematica). >>> What are the DL applications today in the industry besides some nice demos? >>> >>> You do not indicate open problems in DL. DL is highly biologically implausible (back propagation, LSTM), requires a lot of energy (computing power), and requires huge training sets. Add to this the black-art approach of DL, the failure of self-driving cars, and the open question of why a deep NN gives better results than a shallow NN. Maybe the biggest mistake was to replace the biologically motivated algorithm of the Neocognitron by back propagation without understanding what a Neocognitron is doing. The Neocognitron performs invariant pattern recognition; a CNN does not. Transformers are biologically implausible and resulted from an engineering requirement. >>> >>> My point is that when I was a student, I wanted to do a master thesis on NNs in the late eighties, and I was told that NNs do not belong to AI (not even to computer science). Today, if a student comes and asks to investigate problem solving by production systems, or biologically motivated ML, he will be told that this is not AI, since according to you (the title of your review) AI is today DL. In my view, DL stops the progress in AI and NNs in the same way LISP and Prolog did in the eighties. >>> >>> Best, >>> >>> Andrzej >>> >>> -------------------------------------------------------------------------------------------------- >>> Prof.
Auxiliar Andreas Wichert >>> >>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>> - >>> https://www.amazon.com/author/andreaswichert >>> >>> Instituto Superior T?cnico - Universidade de Lisboa >>> Campus IST-Taguspark >>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>> 2744-016 Porto Salvo, Portugal >>> >>>> On 15 Jan 2023, at 21:04, Schmidhuber Juergen wrote: >>>> >>>> Thanks for these thoughts, Gary! >>>> >>>> 1. Well, the survey is about the roots of ?modern AI? (as opposed to all of AI) which is mostly driven by ?deep learning.? Hence the focus on the latter and the URL "deep-learning-history.html.? On the other hand, many of the most famous modern AI applications actually combine deep learning and other cited techniques (more on this below). >>>> >>>> Any problem of computer science can be formulated in the general reinforcement learning (RL) framework, and the survey points to ancient relevant techniques for search & planning, now often combined with NNs: >>>> >>>> "Certain RL problems can be addressed through non-neural techniques invented long before the 1980s: Monte Carlo (tree) search (MC, 1949) [MOC1-5], dynamic programming (DP, 1953) [BEL53], artificial evolution (1954) [EVO1-7][TUR1] (unpublished), alpha-beta-pruning (1959) [S59], control theory and system identification (1950s) [KAL59][GLA85], stochastic gradient descent (SGD, 1951) [STO51-52], and universal search techniques (1973) [AIT7]. >>>> >>>> Deep FNNs and RNNs, however, are useful tools for _improving_ certain types of RL. In the 1980s, concepts of function approximation and NNs were combined with system identification [WER87-89][MUN87][NGU89], DP and its online variant called Temporal Differences [TD1-3], artificial evolution [EVONN1-3] and policy gradients [GD1][PG1-3]. Many additional references on this can be found in Sec. 6 of the 2015 survey [DL1]. 
>>>> >>>> When there is a Markovian interface [PLAN3] to the environment such that the current input to the RL machine conveys all the information required to determine a next optimal action, RL with DP/TD/MC-based FNNs can be very successful, as shown in 1994 [TD2] (master-level backgammon player) and the 2010s [DM1-2a] (superhuman players for Go, chess, and other games). For more complex cases without Markovian interfaces, ?? >>>> >>>> Theoretically optimal planners/problem solvers based on algorithmic information theory are mentioned in Sec. 19. >>>> >>>> 2. Here a few relevant paragraphs from the intro: >>>> >>>> "A history of AI written in the 1980s would have emphasized topics such as theorem proving [GOD][GOD34][ZU48][NS56], logic programming, expert systems, and heuristic search [FEI63,83][LEN83]. This would be in line with topics of a 1956 conference in Dartmouth, where the term "AI" was coined by John McCarthy as a way of describing an old area of research seeing renewed interest. >>>> >>>> Practical AI dates back at least to 1914, when Leonardo Torres y Quevedo built the first working chess end game player [BRU1-4] (back then chess was considered as an activity restricted to the realms of intelligent creatures). AI theory dates back at least to 1931-34 when Kurt G?del identified fundamental limits of any type of computation-based AI [GOD][BIB3][GOD21,a,b]. >>>> >>>> A history of AI written in the early 2000s would have put more emphasis on topics such as support vector machines and kernel methods [SVM1-4], Bayesian (actually Laplacian or possibly Saundersonian [STI83-85]) reasoning [BAY1-8][FI22] and other concepts of probability theory and statistics [MM1-5][NIL98][RUS95], decision trees, e.g. [MIT97], ensemble methods [ENS1-4], swarm intelligence [SW1], and evolutionary computation [EVO1-7][TUR1]. Why? Because back then such techniques drove many successful AI applications. 
>>>> >>>> A history of AI written in the 2020s must emphasize concepts such as the even older chain rule [LEI07] and deep nonlinear artificial neural networks (NNs) trained by gradient descent [GD?], in particular, feedback-based recurrent networks, which are general computers whose programs are weight matrices [AC90]. Why? Because many of the most famous and most commercial recent AI applications depend on them [DL4]." >>>> >>>> 3. Regarding the future, you mentioned your hunch on neurosymbolic integration. While the survey speculates a bit about the future, it also says: "But who knows what kind of AI history will prevail 20 years from now?" >>>> >>>> Juergen >>>> >> >>> On 16. Jan 2023, at 16:18, Stephen José Hanson wrote: >>> >>> Gary, >>> >>> "vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here" >>> >>> As usual you are distorting the point here. What Juergen is chronicling is about WORKING AI--(the big bang aside for a moment) and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point. >>> >>> But AI from the 70s frankly failed, including NN. Expert systems, the apex application...couldn't even suggest decent wines. >>> Language understanding, planning etc.. please point to us what working systems are you talking about? These things are broken. Why would we try to blend broken systems with a classifier that has human to super-human classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (see comments on this topic with Yann and me in the recent WIAS series!) >>> >>> Frankly, the last gasp of AI in the 70s was the US gov 5th generation response in Austin Texas--MCC (launched in the early 80s).. after shaking down 100s of companies 1M$ a year.. and plowing all the monies into reasoning, planning and NL KRep.. oh yeah..
Doug Lenat.. who predicted every year we went down there that CYC would become intelligent in 2001! maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year.. there was nothing. What survived of CYC, and NL and reasoning breakthroughs? There was nothing. Nothing survived this money party. >>> >>> So here we are where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs.. ignoring LLM for the moment.. which I believe are likely to crash in on themselves. >>> >>> Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless. >>> >>> Best, >>> >>> Steve >>>> >>>>> On 14. Jan 2023, at 15:04, Gary Marcus wrote: >>>>> >>>>> Dear Juergen, >>>>> >>>>> You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole. >>>>> >>>>> I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that. >>>>> >>>>> Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do.
>>>>> >>>>> My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. >>>>> Historians looking back on this paper will see too little about the roots of that trend documented here. >>>>> >>>>> Gary >>>>> >>>>>> On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen wrote: >>>>>> >>>>>> Dear Andrzej, thanks, but come on, the report cites lots of "symbolic" AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and "traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents: >>>>>> >>>>>> Sec. 1: Introduction >>>>>> Sec. 2: 1676: The Chain Rule For Backward Credit Assignment >>>>>> Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning >>>>>> Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs >>>>>> Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning) >>>>>> Sec. 6: 1965: First Deep Learning >>>>>> Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent >>>>>> Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. >>>>>> Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) >>>>>> Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc >>>>>> Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners >>>>>> Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command >>>>>> Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention >>>>>> Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs >>>>>> Sec.
15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients >>>>>> Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets >>>>>> Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher >>>>>> Sec. 18: It's the Hardware, Stupid! >>>>>> Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science >>>>>> Sec. 20: The Broader Historic Context from Big Bang to Far Future >>>>>> Sec. 21: Acknowledgments >>>>>> Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1]) >>>>>> >>>>>> Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060?cxt=HHwWiMC8gYiH7MosAAAA >>>>>> >>>>>> Jürgen >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> On 13. Jan 2023, at 14:40, Andrzej Wichert wrote: >>>>>>> Dear Juergen, >>>>>>> You make the same mistake as was made in the early 1970s. You identify deep learning with modern AI; the paper should instead be called "Annotated History of Deep Learning." >>>>>>> Otherwise, you ignore symbolical AI, like search, production systems, knowledge representation, planning etc., as if it is not part of AI anymore (suggested by your title). >>>>>>> Best, >>>>>>> Andreas >>>>>>> -------------------------------------------------------------------------------------------------- >>>>>>> Prof.
Auxiliar Andreas Wichert >>>>>>> http://web.tecnico.ulisboa.pt/andreas.wichert/ >>>>>>> - >>>>>>> https://www.amazon.com/author/andreaswichert >>>>>>> Instituto Superior Técnico - Universidade de Lisboa >>>>>>> Campus IST-Taguspark >>>>>>> Avenida Professor Cavaco Silva Phone: +351 214233231 >>>>>>> 2744-016 Porto Salvo, Portugal >>>>>>>>> On 13 Jan 2023, at 08:13, Schmidhuber Juergen wrote: >>>>>>>> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >>>>>>>> https://arxiv.org/abs/2212.11279 >>>>>>>> https://people.idsia.ch/~juergen/deep-learning-history.html >>>>>>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >>>>>>>> Happy New Year!
>>>>>>>> Jürgen From andreas.wichert at tecnico.ulisboa.pt Sat Jan 21 06:28:37 2023 From: andreas.wichert at tecnico.ulisboa.pt (Andrzej Wichert) Date: Sat, 21 Jan 2023 11:28:37 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <701C6FFF-0525-426E-8DE9-0FBB6A6EE866@supsi.ch> References: <98231BD9-6F00-407C-814D-7803C38EEB01@nyu.edu> <0DE4742A-3C24-4102-930F-8C8A5A753BC3@supsi.ch> <701C6FFF-0525-426E-8DE9-0FBB6A6EE866@supsi.ch> Message-ID: Dear Juergen, Thank you for your email. Some small remarks... "3. "Neocognitron performs invariant pattern recognition, a CNN does not." CNNs and Neocognitrons (1979) have the same basic architecture with alternating convolutional and downsampling layers. Not sure why someone changed the name! See Sec. 9: https://people.idsia.ch/~juergen/deep-learning-history.html#cnn" The Neocognitron may have the same basic architecture as a CNN, but it performs a different task; see Luis Sa-Couto and Andreas Wichert, Simple convolutional based models: are they learning the task or the data?, Neural Computation, 1(17), 2021, https://doi.org/10.1162/neco_a_01446 6. Generally speaking, I have never really understood the distinction between "symbolic" and "subsymbolic" AI. All mathematics and all calculation are related to symbol manipulation and logic. However, I think the most important question in AI (modern AI) was posed by Douglas Hofstadter (Gödel, Escher, Bach and Fluid Concepts and Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought) - "What is a concept?" From this key question, many others follow. How do fluid boundaries come about? How do they give rise to generalization? What makes something similar to something else? For example, what makes an uppercase letter "A" recognizable as such? What is the essence of "A"-ness? DL tries to find regularities from a labeled data set but cannot answer these questions.
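Remark 3 above turns on what "the same basic architecture" means. As a concrete reference point, here is a minimal NumPy sketch (illustrative only, from neither paper) of the alternating convolution-and-downsampling skeleton shared by the Neocognitron and CNNs, with a hypothetical 8x8 input and random 3x3 kernel; it shows the architecture only, and says nothing about whether such models learn "the task or the data".

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2D cross-correlation of image x with kernel k (no padding)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def downsample2(x):
    """2x2 average pooling with stride 2 (the C-cell-like downsampling step)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# The shared skeleton: feature extraction alternating with downsampling.
x = np.random.rand(8, 8)  # hypothetical toy "image"
k = np.random.rand(3, 3)  # hypothetical kernel
h1 = downsample2(np.maximum(conv2d_valid(x, k), 0))  # 8x8 -> 6x6 -> 3x3
assert h1.shape == (3, 3)
```

The disagreement in the thread is not about this skeleton but about what the trained weights end up computing, which a shape-level sketch like this cannot settle.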
Best Wishes, Andrzej --------------------------------------------------------------------------------------------------- Prof. Auxiliar Andreas Wichert http://web.tecnico.ulisboa.pt/andreas.wichert/ - https://www.amazon.com/author/andreaswichert Instituto Superior Técnico - Universidade de Lisboa Campus IST-Taguspark Avenida Professor Cavaco Silva Phone: +351 214233231 2744-016 Porto Salvo, Portugal > On 20 Jan 2023, at 16:28, Schmidhuber Juergen wrote: > > Dear Andrzej, thanks again! A few answers: > > 1. "What are the DL applications today in the industry besides some nice demos?" Alas, there are so many, on your smartphone and billions of other devices, some by the most famous companies, some of them mentioned in the survey. For example, it's fair to say that DL has revolutionised image processing and world-wide communication across numerous languages. See, e.g., Sec. 16, or reference [DL4]: https://people.idsia.ch/~juergen/impact-on-most-valuable-companies.html > > 2. "Why does a deep NN give better results than a shallow NN?" At least for RNNs that's very clear. Compare Sec. 14 ff. https://people.idsia.ch/~juergen/deep-learning-history.html#unsupdl > > 3. "Neocognitron performs invariant pattern recognition, a CNN does not." CNNs and Neocognitrons (1979) have the same basic architecture with alternating convolutional and downsampling layers. Not sure why someone changed the name! See Sec. 9: https://people.idsia.ch/~juergen/deep-learning-history.html#cnn > > 4. "according to you (the title of your review) AI is today DL." No, it isn't, as also mentioned in more detail in my reply to Gary: "many of the most famous modern AI applications actually combine deep learning and other cited techniques." > > 5. "biologically plausible" deep learning: Sec. 10 mentions some of the proposals since the 1980s, plus recent work, but only briefly, because so far the impact on modern AI has been negligible. > > 6. "you missed symbolical AI."
Covered by many citations since the 1940s and famous surveys since the 1960s. Many successful modern RL applications actually combine NNs and old "symbolic" techniques such as Monte Carlo Tree Search and Planning. See also Sec. 17 and my answer to Gary on the "modern AI" focus. > > Generally speaking, I have never really understood the distinction between "symbolic" and "subsymbolic" AI. Our team is probably best known for "subsymbolic" deep learning and NNs, but for many decades we have also published stuff that many would consider "symbolic." For example, the Gödel Machine (GM, 2003, https://arxiv.org/abs/cs/0309048) is a self-referential universal problem solver making provably optimal self-improvements. It will rewrite any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. > > Of course, any NN code can be injected into the GM's initial self-referential code. Does that make the GM "subsymbolic"? Likewise, a GM can be implemented on a recurrent NN. Does that make the RNN "symbolic"? This also ties in with what Steve and Gary have been discussing. > > 7. "Open problems" and future work: see, e.g., Sec 17: "The future of RL will be about learning/composing/planning with compact spatio-temporal abstractions of complex input streams: about commonsense reasoning [MAR15] and learning to think [PLAN4-5]. How can NNs learn to represent percepts and action plans in a hierarchical manner, at multiple levels of abstraction, and multiple time scales [LEC]?
We published first answers to these questions in 1990-91: self-supervised neural history compressors [UN][UN0-3] learn to represent percepts at multiple levels of abstraction and multiple time scales (see above), while end-to-end differentiable NN-based subgoal generators [HRL3][MIR] learn hierarchical action plans through gradient descent (see above). More sophisticated ways of learning to think in abstract ways were published in 1997 [AC97][AC99][AC02] and 2015-18 [PLAN4-5]." Nevertheless, much remains to be done! > > Cheers, > Jürgen > > > > >> On 16. Jan 2023, at 12:35, Andrzej Wichert wrote: >> >> Dear Juergen, >> >> Again, you missed symbolical AI in your description, names like Douglas Hofstadter. Many of today's applications are driven by symbol manipulation, like diagnostic systems, route planning (GPS navigation), timetable planning, object-oriented programming, symbolic integration and solution of equations (Mathematica). >> What are the DL applications today in the industry besides some nice demos? >> >> You do not indicate open problems in DL: DL is highly biologically implausible (backpropagation, LSTM), requires a lot of energy (computing power), and requires huge training sets. There is the black-art approach of DL, the failure of self-driving cars, and the question of why a deep NN gives better results than a shallow NN. Maybe the biggest mistake was to replace the biologically motivated algorithm of the Neocognitron by backpropagation without understanding what a Neocognitron is doing. The Neocognitron performs invariant pattern recognition; a CNN does not. Transformers are biologically implausible and resulted from an engineering requirement. >> >> My point is that when I was a student, I wanted to do a master thesis on NNs in the late eighties, and I was told that NNs do not belong to AI (not even to computer science).
Today if a student comes and asks to investigate problem solving by production systems, or a biologically motivated ML, he will be told that this is not AI since, according to you (the title of your review), AI today is DL. In my view, DL stops the progress in AI and NN in the same way LISP and Prolog did in the eighties. >> >> Best, >> >> Andrzej >> >> -------------------------------------------------------------------------------------------------- >> Prof. Auxiliar Andreas Wichert >> >> http://web.tecnico.ulisboa.pt/andreas.wichert/ >> - >> https://www.amazon.com/author/andreaswichert >> >> Instituto Superior Técnico - Universidade de Lisboa >> Campus IST-Taguspark >> Avenida Professor Cavaco Silva Phone: +351 214233231 >> 2744-016 Porto Salvo, Portugal

From m.t.van.der.meer at liacs.leidenuniv.nl Mon Jan 23 08:00:20 2023 From: m.t.van.der.meer at liacs.leidenuniv.nl (Meer, M.T. van der (Michiel)) Date: Mon, 23 Jan 2023 13:00:20 +0000 Subject: Connectionists: [CFP] Call for Contributions at HHAI2023 (Main Track, Posters & Demos, Doctoral Consortium, Workshop & Tutorials) Message-ID: <5ebd9498-df7d-557e-d803-d12b6c98cdc5@liacs.leidenuniv.nl>

Calls for Contributions at HHAI2023
June 26-30, 2023, Munich, Germany

In this call:
* Call for Main Track Papers
* Call for Posters and Demos
* Call for Doctoral Consortium Papers
* Call for Workshop and Tutorial Proposals

The full text of each call is available on our website: https://www.hhai-conference.org In this message, we shortened each call to its essentials. All deadlines are at the end of the day specified, anywhere on Earth (UTC-12).
Hybrid Human-Artificial Intelligence (HHAI 2023) is the second international conference focusing on the study of artificially intelligent systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence. HHAI aims for AI systems that work together with humans, emphasising the need for adaptive, collaborative, responsible, interactive and human-centered intelligent systems that leverage human strengths and compensate for human weaknesses, while taking into account social, ethical and legal considerations. This field of study is driven by current developments in AI, but also requires fundamentally new approaches and solutions. In addition, we want to encourage collaborations across research domains such as AI, HCI, cognitive and social sciences, philosophy & ethics, complex systems, and others. In this second edition of the conference, we invite scholars from these fields to submit their best original work, whether new, in progress, or visionary, on Hybrid Human-Artificial Intelligence.

Topics

We invite research on different challenges in Hybrid Human-Artificial Intelligence.
The following list of topics is illustrative, not exhaustive: * Human-AI interaction and collaboration * Adaptive human-AI co-learning and co-creation * Learning, reasoning and planning with humans and machines in the loop * User modelling and personalisation * Integration of learning and reasoning * Transparent, explainable and accountable AI * Fair, ethical, responsible and trustworthy AI * Societal awareness of AI * Multimodal machine perception of real world settings * Social signal processing * Representation learning for Communicative or Collaborative AI * Symbolic and narrative-based representations for human-centric AI * Role of Design and Compositionality of AI systems in Interpretable / Collaborative AI We welcome contributions about all types of technology, from robots and conversational agents to multi-agent systems and machine learning models. Keep an eye on our website for more information: https://www.hhai-conference.org/ Work should be submitted via EasyChair: https://easychair.org/conferences/?conf=hhai2023 For questions, you can reach us at: organisers at hhai-conference.org --- Call for Papers Main Track We welcome submissions of 4-12 pages addressing relevant topics to HHAI. For more information see: https://www.hhai-conference.org/cfp/ Important dates * Abstract submission: February 10th, 2023 * Paper submission: February 17th, 2023 * Reviews released: March 17th, 2023 * Final notification: April 17th, 2023 * Camera-ready: May 02nd, 2023 * Main conference: June 26th-30th, 2023 --- Call for Posters and Demos We welcome submissions of 2-page abstracts addressing relevant topics to HHAI.
For more information see: https://www.hhai-conference.org/cfpd/ Important Deadlines * Submission due: Monday April 10th, 2023 * Notifications: Friday May 5th, 2023 * Camera-ready due (extended abstract): Monday May 15th, 2023 * Final video and poster submission: Tuesday May 23rd, 2023 * HHAI2023 Posters & Demos: June 29th, 2023 --- Call for Doctoral Consortium Papers We welcome submissions of 8-12 page descriptions of PhD research proposals. For more information see: https://www.hhai-conference.org/cfpdc/ Important Dates * Submission Deadline: March 15th, 2023 * Reviews Released: April 15th, 2023 * Camera-ready Papers Due: May 15th, 2023 * Doctoral Consortium: June 27th, 2023 --- Call for HHAI 2023 Workshop and Tutorial Proposals We welcome submissions of 8-page proposals for workshops or tutorials. For more information see: https://www.hhai-conference.org/cfpwt/ Important Dates * January 31, 2023: Workshop and tutorial proposals due * February 7, 2023: Proposal acceptance notification * February 14, 2023: Deadline for announcing the Call for Contributions to the workshops * March 28, 2023: Recommended deadline for submissions to the workshops * April 25, 2023: Recommended deadline for notifications on the submissions * June 26/27, 2023: HHAI 2023 Workshops Kind regards, On behalf of the HHAI 2023 Organizing Committee, Michiel van der Meer Leiden University Web & Publicity Chair HHAI 2023
From d.kollias at qmul.ac.uk Mon Jan 23 16:47:15 2023 From: d.kollias at qmul.ac.uk (Dimitrios Kollias) Date: Mon, 23 Jan 2023 21:47:15 +0000 Subject: Connectionists: (CfP) AIMIA Workshop and 3rd COV19D Competition in IEEE ICASSP 2023 Message-ID: Dear Colleagues, We are inviting you to participate in the AI-enabled Medical Image Analysis Workshop and 3rd Covid-19 Diagnosis Competition to be held in conjunction with the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023 in Rhodes Island, Greece, 4-9 June, 2023. The website for these events that contains more information can be found here. A) The AI-enabled Medical Image Analysis (MIA) Workshop is devoted to medical image analysis, with emphasis on the pathology and radiology fields for diagnosis of diseases. Our focus is placed on Artificial Intelligence (AI), Machine and Deep Learning (ML, DL) approaches that target effective and adaptive diagnosis; we also have a particular interest in approaches that enforce trustworthiness and automatically generate explanations, or justifications, of the decision-making process. Important Dates: March 24, 2023: Paper submission April 14, 2023: Review decisions sent to authors; notifications of acceptance April 28, 2023: Camera-ready version For any requests or enquiries, please contact: stefanos at cs.ntua.gr B) The 3rd COV19D Competition includes two Challenges: i) COVID19 Detection Challenge and ii) COVID19 Severity Detection Challenge. The 1st Challenge refers to COVID19 vs non-COVID19 classification. The 2nd Challenge refers to classification of COVID-19 severity into four categories, i.e., mild, moderate, severe and critical.
Important Dates: December 16, 2022: Opening of the Competition March 16, 2023: Submission of results March 17, 2023: Winners announcement March 24, 2023: Paper submission April 14, 2023: Review decisions sent to authors; notifications of acceptance April 28, 2023: Camera-ready version For registration and further information, please contact d.kollias at qmul.ac.uk Organizers: Stefanos Kollias, National Technical University of Athens Xujiong Ye, University of Lincoln Francesco Rundo, STMicroelectronics ADG-Central R&D Dimitrios Kollias, Queen Mary University of London All accepted papers will be part of the IEEE ICASSP 2023 Conference Proceedings. Kind Regards, on behalf of the organising committee, Dimitris ======================================================================== Dr Dimitrios Kollias, PhD, MIEEE, FHEA Lecturer (Assistant Professor) in Artificial Intelligence Member of Multimedia and Vision (MMV) research group Member of Queen Mary Computer Vision Group Associate Member of Centre for Advanced Robotics (ARQ) Academic Fellow of Digital Environment Research Institute (DERI) School of EECS Queen Mary University of London ======================================================================== From sebastian.risi at gmail.com Mon Jan 23 15:02:24 2023 From: sebastian.risi at gmail.com (Sebastian Risi) Date: Mon, 23 Jan 2023 21:02:24 +0100 Subject: Connectionists: ERC-Funded PhD and Postdoc positions in Artificial Intelligence and Meta-Learning (deadline February 1st) Message-ID: The positions are funded as part of the GROW-AI project, a 5-year European Research Council (ERC) Consolidator Grant awarded to Sebastian Risi. The PhD positions are full-time positions for 3 years (starting from an MSc or equivalent) or 4 years (starting from a BSc). The postdoc positions are full-time positions funded for an initial period of 2 years, with potential extensions for up to a total of 4 years.
The project: Growing Machines Capable of Rapid Learning in Unknown Environments Despite major advances in the field of artificial intelligence, especially in the field of neural networks, these systems still pale in comparison to even simple biological intelligence. Current machine learning systems take many trials to learn, lack common sense, and often fail even if the environment only changes slightly. In stark contrast to current neural networks, whose architectures are designed by human experts and whose large number of parameters are optimized directly, evolution does not operate directly on the parameters of biological nervous systems. Instead, these nervous systems are grown and self-organize through a much smaller genetic program that produces rich behavioral capabilities right from birth and the ability to rapidly learn. Neuroscience suggests this "genomic bottleneck" is an important regularizing constraint, allowing animals to generalize to new situations. The goal in GROW-AI is to learn such genomic bottleneck algorithms and, in addition, to co-optimize task generators that provide the agents with the most effective learning environments. Taking inspiration from the fields of artificial life, neurobiology, and machine learning, we will investigate whether algorithmic growth is needed to understand and create intelligence. The candidate should have experience in some of the following topics: - Deep RL / meta-learning / continual learning - Neuroevolution / evolutionary algorithms - Indirect encodings such as Hypernetworks - Neurobiology / systems neuroscience / neurogenetics - Evolutionary developmental biology - Procedural content generation Research environment The candidates will work together with Sebastian Risi (sebastianrisi.com) and join the Creative AI (game.itu.dk/groups/creativeai/) and Robotics, Evolution, and Art Lab (REAL) (real.itu.dk) groups at the IT University of Copenhagen (itu.dk). Our aim is to build a diverse team.
All applications are welcome; applications from members of underrepresented groups are especially encouraged. *Application* The application should be in English and should include: - A cover letter. - A research statement, which provides evidence of independent thinking, novelty and originality, and a state-of-the-art grasp of the targeted research field matching the project description above. - A full CV, including name, address, phone number, e-mail, previous and present employment and academic background - Documentation of academic degrees (copy of degree certificates etc.) - Applications may also include up to 3 relevant scientific publications written by the applicant, or the applicant's master's thesis. Letters of recommendation can also be included in the application. Applications without the above-mentioned required documents will not be assessed. *General information* The IT University of Copenhagen (ITU) is a teaching and research-based tertiary institution concerned with information technology (IT) and the opportunities it offers. The IT University has more than 160 full-time faculty members. Research and teaching in information technology span all academic activities which involve computers, including computer science, information and media sciences, humanities and social sciences, business impact and the commercialization of IT. Questions about the positions can be directed to Sebastian Risi, IT University of Copenhagen, e-mail sebr at itu.dk. Questions related to the application procedure may be directed to HR, hr at itu.dk. Link to apply: https://candidate.hr-manager.net/ApplicationInit.aspx?cid=119&ProjectId=181512&DepartmentId=3508&MediaId=5 Deadline: February 1st, 2023
From ioannakoroni at csd.auth.gr Tue Jan 24 03:15:30 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Tue, 24 Jan 2023 10:15:30 +0200 Subject: Connectionists: AI4Media: ACM TOMM Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation References: <90f2cecc-a5b4-502d-de6c-0c22f4206892@iti.gr> <035601d92f13$917d1c80$b4775580$@loba.pt> Message-ID: <256601d92fcc$06ccf6f0$1466e4d0$@csd.auth.gr> The ACM Transactions on Multimedia Computing, Communications, and Applications organises a Special Issue on "Realistic Synthetic Data: Generation, Learning, Evaluation". The Special Issue is endorsed by the AI4Media project and the Guest Editors are members of the AI4Media consortium. The call for papers can be found below and in the attachment. The submission deadline is March 31st, 2023. The Topics of Interest include: * Synthetic data for various modalities, e.g., signals, images, volumes, audio, etc. * Controllable generation for learning from synthetic data. * Transfer learning and generalization of models. * Causality in data generation. * Addressing bias, limitations, and trustworthiness in data generation. * Evaluation measures/protocols and benchmarks to assess quality of synthetic content. * Open synthetic datasets and software tools. * Ethical aspects of synthetic data. Please consider submitting your work to this special issue!
Thank you, Best regards, ---------------------------------------------------------------------------------------------------- Call-for-Papers: ACM TOMM SI on Realistic Synthetic Data: Generation, Learning, Evaluation [Apologies for multiple postings] ACM Transactions on Multimedia Computing, Communications, and Applications Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation Impact Factor 4.094 https://mc.manuscriptcentral.com/tomm Submission deadline: 31 March 2023 *** CALL FOR PAPERS *** [Guest Editors] Bogdan Ionescu, Universitatea Politehnica din Bucuresti, România Ioannis Patras, Queen Mary University of London, UK Henning Muller, University of Applied Sciences Western Switzerland, Switzerland Alberto Del Bimbo, Università degli Studi di Firenze, Italy [Scope] In the current context of Machine Learning (ML) and Deep Learning (DL), data, and especially high-quality data, are central for ensuring proper training of the networks. It is well known that DL models require an important quantity of annotated data to be able to reach their full potential. Annotating content for models is traditionally done by human experts or at least by typical users, e.g., via crowdsourcing. This is a tedious task that is time-consuming and expensive -- massive resources are required, content has to be curated and so on. Moreover, there are specific domains where data confidentiality makes this process even more challenging, e.g., in the medical domain, where patient data cannot easily be made publicly available. With the advancement of neural generative models such as Generative Adversarial Networks (GANs) or, more recently, diffusion models, a promising way of solving or alleviating such problems that are associated with the need for domain-specific annotated data is to go toward realistic synthetic data generation. These data are generated by learning specific characteristics of different classes of target data.
The advantage is that these networks would allow for infinite variations within those classes while producing realistic outcomes, typically hard to distinguish from the real data. These data have no proprietary or confidentiality restrictions and seem a viable solution to generate new datasets or augment existing ones. Existing work shows very promising results for signal generation, images, etc. Nevertheless, there are some limitations that need to be overcome so as to advance the field. For instance, how can one control/manipulate the latent codes of GANs, or the diffusion process, so as to produce in the output the desired classes and the desired variations like real data? In many cases, results are not of high quality and selection has to be made by the user, which is like manual annotation. Bias may intervene in the generation process due to bias in the input dataset. Are the networks trustworthy? Is the generated content violating data privacy? In some cases one can predict, based on a generated image, the actual data source used for training the network. Would it be possible to train the networks to produce new classes and learn causality of the data? How do we objectively assess the quality of the generated data? These are just a few open research questions. [Topics] In this context, the special issue is seeking innovative algorithms and approaches addressing the following topics (but is not limited to): - Synthetic data for various modalities, e.g., signals, images, volumes, audio, etc. - Controllable generation for learning from synthetic data. - Transfer learning and generalization of models. - Causality in data generation. - Addressing bias, limitations, and trustworthiness in data generation. - Evaluation measures/protocols and benchmarks to assess quality of synthetic content. - Open synthetic datasets and software tools. - Ethical aspects of synthetic data.
[Important Dates] - Submission deadline: 31 March 2023 - First-round review decisions: 30 June 2023 - Deadline for revised submissions: 31 July 2023 - Notification of final decisions: 30 September 2023 - Tentative publication: December 2023 [Submission Information] Prospective authors are invited to submit their manuscripts electronically through the ACM TOMM online submission system (see https://mc.manuscriptcentral.com/tomm) while adhering strictly to the journal guidelines (see https://tomm.acm.org/authors.cfm). For the article type, please select the Special Issue denoted SI: Realistic Synthetic Data: Generation, Learning, Evaluation. Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere. If the submission is an extended work of a previously published conference paper, please include the original work and a cover letter describing the new content and results that were added. According to ACM TOMM publication policy, previously published conference papers can be eligible for publication provided that at least 40% new material is included in the journal version. [Contact] For questions and further information, please contact Bogdan Ionescu / bogdan.ionescu at upb.ro. [Acknowledgement] The Special Issue is endorsed by the AI4Media "A Centre of Excellence delivering next generation AI Research and Training at the service of Media, Society and Democracy" H2020 ICT-48-2020 project https://www.ai4media.eu/. On behalf of the Guest Editors, Bogdan Ionescu https://www.aimultimedialab.ro/ -------------- next part -------------- A non-text attachment was scrubbed...
Name: CfP-TOMM-SI-RealisticSyntheticData-2023.pdf Type: application/pdf Size: 186533 bytes Desc: not available From ioannakoroni at csd.auth.gr Tue Jan 24 03:30:02 2023 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Tue, 24 Jan 2023 10:30:02 +0200 Subject: Connectionists: AIDA Short Courses: "Nvidia DLI - Fundamentals of Deep Learning", 2nd February 2023 Message-ID: <283601d92fce$0e7e58b0$2b7b0a10$@csd.auth.gr> Nvidia DLI and the University of Debrecen organize an online AIDA short course on "Fundamentals of Deep Learning", offered through the International Artificial Intelligence Doctoral Academy (AIDA). The purpose of this course is to learn how deep learning works through hands-on exercises in computer vision and natural language processing. You'll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly. This short course will cover the following topics: * The Mechanics of Deep Learning (120 mins), * Pre-trained Models and Recurrent Networks (120 mins), * Final Project: Object Classification (120 mins). LECTURER: - Dr. Andras Hajdu, Full Professor, Nvidia Deep Learning Institute Certified Instructor and Ambassador, email: hajdu.andras at inf.unideb.hu HOST INSTITUTION/ORGANIZER: Nvidia Deep Learning Institute, Faculty of Informatics, University of Debrecen, Hungary REGISTRATION: Free of charge for university students and staff WHEN: February 2, 2023 from 09:00 to 17:00 CET WHERE: Online HOW TO REGISTER and ENROLL: Both AIDA and non-AIDA students are encouraged to participate in this short course.
If you are an AIDA Student* already, please: Step (a) register in the course by following the Course Link: Nvidia Deep Learning Institute | University of Debrecen (unideb.hu) AND Step (b) enroll in the same course in the AIDA system using the enrollment button in the AIDA course page Nvidia DLI - Fundamentals of Deep Learning - AIDA - AI Doctoral Academy (i-aida.org), so that this course enters your AIDA Course Attendance Certificate. If you are not an AIDA Student, do only step (a). *AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong only to the AIDA Members listed in this page: https://www.i-aida.org/about/members/) Dr. Andras Hajdu, Full Professor Nvidia Deep Learning Institute Certified Instructor and Ambassador Email hajdu.andras at inf.unideb.hu From smgsolinas at uniss.it Tue Jan 24 03:34:30 2023 From: smgsolinas at uniss.it (Sergio Solinas) Date: Tue, 24 Jan 2023 09:34:30 +0100 Subject: Connectionists: PhD APPLICATION OPEN in computational neuroscience, cognitive science, and Mixed Reality Message-ID: The Theoretical and Computational Neuroscience Laboratory of Paolo Enrico and Sergio Solinas invites applications for a *Doctoral Student* position at the University of Sassari (UNISS), Sardinia, Italy. Research questions span a range of topics in theoretical/computational neuroscience, cognitive science, EEG recordings involving data analysis, brain function theory, neural network simulations using NEURON or NEST, and neural activity visualization using mixed reality approaches with the Microsoft Hololens 2 device and the Unity development platform. This research project is inscribed within the EBRAINS-ITALY PNRR funding scheme for Research Infrastructures.
Candidates with backgrounds in one or more of the following fields are welcome to apply: mathematics, statistics, artificial intelligence, physics, computer science, engineering, biology, and psychology. Experience with data analysis, proficiency with numerical methods, and familiarity with neuroscience topics and mathematical and statistical methods are desirable. Equally desirable are a curiosity about brain function, a spirit of intellectual adventure, a good social attitude, and a willingness to share and reuse knowledge. The application is open and the deadline is 17 February 2023. The application process requires you to submit several documents. *Application web page: https://www.uniss.it/bandi/bando-pnrr-xxxviii-ciclo * *English version in the lower part of the page.* *Please write to both enrico at uniss.it & smgsolinas at uniss.it for details and updates.* In all email correspondence, please mention "APPLICATION-PHD" in the subject header. The University of Sassari was founded in 1562 and is an active research institution. The city of Sassari is located in the north of Sardinia. The climate is among the best you can find in Europe. The cost of living is relatively low compared to the Italian mainland. The island of Sardinia has national and international flight connections from its three airports: Alghero, Olbia, and Cagliari. Best regards, Sergio MG Solinas Dip. di Scienze Biomediche Università di Sassari Viale San Pietro 23 07100 - Sassari The NEURON School -- *Donate your 5x1000* to the Università degli Studi di Sassari, fiscal code: 00196350904
From kai.sauerwald at fernuni-hagen.de Tue Jan 24 03:54:22 2023 From: kai.sauerwald at fernuni-hagen.de (Kai Sauerwald) Date: Tue, 24 Jan 2023 09:54:22 +0100 Subject: Connectionists: Uncertain Reasoning Special Track at The 36th International FLAIRS Conference (UR@FLAIRS-36) Message-ID: <13aeeafd-f626-49e3-b1d0-1f57cfdc8f1a@fernuni-hagen.de> *** Third Call for Papers *** ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Uncertain Reasoning (UR) Special Track at The 36th International FLAIRS Conference (FLAIRS-36) In cooperation with the American Association for Artificial Intelligence Clearwater Beach, Florida, USA May 14-17, 2023 Abstract submission deadline: February 6, 2023 Paper submission deadline: February 13, 2023 Notification: March 13, 2023 All accepted papers will be included in the FLAIRS proceedings published by Florida Online Journals Invited papers will be published in a special journal issue http://ur-flairs.github.io/2023/ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Call For Papers The Special Track on Uncertain Reasoning (UR) is the oldest track in FLAIRS conferences, running annually since 1996. The UR Special Track at the 36th International Florida Artificial Intelligence Research Society Conference (FLAIRS-36) is the 28th in the series. Like the past tracks, UR seeks to bring together researchers working on broad issues related to reasoning under uncertainty. Topics of Interest Papers on all aspects of uncertain reasoning are invited.
Papers of particular interest include, but are not limited to: *Uncertain reasoning formalisms, calculi and methodologies *Reasoning with probability, possibility, fuzzy logic, belief function, vagueness, granularity, rough sets, and probability logics *Modeling and reasoning using imprecise and indeterminate information, such as: Choquet capacities, comparative orderings, convex sets of measures, and interval-valued probabilities *Exact, approximate and qualitative uncertain reasoning *Probabilistic graphical models of uncertainty such as: Bayesian networks, Markov random field, probabilistic circuits *Multi-agent uncertain reasoning and decision-making *Decision-theoretic planning and Markov decision process *Temporal reasoning and uncertainty *Non-monotonic reasoning *Conditional Logics *Argumentation *Belief change and merging *Similarity-based reasoning *Ontologies and description logics *Construction of models from elicitation, data mining and knowledge discovery *Uncertain reasoning in information retrieval, filtering, fusion, diagnosis, prediction, situation assessment *Uncertain reasoning in data management *Practical applications of uncertain reasoning *Learning probabilistic models *Applications in computer vision and animation === Paper Submission and Publication === Submitted papers must be original, and not submitted concurrently to a journal or another conference while under review. Interested authors should format their papers according to FLAIRS-36 conference formatting guidelines. Papers should not exceed 6 pages (4 pages for a poster) and are due by February 13, 2023. The reviewing is a double blind process. Author names and affiliations must be omitted on submitted papers. Papers must be submitted as PDF through the EasyChair conference system, which can be accessed through the main conference web site (http://www.flairs-36.info/). Authors should indicate the Uncertain Reasoning special track for submissions. 
All accepted papers will be included in the proceedings of FLAIRS, which will be published by the Florida Online Journals. FLAIRS requires that there be at least one full author registration per paper. Instructions on the submission procedure are available at the UR website: http://ur-flairs.github.io/2023 We anticipate there will be a special issue devoted to extended versions of selected papers at the track. === Important Dates === Abstract submission due: Feb. 6, 2023 Paper submission due: Feb. 13, 2023 Author Notification: Mar. 13, 2023 === Program Committee === [ Track Chairs ] Kai Sauerwald University of Hagen, Germany Choh Man Teng Institute for Human and Machine Cognition, USA [ PC Members ] Mohand Said Allili (Université du Québec en Outaouais) Alessandro Antonucci (IDSIA, Switzerland) Ofer Arieli (The Academic College of Tel-Aviv, Israel) Christoph Beierle (FernUniversität in Hagen, Germany) Salem Benferhat (University of Artois, France) Stefano Bistarelli (University of Perugia, Italy) Nizar Bouguila (Concordia University, Canada) Martine Ceberio (University of Texas at El Paso, US) Lluís Godo (University of Barcelona, Spain) Christophe Gonzales (LIS, France) Gabriele Kern-Isberner (University of Technology Dortmund, Germany) Vladik Kreinovich (University of Texas at El Paso, US) Philippe Leray (University of Nantes, France) Nicholas Mattei (Tulane University, US) Ralf Möller (University of Lübeck, Germany) Arthur Paul Pedersen (The City College of New York, US) Rafael Peñaloza Nyssen (University of Milano-Bicocca, Italy) Eugene Santos (Dartmouth College, US) Dilip Sarkar (University of Miami, US) Kari Sentz (Los Alamos National Laboratory, US) Karima Sedki (University of Paris 13, France) Karim Tabia (University of Artois, France) Carlo Taticchi (University of Perugia, Italy) === Travel Information === Additional information on the conference locale and travel planning can be found at http://www.flairs-36.info.
From J.Bowers at bristol.ac.uk Tue Jan 24 07:49:07 2023 From: J.Bowers at bristol.ac.uk (Jeffrey Bowers) Date: Tue, 24 Jan 2023 12:49:07 +0000 Subject: Connectionists: Senior Research Associate (post-doc) working on spiking neural networks Message-ID: Deadline extended for a Senior Research Associate (post-doc) working on an EPSRC New Horizons project entitled "Exploring the multiple loci of learning and computation in simple artificial neural networks". The project assesses the adaptive value of learning outside of synapses in spiking networks. The position is based in Bristol, working with Jeffrey Bowers (https://jeffbowers.blogs.bristol.ac.uk/publications/psycho-neuro/) and Benjamin Evans (https://profiles.sussex.ac.uk/p555479-benjamin-evans). Daniel Goodman (https://www.imperial.ac.uk/people/d.goodman) will also be collaborating on the project. Based at the University of Bristol, UK. Deadline for applying: Feb 17th. For more details see: https://www.jobs.ac.uk/job/CWV563/senior-research-associate Jeffrey Bowers School of Experimental Psychology University of Bristol Personal website: https://jeffbowers.blogs.bristol.ac.uk/ ERC lab website: https://mindandmachine.blogs.bristol.ac.uk/ From Pavis at iit.it Tue Jan 24 12:04:19 2023 From: Pavis at iit.it (Pavis) Date: Tue, 24 Jan 2023 17:04:19 +0000 Subject: Connectionists: 2 Post Doc positions on Deep Learning for Computer Vision - IIT Genoa, Italy Message-ID: <11094a14414749b0bc77096973c0b60d@iit.it> Two Post Doc positions on Deep Learning for Computer Vision (2300000N) Commitment & contract: minimum 24 months - temporary contract Location: Genoa, Via Enrico Melen 83 ABOUT US At IIT we work enthusiastically to develop human-centered Science and Technology to tackle some of the most pressing societal challenges of our times and transfer these technologies to industry and society.
Our Genoa headquarters are closely interconnected with the other 11 centers around Italy and two outstations based in the US. We promote excellence in basic and applied research such as neuroscience and cognition, humanoid technologies and robotics, nanotechnology, and materials, for a truly multidisciplinary scientific experience. YOUR TEAM You will be working in a multicultural and multi-disciplinary group, where Computer Vision and Machine Learning experts, Engineers, and Physicists collaborate to carry out common research. The Pattern Analysis and Computer Vision (PAVIS) Research line is coordinated by Alessio Del Bue, leading a team of more than 25 people working on advancing the state of the art in Computer Vision and Machine Learning. The research focuses on novel computational models related to Deep Learning, Geometrical Computer Vision, Scene Understanding and Augmented Reality, with the aim of solving challenging problems of relevant practical application. The research will be carried out in the framework of two projects funded through the PNRR. Within the team, your main responsibilities will be: Investigate new Deep Learning methods based on visual and multimodal sensor data for long-term tracking, with a special focus on assisting people with physical or mental disabilities (assistive AI); Research deep learning based geometrical approaches for modelling dynamic 3D scenes from ego-centric vision sensors and other sensor modalities (e.g., wearable sensors, mobiles, robotic platforms, etc.); Supervise PhD students.
ESSENTIAL REQUIREMENTS A PhD in Computer Vision, Computer Science, Machine Learning, or similar disciplines Documented experience on Deep Learning Strong programming ability (Python preferred) Experience with Deep Learning tools such as Pytorch, Tensorflow, etc. A relevant publication record Ability to work as part of a group developing ideas and collaborating The ability to properly report, organize and publish research data Documented experience in coaching junior scientists Good command of spoken and written English ADDITIONAL SKILLS Experience on either: Scene Understanding from images and other sensor modalities, Image Tracking, 3D vision Good communication skills Strong problem-solving attitude High motivation to learn Spirit of innovation and creativity Good time and priority management Ability to work in a dynamic and international environment Ability to work both independently and collaboratively in a highly interdisciplinary environment COMPENSATION PACKAGE Competitive salary package for international standards Private health care coverage (depending on your role and contract) Wide range of staff discounts Candidates from abroad, or Italian citizens who permanently work abroad and meet specific requirements, may be entitled to a deduction from taxable income of up to 90% for 6 to 13 years WHAT'S IN IT FOR YOU? An equal, inclusive and multicultural environment ready to welcome you with open arms. Discrimination is a big NO for us! We like cross-pollination and encourage you to mingle and discover what other people are up to in our labs! If paperwork is not your cup of tea, we got you! There's a specialized team working to help you with that, especially during your relocation! If you are a startupper or a business-minded person, you will find some exceptionally gifted professionals ready to nurture and guide your attitude and aspirations.
If you want your work to have a real impact, at IIT you will find an innovative and stimulating culture that drives our mission to contribute to the improvement and well-being of society! We stick to our values! Integrity, courage, societal responsibility and inclusivity are the values we believe in! They define us and our actions in our everyday life. They guide us to accomplish IIT's mission! If this tickles your appetite for change, do not hesitate to apply! HOW TO APPLY Please submit your application using the online form https://iit.taleo.net/careersection/ex/jobdetail.ftl?lang=en&job=2300000N and include: a detailed CV, university transcripts, a cover letter (outlining motivation, experience and qualifications), and contact details of 2 references. Application deadline: 07.02.2023 The selection process will be managed in compliance with the provisions established in Article 7 of Funding Decree 1053 of 23.06.2022. We inform you that the information you provide will be used solely for the purposes of evaluating and selecting professional profiles in order to meet the requirements of Istituto Italiano di Tecnologia. Your data will be processed by Istituto Italiano di Tecnologia, based in Genoa, Via Morego 30, acting as Data Controller, in compliance with the rules on protection of personal data, including those related to data security. Please also note that, pursuant to articles 15 et seq. of European Regulation no. 679/2016 (General Data Protection Regulation), you may exercise your rights at any time by contacting the Data Protection Officer (phone: +39 010 28961 - email: dpo at iit.it - kindly note that this e-mail address is exclusively reserved for handling data protection issues. Please do not use this e-mail address to send any document and/or request for information about this opening).
From michael.furlong at uwaterloo.ca Tue Jan 24 13:38:57 2023 From: michael.furlong at uwaterloo.ca (Michael Furlong) Date: Tue, 24 Jan 2023 18:38:57 +0000 Subject: Connectionists: [meetings] 2023 Nengo Summer School Call For Applications Message-ID: [All details about this school can be found online at https://www.nengo.ai/summer-school] The Centre for Theoretical Neuroscience at the University of Waterloo is excited to announce our 8th annual Nengo summer school on large-scale brain modelling and neuromorphic computing. This two-week school will teach participants to use the Nengo simulation package to build state-of-the-art cognitive and neural models to run both in simulation and on neuromorphic hardware. Summer school participants will be given on-site access to neuromorphic hardware and will learn to run high-level applications using Nengo! More generally, Nengo provides users with a versatile and powerful environment for designing cognitive and neural systems and has been used to build what is currently the world's largest functional brain model, Spaun, which includes spiking deep learning, reinforcement learning, adaptive motor control, and cognitive control networks. For a look at the last in-person summer school, check out this short video: https://youtu.be/5w0BzvNOypc We welcome applications from all interested graduate students, postdocs, professors, and industry professionals with a relevant background. ***Application Deadline: March 15, 2023*** Format: A combination of tutorials and project-based work. Participants are encouraged to bring their own ideas for projects, which may focus on testing hypotheses, modeling neural or cognitive data, implementing specific behavioural functions with neurons, expanding past models, or providing a proof-of-concept of various neural mechanisms. Hands-on tutorials, work on individual or group projects, and talks from invited faculty members will make up the bulk of day-to-day activities. 
A project demonstration event will be held on the last day of the school, with prizes for strong projects! Participants will have the opportunity to learn how to:
* interface Nengo with various kinds of neuromorphic hardware (e.g. Loihi 1, SpiNNaker)
* build perceptual, motor, and sophisticated cognitive models using spiking neurons
* model anatomical, electrophysiological, cognitive, and behavioural data
* use a variety of single cell models within a large-scale model
* integrate machine learning methods into biologically oriented models
* interface Nengo with cameras and robotic systems
* implement modern nonlinear control methods in neural models
* and much more…
Date and Location: June 4th to June 16th, 2023 at the University of Waterloo, Ontario, Canada. Applications: Please visit http://www.nengo.ai/summer-school, where you can find more information regarding costs, travel, and lodging, along with an application form listing required materials. If you have any questions about the school or the application process, please contact Michael Furlong (michael.furlong at uwaterloo.ca). The school is also partly supported by ONR and ABR, Inc. We look forward to hearing from you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdm at zhaw.ch Wed Jan 25 06:25:48 2023 From: stdm at zhaw.ch (Stadelmann Thilo (stdm)) Date: Wed, 25 Jan 2023 11:25:48 +0000 Subject: Connectionists: [CFP] 10th IEEE Swiss Conference on Data Science Message-ID: Dear colleagues, you're cordially invited to submit your work to the 10th IEEE Swiss Conference on Data Science (IEEE SDS2023). It brings together researchers, developers and innovators. The conference will take place on June 22 - 23, 2023 in Zurich, Switzerland. We invite both business presentations (https://sds2023.ch/call-for-participation) and scientific papers (https://sds2023.ch/call-for-papers). The submission deadline is February 17, 2023.
We invite international submissions of original research papers related to all areas of Data Science, especially best-practice and use-case-oriented research work with scientific-technical depth and practical applicability. Topics include but are not limited to the following areas:
- Data Product Design, Data-Centric Business Models
- Data-driven Value Creation, Servitization, Ecosystems, Service Design and Operations
- Circular Servitization
- Business Intelligence
- Data Warehousing
- Decision Support Systems
- Data, Text, and Web Analytics
- Machine Translation & Multilinguality
- Question Answering and NLP Applications
- Information Integration
- Search & Information Retrieval
- Semi-Structured and Unstructured Data
- Pattern Recognition
- Recommendation Systems
- Personalization and Contextualization
- Data Governance
- Data Fusion
- Predictive Modelling
- Anomaly Detection, Adversarial Attacks & Robustness
- Support Vector Machines
- Data Mining
- Classification, Clustering, Object Detection and Semantic Segmentation
- Un-, Semi-, and Weakly Supervised Learning
- Deep Learning
- Machine Learning
- Evolutionary Computing and Optimization
- Feature Selection
- Fuzzy Computing
- Hybrid Methods
- Neural Networks and their Applications
- Data Management
- Modeling and Managing Large Data Systems
- Open Data
- Data and Information Quality
- Data Modeling and Visualization
- Data Structures and Data Management Algorithms
- Big Data Applications
- Edge Computing
- Query Processing and Optimization
- Data Privacy and Security
- Fairness, Accountability, Transparency, Ethics, and Explainability
- Cyber security in relation to data science
- Domain-Specific Applications (e.g. Health, Legal, Industrial, Environmental, etc.)
- Smart Services
Submission
Full Paper - present high-quality research contributions, including experiments, analysis, or system papers, as well as descriptions of application results. Each submission can have up to 8 pages incl. references.
Short Paper - present late-breaking results, work in progress, follow-up extensions, application case studies, or evaluations of existing methods. Each submission can have up to 4 pages incl. references. All submissions must be original works that have not been published previously in any conference proceedings, magazine, journal, or edited book. The review process will be double blind; we therefore ask authors to take care to anonymize their submissions reasonably. Concurrent submissions are strictly forbidden. Submission of papers implies the intention to register and present the related content at the conference. The international scientific Program Committee will carefully review the submissions. Excellent content and presentation, as demonstrated in the submission, are mandatory for acceptance (acceptance rate 2022: 29%). Accepted submissions will be presented in English (as a talk or a poster) by the authors at the conference. As last year, proceedings of all papers presented at the conference will be submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements. All submissions must be formatted following the IEEE 8.5" x 11" two-column format.
Important Dates
February 17, 2023: Submission deadline
April 05, 2023: Acceptance notification
May 09, 2023: Camera-ready papers deadline
June 22 - 23, 2023: Conference takes place
Best, Thilo Stadelmann for the SDS2023 scientific program committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at idsia.ch Wed Jan 25 08:44:34 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Wed, 25 Jan 2023 13:44:34 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> Message-ID: <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> Some are not aware of this historic tidbit in Sec.
4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. https://people.idsia.ch/~juergen/deep-learning-history.html#rnn Jürgen > On 13. Jan 2023, at 11:13, Schmidhuber Juergen wrote: > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > https://arxiv.org/abs/2212.11279 > > https://people.idsia.ch/~juergen/deep-learning-history.html > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > Happy New Year! > > Jürgen > From boris.gutkin at ens.fr Wed Jan 25 10:00:42 2023 From: boris.gutkin at ens.fr (bgutkin) Date: Wed, 25 Jan 2023 16:00:42 +0100 Subject: Connectionists: C-BRAINS International PhD Program in Paris, France Message-ID: <7FD39713-0B9E-4C37-8B07-B6A9C7A3ABD4@ens.fr> C-BRAINS International PhD Program in Paris, France Call for applications 2023 Deadline March 3, 2023 C-BRAINS (Cognition and Brain Bevolutions: Artificial Intelligence, Neurogenomics, Society) is a major innovation and research initiative supported by the Ile-de-France Regional Council. It brings together the best of academic research in neuroscience and cognitive science in the Paris region, with more than 200 research teams spread over 17 sites (6 universities and schools of higher education, as well as laboratories affiliated to Inserm, CNRS, CEA and the Pasteur Institute) and more than 40 industrial partners. An international doctoral program is set up by C-BRAINS in 2023, offering students a unique opportunity to develop their scientific projects within the neuroscience and cognitive science research teams of the greater Paris area.
The C-BRAINS international PhD program will offer 13 PhD contracts starting in October 2023 for three years, with financial packages of 5 k€ of scientific support and a 7 k€ installation bonus (12 k€ tax-free stipend). C-BRAINS is particularly eager to welcome students from multidisciplinary backgrounds applied to neuroscience and cognitive science, and willing to use their skills and experience in new and exciting projects proposed by our academic and associate partners. Students will conduct their research at the various partner institutions after choosing from a list of over 70 topics. They will have the opportunity to contact potential supervisors prior to the March 3, 2023 deadline. Research opportunities are available at: https://dim-cbrains.fr/fr/phd-program/dim-cbrains Applicants must have a highly qualified undergraduate degree (or equivalent) and a Master of Science or other Master's degree from a university outside of France. They must not have worked or resided in France for more than 12 months during the 3 years preceding their recruitment by C-BRAINS. For more details on how to apply and the link to our online application form, please visit: https://dim-cbrains.fr/en/platform/login The application will close on Friday, March 3, 2023 at midnight. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadine.spychala at gmail.com Wed Jan 25 10:40:21 2023 From: nadine.spychala at gmail.com (Nadine Spychala) Date: Wed, 25 Jan 2023 15:40:21 +0000 Subject: Connectionists: Reminder: ALIFE 2023 - Call for Workshops & Tutorials - Submission Deadline 31st Jan Message-ID: Dear all, *the* *International Conference on Artificial Life* - ALIFE 2023 - is looking for proposals for
- *papers*,
- *extended abstracts*,
- *workshops* and
- *tutorials*.
*Deadline for workshop & tutorial submissions is 31st January*.
ALIFE 2023 will take place in *Sapporo*, *Japan*, *July 24 - 28th, 2023*, and will be a *hybrid* conference: primarily focused on-site, at Hokkaido University, while also allowing for online participation. The *conference theme* is "*Ghosts in the Machine*". Traditionally, Artificial Life research has sought to uncover the mysteries of life and the mind, that is, to naturalise the "ghosts in the machine": What is life? What is the mind? At ALIFE 2023, we would like to bring back the focus on the integration of philosophy of mind with theoretical, computational and empirical works in science and engineering that deal with these ghosts. For more details please check the conference website and follow us on Twitter and Facebook. Hope to see you in Sapporo! On behalf of the organizing team, Nadine Spychala -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bu.edu Wed Jan 25 10:42:20 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Wed, 25 Jan 2023 15:42:20 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Early recurrent neural networks for serial verbal learning and associative pattern learning In-Reply-To: <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> Message-ID: Dear Juergen and Connectionists colleagues, In his attached email below, Juergen mentioned a 1972 article of my friend and colleague, Shun-Ichi Amari, about recurrent neural networks that learn. Here are a couple of my own early articles from 1969 and 1971 about such networks. I introduced them to explain paradoxical data about serial verbal learning, notably the bowed serial position effect: Grossberg, S. (1969). On the serial learning of lists. Mathematical Biosciences, 4, 201-253. https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf Grossberg, S. and Pepe, J. (1971).
Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 3, 95-125. https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf Juergen also mentioned that Shun-Ichi's work was a precursor of what some people call the Hopfield model, whose most cited articles were published in 1982 and 1984. I actually started publishing articles on this topic starting in the 1960s. Here are two of them: Grossberg, S. (1969). On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1, 319-350. https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831. https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf An early use of Lyapunov functions to prove global limit theorems in associative recurrent neural networks is found in the following 1980 PNAS article: Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77, 2338-2342. https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf Subsequent results culminated in my 1983 article with Michael Cohen, which was in press when the Hopfield (1982) article was published: Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826. https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf Our article introduced a general class of neural networks for associative spatial pattern learning, which included the Additive and Shunting neural networks that I had earlier introduced, as well as a Lyapunov function for all of them. This article proved global limit theorems about all these systems using that Lyapunov function. 
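[For readers who want the equations behind this claim, here is a sketch in our own notation (added for illustration; see the 1983 paper for the precise hypotheses). The Cohen–Grossberg system and its Lyapunov function are:

```latex
% Cohen-Grossberg (1983) competitive network (sketch; notation ours):
% states x_i, amplification a_i(x_i) >= 0, self-signal b_i, symmetric
% interaction coefficients c_{ij} = c_{ji}, monotone signals with d_j' >= 0.
\[
  \frac{dx_i}{dt} \;=\; a_i(x_i)\Big[\, b_i(x_i) \;-\; \sum_{j=1}^{n} c_{ij}\, d_j(x_j) \Big]
\]
% Lyapunov function used to prove the global limit theorems:
\[
  V(x) \;=\; -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi
  \;+\; \tfrac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k),
\]
% which is nonincreasing along trajectories:
\[
  \frac{dV}{dt} \;=\; -\sum_{i=1}^{n} a_i(x_i)\, d_i'(x_i)
  \Big[\, b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j) \Big]^{2} \;\le\; 0 .
\]
```

The Additive (Hopfield-type) model mentioned below is the special case in which each a_i is constant and each b_i is affine in x_i.]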
The Hopfield article describes the special case of the Additive model. His article proved no theorems. Best to all, Steve Stephen Grossberg http://en.wikipedia.org/wiki/Stephen_Grossberg http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en https://youtu.be/9n5AnvFur7I https://www.youtube.com/watch?v=_hBye6JQCh4 https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Wang Professor of Cognitive and Neural Systems Director, Center for Adaptive Systems Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University sites.bu.edu/steveg steve at bu.edu ________________________________ From: Connectionists on behalf of Schmidhuber Juergen Sent: Wednesday, January 25, 2023 8:44 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. https://people.idsia.ch/~juergen/deep-learning-history.html#rnn Jürgen > On 13. Jan 2023, at 11:13, Schmidhuber Juergen wrote: > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > https://arxiv.org/abs/2212.11279 > > https://people.idsia.ch/~juergen/deep-learning-history.html > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > Happy New Year! > > Jürgen > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rloosemore at susaro.com Wed Jan 25 11:51:27 2023 From: rloosemore at susaro.com (Richard Loosemore) Date: Wed, 25 Jan 2023 11:51:27 -0500 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> Message-ID: <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> Please, somebody reassure me that this isn't just another attempt to rewrite history so that Schmidhuber's lab invented almost everything. Because at first glance, that's what it looks like. Richard From juergen at idsia.ch Wed Jan 25 11:40:16 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Wed, 25 Jan 2023 16:40:16 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Early recurrent neural networks for serial verbal learning and associative pattern learning In-Reply-To: References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> Message-ID: <599E58E5-BB09-4374-9610-69EB43BCD2E5@supsi.ch> Dear Steve, thanks - I hope you noticed that the survey mentions your 1969 work! And of course it also mentions the origin of this whole recurrent network business: the Ising model or Lenz-Ising model introduced a century ago. See Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture https://people.idsia.ch/~juergen/deep-learning-history.html#rnn "The first non-learning RNN architecture (the Ising model or Lenz-Ising model) was introduced and analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s [L20][I24,I25][K41][W45][T22]. It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs ..." Jürgen > On 25.
Jan 2023, at 18:42, Grossberg, Stephen wrote: > > Dear Juergen and Connectionists colleagues, > > In his attached email below, Juergen mentioned a 1972 article of my friend and colleague, Shun-Ichi Amari, about recurrent neural networks that learn. > > Here are a couple of my own early articles from 1969 and 1971 about such networks. I introduced them to explain paradoxical data about serial verbal learning, notably the bowed serial position effect: > > Grossberg, S. (1969). On the serial learning of lists. Mathematical Biosciences, 4, 201-253. > https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf > > Grossberg, S. and Pepe, J. (1971). Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 3, 95-125. > https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf > > Juergen also mentioned that Shun-Ichi's work was a precursor of what some people call the Hopfield model, whose most cited articles were published in 1982 and 1984. > > I actually started publishing articles on this topic starting in the 1960s. Here are two of them: > > Grossberg, S. (1969). On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1, 319-350. > https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf > > Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831. > https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf > > An early use of Lyapunov functions to prove global limit theorems in associative recurrent neural networks is found in the following 1980 PNAS article: > > Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77, 2338-2342. 
> https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf > > Subsequent results culminated in my 1983 article with Michael Cohen, which was in press when the Hopfield (1982) article was published: > > Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826. > https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf > > Our article introduced a general class of neural networks for associative spatial pattern learning, which included the Additive and Shunting neural networks that I had earlier introduced, as well as a Lyapunov function for all of them. > > This article proved global limit theorems about all these systems using that Lyapunov function. > > The Hopfield article describes the special case of the Additive model. > > His article proved no theorems. > > Best to all, > > Steve > > Stephen Grossberg > http://en.wikipedia.org/wiki/Stephen_Grossberg > http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en > https://youtu.be/9n5AnvFur7I > https://www.youtube.com/watch?v=_hBye6JQCh4 > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > Wang Professor of Cognitive and Neural Systems > Director, Center for Adaptive Systems > Professor Emeritus of Mathematics & Statistics, > Psychological & Brain Sciences, and Biomedical Engineering > Boston University > sites.bu.edu/steveg > steve at bu.edu > > From: Connectionists on behalf of Schmidhuber Juergen > Sent: Wednesday, January 25, 2023 8:44 AM > To: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning > > Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. 
> > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > Jürgen > > > > > > On 13. Jan 2023, at 11:13, Schmidhuber Juergen wrote: > > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > > > https://arxiv.org/abs/2212.11279 > > > > https://people.idsia.ch/~juergen/deep-learning-history.html > > > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > > > Happy New Year! > > > > Jürgen > > From li.zhaoping at tuebingen.mpg.de Wed Jan 25 12:00:06 2023 From: li.zhaoping at tuebingen.mpg.de (Li Zhaoping) Date: Wed, 25 Jan 2023 18:00:06 +0100 Subject: Connectionists: Postdoctoral position in Human Visual Psychophysics with fMRI/MRI in Tuebingen, Germany Message-ID: <9a630239-c209-b656-2d23-5dbe273fef4f@tuebingen.mpg.de> *Postdoctoral position in Human Visual Psychophysics with fMRI/MRI (m/f/d)* *(TVöD-Bund E13, 100%)* The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed towards understanding visual attentional and perceptual processes using fMRI/MRI. The framework and motivation of the projects can be found at: https://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html . The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. fMRI/MRI technology can be used in combination with other methods such as eye tracking, TMS and/or EEG methodologies, and other related methods as necessary.
The postdoc will work closely with the principal investigator and other members of Zhaoping's team as needed.
*Responsibilities:*
- Conduct and participate in research projects, including lab and equipment setup, data collection, data analysis, writing reports and papers, and presenting at scientific conferences
- Participate in routine laboratory operations, such as planning and preparation of experiments, lab maintenance and lab procedures
- Coordinate with the PI and other team members on strategy and project planning, and in the supervision of student projects or teaching assistance for university courses in our field
*Requirements:*
- Ph.D. in neuroscience, psychology, computer science, physics or a related natural science or engineering field
- Publications in peer-reviewed journals
- Highly skilled in techniques of human visual psychophysics such as eye tracking, MATLAB programming for experiments, experimental data taking, data analysis, and paper writing
- Highly skilled in fMRI experiments and data analysis
- Experience in project management is highly desired
- Strong command of English; knowledge of German is a plus
*Who we are:* We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, EEG/ERP, and computational modelling. One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute (MPI) for Biological Cybernetics, as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn other skills in our multidisciplinary group and benefit from interactions with our colleagues in the university, at the MPI, as well as internationally. This job opening is for the CIN or the MPI working group.
The position (salary level TVöD-Bund E13, 100%) is for a duration of two years. Extension or a permanent contract after two years is possible, depending on the situation. We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.
*Your application:* The position is available immediately and will remain open until filled. Preference will be given to applications received by *March 19th, 2023*. We look forward to receiving your application, which should include:
(1) a cover letter, including a statement on roughly when you would like to start this position,
(2) a motivation statement,
(3) a CV,
(4) names and contact details of three people for references,
(5) if you have them, transcripts from your past and current education listing the courses taken and their grades,
(6) if you have them, copies of your degree certificates,
(7) optionally, a pdf file of your best publication(s), or other documents and information that you think could strengthen your application.
Please use pdf files for these documents (you may combine them into a single pdf file) and send them to *jobs.li at tuebingen.mpg.de*, where informal inquiries can also be addressed. Please note that applications without complete information in (1)-(4) will not be considered, unless the cover letter includes an explanation and/or information about when the needed materials will be supplied. For further opportunities in our group, please visit https://www.lizhaoping.org/jobs.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jose at rubic.rutgers.edu Wed Jan 25 13:09:35 2023 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Wed, 25 Jan 2023 18:09:35 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> Message-ID: <5f1e30ef-d1f6-5efb-aab1-d19e0c5c8679@rubic.rutgers.edu> Well he mentions Legendre, Gauss and the Big Bang.. so no research claims in those areas.. Steve On 1/25/23 11:51, Richard Loosemore wrote: Please, somebody reassure me that this isn't just another attempt to rewrite history so that Schmidhuber's lab invented almost everything. Because at first glance, that's what it looks like. Richard -- Stephen José Hanson Professor, Psychology Department Director, RUBIC (Rutgers University Brain Imaging Center) Member, Executive Committee, RUCCS -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bu.edu Wed Jan 25 13:13:38 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Wed, 25 Jan 2023 18:13:38 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Early binary, linear, and continuous-nonlinear neural networks, some which included learning In-Reply-To: <599E58E5-BB09-4374-9610-69EB43BCD2E5@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <599E58E5-BB09-4374-9610-69EB43BCD2E5@supsi.ch> Message-ID: Dear Juergen, Thanks for mentioning the Ising model! As you know, it is a binary model, with just two states, and it does not learn. My Magnum Opus https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 reviews some of the early binary neural network models, such as the McCulloch-Pitts, Caianiello, and Rosenblatt models, starting on p.
64, before going on to review early linear models that included learning, like the Adaline and Madaline models of Bernie Widrow and the Brain-State-in-a-Box model of Jim Anderson, then continuous and nonlinear models of various kinds, including models that are still used today. Best, Steve ________________________________ From: Connectionists on behalf of Schmidhuber Juergen Sent: Wednesday, January 25, 2023 11:40 AM To: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning: Early recurrent neural networks for serial verbal learning and associative pattern learning Dear Steve, thanks - I hope you noticed that the survey mentions your 1969 work! And of course it also mentions the origin of this whole recurrent network business: the Ising model or Lenz-Ising model introduced a century ago. See Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture https://people.idsia.ch/~juergen/deep-learning-history.html#rnn "The first non-learning RNN architecture (the Ising model or Lenz-Ising model) was introduced and analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s [L20][I24,I25][K41][W45][T22]. It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs ..." Jürgen > On 25. Jan 2023, at 18:42, Grossberg, Stephen wrote: > > Dear Juergen and Connectionists colleagues, > > In his attached email below, Juergen mentioned a 1972 article of my friend and colleague, Shun-Ichi Amari, about recurrent neural networks that learn. > > Here are a couple of my own early articles from 1969 and 1971 about such networks. I introduced them to explain paradoxical data about serial verbal learning, notably the bowed serial position effect: > > Grossberg, S. (1969). On the serial learning of lists. Mathematical Biosciences, 4, 201-253. > https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf > > Grossberg, S. and Pepe, J. (1971). 
Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 3, 95-125. > https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf > > Juergen also mentioned that Shun-Ichi's work was a precursor of what some people call the Hopfield model, whose most cited articles were published in 1982 and 1984. > > I actually started publishing articles on this topic starting in the 1960s. Here are two of them: > > Grossberg, S. (1969). On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1, 319-350. > https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf > > Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831. > https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf > > An early use of Lyapunov functions to prove global limit theorems in associative recurrent neural networks is found in the following 1980 PNAS article: > > Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77, 2338-2342. > https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf > > Subsequent results culminated in my 1983 article with Michael Cohen, which was in press when the Hopfield (1982) article was published: > > Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826. > https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf > > Our article introduced a general class of neural networks for associative spatial pattern learning, which included the Additive and Shunting neural networks that I had earlier introduced, as well as a Lyapunov function for all of them. 
> > This article proved global limit theorems about all these systems using that Lyapunov function. > > The Hopfield article describes the special case of the Additive model. > > His article proved no theorems. > > Best to all, > > Steve > > Stephen Grossberg > http://en.wikipedia.org/wiki/Stephen_Grossberg > http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en > https://youtu.be/9n5AnvFur7I > https://www.youtube.com/watch?v=_hBye6JQCh4 > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > Wang Professor of Cognitive and Neural Systems > Director, Center for Adaptive Systems > Professor Emeritus of Mathematics & Statistics, > Psychological & Brain Sciences, and Biomedical Engineering > Boston University > sites.bu.edu/steveg > steve at bu.edu > > From: Connectionists on behalf of Schmidhuber Juergen > Sent: Wednesday, January 25, 2023 8:44 AM > To: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning > > Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. > > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > Jürgen > > > > > > On 13. Jan 2023, at 11:13, Schmidhuber Juergen wrote: > > > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > > > https://arxiv.org/abs/2212.11279 > > > > https://people.idsia.ch/~juergen/deep-learning-history.html > > > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > > > Happy New Year! > > > > Jürgen > > 
URL: From steve at bu.edu Wed Jan 25 15:38:51 2023 From: steve at bu.edu (Grossberg, Stephen) Date: Wed, 25 Jan 2023 20:38:51 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Boltzmann Machine and Adaptive Resonance Theory In-Reply-To: References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <599E58E5-BB09-4374-9610-69EB43BCD2E5@supsi.ch> Message-ID: Dear Geoff, It's good to hear from you! I of course know about the Boltzmann Machine learning algorithm that you published with David Ackley and Terry Sejnowski: https://onlinelibrary.wiley.com/doi/pdfdirect/10.1207/s15516709cog0901_7 Because your article was published in 1985, I did not include it in a list of early algorithms. I do discuss it, however, in my Magnum Opus on p. 156: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes-ebook/dp/B094W6BBKN/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr= As you know better than I do, there is more to the Boltzmann Machine than an Ising model, as your use of the name Boltzmann, one of the greatest founders of statistical mechanics, suggests. In particular, your model requires that an external parameter, such as a formal temperature variable, be slowly adjusted to control the approach to equilibrium. The Boltzmann Machine is thus neither an autonomous, nor a non-parametric, algorithm. I had also published quite a few models by 1985, notably foundational models on Competitive Learning and Adaptive Resonance Theory, or ART, between 1976 and 1980. ART can autonomously learn to attend, classify, recognize, and predict objects and events in a changing world that is filled with unexpected events. Unsupervised ART models, such as those that I published between 1976 and 1980, do not require any external supervision; e.g., Grossberg, S. (1976). Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics, 23, 187-202. 
https://sites.bu.edu/steveg/files/2016/06/Gro1976BiolCyb_II.pdf Grossberg, S. (1980). How does a brain build a cognitive code? Psychological Review, 87, 1-51. https://sites.bu.edu/steveg/files/2016/06/Gro1980PsychRev.pdf See the Appendices, starting on p. 45, for some theorems. Starting in 1987, Gail Carpenter and I began to publish ART learning algorithms with a full suite of mathematical theorems and parametric computer simulations, including a proof that these ART models do not experience catastrophic forgetting: Carpenter, G.A., and Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, 37, 54-115. https://sites.bu.edu/steveg/files/2016/06/CarGro1987CVGIP.pdf Supervised ARTMAP models, which I began to publish with Gail and some of our students starting in 1991, were simulated using challenging benchmark databases and compared with other algorithms. They are "supervised" by environmental feedback, which may or may not include a human teacher. See: Carpenter, G.A., Grossberg, S., and Reynolds, J.H. (1991). ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network. Neural Networks, 4, 565-588. https://sites.bu.edu/steveg/files/2016/06/CarGroRey1991NN.pdf as well as many other increasingly powerful ART algorithms that can be downloaded from sites.bu.edu/steveg and http://techlab.bu.edu/members/gail/publications.html Best, Steve ________________________________ From: Geoffrey Hinton Sent: Wednesday, January 25, 2023 2:02 PM To: Grossberg, Stephen Cc: connectionists at cs.cmu.edu Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning: Early binary, linear, and continuous-nonlinear neural networks, some of which included learning Dear Stephen, Thanks for letting us know about your Magnum Opus. 
There is actually a learning algorithm for the Ising model and it works even when you can only observe the states of a subset of the units. It's called the Boltzmann Machine learning algorithm. Geoff On Wed, Jan 25, 2023 at 1:25 PM Grossberg, Stephen > wrote: Dear Juergen, Thanks for mentioning the Ising model! As you know, it is a binary model, with just two states, and it does not learn. My Magnum Opus https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 reviews some of the early binary neural network models, such as the McCulloch-Pitts, Caianiello, and Rosenblatt models, starting on p. 64, before going on to review early linear models that included learning, like the Adaline and Madaline models of Bernie Widrow and the Brain-State-in-a-Box model of Jim Anderson, then continuous and nonlinear models of various kinds, including models that are still used today. Best, Steve ________________________________ From: Connectionists > on behalf of Schmidhuber Juergen > Sent: Wednesday, January 25, 2023 11:40 AM To: connectionists at cs.cmu.edu > Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning: Early recurrent neural networks for serial verbal learning and associative pattern learning Dear Steve, thanks - I hope you noticed that the survey mentions your 1969 work! And of course it also mentions the origin of this whole recurrent network business: the Ising model or Lenz-Ising model introduced a century ago. See Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture https://people.idsia.ch/~juergen/deep-learning-history.html#rnn "The first non-learning RNN architecture (the Ising model or Lenz-Ising model) was introduced and analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s [L20][I24,I25][K41][W45][T22]. It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs ..." Jürgen > On 25. 
Jan 2023, at 18:42, Grossberg, Stephen > wrote: > > Dear Juergen and Connectionists colleagues, > > In his attached email below, Juergen mentioned a 1972 article of my friend and colleague, Shun-Ichi Amari, about recurrent neural networks that learn. > > Here are a couple of my own early articles from 1969 and 1971 about such networks. I introduced them to explain paradoxical data about serial verbal learning, notably the bowed serial position effect: > > Grossberg, S. (1969). On the serial learning of lists. Mathematical Biosciences, 4, 201-253. > https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf > > Grossberg, S. and Pepe, J. (1971). Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 3, 95-125. > https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf > > Juergen also mentioned that Shun-Ichi's work was a precursor of what some people call the Hopfield model, whose most cited articles were published in 1982 and 1984. > > I actually started publishing articles on this topic starting in the 1960s. Here are two of them: > > Grossberg, S. (1969). On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1, 319-350. > https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf > > Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831. > https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf > > An early use of Lyapunov functions to prove global limit theorems in associative recurrent neural networks is found in the following 1980 PNAS article: > > Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77, 2338-2342. 
> https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf > > Subsequent results culminated in my 1983 article with Michael Cohen, which was in press when the Hopfield (1982) article was published: > > Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826. > https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf > > Our article introduced a general class of neural networks for associative spatial pattern learning, which included the Additive and Shunting neural networks that I had earlier introduced, as well as a Lyapunov function for all of them. > > This article proved global limit theorems about all these systems using that Lyapunov function. > > The Hopfield article describes the special case of the Additive model. > > His article proved no theorems. > > Best to all, > > Steve > > Stephen Grossberg > http://en.wikipedia.org/wiki/Stephen_Grossberg > http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en > https://youtu.be/9n5AnvFur7I > https://www.youtube.com/watch?v=_hBye6JQCh4 > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > Wang Professor of Cognitive and Neural Systems > Director, Center for Adaptive Systems > Professor Emeritus of Mathematics & Statistics, > Psychological & Brain Sciences, and Biomedical Engineering > Boston University > sites.bu.edu/steveg > steve at bu.edu > > From: Connectionists > on behalf of Schmidhuber Juergen > > Sent: Wednesday, January 25, 2023 8:44 AM > To: connectionists at cs.cmu.edu > > Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning > > Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. 
> > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > Jürgen > > > > > > On 13. Jan 2023, at 11:13, Schmidhuber Juergen > wrote: > > > > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): > > > > https://arxiv.org/abs/2212.11279 > > > > https://people.idsia.ch/~juergen/deep-learning-history.html > > > > This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. > > > > Happy New Year! > > > > Jürgen > > From geoffrey.hinton at gmail.com Wed Jan 25 14:02:11 2023 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Wed, 25 Jan 2023 14:02:11 -0500 Subject: Connectionists: Annotated History of Modern AI and Deep Learning: Early binary, linear, and continuous-nonlinear neural networks, some of which included learning In-Reply-To: References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <599E58E5-BB09-4374-9610-69EB43BCD2E5@supsi.ch> Message-ID: Dear Stephen, Thanks for letting us know about your Magnum Opus. There is actually a learning algorithm for the Ising model and it works even when you can only observe the states of a subset of the units. It's called the Boltzmann Machine learning algorithm. Geoff On Wed, Jan 25, 2023 at 1:25 PM Grossberg, Stephen wrote: > Dear Juergen, > > Thanks for mentioning the Ising model! > > As you know, it is a *binary model*, with just two states, and it does > not learn. > > My Magnum Opus > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > reviews some of the early binary neural network models, such as the > *McCulloch-Pitts*, *Caianiello*, and *Rosenblatt* models, starting on > p. 
64, before going on to review early *linear models* that included > learning, like the *Adaline and Madaline* models of Bernie *Widrow* and > the *Brain-State-in-a-Box* model of Jim *Anderson*, then *continuous and > nonlinear models* of various kinds, including models that are still used > today. > > Best, > > Steve > > ------------------------------ > *From:* Connectionists on > behalf of Schmidhuber Juergen > *Sent:* Wednesday, January 25, 2023 11:40 AM > *To:* connectionists at cs.cmu.edu > *Subject:* Re: Connectionists: Annotated History of Modern AI and Deep > Learning: Early recurrent neural networks for serial verbal learning and > associative pattern learning > > Dear Steve, > > thanks - I hope you noticed that the survey mentions your 1969 work! > > And of course it also mentions the origin of this whole recurrent network > business: the Ising model or Lenz-Ising model introduced a century ago. See > Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture > > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > "The first non-learning RNN architecture (the Ising model or Lenz-Ising > model) was introduced and analyzed by physicists Ernst Ising and Wilhelm > Lenz in the 1920s [L20][I24,I25][K41][W45][T22]. It settles into an > equilibrium state in response to input conditions, and is the foundation of > the first learning RNNs ..." > > Jürgen > > > > On 25. Jan 2023, at 18:42, Grossberg, Stephen wrote: > > > > Dear Juergen and Connectionists colleagues, > > > > In his attached email below, Juergen mentioned a 1972 article of my > friend and colleague, Shun-Ichi Amari, about recurrent neural networks that > learn. > > > > Here are a couple of my own early articles from 1969 and 1971 about such > networks. I introduced them to explain paradoxical data about serial verbal > learning, notably the bowed serial position effect: > > > > Grossberg, S. (1969). On the serial learning of lists. Mathematical > Biosciences, 4, 201-253. 
> > https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf > > > > Grossberg, S. and Pepe, J. (1971). Spiking threshold and overarousal > effects in serial learning. Journal of Statistical Physics, 3, 95-125. > > https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf > > > > Juergen also mentioned that Shun-Ichi's work was a precursor of what > some people call the Hopfield model, whose most cited articles were > published in 1982 and 1984. > > > > I actually started publishing articles on this topic starting in the > 1960s. Here are two of them: > > > > Grossberg, S. (1969). On learning and energy-entropy dependence in > recurrent and nonrecurrent signed networks. Journal of Statistical Physics, > 1, 319-350. > > https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf > > > > Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural > networks. Proceedings of the National Academy of Sciences, 68, 828-831. > > https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf > > > > An early use of Lyapunov functions to prove global limit theorems in > associative recurrent neural networks is found in the following 1980 PNAS > article: > > > > Grossberg, S. (1980). Biological competition: Decision rules, pattern > formation, and oscillations. Proceedings of the National Academy of > Sciences, 77, 2338-2342. > > https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf > > > > Subsequent results culminated in my 1983 article with Michael Cohen, > which was in press when the Hopfield (1982) article was published: > > > > Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global > pattern formation and parallel memory storage by competitive neural > networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, > 815-826. 
> > https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf > > > > Our article introduced a general class of neural networks for > associative spatial pattern learning, which included the Additive and > Shunting neural networks that I had earlier introduced, as well as a > Lyapunov function for all of them. > > > > This article proved global limit theorems about all these systems using > that Lyapunov function. > > > > The Hopfield article describes the special case of the Additive model. > > > > His article proved no theorems. > > > > Best to all, > > > > Steve > > > > Stephen Grossberg > > http://en.wikipedia.org/wiki/Stephen_Grossberg > > http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en > > https://youtu.be/9n5AnvFur7I > > https://www.youtube.com/watch?v=_hBye6JQCh4 > > https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 > > > > Wang Professor of Cognitive and Neural Systems > > Director, Center for Adaptive Systems > > Professor Emeritus of Mathematics & Statistics, > > Psychological & Brain Sciences, and Biomedical Engineering > > Boston University > > sites.bu.edu/steveg > > steve at bu.edu > > > > From: Connectionists on > behalf of Schmidhuber Juergen > > Sent: Wednesday, January 25, 2023 8:44 AM > > To: connectionists at cs.cmu.edu > > Subject: Re: Connectionists: Annotated History of Modern AI and Deep > Learning > > > > Some are not aware of this historic tidbit in Sec. 4 of the survey: half > a century ago, Shun-Ichi Amari published a learning recurrent neural > network (1972) which was later called the Hopfield network. > > > > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > > > J?rgen > > > > > > > > > > > On 13. Jan 2023, at 11:13, Schmidhuber Juergen > wrote: > > > > > > Machine learning is the science of credit assignment. 
My new survey > credits the pioneers of deep learning and modern AI (supplementing my > award-winning 2015 survey): > > > > > > https://arxiv.org/abs/2212.11279 > > > > > > https://people.idsia.ch/~juergen/deep-learning-history.html > > > > > > This was already reviewed by several deep learning pioneers and other > experts. Nevertheless, let me know under juergen at idsia.ch if you can spot > any remaining error or have suggestions for improvements. > > > > > > Happy New Year! > > > > > > Jürgen > > > > > > From gros at itp.uni-frankfurt.de Wed Jan 25 15:06:11 2023 From: gros at itp.uni-frankfurt.de (Claudius Gros) Date: Wed, 25 Jan 2023 21:06:11 +0100 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <5f1e30ef-d1f6-5efb-aab1-d19e0c5c8679@rubic.rutgers.edu> Message-ID: <38a-63d18b80-19b-5e29ab00@154884272> It is actually interesting. In other fields, like physics, there is a division of labor: - scientists, doing the heavy lifting, and - historians, trying to figure out the history of the field. It is quite amusing that this seems to be different in machine learning. Some scientists want to be both! Claudius On Wednesday, January 25, 2023 19:09 CET, Stephen José Hanson wrote: > Well, he mentions Legendre, Gauss and the Big Bang.. so no research claims in those areas.. > > Steve > > > On 1/25/23 11:51, Richard Loosemore wrote: > > Please, somebody reassure me that this isn't just another attempt to rewrite history so that Schmidhuber's lab invented almost everything. > > Because at first glance, that's what it looks like. > > Richard > > > -- > Stephen José Hanson > Professor, Psychology Department > Director, RUBIC (Rutgers University Brain Imaging Center) > Member, Executive Committee, RUCCS -- ### ### Prof. Dr. 
Claudius Gros ### http://itp.uni-frankfurt.de/~gros ### ### Complex and Adaptive Dynamical Systems, A Primer ### A graduate-level textbook, Springer (2008/10/13/15) ### ### Life for barren exoplanets: The Genesis project ### https://link.springer.com/article/10.1007/s10509-016-2911-0 ### From franrruiz87 at gmail.com Thu Jan 26 04:14:43 2023 From: franrruiz87 at gmail.com (Francisco J. Rodríguez Ruiz) Date: Thu, 26 Jan 2023 09:14:43 +0000 Subject: Connectionists: ICBINB Monthly Seminar Series Talk: Stephan Mandt Message-ID: Dear all, After the break, we are pleased to announce that the next speaker of the *"I Can't Believe It's Not Better!" (ICBINB)* virtual seminar series will be *Stephan Mandt* (*University of California, Irvine*). More details about this series and the talk are below. The *"I Can't Believe It's Not Better!" (ICBINB) monthly online seminar series* seeks to shine a light on the "stuck" phase of research. Speakers will tell us about their most beautiful ideas that didn't "work", about when theory didn't match practice, or perhaps just when the going got tough. These talks will let us peek inside the file drawer of unexpected results and peer behind the curtain to see the real story of *how real researchers did real research*. *When:* February 2nd, 2023 @ 6pm CET / 12pm EST / 9am PST *Where:* RSVP for the Zoom link here: https://us02web.zoom.us/meeting/register/tZIpfu2hqTsvHNV89dmCB0RGvVUt6k_heQwx *Title:* *Why Neural Compression Has Not Taken Off (Yet)* *Abstract:* *Despite recent advancements in neural data compression, classical codecs such as JPEG and BPG have remained industry standards to date. The talk will provide an introduction to the promising field of neural compression, focusing on why these new compression technologies have not seen the 10X performance boosts that deep learning has already achieved in other fields, such as NLP or vision. 
The talk will also present new avenues for neural compression research that provide novel directions for probabilistic modeling and show promise to make neural compression more practical and widely applicable across industries.* *Bio:* *Stephan Mandt is an Associate Professor of Computer Science and Statistics at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research in Pittsburgh and Los Angeles. He held previous postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne, where he received the German National Merit Scholarship. He is furthermore a recipient of the NSF CAREER Award, the UCI ICS Mid-Career Excellence in Research Award, the German Research Foundation's Mercator Fellowship, a Kavli Fellow of the U.S. National Academy of Sciences, a member of the ELLIS Society, and a former visiting researcher at Google Brain. Stephan is an Action Editor of the Journal of Machine Learning Research and Transactions on Machine Learning Research and regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR. His research is currently supported by NSF, DARPA, IARPA, DOE, Disney, Intel, and Qualcomm.* For more information and for ways to get involved, please visit us at http://icbinb.cc/, Tweet to us @ICBINBWorkhop , or email us at cant.believe.it.is.not.better at gmail.com. -- Best wishes, The ICBINB Organizers From li.zhaoping at tuebingen.mpg.de Thu Jan 26 05:02:45 2023 From: li.zhaoping at tuebingen.mpg.de (Zhaoping Li) Date: Thu, 26 Jan 2023 11:02:45 +0100 Subject: Connectionists: 
Annotated History of Modern AI and Deep Learning In-Reply-To: <38a-63d18b80-19b-5e29ab00@154884272> References: <38a-63d18b80-19b-5e29ab00@154884272> Message-ID: <691e21c7-e146-4a26-dfb2-8f4a17e6088d@tuebingen.mpg.de> I believe that good scientists are also good historians of science, or at least want to know what happened in the past in their field. Being a good scientist helps one to be a better historian (for example, Abraham Pais), so the best historians of science must be good scientists. I am interested in, and am very grateful to colleagues who help us to learn more about, the history of the field. Zhaoping -- Li Zhaoping Ph.D. Prof. of Cognitive Science, University of Tuebingen Head of Dept of Sensory and Sensorimotor Systems, Max Planck Institute of Biological Cybernetics Author of "Understanding vision: theory, models, and data", Oxford University Press, 2014 www.lizhaoping.org On 1/25/23 21:06, Claudius Gros wrote: > It is actually interesting. In other fields, like physics, > there is a division of labor: > > - scientists, doing the heavy lifting, and > - historians, trying to figure out the history of the field. > > It is quite amusing that this seems to be different in > machine learning. Some scientists want to be both! > > Claudius > > > On Wednesday, January 25, 2023 19:09 CET, Stephen José Hanson wrote: > >> Well, he mentions Legendre, Gauss and the Big Bang.. so no research claims in those areas.. >> >> Steve >> >> >> On 1/25/23 11:51, Richard Loosemore wrote: >> >> Please, somebody reassure me that this isn't just another attempt to rewrite history so that Schmidhuber's lab invented almost everything. >> >> Because at first glance, that's what it looks like. >> >> Richard >> >> >> -- >> Stephen José 
Hanson >> Professor, Psychology Department >> Director, RUBIC (Rutgers University Brain Imaging Center) >> Member, Executive Committee, RUCCS > > From juergen at idsia.ch Thu Jan 26 08:29:29 2023 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Thu, 26 Jan 2023 13:29:29 +0000 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <45179978-F1C5-460D-8F80-6642EFF982CD@supsi.ch> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <45179978-F1C5-460D-8F80-6642EFF982CD@supsi.ch> Message-ID: <375D7E20-A161-4FD0-BB0B-31AFEEAF6968@supsi.ch> And in 1967-68, the same Shun-Ichi Amari trained multilayer perceptrons (MLPs) with many layers by stochastic gradient descent (SGD) in end-to-end fashion. See Sec. 7 of the survey: https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl Amari's implementation [GD2,GD2a] (with his student Saito) learned internal representations in a five-layer MLP with two modifiable layers, which was trained to classify non-linearly separable pattern classes. Back then compute was billions of times more expensive than today. To my knowledge, this was the first implementation of learning internal representations through SGD-based deep learning. If anyone knows of an earlier one then please let me know :) Jürgen > On 25. Jan 2023, at 16:44, Schmidhuber Juergen wrote: > > Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network. > > https://people.idsia.ch/~juergen/deep-learning-history.html#rnn > > Jürgen > > > > >> On 13. Jan 2023, at 11:13, Schmidhuber Juergen wrote: >> >> Machine learning is the science of credit assignment. 
My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): >> >> https://arxiv.org/abs/2212.11279 >> >> https://people.idsia.ch/~juergen/deep-learning-history.html >> >> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements. >> >> Happy New Year! >> >> Jürgen >> > From viktor.jirsa at univ-amu.fr Thu Jan 26 09:53:22 2023 From: viktor.jirsa at univ-amu.fr (Viktor Jirsa) Date: Thu, 26 Jan 2023 15:53:22 +0100 Subject: Connectionists: Call for contributions to the Human Brain Project Summit 2023 in Marseille, France References: <73101874.4119466.1673516396730.JavaMail.cloud@p1-mta-1101.eu.messagegears.net> Message-ID: Dear colleagues, I wish to draw your attention to the call for contributions to the Human Brain Project (HBP) Summit here below. The HBP Summit is the last major event of the Human Brain Project, which we have the pleasure to host here in Marseille, France. The conference will be a forum for stimulating and high-quality scientific exchange in different fields including neurosciences, health and applications in clinical medicine, as well as brain-derived technologies. The Summit is open to all our colleagues, and registration and poster submission are open now. Hope to see you in Marseille! Viktor --- Viktor Jirsa Directeur, Institut de Neurosciences des Systèmes Chief Science Officer, EBRAINS AISBL UMR INSERM 1106 Aix-Marseille Université Faculté de Médecine, 27, Boulevard Jean Moulin 13005 Marseille, France http://ins.univ-amu.fr http://ebrains.eu > > > Dear friends and colleagues, > > As we are entering the New Year, we are delighted to invite you to the HBP Summit 2023, entitled "Achievements and future of digital brain research", which will take place from March 28-31, 2023, in Marseille, France.
> > We will have the pleasure of welcoming renowned researchers in the field of neuroscience and representatives of distinguished European research institutions. > REGISTER NOW > > For all group or institutional registrations, contact: summit2023 at mcocongres.com . Please note that hotel reservations will need to be made separately by individuals and/or institutions (see accommodation on our website). > Plenary > and poster sessions > High-level panel discussions > on brain science and health > > Awards > for the best abstracts and for innovation > Exhibition > and Science Market > Read the preliminary programme > > The Summit will be an opportunity to present the latest European advances in the field of neuroscience, brain health, clinical applications, neurotechnology and computing. > > Don't miss this great opportunity to learn, meet, interact and exchange ideas with a broad network of passionate scientists. > > Please visit summit2023.humanbrainproject.eu for further information about the event and the programme. > > Yours sincerely, > Prof. Dr. Katrin AMUNTS > Scientific Research Director of HBP > Pawel SWIEBODA > Director General of HBP > CEO of EBRAINS > > Prof. Dr. Viktor JIRSA > HBP Summit 2023 Local Host, CSO of EBRAINS > > EBRAINS AISBL, Chau. de la Hulpe 166, 1170 Watermael-Boitsfort, Belgium -------------- next part -------------- An HTML attachment was scrubbed...
URL: From albagarciaseco at gmail.com Thu Jan 26 09:57:16 2023 From: albagarciaseco at gmail.com (Alba García) Date: Thu, 26 Jan 2023 15:57:16 +0100 Subject: Connectionists: IEEE CBMS 2023 - CALL FOR PAPERS Message-ID: **************************************************************************************************** Apologies if you receive multiple copies of this announcement **************************************************************************************************** 36th IEEE International Symposium on Computer Based Medical Systems L'Aquila, Italy, 22-24 June 2023 https://2023.cbms-conference.org/ ---------------------------------------------------------------------------------------------------- CALL FOR PAPERS ---------------------------------------------------------------------------------------------------- Attracting a worldwide audience, CBMS is the premier conference for computer-based medical systems, and one of the main conferences within the fields of medical informatics and biomedical informatics. CBMS allows the exchange of ideas and technologies between academic and industrial scientists. The scientific program of IEEE CBMS 2023 will consist of regular sessions and 5 special track sessions, with technical contributions reviewed and selected by an international program committee, as well as keynote talks and tutorials given by leading experts in their fields. The IEEE CBMS 2023 edition also aims to host high-quality papers about industry and real case applications, as well as to allow researchers leading international projects to show the scientific community the main aims, goals, and results of their projects (check the Projects and Industry track here: https://2023.cbms-conference.org/projects-and-industry-track/). There are already two confirmed special issues in JCR-indexed journals, check here: https://2023.cbms-conference.org/special-issue/. We solicit submissions on previously unpublished research work.
Example areas include but are not limited to: - Biomedical Signal and Image Processing - Clinical and Healthcare Services Research - Data Analysis and Visualization - Data Mining and Machine Learning - Decision Support and Recommendation Systems - Healthcare Communication Networks - Healthcare Data and Knowledge Management - Human-Computer Interaction (HCI) in Healthcare - Information Technologies in Healthcare - Digital Biomarkers - Intelligent Medical Devices and Smart Technologies - Radiomics and Radiogenomics - Semantics and Knowledge Representation - Serious Gaming in Healthcare - Systems Integration and Security - Technology-enabled Education - Telemedicine Systems - Translational Bioinformatics - Sensor solutions for Connected Health - mHealth Solutions and Insights - Learning from Medical Devices - Cyberphysical Systems in Medicine ---------------------------------------------------------------------------------------------------- ORGANIZERS ---------------------------------------------------------------------------------------------------- Prof. Giuseppe Placidi, PhD, Università dell'Aquila (Italy) Rosa Sicilia, PhD, Università Campus Bio-Medico di Roma (Italy) Prof. Alejandro Rodríguez González, PhD, Universidad Politécnica de Madrid (Spain) ---------------------------------------------------------------------------------------------------- PAPER SUBMISSION AND PUBLICATION ---------------------------------------------------------------------------------------------------- Submitted papers have to be original, containing new and original results. Submission implies the willingness of at least one of the authors to register and present the paper at the CBMS 2023 Symposium. All papers will be peer reviewed by at least two independent referees. - Prospective authors are invited to submit papers in any of the topics listed above.
- Instructions for preparing the manuscript (in Word and LaTeX formats) are available at: https://2023.cbms-conference.org/general-instructions/ - Please also check the Guidelines. - Papers must be submitted electronically via the web-based submission system. ---------------------------------------------------------------------------------------------------- CONTACTS ---------------------------------------------------------------------------------------------------- cbms2023 at cbms-conference.org ---------------------------------------------------------------------------------------------------- IMPORTANT DATES ---------------------------------------------------------------------------------------------------- Paper submission deadline: February 15, 2023 Notification of acceptance: March 30, 2023 Camera-ready due: April 18, 2023 ---------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From susan.fischer at tue.mpg.de Thu Jan 26 10:34:13 2023 From: susan.fischer at tue.mpg.de (Susan Fischer) Date: Thu, 26 Jan 2023 16:34:13 +0100 Subject: Connectionists: Postdoc positions in cognitive neuroscience and computational psychiatry, University of Tuebingen, Germany Message-ID: <774DB479-1C1A-47BA-9E5A-5B89449D2D22@tue.mpg.de> **Apologies for cross-posting** Dear colleagues, We are looking to hire several new postdocs in the "Developmental Computational Psychiatry" lab and the newly established W3 professorship "Computational Psychiatry" led by Tobias Hauser at the University of Tübingen (Germany). The focus of the lab is to better understand the computational and neural mechanisms underlying decision making and learning, and how these processes go awry in patients with mental illnesses.
The successful candidates will have the chance to work in a highly dynamic and inspiring environment and to collaborate closely with Prof Peter Dayan and the Max Planck Institute for Biological Cybernetics. Concretely, we are looking for the following candidates: * Postdoc with clinical psychiatry / psychotherapy background https://devcompsy.org/PostdocClinincalAvH2023.pdf * Postdoc with computational modelling background https://devcompsy.org/PostdocModellingAvH2023.pdf * Postdocs with experimental & neuroimaging (MEG / MRI) background https://devcompsy.org/PostdocNImgERC2023.pdf More information about the positions can be found here: https://devcompsy.org/join-the-lab/ Interested candidates are encouraged to reach out to Tobias Hauser directly to informally discuss the positions. ************************************************ Susan Fischer Coordinator Alexander von Humboldt Professorship Prof Peter Dayan Eberhard Karls Universität Tübingen & Max Planck Institute for Biological Cybernetics AI Research Building R 20-7/A21 Maria-von-Linden-Str. 6 72076 Tübingen Germany susan.fischer at tue.mpg.de ************************************************ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbianchi at fei.edu.br Thu Jan 26 14:47:54 2023 From: rbianchi at fei.edu.br (Reinaldo A. C. Bianchi) Date: Thu, 26 Jan 2023 19:47:54 +0000 Subject: Connectionists: Call for Papers - BWAIF - 2nd Brazilian Workshop on Artificial Intelligence in Finance Message-ID: <1F99A4C8-583A-4710-AB5A-5A1C0901F09E@fei.edu.br> Call for Papers - BWAIF - 2023 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 2nd Brazilian Workshop on Artificial Intelligence in Finance Satellite Event of the XLIII Congress of the Brazilian Computer Society João Pessoa, Paraíba, July 23rd-28th, 2023.
https://csbc.sbc.org.br/2023/ii-brazilian-workshop-on-artificial-intelligence-in-finance-bwaif-2023/ Artificial Intelligence, and in particular Machine Learning, is a technology that is transforming how we integrate information, analyze data and make decisions, with large impact throughout our society. Advances in AI are being felt in our economy, with significant impacts in finance, including financial markets, services, and the global financial system more broadly. The Brazilian Workshop on Artificial Intelligence in Finance (BWAIF), which will have its second edition as a satellite event of the XLIII Congress of the Brazilian Computer Society, will be a forum for researchers, professionals, educators and students to present and discuss innovations, trends, experiences and evolution in the fields of Artificial Intelligence and its applications in Finance. BWAIF will take place as a satellite event of the SBC Congress, whose theme in 2023, "Opportunities and challenges of the integration of the physical and digital worlds", is closely tied to the development of a society that uses digital resources for financial transactions, where large institutions have focused part of their resources on the development of "phygital" platforms, which implies the convergence of actions in the physical, digital and social aspects of organizations. Although it is an event of the Brazilian Computer Society congress, with papers accepted in English and Portuguese, we encourage the participation of the international community, with main presentations in English. The conference will be held in person, in João Pessoa, a beautiful beach city on the Atlantic coast, the capital of the Brazilian state of Paraíba. Founded more than 400 years ago, it has many architectural and natural monuments, with brightly painted art nouveau architecture that hints at the city's creative tradition.
Powerful coastal sunshine keeps beaches bustling year-round in João Pessoa, with bars, restaurants, coconut palms, and a broad promenade along the seafront. TOPICS OF INTEREST Of interest are all studies that have not been published previously and that present new ideas, discussions of existing works, practical studies and experiments relevant to the application of Artificial Intelligence in the financial area. Topics of interest include, but are not limited to: - AI and Cryptocurrencies - AI techniques for financial decision making - AI techniques for financial forecasting - AI techniques for Portfolio analysis - AI techniques for simulation of markets, economies, and other financial systems - AI techniques for risk assessment and management - Computational game-theoretic analysis of financial scenarios - Ethics and fairness of AI in finance - Explainability, interpretability and trustworthiness of AI in finance - Infrastructure to support AI research in finance - Multi-agent systems in finance - Natural Language Processing and its applications in finance - Robustness, security, and privacy of AI systems in finance - Computational regulation and compliance in finance - Robustness and uncertainty quantification for AI models in finance - Synthetic Data and benchmarks for AI pipelines for financial applications - Trading algorithms ARTICLE FORMAT AND SUBMISSION Articles are limited to twelve (12) pages, including text, references, appendices, tables, and figures. Articles must have an abstract of at most 300 words, in addition to keywords. Articles can be written in Portuguese or English, using the SBC article style, available at: http://www.sbc.org.br/documentos-da-sbc/summary/169-templates-para-artigos-e-capitulos-de-livros/878-modelosparapublicaodeartigos. It is also available at Overleaf: https://pt.overleaf.com/latex/templates/sbc-conferences-template/blbxwjwzdngr. Works written in Portuguese must have a title, abstract and keywords in English.
Submissions must be made online using the JEMS system: https://jems.sbc.org.br/bwaif2023. The review process will be double-blind (names and institutions of the authors should be omitted in the articles). All papers submitted will be reviewed by at least two experts in the field. The authors of the accepted papers will be invited to present their papers in an oral presentation or in a poster session. All accepted papers will be published electronically, with DOI, in SBC's Open Library, SOL: http://sol.sbc.org.br. IMPORTANT DATES - Deadline for submission of papers: March 05th, 2023. - Results: May 5th, 2023. - Camera-ready submission: May 16th, 2023. - Authors' registration: May 16th, 2023. AUTHORS' REGISTRATION For an accepted article to be presented and included in the events, at least one of the authors of the article must register for the event in the professional category. Each entry in the professional category entitles the author to the publication of a single article, considering any of the Full SBC Conference base events or satellite events. Authors with more than one article approved at any CSBC event must pay a "publishing fee" per additional article. The amount of this fee can be seen on the CSBC 2022 registration page. ORGANIZATION General Organisers and the Program Committee: Reinaldo A.C. Bianchi, FEI University Center. Anna Helena Reali Costa, Polytechnic School of the University of São Paulo. CONTACT Prof. Dr. Reinaldo A.C. Bianchi - rbianchi at fei.edu.br This message, together with any attached information, is confidential and protected by law. Only its intended recipients are authorized to use it. If you are not the intended recipient, please notify the sender and then delete the message, noting that there is no authorization to use, copy, store, forward, print, or take any action based on its content. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Menno.VanZaanen at nwu.ac.za Fri Jan 27 02:25:00 2023 From: Menno.VanZaanen at nwu.ac.za (Menno Van Zaanen) Date: Fri, 27 Jan 2023 07:25:00 +0000 Subject: Connectionists: 2nd CfP 4th workshop on Resources for African Indigenous Languages (RAIL) @ EACL Message-ID: <464ce19b84968e6ddf14c75fae4be302e6d13a00.camel@nwu.ac.za> Second call for papers Fourth workshop on Resources for African Indigenous Languages (RAIL) https://bit.ly/rail2023 The 4th RAIL (Resources for African Indigenous* Languages) workshop will be co-located with EACL 2023 in Dubrovnik, Croatia. The Resources for African Indigenous Languages (RAIL) workshop is an interdisciplinary platform for researchers working on resources (data collections, tools, etc.) specifically targeted towards African indigenous languages. In particular, it aims to create the conditions for the emergence of a scientific community of practice that focuses on data, as well as computational linguistic tools specifically designed for or applied to indigenous languages found in Africa. Previous workshops showed that the presented problems (and solutions) are not only applicable to African languages. Many issues are also relevant to other low-resource languages, such as different scripts and properties like tone. As such, these languages share similar challenges. This allows for researchers working on these languages with such properties (including non-African languages) to learn from each other, especially on issues pertaining to language resource development. The RAIL workshop has several aims. First, it brings together researchers working on African indigenous languages, forming a community of practice for people working on indigenous languages.
Second, the workshop aims to reveal currently unknown or unpublished existing resources (corpora, NLP tools, and applications), resulting in a better overview of the current state-of-the-art, and also allows for discussions on novel, desired resources for future research in this area. Third, it enhances sharing of knowledge on the development of low-resource languages. Finally, it enables discussions on how to improve the quality as well as availability of the resources. The workshop has "Impact of impairments on language resources" as its theme, but submissions on any topic related to properties of African indigenous languages (including non-African languages) may be accepted. Suggested topics include (but are not limited to) the following: Digital representations of linguistic structures Descriptions of corpora or other data sets of African indigenous languages Building resources for (under resourced) African indigenous languages Developing and using African indigenous languages in the digital age Effectiveness of digital technologies for the development of African indigenous languages Revealing unknown or unpublished existing resources for African indigenous languages Developing desired resources for African indigenous languages Improving quality, availability and accessibility of African indigenous language resources *: The term indigenous languages used in the RAIL workshop is intended to refer to non-colonial languages (in this case those used in Africa). In no way is this term used to cause any harm or discomfort to anyone. Many of these languages were or are still marginalised, and the aim of the workshop is to bring attention to the creation, curation, and development of resources for these languages in Africa. Submission requirements: We invite papers on original, unpublished work related to the topics of the workshop. Submissions, presenting completed work, may consist of up to eight (8) pages of content plus additional pages of references.
The final camera-ready version of accepted long papers is allowed one additional page of content (so up to 9 pages) so that reviewers' feedback can be incorporated. Submissions need to use the EACL stylesheets. These can be found at https://2023.eacl.org/calls/styles. Submission is electronic in PDF through the START system (link will be provided once available). Reviewing is double-blind, so make sure to anonymize your submission (e.g., do not provide author names, affiliations, project names, etc.) Limit the amount of self-citations (anonymized citations should not be used). Accepted papers will be published in the ACL workshop proceedings. Please make sure you also go through the responsible NLP checklist (https://aclrollingreview.org/responsibleNLPresearch/). Also, submissions should have a section titled "Limitations" (as described in the stylesheets). Authors are also encouraged to include an explicit ethics statement. Important dates: Submission deadline 13 February 2023 Date of notification 13 March 2023 Camera ready deadline 27 March 2023 RAIL workshop 5 or 6 May 2023 Organising Committee Rooweither Mabuya, South African Centre for Digital Language Resources (SADiLaR), South Africa Don Mthobela, Cam Foundation Mmasibidi Setaka, South African Centre for Digital Language Resources (SADiLaR), South Africa Menno van Zaanen, South African Centre for Digital Language Resources (SADiLaR), South Africa -- Prof Menno van Zaanen menno.vanzaanen at nwu.ac.za Professor in Digital Humanities South African Centre for Digital Language Resources https://www.sadilar.org ________________________________ NWU PRIVACY STATEMENT: http://www.nwu.ac.za/it/gov-man/disclaimer.html DISCLAIMER: This e-mail message and attachments thereto are intended solely for the recipient(s) and may contain confidential and privileged information. Any unauthorised review, use, disclosure, or distribution is prohibited.
If you have received the e-mail by mistake, please contact the sender or reply to the e-mail, and delete the e-mail and its attachments (where appropriate) from your system. ________________________________ From brais.cancela at udc.es Thu Jan 26 14:23:45 2023 From: brais.cancela at udc.es (Brais Cancela Barizo) Date: Thu, 26 Jan 2023 19:23:45 +0000 Subject: Connectionists: CFP special session on "Green Machine Learning" at ESANN 2023 Message-ID: <5102E87D-C0A3-4F4C-A234-5A87584511A3@udc.es> [Apologies if you receive multiple copies of this CFP] Call for papers: special session on "Green Machine Learning" at ESANN 2023 European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023) 4-6 October 2023, Bruges (Belgium) - http://www.esann.org IMPORTANT DATES: Paper submission deadline: 2 May 2023 Notification of acceptance: 16 June 2023 ESANN conference: 4-6 October 2023 Green Machine Learning Homepage: https://www.esann.org/special-sessions#session3 Organized by Verónica Bolón-Canedo, Laura Morán-Fernández, Brais Cancela and Amparo Alonso-Betanzos (Universidade da Coruña, Spain) Emails: veronica.bolon at udc.es, laura.moranf at udc.es, brais.cancela at udc.es, ciamparo at udc.es In recent years we have witnessed the most impressive advances achieved by Artificial Intelligence (AI), in most cases by using deep learning models. However, it is undeniable that deep learning has a huge carbon footprint (a paper from 2019 stated that training a language model could emit nearly five times the lifetime emissions of an average car). The term Green AI refers to AI research that is more environmentally friendly and inclusive, not only by producing novel results without increasing the computational cost, but also by ensuring that any researcher with a laptop has the opportunity to perform high-quality research without the need to use expensive cloud servers.
The typical AI research (sometimes referred to as Red AI) aims to obtain state-of-the-art results at the expense of using massive computational power, usually through an enormous quantity of training data and numerous experiments. Efficient machine learning approaches (especially deep learning) are starting to receive some attention in the research community. However, the problem is that, most of the time, these works are not motivated by being green. Therefore, it is necessary to encourage the AI community to recognize the value of work by researchers who take a different path, optimizing efficiency rather than only accuracy. Topics such as low-resolution algorithms, edge computing, efficient platforms, and in general scalable and sustainable algorithms and their applications are of interest to complete a holistic view of Green AI. In this special session, we invite papers on both practical and theoretical issues about developing new machine learning approaches that are sustainable and green, as well as review papers with the state-of-the-art techniques and the open challenges encountered in this field. In particular, topics of interest include, but are not limited to: Developing energy-efficient algorithms for training and/or inference. Investigating sustainable data management and storage techniques. Exploring the use of renewable energy sources for machine learning. Examining the ethical and social implications of green machine learning. Investigating methods for reducing the carbon footprint of machine learning systems. Studying the impact of green machine learning on various industries and applications. Submitted papers will be reviewed according to the ESANN reviewing process and will be evaluated on their scientific value: originality, correctness, and writing style. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From theimad at gmail.com Fri Jan 27 03:29:46 2023 From: theimad at gmail.com (Imad Khan) Date: Fri, 27 Jan 2023 19:29:46 +1100 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> Message-ID: Dear Richard, I find your comment a bit unwarranted. You could, however, follow Gary Marcus' way to put forward critical thoughts. I do not necessarily agree with Gary, but I agree with his style. I am reproducing Gary's text below for your convenience. Juergen is an elder of AI and deserves respect (like all of us do). I did go to your website and you're correct to say that AI systems are complex systems and an integrated approach is needed to save another 20 years! Gary's excerpt: [image: image.png] Regards, Dr. M. Imad Khan On Thu, 26 Jan 2023 at 04:41, Richard Loosemore wrote: > > Please, somebody reassure me that this isn't just another attempt to > rewrite history so that Schmidhuber's lab invented almost everything. > > Because at first glance, that's what it looks like. > > Richard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 38209 bytes Desc: not available URL: From li.zhaoping at tuebingen.mpg.de Fri Jan 27 04:38:22 2023 From: li.zhaoping at tuebingen.mpg.de (Li Zhaoping) Date: Fri, 27 Jan 2023 10:38:22 +0100 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <38a-63d18b80-19b-5e29ab00@154884272> <691e21c7-e146-4a26-dfb2-8f4a17e6088d@tuebingen.mpg.de> Message-ID: <3093b5c6-8e45-f6e9-a6fe-3d48205fc772@tuebingen.mpg.de> Dear Jonathan, thank you for your discussion. A good scientist should also be objective.
Winston Churchill played a big role in World War II, and then wrote the history of the war. He won the Nobel Prize in Literature for his writings. Zhaoping On 27/01/2023 10:18, Jonathan Shapiro wrote: > "I believe that good scientists are also good historians of science, or > at least want to know what happened in the past in their field. > Being a good scientist helps one to be a better historian, example: > Abraham Pais, so the best historians of science > must be good scientists." > > That is an interesting idea, Professor Zhaoping. But, doesn't a > historian need objectivity? When a scientist writes the history of the > science which they worked in and contributed to, there could be a > bias. Even when their contributions are many. There could be a > temptation to write their competitors out of the history or make > claims beyond what they had done. Maybe the best historians of science > understand the science, but whose contributions are to understanding > its history. > > Jonathan Shapiro > Department of Computer Science > University of Manchester > ------------------------------------------------------------------------ > *From:* Connectionists > on behalf of Zhaoping Li > *Sent:* Thursday, January 26, 2023 10:02 AM > *To:* connectionists at mailman.srv.cs.cmu.edu > > *Subject:* Re: Connectionists: Annotated History of > Modern AI and Deep Learning > I believe that good scientists are also good historians of science, or > at least want to know what happened in the past in their field. > Being a good scientist helps one to be a better historian, example: > Abraham Pais, so the best historians of science > must be good scientists. > > I am interested in, and am very grateful to colleagues who help us to > learn more about, the history of the field. > > Zhaoping > > -- > Li Zhaoping Ph.D. > Prof.
of Cognitive Science, University of Tuebingen > Head of Dept of Sensory and Sensorimotor Systems, > Max Planck Institute of Biological Cybernetics > Author of "Understanding vision: theory, models, and data", Oxford > University Press, 2014 > www.lizhaoping.org > > > > On 1/25/23 21:06, Claudius Gros wrote: > > It is actually interesting. In other fields, like physics, > > there is a division of labor: > > > > - scientist, doing the heavy lifting, and > > - historians, trying to figure out the history of the field. > > > > It is quite amusing, that this seems to be different in > > machine learning. Some scientists want to be both! > > > > Claudius > > > > > > On Wednesday, January 25, 2023 19:09 CET, Stephen José Hanson > wrote: > > > >> Well he mentions Legendre, Gauss and the Big Bang.. so no research > claims in those areas.. > >> > >> Steve > >> > >> > >> On 1/25/23 11:51, Richard Loosemore wrote: > >> > >> Please, somebody reassure me that this isn't just another attempt > to rewrite history so that Schmidhuber's lab invented almost everything. > >> > >> Because at first glance, that's what it looks like. > >> > >> Richard > >> > >> > >> -- > >> Stephen José Hanson > >> Professor, Psychology Department > >> Director, RUBIC (Rutgers University Brain Imaging Center) > >> Member, Executive Committee, RUCCS > > > > > -- Li Zhaoping, Ph.D. Prof. of Cognitive Science, University of Tuebingen Head of Department of Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics Author of "Understanding vision: theory, models, and data", Oxford University Press, 2014 www.lizhaoping.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rloosemore at susaro.com Fri Jan 27 13:19:39 2023 From: rloosemore at susaro.com (Richard Loosemore) Date: Fri, 27 Jan 2023 13:19:39 -0500 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> Message-ID: <5fad3a61-7ea0-344a-8f7c-57034b23575c@susaro.com> Dear Imad, Fair comment, although I heard Juergen say much the same thing 14 years ago, at the AGI conference in 2009, so perhaps you can forgive me for being a little weary of this tune... More *substantively* let me say that this field is such that many ideas/algorithms/theories can be SEEN as variations on other ideas/algorithms/theories, if you look at them from just the right angle. If I may add a tongue-in-cheek comment. I got into this field in 1981 (my first supervisor was John G. Taylor). By the time the big explosion happened in 1985-7, I was already thinking far beyond that paradigm. When thinking about what thesis to do, to satisfy my Warwick Psych Dept overseers in 1989, I invented, on paper, many of the ideas that later became Deep Learning. But those struck me as tedious and ultimately irrelevant, because I wanted to understand the whole system, not make pattern association machines. This is NOT a claim that I invented anything first, but it IS meant to convey the idea that to people like me who come up with novel ideas all the time, but try to stay focussed on what they consider the genuine prize, all this fighting for a place in the history books seems pathetic. There, that's my arrogant thought-for-the-day. You can now safely ignore me again. Richard Loosemore On 1/27/23 3:29 AM, Imad Khan wrote: > Dear Richard, > I find your comment a bit unwarranted. You could, however, follow Gary > Marcus' way to put forward critical thoughts.
I do not necessarily > agree with Gary, but I agree with his style. I am reproducing Gary's > text below for your convenience. Juergen is an elder of AI and > deserves respect (like all of us do). I did go to your website and > you're correct to say that AI systems are complex systems and an > integrated approach is needed to save another 20 years! > > Gary's excerpt: > image.png > Regards, > Dr. M. Imad Khan > > > On Thu, 26 Jan 2023 at 04:41, Richard Loosemore > wrote: > > > Please, somebody reassure me that this isn't just another attempt to > rewrite history so that Schmidhuber's lab invented almost everything. > > Because at first glance, that's what it looks like. > > Richard > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 38209 bytes Desc: not available URL: From terry at salk.edu Fri Jan 27 12:59:28 2023 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 27 Jan 2023 09:59:28 -0800 Subject: Connectionists: NEURAL COMPUTATION - February 1, 2023 In-Reply-To: Message-ID: NEURAL COMPUTATION - Volume 35, Number 2 - February 1, 2023 Now available for online download: http://www.mitpressjournals.org/toc/neco/35/2 http://cognet.mit.edu/content/neural-computation ----- Articles Optimal Quadratic Binding for Relational Reasoning in Vector Symbolic Neural Architectures Naoki Hiratani, Haim Sompolinsky The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, and Yunpu Ma Dynamic Consolidation for Continual Learning Hang Li, Chen Ma, Xi Chen, and Xue Liu Letter Identifying and Localizing Multiple Objects Using Artificial Ventral and Dorsal Cortical Visual Pathways Zhixian Han, Anne Sereno ----- ON-LINE -- http://www.mitpressjournals.org/neco MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209 Tel: (617) 253-2889 FAX: (617)
577-1545 journals-cs at mit.edu ----- From david at irdta.eu Sun Jan 29 05:53:43 2023 From: david at irdta.eu (David Silva - IRDTA) Date: Sun, 29 Jan 2023 11:53:43 +0100 (CET) Subject: Connectionists: DeepLearn 2023 Summer: early registration February 15 Message-ID: <885813934.1334790.1674989623195@webmail.strato.com> ************************************************************************ 10th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING DeepLearn 2023 Summer Las Palmas de Gran Canaria, Spain July 17-21, 2023 https://irdta.eu/deeplearn/2023su/ ************************************************************************ Co-organized by: University of Las Palmas de Gran Canaria Institute for Research Development, Training and Advice – IRDTA Brussels/London ************************************************************************ Early registration: February 15, 2023 ************************************************************************ FRAMEWORK: DeepLearn 2023 Summer is part of a multi-event called Deep&Big 2023 consisting also of BigDat 2023 Summer. DeepLearn 2023 Summer participants will have the opportunity to attend lectures in the program of BigDat 2023 Summer as well if they are interested. SCOPE: DeepLearn 2023 Summer will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria, Luleå, Bournemouth and Bari.
Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, geographic information systems, signal processing, genomics, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified through 24 four-and-a-half-hour courses and 2 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be the main ingredients of the event. It will also be possible to participate fully live, remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and employment profiles. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2023 Summer is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends.
All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: DeepLearn 2023 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be: Institución Ferial de Canarias Avenida de la Feria, 1 35012 Las Palmas de Gran Canaria https://www.infecar.es/ STRUCTURE: 2 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Also, if interested, participants will be able to attend courses developed in BigDat 2023 Summer, which will be held in parallel and at the same venue. Full live online participation will be possible. The organizers highlight, however, the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Alex Voznyy (University of Toronto), Comparison of Graph Neural Network Architectures for Predicting the Electronic Structure of Molecules and Solids Aidong Zhang (University of Virginia), Concept-Based Models for Robust and Interpretable Deep Learning PROFESSORS AND COURSES: (to be completed) Eneko Agirre (University of the Basque Country), [introductory/intermediate] Natural Language Processing in the Large Language Model Era Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning in Science Daniel Cremers (Technical University of Munich), [intermediate] Deep Networks for 3D Computer Vision Stefano Giagu (Sapienza University of Rome), [introductory/intermediate] Quantum Machine Learning on Parameterized Quantum Circuits Georgios Giannakis (University of Minnesota), tba Tae-Kyun Kim (Korea Advanced Institute of Science and Technology), [intermediate/advanced] Deep 3D Pose Estimation Marcus Liwicki (Luleå
University of Technology), [intermediate/advanced] Methods for Learning with Few Data Chen Change Loy (Nanyang Technological University), [introductory/intermediate] Image and Video Restoration Ivan Oseledets (Skolkovo Institute of Science and Technology), [introductory/intermediate] Tensor Methods for Approximation of High-Dimensional Arrays and Their Applications in Machine Learning Deepak Pathak (Carnegie Mellon University), [intermediate/advanced] Continually Improving Agents for Generalization in the Wild Kaushik Roy (Purdue University), tba Carlo Sansone (University of Naples Federico II), tba Björn Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing Amos Storkey (University of Edinburgh), [intermediate] Meta-Learning and Contrastive Learning for Robust Representations Ponnuthurai N. Suganthan (Qatar University), [introductory/intermediate] Randomization-Based Deep and Shallow Learning Algorithms and Architectures Jiliang Tang (Michigan State University), [introductory/advanced] Graph Neural Networks: Models, Applications and Advances Savannah Thais (Columbia University), [intermediate] Applications of Graph Neural Networks: Physical and Societal Systems Z. Jane Wang (University of British Columbia), [introductory/intermediate] Adversarial Deep Learning in Digital Image Security & Forensics Andrew Gordon Wilson (New York University), tba Li Xiong (Emory University), [introductory] Deep Learning and Privacy Enhancing Technology Lihi Zelnik-Manor (Technion - Israel Institute of Technology), [introductory] Introduction to Computer Vision and the Ethical Questions It Raises OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 9, 2023.
INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 9, 2023. EMPLOYER SESSION: Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the organization and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 9, 2023. ORGANIZING COMMITTEE: Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) David Silva (London, organization chair) REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2023su/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For logistical reasons, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish as well as eventually courses in BigDat 2023 Summer. Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed and the on-line registration tool disabled once the capacity of the venue is exhausted. It is highly recommended to register prior to the event. FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on site and for online participation are the same.
ACCOMMODATION: Accommodation suggestions will be available in due time at https://irdta.eu/deeplearn/2023su/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. Participants will be recognized 2 ECTS credits by University of Las Palmas de Gran Canaria. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Cabildo de Gran Canaria Universidad de Las Palmas de Gran Canaria - Fundación Parque Científico Tecnológico Universitat Rovira i Virgili Institute for Research Development, Training and Advice – IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed... URL: From axel.hutt at inria.fr Sat Jan 28 06:33:29 2023 From: axel.hutt at inria.fr (Axel Hutt) Date: Sat, 28 Jan 2023 12:33:29 +0100 (CET) Subject: Connectionists: PhD position in computational and experimental neuroscience Message-ID: <663403092.31837646.1674905609661.JavaMail.zimbra@inria.fr> ---------------------------------------------------------------------------------------- ------- PhD position in Strasbourg / France in October 2023 --------- --------------------------------------------------------------------------------------- Topic : Mathematical modeling and behavioral experiments to improve visual attention Attention-deficit/hyperactivity disorder is characterized by, inter alia, inattention. It may be treated successfully by pharmacological medication. Since such medication typically yields cognitive adverse side effects, a non-pharmacological feedback training setup may represent an alternative, lighter treatment. The PhD project proposes a combination of auditory stimulation and performance feedback to demonstrate that non-pharmacological feedback may also improve visual attention deficits. To this end, theoretical modeling, control feedback and experimental psychophysics and neuropsychology will be employed.
The PhD student will perform their own simple reaction-time experiments to study visual attention under auditory stimulation with performance feedback. In addition, she/he will subsequently study mathematical models of behavioral feedback and will combine all results with insights gained from their own EEG experiment with healthy and visual attention-deficit subjects. Where: [ https://mlms.icube.unistra.fr/index.php/Presentation | Team MLMS (iCube) ] / [ https://mimesis.inria.fr/ | Team MIMESIS (INRIA) ] and team INSERM1114, all in Strasbourg / France When: start September / October 2023 for 3 years Who: the preferred candidate should have good knowledge in applied mathematics, good programming skills, strong interest in mathematical modeling and experimental work. Why: You are strongly interested in pre-clinical applications and motivated to learn to apply various tools in computational and experimental neuroscience. For more information or to apply, send an email with your documents to Axel Hutt: [ mailto:axel.hutt at inria.fr | axel.hutt at inria.fr ]. Feedback schemes are well-known to improve visual attention. The aim of our project is to develop a computational feedback tool that may be implemented as a software utility. A good candidate for such a non-neural feedback is performance feedback, which feeds back the real-time behavioral performance. In such a setting, visual attention is reflected in the behavioral performance. Moreover, it is well-known that auditory stimulation improves visual attention. The optimal combination of performance feedback and auditory stimulation permits optimal improvement of visual attention. This will also be examined with experimental electroencephalographic (EEG) data. To this end, the present project combines mathematical modeling of behavioral feedback, experimental psychophysics and data analysis of EEG experiments to identify an experimental protocol for optimal visual attention improvement.
-- Axel Hutt Directeur de Recherche Equipe MIMESIS INRIA Nancy Grand Est Bâtiment IHU 1, Place de l'Hopital 67000 Strasbourg, France https://mimesis.inria.fr/members/axel-hutt/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tt at cs.dal.ca Sun Jan 29 08:26:53 2023 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Sun, 29 Jan 2023 09:26:53 -0400 Subject: Connectionists: Annotated History of Modern AI and Deep Learning In-Reply-To: <5fad3a61-7ea0-344a-8f7c-57034b23575c@susaro.com> References: <8246578A-D869-49FB-8AA6-4014D9EB0239@supsi.ch> <138FE707-87DB-4DE0-8C2B-5D41800A55D4@supsi.ch> <12f78a08-cf83-e437-60ba-a5f8e377256f@susaro.com> <5fad3a61-7ea0-344a-8f7c-57034b23575c@susaro.com> Message-ID: Dear All, I know the discussions sometimes get heated, but I want to thank everyone for them. I meant to contribute earlier by pointing to an early paper by Amari-sensei where he used backprop without even a detailed explanation. I always thought that for him it was trivial as it is just the chain rule. While Amari-sensei is so inspiring and has given us so many more insights through information geometry, there is also a huge role for people who popularize some ideas and bring the rest of us commoners along. I specifically enjoyed comments on deep learning versus neurosymbolic causal learning. I am so excited to see more awareness of possible relations that might bring these fields closer together in the future. What is your favorite venue for such discussions? Respectfully, Thomas Trappenberg On Sun, Jan 29, 2023, 8:49 a.m. Richard Loosemore wrote: > > Dear Imad, > > Fair comment, although I heard Juergen say much the same thing 14 years > ago, at the AGI conference in 2009, so perhaps you can forgive me for being > a little weary of this tune...?
> > More *substantively* let me say that this field is such that many > ideas/algorithms/theories can be SEEN as variations on other > ideas/algorithms/theories, if you look at them from just the right angle. > > If I may add a tongue-in-cheek comment. I got into this field in 1981 (my > first supervisor was John G. Taylor). By the time the big explosion > happened in 1985-7, I was already thinking far beyond that paradigm. When > thinking about what thesis to do, to satisfy my Warwick Psych Dept > overseers in 1989, I invented, on paper, many of the ideas that later > became Deep Learning. But those struck me as tedious and ultimately > irrelevant, because I wanted to understand the whole system, not make > pattern association machines. This is NOT a claim that I invented anything > first, but it IS meant to convey the idea that to people like me who come > up with novel ideas all the time, but try to stay focussed on what they > consider the genuine prize, all this fighting for a place in the history > books seems pathetic. > > There, that's my arrogant thought-for-the-day. You can now safely ignore > me again. > > Richard Loosemore > > > > > > > > > > On 1/27/23 3:29 AM, Imad Khan wrote: > > Dear Richard, > I find your comment a bit unwarranted. You could, however, follow Gary > Marcus' way to put forward critical thoughts. I do not necessarily agree > with Gary, but I agree with his style. I am reproducing Gary's text below > for your convenience. Juergen is an elder of AI and deserves respect (like > all of us do). I did go to your website and you're correct to say that AI > systems are complex systems and an integrated approach is needed to save > another 20 years! > > Gary's excerpt: > [image: image.png] > > Regards, > Dr. M. Imad Khan > > > On Thu, 26 Jan 2023 at 04:41, Richard Loosemore > wrote: > >> >> Please, somebody reassure me that this isn't just another attempt to >> rewrite history so that Schmidhuber's lab invented almost everything.
>> >> Because at first glance, that's what it looks like. >> >> Richard >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 38209 bytes Desc: not available URL: From dwang at cse.ohio-state.edu Sun Jan 29 13:42:49 2023 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Sun, 29 Jan 2023 18:42:49 +0000 Subject: Connectionists: NEURAL NETWORKS, Feb. 2023 Message-ID: Neural Networks - Volume 158, February 2023 https://www.journals.elsevier.com/neural-networks Characterizing functional brain networks via Spatio-Temporal Attention 4D Convolutional Neural Networks (STA-4DCNNs) Xi Jiang, Jiadong Yan, Yu Zhao, Mingxin Jiang, ... Tianming Liu Pavlovian-based neurofeedback enhances meta-awareness of mind-wandering Issaku Kawashima, Toru Nagahama, Hiroaki Kumano, Keiko Momose, Saori C. Tanaka Representation based regression for object distance estimation Mete Ahishali, Mehmet Yamac, Serkan Kiranyaz, Moncef Gabbouj LSTMED: An uneven dynamic process monitoring method based on LSTM and Autoencoder neural network Wenfeng Deng, Yuxuan Li, Keke Huang, Dehao Wu, ... Weihua Gui EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks Javier Poyatos, Daniel Molina, Aritz D. Martinez, Javier Del Ser, Francisco Herrera DANet: Semi-supervised differentiated auxiliaries guided network for video action recognition Guangyu Gao, Ziming Liu, Guangjun Zhang, Jinyang Li, A.K. Qin Reinforcement learning for robust stabilization of nonlinear systems with asymmetric saturating actuators Xiong Yang, Yingjiang Zhou, Zhongke Gao UniSKGRep: A unified representation learning framework of social network and knowledge graph Yinghan Shen, Xuhui Jiang, Zijian Li, Yuanzhuo Wang, ... 
Xueqi Cheng A fractional gradient descent algorithm robust to the initial weights of multilayer perceptron Xuetao Xie, Yi-Fei Pu, Jian Wang Continual learning with attentive recurrent neural networks for temporal data classification Shao-Yu Yin, Yu Huang, Tien-Yu Chang, Shih-Fang Chang, Vincent S. Tseng Accelerating reinforcement learning with case-based model-assisted experience augmentation for process control Runze Lin, Junghui Chen, Lei Xie, Hongye Su SSA-ICL: Multi-domain adaptive attention with intra-dataset continual learning for Facial expression recognition Hongxiang Gao, Min Wu, Zhenghua Chen, Yuwen Li, ... Chengyu Liu Forgetting memristor based STDP learning circuit for neural networks Wenhao Zhou, Shiping Wen, Yi Liu, Lu Liu, ... Ling Chen Multi-graph Fusion Graph Convolutional Networks with pseudo-label supervision Yachao Yang, Yanfeng Sun, Fujiao Ju, Shaofan Wang, ... Baocai Yin Deep MCANC: A deep learning approach to multi-channel active noise control Hao Zhang, DeLiang Wang Unsupervised graph-level representation learning with hierarchical contrasts Wei Ju, Yiyang Gu, Xiao Luo, Yifan Wang, ... Ming Zhang Multi-scale multi-reception attention network for bone age assessment in X-ray images Zhichao Yang, Cong Cong, Maurice Pagnucco, Yang Song Strictly intermittent quantized control for fixed/predefined-time cluster lag synchronization of stochastic multi-weighted complex networks A singular Riemannian geometry approach to Deep Neural Networks I. Theoretical foundations Alessandro Benfenati, Alessio Marta Achieving small-batch accuracy with large-batch scalability via Hessian-aware learning rate adjustment Sunwoo Lee, Chaoyang He, Salman Avestimehr Bayesian Disturbance Injection: Robust imitation learning of flexible policies for robot manipulation Hanbit Oh, Hikaru Sasaki, Brendan Michael, Takamitsu Matsubara LAC-GAN: Lesion attention conditional GAN for Ultra-widefield image synthesis Haijun Lei, Zhihui Tian, Hai Xie, Benjian Zhao, ... 
Baiying Lei An architecture entropy regularizer for differentiable neural architecture search Kun Jing, Luoyu Chen, Jungang Xu Neural speech enhancement with unsupervised pre-training and mixture training Xiang Hao, Chenglin Xu, Lei Xie A singular Riemannian geometry approach to deep neural networks II. Reconstruction of 1-D equivalence classes Alessandro Benfenati, Alessio Marta CT-Loc: Cross-domain visual localization with a channel-wise transformer Daeho Kim, Jaeil Kim A class of doubly stochastic shift operators for random graph signals and their boundedness Bruno Scalzo, Ljubiša Stanković, Miloš Daković, Anthony G. Constantinides, Danilo P. Mandic A unified deep semi-supervised graph learning scheme based on nodes re-weighting and manifold regularization Fadi Dornaika, Jingjun Bi, Chongsheng Zhang IA-FaceS: A bidirectional method for semantic face editing Wenjing Huang, Shikui Tu, Lei Xu -------------- next part -------------- An HTML attachment was scrubbed... URL: From pstone at cs.utexas.edu Sun Jan 29 16:39:12 2023 From: pstone at cs.utexas.edu (Peter Stone) Date: Sun, 29 Jan 2023 15:39:12 -0600 Subject: Connectionists: AI100 Essay Contest Message-ID: <14202.1675028352@cs.utexas.edu> AI100 Prize: Early Career Essay Competition The One Hundred Year Study on Artificial Intelligence (AI100) is a longitudinal study of progress in AI and its impacts on society. A key feature of the 2021 AI100 report was its commentary on what had changed since the first report published in 2016. As a way of laying the groundwork for the next report, planned for 2026, the AI100 Standing Committee invites original essay submissions that react directly to one or both of the AI100 reports. Essay application is now open and will close on March 31, 2023.
Apply here: https://ai100.stanford.edu/prize-competition ___ Professor Peter Stone Truchard Foundation Chair in Computer Science University Distinguished Teaching Professor Director, Texas Robotics Associate Chair, Department of Computer Science office: 512-471-9796 The University of Texas at Austin mobile: 512-810-3373 2317 Speedway, Stop D9500 pstone at cs.utexas.edu Austin, Texas 78712-1757 USA http://www.cs.utexas.edu/~pstone From amir.kalfat at gmail.com Sun Jan 29 22:05:13 2023 From: amir.kalfat at gmail.com (Amir Aly) Date: Mon, 30 Jan 2023 03:05:13 +0000 Subject: Connectionists: [Meetings] CRNS Talk (10) - Live Talk by Prof. Jun Tani - Okinawa Institute of Science and Technology - Japan Message-ID: Dear All **Apologies for cross-posting** The* Center for Robotics and Neural Systems* (CRNS) is pleased to announce the talk of *Prof. **Jun Tani* from *Okinawa Institute of Science and Technology (OIST) *- Japan on Wednesday, *February 15th from 11:00 AM to 12:30 PM *(*London time*) over *Zoom*. *Thank you for forwarding the invitation to any of your colleagues who might be interested*. >> *Events*: The CRNS talk series will cover a wide range of topics including social and cognitive robotics, computational neuroscience, computational linguistics, cognitive vision, machine learning, AI, and applications to healthcare. More details are available here: https://www.plymouth.ac.uk/research/robotics-neural-systems/whats-on >> *Link for the next event (No Registration is Required)*: Join Zoom Meeting *https://plymouth.zoom.us/j/96236551399?pwd=YS9zN2RhNVBhczJ4MWV0Zm5oMDdrdz09&from=addon * >> *Title of the talk: Exploring robotic minds by extending the framework of predictive coding and active inference* *Abstract*: The focus of my research has been to investigate how cognitive agents can develop structural representation and functions via iterative interaction with the world, exercising agency and learning from resultant perceptual experience. 
For this purpose, my team has developed various models analogous to predictive coding and active inference frameworks based on the free energy principle. Those models have been used for conducting diverse robotics experiments which include goal-directed planning and replanning in a dynamic environment, social embodied interactions with others, development of the higher cognitive competency for executive control for attention and working memory, embodied language, and others. The talk focuses on a set of emergent phenomena which we observed in those robotics experiments. These findings could inform us of possible non-trivial accounts for understanding embodied cognition including the issues of subjective experiences. >> If you have any questions, please don't hesitate to contact me, Regards ---------------- *Dr. Amir Aly* Lecturer in Artificial Intelligence and Robotics Center for Robotics and Neural Systems (CRNS) School of Engineering, Computing, and Mathematics Room A307 Portland Square, Drake Circus, PL4 8AA University of Plymouth, UK -------------- next part -------------- An HTML attachment was scrubbed... URL: From hocine.cherifi at gmail.com Mon Jan 30 03:09:04 2023 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Mon, 30 Jan 2023 09:09:04 +0100 Subject: Connectionists: CFP FRCCS 2023 – 3rd French Regional Conference on Complex Systems May 30-June 02 Le Havre Message-ID: *Third French Regional Conference on Complex Systems* May 31 – June 02, 2023 Le Havre, France *FRCCS 2023* You are cordially invited to submit your contribution until *February 22, 2023.* The *F*rench *R*egional *C*onference on *C*omplex *S*ystems (FRCCS) is an annual international conference organized in France since 2021. After Dijon (2021) and Paris (2022), Le Havre hosts its third edition (FRCCS 2023).
It promotes interdisciplinary exchanges between researchers from various scientific disciplines and backgrounds (sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science). FRCCS 2023 is an opportunity to exchange and promote the cross-fertilization of ideas by presenting recent research work, industrial developments, and original applications. Special attention is given to research topics with a high societal impact from the complexity science perspective. *Keynote Speakers* Luca Maria Aiello ITU Copenhagen Denmark Ginestra Bianconi Queen Mary University UK Víctor M. Eguíluz University of the Balearic Islands Spain Adriana Iamnitchi Maastricht University Netherlands Rosario N. Mantegna Palermo University Italy Céline Rozenblat University of Lausanne Switzerland *Submission Guidelines* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: - *Full paper* about *original research* - *Extended Abstract* about published or unpublished research, recommended to be 3-4 pages long and not to exceed four pages. o Submissions must follow the Springer publication format available in the Instructions for Authors of the journal Applied Network Science. o All contributions should be submitted in *pdf format* via *EasyChair*. *Publication* *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) of the journals:* o Applied Network Science, edited by Springer o Complexity, edited by Hindawi *Topics include, but are not limited to:* -
*Foundations of complex systems * - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks * - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. - *Neuroscience, **Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, Narrative complexity - *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment - *Infrastructure, planning, and environment * - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences - *Biological and (bio)medical complexity * - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging - *Social complexity* o social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior, and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation - *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, 
and culture
- *Organisms and populations*: Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution
- *Engineering systems and systems of systems*: bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology
- *Complexity in physics and chemistry*: quantum computing, quantum synchronization, quantum chaos, random matrix theory

*GENERAL CHAIRS*
Cyrille Bertelle, LITIS, Normastic, Le Havre
Roberto Interdonato, CIRAD, UMR TETIS, Montpellier

Join us at COMPLEX NETWORKS 2023
*-------------------------*
Hocine CHERIFI
University of Burgundy Franche-Comté
Laboratoire Interdisciplinaire Carnot de Bourgogne - ICB UMR 6303 CNRS
Editor in Chief, Applied Network Science
Editorial Board member: PLOS One, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sanjay.ankur at gmail.com Mon Jan 30 05:58:05 2023
From: sanjay.ankur at gmail.com (Ankur Sinha)
Date: Mon, 30 Jan 2023 10:58:05 +0000
Subject: Connectionists: Software WG: Software Highlight: Dendrify, a framework for incorporating dendrites to spiking neural networks
Message-ID: <20230130105805.gjowhi3hujmwnacy@thor>

Dear all,

Apologies for the cross-posts. Please join us for the INCF/OCNS Software Working Group's next Software Highlight session: https://ocns.github.io/SoftwareWG/2023/01/27/dev-session-michalis-pagkalos-dendrify.html

Michalis Pagkalos will introduce and discuss Dendrify, a framework for incorporating dendrites to spiking neural networks, in this session.
- Date: February 7, 1600 UTC (Click here to see your local time[1]) (Add to calendar[2]).
- Zoom: https://ucl.zoom.us/j/91907842769?pwd=bnEzTU9Eem9SRmthSjJIRElFZ0xwUT09

The abstract for the talk is below:

Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models can be suboptimal for practical applications due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify allows the creation of reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.

- Manuscript: Introducing the Dendrify framework for incorporating dendrites to spiking neural networks | Nature Communications [3]
- Source code: Poirazi-Lab/dendrify: Introducing dendrites to spiking neural networks. Designed for the Brian 2 simulator [4].
- Documentation: https://dendrify.readthedocs.io/en/latest/

[1] https://www.timeanddate.com/worldclock/fixedtime.html?msg=Software+Highlight%3A+Michalis+Pagkalos%3A+Dendrify&iso=20230207T16&p1=1440
[2] https://ocns.github.io/SoftwareWG/extras/ics/20230207-dendrify.ics
[3] https://www.nature.com/articles/s41467-022-35747-8
[4] https://github.com/Poirazi-Lab/dendrify

Please subscribe to the working group's mailing list for reminders, more updates, and to contact us: https://ocns.github.io/SoftwareWG/pages/contact.html

On behalf of the working group,
--
Thanks,
Regards,
Ankur Sinha (He / Him / His) | https://ankursinha.in
Research Fellow at the Silver Lab, University College London | http://silverlab.org/
Free/Open source community volunteer at the NeuroFedora project | https://neuro.fedoraproject.org
Time zone: Europe/London
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From a.passarella at iit.cnr.it Mon Jan 30 07:00:01 2023
From: a.passarella at iit.cnr.it (Andrea Passarella)
Date: Mon, 30 Jan 2023 13:00:01 +0100 (CET)
Subject: Connectionists: CFP - OSNeHM 2023 (co-located with The Web Conf): Workshop on Online Social Networks in the Human-centric Metaverse
Message-ID: <20230130120001.BE982163008F@magneto.iit.cnr.it>

********************************************************************
CALL FOR PAPERS

ACM OSNeHM 2023
First International Workshop on Online Social Networks in the Human-centric Metaverse
co-located with The Web Conference 2023
Austin, Texas, USA, April 30 - May 4, 2023
https://osnehm.iit.cnr.it/
********************************************************************
NEXT DEADLINES - abstract submission extended (all deadlines are AoE)
********************************************************************
6th February 2023: Abstract submission deadline
6th February 2023: Workshop paper submission deadline
********************************************************************
SUBMISSION LINK
********************************************************************
https://easychair.org/conferences/?conf=thewebconf2023iwpd

********************************************************************
SCOPE AND OVERVIEW
__________________

The cyber and physical worlds are increasingly becoming indistinguishable. This is fostered by enabling technologies such as IoT and pervasive networks, advanced data management and analytics techniques, and advanced platforms with massive diffusion, chief among them Online Social Networks. Whatever we do in one world has immediate consequences on the other, thanks to a constant flow of data - and online analytics - between the two worlds. In this context, the vision of the Metaverse provides additional perspectives, augmenting human interactions with things and other humans across the two worlds. The role of humans in this socio-technical complex system is key, and still largely unexplored. Quite interestingly, while new tools characteristic of the cyber-physical world - OSNs among them - have been designed to largely extend human capabilities, the real interplay between these tools and human behaviours and cognitive constraints often produces unexpected results. Therefore, while humans are in principle at the center of the cyber-physical convergence and thus, prospectively, of the Metaverse, the interplay between the two worlds and the technical solutions underpinning this convergence is hitherto largely unexplored and yet to be understood. This is a big gap our community should fill, in order to develop cyber-physical worlds (and the Metaverse) as a truly human-centric environment. OSNeHM's main theme will be the role of Online Social Networks in such a human-centric cyber-physical convergence leading to the Metaverse.
It will provide a forum for discussion of early yet principled approaches and results on all aspects related to this theme. A special emphasis will be devoted to the characterisation of the individual and social behaviour of humans, using OSNs as "big data microscopes" for collecting and analysing big data via robust big data analytics. Papers discussing solutions focusing on the interplay between the social and technical (online and offline) worlds will be highly welcome. On the other hand, the workshop will welcome papers proposing novel technical solutions to support human-centric approaches to the evolution of OSNs in the perspective of the Metaverse.

Specific topics of interest include, but are not limited to, the following:
* OSNeHM platforms, protocols and applications;
* OSN and Metaverse services & applications;
* Decentralised, mobile and location-based OSNeHM;
* Trust, reputation, privacy and security in OSNeHM;
* Dynamics of trends, information and opinion diffusion in OSNeHM;
* Fake news, toxicity, radicalization and disinformation in OSNeHM;
* Detecting, modeling and tackling online harms in OSNeHM;
* Recommendations and advertising in OSNeHM;
* Measurement, analysis and modeling of popular OSNs (Facebook, Twitter, Instagram, Flickr, etc.), including decentralized ones (e.g., Mastodon, Pleroma, ...);
* Data mining and machine learning in OSNeHM systems;
* Social media analysis and social analytics in the perspective of OSNeHM;
* Information extraction and search in OSNeHM;
* Complex-network analysis of OSNeHM;
* Modeling of social behavior through OSN data;
* Crowdsourcing and OSNeHM;
* Multidisciplinary applications of OSNeHM (economics, medicine, society, politics, homeland security, psychology, etc.)

PAPER FORMAT AND SUBMISSION INSTRUCTIONS
________________________________________

Papers that have been previously published or are under review for another journal, conference or workshop will not be considered for publication.
Submitted papers should not exceed 12 pages in length (maximum 8 pages for the main paper content + maximum 2 pages for appendixes + maximum 2 pages for references). Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, selecting the generic "sigconf" sample. The PDF files must have all non-standard fonts embedded. Workshop papers must be self-contained and in English. Submissions that do not follow these guidelines may be rejected without review. Further, at least one author of each accepted workshop paper has to register for the main conference. Workshop attendance is only granted for registered participants. Accepted papers will be included in the workshop proceedings, which will be published as companion proceedings of The Web Conference, and indexed according to the main conference policy.

Please follow the submission link at: https://easychair.org/conferences/?conf=thewebconf2023iwpd and select the full name of the workshop in the submission list.

AWARDS AND EDITORIAL FOLLOW-UPS
_______________________________

We will consider assigning a best paper award. We will organise a special issue of the Elsevier Online Social Networks and Media (OSNEM) Journal https://www.journals.elsevier.com/online-social-networks-and-media/ soliciting submissions of extended versions of particularly promising papers. OSNEM is a recent yet very well-reputed (Q1 SJR) journal covering, among others, 100% of the workshop topics.
IMPORTANT DATES (all deadlines are AoE)
_______________
6th February 2023: Abstract submission deadline
6th February 2023: Workshop paper submission deadline
6th March 2023: Workshop paper (acceptance) notification
20th March 2023: Workshop papers camera-ready deadline
31st March 2023: Final program (with duration) provided to Workshop Track leads
1st or 2nd May 2023: Workshops at WWW2023

ORGANISING COMMITTEE
____________________
Workshop chairs:
* Marco Conti, IIT-CNR, Italy
* Andrea Passarella, IIT-CNR, Italy
* Jussara M. Almeida, Universidade Federal de Minas Gerais, Brazil
* Arkaitz Zubiaga, Queen Mary University of London, UK

Technical Program Committee
* Virgilio Almeida - Universidade Federal de Minas Gerais, Brazil
* Chiara Boldrini - IIT-CNR, Italy
* Barbara Carminati - University of Insubria, Italy
* Ignacio Castro - Queen Mary University of London, United Kingdom
* Emilio Ferrara - University of Southern California, Los Angeles, USA
* Pan Hui - Hong Kong University of Science and Technology, Hong Kong
* Adriana Iamnitchi - Maastricht University, Netherlands
* Andreas Kaltenbrunner - Pompeu Fabra University, Spain
* Ioannis Katakis - University of Nicosia, Cyprus
* Ema Kusen - Vienna University of Economics and Business, Austria
* Haewoon Kwak - Singapore Management University, Singapore
* Lik-Hang Lee - KAIST, South Korea
* Na Li - Prairie View A&M University, USA
* Fabricio Murai - Worcester Polytechnic Institute, USA
* Paolo Rosso - Technical University of Valencia, Spain
* Daniel Sadoc - Federal University of Rio de Janeiro, Brazil
* Nishanth Sastry - University of Surrey, Guildford, United Kingdom
* Altigran Silva - Universidade Federal do Amazonas, Brazil
* Thiago Silva - Universidade Tecnológica Federal do Paraná, Brazil
* Fabrizio Silvestri - Sapienza University of Rome, Italy
* Mark Strembeck - Vienna University of Economics and Business, Austria
* Andrea Tagarelli - University of Calabria, Italy
* Panayiotis Tsaparas - University of Ioannina,
Greece
* Gareth Tyson - Hong Kong University of Science and Technology, Hong Kong
* Marco Viviani - University of Milan-Bicocca, Italy
* Ingmar Weber - Saarland University, Saarbrücken, Germany

For more information, please write to the workshop co-chairs at osnehm23 iit cnr it

From hugo.o.sousa at inesctec.pt Mon Jan 30 06:42:14 2023
From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa)
Date: Mon, 30 Jan 2023 11:42:14 +0000
Subject: Connectionists: Text2Story'23 Last Deadline Extension
Message-ID: <555ef71188194285a40694c98e1110bd@inesctec.pt>

*** Apologies for cross-posting ***

++ LAST DEADLINE EXTENSION ++

****************************************************************************
Sixth International Workshop on Narrative Extraction from Texts (Text2Story'23)
Held in conjunction with the 45th European Conference on Information Retrieval (ECIR'23)
April 2nd, 2023 - Dublin, Ireland
Website: https://text2story23.inesctec.pt
****************************************************************************

++ Important Dates ++
- Submission Deadline (extended): February 6th, 2023 (previously January 30th, 2023)
- Acceptance Notification Date: March 3rd, 2023
- Camera-ready copies: March 17th, 2023
- Workshop: April 2nd, 2023

++ Overview ++

Recent years have shown a stream of continuously evolving information, making it unmanageable and time-consuming for an interested reader to track, process, and keep up with all the essential information and the various aspects of a story. Automated narrative extraction from text offers a compelling approach to this problem. It involves identifying the subset of interconnected raw documents, extracting the critical narrative story elements, and representing them in an adequate final form (e.g., timelines) that conveys the key points of the story in an easy-to-understand format.
Although information extraction and natural language processing have made significant progress towards the automatic interpretation of texts, the problem of automated identification and analysis of the different elements of a narrative present in a document (set) still presents significant unsolved challenges.

++ List of Topics ++

In the sixth edition of the Text2Story workshop, we aim to bring to the forefront the challenges involved in understanding the structure of narratives and in incorporating their representation in well-established models, as well as in modern architectures (e.g., transformers) which are now common and form the backbone of almost every IR and NLP application. It is hoped that the workshop will provide a common forum to consolidate the multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task. In this regard, we encourage the submission of high-quality and original submissions covering the following topics:

* Narrative Representation Models
* Story Evolution and Shift Detection
* Temporal Relation Identification
* Temporal Reasoning and Ordering of Events
* Causal Relation Extraction and Arrangement
* Narrative Summarization
* Multi-modal Summarization
* Automatic Timeline Generation
* Storyline Visualization
* Comprehension of Generated Narratives and Timelines
* Big Data Applied to Narrative Extraction
* Personalization and Recommendation of Narratives
* User Profiling and User Behavior Modeling
* Sentiment and Opinion Detection in Texts
* Argumentation Analysis
* Bias Detection and Removal in Generated Stories
* Ethical and Fair Narrative Generation
* Misinformation and Fact Checking
* Bots Influence
* Narrative-focused Search in Text Collections
* Event and Entity Importance Estimation in Narratives
* Multilinguality: Multilingual and Cross-lingual Narrative Analysis
* Evaluation Methodologies for Narrative Extraction
* Resources and Dataset Showcase
* Dataset Annotation
for Narrative Generation/Analysis
* Applications in Social Media (e.g. narrative generation during a natural disaster)
* Language Models and Transfer Learning in Narrative Analysis
* Narrative Analysis in Low-resource Languages
* Text Simplification

++ Dataset ++

We challenge interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset (published at ECIR'21) under the scope and purposes of the Text2Story workshop. tls-covid19 consists of a number of curated topics related to the Covid-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold standard. While it was designed to support timeline summarization research tasks, it can also be used for other tasks, including the study of news coverage about the COVID-19 pandemic. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19. The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33

++ Submission Guidelines ++

We invite two kinds of submissions:

* Full papers (up to 7 pages + references): Original and high-quality unpublished contributions on the theory and practical aspects of the narrative extraction task. Full papers should introduce existing approaches and describe the methodology and the experiments conducted in detail. Negative result papers that highlight tested hypotheses that did not get the expected outcome are also welcome.
* Work in progress, demos and dissemination papers (up to 4 pages + references): unpublished short papers describing work in progress; demo and resource papers presenting research/industrial prototypes, datasets or software packages; position papers introducing a new point of view, a research vision or a reasoned opinion on the workshop topics; and dissemination papers describing project ideas, ongoing research lines, case studies or summarized versions of previously published papers in high-quality conferences/journals that are worth sharing with the Text2Story community, but where novelty is not a fundamental issue.

Submissions will be peer-reviewed by at least two members of the programme committee. The accepted papers will appear in the proceedings published at CEUR workshop proceedings (indexed in Scopus and DBLP) as long as they don't conflict with previous publication rights.

++ Workshop Format ++

Participants of accepted papers will be given 15 minutes for oral presentations.

++ Invited Speakers ++

Structured Summarisation of News at Scale
Speaker: Georgiana Ifrim, University College Dublin, Ireland

Abstract: Facilitating news consumption at scale is still quite challenging. Some research effort has focused on coming up with useful structures for facilitating news navigation for humans, but benchmarks and objective evaluation of such structures are not common. One area that has progressed recently is news timeline summarisation. In this talk, we present some of our work on long-range, large-scale news timeline summarisation. Timelines present the most important events of a topic linearly in chronological order and are commonly used by news editors to organise long-ranging topics for news consumers. Tools for automatic timeline summarisation can address the cost of manual effort and the infeasibility of manually covering many topics, over long time periods and massive news corpora.
In this talk, we first compare different high-level approaches to timeline summarisation, identify the modules and features important for this task, and present new state-of-the-art results with a simple new method. We provide several examples of automatic timelines and present both a quantitative and qualitative analysis of these structured news summaries. Most of our tools and datasets are available online on GitHub.

Bio: Dr. Georgiana Ifrim is an Associate Professor at the School of Computer Science, UCD, co-lead of the SFI Centre for Research Training in Machine Learning (ML-Labs) and SFI Funded Investigator with the Insight Centre for Data Analytics and VistaMilk SFI Centre. Dr. Ifrim holds a PhD and MSc in Machine Learning from the Max Planck Institute for Informatics, Germany, and a BSc in Computer Science from the University of Bucharest, Romania. Her research focuses on effective approaches for large-scale sequence learning, time series classification, and text mining. She has published more than 50 peer-reviewed articles in top-ranked international journals and conferences and regularly holds senior positions in the program committees for IJCAI, AAAI, and ECML-PKDD, as well as being a member of the editorial board of the Machine Learning Journal, Springer.

Creating and Visualising Semantic Story Maps
Speaker: Valentina Bartalesi, CNR-ISTI, Italy

Abstract: A narrative is a conceptual basis of collective human understanding. Humans use stories to represent characters' intentions, feelings, and the attributes of objects and events. A widely-held thesis in psychology to justify the centrality of narrative in human life is that humans make sense of reality by structuring events into narratives. Therefore, narratives are central to human activity in cultural, scientific, and social areas. Story maps are computer science realizations of narratives based on maps.
They are online interactive maps enriched with text, pictures, videos, and other multimedia information, whose aim is to tell a story over a territory. This talk presents a semi-automatic workflow that, using a CRM-based ontology and Semantic Web technologies, produces semantic narratives in the form of story maps (and timelines as an alternative representation) from textual documents. An expert user first assembles one territory-contextual document containing text and images. Then, automatic processes use natural language processing and Wikidata services to (i) extract entities and geospatial points of interest associated with the territory, (ii) assemble a logically-ordered sequence of events that constitute the narrative, enriched with entities and images, and (iii) openly publish online semantic story maps and an interoperable Linked Open Data-compliant knowledge base for event exploration and inter-story correlation analyses. Once the story maps are published, the users can review them through a user-friendly web tool. Overall, our workflow complies with the Open Science directives of open publication and multi-discipline support and is appropriate to convey "information going beyond the map" to scientists and the general public. As demonstrations, the talk will show workflow-produced story maps representing (i) 23 European rural areas across 16 countries, their value chains and territories, (ii) a Medieval journey, and (iii) the history of the legends, biological investigations, and AI-based modelling for habitat discovery of the giant squid Architeuthis dux.

Bio: Valentina Bartalesi Lenzi is a researcher at CNR-ISTI and an external professor of Semantic Web in the Computer Science master's degree course at the University of Pisa. She earned her PhD in Information Engineering from the University of Pisa and graduated in Digital Humanities from the University of Pisa.
Her research fields mainly concern Knowledge Representation, Semantic Web technologies, and the development of formal ontologies for representing textual content and narratives. She has participated in several European and National research projects, including MINGEI, PARTHENOS, E-RIHS PP, and IMAGO. She is the author of over 50 peer-reviewed articles in national and international conferences and scientific journals.

++ Organizing Committee ++
Ricardo Campos (INESC TEC; Ci2 - Smart Cities Research Center, Polytechnic Institute of Tomar, Tomar, Portugal)
Alípio M. Jorge (INESC TEC; University of Porto, Portugal)
Adam Jatowt (University of Innsbruck, Austria)
Sumit Bhatia (Media and Data Science Research Lab, Adobe)
Marina Litvak (Shamoon Academic College of Engineering, Israel)

++ Proceedings Chair ++
João Paulo Cordeiro (INESC TEC & Universidade da Beira Interior)
Conceição Rocha (INESC TEC)

++ Web and Dissemination Chair ++
Hugo Sousa (INESC TEC & University of Porto)
Behrooz Mansouri (Rochester Institute of Technology)

++ Program Committee ++
Álvaro Figueira (INESC TEC & University of Porto)
Andreas Spitz (University of Konstanz)
Antoine Doucet (Université de La Rochelle)
António Horta Branco (University of Lisbon)
Arian Pasquali (Faktion AI)
Bart Gajderowicz (University of Toronto)
Begoña Altuna (Universidad del País Vasco)
Brenda Santana (Federal University of Rio Grande do Sul)
Bruno Martins (IST & INESC-ID, University of Lisbon)
Daniel Loureiro (Cardiff University)
Dennis Aumiller (Heidelberg University)
Dhruv Gupta (Norwegian University of Science and Technology)
Dyaa Albakour (Signal UK)
Evelin Amorim (INESC TEC)
Henrique Cardoso (INESC TEC & University of Porto)
Ismail Altingovde (Middle East Technical University)
João Paulo Cordeiro (INESC TEC & University of Beira Interior)
Kiran Bandeli (Walmart Inc.)
Luca Cagliero (Politecnico di Torino)
Ludovic Moncla (INSA Lyon)
Marc Finlayson (Florida International University)
Marc Spaniol (Université
de Caen Normandie)
Moreno La Quatra (Politecnico di Torino)
Nuno Guimarães (INESC TEC & University of Porto)
Pablo Gamallo (University of Santiago de Compostela)
Pablo Gervás (Universidad Complutense de Madrid)
Paulo Quaresma (Universidade de Évora)
Paul Rayson (Lancaster University)
Raghav Jain (Indian Institute of Technology, Patna)
Ross Purves (University of Zurich)
Satya Almasian (Heidelberg University)
Sérgio Nunes (INESC TEC & University of Porto)
Simra Shahid (Adobe's Media and Data Science Research Lab)
Sriharsh Bhyravajjula (University of Washington)
Udo Kruschwitz (University of Regensburg)
Veysel Kocaman (John Snow Labs & Leiden University)

++ Contacts ++
Website: https://text2story23.inesctec.pt
For general inquiries regarding the workshop, reach the organizers at: text2story2023 at easychair.org

________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From info at icas.cc Mon Jan 30 13:14:16 2023
From: info at icas.cc (ICAS Organizing Committee)
Date: Mon, 30 Jan 2023 19:14:16 +0100
Subject: Connectionists: Lecturers @ ACDL 2023, 6th Advanced Course on Data Science & Machine Learning | June 10-14, 2023 | Riva del Sole Resort & SPA - Italy Early Registration: by February 23
Message-ID:

* Apologies for multiple copies. Please forward to anybody who might be interested *

#ACDL2023, An Interdisciplinary Course: #BigData, #DeepLearning & #ArtificialIntelligence without Borders
Riva del Sole Resort & SPA - Tuscany, Italy, June 10-14
https://acdl2023.icas.cc
acdl at icas.cc

EARLY REGISTRATION: by February 23
https://acdl2023.icas.cc/registration/

Oral Presentation Submission Deadline: by February 23 (AoE)

LECTURERS: Each Lecturer will hold up to four lectures on one or more research topics.
https://acdl2023.icas.cc/lecturers/
Lucas Beyer, Google Brain, Zürich, Switzerland
Aakanksha Chowdhery, Google Brain, USA
Thomas Kipf, Google Brain, USA
Pushmeet Kohli, DeepMind, London, UK
Yi Ma, University of California, Berkeley, USA
Panos Pardalos, University of Florida, USA
Zoltan Szabo, LSE, London, UK

TUTORIAL SPEAKERS: Each Tutorial Speaker will hold more than four lessons on one or more research topics.
Raphaël Berthier, EPFL, Switzerland
Bruno Loureiro, École Normale Supérieure, France
Varun Ojha, Newcastle University, UK
https://acdl2023.icas.cc/lecturers/

PAST LECTURERS: https://acdl2023.icas.cc/past-lecturers/

ACDL 2023 VENUE:
Riva del Sole Resort & SPA
Località Riva del Sole - Castiglione della Pescaia (Grosseto)
CAP 58043 - Tuscany - Italy
p: +39-0564-928111
f: +39-0564-935607
e: events at rivadelsole.it
w: www.rivadelsole.it
https://acdl2023.icas.cc/venue/

PAST EDITIONS: https://acdl2023.icas.cc/past-editions/

REGISTRATION: https://acdl2023.icas.cc/registration/

CERTIFICATE & 8 ECTS: PhD students, PostDocs, Industry Practitioners, and Junior and Senior Academics will be typical profiles of the ACDL attendants. The Course will involve a total of 36-40 hours of lectures; according to the academic system, the final achievement will be equivalent to 8 ECTS points for the PhD students (and some strongly motivated master students) attending the Course. At the end of the course, a formal certificate will be delivered indicating the 8 ECTS points.

Anyone interested in participating in ACDL 2023 should register as soon as possible. The same holds for accommodation at the Riva del Sole Resort & SPA (the Course Venue): book your full board accommodation at the Riva del Sole Resort & SPA as soon as possible. All course participants must stay at the Riva del Sole Resort & SPA.

See you in Riva del Sole in June!
ACDL 2023 Directors.
acdl at icas.cc
https://acdl2023.icas.cc
https://www.facebook.com/groups/204310640474650/

* Apologies for multiple copies.
Please forward to anybody who might be interested *

*6th Advanced Course on Data Science & Machine Learning - ACDL2023*
10-14 June, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) - Tuscany, Italy
An Interdisciplinary Course: Big Data, Deep Learning & AI without Borders
Early Registration: by Feb 23 (AoE).
W: https://acdl2023.icas.cc/
E: acdl at icas.cc
FB: https://www.facebook.com/groups/204310640474650/
T: https://twitter.com/TaoSciences
The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course.

*9th International Conference on Machine Learning, Optimization & Data Science - LOD 2023 (TBA)*
Paper Submission Deadline: March 23 (AoE).
lod at icas.cc

*ACAIN 2023, the 3rd International Advanced Course & Symposium on Artificial Intelligence & Neuroscience (TBA)*
Early Registration (Course): by Feb 23 (AoE)
W: TBA
E: acain at icas.cc
FB: https://www.facebook.com/ACAIN-Int-Advanced-Course-Symposium-on-AI-Neuroscience-100503321621692/
The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vladan at temple.edu Mon Jan 30 22:54:33 2023
From: vladan at temple.edu (Vladan Radosavljevic)
Date: Tue, 31 Jan 2023 03:54:33 +0000
Subject: Connectionists: 2nd CfP - Workshop on Machine Learning for Streaming Media (ML4SM) at The WebConf2023
Message-ID:

Call for Papers: Workshop on Machine Learning for Streaming Media at The WebConf2023
Austin, Texas, USA, April 30, 2023.
https://ml4streamingmedia-workshop.github.io/www/index.html

Submission deadline: 6th of February 2023

Streaming media have been seeing massive year-over-year growth in consumption hours recently. For many people, streaming services have become part of everyday life, and accessing and consuming media content via streaming is now the norm for people of all ages.
Powered by Machine Learning (ML) algorithms, streaming services are becoming one of the most visible and impactful applications of ML that directly interact with people and influence their lives. Despite the rapid growth of streaming services, the research discussions around ML for streaming media remain fragmented across different conferences and workshops. Also, the gap between academic research and the constraints and requirements in industry limits the broader impact of many contributions from academia. Therefore, we believe that there is an urgent need to: (i) build connections and bridge the gap by bringing together researchers and practitioners from both academia and industry working on these problems, (ii) attract ML researchers from other areas to streaming media problems, and (iii) bring up the pain points and battle scars in industry to which academic researchers can pay more attention. With this motivation in mind, we are organizing a workshop on Machine Learning for Streaming Media in conjunction with the WebConf 2023.

We invite quality research contributions, including original research, preliminary research results, and proposals for new work, to be submitted. All submitted papers will be peer-reviewed by the program committee and judged for their relevance to the workshop, especially to the topics identified below, and their potential to generate discussion. Accepted submissions will be presented at the workshop and will be published in the companion (workshop) proceedings of the WebConf 2023. We welcome research that has been previously published or is under review elsewhere. Such articles should be clearly identified at the time of submission and will not be published in the proceedings.
Workshop Topics

The main topics we would like to consider for this workshop are:

* Content Understanding
* Multimodal representation
* Feature extraction for audio, video, and image content
* Knowledge Graph generation for streaming media
* Semi-supervised learning for content understanding
* Metadata enrichment for music, podcast, video catalog
* Search and recommendation for streaming media
* Named entity recognition (e.g. identifying celebrities, hosts, artists)
* Conversational systems
* Reward modeling and shaping
* Item cold start problems and challenges
* Designing scalable ML systems
* Heterogeneous content recommendation
* Learning to rank
* Transfer learning
* Explainable recommendations
* Representation learning
* Graph learning algorithms for streaming media
* Measurement, Metrics & Evaluation
* Evaluation methodologies for streaming media search and recommendations
* Methodologies for valuation of content
* Measuring business impact of recommendation systems
* Life-time value modeling
* Churn prediction & retention modeling
* User Studies & Human-In the Loop
* User studies on real-world recommenders
* Human-In the loop recommendations
* Mixed methods research
* User studies on preference elicitation
* Trust, Safety & Algorithmic Fairness
* Identifying misinformation and disinformation
* Algorithmic fairness in recommendations
* Hate-speech and fake news detection
* Content moderation
* Societal impact of recommendation systems for streaming media
* Machine learning to optimize streaming quality of experience

Important Dates

* Submission deadline: 6th of February 2023
* Author notification: 6th of March 2023
* Camera-ready version deadline: 20th of March 2023
* Workshop: 30th of April 2023

All deadlines are 11:59 pm, Anywhere on Earth (AoE).

Submission Instructions

Submission link: https://easychair.org/conferences/conf=thewebconf2023iwpd

Formatting Instructions

Submissions should not exceed six pages in length (including appendices and references).
Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, selecting the generic "sigconf" sample. The PDF files must have all non-standard fonts embedded. Workshop papers must be self-contained and in English.

Registration and Attendance

At least one author of each accepted workshop paper has to register for the main conference. Workshop attendance is only granted for registered participants.

Workshop Organizers

* Sudarshan Lamkhede - Manager, Machine Learning - Search and Recommendations, Netflix Research
* Praveen Chandar - Staff Research Scientist, Spotify
* Vladan Radosavljevic - Machine Learning Engineering Manager, Spotify
* Amit Goyal - Senior Applied Scientist, Amazon Music
* Lan Luo - Associate Professor of Marketing, University of Southern California

If you have any questions, please do not hesitate to reach out to the workshop organizers via organizers-ml4sm at googlegroups dot com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ludovico.montalcini at gmail.com Tue Jan 31 03:20:40 2023
From: ludovico.montalcini at gmail.com (Ludovico Montalcini)
Date: Tue, 31 Jan 2023 09:20:40 +0100
Subject: Connectionists: Lecturers @ ACDL 2023, 6th Advanced Course on Data Science & Machine Learning | June 10-14, 2023 | Riva del Sole Resort & SPA - Italy Early Registration: by February 23
Message-ID:

* Apologies for multiple copies. Please forward to anybody who might be interested *

#ACDL2023, An Interdisciplinary Course: #BigData, #DeepLearning & #ArtificialIntelligence without Borders Riva del Sole Resort & SPA - Tuscany, Italy, June 10-14 https://acdl2023.icas.cc acdl at icas.cc

EARLY REGISTRATION: by February 23 https://acdl2023.icas.cc/registration/ Oral Presentation Submission Deadline: by February 23 (AoE)

LECTURERS: Each Lecturer will hold up to four lectures on one or more research topics.
https://acdl2023.icas.cc/lecturers/
Luca Beyer, Google Brain, Zürich, Switzerland
Aakanksha Chowdhery, Google Brain, USA
Thomas Kipf, Google Brain, USA
Pushmeet Kohli, DeepMind, London, UK
Yi Ma, University of California, Berkeley, USA
Panos Pardalos, University of Florida, USA
Alex Smola, Amazon, USA (tbc)
Zoltan Szabo, LSE, London, UK

TUTORIAL SPEAKERS: Each Tutorial Speaker will hold more than four lessons on one or more research topics.
Raphaël Berthier, EPFL, Switzerland
Bruno Loureiro, École Normale Supérieure, France
Varun Ojha, Newcastle University, UK
https://acdl2023.icas.cc/lecturers/

PAST LECTURERS: https://acdl2023.icas.cc/past-lecturers/

ACDL 2023 VENUE: Riva del Sole Resort & SPA, Località Riva del Sole – Castiglione della Pescaia (Grosseto), CAP 58043 – Tuscany – Italy p: +39-0564-928111 f: +39-0564-935607 e: events at rivadelsole.it w: www.rivadelsole.it https://acdl2023.icas.cc/venue/

PAST EDITIONS: https://acdl2023.icas.cc/past-editions/

REGISTRATION: https://acdl2023.icas.cc/registration/

CERTIFICATE & 8 ECTS: PhD students, PostDocs, Industry Practitioners, and Junior and Senior Academics will be the typical profiles of the ACDL attendants. The Course will involve a total of 36-40 hours of lectures; according to the academic system, the final achievement will be equivalent to 8 ECTS points for the PhD Students (and some strongly motivated Master's students) attending the Course. At the end of the course, a formal certificate will be delivered indicating the 8 ECTS points. Anyone interested in participating in ACDL 2023 should register as soon as possible. Similarly, book your full-board accommodation at the Riva del Sole Resort & SPA (the Course Venue) as soon as possible. All course participants must stay at the Riva del Sole Resort & SPA. See you in Riva del Sole in June! ACDL 2023 Directors.
acdl at icas.cc https://acdl2023.icas.cc https://www.facebook.com/groups/204310640474650/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From suashdeb at gmail.com Tue Jan 31 04:47:26 2023
From: suashdeb at gmail.com (Suash Deb)
Date: Tue, 31 Jan 2023 15:17:26 +0530
Subject: Connectionists: Final Extension of Deadline (ISMSI23)
Message-ID:

Dear esteemed colleagues, This is to let you know that, in view of numerous requests from our peers, the deadline for submission of manuscripts for the 2023 7th ISMSI has been extended for a final time, until 10th March 2023. As intimated previously, the conference had to be converted into a virtual format due to the recent surge in Coronavirus cases in China. For more information, please
visit the conference website http://ismsi.org. Thank you, and we look forward to receiving your manuscripts in the coming days. With kind regards, Suash Deb General Chair, ISMSI 2023
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hussain.doctor at gmail.com Tue Jan 31 04:55:55 2023
From: hussain.doctor at gmail.com (Amir Hussain)
Date: Tue, 31 Jan 2023 09:55:55 +0000
Subject: Connectionists: Call for Papers: The first 2023 IEEE ICASSP Workshop on Advances in Multi-modal Hearing Assistive Technologies (AMHAT)
Message-ID:

Dear all - Please see the CFP below and help circulate it, many thanks, Amir

*Call for Papers: The first 2023 ICASSP Workshop on Advances in Multi-modal Hearing Assistive Technologies (AMHAT): https://cogmhear.org/amhat2023/ *

*Update:* ICASSP Workshop paper submissions are now open! AMHAT 2023 is accepting both *short (2-page)* and *long (4-page) paper submissions* on a range of topics of interest - see details below and the Workshop website.

*AMHAT Workshop Paper Submission Link:* All accepted AMHAT papers will appear in *IEEE Xplore*. Please use the *ICASSP 2023 CMT Submission link below* and select "AMHAT 2023: Advances in Multi-Modal Hearing Assistive Technology" as the target track. https://cmt3.research.microsoft.com/ICASSP2023/Submission/Index

*About AMHAT 2023*

Hearing loss affects 1.5 billion people globally and is associated with poorer health and social outcomes. Recent technological advances have enabled low-latency, high data-rate wireless solutions for in-ear hearing assistive devices, which have reshaped the direction of innovation in the hearing industry. Nevertheless, even sophisticated commercial hearing aids and cochlear-implant devices are based on audio-only processing, and remain ineffective in restoring speech intelligibility in overwhelmingly noisy environments.
Human performance in such situations is known to depend on input from both the aural and visual senses, which is then combined by sophisticated multi-level integration strategies in the brain. Thanks to advances in miniaturized sensors and embedded low-power technology, we now have the potential to monitor not only sound but also many other parameters, such as visual input, to improve speech intelligibility. Creating future transformative multimodal hearing assistive technologies that draw on the cognitive principles of normal (visually-assisted) hearing raises a range of formidable technical, privacy and usability challenges, which need to be overcome holistically. The AMHAT Workshop aims to provide an interdisciplinary forum for the wider speech signal processing, artificial intelligence, wireless sensing and communications, and hearing technology communities to discuss the latest advances in this emerging field, and to stimulate innovative research directions, including future challenges and opportunities.
*Topics of interest*

The Workshop invites authors to submit short (2-page) and long (4-page) papers presenting novel research related to all aspects of multi-modal hearing assistive technologies, including, but not limited to, the following:

- Novel explainable and privacy-preserving machine learning and statistical model-based approaches to multi-modal speech-in-noise processing
- End-to-end real-time, low-latency and energy-efficient audio-visual speech enhancement and separation methods
- Human auditory-inspired models of multi-modal speech perception and enhancement
- Internet of Things (IoT), 5G/6G and wireless sensing enabled approaches to multi-modal hearing assistive technologies
- Multi-modal speech enhancement and separation in AR/VR environments
- Innovative binaural and multi-microphone approaches, including MEMS antenna integration and multi-modal beamforming
- Cloud, Edge and System-on-Chip based software and hardware implementations
- New multi-modal speech intelligibility models for normal and hearing-impaired listeners
- Audio-visual speech quality and intelligibility assessment and prediction techniques for multi-modal hearing assistive technologies
- Demonstrators of multi-modal speech-enabled hearing assistive technology use cases (e.g. multi-modal listening and communication devices)
- Accessibility and human-centric factors in the design and evaluation of multi-modal hearing assistive technology, including public perceptions, ethics, standards, societal, economic and political impacts
- Contextual (e.g. user preference and cognitive load-aware) multi-modal hearing assistive technologies
- Innovative applications of multi-modal hearing assistive technologies (e.g. diagnostics, therapeutics, human-robot interaction, sign-language recognition for aided communication)
- Live demonstrators of multi-modal speech-enabled hearing assistive technology use cases (e.g.
multi-modal cochlear implants and listening and communication devices)

*Important Dates*

Workshop Paper Submission Deadline: 24 February 2023
Workshop Paper Acceptance Notification: 14 April 2023
Workshop Camera-Ready Paper Deadline: 28 April 2023

*Paper submission:*

Oral presentations and posters (with or without demonstrations) have equal status, and authors are encouraged to suggest the presentation format best suited to communicating their ideas. Papers should contain a description of ideas and applicable research results in a minimum of 2 pages (for short papers) and a maximum of 4 pages (for long papers) of technical content, including figures and possible references. An optional additional 5th page may be included containing only references. All AMHAT workshop papers will appear in IEEE Xplore. Paper submissions and the reviewing process will be conducted through the ICASSP 2023 paper management system (Microsoft CMT); please be sure to select AMHAT 2023 at the time of submission and indicate whether you prefer an oral presentation or a poster: https://cmt3.research.microsoft.com/ICASSP2023/Submission/Index

*Workshop Chairs*
Amir Hussain, Edinburgh Napier University, UK
Mathini Sellathurai, Heriot-Watt University, UK
Peter Bell, University of Edinburgh, UK
Katherine August, Stevens Institute of Technology, USA

*Steering Committee Chairs*
John Hansen, University of Texas at Dallas, USA
Naomi Harte, Trinity College Dublin, Ireland
Michael Akeroyd, University of Nottingham, UK

Thank you in advance
---
Professor Amir Hussain
Edinburgh Napier University, Edinburgh EH10 5DT, Scotland, UK
E-mail: a.hussain at napier.ac.uk
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mtkostecki at gmail.com Tue Jan 31 07:40:06 2023
From: mtkostecki at gmail.com (Mateusz Kostecki)
Date: Tue, 31 Jan 2023 13:40:06 +0100
Subject: Connectionists: School of Ideas in Neuroscience 2023
Message-ID:

Hello! We are happy to announce the 2023 edition of the School of Ideas in Neuroscience!

It is often said that neuroscience desperately needs ideas. We are flooded with data, and it is harder and harder to make sense of it. At the same time, most courses and workshops in neuroscience teach us experimental techniques, giving us more tools to gather even more data. But we are not taught how to develop ideas. We believe that theoretical thinking and idea development is a skill – a skill that can be trained and developed. Our school aims to teach how to develop theoretical thinking, how to create ideas, and how to put research in a broader context. We want you to learn how to make sense of data.

In the *first section*, selected *theoretical frameworks* serve as examples – together with their authors, we discuss how different theoretical frameworks can be applied to empirical research, but we also trace how they were developed. In the *metatheory section*, together with philosophers and historians of science, we look at how ideas were and are developed, and what factors foster their evolution. In the *thinking tools section*, we try to teach our participants how to think theoretically by building models, transgressing the borders of their discipline, and thinking creatively and critically – and how to find time to think.

Our guests are *John Krakauer, Adrienne Fairhall, Luiz Pessoa, Nedah Nemati, Pamela Lyon, Aikaterini Fotopoulou, Carina Curto, Kate Nave, Gregory Kohn, Antonella Tramacere, Marcin Miłkowski, Wiktor Młynarski.*

Please find more info and the registration form here - *https://nenckiopenlab.org/school-of-ideas-2023/ *

See you in Warsaw! Mateusz Kostecki
-------------- next part --------------
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: plakat_internet_sin_2023.png Type: image/png Size: 7432889 bytes Desc: not available URL: From i.lin at ucl.ac.uk Tue Jan 31 06:52:35 2023 From: i.lin at ucl.ac.uk (Lin, I-Chun) Date: Tue, 31 Jan 2023 11:52:35 +0000 Subject: Connectionists: Postdoctoral fellowship at the Gatsby Unit Message-ID: <5206434F-70CA-4758-9825-968EFE6A287F@ucl.ac.uk> The Gatsby Computational Neuroscience Unit invites applications for a postdoctoral Training Fellowship under the guidance of Professor Maneesh Sahani and Professor Jennifer Linden (UCL Ear Institute). You will focus on an investigation of the computational principles that underlie the formation of perceptual representations. The goal is to apply state-of-the-art unsupervised inferential approaches to learn models of acoustic environments, and evaluate the fidelity with which they reproduce neural recordings in the auditory cortex from animals raised in either routine or altered acoustic settings. Deadline: 28 February 2023. For detailed information on the role and how to apply, visit www.ucl.ac.uk/gatsby/vacancies About the Gatsby Unit Established in 1998 through a generous grant from the Gatsby Charitable Foundation, the Gatsby Unit has been a pioneering centre for research in theoretical neuroscience and machine learning. The Unit provides a unique multidisciplinary research environment with strong links to the Sainsbury Wellcome Centre for Neural Circuits and Behaviour, the ELLIS Unit at UCL, and other neuroscience and machine learning groups at UCL and beyond. -- I-Chun Lin, PhD Scientific Programme Manager Gatsby Computational Neuroscience Unit, UCL -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From danko.nikolic at gmail.com Tue Jan 31 12:43:30 2023
From: danko.nikolic at gmail.com (Danko Nikolic)
Date: Tue, 31 Jan 2023 18:43:30 +0100
Subject: Connectionists: Transient subnetwork selection: a new paradigm to replace connectionism?
Message-ID:

Dear all,

I am happy to announce that my paper, a draft of which has been discussed on this list, was finally published yesterday after peer review. This is probably the most important paper I have done in my career so far.

To remind you, the paper proposes a new paradigm as an alternative to connectionism. To understand the mind, synapses are no longer so important. Instead, what is critical are certain other types of proteins on the neural membrane. These proteins have the capability to transiently select subnetworks that will be functional in the next few seconds or minutes. The paradigm proposes that cognition emerges from those transient subnetwork selections (and not from the network computations of the classical, so-called connectionist paradigm). The proteins in question are metabotropic receptors and G-protein-gated ion channels. Simply put, we think with those proteins. The result of a thought is a new state of network pathways, not the activity of neurons.

I would like to thank the list for the many comments that I received and that helped me improve the manuscript. For example, very useful was the information on learning algorithms able to learn the n-bit parity problem (a.k.a. generalized XOR), which I used to illustrate the scaling problems of deep learning. This made my supplementary materials much better.

The paper can be downloaded without a paywall for 50 days, here: https://authors.elsevier.com/a/1gVvg5Fq7aXeir

The new version of the paper is much better than the original draft. It has more information, clearer explanations and an improved structure. I hope the paper inspires people to investigate possibilities beyond connectionism, both for understanding the brain and for building AI.
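For readers unfamiliar with the n-bit parity (generalized XOR) task mentioned above, here is a minimal, illustrative Python sketch of the problem setup; the function names are mine, not from the paper:

```python
from itertools import product

def parity(bits):
    """Label a bit tuple 1 if it contains an odd number of 1s, else 0."""
    return sum(bits) % 2

def parity_dataset(n):
    """Enumerate all 2**n binary inputs of length n with their parity labels."""
    return [(bits, parity(bits)) for bits in product((0, 1), repeat=n)]

# Flipping any single input bit always flips the label, so no proper subset
# of the bits carries information about the output on its own -- this is
# what makes parity a classic stress test for learning systems as n grows.
for bits, label in parity_dataset(3):
    print(bits, label)
```

The doubling of the input space with each added bit, combined with the all-or-nothing dependence on every input, is what the email uses to illustrate the scaling problems of deep learning.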
For myself, I would love to build an AI based on these principles.

Thanks a lot.

Danko

Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
-- I wonder, how is the brain able to generate insight? --
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maanakg at gmail.com Tue Jan 31 22:36:03 2023
From: maanakg at gmail.com (Maanak Gupta)
Date: Tue, 31 Jan 2023 21:36:03 -0600
Subject: Connectionists: Call for Papers: ACM Symposium on Access Control Models and Technologies
Message-ID:

ACM SACMAT 2023 June 7-9, 2023 - Trento, Italy http://www.sacmat.org
-----------------------------------------------
The ACM Symposium on Access Control Models and Technologies (SACMAT) is the premier forum for the presentation of research results and experience reports on leading-edge issues of access control, including models, systems, applications, and theory. The aims of the symposium are to share novel access control solutions that fulfill the needs of heterogeneous applications and environments, and to identify new directions for future research and development. SACMAT provides researchers and practitioners with a unique opportunity to share their perspectives with others interested in the various aspects of access control.

Topics of Interest
==============================================================
Submissions covering any relevant area of access control are welcomed.
Areas include, but are not limited to, the following:

* Authentication:
** Biometric-based Authentication
** Identity management
** Location-based Authentication
** Password-based Authentication
** Usable authentication
* Data Security:
** Big data
** Data leakage prevention
** Data protection on untrusted infrastructure
** Databases and data management
* Mechanisms:
** AI/ML Technologies
** Blockchain Technologies
** Cryptographic Technologies
** Economic models and game theory
** Hardware-security Technologies (e.g., Intel SGX, ARM TrustZone)
** Programming-language based Technologies
** Trust Management
** Usable mechanisms
* Network:
** Corporate and Military-grade Networks
** Network systems (e.g., Software-defined network, Network function virtualization)
** Opportunistic Network (e.g., delay-tolerant network, P2P)
** Overlay Network
** Satellite Network
** Wireless and Cellular Networks
* Policies and Models:
** Analysis of Models
** Analysis of policy languages
** Efficient enforcement of policies
** Extension of Models
** Extension of policy languages
** New Access Control Models
** Novel policy language design
** Policy engineering and policy mining
** Usable access control policy
** Verification of policy languages
* Privacy and Privacy-enhancing Technologies:
** Access control and identity management with privacy
** Anonymous communication and censorship resistance
** Anonymous protocols (e.g., Tor)
** Attacks on Privacy and their defenses
** Cryptographic tools for privacy
** Data protection technologies
** Mixers and Mixnets
** Online social networks (OSN)
* Systems:
** Autonomous systems (e.g., UAV security, autonomous vehicles, etc)
** Cloud systems and their security
** Cyber-physical and Embedded systems
** Design for resiliency
** Designing systems with zero-trust architectures
** Distributed systems
** Fog and Edge-computing systems
** IoT systems (e.g., home-automation systems)
** Mobile systems
** Operating systems
** WWW

Call for Research
Papers
==============================================================
Papers offering novel research contributions are solicited for submission. Accepted papers will be presented at the symposium and published by the ACM in the symposium proceedings. We also encourage submissions to the "Work-in-progress Track" to present ideas that may not have been completely developed and experimentally evaluated. In addition to the regular research track, this year SACMAT will again host the special track "Blue Sky/Vision Track". Researchers are invited to submit papers describing promising new ideas and challenges of interest to the community, as well as access control needs emerging from other fields. We are particularly looking for potentially disruptive new ideas that can shape the research agenda for the next 10 years.

Paper Submission and Format
-------------------------
** Regular Track Paper **
Papers must be written in English. Authors are required to use the ACM format for papers, using the two-column SIG Proceedings Template (the sigconf template for LaTeX) available at the following link: https://www.acm.org/publications/authors/submissions The length of the paper in the proceedings format must not exceed twelve US letter pages formatted for 8.5" x 11" paper and be no more than 5MB in size. It is the responsibility of the authors to ensure that their submission will print easily on simple default configurations. The submission must be anonymous, so information that might identify the authors - including author names, affiliations, acknowledgements, or obvious self-citations - must be excluded. It is the authors' responsibility to ensure that their anonymity is preserved when citing their own work. Submissions should be made by the paper submission deadline to the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023). When submitting papers, please pay attention to the submission cycle you are submitting in.
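For orientation, an anonymized sigconf submission typically starts from a skeleton roughly like the following (a sketch only, with placeholder text; consult the ACM link above for the authoritative template and class options):

```latex
\documentclass[sigconf,anonymous]{acmart}
\begin{document}
\title{Your SACMAT 2023 Submission Title}
% Author blocks are suppressed by the "anonymous" option.
\begin{abstract}
Abstract text goes here.
\end{abstract}
\maketitle
\section{Introduction}
Body text goes here.
\end{document}
```

Note that in the acmart class the abstract environment is placed before \maketitle, unlike in many other LaTeX classes.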
All submissions must contain a significant original contribution. That is, submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal, conference, or workshop. In particular, simultaneous submission of the same work is not allowed. Wherever appropriate, relevant related work, including that of the authors, must be cited. Submissions that are not accepted as full papers may be invited to appear as short papers. At least one author from each accepted paper must register (with a full registration) for the conference prior to the camera-ready deadline and is expected to (physically) present it at the conference (remote presentations are possible only in exceptional cases, e.g., a last-minute positive COVID test).

** Work-in-progress Track **
Authors are invited to submit papers in the newly introduced work-in-progress track. This track is introduced for (junior) authors, ideally Ph.D. and Master's students, to obtain early, constructive feedback on their work. Submissions in this track should follow the same format as for the regular track papers while limiting the total number of pages to six US letter pages. Papers submitted in this track should be anonymized and can be submitted by the same deadline as for the regular track to the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023)

** Blue Sky Track **
All submissions to this track should be in the same format as for the regular track, but the length must not exceed ten US letter pages, and the submissions are not required to be anonymized (optional). Submissions to this track should be submitted by the same deadlines as the ones for the second cycle of the regular track to the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023).
Other Calls ============================================================== Call for Demos ------------------------- A demonstration proposal should clearly describe (1) the overall architecture of the system or technology to be demonstrated, and (2) one or more demonstration scenarios that describe how the audience, interacting with the demonstration system or the demonstrator, will gain an understanding of the underlying technology. Submissions will be evaluated based on the motivation of the work behind the use of the system or technology to be demonstrated and its novelty. Demonstration proposals should be in the same format as for the regular track, but the length must not exceed four US letter pages, and the submission should not be anonymized. A two-page description of the demonstration will be included in the conference proceedings. Submissions are expected to be submitted through the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023) by the demo submission deadline. Call for Posters ------------------------- SACMAT 2023 will include a poster session to promote discussion of ongoing projects among researchers in the field of access control and computer security. Posters can cover preliminary or exploratory work with interesting ideas, or research projects in early stages with promising results in all aspects of access control and computer security. Authors interested in displaying a poster must submit a poster abstract in the same format as for the regular track, but the length must not exceed three US letter pages, and the submission should not be anonymized. Accepted poster abstracts will be included in the conference proceedings. Submissions are expected to be submitted through the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023) by the poster submission deadline. 
Call for Lightning Talks
-------------------------
Participants are invited to submit proposals for 5-minute lightning talks describing recently published results, work in progress, wild ideas, etc. Submissions are expected to be submitted through the EasyChair conference management system (https://easychair.org/my/conference?conf=acmsacmat2023) by the lightning talks submission deadline.

Important Dates
==============================================================
Research Papers (Regular, Work-in-progress and Blue Sky Tracks)
-------------------------
* Submission (Cycle 2): February 17, 2023 (11:59 pm AoE)
* Rebuttal (Cycle 2): March 27-30, 2023
* Notification to Authors (Cycle 2): April 12, 2023
* Camera Ready: May 5, 2023

Demos and Posters
-------------------------
* Submission: April 14, 2023 (11:59 pm AoE)
* Notification to Authors: April 21, 2023
* Camera Ready: May 5, 2023

Lightning Talks
-------------------------
* Submission: May 12, 2023 (11:59 pm AoE)
* Notification to Authors: May 19, 2023

Event
-------------------------
* Conference: June 7-9, 2023

Financial Conflict of Interest (COI) Disclosure:
==============================================================
In the interests of transparency and to help readers form their own judgments of potential bias, ACM SACMAT requires authors and PC members to declare any competing financial and/or non-financial interests in relation to the work described.

Definition
-------------------------
For the purposes of this policy, competing interests are defined as financial and non-financial interests that could directly undermine, or be perceived to undermine the objectivity, integrity, and value of a publication, through a potential influence on the judgments and actions of authors with regard to objective data presentation, analysis, and interpretation.
Financial competing interests include any of the following: * Funding: Research support (including salaries, equipment, supplies, and other expenses) by organizations that may gain or lose financially through this publication. A specific role for the funding provider in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript, should be disclosed. * Employment: Recent (while engaged in the research project), present or anticipated employment by any organization that may gain or lose financially through this publication. * Personal financial interests: Ownership or contractual interest in stocks or shares of companies that may gain or lose financially through publication; consultation fees or other forms of remuneration (including reimbursements for attending symposia) from organizations that may gain or lose financially; patents or patent applications (awarded or pending) filed by the authors or their institutions whose value may be affected by publication. For patents and patent applications, disclosure of the following information is requested: patent applicant (whether author or institution), name of the inventor(s), application number, the status of the application, specific aspect of manuscript covered in the patent application. It is difficult to specify a threshold at which a financial interest becomes significant, but note that many US universities require faculty members to disclose interests exceeding $10,000 or 5% equity in a company. Any such figure is necessarily arbitrary, so we offer as one possible practical alternative guideline: "Any undeclared competing financial interests that could embarrass you were they to become publicly known after your work was published." We do not consider diversified mutual funds or investment trusts to constitute a competing financial interest. 
Also, for employees in non-executive or leadership positions, we do not consider financial interests related to stocks or shares in their own company to constitute a competing financial interest, as long as they publish under their company affiliation.

* Non-financial competing interests: Non-financial competing interests can take different forms, including personal or professional relationships with organizations and individuals. We encourage authors and PC members to declare any unpaid roles or relationships that might have a bearing on the publication process. Examples of non-financial competing interests include (but are not limited to):
** Unpaid membership in a government or non-governmental organization
** Unpaid membership in an advocacy or lobbying organization
** Unpaid advisory position in a commercial organization
** Writing or consulting for an educational company
** Acting as an expert witness

Conference Code of Conduct and Etiquette
==============================================================
ACM SACMAT will follow the ACM Policy Against Harassment at ACM Activities. Please familiarize yourself with the ACM Policy Against Harassment (available at https://www.acm.org/special-interest-groups/volunteer-resources/officers-manual/policy-against-discrimination-and-harassment) and the guide to Reporting Unacceptable Behavior (available at https://www.acm.org/about-acm/reporting-unacceptable-behavior).

AUTHORS TAKE NOTE
==============================================================
The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wolfram.schenck at fh-bielefeld.de Tue Jan 31 15:55:28 2023
From: wolfram.schenck at fh-bielefeld.de (Wolfram Schenck)
Date: Tue, 31 Jan 2023 21:55:28 +0100
Subject: Connectionists: [JOBS] Research Associate in the field of Machine Learning and AI (SAIL subproject R2.1: Self-aware AI, resilience and preparedness)
In-Reply-To: References: Message-ID:

POSITION: Research Associate in the field of Machine Learning and AI (SAIL subproject R2.1: Self-aware AI, resilience and preparedness)
LOCATION: Bielefeld University of Applied Sciences, Bielefeld, Germany
-------------------------------------------------------------------------------
With more than 10,000 students, Bielefeld University of Applied Sciences is the largest university of applied sciences in East Westphalia-Lippe (OWL). Based in Bielefeld, Minden and Gütersloh, it has an excellent network not only in the OWL region but also nationally and internationally, through diverse contacts, partnerships and cooperations in science, economy, politics and culture. The Faculties of Design, Minden Campus, Engineering and Mathematics, Social Sciences, Business and Health strive for high quality in teaching and research. Within the framework of the funding announcement "Networks 2021" (Netzwerke 2021) by the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW) for the sponsored project "SustAInable Life-cycle of Intelligent Socio-Technical Systems" (SAIL), the Faculty of Engineering and Mathematics seeks to employ a

*** Research Associate in the field of Machine Learning and AI (SAIL subproject R2.1: Self-aware AI, resilience and preparedness) ***

to start immediately. The **full-time position is fixed-term until 31 July 2026**. Depending on individual qualification and tasks, remuneration will be up to salary group 13 TV-L (collective agreement of the federal states).
A cooperative doctorate can be pursued as part of the employment. Your workplace will be on Bielefeld Campus.

Current systems that contain AI technology primarily target the introductory phase, a core component of which is training and customising AI models based on given sample data. The new SAIL research network is intended to develop the foundations for a sustainable design of AI components. The aim is to ensure that AI systems work transparently, safely and reliably throughout their entire product life cycle. In addition to Bielefeld University of Applied Sciences, this interdisciplinary network comprises Bielefeld University, Paderborn University and OWL University of Applied Sciences and Arts. The project addresses fundamental research in the field of AI, its implications from the perspective of the humanities and social sciences, as well as concrete fields of application in Industry 4.0 and Intelligent Healthcare.

The predictions of AI systems are fraught with uncertainty, which often makes their practical use difficult. There are several ways to tackle this uncertainty, e.g., anticipating and correcting invalid predictions, or downstream strategies for dealing with invalid results. This subproject examines approaches from both directions, considering both the algorithmic level and the user perspective. In this way, AI systems could identify false predictions in interaction with the user and improve accordingly. The developed methods are validated using experimental setups in the application areas "Intelligent Industrial Work Spaces" and "Adaptive Healthcare Assistance Systems".

-----------------------------------------------------------------------------------------------------------------
Tasks and responsibilities:
- Independent project coordination and implementation within the research network
- Conduct of independent research activities within the framework of the research project:
  * Identification and definition of requirements for the AI workflow, the algorithms to be used and the underlying infrastructure
  * Realisation of new ML approaches based on, e.g., techniques of active learning and modelling of uncertainty in prediction
  * Development of user interfaces for the developed AI systems
  * Generalisation and verification of the modular solution components on Bielefeld UAS's own research infrastructure
  * Conduct of studies with users of the newly developed systems
  * Application and transfer of research results
- Instruction and guidance of student and research assistants
- Knowledge transfer/publications/conference presentations
- Support of the project lead in other project activities within the scope of SAIL
- Implementation of creative ideas in the research network as well as interdisciplinary work

You will perform these tasks independently in coordination with Professor Dr.-Ing. Wolfram Schenck. In addition, you will work in an interdisciplinary team consisting of the research associates of the other universities and other professors.

----------------------------------------------------------------------------------------------------------------
Requirements:
- Master of Science from a university or university of applied sciences in one of the following fields: computer science, data science, electrical engineering, (bio-)mechatronics or cognitive sciences, with specialisation in data-based modelling, statistics or machine learning
- In-depth expertise in data collection and analysis, machine learning, statistics, data mining and optimisation
- First practical experience with the methods and toolboxes of machine learning
- Very good programming skills (Python, R, MATLAB/Simulink, etc.)
- Excellent conceptual and analytical thinking and action
- Independent and autonomous way of working
- Outstanding intellectual capacity
- Experience in writing academic texts and presenting scientific work results
- Very good knowledge of written and oral English
- Excellent team spirit and communication skills, confident demeanour

Candidates are further required to make sure that their fixed-term employment does not exceed the limits prescribed by the Wissenschaftszeitvertragsgesetz (the German act governing fixed-term academic contracts) due to previous employments.

----------------------------------------------------------------------------------------------------------------
The ideal candidate should also bring:
- Basic knowledge of the German language
- Experience in project coordination
- Experience in research activities, in the preparation of scientific reports and publications, or activities as a student or research assistant

---------------------------------------------------------------------------------------------------------------
Benefits:
- Opportunities to participate in qualification programmes
- Support offers for publications and patents
- University daycare facility "EffHa" and holiday care for schoolchildren on Bielefeld Campus
- 6 faculties with diverse partnerships and research collaborations in one of the most economically powerful regions in Germany
- Good accessibility by public transport
- Participation in the university sports programme of Bielefeld University

---------------------------------------------------------------------------------------------------------------
Bielefeld University of Applied Sciences has received multiple awards for its success in promoting equal opportunities and has been certified as a family-friendly university. Therefore, women are particularly welcome to apply, especially in the fields of research, technology, IT and crafts.
Applications from women will be given preference in the case of equal suitability, skills and professional performance, unless reasons concerning the person of another applicant predominate. Persons with severe disabilities are also encouraged to apply. Subject to other applicable laws, severely disabled applicants with equivalent qualifications will be given preferential consideration.

Please find more detailed information on the SAIL subprojects at: https://jaii.eu/sail. Multiple applications for other subprojects in the network are expressly welcome. In this case, please include the reference numbers of the respective subprojects in your application.

If you have any questions relating to the content of the position we offer, please contact Prof. Dr.-Ing. Wolfram Schenck via e-mail at wolfram.schenck[AT]fh-bielefeld.de.

** Are you interested? Please apply online only by 23 February 2023, stating the reference number 03302. **

Application link: https://app.fh-bielefeld.de/qisserver/rds?state=change&type=3&nextdir=sva/bwmsas&subdir=sva/bwm&moduleParameter=bwmSearchResult&next=TableSelect.vm&P_start=0&P_anzahl=100&navigationPosition=qissvaCareer%2Csvabwmstellenuebersicht&breadcrumb=sva_bwm_stellenuebersicht&topitem=qissvaCareer&subitem=svabwmstellenuebersicht

-------------- next part --------------
An HTML attachment was scrubbed...
URL: