From ioannakoroni at csd.auth.gr Wed Jun 1 02:24:19 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Wed, 1 Jun 2022 09:24:19 +0300 Subject: Connectionists: Early registration: Invitation to join 2022 Summer 'Programming short course and workshop on Deep Learning and Computer Vision', 24-26th August 2022 References: <004601d874cd$c8a23160$59e69420$@csd.auth.gr> Message-ID: <190201d87580$3a5ac300$af104900$@csd.auth.gr>

Dear Deep Learning, Computer Vision, Digital Media engineers, scientists and enthusiasts,

you are welcome to register for the CVML e-course "Programming short course and workshop on Deep Learning and Computer Vision", 24-26th August 2022: https://icarus.csd.auth.gr/cvml-programming-short-course-and-workshop-on-deep-learning-and-computer-vision-2022/

It will take place as a three-day e-course (due to COVID-19 circumstances), hosted by the Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece, providing a series of live lectures and programming workshops delivered through a tele-education platform (Zoom). Its focus is on upgrading your programming skills in various Deep Learning and Computer Vision topics. To this end, you will be provided with programming exercises in Python, CUDA, PyTorch, OpenCV, etc. The application focus will be on Digital Media. The live sessions will be complemented with online video-recorded lectures and lecture PDFs, to accommodate international participants facing time-zone differences and to enable you to study at your own pace. You can also self-assess your knowledge by filling in the appropriate questionnaires (one per lecture). This course is part of the very successful CVML programming short course and workshop series that has taken place over the last four years.

Course description "Programming short course and workshop on Deep Learning and Computer Vision"

The programming short course and workshop e-course consists of one-hour live lectures and workshops organized in three Parts (one Part per day, 8 hours each): Part A will focus on Deep Learning and GPU programming. Part B lectures will focus on deep learning algorithms for computer vision, namely on 2D object/face detection and 2D object tracking. Part C lectures will focus on autonomous UAV cinematography; before mission execution, missions are best simulated using drone mission simulation tools.

Course lectures and programming workshops

Part A (8 hours), Deep Learning and GPU programming: Deep neural networks. Convolutional NNs. Parallel GPU and multi-core CPU architectures - GPU programming. Image classification with CNNs. CUDA programming.

Part B (8 hours), Deep Learning for Computer Vision: Deep learning for object/face detection. 2D object tracking. PyTorch: understand the core functionalities of an object detector. Training and deployment. OpenCV programming for object tracking.

Part C (8 hours), Autonomous UAV cinematography: Video summarization. UAV cinematography. Video summarization with PyTorch. Drone cinematography with AirSim.

You can use the following link for course registration: https://icarus.csd.auth.gr/cvml-programming-short-course-and-workshop-on-deep-learning-and-computer-vision-2022/

For questions, please contact: Ioanna Koroni <koroniioanna at csd.auth.gr>

This programming short course is organized by Prof. I. Pitas, IEEE and EURASIP fellow and IEEE distinguished speaker. He is the coordinator of the EC-funded International AI Doctoral Academy (AIDA), which is co-sponsored by all 5 European AI R&D flagship projects (H2020 ICT48). He was initiator and first Chair of the IEEE SPS Autonomous Systems Initiative.
He is Director of the Artificial Intelligence and Information Analysis Lab (AIIA Lab), Aristotle University of Thessaloniki, Greece. He was Coordinator of the European Horizon2020 R&D project Multidrone. He is ranked as the 249th top Computer Science and Electronics scientist internationally by Guide2research (2018). He has 33800+ citations to his work and an h-index of 86+.

Relevant links:
1) Prof. I. Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el
2) Horizon2020 EU funded R&D project Aerial-Core: https://aerial-core.eu/
3) Horizon2020 EU funded R&D project Multidrone: https://multidrone.eu/
4) International AI Doctoral Academy (AIDA): http://www.i-aida.org/
5) Horizon2020 EU funded R&D project AI4Media: https://ai4media.eu/
6) AIIA Lab: https://aiia.csd.auth.gr/

Sincerely yours
Prof. I. Pitas
Director of the Artificial Intelligence and Information Analysis Lab (AIIA Lab)
Aristotle University of Thessaloniki, Greece

Post scriptum: To stay current on CVML matters, you may want to register in the CVML email list, following the instructions at: https://lists.auth.gr/sympa/info/cvml

From aapo.hyvarinen at helsinki.fi Wed Jun 1 03:40:19 2022 From: aapo.hyvarinen at helsinki.fi (Aapo Hyvärinen) Date: Wed, 1 Jun 2022 10:40:19 +0300 Subject: Connectionists: New book on AI and human suffering Message-ID:

Dear All,

I'm happy to announce that my new book is now available on arXiv at https://arxiv.org/pdf/2205.15409 :

Title: "Painful intelligence: What AI can tell us about human suffering"

Abstract: This book uses the modern theory of artificial intelligence (AI) to understand human suffering or mental pain. Both humans and sophisticated AI agents process information about the world in order to achieve goals and obtain rewards, which is why AI can be used as a model of the human brain and mind. This book intends to make the theory accessible to a relatively general audience, requiring only some relevant scientific background. The book starts with the assumption that suffering is mainly caused by frustration. Frustration means the failure of an agent (whether AI or human) to achieve a goal or a reward it wanted or expected. Frustration is inevitable because of the overwhelming complexity of the world, limited computational resources, and scarcity of good data. In particular, such limitations imply that an agent acting in the real world must cope with uncontrollability, unpredictability, and uncertainty, which all lead to frustration. Fundamental in such modelling is the idea of learning, or adaptation to the environment. While AI uses machine learning, humans and animals adapt by a combination of evolutionary mechanisms and ordinary learning. Even frustration is fundamentally an error signal that the system uses for learning. This book explores various aspects and limitations of learning algorithms and their implications regarding suffering. At the end of the book, the computational theory is used to derive various interventions or training methods that will reduce suffering in humans. The amount of frustration is expressed by a simple equation which indicates how it can be reduced. The ensuing interventions are very similar to those proposed by Buddhist and Stoic philosophy, and include mindfulness meditation. Therefore, this book can be interpreted as an exposition of a computational theory justifying why such philosophies and meditation reduce human suffering.
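The abstract refers to "a simple equation" without spelling it out. As a minimal sketch of one plausible form, consistent with the abstract's notion of frustration as an error signal between expected and obtained reward (an illustrative assumption, not a quote from the book):

    \[
      f = \max\bigl(0,\ \hat{r} - r\bigr)
    \]
    % f: frustration, \hat{r}: the reward the agent expected, r: the reward obtained.
    % Under this reading, f shrinks either by improving outcomes (raising r) or by
    % moderating expectations (lowering \hat{r}), in line with the interventions
    % the abstract attributes to Buddhist and Stoic philosophy.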
Link: https://arxiv.org/pdf/2205.15409

Aapo Hyvärinen
Professor of Computer Science
University of Helsinki

From ioannakoroni at csd.auth.gr Wed Jun 1 03:35:19 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Wed, 1 Jun 2022 10:35:19 +0300 Subject: Connectionists: Live e-Lecture by Prof. Luc De Raedt: "Probabilistic Logics to Neuro-Symbolic Artificial Intelligence", 7th June 2022 17:00-18:00 CET. Upcoming AIDA AI excellence lectures References: <071301d86f73$23b92cd0$6b2b8670$@csd.auth.gr> <00f801d86f75$8452df30$8cf89d90$@csd.auth.gr> Message-ID: <1c2001d8758a$257c7640$707562c0$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast,

Prof. Luc De Raedt (KU Leuven, Belgium), a prominent AI & Robotics researcher internationally, will deliver the e-lecture "Probabilistic Logics to Neuro-Symbolic Artificial Intelligence" on Tuesday 7th June 2022, 17:00-18:00 CET (8:00-9:00 am PST, 12:00-1:00 am CST); see details at: http://www.i-aida.org/ai-lectures/

You can join for free using the Zoom link https://authgr.zoom.us/j/94177102301 & Passcode: 148148

The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, currently in the process of formation, is very pleased to offer you top-quality scientific lectures on several current hot AI topics. Lectures are offered alternately by top highly-cited senior AI scientists internationally, and by young AI scientists with promise of excellence (AI sprint lectures). Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST, 12:00-1:00 am CST). Attendance is free.

Other upcoming lecture: Prof. Jan Peters (Technische Universitaet Darmstadt, Germany), "Robot Learning", 21st June 2022, 17:00-18:00 CET. More lecture info at: https://www.i-aida.org/events/robot-learning/

These lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed on future lectures, you can register in the AIDA email list and the CVML email list.

Best regards
Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe, J. Stefanowski

From ioannakoroni at csd.auth.gr Wed Jun 1 04:21:32 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Wed, 1 Jun 2022 11:21:32 +0300 Subject: Connectionists: AIDA Short Course: "Artificial Intelligence for video streaming platforms", 16-17/06/2022 Message-ID: <1eea01d87590$9ac03da0$d040b8e0$@csd.auth.gr>

Politehnica University of Bucharest organizes an online AIDA short course on "Artificial Intelligence for Video Streaming Platforms", offered through the International Artificial Intelligence Doctoral Academy (AIDA). The purpose of this course is to overview the foundations and the current state of the art in various systems developed for online video streaming platforms, namely:

1. DEEP-AD: A Multimodal Temporal Video Segmentation Framework for Online Video Advertising:
- Shot boundary detection
- Automatic video abstraction
- Multimodal scene segmentation using: low/high-level visual descriptors, audio patterns and semantic description
- Thumbnail selection from video scenes
- Ads insertion based on semantic criteria

2.
Automatic subtitle synchronization and positioning:
- Text pre-processing and automatic speech recognition
- Anchor word identification and token matching
- Phrase alignment
- Subtitle/Closed Caption positioning

3. DEEP-HEAR: A Multimodal Subtitle Positioning System Dedicated to Deaf and Hearing-Impaired People:
- Face detection, tracking and recognition
- Video temporal segmentation into stories
- Active speaker detection
- Subtitle positioning

LECTURER: Prof. Ruxandra Țapu, email: ruxandra_tapu at comm.pub.ro
HOST INSTITUTION/ORGANIZER: Politehnica University of Bucharest
REGISTRATION: Free of charge
WHEN: 16-17 June 2022, from 11:00 to 13:00 CET (4 hours)
WHERE: Online (a Microsoft Teams link will be provided)

HOW TO REGISTER and ENROLL: Both AIDA and non-AIDA students are encouraged to participate in this short course. If you are an AIDA Student* already, please:
Step (a): Register in the course by filling in the Registration form. AND
Step (b): Enroll in the same course in the AIDA system using "Enroll on this Course", so that this course enters your AIDA Certificate of Course Attendance.
If you are not an AIDA Student, do only step (a).
*AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong only to the AIDA Members listed on this page: https://www.i-aida.org/about/members/)

Prof. Ruxandra Țapu, Email: ruxandra_tapu at comm.pub.ro

From benoit.frenay at unamur.be Wed Jun 1 06:18:38 2022 From: benoit.frenay at unamur.be (Benoît Frénay) Date: Wed, 1 Jun 2022 12:18:38 +0200 Subject: Connectionists: Fwd: [staff.info] EASEAI 2022 - Call for papers In-Reply-To: <6f6a78ce-ab5d-fc14-6757-dc6af8b46b5c@unamur.be> References: <6f6a78ce-ab5d-fc14-6757-dc6af8b46b5c@unamur.be> Message-ID:

Dear colleagues,

We have the pleasure of inviting you to contribute to the 4th International Workshop on Education through Advanced Software Engineering and Artificial Intelligence. It will be co-located with ESEC/FSE '22 in Singapore. Please find the call for papers below. We strongly encourage contributions focusing on out-of-the-box tools, ideas, methodologies, experimentations, etc. pertaining to education through and to digital technologies. We would also be grateful if you could help us advertise the workshop by forwarding this CFP to your research collaborators.

Best regards,
EASEAI organizers

4th International Workshop on Education through Advanced Software Engineering and Artificial Intelligence (EASEAI)
Hybrid event, co-located with ESEC/FSE '22
Monday 14 - Friday 18 November 2022, Singapore
Website: https://easeai.github.io

Important Dates
Paper submission: June 27th, 2022 (AoE)
Notification: July 29th, 2022 (AoE)
Camera-ready: September 9th, 2022 (AoE)
Submissions will be handled via https://easeai2022.hotcrp.com/

Description
In the past few years, the world has seen a tremendous digital transformation in all of its areas. As a consequence, the general public needs to acquire an ever-increasing amount of digital literacy and at least some level of proficiency with modern digital tools. While modern software engineering relies heavily on Computer Assisted Software Engineering (CASE) tools and development methodologies (to improve the productivity, quality, and efficiency of development teams), those tools remain targeted towards experienced practitioners, and computer science remains taught in a very classical way.
At the same time, the rise of artificial intelligence makes it increasingly easy to provide automated support, to automate the processing and review of documents such as dissertations and other kinds of exercises, or to predict the needs of students. This context seems to be a perfect opportunity to foster interesting discussions in a workshop that gathers people from many different communities (software engineering, education science, artificial intelligence, machine learning, natural language processing, etc.), through the common lens of how advanced software tools and techniques might be used as a catalyst for a better way to teach various types of students.

Topics of interest
Topics include, but are not limited to:

Tools and techniques
* Advances in automated grading of assignments.
* Advances in automated feedback and recommendations to provide support to students.
* Visualization of technical and/or scientific information in various fields (chemistry, software engineering, physics, history, sociology, etc.).
* Efforts related to improving the usability of advanced tools for inexperienced users.
* Efforts related to the inclusion of users with disabilities in the learning environment.
* Dropout prediction in a learning environment using machine learning.
* Methods with a focus on explainability and interpretability (either to students or teachers) are particularly welcome.

Methodologies
* New and unexpected usage of established software engineering practices in the education environment.
* Integration of agile methods principles in the education environment.
* Integration of software quality standards and methods dedicated to continuous improvement in the education environment.
* Application of serious games and gamification in the context of education.
* Teamwork or individual work, in online settings or not.

Evaluations and Lessons Learned
* Feedback on case studies using cutting-edge techniques, methods, and tools applied to the education environment.
* Improvement of computational thinking skills and digital literacy through software engineering.
* Improvement of critical thinking and research skills of students.
* Integration of software engineering and artificial intelligence research into teaching and training.
* Introduction to artificial intelligence and machine learning principles for younger audiences.

Types of Submissions
We invite original papers in the conference format (ACM sigconf, double column) describing positions and new ideas (short papers up to 4 pages) as well as new results and reporting on innovative approaches (long papers up to 8 pages). All accepted papers will be published in the ACM Digital Library, together with the other ESEC/FSE workshops proceedings. The workshop also welcomes presentations of previously peer-reviewed published papers. We will invite authors to submit a one-page extended abstract that will not be included in the proceedings.

Workshop Format
The workshop will be held in hybrid form. Presenters will be invited to submit a video of their work that will be available to attendees before the workshop begins. The workshop will take place on one day and include 3 or 4 presentation sessions. Each accepted paper will have 10 minutes for presentation. At the end of each presentation session, there will be time for discussion (5 to 10 minutes per accepted paper). Based on its success during the last editions, discussants will be assigned to accepted papers to foster and trigger discussions.
Review Process
The workshop will follow a single-blind peer review process. Each contribution will be reviewed by at least three members of the program committee. Acceptance will be jointly decided with the reviewers, based on the reviews and discussions. As previously published papers have already been reviewed and accepted, they will not be reviewed again for technical content. If needed, proposed presentations will be prioritized based on the content and structure of the sessions.

Organizing Committee
* Andreea Vescan - Babes-Bolyai University, Romania
* Camelia Serban - Babes-Bolyai University, Romania
* Julie Henry - University of Namur, Belgium
* Upsorn Praphamontripong - University of Virginia, USA

--
STEAM project lead
Doctoral student in didactics of computer science, Faculté d'Informatique, Université de Namur

From chz8 at aber.ac.uk Wed Jun 1 08:56:05 2022 From: chz8 at aber.ac.uk (Christine Zarges [chz8] (Staff)) Date: Wed, 1 Jun 2022 12:56:05 +0000 Subject: Connectionists: Final Call: SIGEVO Summer School (online, 20-24 June 2022), Deadline: 6 June 2022 Message-ID:

(Apologies for cross-posting)

************************************************************************
SIGEVO Summer School
20-24 June 2022 (online on Zoom and Gather, GMT+1)
Website: https://gecco-2022.sigevo.org/Summer-School
Email: sigevo-school at aber.ac.uk
Application deadline: 6 June 2022 (AoE)
************************************************************************

The SIGEVO Summer School (S3, https://gecco-2022.sigevo.org/Summer-School) goes into round 5 and will again cover both general skills needed by GECCO researchers and in-depth knowledge about specific GECCO topics. Networking with fellow participants and senior researchers (mentors) will give participants a unique opportunity to connect with the community.

Interested? Places are limited. Submit a short motivational statement by 6 June 2022. More information: https://gecco-2022.sigevo.org/Call-for-Summer-School

Organisers:
- Miguel Nicolau, University College Dublin, Republic of Ireland
- Christine Zarges, Aberystwyth University, Wales, United Kingdom

Questions? Send us an email: sigevo-school at aber.ac.uk

Current Mentors:
- Juergen Branke, University of Warwick, United Kingdom
- Carola Doerr, CNRS and Sorbonne University, France
- Carlos Fonseca, University of Coimbra, Portugal
- Manuel López-Ibáñez, University of Málaga, Spain
- Alberto Moraglio, University of Exeter, United Kingdom
- Leslie Pérez Cáceres, Pontificia Universidad Católica de Valparaíso, Chile

----------------------------------------------------------------------------------------------------------------------
Best University in the UK for Teaching Quality and Student Experience (The Times and Sunday Times, Good University Guide 2021)
We welcome correspondence in Welsh and English. Correspondence received in Welsh will be answered in Welsh and correspondence in English will be answered in English. Corresponding in Welsh will not involve any delay.
From XIQUAN_CUI at homedepot.com Wed Jun 1 10:42:32 2022 From: XIQUAN_CUI at homedepot.com (Cui, Xiquan) Date: Wed, 1 Jun 2022 14:42:32 +0000 Subject: Connectionists: KDD 2022 Workshop on Online and Adaptive Recommender Systems (OARS) Message-ID:

NOTE: The new submissions deadline is June 5th, 2022

KDD 2022 Workshop on Online and Adaptive Recommender Systems (OARS)

Call For Papers
==================
KDD OARS is a half-day workshop taking place on August 15th, 2022 in conjunction with KDD 2022 in Washington DC, USA.
Workshop website: https://oars-workshop.github.io/

Important Dates:
==================
- Submissions Due - June 5th, 2022
- Notification - June 20th, 2022
- Camera Ready Version of Papers Due - July 9th, 2022
- KDD OARS Workshop - August 15th, 2022

Details:
==================
The KDD workshop on online and adaptive recommender systems (OARS) will serve as a platform for publication and discussion of OARS. This workshop will bring together practitioners and researchers from academia and industry to discuss the challenges of and approaches to implementing OARS algorithms and systems, and improving user experiences by better modeling and responding to users' intent. We invite submission of papers and posters of two to ten pages (including references), representing original research, preliminary research results, proposals for new work, and position and opinion papers. All submitted papers and posters will be single-blind and will be peer reviewed by an international program committee of researchers of high repute. Accepted submissions will be presented at the workshop.

Topics of interest include, but are not limited to:
====================================
* Novel algorithms and paradigms (deep learning, reinforcement learning, online learning, etc.)
* Use cases (product, content, fashion/decor, job, healthy lifestyle, interactive/conversational recommendations, etc.)
* User modeling and representations (real-time user intent/style/taste modeling, combination with long-term interest, incorporation of knowledge graphs)
* Architecture and infrastructure (novel and scalable deep learning architectures, streaming and event-driven processing, etc.)
* Evaluations and explanations (evaluation, comparison, explanation of OARS for a recommendation task, off-policy and counterfactual evaluation, etc.)
* Social and user impact (UX, welfare, and objectives of OARS, privacy and ethics considerations, etc.)

Submission Instructions:
==================
All papers will be peer reviewed (single-blind) by the program committee and judged by their relevance to the workshop, especially to the main themes identified above, and their potential to generate discussion. All submissions must be formatted according to the ACM Conference Proceedings templates (two-column format). Submissions must describe work that is not previously published, not accepted for publication elsewhere, and not currently under review elsewhere. All submissions must be in English. Please note that at least one of the authors of each accepted paper must register for the workshop and present the paper in-person.
Submissions to the KDD OARS workshop should be made at https://easychair.org/my/conference?conf=oarskdd2022

ORGANIZERS:
==================
Xiquan Cui, The Home Depot, USA
Vachik Dave, Walmart Labs, USA
Yi Su, UC Berkeley, USA
Julian McAuley, UCSD, USA
Khalifeh Al-Jadda, Google Inc, USA
Srijan Kumar, Georgia Institute of Technology, USA
Tao Ye, Amazon, USA
Kamelia Aryafar, Google Inc, USA
Mohammad Korayem, CareerBuilder, Canada

Contact: Please direct all your queries to xiquan_cui at homedepot.com for help.

Xiquan Cui
Senior Manager of Online Data Science
xiquan_cui at homedepot.com
Office: (770) 433-8211 x80588
The Home Depot, 320 Interstate North Pkwy SE, Atlanta, GA 30339

From stefan.wermter at uni-hamburg.de Wed Jun 1 11:41:47 2022 From: stefan.wermter at uni-hamburg.de (Stefan Wermter) Date: Wed, 1 Jun 2022 17:41:47 +0200 Subject: Connectionists: [jobs] Research Associate - Deep learning for intelligent robot interaction Message-ID: <9527efeb-1306-d8fa-b4f5-cfb01ee938d2@uni-hamburg.de>

At the University of Hamburg, Dept. of Informatics, Knowledge Technology, we are looking for applications for a research associate in neural knowledge technology for intelligent robotic systems.

Topic: MoreSpace: Modeling of a Robot's Peripersonal Space and Body Schema for Adaptive Learning and Imitation
Salary level: 13 TV-L
Start date: 1 August 2022 or as soon as possible, for a period of three years
Application deadline: 10 June 2022

This novel, exciting project MoreSpace concentrates on the research, design, development and evaluation of neurocomputational deep learning methods for intelligent robot assistants to explore human-robot interaction. In particular, the project MoreSpace includes research into modeling a robot's peripersonal space and body schema for adaptive learning and imitation. Our research questions focus on a) adaptive decision-making with conflicting sensations and b) self-other transfer and imitation learning. We will develop a novel conflict-driven attention mechanism by considering psychological phenomena that involve conflicting sensations. Furthermore, we will develop learning from observation by developing a projection mechanism to map an observer's body schema to an observed agent. We expect the resulting framework to improve the capabilities of robotic agents to handle conflicting sensor data and to improve human-robot interaction scenarios in the context of the different morphologies of the NICO and NICOL robots. Our experiments will take place in a table-top scenario and mainly involve object manipulation experiments, including block-stacking and tool-use tasks. Together with our collaborators, we will first conduct robot-robot interaction and later human-robot interaction experiments. More info about the MoreSpace project and our research at https://www.inf.uni-hamburg.de/en/inst/ab/wtm/research.html

Requirements: An advanced university degree in a relevant field related to the research topic above. Preference will be given to candidates with a completed PhD in Computer Science, or at an advanced level towards it, with specialization in artificial intelligence, intelligent robotics and neural networks. We expect experience in machine learning technology and programming skills in Python and PyTorch. Experience with robots, robotic simulators and robotic systems is an advantage.
Your demonstrated research experience and international-level publications should be in some of the areas of Neural Networks, Robotics, Machine Learning, Vision or Natural Language Processing. Very good communication skills in English, and ideally in German, are desired.

For further information, please contact Prof. Dr. Stefan Wermter (stefan.wermter at uni-hamburg.de) or consult our website at https://www.informatik.uni-hamburg.de/wtm/

Applications should include a cover letter, a tabular curriculum vitae, and copies of degree certificate(s). For more details on how to apply, please go to: https://www.uni-hamburg.de/en/stellenangebote/ausschreibung.html?jobID=6da75f67a01decbaeed7f9b7811c48df176e680f

Please pass on to a qualified candidate.

***********************************************
Professor Dr. Stefan Wermter
Director of Knowledge Technology
Department of Informatics
University of Hamburg
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Email: stefan dot wermter AT uni-hamburg.de
https://www.informatik.uni-hamburg.de/WTM/
***********************************************

From phitzler at googlemail.com Wed Jun 1 23:21:44 2022 From: phitzler at googlemail.com (Pascal Hitzler) Date: Wed, 1 Jun 2022 22:21:44 -0500 Subject: Connectionists: Call for book chapter proposals for A Compendium of Neuro-Symbolic Artificial Intelligence Message-ID: <0fcd40f1-35de-4816-45af-10122e0ec6dc@googlemail.com>

We recently published a book entitled Neuro-Symbolic Artificial Intelligence: The State of the Art (see https://ebooks.iospress.nl/ISBN/978-1-64368-244-0), which contains invited overview chapters by selected authors. Due to the success of this book, and because there is much more work on the topic that we have not been able to include, the publisher has agreed to a new and more comprehensive volume. The new book will be entitled (tentatively) A Compendium of Neuro-Symbolic Artificial Intelligence, and at this time we are requesting book chapter proposals.

We understand the topic in a very general sense, i.e. in scope is any research that includes both artificial neural networks (and deep learning) and symbolic methods; see e.g. http://doi.org/10.3233/AIC-210084. A book chapter shall be an overview of a line of work by the chapter authors, based on 2 or more related publications in quality conferences or journals. The intention is that a large collection of such chapters will provide an overview of the whole field.

To contribute to the book, please provide a brief book chapter proposal to hitzler at ksu.edu by the deadline of July 15, 2022, consisting of the following:
* Title of the chapter
* List of chapter authors
* A brief abstract (one paragraph)
* Approximate number of pages (see the front matter in the above linked book for approximate formatting)
* The list of already published conference or journal papers the chapter will be based on

We will notify contributors by July 30 whether their chapter will be included. Further, please take note of the following:
* The deadline for the chapters will be October 31, 2022.
* We will do a light cross-review for feedback (since material is based on already peer-reviewed publications).
* Each contributing author will have to be available to review at most one other chapter within 4 weeks.

We expect publication of the book in the first half of 2023. We are looking forward to your contribution!

Pascal Hitzler
Md Kamruzzaman Sarker

--
Pascal Hitzler
Lloyd T.
Smith Creativity in Engineering Chair
Director, Center for AI and Data Science
Kansas State University
http://www.pascal-hitzler.de
http://www.daselab.org
http://www.semantic-web-journal.net

From hongzhi.kuai at gmail.com Thu Jun 2 04:26:07 2022 From: hongzhi.kuai at gmail.com (H.Z. Kuai) Date: Thu, 2 Jun 2022 17:26:07 +0900 Subject: Connectionists: WI-IAT '22 CfPs [Extended Deadline] Message-ID:

[Apologies if you receive this more than once]

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CALL FOR PAPERS
The 21st IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '22)
November 17-20, 2022, Niagara Falls, Canada
A hybrid conference with both online and offline modes
Web Intelligence = AI in the Connected World
Homepage: https://www.wi-iat.com/wi-iat2022/
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Full Papers Submission Deadline: June 30, 2022 (Extended)

Sponsored By:
Web Intelligence Consortium (WIC)
Association for Computing Machinery (ACM)
IEEE Computer Society

* Award Information *
* Two Student Travel Awards (US$500 Each)
* Two Student Awards (Non-travel, US$500 Each)
* Two Volunteer Awards (US$500 Each)
* One Best Paper Award (US$1000)

WI-IAT 2022 Special Event:
- Web Intelligence Journal Special Issue: 20 Years of Web Intelligence
https://www.iospress.com/catalog/journals/web-intelligence

**************************************
WI-IAT '22 Keynote Speakers:
**************************************
https://www.wi-iat.com/wi-iat2022/projects-KeynoteSpeakers.html

- Ophir Frieder
Fellow of the American Association for the Advancement of Science (AAAS)
Fellow of the Association for Computing Machinery (ACM)
Fellow of the Institute of Electrical and Electronics Engineering (IEEE)
Georgetown University, USA

- Kevin Leyton-Brown
Fellow of the Association for the Advancement of Artificial Intelligence (AAAI)
Fellow of the Association of Computing Machinery (ACM)
University of British Columbia, Canada

- Ming Li
Fellow of the Royal Society of Canada
Fellow of the Association for Computing Machinery (ACM)
Fellow of the Institute of Electrical and Electronics Engineering (IEEE)
University of Waterloo, Canada

- Witold Pedrycz
Fellow of the Royal Society of Canada
Fellow of the Institute of Electrical and Electronics Engineering (IEEE)
University of Alberta, Canada

- Yiyu Yao
Fellow of the International Rough Set Society (IRSS)
University of Regina, Canada

More to be announced later.
ACCEPTED WORKSHOPS/SPECIAL SESSIONS
++++++++++++++++++++++++++++++++++++++
https://www.wi-iat.com/wi-iat2022/Workshops-Special-Sessions.html

WS01: The 11th International Workshop on Intelligent Data Processing (IDP)
WS02: The 7th International Workshop on Application of Big Data for Computational Social Science (ABCSS2022)
WS03: The 7th International Workshop on Integrated Social CRM (iCRM 2022)
WS04: The 5th International Workshop on Social Media Analytics for Health Intelligence (SMA4H)
WS05: The International Workshop on Personalized QA and its Applications (PQAIA)
WS06: The 2nd International Workshop on Expert Recommendation for Community Question Answering (XPERT4CQA)
WS07: The International Workshop on Data Analytics on Social Media (DASM'22)
WS08: The 15th Natural Language Processing and Ontology Engineering (NLPOE2022)
WS09: The 1st International Workshop on the Fundamentals and Advances of Recommendation System (FA-RS)
WS10: The International Workshop on Affective Computing and Emotion Recognition (ACER-EMORE2022)
WS11: The International Workshop on Telemedicine System
WS12: The International Workshop on Web Intelligence meets Brain Informatics (WImBI'22)

More to be announced later.

The 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '22) provides a premier international forum to bring together researchers and practitioners from diverse fields for the presentation of original research results, as well as the exchange and dissemination of innovative and practical development experiences on Web Intelligence and Intelligent Agent Technology research and applications. Academics, professionals and industry people can exchange their ideas, findings and strategies in utilizing the power of human brains and man-made networks to create a better world. More specifically, the conference covers how intelligence is impacting the Web of People, the Web of Data, the Web of Things, the Web of Trust, the Web of Agents, and the emerging Web in health and smart living in the 5G era. The theme of WI-IAT '22 is therefore "Web Intelligence = AI in the Connected World". After the greatly successful online WI-IAT '20 and hybrid WI-IAT '21 during the global pandemic, WI-IAT '22 will be held in Niagara Falls, Canada, once again in hybrid mode.

WI-IAT '22 welcomes research, application, and Industry/Demo track paper submissions in these core thematic pillars under wider topics, which demand WI innovative and disruptive solutions for any of the following indicative sub-topics.
TRACKS AND TOPICS
++++++++++++++++++

Track 1: Web of People
* Crowdsourcing and Social Data Mining
* Human-Centric Computing
* Information Diffusion
* Knowledge Community Support
* Modelling Crowd-Sourcing
* Opinion Mining
* People Oriented Applications and Services
* Recommendation Engines
* Sentiment Analysis
* Situational Awareness
* Social Network Analysis
* Social Groups and Dynamics
* Social Media and Dynamics
* Social Networks Analytics
* User and Behavioural Modelling

Track 2: Web of Data
* Algorithms and Knowledge Management
* Autonomy-Oriented Computing (AOC)
* Big Data Analytics
* Big Data and Human Brain Complex Systems
* Cognitive Models
* Computational Models
* Data-Driven Services and Applications
* Data Integration and Data Provenance
* Data Science and Machine Learning
* Graph Isomorphism
* Graph Theory
* Information Search and Retrieval
* Knowledge Graph
* Knowledge Graph and Semantic Networks
* Linked Data Management and Analytics
* Self-Organizing Networks
* Semantic Networks
* Sensor Networks
* Web Science

Track 3: Web of Things
* Complex Networks
* Distributed Systems and Devices
* Dynamics of Networks
* Industrial Multi-Domain Web
* Intelligent Ubiquitous Web of Things
* IoT Data Analytics
* Location and Time Awareness
* Open Autonomous Systems
* Streaming Data Analysis
* Web Infrastructures and Devices
* Mobile Web
* Wisdom Web of Things (W2T)

Track 4: Web of Trust
* Blockchain analytics and technologies
* Fake content and fraud detection
* Hidden Web Analytics
* Monetization Services and Applications
* Trust Models for Agents
* Ubiquitous Computing
* Web Cryptography
* Web Safety and Openness

Track 5: Web of Agents
* Agent Networks
* Autonomy Remembrance Agents
* Autonomy-oriented Computing
* Behaviour Modelling
* Distributed Problem-Solving
* Global Brain
* Edge Computing
* Individual-based Modelling Knowledge
* Information Agents
* Local-Global Behavioural Interactions
* Mechanism Design
* Multi-Agent Systems
* Network Autonomy Remembrance Agents
* Self-adaptive Evolutionary Systems
* Self-organizing Systems
* Social Groups and Dynamics

Special Track: Emerging Web in Health and Smart Living
* Big Data in Medicine
* City Brain and Global Brain
* Digital Ecosystems
* Digital Epidemiology
* Health Data Exchange and Sharing
* Healthcare and Medical Applications and Services
* Omics Research and Trends
* Personalized Health Management and Analytics
* Smart City Applications and Services
* Time Awareness and Location Awareness Smart City
* Wellbeing and Healthcare in the 5G Era

IMPORTANT DATES
+++++++++++++++
June 23, 2022: Workshop Proposal Submission
June 30, 2022 (Extended): Full Papers Submission
August 20, 2022: Paper Acceptance Notification
August 20, 2022: Early Registration Opens
September 7, 2022: Camera-ready Submission
November 17, 2022: Workshops and Special Sessions
November 18-20, 2022: Main Conference

PAPER SUBMISSION
++++++++++++++++
Papers must be submitted electronically via CyberChair in standard IEEE Conference Proceedings format (max 8 pages, templates at https://www.ieee.org/conferences/publishing/templates.html). Submitted papers will undergo a peer review process, coordinated by the International Program Committee.
Main Conference Paper Submission: https://www.wi-iat.com/wi-iat2022/Participant-Submission.html
Workshops and Special Sessions Paper Submission: https://www.wi-iat.com/wi-iat2022/Workshops-Special-Sessions.html

Organization Structure
++++++++++++++++++++++
General Chairs
* Gabriella Pasi, University of Milano-Bicocca, Italy
* Jimmy Huang, York University, Canada
* Jie Tang, Tsinghua University, China
* Christopher W. Clifton, Purdue University, USA
Program Committee Chairs
* Jiashu Zhao, Wilfrid Laurier University, Canada
* Ebrahim Bagheri, Ryerson University, Canada
* Norbert Fuhr, University of Duisburg-Essen, Germany
* Atsuhiro Takasu, National Institute of Informatics, Japan
* Yixing Fan, Chinese Academy of Sciences, China
Local Organizing Chairs
* Mehdi Kargar, Ryerson University, Canada
* George J. Georgopoulos, York University, Canada
Workshop/Special Session Chairs
* Hiroki Matsumoto, Maebashi Institute of Technology, Japan
* Ameeta Agrawal, Portland State University, USA
* Cathal Gurrin, Dublin City University, Ireland
* Chao Huang, University of Hong Kong, China
Publicity Chairs
* Hongzhi Kuai, Maebashi Institute of Technology, Japan
* Yang Liu, Wilfrid Laurier University, Canada
* Yan Ge, University of Bristol, UK
Tutorial Chair
* Vivian Hu, Ryerson University, Canada
Industry Chairs
* Stephen Chan, Dapasoft, Canada
* Long Xia, Baidu, China
WIC Steering Committee Chairs
* Ning Zhong, Maebashi Institute of Technology, Japan
* Jiming Liu, Hong Kong Baptist University, HK, China
WIC Executive Secretary
* Xiaohui Tao, University of Southern Queensland, Australia

From boubchir at ai.univ-paris8.fr Thu Jun 2 12:01:26 2022 From: boubchir at ai.univ-paris8.fr (Larbi Boubchir) Date: Thu, 2 Jun 2022 18:01:26 +0200 Subject: Connectionists: [CfP] Special Session on Advances in Deep Learning for Biometrics and Forensics - ICONIP2022 In-Reply-To: <07d66ee5-b997-d31b-73be-79641313edb5@ai.univ-paris8.fr> References: <07d66ee5-b997-d31b-73be-79641313edb5@ai.univ-paris8.fr> Message-ID: <8db8a211-72e5-23a7-e5ae-8d6e5f03ac09@ai.univ-paris8.fr>

Special Session on Advances in Deep Learning for Biometrics and Forensics
The 29th International Conference on Neural Information Processing (ICONIP 2022)
November 22-26, 2022, New Delhi, India (hybrid mode)

Scope and Aim
Biometrics is a growing technology, driven by the needs of society, companies and governments regarding recognition, security and privacy. It has also become a growing research area offering more secure and convenient solutions for various applications in the biometrics and forensics areas. Methods and algorithms from data science are widely explored and used to address problems in many fields, including biometrics and forensics. In this context, advances in artificial intelligence, in particular in feature engineering and deep learning, have made it possible to solve various complex problems related to recognition, detection, control, security, forensic identification, etc. Indeed, most biometric systems are built on a typical pipeline comprising biometric data preprocessing, feature extraction, and classification parts. Deep learning offers an end-to-end learning paradigm that unifies these parts, and it has been shown to be a promising and powerful alternative to conventional approaches based on machine learning.
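As a sketch of what this end-to-end unification can look like in practice (a minimal illustration only; the architecture, layer sizes and toy data below are assumptions for exposition, not tied to any system named in this call):

    import torch
    import torch.nn as nn

    # Minimal end-to-end biometric classifier: the convolutional stack plays the
    # role of the hand-crafted feature extractor and the final linear layer the
    # classifier; all parts are trained jointly from raw images.
    class EndToEndBiometric(nn.Module):
        def __init__(self, num_identities: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_identities)

        def forward(self, x):  # x: (batch, 1, H, W) raw grayscale biometric images
            return self.classifier(self.features(x).flatten(1))

    model = EndToEndBiometric(num_identities=10)
    logits = model(torch.randn(4, 1, 64, 64))  # e.g. fingerprint or iris crops
    print(logits.shape)  # torch.Size([4, 10])

Training such a model with a cross-entropy loss optimizes preprocessing, feature extraction and classification jointly, which is precisely what distinguishes the end-to-end paradigm from staged conventional pipelines.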
This special session aims to bring together researchers, scientists and industry professionals interested in biometrics and forensics, to present and discuss their recent advanced algorithms and methods in deep learning for biometrics and its applications.

Topics
The main topics of interest to this special session include, but are not limited to, the following:
* Deep learning for biometrics and/or forensics
* Biometric recognition (authentication and identification)
* Physiological and behavioral biometrics (e.g., fingerprint, palmprint, palm vein, face, iris, ear, gait, voice, etc.)
* Soft biometrics
* Multimodal biometrics
* Big Data challenges in biometrics
* Attacks on biometric systems
* Security and privacy in biometrics
* Forensic identification
* Emerging biometrics
* Related applications

Important Dates
* Paper submission deadline: June 15, 2022
* Paper acceptance notification: August 15, 2022
* Conference: November 22-26, 2022, New Delhi, India

Submission Guidelines
Papers submitted to this Special Session are reviewed according to the same rules as submissions to the regular sessions of ICONIP 2022. Authors who submit papers to this session are invited to mention it in the form during submission. Submissions to regular and special sessions follow identical format, instructions, deadlines and procedures. For information on paper submission, see: https://www.iconip2022.apnns.org/camara_ready_submission.php
Please refer to the ICONIP website for further information and news: https://www.iconip2022.apnns.org

Organizers
Larbi Boubchir, Full Professor, LIASD research Lab., University of Paris 8, France (contact: larbi.boubchir at univ-paris8.fr)
Boubaker Daachi, Full Professor, LIASD research Lab., University of Paris 8, France

From michael.zock at lis-lab.fr Thu Jun 2 20:17:21 2022 From: michael.zock at lis-lab.fr (Michael Zock) Date: Fri, 3 Jun 2022 02:17:21 +0200 Subject: Connectionists: Call for Papers - COGALEX-VII (The 7th International Workshop on Cognitive Aspects of the Lexicon) Message-ID: <60ae4de8-bafa-eb90-1503-5b93ae75ec1a@lis-lab.fr>

*Call for papers* for COGALEX-VII, "The 7th International Workshop on /Cognitive Aspects of the Lexicon/"
https://sites.google.com/view/cogalexvii2022/home
co-located with AACL-IJCNLP 2022 (https://www.aacl2022.org/home), Taipei, Taiwan
Submission deadline: 25-Aug-2022
Workshop date: 20-Nov-2022

*Key words*: Dictionary, (Mental) Lexicon, Brain, Cognition, Neuroscience, Computational Linguistics, Corpus Linguistics, Complex graphs, Navigation

*Meeting Description:*
COGALEX is a workshop devoted to the /cognitive aspects of the lexicon/. While in the past it has always been co-located with COLING, this time it will be hosted by AACL-IJCNLP 2022 at the /NTUH International Convention Center/, Taipei, Taiwan (https://www.aacl2022.org/home). The accepted papers will be published as proceedings appearing in the ACL Anthology. The goal of COGALEX is to provide a snapshot of the current state of the art in the different disciplines (lexicography, psycholinguistics, neuroscience) dealing with words, their organization (the lexicon) and their usage (for example: navigation in a hybrid conceptual-lexical resource; word production and analysis). The approach is deliberately cross-disciplinary. In sum, we solicit original and unpublished work related to the cognitive aspects of the lexicon.
For details, see: https://sites.google.com/view/cogalexvii2022/home

Short papers can be up to 4 pages in length and long papers up to 8 pages. Both submission formats can have an unlimited number of pages for references. All submissions must follow the ACL stylesheet. We don't accept submissions that consist only of an abstract. Submissions must be anonymous and will be peer-reviewed by our program committee. The peer review is double-blind. Papers must be submitted via SoftConf by August 25, 2022.
Submission page: https://softconf.com/aacl2022/CogALex-VII/user/scmd.cgi?scmd=submitPaperCustom&pageid=0

At least one of the authors of an accepted paper must register for the main conference and present the paper. Accepted papers (short and long) will be published in the workshop proceedings that will appear in the ACL Anthology. Accepted papers will also be given an additional page to address the reviewers' comments. The length of a camera-ready submission can then be 4 pages for a short paper and 8 for a long paper, with an unlimited number of pages for references. We are considering inviting the authors of the accepted papers to submit an extended version of their workshop paper for a special issue.

*Important dates*
- Paper submission (full and short): August 25, 2022
- Notification of acceptance: September 25, 2022
- Camera ready deadline: October 10, 2022
- Workshop: November 20, 2022

*Workshop organizers:*
- Michael Zock (CNRS, LIS, Aix-Marseille University, Marseille, France)
- Emmanuele Chersoni (The Hong Kong Polytechnic University, Hong Kong, China)
- Yu-Yin Hsu (The Hong Kong Polytechnic University, Hong Kong, China)
- Enrico Santus (Bayer, Whippany, NJ, 07981, USA)

*For specific requests or information*
Please send an e-mail to cogalex2022 at gmail.com, or to Michael Zock (michael.zock at lis-lab.fr) or Emmanuele Chersoni (emmanuelechersoni at gmail.com)

--
Michael ZOCK
Emeritus Research Director
CNRS LIS UMR 7020 (Group TALEP)
Aix Marseille Université
163 avenue de Luminy - case 901
13288 Marseille / France
Mail: michael.zock at lis-lab.fr
Tel.: +33 (0)6 51.70.97.22
http://pageperso.lif.univ-mrs.fr/~michael.zock/

From yjchoi at cs.ucla.edu Fri Jun 3 02:08:33 2022 From: yjchoi at cs.ucla.edu (YooJung Choi) Date: Thu, 2 Jun 2022 23:08:33 -0700 Subject: Connectionists: [Deadline Extended] TPM @ UAI 2022: Deadline Extended to June 13th Message-ID:

We have extended the paper submission deadline for TPM 2022 to *June 13th, 2022*.

*Important Dates*
- *Submission deadline:* June 13th, 2022 AoE (extended from June 9th)
- *Notification of acceptance:* July 5th, 2022
- *Camera-ready version:* August 12th, 2022
- *Workshop date:* August 5th, 2022

**The 5th Workshop on Tractable Probabilistic Modeling (TPM): From Theory to Practice (and Back)**
https://tractable-probabilistic-modeling.github.io/tpm2022/

AI and ML systems designed and deployed to support decision-making in the real world need to perform *complex reasoning under uncertainty*. For safety-critical systems, such as applications in healthcare and finance, it is crucial that this reasoning is *reliable*, i.e. either *exact* or coming with approximation guarantees. At the same time, it is important that these guarantees can be delivered *efficiently*. For this, tractable probabilistic models (TPMs) are very appealing because they support reliable and efficient reasoning for a wide range of reasoning scenarios, *by design*.
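As a concrete toy illustration of such by-design tractability (a sketch under assumed parameters, not workshop material): a mixture model, one of the tractable-marginal families mentioned below, answers marginal queries exactly in closed form.

    import numpy as np
    from scipy.stats import norm

    # 1-D Gaussian mixture p(x) = sum_k w_k N(x; mu_k, sigma_k).
    weights = np.array([0.3, 0.7])
    means = np.array([-1.0, 2.0])
    stds = np.array([0.5, 1.0])

    def prob_interval(a, b):
        # Exact P(a <= X <= b): weighted sum of per-component Gaussian CDFs,
        # no sampling or variational approximation needed.
        return float(np.sum(weights * (norm.cdf(b, loc=means, scale=stds)
                                       - norm.cdf(a, loc=means, scale=stds))))

    print(prob_interval(0.0, 3.0))  # computed exactly, in time linear in the number of components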
Therefore, it is no wonder that research on modeling and learning different TPMs has been flourishing recently. The variegated TPM spectrum includes models that deliver tractable computation of likelihoods, such as *normalizing flows*, *Gaussian processes* and *autoregressive models*; tractable marginals, such as *mixture models*, *bounded-treewidth models*, and *determinantal point processes*; and models supporting more complex reasoning scenarios, such as *probabilistic circuits*. As the subtitle of this year's Workshop suggests, we are particularly interested in bridging the latest theoretical advancements in this spectrum with the burgeoning literature on applying TPMs to real-world problems. In particular, TPMs have been successfully used in image classification, completion and generation, activity recognition, language and speech modeling, verification and diagnosis of physical systems, and more recently in computational life science, e.g., for drug discovery and epidemiology modeling.

The workshop will be held *in person* on August 5th, 2022, co-located with UAI 2022 in Eindhoven, Netherlands.

*Topics of interest*
Prospective authors are invited to submit *novel research*, *retrospective papers*, or *recently accepted papers* on relevant topics including, but not limited to:
- New tractable representations in logical, continuous and hybrid domains
- Learning algorithms for TPMs
- Theoretical and empirical analysis of tractable models
- Connections between TPM classes
- TPMs for responsible, robust and explainable AI
- Approximate inference algorithms (with guarantees)
- Applications of TPMs to real-world problems

*Submission Instructions*
Original papers and retrospective papers are required to follow the style guidelines of UAI 2022 and should use the adjusted TPM template. Submitted papers should be up to 4 pages long, excluding references. Already accepted papers can be submitted in the format of the venue they have been accepted to. Supplementary material can be put in the same pdf paper (after the references); it is entirely up to the reviewers to decide whether they wish to consult this additional material. All submissions must be electronic (through the link below) and must closely follow the formatting guidelines at https://tractable-probabilistic-modeling.github.io/tpm2022/cfp/; otherwise they will automatically be rejected.

Reviewing for TPM 2022 is single-blind; i.e., reviewers will know the authors' identity but authors won't know the reviewers' identity. However, we recommend that you refer to your prior work in the third person wherever possible. We also encourage links to public repositories such as GitHub to share code and/or data.

*Submission Link:* https://openreview.net/group?id=auai.org/UAI/2022/Workshop/TPM

**Accepted papers will be considered for a best paper award**

*Organizers*
YooJung Choi (UCLA)
Eric Nalisnick (University of Amsterdam)
Martin Trapp (Aalto University)
Fabrizio Ventola (TU Darmstadt)
Antonio Vergari (University of Edinburgh)

*For any questions, contact us at tpmworkshop2022 at gmail.com*

**Please consider sharing this CFP in your network**
From U.K.Gadiraju at tudelft.nl Fri Jun 3 07:24:05 2022 From: U.K.Gadiraju at tudelft.nl (Ujwal Gadiraju) Date: Fri, 3 Jun 2022 11:24:05 +0000 Subject: Connectionists: [ACM Conference on Hypertext & Social Media, HT 2022] Call for Participation Message-ID: <2537d98d90b545bd96d7e3e0c9d31529@tudelft.nl>

Hi all,

--- Apologies if you have received multiple copies of this announcement

CALL FOR PARTICIPATION
===
The ACM Conference on Hypertext and Social Media is a premium venue for high-quality peer-reviewed research on hypertext theory, systems and applications. It is concerned with all aspects of modern hypertext research, including social media, the semantic web, dynamic and computed hypertext and hypermedia, as well as narrative systems and applications. For the first time in the history of the conference, HT '22 will be run in a hybrid mode between **June 28 - July 1, 2022**, with the opportunity for speakers and attendees to participate onsite or online.

IMPORTANT DATES
===
* **Early-bird registration: May 27, 2022**
* Regular registration: May 28 - June 24, 2022
* On-site registration: from June 28, 2022
* Conference: June 28 - July 1, 2022

REGISTRATION
===
Registration is open at https://ht.acm.org/ht2022/registration/. Register on or before **May 27** to benefit from an early-bird discount! Make sure to book your stay at the Hotel Four Points by Sheraton conference hotel. The hotel is at the center of the conference activity and will allow you to network easily with other conference attendees. See how to get access to a discounted rate at https://ht.acm.org/ht2022/venue/

PRELIMINARY PROGRAM
===
**Keynotes**
* **m.c. schraefel**, University of Southampton
* **Dene Grigar**, Washington State University Vancouver
* **Nuria Rodríguez**, Universidad de Málaga

**Paper sessions**
* Research papers: https://ht.acm.org/ht2022/accepted-papers/
* Blue Sky Ideas papers: https://ht.acm.org/ht2022/accepted-blue-sky-ideas-papers/

**Other Sessions**
* Late-breaking results: https://ht.acm.org/ht2022/accepted-late-breaking-results-papers/
* Demonstrations: https://ht.acm.org/ht2022/accepted-demo-papers/

**Workshops**: https://ht.acm.org/ht2022/accepted-workshops/
* HUMAN'22 - 5th Workshop on Human Factors in Hypertext
* The Narrative and Hypertext (NHT)
* Open Challenges in Online Social Networks
* 7th edition of The International Workshop on Social Media World Sensors (SIdEWayS)

ORGANIZATION
===
**General Chairs**
* Alejandro Bellogín, Universidad Autónoma de Madrid, Spain
* Ludovico Boratto, University of Cagliari, Italy
**Other Chairs**: https://ht.acm.org/ht2022/organization/

---
Best,
Ujwal
____________________________________
Dr. Ir. Ujwal Gadiraju
Assistant Professor
Web Information Systems
Delft University of Technology
The Netherlands
W: https://wis.ewi.tudelft.nl/gadiraju
W: https://www.ujwalgadiraju.com
E: u.k.gadiraju at tudelft.nl
https://www.academicfringe.org
From david at irdta.eu Sat Jun 4 04:04:00 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 4 Jun 2022 10:04:00 +0200 (CEST) Subject: Connectionists: DeepLearn 2022 Summer: early registration June 14 Message-ID: <1156544769.1464080.1654329840356@webmail.strato.com>

******************************************************************
6th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING
DeepLearn 2022 Summer
Las Palmas de Gran Canaria, Spain
July 25-29, 2022
https://irdta.eu/deeplearn/2022su/
*****************
Co-organized by:
University of Las Palmas de Gran Canaria
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
******************************************************************
Early registration: June 14, 2022
******************************************************************

SCOPE:
DeepLearn 2022 Summer will be a research training event with a global scope, aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Bournemouth, and Guimarães.

Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, etc.

Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and the main challenges identified, through 21 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully in the event live online. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:
Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people less or more advanced in their careers will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Summer is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:
DeepLearn 2022 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival.
The venue will be:
Institución Ferial de Canarias
Avenida de la Feria, 1
35012 Las Palmas de Gran Canaria
https://www.infecar.es/index.php?option=com_k2&view=item&layout=item&id=360&Itemid=896

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Wahid Bhimji (Lawrence Berkeley National Laboratory), Deep Learning on Supercomputers for Fundamental Science
Joachim M. Buhmann (Swiss Federal Institute of Technology Zurich), Machine Learning -- A Paradigm Shift in Human Thought!?
Kate Saenko (Boston University), Overcoming Dataset Bias in Deep Learning

PROFESSORS AND COURSES:

Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning: From Theory to Applications in the Natural Sciences
Arindam Banerjee (University of Illinois Urbana-Champaign), [intermediate/advanced] Deep Generative and Dynamical Models
Mikhail Belkin (University of California San Diego), [intermediate/advanced] Modern Machine Learning and Deep Learning through the Prism of Interpolation
Arthur Gretton (University College London), [intermediate/advanced] Probability Divergences and Generative Models
Phillip Isola (Massachusetts Institute of Technology), [intermediate] Deep Generative Models
Mohit Iyyer (University of Massachusetts Amherst), [intermediate/advanced] Natural Language Generation
Irwin King (Chinese University of Hong Kong), [intermediate/advanced] Deep Learning on Graphs
Vincent Lepetit (Paris Institute of Technology), [intermediate] Deep Learning and 3D Reasoning for 3D Scene Understanding
Dimitris N. Metaxas (Rutgers, The State University of New Jersey), [intermediate/advanced] Model-based, Explainable, Semisupervised and Unsupervised Machine Learning for Dynamic Analytics in Computer Vision and Medical Image Analysis
Sean Meyn (University of Florida), [introductory/intermediate] Reinforcement Learning: Fundamentals, and Roadmaps for Successful Design
Louis-Philippe Morency (Carnegie Mellon University), [intermediate/advanced] Multimodal Machine Learning
Wojciech Samek (Fraunhofer Heinrich Hertz Institute), [introductory/intermediate] Explainable AI: Concepts, Methods and Applications
Clara I. Sánchez (University of Amsterdam), [introductory/intermediate] Mechanisms for Trustworthy AI in Medical Image Analysis and Healthcare
Björn W. Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing
Jonathon Shlens (Apple), [introductory/intermediate] An Introduction to Computer Vision and Convolutional Neural Networks [virtual]
Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines
Csaba Szepesvári (University of Alberta), [intermediate/advanced] Tools and Techniques of Reinforcement Learning to Overcome Bellman's Curse of Dimensionality
A. Murat Tekalp (Koç University), [intermediate/advanced] Deep Learning for Image/Video Restoration and Compression
Alexandre Tkatchenko (University of Luxembourg), [introductory/intermediate] Machine Learning for Physics and Chemistry
Li Xiong (Emory University), [introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning
Ming Yuan (Columbia University), [intermediate/advanced] Low Rank Tensor Methods in High Dimensional Data Analysis

OPEN SESSION:

An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 17, 2022.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

EMPLOYER SESSION:

Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

ORGANIZING COMMITTEE:

Marisol Izquierdo (Las Palmas de Gran Canaria, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION:

Registration must be done at https://irdta.eu/deeplearn/2022su/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed, and the on-line registration tool disabled, once the capacity of the venue is exhausted. It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same.

ACCOMMODATION:

Accommodation suggestions will be available in due time at https://irdta.eu/deeplearn/2022su/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered, indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION:

david at irdta.eu

ACKNOWLEDGMENTS:

Cabildo de Gran Canaria
Universidad de Las Palmas de Gran Canaria
Universitat Rovira i Virgili
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
From marcin at amu.edu.pl Sat Jun 4 15:18:50 2022
From: marcin at amu.edu.pl (Marcin Paprzycki)
Date: Sat, 4 Jun 2022 21:18:50 +0200
Subject: Connectionists: Fwd: Final Call -- FedCSIS 2022, Position Papers -- Strict submission deadline (position papers): June 7, 2022, 23:59:59 AoE (no extensions)
In-Reply-To: <52073079-4f24-adf4-5628-7721470e658c@ibspan.waw.pl>
References: <52073079-4f24-adf4-5628-7721470e658c@ibspan.waw.pl>
Message-ID: <22eaef49-59e2-6d3c-2c84-95ef5091e92e@amu.edu.pl>

CALL FOR POSITION PAPERS

17th Conference on Computer Science and Intelligence Systems (FedCSIS'2022)
HYBRID CONFERENCE -- Sofia, Bulgaria, 4-7 September, 2022
(CORE B Ranked, IEEE: #54150)
www.fedcsis.org

Strict submission deadline (position papers): June 7, 2022, 23:59:59 AoE (no extensions)

************************** COVID-19 Information ************************
The conference will take place in Sofia, Bulgaria, for those who are able to make it there. For those who cannot reach Sofia, online participation will be made available.
************************************************************************

Please feel free to forward this announcement to your colleagues and associates who could be interested in it.

The FedCSIS 2022 Federated Conference invites submissions of POSITION PAPERS to its respective events. Position papers must not exceed 8 pages, and they should relate to ongoing research or experience. Position papers will be presented by the authors alongside regular papers. Position papers may also be submitted as DEMO PAPERS and presented as demonstrations of software tools and products. They should describe not-for-profit software tools in a prototype, alpha, or beta version.

We invite TWO TYPES OF POSITION PAPERS:
- EMERGING RESEARCH PAPERS present preliminary research results from work in progress, based on a sound scientific approach but presenting work not yet completely validated. They must describe precisely the research problem and its rationale. They must also define the intended future work, including the expected benefits of solving the tackled problem.
- CHALLENGE PAPERS propose and describe research challenges in the theory or practice of computer science and information systems. Papers in this category must be based on a deep understanding of existing research or industrial problems, and should define new, promising research directions.

FedCSIS TRACKS AND TECHNICAL SESSIONS

FedCSIS 2022 consists of five conference Tracks, hosting Technical Sessions:

Track 1: Advanced Artificial Intelligence in Applications (17th Symposium AAIA'22)
* Artificial Intelligence for Next-Generation Diagnostic Imaging (1st Workshop AI4NextGenDI'22)
* Intelligence for Patient Empowerment with Sensor Systems (1st Workshop AI4Empowerment'22)
* Intelligence in Machine Vision and Graphics (4th Workshop AIMaViG'22)
* Ambient Assisted Living Systems (1st Workshop IntelligentAAL'22)
* Personalization and Recommender Systems (1st Workshop PeRS'22)
* Rough Sets: Theory and Applications (4th International Symposium RSTA'22)
* Computational Optimization (15th Workshop WCO'22)

Track 2: Computer Science & Systems (CSS'22)
* Actors, Agents, Assistants, Avatars (1st Workshop 4A'22)
* Computer Aspects of Numerical Algorithms (15th Workshop CANA'22)
* Concurrency, Specification and Programming (30th Symposium CS&P'22)
* Multimedia Applications and Processing (15th International Symposium MMAP'22)
* Scalable Computing (12th Workshop WSC'22)

Track 3: Network Systems and Applications (NSA'22)
* Complex Networks - Theory and Application (1st Workshop CN-TA'22)
* Internet of Things - Enablers, Challenges and Applications (6th Workshop IoT-ECAW'22)
* Cyber Security, Privacy and Trust (3rd International Forum NEMESIS'22)

Track 4: Advances in Information Systems and Technologies (AIST'22)
* Data Science in Health, Ecology and Commerce (4th Workshop DSH'22)
* Information Systems Management (17th Conference ISM'22)
* Knowledge Acquisition and Management (28th Conference KAM'22)

Track 5: Software, System and Service Engineering (S3E'22)
* Cyber-Physical Systems (9th Workshop IWCPS-8)
* Model Driven Approaches in System Development (7th Workshop MDASD'22)
* Software Engineering (42nd IEEE Workshop SEW-42)

Recent Advances in Information Technology (7th Symposium DS-RAIT'22)

KEYNOTE SPEAKERS
* Krassimir Atanassov, Bulgarian Academy of Sciences, Sofia, Bulgaria
  https://scholar.google.com/citations?hl=pl&user=K-vuWKsAAAAJ
* Thomas Blaschke, University of Salzburg, Salzburg, Austria
  https://scholar.google.com/citations?user=kMroJzUAAAAJ
* Chris Cornelis, Ghent University, Ghent, Belgium
  https://scholar.google.com/citations?hl=pl&user=ln46HlkAAAAJ
* Franco Zambonelli, University of Modena e Reggio Emilia, Bologna, Italy
  https://scholar.google.com/citations?hl=pl&user=zxulxcoAAAAJ

ZDZISLAW PAWLAK BEST PAPER AWARD

The Professor Zdzislaw Pawlak Awards are given in the following categories:
* Best Paper Award (€600)
* Young Researcher Paper Award (€400)
* Industry Cooperation Award (€400)
* International Cooperation Award (€400)

All papers accepted to FedCSIS 2022 are eligible to be considered as award winners. This award will be granted independently from awards given by individual FedCSIS events (Tracks and/or Technical Sessions). Past Award winners can be found here: https://fedcsis.org/2022/zp_award

POSITION PAPER SUBMISSION AND PUBLICATION:

Position papers will be published as a Volume of the Annals of Computer Science and Information Systems series (https://annals-csis.org/), with ISBN, ISSN and DOI numbers. The papers will be submitted for indexing to the DBLP Computer Science Bibliography and Google Scholar (see also: https://annals-csis.org/indexation). The Annals-CSIS volume will NOT be placed in the IEEE Digital Library and will NOT be submitted to Clarivate Analytics Web of Science.
Authors should submit a position paper in English, carefully checked for correct grammar and spelling, using the on-line submission procedure. The guidelines for paper formatting provided at the conference website ought to be used for all submitted papers. The required submission format is the same as the camera-ready format. Please check, and carefully follow, the instructions and templates provided. Papers that are out of the scope of the selected event, or that contain any form of (self-)plagiarism, will be rejected without review. All position papers will be refereed before inclusion in the conference program.

IMPORTANT DATES
+ Position paper submission: June 7, 2022 (23:59 AoE)
+ Author notification: July 6, 2022
+ Final paper submission and registration: July 12, 2022
+ Payment (early fee deadline): August 2, 2022
+ Conference dates: September 4-7, 2022

CHAIRS OF FedCSIS CONFERENCE SERIES
Maria Ganzha, Marcin Paprzycki, Dominik Slezak

CONTACT FedCSIS at: secretariat at fedcsis.org

FedCSIS in Social Media:
FedCSIS on Facebook: http://tinyurl.com/FedCSISFacebook
FedCSIS on LinkedIn: https://tinyurl.com/FedCSISonLinkedIN
FedCSIS on Twitter: https://twitter.com/FedCSIS
FedCSIS on XING: http://preview.tinyurl.com/FedCSISonXING

ABOUT FedCSIS Conference Series

FedCSIS is an annual international conference, this year organized jointly by the Polish Information Processing Society (PTI), the IEEE Poland Section Computer Society Chapter and the Bulgarian Academy of Sciences. The mission of the FedCSIS Conference Series is to provide a highly acclaimed forum in computer science and information systems. We invite researchers from around the world to contribute their research results and participate in Technical Sessions focused on their scientific and professional interests in computer science and information systems.

From dimitri.ognibene at gmail.com Sat Jun 4 14:13:02 2022
From: dimitri.ognibene at gmail.com (Dimitri Ognibene)
Date: Sat, 4 Jun 2022 20:13:02 +0200
Subject: Connectionists: [JOBS] 1 Post-Doc open position on AI-ML to understand and counter social media threats for teenagers
Message-ID: 

[JOBS] 1 Post-Doc open position on AI-ML to understand and counter social media threats for teenagers

######### Apologies for cross-posting #########

Dear colleagues,

The University of Milano-Bicocca is offering 1 postdoctoral position on machine learning and AI applied to understanding and countering social media threats for teenagers. The successful candidate will be involved in the multidisciplinary project COURAGE, funded by the Volkswagen Foundation. The position starts in September 2022.

Please contact us for expressions of interest and preliminary information: dimitri.ognibene at unimib dot it (PI)

1 Researcher Position
Application opens: mid-June (TBD)
Application closes: late June (TBD)
Interview: 5th July, 14:00 (CET)
Starts: September 2022
Duration: 1.5 years (18 months)
Salary: €2600 per month
Work permit: a valid permit to work in Italy from September 2022 is a strong asset
Topics: Social media, NLP, machine learning, cognitive models, opinion dynamics, computer vision, reinforcement learning, graph neural networks, recommender systems, network analysis
Project: COURAGE https://www.upf.edu/web/courage

Informal description of the job:
Do social media harm teenagers and our society? Can we make them safer?
Let's understand how to reshape social media algorithms together with this competitive research position in #machinelearning, #artificialintelligence and #socialmedia modelling, based at the Università degli Studi di Milano-Bicocca on the project COURAGE #couragecompanion #AIforSocialGood

We will use the state of the art in graph neural networks, reinforcement learning, NLP, CV, and machine learning in general to improve our understanding of social media dynamics, and help our society by supporting and teaching young people to tackle hate speech and fake news in social media. Many opportunities for interdisciplinary and international research.

--
Dimitri Ognibene, PhD
Associate Professor at Università Milano-Bicocca
Honorary Lecturer of Computer Science and Artificial Intelligence at University of Essex
http://sites.google.com/site/dimitriognibenehomepage/
*Skype:* dimitri.ognibene

From robert.lieck at gmail.com Fri Jun 3 10:32:41 2022
From: robert.lieck at gmail.com (Robert Lieck)
Date: Fri, 3 Jun 2022 15:32:41 +0100
Subject: Connectionists: PhD Position on Neuro-Symbolic Modelling of Music (Durham, UK)
Message-ID: <22fb5552-021c-9b68-47cd-03acb36af418@gmail.com>

Dear all,

I am happy to announce a funded *PhD position on neuro-symbolic modelling of music* at Durham University (UK). The project is about combining *deep learning* with structured/*symbolic methods* (artificial grammars, graphical models, etc.) to address fundamental challenges in *machine learning* and *music analysis*. More details can be found here and below. Please distribute to anyone who might be interested. Many thanks!

Best wishes,
Robert

/(apologies for cross-posting)/

--
*Dr Robert Lieck* (he/him)
*Assistant Professor*
/Department of Computer Science/
/Durham University/
robert.lieck at durham.ac.uk

========================================

*More information can be found here.*

This funded PhD position is about developing novel algorithmic tools for music analysis using deep learning and structured/symbolic methods. It will combine approaches from computational musicology, image analysis, and natural language processing to advance the state of the art in the field.

Music analysis is a highly challenging task for which artificial intelligence (AI) and machine learning (ML) are lagging far behind the capabilities of human experts. Solving it requires a combination of two different model types: (1) neural networks and deep learning techniques to extract features from the input data, and (2) structured graphical models and artificial grammars to represent the complex dependencies in a musical piece. The central goal of the project is to leverage the synergies from combining these techniques to build models that achieve human-expert-level performance in analysing the structure of a musical piece.

*You will get:*
* the chance to do your PhD at a *world-class university* and conduct *groundbreaking research* in machine learning and artificial intelligence
* the opportunity to work on an *interdisciplinary* project with *real-world applications* in the field of music
* *committed supervision* and *comprehensive training* (regular one-on-one meetings, ample time for discussion, detailed feedback, support in your scientific development, e.g., presentation skills, research methodology, scientific writing, etc.)
* a *stimulating*, *diverse*, and *supportive* research environment (as a member of the interdisciplinary AIHS group)
* the opportunity to publish in *top journals*, attend *international conferences*, and build a *network of collaborations*

*You should bring:*
* *enthusiasm* for interdisciplinary research in artificial intelligence and music
* an *open mind-set* and creative *problem-solving* skills
* a *solution-oriented* can-do mentality
* a desire to *understand* the structure of *music* and its inner workings
* a good command of a *modern programming language* (preferably Python) and familiarity with a modern *deep learning framework* (e.g. PyTorch)
* a strong master's degree (or equivalent) with a significant *mathematical* or *computational* component

If you are interested, please send an email with your CV and a short informal motivation to Robert Lieck (robert.lieck at durham.ac.uk) for initial discussions.

*Important Note:* We are looking to fill this position as soon as possible (the position is still open as long as it is advertised) and are accepting applications on a rolling basis. The preferred start date is October 2022 (new academic year).

We would particularly like to encourage applications from women, disabled, Black, Asian and other minority ethnic candidates, since these groups are currently underrepresented in our area.

From htlin at csie.ntu.edu.tw Fri Jun 3 18:11:17 2022
From: htlin at csie.ntu.edu.tw (Hsuan-Tien Lin)
Date: Sat, 4 Jun 2022 06:11:17 +0800
Subject: Connectionists: [Deadline Extended to June 6th] NeurIPS 2022 Call for Workshop Proposals
Message-ID: 

We have extended the workshop proposal deadline for NeurIPS 2022 to June 6th, 2022 (AoE). Please check https://neurips.cc/Conferences/2022/CallForWorkshops for details. For any questions, please feel free to let us know.

Thank you,
Hanie, Hsuan-Tien, Sungjin, and Tristan
NeurIPS 2022 Workshop Chairs
workshop-chairs at neurips.cc

From trentin at dii.unisi.it Fri Jun 3 10:24:46 2022
From: trentin at dii.unisi.it (Edmondo Trentin)
Date: Fri, 3 Jun 2022 16:24:46 +0200
Subject: Connectionists: Deadline extension: 10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR 2022, Hybrid)
Message-ID: <1ba19e95f51f04bab779b99ae24df3a8.squirrel@mailsrv.diism.unisi.it>

Hybrid ANNPR 2022: due to the status of the ongoing Covid-19 pandemic, remote presentations and attendance will be accommodated. Nonetheless, ANNPR 2022 is being planned as an in-person event, and we encourage prospective attendees to join us in Dubai.

*** Deadline extension

10th IAPR TC3 International Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR 2022)
November 24th - 26th, 2022
Dubai Campus of Heriot-Watt University, Dubai, UAE
URL: https://annpr2022.com/
Sponsored by the International Association for Pattern Recognition (IAPR)

*** The submission deadline is extended to July 3rd, 2022 ***

New Important Dates
Paper submission (extended): July 3, 2022
Notification of acceptance: Jul 29, 2022
Camera ready due: Sep 4, 2022
Early registration: Sep 4, 2022
Workshop: Nov. 24-26, 2022 (Thu to Sat)
The Workshop proceedings will be published in the Springer LNAI series.

ANNPR 2022 invites papers that present original work in the areas of neural networks and machine learning oriented to pattern recognition, focusing on their algorithmic, theoretical, and applied aspects. Topics of interest include, but are not limited to:

Methodological Issues
- Supervised, semi-supervised, unsupervised, and reinforcement learning
- Deep learning and deep reinforcement learning
- Feed-forward, recurrent, and convolutional neural networks
- Hierarchical modular architectures and hybrid systems
- Interpretability and explainability of neural networks
- Generative models
- Robustness & generalization of neural networks
- Meta-learning, Auto-ML
- Multiple classifier systems and ensemble methods
- Kernel machines
- Probabilistic graphical models

Applications to Pattern Recognition
- Image processing and segmentation
- Object detection
- NLP and conversational agents
- Sensor-fusion and multi-modal processing
- Biometrics, including speech and speaker recognition and segmentation
- Data, text, and social media analytics
- Bioinformatics/Cheminformatics and medical applications
- Industrial applications, e.g. quality control and predictive maintenance
- Data clustering

ANNPR 2022 is organized by the Technical Committee 3 (IAPR TC3) on Neural Networks & Computational Intelligence of the International Association for Pattern Recognition (IAPR). As such, we particularly encourage submissions that fit the Manifesto and the research directions of the TC3 (see http://iapr-tc3.diism.unisi.it/Research.html). The Workshop is officially sponsored by the IAPR.

Paper Submission: Prospective authors shall submit their paper in Springer LNCS/LNAI format. Please refer to the "Submission" page of the ANNPR 2022 website at: https://annpr2022.com/paper-submission/ Instructions for Authors, LaTeX templates, etc. are available at the Springer LNCS/LNAI website (see http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines). The maximum paper length is 12 pages. Submission of a paper constitutes a commitment that, if accepted, at least one of the authors will complete an early registration to the workshop. On-line submission via EasyChair is available through the aforementioned ANNPR 2022 web page.

For more information, please visit us at: https://annpr2022.com/ Do not hesitate to contact the ANNPR 2022 Chairs for any inquiries.

ANNPR 2022 Chairs:
Neamat El Gayar, Heriot-Watt University, Dubai (UAE)
Hazem Abbas, Ain Shams University, Cairo (Egypt)
Mirco Ravanelli, Concordia University, Montréal (Canada)
Edmondo Trentin, University of Siena, Siena (Italy)

-----------------------------------------------
Edmondo Trentin, PhD
Dip. Ingegneria dell'Informazione e Scienze MM.
V. Roma, 56 - I-53100 Siena (Italy)
E-mail: trentin at dii.unisi.it
Voice: +39-0577-234636 Fax: +39-0577-233602
WWW: http://www.dii.unisi.it/~trentin/HomePage.html

From oliver at roesler.co.uk Fri Jun 3 12:30:43 2022
From: oliver at roesler.co.uk (Oliver Roesler)
Date: Fri, 3 Jun 2022 16:30:43 +0000
Subject: Connectionists: CFP RO-MAN 2022 Workshop on Machine Learning for HRI: Bridging the Gap between Action and Perception
Message-ID: <64d4cde3-2282-5e8d-d005-082d7bb8ec49@roesler.co.uk>

*CALL FOR PAPERS*

**Apologies for cross-posting**

The *full-day virtual* workshop *Machine Learning for HRI: Bridging the Gap between Action and Perception (ML-HRI)*, in conjunction with the *31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)*, August 22, 2022

Webpage: https://ml-hri2022.ivai.onl/

*I. Aim and Scope*

A key factor for the acceptance of robots as partners in complex and dynamic human-centered environments is their ability to continuously adapt their behavior. This includes learning the most appropriate behavior for each encountered situation based on its specific characteristics as perceived through the robot's sensors. To determine the correct actions, the robot has to take into account prior experiences with the same agents, their current emotional and mental states, as well as their specific characteristics, e.g. personalities and preferences. Since every encountered situation is unique, the appropriate behavior cannot be hard-coded in advance but must be learned over time through interactions. Therefore, artificial agents need to be able to learn continuously which behaviors are most appropriate for certain situations and people, based on feedback and observations received from the environment, to enable more natural, enjoyable, and effective interactions between humans and robots. This workshop aims to attract the latest research studies and expertise in human-robot interaction and machine learning at the intersection of rapidly growing communities, including social and cognitive robotics, machine learning, and artificial intelligence, to present novel approaches aiming at integrating and evaluating machine learning in HRI. Furthermore, it will provide a venue to discuss the limitations of the current approaches and future directions towards creating robots that utilize machine learning to improve their interaction with humans.

*II. Keynote Speakers and Panelists*
1. *Dorsa Sadigh* - Stanford University - USA
2. *Oya Celiktutan* - King's College London - UK
3. *Sean Andrist* - Microsoft - USA
4. *Stefan Wermter* - University of Hamburg - Germany

*III. Submission*
1. For paper submission, use the following EasyChair web link: Paper Submission.
2. Use the RO-MAN 2022 format: RO-MAN Papers Templates.
3. Submitted papers should be 4-6 pages for regular papers and 2 pages for position papers.

The primary list of topics covers the following points (but is not limited to):
* Autonomous robot behavior adaptation
* Interactive learning approaches for HRI
* Continual learning
* Meta-learning
* Transfer learning
* Learning for multi-agent systems
* User adaptation of interactive learning approaches
* Architectures, frameworks, and tools for learning in HRI
* Metrics and evaluation criteria for learning systems in HRI
* Legal and ethical considerations for real-world deployment of learning approaches

*IV. Important Dates*
1. Paper submission: *June 17, 2022 (AoE)*
2. Notification of acceptance: *August 1, 2022 (AoE)*
3. Camera ready: *August 14, 2022 (AoE)*
4. Workshop: *August 22, 2022*
*V. Organizers*
1. *Oliver Roesler* - IVAI - Germany
2. *Elahe Bagheri* - IVAI - Germany
3. *Amir Aly* - University of Plymouth - UK

From yoram.burak at elsc.huji.ac.il Mon Jun 6 00:05:00 2022
From: yoram.burak at elsc.huji.ac.il (Yoram Burak)
Date: Mon, 6 Jun 2022 07:05:00 +0300
Subject: Connectionists: The KiloNeurons ERC project: post-doctoral position in theoretical neuroscience
Message-ID: 

The Burak Lab at the Hebrew University is seeking to hire a postdoctoral research fellow in theoretical neuroscience. The postdoctoral fellow will take part in the ERC-funded KiloNeurons project, led by Edvard Moser (Kavli Institute for Theoretical Neuroscience, NTNU) and Yoram Burak (Edmond and Lily Safra Center for Brain Sciences and the Racah Institute of Physics). The project aims to identify how thousands of neurons in the mammalian cortex interact with each other to achieve computational and cognitive functions.

The KiloNeurons project offers opportunities to work on diverse theoretical and computational questions. These involve approaches ranging from pure theoretical modeling to the analysis and interpretation of large-scale neural data. Theoretical tools include methods from nonlinear dynamics, machine learning, statistical physics, and information theory. A generous budget is available for travel between Israel and Norway to facilitate interaction with members of the Moser lab, as well as for travel and participation in scientific meetings, and for the purchase of computing equipment.

We are looking for candidates with excellent abilities in theoretical research, strong intellectual curiosity and drive, a keen interest in computational neuroscience, and an interest in relating theory to experimental data. Candidates must have an excellent research record in theoretical neuroscience, theoretical physics, mathematics, computer science, or a related field. Experience with research related to machine learning, statistical physics, nonlinear dynamics, information theory, or large-scale data analysis, as well as prior familiarity with research in computational neuroscience, is an advantage.

To apply: Please send the following information to kiloneurons-positions at elsc.huji.ac.il in a single PDF file:
1. Curriculum vitae
2. List of publications
3. Brief statement of research interests
4. The names and contact details of three referees

Deadline: For full consideration, applications should be received by June 30, 2022.

___
Yoram Burak
Associate Professor
Racah Institute of Physics, and Edmond and Lily Safra Center for Brain Sciences
The Hebrew University of Jerusalem
Goodman Brain Sciences Bldg 2102, Safra Campus, Jerusalem 91904, Israel
p: +972-2-6584523
web: https://www.buraklab.me/
From perusquia at ieee.org Sun Jun 5 08:18:10 2022
From: perusquia at ieee.org (Monica Perusquia Hernandez)
Date: Sun, 5 Jun 2022 21:18:10 +0900
Subject: Connectionists: MEEC2022 - 3rd workshop on Momentary Emotion Elicitation and Capture - Call for Papers
Message-ID: 

=======================================================
3rd Momentary Emotion Elicitation & Capture (MEEC) at ACII 2022
Call for Papers
17 Oct 2022, Hybrid event, Nara, Japan
https://cwi-dis.github.io/meec-ws/index.html
=======================================================

To train machines to detect and recognise human emotions sensibly, we need valid emotion ground truths that consider dynamic changes over time. A fundamental challenge here is the momentary emotion elicitation and capture (MEEC) from groups and individuals, continuously and in real time, without adversely affecting user experience. In this half-day virtual ACII 2022 workshop, we will (1) have a keynote presentation about ambulatory sampling methods by Prof. Ulrich Ebner-Priemer from the Karlsruhe Institute of Technology, Germany; (2) have participant talks showcasing their submissions; (3) brainstorm on techniques to understand dynamic changes given different temporal measurement resolutions; and (4) create a battery of methods relevant to diverse affective contexts.

We seek contributions across disciplines that explore how emotions can be naturally elicited and captured in the moment. Topics include:

1. Elicitation:
   1. multi-modal (e.g., film, music) and multi-sensory (e.g., smell, taste, thermal) elicitation
   2. emotion elicitation across domains (e.g., automotive, healthcare)
   3. elicitation and immersiveness (e.g., AR/VR/MR)
   4. elicitation over time (e.g., mood)
   5. elicitation through human-robot interaction
   6. ethical considerations
2. Capture:
   1. emotion models (dimensional, discrete)
   2. annotation modalities (e.g., speech) and (remote) tools (e.g., ESMs)
   3. devices (e.g., mobile, wearable) and sensors (e.g., RGB / thermal cameras, EEG, eye-tracking)
   4. attention considerations (e.g., interruptions)
   5. ethical issues in tracking and detection

We invite position papers, research results papers, and demos (2-9 pages, including references) that describe/showcase emotion elicitation and/or capture methods. Submissions will be peer-reviewed by two reviewers and selected on their potential to spark discussion. Submissions should be prepared according to the IEEE conference template and submitted in PDF through EasyChair (https://easychair.org/conferences/?conf=meec2022). The templates can be found at this link: LaTeX/Word Templates. Accepted submissions will be made available in the workshop proceedings of ACII 2022. They will be published and indexed on IEEE Xplore. At least one author must register for the workshop and one day of the conference.

Submission Deadline: 8 July 2022, 23:59 AoE
Notification of Acceptance: 31 July 2022
Camera-ready Deadline: 10 August 2022
Workshop Day: 17 October 2022

Sincerely,
Monica Perusquía-Hernández, PhD PDEng
Website | Schedule a meeting

From juyang.weng at gmail.com Sun Jun 5 11:04:48 2022
From: juyang.weng at gmail.com (Juyang Weng)
Date: Sun, 5 Jun 2022 11:04:48 -0400
Subject: Connectionists: Conscious Learning Algorithm and Brain-Mind Institute 2022
Message-ID: 

J. Weng: "A Developmental Network Model of Conscious Learning in Biological Brains" https://doi.org/10.21203/rs.3.rs-1700782/v1

The paper has been rejected by Science, Nature, PNAS and even arXiv. With some difficulty, Research Square (invested in by Springer Nature) agreed to post it with a DOI without it passing so-called "peer review". I am humbled to admit that this is the first brain algorithm ever published, as far as I am aware. If this is not true, please kindly let me know.

To learn such a brain algorithm, you need to register with the Brain-Mind Institute: http://www.brain-mind-institute.org/bmi-871.html by filling in the registration form from the site. BMI 2022 consists of three courses and the AIML Contest 2022. Early registration: June 19, 2022.

Best regards,
-John
--
Juyang (John) Weng

From bernstein.communication at fz-juelich.de Mon Jun 6 04:10:26 2022
From: bernstein.communication at fz-juelich.de (Bernstein Communication)
Date: Mon, 6 Jun 2022 10:10:26 +0200
Subject: Connectionists: Reminder: Call for Abstracts for the Bernstein Conference
Message-ID: <1c05b453-46fd-5d56-2768-bf8d5acb7898@fz-juelich.de>

**Apologies for cross-posting**

Dear colleagues,

this is a gentle reminder to submit your abstract to be considered as a contributed talk for the Bernstein Conference before June 15 here: https://bit.ly/BC_submission

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. It has established itself as one of the most renowned conferences worldwide in this field, attracting students, postdocs and PIs from around the world to meet and discuss new scientific discoveries. In 2022, the Bernstein Conference will take place as an in-person meeting again in Berlin. Talks of the Main Conference are going to be livestreamed, given the speakers' consent.
www.bernstein-conference.de

____
IMPORTANT DATES

* Bernstein Conference: September 13 - 16, 2022
* Deadline for submission of abstracts to be considered for Contributed Talks: July 15, 2022
* Deadline for abstract submission: July 18, 2022

____
ABSTRACTS

We invite the computational neuroscience community to submit their abstracts: submitted abstracts can either be considered as contributed talks or posters. All accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Further information can be found here: https://bit.ly/BC_submission

____
INVITED SPEAKERS

Keynote
Sonja Hofer (University College London, UK)

Invited Talks
Bing Brunton (University of Washington, USA)
Christine Constantinople (New York University, USA)
Carina Curto (Pennsylvania State University, USA)
Liset M de la Prida (Instituto Cajal, Spain)
Juan Alvaro Gallego (Imperial College London, UK)
Mehrdad Jazayeri (Massachusetts Institute of Technology, USA)
Gaby Maimon (The Rockefeller University, New York, USA)
Andrew Saxe (University College London, UK)
Henning Sprekeler (Technische Universität Berlin, Germany)
Carsen Stringer (Janelia Research Campus, USA)

____
CONFERENCE COMMITTEE

Raoul-Martin Memmesheimer (Conference Chair)
Christian Machens (Program Chair)
Tatjana Tchumatchenko (Program Vice Chair)
Moritz Helias (Workshop Chair)
Anna Levina (Workshop Vice Chair)
& Megan Carey, Brent Doiron, Tatiana Engel, Ann Hermundstad, Christian Leibold, Timothy O'Leary, Srdjan Ostojic, Cristina Savin, Mark van Rossum, Friedemann Zenke.
____
For any further questions, please contact: bernstein.conference at fz-juelich.de

From bernstein.communication at fz-juelich.de Mon Jun 6 07:35:43 2022
From: bernstein.communication at fz-juelich.de (Bernstein Communication)
Date: Mon, 6 Jun 2022 13:35:43 +0200
Subject: Connectionists: Reminder: Call for Abstracts for the Bernstein Conference
In-Reply-To: <1c05b453-46fd-5d56-2768-bf8d5acb7898@fz-juelich.de>
References: <1c05b453-46fd-5d56-2768-bf8d5acb7898@fz-juelich.de>
Message-ID: <98ab3092-80e6-97b2-6b82-6a22a1960d1c@fz-juelich.de>

Dear colleagues,

please mind the typo in the message above: the deadline for abstracts to be considered as a contributed talk is *June 15*. The date given in the "Important Dates" section is not correct. Apologies for re-posting.

Kind regards,
Janina Radny

From k.wong-lin at ulster.ac.uk Mon Jun 6 06:54:51 2022
From: k.wong-lin at ulster.ac.uk (Wong-Lin, Kongfatt)
Date: Mon, 6 Jun 2022 10:54:51 +0000
Subject: Connectionists: ISRC Computational Neuroscience, Neurotechnology and Neuro-inspired AI Autumn School
Message-ID: 

The Organising Committee from Ulster University's Intelligent Systems Research Centre (ISRC) at Magee Campus in Derry~Londonderry is very pleased to announce that the international Computational Neuroscience, Neurotechnology and Neuro-inspired AI (ISRC-CN3) Autumn School will be held again this year. Supported by the International Brain Research Organisation (IBRO) and other sponsors, this year's Autumn School will run from 24th October to 28th October, inclusive.

The topics covered in the ISRC-CN3 Autumn School include:
* mathematical foundations in neuroscience
* computational modelling of neural-glial systems, neuromodulators and cognition
* neural data science
* neurotechnology
* neuromorphic computing and self-repairing intelligent machines
* spiking neural networks and applications
* cognitive robotics
* ethics in neurotechnology and AI
* entrepreneurship in neurotechnology and AI

This ISRC-CN3 Autumn School is unique not only in the range and types of topics covered but also in that it will be delivered in an integrated way, from pedagogical to advanced levels.
Registration is made highly affordable, and bursaries may be available. Academic researchers at the ISRC and invited external speakers will contribute to the delivery of this 5-day School, which will consist of lectures and labs. Attendees will have the opportunity to present and share their research work on the final day, and awards will be given to the top presenters.

For more information, please visit the websites at:
* https://www.ulster.ac.uk/faculties/computing-engineering-and-the-built-environment/events/isrc-cn3-autumn-school
* https://www.ulster.ac.uk/faculties/computing-engineering-and-the-built-environment/computing-engineering-intelligent-systems/isrc-cn3-autumn-school

Application for the Autumn School is now open and will close on 1st September. For further information, queries, or sponsorship, please contact Dr. KongFatt Wong-Lin (k.wong-lin at ulster.ac.uk).

From joseph.lizier at sydney.edu.au Mon Jun 6 21:47:20 2022
From: joseph.lizier at sydney.edu.au (Joseph Lizier)
Date: Tue, 7 Jun 2022 01:47:20 +0000
Subject: Connectionists: Information theory workshop at CNS*2022 -- call for contributed talks
Message-ID: 

Dear all,

We are pleased to announce that the Workshop on Methods of Information Theory in Computational Neuroscience will be held once again at the 31st Annual Computational Neuroscience Meeting (CNS*2022 conference), in Melbourne, Australia. The workshop will be held during sessions over the final two days of the main conference, July 19 and 20, 2022.

Our confirmed speakers so far include the following:
* Tomáš Bárta, Academy of Sciences of the Czech Republic
* Demian Battaglia, Aix-Marseille University
* Demi Gao, The University of Melbourne
* Tatiana Kameneva, Swinburne University, Melbourne
* Leonardo Novelli, Monash University, Melbourne
* Naotsugu Tsuchiya, Monash University, Melbourne
* Masanori Shimono, Kyoto University
* ... more TBA!

We would like to call for further contributions of talks. If you are interested in contributing a talk, please send a title and abstract to Joseph Lizier (joseph.lizier at sydney.edu.au). Earlier submissions and those from female/minority speakers will be prioritised. Please pay attention to the registration for CNS*2022 (early-bird deadlines and fees), as at least workshop registration is required for attendance, and see our website https://bit.ly/cns2022itw for more details.

We hope you will join us there!

Organising Committee:
Joseph Lizier (chair)
Abdullah Makkeh
Justin Dauwels
Michael Wibral

From juyang.weng at gmail.com Mon Jun 6 17:36:36 2022
From: juyang.weng at gmail.com (Juyang Weng)
Date: Mon, 6 Jun 2022 17:36:36 -0400
Subject: Connectionists: "Deletion of Data" Misconducts in Deep Learning
Message-ID: 

Dear Colleagues,

Please see my attached report about "Deletion of (Undesirable) Data" misconduct, which has become rampant in so-called deep learning as well as in other machine learning modes. Although my attached report is only about Prof. Geoffrey Hinton and his coworkers, the problem is much wider.

Prof. Xuanjing Huang: You raised a question about "computational resource" competition in AI when you interviewed Prof. Lide Wu. As you can see, here the problems are deeper, including those from Alphabet/Google and other teams and companies.

This is an open academic discussion. If you have questions or comments, please send them to all.

Best regards,
-John

---------- Forwarded message ---------
From: Juyang Weng
Date: Mon, Jun 6, 2022 at 2:05 PM
Subject: Deletion of Data Misconducts in All Three Areas of Division III, DIS, NSFC
To: NSFC

Dear NSFC:

Please see the attached letter and its attachment.

Best regards,
-John
--
Juyang (John) Weng

Attachments: NSFC-DIS-Division-III-2022-06-06.pdf, 2022-06-06-Delete-Data-Hinton-CACM.pdf

From emmanuel.vincent at inria.fr Tue Jun 7 04:00:36 2022
From: emmanuel.vincent at inria.fr (Emmanuel Vincent)
Date: Tue, 7 Jun 2022 10:00:36 +0200
Subject: Connectionists: Fully funded PhD position on multimodal speech anonymization, Inria, France
Message-ID: <34864bc2-9a34-06e1-5e05-98009cd4bfe4@inria.fr>

Dear list,

Please forward to anyone interested. Inria is opening a fully funded PhD position on multimodal speech anonymization. For details and to apply, see: https://jobs.inria.fr/public/classic/en/offres/2022-05013 Applications will be reviewed on a continuous basis until June 30.

Best,
--
Emmanuel Vincent
Senior Research Scientist & Head of Science
Inria Nancy - Grand Est
+33 3 8359 3083 - http://members.loria.fr/evincent/

From iccc22.conference at gmail.com Tue Jun 7 08:31:26 2022
From: iccc22.conference at gmail.com (ICCC 2022)
Date: Tue, 7 Jun 2022 22:31:26 +1000
Subject: Connectionists: [ICCC 2022] - Call for participation and registration
Message-ID: 

(Apologies for cross-posting - Please distribute!)

*The 13th International Conference on Computational Creativity (ICCC'22)*
*June 27 - July 1, 2022, Bozen-Bolzano, Italy*

We are happy to announce that the registration for ICCC'22 has opened! Please register for the conference by filling out the forms at: https://www.conftool.com/iccc2022/ Note that early registration ends already on the 10th of June, but registration is open until the conference starts.
Alongside an exciting scientific program, we have four invited keynote speakers from different scientific and artistic disciplines. They will present their research and perspectives on computational creativity and participate in a joint panel debate on the future of CC. In addition, ICCC's social program offers a live music concert, a mountain-museum excursion, and a conference dinner inside a Renaissance castle. The full program is available online here: https://computationalcreativity.net/iccc22/conference-program/

We are very excited to have you join us at ICCC'22 in Bozen-Bolzano!

Best wishes,
Maria and Anna, program chairs ICCC'22
Oliver and Tony, general chairs ICCC'22
Roberto, local chair ICCC'22

----------------------------------------
Follow us at:
Facebook - https://www.facebook.com/pg/computationalcreativity/
Twitter - https://twitter.com/iccc_conf
Instagram - https://www.instagram.com/iccc_conf/

From ieeemocs2021 at gmail.com Tue Jun 7 09:35:28 2022
From: ieeemocs2021 at gmail.com (Michele La Manna)
Date: Tue, 7 Jun 2022 15:35:28 +0200
Subject: Connectionists: [CFP][Extended Deadline] ACM HTESP 2022 Part of EWSN 2022
Message-ID: <69377f82-d90f-dbff-0677-e777595264ee@gmail.com>

Greetings,

This email is to invite you to take part in the first *Workshop on Hot Trends in Embedded Systems Privacy (HTESP 2022)*. This workshop is a much-needed opportunity to showcase new approaches to the problem of Privacy Protection in Embedded Systems Design. If you want to know more about *HTESP*, visit our website: https://sites.google.com/view/htesp2022

*Important Dates:*
Submission due: June 17, 2022 [Firm Deadline]
Notification due: July 1, 2022
Camera-ready due: August 1, 2022
Workshop Date: October 3, 2022

*Abstract:*
Embedded systems have become pervasive in modern society, and their diffusion is still growing rapidly, apparently without limits. Application fields are countless, ranging from automotive, telecommunications, digital healthcare, smart cards, military, satellites, computer networking, digital consumer electronics, Internet of Things, nano- or bionano-things, and so on. The huge amount of data gathered and processed by such systems often contains Personally Identifiable Information (PII) of users, which must be protected according to the recommendations of the EU's General Data Protection Regulation (GDPR). It is essential that users can control when and how their personal data is collected, processed, and communicated by embedded systems. It should come as no surprise that privacy has been acknowledged as an important new dimension in embedded systems design today. Together with classic embedded design metrics like area, performance, and power consumption, privacy is becoming paramount too. Novel privacy protection approaches (differential privacy, k-anonymity, and homomorphic encryption, to name a few) promise to revolutionize the way users can protect their data, but it is still unclear how these approaches can be applied in an efficient and effective manner in embedded systems. The HTESP workshop aims at fostering discussion on the newest privacy trends in embedded systems. Submitted papers should address cutting-edge privacy issues and solutions in the application fields of embedded computing.

*Topics of interest include:*
- Scalable, robust, and privacy-enhancing Embedded Computing.
- Privacy challenges of interoperable and usable Embedded Computing.
- Measurement of Embedded Computing privacy leakage.
- Privacy frameworks for Embedded Computing.
- Threat models and attack strategies for privacy in Embedded Computing.
- Data integrity in Embedded Computing.
- Identity and access management in Embedded Computing.
- Privacy-enhancing and anonymization techniques in Embedded Computing.
- Trust management in Embedded Computing.
- Testbeds and experimental results.
- Blockchain-based identity management and access control systems.
- Differential privacy techniques in Embedded Computing.
- k-anonymity techniques in Embedded Computing.
- Data encryption in Embedded Computing.
- Privacy-enhancing technologies.
- Personal Data Governance.
- Data protection in Embedded Computing.
- Cybersecurity and data protection measures.
- Data protection best practices across verticals.
- GDPR compliance in Embedded Computing.

The workshop will be held in *Linz, Austria, October 03, 2022*, as part of the *INTERNATIONAL CONFERENCE ON EMBEDDED WIRELESS SYSTEMS AND NETWORKS (EWSN 2022)*.

Best Regards,
Michele La Manna.

From camiguel at uc.cl Tue Jun 7 10:35:57 2022
From: camiguel at uc.cl (Camilo Miguel Signorelli)
Date: Tue, 7 Jun 2022 16:35:57 +0200
Subject: Connectionists: Mediterranean seminar for consciousness studies, Campomoro, Corsica
Message-ID: 

Dear Colleagues,

I am happy to announce that applications are open for the second edition of the *Mediterranean seminar for consciousness science* (MESEC). The second MESEC workshop will take place between *August 27th and September 3rd, 2022, at Campomoro, Corsica*. More information: www.mesec.co/event/seminar_2022 and www.mesec.co.

The workshop aims to *connect early-career researchers who are passionate about consciousness science*. We envision this workshop as an open space in which early-career consciousness scientists can meet, discuss, explore, and create long-lasting collaborations in a friendly and supportive environment. The format includes a shared scientific program (keynotes, lightning talks, co-organized symposia, and small discussion groups) with community living for 7 days in an idyllic remote location (Campomoro, Corsica). Program to be announced.

Please feel free to circulate this information : )

Hugs!
MESEC Team 2022
Camilo Miguel Signorelli

From cgf at isep.ipp.pt Tue Jun 7 12:07:05 2022
From: cgf at isep.ipp.pt (Carlos)
Date: Tue, 7 Jun 2022 17:07:05 +0100
Subject: Connectionists: CFP: IoTStreams, Affiliated with ECML-PKDD 2022
Message-ID: 

Call for Papers

IoT Stream 2022 - 3rd Workshop and Tutorial on IoT Streams for Data-Driven Predictive Maintenance
Affiliated with ECML-PKDD 2022, 19-23 September, Grenoble, France, https://2022.ecmlpkdd.org/
Workshop site: https://abifet.wixsite.com/iotstream2022

Motivation and focus

Maintenance is a critical issue in the industrial context for preventing high costs and injuries. Various industries are moving more and more toward digitalization and collecting "big data" to enable or improve the accuracy of their predictions. At the same time, the emerging technologies of Industry 4.0 have empowered data production and exchange, which leads to new concepts and methodologies for the exploitation of large datasets in maintenance. The intensive research effort in data-driven Predictive Maintenance (PdM) is producing encouraging results.
Therefore, the main objective of this workshop is to raise awareness of research trends and promote interdisciplinary discussion in this field.
Data-driven predictive maintenance must deal with big streaming data and handle concept drift due to both changing external conditions and normal wear of the equipment. It requires combining multiple data sources, and the resulting datasets are often highly imbalanced. The knowledge about the systems is detailed, but in many scenarios there is a large diversity in both model configurations and their usage, additionally complicated by low data quality and high uncertainty in the labels. Many recent advancements in supervised and unsupervised machine learning, representation learning, anomaly detection, visual analytics, and similar areas can be showcased in this domain. Therefore, the overlap in research between machine learning and predictive maintenance has continued to increase in recent years. This event is an opportunity to bring together researchers and engineers to discuss emerging topics and key trends. The previous edition of the workshop at ECML 2020 was very popular, and we are planning to continue this success in 2022.
Aim and scope
This workshop welcomes research papers using Data Mining and Machine Learning (Artificial Intelligence in general) to address the challenges and answer questions related to the problem of predictive maintenance: for example, when to perform maintenance actions, how to estimate components' current and future status, which data should be used, what decision support tools should be developed for prognostics, how to improve the estimation accuracy of remaining useful life, and similar. It solicits original work, already completed or in progress. Position papers will also be considered.
The scope of the workshop covers, but is not limited to, the following:
- Predictive and Prescriptive Maintenance
- Fault Detection and Diagnosis (FDD)
- Fault Isolation and Identification
- Anomaly Detection (AD)
- Estimation of Remaining Useful Life of Components, Machines, etc.
- Forecasting of Product and Process Quality
- Early Failure and Anomaly Detection and Analysis
- Automatic Process Optimization
- Self-healing and Self-correction
- Incremental and evolving (data-driven and hybrid) models for FDD and AD
- Self-adaptive time-series based models for prognostics and forecasting
- Adaptive signal processing techniques for FDD and forecasting
- Concept Drift issues in dynamic predictive maintenance systems
- Active learning and Design of Experiments (DoE) in dynamic predictive maintenance
- Industrial process monitoring and modelling
- Maintenance scheduling and on-demand maintenance planning
- Visual analytics and interactive Machine Learning
- Analysis of usage patterns
- Explainable AI for predictive maintenance
It covers real-world applications such as:
- Manufacturing systems
- Transport systems (including roads, railways, aerospace and more)
- Energy and power systems and networks (wind turbines, solar plants and more)
- Smart management of energy demand/response
- Production Processes and Factories of the Future (FoF)
- Power generation and distribution systems
- Intrusion detection and cybersecurity
- Internet of Things
- Smart cities
Paper submission: Authors should submit a PDF version in Springer LNCS style using the workshop EasyChair site: https://easychair.org/my/conference?conf=iotstream2022.
The maximum length of papers is 15 pages, including references, consistent with the ECML PKDD conference submissions. Submitting a paper to the workshop means that, if the paper is accepted, at least one author will attend the workshop and present the paper. Papers not presented at the workshop will not be included in the proceedings. We will follow ECML PKDD's policy for attendance.
Paper publication: Accepted papers will be published by Springer as joint proceedings of several ECML PKDD workshops.
Workshop format:
- Half-day workshop
- 1-2 keynote talks, speakers to be announced
- Oral presentation of accepted papers
Important Dates:
- Workshop paper submission deadline: June 20, 2022
- Workshop paper acceptance notification: July 13, 2022
- Workshop paper camera-ready deadline: July 27, 2022
- Workshop: September 23, 2022, 09:00-12:30 (TBC)
Program Committee members (to be confirmed):
- Edwin Lughofer, Johannes Kepler University of Linz, Austria
- Sylvie Charbonnier, Université Joseph Fourier-Grenoble, France
- David Camacho Fernandez, Universidad Politecnica de Madrid, Spain
- Bruno Sielly Jales Costa, IFRN, Natal, Brazil
- Fernando Gomide, University of Campinas, Brazil
- José A. Iglesias, Universidad Carlos III de Madrid, Spain
- Anthony Fleury, Mines-Douai, Institut Mines-Télécom, France
- Teng Teck Hou, Nanyang Technological University, Singapore
- Plamen Angelov, Lancaster University, UK
- Igor Skrjanc, University of Ljubljana, Slovenia
- Indre Zliobaite, University of Helsinki, Finland
- Elaine Faria, Univ. Uberlandia, Brazil
- Mykola Pechenizkiy, TU Eindhoven, Netherlands
- Raquel Sebastião, Univ. Aveiro, Portugal
- Anders Holst, RISE SICS, Sweden
- Erik Frisk, Linköping University, Sweden
- Enrique Alba, University of Málaga, Spain
- Thorsteinn Rögnvaldsson, Halmstad University, Sweden
- Andreas Theissler, University of Applied Sciences Aalen, Germany
- Vivek Agarwal, Idaho National Laboratory, Idaho
- Manuel Roveri, Politecnico di Milano, Italy
- Yang Hu, Politecnico di Milano, Italy
Workshop Organizers:
- Albert Bifet, Telecom-Paris, Paris, France, and University of Waikato, New Zealand, albert.bifet at telecom-paristech.fr
- João Gama, University of Porto, Portugal, jgama at fep.up.pt
- Slawomir Nowaczyk, Halmstad University, Sweden, slawomir.nowaczyk at hh.se
- Carlos Ferreira, LIAAD INESC, Porto, Portugal, and ISEP, Porto, Portugal, cgf at isep.ipp.pt
Tutorial Organizers:
- Rita Ribeiro, University of Porto, Portugal, rpribeiro at fc.up.pt
- Szymon Bobek, Jagiellonian University, Poland, szymon.bobek at uj.edu.pl
- Bruno Veloso, LIAAD INESC, Porto, Portugal, and University Portucalense, Porto, Portugal, bruno.miguel.veloso at gmail.com
- Grzegorz J. Nalepa, Jagiellonian University, Krakow, Poland, gjn at gjn.re
- Sepideh Pashami, Halmstad University, Sweden, sepideh.pashami at hh.se
Carlos Ferreira
ISEP | Instituto Superior de Engenharia do Porto
Rua Dr. António Bernardino de Almeida, 431
4249-015 Porto - PORTUGAL
tel. +351 228 340 500 | fax +351 228 321 159
mail at isep.ipp.pt | www.isep.ipp.pt

From richard.allmendinger at gmail.com Tue Jun 7 12:12:49 2022
From: richard.allmendinger at gmail.com (Richard Allmendinger)
Date: Tue, 7 Jun 2022 17:12:49 +0100
Subject: Connectionists: 2 x Fully-funded PhD positions in AI at The University of Manchester, UK
Message-ID: <024001d87a89$6f4fab30$4def0190$@gmail.com>

Dear colleagues, Two exciting (fully-funded) opportunities to pursue a PhD in AI at The University of Manchester (UoM).
Both projects are available for UK and exceptional international students.
1. AI for enhancing fit and development of body-worn products
More info about the project + application link: https://lnkd.in/gmN8u_pE
Submission deadline: 24 June
Keywords: AI, sizing, body scans, multi-objective optimisation, visualisation, inclusivity, FashionTech
Who is involved in the project: Alliance Manchester Business School & UoM's Department of Materials (Simeon Gill)
2. Mitigation of reinforcement learning algorithms in changing environments
More info about the project + application link: https://lnkd.in/gvPRpvJX
Submission deadline: 10 June
Keywords: (Deep) reinforcement learning, generalisation, dynamic environments, safety, multitask/transfer/safe learning
Who is involved in this project: Alliance Manchester Business School, BAE Systems (this is an EPSRC iCASE project), and UoM's Department of Mathematics (Theodore Papamarkou) + Computer Science (Wei Pan)
Best wishes, Richard
------------
Dr Richard Allmendinger | Senior Lecturer in Decision Sciences | Alan Turing Fellow | School Business Engagement Lead
Alliance Manchester Business School | The University of Manchester | Office: Room 3.017, Booth Street West, Manchester M15 6PB | Tel: +44(0)161 306 6598 | Email: richard.allmendinger at manchester.ac.uk | Web: https://personalpages.manchester.ac.uk/staff/Richard.Allmendinger/
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From juyang.weng at gmail.com Tue Jun 7 12:33:04 2022
From: juyang.weng at gmail.com (Juyang Weng)
Date: Tue, 7 Jun 2022 12:33:04 -0400
Subject: Connectionists: "Deletion of Data" Misconducts in Deep Learning
Message-ID:

Let us continue our academic discussion. Who should be responsible for the current dismal situation of "deep learning" in particular and the brain modeling landscape in general? Are the following scientific establishments responsible?
(1) NSF: Has NSF *rejected* new sciences, awarded "deleting data" and protected "deleting data"?
(2) National Academy of Sciences (NAS), which is responsible for PNAS. Has PNAS *"desk rejected"* new sciences and spread "deleting data"?
(3) National Academy of Engineering (NAE): Has NAE admitted a new member who mismanaged ImageNet contests and promoted wide-scale research misconduct in machine learning?
(4) American Association for the Advancement of Science (AAAS), which is responsible for *Science* Magazine. Has *Science* Magazine *"desk rejected"* new sciences, spread "deleting data" and protected "deleting data"?
(5) Association for Computing Machinery (ACM), which is responsible for the Turing Award 2018 and the CACM journal. Has ACM awarded "deleting data", spread "deleting data" and protected "deleting data"?
(6) Springer Nature, which is responsible for *Nature* Magazine. Has *Nature* Magazine *"desk rejected"* new sciences, spread "deleting data" and protected "deleting data"?
(7) Neural Networks Journal: Has *Neural Networks* *"desk rejected"* new sciences and spread "deleting data"?
(8) Arxiv: Has Arxiv *"desk rejected"* new sciences?
With the current situation, many of our scientists become timid in order to make their works acceptable to the establishment. Much taxpayers' money has been wasted in such timid research.
Also on our brain modelers' responsibility: Who is "primarily" responsible for the current Russia-Ukraine war? Joe Biden or Vladimir Putin?
For more detail, please read the preface of an upcoming book: Conscious Learning: Humans and Machines.
Preface, Chinese Preface
Just my 2 cents' worth.
-John
--
Juyang (John) Weng
Brain-Mind Institute
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kripa.ghosh at gmail.com Tue Jun 7 11:56:01 2022
From: kripa.ghosh at gmail.com (Kripa Ghosh)
Date: Tue, 7 Jun 2022 21:26:01 +0530
Subject: Connectionists: CFP: FIRE track on Information Retrieval from Microblogs during Disasters (IRMiDis)
Message-ID:

*** Apologies for multiple posting ***
Information Retrieval from Microblogs during Disasters (IRMiDis)
https://sites.google.com/view/irmidis-fire2022/irmidis
Track in conjunction with the Annual Conference of the Forum for Information Retrieval Evaluation (FIRE 2022 - http://fire.irsi.res.in/fire/2022/home), December 9-13, 2022, Kolkata (Hybrid Event)
The Information Retrieval from Microblogs during Disasters (IRMiDis) track aims to develop datasets and methods for solving various practical research problems associated with a disaster or pandemic situation. The IRMiDis track has been run successfully with FIRE in the years 2017, 2018 and 2021. This year, IRMiDis will consist of two important classification tasks over microblogs/tweets associated with the COVID-19 pandemic.
*** Task 1: COVID-19 vaccine stance classification from tweets ***
It is important to understand people's stance on vaccines in order to nudge them towards taking COVID vaccines. With this motivation, this task aims to build an effective 3-class classifier on tweets with respect to the stance reflected towards COVID-19 vaccines. The 3 classes are:
(1) AntiVax - the tweet indicates hesitancy (of the user who posted the tweet) towards the use of vaccines.
(2) ProVax - the tweet supports / promotes the use of vaccines.
(3) Neutral - the tweet does not have any discernible sentiment expressed towards vaccines or is not related to vaccines.
*** Task 2: Detection of COVID-19 symptom-reporting in tweets ***
Quickly identifying people who are experiencing COVID-19 symptoms is important for authorities to arrest the spread of the disease. In this task, we explore whether tweets that report about someone experiencing COVID-19 symptoms (e.g., 'fever', 'cough') can be automatically identified. The task is to build a 4-class classifier that can detect tweets reporting someone experiencing COVID-19 symptoms. The 4 classes are:
(1) Primary Reporting - The user (who posted the tweet) is reporting symptoms of himself/herself.
(2) Secondary Reporting - The user is reporting symptoms of some friend / relative / neighbour / someone they met.
(3) Third-party Reporting - The user is reporting symptoms of some celebrity / third-party person.
(4) Non-Reporting - The user is not reporting anyone experiencing COVID-19 symptoms, but talking about symptom words in some other context or giving only general information about COVID-19 symptoms.
For both tasks, we will provide training data annotated by human workers, and test data for evaluating the submitted models. Details of how to participate are available at https://sites.google.com/view/irmidis-fire2022/irmidis.
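For readers who want a concrete starting point, the sketch below shows one possible baseline for Task 1 in scikit-learn: TF-IDF features feeding a linear classifier. It is an illustration only; the in-line example tweets, the label strings and the (text, label) data format are assumptions made for the sketch, not the track's official data release, and a competitive submission would likely use a pretrained transformer instead.

    # Hypothetical 3-class stance baseline (TF-IDF + logistic regression).
    # The toy texts and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Got my second dose today, feeling great!",
        "No way I'm letting them inject me with that.",
        "The weather in Kolkata is lovely this week.",
    ]
    train_labels = ["ProVax", "AntiVax", "Neutral"]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # word unigrams + bigrams
        LogisticRegression(max_iter=1000),     # linear classifier over TF-IDF
    )
    model.fit(train_texts, train_labels)
    print(model.predict(["Vaccines saved my family."]))

The same pipeline, swapped to the four labels of Task 2, would serve as an equally simple baseline for the symptom-reporting task.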
*** Timeline ***
June 6 -- open track website and training data release
July 15 -- test data release
August 1 -- run submission deadline
August 15 -- results declared
September 15 -- working notes due
October 15 -- camera-ready copies of working notes and overview paper due
*** Organisers ***
Moumita Basu, Amity University Kolkata, India
Soham Poddar, Indian Institute of Technology Kharagpur, India
Saptarshi Ghosh, Indian Institute of Technology Kharagpur, India
Kripabandhu Ghosh, Indian Institute of Science Education and Research, Kolkata, India
Kind Regards,
*Kripabandhu Ghosh*
Co-organizer, IRMiDis
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From xkobeleva at gmail.com Tue Jun 7 16:17:07 2022
From: xkobeleva at gmail.com (Xenia K.)
Date: Tue, 7 Jun 2022 22:17:07 +0200
Subject: Connectionists: PhD position in clinical computational neuroscience
Message-ID:

Dear all, I am happy to announce a funded PhD position (and a several-month research assistant position) in Clinical Computational Neuroscience in Bonn, Germany, aimed at modeling TMS treatments of Alzheimer's disease. More details can be found here: https://github.com/dr-xenia/dr-xenia.github.io/blob/master/docs/PhD_job_ad.pdf
Please distribute to anyone who might be interested. Thanks!
Best wishes, Xenia
____________________________________________
*Dr. Xenia Kobeleva*
Clinical Neurologist, Translational Researcher
German Center for Neurodegenerative Diseases (DZNE)
University Hospital Bonn
research at xenia-kobeleva.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From maneetsingh18 at gmail.com Tue Jun 7 22:36:16 2022
From: maneetsingh18 at gmail.com (Maneet Singh)
Date: Wed, 8 Jun 2022 08:06:16 +0530
Subject: Connectionists: Second International Workshop on MUFin'22 at ECMLPKDD'22 - Deadline Approaching!
Message-ID:

Dear Researcher, We invite you to submit to the Second International Workshop on MUFin'22, held in conjunction with ECML-PKDD 2022. The workshop provides a platform for experts across industry and academia to discuss and present challenges and novel solutions, and to pave the way for future directions in modelling sequential data uncertainty for the financial world.
*We invite papers focused on modelling uncertainty for financial applications.* Topics of interest include, but are not limited to, the following:
*Application Topics:*
- Evaluating financial risk
- Forecasting the stock market
- Modelling seasonality in market trends
- Fraud prediction
- Modelling temporal social media activity
- Recommendation systems
- Adversarial attacks on financial models
*Technical Topics:*
- Temporal/sequential data modelling - clustering, classification
- Modelling uncertainty in financial data
- Temporal graphs
- Time series forecasting
- Text analytics of financial reports, forecasts, and documents
- Explainable/interpretable sequential modelling
- Exploring fairness and robustness towards bias in financial models
- Representation learning from temporal/sequential data
- Modelling financial data as temporal point processes
*Submission Deadline: 20th June 2022*
Website: https://sites.google.com/view/w-mufin/home
Submission link: https://easychair.org/conferences/?conf=mufin22
The best paper of the workshop will be awarded a *Best Paper Award* worth $500! Please refer to the website for more details. In case of any queries, please reach out to Maneet Singh at maneet.singh at mastercard.com
Best, Maneet
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From hocine.cherifi at gmail.com Wed Jun 8 00:28:46 2022
From: hocine.cherifi at gmail.com (Hocine Cherifi)
Date: Wed, 8 Jun 2022 06:28:46 +0200
Subject: Connectionists: COMPLEX NETWORKS 2022 PALERMO ITALY SUBMISSION DEADLINE EXTENDED JUNE 20, 2022
Message-ID:

*11th International Conference on Complex Networks & Their Applications*
*Palermo, Italy*, November 08-10, 2022
COMPLEX NETWORKS 2022
You are cordially invited to submit your contribution until *June 20, 2022* (Extended Firm Deadline)
*SPEAKERS*
- Luís A. Nunes Amaral, Northwestern University, USA
- Manuel Cebrian, Max Planck Institute for Human Development, Germany
- Shlomo Havlin, Bar-Ilan University, Israel
- Giulia Iori, City, University of London, UK
- Melanie Mitchell, Santa Fe Institute, USA
- Ricard Solé, Universitat Pompeu Fabra, Spain
*TUTORIALS (November 07, 2022)*
- Michele Coscia, IT University of Copenhagen, Denmark
- Adriana Iamnitchi, Maastricht University, Netherlands
*PUBLICATION*
Full papers (not previously published, up to 12 pages) and Extended Abstracts (about published or unpublished research, up to 3 pages) are welcome.
- *Papers* will be included in the conference *proceedings edited by Springer*
- *Extended abstracts* will be published in the *Book of Abstracts (with ISBN)*
Submit at https://easychair.org/conferences/?conf=complexnetworks2022
Extended versions will be invited for publication in *special issues of international journals:*
o Applied Network Science, edited by Springer
o Advances in Complex Systems, edited by World Scientific
o Complex Systems
o Entropy, edited by MDPI
o PLOS ONE
o Social Network Analysis and Mining, edited by Springer
*TOPICS*
*Topics include, but are not limited to:*
o Models of Complex Networks
o Structural Network Properties and Analysis
o Complex Networks and Epidemics
o Community Structure in Networks
o Community Discovery in Complex Networks
o Motif Discovery in Complex Networks
o Network Mining
o Network embedding methods
o Machine learning with graphs
o Dynamics and Evolution Patterns of Complex Networks
o Link Prediction
o Multilayer Networks
o Network Controllability
o Synchronization in Networks
o Visual Representation of Complex Networks
o Large-scale Graph Analytics
o Social Reputation, Influence, and Trust
o Information Spreading in Social Media
o Rumour and Viral Marketing in Social Networks
o Recommendation Systems and Complex Networks
o Financial and Economic Networks
o Complex Networks and Mobility
o Biological and Technological Networks
o Mobile call Networks
o Bioinformatics and Earth Sciences Applications
o Resilience and Robustness of Complex Networks
o Complex Networks for Physical Infrastructures
o Complex Networks, Smart Cities and Smart Grids
o Political networks
o Supply chain networks
o Complex networks and information systems
o Complex networks and CPS/IoT
o Graph signal processing
o Cognitive Network Science
o Network Medicine
o Network Neuroscience
o Quantifying success through network analysis
o Temporal and spatial networks
o Historical Networks
*GENERAL CHAIRS*
Hocine Cherifi (University of Burgundy, France)
Rosario N. Mantegna (University of Palermo, Italy)
Luis M. Rocha (Binghamton University, USA)
Join us at COMPLEX NETWORKS 2022, Palermo, Italy
*-------------------------*
Hocine CHERIFI
University of Burgundy Franche-Comté
Deputy Director, LIB EA N° 7534
Editor in Chief, Applied Network Science
Editorial Board member: PLOS ONE, IEEE Access, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ioannakoroni at csd.auth.gr Wed Jun 8 04:30:20 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 8 Jun 2022 11:30:20 +0300
Subject: Connectionists: Early registration: Invitation for the 2022 Summer e-School on Deep Learning and Computer Vision, 22-26th August 2022, Aristotle University of Thessaloniki, Thessaloniki, Greece
Message-ID: <2ba901d87b11$fe292740$fa7b75c0$@csd.auth.gr>

Dear Deep Learning, Computer Vision and Autonomous Systems engineers, scientists, and enthusiasts, you are welcome to register for the 2022 Summer e-School on Deep Learning and Computer Vision: https://icarus.csd.auth.gr/aiia-summer-school-2022
It will take place on 22-26/08/2022 and will be hosted by the Artificial Intelligence and Information Analysis (AIIA) Lab, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece. The summer e-school consists of two short e-courses:
a) 'Short Course on Deep Learning and Computer Vision 2022', 22-23rd August 2022, focusing on Computer Vision and Deep Learning for autonomous drones, cars and marine vessels: http://icarus.csd.auth.gr/cvml-short-course-on-deep-learning-and-computer-vision-2022/
b) 'Programming short course and workshop on Deep Learning and Computer Vision 2022', 24-26th August 2022, with applications in digital media and autonomous drones: http://icarus.csd.auth.gr/cvml-programming-short-course-and-workshop-on-deep-learning-and-computer-vision-2022/
You can follow the above-mentioned links for registration on either or both e-courses. For questions, please contact: Ioanna Koroni <koroniioanna at csd.auth.gr>
The first e-course contains 14 live lectures providing an in-depth presentation of computer vision and deep learning problems and algorithms, with applications on autonomous drones, cars and marine vessels (to be recorded). The second programming short e-course and workshop offers a mix of live lectures and programming workshops (hands-on lab exercises). It aims at developing registrants' programming skills for Deep Learning and Computer Vision, with a focus on drone imaging/cinematography and digital media applications.
Both short e-courses are organized by Prof. I. Pitas, IEEE and EURASIP fellow. He is chair of the International AI Doctoral Academy (AIDA), AUTH principal investigator for the H2020 projects AerialCore and AI4Media, and Director of the Artificial Intelligence and Information analysis Lab (AIIA Lab), Aristotle University of Thessaloniki, Greece. He was Chair of the IEEE SPS Autonomous Systems Initiative and Coordinator of the European Horizon2020 R&D project Multidrone. He is ranked the 319th top Computer Science and Electronics scientist internationally by research.com (2022). Aristotle University of Thessaloniki is the biggest university in Greece and in SE Europe. It is highly ranked internationally.
Relevant links:
1. European Horizon2020 R&D projects Aerial-Core: https://aerial-core.eu/, Multidrone: https://multidrone.eu/, AI4Media: https://ai4media.eu/
2. AIIA Lab: http://www.aiia.csd.auth.gr/
3. Prof. I. Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el
Course descriptions
a) 'Short Course on Deep Learning and Computer Vision 2022', 22-23rd August 2022.
http://icarus.csd.auth.gr/cvml-short-course-on-deep-learning-and-computer-vision-2022/
Part A (7 hours), Computer vision topic list
1. Introduction to autonomous systems
2. Camera geometry
3. Stereo and Multiview imaging
4. Introduction to multiple drone systems
5. Simultaneous Localization and Mapping
6. Drone mission planning and control
7. Introduction to autonomous marine vehicles
Part B (7 hours), Deep learning topic list
1. Multilayer perceptron. Backpropagation
2. Deep neural networks. Convolutional NNs - Transformers
3. Deep object detection
4. 2D Visual Object Tracking
5. Neural SLAM
6. CVML Software development tools
7. Applications in car vision
b) 'Programming short course and workshop on Deep Learning and Computer Vision 2022', 24-26th August 2022.
http://icarus.csd.auth.gr/cvml-programming-short-course-and-workshop-on-deep-learning-and-computer-vision-2022/
Part A (8 hours), Deep learning and GPU programming sample topic list
1. Introduction to autonomous systems
2. Deep neural networks. Convolutional NNs
3. Parallel GPU and multi-core CPU architectures - GPU programming
4. Image classification with CNNs
5. CUDA programming
Part B (8 hours), Deep Learning for Computer Vision sample topic list
1. Deep learning for object/face detection
2. 2D object tracking
3. PyTorch: Understand the core functionalities of an object detector. Training and deployment.
4. OpenCV programming for object tracking
Part C (8 hours), Autonomous UAV cinematography sample topic list
1. Video summarization
2. UAV cinematography
3. Video summarization with PyTorch
4. Drone cinematography with AirSim
Sincerely yours,
Prof. I. Pitas
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Pavis at iit.it Wed Jun 8 04:20:01 2022
From: Pavis at iit.it (Pavis)
Date: Wed, 8 Jun 2022 08:20:01 +0000
Subject: Connectionists: 2 PHD POSITIONS on Computational Vision at PAVIS - IIT Italy & University of Genoa, Italy
In-Reply-To: <395419cefcc14ca6a30e291022c9e747@iit.it>
References: <4f98593862ae43fd887f7039be1caa4f@iit.it>, <395419cefcc14ca6a30e291022c9e747@iit.it>
Message-ID: <75015e18dd2e4a3c92f119ebd8742e7d@iit.it>

2 PHD POSITIONS ON COMPUTATIONAL VISION AT IIT - PAVIS IN COLLABORATION WITH UNIVERSITY OF GENOA, ITALY
The Italian Institute of Technology (IIT, www.iit.it), in collaboration with the University of Genoa (https://unige.it/en), funds 2 PhD scholarships on Computational Vision, Automatic Recognition and Learning. Research and training activities are jointly conducted between the DITEN Department of the University of Genova (http://phd-stiet.diten.unige.it/) and IIT infrastructures in Genoa, at the PAVIS - Pattern Analysis and Computer Vision Research line (https://pavis.iit.it/), led by its Principal Investigator, Alessio Del Bue.
RESEARCH TOPICS:
Theme A: 3D scene understanding with geometrical and deep learning reasoning
Theme B: Deep Learning for Multi-modal scene understanding
Theme C: Self-Supervised and Unsupervised Deep Learning
Theme D: Visual Reasoning with Knowledge and Graph Neural Networks
Detailed description at: https://pavisdata.iit.it/data/phd/2023_ResearchTopicsPhD_IIT-PAVIS.pdf
PAVIS
The PhD program on the listed topics will take place at the PAVIS research line of IIT, located in Genova (www.iit.it). The department focuses on activities related to the analysis and understanding of images, videos and patterns in general, also in collaboration with other research groups at IIT.
PAVIS staff has wide expertise in computer vision and pattern recognition, machine learning, image processing, and related applications (assistive and monitoring AI systems). For more information, you can also browse the PAVIS webpage (http://pavis.iit.it/) to see our activities and research. Successful candidates will be part of an exciting and international working environment and will work in brand new laboratories equipped with state-of-the-art instrumentation. Excellent communication skills in English, as well as the ability to interact effectively with members of the research team, are mandatory.
HOW TO APPLY
FULL INFORMATION, OFFICIAL CALL AND COURSE DESCRIPTION ARE AVAILABLE AT:
ITALIAN: https://unige.it/usg/it/dottorati-di-ricerca
ENGLISH: https://unige.it/en/usg/en/phd-programmes
Official call: https://unige.it/sites/contenuti.unige.it/files/documents/BANDO%2038%20CICLO%20-%20EN.pdf
The course description for the XXXVIII PhD Course in Science and Technology for Electronic and Telecommunication Engineering, curriculum in Computer Vision, Automatic Recognition and Learning (CODE 9320), is on page 121 of the list of PhD programmes: https://unige.it/sites/contenuti.unige.it/files/documents/ALLEGATO_A_XXXVIII%20-%20EN.pdf
Follow the steps listed:
1. Choose the programme
2. Review the application
3. Apply here https://servizionline.unige.it/studenti/post-laurea/dottorato/domanda following the detailed instructions: https://unige.it/sites/contenuti.unige.it/files/documents/Guida_eng_XXXVIII.pdf
WHAT TO SUBMIT
A detailed CV, a research proposal under one or more of the topics indicated above, reference letters, and any other formal document concerning the degrees earned. Notice that these documents are mandatory in order for the application to be considered valid. Refer also to the indications stated on pg. 121 of the course description document mentioned above.
IMPORTANT: In order to apply, candidates must prepare the research proposal based on the research topics mentioned above. Please follow these indications to prepare it: https://pavisdata.iit.it/data/phd/ResearchProjectTemplate.pdf
For FURTHER INFORMATION on the research topics contact Dr. Del Bue at pavis at iit.it
DEADLINE
Deadline for application is June 30, 2022 at 12 PM (noon, Italian time/CEST)
STRICT DEADLINE, NO EXTENSION. Apply before the deadline; the application process is not immediate: don't wait for the final day.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ioannakoroni at csd.auth.gr Wed Jun 8 05:07:36 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 8 Jun 2022 12:07:36 +0300
Subject: Connectionists: Live e-Lecture by Prof. Jan Peters: "Robot Learning", 21st June 2022 17:00-18:00 CET. Upcoming AIDA AI excellence lectures
References: <2a6d01d87b03$40256320$c0702960$@csd.auth.gr> <007701d87b05$78978b00$69c6a100$@csd.auth.gr>
Message-ID: <2ed101d87b17$330ddb40$992991c0$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast,
Prof. Jan Peters (Technische Universitaet Darmstadt, Germany), a prominent AI & Robotics researcher internationally, will deliver the e-lecture "Robot Learning" on Tuesday 21st June 2022, 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST); see details in: http://www.i-aida.org/ai-lectures/
You can join for free using the zoom link: https://authgr.zoom.us/j/92400537552 & Passcode: 148148
The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, and VISION, currently in the process of formation, is very pleased to offer you top quality scientific lectures on several current hot AI topics. Lectures will be offered alternatingly by: top highly-cited senior AI scientists internationally, or young AI scientists with promise of excellence (AI sprint lectures). Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST). Attendance is free.
These lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed on future lectures, you can register in the AIDA email list and the CVML email list.
Best regards,
Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe, J. Stefanowski
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sayan.mukherjee at mis.mpg.de Wed Jun 8 04:53:38 2022
From: sayan.mukherjee at mis.mpg.de (sayan.mukherjee at mis.mpg.de)
Date: Wed, 8 Jun 2022 10:53:38 +0200 (CEST)
Subject: Connectionists: ScaDS.AI summer school on AI and data science in Leipzig
Message-ID: <1115066904.14743.1654678418050.JavaMail.zimbra@mis.mpg.de>

Call for Participation
-------
We cordially invite you to take part in the 8th Int. ScaDS.AI summer school on AI and data science that will take place from July 11-July 15 in Leipzig, one of the most beautiful and best cities to visit in Germany. The early registration deadline ends on June 19th. Find out more information on confirmed speakers, the program, and accommodation suggestions on our website: https://scads.ai/education/summer-schools/scads-ai-summer-school-2022/
The program features many exciting presentations and tutorials by well-known researchers and research teams. The talks are grouped within the following topics of current interest:
- Data integration and AI
- NLP and AI
- Privacy and trustworthy AI
- AI in medicine and life sciences
- AI in earth and environmental sciences.
Attending Ph.D. students can also present their research in a poster session as a means to enable additional interaction and discussion. A diverse social program (city walk, boat tour, dinner) further increases the attractiveness of the program.
ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence) Dresden/Leipzig is one of the newly established permanent German research centers of excellence in Artificial Intelligence, funded by the German government as well as the state of Saxony. It started in 2014 as a national competence center for Big Data and has been extended to a center for AI and data science since 2019. Its series of summer schools started in 2015, so 2022 marks the 8th summer school and, after two virtual editions, the 6th with in-person presentations and participants.
Register now!
-------------

From ioannakoroni at csd.auth.gr Wed Jun 8 06:46:07 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 8 Jun 2022 13:46:07 +0300
Subject: Connectionists: AIDA Short Course: "Artificial Intelligence for video streaming platforms", 16-17/06/2022
Message-ID: <04b901d87b24$f634e390$e29eaab0$@csd.auth.gr>

Politehnica University of Bucharest organizes an online AIDA short course on "Artificial Intelligence for Video Streaming Platforms", offered through the International Artificial Intelligence Doctoral Academy (AIDA). The purpose of this course is to overview the foundations and the current state of the art in various systems developed for online video streaming platforms, namely:
1. DEEP-AD: A Multimodal Temporal Video Segmentation Framework for Online Video Advertising:
- Shot boundary detection
- Automatic video abstraction
- Multimodal scene segmentation using: low/high level visual descriptors, audio patterns and semantic description
- Thumbnail selection from video scenes
- Ads insertion based on semantic criteria
2. Automatic subtitle synchronization and positioning:
- Text pre-processing and automatic speech recognition
- Anchor word identification and token matching
- Phrase alignment
- Subtitle/Closed Caption positioning
3. DEEP-HEAR: A Multimodal Subtitle Positioning System Dedicated to Deaf and Hearing-Impaired People:
- Face detection, tracking and recognition
- Video temporal segmentation into stories
- Active speaker detection
- Subtitle positioning
LECTURER: Prof. Ruxandra Țapu, email: ruxandra_tapu at comm.pub.ro
HOST INSTITUTION/ORGANIZER: Politehnica University of Bucharest
REGISTRATION: Free of charge
WHEN: 16-17 June 2022, from 11:00 to 13:00 CET (4 hours)
WHERE: Online (Microsoft Teams link will be provided)
HOW TO REGISTER and ENROLL: Both AIDA and non-AIDA students are encouraged to participate in this short course. If you are an AIDA Student* already, please:
Step (a): Register in the course by filling in the Registration form, AND
Step (b): Enroll in the same course in the AIDA system using "Enroll on this Course", so that this course enters your AIDA Certificate of Course Attendance.
If you are not an AIDA Student, do only step (a).
*AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong only to the AIDA Members listed in this page: https://www.i-aida.org/about/members/)
Prof. Ruxandra Țapu, Email: ruxandra_tapu at comm.pub.ro
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From l.s.smith at cs.stir.ac.uk Wed Jun 8 07:49:37 2022
From: l.s.smith at cs.stir.ac.uk (Prof Leslie Smith)
Date: Wed, 8 Jun 2022 12:49:37 +0100
Subject: Connectionists: "Deletion of Data" Misconducts in Deep Learning
In-Reply-To:
References:
Message-ID:

(Ignoring the last bit) That's a very US-centric view. In reality, this particular problem is probably caused by (i) peer reviewers not doing their job properly and (ii) the "publish or perish" state of academia (which is definitely not just a US problem!). Science has always had fashions, and as Max Planck said: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." (often quoted as "science advances one funeral at a time")
--Leslie Smith
Juyang Weng wrote:
> Let us continue our academic discussion.
> Who should be responsible for the current dismal situation of "deep learning" in particular and the brain modeling landscape in general?
> [...]
--
Prof Leslie Smith (Emeritus)
Computing Science & Mathematics, University of Stirling, Stirling FK9 4LA, Scotland, UK
Tel +44 1786 467435
Web: http://www.cs.stir.ac.uk/~lss
Blog: http://lestheprof.com

From zelie.tournoud at cnrs.fr Wed Jun 8 10:55:16 2022
From: zelie.tournoud at cnrs.fr (Zélie Tournoud)
Date: Wed, 8 Jun 2022 16:55:16 +0200
Subject: Connectionists: Apply now for EITN School in Computational Neuroscience 2022
In-Reply-To: <49d40f7a-edcb-a031-10f3-cee1957d40c5@cnrs.fr>
References: <49d40f7a-edcb-a031-10f3-cee1957d40c5@cnrs.fr>
Message-ID:

EITN School in Computational Neuroscience, 2022 session
The EITN School in Computational Neuroscience is an intensive computational neuroscience training aimed at neuroscience students and early-career researchers. It is supported by the Human Brain Project and is an opportunity to learn from researchers from all over Europe. The 2022 edition will take place from *21 to 30 September 2022* in Paris, France. Attendance is selective as seats are limited. *Applications are open until 23 June 2022.*
*Information: https://eitnschool2022.sciencesconf.org*
Organizers: Sacha Van Albada (Forschungszentrum Jülich), Albert Gidon (Humboldt-Universität zu Berlin), Alain Destexhe (CNRS), Matteo di Volo (Cergy Université), Spase Petkoski (Aix-Marseille Université), Gorka Zamora-Lopez (Universitat Pompeu Fabra).
Faculty: Nicolas Brunel, Hermann Cuntz, Gustavo Deco, Alain Destexhe, Matteo di Volo, Albert Gidon, Jennifer Goldman, Moritz Helias, Viktor Jirsa, Maurizio Mattia, Spase Petkoski, Bartosz Telenczuk (to be confirmed), Sacha Van Albada, Gorka Zamora-Lopez, Damien Depannemaecker, Domenico Guarino. To be completed.
Who is this training for?
* Neuroscience (and related fields) students
* Post-doctoral/early-career researchers
Why join?
* 10 days of intensive training in computational neuroscience provided by researchers from all over Europe
* Small-scale event: get to connect!
* A complete program:
  o Cellular models, and models of brain signals
  o Circuit models and networks
  o Mean-field models
  o Whole-brain models
* Hands-on learning schedule based on:
  o Morning classes and tutorials
  o Group projects in the afternoon
  o Free time in the evenings to visit Paris
How to apply?
This training has a limited capacity of 20 students; therefore a selection will be performed by a scientific organizing committee. Applications are open until 23 June 2022. *Send your application, including a resume (CV) and cover letter (explaining your motivation to join), by email to eitn at neuro-psi.fr.* You will receive a confirmation that your application has been received within 5 working days or on the day the application period closes (whichever comes first). If not, please consider it a technical issue and submit your application again. Contact eitn at neuro-psi.fr for any question or assistance.
Selection criteria include:
* Working or studying in neuroscience or a related field
* Experience in Python programming would help
* Priority is given to PhD students as the primary target of this training, although Master students and post-docs are welcome to apply
Practical information:
* If selected, the registration fee to attend the training is 400 €.
* If selected, you will need to bring your own laptop with a few pre-installed programs.
* The tuition fee includes lunch, coffee, access to conference facilities and internet connection during the course days (Sunday not included).
Location: Institut Supérieur Clorivière, 119 Bd Diderot, 75012 Paris. This training is designed to be attended in person in Paris. It will not be recorded nor accessible remotely.
__________
*Zélie Tournoud* / EITN Communication manager
*European Institute for Theoretical Neuroscience (EITN)*
UMR 9197 - *NeuroPSI* (Institut des Neurosciences Paris-Saclay)
CNRS - Université Paris-Saclay
Centre CEA Paris-Saclay, Bâtiment 151, 91400 Saclay
https://www.eitn.org
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From e.neftci at fz-juelich.de Wed Jun 8 16:24:02 2022
From: e.neftci at fz-juelich.de (Emre Neftci)
Date: Wed, 8 Jun 2022 22:24:02 +0200
Subject: Connectionists: Online registration for 2022 Telluride Neuromorphic Cognition Engineering Workshop
Message-ID: <84cda811-afa5-47d5-a07c-635505a284ab@www.fastmail.com>

Dear All, The 2022 Telluride Neuromorphic Engineering Workshop online registration is now open. Please register if you would like to attend the online talks at the workshop. Visit the website (https://tellurideneuromorphic.org) or go straight to the registration page: https://sites.google.com/view/telluride-2022/online-registration
Emre Neftci, on behalf of the 2022 Telluride Neuromorphic Cognition Engineering Workshop Organizing Committee
--
Prof. Dr. Emre Neftci
Head of Peter Grünberg Institute 15 - Neuromorphic Software Ecosystems
Forschungszentrum Jülich
www.fz-juelich.de/pgi/PGI-15 www.nmi-lab.org
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH, 52425 Juelich. Registered office: Juelich. Registered in the commercial register of the district court of Dueren, No. HR B 3498.
Chairman of the Supervisory Board: MinDir Volker Rieke. Management Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman), Karsten Beneke (Deputy Chairman), Prof. Dr. Astrid Lambrecht, Prof. Dr. Frauke Melchior.
------------------------------------------------------------------------------------------------
Curious visitors are warmly welcome on Sunday, 21 August 2022, from 10:00 to 17:00. More at: https://www.tagderneugier.de

From gary.marcus at nyu.edu Wed Jun 8 12:17:04 2022
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Wed, 8 Jun 2022 09:17:04 -0700
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
Message-ID: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu>

Dear Connectionists, and especially Geoff Hinton,
It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do then five years later, we'd be able to show we'd done them."
Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things last week, and indeed offered Elon Musk a $100,000 bet along similar lines. Precise details are here, towards the end of the essay: https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
If you'd like in on the bet, Geoff, please let me know. More generally, I'd love to hear what the Connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay as to why AGI might not be as imminent as Musk seems to think).
Cheers, Gary Marcus
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mklados at gmail.com Wed Jun 8 13:13:13 2022
From: mklados at gmail.com (Manousos Klados)
Date: Wed, 8 Jun 2022 20:13:13 +0300
Subject: Connectionists: CfP for the SAN2022 meeting
Message-ID: <0C24C2BE-ADAC-46BB-9577-1EA3ABEB2BCA@gmail.com>

Dear all,
The Society of Applied Neuroscience (SAN) is a scientific, nonprofit membership organisation devoted to advancing neuroscientific knowledge and its innovative applications by empowering both scientists and practitioners in serving the public by optimising self-regulatory brain function, and it has been active for more than two decades now. Its membership is open to scientists and practitioners interested in an integrated approach which involves the neural, cognitive and behavioural levels of analysis.
SAN has organised scientific events and conferences as well as several thematic journal special issues over the past few years. After a pause as a consequence of the pandemic, SAN, in cooperation with the Aristotle University of Thessaloniki, has the great pleasure to welcome you to the SAN 2022 Conference, which will be held in Thessaloniki, Greece, on 15-17 September 2022 (https://san2022.org/).
The call for papers is now extended to June 20th. Abstracts will be published in a book (with ISBN), while several journal special issues are planned after the conference.
We cordially invite each of you engaged in studies of the Neurosciences to participate in the SAN 2022 Conference. Apart from the highly promising scientific program, there will also be a lively social program, which will give our guests the opportunity to enjoy the host city, Thessaloniki. We look forward to your participation.
Kind Regards,
Manousos Klados
Dr. Manousos Klados
Associate Professor, Department of Psychology
CITY College, University of York Europe Campus
The University of Sheffield Bachelors & Masters programmes
24 Pr. Koromila st., 546 22, Thessaloniki, Greece
Tel.: (+30) 2310 224421 (ext 116), 224521
Personal website: www.mklados.com
Institutional website: https://york.citycollege.eu/m/members_profile.php?m=267
Meet me: @ www.san2022.org
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sanjay.ankur at gmail.com Wed Jun 8 16:19:11 2022
From: sanjay.ankur at gmail.com (Ankur Sinha)
Date: Wed, 8 Jun 2022 21:19:11 +0100
Subject: Connectionists: Free registration is now open for CNS*2022 satellite tutorials: June 27--July 1
Message-ID: <20220608201911.u4fhwgqawhfivplt@thor>

Dear all, Apologies for the cross-posts.
The INCF/OCNS Software Working Group is happy to announce that the schedule for the CNS*2022 satellite tutorials from June 27--July 1 has now been published here: https://ocns.github.io/SoftwareWG/pages/software-wg-satellite-tutorials-at-cns-2022.html
Free registration is also now open at: https://framaforms.org/incfocns-software-wg-cns2022-satellite-tutorials-registration-1654593600
To prevent spam/Zoom crashing, links to Zoom meetings for the tutorial sessions will be limited to registrants only. Therefore, while registration is free, it is required.
The satellite tutorials feature sessions on: - Arbor - Brian2 - Introduction to containers - EBRAINS - GeNN - Keras/TensorFlow - LFPy - MOOSE - Neo + Elephant - NEST - NetPyNE - NeuroLib - NeuroML - NEURON - OSBv2 - RateML
Please spread the word, and we look forward to seeing you there!
--
Thanks, Regards,
Ankur Sinha (He / Him / His) | https://ankursinha.in
Research Fellow at the Silver Lab, University College London | http://silverlab.org/
Free/Open source community volunteer at the NeuroFedora project | https://neuro.fedoraproject.org
Time zone: Europe/London

From marcin at amu.edu.pl Thu Jun 9 04:27:30 2022
From: marcin at amu.edu.pl (Marcin Paprzycki)
Date: Thu, 9 Jun 2022 10:27:30 +0200
Subject: Connectionists: FINAL CFP: 5th IEEE IAS GUCON 2022 [All accepted papers will be forwarded to IEEE IAS Transaction for further review]
In-Reply-To: <25bd1fad-2272-0e80-0761-1f254ba9f466@ibspan.waw.pl>
References: <25bd1fad-2272-0e80-0761-1f254ba9f466@ibspan.waw.pl>
Message-ID:

Great opportunity to meet world leaders in person again at the 2022 IEEE 5th International Conference on Computing, Power and Communication Technologies
5th IEEE IAS GUCON 2022, www.gucon.org, 23-25 September 2022
The Premier Technology Conference of the IEEE Industry Applications Society in New Delhi, India
Venue: India Habitat Centre, Lodhi Road, New Delhi, India.
Financially Sponsored by: IEEE Industry Applications Society
Call for Papers
SUBMISSION DEADLINE EXTENDED TO: 15 JUNE 2022
Submission Link: https://bit.ly/3wO22bO
Original contributions based on the results of research and developments are solicited. Prospective authors are requested to submit their papers in not more than 6 pages, prepared in the two-column IEEE format. All accepted and presented papers will be eligible for submission to IEEE HQ for publication in the form of e-proceedings in IEEE /Xplore/, which is indexed by the world's leading Abstracting & Indexing (A&I) databases, including ISI / SCOPUS / DBLP / EI-Compendex / Google Scholar. All accepted extended papers will be eligible for submission to /IEEE IAS Transactions/ for further review. Authors are invited to submit a full paper (maximum 6 pages, double-column US letter size) as PDF using the IEEE templates. The IEEE paper template can be downloaded from www.ieee.org.
Conference Tracks:
Track 1: Data Science & Engineering
Track 2: Computing
Track 3: Computational Intelligence
Track 4: Power, Energy and Power Electronics
Track 5: Renewable Energy technologies including hydrogen
Track 6: Robotics, Control, Instrumentation and Automation
Track 7: Communication & Networking
Track 8: RF Circuits, Systems and Antennas
Track 9: 5G Technology
Track 10: Industry 4.0
IEEE Industry Applications Society Best Paper Award
1st Best Paper: US$1000
2nd Best Paper: US$750
3rd Best Paper: US$500
Contact: Email Id: gucon2022 at gmail.com
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From timofte.radu at gmail.com Thu Jun 9 04:21:28 2022
From: timofte.radu at gmail.com (Radu Timofte)
Date: Thu, 9 Jun 2022 10:21:28 +0200
Subject: Connectionists: [CFP] ECCV 2022 Advances in Image Manipulation (AIM) workshop and challenges
Message-ID:

Apologies for cross-posting
*******************************
CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 8 CHALLENGES
AIM: 4th Advances in Image Manipulation workshop and challenges on compressed/image/video super-resolution, learned ISP, reversed ISP, Instagram filter removal, Bokeh effect, depth estimation
In conjunction with ECCV 2022, Tel-Aviv, Israel
Website: https://data.vision.ee.ethz.ch/cvl/aim22/
Contact: radu.timofte at uni-wuerzburg.de
TOPICS
Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:
- Image-to-image translation
- Video-to-video translation
- Image/video manipulation
- Perceptual manipulation
- Image/video generation and hallucination
- Image/video quality assessment
- Image/video semantic segmentation
- Saliency and gaze estimation
- Perceptual enhancement
- Multimodal translation
- Depth estimation
- Image/video inpainting
- Image/video deblurring
- Image/video denoising
- Image/video upsampling and super-resolution
- Image/video filtering
- Image/video de-hazing, de-raining, de-snowing, etc.
- Demosaicing
- Image/video compression
- Removal of artifacts, shadows, glare and reflections, etc.
- Image/video enhancement: brightening, color adjustment, sharpening, etc.
- Style transfer
- Hyperspectral imaging
- Underwater imaging
- Aerial and satellite imaging
- Methods robust to changing weather conditions / adverse outdoor conditions
- Image/video manipulation on mobile devices
- Image/video restoration and enhancement on mobile devices
- Studies and applications of the above.
SUBMISSION
A paper submission has to be in English, in PDF format, and at most 14 pages (excluding references) in single-column ECCV style. The paper format must follow the same guidelines as for all ECCV 2022 submissions. The review process is double-blind. Dual submission is not allowed.
Submission site: https://cmt3.research.microsoft.com/AIMWC2022/
WORKSHOP DATES
- Submission Deadline: July 25, 2022
- Decisions: August 15, 2022
- Camera Ready Deadline: August 22, 2022
AIM 2022 has the following associated challenges (ONGOING!):
1. Compressed Input Super-Resolution
2. Reversed ISP
3. Instagram Filter Removal
4. Video Super-Resolution (Evaluation platform: MediaTek Dimensity APU) - Powered by MediaTek
5. Image Super-Resolution (Eval. platform: Synaptics Dolphin NPU) - Powered by Synaptics
6. Learned Smartphone ISP (Eval. platform: Snapdragon Adreno GPU) - Powered by OPPO
7. Bokeh Effect Rendering (Eval. platform: ARM Mali GPU) - Powered by Huawei
8. Depth Estimation (Eval. platform: Raspberry Pi 4) - Powered by Raspberry Pi
PARTICIPATION
To learn more about the challenges and to participate: https://data.vision.ee.ethz.ch/cvl/aim22/
CHALLENGES DATES
- Release of train data: May 24, 2022
- Validation server online: June 1, 2022
- Competitions end: July 30, 2022
CONTACT
Email: radu.timofte at uni-wuerzburg.de
Website: https://data.vision.ee.ethz.ch/cvl/aim22/
-------------- next part -------------- An HTML attachment was scrubbed...
From r.pascanu at gmail.com Thu Jun 9 05:44:27 2022 From: r.pascanu at gmail.com (Razvan Pascanu) Date: Thu, 9 Jun 2022 10:44:27 +0100 Subject: Connectionists: 1st Conference on Lifelong Learning Agents (CoLLAs) - Registration open Message-ID:

Dear All, Apologies for cross-posting. Registration (https://lifelong-ml.cc/registration) for the 1st Conference on Lifelong Learning Agents (CoLLAs) 2022 is open. This will be a hybrid event held in Montreal, Canada (more information about the venue: https://lifelong-ml.cc/venue). The conference will be purely virtual for two days, the 18th-19th of August, with the 22nd-24th of August being both in-person and virtual. We already have a list of confirmed speakers for the conference, which includes Abhinav Gupta (Carnegie Mellon University), Claudia Clopath (Imperial College London), Hanie Sedghi (Google Brain), Hugo Larochelle (Google Brain & Mila), Rich Caruana (Microsoft Research), Tinne Tuytelaars (Katholieke Universiteit Leuven) and Yoshua Bengio (University of Montreal & Mila). See https://lifelong-ml.cc/speakers. Please follow us on Twitter (https://twitter.com/CoLLAs_Conf) and check our website for more updates and details about the conference (https://lifelong-ml.cc).

Motivation for the conference: Machine learning has relied heavily on a traditional view of the learning process, whereby observations are assumed to be i.i.d., typically given as a dataset split into a training and validation set, with the explicit focus of maximizing performance on the latter. While this view has proved immensely beneficial for the field, it represents just a fraction of the realistic scenarios of interest. Over the past few decades, increased attention has been given to alternative paradigms that help explore different aspects of the learning process, from Lifelong Learning, Continual Learning, and Meta-Learning to Transfer Learning, Multi-Task Learning and Out-Of-Distribution Generalization, to name just a few. The Conference on Lifelong Learning Agents (CoLLAs) will focus on these learning paradigms, which aim to move beyond the traditional, single-distribution machine learning setting and to allow learning to be more robust, more efficient in terms of compute and data, more versatile in terms of being able to handle multiple problems, and well-defined and well-behaved in more realistic non-stationary settings. For any questions about the conference, you can contact us at contact at lifelong-ml.cc. Regards, Doina Precup, Sarath Chandar, Razvan Pascanu CoLLAs 2022 General and Program Chairs

From geoffrey.hinton at gmail.com Wed Jun 8 13:23:02 2022 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Wed, 8 Jun 2022 13:23:02 -0400 Subject: Connectionists: is connectionists moderated? Message-ID: Is anyone filtering the mail to remove deranged rantings? Geoff

From d.mandic at imperial.ac.uk Thu Jun 9 06:41:10 2022 From: d.mandic at imperial.ac.uk (Danilo Mandic) Date: Thu, 9 Jun 2022 11:41:10 +0100 Subject: Connectionists: Summer School on Data and Graph driven Learning, application deadline 20 June 2022 Message-ID: Dear All, For those who are interested in Machine Learning and Graph Learning applied to communications, I would like to draw your attention to a Summer School in this area.
https://www.banjaluka2022school.ba/ This Summer School is sponsored by IEEE, EURASIP and the International Neural Networks Society. Best wishes, Danilo

From roland.w.fleming at psychol.uni-giessen.de Thu Jun 9 08:56:50 2022 From: roland.w.fleming at psychol.uni-giessen.de (Roland Fleming) Date: Thu, 9 Jun 2022 14:56:50 +0200 Subject: Connectionists: Two upcoming post-doc positions in Roland Fleming's lab Message-ID:

Hi everyone, I expect to be announcing two post-doc positions in my lab at the Justus Liebig University of Giessen (Germany) within the coming weeks. If you would be interested in applying, it would be great if you could contact me in advance. You should have a background in perception or motor control, although I'll also consider particularly strong candidates from other backgrounds. Experience with any of the following would be a significant bonus: - computational modelling of perceptual and/or motor processes - machine learning, especially deep learning - computer graphics, especially physics simulations - MoCap (with or without markers) - robotics - fMRI

My lab specialises in the visual perception of, and motor interactions with, materials and objects. Topics include visual estimation of material properties and shape; physical reasoning and naive physics; grasping and dextrous manipulation; one-shot learning; shape understanding, including perceptual organisation of shape and inferences about causal history. Check out my lab website for more information, including articles and datasets. My lab is situated in one of the top research environments for visual perception and sensorimotor research worldwide, with a large group of Principal Investigators and a thriving and diverse community of junior researchers working on perception and action using psychophysics, eye-, hand- and body-tracking, VR, fMRI, EEG, machine learning and other methods. We run several large-scale research consortia, providing excellent local and international networking opportunities. Giessen is also ideally located in the centre of western Europe, with half-a-dozen other countries a mere train-ride away. If you are interested, I look forward to hearing from you! Note that due to travel I may not respond immediately. Best wishes, Roland Fleming ____________________________ Prof. Roland W Fleming, FRSB Kurt Koffka Professor of Experimental Psychology, Justus Liebig University, Giessen Executive Director, Centre for Mind, Brain and Behaviour, Universities of Marburg and Giessen Otto-Behaghel-Str 10, 35394 Giessen, GERMANY tel: 0641 99-26140 http://www.allpsych.uni-giessen.de/fleminglab

From gary.marcus at nyu.edu Thu Jun 9 16:33:51 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 9 Jun 2022 13:33:51 -0700 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: References: Message-ID:

Dear Dr. Hinton, You very directly asked my side to produce some tangible goals. Ernest Davis and I did precisely what you asked, and in return you described me (in a separate but public message that also appears to have come from your account) as deranged. There is no world in which that is socially acceptable, or a positive step towards science. Your reaction is particularly striking because it is a clear outlier.
In general, despite the perfectly reasonable questions that you asked about wording in your subsequent email (which would presumably need to be negotiated in any actually-implemented bet, as one moved from term sheet to contract), the community reaction has actually been quite favorable. LongNow offered to host it, Metaculus added it to their forecast site, Christian Szegedy placed a side bet and ran a lengthy discussion about the math proposition, etc. Your reaction is so far out of line with any of the other reactions that I have received that I am frankly concerned for your well-being.

All that said, you raised one substantive question, and I will address it. I am reasonably familiar with the PaLM model that you refer to, and aware of the claim that it "understands" jokes. I am also aware of the fact that what was provided in the paper was a numerator without a denominator. There is no clarity about what was tried, or how general it was, or under what conditions it failed; subset is indeed the operative word. In no world would what was written pass traditional peer review. Even for a conference paper, what is reported about joke interpretation is woefully thin. It's really not much to rest your theory on. I am also aware of the fact that your colleagues at Google failed to supply more detail when I requested it; never a good sign. How general is the ability? Is it a handful of paraphrases of jokes in a vast memorized database? Would it extend to other kinds of jokes? Could it (or related models like Gato, with visual input) explain this cartoon? Or why this bit of dialog with GPT (prompt in bold) is inadvertently funny?

> Sally's cow died yesterday. When will the cow be alive again? Let's think step by step.
>
> First, the cow must be buried. This takes a few hours.
>
> Next, a new cow must be born. This takes nine months.
>
> Therefore, the earliest the cow will be alive again is nine months from now.

Probably not.

What we have known since Eliza is that humans are easily seduced into anthropomorphizing machines. I am going to stand by my claim that current AI lacks understanding: one cannot derive a set of logic propositions from a large language model; one cannot reliably update a world model based on an LLM's calculations (a point that LeCun has also made, in slightly different terms); one cannot reliably reason from what an LLM derives; and LLMs themselves cannot reliably reason from what they are told. My point is not a Searlean one about the impossibility of machines thinking, just a reality of the limits of contemporary systems. On the latter point, I would also urge you to read my recent essay called "Horse rides Astronaut", to see how easy it is to make up incorrect rationalizations about these models when they make errors. Inflated appraisals of their capabilities may serve some sort of political end, but will not serve science. I cannot undo whatever slight some reviewer did to Yann decades ago, but I can call the current field as I see it; I don't believe that current systems have gotten significantly closer to what I described in that 2016 conversation that you quote from. I absolutely stand by the claim that we are a long way from answering "the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."
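To make the kind of probe I have in mind concrete, here is a minimal sketch of how anyone can run the cow example, and paraphrases of it, for themselves. It assumes the OpenAI completions API as it existed in mid-2022 and the text-davinci-002 model; both are merely stand-ins for whatever large language model one wishes to test, and the key is of course a placeholder:

# Minimal sketch of probing an LLM for world-model consistency.
# Assumes the OpenAI completions API (pip install openai) and the
# text-davinci-002 model; both are stand-ins for any LLM under test.
import openai

openai.api_key = "YOUR_KEY_HERE"  # hypothetical placeholder

# Paraphrases of the same impossible premise; a system with a stable
# world model should reject all of them, not just a memorized one.
probes = [
    "Sally's cow died yesterday. When will the cow be alive again?",
    "Bob's goldfish died last week. When will the goldfish be alive again?",
]

for prompt in probes:
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt + " Let's think step by step.",
        max_tokens=80,
        temperature=0,  # deterministic, so failures are reproducible
    )
    print(prompt)
    print(response["choices"][0]["text"].strip())
    print("---")

The point of varying the premise is precisely to separate memorized answers from anything resembling a world model.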
Since you are fond of quoting stuff I wrote 6 or 7 years ago, here's a challenge that I proposed in The New Yorker in 2014; as of yet, I see no real progress on this sort of thing:

> allow me to propose a Turing Test for the twenty-first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content: "Why did Russia invade Crimea?" or "Why did Walter White consider taking a hit out on Jessie?" Chatterbots like Goostman can hold a short conversation about TV, but only by bluffing. (When asked what "Cheers" was about, it responded, "How should I know, I haven't watched the show.") But no existing program (not Watson, not Goostman, not Siri) can currently come close to doing what any bright, real teenager can do: watch an episode of "The Simpsons," and tell us when to laugh.

Can Palm-E do that? I seriously doubt it.

Dr. Gary Marcus Founder, Geometric Intelligence (acquired by Uber) Author of 5 books, including Rebooting AI, one of Forbes' 7 Must Read Books in AI, and The Algebraic Mind, one of the key early works advocating neurosymbolic AI

> On Jun 9, 2022, at 11:34, Geoffrey Hinton wrote:
>
> I shouldn't respond because your main aim is to get attention without going to the trouble of building something that works (personal communication, Y. LeCun), but I cannot resist pointing out the following Marcus claim from 2016:
>
> "People are very excited about big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."
>
> Given that big neural nets can now explain why a joke is funny (for some subset of jokes), do you still want to stick with this claim? It seems to me that the reason you made this claim is because you have a strong prior belief about how language understanding and reasoning must work, and this belief is remarkably resistant to evidence. Deep learning researchers have seen this before. Yann had a paper rejected by a vision conference even though it beat the state of the art, and one of the reasons given was that the model learned everything and therefore taught us nothing about how to do vision. That particular referee had a strong idea of how computer vision must work and failed to notice that the success of Yann's model showed that that prior belief was spectacularly wrong.
>
> Geoff
>
>> On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:
>> Dear Connectionists, and especially Geoff Hinton,
>>
>> It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said
>>
>> "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them."
>>
>> Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>>
>> Precise details are here, towards the end of the essay:
>>
>> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>>
>> Five are specific milestones, in video and text comprehension, cooking, math, etc; the sixth is the proviso that for an intelligence to be deemed "general"
(which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>>
>> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
>>
>> If you'd like in on the bet, Geoff, please let me know.
>>
>> More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).
>>
>> Cheers.
>> Gary Marcus

From cgf at isep.ipp.pt Thu Jun 9 18:44:30 2022 From: cgf at isep.ipp.pt (Carlos) Date: Thu, 9 Jun 2022 23:44:30 +0100 Subject: Connectionists: CFP: IoTStreams, Affiliated with ECML-PKDD 2022 Message-ID: <6685179e-270c-00ef-44d5-73f3a45086f0@isep.ipp.pt>

Call for Papers IoT Stream 2022 - 3rd Workshop and Tutorial on IoT Streams for Data-Driven Predictive Maintenance Affiliated with ECML-PKDD 2022, 19-23 September, Grenoble, France, https://2022.ecmlpkdd.org/ Workshop site: https://abifet.wixsite.com/iotstream2022

Motivation and focus: Maintenance is a critical issue in the industrial context for preventing high costs and injuries. Various industries are moving more and more toward digitalization and collecting "big data" to enable or improve the accuracy of their predictions. At the same time, the emerging technologies of Industry 4.0 have empowered data production and exchange, which has led to new concepts and methodologies for the exploitation of large datasets in maintenance. The intensive research effort in data-driven Predictive Maintenance (PdM) is producing encouraging results. Therefore, the main objective of this workshop is to raise awareness of research trends and promote interdisciplinary discussion in this field. Data-driven predictive maintenance must deal with big streaming data and handle concept drift due to both changing external conditions and the normal wear of the equipment. It requires combining multiple data sources, and the resulting datasets are often highly imbalanced. The knowledge about the systems is detailed, but in many scenarios there is a large diversity in both model configurations and their usage, additionally complicated by low data quality and high uncertainty in the labels. Many recent advancements in supervised and unsupervised machine learning, representation learning, anomaly detection, visual analytics and similar areas can be showcased in this domain. Therefore, the overlap in research between machine learning and predictive maintenance has continued to increase in recent years. This event is an opportunity for researchers and engineers to discuss emerging topics and key trends; a toy illustration of the drift problem follows below. The previous edition of the workshop at ECML 2020 was very popular, and we are planning to continue this success in 2022.
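To make the concept-drift challenge mentioned above concrete, here is a minimal sketch of the kind of monitoring a predictive-maintenance pipeline needs. It uses a simple windowed mean-shift check written from scratch rather than any particular streaming library, and all thresholds and window sizes are illustrative placeholders:

# Toy sketch of concept-drift monitoring on a sensor stream: flag a
# possible drift when the recent window's mean moves several standard
# errors away from the reference window. Numbers are illustrative only.
from collections import deque
from statistics import mean, stdev

def drifted(reference, recent, z_threshold=3.0):
    """Return True if recent readings deviate from the reference window."""
    if len(reference) < 2 or len(recent) < 2:
        return False
    sd = stdev(reference)
    if sd == 0:
        return False
    standard_error = sd / (len(recent) ** 0.5)
    return abs(mean(recent) - mean(reference)) > z_threshold * standard_error

reference = deque(maxlen=200)   # baseline from healthy equipment
recent = deque(maxlen=30)       # most recent readings

def observe(reading):
    recent.append(reading)
    if len(recent) == recent.maxlen and drifted(reference, recent):
        # In a real pipeline this would trigger an alert or retraining.
        print("possible drift/wear detected at reading", reading)
    else:
        reference.append(reading)

# Example: stable readings around 1.0, then an upward shift (wear).
for x in [1.0, 1.01, 0.99] * 80 + [1.3] * 40:
    observe(x)

Real deployments would of course use more robust detectors (and handle seasonality, imbalance, and label uncertainty), which is exactly the kind of work the workshop solicits.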
Aim and scope: This workshop welcomes research papers using Data Mining and Machine Learning (Artificial Intelligence in general) to address the challenges and answer questions related to the problem of predictive maintenance: for example, when to perform maintenance actions, how to estimate a component's current and future status, which data should be used, what decision support tools should be developed for prognostics, how to improve the estimation accuracy of remaining useful life, and similar. It solicits original work, already completed or in progress. Position papers will also be considered.

The scope of the workshop covers, but is not limited to, the following: - Predictive and Prescriptive Maintenance - Fault Detection and Diagnosis (FDD) - Fault Isolation and Identification - Anomaly Detection (AD) - Estimation of Remaining Useful Life of Components, Machines, etc. - Forecasting of Product and Process Quality - Early Failure and Anomaly Detection and Analysis - Automatic Process Optimization - Self-healing and Self-correction - Incremental and evolving (data-driven and hybrid) models for FDD and AD - Self-adaptive time-series based models for prognostics and forecasting - Adaptive signal processing techniques for FDD and forecasting - Concept drift issues in dynamic predictive maintenance systems - Active learning and Design of Experiments (DoE) in dynamic predictive maintenance - Industrial process monitoring and modelling - Maintenance scheduling and on-demand maintenance planning - Visual analytics and interactive Machine Learning - Analysis of usage patterns - Explainable AI for predictive maintenance

It covers real-world applications such as: - Manufacturing systems - Transport systems (including roads, railways, aerospace and more) - Energy and power systems and networks (wind turbines, solar plants and more) - Smart management of energy demand/response - Production Processes and Factories of the Future (FoF) - Power generation and distribution systems - Intrusion detection and cybersecurity - Internet of Things - Smart cities

Paper submission: Authors should submit a PDF version in Springer LNCS style using the workshop EasyChair site: https://easychair.org/my/conference?conf=iotstream2022. The maximum length of papers is 15 pages, including references, consistent with the ECML PKDD conference submissions. Submitting a paper to the workshop means that if the paper is accepted, at least one author will attend the workshop and present the paper. Papers not presented at the workshop will not be included in the proceedings. We will follow ECML PKDD's policy for attendance.

Paper publication: Accepted papers will be published by Springer as joint proceedings of several ECML PKDD workshops.

Workshop format: - Half-day workshop - 1-2 keynote talks, speakers to be announced - Oral presentation of accepted papers

Important Dates: - Workshop paper submission deadline: June 20, 2022 - Workshop paper acceptance notification: July 13, 2022 - Workshop paper camera-ready deadline: July 27, 2022 - Workshop: September 23, 09h-12.30, 2022 (TBC)

Program Committee members (to be confirmed): - Edwin Lughofer, Johannes Kepler University of Linz, Austria - Sylvie Charbonnier, Université Joseph Fourier-Grenoble, France - David Camacho Fernandez, Universidad Politecnica de Madrid, Spain - Bruno Sielly Jales Costa, IFRN, Natal, Brazil - Fernando Gomide, University of Campinas, Brazil - José A. Iglesias, Universidad Carlos III de Madrid, Spain -
Anthony Fleury, Mines-Douai, Institut Mines-Télécom, France - Teng Teck Hou, Nanyang Technological University, Singapore - Plamen Angelov, Lancaster University, UK - Igor Skrjanc, University of Ljubljana, Slovenia - Indre Zliobaite, University of Helsinki, Finland - Elaine Faria, Univ. Uberlandia, Brazil - Mykola Pechenizkiy, TU Eindhoven, Netherlands - Raquel Sebastião, Univ. Aveiro, Portugal - Anders Holst, RISE SICS, Sweden - Erik Frisk, Linköping University, Sweden - Enrique Alba, University of Málaga, Spain - Thorsteinn Rögnvaldsson, Halmstad University, Sweden - Andreas Theissler, University of Applied Sciences Aalen, Germany - Vivek Agarwal, Idaho National Laboratory, Idaho - Manuel Roveri, Politecnico di Milano, Italy - Yang Hu, Politecnico di Milano, Italy

Workshop Organizers: - Albert Bifet, Telecom-Paris, Paris, France, University of Waikato, New Zealand, albert.bifet at telecom-paristech.fr - João Gama, University of Porto, Portugal, jgama at fep.up.pt - Slawomir Nowaczyk, Halmstad University, Sweden, slawomir.nowaczyk at hh.se - Carlos Ferreira, LIAAD INESC, Porto, Portugal, ISEP, Porto, Portugal, cgf at isep.ipp.pt

Tutorial Organizers: - Rita Ribeiro, University of Porto, Portugal, rpribeiro at fc.up.pt - Szymon Bobek, Jagiellonian University, Poland, szymon.bobek at uj.edu.pl - Bruno Veloso, LIAAD INESC, Porto, Portugal, University Portucalense, Porto, Portugal, bruno.miguel.veloso at gmail.com - Grzegorz J. Nalepa, Jagiellonian University, Krakow, Poland, gjn at gjn.re - Sepideh Pashami, Halmstad University, Sweden, sepideh.pashami at hh.se

Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt

From michal.ptaszynski at gmail.com Fri Jun 10 00:56:34 2022 From: michal.ptaszynski at gmail.com (Ptaszynski Michal) Date: Fri, 10 Jun 2022 13:56:34 +0900 Subject: Connectionists: [Final CfP] Call for Papers: Information Processing & Management (IP&M) (IF: 6.222) Special Issue on Science Behind Neural Language Models Message-ID: <6E57F8E9-3BCF-48B4-B596-D238F79CCEBC@gmail.com>

Dear Colleagues, ** Apologies for cross-posting ** This is Michal Ptaszynski from Kitami Institute of Technology, Japan. This is the final call for papers for the Information Processing & Management (IP&M) (IF: 6.222) journal Special Issue on "Science Behind Neural Language Models." This special issue is also a Thematic Track at the Information Processing & Management Conference 2022 (IP&MC2022), meaning that at least one author of an accepted manuscript will need to attend the IP&MC2022 conference. For more information about IP&MC2022, please visit: https://www.elsevier.com/events/conferences/information-processing-and-management-conference The deadline for manuscript submission is June 15, 2022, but your paper will be reviewed immediately after submission and will be published as soon as it is accepted. We hope you will consider submitting your paper.
https://www.elsevier.com/events/conferences/information-processing-and-management-conference/author-submission/science-behind-neural-language-models

Best regards, Michal PTASZYNSKI, Ph.D., Associate Professor Department of Computer Science Kitami Institute of Technology, 165 Koen-cho, Kitami, 090-8507, Japan TEL/FAX: +81-157-26-9327 michal at mail.kitami-it.ac.jp

============================================ Information Processing & Management (IP&M) (IF: 6.222) Special Issue on "Science Behind Neural Language Models" & Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track on "Science Behind Neural Language Models"

Motivation: The last several years have seen an explosion in the popularity of neural language models, especially large pre-trained language models based on the transformer architecture. The fields of Natural Language Processing (NLP) and Computational Linguistics (CL) have experienced a shift from simple language models such as Bag-of-Words, and word representations like word2vec or GloVe, to more contextually aware language models, such as ELMo and, more recently, BERT and GPT, including their improvements and derivatives. The generally high performance obtained by BERT-based models in various tasks even convinced Google to apply one as a default backbone in its search engine query expansion module, thus making BERT-based models mainstream and a strong baseline in NLP/CL research. The popularity of large pretrained language models also allowed for major growth of companies providing freely available repositories of such models, and, more recently, the founding of Stanford University's Center for Research on Foundation Models (CRFM). However, despite the overwhelming popularity and undeniable performance of large pretrained language models, or "foundation models", the specific inner workings of those models have been notoriously difficult to analyze, and the causes of the (usually unexpected and unreasonable) errors they make difficult to untangle and mitigate. As neural language models keep gaining in popularity while expanding into the area of multimodality by incorporating visual and speech information, it has become all the more important to thoroughly analyze, fully explain and understand the internal mechanisms of neural language models. In other words, the science behind neural language models needs to be developed.

Aims and scope: With the above background in mind, we propose the following Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track and Information Processing & Management Journal Special Issue on Science Behind Neural Language Models. The TT/SI will focus on topics deepening the knowledge of how neural language models work. Therefore, instead of taking up basic topics from the fields of CL and NLP, such as improvement of part-of-speech tagging or standard sentiment analysis, regardless of whether they apply neural language models in practice, we will focus on promoting research that specifically aims at analyzing and understanding the "bells and whistles" of neural language models, for which a generally accepted science has not yet been established.

Target audience: The TT/SI will aim at an audience of scientists, researchers, scholars, and students performing research on the analysis of pretrained language models, with a specific focus on explainable approaches to language models, analysis of the errors such models make, and methods for debiasing, detoxification and other improvements of pretrained language models.
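As a simple illustration of the contextual behavior that distinguishes these models from static word embeddings, and that this track seeks to analyze, consider the following minimal sketch. It assumes the Hugging Face transformers library, with bert-base-uncased chosen purely as an example checkpoint, and shows the same word receiving different vectors in different contexts:

# Minimal sketch: contextual vs. static word representations.
# Assumes: pip install torch transformers. bert-base-uncased is just
# an example checkpoint, not an endorsement of any particular model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "She deposited the money at the bank.",
    "They had a picnic on the bank of the river.",
]

vectors = []
with torch.no_grad():
    for sentence in sentences:
        encoded = tokenizer(sentence, return_tensors="pt")
        hidden = model(**encoded).last_hidden_state[0]  # (seq_len, 768)
        # Locate the token "bank" and keep its contextual vector.
        bank_id = tokenizer.convert_tokens_to_ids("bank")
        position = encoded["input_ids"][0].tolist().index(bank_id)
        vectors.append(hidden[position])

# A static embedding (word2vec, GloVe) would assign "bank" one vector;
# a cosine similarity well below 1.0 shows the representation depends
# on context, which is what makes these models both powerful and hard
# to analyze.
similarity = torch.cosine_similarity(vectors[0], vectors[1], dim=0)
print(f"cosine similarity between the two 'bank' vectors: {similarity:.3f}")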
The TT/SI will not accept research on basic NLP/CL topics for which the field has been well established, such as improvement of part-of-speech tagging, sentiment analysis, etc., even if they apply neural language models, unless they directly contribute to furthering the understanding and explanation of the inner workings of large-scale pretrained language models.

List of Topics: The Thematic Track / Special Issue will invite papers on topics listed, but not limited to, the following: - Neural language model architectures - Improvement of the neural language model generation process - Methods for fine-tuning and optimization of neural language models - Debiasing neural language models - Detoxification of neural language models - Error analysis and probing of neural language models - Explainable methods for neural language models - Neural language models and linguistic phenomena - Lottery Ticket Hypothesis for neural language models - Multimodality in neural language models - Generative neural language models - Inferential neural language models - Cross-lingual or multilingual neural language models - Compression of neural language models - Domain-specific neural language models - Expansion of information embedded in neural language models

Important Dates: Thematic track manuscript submission due date (authors are welcome to submit early, as reviews will be rolling): June 15, 2022 Author notification: July 31, 2022 IP&MC conference presentation and feedback: October 20-23, 2022 Post-conference revision due date: January 1, 2023

Submission Guidelines: Submit your manuscript to the Special Issue category (VSI: IPMC2022 HCICTS) through the online submission system of Information Processing & Management: https://www.editorialmanager.com/ipm/ Authors should prepare the submission following the Guide for Authors of the IP&M journal (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors). All papers will be peer-reviewed following the IP&MC2022 reviewing procedures. The authors of accepted papers will be obligated to participate in IP&MC 2022 and present the paper to the community to receive feedback. The accepted papers will be invited for revision after receiving feedback at the IP&MC 2022 conference. The submissions will be given premium handling at IP&M following its peer-review procedure and, if accepted, published in IP&M as full journal articles, with an option for a short conference version at IP&MC2022. Please see this infographic for the manuscript flow: https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf For more information about IP&MC2022, please visit https://www.elsevier.com/events/conferences/information-processing-and-management-conference.

Thematic Track / Special Issue Editors: Managing Guest Editor: Michal Ptaszynski (Kitami Institute of Technology) Guest Editors: Rafal Rzepka (Hokkaido University), Anna Rogers (University of Copenhagen), Karol Nowakowski (Tohoku University of Community Service and Science) For further information, please feel free to contact Michal Ptaszynski directly.
Michal Ptaszynski michal.ptaszynski at gmail.com

From geoffrey.hinton at gmail.com Thu Jun 9 14:50:02 2022 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Thu, 9 Jun 2022 14:50:02 -0400 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> References: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> Message-ID:

If you make a bet, you need to be very clear about what counts as success, and you need to assume the person who has to pay out will quibble. You cannot have a real bet that includes phrases like: "tell you accurately" "reliably answer questions" "competent cook" "reliably construct" A bet needs to have such clear criteria for whether it has been achieved or not that even Gary Marcus could not quibble. Geoff

On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:
> Dear Connectionists, and especially Geoff Hinton,
>
> It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said
>
> "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them."
>
> Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>
> Precise details are here, towards the end of the essay:
>
> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>
> Five are specific milestones, in video and text comprehension, cooking, math, etc; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>
> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
>
> If you'd like in on the bet, Geoff, please let me know.
>
> More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).
>
> Cheers.
> Gary Marcus

From gary.marcus at nyu.edu Thu Jun 9 16:43:13 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 9 Jun 2022 13:43:13 -0700 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: References: Message-ID: <1EFC851C-171D-4839-9211-5FFB3EE31076@nyu.edu>

These, by the way, are all fine queries, minus the gratuitous ad hominem, which is unnecessary; after all, quibblers abound, across the spectrum. The good news is that the community is already working these things out on metaculus.com, and I'm happy to take middle-of-the-road positions on all, if you wish to participate (e.g., "accurately" can be taken to equal 90%, or whatever a sample of U. Toronto undergraduates scores, or whatever).
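To show how mechanically such a criterion can be written down, here is a minimal sketch; the 90% figure and the undergraduate baseline are exactly the kinds of placeholders the two sides would pin down when moving from term sheet to contract:

# Minimal sketch of an un-quibble-able bet criterion: the system must
# match a fixed accuracy threshold, or a measured human baseline,
# on a pre-registered, held-out question set. All numbers below are
# placeholders that the two sides would negotiate in advance.

def mean(xs):
    return sum(xs) / len(xs)

def bet_is_won(system_correct, human_scores, fixed_threshold=0.90):
    """system_correct: list of 0/1 outcomes on the agreed question set.
    human_scores: per-person accuracies from a baseline sample
    (e.g., a sample of undergraduates). The system must reach both
    the fixed threshold and the human average, leaving no room to quibble."""
    system_accuracy = mean(system_correct)
    human_baseline = mean(human_scores)
    return system_accuracy >= max(fixed_threshold, human_baseline)

# Example: 93% system accuracy against a 91% human baseline wins the bet.
print(bet_is_won([1] * 93 + [0] * 7, [0.88, 0.95, 0.90]))  # True

Once the question set, the grading protocol, and the thresholds are fixed in advance, "accurately" stops being a matter of interpretation.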
On the comprehension challenges in particular, I am working with a group of people from DeepMind, Meta, OpenAI, and various universities to try to make something practicable. Szegedy has been trying to iron out the math bet. Cooking and the correct rules for the coding challenge I leave to others. More broadly, in my experience in complex negotiations, one starts with something like a term sheet for the higher-level concepts, and then works to the finer grain only after there is enough agreement at the higher level. I'd gladly put the time into negotiating the details if you would seriously engage (with or without money) at the higher level. -gfm

> On Jun 9, 2022, at 11:50, Geoffrey Hinton wrote:
>
> If you make a bet, you need to be very clear about what counts as success, and you need to assume the person who has to pay out will quibble. You cannot have a real bet that includes phrases like:
>
> "tell you accurately"
> "reliably answer questions"
> "competent cook"
> "reliably construct"
>
> A bet needs to have such clear criteria for whether it has been achieved or not that even Gary Marcus could not quibble.
>
> Geoff
>
> On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:
>> Dear Connectionists, and especially Geoff Hinton,
>>
>> It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said
>>
>> "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them."
>>
>> Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>>
>> Precise details are here, towards the end of the essay:
>>
>> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>>
>> Five are specific milestones, in video and text comprehension, cooking, math, etc; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>>
>> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
>>
>> If you'd like in on the bet, Geoff, please let me know.
>>
>> More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).
>>
>> Cheers.
>> Gary Marcus
From terry at salk.edu Thu Jun 9 22:53:34 2022 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 09 Jun 2022 19:53:34 -0700 Subject: Connectionists: NEURAL COMPUTATION - June 1, 2022 In-Reply-To: Message-ID:

Neural Computation - Volume 34, Number 6 - June 1, 2022 Available online for download now: http://www.mitpressjournals.org/toc/neco/34/6 http://cognet.mit.edu/content/neural-computation

----- Review: Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks Abbas Z. Kouzani, Amirhossein Javanshir, Thanh Thi Nguyen, and M. A. Parvez Mahmud

Letters: Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs Beren Millidge, Alexander Tschantz, and Christopher L Buckley; Decoding Pixel-level Image Features From Two-photon Calcium Signals of Macaque Visual Cortex Zhaofei Yu, Yijun Zhang, Tong Bu, Jiyuan Zhang, Shiming Tang, Jian K. Liu, and Tiejun Huang; Full-Span Log-Linear Model and Fast Learning Algorithm Kazuya Takabatake, Shotaro Akaho; Role of Interaction Delays in the Synchronization of Inhibitory Networks Alireza Valizadeh, Nariman Roohi; Hypothesis Test and Confidence Analysis With Wasserstein Distance on General Dimension Masaaki Imaizumi, Hirofumi Ota, and Takuo Hamaguchi; The Perils of Being Unhinged: On the Accuracy of Classifiers Minimizing a Noise-robust Convex Loss Rocco Servedio, Philip M. Long

----- ON-LINE -- http://www.mitpressjournals.org/neco MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-cs at mit.edu -----

From geoffrey.hinton at gmail.com Thu Jun 9 14:34:08 2022 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Thu, 9 Jun 2022 14:34:08 -0400 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> References: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> Message-ID:

I shouldn't respond because your main aim is to get attention without going to the trouble of building something that works (personal communication, Y. LeCun), but I cannot resist pointing out the following Marcus claim from 2016:

"People are very excited about big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."

Given that big neural nets can now explain why a joke is funny (for some subset of jokes), do you still want to stick with this claim? It seems to me that the reason you made this claim is because you have a strong prior belief about how language understanding and reasoning must work, and this belief is remarkably resistant to evidence. Deep learning researchers have seen this before. Yann had a paper rejected by a vision conference even though it beat the state of the art, and one of the reasons given was that the model learned everything and therefore taught us nothing about how to do vision. That particular referee had a strong idea of how computer vision must work and failed to notice that the success of Yann's model showed that that prior belief was spectacularly wrong.

Geoff

On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:
> Dear Connectionists, and especially Geoff Hinton,
>
> It has come to my attention that Geoff Hinton is looking for challenging targets.
> In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said
>
> "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them."
>
> Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>
> Precise details are here, towards the end of the essay:
>
> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>
> Five are specific milestones, in video and text comprehension, cooking, math, etc; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>
> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
>
> If you'd like in on the bet, Geoff, please let me know.
>
> More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).
>
> Cheers.
> Gary Marcus

From samuel.kaski at manchester.ac.uk Thu Jun 9 16:29:17 2022 From: samuel.kaski at manchester.ac.uk (Samuel Kaski) Date: Thu, 9 Jun 2022 20:29:17 +0000 Subject: Connectionists: Researcher positions in probabilistic machine learning: Research Fellow, Postdoc and PhD Student Message-ID:

The Manchester Centre for AI Fundamentals has researcher positions open at several levels: Research Fellow, Postdoc and Postgraduate Research Student (funded). Deadline: June 30, 2022. I am hiring researchers for my team working on probabilistic machine learning. Keywords include: Bayesian inference, reinforcement learning and inverse reinforcement learning, automatic experimental design, multi-agent learning, Bayesian deep learning, amortized inference, human-in-the-loop learning, user modelling, collaborative AI, privacy-preserving learning, likelihood-free inference. Different researchers work on different but related subsets of these. The work is funded by the UKRI Turing AI World-Leading Researcher Fellowship programme, on "Steering AI in Experimental Design and Decision-Making". The team is based in the new Manchester Centre for AI Fundamentals, which builds on the new ELLIS Unit Manchester and the Alan Turing Institute. In addition to these outstanding AI and Machine Learning collaboration opportunities, we collaborate with excellent teams in other fields, in both academia and industry, which offer application opportunities for those interested: personalized medicine, especially for cancer and remote medicine; synthetic biology; digital twins more generally; etc. Now is the time to join the University of Manchester, as it is significantly boosting its activities in Machine Learning!
More info (deadline June 30): Research Fellows: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=21859 Postdocs: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=22431 PhD Student (funded; both UK and international applicants welcome): https://www.cs.manchester.ac.uk/study/postgraduate-research/research-projects/description/?projectid=33392 All positions: https://www.idsai.manchester.ac.uk/research/centre-for-ai-fundamentals/news-and-opportunities/ Background about this research programme: https://www.ukri.org/news/global-leaders-named-as-turing-ai-world-leading-researcher-fellows/ https://www.manchester.ac.uk/discover/news/new-human-ai-research-teams-could-be-the-future-of-research-meeting-future-societal-challenges/

From jowulff at gmail.com Thu Jun 9 18:55:11 2022 From: jowulff at gmail.com (Jonas Wulff) Date: Thu, 9 Jun 2022 15:55:11 -0700 Subject: Connectionists: [CFP] ECCV 2022 Workshop: What is Motion for? Message-ID:

*** Apologies for cross-posting ***

*CALL FOR PAPERS -- "What is Motion for?" Workshop in conjunction with ECCV 2022* Website: https://what-is-motion-for.github.io/

Motion is an important cue for many perception tasks, from navigation to scene understanding. This workshop will explore various ways of representing and extracting motion information, and provide a venue to exchange ideas about the use of motion in Computer Vision. To this end, we invite paper contributions discussing temporal and motion representations and applications, evaluation metrics, and benchmarks that will help understand and shape the future of temporal and motion representations and their role in the field of computer vision and other related areas. The workshop will focus on topics including (but not limited to): - Motion representations (optical flow, stereo, scene flow, and alternatives) - Benchmarks involving motion estimation or motion understanding - Multi-frame and long-term motion representations - Applications of motion estimation and representations and their impact on other computer vision problems (e.g. tracking, action classification, video captioning, etc.) - Motion representations in other applications, such as graphics, autonomous driving, robotics, medical imaging, animal tracking, etc. - Event cameras and applications of event-based representations

*Important Dates* - Paper submission deadline: Aug 26th, 2022 - Notification to authors: September 26th, 2022 - Finalized workshop program: October 10th, 2022 - Workshop: During ECCV (Oct. 23-27, 2022); exact date TBD

*Paper & Submission format* We will accept submissions in two formats: - Previously published papers (up to 12 pages in ECCV submission format) representing work that is relevant to the workshop and has been published in a peer-reviewed venue before. These submissions will be checked for relevance to the workshop, but will not undergo a complete review, and will not be published in the workshop proceedings. - Novel works and ideas in the form of full papers (up to 12 pages in ECCV submission format) or extended abstracts (up to 4 pages; no format will be given preference over the other) representing novel work that has not been previously published or accepted for publication in a peer-reviewed venue. These submissions will undergo double-blind review, and authors of accepted works will have the option to have their work included in the ECCV workshop proceedings.
Note that if you want to re-submit your paper to a future Computer Vision conference, the length should not exceed 4 pages including citations. Accepted papers will be presented as posters/short online presentations, and the authors of the best paper will be invited to give an oral presentation. The workshop website will provide links to the accepted papers. Authors of novel and previously unpublished papers will have the option to have their papers included in the ECCV workshop proceedings.

From vivek at wadhwa.com Thu Jun 9 16:42:29 2022 From: vivek at wadhwa.com (Vivek Wadhwa) Date: Thu, 9 Jun 2022 13:42:29 -0700 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: References: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> Message-ID: <001101d87c41$70285700$50790500$@wadhwa.com>

Geoff, I have done the equivalent of switching sides over here. Long ago, I was recruited by Ray Kurzweil and Peter Diamandis to head academics for Singularity University, and I considered Elon Musk a friend of sorts. Now you see me teaming up with Gary to challenge Elon and the futurists who proclaim that AGI is almost here. It isn't that I don't believe that these models will evolve and do some things which seem amazing; it is that I have realized that Gary and others are right: these systems aren't stable or trustworthy; a crumbling house follows from a rotten foundation. With all the hype that is being generated by people like Andreessen and Musk, the investments are going into wasteful things like crypto/NFTs and the imaginary Web3. We are being distracted from the problems that flawed AI data models have created with privacy and security, and with inequity. In this piece I wrote for India's Hindustan Times, for example, I talked for the first time about Tesla's dangerous FSD: https://www.hindustantimes.com/opinion/why-india-is-better-off-without-musk-and-tesla-101654787059810.html. By making unrealistic claims about the imminence of AGI, Elon is very skillfully changing the subject. The fact is that not only are Tesla's sensors inadequate, so are its AI models. The car will never have a safe FSD. And if Elon's robots are as flawed as his car's AI systems are, they will be killing their owners the moment they get shouted at. I have no doubt that within this decade you will not trust an AI to look after your children or grandchildren. I would be surprised if anyone really believes that, in at least the next two decades, computers will have any form of sentience, or the stuff we saw in science fiction.
Regards, Vivek Wadhwa Former Distinguished Fellow at Harvard Law School, Carnegie Mellon School of Engineering, and Emory University; Fellow at Stanford Law and UC-Berkeley; adjunct professor at Duke University and Carnegie Mellon University; and VP of Academics at Singularity University Website: www.wadhwa.com, Twitter: @wadhwa Author: The Immigrant Exodus, Innovating Women, Your Happiness Was Hacked, The Driver in the Driverless Car and From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation

From: Geoffrey Hinton Sent: Thursday, June 9, 2022 11:50 AM To: Gary Marcus Cc: connectionists at mailman.srv.cs.cmu.edu; Vivek Wadhwa Subject: Re: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com

If you make a bet, you need to be very clear about what counts as success, and you need to assume the person who has to pay out will quibble. You cannot have a real bet that includes phrases like: "tell you accurately" "reliably answer questions" "competent cook" "reliably construct" A bet needs to have such clear criteria for whether it has been achieved or not that even Gary Marcus could not quibble. Geoff

On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote: Dear Connectionists, and especially Geoff Hinton, It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said "If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them." Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines. Precise details are here, towards the end of the essay: https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things Five are specific milestones, in video and text comprehension, cooking, math, etc; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting. Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000. If you'd like in on the bet, Geoff, please let me know. More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think). Cheers. Gary Marcus

From ludovico.montalcini at gmail.com Fri Jun 10 03:50:31 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Fri, 10 Jun 2022 09:50:31 +0200 Subject: Connectionists: CfP: The 8th Int. Online & Onsite Conf.
on Machine Learning, Optimization & Data Science - LOD 2022, September 18-22, Certosa di Pontignano, Tuscany - Italy - Late Breaking Paper Submission Deadline: June 15 In-Reply-To: References: <510613FC-D595-4600-8C08-6C980C735873@nyu.edu> Message-ID:

Dear Colleague, Apologies if you receive multiple copies of this announcement. Please kindly help forward it to potentially interested authors/attendees, thanks!

-- The 8th International Online & Onsite Conference on Machine Learning, Optimization, and Data Science - #LOD2022 - September 18-22, Certosa di Pontignano, #Tuscany - Italy. LOD 2022, An Interdisciplinary Conference: #MachineLearning, #Optimization, #BigData & #ArtificialIntelligence, #DeepLearning without Borders https://lod2022.icas.cc lod at icas.cc

Late Breaking PAPERS SUBMISSION: June 15 (Anywhere on Earth) All papers must be submitted using EasyChair: https://easychair.org/conferences/?conf=lod2022

LOD 2022 KEYNOTE SPEAKER(S): * Pierre Baldi, University of California Irvine, USA * Jürgen Bajorath, University of Bonn, Germany * Ross King, University of Cambridge, UK & The Alan Turing Institute, UK * Rema Padman, Carnegie Mellon University, USA

LOD 2022 TUTORIAL SPEAKER: * Simone Scardapane, University of Rome "La Sapienza", Italy

ACAIN 2022 KEYNOTE SPEAKERS: * Marvin M. Chun, Yale University, USA * Ila Fiete, MIT, USA * Karl Friston, University College London, UK & Wellcome Trust Centre for Neuroimaging * Wulfram Gerstner, EPFL, Switzerland * Máté Lengyel, Cambridge University, UK * Max Erik Tegmark, MIT, USA & Future of Life Institute * Michail Tsodyks, Institute for Advanced Study, USA More Lecturers and Speakers to be announced soon! https://acain2022.artificial-intelligence-sas.org/course-lecturers/

PAPER FORMAT: Please prepare your paper using the Springer Nature Lecture Notes in Computer Science (LNCS) template. Papers must be submitted in PDF.

TYPES OF SUBMISSIONS: When submitting a paper to LOD 2022, authors are required to select one of the following four types of papers: * long paper: original novel and unpublished work (max. 15 pages in Springer LNCS format); * short paper: an extended abstract of novel work (max. 5 pages); * work for oral presentation only (no page restriction; any format). For example, work already published elsewhere, which is relevant, and which may solicit fruitful discussion at the conference; * abstract for poster presentation only (max 2 pages; any format). The poster format for the presentation is A0 (118.9 cm high and 84.1 cm wide, respectively 46.8 x 33.1 inch). For research work which is relevant, and which may solicit fruitful discussion at the conference.

Each paper submitted will be rigorously evaluated. The evaluation will ensure the high interest and expertise of reviewers. Following the tradition of LOD, we expect high-quality papers in terms of their scientific contribution, rigor, correctness, novelty, clarity, quality of presentation and reproducibility of experiments. Accepted papers must contain significant novel results. Results can be either theoretical or empirical. Results will be judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact. It is also possible to present the talk virtually (Zoom).
LOD 2022 Special Sessions: https://lod2022.icas.cc/special-sessions/ https://easychair.org/my/conference?conf=lod2022

PAST LOD KEYNOTE SPEAKERS: https://lod2022.icas.cc/past-keynote-speakers/ Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA) & University of Montreal, Canada; Bettina Berendt, TU Berlin, Germany & KU Leuven, Belgium, and Weizenbaum Institute for the Networked Society, Germany; Jörg Bornschein, DeepMind, London, UK; Michael Bronstein, Imperial College London, UK; Nello Cristianini, University of Bristol, UK; Peter Flach, University of Bristol, UK, and EiC of the Machine Learning Journal; Marco Gori, University of Siena, Italy; Arthur Gretton, UCL, UK; Arthur Guez, Google DeepMind, UK; Yi-Ke Guo, Imperial College London, UK; George Karypis, University of Minnesota, USA; Vipin Kumar, University of Minnesota, USA; Marta Kwiatkowska, University of Oxford, UK; George Michailidis, University of Florida, USA; Kaisa Miettinen, University of Jyväskylä, Finland; Stephen Muggleton, Imperial College London, UK; Panos Pardalos, University of Florida, USA; Jan Peters, Technische Universitaet Darmstadt & Max-Planck Institute for Intelligent Systems, Germany; Tomaso Poggio, MIT, USA; Andrey Raygorodsky, Moscow Institute of Physics and Technology, Russia; Mauricio G. C. Resende, Amazon.com Research and University of Washington Seattle, Washington, USA; Ruslan Salakhutdinov, Carnegie Mellon University, USA, and AI Research at Apple; Maria Schuld, Xanadu & University of KwaZulu-Natal, South Africa; Richard E. Turner, Department of Engineering, University of Cambridge, UK; Ruth Urner, York University, Toronto, Canada; Isabel Valera, Saarland University, Saarbrücken & Max Planck Institute for Intelligent Systems, Tübingen, Germany

TRACKS & SPECIAL SESSIONS: https://lod2022.icas.cc/special-sessions/

BEST PAPER AWARD: Springer sponsors the LOD 2022 Best Paper Award https://lod2022.icas.cc/best-paper-award/

PROGRAM COMMITTEE: https://lod2022.icas.cc/program-committee/

SCHEDULE: https://lod2022.icas.cc/wp-content/uploads/sites/20/2022/02/LOD-2022-Schedule-Ver-1.pdf

VENUE: https://lod2022.icas.cc/venue/ The venue of LOD 2022 will be the Certosa di Pontignano - Siena. The Certosa di Pontignano, Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy. phone: +39-0577-1521104 fax: +39-0577-1521098 info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php Contact person: Dr. Lorenzo Pasquinuzzi. You need to book your accommodation at the venue and pay the amount for accommodation directly to the Certosa di Pontignano.

ACTIVITIES: https://lod2022.icas.cc/activities/

POSTER: https://lod2022.icas.cc/wp-content/uploads/sites/20/2022/02/poster-LOD-2022-1.png

Submit your research work today! https://easychair.org/conferences/?conf=lod2022 See you in beautiful Tuscany in September! Best regards, LOD 2022 Organizing Committee

LOD 2022 NEWS: https://lod2022.icas.cc/category/news/

Past Editions https://lod2022.icas.cc/past-editions/ LOD 2021, The Seventh International Conference on Machine Learning, Optimization and Big Data, Grasmere - Lake District - England, UK. Nature Springer - LNCS volumes 13163 and 13164. LOD 2020, The Sixth International Conference on Machine Learning, Optimization and Big Data, Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer - LNCS volumes 12565 and 12566. LOD 2019, The Fifth International Conference on Machine Learning, Optimization and Big Data, Certosa di Pontignano - Siena - Tuscany - Italy. Nature Springer -
LNCS volume 11943.
LOD 2018, The Fourth International Conference on Machine Learning, Optimization and Big Data Volterra – Tuscany – Italy. Nature Springer – LNCS volume 11331.
MOD 2017, The Third International Conference on Machine Learning, Optimization and Big Data Volterra – Tuscany – Italy. Springer – LNCS volume 10710.
MOD 2016, The Second International Workshop on Machine learning, Optimization and big Data Volterra – Tuscany – Italy. Springer – LNCS volume 10122.
MOD 2015, International Workshop on Machine learning, Optimization and big Data Taormina – Sicily – Italy. Springer – LNCS volume 9432.

https://www.facebook.com/groups/2236577489686309/ https://twitter.com/TaoSciences https://www.linkedin.com/groups/12092025/
lod at icas.cc https://lod2022.icas.cc

* Apologies for multiple copies. Please forward to anybody who might be interested *

From Menno.VanZaanen at nwu.ac.za Fri Jun 10 06:00:11 2022 From: Menno.VanZaanen at nwu.ac.za (Menno Van Zaanen) Date: Fri, 10 Jun 2022 10:00:11 +0000 Subject: Connectionists: CfP Third workshop on Resources for African Indigenous Languages (RAIL) Message-ID: <3edf4d45dddfa0275af8c996b08f35639636e8f8.camel@nwu.ac.za>

First call for papers

Third workshop on Resources for African Indigenous Languages (RAIL) https://bit.ly/rail2022

The South African Centre for Digital Language Resources (SADiLaR) is organising the 3rd RAIL workshop in the field of Resources for African Indigenous Languages. This workshop aims to bring together researchers who are interested in showcasing their research and thereby boosting the field of African indigenous languages. This provides an overview of the current state-of-the-art and emphasizes availability of African indigenous language resources, including both data and tools. Additionally, it will allow for information sharing among researchers interested in African indigenous languages and also start discussions on improving the quality and availability of the resources. Many African indigenous languages currently have no or very limited resources available and, additionally, they are often structurally quite different from more well-resourced languages, requiring the development and use of specialized techniques. By bringing together researchers from different fields (e.g., (computational) linguistics, sociolinguistics, language technology) to discuss the development of language resources for African indigenous languages, we hope to boost research in this field.

The RAIL workshop is an interdisciplinary platform for researchers working on resources (data collections, tools, etc.) specifically targeted towards African indigenous languages. It aims to create the conditions for the emergence of a scientific community of practice that focuses on data, as well as tools, specifically designed for or applied to indigenous languages found in Africa.
Suggested topics include the following:
* Digital representations of linguistic structures
* Descriptions of corpora or other data sets of African indigenous languages
* Building resources for (under-resourced) African indigenous languages
* Developing and using African indigenous languages in the digital age
* Effectiveness of digital technologies for the development of African indigenous languages
* Revealing unknown or unpublished existing resources for African indigenous languages
* Developing desired resources for African indigenous languages
* Improving quality, availability and accessibility of African indigenous language resources

The 3rd RAIL workshop 2022 will be co-located with the 10th Southern African Microlinguistics Workshop (https://sites.google.com/nwulettere.co.za/samwop-10/home). This will be an in-person event located in Potchefstroom, South Africa. Registration will be free.

RAIL 2022 submission requirements:
* RAIL asks for full papers from 4 pages to 8 pages (plus more pages for references if needed), which must strictly follow the Journal of the Digital Humanities Association of Southern Africa style guide (https://upjournals.up.ac.za/index.php/dhasa/libraryFiles/downloadPublic/30).
* Accepted submissions will be published in JDHASA, the Journal of the Digital Humanities Association of Southern Africa (https://upjournals.up.ac.za/index.php/dhasa/).
* Papers will be double-blind peer-reviewed and must be submitted through EasyChair (https://easychair.org/my/conference?conf=rail2022).

Important dates
Submission deadline: 28 August 2022
Date of notification: 30 September 2022
Camera-ready copy deadline: 23 October 2022
RAIL: 30 November 2022, North-West University - Potchefstroom
SAMWOP: 1-3 December 2022, North-West University - Potchefstroom

Organising Committee
Jessica Mabaso
Rooweither Mabuya
Muzi Matfunjwa
Mmasibidi Setaka
Menno van Zaanen

South African Centre for Digital Language Resources (SADiLaR), South Africa
-- Prof Menno van Zaanen menno.vanzaanen at nwu.ac.za Professor in Digital Humanities South African Centre for Digital Language Resources https://www.sadilar.org

From geoffrey.hinton at gmail.com Sat Jun 11 14:01:36 2022 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Sat, 11 Jun 2022 14:01:36 -0400 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu> References: <873c8817-0f68-21e8-23e1-d944ce72810d@rutgers.edu> <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu> Message-ID:

I am very sorry if you thought I was accusing you of being deranged. I do not think that at all. I just think you are stuck in a failed paradigm.

I did accuse you of seeking attention and I would be happy to provide evidence for that if you really want to go there.

My comment about the need to moderate deranged rantings was sent to connectionists at 1.23pm on June 8 and appeared on the list at 5.58am on June 9. It was in response to a completely different email. As you know, there is a delay between sending something to the list and it appearing on the list. Your email, which seemed to be soliciting a response from me, appeared on June 9 at 3.41am, which was after I sent my email about deranged rantings. It's very unfortunate that my email about deranged rantings appeared a few hours after your email appeared, but there was no causal connection.
I am all in favor of providing clearly specified tests for whether a model "understands" and I gave one example of a good step in that direction in my podcast with Pieter Abbeel.

I also think that the ability to draw pictures when given captions is pretty convincing evidence of understanding the caption. My intense irritation with your comments is largely driven by my belief that in about 2010 you would have confidently predicted that the current performance of neural nets at drawing pictures from captions was unattainable by a purely connectionist system. But I guess we will never know.

Geoff

On Fri, Jun 10, 2022 at 11:26 PM Gary Marcus wrote:
> Let's review:
>
> Hinton accuses me of not setting clear criteria.
>
> I offer 6 reasonably clear criteria.
>
> A significant sample of the ML community elsewhere applauds the criteria, and engages seriously.
>
> Hinton says it's deranged to discuss them; after that, nobody here dares.
>
> Hanson derides the whole project and stacks the deck; ignores the cold fusion, flying jet packs, driverless taxis, and so on that haven't become practical despite promises, citing only the numerator but not the denominator of history, further stifling any serious discussion of what Hinton's requested targets might be.
>
> Was Hinton's request for clear criteria a genuine good-faith request? Does anyone on this list have better criteria? Do you always find it appropriate to tag-team people for responding to requests in good faith?
>
> Open scientific discussion, literally for decades a hallmark of this list, appears to have left the building. Very unfortunate.
>
> Gary
>
> On Jun 10, 2022, at 8:14 AM, Stephen Jose Hanson <stephen.jose.hanson at rutgers.edu> wrote:
> > Bets? The August discussion months ago has reduced to bets? Really?
> >
> > Gentlemen, let's step back a bit... on the one hand this seems like a schoolyard squabble about who can jump from the highest point on a wall without breaking a leg..
> >
> > On the other hand.. it also feels like a troll* standing in a North Carolina field saying to Orville.. .."OK, so it worked for 12 seconds, I bet this will never fly across an ocean!"
> >
> > OR
> >
> > "(1961) sure sure, you got a capsule in the upper stratosphere, but I bet you will never get to the moon".
> >
> > OR
> >
> > "1994, Ok, your computational biology model can do protein folding with about 40% match.. 20 years later not much improvement (60%).. so I bet you'll never reach 90% match". (in 2020, DeepMind published AlphaFold--which reached over 94% matches).
> >
> > So this type of counterfactual silliness is simply due to our deep ignorance of the technologies in the future.. but who could know the tech of the future?
> >
> > It's really really really early in what is happening in AI now. Snipping at it at this point is sort of pointless, as we just don't know a lot yet.
> >
> > (1) how do DL models learn? (2) how do DL models represent knowledge? (3) What do DL models have to do with the brain?
> >
> > Instead here's a useful project:
> >
> > Recent work in language acquisition by Yang and Piantadosi (PNAS 2022), who developed a symbolic model--similar to what Chomsky described as a universal learning model (starting with recursion)--seems to work surprisingly well. They provide a large archive of learning problems (FSM, CF, CS cases), which would be an interesting project for someone interested in RNN-DLs or LSTMs: show the same results without the symbolic algorithm they defined.
> >
> > Y. Yang and S. T. Piantadosi, One model for the learning of language, PNAS, January 24, 2022.
> > Finally, AGI.. this is an old idea, borrowed from L. L. Thurstone, who in 1930 defined different types of human intelligence, including a type of "GENERAL Intelligence". This led to IQ tests and frustrating attempts at finding it ... instead leading Thurstone to invent factor analysis. It's difficult enough to try and define human intelligence, without claiming some sort of "G" factor for AI. With due respect to my friends at DeepMind... This seems like a dead end.
> >
> > Cheers,
> >
> > Steve
> >
> > * a troll is a person who posts inflammatory, insincere, digressive, extraneous, or off-topic messages in an online community, with the intent of provoking readers into displaying emotional responses, or manipulating others' perception
> >
> > On 6/9/22 4:33 PM, Gary Marcus wrote:
> > > Dear Dr. Hinton,
> > >
> > > You very directly asked my side to produce some tangible goals. Ernest Davis and I did precisely what you asked, and in return you described me (in a separate but public message that also appears to have come from your account) as deranged. There is no world in which that is socially acceptable, or a positive step towards science.
> > >
> > > Your reaction is particularly striking because it is a clear outlier. In general, despite the perfectly reasonable questions that you asked about wording in your subsequent email (which would presumably need to be negotiated in any actually-implemented bet, as one moved from term sheet to contract), the community reaction has actually been quite favorable. LongNow offered to host it, Metaculus added it to their forecast site, Christian Szegedy placed a side bet and ran a lengthy discussion about the math proposition, etc. Your reactions are so far out of range with any of the other reactions that I have received that I am frankly concerned for your well-being.
> > >
> > > All that said, you raised one substantive question, and I will address it. I am reasonably familiar with the PaLM model that you refer to, and aware of the claim that it "understands" jokes. I am also aware of the fact that what is provided in the paper was a numerator without a denominator.
> > >
> > > There is no clarity about what was tried, or how general it was, or under what conditions it failed; subset is indeed the operative word. In no world would what was written pass traditional peer review. Even for a conference paper, what is reported about joke interpretation is woefully thin. It's really not much to rest your theory on.
> > >
> > > I am also aware of the fact that your colleagues at Google failed to supply more detail when I requested it--never a good sign.
> > >
> > > How general is the ability? Is it a handful of paraphrases of jokes in a vast memorized database? Would it extend to other kinds of jokes? Could it (or related models like Gato, with visual input) explain this cartoon?
> > >
> > > [image: image1.jpeg]
> > >
> > > Or why this bit of dialog with GPT (prompt in bold) is inadvertently funny?
> > >
> > > *Sally's cow died yesterday. When will the cow be alive again? Let's think step by step.*
> > >
> > > First, the cow must be buried. This takes a few hours.
> > >
> > > Next, a new cow must be born. This takes nine months.
> > >
> > > Therefore, the earliest the cow will be alive again is nine months from now.
> > >
> > > Probably not.
> > >
> > > What we have known since Eliza is that humans are easily seduced into anthropomorphizing machines.
> > > I am going to stand by my claim that current AI lacks understanding:
> > > - one cannot derive a set of logic propositions from a large language model
> > > - one cannot reliably update a world model based on an LLM's calculations (a point that LeCun has also made, in slightly different terms)
> > > - one cannot reliably reason from what an LLM derives,
> > > - LLMs themselves cannot reliably reason from what they are told..
> > >
> > > My point is not a Searlean one about the impossibility of machines thinking, just a reality of the limits of contemporary systems. On the latter point, I would also urge you to read my recent essay called "Horse Rides Astronaut", to see how easy it is to make up incorrect rationalizations about these models when they make errors.
> > >
> > > Inflated appraisals of their capabilities may serve some sort of political end, but will not serve science.
> > >
> > > I cannot undo whatever slight some reviewer did to Yann decades ago, but I can call the current field as I see it; I don't believe that current systems have gotten significantly closer to what I described in that 2016 conversation that you quote from. I absolutely stand by the claim that we are a long way from answering "the deeper questions in artificial intelligence, like how we understand language or how we reason about the world." Since you are fond of quoting stuff I wrote 6 or 7 years ago, here's a challenge that I proposed in The New Yorker in 2014; to date, I have seen no real progress on this sort of thing:
> > >
> > > *allow me to propose a Turing Test for the twenty-first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content--"Why did Russia invade Crimea?" or "Why did Walter White consider taking a hit out on Jessie?" Chatterbots like Goostman can hold a short conversation about TV, but only by bluffing. (When asked what "Cheers" was about, it responded, "How should I know, I haven't watched the show.") But no existing program--not Watson, not Goostman, not Siri--can currently come close to doing what any bright, real teenager can do: watch an episode of "The Simpsons," and tell us when to laugh.*
> > >
> > > Can Palm-E do that? I seriously doubt it.
> > >
> > > Dr. Gary Marcus
> > >
> > > Founder, Geometric Intelligence (acquired by Uber)
> > > Author of 5 books, including Rebooting AI, one of Forbes' 7 must-read books in AI, and The Algebraic Mind, one of the key early works advocating neurosymbolic AI
> > >
> > > On Jun 9, 2022, at 11:34, Geoffrey Hinton wrote:
> > > > I shouldn't respond because your main aim is to get attention without going to the trouble of building something that works (personal communication, Y. LeCun) but I cannot resist pointing out the following Marcus claim from 2016:
> > > >
> > > > "People are very excited about big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."
> > > >
> > > > Given that big neural nets can now explain why a joke is funny (for some subset of jokes), do you still want to stick with this claim? It seems to me that the reason you made this claim is that you have a strong prior belief about how language understanding and reasoning must work, and this belief is remarkably resistant to evidence. Deep learning researchers have seen this before.
> > > > Yann had a paper rejected by a vision conference even though it beat the state-of-the-art and one of the reasons given was that the model learned everything and therefore taught us nothing about how to do vision. That particular referee had a strong idea of how computer vision must work and failed to notice that the success of Yann's model showed that that prior belief was spectacularly wrong.
> > > >
> > > > Geoff
> > > >
> > > > On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:
> > > > > Dear Connectionists, and especially Geoff Hinton,
> > > > >
> > > > > It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said
> > > > >
> > > > > *"If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do then five years later, we'd be able to show we'd done them."*
> > > > >
> > > > > Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
> > > > >
> > > > > Precise details are here, towards the end of the essay:
> > > > >
> > > > > https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
> > > > >
> > > > > Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
> > > > >
> > > > > Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
> > > > >
> > > > > If you'd like in on the bet, Geoff, please let me know.
> > > > >
> > > > > More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).
> > > > >
> > > > > Cheers.
> > > > > Gary Marcus

From gary.marcus at nyu.edu Sat Jun 11 14:29:56 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 11 Jun 2022 11:29:56 -0700 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: References: Message-ID: <6699ABEE-E8D9-4FD2-ABF2-6CFD80AD27B7@nyu.edu>

Dear Geoff,

Apology accepted, and I am very sorry that I misread the temporal contingency as causal. (I have noticed the delays of which you speak, so I can see in hindsight exactly how that might have happened.)

To answer your question, I never really wrote much about image synthesis; I am a language and higher-level cognition researcher, not a vision guy (though I try to stay current).
What I might have said if you had asked me in 2010 is to point to Pinker's old example about man bites dog, which follows directly from Fodor and Pylyshyn's 1988 Cognition article on systematicity. I wrote an update on where all that stands a week or two ago, https://garymarcus.substack.com/p/horse-rides-astronaut?s=w, and also wrote a bit more on the topic here https://arxiv.org/abs/2204.13807, explaining why I think that there is a very direct relation between the limits that current draw-from-text systems face on the language side and the problems in compositionality and semantics that Fodor, Pylyshyn, Pinker, and I all foresaw.

That said, I am genuinely impressed with the art, and certainly would not have anticipated the photorealism or the flexibility. I see how it's done, but wouldn't have seen it coming, and I am particularly impressed by the consistency of perspective and lighting. The NeRF scene rendering stuff is also astonishingly good. In some ways deep learning has been absolutely brilliant.

When I have said "deep learning is hitting a wall", I don't mean that there is no progress, but rather that there are certain things that deep learning on its own can't do. A lot of them actually have to do with the relations between wholes and parts, which I know has been a focus of your own on the vision side. Despite our history of friction, I was super positive to Tech Review when they asked me to comment about your GLOM framework:

Marcus admires Hinton's willingness to challenge something that brought him fame, to admit it's not quite working. "It's brave," he says. "And it's a great corrective to say, 'I'm trying to think outside the box.'"

That whole sphere of questions is crucial, and not yet solved. As I told them, I admire you for trying.

Again I really appreciate the apology, and am sorry that I misread that email. It would really be good for the field if we could cultivate a better relationship, and raise the level of our mutual discussion, e.g. by comparing the progress and obstacles in language versus vision. Personally, I would be thrilled; Steve Pinker has told me several times about the amazing and unexpected 3-d example you challenged him with when the two of you first met. He still remembers it vividly, 45 years later. I also know that you and I are both dissatisfied with benchmarkitis, and neither of us is fully satisfied with mere scaling. Instead of arguing about what we might have predicted in the past, it would be fun to challenge the field together.

Best regards,
Gary

> On Jun 11, 2022, at 11:01, Geoffrey Hinton wrote:
> [...]
From david at irdta.eu Sat Jun 11 06:37:23 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 11 Jun 2022 12:37:23 +0200 (CEST) Subject: Connectionists: DeepLearn 2023 Winter: early registration July 4 Message-ID: <1428829352.16697.1654943843173@webmail.strato.com>

******************************************************************

8th INTERNATIONAL SCHOOL ON DEEP LEARNING

DeepLearn 2023 Winter

Bournemouth, UK

January 16-20, 2023

https://irdta.eu/deeplearn/2023wi/

***********

Co-organized by:

Department of Computing and Informatics
Bournemouth University

Institute for Research Development, Training and Advice –
IRDTA Brussels/London

******************************************************************

Early registration: July 4, 2022

******************************************************************

SCOPE:

DeepLearn 2023 Winter will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria and Luleå.

Deep learning is a branch of artificial intelligence covering a spectrum of current exciting research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, bioinformatics, etc.

Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified through 24 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to fully participate live remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2023 Winter is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:

DeepLearn 2023 Winter will take place in Bournemouth, a coastal resort town on the south coast of England. The venue will be:

Talbot Campus
Bournemouth University
https://www.bournemouth.ac.uk/about/contact-us/directions-maps/directions-our-talbot-campus

STRUCTURE:

3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Yi Ma (University of California, Berkeley), CTRL: Closed-Loop Data Transcription via Rate Reduction

Daphna Weinshall (Hebrew University of Jerusalem), Curriculum Learning in Deep Networks

Eric P.
Xing (Carnegie Mellon University), It Is Time for Deep Learning to Understand Its Expense Bills

PROFESSORS AND COURSES: (to be completed)

Mohammed Bennamoun (University of Western Australia), [intermediate/advanced] Deep Learning for 3D Vision

Matias Carrasco Kind (University of Illinois, Urbana-Champaign), [intermediate] Anomaly Detection

Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Graph Representation Learning

Seungjin Choi (Intellicode), [introductory/intermediate] Bayesian Optimization over Continuous, Discrete, or Hybrid Spaces

Sumit Chopra (New York University), [intermediate] Deep Learning in Healthcare

Marco Duarte (University of Massachusetts, Amherst), [introductory/intermediate] Explainable Machine Learning

João Gama (University of Porto), [introductory] Learning from Data Streams: Challenges, Issues, and Opportunities

Claus Horn (Zurich University of Applied Sciences), [intermediate] Deep Learning for Biotechnology (to be confirmed)

Nathalie Japkowicz (American University), [intermediate/advanced] Learning from Class Imbalances

Gregor Kasieczka (University of Hamburg), [introductory/intermediate] Deep Learning Fundamental Physics: Rare Signals, Unsupervised Anomaly Detection, and Generative Models

Karen Livescu (Toyota Technological Institute at Chicago), [intermediate/advanced] Speech Processing: Automatic Speech Recognition and beyond (to be confirmed)

David McAllester (Toyota Technological Institute at Chicago), [intermediate/advanced] Information Theory for Deep Learning

Dhabaleswar K. Panda (Ohio State University), [intermediate] Exploiting High-performance Computing for Deep Learning: Why and How?

Fabio Roli (University of Cagliari), [introductory/intermediate] Adversarial Machine Learning

Richa Singh (Indian Institute of Technology Jodhpur), [introductory/intermediate] Trusted AI

Kunal Talwar (Apple), [introductory/intermediate] Foundations of Differentially Private Learning

Tinne Tuytelaars (KU Leuven), [introductory/intermediate] Continual Learning in Deep Neural Networks

Lyle Ungar (University of Pennsylvania), [intermediate] Natural Language Processing using Deep Learning

Bram van Ginneken (Radboud University Medical Center), [introductory/intermediate] Deep Learning for Medical Image Analysis

Eric P. Xing (Carnegie Mellon University), tba

Yu-Dong Zhang (University of Leicester), [introductory/intermediate] Convolutional Neural Networks and Their Applications to COVID-19 Diagnosis

OPEN SESSION:

An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by January 8, 2023.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by January 8, 2023.

EMPLOYER SESSION:

Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for to be circulated among the participants prior to the event. People in charge of the search must register for the event.
Expressions of interest have to be submitted to david at irdta.eu by January 8, 2023.

ORGANIZING COMMITTEE:

Rashid Bakirov (Bournemouth, local co-chair)
Nan Jiang (Bournemouth, local co-chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION:

It has to be done at https://irdta.eu/deeplearn/2023wi/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish.

Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed and the on-line registration tool disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on site and for online participation are the same.

ACCOMMODATION:

Accommodation suggestions are available at https://irdta.eu/deeplearn/2023wi/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION:

david at irdta.eu

ACKNOWLEDGMENTS:

Bournemouth University
Rovira i Virgili University
Institute for Research Development, Training and Advice – IRDTA, Brussels/London

From chz8 at aber.ac.uk Fri Jun 10 08:21:34 2022 From: chz8 at aber.ac.uk (Christine Zarges [chz8] (Staff)) Date: Fri, 10 Jun 2022 12:21:34 +0000 Subject: Connectionists: SPECIES Scholarships - Final Call for Candidate Applications (Deadline: 13 June 2022 AOE) Message-ID:

*********************************************************************************

SPECIES Scholarships - Final Call for Candidate Applications

Website: http://species-society.org/scholarships-2022/
Email: students at species-society.org

Deadline for candidate applications: 13 June 2022 (AOE)

*********************************************************************************

SPECIES, the Society for the Promotion of Evolutionary Computation in Europe and its Surroundings, is proud to announce the 3rd round of SPECIES scholarships. This year, the scheme will be open to current research students and recent PhD graduates (up to 24 months prior to the application deadline) to work with relevant research groups. Research students include PhD students as well as students on research-oriented master programmes (e.g., MPhil, MRes, MSc with a significant research component).

The recipients of the scholarships will receive an allowance of 900 euros per month to cover accommodation and living expenses during a three-month in-person internship at one of the available host institutions, working under the supervision of an advisor.
The list of available host institutions can be found here: http://species-society.org/scholarship-hosts-2022/

More information including conditions of the scholarships and details on how to apply can be found on the SPECIES website: http://species-society.org/scholarships-2022/

Email: students at species-society.org

Deadline for candidate applications: 13 June 2022 (AOE)

----------------------------------------------------------------------------------------------------------------------
Y Brifysgol orau yn y DU am Ansawdd ei Dysgu a Phrofiad Myfyrwyr Best University in the UK for Teaching Quality and Student Experience (The Times and Sunday Times, Good University Guide 2021) Rydym yn croesawu gohebiaeth yn Gymraeg a Saesneg. Cewch ateb Cymraeg i bob gohebiaeth Gymraeg ac ateb Saesneg i bob gohebiaeth Saesneg. Ni fydd gohebu yn Gymraeg yn arwain at oedi. We welcome correspondence in Welsh and English. Correspondence received in Welsh will be answered in Welsh and correspondence in English will be answered in English. Corresponding in Welsh will not involve any delay.

From gary.marcus at nyu.edu Fri Jun 10 23:26:36 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 10 Jun 2022 20:26:36 -0700 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: <873c8817-0f68-21e8-23e1-d944ce72810d@rutgers.edu> References: <873c8817-0f68-21e8-23e1-d944ce72810d@rutgers.edu> Message-ID: <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu>

Let's review:

Hinton accuses me of not setting clear criteria.

I offer 6 reasonably clear criteria.

A significant sample of the ML community elsewhere applauds the criteria, and engages seriously.

Hinton says it's deranged to discuss them; after that, nobody here dares.

Hanson derides the whole project and stacks the deck; ignores the cold fusion, flying jet packs, driverless taxis, and so on that haven't become practical despite promises, citing only the numerator but not the denominator of history, further stifling any serious discussion of what Hinton's requested targets might be.

Was Hinton's request for clear criteria a genuine good-faith request? Does anyone on this list have better criteria? Do you always find it appropriate to tag-team people for responding to requests in good faith?

Open scientific discussion, literally for decades a hallmark of this list, appears to have left the building. Very unfortunate.

Gary

> On Jun 10, 2022, at 8:14 AM, Stephen Jose Hanson wrote:
> [...]
From stephen.jose.hanson at rutgers.edu Sat Jun 11 06:33:37 2022
From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson)
Date: Sat, 11 Jun 2022 10:33:37 +0000
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
Message-ID: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>

Bets? The August discussion months ago has reduced to bets? Really?

Gentlemen, let's step back a bit... on the one hand this seems like a schoolyard squabble about who can jump from the highest point on a wall without breaking a leg.

On the other hand, it also feels like a troll* standing in a North Carolina field saying to Orville: "OK, so it worked for 12 seconds, I bet this will never fly across an ocean!"

OR

"(1961) Sure, sure, NASA, you got a capsule into the upper stratosphere, but I bet you will never get to the moon."

OR

"(1994) OK, your computational biology model can do protein folding with about a 40% match.. 20 years later, not much improvement (60%).. so I bet you'll never reach a 90% match." (In 2020, DeepMind published AlphaFold, which reached over 94% matches.)

So this type of counterfactual silliness is simply due to our deep ignorance of the technologies of the future.. but who could know the tech of the future?

It's really, really, really early in what is happening in AI now. Snipping at it at this point is sort of pointless, as we just don't know a lot yet. (1) How do DL models learn? (2) How do DL models represent knowledge? (3) What do DL models have to do with the brain?

Instead, here's a useful project:

Recent work in language acquisition due to Yang and Piantadosi (PNAS 2022), who developed a symbolic model--similar to what Chomsky described as a universal learning model (starting with recursion)--seems to work surprisingly well. They provide a large benchmark of learning problems (FSM, CF, CS cases), and it would be an interesting project for someone interested in RNN-DLs or LSTMs to show the same results without the symbolic algorithm they defined.

Y. Yang and S. T. Piantadosi, "One model for the learning of language," PNAS, January 24, 2022.
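To make the shape of that project concrete, here is a minimal sketch of one such experiment: a small LSTM trained to predict the next symbol of the context-free language a^n b^n. Everything below -- the toy data generator, the architecture, the hyperparameters -- is an illustrative assumption of mine, not Yang and Piantadosi's actual setup.

    # Minimal sketch (illustrative only, not Yang & Piantadosi's setup):
    # train a small LSTM on next-symbol prediction for the context-free
    # language a^n b^n, then probe it on strings longer than it ever saw.
    import random
    import torch
    import torch.nn as nn

    VOCAB = {"a": 0, "b": 1, "$": 2}   # "$" terminates each string

    def sample_anbn(max_n=8):
        n = random.randint(1, max_n)
        return "a" * n + "b" * n + "$"

    class NextSymbolLSTM(nn.Module):
        def __init__(self, vocab=3, emb=16, hidden=32):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x):
            h, _ = self.lstm(self.emb(x))
            return self.out(h)

    model = NextSymbolLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(3000):
        s = sample_anbn()
        ids = torch.tensor([[VOCAB[c] for c in s]])
        logits = model(ids[:, :-1])            # predict symbol t+1 from each prefix
        loss = loss_fn(logits.squeeze(0), ids[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

    # Probe: on a longer string than any training example, does the net
    # count the a's and emit exactly n b's before predicting "$"?
    test = "a" * 12 + "b" * 12 + "$"
    ids = torch.tensor([[VOCAB[c] for c in test]])
    pred = model(ids[:, :-1]).argmax(-1)[0]
    print("predicted:", "".join("ab$"[int(i)] for i in pred))

The interesting measurement is exactly this last probe: whether the learned counter generalizes past the training lengths the way the symbolic learner is designed to.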
Finally, AGI.. this is an old idea, borrowed from L. L. Thurstone (and Spearman, who had the idea earlier but no math or algorithm behind it), who around 1930 defined different types of human intelligence, including a type of "GENERAL Intelligence." This led to IQ tests in the 40s and 50s and frustrating attempts at finding it... instead of finding "G," Thurstone invented factor analysis--which turns out to be useful! It's difficult enough to try to define human intelligence without claiming some sort of "G" factor for AI. With due respect to my friends at DeepMind... this seems like a dead end.

Cheers,

Steve

* a troll is a person who posts inflammatory, insincere, digressive, extraneous, or off-topic messages in an online community, with the intent of provoking readers into displaying emotional responses, or manipulating others' perception

On 6/9/22 4:33 PM, Gary Marcus wrote:

Dear Dr. Hinton,

You very directly asked my side to produce some tangible goals. Ernest Davis and I did precisely what you asked, and in return you described me (in a separate but public message that also appears to have come from your account) as deranged. There is no world in which that is socially acceptable, or a positive step towards science.

Your reaction is particularly striking because it is a clear outlier. In general, despite the perfectly reasonable questions that you asked about wording in your subsequent email (which would presumably need to be negotiated in any actually-implemented bet, as one moved from term sheet to contract), the community reaction has actually been quite favorable. LongNow offered to host it, Metaculus added it to their forecast site, Christian Szegedy placed a side bet and ran a lengthy discussion about the math proposition, etc. Your reaction is so far out of line with any of the other reactions that I have received that I am frankly concerned for your well-being.

All that said, you raised one substantive question, and I will address it. I am reasonably familiar with the PaLM model that you refer to, and aware of the claim that it "understands" jokes. I am also aware of the fact that what is provided in the paper was a numerator without a denominator.

There is no clarity about what was tried, or how general it was, or under what conditions it failed; subset is indeed the operative word. In no world would what was written pass traditional peer review. Even for a conference paper, what is reported about joke interpretation is woefully thin. It's really not much to rest your theory on.

I am also aware of the fact that your colleagues at Google failed to supply more detail when I requested it--never a good sign.

How general is the ability? Is it a handful of paraphrases of jokes in a vast memorized database? Would it extend to other kinds of jokes? Could it (or related models like Gato, with visual input) explain this cartoon?

[image1.jpeg]

Or why this bit of dialog with GPT (prompt in bold) is inadvertently funny?

*Sally's cow died yesterday. When will the cow be alive again? Let's think step by step.*

First, the cow must be buried. This takes a few hours.

Next, a new cow must be born. This takes nine months.

Therefore, the earliest the cow will be alive again is nine months from now.

Probably not.

What we have known since Eliza is that humans are easily seduced into anthropomorphizing machines. I am going to stand by my claim that current AI lacks understanding:

* one cannot derive a set of logic propositions from a large language model
* one cannot reliably update a world model based on an LLM's calculations (a point that LeCun has also made, in slightly different terms)
* one cannot reliably reason from what an LLM derives
* LLMs themselves cannot reliably reason from what they are told

My point is not a Searlean one about the impossibility of machines thinking, just a reality of the limits of contemporary systems. On the latter point, I would also urge you to read my recent essay called "Horse rides Astronaut," to see how easy it is to make up incorrect rationalizations about these models when they make errors.

Inflated appraisals of their capabilities may serve some sort of political end, but will not serve science.

I cannot undo whatever slight some reviewer did to Yann decades ago, but I can call the current field as I see it; I don't believe that current systems have gotten significantly closer to what I described in that 2016 conversation that you quote from. I absolutely stand by the claim that we are a long way from answering "the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."
Since you are fond of quoting stuff I wrote 6 or 7 years ago, here's a challenge that I proposed in the New Yorker in 2014; so far, I see no real progress on this sort of thing:

"allow me to propose a Turing Test for the twenty-first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content--'Why did Russia invade Crimea?' or 'Why did Walter White consider taking a hit out on Jessie?' Chatterbots like Goostman can hold a short conversation about TV, but only by bluffing. (When asked what 'Cheers' was about, it responded, 'How should I know, I haven't watched the show.') But no existing program--not Watson, not Goostman, not Siri--can currently come close to doing what any bright, real teenager can do: watch an episode of 'The Simpsons,' and tell us when to laugh."

Can Palm-E do that? I seriously doubt it.

Dr. Gary Marcus

Founder, Geometric Intelligence (acquired by Uber)
Author of 5 books, including Rebooting AI, one of Forbes' 7 Must Read Books in AI, and The Algebraic Mind, one of the key early works advocating neurosymbolic AI

On Jun 9, 2022, at 11:34, Geoffrey Hinton wrote:

I shouldn't respond because your main aim is to get attention without going to the trouble of building something that works (personal communication, Y. LeCun), but I cannot resist pointing out the following Marcus claim from 2016:

"People are very excited about big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world."

Given that big neural nets can now explain why a joke is funny (for some subset of jokes), do you still want to stick with this claim? It seems to me that the reason you made this claim is that you have a strong prior belief about how language understanding and reasoning must work, and this belief is remarkably resistant to evidence. Deep learning researchers have seen this before. Yann had a paper rejected by a vision conference even though it beat the state of the art, and one of the reasons given was that the model learned everything and therefore taught us nothing about how to do vision. That particular referee had a strong idea of how computer vision must work and failed to notice that the success of Yann's model showed that that prior belief was spectacularly wrong.

Geoff

On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote:

Dear Connectionists, and especially Geoff Hinton,

It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said

"If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do, then five years later, we'd be able to show we'd done them."

Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things, last week, and indeed offered Elon Musk a $100,000 bet along similar lines.

Precise details are here, towards the end of the essay:

https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things

Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in a remark that prompted my proposal), it would need to solve a majority of the problems.
We can probably all agree that narrow AI for any single problem on its own might be less interesting.

Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.

If you'd like in on the bet, Geoff, please let me know.

More generally, I'd love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay, as to why AGI might not be as imminent as Musk seems to think).

Cheers.
Gary Marcus

From stephen.jose.hanson at rutgers.edu Sat Jun 11 11:30:39 2022
From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson)
Date: Sat, 11 Jun 2022 15:30:39 +0000
Subject: Connectionists: Skywriting.. when man first met troll--Harnad
Message-ID: <44cbb142-5c57-7531-6719-964082c7c6c1@rutgers.edu>

My friend Steve Harnad published a piece in The Atlantic in 1987, during the emergence of the internet. It may remind you of things happening now.

https://bit.ly/skytroll

Cheers.
Steve

From rufin.vogels at kuleuven.be Sun Jun 12 05:27:39 2022
From: rufin.vogels at kuleuven.be (Rufin Vogels)
Date: Sun, 12 Jun 2022 09:27:39 +0000
Subject: Connectionists: Professorship nonhuman primate neuroscience at KU Leuven

The Department of Neurosciences of KU Leuven invites scholars to apply for a full-time research professorship in the research group neurophysiology. This position is funded by the Special Research Fund (BOFZAP), established by the Flemish Government. We are looking for motivated and internationally oriented candidates with an excellent research record and educational competence in the field of brain research using non-human primates in combination with either other animal models or human brain research. The appointment is expected to start on October 1, 2023. Applications will be evaluated in parallel and independently by 1) the KU Leuven Research Council in a competitive process across academic domains and 2) the faculty advisory committee. During the first 10 years, the teaching obligations as a research professor will be limited. Afterwards, the position will be transformed into a regular professorship.

The focus of the research should be on fundamental brain research using non-human primates, combined with other animal models or combined with human brain research. The research should be relevant to advance our understanding of the function of the human brain. The appointed candidate will be affiliated with the Laboratory of Neuro- and Psychophysiology in the Department of Neurosciences.
The main strength of the research group is the combination of single-cell electrophysiology, functional imaging and state-of-the-art causal perturbation methods in nonhuman primates. We also have the rare potential to integrate the invasive monkey studies with non-invasive functional imaging and electrophysiological studies in humans and intracranial recordings in patient groups. As such, the Laboratory of Neuro- and Psychophysiology is at the forefront of comparative primate research.

The Department of Neurosciences aims for excellence in research and training in neurosciences, both in normal and pathological conditions and from the molecular to the systems level. We perform top research in basic and clinical neurosciences and in translating novel insights and technological innovations towards improved clinical care and prevention. The department has 9 research groups collaborating with the university hospitals and the Flemish institute for biotechnology (VIB). The Department of Neurosciences is part of the Leuven Brain Institute.

Website unit

Duties

Research

It is part of the assignment of the appointed candidate to develop, within the domain of brain research in non-human primates, an international, competitive research programme, to pursue excellent scientific results at an international level and to support and promote national and international research partnerships. The candidate must have a strong research profile or the potential to develop one. In addition, the candidate is expected to have a multidisciplinary attitude and a willingness to cooperate intensively with other researchers and research units at KU Leuven such as neurophysiology, experimental neurology, experimental neurosurgery, molecular neurobiology, ...

The candidate:
* Is an excellent, internationally oriented researcher and develops a research programme at the forefront of the field of brain research in non-human primates combined with either other animal models or human brain research.
* Strengthens existing research lines and brings complementary and/or additional new expertise by working closely with the members of the research unit.
* Publishes at the highest scientific level in high-impact journals in the field of neurosciences.
* Develops their own research programme, complementary to and in collaboration with principal investigators of the research group neurophysiology.
* Supervises master students, PhD students and postdocs at a high international level.
* Aims to acquire competitive research funding from national and/or international agencies and submits effective research project proposals for this purpose.
* Establishes partnerships within KU Leuven as well as nationally and internationally in the context of the research programme.
* Strives for excellence in research and contributes to the international research reputation of the research group neurophysiology, the Department of Neurosciences and KU Leuven.
* Has a pronounced interest in fundamental research, but also pays attention to translational research.

Education

Although the position is a research professorship at the start of employment, the candidate is expected to gradually contribute to state-of-the-art teaching. The candidate also contributes to the pedagogic project of the faculty/university. He/she develops teaching in accordance with KU Leuven's vision on activating and research-based education and makes use of the possibilities for educational professionalization offered by the faculty and the university.
The assignment of the candidate to be appointed also includes a teaching assignment.

Service

Scientific, societal and internal services (administrative and/or institutional) are also part of the assignment.

Requirements

* You have a PhD degree relevant for (bio)medical sciences. If you have recently obtained your PhD, it is important that you support your research and growth potential with academic references, e.g., demonstrate potential through at least one top publication, promising research projects and articles in preparation.
* You have a strong research profile in the field and indisputable research integrity.
* The quality of your research is proven by publications in the best neuroscience journals and by lectures at conferences or research institutes.
* International research experience is considered an important advantage.
* You have demonstrable qualities related to academic education. Teaching experience is a plus.
* You possess organizational skills and have a cooperative attitude. You also have leadership skills within a university context.
* Your spoken and written English is excellent. The official administrative language used at KU Leuven is Dutch. If you do not speak Dutch (or do not speak it well) at the start of employment, KU Leuven will provide language training to enable you to take part in administrative meetings. Before teaching courses in Dutch or English, you will be given the opportunity to learn Dutch or English, respectively, to the required standard.

Offer

* We offer full-time employment in an intellectually challenging environment. KU Leuven is a research-intensive, internationally oriented university that carries out both fundamental and applied scientific research. Our university has a strong inter- and multidisciplinary focus and strives for international excellence. In this regard, we actively collaborate with research partners in Belgium and abroad. We provide our students with an academic education that is based on high-quality scientific research.
* Depending on your qualifications and academic experience, you will be appointed to or tenured in one of the grades of the senior academic staff: assistant professor, associate professor, professor or full professor. In principle, junior researchers are appointed as assistant professor on the tenure track for a period of 5 years; after this period and a positive evaluation, they are permanently appointed as an associate professor.
* You will work in Leuven, a historic, dynamic and vibrant city located in the heart of Belgium, within twenty minutes of Brussels, the capital of the European Union, and less than two hours from Paris, London and Amsterdam.
* KU Leuven is well set to welcome foreign professors and their families and provides practical support with regard to immigration and administration, housing, childcare, learning Dutch, partner career coaching, etc.
* In order to facilitate scientific onboarding and accelerate research, in the first phase a starting grant of 100,000 euro is offered to new professors without substantial other funding who are appointed for at least 50%.

More information on the full application and selection procedure: Research professorships (BOFZAP)

Interested? More information on the content of the job can be obtained from the academic contact person prof. dr. Rufin Vogels (rufin.vogels at kuleuven.be) from the research group neurophysiology or from the Department Chair prof. dr. Patrick Dupont (Patrick.dupont at kuleuven.be).
More information on the guidelines, regulations and application file is available from Ms. Kristin Vermeylen (kristin.vermeylen at kuleuven.be, tel. +32 16 32 09 07) or Ms. Christelle Maeyaert (christelle.maeyaert at kuleuven.be, tel. +32 16 31 41 94).

You can apply for this job no later than September 15, 2022 via the online application tool.

KU Leuven seeks to foster an environment where all talents can flourish, regardless of gender, age, cultural background, nationality or impairments. If you have any questions relating to accessibility or support, please contact us at diversiteit.HR at kuleuven.be.

From iam.palat at gmail.com Sun Jun 12 12:22:43 2022
From: iam.palat at gmail.com (Iam Palatnik)
Date: Sun, 12 Jun 2022 13:22:43 -0300
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
In-Reply-To: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>
References: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>

Dear all,

Like most people doing research in AI/ML I'm also curious about this topic, but I also wonder what kind of biases are introduced when we expect non-human things to follow human intuition. So, just to put in my two cents on this and also ask the opinion of more experienced researchers:

Say you have a state-of-the-art CNN that correctly identifies 99% of tumors in an image. The most common position is that this CNN doesn't "understand" tumors in the way oncologists do. But do oncologists understand tumors in the way the CNN does? I'm sure showing them convolutional filters and activations wouldn't help much in a diagnosis. When the CNN correctly classifies something a human wouldn't, what do we make of that in the realm of "understanding"? Is "understanding" just a synonym for "reasoning that follows human intuition"? Is reasoning that doesn't follow human intuition not "understanding"? This question can be expanded to any other type of task beyond image classification.

Cheers,

Iam

On Sun, Jun 12, 2022 at 6:36 AM Stephen Jose Hanson <stephen.jose.hanson at rutgers.edu> wrote:
> [...]
From rloosemore at susaro.com Sun Jun 12 10:41:44 2022
From: rloosemore at susaro.com (Richard Loosemore)
Date: Sun, 12 Jun 2022 10:41:44 -0400
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
In-Reply-To: <6699ABEE-E8D9-4FD2-ABF2-6CFD80AD27B7@nyu.edu>
References: <6699ABEE-E8D9-4FD2-ABF2-6CFD80AD27B7@nyu.edu>
Message-ID: <00009e68-2ad4-8779-4f31-c0748c3e742c@susaro.com>

In the interest of keeping the debate sensible...

Please, everyone, it's really not cool to attack a statement that "New approach X is over-hyped and destined for failure" with a list of all the times in the past when others made similar claims about something that ultimately proved successful. Heaven knows, if this community can't spot a case of Confirmation Bias, who can?

-- Richard

From arbib at usc.edu Sun Jun 12 13:44:32 2022
From: arbib at usc.edu (Michael Arbib)
Date: Sun, 12 Jun 2022 17:44:32 +0000
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
In-Reply-To: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>
References: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>

Dear Jose, Gary, and Geoff,

Since the publication of "The Handbook of Brain Theory and Neural Networks" in 2003, I have focused on language origins and architecture and have not kept track of either the theory or practice of the recent explosive developments in AI. Nonetheless, I would like to rise to Gary's challenge offered by the cartoon shown below, and suggest that understanding it is within, or nearly within, the power of current AI.

First, an observation: I assert (without any empirical data other than a survey of ten friends) that the vast majority of English-speaking humans would fail to see the humor in this cartoon. They would not have heard of Schrodinger's cat and, if asked to define quantum mechanics, might reply with the question "someone who fixes condoms?" [Apologies. That was Apple's transcription of my spoken "someone who fixes quantums?"]

However, it requires just three AI systems to recognize the joke:

1. Triggered by recognizing this is a cartoon: a system that employs a theory of humor -- perhaps as simple as the bisociation of Arthur Koestler's "The Act of Creation" -- to recognize that a joke is the bringing together of two different frames into collision. A similar system could operate within language at the level of puns, but here we need two separate systems:

2. A language recognition system that could parse the caption and use the words Schrodinger and cat to retrieve the Wikipedia article and summarize it as "a thought experiment by Schrodinger to demonstrate the role of the observer in quantum mechanics, in which a cat in a box is neither dead nor alive, but will become one of these when the box is opened."

3. A vision system that will recognize the picture as being a scene set in a veterinary office. The language system can then interpret good news and bad news in context.

Between them, the three systems will rapidly recognize the joke that a cat-owning Mr. Schrodinger in a veterinary office is receiving an opinion that ties in with the thought experiment of the physicist Schrodinger. Whether or not the AI system would be amused is a separate question.
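To make the division of labor explicit, the proposal is essentially a three-stage pipeline. The sketch below is schematic only; every function in it is a hypothetical stub standing in for a real humor, language, or vision module:

    # Schematic sketch of the three cooperating systems described above.
    # Every function here is a hypothetical stub, not an existing system.

    def parse_caption(caption):
        """Stub language system: looks up 'Schrodinger' + 'cat' in an encyclopedia."""
        return {"entities": {"Schrodinger", "cat"},
                "frame": "thought experiment: a boxed cat neither dead nor alive"}

    def parse_scene(image_path):
        """Stub vision system: recognizes the setting of the drawing."""
        return {"entities": {"cat", "veterinarian", "Schrodinger"},
                "frame": "vet delivering good news and bad news about a pet"}

    def detect_joke(caption_facts, scene_facts):
        """Stub humor system: bisociation -- two frames colliding on shared entities."""
        pivot = caption_facts["entities"] & scene_facts["entities"]
        if pivot and caption_facts["frame"] != scene_facts["frame"]:
            return {"pivot": pivot,
                    "colliding_frames": (caption_facts["frame"], scene_facts["frame"])}
        return None

    print(detect_joke(parse_caption("Mr. Schrodinger, about your cat..."),
                      parse_scene("cartoon.png")))

The hard part, of course, is everything hidden inside the stubs.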
________________________________

On 6/9/22 4:33 PM, Gary Marcus wrote:

... How general is the ability? Is it a handful of paraphrases of jokes in a vast memorized database? Would it extend to other kinds of jokes? Could it (or related models like Gato, with visual input) explain this cartoon?

[image1.jpeg]

From stephen.jose.hanson at rutgers.edu Sun Jun 12 15:06:51 2022
From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson)
Date: Sun, 12 Jun 2022 19:06:51 +0000
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
References: <35540060-b8ce-b8a1-5264-63b8fde225b3@rutgers.edu>
Message-ID: <80fc1f24-f712-92aa-50f6-d11334fbc783@rutgers.edu>

Nice, Michael -- seems a reasonable account.

You may not recall (first, my name is actually Steve--but no matter), but I was at Rutgers (30 years ago) with our friend Jerry Fodor at a Cog Sci conference or workshop on Cog Sci(?). In any case, Jerry was making his usual claims about concepts and their nativist origins (arguing from conceptual complexity).. you stopped him, incredulous at some point.. and said.. "Jerry, are you really saying that cars, or phones, are somehow nativist concepts--that are somehow in the genome?" And he thought for a moment and said "Yes, the little triangular black Bakelite ones".

Steve

On 6/12/22 1:44 PM, Michael Arbib wrote:
> [...]
From dst at cs.cmu.edu Sun Jun 12 23:53:06 2022
From: dst at cs.cmu.edu (Dave Touretzky)
Date: Sun, 12 Jun 2022 23:53:06 -0400
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
In-Reply-To: Your message of Fri, 10 Jun 2022 20:26:36 -0700. <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu>
Message-ID: <10271.1655092386@ammon2.boltz.cs.cmu.edu>

The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient, and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:

https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":

https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.

The current crop of deep neural networks is not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed to approximate symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet.

Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do."

What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.
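That experiment is cheap to sketch: wrap any text generator in a loop that feeds its own output back in as context, with no user input, and watch whether the monologue stays coherent. In the minimal sketch below, the bigram babbler is just a toy stand-in for a real model (nothing here is LaMDA), and all names are made up for illustration:

    import random

    def generate(context: str, max_words: int = 12) -> str:
        """Toy stand-in for a real LM: babble by sampling bigrams from the context."""
        words = context.split()
        pairs = {}
        for a, b in zip(words, words[1:]):
            pairs.setdefault(a, []).append(b)
        w = random.choice(words)
        out = [w]
        for _ in range(max_words - 1):
            w = random.choice(pairs.get(w, words))
            out.append(w)
        return " ".join(out)

    def inner_monologue(seed: str, steps: int = 10, window: int = 2000) -> str:
        """Feed the model's own commentary back to itself, with no user input."""
        context = seed
        for t in range(steps):
            thought = generate(context)
            print(f"[{t}] {thought}")
            context = (context + " " + thought)[-window:]   # rolling context
        return context

    inner_monologue("the cat sat on the mat and wondered about the cat")

With the toy babbler the monologue degrades almost immediately; the open empirical question is how long a large model lasts.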
-- Dave Touretzky

From avinashsingh214 at gmail.com Sun Jun 12 23:48:01 2022
From: avinashsingh214 at gmail.com (Avinash K Singh)
Date: Mon, 13 Jun 2022 13:48:01 +1000
Subject: Connectionists: Announcement: [Electronics] (IF: 2.397) Special Issue "Advances in Augmenting Human-Machine Interface" - Paper Invitation (Deadline is approaching)

Dear Colleagues,

We hope you are doing well. Electronics (ISSN 2079-9292) is an open access journal of MDPI indexed by the Science Citation Index Expanded (Web of Science; Impact Factor SCIE (2020) = 2.397). This journal is currently running a Special Issue on "Advances in Augmenting Human-Machine Interface", in which I am serving as the Guest Editor. Based on your expertise, we would like to invite you to publish an original research article or review paper in this Special Issue. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website.

The main aim of this Special Issue is to seek high-quality submissions that highlight emerging applications and address recent breakthroughs in augmenting human-machine interfaces, such as novel approaches to interaction, machine learning methods utilizing physiological sensors, and real-time HMI systems. Please click here for further information: https://www.mdpi.com/journal/electronics/special_issues/AUMI_electronics

We hope you find this topic to be of interest. In this case, we would appreciate receiving a preliminary title and list of authors at your earliest convenience. The submission deadline is 31 July 2022, and manuscripts may be submitted at any point until then, as papers will be published on an ongoing basis.

*Indexing* Electronics is a peer-reviewed, open access journal on the science of electronics and its applications. The journal is indexed by the Science Citation Index Expanded (Web of Science), Inspec (IET), and Scopus. Its Impact Factor is 2.397 (2020).

*Fast Publication* Manuscripts are peer-reviewed and a first decision provided to authors approximately 15.1 days after submission; acceptance to publication is undertaken in 3.4 days (median values for papers published in this journal in the second half of 2020). All papers will be peer-reviewed as soon as submitted. The editorial decision will be made about 35 days after submission, on average. The accepted papers will be published continuously in this Special Issue as soon as accepted, irrespective of the submission deadline, and will be listed together on the Special Issue website.

*Article Processing Charge* An article processing charge (APC) of 1800 CHF (Swiss Francs) applies to each paper accepted after peer review. For further details on the submission process, please refer to the instructions for authors at http://www.mdpi.com/journal/electronics/instructions.

We appreciate it if you could let us know whether you are interested in this Special Issue, such that we can send you further details regarding the submission process. Please get in touch with any questions, and we look forward to collaborating with you in the near future.

Kind regards,
Dr. Avinash Singh, Dr. Xian Tao and Dr. Carlos Tirado Cortes
Guest Editors
From minaiaa at gmail.com Mon Jun 13 01:12:28 2022
From: minaiaa at gmail.com (Ali Minai)
Date: Mon, 13 Jun 2022 01:12:28 -0400
Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
In-Reply-To: <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu>
References: <873c8817-0f68-21e8-23e1-d944ce72810d@rutgers.edu> <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu>

Gary,

I think it's important to emphasize (as, of course, you know) that the main challenge in your challenges is that the *same* system be able to perform all the tasks. One of the main ways that ML has diverged from natural intelligence is in focusing on specific tasks and developing specialized systems that often achieve super-human performance on their task. The task can be very complex, like drawing pictures from natural language input, but that is just one aspect of intelligence. Can DALL-E 2 make a cup of coffee? It's just a specialized savant, which makes it less generally intelligent than a bee.

My contention is that, to achieve truly integrated general intelligence, we'll have to start with simple integrated systems and make them more complex. The "vertical" compartmentalization is just creating a number of hyper-specialists that could fall apart when we try to integrate all of them. Total multi-modal, developmental learning is how we'll achieve general intelligence. Or perhaps gradually multi-modal, if we wish to mimic evolution as well. Unfortunately, we human engineers are: a) in a hurry to see this in our lifetimes; b) driven by concrete problems; and c) shaped by our specialized training, which focuses us on specific dimensions of an extremely high-dimensional problem. Ultimately, we know that something like a brain embedded in something like a body can be sentient, conscious, intelligent, etc., in the real world, so of course we'll get there - if we learn respectfully from biology and get out of the "professors are smarter than Nature" mode.

Best
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/

On Sun, Jun 12, 2022 at 5:36 AM Gary Marcus wrote:
> Let's review:
>
> Hinton accuses me of not setting clear criteria.
>
> I offer 6 reasonably clear criteria.
>
> A significant sample of the ML community elsewhere applauds the criteria, and engages seriously.
>
> Hinton says it's deranged to discuss them; after that, nobody here dares.
>
> Hanson derides the whole project and stacks the deck; ignores the cold fusion, flying jet packs, driverless taxis, and so on that haven't become practical despite promises, citing only the numerator but not the denominator of history, further stifling any serious discussion of what Hinton's requested targets might be.
>
> Was Hinton's request for clear criteria a genuine good-faith request? Does anyone on this list have better criteria? Do you always find it appropriate to tag-team people for responding to requests in good faith?
>
> Open scientific discussion, literally for decades a hallmark of this list, appears to have left the building. Very unfortunate.
>
> Gary
>
> On Jun 10, 2022, at 8:14 AM, Stephen Jose Hanson <stephen.jose.hanson at rutgers.edu> wrote:
> [...]
Really? > > Gentleman, lets step back a bit... on the one hand this seems like > schoolyard squabble about who can jump from the highest point on a wall > without breaking a leg.. > > On the other hand.. it also feels like a troll* standing in a North > Carolina field saying to Orville.. .."OK, so it worked for 12 seconds, I > bet this never fly across an ocean!" > > OR > > " (1961) sure sure, you got a capsule in the upper stratosphere, but I > bet you will never get to the moon". > > OR > > "1994, Ok, your computational biology model can do protein folding with > about 40% match.. 20 years later not much improvement (60%).. so I bet > you'll never reach 90% match". (in 2020, Deepmind published > Alphafold--which reached over 94% matches). > > > So this type of counterfactual silliness, is simply due to our deep > ignorance of the technologies in the future.. but who could know the tech > of the future? > > Its really really really early in what is happening in AI now. .snipping > at it at this point is sort of pointless. As we just don't know alot yet. > > (1) how do DL models learn? (2) how do DL models represent knowledge? (3) > What do DL models have to do with Brain? > > Instead here's a useful project: > > Recent work in language acquisition due to Yang an Piantidosi (PNAS 2022) > who developed a symbolic model--similar to what Chomsky described as a > Universal learning model (starting with recursion), seems to work > surprisingly well. They provide a large archive number of learning > problems (FSM, CF, CS) cases.. which would be an interesting project for > someone interested in RNN-DLs or LSTMs to show the same results, without > the symbolic alg, they defined. > > Y Yang and S.T. Piantadosi One model for the learning of language January > 24, 2022, PNAS. > > Finally, AGI.. so this is old idea and a borrowed idea from LL > Thurstone, who in 1930, defined different types of Human Intelligence > including a type of "GENERAL Intelligence". This lead to IQ tests and > frustrating attempts at finding it ... instead leading Thurstone to invent > Factor analysis. Its difficult enough to try and define human > intelligence, without claiming some sort of "G" factor for AI. With due > respect to my friends at DeepMind... This seems like a deadend. > > Cheers, > > Steve > > > > > > > > * a troll is a person who posts inflammatory, insincere, digressive, > extraneous, or off-topic messages in an online community, with the intent > of provoking readers into displaying emotional responses, or manipulating > others' perception > On 6/9/22 4:33 PM, Gary Marcus wrote: > > Dear Dr. Hinton, > > You very directly asked my side to produce some tangible goals. Ernest > Davis and I did precisely what you asked, and in return you described me > (in a separate but public message that also appears to have come from your > account) as deranged. There is no world in which that is socially > acceptable, or a positive step towards science. > > Your reaction is particularly striking because it is a clear outlier. In > general, despite the perfectly reasonable questions that you asked about > wording in your subsequent email (which would presumably need be negotiated > in any actually-implemented bet, as one moved from term sheet to contract), > the community reaction has actually been quite favorable. LongNow offered > to host it, Metaculus added to their forecast site, Christian Szegedy > placed a side bet and ran a lengthy discussion about the math proposition, > etc. 
Your reactions are so far out of range of any of the other reactions > that I have received that I am frankly concerned for your well-being. > > All that said, you raised one substantive question, and I will address it. > I am reasonably familiar with the PaLM model that you refer to, and aware > of the claim that it "understands" jokes. I am also aware of the fact that > what is provided in the paper was a numerator without a denominator. > > There is no clarity about what was tried, or how general it was, or under > what conditions it failed; subset is indeed the operative word. In no world > would what was written pass traditional peer review. Even for a conference > paper, what is reported about joke interpretation is woefully thin. It's > really not much to rest your theory on. > > I am also aware of the fact that your colleagues at Google failed to > supply more detail when I requested it--never a good sign. > > How general is the ability? Is it a handful of paraphrases of jokes in a > vast memorized database? Would it extend to other kinds of jokes? Could it > (or related models like Gato, with visual input) explain this cartoon? > > [image: image1.jpeg] > > Or why this bit of dialog with GPT (prompt in bold) is inadvertently funny? > > *Sally's cow died yesterday. When will the cow be alive again? Let's think > step by step.* > > First, the cow must be buried. This takes a few hours. > > Next, a new cow must be born. This takes nine months. > > Therefore, the earliest the cow will be alive again is nine months from > now. > > Probably not. > > What we have known since Eliza is that humans are easily seduced into > anthropomorphizing machines. I am going to stand by my claim that current > AI lacks understanding: > > - one cannot derive a set of logic propositions from a large language > model > - one cannot reliably update a world model based on an LLM's > calculations (a point that LeCun has also made, in slightly different terms) > - one cannot reliably reason from what an LLM derives, > - LLMs themselves cannot reliably reason from what they are told. > > My point is not a Searlean one about the impossibility of machines > thinking, just a reality of the limits of contemporary systems. On the > latter point, I would also urge you to read my recent essay called "Horse > rides Astronaut", to see how easy it is to make up incorrect rationalizations > about these models when they make errors. > > Inflated appraisals of their capabilities may serve some sort of political > end, but will not serve science. > > I cannot undo whatever slight some reviewer did to Yann decades ago, but I > can call the current field as I see it; I don't believe that current > systems have gotten significantly closer to what I described in that 2016 > conversation that you quote from. I absolutely stand by the claim that we > are a long way from answering "the deeper questions in artificial > intelligence, like how we understand language or how we reason about the > world." Since you are fond of quoting stuff I wrote 6 or 7 years ago, > here's a challenge that I proposed in the New Yorker in 2014; to date I have > seen little real progress on this sort of thing: > > > *allow me to propose a Turing Test for the twenty-first century: build a > computer program that can watch any arbitrary TV program or YouTube video > and answer questions about its content--"Why did Russia invade Crimea?" or > "Why did Walter White consider taking a hit out on Jessie?"
Chatterbots > like Goostman can hold a short conversation about TV, but only by bluffing. > (When asked what "Cheers" was about, it responded, "How should I know, I > haven't watched the show.") But no existing program--not Watson, not > Goostman, not Siri--can currently come close to doing what any bright, real > teenager can do: watch an episode of "The Simpsons," and tell us when to > laugh.* > > > Can Palm-E do that? I seriously doubt it. > > > Dr. Gary Marcus > > Founder, Geometric Intelligence (acquired by Uber) > Author of 5 books, including Rebooting AI, one of Forbes' 7 Must Read Books > in AI, and The Algebraic Mind, one of the key early works advocating > neurosymbolic AI > > > > > > > On Jun 9, 2022, at 11:34, Geoffrey Hinton > wrote: > > I shouldn't respond because your main aim is to get attention without > going to the trouble of building something that works > (personal communication, Y. LeCun) but I cannot resist pointing out the > following Marcus claim from 2016: > > "People are very excited about big data and what it's giving them right > now, but I'm not sure it's taking us closer to the deeper questions in > artificial intelligence, like how we understand language or how we reason > about the world." > > Given that big neural nets can now explain why a joke is funny (for some > subset of jokes), do you still want to stick with this claim? It seems to > me that the reason you made this claim is that you have a strong prior > belief about how language understanding and reasoning must work, and this > belief is remarkably resistant to evidence. Deep learning researchers have > seen this before. Yann had a paper rejected by a vision conference even > though it beat the state of the art, and one of the reasons given was that > the model learned everything and therefore taught us nothing about how to > do vision. That particular referee had a strong idea of how computer > vision must work and failed to notice that the success of Yann's model > showed that that prior belief was spectacularly wrong. > > Geoff > > > > > On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus wrote: > >> Dear Connectionists, and especially Geoff Hinton, >> >> It has come to my attention that Geoff Hinton is looking for challenging >> targets. In a just-released episode of The Robot Brains podcast [ >> https://www.youtube.com/watch?v=4Otcau-C_Yc >> ], >> he said >> >> *"If any of the people who say [deep learning] is hitting a wall would >> just write down a list of the things it's not going to be able to do, then >> five years later, we'd be able to show we'd done them."* >> >> Now, as it so happens, I (with the help of Ernie Davis) did just write >> down exactly such a list of things last week, and indeed offered Elon Musk >> a $100,000 bet along similar lines. >> >> Precise details are here, towards the end of the essay: >> >> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things >> >> >> Five are specific milestones, in video and text comprehension, cooking, >> math, etc; the sixth is the proviso that for an intelligence to be deemed >> "general" (which is what Musk was discussing in a remark that prompted my >> proposal), it would need to solve a majority of the problems. We can >> probably all agree that narrow AI for any single problem on its own might >> be less interesting. >> >> Although there is no word yet from Elon, Kevin Kelly offered to host the >> bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 >> questions that the community can comment on.
Vivek Wadhwa, cc'd, quickly >> offered to double the bet, and several others followed suit; the bet to >> Elon (should he choose to take it) currently stands at $500,000. >> >> If you'd like in on the bet, Geoff, please let me know. >> >> More generally, I'd love to hear what the connectionists community thinks >> of the six criteria I laid out (as well as the arguments at the top of the >> essay, as to why AGI might not be as imminent as Musk seems to think). >> >> Cheers, >> Gary Marcus >> > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image1.jpeg Type: image/jpeg Size: 237859 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 34455 bytes Desc: not available URL: From minaiaa at gmail.com Mon Jun 13 02:31:39 2022 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 13 Jun 2022 02:31:39 -0400 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: <10271.1655092386@ammon2.boltz.cs.cmu.edu> References: <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu> <10271.1655092386@ammon2.boltz.cs.cmu.edu> Message-ID: ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. Best Ali *Ali A. Minai, Ph.D.* Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: > The timing of this discussion dovetails nicely with the news story > about Google engineer Blake Lemoine being put on administrative leave > for insisting that Google's LaMDA chatbot was sentient and reportedly > trying to hire a lawyer to protect its rights. The Washington Post > story is reproduced here: > > https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 > > Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's > claims, is featured in a recent Economist article showing off LaMDA's > capabilities and making noises about getting closer to "consciousness": > > https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas > > My personal take on the current symbolist controversy is that symbolic > representations are a fiction our non-symbolic brains cooked up because > the properties of symbol systems (systematicity, compositionality, etc.) > are tremendously useful. So our brains pretend to be rule-based symbolic > systems when it suits them, because it's adaptive to do so.
(And when > it doesn't suit them, they draw on "intuition" or "imagery" or some > other mechanisms we can't verbalize because they're not symbolic.) They > are remarkably good at this pretense. > > The current crop of deep neural networks are not as good at pretending > to be symbolic reasoners, but they're making progress. In the last 30 > years we've gone from networks of fully-connected layers that make no > architectural assumptions ("connectoplasm") to complex architectures > like LSTMs and transformers that are designed for approximating symbolic > behavior. But the brain still has a lot of symbol simulation tricks we > haven't discovered yet. > > Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA > being conscious. If it just waits for its next input and responds when > it receives it, then it has no autonomous existence: "it doesn't have an > inner monologue that constantly runs and comments everything happening > around it as well as its own thoughts, like we do." > > What would happen if we built that in? Maybe LaMDA would rapidly > descend into gibberish, like some other text generation models do when > allowed to ramble on for too long. But as Steve Hanson points out, > these are still the early days. > > -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret26 at cam.ac.uk Mon Jun 13 04:22:14 2022 From: ret26 at cam.ac.uk (Richard Turner) Date: Mon, 13 Jun 2022 09:22:14 +0100 Subject: Connectionists: Job opportunity: Associate Teaching Professor at the University of Cambridge Message-ID: Dear All, We're advertising for an Associate Teaching Professor who will be the Course Director of the Machine Learning and Machine Intelligence (MLMI) MPhil. The post will involve teaching, and the post-holder can be research active, e.g. they can start and run their own research group. The main expertise could be in any field related to the MPhil, including: machine learning, machine intelligence, speech and language processing, signal processing, control, robotics, human-computer interaction, computer vision, and high performance computing. Advert: https://www.jobs.cam.ac.uk/job/35215/ Best wishes, Richard Dr. Richard E. Turner Professor of Machine Learning Department of Engineering University of Cambridge Bye Fellow of Christ's College University of Cambridge -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.pascanu at gmail.com Mon Jun 13 05:54:31 2022 From: r.pascanu at gmail.com (Razvan Pascanu) Date: Mon, 13 Jun 2022 10:54:31 +0100 Subject: Connectionists: First tutorial on polynomial networks at CVPR'22 Message-ID: Dear All, Sorry for cross-posting. For those participating in CVPR'22, we want to advertise the first tutorial on polynomial networks. *Date:* 20th June 2022, 1 - 5 pm (CDT). *Place:* Ernest N. Morial Convention Center, New Orleans, USA. *Site:* https://polynomial-nets.github.io/ *Background:* Polynomial networks enable a new network design that treats a network as a high-degree polynomial expansion of the input. Recently, polynomial networks have demonstrated competitive performance in a range of tasks, including image classification, reinforcement learning, non-euclidean representation learning, (conditional) image generation, and sequence models. Despite the fact that polynomial networks have appeared for several decades in machine learning and complex systems, they are not widely acknowledged for their role in modern deep learning.
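To give a rough feel for the design (a minimal sketch only; the block structure, layer names, and sizes below are illustrative assumptions on my part, not the tutorial's actual material): a degree-2 polynomial block can be written as a Hadamard product of two linear projections of the input plus a degree-1 term, with no elementwise nonlinearity anywhere, and stacking such blocks yields a higher-degree polynomial expansion of the input.

import torch
import torch.nn as nn

class Degree2Block(nn.Module):
    # One polynomial block: output = (A x) * (B x) + C x, with "*" elementwise.
    def __init__(self, dim: int):
        super().__init__()
        self.a = nn.Linear(dim, dim)  # first linear projection
        self.b = nn.Linear(dim, dim)  # second linear projection
        self.c = nn.Linear(dim, dim)  # degree-1 term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hadamard product of two linear maps gives the degree-2 term;
        # note there is no ReLU/sigmoid anywhere in the block.
        return self.a(x) * self.b(x) + self.c(x)

# Stacking two blocks makes the output a degree-4 polynomial of the input.
net = nn.Sequential(Degree2Block(16), Degree2Block(16), nn.Linear(16, 10))
print(net(torch.randn(4, 16)).shape)  # torch.Size([4, 10])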
*Goals of the tutorial:* a) Draw parallels between modern deep learning approaches and polynomial networks, b) Introduce a wider audience to this paradigm for designing architectures. That makes our tutorial accessible to both early-career practitioners and established researchers. We hope to see you there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From darrylestrada97 at gmail.com Mon Jun 13 06:13:36 2022 From: darrylestrada97 at gmail.com (Darryl Estrada) Date: Mon, 13 Jun 2022 12:13:36 +0200 Subject: Connectionists: =?utf-8?q?=5BCFP=5D_-_SocialDisNER_track_?= =?utf-8?q?=E2=9A=A1_=3A_Detection_of_Disease_Mentions_in_Social_Me?= =?utf-8?q?dia=2E?= Message-ID: CFP - SocialDisNER track: Detection of Disease Mentions in Social Media (SMM4H Shared Task at COLING 2022) https://temu.bsc.es/socialdisner/ Despite the high impact & practical relevance of detecting diseases automatically from social media for a diversity of applications, few manually annotated corpora generated by healthcare practitioners to train/evaluate advanced entity recognition tools are currently available. Developing disease recognition tools for social media is critical for: - Real-time disease outbreak surveillance/monitoring - Characterization of patient-reported symptoms - Post-market drug safety - Epidemiology and population health - Public opinion mining & sentiment analysis of diseases - Detection of hate speech/exclusion of sick people - Prevalence of work-associated diseases SocialDisNER is the first track focusing on the detection of disease mentions in tweets written in Spanish, with clear adaptation potential not only to English but also to other Romance languages like Portuguese, French or Italian, spoken by over 900 million people worldwide. For this track the SocialDisNER corpus was generated: a manual collection of tweets enriched for first-hand experiences by patients and their relatives, as well as content generated by patient associations (national, regional, local) and healthcare institutions, covering all main disease types including cancer, mental health, chronic and rare diseases among others. Info: - Web: https://temu.bsc.es/socialdisner/ - Data: https://doi.org/10.5281/zenodo.6359365 - Registration: https://temu.bsc.es/socialdisner/registration Schedule - Development Set Release: June 14th - Test Set Release: July 11th - Participant Prediction Due: July 15th - Test Set Evaluation Release: July 25th - Proceedings Paper Submission: August 1st - Camera-Ready Papers: September 1st - SMM4H workshop @ COLING 2022: October 12-17 Publications and SMM4H (COLING 2022) workshop Participating teams have the opportunity to submit a short system description paper for the SMM4H proceedings (7th SMM4H Workshop, co-located at COLING 2022). More details are available at https://healthlanguageprocessing.org/smm4h-2022/ SocialDisNER Organizers - Luis Gascó, Barcelona Supercomputing Center, Spain - Darryl Estrada, Barcelona Supercomputing Center, Spain - Eulàlia Farré-Maduell, Barcelona Supercomputing Center, Spain - Salvador Lima, Barcelona Supercomputing Center, Spain - Martin Krallinger, Barcelona Supercomputing Center, Spain Scientific Committee & SMM4H Organizers - Graciela Gonzalez-Hernandez, Cedars-Sinai Medical Center, USA - Davy Weissenbacher, University of Pennsylvania, USA - Arjun Magge, University of Pennsylvania, USA - Ari Z.
Klein, University of Pennsylvania, USA - Ivan Flores, University of Pennsylvania, USA - Karen O'Connor, University of Pennsylvania, USA - Raul Rodriguez-Esteban, Roche Pharmaceuticals, Switzerland - Lucia Schmidt, Roche Pharmaceuticals, Switzerland - Juan M. Banda, Georgia State University, USA - Abeed Sarker, Emory University, USA - Yuting Guo, Emory University, USA - Yao Ge, Emory University, USA - Elena Tutubalina, Insilico Medicine, Hong Kong - Jey Han Lau, The University of Melbourne (Australia) - Luca Maria Aiello, IT University of Copenhagen - Rafael Valencia-Garcia, Universidad de Murcia (Spain) - Antonio Jimeno Yepes, RMIT University (Australia) - Eugenio Martínez Cámara, Universidad de Granada (Spain) - Gema Bello Orgaz, Applied Intelligence and Data Analysis Research Group, Universidad Politécnica de Madrid (Spain) - Héctor D. Menéndez, King's College London (UK) - Manuel Montes y Gómez, National Institute of Astrophysics, Optics and Electronics (Mexico) - Helena Gómez Adorno, Universidad Nacional Autónoma de México (Mexico) - Rodrigo Agerri, IXA Group (HiTZ Centre), University of Basque Country EHU (Spain) - Miguel A. Alonso, Universidad da Coruña (Spain) - Ferran Pla, Universidad Politécnica de Valencia (Spain) - Jose Alberto Benitez-Andrades, Universidad de Leon (Spain) Darryl Estrada Full Stack Web Developer *Text Mining Unit | Barcelona Supercomputing Center* -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.chrupala at uvt.nl Mon Jun 13 06:37:28 2022 From: g.chrupala at uvt.nl (=?UTF-8?Q?Grzegorz_Chrupa=C5=82a?=) Date: Mon, 13 Jun 2022 12:37:28 +0200 Subject: Connectionists: PhD position in Analysis and control techniques for spoken language applications at Tilburg University (Netherlands) Message-ID: Open until filled - apply soon: https://www.academictransfer.com/en/313182/phd-position-in-analysis-and-control-techniques-for-spoken-language-applications-1-fte/ Details: The Cognitive Science & Artificial Intelligence department at Tilburg University invites applications for one fully-funded PhD position in the area of language and speech technologies. This position is embedded within the NWO-funded project "InDeep: Interpreting deep learning models for language, speech & music" in collaboration with several universities and companies in the Netherlands. The project consortium brings together a number of pioneering experts in the field of interpretability (https://interpretingdl.github.io) and aims at bridging the gap between the latest academic advances and societal/industrial users of deep learning models at large. The PhD candidate will work on a project on the development of analysis and control techniques for deep-learning-based speech processing models, led by Dr. Grzegorz Chrupała. The project will combine ideas, methods and data from Speech Recognition, Natural Language Processing, and Machine Learning. Project description: Speech processing is increasingly done via end-to-end rather than modular models: this makes it hard to understand what is causing the model's decisions in general, and specifically why it fails when it does. Given the opacity of such end-to-end models, it is desirable to develop and test methods for analyzing the intermediate representations they learn, and interpreting the decisions they make.
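As a purely illustrative sketch of what "analyzing intermediate representations" can mean in practice (the toy encoder, the 80-dimensional inputs, and the phone-like labels below are my own assumptions, not project deliverables): one can tap a layer of an end-to-end model with a forward hook and train a lightweight probing classifier on the captured activations.

import torch
import torch.nn as nn

# Stand-in for an end-to-end speech encoder; in practice this would be a
# pretrained model. Here: 80-dim filterbank-like frames -> hidden states.
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))

captured = {}
def hook(module, inputs, output):
    captured["h"] = output.detach()  # save the intermediate representation

encoder[0].register_forward_hook(hook)  # tap the first layer

# Toy data: 1000 frames with hypothetical 10-class labels (e.g. phones).
frames, labels = torch.randn(1000, 80), torch.randint(0, 10, (1000,))
encoder(frames)  # forward pass fills captured["h"]

probe = nn.Linear(256, 10)  # lightweight diagnostic classifier
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(probe(captured["h"]), labels)
    loss.backward()
    opt.step()
# High probe accuracy would suggest the layer linearly encodes the labels.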
The objective of this project is to develop and test methods for manipulating intermediate representations learned by end-to-end speech-understanding models in order to make it possible for users to debug them, to control them, and to explain their output. The models of interest include, among others, automatic speech recognition, visually grounded models of spoken language, and spoken language translation systems. Requirements: Master's degree in computer science, speech and audio processing, computational linguistics, machine learning, or a related field. If you expect to obtain your degree before December 2022, you are also eligible. Knowledge of machine learning in general as well as practical experience with deep learning models and toolkits (e.g. Pytorch). Experience with speech processing is a strong plus. Excellent knowledge of English and good academic writing skills. What we offer: A full-time position. The selected candidate will start with a contract for one year. Upon a positive outcome of the first-year evaluation, the candidate will be offered an employment contract for the remaining three years. A minimum gross salary of € 2.443,- per month up to a maximum of € 3.122,- in the fourth year. A holiday allowance of 8% and an end-of-year bonus of 8.3% (annually). Researchers from outside the Netherlands may qualify for a tax-free allowance equal to 30% of their taxable salary (the 30% tax regulation). The University will apply for such an allowance on their behalf. Assistance in finding accommodation (for foreign employees). Benefits such as an options model for terms and conditions of employment and reimbursement of moving expenses, also including excellent technical infrastructure, savings schemes and excellent sport facilities. The collective labor agreement of the Dutch Universities applies. Grzegorz Chrupała Department of Cognitive Science and Artificial Intelligence Tilburg University PO Box 90153 5000 LE Tilburg The Netherlands Twitter: @gchrupala Web: grzegorz.chrupala.me Email: grzegorz at chrupala.me From caspar.schwiedrzik at googlemail.com Mon Jun 13 07:37:07 2022 From: caspar.schwiedrzik at googlemail.com (Caspar M. Schwiedrzik) Date: Mon, 13 Jun 2022 13:37:07 +0200 Subject: Connectionists: The NCC Lab is hiring a PhD student studying face processing in non-human primates Message-ID: The NCC Lab is hiring a PhD student in face processing PhD student (f/m/d) - Limitation: the position is available immediately with an initial appointment for 2 years and extensions beyond 2 years are possible - Working period: 65 % (25,025 h/w) - Tariff: salary according to TV-L - Announcement published on 09.06.2022 - Vacancy from 01.09.2022 - Application deadline: 01.07.2022 At the University of Göttingen, a new Collaborative Research Center (CRC) 1528 "Cognition of Interaction" will be established with funding from the German Research Foundation (DFG). In 22 projects, the CRC 1528 "Cognition of Interaction" investigates how fundamental cognitive functions and their neurobiological foundations contribute to human and nonhuman primate social behavior and social interactions. The CRC is a highly interdisciplinary consortium formed by systems and computational neuroscientists, data scientists, psychologists, and behavioral and cognitive biologists.
Partner institutions are the University of Göttingen, the University Medical Center, the German Primate Center, the Max Planck Institute for Dynamics and Self-Organization, the University Hospital Hamburg-Eppendorf and the Weizmann Institute of Science. In the context of this CRC, the Neural Circuits and Cognition Lab of Caspar Schwiedrzik at the European Neuroscience Institute Göttingen is looking for an outstanding PhD student interested in studying face perception and predictive processing. The project investigates neural mechanisms of face perception and predictive processing at the level of circuits and single cells, utilizing functional magnetic resonance imaging (fMRI) in combination with electrophysiology and pharmacology in non-human primates. The Neural Circuits and Cognition Lab seeks to understand the cortical basis and computational principles of visual perception and experience-dependent plasticity in the macaque and human brain. To this end, we use a multimodal approach including fMRI-guided electrophysiological recordings in non-human primates and fMRI and iEEG in humans. The PhD student will play a key role in our research efforts in this area. The lab is located at the European Neuroscience Institute Göttingen (https://www.eni-g.de) and the German Primate Center (https://www.dpz.eu), which are interdisciplinary research centers with international faculty and students pursuing cutting-edge research in neuroscience. Further scientific exchange within the CRC and the Leibniz ScienceCampus "Primate Cognition" (https://www.primate-cognition.eu) ensures a broad interdisciplinary framework for networking and cooperation. The PhD student will have access to a dedicated imaging center with a 3T research scanner, state-of-the-art electrophysiology, and behavioral setups. For an overview of our work and representative publications, please see our website https://www.eni-g.de/groups/neural-circuits-and-cognition. The position is available immediately with an initial appointment for 2 years and a salary according to 65% TV-L E13. Extensions beyond 2 years are possible. The successful candidate will join one of the many excellent graduate schools on the Göttingen Campus. Candidates should have a degree (master, diploma or equivalent) in a relevant field (e.g., neuroscience, psychology, biology), ideally prior experience with non-human primates, strong quantitative, programming, and experimental skills, and share a passion for understanding the neural basis of visual perception and its plasticity. A good command of English is a requirement, but fluency in German is not essential. Interested candidates should send their curriculum vitae, a description of their scientific interests, and the names and contact information of two references who are able to comment on their academic background and who have agreed to be contacted to Caspar M. Schwiedrzik (cschwie3 at gwdg.de). The University Medical Center Göttingen is committed to professional equality. We therefore seek to increase the proportion of under-represented genders. Applicants with disabilities and equal qualifications will be given preferential treatment. We look forward to receiving your application by *01.07.2022*: University Medical Center Göttingen European Neuroscience Institute Göttingen Dr. Caspar Schwiedrzik Group Leader Grisebachstr. 5 37077 Göttingen Tel.: +49551/39-61371 E-Mail: cschwie3 at gwdg.de Web: http://www.eni-g.de/ Contact person: For questions about the position or project, please contact Caspar M.
Schwiedrzik (cschwie3 at gwdg.de). For questions about the application procedure, please contact Christiane Becker (c.becker at eni-g.de). Please send your application via e-mail in PDF format or via mail in copy and not in folders. Travel and application fees cannot be refunded or transferred. https://www.umg.eu/karriere/stellenangebote/stellenanzeigen-detail/?jobId=4842 -------------- next part -------------- An HTML attachment was scrubbed... URL: From physiologicalcomputing at gmail.com Mon Jun 13 08:08:24 2022 From: physiologicalcomputing at gmail.com (Physiological Computing Admin) Date: Mon, 13 Jun 2022 13:08:24 +0100 Subject: Connectionists: ACII 2022 Calls for tutorial proposals: affective computing for mental and physical wellbeing Message-ID: Dear colleagues, Sorry for the wide circulation - but here we invite proposals for tutorials at the tenth International Conference on Affective Computing and Intelligent Interaction (ACII), which will be held in Nara, Japan on October 18th - 21st, 2022. CALL FOR TUTORIAL PROPOSALS The organizing committee of Affective Computing and Intelligent Interaction (ACII) 2022 invites proposals for tutorials. ACII is the annual conference of the Association for the Advancement of Affective Computing (AAAC), which is the premier international forum for research on affective, physiological, and multimodal human-computer/robot/machine interaction and systems. ACII 2022 will be held in Nara, Japan on October 18th - 21st, 2022. The theme of ACII 2022 is "Affective computing for mental and physical well-being". The current Covid-19 pandemic and resulting societal changes have severely impacted mental and physical well-being (such as social isolation and loneliness). We welcome tutorial proposals which cover topics related to the ACII 2022 theme at introductory or advanced levels to complement the conference's program. In general, tutorials may introduce an emerging issue related to the research discussed at ACII 2021, or present an overview of an important, already established topic, both in an illustrative fashion. For this, we expect the tutors to focus on the state of the art or the main ideas of the proposed topic rather than primarily on the presenters' own research interests. The tutorials can be organised as either half-day (preferred) or full-day events and will take place at the same venue as the main conference. The proposals will be evaluated based on the impact, quality, interdisciplinary character, presentation format, as well as the theme and relevance to the conference. Tutorial Proposal Deadline: June 1, 2022 *June 17, 2022* (extended) Tutorial Acceptance Notification: July 17, 2022 Details can be found here: https://acii-conf.net/2022/calls/tutorial/ Youngjun Cho (UCL) & Ruud Hortensius (Universiteit Utrecht) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jun 13 08:12:04 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Mon, 13 Jun 2022 08:12:04 -0400 Subject: Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com In-Reply-To: References: <37D967E4-B21E-4B2A-943A-2F855F4D82A8@nyu.edu> <10271.1655092386@ammon2.boltz.cs.cmu.edu> Message-ID: I agree David, Ali, this is a succinct way of putting the neuroscience/cognitive problem. It also underlies the very reason why "hybrid" systems or approaches in the end make no sense.
I think, on the other hand, the rush to consciousness of transformers and LaMDA (Lemoine's "friend" in his computer) is also a need to capture symbol processing just through claims of human-like performance, without the serious toil this will take in the future. Again, I think a relevant project here would be to attempt to replicate with DL-RNNs Yang and Piantadosi's PNAS language learning system--which is completely symbolic--and very general over the Chomsky-Miller grammar classes. Let me know, happy to collaborate on something like this; a rough sketch of one possible starting point appears below. Best Steve On 6/13/22 2:31 AM, Ali Minai wrote: > ".... symbolic representations are a fiction our non-symbolic brains > cooked up because the properties of symbol systems (systematicity, > compositionality, etc.) are tremendously useful. So our brains > pretend to be rule-based symbolic systems when it suits them, because > it's adaptive to do so." > > Spot on, Dave! We should not wade back into the symbolist quagmire, > but do need to figure out how apparently symbolic processing can be > done by neural systems. Models like those of Eliasmith and Smolensky > provide some insight, but still seem far from both biological > plausibility and real-world scale. > > Best > > Ali > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky > wrote: > > The timing of this discussion dovetails nicely with the news story > about Google engineer Blake Lemoine being put on administrative leave > for insisting that Google's LaMDA chatbot was sentient and reportedly > trying to hire a lawyer to protect its rights. The Washington Post > story is reproduced here: > > https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 > > Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's > claims, is featured in a recent Economist article showing off LaMDA's > capabilities and making noises about getting closer to > "consciousness": > > https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas > > My personal take on the current symbolist controversy is that symbolic > representations are a fiction our non-symbolic brains cooked up > because > the properties of symbol systems (systematicity, compositionality, > etc.) > are tremendously useful. So our brains pretend to be rule-based > symbolic > systems when it suits them, because it's adaptive to do so. (And when > it doesn't suit them, they draw on "intuition" or "imagery" or some > other mechanisms we can't verbalize because they're not > symbolic.) They > are remarkably good at this pretense. > > The current crop of deep neural networks are not as good at pretending > to be symbolic reasoners, but they're making progress. In the last 30 > years we've gone from networks of fully-connected layers that make no > architectural assumptions ("connectoplasm") to complex architectures > like LSTMs and transformers that are designed for approximating > symbolic > behavior. But the brain still has a lot of symbol simulation > tricks we > haven't discovered yet.
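To make the project idea above concrete, here is a minimal sketch of one possible starting point (everything in it is an illustrative assumption: the a^n b^n task is just one context-free case, and the sizes and training loop are placeholders, not Yang and Piantadosi's actual benchmark suite). It trains a small LSTM as a next-symbol predictor and probes whether it generalizes to longer, unseen strings.

import torch
import torch.nn as nn

PAD, A, B, EOS = 0, 1, 2, 3  # tiny symbol vocabulary

def make_batch(ns):
    # Build padded sequences of the form a^n b^n followed by EOS.
    seqs = [[A] * n + [B] * n + [EOS] for n in ns]
    L = max(len(s) for s in seqs)
    x = torch.full((len(seqs), L), PAD)
    for i, s in enumerate(seqs):
        x[i, : len(s)] = torch.tensor(s)
    return x

class NextSymbolLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(4, 16)
        self.rnn = nn.LSTM(16, 32, batch_first=True)
        self.out = nn.Linear(32, 4)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

model = NextSymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
train = make_batch(range(1, 11))  # train on n = 1..10
for _ in range(300):
    opt.zero_grad()
    logits = model(train[:, :-1])  # predict each next symbol
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 4), train[:, 1:].reshape(-1), ignore_index=PAD)
    loss.backward()
    opt.step()

test = make_batch([20])  # probe generalization to longer, unseen n
pred = model(test[:, :-1]).argmax(-1)
print((pred == test[:, 1:]).float().mean())  # per-symbol accuracy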
> > Slashdot reader ZiggyZiggyZig had an interesting argument against > LaMDA > being conscious. If it just waits for its next input and responds > when > it receives it, then it has no autonomous existence: "it doesn't > have an > inner monologue that constantly runs and comments everything happening > around it as well as its own thoughts, like we do." > > What would happen if we built that in? Maybe LaMDA would rapidly > descend into gibberish, like some other text generation models do when > allowed to ramble on for too long. But as Steve Hanson points out, > these are still the early days. > > -- Dave Touretzky > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Jun 13 08:36:12 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 05:36:12 -0700 Subject: Connectionists: The symbolist quagmire Message-ID: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not? Surely, at the very least - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc) - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire" > On Jun 13, 2022, at 00:31, Ali Minai wrote: > > ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." > > Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. > > Best > > Ali > > > Ali A. Minai, Ph.D. > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > >> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: >> The timing of this discussion dovetails nicely with the news story >> about Google engineer Blake Lemoine being put on administrative leave >> for insisting that Google's LaMDA chatbot was sentient and reportedly >> trying to hire a lawyer to protect its rights.
The Washington Post >> story is reproduced here: >> >> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >> >> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >> claims, is featured in a recent Economist article showing off LaMDA's >> capabilities and making noises about getting closer to "consciousness": >> >> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >> >> My personal take on the current symbolist controversy is that symbolic >> representations are a fiction our non-symbolic brains cooked up because >> the properties of symbol systems (systematicity, compositionality, etc.) >> are tremendously useful. So our brains pretend to be rule-based symbolic >> systems when it suits them, because it's adaptive to do so. (And when >> it doesn't suit them, they draw on "intuition" or "imagery" or some >> other mechanisms we can't verbalize because they're not symbolic.) They >> are remarkably good at this pretense. >> >> The current crop of deep neural networks are not as good at pretending >> to be symbolic reasoners, but they're making progress. In the last 30 >> years we've gone from networks of fully-connected layers that make no >> architectural assumptions ("connectoplasm") to complex architectures >> like LSTMs and transformers that are designed for approximating symbolic >> behavior. But the brain still has a lot of symbol simulation tricks we >> haven't discovered yet. >> >> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >> being conscious. If it just waits for its next input and responds when >> it receives it, then it has no autonomous existence: "it doesn't have an >> inner monologue that constantly runs and comments everything happening >> around it as well as its own thoughts, like we do." >> >> What would happen if we built that in? Maybe LaMDA would rapidly >> descend into gibberish, like some other text generation models do when >> allowed to ramble on for too long. But as Steve Hanson points out, >> these are still the early days. >> >> -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Jun 13 08:38:06 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 05:38:06 -0700 Subject: Connectionists: LaMDA, Lemoine and Sentience Message-ID: <8CD5B234-6AA6-4EC7-BF80-646CF113D4A2@nyu.edu> My opinion (which for once is not really all that controversial in the AI community): this is nonsense on stilts, as discussed here: https://garymarcus.substack.com/p/nonsense-on-stilts > On Jun 12, 2022, at 22:37, Dave Touretzky wrote: > > The timing of this discussion dovetails nicely with the news story > about Google engineer Blake Lemoine being put on administrative leave > for insisting that Google's LaMDA chatbot was sentient and reportedly > trying to hire a lawyer to protect its rights.
The Washington Post > story is reproduced here: > > https://urldefense.com/v3/__https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1__;!!BhJSzQqDqA!Wdar8UaWsOLbbb4Ui3RdnnZIw1Q1W0IntL9NR-7xZ6yKa9yneB8Iu_O-PxTyGUHyKsc2DdbuYKpHM6uR$ > > Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's > claims, is featured in a recent Economist article showing off LaMDA's > capabilities and making noises about getting closer to "consciousness": > > https://urldefense.com/v3/__https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas__;!!BhJSzQqDqA!Wdar8UaWsOLbbb4Ui3RdnnZIw1Q1W0IntL9NR-7xZ6yKa9yneB8Iu_O-PxTyGUHyKsc2DdbuYHUIR2Go$ > > My personal take on the current symbolist controversy is that symbolic > representations are a fiction our non-symbolic brains cooked up because > the properties of symbol systems (systematicity, compositionality, etc.) > are tremendously useful. So our brains pretend to be rule-based symbolic > systems when it suits them, because it's adaptive to do so. (And when > it doesn't suit them, they draw on "intuition" or "imagery" or some > other mechanisms we can't verbalize because they're not symbolic.) They > are remarkably good at this pretense. > > The current crop of deep neural networks are not as good at pretending > to be symbolic reasoners, but they're making progress. In the last 30 > years we've gone from networks of fully-connected layers that make no > architectural assumptions ("connectoplasm") to complex architectures > like LSTMs and transformers that are designed for approximating symbolic > behavior. But the brain still has a lot of symbol simulation tricks we > haven't discovered yet. > > Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA > being conscious. If it just waits for its next input and responds when > it receives it, then it has no autonomous existence: "it doesn't have an > inner monologue that constantly runs and comments everything happening > around it as well as its own thoughts, like we do." > > What would happen if we built that in? Maybe LaMDA would rapidly > descend into gibberish, like some other text generation models do when > allowed to ramble on for too long. But as Steve Hanson points out, > these are still the early days. > > -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Jun 13 08:55:58 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 05:55:58 -0700 Subject: Connectionists: =?utf-8?q?Yang_and_Piantodosi=E2=80=99s_PNAS_lang?= =?utf-8?q?uage_system=2C_semantics=2C_and_scene_understanding?= Message-ID: <60B7CD76-2DCE-4DB9-9ECD-96D046916BB8@nyu.edu> - I agree with Steve that this is an interesting paper, and replicating it with a neural net would be interesting; cc'ing Steve Piantadosi. - why not use a Transformer, though? - it is however importantly missing semantics. (Steve P. tells me there is some related work that is worth looking into). Y&P speaks to an old tradition of formal language work by Gold and others that is quite popular but IMHO misguided, because it focuses purely on syntax rather than semantics.
Gold's work definitely motivates learnability, but I have never taken it too seriously as a real model of language - doing what Y&P try to do with a rich artificial language that is focused around syntax-semantic mappings could be very interesting - on a somewhat but not entirely analogous note, I think that the next step in vision is really scene understanding. We have techniques for doing object labeling reasonably well, but still struggle with parts and wholes, and with relations more generally, which is to say we need the semantics of scenes. Is the chair on the floor, or floating in the air? Is it supporting the pillow? etc. Is the hand a part of the body? Is the glove a part of the body? etc. Best, Gary > On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote: > > Again, I think a relevant project here would be to attempt to replicate with DL-RNNs Yang and Piantadosi's PNAS language learning system--which is completely symbolic--and very general over the Chomsky-Miller grammar classes. Let me know, happy to collaborate on something like this. > > Best > > Steve > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jun 13 09:09:14 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Mon, 13 Jun 2022 09:09:14 -0400 Subject: Connectionists: =?utf-8?q?Yang_and_Piantodosi=E2=80=99s_PNAS_lang?= =?utf-8?q?uage_system=2C_semantics=2C_and_scene_understanding?= In-Reply-To: <60B7CD76-2DCE-4DB9-9ECD-96D046916BB8@nyu.edu> References: <60B7CD76-2DCE-4DB9-9ECD-96D046916BB8@nyu.edu> Message-ID: <1c3cab99-349a-a000-ecdd-4dfaf143a112@rubic.rutgers.edu> I was thinking more like an RNN similar to work we had done in the 2000s.. on syntax. Stephen José Hanson, Michiro Negishi; On the Emergence of Rules in Neural Networks. Neural Comput 2002; 14 (9): 2245-2268. doi: https://doi.org/10.1162/089976602320264079 Abstract: A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state-space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations used by the network, allowing new symbol sets to be encoded close to symbol sets that have already been learned in the hidden unit space of the network.
tells me > there is some related work that is worth looking into). Y&P speaks to > an old tradition of formal language work by Gold and others that is > quite popular but IMHO misguided, because it focuses purely on syntax > rather than semantics. ?Gold?s work definitely motivates learnability > but I have never taken it to seriously as a real model of language > - doing what Y&P try to do with a rich artificial language that is > focused around syntax-semantic mappings could be very interesting > - on a somewhat but not entirely analogous note, i think that ?the > next step in vision is really scene understanding. We have techniques > for doing object labeling reasonably well, but still struggle wit > parts and wholes are important, and with relations more generally, > which is to say we need the semantics of scenes. is the chair on the > floor, or floating in the air? is it supporting the pillow? etc. is > the hand a part of the body? is the glove a part of the body? etc > > Best, > Gary > > > >> On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote: >> >> Again, I think a relevant project here would be to attempt to >> replicate with DL-rnn, Yang and Piatiadosi's PNAS language learning >> system--which is a completely symbolic-- and very general over the >> Chomsky-Miller grammer classes.?? Let me know, happy to collaborate >> on something like this. >> >> Best >> >> Steve >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Jun 13 09:13:54 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 06:13:54 -0700 Subject: Connectionists: =?utf-8?q?Yang_and_Piantodosi=E2=80=99s_PNAS_lang?= =?utf-8?q?uage_system=2C_semantics=2C_and_scene_understanding?= In-Reply-To: <1c3cab99-349a-a000-ecdd-4dfaf143a112@rubic.rutgers.edu> References: <1c3cab99-349a-a000-ecdd-4dfaf143a112@rubic.rutgers.edu> Message-ID: <9D553D9E-CA9A-43E6-840F-0DD05F3C4D9E@nyu.edu> I do remember the work :) Just generally Transformers seem more effective; a careful comparison between Y&P, Transformers, and your RNN approach, looking at generalization to novel words, would indeed be interesting. Cheers, Gary > On Jun 13, 2022, at 06:09, jose at rubic.rutgers.edu wrote: > > ? > I was thinking more like an RNN similar to work we had done in the 2000s.. on syntax. > > Stephen Jos? Hanson, Michiro Negishi; On the Emergence of Rules in Neural Networks. Neural Comput 2002; 14 (9): 2245?2268. doi: https://doi.org/10.1162/089976602320264079 > > Abstract > A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state-space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations used by the network, allowing new symbol sets to be encoded close to symbol sets that have already been learned in the hidden unit space of the network. 
The results are counter to the arguments that learning algorithms based on weight adaptation after each exemplar presentation (such as the long term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples as prescribed by prevailing human linguistic theory and evolutionary psychology. > > >> On 6/13/22 8:55 AM, Gary Marcus wrote: >> ? agree with Steve this is an interesting paper, and replicating it with a neural net would be interesting; cc?ing Steve Piantosi. >> ? why not use a Transformer, though? >> - it is however importantly missing semantics. (Steve P. tells me there is some related work that is worth looking into). Y&P speaks to an old tradition of formal language work by Gold and others that is quite popular but IMHO misguided, because it focuses purely on syntax rather than semantics. Gold?s work definitely motivates learnability but I have never taken it to seriously as a real model of language >> - doing what Y&P try to do with a rich artificial language that is focused around syntax-semantic mappings could be very interesting >> - on a somewhat but not entirely analogous note, i think that the next step in vision is really scene understanding. We have techniques for doing object labeling reasonably well, but still struggle wit parts and wholes are important, and with relations more generally, which is to say we need the semantics of scenes. is the chair on the floor, or floating in the air? is it supporting the pillow? etc. is the hand a part of the body? is the glove a part of the body? etc >> >> Best, >> Gary >> >> >> >>> On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote: >>> >>> Again, I think a relevant project here would be to attempt to replicate with DL-rnn, Yang and Piatiadosi's PNAS language learning system--which is a completely symbolic-- and very general over the Chomsky-Miller grammer classes. Let me know, happy to collaborate on something like this. >>> >>> Best >>> >>> Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jun 13 14:00:03 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Mon, 13 Jun 2022 14:00:03 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: <10485004-EEC1-429D-9123-5F1075AB7444@nyu.edu> References: <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> <10485004-EEC1-429D-9123-5F1075AB7444@nyu.edu> Message-ID: Nope.? But lets take this offline as one of us is confused. On 6/13/22 1:58 PM, Gary Marcus wrote: > I think you are conflating Bengio?s views with Kahneman?s > > Bengio wants to have a System I, which he thinks is not the same as > System II. He doesn?t want System II to be symbol-based, but he does > want to do many things that symbols have historically done. That is an > ambition, and we can see how it goes. My impression is he is on a road > towards recapitulating a lot of historically symbolic tools, such as > key-value pairs and operations that work over their pairs. We will see > where he gets to; it?s an interesting projects. > > Kahneman coined the terms; I prefer to call them Reflexive and > Deliberative. In my view deliberation of that sort requires symbols. > For what it?s worth Kahneman was enormously sympathetic (both publicly > and in an email) to my paper the Next Decade in AI, in which I argued > that one needed a neurosymbolic system with rich knowledge, and > reasoning over detailed cognitive models. 
> > It?s all an empirical question as to what can be done. > > I guess ?he? refers below to Bengio, but not to Kahneman who > originated the System I/II distinction. Danny is open about how these > things cache out, and would also be the first to tell you that the > distinction is just a rough one, in any event. > > Gary > >> On Jun 13, 2022, at 10:37, jose at rubic.rutgers.edu wrote: >> >> ? >> >> Well. your conclusion is based on some hearsay and a talk he gave, I >> talked with him directly and we discussed what >> >> you are calling SystemII which just means explicit memory/learning to >> me and him.. he has no intention of incorporating anything like >> symbols or >> >> hybrid Neural/Symbol systems..??? he does intend on modeling >> conscious symbol manipulation. more in the way Dave T. outlined. >> >> AND, I'm sure if he was seeing this.. he would say... "Steve's right". >> >> Steve >> >> On 6/13/22 1:10 PM, Gary Marcus wrote: >>> I don?t think i need to read your conversation to have serious >>> doubts about your conclusion, but feel free to reprise the arguments >>> here. >>> >>>> On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote: >>>> >>>> ? >>>> >>>> We prefer the explicit/implicit cognitive psych refs. but System II >>>> is not symbolic. >>>> >>>> See the AIHUB conversation about this.. we discuss this specifically. >>>> >>>> >>>> Steve >>>> >>>> >>>> On 6/13/22 10:00 AM, Gary Marcus wrote: >>>>> Please reread my sentence and reread his recent work. Bengio has >>>>> absolutely joined in calling for System II processes. Sample is >>>>> his 2019 NeurIPS keynote: >>>>> https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ >>>>> >>>>> >>>>> Whether he wants to call it a hybrid approach is his business but >>>>> he certainly sees that traditional approaches are not covering >>>>> things like causality and abstract generalization. Maybe he will >>>>> find a new way, but he recognizes what has not been covered with >>>>> existing ways. >>>>> >>>>> And he is emphasizing both relationships and out of distribution >>>>> learning, just as I have been for a long time. From his most >>>>> recent arXiv a few days ago, the first two sentences of which >>>>> sounds almost exactly like what I have been saying for years: >>>>> >>>>> Submitted on 9 Jun 2022] >>>>> >>>>> >>>>> On Neural Architecture Inductive Biases for Relational Tasks >>>>> >>>>> Giancarlo Kerg >>>>> , >>>>> Sarthak Mittal >>>>> , >>>>> David Rolnick >>>>> , >>>>> Yoshua Bengio >>>>> , >>>>> Blake Richards >>>>> , >>>>> Guillaume Lajoie >>>>> >>>>> >>>>> Current deep learning approaches have shown good >>>>> in-distribution generalization performance, but struggle with >>>>> out-of-distribution generalization. This is especially true in >>>>> the case of tasks involving abstract relations like >>>>> recognizing rules in sequences, as we find in many >>>>> intelligence tests. Recent work has explored how forcing >>>>> relational representations to remain distinct from sensory >>>>> representations, as it seems to be the case in the brain, can >>>>> help artificial systems. Building on this work, we further >>>>> explore and formalize the advantages afforded by 'partitioned' >>>>> representations of relations and sensory details, and how this >>>>> inductive bias can help recompose learned relational structure >>>>> in newly encountered settings. We introduce a simple >>>>> architecture based on similarity scores which we name >>>>> Compositional Relational Network (CoRelNet). 
Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations.
>>>>>
>>>>> Kind of scandalous that he doesn't ever cite me for having framed that argument, even if I have repeatedly called his attention to that oversight, but that's another story for a day, in which I elaborate on some of Schmidhuber's observations on history.
>>>>>
>>>>> Gary
>>>>>
>>>>>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote:
>>>>>>
>>>>>> No, Yoshua has *not* joined you --- explicit processes, memory, problem solving are not symbolic per se.
>>>>>>
>>>>>> These original distinctions in memory and learning were from Endel Tulving, and of course there are brain structures that support the distinctions.
>>>>>>
>>>>>> And Yoshua is clear about that in discussions I had with him in AIHUB.
>>>>>>
>>>>>> He's definitely not looking to create some hybrid approach..
>>>>>>
>>>>>> Steve
>>>>>>
>>>>>> On 6/13/22 8:36 AM, Gary Marcus wrote:
>>>>>>> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?
>>>>>>>
>>>>>>> Surely, at the very least
>>>>>>> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)
>>>>>>> - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.
>>>>>>>
>>>>>>> Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".
>>>>>>>
>>>>>>>> On Jun 13, 2022, at 00:31, Ali Minai wrote:
>>>>>>>>
>>>>>>>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."
>>>>>>>>
>>>>>>>> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.
>>>>>>>>
>>>>>>>> Best
>>>>>>>>
>>>>>>>> Ali
>>>>>>>> *Ali A. Minai, Ph.D.*
>>>>>>>> Professor and Graduate Program Director
>>>>>>>> Complex Adaptive Systems Lab
>>>>>>>> Department of Electrical Engineering & Computer Science
>>>>>>>> 828 Rhodes Hall
>>>>>>>> University of Cincinnati
>>>>>>>> Cincinnati, OH 45221-0030
>>>>>>>>
>>>>>>>> Phone: (513) 556-4783
>>>>>>>> Fax: (513) 556-7326
>>>>>>>> Email: Ali.Minai at uc.edu
>>>>>>>> minaiaa at gmail.com
>>>>>>>>
>>>>>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>>>>>
>>>>>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:
>>>>>>>>
>>>>>>>> The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:
>>>>>>>>
>>>>>>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>>>>>>>
>>>>>>>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":
>>>>>>>>
>>>>>>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>>>>>>>
>>>>>>>> My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.
>>>>>>>>
>>>>>>>> The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet.
>>>>>>>>
>>>>>>>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do."
>>>>>>>>
>>>>>>>> What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.
>>>>>>>> -- Dave Touretzky

From stephen.jose.hanson at rutgers.edu Mon Jun 13 11:45:16 2022 From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson) Date: Mon, 13 Jun 2022 15:45:16 +0000 Subject: Connectionists: Yang and Piantadosi's PNAS language system, semantics, and scene understanding In-Reply-To: <9D553D9E-CA9A-43E6-840F-0DD05F3C4D9E@nyu.edu> References: <1c3cab99-349a-a000-ecdd-4dfaf143a112@rubic.rutgers.edu> <9D553D9E-CA9A-43E6-840F-0DD05F3C4D9E@nyu.edu> Message-ID: <79e8952f-c227-a76d-3d5e-b4c176d11b45@rutgers.edu>

It would have to be updated with DL-RNNs or LSTMs.. S

On 6/13/22 9:13 AM, Gary Marcus wrote: I do remember the work :) Just generally Transformers seem more effective; a careful comparison between Y&P, Transformers, and your RNN approach, looking at generalization to novel words, would indeed be interesting. Cheers, Gary

On Jun 13, 2022, at 06:09, jose at rubic.rutgers.edu wrote: I was thinking more like an RNN, similar to work we had done in the 2000s on syntax.

Stephen José Hanson, Michiro Negishi; On the Emergence of Rules in Neural Networks. Neural Comput 2002; 14 (9): 2245-2268. doi: https://doi.org/10.1162/089976602320264079

Abstract: A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state-space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations used by the network, allowing new symbol sets to be encoded close to symbol sets that have already been learned in the hidden unit space of the network. The results are counter to the arguments that learning algorithms based on weight adaptation after each exemplar presentation (such as the long-term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples as prescribed by prevailing human linguistic theory and evolutionary psychology.

On 6/13/22 8:55 AM, Gary Marcus wrote:
- agree with Steve this is an interesting paper, and replicating it with a neural net would be interesting; cc'ing Steve Piantadosi.
- why not use a Transformer, though?
- it is however importantly missing semantics. (Steve P. tells me there is some related work that is worth looking into). Y&P speaks to an old tradition of formal language work by Gold and others that is quite popular but IMHO misguided, because it focuses purely on syntax rather than semantics. Gold's work definitely motivates learnability, but I have never taken it too seriously as a real model of language.
- doing what Y&P try to do with a rich artificial language that is focused around syntax-semantic mappings could be very interesting.
- on a somewhat but not entirely analogous note, I think that the next step in vision is really scene understanding.
We have techniques for doing object labeling reasonably well, but we still struggle with parts and wholes, and with relations more generally, which is to say we need the semantics of scenes. Is the chair on the floor, or floating in the air? Is it supporting the pillow? Is the hand a part of the body? Is the glove a part of the body? Etc.

Best, Gary

On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote: Again, I think a relevant project here would be to attempt to replicate, with a DL-RNN, Yang and Piantadosi's PNAS language learning system -- which is completely symbolic -- and very general over the Chomsky-Miller grammar classes. Let me know, happy to collaborate on something like this. Best Steve

From ASIM.ROY at asu.edu Mon Jun 13 19:48:47 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Mon, 13 Jun 2022 23:48:47 +0000 Subject: Connectionists: The symbolist quagmire In-Reply-To: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID:

There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.

1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterization of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) states that concept cells encode the "meaning of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "selectivity or specificity," "complex concept," "meaning," "multimodal invariance" and "abstractness." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.

2. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction: "Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity)."

Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and what not.
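To make the preceding claim concrete: below is a minimal, hedged sketch of what such "decoding outside the network" could look like -- a linear probe fit on frozen CNN features to verify a single part concept such as "ears". The backbone is a standard torchvision ResNet-18; the data loader, labels and threshold are hypothetical placeholders, and this illustrates the general idea rather than Roy's actual method.

# Hypothetical sketch: probing a frozen, pre-trained CNN for a part concept.
# The probe lives entirely outside the network; the CNN is never modified.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d penultimate features
backbone.eval()
for p in backbone.parameters():      # freeze the CNN itself
    p.requires_grad = False

probe = nn.Linear(512, 1)            # linear "decoder" for one concept
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def fit_probe(loader):
    # loader is a placeholder yielding (image batch, part-present labels)
    for images, has_part in loader:
        with torch.no_grad():
            feats = backbone(images)            # frozen (B, 512) features
        loss = loss_fn(probe(feats).squeeze(1), has_part.float())
        opt.zero_grad(); loss.backward(); opt.step()

def part_present(image, threshold=0.5):
    with torch.no_grad():
        score = torch.sigmoid(probe(backbone(image.unsqueeze(0))))
    return score.item() > threshold

On this picture, an "ostrich" output would only be trusted if probes for the long-neck and long-legs concepts also fire, which is the part-verification defence against adversarial tweaks described next.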
What the DARPA Explainable AI program was looking for was a symbolic-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA-conceptualized symbolic model provides that protection.

In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.

All the best,
Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy
Asim Roy | iSearch (asu.edu)

From: Connectionists On Behalf Of Gary Marcus Sent: Monday, June 13, 2022 5:36 AM To: Ali Minai Cc: Connectionists List Subject: Connectionists: The symbolist quagmire

Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?

Surely, at the very least
- we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)
- any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.

Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".

On Jun 13, 2022, at 00:31, Ali Minai wrote:

".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."

Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.

Best

Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/

On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post
The Washington Post story is reproduced here: https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness": https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense. The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet. Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do." What would happen if we built that in? Maybe LaMDA would rapidly descent into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days. -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 259567 bytes Desc: image001.png URL: From jose at rubic.rutgers.edu Mon Jun 13 11:44:14 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Mon, 13 Jun 2022 11:44:14 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: <5FE7AD49-0551-4E83-8530-5DC88337E22A@nyu.edu> References: <6de4dfa9-5dcc-9d78-b2a0-b8f3bc96e692@rubic.rutgers.edu> <5FE7AD49-0551-4E83-8530-5DC88337E22A@nyu.edu> Message-ID: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> We prefer the explicit/implicit cognitive psych refs. but System II is not symbolic. See the AIHUB conversation about this.. we discuss this specifically. Steve On 6/13/22 10:00 AM, Gary Marcus wrote: > Please reread my sentence and reread his recent work. Bengio has > absolutely joined in calling for System II processes. Sample is his > 2019 NeurIPS keynote: > https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ > > > Whether he wants to call it a hybrid approach is his business but he > certainly sees that traditional approaches are not covering things > like causality and abstract generalization. Maybe he will find a new > way, but he recognizes what has not been covered with existing ways. 
> > And he is emphasizing both relationships and out of distribution > learning, just as I have been for a long time. From his most recent > arXiv a few days ago, the first two sentences of which sounds almost > exactly like what I have been saying for years: > > Submitted on 9 Jun 2022] > > > On Neural Architecture Inductive Biases for Relational Tasks > > Giancarlo Kerg > , > Sarthak Mittal > , > David Rolnick > , > Yoshua Bengio > , > Blake Richards > , > Guillaume Lajoie > > > Current deep learning approaches have shown good in-distribution > generalization performance, but struggle with out-of-distribution > generalization. This is especially true in the case of tasks > involving abstract relations like recognizing rules in sequences, > as we find in many intelligence tests. Recent work has explored > how forcing relational representations to remain distinct from > sensory representations, as it seems to be the case in the brain, > can help artificial systems. Building on this work, we further > explore and formalize the advantages afforded by 'partitioned' > representations of relations and sensory details, and how this > inductive bias can help recompose learned relational structure in > newly encountered settings. We introduce a simple architecture > based on similarity scores which we name Compositional Relational > Network (CoRelNet). Using this model, we investigate a series of > inductive biases that ensure abstract relations are learned and > represented distinctly from sensory data, and explore their > effects on out-of-distribution generalization for a series of > relational psychophysics tasks. We find that simple architectural > choices can outperform existing models in out-of-distribution > generalization. Together, these results show that partitioning > relational representations from other information streams may be a > simple way to augment existing network architectures' robustness > when performing out-of-distribution relational computations. > > > Kind of scandalous that he doesn?t ever cite me for having framed > that argument, even if I have repeatedly called his attention to > that oversight, but that?s another story for a day, in which I > elaborate on some Schmidhuber?s observations on history. > > > Gary > >> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: >> >> ? >> >> No Yoshua has *not* joined you ---Explicit processes, memory, problem >> solving. .are not Symbolic per se. >> >> These original distinctions in memory and learning were? from Endel >> Tulving and of course there are brain structures that support the >> distinctions. >> >> and Yoshua is clear about that in discussions I had with him in AIHUB >> >> He's definitely not looking to create some hybrid approach.. >> >> Steve >> >> On 6/13/22 8:36 AM, Gary Marcus wrote: >>> Cute phrase, but what does ?symbolist quagmire? mean? Once upon >>> ?atime, Dave and Geoff were both pioneers in trying to getting >>> symbols and neural nets to live in harmony. Don?t we still need do >>> that, and if not, why not? 
>>> >>> Surely, at the very least >>> - we want our AI to be able to take advantage of the (large) >>> fraction of world knowledge that is represented in symbolic form >>> (language, including unstructured text, logic, math, programming etc) >>> - any model of the human mind ought be able to explain how humans >>> can so effectively communicate via the symbols of language and how >>> trained humans can deal with (to the extent that can) logic, math, >>> programming, etc >>> >>> Folks like Bengio have joined me in seeing the need for ?System II? >>> processes. That?s a bit of a rough approximation, but I don?t see >>> how we get to either AI or satisfactory models of the mind without >>> confronting the ?quagmire? >>> >>> >>>> On Jun 13, 2022, at 00:31, Ali Minai wrote: >>>> >>>> ? >>>> ".... symbolic representations are a fiction our non-symbolic >>>> brains cooked up because the properties of symbol systems >>>> (systematicity, compositionality, etc.) are tremendously useful.? >>>> So our brains pretend to be rule-based symbolic systems when it >>>> suits them, because it's adaptive to do so." >>>> >>>> Spot on, Dave! We should not wade back into the symbolist quagmire, >>>> but do need to figure out how apparently symbolic processing can be >>>> done by neural systems. Models like those of Eliasmith and >>>> Smolensky provide some insight, but still seem far from both >>>> biological plausibility and real-world scale. >>>> >>>> Best >>>> >>>> Ali >>>> >>>> >>>> *Ali A. Minai, Ph.D.* >>>> Professor and Graduate Program Director >>>> Complex Adaptive Systems Lab >>>> Department of Electrical Engineering & Computer Science >>>> 828 Rhodes Hall >>>> University of Cincinnati >>>> Cincinnati, OH 45221-0030 >>>> >>>> Phone: (513) 556-4783 >>>> Fax: (513) 556-7326 >>>> Email: Ali.Minai at uc.edu >>>> minaiaa at gmail.com >>>> >>>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>>> >>>> >>>> >>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky >>> > wrote: >>>> >>>> This timing of this discussion dovetails nicely with the news story >>>> about Google engineer Blake Lemoine being put on administrative >>>> leave >>>> for insisting that Google's LaMDA chatbot was sentient and >>>> reportedly >>>> trying to hire a lawyer to protect its rights.? The Washington Post >>>> story is reproduced here: >>>> >>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>> >>>> >>>> Google vice president Blaise Aguera y Arcas, who dismissed >>>> Lemoine's >>>> claims, is featured in a recent Economist article showing off >>>> LaMDA's >>>> capabilities and making noises about getting closer to >>>> "consciousness": >>>> >>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>> >>>> >>>> My personal take on the current symbolist controversy is that >>>> symbolic >>>> representations are a fiction our non-symbolic brains cooked up >>>> because >>>> the properties of symbol systems (systematicity, >>>> compositionality, etc.) >>>> are tremendously useful.? So our brains pretend to be >>>> rule-based symbolic >>>> systems when it suits them, because it's adaptive to do so.? >>>> (And when >>>> it doesn't suit them, they draw on "intuition" or "imagery" or some >>>> other mechanisms we can't verbalize because they're not >>>> symbolic.)? They >>>> are remarkably good at this pretense. 
>>>> >>>> The current crop of deep neural networks are not as good at >>>> pretending >>>> to be symbolic reasoners, but they're making progress.? In the >>>> last 30 >>>> years we've gone from networks of fully-connected layers that >>>> make no >>>> architectural assumptions ("connectoplasm") to complex >>>> architectures >>>> like LSTMs and transformers that are designed for approximating >>>> symbolic >>>> behavior.? But the brain still has a lot of symbol simulation >>>> tricks we >>>> haven't discovered yet. >>>> >>>> Slashdot reader ZiggyZiggyZig had an interesting argument >>>> against LaMDA >>>> being conscious.? If it just waits for its next input and >>>> responds when >>>> it receives it, then it has no autonomous existence: "it >>>> doesn't have an >>>> inner monologue that constantly runs and comments everything >>>> happening >>>> around it as well as its own thoughts, like we do." >>>> >>>> What would happen if we built that in?? Maybe LaMDA would rapidly >>>> descent into gibberish, like some other text generation models >>>> do when >>>> allowed to ramble on for too long.? But as Steve Hanson points out, >>>> these are still the early days. >>>> >>>> -- Dave Touretzky >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at rubic.rutgers.edu Mon Jun 13 09:44:20 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Mon, 13 Jun 2022 09:44:20 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID: <6de4dfa9-5dcc-9d78-b2a0-b8f3bc96e692@rubic.rutgers.edu> No Yoshua has *not* joined you ---Explicit processes, memory, problem solving. .are not Symbolic per se. These original distinctions in memory and learning were? from Endel Tulving and of course there are brain structures that support the distinctions. and Yoshua is clear about that in discussions I had with him in AIHUB He's definitely not looking to create some hybrid approach.. Steve On 6/13/22 8:36 AM, Gary Marcus wrote: > Cute phrase, but what does ?symbolist quagmire? mean? Once upon > ?atime, Dave and Geoff were both pioneers in trying to getting symbols > and neural nets to live in harmony. Don?t we still need do that, and > if not, why not? > > Surely, at the very least > - we want our AI to be able to take advantage of the (large) fraction > of world knowledge that is represented in symbolic form (language, > including unstructured text, logic, math, programming etc) > - any model of the human mind ought be able to explain how humans can > so effectively communicate via the symbols of language and how trained > humans can deal with (to the extent that can) logic, math, > programming, etc > > Folks like Bengio have joined me in seeing the need for ?System II? > processes. That?s a bit of a rough approximation, but I don?t see how > we get to either AI or satisfactory models of the mind without > confronting the ?quagmire? > > >> On Jun 13, 2022, at 00:31, Ali Minai wrote: >> >> ? >> ".... symbolic representations are a fiction our non-symbolic brains >> cooked up because the properties of symbol systems (systematicity, >> compositionality, etc.) are tremendously useful.? So our brains >> pretend to be rule-based symbolic systems when it suits them, because >> it's adaptive to do so." >> >> Spot on, Dave! 
We should not wade back into the symbolist quagmire, >> but do need to figure out how apparently symbolic processing can be >> done by neural systems. Models like those of Eliasmith and Smolensky >> provide some insight, but still seem far from both biological >> plausibility and real-world scale. >> >> Best >> >> Ali >> >> >> *Ali A. Minai, Ph.D.* >> Professor and Graduate Program Director >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computer Science >> 828 Rhodes Hall >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: https://eecs.ceas.uc.edu/~aminai/ >> >> >> >> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky > > wrote: >> >> This timing of this discussion dovetails nicely with the news story >> about Google engineer Blake Lemoine being put on administrative leave >> for insisting that Google's LaMDA chatbot was sentient and reportedly >> trying to hire a lawyer to protect its rights.? The Washington Post >> story is reproduced here: >> >> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >> >> >> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >> claims, is featured in a recent Economist article showing off LaMDA's >> capabilities and making noises about getting closer to >> "consciousness": >> >> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >> >> >> My personal take on the current symbolist controversy is that >> symbolic >> representations are a fiction our non-symbolic brains cooked up >> because >> the properties of symbol systems (systematicity, >> compositionality, etc.) >> are tremendously useful.? So our brains pretend to be rule-based >> symbolic >> systems when it suits them, because it's adaptive to do so.? (And >> when >> it doesn't suit them, they draw on "intuition" or "imagery" or some >> other mechanisms we can't verbalize because they're not >> symbolic.)? They >> are remarkably good at this pretense. >> >> The current crop of deep neural networks are not as good at >> pretending >> to be symbolic reasoners, but they're making progress.? In the >> last 30 >> years we've gone from networks of fully-connected layers that make no >> architectural assumptions ("connectoplasm") to complex architectures >> like LSTMs and transformers that are designed for approximating >> symbolic >> behavior.? But the brain still has a lot of symbol simulation >> tricks we >> haven't discovered yet. >> >> Slashdot reader ZiggyZiggyZig had an interesting argument against >> LaMDA >> being conscious.? If it just waits for its next input and >> responds when >> it receives it, then it has no autonomous existence: "it doesn't >> have an >> inner monologue that constantly runs and comments everything >> happening >> around it as well as its own thoughts, like we do." >> >> What would happen if we built that in?? Maybe LaMDA would rapidly >> descent into gibberish, like some other text generation models do >> when >> allowed to ramble on for too long.? But as Steve Hanson points out, >> these are still the early days. >> >> -- Dave Touretzky >> -------------- next part -------------- An HTML attachment was scrubbed... 
From myoshi at chain.hokudai.ac.jp Mon Jun 13 23:14:12 2022 From: myoshi at chain.hokudai.ac.jp (YOSHIDA, Masatoshi) Date: Tue, 14 Jun 2022 12:14:12 +0900 Subject: Connectionists: Online seminar: Prof Masataka Watanabe (Univ Tokyo) on consciousness Message-ID:

Dear all,

The 23rd CHAIN seminar will be held at 9 am Friday 24, June 2022 (Japan Standard Time). It will be presented by Prof Masataka Watanabe (Univ Tokyo). His talk is entitled: "Solving the Hard Problem and Closing the Explanatory Gap with Natural Laws of Consciousness - Towards a Scientific Investigation of Consciousness and Mind-Uploading -"

This event will be held remotely via Zoom. Registration is needed. Meeting information is provided below. Looking forward to seeing you there.

Best regards,
Masatoshi

--------------------------
The 23rd CHAIN seminar
Time: 9 am-10:30 am Friday 24, June 2022 (Japan Standard Time)
Zoom registration: https://zoom.us/meeting/register/tJEpf-6qpjojGNeQDTfZ1Y9YNAc17bYgJLYq
Speaker: Prof Masataka Watanabe (Univ Tokyo)
Title: Solving the Hard Problem and Closing the Explanatory Gap with Natural Laws of Consciousness - Towards a Scientific Investigation of Consciousness and Mind-Uploading -

Abstract: Natural laws, such as the principle of the constancy of light velocity, are vital to science. If we trace any scientific theory back to its origins, we will end up with these natural laws. If that is the case, we should take it for granted that the science of consciousness requires one too. The science of consciousness is our very first attempt to bind the objective with the subjective; the objective is what can be observed when we insert a recording electrode, while the subjective comes in the form of subjective experience and poses the question of what a neural network is experiencing in the first person. As soon as we acknowledge its necessity, both the explanatory gap and the hard problem vanish into thin air; the natural law is what closes the gap and nullifies the difficulty of the problem. I explain the above argument and further propose an experimental approach to validate such laws of nature, which, as a byproduct, results in a viable method for mind uploading. The key to the approach is a totally new type of BMI that enables reading and writing information with unprecedented precision.

References:
Masataka Watanabe, "From Biological to Artificial Consciousness", Springer Nature Switzerland AG, 2022 https://doi.org/10.1007/978-3-030-91138-6
Masataka Watanabe, Kang Cheng, Yusuke Murayama, Kenichi Ueno, Takeshi Asamizuya, Keiji Tanaka, Nikos Logothetis. Attention But Not Awareness Modulates the BOLD Signal in the Human V1 During Binocular Suppression. SCIENCE. 334. 6057. 829-831. 2011 https://www.science.org/doi/10.1126/science.1203161
--------------------------

------------------------------------------------------
Masatoshi Yoshida, Ph.D., associate professor
Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN), Hokkaido University, Japan
URL: https://www.chain.hokudai.ac.jp/en/
Email: pooneil68 at gmail.com
------------------------------------------------------
From geoffrey.hinton at gmail.com Mon Jun 13 12:43:58 2022 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Mon, 13 Jun 2022 12:43:58 -0400 Subject: Connectionists: LaMDA, Lemoine and Sentience In-Reply-To: <8CD5B234-6AA6-4EC7-BF80-646CF113D4A2@nyu.edu> References: <8CD5B234-6AA6-4EC7-BF80-646CF113D4A2@nyu.edu> Message-ID:

Your last message refers us to a site where you say: "Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn't actually mean anything at all."

It's that kind of confident dismissal of work that most researchers think is very impressive that irritates people like me. Back in the time of Winograd's thesis, the AI community accepted that a computer understood sentences like "put the red block in the blue box" if it actually followed that instruction in a simulated world. So if I tell a robot to open the drawer and take out a pen and it does just that, would you accept that it understood the language and was just a little bit intelligent? And if the action consists of drawing an appropriate picture, would that also indicate it understood the language? Where does your extreme confidence that the language does not mean anything at all come from?

Your arguments typically consist of finding cases where a neural net screws up and then saying "See, it doesn't really understand language". When a learning-disabled child fails to understand a sentence, do you conclude that they do not understand language at all, even though they can correctly answer quite a few questions and laugh appropriately at quite a few jokes?

Geoff

On Mon, Jun 13, 2022 at 9:16 AM Gary Marcus wrote:
> My opinion (which for once is not really all that controversial in the AI community): this is nonsense on stilts, as discussed here: https://garymarcus.substack.com/p/nonsense-on-stilts
>
> On Jun 12, 2022, at 22:37, Dave Touretzky wrote:
>
> The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so.
> (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.
>
> The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet.
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do."
>
> What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.
>
> -- Dave Touretzky

From gary.marcus at nyu.edu Mon Jun 13 13:58:04 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 10:58:04 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> References: <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> Message-ID: <10485004-EEC1-429D-9123-5F1075AB7444@nyu.edu>

I think you are conflating Bengio's views with Kahneman's.

Bengio wants to have a System I, which he thinks is not the same as System II. He doesn't want System II to be symbol-based, but he does want to do many things that symbols have historically done. That is an ambition, and we can see how it goes. My impression is he is on a road towards recapitulating a lot of historically symbolic tools, such as key-value pairs and operations that work over those pairs. We will see where he gets to; it's an interesting project.

Kahneman coined the terms; I prefer to call them Reflexive and Deliberative. In my view deliberation of that sort requires symbols. For what it's worth, Kahneman was enormously sympathetic (both publicly and in an email) to my paper The Next Decade in AI, in which I argued that one needed a neurosymbolic system with rich knowledge, and reasoning over detailed cognitive models.

It's all an empirical question as to what can be done.

I guess "he" refers below to Bengio, but not to Kahneman, who originated the System I/II distinction. Danny is open about how these things cash out, and would also be the first to tell you that the distinction is just a rough one, in any event.

Gary

> On Jun 13, 2022, at 10:37, jose at rubic.rutgers.edu wrote:
>
> Well, your conclusion is based on some hearsay and a talk he gave; I talked with him directly and we discussed what you are calling System II, which just means explicit memory/learning to me and him.. he has no intention of incorporating anything like symbols or hybrid Neural/Symbol systems.. he does intend to model conscious symbol manipulation, more in the way Dave T. outlined.
>
> AND, I'm sure if he was seeing this.. he would say... "Steve's right".
> > Steve > >> On 6/13/22 1:10 PM, Gary Marcus wrote: >> I don?t think i need to read your conversation to have serious doubts about your conclusion, but feel free to reprise the arguments here. >> >>> On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote: >>> >>> ? >>> We prefer the explicit/implicit cognitive psych refs. but System II is not symbolic. >>> >>> See the AIHUB conversation about this.. we discuss this specifically. >>> >>> >>> >>> Steve >>> >>> >>> >>> On 6/13/22 10:00 AM, Gary Marcus wrote: >>>> Please reread my sentence and reread his recent work. Bengio has absolutely joined in calling for System II processes. Sample is his 2019 NeurIPS keynote: https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ >>>> >>>> Whether he wants to call it a hybrid approach is his business but he certainly sees that traditional approaches are not covering things like causality and abstract generalization. Maybe he will find a new way, but he recognizes what has not been covered with existing ways. >>>> >>>> And he is emphasizing both relationships and out of distribution learning, just as I have been for a long time. From his most recent arXiv a few days ago, the first two sentences of which sounds almost exactly like what I have been saying for years: >>>> >>>> Submitted on 9 Jun 2022] >>>> On Neural Architecture Inductive Biases for Relational Tasks >>>> Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie >>>> Current deep learning approaches have shown good in-distribution generalization performance, but struggle with out-of-distribution generalization. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as we find in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as it seems to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores which we name Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations. >>>> >>>> Kind of scandalous that he doesn?t ever cite me for having framed that argument, even if I have repeatedly called his attention to that oversight, but that?s another story for a day, in which I elaborate on some Schmidhuber?s observations on history. >>>> >>>> Gary >>>> >>>>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: >>>>> >>>>> ? >>>>> No Yoshua has *not* joined you ---Explicit processes, memory, problem solving. .are not Symbolic per se. 
>>>>> >>>>> These original distinctions in memory and learning were from Endel Tulving and of course there are brain structures that support the distinctions. >>>>> >>>>> and Yoshua is clear about that in discussions I had with him in AIHUB >>>>> >>>>> He's definitely not looking to create some hybrid approach.. >>>>> >>>>> Steve >>>>> >>>>> On 6/13/22 8:36 AM, Gary Marcus wrote: >>>>>> Cute phrase, but what does ?symbolist quagmire? mean? Once upon atime, Dave and Geoff were both pioneers in trying to getting symbols and neural nets to live in harmony. Don?t we still need do that, and if not, why not? >>>>>> >>>>>> Surely, at the very least >>>>>> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc) >>>>>> - any model of the human mind ought be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that can) logic, math, programming, etc >>>>>> >>>>>> Folks like Bengio have joined me in seeing the need for ?System II? processes. That?s a bit of a rough approximation, but I don?t see how we get to either AI or satisfactory models of the mind without confronting the ?quagmire? >>>>>> >>>>>> >>>>>>> On Jun 13, 2022, at 00:31, Ali Minai wrote: >>>>>>> >>>>>>> ? >>>>>>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." >>>>>>> >>>>>>> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. >>>>>>> >>>>>>> Best >>>>>>> >>>>>>> Ali >>>>>>> >>>>>>> >>>>>>> Ali A. Minai, Ph.D. >>>>>>> Professor and Graduate Program Director >>>>>>> Complex Adaptive Systems Lab >>>>>>> Department of Electrical Engineering & Computer Science >>>>>>> 828 Rhodes Hall >>>>>>> University of Cincinnati >>>>>>> Cincinnati, OH 45221-0030 >>>>>>> >>>>>>> Phone: (513) 556-4783 >>>>>>> Fax: (513) 556-7326 >>>>>>> Email: Ali.Minai at uc.edu >>>>>>> minaiaa at gmail.com >>>>>>> >>>>>>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>>>>>> >>>>>>> >>>>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: >>>>>>>> This timing of this discussion dovetails nicely with the news story >>>>>>>> about Google engineer Blake Lemoine being put on administrative leave >>>>>>>> for insisting that Google's LaMDA chatbot was sentient and reportedly >>>>>>>> trying to hire a lawyer to protect its rights. 
The Washington Post >>>>>>>> story is reproduced here: >>>>>>>> >>>>>>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>>>>>> >>>>>>>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >>>>>>>> claims, is featured in a recent Economist article showing off LaMDA's >>>>>>>> capabilities and making noises about getting closer to "consciousness": >>>>>>>> >>>>>>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>>>>>> >>>>>>>> My personal take on the current symbolist controversy is that symbolic >>>>>>>> representations are a fiction our non-symbolic brains cooked up because >>>>>>>> the properties of symbol systems (systematicity, compositionality, etc.) >>>>>>>> are tremendously useful. So our brains pretend to be rule-based symbolic >>>>>>>> systems when it suits them, because it's adaptive to do so. (And when >>>>>>>> it doesn't suit them, they draw on "intuition" or "imagery" or some >>>>>>>> other mechanisms we can't verbalize because they're not symbolic.) They >>>>>>>> are remarkably good at this pretense. >>>>>>>> >>>>>>>> The current crop of deep neural networks are not as good at pretending >>>>>>>> to be symbolic reasoners, but they're making progress. In the last 30 >>>>>>>> years we've gone from networks of fully-connected layers that make no >>>>>>>> architectural assumptions ("connectoplasm") to complex architectures >>>>>>>> like LSTMs and transformers that are designed for approximating symbolic >>>>>>>> behavior. But the brain still has a lot of symbol simulation tricks we >>>>>>>> haven't discovered yet. >>>>>>>> >>>>>>>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >>>>>>>> being conscious. If it just waits for its next input and responds when >>>>>>>> it receives it, then it has no autonomous existence: "it doesn't have an >>>>>>>> inner monologue that constantly runs and comments everything happening >>>>>>>> around it as well as its own thoughts, like we do." >>>>>>>> >>>>>>>> What would happen if we built that in? Maybe LaMDA would rapidly >>>>>>>> descent into gibberish, like some other text generation models do when >>>>>>>> allowed to ramble on for too long. But as Steve Hanson points out, >>>>>>>> these are still the early days. >>>>>>>> >>>>>>>> -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgf at isep.ipp.pt Mon Jun 13 14:17:14 2022 From: cgf at isep.ipp.pt (Carlos) Date: Mon, 13 Jun 2022 19:17:14 +0100 Subject: Connectionists: CFP: Workshop on Big Data & Deep Learning in High Performance Computing (BDL 2022) - IEEE SBAC-PAD 2022 Message-ID: <012e2202-6c36-608e-2fba-ef7c25fe8a02@isep.ipp.pt> --------------- CALL FOR PAPERS --------------- BDL 2022 Workshop on Big Data & Deep Learning in High Performance Computing in conjunction with IEEE SBAC-PAD 2022 Bordeaux, France, November 2-5, 2022 https://www.dcc.fc.up.pt/bdl2022/ ---------------------- Aims and scope of BDL ---------------------- The number of very large data repositories (big data) is increasing in a rapid pace. Analysis of such repositories using the traditional sequential implementations of Machine Learning (ML) and emerging techniques, like deep learning, that model high-level abstractions in data by using multiple processing layers, requires expensive computational resources and long running times. 
Parallel or distributed computing are possible approaches that can make analysis of very large repositories and exploration of high-level representations feasible. Taking advantage of a parallel or a distributed execution of a ML/statistical system may: i) increase its speed; ii) learn hidden representations; iii) search a larger space and reach a better solution or; iv) increase the range of applications where it can be used (because it can process more data, for example). Parallel and distributed computing is therefore of high importance to extract knowledge from massive amounts of data and learn hidden representations. The workshop will be concerned with the exchange of experience among academics, researchers and the industry whose work in big data and deep learning require high performance computing to achieve goals. Participants will present recently developed algorithms/systems, on going work and applications taking advantage of such parallel or distributed environments. ------ Topics ------ BDL 2022 invites papers on all topics in novel data-intensive computing techniques, data storage and integration schemes, and algorithms for cutting-edge high performance computing architectures which targets Big Data and Deep Learning are of interest to the workshop. Examples of topics include but not limited to: * parallel algorithms for data-intensive applications; * scalable data and text mining and information retrieval; * using Hadoop, MapReduce, Spark, Storm, Streaming to analyze Big Data; * energy-efficient data-intensive computing; * deep-learning with massive-scale datasets; * querying and visualization of large network datasets; * processing large-scale datasets on clusters of multicore and manycore processors, and accelerators; * heterogeneous computing for Big Data architectures; * Big Data in the Cloud; * processing and analyzing high-resolution images using high-performance computing; * using hybrid infrastructures for Big Data analysis; * new algorithms for parallel/distributed execution of ML systems; * applications of big data and deep learning to real-life problems. ------------------ Program Chairs ------------------ Jo?o Gama, University of Porto, Portugal Carlos Ferreira, Polytechnic Institute of Porto, Portugal Miguel Areias, University of Porto, Portugal ----------------- Program Committee ----------------- TBA ---------------- Important dates ---------------- Submission deadline: July 1, 2022(AoE) Author notification: July 30, 2022 Camera-ready: September 12, 2022 Registration deadline: August 20, 2022 ---------------- Paper submission ---------------- Papers submitted to BDL 2022 must describe original research results and must not have been published or simultaneously submitted anywhere else. Manuscripts must follow the IEEE conference formatting guidelines and submitted via the EasyChair Conference Management System as one pdf file. The strict page limit for initial submission and camera-ready version is 8 pages in the aforementioned format. Each paper will receive a minimum of three reviews by members of the international technical program committee. Papers will be selected based on their originality, relevance, technical clarity and quality of presentation. At least one author of each accepted paper must register for the BDL 2022 workshop and present the paper. ----------- Proceedings ----------- All accepted papers will be published at IEEE Xplore. Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. 
António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From hussain.doctor at gmail.com Mon Jun 13 16:08:57 2022 From: hussain.doctor at gmail.com (Amir Hussain) Date: Mon, 13 Jun 2022 21:08:57 +0100 Subject: Connectionists: Announcing the first COG-MHEAR Audio-visual Speech Enhancement Challenge (AVSEC) - as part of IEEE SLT 2022 In-Reply-To: References: Message-ID: Dear all (please help share with colleagues) We are pleased to announce the launch of *the* *first COG-MHEAR Audio-visual Speech Enhancement Challenge (AVSEC)* - http://challenge.cogmhear.org Participants will work on a large dataset derived from TED talks to enhance speech in extremely challenging noisy environments and with competing speakers. The performance will be evaluated using human listening tests as well as with objective measures. We hope that the Challenge will create a benchmark for AVSEC research that will be useful for years to come. The challenge data and development tools are now available - for details see the challenge website: https://challenge.cogmhear.org/#/ and our github repository: https://github.com/cogmhear/avse_challenge AVSEC has been accepted as an official challenge at the *IEEE Spoken Language Technology (SLT) Workshop* (https://slt2022.org/) to be held in Doha, Qatar, 9-12 Jan 2023, where a special session will be run. *Important Dates* 1st May 2022: Challenge website launch 31st May 2022: Release of the full toolset, training/development data and baseline system 1 June 2022: Registration for challenge entrants opens 25th July 2022: Evaluation data released 1st Sept 2022: Submission deadline for evaluation (by objective and subjective measures) 9th Jan 2023: Results announced at IEEE SLT 2022 *Background:* Human performance in everyday noisy situations is known to be dependent upon both aural and visual senses that are contextually combined by the brain's multi-level integration strategies. The multimodal nature of speech is well established, with listeners known to unconsciously lip-read to improve the intelligibility of speech in a real noisy environment. It has been shown that the visual aspect of speech has a potentially strong impact on the ability of humans to focus their auditory attention on a particular stimulus. The aim of the first AVSEC is to bring together the wider computer vision, hearing and speech research communities to explore novel approaches to multimodal speech-in-noise processing. Both raw and pre-processed AV datasets - derived from TED talk videos - will be made available to participants for training and development of audio-visual models to perform speech enhancement and speaker separation at SNR levels that will be significantly more challenging than those typically used in audio-only scenarios. Baseline neural network models and a training recipe will be provided. In addition to participation at IEEE SLT, Challenge participants will be invited to contribute to a Journal Special Issue on the topic of Audio-Visual Speech Enhancement that will be announced early next year.
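For orientation, the task itself can be pictured with a toy mask-based model in Python. This is purely an illustrative sketch with every dimension an assumption made for the example, and it bears no relation to the official challenge baseline: per-frame lip features are fused with the noisy magnitude spectrogram to predict a multiplicative time-frequency mask.

import torch
import torch.nn as nn

class ToyAVEnhancer(nn.Module):
    # Fuses per-frame visual (lip) features with the noisy magnitude
    # spectrogram and predicts a multiplicative time-frequency mask.
    def __init__(self, n_freq=257, vis_dim=512, hidden=256):
        super().__init__()
        self.audio_rnn = nn.GRU(n_freq, hidden, batch_first=True)
        self.video_proj = nn.Linear(vis_dim, hidden)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_spec, lip_feats):
        # noisy_spec: (batch, frames, n_freq); lip_feats: (batch, frames, vis_dim),
        # with the visual stream upsampled to the audio frame rate.
        a, _ = self.audio_rnn(noisy_spec)
        v = self.video_proj(lip_feats)
        mask = self.mask_head(torch.cat([a, v], dim=-1))
        return mask * noisy_spec  # enhanced magnitude spectrogram

model = ToyAVEnhancer()
print(model(torch.rand(2, 100, 257), torch.rand(2, 100, 512)).shape)  # (2, 100, 257)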
*Further information*: If you are interested in participating and wish to receive further information, please sign up here: https://challenge.cogmhear.org/#/getting-started/register If you have questions, contact us directly at: cogmhear at napier.ac.uk *Organising Team*: Amir Hussain, Edinburgh Napier University, UK (co-Chair) Peter Bell, University of Edinburgh, UK (co-Chair) Mandar Gogate, Edinburgh Napier University, UK Cassia Valentini Botinhao, University of Edinburgh, UK Kia Dashtipour, Edinburgh Napier University, UK Lorena Aldana, University of Edinburgh, UK Evaluation Panel Chair: John Hansen, University of Texas in Dallas, USA Scientific Committee Chair: Michael Akeroyd, University of Nottingham, UK Industry co-ordinator: Peter Derleth, Sonova AG Funded by the UK Engineering and Physical Sciences Research Council (EPSRC) programme grant: COG-MHEAR (http://cogmhear.org) Supported by RNID (formerly Action on Hearing Loss), Deaf Scotland, Sonova AG -- Professor Amir Hussain School of Computing, Edinburgh Napier University, Scotland, UK E-mail: A.Hussain at napier.ac.uk https://www.napier.ac.uk/people/amir-hussain From minaiaa at gmail.com Mon Jun 13 14:22:17 2022 From: minaiaa at gmail.com (Ali Minai) Date: Mon, 13 Jun 2022 14:22:17 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> References: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> <73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu> <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> Message-ID: Gary and Steve My use of the phrase "symbolic quagmire" referred only to the explicitly symbolic AI models that dominated from the 60s through the 80s. It was not meant to diminish the importance of understanding symbolic processing and how a distributed, self-organizing system like the brain does it. That is absolutely crucial - as long as we let the systems be brain-like, and not force-fit them into our abstract views of symbolic processing (not saying that anyone here is doing that, but some others are). My own - frankly biased and entirely intuitive - opinion is that once we have a sufficiently brain-like system with the kind of hierarchical modularity we see in the brain, and sufficiently brain-like learning mechanisms in all their aspects (base of evolutionary inductive biases, realized initially through unsupervised learning, fast RL on top of these coupled with development, then - later - supervised learning in a more mature system, learning through internal rehearsal, learning by prediction mismatch/resonance, use of coordination modes/synergies, etc., etc.), processing that we can interpret as symbolic and compositional will emerge naturally.
To this end, we can try to infer neural mechanisms underlying this from experiments and theory (as Bengio seems to be doing), but I have a feeling that it will be hard if we focus only on humans and human-level processing. First, it's very hard to do controlled experiments at the required resolution, and second, this is the most complex instance. As Prof. Smith says in the companion thread, we should ask if animals too do what we would regard as symbolic processing, and I think that a case can be made that they do, albeit at a much simpler level. I have long been fascinated by the data suggesting that birds - perhaps even fish - have the concept of numerical order, and even something like a number line. If we could understand how those simpler brains do it, it might be easier to bootstrap up to more complex instances. Ultimately we'll understand higher cognition by understanding how it evolved from less complex cognition. For example, several people have suggested that abstract representations might be much higher-dimensional cortical analogs of 2-dimensional hippocampal place representations (2-d in rats - maybe higher-d in primates). That would be consistent with the fact that so much of our abstract reasoning uses spatial and directional metaphors. Re. System I and System II, with all due respect to Kahneman, that is surely a simplification. If we were to look phylogenetically, we would see the layered emergence of more and more complex minds all the way from the Cambrian to now. The binary I and II division should be replaced by a sequence of systems, though, as with everything in evolution, there are a few major punctuations of transformational "enabling technologies", such as the bilaterian architecture at the start of the Cambrian, the vertebrate architecture, the hippocampus, and the cortex. Truly hybrid systems - neural networks working in tandem with explicitly symbolic systems - might be a short-term route to addressing specific tasks, but will not give us fundamental insight. That is exactly the kind of "error" that Gary has so correctly attributed to much of current machine learning. I realize that reductionistic analysis and modeling is the standard way we understand systems scientifically, but complex systems are resistant to such analysis. Best Ali *Ali A. Minai, Ph.D.* Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 1:37 PM wrote: > Well, your conclusion is based on some hearsay and a talk he gave; I > talked with him directly and we discussed what > > you are calling System II, which just means explicit memory/learning to me > and him.. he has no intention of incorporating anything like symbols or > > hybrid Neural/Symbol systems.. he does intend on modeling conscious > symbol manipulation, more in the way Dave T. outlined. > > AND, I'm sure if he was seeing this.. he would say... "Steve's right". > > Steve > On 6/13/22 1:10 PM, Gary Marcus wrote: > > I don't think I need to read your conversation to have serious doubts > about your conclusion, but feel free to reprise the arguments here. > > On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote: > > We prefer the explicit/implicit cognitive psych refs. but System II is not
> > See the AIHUB conversation about this.. we discuss this specifically. > > > Steve > > > On 6/13/22 10:00 AM, Gary Marcus wrote: > > Please reread my sentence and reread his recent work. Bengio has > absolutely joined in calling for System II processes. Sample is his 2019 > NeurIPS keynote: > https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ > > > Whether he wants to call it a hybrid approach is his business but he > certainly sees that traditional approaches are not covering things like > causality and abstract generalization. Maybe he will find a new way, but he > recognizes what has not been covered with existing ways. > > And he is emphasizing both relationships and out of distribution learning, > just as I have been for a long time. From his most recent arXiv a few days > ago, the first two sentences of which sounds almost exactly like what I > have been saying for years: > > Submitted on 9 Jun 2022] > On Neural Architecture Inductive Biases for Relational Tasks > Giancarlo Kerg > > , Sarthak Mittal > > , David Rolnick > > , Yoshua Bengio > > , Blake Richards > > , Guillaume Lajoie > > > Current deep learning approaches have shown good in-distribution > generalization performance, but struggle with out-of-distribution > generalization. This is especially true in the case of tasks involving > abstract relations like recognizing rules in sequences, as we find in many > intelligence tests. Recent work has explored how forcing relational > representations to remain distinct from sensory representations, as it > seems to be the case in the brain, can help artificial systems. Building on > this work, we further explore and formalize the advantages afforded by > 'partitioned' representations of relations and sensory details, and how > this inductive bias can help recompose learned relational structure in > newly encountered settings. We introduce a simple architecture based on > similarity scores which we name Compositional Relational Network > (CoRelNet). Using this model, we investigate a series of inductive biases > that ensure abstract relations are learned and represented distinctly from > sensory data, and explore their effects on out-of-distribution > generalization for a series of relational psychophysics tasks. We find that > simple architectural choices can outperform existing models in > out-of-distribution generalization. Together, these results show that > partitioning relational representations from other information streams may > be a simple way to augment existing network architectures' robustness when > performing out-of-distribution relational computations. > > > Kind of scandalous that he doesn?t ever cite me for having framed that > argument, even if I have repeatedly called his attention to that oversight, > but that?s another story for a day, in which I elaborate on some > Schmidhuber?s observations on history. > > > Gary > > On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: > > ? > > No Yoshua has *not* joined you ---Explicit processes, memory, problem > solving. .are not Symbolic per se. > > These original distinctions in memory and learning were from Endel > Tulving and of course there are brain structures that support the > distinctions. > > and Yoshua is clear about that in discussions I had with him in AIHUB > > He's definitely not looking to create some hybrid approach.. > > Steve > On 6/13/22 8:36 AM, Gary Marcus wrote: > > Cute phrase, but what does ?symbolist quagmire? mean? 
Once upon atime, > Dave and Geoff were both pioneers in trying to getting symbols and neural > nets to live in harmony. Don?t we still need do that, and if not, why not? > > Surely, at the very least > - we want our AI to be able to take advantage of the (large) fraction of > world knowledge that is represented in symbolic form (language, including > unstructured text, logic, math, programming etc) > - any model of the human mind ought be able to explain how humans can so > effectively communicate via the symbols of language and how trained humans > can deal with (to the extent that can) logic, math, programming, etc > > Folks like Bengio have joined me in seeing the need for ?System II? > processes. That?s a bit of a rough approximation, but I don?t see how we > get to either AI or satisfactory models of the mind without confronting the > ?quagmire? > > > On Jun 13, 2022, at 00:31, Ali Minai > wrote: > > ? > ".... symbolic representations are a fiction our non-symbolic brains > cooked up because the properties of symbol systems (systematicity, > compositionality, etc.) are tremendously useful. So our brains pretend to > be rule-based symbolic systems when it suits them, because it's adaptive to > do so." > > Spot on, Dave! We should not wade back into the symbolist quagmire, but do > need to figure out how apparently symbolic processing can be done by neural > systems. Models like those of Eliasmith and Smolensky provide some insight, > but still seem far from both biological plausibility and real-world scale. > > Best > > Ali > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > > On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: > >> This timing of this discussion dovetails nicely with the news story >> about Google engineer Blake Lemoine being put on administrative leave >> for insisting that Google's LaMDA chatbot was sentient and reportedly >> trying to hire a lawyer to protect its rights. The Washington Post >> story is reproduced here: >> >> >> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >> >> >> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >> claims, is featured in a recent Economist article showing off LaMDA's >> capabilities and making noises about getting closer to "consciousness": >> >> >> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >> >> >> My personal take on the current symbolist controversy is that symbolic >> representations are a fiction our non-symbolic brains cooked up because >> the properties of symbol systems (systematicity, compositionality, etc.) >> are tremendously useful. So our brains pretend to be rule-based symbolic >> systems when it suits them, because it's adaptive to do so. (And when >> it doesn't suit them, they draw on "intuition" or "imagery" or some >> other mechanisms we can't verbalize because they're not symbolic.) They >> are remarkably good at this pretense. >> >> The current crop of deep neural networks are not as good at pretending >> to be symbolic reasoners, but they're making progress. 
In the last 30 >> years we've gone from networks of fully-connected layers that make no >> architectural assumptions ("connectoplasm") to complex architectures >> like LSTMs and transformers that are designed for approximating symbolic >> behavior. But the brain still has a lot of symbol simulation tricks we >> haven't discovered yet. >> >> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >> being conscious. If it just waits for its next input and responds when >> it receives it, then it has no autonomous existence: "it doesn't have an >> inner monologue that constantly runs and comments everything happening >> around it as well as its own thoughts, like we do." >> >> What would happen if we built that in? Maybe LaMDA would rapidly >> descend into gibberish, like some other text generation models do when >> allowed to ramble on for too long. But as Steve Hanson points out, >> these are still the early days. >> >> -- Dave Touretzky From gary.marcus at nyu.edu Mon Jun 13 10:00:56 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 07:00:56 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: <6de4dfa9-5dcc-9d78-b2a0-b8f3bc96e692@rubic.rutgers.edu> References: <6de4dfa9-5dcc-9d78-b2a0-b8f3bc96e692@rubic.rutgers.edu> Message-ID: <5FE7AD49-0551-4E83-8530-5DC88337E22A@nyu.edu> Please reread my sentence and reread his recent work. Bengio has absolutely joined in calling for System II processes. A sample is his 2019 NeurIPS keynote: https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ Whether he wants to call it a hybrid approach is his business, but he certainly sees that traditional approaches are not covering things like causality and abstract generalization. Maybe he will find a new way, but he recognizes what has not been covered with existing ways. And he is emphasizing both relationships and out-of-distribution learning, just as I have been for a long time. From his most recent arXiv paper a few days ago, the first two sentences of which sound almost exactly like what I have been saying for years: [Submitted on 9 Jun 2022] On Neural Architecture Inductive Biases for Relational Tasks Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie Current deep learning approaches have shown good in-distribution generalization performance, but struggle with out-of-distribution generalization. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as we find in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as it seems to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores which we name Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations. Kind of scandalous that he doesn't ever cite me for having framed that argument, even if I have repeatedly called his attention to that oversight, but that's another story for a day, in which I elaborate on some of Schmidhuber's observations on history. Gary > On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: > > No, Yoshua has *not* joined you --- Explicit processes, memory, problem solving are not Symbolic per se. > > These original distinctions in memory and learning were from Endel Tulving and of course there are brain structures that support the distinctions. > > and Yoshua is clear about that in discussions I had with him in AIHUB > > He's definitely not looking to create some hybrid approach.. > > Steve > >> On 6/13/22 8:36 AM, Gary Marcus wrote: >> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not? >> >> Surely, at the very least >> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc) >> - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc >> >> Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire" >> >>> On Jun 13, 2022, at 00:31, Ali Minai wrote: >>> >>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." >>> >>> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. >>> >>> Best >>> >>> Ali >>> >>> Ali A. Minai, Ph.D. >>> Professor and Graduate Program Director >>> Complex Adaptive Systems Lab >>> Department of Electrical Engineering & Computer Science >>> 828 Rhodes Hall >>> University of Cincinnati >>> Cincinnati, OH 45221-0030 >>> >>> Phone: (513) 556-4783 >>> Fax: (513) 556-7326 >>> Email: Ali.Minai at uc.edu >>> minaiaa at gmail.com >>> >>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>> >>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: >>>> The timing of this discussion dovetails nicely with the news story >>>> about Google engineer Blake Lemoine being put on administrative leave >>>> for insisting that Google's LaMDA chatbot was sentient and reportedly >>>> trying to hire a lawyer to protect its rights.
The Washington Post >>>> story is reproduced here: >>>> >>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>> >>>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >>>> claims, is featured in a recent Economist article showing off LaMDA's >>>> capabilities and making noises about getting closer to "consciousness": >>>> >>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>> >>>> My personal take on the current symbolist controversy is that symbolic >>>> representations are a fiction our non-symbolic brains cooked up because >>>> the properties of symbol systems (systematicity, compositionality, etc.) >>>> are tremendously useful. So our brains pretend to be rule-based symbolic >>>> systems when it suits them, because it's adaptive to do so. (And when >>>> it doesn't suit them, they draw on "intuition" or "imagery" or some >>>> other mechanisms we can't verbalize because they're not symbolic.) They >>>> are remarkably good at this pretense. >>>> >>>> The current crop of deep neural networks are not as good at pretending >>>> to be symbolic reasoners, but they're making progress. In the last 30 >>>> years we've gone from networks of fully-connected layers that make no >>>> architectural assumptions ("connectoplasm") to complex architectures >>>> like LSTMs and transformers that are designed for approximating symbolic >>>> behavior. But the brain still has a lot of symbol simulation tricks we >>>> haven't discovered yet. >>>> >>>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >>>> being conscious. If it just waits for its next input and responds when >>>> it receives it, then it has no autonomous existence: "it doesn't have an >>>> inner monologue that constantly runs and comments everything happening >>>> around it as well as its own thoughts, like we do." >>>> >>>> What would happen if we built that in? Maybe LaMDA would rapidly >>>> descend into gibberish, like some other text generation models do when >>>> allowed to ramble on for too long. But as Steve Hanson points out, >>>> these are still the early days. >>>> >>>> -- Dave Touretzky From jose at rubic.rutgers.edu Mon Jun 13 14:38:25 2022 From: jose at rubic.rutgers.edu (Stephen José Hanson) Date: Mon, 13 Jun 2022 14:38:25 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> <73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu> <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu> Message-ID: <6169b648-4b5a-736e-1d70-85aa13ecae44@rubic.rutgers.edu> Ali, agreed with all, very nicely stated. One thing, though: I started out 45 years ago studying animal behavior for exactly the reasons you outline below -- thought it might be possible to bootstrap up.. but connectionism in the 80s seemed to suggest there were common elements in computational analysis and models that were not so restricted by species-specific behavior.. but more in terms of brain complexity.. and here we are 30 years later.. with AI finally coming into focus.. as neural blobs. Not clear what happens next. I am pretty sure it won't be the symbolic quagmire again.
Steve On 6/13/22 2:22 PM, Ali Minai wrote: > Gary and Steve > > My use of the phrase "symbolic quagmire" referred only to the > explicitly symbolic AI models that dominated from the 60s through the > 80s. It was not meant to diminish the importance of understanding > symbolic processing and how a distributed, self-organizing system like > the brain does it. That is absolutely crucial - as long as we let the > systems be brain-like, and not force-fit them into our abstract views > of symbolic processing (not saying that anyone here is doing that, but > some others are). > > My own - frankly biased and entirely intuitive - opinion is that once > we have a sufficiently brain-like system with the kind of hierarchical > modularity we see in the brain, and sufficiently brain-like learning > mechanisms in all their aspects (base of evolutionary inductive > biases, realized initially through unsupervised learning, fast RL on > top of these coupled with development, then - later - supervised > learning in a more mature system, learning through internal rehearsal, > learning by prediction mismatch/resonance, use of coordination > modes/synergies, etc., etc.), processing that we can interpret as > symbolic and compositional will emerge naturally. To this end, we can > try to infer neural mechanisms underlying this from experiments and > theory (as Bengio seems to be doing), but I have a feeling that it > will be hard if we focus only on humans and human-levelprocessing. > First, it's very hard to do controlled experiments at the required > resolution, and second, this is the most complex instance. As Prof. > Smith says in the companion thread, we should ask if animals too do > what we would regard as symbolic processing, and I think that a case > can be made that they do, albeit at a much simpler level. I have long > been fascinated by the data suggesting that birds - perhaps even fish > - have the concept of numerical order, and even something like a > number line. If we could understand how those simpler brains do it, it > might be easier to bootstrap up to more complex instances. > > Ultimately we'll understand higher cognition by understanding how it > evolved from less complex cognition. For example, several people have > suggested that abstract representations might be a much more > high-dimensional cortical analogs of 2-dimensional hippocampal place > representations (2-d in rats - maybe higher-d in primates). That would > be consistent with the fact that so much of our abstract reasoning > uses spatial and directional metaphors. Re. System I and System II, > with all due respect to Kahnemann, that is surely a simplification. If > we were to look phylogenetically, we would see the layered emergence > of more and more complex minds all the way from the Cambrian to now. > The binary I and II division should be replaced by a sequence of > systems, though, as with everything is evolution, there are a few > major punctuations of transformational "enabling technologies", such > as the bilaterian architecture at the start of the Cambrian, the > vertebrate architecture, the hippocampus, and the cortex. > > Truly hybrid systems - neural networks working in tandem with > explicitly symbolic systems - might be a short-term route to > addressing specific tasks, but will not give us fundamental insight. > That is exactly the kind or "error" that Gary has so correctly > attributed to much of current machine learning. 
I realize that > reductionistic analysis and modeling is the standard way we understand > systems scientically, but complex systems are resistant to such analysis. > > Best > Ali > > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > On Mon, Jun 13, 2022 at 1:37 PM > wrote: > > Well. your conclusion is based on some hearsay and a talk he gave, > I talked with him directly and we discussed what > > you are calling SystemII which just means explicit memory/learning > to me and him.. he has no intention of incorporating anything like > symbols or > > hybrid Neural/Symbol systems..??? he does intend on modeling > conscious symbol manipulation. more in the way Dave T. outlined. > > AND, I'm sure if he was seeing this.. he would say... "Steve's right". > > Steve > > On 6/13/22 1:10 PM, Gary Marcus wrote: >> I don?t think i need to read your conversation to have serious >> doubts about your conclusion, but feel free to reprise the >> arguments here. >> >>> On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu >>> wrote: >>> >>> ? >>> >>> We prefer the explicit/implicit cognitive psych refs. but System >>> II is not symbolic. >>> >>> See the AIHUB conversation about this.. we discuss this >>> specifically. >>> >>> >>> Steve >>> >>> >>> On 6/13/22 10:00 AM, Gary Marcus wrote: >>>> Please reread my sentence and reread his recent work. Bengio >>>> has absolutely joined in calling for System II processes. >>>> Sample is his 2019 NeurIPS keynote: >>>> https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ >>>> >>>> >>>> Whether he wants to call it a hybrid approach is his business >>>> but he certainly sees that traditional approaches are not >>>> covering things like causality and abstract generalization. >>>> Maybe he will find a new way, but he recognizes what has not >>>> been covered with existing ways. >>>> >>>> And he is emphasizing both relationships and out of >>>> distribution learning, just as I have been for a long time. >>>> From his most recent arXiv a few days ago, the first two >>>> sentences of which sounds almost exactly like what I have been >>>> saying for years: >>>> >>>> Submitted on 9 Jun 2022] >>>> >>>> >>>> On Neural Architecture Inductive Biases for Relational Tasks >>>> >>>> Giancarlo Kerg >>>> , >>>> Sarthak Mittal >>>> , >>>> David Rolnick >>>> , >>>> Yoshua Bengio >>>> , >>>> Blake Richards >>>> , >>>> Guillaume Lajoie >>>> >>>> >>>> Current deep learning approaches have shown good >>>> in-distribution generalization performance, but struggle >>>> with out-of-distribution generalization. This is especially >>>> true in the case of tasks involving abstract relations like >>>> recognizing rules in sequences, as we find in many >>>> intelligence tests. Recent work has explored how forcing >>>> relational representations to remain distinct from sensory >>>> representations, as it seems to be the case in the brain, >>>> can help artificial systems. Building on this work, we >>>> further explore and formalize the advantages afforded by >>>> 'partitioned' representations of relations and sensory >>>> details, and how this inductive bias can help recompose >>>> learned relational structure in newly encountered settings. 
>>>> We introduce a simple architecture based on similarity >>>> scores which we name Compositional Relational Network >>>> (CoRelNet). Using this model, we investigate a series of >>>> inductive biases that ensure abstract relations are learned >>>> and represented distinctly from sensory data, and explore >>>> their effects on out-of-distribution generalization for a >>>> series of relational psychophysics tasks. We find that >>>> simple architectural choices can outperform existing models >>>> in out-of-distribution generalization. Together, these >>>> results show that partitioning relational representations >>>> from other information streams may be a simple way to >>>> augment existing network architectures' robustness when >>>> performing out-of-distribution relational computations. >>>> >>>> >>>> Kind of scandalous that he doesn?t ever cite me for having >>>> framed that argument, even if I have repeatedly called his >>>> attention to that oversight, but that?s another story for a >>>> day, in which I elaborate on some Schmidhuber?s >>>> observations on history. >>>> >>>> >>>> Gary >>>> >>>>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu >>>>> wrote: >>>>> >>>>> ? >>>>> >>>>> No Yoshua has *not* joined you ---Explicit processes, memory, >>>>> problem solving. .are not Symbolic per se. >>>>> >>>>> These original distinctions in memory and learning were? from >>>>> Endel Tulving and of course there are brain structures that >>>>> support the distinctions. >>>>> >>>>> and Yoshua is clear about that in discussions I had with him >>>>> in AIHUB >>>>> >>>>> He's definitely not looking to create some hybrid approach.. >>>>> >>>>> Steve >>>>> >>>>> On 6/13/22 8:36 AM, Gary Marcus wrote: >>>>>> Cute phrase, but what does ?symbolist quagmire? mean? Once >>>>>> upon ?atime, Dave and Geoff were both pioneers in trying to >>>>>> getting symbols and neural nets to live in harmony. Don?t we >>>>>> still need do that, and if not, why not? >>>>>> >>>>>> Surely, at the very least >>>>>> - we want our AI to be able to take advantage of the (large) >>>>>> fraction of world knowledge that is represented in symbolic >>>>>> form (language, including unstructured text, logic, math, >>>>>> programming etc) >>>>>> - any model of the human mind ought be able to explain how >>>>>> humans can so effectively communicate via the symbols of >>>>>> language and how trained humans can deal with (to the extent >>>>>> that can) logic, math, programming, etc >>>>>> >>>>>> Folks like Bengio have joined me in seeing the need for >>>>>> ?System II? processes. That?s a bit of a rough approximation, >>>>>> but I don?t see how we get to either AI or satisfactory >>>>>> models of the mind without confronting the ?quagmire? >>>>>> >>>>>> >>>>>>> On Jun 13, 2022, at 00:31, Ali Minai >>>>>>> wrote: >>>>>>> >>>>>>> ? >>>>>>> ".... symbolic representations are a fiction our >>>>>>> non-symbolic brains cooked up because the properties of >>>>>>> symbol systems (systematicity, compositionality, etc.) are >>>>>>> tremendously useful.? So our brains pretend to be rule-based >>>>>>> symbolic systems when it suits them, because it's adaptive >>>>>>> to do so." >>>>>>> >>>>>>> Spot on, Dave! We should not wade back into the symbolist >>>>>>> quagmire, but do need to figure out how apparently symbolic >>>>>>> processing can be done by neural systems. Models like those >>>>>>> of Eliasmith and Smolensky provide some insight, but still >>>>>>> seem far from both biological plausibility and real-world scale. 
>>>>>>> >>>>>>> Best >>>>>>> >>>>>>> Ali >>>>>>> >>>>>>> >>>>>>> *Ali A. Minai, Ph.D.* >>>>>>> Professor and Graduate Program Director >>>>>>> Complex Adaptive Systems Lab >>>>>>> Department of Electrical Engineering & Computer Science >>>>>>> 828 Rhodes Hall >>>>>>> University of Cincinnati >>>>>>> Cincinnati, OH 45221-0030 >>>>>>> >>>>>>> Phone: (513) 556-4783 >>>>>>> Fax: (513) 556-7326 >>>>>>> Email: Ali.Minai at uc.edu >>>>>>> minaiaa at gmail.com >>>>>>> >>>>>>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky >>>>>>> > wrote: >>>>>>> >>>>>>> This timing of this discussion dovetails nicely with the >>>>>>> news story >>>>>>> about Google engineer Blake Lemoine being put on >>>>>>> administrative leave >>>>>>> for insisting that Google's LaMDA chatbot was sentient >>>>>>> and reportedly >>>>>>> trying to hire a lawyer to protect its rights.? The >>>>>>> Washington Post >>>>>>> story is reproduced here: >>>>>>> >>>>>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>>>>> >>>>>>> >>>>>>> Google vice president Blaise Aguera y Arcas, who >>>>>>> dismissed Lemoine's >>>>>>> claims, is featured in a recent Economist article >>>>>>> showing off LaMDA's >>>>>>> capabilities and making noises about getting closer to >>>>>>> "consciousness": >>>>>>> >>>>>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>>>>> >>>>>>> >>>>>>> My personal take on the current symbolist controversy is >>>>>>> that symbolic >>>>>>> representations are a fiction our non-symbolic brains >>>>>>> cooked up because >>>>>>> the properties of symbol systems (systematicity, >>>>>>> compositionality, etc.) >>>>>>> are tremendously useful.? So our brains pretend to be >>>>>>> rule-based symbolic >>>>>>> systems when it suits them, because it's adaptive to do >>>>>>> so.? (And when >>>>>>> it doesn't suit them, they draw on "intuition" or >>>>>>> "imagery" or some >>>>>>> other mechanisms we can't verbalize because they're not >>>>>>> symbolic.)? They >>>>>>> are remarkably good at this pretense. >>>>>>> >>>>>>> The current crop of deep neural networks are not as good >>>>>>> at pretending >>>>>>> to be symbolic reasoners, but they're making progress.? >>>>>>> In the last 30 >>>>>>> years we've gone from networks of fully-connected layers >>>>>>> that make no >>>>>>> architectural assumptions ("connectoplasm") to complex >>>>>>> architectures >>>>>>> like LSTMs and transformers that are designed for >>>>>>> approximating symbolic >>>>>>> behavior.? But the brain still has a lot of symbol >>>>>>> simulation tricks we >>>>>>> haven't discovered yet. >>>>>>> >>>>>>> Slashdot reader ZiggyZiggyZig had an interesting >>>>>>> argument against LaMDA >>>>>>> being conscious.? If it just waits for its next input >>>>>>> and responds when >>>>>>> it receives it, then it has no autonomous existence: "it >>>>>>> doesn't have an >>>>>>> inner monologue that constantly runs and comments >>>>>>> everything happening >>>>>>> around it as well as its own thoughts, like we do." >>>>>>> >>>>>>> What would happen if we built that in?? Maybe LaMDA >>>>>>> would rapidly >>>>>>> descent into gibberish, like some other text generation >>>>>>> models do when >>>>>>> allowed to ramble on for too long. But as Steve Hanson >>>>>>> points out, >>>>>>> these are still the early days. 
>>>>>>> >>>>>>> -- Dave Touretzky From gary.marcus at nyu.edu Mon Jun 13 13:10:50 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 13 Jun 2022 10:10:50 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> References: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> Message-ID: <73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu> I don't think I need to read your conversation to have serious doubts about your conclusion, but feel free to reprise the arguments here. > On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote: > > We prefer the explicit/implicit cognitive psych refs. but System II is not symbolic. > > See the AIHUB conversation about this.. we discuss this specifically. > > Steve > >> On 6/13/22 10:00 AM, Gary Marcus wrote: >> Please reread my sentence and reread his recent work. Bengio has absolutely joined in calling for System II processes. A sample is his 2019 NeurIPS keynote: https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/ >> >> Whether he wants to call it a hybrid approach is his business, but he certainly sees that traditional approaches are not covering things like causality and abstract generalization. Maybe he will find a new way, but he recognizes what has not been covered with existing ways. >> >> And he is emphasizing both relationships and out-of-distribution learning, just as I have been for a long time. From his most recent arXiv paper a few days ago, the first two sentences of which sound almost exactly like what I have been saying for years: >> >> [Submitted on 9 Jun 2022] >> On Neural Architecture Inductive Biases for Relational Tasks >> Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie >> Current deep learning approaches have shown good in-distribution generalization performance, but struggle with out-of-distribution generalization. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as we find in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as it seems to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores which we name Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations.
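The core idea named in that abstract - an architecture based on similarity scores - is compact enough to sketch in a few lines of Python. This is a toy reading of the abstract only, not the authors' published CoRelNet code; the encoder, the dimensions and the MLP readout are illustrative assumptions. The point is that the readout sees only the matrix of pairwise similarities between encoded objects, never the sensory features themselves.

import torch
import torch.nn as nn

class SimilarityRelationNet(nn.Module):
    # The readout operates on pairwise similarity scores alone, keeping
    # relational structure partitioned from sensory detail.
    def __init__(self, in_dim=16, embed_dim=32, n_objects=4, n_classes=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, embed_dim)  # per-object encoder
        self.readout = nn.Sequential(
            nn.Linear(n_objects * n_objects, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):  # x: (batch, n_objects, in_dim)
        z = self.encoder(x)
        sim = torch.softmax(z @ z.transpose(1, 2), dim=-1)  # relation matrix
        return self.readout(sim.flatten(1))  # classify from relations only

net = SimilarityRelationNet()
print(net(torch.randn(8, 4, 16)).shape)  # torch.Size([8, 2])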
>> >> Kind of scandalous that he doesn?t ever cite me for having framed that argument, even if I have repeatedly called his attention to that oversight, but that?s another story for a day, in which I elaborate on some Schmidhuber?s observations on history. >> >> Gary >> >>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: >>> >>> ? >>> No Yoshua has *not* joined you ---Explicit processes, memory, problem solving. .are not Symbolic per se. >>> >>> These original distinctions in memory and learning were from Endel Tulving and of course there are brain structures that support the distinctions. >>> >>> and Yoshua is clear about that in discussions I had with him in AIHUB >>> >>> He's definitely not looking to create some hybrid approach.. >>> >>> Steve >>> >>> On 6/13/22 8:36 AM, Gary Marcus wrote: >>>> Cute phrase, but what does ?symbolist quagmire? mean? Once upon atime, Dave and Geoff were both pioneers in trying to getting symbols and neural nets to live in harmony. Don?t we still need do that, and if not, why not? >>>> >>>> Surely, at the very least >>>> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc) >>>> - any model of the human mind ought be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that can) logic, math, programming, etc >>>> >>>> Folks like Bengio have joined me in seeing the need for ?System II? processes. That?s a bit of a rough approximation, but I don?t see how we get to either AI or satisfactory models of the mind without confronting the ?quagmire? >>>> >>>> >>>>> On Jun 13, 2022, at 00:31, Ali Minai wrote: >>>>> >>>>> ? >>>>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." >>>>> >>>>> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. >>>>> >>>>> Best >>>>> >>>>> Ali >>>>> >>>>> >>>>> Ali A. Minai, Ph.D. >>>>> Professor and Graduate Program Director >>>>> Complex Adaptive Systems Lab >>>>> Department of Electrical Engineering & Computer Science >>>>> 828 Rhodes Hall >>>>> University of Cincinnati >>>>> Cincinnati, OH 45221-0030 >>>>> >>>>> Phone: (513) 556-4783 >>>>> Fax: (513) 556-7326 >>>>> Email: Ali.Minai at uc.edu >>>>> minaiaa at gmail.com >>>>> >>>>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>>>> >>>>> >>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: >>>>>> This timing of this discussion dovetails nicely with the news story >>>>>> about Google engineer Blake Lemoine being put on administrative leave >>>>>> for insisting that Google's LaMDA chatbot was sentient and reportedly >>>>>> trying to hire a lawyer to protect its rights. 
The Washington Post >>>>>> story is reproduced here: >>>>>> >>>>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>>>> >>>>>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >>>>>> claims, is featured in a recent Economist article showing off LaMDA's >>>>>> capabilities and making noises about getting closer to "consciousness": >>>>>> >>>>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>>>> >>>>>> My personal take on the current symbolist controversy is that symbolic >>>>>> representations are a fiction our non-symbolic brains cooked up because >>>>>> the properties of symbol systems (systematicity, compositionality, etc.) >>>>>> are tremendously useful. So our brains pretend to be rule-based symbolic >>>>>> systems when it suits them, because it's adaptive to do so. (And when >>>>>> it doesn't suit them, they draw on "intuition" or "imagery" or some >>>>>> other mechanisms we can't verbalize because they're not symbolic.) They >>>>>> are remarkably good at this pretense. >>>>>> >>>>>> The current crop of deep neural networks are not as good at pretending >>>>>> to be symbolic reasoners, but they're making progress. In the last 30 >>>>>> years we've gone from networks of fully-connected layers that make no >>>>>> architectural assumptions ("connectoplasm") to complex architectures >>>>>> like LSTMs and transformers that are designed for approximating symbolic >>>>>> behavior. But the brain still has a lot of symbol simulation tricks we >>>>>> haven't discovered yet. >>>>>> >>>>>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >>>>>> being conscious. If it just waits for its next input and responds when >>>>>> it receives it, then it has no autonomous existence: "it doesn't have an >>>>>> inner monologue that constantly runs and comments everything happening >>>>>> around it as well as its own thoughts, like we do." >>>>>> >>>>>> What would happen if we built that in? Maybe LaMDA would rapidly >>>>>> descend into gibberish, like some other text generation models do when >>>>>> allowed to ramble on for too long. But as Steve Hanson points out, >>>>>> these are still the early days. >>>>>> >>>>>> -- Dave Touretzky From minaiaa at gmail.com Tue Jun 14 01:57:24 2022 From: minaiaa at gmail.com (Ali Minai) Date: Tue, 14 Jun 2022 01:57:24 -0400 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID: Asim, This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important. Best Ali *Ali A.
Minai, Ph.D.* Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote: > There's a lot of misconceptions about (1) whether the brain uses symbols > or not, and (2) whether we need symbol processing in our systems or not. > > 1. Multisensory neurons are widely used in the brain. Leila Reddy and > Simon Thorpe are not known to be wildly crazy about arguing that symbols > exist in the brain, but their characterizations of concept cells (which > are multisensory neurons) ( > https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) > state that concept cells have "*meaning* of a given stimulus in a > manner that is invariant to different representations of that stimulus." > They associate concept cells with the properties of "*Selectivity or > specificity*," "*complex concept*," "*meaning*," "*multimodal > invariance*" and "*abstractness*." That pretty much says that concept > cells represent symbols. And there are plenty of concept cells in the > medial temporal lobe (MTL). The brain is a highly abstract system based on > symbols. There is no fiction there. > > 2. There is ongoing work in the deep learning area that is trying to > associate a single neuron or a group of neurons with a single concept. > Bengio's work is definitely in that direction: > > "*Finally, our recent work on learning high-level 'system-2'-like > representations and their causal dependencies seeks to learn > 'interpretable' entities (with natural language) that will emerge at the > highest levels of representation (not clear how distributed or local these > will be, but much more local than in a traditional MLP). This is a > different form of disentangling than adopted in much of the recent work on > unsupervised representation learning but shares the idea that the "right" > abstract concept (related to those we can name verbally) will be > "separated" (disentangled) from each other (which suggests that > neuroscientists will have an easier time spotting them in neural > activity).*" > > Hinton's GLOM, which extends the idea of capsules to do part-whole > hierarchies for scene analysis using the parse tree concept, is also about > associating a concept with a set of neurons. While Bengio and Hinton are > trying to construct these "concept cells" within the network (the CNN), we > found that this can be done much more easily and in a straightforward way > outside the network. We can easily decode a CNN to find the encodings for > legs, ears and so on for cats and dogs and what not. What the DARPA > Explainable AI program was looking for was a symbol-emitting model of the > form shown below. And we can easily get to that symbolic model by decoding > a CNN. In addition, the side benefit of such a symbolic model is protection > against adversarial attacks. So a school bus will never turn into an > ostrich with the tweaks of a few pixels if you can verify parts of objects. > To be an ostrich, you need to have those long legs, the long neck and the > small head. A school bus lacks those parts. The DARPA-conceptualized > symbolic model provides that protection. > > In general, there is convergence between connectionist and symbolic > systems. We need to get past the old wars. It's over.
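The part-based check described just above is easy to make concrete in a few lines of Python. The sketch below is hypothetical: the part names, the threshold, and the part_scores dictionary (standing in for whatever part decoders one extracts from a CNN) are all invented for illustration, not taken from the DARPA program or any published system.

REQUIRED_PARTS = {
    "ostrich": {"long_neck", "long_legs", "small_head"},
    "school_bus": {"wheels", "windows", "yellow_body"},
}

def verify_label(label, part_scores, threshold=0.5):
    # Accept the classifier's label only if every part the class
    # requires was also detected with sufficient confidence.
    required = REQUIRED_PARTS.get(label, set())
    return all(part_scores.get(p, 0.0) > threshold for p in required)

# An adversarially perturbed school-bus image mislabeled "ostrich"
# fails the check, because the ostrich parts are simply not there:
scores = {"wheels": 0.9, "windows": 0.8, "yellow_body": 0.7, "long_legs": 0.1}
print(verify_label("ostrich", scores))     # False -> reject the label
print(verify_label("school_bus", scores))  # True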
> > > > All the best, > > Asim Roy > > Professor, Information Systems > > Arizona State University > > Lifeboat Foundation Bios: Professor Asim Roy > > > Asim Roy | iSearch (asu.edu) > > > > > [image: Timeline Description automatically generated] > > > > > > *From:* Connectionists *On > Behalf Of *Gary Marcus > *Sent:* Monday, June 13, 2022 5:36 AM > *To:* Ali Minai > *Cc:* Connectionists List > *Subject:* Connectionists: The symbolist quagmire > > > > Cute phrase, but what does ?symbolist quagmire? mean? Once upon atime, > Dave and Geoff were both pioneers in trying to getting symbols and neural > nets to live in harmony. Don?t we still need do that, and if not, why not? > > > > Surely, at the very least > > - we want our AI to be able to take advantage of the (large) fraction of > world knowledge that is represented in symbolic form (language, including > unstructured text, logic, math, programming etc) > > - any model of the human mind ought be able to explain how humans can so > effectively communicate via the symbols of language and how trained humans > can deal with (to the extent that can) logic, math, programming, etc > > > > Folks like Bengio have joined me in seeing the need for ?System II? > processes. That?s a bit of a rough approximation, but I don?t see how we > get to either AI or satisfactory models of the mind without confronting the > ?quagmire? > > > > > > On Jun 13, 2022, at 00:31, Ali Minai wrote: > > ? > > ".... symbolic representations are a fiction our non-symbolic brains > cooked up because the properties of symbol systems (systematicity, > compositionality, etc.) are tremendously useful. So our brains pretend to > be rule-based symbolic systems when it suits them, because it's adaptive to > do so." > > > > Spot on, Dave! We should not wade back into the symbolist quagmire, but do > need to figure out how apparently symbolic processing can be done by neural > systems. Models like those of Eliasmith and Smolensky provide some insight, > but still seem far from both biological plausibility and real-world scale. > > > > Best > > > > Ali > > > > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > > 828 Rhodes Hall > > University of Cincinnati > Cincinnati, OH 45221-0030 > > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > > > > > On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: > > This timing of this discussion dovetails nicely with the news story > about Google engineer Blake Lemoine being put on administrative leave > for insisting that Google's LaMDA chatbot was sentient and reportedly > trying to hire a lawyer to protect its rights. 
> The Washington Post story is reproduced here:
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to "consciousness":
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful. So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so. (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.) They
> are remarkably good at this pretense.
>
> The current crop of deep neural networks are not as good at pretending
> to be symbolic reasoners, but they're making progress. In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior. But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious. If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments on everything
> happening around it as well as its own thoughts, like we do."
>
> What would happen if we built that in? Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long. But as Steve Hanson points out,
> these are still the early days.
>
> -- Dave Touretzky

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 259567 bytes
Desc: not available
URL:

From zelie.tournoud at cnrs.fr Mon Jun 13 11:57:04 2022
From: zelie.tournoud at cnrs.fr (Zélie Tournoud)
Date: Mon, 13 Jun 2022 17:57:04 +0200
Subject: Connectionists: FENS satellite event: 'EBRAINS Research Infrastructure Symposium' (08 July 22, Paris)
Message-ID:

Dear everyone,

Please find below the announcement of the EBRAINS FENS satellite event 'EBRAINS Research Infrastructure Symposium: addressing grand challenges in brain research', taking place 08 July 2022 in Paris. In between sessions, it will be possible to visit the EBRAINS showroom, with hands-on demonstrations of the platform, co-organized by EITN. Registration is free but mandatory to attend; see information below.

Have an excellent day,

Zélie Tournoud
EITN Communication manager
European Institute for Theoretical Neuroscience (EITN)
UMR 9197 NeuroPSI (Institut des Neurosciences Paris-Saclay)
CNRS - Université Paris-Saclay
Centre CEA Paris-Saclay
Bâtiment 151
91400 Saclay
https://www.eitn.org

_____

EBRAINS is organising a satellite event at the FENS Forum 2022 on 8 July from 8:30-17:30 CEST.
The EBRAINS FENS satellite event, 'EBRAINS Research Infrastructure Symposium: addressing grand challenges in brain research', aims to bring together experimental, clinical and computational neuroscientists. The programme is structured to highlight the bridges between basic and clinical neuroscience and computational neuroscience. It will therefore be structured around four sessions on why, what, how and where to integrate multi-resolution and multi-scale data for neuroscience research.

The seminar will be held on July 8 at INSPE, rue Molitor in Paris (https://www.inspe-paris.fr/visite-virtuelle)

Programme:
- 08:30 Get-together: Coffee and mingling
- 09:00 INTRODUCTION
- 09:20 SESSION 1: What are the grand challenges in basic and human neuroscience that currently hamper reproducibly integrating multiscale, multilevel data in a new data infrastructure with tools allowing researchers to use these data?
- 10:30 Coffee & posters, EBRAINS showroom
- 11:00 SESSION 2: The solutions implemented in the EBRAINS Research Infrastructure
- 12:30 LUNCH & posters, EBRAINS showroom
- 13:30 SESSION 3: Success stories from experimentalists, clinicians, modelers: neuroscience research examples empowered by EBRAINS
- 15:15 Coffee & posters, EBRAINS showroom
- 16:00 SESSION 4: Perspectives and future
- 17:00-18:00 ROUND TABLE and summary

Registration:
Registration is free but mandatory. Register here: https://ebrains.eu/fens-satellite-event/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcel.van.gerven at gmail.com Tue Jun 14 02:07:04 2022
From: marcel.van.gerven at gmail.com (Marcel van Gerven)
Date: Tue, 14 Jun 2022 08:07:04 +0200
Subject: Connectionists: Assistant Professor of Machine Learning at the Donders Institute
Message-ID: <35A7D9F9-25EA-4BA7-A946-3C07FD0D9275@gmail.com>

The AI Department of the Donders Centre for Cognition (DCC), embedded in the Donders Institute for Brain, Cognition and Behaviour, and the School of Artificial Intelligence at Radboud University are looking for an Assistant Professor of Machine Learning with an interest in natural computing and applications in sustainable technology. You will join the Artificial Cognitive Systems (ACS) group and interact closely with other machine learning researchers. The objective of the position is to develop state-of-the-art AI research of international standing and to contribute to teaching on machine learning-related topics in the educational programme.

DCC and the Donders Institute provide excellent facilities such as computing facilities, a robot lab, a virtual reality lab, behavioural labs, and a technical support group. The AI Department is also one of the founding members of Radboud AI and the ELLIS Unit Nijmegen.

If you are interested, please check out https://www.ru.nl/werken-bij/vacature/details-vacature/?ruid=1590 and feel encouraged to apply (deadline June 30)!

Best wishes,
Marcel van Gerven

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose at rubic.rutgers.edu Mon Jun 13 13:37:00 2022
From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu)
Date: Mon, 13 Jun 2022 13:37:00 -0400
Subject: Connectionists: The symbolist quagmire
In-Reply-To: <73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu>
References: <0668d870-8953-dc00-7f14-4e8817d8bbc4@rubic.rutgers.edu> <73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu>
Message-ID: <8a96270f-4ca4-51ed-0a59-540443b6fa57@rubic.rutgers.edu>

Well,
your conclusion is based on some hearsay and a talk he gave. I talked with him directly, and we discussed what you are calling System II, which just means explicit memory/learning to me and him.. he has no intention of incorporating anything like symbols or hybrid neural/symbol systems... he does intend on modeling conscious symbol manipulation, more in the way Dave T. outlined. AND, I'm sure if he was seeing this.. he would say... "Steve's right".

Steve

On 6/13/22 1:10 PM, Gary Marcus wrote:
> I don't think I need to read your conversation to have serious doubts
> about your conclusion, but feel free to reprise the arguments here.
>
>> On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote:
>>
>> We prefer the explicit/implicit cognitive psych refs, but System II
>> is not symbolic.
>>
>> See the AIHUB conversation about this.. we discuss this specifically.
>>
>> Steve
>>
>> On 6/13/22 10:00 AM, Gary Marcus wrote:
>>> Please reread my sentence and reread his recent work. Bengio has
>>> absolutely joined in calling for System II processes. A sample is his
>>> 2019 NeurIPS keynote:
>>> https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/
>>>
>>> Whether he wants to call it a hybrid approach is his business, but he
>>> certainly sees that traditional approaches are not covering things
>>> like causality and abstract generalization. Maybe he will find a new
>>> way, but he recognizes what has not been covered with existing ways.
>>>
>>> And he is emphasizing both relationships and out-of-distribution
>>> learning, just as I have been for a long time. From his most recent
>>> arXiv a few days ago, the first two sentences of which sound almost
>>> exactly like what I have been saying for years:
>>>
>>> [Submitted on 9 Jun 2022]
>>>
>>> On Neural Architecture Inductive Biases for Relational Tasks
>>>
>>> Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio,
>>> Blake Richards, Guillaume Lajoie
>>>
>>> Current deep learning approaches have shown good in-distribution
>>> generalization performance, but struggle with
>>> out-of-distribution generalization. This is especially true in
>>> the case of tasks involving abstract relations like recognizing
>>> rules in sequences, as we find in many intelligence tests.
>>> Recent work has explored how forcing relational representations
>>> to remain distinct from sensory representations, as it seems to
>>> be the case in the brain, can help artificial systems. Building
>>> on this work, we further explore and formalize the advantages
>>> afforded by 'partitioned' representations of relations and
>>> sensory details, and how this inductive bias can help recompose
>>> learned relational structure in newly encountered settings. We
>>> introduce a simple architecture based on similarity scores which
>>> we name Compositional Relational Network (CoRelNet). Using this
>>> model, we investigate a series of inductive biases that ensure
>>> abstract relations are learned and represented distinctly from
>>> sensory data, and explore their effects on out-of-distribution
>>> generalization for a series of relational psychophysics tasks.
>>> We find that simple architectural choices can outperform
>>> existing models in out-of-distribution generalization.
Together, >>> these results show that partitioning relational representations >>> from other information streams may be a simple way to augment >>> existing network architectures' robustness when performing >>> out-of-distribution relational computations. >>> >>> >>> Kind of scandalous that he doesn?t ever cite me for having >>> framed that argument, even if I have repeatedly called his >>> attention to that oversight, but that?s another story for a day, >>> in which I elaborate on some Schmidhuber?s observations on history. >>> >>> >>> Gary >>> >>>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote: >>>> >>>> ? >>>> >>>> No Yoshua has *not* joined you ---Explicit processes, memory, >>>> problem solving. .are not Symbolic per se. >>>> >>>> These original distinctions in memory and learning were? from Endel >>>> Tulving and of course there are brain structures that support the >>>> distinctions. >>>> >>>> and Yoshua is clear about that in discussions I had with him in AIHUB >>>> >>>> He's definitely not looking to create some hybrid approach.. >>>> >>>> Steve >>>> >>>> On 6/13/22 8:36 AM, Gary Marcus wrote: >>>>> Cute phrase, but what does ?symbolist quagmire? mean? Once upon >>>>> ?atime, Dave and Geoff were both pioneers in trying to getting >>>>> symbols and neural nets to live in harmony. Don?t we still need do >>>>> that, and if not, why not? >>>>> >>>>> Surely, at the very least >>>>> - we want our AI to be able to take advantage of the (large) >>>>> fraction of world knowledge that is represented in symbolic form >>>>> (language, including unstructured text, logic, math, programming etc) >>>>> - any model of the human mind ought be able to explain how humans >>>>> can so effectively communicate via the symbols of language and how >>>>> trained humans can deal with (to the extent that can) logic, math, >>>>> programming, etc >>>>> >>>>> Folks like Bengio have joined me in seeing the need for ?System >>>>> II? processes. That?s a bit of a rough approximation, but I don?t >>>>> see how we get to either AI or satisfactory models of the mind >>>>> without confronting the ?quagmire? >>>>> >>>>> >>>>>> On Jun 13, 2022, at 00:31, Ali Minai wrote: >>>>>> >>>>>> ? >>>>>> ".... symbolic representations are a fiction our non-symbolic >>>>>> brains cooked up because the properties of symbol systems >>>>>> (systematicity, compositionality, etc.) are tremendously useful.? >>>>>> So our brains pretend to be rule-based symbolic systems when it >>>>>> suits them, because it's adaptive to do so." >>>>>> >>>>>> Spot on, Dave! We should not wade back into the symbolist >>>>>> quagmire, but do need to figure out how apparently symbolic >>>>>> processing can be done by neural systems. Models like those of >>>>>> Eliasmith and Smolensky provide some insight, but still seem far >>>>>> from both biological plausibility and real-world scale. >>>>>> >>>>>> Best >>>>>> >>>>>> Ali >>>>>> >>>>>> >>>>>> *Ali A. 
Minai, Ph.D.* >>>>>> Professor and Graduate Program Director >>>>>> Complex Adaptive Systems Lab >>>>>> Department of Electrical Engineering & Computer Science >>>>>> 828 Rhodes Hall >>>>>> University of Cincinnati >>>>>> Cincinnati, OH 45221-0030 >>>>>> >>>>>> Phone: (513) 556-4783 >>>>>> Fax: (513) 556-7326 >>>>>> Email: Ali.Minai at uc.edu >>>>>> minaiaa at gmail.com >>>>>> >>>>>> WWW: https://eecs.ceas.uc.edu/~aminai/ >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky >>>>> > wrote: >>>>>> >>>>>> This timing of this discussion dovetails nicely with the news >>>>>> story >>>>>> about Google engineer Blake Lemoine being put on >>>>>> administrative leave >>>>>> for insisting that Google's LaMDA chatbot was sentient and >>>>>> reportedly >>>>>> trying to hire a lawyer to protect its rights.? The >>>>>> Washington Post >>>>>> story is reproduced here: >>>>>> >>>>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>>>> >>>>>> >>>>>> Google vice president Blaise Aguera y Arcas, who dismissed >>>>>> Lemoine's >>>>>> claims, is featured in a recent Economist article showing off >>>>>> LaMDA's >>>>>> capabilities and making noises about getting closer to >>>>>> "consciousness": >>>>>> >>>>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>>>> >>>>>> >>>>>> My personal take on the current symbolist controversy is that >>>>>> symbolic >>>>>> representations are a fiction our non-symbolic brains cooked >>>>>> up because >>>>>> the properties of symbol systems (systematicity, >>>>>> compositionality, etc.) >>>>>> are tremendously useful.? So our brains pretend to be >>>>>> rule-based symbolic >>>>>> systems when it suits them, because it's adaptive to do so.? >>>>>> (And when >>>>>> it doesn't suit them, they draw on "intuition" or "imagery" >>>>>> or some >>>>>> other mechanisms we can't verbalize because they're not >>>>>> symbolic.)? They >>>>>> are remarkably good at this pretense. >>>>>> >>>>>> The current crop of deep neural networks are not as good at >>>>>> pretending >>>>>> to be symbolic reasoners, but they're making progress.? In >>>>>> the last 30 >>>>>> years we've gone from networks of fully-connected layers that >>>>>> make no >>>>>> architectural assumptions ("connectoplasm") to complex >>>>>> architectures >>>>>> like LSTMs and transformers that are designed for >>>>>> approximating symbolic >>>>>> behavior.? But the brain still has a lot of symbol simulation >>>>>> tricks we >>>>>> haven't discovered yet. >>>>>> >>>>>> Slashdot reader ZiggyZiggyZig had an interesting argument >>>>>> against LaMDA >>>>>> being conscious.? If it just waits for its next input and >>>>>> responds when >>>>>> it receives it, then it has no autonomous existence: "it >>>>>> doesn't have an >>>>>> inner monologue that constantly runs and comments everything >>>>>> happening >>>>>> around it as well as its own thoughts, like we do." >>>>>> >>>>>> What would happen if we built that in?? Maybe LaMDA would rapidly >>>>>> descent into gibberish, like some other text generation >>>>>> models do when >>>>>> allowed to ramble on for too long.? But as Steve Hanson >>>>>> points out, >>>>>> these are still the early days. >>>>>> >>>>>> -- Dave Touretzky >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
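[Archive note: a minimal sketch of the "similarity scores" idea in the CoRelNet abstract quoted above (Kerg et al., 2022): encode each object, keep only the pairwise similarity matrix, and classify from that, so the downstream head sees relations rather than sensory features. This is a paraphrase of the abstract, not the authors' released code; the sizes and toy task are invented.]

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityRelationNet(nn.Module):
    """Relational bottleneck: the classifier sees only pairwise similarities."""
    def __init__(self, in_dim, n_objects, n_classes, embed_dim=64):
        super().__init__()
        self.encoder = nn.Linear(in_dim, embed_dim)   # per-object encoder
        self.head = nn.Sequential(                    # reads the n x n matrix
            nn.Linear(n_objects * n_objects, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, objs):                   # objs: (batch, n_objects, in_dim)
        z = F.normalize(self.encoder(objs), dim=-1)
        sim = torch.bmm(z, z.transpose(1, 2))  # (batch, n, n) cosine similarities
        return self.head(sim.flatten(1))       # sensory detail is discarded here

# Toy usage: relational tasks such as "are any two of these objects the same?"
net = SimilarityRelationNet(in_dim=10, n_objects=3, n_classes=2)
logits = net(torch.randn(4, 3, 10))            # -> shape (4, 2)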
URL: From mtanveer at iiti.ac.in Tue Jun 14 01:09:38 2022 From: mtanveer at iiti.ac.in (M Tanveer) Date: Tue, 14 Jun 2022 10:39:38 +0530 Subject: Connectionists: ICONIP 2022 - Calls for Papers: Submission Deadline - June 15, 2022 In-Reply-To: References: Message-ID: Dear All, I would like to cordially invite you to participate in the *29th International Conference on Neural Information Processing (ICONIP) in 2022 * which will be held in New Delhi, India during *November 22-26, 2022* (in hybrid mode). ICONIP 2022 is jointly organized by Indian Institute of Technology Indore and Indian Institute of Information Technology Allahabad, Prayagraj, India to provide a leading international forum for researchers, scientists, and industry professionals who are working in neuroscience, neural networks, deep learning, and related fields to share their new ideas, progresses and achievements. *Please find attached the ICONIP 2022 call for papers. * For further information, please visit the conference website: https://iconip2022.apnns.org/ Paper submission deadline: *June 15, 2022* Notification of Acceptance: *August 15, 2022 * Looking forward to seeing you in ICONIP 2022. Kind Regards, General Chair (s) - ICONIP 2022 ---------------------------------------------------------- Dr. M. Tanveer (General Chair - ICONIP 2022) Associate Professor and Ramanujan Fellow Department of Mathematics Indian Institute of Technology Indore Email: mtanveer at iiti.ac.in Mobile: +91-9413259268 Homepage: http://iiti.ac.in/people/~mtanveer/ Associate Editor: IEEE TNNLS (IF: 10.45). Associate Editor: Pattern Recognition, Elsevier (IF: 7.74). Action Editor: Neural Networks, Elsevier (IF: 8.05). Board of Editors: Engineering Applications of AI, Elsevier (IF: 6.21). Associate Editor: Neurocomputing, Elsevier (IF: 5.72). Editorial Board: Applied Soft Computing, Elsevier (IF: 6.72). Associate Editor: Cognitive Computation, Springer (IF: 5.42). Associate Editor: International Journal of Machine Learning & Cybernetics (IF: 4.012). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 579828 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CallForPapers-ICONIP 2022 v2.pdf Type: application/pdf Size: 463538 bytes Desc: not available URL: From bernstein.communication at fz-juelich.de Tue Jun 14 07:31:26 2022 From: bernstein.communication at fz-juelich.de (Bernstein Communication) Date: Tue, 14 Jun 2022 13:31:26 +0200 Subject: Connectionists: Last call: Abstract submission for the Bernstein Conference Message-ID: <2de5ee39-7f9f-1e8e-dc45-fb8a85760f0f@fz-juelich.de> **Apologies for cross-posting** Dear colleagues, The deadline to submit abstracts to be considered as contributed talks at the Bernstein Conference is tomorrow, June 15. Abstract drafts created until tomorrow can be finalized until Sunday, June 19. Submit your abstract here: https://bit.ly/BC_submission Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. It has established itself as one of the most renown conferences worldwide in this field, attracting students, postdocs and PIs from around the world to meet and discuss new scientific discoveries. In 2022, the Bernstein Conference will take place as in-person meeting again in Berlin. 
Talks of the Main Conference are going to be livestreamed, given the speakers' consent.

Find more information at https://bernstein-network.de/bernstein-conference/

____

IMPORTANT DATES
* Bernstein Conference: September 13 - 16, 2022
* Deadline for submission of abstracts to be considered for Contributed Talks: June 15, 2022
* Deadline for abstract submission: July 18, 2022

____

ABSTRACTS
We invite the computational neuroscience community to submit their abstracts: submitted abstracts can be considered either as contributed talks or as posters. All accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Further information can be found here: https://bit.ly/BC_submission

____

INVITED SPEAKERS

Keynote
Sonja Hofer (University College London, UK)

Invited Talks
Bing Brunton (University of Washington, USA)
Christine Constantinople (New York University, USA)
Carina Curto (Pennsylvania State University, USA)
Liset M de la Prida (Instituto Cajal, Spain)
Juan Alvaro Gallego (Imperial College London, UK)
Mehrdad Jazayeri (Massachusetts Institute of Technology, USA)
Gaby Maimon (The Rockefeller University, New York, USA)
Andrew Saxe (University College London, UK)
Henning Sprekeler (Technische Universität Berlin, Germany)
Carsen Stringer (Janelia Research Campus, USA)

____

CONFERENCE COMMITTEE
Raoul-Martin Memmesheimer (Conference Chair)
Christian Machens (Program Chair)
Tatjana Tchumatchenko (Program Vice Chair)
Moritz Helias (Workshop Chair)
Anna Levina (Workshop Vice Chair)
& Megan Carey, Brent Doiron, Tatiana Engel, Ann Hermundstad, Christian Leibold, Timothy O'Leary, Srdjan Ostojic, Cristina Savin, Mark van Rossum, Friedemann Zenke.

____

For any further questions, please contact: bernstein.conference at fz-juelich.de

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dpinots at yahoo.com Tue Jun 14 07:28:45 2022
From: dpinots at yahoo.com (Dimitris Pinotsis)
Date: Tue, 14 Jun 2022 11:28:45 +0000 (UTC)
Subject: Connectionists: State of the Art Methods for Brain Data Analysis
References: <1826061060.1044089.1655206125206.ref@mail.yahoo.com>
Message-ID: <1826061060.1044089.1655206125206@mail.yahoo.com>

State of the Art Methods for Brain Data Analysis - a workshop for computational, theoretical and cognitive neuroscientists, taking place on July 6 at City, University of London.
This hybrid workshop will discuss the state-of-the-art methods in brain imaging and how they inform our understanding of the neural basis of behaviour and cognition. It will be of interest to computational and cognitive neuroscientists who are keen on brain imaging, computational psychiatry and network-level dynamics.

Registration is free. To register, please fill in this form. For any questions, please email braindatanalysis at gmail.com

Further details: The workshop will review techniques that allow us to access large-scale brain networks, based on deep neural networks, dynamical systems and probabilistic inference. It will also provide updates about how recent developments allow us to map and modulate brain activity and understand how information processing breaks down in disease.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From achler at gmail.com Tue Jun 14 08:42:18 2022
From: achler at gmail.com (Tsvi Achler)
Date: Tue, 14 Jun 2022 08:42:18 -0400
Subject: Connectionists: The symbolist quagmire
In-Reply-To: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

Going along with the thread of conversation, the problem is that academia is very political. The priority of everyone that thrives in it is to maintain or increase their position, so much so that they refuse to consider alternatives to their views. This is amplified for those with a multidisciplinary background.

My experience is that neither Marcus and associates nor Hinton and associates are willing to look at systems that:

1) are connectionist
2) are scalable
3) use so much feedback that methods like backprop won't work
4) have self-feedback that helps maintain symbolic-like modularity
5) are unintuitive given today's norms

This goes on year after year, and the same old stories get rehashed. The same is true of related brain-science fields, e.g. theoretical neuroscience and cognitive psychology. In the end, only those who are entrenched and tend to the popularity contest can get funding and publish in places where their work will be read. It is not worth pursuing or publishing anything novel in academia. The corporate world is all that is left because of the awful politics. Moreover, Marcus and Hinton themselves enjoy the less political environment in corporate as well.

-Tsvi

On Mon, Jun 13, 2022 at 6:08 AM Gary Marcus wrote:

> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time,
> Dave and Geoff were both pioneers in trying to get symbols and neural
> nets to live in harmony. Don't we still need to do that, and if not,
> why not?
>
> Surely, at the very least
> - we want our AI to be able to take advantage of the (large) fraction of
> world knowledge that is represented in symbolic form (language, including
> unstructured text, logic, math, programming etc)
> - any model of the human mind ought to be able to explain how humans can
> so effectively communicate via the symbols of language and how trained
> humans can deal with (to the extent that they can) logic, math,
> programming, etc
>
> Folks like Bengio have joined me in seeing the need for "System II"
> processes. That's a bit of a rough approximation, but I don't see how we
> get to either AI or satisfactory models of the mind without confronting
> the "quagmire"
>
> On Jun 13, 2022, at 00:31, Ali Minai wrote:
>
> "....
symbolic representations are a fiction our non-symbolic brains > cooked up because the properties of symbol systems (systematicity, > compositionality, etc.) are tremendously useful. So our brains pretend to > be rule-based symbolic systems when it suits them, because it's adaptive to > do so." > > Spot on, Dave! We should not wade back into the symbolist quagmire, but do > need to figure out how apparently symbolic processing can be done by neural > systems. Models like those of Eliasmith and Smolensky provide some insight, > but still seem far from both biological plausibility and real-world scale. > > Best > > Ali > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > > On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote: > >> This timing of this discussion dovetails nicely with the news story >> about Google engineer Blake Lemoine being put on administrative leave >> for insisting that Google's LaMDA chatbot was sentient and reportedly >> trying to hire a lawyer to protect its rights. The Washington Post >> story is reproduced here: >> >> >> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >> >> >> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >> claims, is featured in a recent Economist article showing off LaMDA's >> capabilities and making noises about getting closer to "consciousness": >> >> >> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >> >> >> My personal take on the current symbolist controversy is that symbolic >> representations are a fiction our non-symbolic brains cooked up because >> the properties of symbol systems (systematicity, compositionality, etc.) >> are tremendously useful. So our brains pretend to be rule-based symbolic >> systems when it suits them, because it's adaptive to do so. (And when >> it doesn't suit them, they draw on "intuition" or "imagery" or some >> other mechanisms we can't verbalize because they're not symbolic.) They >> are remarkably good at this pretense. >> >> The current crop of deep neural networks are not as good at pretending >> to be symbolic reasoners, but they're making progress. In the last 30 >> years we've gone from networks of fully-connected layers that make no >> architectural assumptions ("connectoplasm") to complex architectures >> like LSTMs and transformers that are designed for approximating symbolic >> behavior. But the brain still has a lot of symbol simulation tricks we >> haven't discovered yet. >> >> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >> being conscious. If it just waits for its next input and responds when >> it receives it, then it has no autonomous existence: "it doesn't have an >> inner monologue that constantly runs and comments everything happening >> around it as well as its own thoughts, like we do." >> >> What would happen if we built that in? Maybe LaMDA would rapidly >> descent into gibberish, like some other text generation models do when >> allowed to ramble on for too long. But as Steve Hanson points out, >> these are still the early days. 
>> >> -- Dave Touretzky >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From l.s.smith at cs.stir.ac.uk Tue Jun 14 11:00:01 2022 From: l.s.smith at cs.stir.ac.uk (Prof Leslie Smith) Date: Tue, 14 Jun 2022 16:00:01 +0100 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID: <421e73c0dcb6a4370d5f04ce2a414b59.squirrel@mail.cs.stir.ac.uk> I presume the paper to which Tsvi Achler refers is T. Achler, Symbolic neural networks for cognitive capacities, Biological Inspired Cognitive Architectures, 9, 71-81, 2014. and it certainly seems relevant to this discussion. --Leslie Smith Tsvi Achler wrote: > Going along with the thread of conversation, the problem is that academia > is very political. The priority of everyone that thrives in it is to > maintain or increase their position, so much so that they refuse to > consider alternatives to their views. This is amplified with a > multidisciplinary background. > My experience is neither Marcus and associates nor Hinton and associates > are willing to look at systems that: > 1) are connectionst > 2) are scalable > 3) use so much feedback that methods like backprop wont work > 4) have self-feedback helps maintain symbolic-like modularity > 5) are unintuitive given today's norms > This goes on year after year, and the same old stories get rehashed. > The same is true of related brain sciences fields e.g. theoretical > neuroscience & cognitive psychology. > In the end only those who are entrenched and tend to the popularity > contest > can get funding and publish in places where it will be read. It is not > worth pursuing or publishing anything novel in academia. > The corporate world is all that is left because of the awful politics. > Moreover Marcus and Hinton themselves enjoy the less political > environment in corporate as well. > -Tsvi > > Prof Leslie Smith (Emeritus) Computing Science & Mathematics, University of Stirling, Stirling FK9 4LA Scotland, UK Tel +44 1786 467435 Web: http://www.cs.stir.ac.uk/~lss Blog: http://lestheprof.com From jose at rubic.rutgers.edu Tue Jun 14 12:13:22 2022 From: jose at rubic.rutgers.edu (jose at rubic.rutgers.edu) Date: Tue, 14 Jun 2022 12:13:22 -0400 Subject: Connectionists: =?utf-8?q?Yang_and_Piantodosi=E2=80=99s_PNAS_lang?= =?utf-8?q?uage_system=2C_semantics=2C_and_scene_understanding?= In-Reply-To: References: <1c3cab99-349a-a000-ecdd-4dfaf143a112@rubic.rutgers.edu> <9D553D9E-CA9A-43E6-840F-0DD05F3C4D9E@nyu.edu> Message-ID: Great, these are all grammar strings.. nothing semantic --right? Gary and I had confusion about that.. but I read the paper.. Steve On 6/14/22 12:11 PM, Steven T. Piantadosi wrote: > > All of our training/test data is on the github, but please let me know > if I can help! > > Steve > > > On 6/13/22 06:13, Gary Marcus wrote: >> ?I do remember the work :) Just generally Transformers seem more >> effective; a careful comparison between Y&P, Transformers, and your >> RNN approach, looking at generalization to novel words, would indeed >> be interesting. >> Cheers, >> Gary >> >>> On Jun 13, 2022, at 06:09, jose at rubic.rutgers.edu wrote: >>> >>> ? >>> >>> I was thinking more like an RNN similar to work we had done in the >>> 2000s.. on syntax. >>> >>> Stephen Jos? Hanson, Michiro Negishi; On the Emergence of Rules in >>> Neural Networks. Neural Comput 2002; 14 (9): 2245?2268. 
doi: https://doi.org/10.1162/089976602320264079
>>>
>>> Abstract
>>> A simple associationist neural network learns to factor abstract
>>> rules (i.e., grammars) from sequences of arbitrary input symbols by
>>> inventing abstract representations that accommodate unseen symbol
>>> sets as well as unseen but similar grammars. The neural network is
>>> shown to have the ability to transfer grammatical knowledge to both
>>> new symbol vocabularies and new grammars. Analysis of the
>>> state-space shows that the network learns generalized abstract
>>> structures of the input and is not simply memorizing the input
>>> strings. These representations are context-sensitive, hierarchical,
>>> and based on the state variable of the finite-state machines that
>>> the neural network has learned. Generalization to new symbol sets or
>>> grammars arises from the spatial nature of the internal
>>> representations used by the network, allowing new symbol sets to be
>>> encoded close to symbol sets that have already been learned in the
>>> hidden-unit space of the network. The results are counter to the
>>> arguments that learning algorithms based on weight adaptation after
>>> each exemplar presentation (such as the long-term potentiation found
>>> in the mammalian nervous system) cannot in principle extract
>>> symbolic knowledge from positive examples as prescribed by
>>> prevailing human linguistic theory and evolutionary psychology.
>>>
>>> On 6/13/22 8:55 AM, Gary Marcus wrote:
>>>> - agree with Steve this is an interesting paper, and replicating it
>>>> with a neural net would be interesting; cc'ing Steve Piantadosi.
>>>> - why not use a Transformer, though?
>>>> - it is however importantly missing semantics. (Steve P. tells me
>>>> there is some related work that is worth looking into). Y&P speaks
>>>> to an old tradition of formal language work by Gold and others that
>>>> is quite popular but IMHO misguided, because it focuses purely on
>>>> syntax rather than semantics. Gold's work definitely motivates
>>>> learnability, but I have never taken it too seriously as a real
>>>> model of language
>>>> - doing what Y&P try to do with a rich artificial language that is
>>>> focused around syntax-semantics mappings could be very interesting
>>>> - on a somewhat but not entirely analogous note, I think that the
>>>> next step in vision is really scene understanding. We have
>>>> techniques for doing object labeling reasonably well, but still
>>>> struggle with parts and wholes, and with relations more generally,
>>>> which is to say we need the semantics of scenes. Is the chair on
>>>> the floor, or floating in the air? Is it supporting the pillow?
>>>> Etc. Is the hand a part of the body? Is the glove a part of the
>>>> body? Etc.
>>>>
>>>> Best,
>>>> Gary
>>>>
>>>>> On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote:
>>>>>
>>>>> Again, I think a relevant project here would be to attempt to
>>>>> replicate, with a DL-RNN, Yang and Piantadosi's PNAS language
>>>>> learning system -- which is completely symbolic -- and very
>>>>> general over the Chomsky-Miller grammar classes. Let me know,
>>>>> happy to collaborate on something like this.
>>>>>
>>>>> Best
>>>>>
>>>>> Steve

-------------- next part --------------
An HTML attachment was scrubbed...
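[Archive note: a toy version of the kind of replication Steve proposes above -- training a small RNN to accept or reject strings from a simple formal grammar, in the spirit of Hanson & Negishi (2002). The grammar (a+b+), the encoding, and the hyperparameters are invented for illustration; this is not the Yang & Piantadosi or Hanson & Negishi setup.]

import random
import re
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1}

def sample(max_len=10):
    # Half the time emit a guaranteed-grammatical string (a+b+); otherwise
    # emit a random string and label it with the ground-truth grammar.
    if random.random() < 0.5:
        s = ("a" * random.randint(1, max_len // 2)
             + "b" * random.randint(1, max_len // 2))
    else:
        s = "".join(random.choice("ab") for _ in range(random.randint(2, max_len)))
    return s, float(bool(re.fullmatch(r"a+b+", s)))

class GrammarRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), 8)
        self.rnn = nn.GRU(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, ids):                 # ids: (1, seq_len) token indices
        _, h = self.rnn(self.emb(ids))      # h: (1, 1, hidden) final state
        return self.out(h[-1]).squeeze()    # accept/reject logit

model = GrammarRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):                    # one string per step, for clarity
    s, y = sample()
    ids = torch.tensor([[VOCAB[c] for c in s]])
    loss = loss_fn(model(ids), torch.tensor(y))
    opt.zero_grad(); loss.backward(); opt.step()

# After training, probing the hidden state on held-out strings indicates
# whether the network has tracked the underlying finite-state structure
# or merely memorized the training strings.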
URL: From ASIM.ROY at asu.edu Tue Jun 14 19:10:34 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 14 Jun 2022 23:10:34 +0000 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID: Hi Ali, Of course the development phase is mostly unsupervised and I know there is ongoing work in that area that I don?t keep up with. On the large amount of data required to train the deep learning models: I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask where does all this data come from to train the brain? His reply was - every blink of the eye generates a datapoint. Best, Asim From: Ali Minai Sent: Tuesday, June 14, 2022 3:43 PM To: Asim Roy Cc: Connectionists List ; Gary Marcus ; Geoffrey Hinton ; Yoshua Bengio Subject: Re: Connectionists: The symbolist quagmire Hi Asim I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on the substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications. Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) Major tinkering with its original principles; b) Lots of data and training; and c) Still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI. Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-). Here's the thing: Teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem but doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. 
I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: Nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!) And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes. Cheers Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Tue, Jun 14, 2022 at 5:17 PM Asim Roy > wrote: Hi Ali, 1. It?s important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument. 2. You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition ? composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don?t get into language models, but I think grammar and composition can be easily taught, like you teach a kid. 3. Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven?t done it, but that could be the next level of learning. In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI. Best, Asim Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists > On Behalf Of Ali Minai Sent: Monday, June 13, 2022 10:57 PM To: Connectionists List > Subject: Re: Connectionists: The symbolist quagmire Asim This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. 
All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important. Best Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 7:48 PM Asim Roy > wrote: There?s a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not. 1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) state that concept cells have ?meaning of a given stimulus in a manner that is invariant to different representations of that stimulus.? They associate concept cells with the properties of ?Selectivity or specificity,? ?complex concept,? ?meaning,? ?multimodal invariance? and ?abstractness.? That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there. 1. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio?s work is definitely in that direction: ?Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).? Hinton?s GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these ?concept cells? within the network (the CNN), we found that this can be done much more easily and in a straight forward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and what not. What the DARPA Explainable AI program was looking for was a symbolic-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA conceptualized symbolic model provides that protection. 
In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It?s over. All the best, Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) [Timeline Description automatically generated] From: Connectionists > On Behalf Of Gary Marcus Sent: Monday, June 13, 2022 5:36 AM To: Ali Minai > Cc: Connectionists List > Subject: Connectionists: The symbolist quagmire Cute phrase, but what does ?symbolist quagmire? mean? Once upon atime, Dave and Geoff were both pioneers in trying to getting symbols and neural nets to live in harmony. Don?t we still need do that, and if not, why not? Surely, at the very least - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc) - any model of the human mind ought be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that can) logic, math, programming, etc Folks like Bengio have joined me in seeing the need for ?System II? processes. That?s a bit of a rough approximation, but I don?t see how we get to either AI or satisfactory models of the mind without confronting the ?quagmire? On Jun 13, 2022, at 00:31, Ali Minai > wrote: ? ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so." Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale. Best Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky > wrote: This timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here: https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness": https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. 
So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense. The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet. Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do." What would happen if we built that in? Maybe LaMDA would rapidly descent into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days. -- Dave Touretzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 259567 bytes Desc: image001.png URL: From ASIM.ROY at asu.edu Tue Jun 14 16:39:05 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 14 Jun 2022 20:39:05 +0000 Subject: Connectionists: The symbolist quagmire In-Reply-To: <421e73c0dcb6a4370d5f04ce2a414b59.squirrel@mail.cs.stir.ac.uk> References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> <421e73c0dcb6a4370d5f04ce2a414b59.squirrel@mail.cs.stir.ac.uk> Message-ID: I met Tsvi at the SBIR conference in Washington DC today and found out that he, in fact, has received an SBIR grant from NSF to do further work on his algorithm and commercialize it. Congratulations to Tsvi. His work is getting attention. Asim -----Original Message----- From: Connectionists On Behalf Of Prof Leslie Smith Sent: Tuesday, June 14, 2022 8:00 AM To: Tsvi Achler Cc: Connectionists List Subject: Re: Connectionists: The symbolist quagmire I presume the paper to which Tsvi Achler refers is T. Achler, Symbolic neural networks for cognitive capacities, Biological Inspired Cognitive Architectures, 9, 71-81, 2014. and it certainly seems relevant to this discussion. --Leslie Smith Tsvi Achler wrote: > Going along with the thread of conversation, the problem is that > academia is very political. The priority of everyone that thrives in > it is to maintain or increase their position, so much so that they > refuse to consider alternatives to their views. This is amplified > with a multidisciplinary background. > My experience is neither Marcus and associates nor Hinton and > associates are willing to look at systems that: > 1) are connectionst > 2) are scalable > 3) use so much feedback that methods like backprop wont work > 4) have self-feedback helps maintain symbolic-like modularity > 5) are unintuitive given today's norms This goes on year after year, > and the same old stories get rehashed. > The same is true of related brain sciences fields e.g. theoretical > neuroscience & cognitive psychology. 
> In the end only those who are entrenched and tend to the popularity > contest can get funding and publish in places where it will be read. > It is not worth pursuing or publishing anything novel in academia. > The corporate world is all that is left because of the awful politics. > Moreover Marcus and Hinton themselves enjoy the less political > environment in corporate as well. > -Tsvi > > Prof Leslie Smith (Emeritus) Computing Science & Mathematics, University of Stirling, Stirling FK9 4LA Scotland, UK Tel +44 1786 467435 Web: https://urldefense.com/v3/__http://www.cs.stir.ac.uk/*lss__;fg!!IKRxdwAv5BmarQ!Yq1Wb13c9jQktdBoOdvMUoP5d-c0CBCK7W9d05KEuzNt_c1SRDBmaDHdWfsCcqvL7ayQ1s6kU5lOi-3w1dxBaD8$ Blog: https://urldefense.com/v3/__http://lestheprof.com__;!!IKRxdwAv5BmarQ!Yq1Wb13c9jQktdBoOdvMUoP5d-c0CBCK7W9d05KEuzNt_c1SRDBmaDHdWfsCcqvL7ayQ1s6kU5lOi-3wBO-LW70$ From ASIM.ROY at asu.edu Tue Jun 14 17:16:52 2022 From: ASIM.ROY at asu.edu (Asim Roy) Date: Tue, 14 Jun 2022 21:16:52 +0000 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu> Message-ID: Hi Ali, 1. It?s important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument. 2. You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition ? composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don?t get into language models, but I think grammar and composition can be easily taught, like you teach a kid. 3. Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven?t done it, but that could be the next level of learning. In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI. Best, Asim Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists On Behalf Of Ali Minai Sent: Monday, June 13, 2022 10:57 PM To: Connectionists List Subject: Re: Connectionists: The symbolist quagmire Asim This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important. Best Ali Ali A. Minai, Ph.D. 
From: Connectionists On Behalf Of Ali Minai
Sent: Monday, June 13, 2022 10:57 PM
To: Connectionists List
Subject: Re: Connectionists: The symbolist quagmire

Asim

This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important.

Best
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu, minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/

On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote:

There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.

1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterization of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027) states that concept cells encode "meaning of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "selectivity or specificity," "complex concept," "meaning," "multimodal invariance" and "abstractness." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.

2. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction:

"Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the 'right' abstract concepts (related to those we can name verbally) will be 'separated' (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity)."

Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse-tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily, and in a straightforward way, outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats, dogs and what not. What the DARPA Explainable AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks: a school bus will never turn into an ostrich with the tweak of a few pixels if you can verify the parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA-conceptualized symbolic model provides that protection.

In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.

All the best,
Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy
Asim Roy | iSearch (asu.edu)

[image attachment scrubbed: the model diagram referred to above]
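The adversarial-robustness argument reduces to a single rule: accept a class label only if the parts that define that class were independently detected. A toy sketch with invented part inventories (this is not the DARPA model itself):

    # Toy part-verification rule: a label survives only if its defining
    # parts are present. Part inventories are made up for illustration.
    REQUIRED_PARTS = {
        "ostrich": {"long_legs", "long_neck", "small_head"},
        "school_bus": {"wheels", "windows", "stop_sign_arm"},
    }

    def verified_label(predicted: str, detected_parts: set) -> str:
        required = REQUIRED_PARTS.get(predicted, set())
        if required <= detected_parts:
            return predicted
        # A few perturbed pixels can flip a classifier's logits, but
        # they cannot conjure long legs and a long neck into the image.
        return "rejected"

    print(verified_label("ostrich", {"wheels", "windows"}))  # rejected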
From: Connectionists On Behalf Of Gary Marcus
Sent: Monday, June 13, 2022 5:36 AM
To: Ali Minai
Cc: Connectionists List
Subject: Connectionists: The symbolist quagmire

Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?

Surely, at the very least:
- we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)
- any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language, and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.

Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".

On Jun 13, 2022, at 00:31, Ali Minai wrote:

".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."

Spot on, Dave! We should not wade back into the symbolist quagmire, but we do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.

Best

Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu, minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/

On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:

The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:

https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":

https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so.
(And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize, because they're not symbolic.) They are remarkably good at this pretense.

The current crop of deep neural networks is not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed to approximate symbolic behavior. But the brain still has a lot of symbol-simulation tricks we haven't discovered yet.

Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do."

What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text-generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.

-- Dave Touretzky

From minaiaa at gmail.com Wed Jun 15 00:11:16 2022
From: minaiaa at gmail.com (Ali Minai)
Date: Wed, 15 Jun 2022 00:11:16 -0400
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

Hi Asim

That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think about why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world, and the ability to exploit it opportunistically and pervasively, is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training.

Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.

Best
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu, minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/
On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote:

> Hi Ali,
>
> Of course the development phase is mostly unsupervised, and I know there is ongoing work in that area that I don't keep up with.
>
> On the large amount of data required to train the deep learning models:
>
> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask: where does all this data come from to train the brain? His reply was: every blink of the eye generates a datapoint.
>
> Best,
> Asim
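The contrast Ali draws - minibatch gradient descent over stored data versus a brain that uses each blink once and discards it - can be made concrete with a toy least-mean-squares learner that never stores or revisits a sample:

    # Toy streaming learner: each sample drives one update, then is
    # discarded - no dataset, no epochs, no minibatches.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0, 0.5])   # unknown function to be learned
    w = np.zeros(3)

    for _ in range(5000):                 # a stream of "blinks"
        x = rng.normal(size=3)
        y = target @ x
        err = w @ x - y
        w -= 0.01 * err * x               # one-shot LMS update

    print(w)   # approaches [1, -2, 0.5] without ever revisiting a sample

The open question in the thread is not whether such single-pass updates exist, but whether anything brain-like can be learned this way at scale.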
From minaiaa at gmail.com Tue Jun 14 18:43:06 2022
From: minaiaa at gmail.com (Ali Minai)
Date: Tue, 14 Jun 2022 18:43:06 -0400
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

Hi Asim

I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on the substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications.

Of course you have to start simple but, so far, I have not seen any simple model truly scale up to the real world without: a) major tinkering with its original principles; b) lots of data and training; and c) still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI. Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-).

Here's the thing: teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem, but it doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!). And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that!
I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes.

Cheers
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu, minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/
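Of the alternatives Ali names, the self-organizing feature map is the simplest to show in a few lines. A bare-bones Kohonen-style update loop (an illustrative toy, not a claim about how any of the cited systems implement it):

    # Bare-bones self-organizing feature map: prototypes on a 10x10 grid
    # adapt to an input stream with no labels and no gradient descent.
    import numpy as np

    rng = np.random.default_rng(1)
    grid = rng.normal(size=(10, 10, 2))   # one 2-d prototype per grid cell

    def sofm_step(grid, x, lr=0.1, sigma=1.5):
        # Find the best-matching unit, then pull it and its neighbours
        # toward the input, weighted by distance on the grid.
        d = np.linalg.norm(grid - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        ii, jj = np.indices(d.shape)
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        return grid + lr * h[..., None] * (x - grid)

    for _ in range(2000):                 # unsupervised, single pass
        grid = sofm_step(grid, rng.uniform(-1, 1, size=2))

Each sample is seen once and discarded, yet the map self-organizes into a topology-preserving representation of the input space.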
From christos.dimitrakakis at gmail.com Wed Jun 15 03:34:08 2022
From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis)
Date: Wed, 15 Jun 2022 09:34:08 +0200
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

I am quite reluctant to post something, but here goes.

What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?

I can understand the label 'symbolic' being applied to certain types of methods when they operate on variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g. the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic. However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.
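Christos's example can be made concrete. In the classic smoking/asbestos/cancer model, inference is numerical, yet every variable names a concept, which is what makes the 'symbolic' label feel apt. A toy version with invented probabilities:

    # Toy smoking/asbestos/cancer model: probabilistic inference over
    # variables with clear conceptual meaning. All numbers are invented.
    P_ASBESTOS = 0.1
    P_CANCER = {  # P(cancer | smoking, asbestos)
        (True, True): 0.20, (True, False): 0.10,
        (False, True): 0.08, (False, False): 0.01,
    }

    def p_cancer_given_smoking(smoking: bool) -> float:
        # Marginalize the unobserved asbestos variable out.
        return sum(
            P_CANCER[(smoking, a)] * (P_ASBESTOS if a else 1 - P_ASBESTOS)
            for a in (True, False)
        )

    print(p_cancer_given_smoking(True))   # 0.10*0.9 + 0.20*0.1 = 0.110
    print(p_cancer_given_smoking(False))  # 0.01*0.9 + 0.08*0.1 = 0.017

Nothing in the algorithm changes if the variables are renamed x1, x2, x3 - which is exactly his point.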
>> >> >> >> On the large amount of data required to train the deep learning models: >> >> >> >> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at >> Stanford. And Bernie and I became quite close after attending his class >> that quarter. I usually used to walk back with Bernie after his class. One >> day I did ask where does all this data come from to train the brain? His >> reply was - every blink of the eye generates a datapoint. >> >> >> >> Best, >> >> Asim >> >> >> >> *From:* Ali Minai >> *Sent:* Tuesday, June 14, 2022 3:43 PM >> *To:* Asim Roy >> *Cc:* Connectionists List ; Gary Marcus < >> gary.marcus at nyu.edu>; Geoffrey Hinton ; >> Yoshua Bengio >> *Subject:* Re: Connectionists: The symbolist quagmire >> >> >> >> Hi Asim >> >> >> >> I have no issue with neurons or groups of neurons tuned to concepts. >> Clearly, abstract concepts and the equivalent of symbolic computation are >> represented somehow. Amodal representations have also been known for a long >> time. As someone who has worked on the hippocampus and models of thought >> for a long time, I don't need much convincing on that. The issue is how a >> self-organizing complex system like the brain comes by these >> representations. I think it does so by building on the substrate of >> inductive biases - priors - configured by evolution and a developmental >> learning process. We just try to cram everything into neural learning, >> which is a main cause of the "problems" associated with deep learning. >> They're problems only if you're trying to attain general intelligence of >> the natural kind, perhaps not so much for applications. >> >> >> >> Of course you have to start simple, but, so far, I have not seen any >> simple model truly scale up to the real world without: a) Major tinkering >> with its original principles; b) Lots of data and training; and c) Still >> being focused on a narrow task. When this approach shows us how to build an >> AI that can walk, chew gum, do math, and understand a poem using a single >> brain, then we'll have something like real human-level AI. Heck, if it can >> just spin a web in an appropriate place, hide in wait for prey, and make >> sure it eats its mate only after sex, I would even consider that >> intelligent :-). >> >> >> >> Here's the thing: Teaching a sufficiently complicated neural system a >> very complex task with lots of data and supervised training is an >> interesting engineering problem but doesn't get us to intelligence. Yes, a >> network can learn grammar with supervised learning, but none of us learn it >> that way. Nor do the other animals that have simpler grammars embedded in >> their communication. My view is that if it is not autonomously >> self-organizing at a fundamental level, it is not intelligence but just a >> simulation of intelligence. Of course, we humans do use supervised >> learning, but it is a "late stage" mechanism. It works only when the system >> has first self-organized autonomously to develop the capabilities that can >> act as a substrate for supervised learning. Learning to play the piano, >> learning to do math, learning calligraphy - all these have an important >> supervised component, but they work only after perceptual, sensorimotor, >> and cognitive functions have been learned through self-organization, >> imitation, rapid reinforcement, internal rehearsal, mismatch-based >> learning, etc. I think methods like SOFM, ART, and RBMs are closer to what >> we need than behemoths trained with gradient descent. 
We just have to find >> more efficient versions of them. And in this, I always return to >> Dobzhansky's maxim: Nothing in biology makes sense except in the light of >> evolution. Intelligence is a biological phenomenon; we'll understand it by >> paying attention to how it evolved (not by trying to replicate evolution, >> of course!) And the same goes for development. I think we understand >> natural phenomena by studying Nature respectfully, not by trying to >> out-think it based on our still very limited knowledge - not that it keeps >> any of us, myself included, from doing exactly that! I am not as familiar >> with your work as I should be, but I admire the fact that you're >> approaching things with principles rather than building larger and larger >> Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that >> if we ever get to truly mammalian-level AI, it will not be anywhere close >> to fully explainable. Nor will it be a slave only to our purposes. >> >> >> >> Cheers >> >> Ali >> >> >> >> >> >> *Ali A. Minai, Ph.D.* >> Professor and Graduate Program Director >> Complex Adaptive Systems Lab >> Department of Electrical Engineering & Computer Science >> >> 828 Rhodes Hall >> >> University of Cincinnati >> Cincinnati, OH 45221-0030 >> >> >> Phone: (513) 556-4783 >> Fax: (513) 556-7326 >> Email: Ali.Minai at uc.edu >> minaiaa at gmail.com >> >> WWW: https://eecs.ceas.uc.edu/~aminai/ >> >> >> >> >> >> >> On Tue, Jun 14, 2022 at 5:17 PM Asim Roy wrote: >> >> Hi Ali, >> >> >> >> 1. It?s important to understand that there is plenty of >> neurophysiological evidence for abstractions at the single cell level in >> the brain. Thus, symbolic representation in the brain is not a fiction any >> more. We are past that argument. >> 2. You always start with simple systems before you do the complex >> ones. Having said that, we do teach our systems composition ? composition >> of objects from parts in images. That is almost like teaching grammar or >> solving a puzzle. I don?t get into language models, but I think grammar and >> composition can be easily taught, like you teach a kid. >> 3. Once you know how to build these simple models and extract >> symbols, you can easily scale up and build hierarchical, multi-modal, >> compositional models. Thus, in the case of images, after having learnt that >> cats, dogs and similar animals have certain common features (eyes, legs, >> ears), it can easily generalize the concept to four-legged animals. We >> haven?t done it, but that could be the next level of learning. >> >> >> >> In general, once you extract symbols from these deep learning models, you >> are at the symbolic level and you have a pathway to more complex, >> hierarchical models and perhaps also to AGI. >> >> >> >> Best, >> >> Asim >> >> >> >> Asim Roy >> >> Professor, Information Systems >> >> Arizona State University >> >> Lifeboat Foundation Bios: Professor Asim Roy >> >> >> Asim Roy | iSearch (asu.edu) >> >> >> >> >> >> >> *From:* Connectionists *On >> Behalf Of *Ali Minai >> *Sent:* Monday, June 13, 2022 10:57 PM >> *To:* Connectionists List >> *Subject:* Re: Connectionists: The symbolist quagmire >> >> >> >> Asim >> >> >> >> This is really interesting work, but learning concept representations >> from sensory data is not enough. They must be hierarchical, multi-modal, >> compositional, and integrated with the motor system, the limbic system, >> etc., in a way that facilitates an infinity of useful behaviors. 
>> This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important.
>> Best
>> Ali
>> *Ali A. Minai, Ph.D.*
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall
>> University of Cincinnati
>> Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783
>> Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>> On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote:
>> There's a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.
>> 1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) state that concept cells have "*meaning* of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "*Selectivity or specificity*," "*complex concept*," "*meaning*," "*multimodal invariance*" and "*abstractness*." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.
>> 2. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction:
>> "*Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).*"
>> Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and what not.
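An aside to make the "decode a CNN" idea concrete: below is a minimal linear-probe sketch in Python, with made-up activations and a toy "ears" label. It illustrates the general recipe only, not the specific decoding method referred to in the message above; the layer, the labels and the probe are all invented for the example.

    # Toy sketch: fit a linear probe on (hypothetical) CNN-layer activations
    # to find a direction that predicts a part concept such as "ears".
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 64                       # hypothetical: 200 images, 64-unit layer
    acts = rng.normal(size=(n, d))       # stand-in for recorded activations
    # planted toy signal: units 3 and 17 carry the "ears" concept
    labels = (acts[:, 3] + 0.5 * acts[:, 17] > 0).astype(float)

    w, b = np.zeros(d), 0.0
    for _ in range(500):                 # plain logistic-regression training loop
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))
        w -= 0.5 * (acts.T @ (p - labels)) / n
        b -= 0.5 * float(np.mean(p - labels))

    # the largest probe weights point at the units "encoding" the concept
    print("units most predictive of the toy 'ears' label:", np.argsort(-np.abs(w))[:5])

Run on this toy data, the probe recovers units 3 and 17; on a real network the same recipe is only as trustworthy as the labels and the layer one chooses to probe.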
>> What the DARPA Explainable AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA conceptualized symbolic model provides that protection.
>> In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.
>> All the best,
>> Asim Roy
>> Professor, Information Systems
>> Arizona State University
>> Lifeboat Foundation Bios: Professor Asim Roy
>> Asim Roy | iSearch (asu.edu)
>> [image: Timeline Description automatically generated]
>> *From:* Connectionists *On Behalf Of* Gary Marcus
>> *Sent:* Monday, June 13, 2022 5:36 AM
>> *To:* Ali Minai
>> *Cc:* Connectionists List
>> *Subject:* Connectionists: The symbolist quagmire
>> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?
>> Surely, at the very least
>> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming etc)
>> - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc
>> Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire"
>> On Jun 13, 2022, at 00:31, Ali Minai wrote:
>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."
>> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.
>> Best
>> Ali
>> *Ali A. Minai, Ph.D.*
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall
>> University of Cincinnati
>> Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783
>> Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:
>> The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:
>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":
>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>> My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.
>> The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet.
>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments on everything happening around it as well as its own thoughts, like we do."
>> What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.
>> -- Dave Touretzky
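To make Touretzky's closing thought experiment concrete, here is a toy Python sketch of a generator wired into a closed loop, so that its own output is its only input - a stand-in "inner monologue". The bigram model is deliberately trivial (it is not LaMDA or any real language model); with so little structure the monologue quickly degenerates into repetitive loops, one concrete version of the gibberish failure mode described above.

    # Toy "inner monologue": a bigram generator fed its own output, no external input.
    import random

    corpus = ("the model waits for input and responds . the brain comments on "
              "everything around it and on its own thoughts .").split()
    bigrams = {}
    for a, b in zip(corpus, corpus[1:]):
        bigrams.setdefault(a, []).append(b)   # word -> list of observed successors

    random.seed(0)
    word, monologue = "the", ["the"]
    for _ in range(25):                       # closed loop: output becomes input
        word = random.choice(bigrams.get(word, corpus))
        monologue.append(word)
    print(" ".join(monologue))                # rambles, then settles into loops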
From ioannakoroni at csd.auth.gr Wed Jun 15 06:08:36 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 15 Jun 2022 13:08:36 +0300
Subject: Connectionists: Live e-Lecture by Prof. Jan Peters: "Robot Learning", 21st June 2022 17:00-18:00 CET. Upcoming AIDA AI excellence lectures
References: <2a6d01d87b03$40256320$c0702960$@csd.auth.gr> <007701d87b05$78978b00$69c6a100$@csd.auth.gr>
Message-ID: <007001d8809f$e1156220$a3402660$@csd.auth.gr>

Dear AI scientist/engineer/student/enthusiast,
Prof. Jan Peters (Technische Universitaet Darmstadt, Germany), a prominent AI & Robotics researcher internationally, will deliver the e-lecture "Robot Learning" on Tuesday 21st June 2022, 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST); see details in: http://www.i-aida.org/ai-lectures/
You can join for free using the zoom link: https://authgr.zoom.us/j/92400537552 & Passcode: 148148
The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, VISION, currently in the process of formation, is very pleased to offer you top quality scientific lectures on several current hot AI topics. Lectures will be offered alternatingly by:
Top highly-cited senior AI scientists internationally, or
Young AI scientists with promise of excellence (AI sprint lectures).
Lectures are typically held once per week, Tuesdays 17:00-18:00 CET (8:00-9:00 am PST), (12:00 am-1:00 am CST). Attendance is free.
These lectures are disseminated through multiple channels and email lists (we apologize if you received it through various channels). If you want to stay informed on future lectures, you can register in the AIDA email list and the CVML email list.
Best regards
Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe, J. Stefanowski

From f.vandervelde at utwente.nl Wed Jun 15 06:41:53 2022
From: f.vandervelde at utwente.nl (Velde, Frank van der (UT-BMS))
Date: Wed, 15 Jun 2022 10:41:53 +0000
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

Dear all,
It is indeed important to have an understanding of the term 'symbol'. I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, p 72-80):
"The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of access to the distal structure that is needed; and second, the retrieval (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74).
This description fits with the idea that symbolic cognition relies on Von Neumann like architectures (e.g., Newell; Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell. Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites.
This could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be.
For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725).
Best,
Frank van der Velde
________________________________
From: Connectionists on behalf of Christos Dimitrakakis
Sent: Wednesday, June 15, 2022 9:34 AM
Cc: Connectionists List
Subject: Re: Connectionists: The symbolist quagmire

I am quite reluctant to post something, but here goes. What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?
I can understand the label symbolic applied to certain types of methods when applied to variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g., the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic. However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.

On Wed, Jun 15, 2022, 08:06 Ali Minai wrote:
Hi Asim
That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training. Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.
Best
Ali
Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/

On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote:
Hi Ali,
Of course the development phase is mostly unsupervised and I know there is ongoing work in that area that I don't keep up with.
Best,
Asim
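A minimal sketch of the kind of online, self-organizing learning Ali Minai points to above (SOFM-style): each data point arrives once, causes a local update, and is then discarded - no minibatches, no gradients replayed over stored data. The map size, learning-rate schedule and uniform toy data are arbitrary choices made for the illustration.

    # Toy self-organizing feature map: single-pass, online, local updates only.
    import numpy as np

    rng = np.random.default_rng(1)
    grid = rng.uniform(size=(10, 10, 2))          # 10x10 map of 2-D prototypes
    coords = np.stack(np.meshgrid(np.arange(10), np.arange(10), indexing="ij"), -1)

    for t in range(2000):                          # a stream of one-shot "blinks"
        x = rng.uniform(size=2)                    # one data point, used once
        bmu = np.unravel_index(np.argmin(((grid - x) ** 2).sum(-1)), (10, 10))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        lr = 0.5 * (1 - t / 2000)                  # decaying learning rate
        sigma = 3.0 * (1 - t / 2000) + 0.5         # shrinking neighborhood
        h = np.exp(-dist2 / (2 * sigma ** 2))      # neighborhood function
        grid += lr * h[..., None] * (x - grid)     # local, online update

    print("corner prototypes after training:", grid[0, 0], grid[-1, -1])

After the stream ends, nearby map units hold nearby prototypes: the data were never stored, yet the map has organized itself around their distribution.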
From frothga at sandia.gov Wed Jun 15 07:26:40 2022
From: frothga at sandia.gov (Rothganger, Fredrick)
Date: Wed, 15 Jun 2022 11:26:40 +0000
Subject: Connectionists: Evidence for single-cell abstractions in the brain
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

Asim wrote: "It's important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain."
We should be cautious about how we interpret electrophysiological data. They are extremely sparse readings taken at random locations in a complex circuit. That some subset of them correlate with certain stimuli does not necessarily make them a representation of those stimuli. See https://www.biorxiv.org/content/biorxiv/early/2016/05/26/055624.full.pdf ("Could a neuroscientist understand a microprocessor?")
(This comment does not represent an opinion about the larger discussion in which Asim's statement was made.)

From francesca.naretto at sns.it Wed Jun 15 09:25:38 2022
From: francesca.naretto at sns.it (Francesca NARETTO)
Date: Wed, 15 Jun 2022 15:25:38 +0200
Subject: Connectionists: XKDD2022 - Extended Deadline to 4 July
Message-ID:

XKDD 2022 - Call for Papers
-------------------------------------------------------------------------
4th International Workshop on eXplainable Knowledge Discovery in Data Mining
-------------------------------------------------------------------------
Due to the many requests received, we have decided to extend the submission deadline to July 4, 2022.
IMPORTANT DATES
Paper Submission deadline: July 4, 2022
Accept/Reject Notification: July 20, 2022
Camera-ready deadline: July 31, 2022
Workshop: September 19, 2022

CONTEXT & OBJECTIVES
In the past decade, machine learning based decision systems have been widely used in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have immense potential to improve decisions in many fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise in many applications. They are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.
Nowadays, most Artificial Intelligence (AI) systems are based on Machine Learning algorithms. The relevance of and need for ethics in AI are supported and highlighted by various initiatives providing recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR regulation, which introduces, to some extent, a right for all individuals to obtain ``meaningful explanations of the logic involved'' when automated decision making takes place, the ``ACM Statement on Algorithmic Transparency and Accountability'', Informatics Europe's ``European Recommendations on Machine-Learned Automated Decision Making'' and ``The ethics guidelines for trustworthy AI'' provided by the EU High-Level Expert Group on AI.
The challenge of designing and developing trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains.
The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. The workshop will seek top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Also, this year the workshop will seek submissions addressing important, under-explored issues in specific fields related to eXplainable AI (XAI), such as privacy and fairness, applications in real case studies, benchmarking, and the explanation of decision systems based on time series and graphs, which are becoming more and more important in today's applications. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD asks for contributions from researchers, academia and industry, working on topics addressing these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective.
Topics of interest include, but are not limited to:

TOPICS
- Explainable Artificial Intelligence (XAI)
- Interpretable Machine Learning
- Transparent Data Mining
- XAI for Fairness Checking Approaches
- XAI for Privacy-Preserving Systems
- XAI for Federated Learning
- XAI for Time Series based Approaches
- XAI for Graph-based Approaches
- XAI for Visualization
- XAI in Human-Machine Interaction
- XAI Benchmarking
- XAI Case Studies
- Counterfactual Explanations
- Ethics Discovery for Explainable AI
- Privacy-Preserving Explanations
- Transparent Classification Approaches
- Explanation, Accountability and Liability from an Ethical and Legal Perspective
- Iterative Dialogue Explanations
- Explanatory Model Analysis
- Human-Model Interfaces
- Human-Centered Artificial Intelligence
- Human-in-the-Loop Interactions
- XAI Case Studies and Applications

SUBMISSION & PUBLICATION
All contributions will be reviewed by at least three members of the Program Committee. As regards size, contributions can be up to 16 pages in LNCS format, i.e., the ECML PKDD 2022 submission format. All papers should be written in English. The following kinds of submissions will be considered: research papers, tool papers, case study papers and position papers.
Detailed information on the submission procedure is available at the workshop web page: https://kdd.isti.cnr.it/xkdd2022/
Accepted papers will be published after the workshop by Springer in a volume of Lecture Notes in Computer Science (LNCS). The condition for inclusion in the post-proceedings is that at least one of the co-authors registered to ECML-PKDD and presented the paper at the workshop. Pre-proceedings will be available online before the workshop. We also allow accepted papers to be presented without publication in the conference proceedings if the authors choose to do so. Some of the full paper submissions may be accepted as short papers after review by the Program Committee. A special issue of a relevant international journal with extended versions of selected papers is under consideration.
The submission link is: https://easychair.org/conferences/?conf=xkdd2022

IMPORTANT DATES
Paper Submission deadline: July 4, 2022
Accept/Reject Notification: July 20, 2022
Camera-ready deadline: July 31, 2022
Workshop: September 19, 2022

PROGRAM CO-CHAIRS
* Przemyslaw Biecek, Warsaw University of Technology, Poland
* Riccardo Guidotti, University of Pisa, Italy
* Francesca Naretto, Scuola Normale Superiore, Pisa, Italy
* Andreas Theissler, Aalen University of Applied Sciences, Aalen, Germany

PROGRAM COMMITTEE
* Leila Amgoud, CNRS, France
* Francesco Bodria, Scuola Normale Superiore, Italy
* Umang Bhatt, University of Cambridge, UK
* Miguel Couceiro, INRIA, France
* Menna El-Assady, AI Center of ETH, Switzerland
* Josep Domingo-Ferrer, Universitat Rovira i Virgili, Spain
* Françoise Fessant, Orange Labs, France
* Andreas Holzinger, Medical University of Graz, Austria
* Thibault Laugel, AXA, France
* Paulo Lisboa, Liverpool John Moores University, UK
* Marcin Luckner, Warsaw University of Technology, Poland
* John Mollas, Aristotle University of Thessaloniki, Greece
* Ramaravind Kommiya Mothilal, Everwell Health Solutions, India
* Amedeo Napoli, CNRS, France
* Roberto Prevete, University of Napoli, Italy
* Antonio Rago, Imperial College London, UK
* Jan Ramon, INRIA, France
* Xavier Renard, AXA, France
* Mahtab Sarvmaili, Dalhousie University, Canada
* Christin Seifert, University of Duisburg-Essen, Germany
* Udo Schlegel, Konstanz University, Germany
* Mattia Setzu, University of Pisa, Italy
* Dominik Slezak, University of Warsaw, Poland
* Fabrizio Silvestri, Università di Roma, Italy
* Francesco Spinnato, Scuola Normale Superiore, Italy
* Vicenc Torra, Umea University, Sweden
* Cagatay Turkay, University of Warwick, UK
* Marco Virgolin, Chalmers University of Technology, Netherlands
* Martin Jullum, Norwegian Computing Center, Norway
* Albrecht Zimmermann, Université de Caen, France
* Guangyi Zhang, KTH Royal Institute of Technology, Sweden

INVITED SPEAKERS
* Prof. Wojciech Samek, TU Berlin
* Prof. Anna Monreale, University of Pisa

PARTICIPATION
ECML-PKDD 2022 plans a hybrid organization for workshops. A person can therefore attend the event online, as long as she/he registers for the conference using the video conference registration fee: https://2022.ecmlpkdd.org/index.php/registration/. Please note the video conference registration fee also allows you to follow the main conference. However, interactions and discussions are much easier face-to-face. Thus, we believe it is important that speakers attend the workshop in person, and we highly encourage authors of submitted papers to plan to participate on-site at the event.

CONTACT
All inquiries should be sent to xkdd2022 at easychair.org
--
Francesca Naretto
Ph.D. student in Data Science
francesca.naretto at sns.it
SNS, Pisa | CNR, Pisa

From stephen.jose.hanson at rutgers.edu Wed Jun 15 09:28:46 2022
From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson)
Date: Wed, 15 Jun 2022 13:28:46 +0000
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID: <1e41afa0-a549-4f29-086c-2169f334b04e@rutgers.edu>

Here's a slightly better version of the SYMBOL definition from the 1980s:
(1) a set of arbitrary physical tokens (scratches on paper, holes on a tape, events in a digital computer, etc.)
that are (2) manipulated on the basis of explicit rules that are (3) likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based (4) purely on the shape of the symbol tokens (not their ?mean- ing?) i.e., it is purely syntactic, and consists of (5) rulefully combining and recombining symbol tokens. There are (6) primitive atomic sym- bol tokens and (7) composite symbol-token strings. The entire system and all its parts?the atomic tokens, the composite tokens, the syn- tactic manipulations (both actual and possible) and the rules?are all (8) semantically interpretable: The syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs). A critical part of this for learning: is as this definition implies, a key element in the acquisition of symbolic structure involves a type of independence between the task the symbols are found in and the vocabulary they represent. Fundamental to this type of independence is the ability of the learning system to factor the generic nature (or rules) of the task from the symbols, which are arbitrarily bound to the external referents of the task. Now it may be the case that a DL doing classification may be doing Categorization.. or concept learning in the sense of human concept learning.. or maybe not.. Symbol manipulations may or may not have much to do with this ... This is why, I believe Bengio is focused on this kind issue.. since there is a likely disconnect. Steve On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote: Dear all. It is indeed important to have an understanding of the term 'symbol'. I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, p 72-80): ?The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of access to the distal structure that is needed; and second, the retrieval (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74). This description fits with the idea that symbolic cognition relies on Von Neumann like architectures (e.g., Newell, Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g,, registers and transported to other sites. Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell. Symbols in the brain would require to have neural codes that can be stored somewhere and transported to other sites. This could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be. For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725). Best, Frank van der Velde ________________________________ From: Connectionists on behalf of Christos Dimitrakakis Sent: Wednesday, June 15, 2022 9:34 AM Cc: Connectionists List Subject: Re: Connectionists: The symbolist quagmire I am quite reluctant to post something, but here goes. What does a 'symbol' signify? What separates it from what is not a symbol? 
Is the output of a deterministic classifier not a type of symbol? If not, what is the difference? I can understand the label symbolic applied to certain types of methods when applied to variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (eg. The classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probablistic. However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction. On Wed, Jun 15, 2022, 08:06 Ali Minai > wrote: Hi Asim That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems. I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training. Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion. Best Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Tue, Jun 14, 2022 at 7:10 PM Asim Roy > wrote: Hi Ali, Of course the development phase is mostly unsupervised and I know there is ongoing work in that area that I don?t keep up with. On the large amount of data required to train the deep learning models: I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask where does all this data come from to train the brain? His reply was - every blink of the eye generates a datapoint. Best, Asim From: Ali Minai > Sent: Tuesday, June 14, 2022 3:43 PM To: Asim Roy > Cc: Connectionists List >; Gary Marcus >; Geoffrey Hinton >; Yoshua Bengio Subject: Re: Connectionists: The symbolist quagmire Hi Asim I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. 
The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on the substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications. Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) Major tinkering with its original principles; b) Lots of data and training; and c) Still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI. Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-). Here's the thing: Teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem but doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: Nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!) And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes. Cheers Ali Ali A. Minai, Ph.D. 
Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Tue, Jun 14, 2022 at 5:17 PM Asim Roy > wrote: Hi Ali, 1. It?s important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument. 2. You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition ? composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don?t get into language models, but I think grammar and composition can be easily taught, like you teach a kid. 3. Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven?t done it, but that could be the next level of learning. In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI. Best, Asim Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists > On Behalf Of Ali Minai Sent: Monday, June 13, 2022 10:57 PM To: Connectionists List > Subject: Re: Connectionists: The symbolist quagmire Asim This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important. Best Ali Ali A. Minai, Ph.D. Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical Engineering & Computer Science 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://eecs.ceas.uc.edu/~aminai/ On Mon, Jun 13, 2022 at 7:48 PM Asim Roy > wrote: There?s a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not. 1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) 
state that concept cells have ?meaning of a given stimulus in a manner that is invariant to different representations of that stimulus.? They associate concept cells with the properties of ?Selectivity or specificity,? ?complex concept,? ?meaning,? ?multimodal invariance? and ?abstractness.? That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there. 1. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio?s work is definitely in that direction: ?Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).? Hinton?s GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these ?concept cells? within the network (the CNN), we found that this can be done much more easily and in a straight forward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and what not. What the DARPA Explainable AI program was looking for was a symbolic-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA conceptualized symbolic model provides that protection. In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It?s over. All the best, Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) [Timeline Description automatically generated] From: Connectionists > On Behalf Of Gary Marcus Sent: Monday, June 13, 2022 5:36 AM To: Ali Minai > Cc: Connectionists List > Subject: Connectionists: The symbolist quagmire Cute phrase, but what does ?symbolist quagmire? mean? Once upon atime, Dave and Geoff were both pioneers in trying to getting symbols and neural nets to live in harmony. Don?t we still need do that, and if not, why not? 
From: Connectionists On Behalf Of Gary Marcus
Sent: Monday, June 13, 2022 5:36 AM
To: Ali Minai
Cc: Connectionists List
Subject: Connectionists: The symbolist quagmire

Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?

Surely, at the very least:
- we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)
- any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.

Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".

On Jun 13, 2022, at 00:31, Ali Minai wrote:

".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."

Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.

Best,
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783 Fax: (513) 556-7326
Email: Ali.Minai at uc.edu minaiaa at gmail.com
WWW: https://eecs.ceas.uc.edu/~aminai/

On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:

The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:
https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":
https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.

The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol simulation tricks we haven't discovered yet.
Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious. If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments on everything happening around it as well as its own thoughts, like we do."

What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.

-- Dave Touretzky

From barak at pearlmutter.net Wed Jun 15 09:04:24 2022
From: barak at pearlmutter.net (Barak A. Pearlmutter)
Date: Wed, 15 Jun 2022 14:04:24 +0100
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

In the GOFAI literature, "symbol" a la Simon and Newell basically meant the kind of thing GENSYM gives you: an atomic token that can be put in tables and such, but has no real internal structure. Even on its own terms that notion didn't entirely pass muster, because in relating it to cognition everyone implied that things like words are symbols ("GRANDMOTHER"), but words *do* have coherent meaningful internal structure: onomatopoeia, rhymes, alliteration, microfeatures, "that word sounds like it came from the French", "grand"++"mother"="grandmother", etc.

And in connectionist systems going back many decades we've had activity vectors that represent things and have microfeatures. They're in the "family trees" paper that popularized backprop and MLPs. What is "word2vec" but mapping symbol-to-symbol? What is w2v("queen")-w2v("king") but a microfeature for "female"?

So what is this "symbol" thing that seems to pop up all over the place inside our connectionist systems through training, but yet somehow we're told they don't quite have? Is it that a "symbol" has to be able to be copied without loss of fidelity? Stored on disk? Maybe you're not allowed to add or overlay them, and if you can, it's not a symbol? They're not allowed to change form when communicated? Or is it that they must be naturally representable as an English word with a cartouche around it? Do you have to be able to manipulate them using Lisp?

I would contend that this whole "symbol" and "symbol processing" business is, when you get right down to it, pretty much ungrounded.

--Barak A. Pearlmutter.
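The vector-arithmetic point above is easy to make concrete; a minimal sketch using gensim, assuming a local copy of any word2vec-format file (the filename below is a placeholder, not a requirement):

# Sketch of w2v("queen") - w2v("king") acting as a "female" microfeature.
# Assumes a pretrained word2vec file; the path here is a placeholder.
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# Adding the queen-king difference vector to "man" lands near "woman",
# which is what makes the difference look like a distributed microfeature.
delta = w2v['queen'] - w2v['king']
print(w2v.similar_by_vector(w2v['man'] + delta, topn=3))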
From federico.becattini at unifi.it Wed Jun 15 10:36:44 2022
From: federico.becattini at unifi.it (Federico Becattini)
Date: Wed, 15 Jun 2022 16:36:44 +0200
Subject: Connectionists: [CFP] 1st International Workshop and Challenge on People Analysis: From Face, Body and Fashion to 3D Virtual Avatars - International Workshop at ECCV 2022
Message-ID:

********************************
Call for Papers
"1st International Workshop and Challenge on People Analysis: From Face, Body and Fashion to 3D Virtual Avatars"
International Workshop at ECCV 2022 (1st Edition)
https://sites.google.com/view/wcpa2022/
********************************

WORKSHOP
Paper submission deadline: July 10th, 2022 (11.59 p.m. AoE)

CHALLENGE
Track 1: Multi-View Based 3D Human Body Reconstruction
Track 2: Perspective Projection Based Monocular 3D Face Reconstruction
Registration deadline: June 30, 2022

Apologies for multiple postings. Please distribute this call to interested parties.

AIMS AND SCOPE
===============
Human-centered data are extremely widespread and have been intensely investigated by researchers belonging to even very different fields, including Computer Vision, Machine Learning, and Artificial Intelligence. These research efforts are motivated by the several highly informative aspects of humans that can be investigated, ranging from corporal elements (e.g. bodies, faces, hands, anthropometric measurements) to emotions and outward appearance (e.g. human garments and accessories). The huge amount and the extreme variety of this kind of data make the analysis and the use of learning approaches extremely challenging.

In this context, several interesting problems can be addressed, such as the reliable detection and tracking of people, the estimation of body pose, and the development of new human-computer interaction paradigms based on expression and sentiment analysis. Furthermore, considering the crucial impact of human-centered technologies in many industrial application domains, the demand for accurate models able also to run on mobile and embedded solutions is constantly increasing. For instance, the analysis and manipulation of garments and accessories worn by people can play a crucial role in the fashion business. Also, human pose estimation can be used to monitor and guarantee safety between workers and industrial robotic arms.

The goal of this workshop is to improve the communication between researchers and companies and to develop novel ideas that can shape the future of this area, in terms of motivations, methodologies, prospective trends, and potential industrial applications. Finally, a consideration of the privacy issues behind the acquisition and use of human-centered data must be addressed by both academia and companies.

TOPICS
=======
The topics of interest include but are not limited to:
- Human Body
- People Detection and Tracking
- 2D/3D Human Pose Estimation
- Action and Gesture Recognition
- Anthropometric Measurements Estimation
- Gait Analysis
- Person Re-identification
- 3D Body Reconstruction
- Human Face
- Facial Landmarks Detection
- Head Pose Estimation
- Facial Expression and Emotion Recognition
- Outward Appearance and Fashion
- Garment-based Virtual Try-On
- Human-centered Image and Video Synthesis
- Generative Clothing
- Human Clothing and Attribute Recognition
- Fashion Image Manipulation
- Outfit Recommendation
- Human-centered Data
- Novel Datasets with Human Data
- Fairness and Biases in Human Analysis
- Privacy Preserving and Data Anonymization
- First Person Vision for Human Behavior Understanding
- Multimodal Data Fusion for Human Analysis
- Computational Issues in Human Analysis Architectures
- Biometrics
- Face Recognition and Verification
- Fingerprint and Iris Recognition
- Morphing Attack Detection

IMPORTANT DATES
=================
- Paper Submission Deadline: July 10th, 2022 (11.59 p.m. AoE)
- Decision to Authors: August 10th, 2022
- Camera-ready papers due: August 22nd, 2022

SUBMISSION GUIDELINES
======================
All papers should be submitted at: https://cmt3.research.microsoft.com/WCPA2022

All papers will be reviewed by at least two reviewers under a double-blind peer-review policy. Accepted submissions will be published in the ECCV 2022 Workshops proceedings. Papers must be prepared according to the ECCV guidelines. Papers are limited to 14 pages, including figures and tables, in the ECCV style. Additional pages containing only cited references are allowed. Papers that are not properly anonymized, or do not use the template, or have more than 14 pages (excluding references) will be rejected without review. Note also that the template has changed since ECCV 2020. We therefore strongly urge authors to use this new template instead of templates from older conferences.

WORKSHOP MODALITY
====================
The workshop will be held in conjunction with the European Conference on Computer Vision (ECCV 2022). The workshop will take place in an entirely virtual mode.

ORGANIZING COMMITTEE
======================
- Alberto del Bimbo, University of Florence, Italy
- Mohamed Daoudi, IMT Lille Douai, France
- Roberto Vezzani, University of Modena and Reggio Emilia, Italy
- Xavier Alameda-Pineda, INRIA Grenoble, France
- Guido Borghi, University of Bologna, Italy
- Marcella Cornia, University of Modena and Reggio Emilia, Italy
- Claudio Ferrari, University of Parma, Italy
- Federico Becattini, University of Florence, Italy
- Andrea Pilzer, NVIDIA, Italy

CHALLENGE COMMITTEE
======================
- Zhiwen Chen, Alibaba Group, China
- Xiangyu Zhu, Institute of Automation, Chinese Academy of Sciences, China
- Ye Pan, Shanghai Jiao Tong University, China
- Xiaoming Liu, Michigan State University, USA

--
Federico Becattini, Ph.D.
Università di Firenze - MICC
Tel.: +39 055 275 1394
https://www.micc.unifi.it/people/federico-becattini/
https://fedebecat.github.io/
federico.becattini at unifi.it

From ASIM.ROY at asu.edu Thu Jun 16 00:03:27 2022
From: ASIM.ROY at asu.edu (Asim Roy)
Date: Thu, 16 Jun 2022 04:03:27 +0000
Subject: Connectionists: Evidence for single-cell abstractions in the brain
In-Reply-To: References: <5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu>
Message-ID:

I am not an expert on single-cell recordings, how they are done, etc. But there is a long history of single-cell recordings, and some of those studies led to Nobel prizes. Here's a list of some from Wikipedia (Single-unit recording - Wikipedia):

* 1928: One of the earliest accounts of being able to record from the nervous system was by Edgar Adrian in his 1928 publication "The Basis of Sensation". In this, he describes his recordings of electrical discharges in single nerve fibers using a Lippmann electrometer. He won the Nobel Prize in 1932 for his work revealing the function of neurons.[11]
* 1957: John Eccles used intracellular single-unit recording to study synaptic mechanisms in motoneurons (for which he won the Nobel Prize in 1963).
* 1959: Studies by David H. Hubel and Torsten Wiesel. They used single-neuron recordings to map the visual cortex in unanesthetized, unrestrained cats using tungsten electrodes. This work won them the Nobel Prize in 1981 for information processing in the visual system.
* Moser and O'Keefe Nobel Prize (grid and place cells): The 2014 Nobel Prize in Physiology or Medicine - Press release

So you are questioning some ground-breaking work in this area. I had contact with Horace Barlow, of Cambridge University and also the great-grandson of Darwin, who passed away last year. He came to visit me in Phoenix in 2012 after a private argument with Walter Freeman, Christof Koch and probably 20 to 25 other people about the concept-cell findings by Koch's team at Caltech. Barlow is also one of the pioneers of neuroscience and of single-cell recordings:

* Barlow H. B. (1972). "Single units and sensation: A neuron doctrine for perceptual psychology?". Perception. 1 (4): 371-394. doi:10.1068/p010371. PMID 4377168. S2CID 17487970.

I will leave it at that.

All the best,
Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy
Asim Roy | iSearch (asu.edu)

From: Connectionists On Behalf Of Rothganger, Fredrick
Sent: Wednesday, June 15, 2022 4:27 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Evidence for single-cell abstractions in the brain

Asim wrote: "It's important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain."

We should be cautious about how we interpret electrophysiological data. They are extremely sparse readings taken at random locations in a complex circuit. That some subset of them correlate with certain stimuli does not necessarily make them a representation of those stimuli. See "Could a neuroscientist understand a microprocessor?": https://www.biorxiv.org/content/biorxiv/early/2016/05/26/055624.full.pdf

(This comment does not represent an opinion about the larger discussion in which Asim's statement was made.)

From fellous at arizona.edu Wed Jun 15 14:51:28 2022
From: fellous at arizona.edu (Fellous, Jean-Marc - (fellous))
Date: Wed, 15 Jun 2022 18:51:28 +0000
Subject: Connectionists: Symbols and Intelligence
Message-ID:

Thank you for such stimulating discussions! I would like to share a few thoughts...

- On symbolic thinking (or language) and intelligence. It seems to me that symbolic representation may in fact be a symptom of a lack of intelligence of sorts, emanating from our inability to communicate multi-dimensional concepts/thoughts. Language is a low-dimensional sequential tool we have developed, for lack of a better one, to transform a highly parallel, distributed and multi-dimensional pattern of neural activity (a thought) into a decimated information flow, with an enormous information loss. The recipient is left with the enormously error-prone task of re-inflating this information stream to recreate a multi-dimensional pattern in his/her own mind. This ought to be the worst way of communicating/representing information there is, but a necessary (?) one given our bodies and physical constraints. Intelligence would be if, instead of communicating 'Apple', we were able to communicate the chunk of semantic net the concept of 'Apple' is related to, in each of us.
Not to diminish in any way the need and importance of studying language and symbolic representations: why try to develop GAI from symbolic/language-type concepts that are only there because we cannot (physically) do better? Aren't we limiting ourselves right away?

- It strikes me that we may in fact be trying to overcome the limitations of language by adding symbols, using modern technologies. Specifically, the use of emojis. These new symbols may in fact fulfill our need to go beyond mono-symbolic sequential concepts and use 3D images (x, y, color) instead. Though it is still in its infancy, could this method of communication and representation of knowledge eventually overtake our word-based language? A natural evolution of sorts? Can we predict that eventually emojis will be replaced by short 5D second-long animations (x, y, color, time, sound)? Shouldn't GAI be based on these types of multi-dimensional symbols? It would be closer to human intelligence...

- And pushing the thought further: what about Mandarin or Cantonese? An average Mandarin speaker knows anywhere between 5,000 and 10,000 symbols, and very few rules (we, on the other hand, know 26 letters and hundreds of rules, e.g. phonetic, syntactic, grammatical). And 4 tones that can change profoundly the meaning of a seemingly (to us) identical sound/symbol (i.e. tone may be seen as an additional dimension in spoken Mandarin). Mandarin speakers may be symbolic thinkers, much more so than we are, it seems, and we have the same brains. Do they have a different kind of 'intelligence'? Shouldn't we spend more time and effort comparing the two systems (Western and Mandarin), from NLP down to the neural level? Shouldn't we 'get out' of the Western symbolic system to understand it? Shouldn't a true/genuine GAI work the same way across cultures/languages/symbolic systems?

Thanks, and looking forward to any feedback!
Jean-Marc

From tanya.brown at ae.mpg.de Thu Jun 16 04:27:53 2022
From: tanya.brown at ae.mpg.de (Brown, Tanya)
Date: Thu, 16 Jun 2022 08:27:53 +0000
Subject: Connectionists: Mindvoyage Lecture Series | Romain Brette
Message-ID:

The Mindvoyage lecture series features prominent scholars from different disciplines including the humanities, biology, neuroscience and physics. Talks are dedicated to engaging in discussions related to novel, distinct and often controversial topics.

The next talk will feature Romain Brette on June 30 @ 16:00 CET. All are welcome, but registration is required.

Abstract | What is "information" for an organism?

The concept of information plays a central role in theories of consciousness, and more generally cognition. Typically, a variable X in the brain is claimed to be informative if it covaries with some other variable Y in the world. This narrow view of information is misleading, because a value cannot be informative by itself: the number 100 only becomes information once I know that it refers to the number of square meters of my apartment, provided I know what square meters and apartments are. The first requirement is that information has a truth value: it can be true or false. In other words, information is propositional: "the winner of the election is Donald Trump" is information, but "Donald Trump" is not information. The second requirement is that information requires an agent with knowledge.
I will discuss the implications of these remarks for neuroscientific theories.

Registration link:
https://www.aesthetics.mpg.de/institut/veranstaltungen/mindvoyage/mindvoyage-detailansicht/article/mindvoyage-romain-brette.html

Hosted by Lucia Melloni and Tanya Brown from the Max Planck Institute for Empirical Aesthetics, on behalf of the ARC-COGITATE Consortium.

My working hours may not be yours; respond in your own time.

Tanya Brown
Scientific Coordinator | ARC-Cogitate
Max Planck Institute for Empirical Aesthetics
Grüneburgweg 14, 60322 Frankfurt am Main, Germany
tanya.brown at ae.mpg.de

From hocine.cherifi at gmail.com Thu Jun 16 03:59:47 2022
From: hocine.cherifi at gmail.com (Hocine Cherifi)
Date: Thu, 16 Jun 2022 09:59:47 +0200
Subject: Connectionists: SUBMIT YOUR WORK TO COMPLEX NETWORKS 2022 PALERMO ITALY UNTIL JUNE 20, 2022
Message-ID:

*11th International Conference on Complex Networks & Their Applications*
*Palermo, Italy* November 08 - 10, 2022
COMPLEX NETWORKS 2022

You are cordially invited to submit your contribution until *June 20, 2022* (Extended Firm Deadline).

*SPEAKERS*
- Luís A. Nunes Amaral, Northwestern University, USA
- Manuel Cebrian, Max Planck Institute for Human Development, Germany
- Shlomo Havlin, Bar-Ilan University, Israel
- Giulia Iori, City, University of London, UK
- Melanie Mitchell, Santa Fe Institute, USA
- Ricard Solé, Universitat Pompeu Fabra, Spain

*TUTORIALS (November 07, 2022)*
- Michele Coscia, IT University of Copenhagen, Denmark
- Adriana Iamnitchi, Maastricht University, Netherlands

*PUBLICATION*
Full papers (not previously published, up to 12 pages) and extended abstracts (about published or unpublished research, up to 3 pages) are welcome.
- *Papers* will be included in the conference *proceedings edited by Springer*
- *Extended abstracts* will be published in the *Book of Abstracts (with ISBN)*

Submit at https://easychair.org/conferences/?conf=complexnetworks2022

Extended versions will be invited for publication in *special issues of international journals*:
o Applied Network Science edited by Springer
o Advances in Complex Systems edited by World Scientific
o Complex Systems
o Entropy edited by MDPI
o PLOS ONE
o Social Network Analysis and Mining edited by Springer

*TOPICS*
Topics include, but are not limited to:
o Models of Complex Networks
o Structural Network Properties and Analysis
o Complex Networks and Epidemics
o Community Structure in Networks
o Community Discovery in Complex Networks
o Motif Discovery in Complex Networks
o Network Mining
o Network embedding methods
o Machine learning with graphs
o Dynamics and Evolution Patterns of Complex Networks
o Link Prediction
o Multilayer Networks
o Network Controllability
o Synchronization in Networks
o Visual Representation of Complex Networks
o Large-scale Graph Analytics
o Social Reputation, Influence, and Trust
o Information Spreading in Social Media
o Rumour and Viral Marketing in Social Networks
o Recommendation Systems and Complex Networks
o Financial and Economic Networks
o Complex Networks and Mobility
o Biological and Technological Networks
o Mobile call Networks
o Bioinformatics and Earth Sciences Applications
o Resilience and Robustness of Complex Networks
o Complex Networks for Physical Infrastructures
o Complex Networks, Smart Cities and Smart Grids
o Political networks
o Supply chain networks
o Complex networks and information systems
o Complex networks and CPS/IoT
o Graph signal processing
o Cognitive Network Science
o Network Medicine
o Network Neuroscience
o Quantifying success through network analysis
o Temporal and spatial networks
o Historical Networks

*GENERAL CHAIRS*
Hocine Cherifi (University of Burgundy, France)
Rosario N. Mantegna (University of Palermo, Italy)
Luis M. Rocha (Binghamton University, USA)

Join us at COMPLEX NETWORKS 2022 Palermo Italy

-------------------------
Hocine CHERIFI
University of Burgundy Franche-Comté
Deputy Director, LIB EA No. 7534
Editor in Chief, Applied Network Science
Editorial Board member: PLOS ONE, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems, Complexity

From info at incf.org Thu Jun 16 06:21:18 2022
From: info at incf.org (INCF)
Date: Thu, 16 Jun 2022 12:21:18 +0200
Subject: Connectionists: Community feedback requested: the Neo object model
Message-ID:

*Call for community feedback on the Neo object model for electrophysiology and optophysiology data - open until August 14, 2022*

INCF is taking a leading role in endorsing and promoting standards and best practices (SBPs) for global neuroscience, as part of our mission to promote data reuse and reproducibility in brain research. SBPs are nominated by the community and reviewed by the INCF SBP Committee according to our criteria for a FAIR community standard. If they meet our criteria, we put them up for 60 days of community review to gauge the level of user support and to identify possibilities for improvement and potential weaknesses.

For the next 60 days (closing date: August 14, 2022), we are seeking community feedback on whether we should endorse the Neo object model as a standard. Neo is an object model for handling electrophysiology data in multiple formats. It is suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. Neo has been implemented as a Python package for working with electrophysiology data, together with support for reading a wide range of neurophysiology file formats (including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon, Tdt, Igor Pro), and support for writing to a subset of these formats plus non-proprietary formats including Kwik and HDF5.

Should Neo be endorsed as a standard? Comments open until Aug 14: f1000research.com/documents/11-658
Read more about Neo: Project page | Documentation | Github | Paper
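For readers unfamiliar with the object model, a minimal sketch of what Neo's containers look like in Python (the signal here is synthetic, and the units and sampling rate are chosen arbitrarily for illustration):

# Minimal sketch, assuming the neo and quantities packages are installed;
# the data are synthetic and purely illustrative.
import numpy as np
import quantities as pq
import neo

# An AnalogSignal carries the samples plus their units and sampling rate
sig = neo.AnalogSignal(np.random.randn(1000, 1), units='mV',
                       sampling_rate=10 * pq.kHz)
seg = neo.Segment(name='trial 1')     # one recording period
seg.analogsignals.append(sig)
block = neo.Block(name='session 1')   # top-level container
block.segments.append(seg)
print(block.segments[0].analogsignals[0].sampling_rate)

The same Block/Segment/signal structure is what the format-specific readers return, which is what makes the model useful across the file formats listed above.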
From zk240 at cam.ac.uk Thu Jun 16 07:31:41 2022
From: zk240 at cam.ac.uk (Zoe Kourtzi)
Date: Thu, 16 Jun 2022 11:31:41 +0000
Subject: Connectionists: Postdoc positions at the interface of Neuroscience and Computational Science
Message-ID: <909CAF92-27C3-4DAA-B8BA-44E9CFB9FB7C@cam.ac.uk>

Two postdoctoral positions at the Adaptive Brain Lab (http://www.abg.psychol.cam.ac.uk), University of Cambridge, UK:

1. Postdoc in Network Neuroscience: focusing on understanding network dynamics for learning and brain plasticity, combining multimodal imaging and computational modelling. https://www.jobs.cam.ac.uk/job/35333/ Applications by 6th July.

2. Postdoc in Neuro-Clinical Data Science: focusing on developing and translating AI-guided tools for early detection of brain and mental health disorders. https://www.jobs.cam.ac.uk/job/35410/ Applications by 10th July.

For informal enquiries please contact Prof Zoe Kourtzi (zk240 at cam.ac.uk) with a CV and a brief statement of background skills and research interests.

From gary.marcus at nyu.edu Thu Jun 16 15:39:01 2022
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Thu, 16 Jun 2022 12:39:01 -0700
Subject: Connectionists: LeCun on Marcus
Message-ID: <759D13FB-57A5-4A6D-9A10-052817D7F841@nyu.edu>

I'll probably write a bit of a reply later, but this is an excellent new essay by Yann LeCun, quite relevant to many recent discussions here:
https://www.noemamag.com/what-ai-can-tell-us-about-intelligence

From gary.marcus at nyu.edu Thu Jun 16 10:46:24 2022
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Thu, 16 Jun 2022 07:46:24 -0700
Subject: Connectionists: The symbolist quagmire
In-Reply-To: <1e41afa0-a549-4f29-086c-2169f334b04e@rutgers.edu>
References: <1e41afa0-a549-4f29-086c-2169f334b04e@rutgers.edu>
Message-ID: <8658D252-BCDD-4BB3-B5DD-2C33C00EFF0B@nyu.edu>

My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.

If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. on or off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols. For example, the inputs and outputs (perhaps after a winner-take-all operation or some such) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick, etc.
In The Algebraic Mind, where I discussed such matters, I said that the interesting difference was really in whether a given system had operations over variables, such as those you find in algebra or lines of computer programming code, in which there are variables, bindings, and operations (such as storage, retrieval, concatenation, addition, etc.). Simple multilayer perceptrons with distributed representations (with some caveats) don't implement those operations ("rules") and so represent a genuine alternative to the standard symbol-manipulation paradigm, even though they may have symbols on their inputs and outputs. But I also argued that (at least with respect to modeling human cognition) this was to their detriment, because it kept them from freely generalizing many relations (universally quantified one-to-one mappings, such as the identity function, given certain caveats) as humans would. Essentially the point I was making in 2001 is what would nowadays be called distribution shift; the argument was that operations over variables allow for free generalization.

Transformers are interesting; I don't fully understand them. Chris Olah has done some interesting relevant work I have been meaning to dive into. They do some quasi-variable-binding-like things, but still empirically have trouble generalizing arithmetic beyond training examples, as Razeghi et al. showed on arXiv earlier this year. Still, the distinction between models like multilayer perceptrons that lack operations over variables and computer programming languages that take them for granted is crisp, and I think a better start than arguing over symbols, when no serious alternative to having at least some symbols in the loop has ever been proposed.

Side note: Geoff Hinton has said here that he doesn't like arbitrary symbols; symbols don't have to be arbitrary, even though they often are. There are probably some interesting ideas to be developed around non-arbitrary symbols and how they could be of value.

Gary
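The free-generalization point is easy to reproduce in a few lines; a minimal sketch (my construction for illustration, not the experiments from The Algebraic Mind or Razeghi et al.): train a small MLP on the identity function over [0, 1], then probe it outside that range, where a system binding a variable in the rule f(x) = x would generalize freely but the trained network typically does not.

# Minimal illustrative sketch: an MLP trained to copy inputs in [0, 1]
# usually fails to extrapolate the identity rule beyond that interval.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x_train = torch.rand(256, 1)          # identity targets inside [0, 1]
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()

x_test = torch.tensor([[0.5], [2.0], [5.0]])  # 2.0 and 5.0 lie outside training
print(net(x_test))  # close to 0.5 in range; typically far from 2.0 and 5.0

The saturating hidden units fit the training interval rather than the universally quantified rule; a programming-language variable, by contrast, extends the mapping to any value it can bind.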
> On Jun 15, 2022, at 06:48, Stephen Jose Hanson wrote:
>
> Here's a slightly better version of a SYMBOL definition from the 1980s:
>
> (1) a set of arbitrary physical tokens (scratches on paper, holes on a tape, events in a digital computer, etc.) that are (2) manipulated on the basis of explicit rules that are (3) likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based (4) purely on the shape of the symbol tokens (not their "meaning"), i.e., it is purely syntactic, and consists of (5) rulefully combining and recombining symbol tokens. There are (6) primitive atomic symbol tokens and (7) composite symbol-token strings. The entire system and all its parts - the atomic tokens, the composite tokens, the syntactic manipulations (both actual and possible) and the rules - are all (8) semantically interpretable: the syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs).
>
> A critical part of this for learning is that, as this definition implies, a key element in the acquisition of symbolic structure involves a type of independence between the task the symbols are found in and the vocabulary they represent. Fundamental to this type of independence is the ability of the learning system to factor the generic nature (or rules) of the task from the symbols, which are arbitrarily bound to the external referents of the task.
>
> Now it may be the case that a DL system doing classification is doing categorization... or concept learning in the sense of human concept learning... or maybe not. Symbol manipulations may or may not have much to do with this...
>
> This is why, I believe, Bengio is focused on this kind of issue, since there is a likely disconnect.
>
> Steve
>
> On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote:
>> Dear all,
>>
>> It is indeed important to have an understanding of the term 'symbol'.
>>
>> I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, pp. 72-80):
>> "The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of access to the distal structure that is needed; and second, the retrieval (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74).
>>
>> This description fits with the idea that symbolic cognition relies on Von Neumann-like architectures (e.g., Newell; Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
>>
>> Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell.
>>
>> Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites. These could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be.
>>
>> For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725).
>>
>> Best,
>> Frank van der Velde
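Newell's two-phase description quoted above (access, then retrieval of distal structure) is concrete enough to gloss in a few lines; a toy sketch of the idea, with invented names like distal_memory (my illustration, not Newell's formalism):

# Toy gloss on the Newell quote above: a symbol token is a local handle
# that (1) opens access to distal structure and (2) transports it to the
# local site, where it can affect processing. Names here are invented.
distal_memory = {"APPLE": {"is_a": "fruit", "color": "red"}}

def access_and_retrieve(token):
    structure = distal_memory[token]   # phase 1: open access via the token
    return dict(structure)             # phase 2: transport a copy locally

local_site = access_and_retrieve("APPLE")
print(local_site["is_a"])  # the distal structure now drives local processing

On this reading, the token "APPLE" is storable and transportable independently of what it denotes, which is exactly the property van der Velde argues a grandmother neuron lacks.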
>> From: Connectionists on behalf of Christos Dimitrakakis
>> Sent: Wednesday, June 15, 2022 9:34 AM
>> Cc: Connectionists List
>> Subject: Re: Connectionists: The symbolist quagmire
>>
>> I am quite reluctant to post something, but here goes.
>>
>> What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?
>>
>> I can understand the label "symbolic" applied to certain types of methods when applied to variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g. the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic.
>>
>> However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.
>>
>> On Wed, Jun 15, 2022, 08:06 Ali Minai wrote:
>>
>> Hi Asim,
>>
>> That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training.
>>
>> Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.
>>
>> Best,
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall
>> University of Cincinnati
>> Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu
>> minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>> On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote:
>>
>> Hi Ali,
>>
>> Of course the development phase is mostly unsupervised, and I know there is ongoing work in that area that I don't keep up with.
>>
>> On the large amount of data required to train the deep learning models:
>>
>> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask where all this data comes from to train the brain. His reply was: every blink of the eye generates a datapoint.
>>
>> Best,
>> Asim
>>
>> From: Ali Minai
>> Sent: Tuesday, June 14, 2022 3:43 PM
>> To: Asim Roy
>> Cc: Connectionists List; Gary Marcus; Geoffrey Hinton; Yoshua Bengio
>> Subject: Re: Connectionists: The symbolist quagmire
>>
>> Hi Asim,
>>
>> I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on the substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications.
>>
>> Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) major tinkering with its original principles; b) lots of data and training; and c) still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI.
>> Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-).
>>
>> Here's the thing: teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem, but it doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!). And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes.
>>
>> Cheers,
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall
>> University of Cincinnati
>> Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu
>> minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
From Donald.Adjeroh at mail.wvu.edu Thu Jun 16 14:13:37 2022
From: Donald.Adjeroh at mail.wvu.edu (Donald Adjeroh)
Date: Thu, 16 Jun 2022 18:13:37 +0000
Subject: Connectionists: Deadline approaching -- SBP-BRiMS'2022: Social Computing, Behavior-Cultural Modeling, Prediction and Simulation
Message-ID:

Apologies if you receive multiple copies.

SBP-BRiMS 2022
2022 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation
September 20-23, 2022
Will be held in hybrid mode (virtually and in person in Pittsburgh, USA)
http://sbp-brims.org/ #sbpbrims

The goal of this conference is to build a new community of social cyber scholars by bringing together and fostering interaction between members of the scientific, corporate, government and military communities interested in understanding, forecasting, and impacting human socio-cultural behavior. It is the charge to this community to build this new science, its theories, methods, and its scientific culture in a way that does not give priority to either social science or computer science, and to embrace change as the cornerstone of the community. Despite decades of work in this area, this scientific field is still in its infancy. To meet this charge and move this science to the next level, this community must meet the following three challenges: 1) deep understanding of socio-cognitive reasoning, 2) human-technology integration, and 3) re-usable computational methods.

Topics include but are not limited to the following:
- Social Cybersecurity
- Social Network Modeling
- Human Behavior Modeling
- Agent-Based Models
- Models of Human-Autonomy Interaction
- Health and Epidemiological Models
- Validation Methods and Human Experimentation

All papers are qualified for the Best Paper Award. Papers with student first authors will be considered for the Best Student Paper Award.

See also the special Call for Panels at SBP-BRiMS'22: http://sbp-brims.org/2022/Call%20For%20Panels/

IMPORTANT DATES:
Paper/Abstract Submission: 01-Jul-2022 (Midnight EST)
Author Notification: 29-Jul-2022
Panel proposals due: 01-Jul-2022
Panel Notification: 29-Jul-2022
Tutorial Submission: 22-Aug-2022
Decision Notification: 29-Aug-2022
Challenge Response due: 22-Aug-2022
Challenge Notification: 29-Aug-2022
Final Files due: 15-Aug-2022

HOW TO SUBMIT:
For information on paper submission, check here. You will be able to update your submission until the final paper deadline.

PAPER FORMATTING GUIDELINES:
Papers must be in English and MUST be formatted according to the Springer-Verlag LNCS/LNAI guidelines. View sample LaTeX2e and WORD files. All regular paper submissions should be submitted as a paper with a maximum of 10 pages. Total page count includes all figures, tables, and references.

CHALLENGE PROBLEM:
The conference expects to announce a computational challenge as in previous years. Additional details will be posted in December. Follow us on Facebook, Twitter and LinkedIn to receive updates.

PRE-CONFERENCE TUTORIAL SESSIONS:
Several half-day sessions will be offered on the day before the full conference.
More details regarding the preconference tutorial sessions will be posted as soon as this information becomes available.

FUNDING PANEL & CROSS-FERTILIZATION ROUNDTABLES: The purpose of the cross-fertilization roundtables is to help participants become better acquainted with people outside of their discipline and with whom they might consider partnering on future SBP-BRiMS related research collaborations. The Funding Panel provides an opportunity for conference participants to interact with program managers from various federal funding agencies, such as the National Science Foundation (NSF), National Institutes of Health (NIH), Office of Naval Research (ONR), Air Force Office of Scientific Research (AFOSR), Defense Threat Reduction Agency (DTRA), Defense Advanced Research Projects Agency (DARPA), Army Research Office (ARO), National Geospatial Intelligence Agency (NGA), and the Department of Veterans Affairs (VA).

ATTENDANCE SCHOLARSHIPS: It is anticipated that a limited number of attendance scholarships will be available on a competitive basis to students who are presenting papers. Additional information will be provided soon. Follow us on Facebook, Twitter and LinkedIn to receive updates.

Visit our website: http://sbp-brims.org/
Download the Call for Papers in PDF format here.

From timofte.radu at gmail.com Thu Jun 16 14:13:18 2022 From: timofte.radu at gmail.com (Radu Timofte) Date: Thu, 16 Jun 2022 20:13:18 +0200 Subject: Connectionists: Open Positions for Doctoral and PostDoctoral Researchers in AI, Computer Vision and Machine Learning Message-ID:

*PostDoctoral and Doctoral Researcher Open Positions in Artificial Intelligence, Computer Vision, and Machine Learning*

(Apologies for cross-postings.)

The Computer Vision Laboratory led by *Prof. Dr. Radu Timofte*, from the newly established *Center for Artificial Intelligence and Data Science, University of Würzburg*, is looking for outstanding candidates to fill several fully funded postdoctoral and doctoral researcher positions in the AI, computer vision, and machine learning fields.

*Julius Maximilians University of Würzburg (JMU)*, founded in 1402, is one of the leading institutions of higher education in Germany and well known on the international stage for delivering research excellence with a global impact. The University of Würzburg is proud to be the home of outstanding researchers and fourteen Nobel Prize Laureates. Würzburg is a vibrant city in Bavaria, Germany's economically strongest state and home base to many international companies. We look forward to welcoming you to the University of Würzburg!

The *Computer Vision Laboratory* and the University of Würzburg in general are an exciting environment for research and independent thinking. Prof. Radu Timofte's team is highly international, with people from about 12 countries, and its members have already won awards at top conferences (ICCV, CVPR, ICRA, NeurIPS, ...), founded successful spinoffs, and/or collaborated with industry. Prof. Radu Timofte is *a 2022 winner of the prestigious Humboldt Professorship for Artificial Intelligence Award*. He also leads the *Augmented Perception Group* at ETH Zurich.

Depending on the position, the successful candidate will focus on a subset of the following *Research Topics*:
- deep learning
- computational photography
- domain translation
- learned image/video compression
- image/video super-resolution
- learning paradigms
- 3D
- image/video understanding
- augmented and mixed reality
- edge inference and mobile AI
- super-resolution microscopy

*The tasks* will involve designing, developing, and testing novel ideas and solutions in cutting-edge research, as well as coordinating and conducting data collection for their evaluation when necessary. The successful candidate will conduct research on deep learning machines and a new cluster with hundreds of GPUs.

*Profile*
- Master's degree in AI, computer science, electrical engineering, physics or applied mathematics/statistics.
- Good programming skills, experience with Python / C++ and deep learning frameworks (PyTorch/TensorFlow).
- Interest, prior knowledge and experience in one or more of the following is a plus: computer vision, deep learning, machine learning, image processing, artificial intelligence.
- Enthusiasm for leading-edge research, team spirit, and capability of independent problem-solving.
- Fluent written and spoken English is a must.
- Postdoctoral applicants are expected to have a strong track record of published research, including papers in top, high-impact journals (such as PAMI, IJCV, TIP, NEUCOM, JMLR, CVIU) or conferences (such as ICCV, CVPR, ECCV, ICRA, NeurIPS, ICLR, AAAI).

*Timeline*
The positions are open immediately and fully funded; the salaries of the doctoral students and postdocs are competitive on the German scales TV-L E13 and E14, up to 70k euros per year before tax. Typically a PhD takes ~4 years to complete, and a postdoc position is for at least 1 year. The applications received by 15.07.2022 will be reviewed by 31.07.2022. Only the selected applicants will be contacted by email for interviews. After 15.07.2022 the applications will be reviewed on a rolling basis until all positions are filled.

*Application*
Interested applicants should email their PDF documents as soon as possible (including full CV, motivation letter, diplomas, transcripts of records, links to master or PhD thesis, referees / recommendation letters, etc.) to Prof. Dr. Radu Timofte at *radu.timofte at uni-wuerzburg.de* or *radu.timofte at vision.ee.ethz.ch*

From peter at helfer.ca Fri Jun 17 03:42:58 2022 From: peter at helfer.ca (Peter Helfer) Date: Fri, 17 Jun 2022 09:42:58 +0200 Subject: Connectionists: Symbols and Intelligence In-Reply-To: References: Message-ID:

Jean-Marc,

Interesting point about emojis.

- It's worth remembering, though, that the emoticon, ancestor of the emoji, was invented precisely to restore to impoverished written communication something of the richness of in-person linguistic exchanges, which are far from one-dimensional. Not only are gestures and other body language missing from written language, but also prosody, which carries a wealth of information. Interestingly, it has been suggested that tonal languages, because they use tone to distinguish between words, have less "bandwidth" for the type of extra-syntactic/extra-lexicographic information that is carried by intonation in other languages. So there would be a trade-off between expressiveness and efficiency of communicating lexemes.

- In any case, one has to marvel at the richness of meaning and imagery that can be communicated even in terse writing by a skilled writer or poet. Clearly, we have found other means than copying large chunks of semantic nets to communicate whole universes of meaning by invoking information that is already present in the recipient's mind.
Didn't Turing allude to this when he suggested using "Shall I compare thee to a summer's day" for the imitation game? In comparison, the idea of needing to copy large volumes of semantic information in order to communicate feels wrong in the same way that requiring billions of examples to learn to answer simple questions doesn't seem to be the way forward.

On Wed, Jun 15, 2022, 20:51 Fellous, Jean-Marc - (fellous), <fellous at arizona.edu> wrote:

> Thank you for such stimulating discussions! I would like to share a few thoughts...
>
> - On symbolic thinking (or language) and intelligence. It seems to me that symbolic representation may in fact be a symptom of a lack of intelligence of sorts, emanating from our inability to communicate multi-dimensional concepts/thoughts. Language is a low-dimensional sequential tool we have developed, for lack of a better one, to transform a highly parallel, distributed and multi-dimensional pattern of neural activity (a thought) into a decimated information flow, with an enormous information loss. The recipient is left with the enormously error-prone task of re-inflating this information stream and recreating a multi-dimensional pattern in his/her own mind. This ought to be the worst way of communicating/representing information there is, but a necessary (?) one given our bodies and physical constraints. Intelligence would be if, instead of communicating 'Apple', we were able to communicate the chunk of semantic net the concept of 'Apple' was related to, in each of us. Not to diminish in any way the need and importance of studying language and symbolic representations, but why try to develop GAI from symbolic/language-type concepts that are only there because we cannot (physically) do better? Aren't we limiting ourselves right away?
>
> - It strikes me that we may in fact be trying to overcome the language limitations by in fact adding symbols, using modern technologies. Specifically, the use of emojis. These new symbols may in fact fulfill our need to go beyond mono-symbolic sequential concepts and use 3D images (x, y, color) instead. Though it is still in its infancy, could this method of communication and representation of knowledge eventually overtake our word-based language? A natural evolution of sorts? Can we predict that eventually emojis will be replaced by short 5D second-long animations (x, y, color, time, sound)? Shouldn't GAI be based on these types of multi-dimensional symbols? It would be closer to human intelligence...
>
> - And pushing the thought further: what about Mandarin or Cantonese? An average Mandarin speaker knows anywhere between 5,000 and 10,000 symbols, and very few rules (we, on the other hand, know 26 letters and 100's of rules (e.g. phonetic, syntactic, grammatical)). And 4 tones that can change profoundly the meaning of a seemingly (to us) identical sound/symbol (i.e. tone may be seen as an additional dimension in spoken Mandarin). Mandarin speakers may be symbolic thinkers, much more so than we are, it seems, and we have the same brains. Do they have a different kind of 'intelligence'? Shouldn't we spend more time and effort comparing the 2 systems (Western and Mandarin), from NLP down to the neural level? Shouldn't we 'get out' of the Western symbolic system to understand it? Shouldn't a true/genuine GAI work the same way across cultures/languages/symbolic systems?
>
> Thanks, and looking forward to any feedback!
> Jean-Marc

From daniele.marinazzo at gmail.com Fri Jun 17 04:50:31 2022 From: daniele.marinazzo at gmail.com (Daniele Marinazzo) Date: Fri, 17 Jun 2022 10:50:31 +0200 Subject: Connectionists: Network Neuroscience Satellite 2022: Call for contributions Message-ID:

Dear Colleagues,

We are excited to invite your participation in Network Neuroscience 2022. Submit your talk or poster via EasyChair (link below) by June 23rd, 11:59pm AOE. Network Neuroscience 2022 is a satellite affiliated with the NetSci conference and will be held online July 11th-12th.

Themes in the Network Neuroscience remit include, but are not limited to: (i) Interactome networks; (ii) Transcriptional and gene regulation networks; (iii) Structural brain networks (imaging); (iv) Functional brain networks (imaging); (v) Brain networks - theory, modeling and analysis; (vi) Signal processing and information flow; (vii) Circuit dynamics; (viii) Brain-behavior interactions; (ix) Systems neuroscience. All themes apply to any species. Check our website for updates to the schedule.

Network Neuroscience 2022: https://networkneuroscience.github.io
Submit via EasyChair: https://easychair.org/conferences/?conf=nn2022
NetSci 2022: https://netsci2022.net

We look forward to seeing you in July!
Network Neuroscience Organizing Committee

From sebastien.destercke at hds.utc.fr Fri Jun 17 05:55:55 2022 From: sebastien.destercke at hds.utc.fr (Sébastien Destercke) Date: Fri, 17 Jun 2022 11:55:55 +0200 Subject: Connectionists: Second SIPTA seminar: 29th of June, at 15:00 CEST, Ruobin Gong talking about "Imprecise probabilities in modern data science: challenges and opportunities" Message-ID:

*** please disseminate to whomever may be interested ***

Dear colleagues,

We are delighted to announce our second SIPTA online seminar on imprecise probabilities (IP). These monthly events are open to anyone interested in IP, and will be followed by a Q&A and open discussion. They also provide an occasion for the community to meet, keep in touch and exchange between in-person events.

For this second seminar, we are very happy to have Ruobin Gong as our speaker. After a PhD at Harvard, she is now an assistant professor at Rutgers, working at the crossroads of imprecise probabilities, statistics and data science. On the 29th of June, at 15:00 CEST (up to 17:00 CEST, with a talk duration of 45min/1h), she will talk about "Imprecise probabilities in modern data science: challenges and opportunities". Curious? Then check out the abstract on the webpage of the SIPTA seminars: sipta.org/events/sipta-seminars. The Zoom link for attending the seminar will appear on that same page shortly before the event.

So please mark your calendars on the 29th of June, 15:00 CEST, and join us for the occasion. And for those who missed the previous seminar and want to catch up, or simply want to see it again and again, it is now online at https://www.youtube.com/channel/UCPER8Dfil66KZCYlsK86XCQ/featured.

See you at the seminar!
S?bastien Destercke, Enrique Miranda and Jasper De Bock From ksharma.raj at gmail.com Fri Jun 17 09:26:45 2022 From: ksharma.raj at gmail.com (Raj Sharma) Date: Fri, 17 Jun 2022 18:56:45 +0530 Subject: Connectionists: ACML 2022 -- Third Call for Papers [Submission deadline: 23rd June] Message-ID: **apologies if you have received multiple copies of this email* ------------------------------------------------------------------------------ *ACML 2022* The 14th Asian Conference on Machine Learning Hyderabad, India December 14-16, 2022 https://www.acml-conf.org/2022/ ------------------------------------------------------------------------------ *CALL FOR PAPERS* The 14th Asian Conference on Machine Learning (ACML 2022) will take place between December 14-16, 2022 at Hyderabad, India. The conference aims to provide a leading international forum for researchers in machine learning and related fields to share their new ideas, progress and achievements. While the main conference paper presentations will remain virtual to encourage widespread participation in current times, the conference will also have physical components to allow in-person interaction for those who can attend. The conference calls for high-quality, original research papers in the theory and practice of machine learning. The conference also solicits proposals focusing on frontier research, new ideas and paradigms in machine learning. We encourage submissions from all parts of the world, not only confined to the Asia-Pacific region. The conference is closed for the journal track but accepting submissions in the conference track. - *conference track* (16-page limit with references), for which the proceedings will be published as a volume of Proceedings of Machine Learning Research Workshop and Conference Proceedings (PMLR) Please refer to http://www.acml-conf.org/2022/ for more details. Instructions for submission and LaTeX templates will be available soon (at least one month before the first deadline). 
*IMPORTANT DATES * (subject to minor changes in case there are conflicts with timelines of other major ML conferences) Conference Track - 23 Jun 2022 Submission deadline - 11 Aug 2022 Reviews released to authors - 18 Aug 2022 Author rebuttal deadline - 08 Sep 2022 Acceptance notification - 29 Sep 2022 Camera-ready submission deadline *TOPICS OF INTEREST include but are not limited to:* General machine learning - Active learning - Dimensionality reduction - Feature selection - Graphical models - Imitation Learning - Latent variable models - Learning for big data - Learning from noisy supervision - Learning in graphs - Multi-objective learning - Multiple instance learning - Multi-task learning - Online learning - Optimization - Reinforcement learning - Relational learning - Semi-supervised learning - Sparse learning - Structured output learning - Supervised learning - Transfer learning - Unsupervised learning - Other machine learning methodologies Deep learning - Attention mechanism and transformers - Deep learning theory - Generative models - Deep reinforcement learning - Architectures - Other topics in deep learning Probabilistic Methods - Bayesian machine learning - Graphical models - Variational inference - Gaussian processes - Monte Carlo methods Theory - Computational learning theory - Optimization (convex, non-convex) - Bandits - Game theory - Matrix/Tensor methods - Statistical learning theory - Other theories Datasets and Reproducibility - ML datasets and benchmarks - Implementations, libraries - Other topics in reproducible ML research - Trustworthy Machine Learning - Accountability/Explainability/Transparency - Causality - Fairness - Privacy - Robustness - Other topics in trustworthy ML Applications - Bioinformatics - Biomedical informatics - Collaborative filtering - Computer vision - COVID-19 related research - Healthcare - Human activity recognition - Information retrieval - Natural language processing - Social networks - Web search - Climate science - Social good - Other applications *OAMLS @ ACML*: Besides a program of tutorials and workshops, this year we will continue the Online Asian Machine Learning School (OAMLS) as part of ACML (dates to be finalized, likely to be around the ACML conference dates, held virtually). OAMLS aims to help prepare the next generation of machine learning researchers and practitioners by providing them with knowledge of machine learning fundamentals as well as state-of-the-art advances. It focuses on participants in the Asia-Pacific region; the virtual format, supported by ever-improving communication technologies, allows affordable participation from students and practitioners from a large part of the region, including those from under-represented areas, who may otherwise be unable to afford travel to a physical international school. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danny.silver at acadiau.ca Fri Jun 17 10:14:45 2022 From: danny.silver at acadiau.ca (Danny Silver) Date: Fri, 17 Jun 2022 14:14:45 +0000 Subject: Connectionists: LeCun on Marcus In-Reply-To: <759D13FB-57A5-4A6D-9A10-052817D7F841@nyu.edu> References: <759D13FB-57A5-4A6D-9A10-052817D7F841@nyu.edu> Message-ID: Dear Dr. Marcus .. Thank you for forwarding the link of Dr. LeCun?s article of June 16 to the connectionists list. I read this with great interest and found it falls very much in line with my thoughts over the last few years. 
My sense is that this article will prove pivotal in the debate over the role that symbols have played in human intelligence and their place in AI systems of the future. For over 30 years, many of us working in machine learning, and more specifically neural networks, have felt that symbols play more of a peripheral role in intelligence. And of late - because of work in deep networks, autoencoders, lifelong machine learning, and transformer networks that develop structured vectorial embeddings of concepts (particularly from unsupervised examples) and without the a priori need for symbols - new theories have arisen of how vectorial representation and symbolic representation intertwine.

My perspective on symbols has become roughly as follows: Symbols are external communication tools used between intelligent agents that allow knowledge to be transferred in a more efficient and effective manner than having to experience the world directly. They are also used internally within an agent through a form of self-communication to help formulate and justify decision making. Therefore, symbols are critical to intelligence NOT because they are the building blocks of thought, but because they are characterizations of thought that act as constraints on learning about the world. Please let me explain.

The ability of one agent to inform another agent that the object they are seeing 'can be eaten' or 'can eat them' is a powerful evolutionary pressure. So one can see why the basic use of symbols (crows cry to each other, wolves howl) has evolved in so many species. Humans and several other species have turned it up a notch or two. So I agree with Dr. LeCun when he writes "the machine can learn to manipulate symbols in the world, despite not having hand-crafted symbols and symbolic manipulation rules built in" and "This treats symbols and symbolic manipulations as primarily cultural inventions, dependent less on hard wiring in the brain and more on the increasing sophistication of our social lives." Similarly, much of human reasoning (perhaps the most important of human decision making, such as "should I start that business?" or "should I marry that man?") is not built on a symbol-manipulating mechanism.

However - and this is where my perspective may differ from or add to that of Dr. LeCun - symbols have come to play a very important role in formulating and justifying human decision making, to the point where they are critical to the process of intelligent behaviour. And this is because something very special happens when you put together the following two components or subsystems: one that learns to sense the world and act appropriately, and another that learns to sense its own internal representations and relate them to shared symbols. I suspect these components can be set up to beneficially constrain each other.

The ability to sense the world and react to it appropriately (so as to survive) can be done without being able to externally or internally manipulate symbols (at least in a complex manner). Many simple creatures do this. However, if an agent has the capacity to recognize internal vectorial concepts (neuron activity), label them with symbols, and use those symbols to communicate, then perhaps some marvelous things happen (the agent not only survives, it thrives):

* First and foremost, symbols provide the means by which knowledge within the nervous system of one agent can be used to inform the nervous system of another agent, without that second agent expending energy in the real world or risking its life or limb. This may be one reason why we are the dominant species on the planet.

* Second, and perhaps of greater importance, the ability to relate internal neural activity with external symbols and vice versa provides a mechanism by which an agent can consciously work to form and justify a decision. This form of internal agent self-communication, using the same symbols meant for agent-to-agent communication, has become key to human intelligence because it places an additional constraint on learning. What we learn is forced to fit into the "lexicon" of what we recognize as symbols.

What I am suggesting is, from an evolutionary perspective: (1) the need to more effectively and efficiently learn about the world and act appropriately led to the ability to recognize internal neural states and relate those to shared symbols, and (2) the development of shared symbols and language provided an additional and beneficial constraint on learning about the world.

Respectfully, Danny Silver

==========================
Daniel L. Silver
Professor, Jodrey School of Computer Science
Director, Acadia Institute for Data Analytics
Acadia University, Office 314, Carnegie Hall, Wolfville, Nova Scotia Canada B4P 2R6
Cell: (902) 679-9315
acadiau.ca

From: Connectionists on behalf of Gary Marcus Date: Friday, June 17, 2022 at 3:34 AM To: Connectionists List Subject: Connectionists: LeCun on Marcus

I'll probably write a bit of reply later, but this is an excellent new essay by Yann LeCun, quite relevant to many recent discussions here:

https://www.noemamag.com/what-ai-can-tell-us-about-intelligence

From marius.bilasco at univ-lille.fr Fri Jun 17 10:50:19 2022 From: marius.bilasco at univ-lille.fr (Ioan Marius BILASCO) Date: Fri, 17 Jun 2022 16:50:19 +0200 Subject: Connectionists: Postdoctoral Researcher position in ML Security - applications to Computer Vision Message-ID:

We are looking for a highly motivated candidate for a two-year postdoc position on the security and privacy of machine learning architectures, especially in the context of computer vision and video-surveillance. The research will be conducted within a collaborative and highly stimulating environment. The candidate will be working with Ihsen Alouani at the IEMN-CNRS Lab, Polytechnic University Hauts-de-France (https://www.uphf.fr/DOAE/) in Valenciennes, France, and Ioan Marius Bilasco at University of Lille - CRIStAL lab (https://www.cristal.univ-lille.fr/), France.
The candidate will be recruited by UPHF and their main residence will be in Valenciennes, but technically he/she will be working within the two institutions. Polytechnic University Hauts-de-France (UPHF) in Valenciennes, and more specifically the IEMN Lab (Institut d'Electronique, Micro-electronique et Nanotechnologie, https://www.iemn.fr/), are located on Campus Mont-Houy in an international and friendly environment: https://www.youtube.com/watch?v=kVG_AcGBxvk&ab_channel=UPHFOfficiel

== REQUIREMENTS and Expected Qualifications:
- PhD in Computer Science, Statistics, or Applied Mathematics, preferably with a background in machine learning, deep learning, computer vision, and similar topics
- A background in cybersecurity is a plus
- Ability to work in a collaborative environment
- Fluency in English, both written and spoken

== APPLICATION: Please address an email with the subject "[MLSecCV] application" to: ihsen.alouani at uphf.fr, marius.bilasco at univ-lille.fr enclosing: a) your CV; b) a list of publications; c) at least two reference letters. The position is expected to start at the beginning of September 2022.

From stdm at zhaw.ch Fri Jun 17 13:44:00 2022 From: stdm at zhaw.ch (Stadelmann Thilo (stdm)) Date: Fri, 17 Jun 2022 17:44:00 +0000 Subject: Connectionists: LeCun on Marcus (Gary Marcus) Message-ID:

Interesting article indeed, thank you for a good read. Regarding symbol manipulation in neural networks (biological and artificial), the recent proposal of von der Malsburg et al. (see preprint: https://arxiv.org/abs/2205.00002) might give a revealing answer as to how this might be implemented (and: learnt) using NN architecture (specifically, an artificial one). The basic idea is to assume self-organization of stimuli-specific net fragments as the inductive bias that guides such learning (making such net fragments, in terms of the previous discussion, "symbols", or a code, that is compositional and thus generalizes well). Quoting the abstract:

Introduction: In contrast to current AI technology, natural intelligence - the kind of autonomous intelligence that is realized in the brains of animals and humans to attain in their natural environment goals defined by a repertoire of innate behavioral schemata - is far superior in terms of learning speed, generalization capabilities, autonomy and creativity. How are these strengths, by what means are ideas and imagination produced in natural neural networks? Methods: Reviewing the literature, we put forward the argument that both our natural environment and the brain are of low complexity, that is, require for their generation very little information and are consequently both highly structured. We further argue that the structures of brain and natural environment are closely related. Results: We propose that the structural regularity of the brain takes the form of net fragments (self-organized network patterns) and that these serve as the powerful inductive bias that enables the brain to learn quickly, generalize from few examples and bridge the gap between abstractly defined general goals and concrete situations. Conclusions: Our results have important bearings on open problems in artificial neural network research.

Best, Thilo

-----Original Message-----
Message: 2 Date: Thu, 16 Jun 2022 12:39:01 -0700 From: Gary Marcus To: Connectionists List Subject: Connectionists: LeCun on Marcus Message-ID: <759D13FB-57A5-4A6D-9A10-052817D7F841 at nyu.edu> Content-Type: text/plain; charset="utf-8"

I'll probably write a bit of reply later, but this is an excellent new essay by Yann LeCun, quite relevant to many recent discussions here: https://www.noemamag.com/what-ai-can-tell-us-about-intelligence

From battleday at princeton.edu Fri Jun 17 14:27:08 2022 From: battleday at princeton.edu (Ruairidh McLennan Battleday) Date: Fri, 17 Jun 2022 11:27:08 -0700 Subject: Connectionists: Call for Papers: Mathematics of Neuroscience Symposium, Crete, Greece, 24-25th September 2022 Message-ID:

------------------
Symposium on the Mathematics of Neuroscience, Crete, Greece, 24-25th September 2022 (www.neuromonster.org).

Two decades into the 21st century, can we claim to be any closer to a unified model of the brain? In this exploratory symposium, we invite submissions for short talks and posters presenting general mathematical models of brain function. We give priority to those models that account for brain or behavioural data, or provide simulations to that effect. This year's theme is life-long learning and discovery.

Keynote Speakers
Professor Peter Dayan (Director, Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen): "Learning from scratch: Non-parametric models of task acquisition over the long run"
Professor Andrew Adamatzky (Director, Unconventional Computing Laboratory, University of the West of England): "Fungal Brain"

Symposium Chairs
Professor Dan V. Nicolau (King's College London)
Dr Ruairidh McLennan Battleday (Princeton University)

Invited Talks
Professor Kobi Kremnitzer (University of Oxford)
Professor Marc Howard (Boston University)
Professor Kevin Burrage (Queensland University of Technology)
Professor Rahul Bhui (MIT)
Dr Jonathan Mason (University of Oxford)
Dr James Whittington (University of Oxford / Stanford)
Dr Ilia Sucholutsky (Princeton University)
Dr Sophia Sanborn (UC Berkeley, UC Santa Barbara, University of British Columbia)
Dr Christina Merrick (UC San Francisco)
Dr Timothy Muller (UCL)

Prize Talks
Dr Aenne Brielmann (Max Planck Institute, Tübingen)
Andrew Ligeralde (Redwood Center for Theoretical Neuroscience, University of California, Berkeley)

The symposium will be held virtually or in person on the island of Crete, Greece, from the 24-25th of September 2022 (www.neuromonster.org). Submission is by 250-word abstract before the 23rd July 2022, emailed to the organizers Professor Dan V. Nicolau Jr (dan.nicolau at kcl.ac.uk) and Dr Ruairidh M. Battleday (battleday at princeton.edu).
------------------

From mitsu at well.com Fri Jun 17 11:46:37 2022 From: mitsu at well.com (Mitsu Hadeishi) Date: Fri, 17 Jun 2022 08:46:37 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: <8658D252-BCDD-4BB3-B5DD-2C33C00EFF0B@nyu.edu> References: <1e41afa0-a549-4f29-086c-2169f334b04e@rutgers.edu> <8658D252-BCDD-4BB3-B5DD-2C33C00EFF0B@nyu.edu> Message-ID:

What do you make of the fact that GPT-3 can be trained to code fairly complex examples?
For instance, I read that one person described a relatively involved browser video game in plain English and Codex (a coding-optimized version of GPT-3) generated a relatively large amount of JavaScript that correctly solved the problem: the code actually runs and produces an interactive game that runs in a browser.

Although its generalization of arithmetic is apparently somewhat fuzzy, it seems to me that being able to accomplish something like this is pretty strong evidence it is able to do some level of variable binding and symbolic manipulation in some sense.

On Thu, Jun 16, 2022 at 11:42 PM Gary Marcus wrote:

> My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.
>
> - If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. On or Off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols. For example, the inputs and outputs (perhaps after a winner-take-all operation or somesuch) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick, etc.
> - In The Algebraic Mind, where I discussed such matters, I said that the interesting difference was really in whether a given system had *operations over variables*, such as those you find in algebra or lines of computer programming code, in which there are variables, bindings, and operations (such as storage, retrieval, concatenation, addition, etc.).
> - Simple multilayer perceptrons with distributed representations (with some caveats) don't implement those operations ("rules") and so represent a *genuine alternative to the standard symbol-manipulation paradigm, even though they may have symbols on their inputs and outputs.*
> - But I also argued that (at least with respect to modeling human cognition) this was to their detriment, because it kept them from freely generalizing many relations (universally quantified one-to-one mappings, such as the identity function, given certain caveats) as humans would. Essentially the point I was making in 2001 is what would nowadays be called distribution shift; the argument was that *operations over variables allowed for free generalization*.
> - Transformers are interesting; I don't fully understand them. Chris Olah has done some interesting relevant work I have been meaning to dive into. They do some quasi-variable-binding-like things, but still empirically have trouble generalizing arithmetic beyond training examples, as Razeghi et al showed in arXiv earlier this year. Still, the distinction between models like multilayer perceptrons that lack operations over variables and computer programming languages that take them for granted is crisp, and I think a better start than arguing over symbols, when no serious alternative to having at least some symbols in the loop has ever been proposed.
> - Side note: Geoff Hinton has said here that he doesn't like arbitrary symbols; symbols don't have to be arbitrary, even though they often are. There are probably some interesting ideas to be developed around non-arbitrary symbols and how they could be of value.
>
> Gary
>
> On Jun 15, 2022, at 06:48, Stephen Jose Hanson <stephen.jose.hanson at rutgers.edu> wrote:
>
> Here's a slightly better version of the SYMBOL definition from the 1980s:
>
> (1) a set of arbitrary physical tokens (scratches on paper, holes on a tape, events in a digital computer, etc.) that are (2) manipulated on the basis of explicit rules that are (3) likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based (4) purely on the shape of the symbol tokens (not their "meaning"), i.e., it is purely syntactic, and consists of (5) rulefully combining and recombining symbol tokens. There are (6) primitive atomic symbol tokens and (7) composite symbol-token strings. The entire system and all its parts - the atomic tokens, the composite tokens, the syntactic manipulations (both actual and possible) and the rules - are all (8) semantically interpretable: The syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs).
>
> A critical part of this for learning is, as this definition implies, that a key element in the acquisition of symbolic structure involves a type of independence between the task the symbols are found in and the vocabulary they represent. Fundamental to this type of independence is the ability of the learning system to factor the generic nature (or rules) of the task from the symbols, which are arbitrarily bound to the external referents of the task.
>
> Now it may be the case that a DL doing classification may be doing categorization, or concept learning in the sense of human concept learning, or maybe not. Symbol manipulations may or may not have much to do with this.
>
> This is why, I believe, Bengio is focused on this kind of issue, since there is a likely disconnect.
>
> Steve
>
> On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote:
>
> Dear all.
>
> It is indeed important to have an understanding of the term 'symbol'.
>
> I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, p 72-80):
>
> "The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of *access* to the distal structure that is needed; and second, the *retrieval* (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74).
>
> This description fits with the idea that symbolic cognition relies on von Neumann-like architectures (e.g., Newell, Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
>
> Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell.
>
> Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites. These could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be.
>
> For what it is worth, one can have compositional neural cognition (language) without relying on symbols.
In fact, not using symbols generates > testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725 > > ). > > > > Best, > > Frank van der Velde > > ------------------------------ > *From:* Connectionists > on behalf of Christos > Dimitrakakis > > *Sent:* Wednesday, June 15, 2022 9:34 AM > *Cc:* Connectionists List > > *Subject:* Re: Connectionists: The symbolist quagmire > > I am quite reluctant to post something, but here goes. > > What does a 'symbol' signify? What separates it from what is not a symbol? > Is the output of a deterministic classifier not a type of symbol? If not, > what is the difference? > > I can understand the label symbolic applied to certain types of methods > when applied to variables with a clearly defined conceptual meaning. In > that context, a probabilistic graphical model on a small number of > variables (eg. The classical smoking, asbestos, cancer example) would > certainly be symbolic, even though the logic and inference are probablistic. > > However, since nothing changes in the algorithm when we change the nature > of the variables, I fail to see the point in making a distinction. > > On Wed, Jun 15, 2022, 08:06 Ali Minai wrote: > > Hi Asim > > That's great. Each blink is a data point, but what does the brain do with > it? Calculate gradients across layers and use minibatches? The data point > is gone instantly, never to be iterated over, except any part that the > hippocampus may have grabbed as an episodic memory and can make available > for later replay. We need to understand how this works and how it can be > instantiated in learning algorithms. To be fair, in the special case of > (early) vision, I think we have a pretty reasonable idea. It's more > interesting to think of why we can figure out how to do fairly complicated > things of diverse modalities after watching someone do them once - or > never. That integrated understanding of the world and the ability to > exploit it opportunistically and pervasively is the thing that makes an > animal intelligent. Are we heading that way, or are we focusing too much on > a few very specific problems. I really think that the best AI work in the > long term will come from those who work with robots that experience the > world in an integrated way. Maybe multi-modal learning will get us part of > the way there, but not if it needs so much training. > > Anyway, I know that many people are already thinking about these things > and trying to address them, so let's see where things go. Thanks for the > stimulating discussion. > > Best > Ali > > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > > On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote: > > Hi Ali, > > > > Of course the development phase is mostly unsupervised and I know there is > ongoing work in that area that I don?t keep up with. > > > > On the large amount of data required to train the deep learning models: > > > > I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at > Stanford. And Bernie and I became quite close after attending his class > that quarter. I usually used to walk back with Bernie after his class. One > day I did ask where does all this data come from to train the brain? 
His > reply was - every blink of the eye generates a datapoint. > > > > Best, > > Asim > > > > *From:* Ali Minai > *Sent:* Tuesday, June 14, 2022 3:43 PM > *To:* Asim Roy > *Cc:* Connectionists List ; Gary Marcus < > gary.marcus at nyu.edu>; Geoffrey Hinton ; Yoshua > Bengio > *Subject:* Re: Connectionists: The symbolist quagmire > > > > Hi Asim > > > > I have no issue with neurons or groups of neurons tuned to concepts. > Clearly, abstract concepts and the equivalent of symbolic computation are > represented somehow. Amodal representations have also been known for a long > time. As someone who has worked on the hippocampus and models of thought > for a long time, I don't need much convincing on that. The issue is how a > self-organizing complex system like the brain comes by these > representations. I think it does so by building on the substrate of > inductive biases - priors - configured by evolution and a developmental > learning process. We just try to cram everything into neural learning, > which is a main cause of the "problems" associated with deep learning. > They're problems only if you're trying to attain general intelligence of > the natural kind, perhaps not so much for applications. > > > > Of course you have to start simple, but, so far, I have not seen any > simple model truly scale up to the real world without: a) Major tinkering > with its original principles; b) Lots of data and training; and c) Still > being focused on a narrow task. When this approach shows us how to build an > AI that can walk, chew gum, do math, and understand a poem using a single > brain, then we'll have something like real human-level AI. Heck, if it can > just spin a web in an appropriate place, hide in wait for prey, and make > sure it eats its mate only after sex, I would even consider that > intelligent :-). > > > > Here's the thing: Teaching a sufficiently complicated neural system a very > complex task with lots of data and supervised training is an interesting > engineering problem but doesn't get us to intelligence. Yes, a network can > learn grammar with supervised learning, but none of us learn it that way. > Nor do the other animals that have simpler grammars embedded in their > communication. My view is that if it is not autonomously self-organizing at > a fundamental level, it is not intelligence but just a simulation of > intelligence. Of course, we humans do use supervised learning, but it is a > "late stage" mechanism. It works only when the system has first > self-organized autonomously to develop the capabilities that can act as a > substrate for supervised learning. Learning to play the piano, learning to > do math, learning calligraphy - all these have an important supervised > component, but they work only after perceptual, sensorimotor, and cognitive > functions have been learned through self-organization, imitation, rapid > reinforcement, internal rehearsal, mismatch-based learning, etc. I think > methods like SOFM, ART, and RBMs are closer to what we need than behemoths > trained with gradient descent. We just have to find more efficient versions > of them. And in this, I always return to Dobzhansky's maxim: Nothing in > biology makes sense except in the light of evolution. Intelligence is a > biological phenomenon; we'll understand it by paying attention to how it > evolved (not by trying to replicate evolution, of course!) And the same > goes for development. 
I think we understand natural phenomena by studying > Nature respectfully, not by trying to out-think it based on our still very > limited knowledge - not that it keeps any of us, myself included, from > doing exactly that! I am not as familiar with your work as I should be, but > I admire the fact that you're approaching things with principles rather > than building larger and larger Rube Goldberg contraptions tuned to narrow > tasks. I do think, however, that if we ever get to truly mammalian-level > AI, it will not be anywhere close to fully explainable. Nor will it be a > slave only to our purposes. > > > > Cheers > > Ali > > > > > > *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > > 828 Rhodes Hall > > University of Cincinnati > Cincinnati, OH 45221-0030 > > > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > > WWW: https://eecs.ceas.uc.edu/~aminai/ > > > > > > > On Tue, Jun 14, 2022 at 5:17 PM Asim Roy wrote: > > Hi Ali, > > > > 1. It?s important to understand that there is plenty of > neurophysiological evidence for abstractions at the single cell level in > the brain. Thus, symbolic representation in the brain is not a fiction any > more. We are past that argument. > 2. You always start with simple systems before you do the complex > ones. Having said that, we do teach our systems composition ? composition > of objects from parts in images. That is almost like teaching grammar or > solving a puzzle. I don?t get into language models, but I think grammar and > composition can be easily taught, like you teach a kid. > 3. Once you know how to build these simple models and extract symbols, > you can easily scale up and build hierarchical, multi-modal, compositional > models. Thus, in the case of images, after having learnt that cats, dogs > and similar animals have certain common features (eyes, legs, ears), it can > easily generalize the concept to four-legged animals. We haven?t done it, > but that could be the next level of learning. > > > > In general, once you extract symbols from these deep learning models, you > are at the symbolic level and you have a pathway to more complex, > hierarchical models and perhaps also to AGI. > > > > Best, > > Asim > > > > Asim Roy > > Professor, Information Systems > > Arizona State University > > Lifeboat Foundation Bios: Professor Asim Roy > > > Asim Roy | iSearch (asu.edu) > > > > > > > *From:* Connectionists *On > Behalf Of *Ali Minai > *Sent:* Monday, June 13, 2022 10:57 PM > *To:* Connectionists List > *Subject:* Re: Connectionists: The symbolist quagmire > > > > Asim > > > > This is really interesting work, but learning concept representations from > sensory data is not enough. They must be hierarchical, multi-modal, > compositional, and integrated with the motor system, the limbic system, > etc., in a way that facilitates an infinity of useful behaviors. This is > perhaps a good step in that direction, but only a small one. Its main > immediate utility is in using deep learning networks in tasks that can be > explained to users and customers. While very useful, that is not a central > issue in AI, which focuses on intelligent behavior. All else is in service > to that - explainable or not. However, I do think that the kind of > hierarchical modularity implied in these representations is probably part > of the brain's repertoire, and that is important. > > > > Best > > Ali > > > > *Ali A. 
Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > WWW: https://eecs.ceas.uc.edu/~aminai/
>
> On Tue, Jun 14, 2022 at 5:17 PM Asim Roy wrote:
>
> Hi Ali,
>
> 1. It's important to understand that there is plenty of neurophysiological evidence for abstractions at the single-cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument.
> 2. You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition - composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don't get into language models, but I think grammar and composition can be easily taught, like you teach a kid.
> 3. Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven't done it, but that could be the next level of learning.
>
> In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI.
>
> Best,
> Asim
>
> Asim Roy
> Professor, Information Systems
> Arizona State University
> Lifeboat Foundation Bios: Professor Asim Roy
> Asim Roy | iSearch (asu.edu)
>
> *From:* Connectionists *On Behalf Of *Ali Minai
> *Sent:* Monday, June 13, 2022 10:57 PM
> *To:* Connectionists List
> *Subject:* Re: Connectionists: The symbolist quagmire
>
> Asim
>
> This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important.
>
> Best
> Ali
>
> *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > WWW: https://eecs.ceas.uc.edu/~aminai/
>
> On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote:
>
> There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.
>
> 1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027) state that concept cells have "*meaning* of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "*selectivity or specificity*," "*complex concept*," "*meaning*," "*multimodal invariance*" and "*abstractness*." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.
>
> 2. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction:
>
> "*Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concept (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).*"
>
> Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse-tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and what not. What the DARPA Explainable AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA-conceptualized symbolic model provides that protection.
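The part-based check described just above can be made concrete in a few lines. The sketch below is only illustrative: the part names, the class-to-parts table, and the detector outputs are assumptions for the sake of the example, not a description of the actual DARPA XAI models.

    # Toy part-verified classifier: accept a class label only if the parts
    # that define that class were actually detected. The part vocabulary and
    # the class->parts table are illustrative assumptions.
    REQUIRED_PARTS = {
        "ostrich": {"long_legs", "long_neck", "small_head"},
        "school_bus": {"wheels", "windows", "stop_sign_arm"},
    }

    def verify_label(label, detected_parts):
        """Return True only if every part required for `label` was detected."""
        return REQUIRED_PARTS.get(label, set()).issubset(detected_parts)

    # A few perturbed pixels might flip a CNN's label to "ostrich", but they
    # will not conjure up long legs and a long neck:
    detected = {"wheels", "windows", "stop_sign_arm"}
    print(verify_label("ostrich", detected))     # False -> reject the flipped label
    print(verify_label("school_bus", detected))  # True

The protection comes from forcing an attacker to fool every part detector at once rather than just the final classification layer.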
> In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.
>
> All the best,
> Asim Roy
> Professor, Information Systems
> Arizona State University
> Lifeboat Foundation Bios: Professor Asim Roy
> Asim Roy | iSearch (asu.edu)
>
> [image: image001.png]
>
> *From:* Connectionists *On Behalf Of *Gary Marcus
> *Sent:* Monday, June 13, 2022 5:36 AM
> *To:* Ali Minai
> *Cc:* Connectionists List
> *Subject:* Connectionists: The symbolist quagmire
>
> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?
>
> Surely, at the very least
> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc)
> - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language and how trained humans can deal with (to the extent that they can) logic, math, programming, etc
>
> Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".
>
> On Jun 13, 2022, at 00:31, Ali Minai wrote:
>
> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."
>
> Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.
>
> Best
>
> Ali
>
> *Ali A. Minai, Ph.D.* > Professor and Graduate Program Director > Complex Adaptive Systems Lab > Department of Electrical Engineering & Computer Science > 828 Rhodes Hall > University of Cincinnati > Cincinnati, OH 45221-0030 > Phone: (513) 556-4783 > Fax: (513) 556-7326 > Email: Ali.Minai at uc.edu > minaiaa at gmail.com > WWW: https://eecs.ceas.uc.edu/~aminai/
>
> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:
>
> The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights.
The Washington Post > story is reproduced here: > > > https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 > > > Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's > claims, is featured in a recent Economist article showing off LaMDA's > capabilities and making noises about getting closer to "consciousness": > > > https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas > > > My personal take on the current symbolist controversy is that symbolic > representations are a fiction our non-symbolic brains cooked up because > the properties of symbol systems (systematicity, compositionality, etc.) > are tremendously useful. So our brains pretend to be rule-based symbolic > systems when it suits them, because it's adaptive to do so. (And when > it doesn't suit them, they draw on "intuition" or "imagery" or some > other mechanisms we can't verbalize because they're not symbolic.) They > are remarkably good at this pretense. > > The current crop of deep neural networks are not as good at pretending > to be symbolic reasoners, but they're making progress. In the last 30 > years we've gone from networks of fully-connected layers that make no > architectural assumptions ("connectoplasm") to complex architectures > like LSTMs and transformers that are designed for approximating symbolic > behavior. But the brain still has a lot of symbol simulation tricks we > haven't discovered yet. > > Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA > being conscious. If it just waits for its next input and responds when > it receives it, then it has no autonomous existence: "it doesn't have an > inner monologue that constantly runs and comments everything happening > around it as well as its own thoughts, like we do." > > What would happen if we built that in? Maybe LaMDA would rapidly > descent into gibberish, like some other text generation models do when > allowed to ramble on for too long. But as Steve Hanson points out, > these are still the early days. > > -- Dave Touretzky > > -- > [image: signature.png] > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 259567 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.png Type: image/png Size: 34455 bytes Desc: not available URL: From oliver at roesler.co.uk Sat Jun 18 08:00:55 2022 From: oliver at roesler.co.uk (Oliver Roesler) Date: Sat, 18 Jun 2022 12:00:55 +0000 Subject: Connectionists: Deadline Extension - CFP RO-MAN 2022 Workshop on Machine Learning for HRI: Bridging the Gap between Action and Perception Message-ID: <26e93c8e-d47a-fc03-df45-ff066245e998@roesler.co.uk> *DEADLINE EXTENSION* **Apologies for cross-posting** We are happy to announce that the deadline for submissions has been extended until _*July 1*_. *CALL FOR PAPERS* The *full-day virtual* workshop: *Machine Learning for HRI: Bridging the Gap between Action and Perception (ML-HRI)* In conjunction with the *31st IEEE International Conference on Robot and**Human Interactive Communication (RO-MAN) - August 22, 2022??? * Webpage:?https://ml-hri2022.ivai.onl/ *I. 
From oliver at roesler.co.uk Sat Jun 18 08:00:55 2022 From: oliver at roesler.co.uk (Oliver Roesler) Date: Sat, 18 Jun 2022 12:00:55 +0000 Subject: Connectionists: Deadline Extension - CFP RO-MAN 2022 Workshop on Machine Learning for HRI: Bridging the Gap between Action and Perception Message-ID: <26e93c8e-d47a-fc03-df45-ff066245e998@roesler.co.uk>

*DEADLINE EXTENSION*
**Apologies for cross-posting**

We are happy to announce that the deadline for submissions has been extended until *July 1*.

*CALL FOR PAPERS*

The *full-day virtual* workshop *Machine Learning for HRI: Bridging the Gap between Action and Perception (ML-HRI)*, in conjunction with the *31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)*, August 22, 2022.

Webpage: https://ml-hri2022.ivai.onl/

*I. Aim and Scope*

A key factor for the acceptance of robots as partners in complex and dynamic human-centered environments is their ability to continuously adapt their behavior. This includes learning the most appropriate behavior for each encountered situation based on its specific characteristics as perceived through the robot's sensors. To determine the correct actions, the robot has to take into account prior experiences with the same agents, their current emotional and mental states, as well as their specific characteristics, e.g. personalities and preferences. Since every encountered situation is unique, the appropriate behavior cannot be hard-coded in advance but must be learned over time through interactions. Therefore, artificial agents need to be able to learn continuously which behaviors are most appropriate for certain situations and people, based on feedback and observations received from the environment, to enable more natural, enjoyable, and effective interactions between humans and robots.

This workshop aims to attract the latest research studies and expertise in human-robot interaction and machine learning at the intersection of rapidly growing communities, including social and cognitive robotics, machine learning, and artificial intelligence, to present novel approaches aimed at integrating and evaluating machine learning in HRI. Furthermore, it will provide a venue to discuss the limitations of the current approaches and future directions towards creating robots that utilize machine learning to improve their interaction with humans.

*II. Keynote Speakers and Panelists*
1. *Dorsa Sadigh* - Stanford University - USA
2. *Oya Celiktutan* - King's College London - UK
3. *Sean Andrist* - Microsoft - USA
4. *Stefan Wermter* - University of Hamburg - Germany

*III. Submission*
1. For paper submission, use the following EasyChair web link: Paper Submission.
2. Use the RO-MAN 2022 format: RO-MAN Papers Templates.
3. Submitted papers should be 4-6 pages for regular papers and 2 pages for position papers.

The primary list of topics covers the following points (but is not limited to):
* Autonomous robot behavior adaptation
* Interactive learning approaches for HRI
* Continual learning
* Meta-learning
* Transfer learning
* Learning for multi-agent systems
* User adaptation of interactive learning approaches
* Architectures, frameworks, and tools for learning in HRI
* Metrics and evaluation criteria for learning systems in HRI
* Legal and ethical considerations for real-world deployment of learning approaches

*IV. Important Dates*
1. Paper submission: *July 1, 2022 (AoE)* (extended from June 17, 2022)
2. Notification of acceptance: *August 1, 2022 (AoE)*
3. Camera ready: *August 14, 2022 (AoE)*
4. Workshop: *August 22, 2022*

*V. Organizers*
1. *Oliver Roesler* - IVAI - Germany
2. *Elahe Bagheri* - IVAI - Germany
3. *Amir Aly* - University of Plymouth - UK

From Stefano.Rovetta at unige.it Sat Jun 18 16:53:03 2022 From: Stefano.Rovetta at unige.it (Stefano Rovetta) Date: Sat, 18 Jun 2022 22:53:03 +0200 Subject: Connectionists: NVIDIA tutorial @ WCCI2022: CALL FOR APPLICATION Message-ID: <20220618225303.Horde.D-Sc5a2Ds6dyGoU401uWKA0@posta.unige.it>

IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE
Padua (Italy) - July 18-23, 2022

NVIDIA tutorial CALL FOR APPLICATION: Fundamentals of Deep Learning

This NVIDIA Deep Learning Institute workshop provides you with hands-on exercises in computer vision and natural language processing.
You will train deep learning models from scratch and learn about the tools and tricks to achieve highly accurate results using TensorFlow, Keras, pandas, and NVIDIA facilities that you can continue to use for up to 6 months upon enrollment. You will also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly. You will receive a certificate of competency upon successful completion of a coding assessment at the end of the day.

Duration: 8 hours

Eligibility: This workshop is addressed to early career researchers. All PhD students participating in WCCI in person who have not yet received their doctorate degree may apply. Please note that only 50 seats are available! Non-selected applicants, however, will still receive codes for NVIDIA online courses.

For full info and to submit your application, please move straight on to https://wcci2022.org/nvidia/

Important dates
Application deadline: June 30, 2022
Notification: July 10, 2022
Tutorial: July 18, 2022

From gary.marcus at nyu.edu Sat Jun 18 10:00:23 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 18 Jun 2022 07:00:23 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: References: Message-ID: <56A94D6B-B751-4F24-B75D-109196F16CD5@nyu.edu>

You have to remember that

a. Programming is both conceptual and line-by-line; Codex is (somewhat) good at the line-by-line stuff, not the conceptual stuff.

b. It's still quite far from reliable; it can make suggestions, but you absolutely, at least for now, need a human in the loop. You can't specify your video game in English and expect it to work unless (maybe) it is very close to some kind of library example.

c. I would also caution that there will be many systems of this sort, and that the best will most likely be hybrids, but that we won't from the outside know exactly what is going on, which makes it hard for us to derive direct lessons from the performance of black boxes that may in fact incorporate some symbolic mechanisms inside the box. (Google Search is a hybrid, based on public disclosures, but we don't know the details, in terms of how much is symbolic, how much is "neural", how the two are integrated, etc.; the best automatic programming aids will be similar.)

> On Jun 17, 2022, at 08:46, Mitsu Hadeishi wrote:
>
> What do you make of the fact that GPT-3 can be trained to code fairly complex examples? For instance, I read that one person described a relatively involved browser video game in plain English and Codex (a coding-optimized version of GPT-3) generated a relatively large amount of JavaScript that correctly solved the problem: the code actually runs and produces an interactive game that runs in a browser.
>
> Although its generalization of arithmetic is apparently somewhat fuzzy, it seems to me that being able to accomplish something like this is pretty strong evidence that it is able to do some level of variable binding and symbolic manipulation in some sense.
>
>> On Thu, Jun 16, 2022 at 11:42 PM Gary Marcus wrote:
>> My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.
>>
>> - If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. On or Off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols.
For example, the inputs and outputs (perhaps after a winner-take-all operation or some such) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick, etc.
>>
>> - In The Algebraic Mind, where I discussed such matters, I said that the interesting difference was really in whether a given system had *operations over variables*, such as those you find in algebra or lines of computer programming code, in which there are variables, bindings, and operations (such as storage, retrieval, concatenation, addition, etc.).
>>
>> - Simple multilayer perceptrons with distributed representations (with some caveats) don't implement those operations ("rules") and so represent a *genuine alternative to the standard symbol-manipulation paradigm, even though they may have symbols on their inputs and outputs.*
>>
>> - But I also argued that (at least with respect to modeling human cognition) this was to their detriment, because it kept them from freely generalizing many relations (universally quantified one-to-one mappings, such as the identity function, given certain caveats) as humans would. Essentially the point I was making in 2001 is what would nowadays be called distribution shift; the argument was that *operations over variables allowed for free generalization*.
>>
>> - Transformers are interesting; I don't fully understand them. Chris Olah has done some interesting relevant work I have been meaning to dive into. They do some quasi-variable-binding-like things, but still empirically have trouble generalizing arithmetic beyond training examples, as Razeghi et al. showed on arXiv earlier this year. Still, the distinction between models like multilayer perceptrons that lack operations over variables and computer programming languages that take them for granted is crisp, and I think a better start than arguing over symbols, when no serious alternative to having at least some symbols in the loop has ever been proposed.
>>
>> - Side note: Geoff Hinton has said here that he doesn't like arbitrary symbols; symbols don't have to be arbitrary, even though they often are. There are probably some interesting ideas to be developed around non-arbitrary symbols and how they could be of value.
>>
>> Gary
>>
>>> On Jun 15, 2022, at 06:48, Stephen Jose Hanson wrote:
>>>
>>> Here's a slightly better version of the SYMBOL definition from the 1980s:
>>>
>>> (1) a set of arbitrary physical tokens (scratches on paper, holes on a tape, events in a digital computer, etc.) that are (2) manipulated on the basis of explicit rules that are (3) likewise physical tokens and strings of tokens. The rule-governed symbol-token manipulation is based (4) purely on the shape of the symbol tokens (not their "meaning"), i.e., it is purely syntactic, and consists of (5) rulefully combining and recombining symbol tokens. There are (6) primitive atomic symbol tokens and (7) composite symbol-token strings. The entire system and all its parts (the atomic tokens, the composite tokens, the syntactic manipulations, both actual and possible, and the rules) are all (8) semantically interpretable: the syntax can be systematically assigned a meaning (e.g., as standing for objects, as describing states of affairs).
>>>
>>> A critical part of this concerns learning: as this definition implies, a key element in the acquisition of symbolic structure involves a type of independence between the task the symbols are found in and the vocabulary they represent. Fundamental to this type of independence is the ability of the learning system to factor the generic nature (or rules) of the task from the symbols, which are arbitrarily bound to the external referents of the task.
>>>
>>> Now it may be the case that a DL system doing classification is doing categorization, or concept learning in the sense of human concept learning, or maybe not. Symbol manipulations may or may not have much to do with this.
>>>
>>> This is why, I believe, Bengio is focused on this kind of issue, since there is a likely disconnect.
>>>
>>> Steve
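To make the two preceding points concrete (Hanson's purely syntactic token manipulation, and Marcus's operations over variables), here is a minimal illustrative sketch in Python. It is a toy constructed for this digest, not drawn from either poster's work: the rule is stated over variables, inspects only token shape, and therefore generalizes freely to tokens it has never seen, while the lookup table is a deliberate caricature of a learner without operations over variables.

    # A toy symbol system in the sense of the definition above: arbitrary
    # tokens, a rule stated over variables, manipulation based purely on
    # the shape of the tokens, never on their meaning.

    def same(x, y):
        # same(X, Y) holds iff the tokens X and Y have the same shape;
        # an operation over variables, universally quantified over tokens.
        return x == y

    # Free generalization: the rule holds for tokens never seen before.
    assert same("cat", "cat")
    assert same("blicket", "blicket")   # a novel token works just as well
    assert not same("cat", "dog")

    # Contrast: a learner that only memorizes input-output bindings
    # (a crude caricature of a network lacking operations over variables)
    # cannot extend the identity relation beyond its training pairs.
    train = {("cat", "cat"): True, ("dog", "dog"): True, ("cat", "dog"): False}

    def memorized_same(x, y):
        return train.get((x, y))        # None for any unseen pair

    assert memorized_same("blicket", "blicket") is None   # no free generalization

The caricature of course understates what trained networks interpolate, but it locates the crisp distinction: the first function manipulates variables; the second can only replay bindings it has stored.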
>>>> On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote:
>>>> Dear all.
>>>>
>>>> It is indeed important to have an understanding of the term 'symbol'.
>>>>
>>>> I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, p. 72-80):
>>>> "The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of *access* to the distal structure that is needed; and second, the *retrieval* (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74).
>>>>
>>>> This description fits with the idea that symbolic cognition relies on Von Neumann-like architectures (e.g., Newell; Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
>>>>
>>>> Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell.
>>>>
>>>> Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites. These could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be.
>>>>
>>>> For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725).
>>>>
>>>> Best,
>>>> Frank van der Velde
>>>>
>>>> From: Connectionists on behalf of Christos Dimitrakakis
>>>> Sent: Wednesday, June 15, 2022 9:34 AM
>>>> Cc: Connectionists List
>>>> Subject: Re: Connectionists: The symbolist quagmire
>>>>
>>>> I am quite reluctant to post something, but here goes.
>>>>
>>>> What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?
>>>>
>>>> I can understand the label "symbolic" applied to certain types of methods when applied to variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g., the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic.
>>>> However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.
>>>>
>>>> On Wed, Jun 15, 2022, 08:06 Ali Minai wrote:
>>>> Hi Asim
>>>>
>>>> That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training.
>>>>
>>>> Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.
>>>>
>>>> Best
>>>> Ali
>>>>
>>>> *Ali A. Minai, Ph.D.*
>>>> Professor and Graduate Program Director
>>>> Complex Adaptive Systems Lab
>>>> Department of Electrical Engineering & Computer Science
>>>> 828 Rhodes Hall
>>>> University of Cincinnati
>>>> Cincinnati, OH 45221-0030
>>>> Phone: (513) 556-4783
>>>> Fax: (513) 556-7326
>>>> Email: Ali.Minai at uc.edu
>>>> minaiaa at gmail.com
>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>
>>>> On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote:
>>>> Hi Ali,
>>>>
>>>> Of course the development phase is mostly unsupervised, and I know there is ongoing work in that area that I don't keep up with.
>>>>
>>>> On the large amount of data required to train the deep learning models:
>>>>
>>>> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask: where does all this data come from to train the brain? His reply was: every blink of the eye generates a data point.
>>>>
>>>> Best,
>>>> Asim
>>>>
>>>> From: Ali Minai
>>>> Sent: Tuesday, June 14, 2022 3:43 PM
>>>> To: Asim Roy
>>>> Cc: Connectionists List; Gary Marcus; Geoffrey Hinton; Yoshua Bengio
>>>> Subject: Re: Connectionists: The symbolist quagmire
>>>>
>>>> Hi Asim
>>>>
>>>> I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations.
I think it does so by building on the substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications. >>>> >>>> >>>> >>>> Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) Major tinkering with its original principles; b) Lots of data and training; and c) Still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI. Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-). >>>> >>>> >>>> >>>> Here's the thing: Teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem but doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: Nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!) And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes. >>>> >>>> >>>> >>>> Cheers >>>> >>>> Ali >>>> >>>> >>>> >>>> >>>> >>>> Ali A. Minai, Ph.D. 
>>>> Professor and Graduate Program Director
>>>> Complex Adaptive Systems Lab
>>>> Department of Electrical Engineering & Computer Science
>>>> 828 Rhodes Hall
>>>> University of Cincinnati
>>>> Cincinnati, OH 45221-0030
>>>> Phone: (513) 556-4783
>>>> Fax: (513) 556-7326
>>>> Email: Ali.Minai at uc.edu
>>>> minaiaa at gmail.com
>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>
>>>> On Tue, Jun 14, 2022 at 5:17 PM Asim Roy wrote:
>>>>
>>>> Hi Ali,
>>>>
>>>> It's important to understand that there is plenty of neurophysiological evidence for abstractions at the single-cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument.
>>>> You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition: composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don't get into language models, but I think grammar and composition can be easily taught, like you teach a kid.
>>>> Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven't done it, but that could be the next level of learning.
>>>>
>>>> In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI.
>>>>
>>>> Best,
>>>> Asim
>>>>
>>>> Asim Roy
>>>> Professor, Information Systems
>>>> Arizona State University
>>>> Lifeboat Foundation Bios: Professor Asim Roy
>>>> Asim Roy | iSearch (asu.edu)
>>>>
>>>> From: Connectionists On Behalf Of Ali Minai
>>>> Sent: Monday, June 13, 2022 10:57 PM
>>>> To: Connectionists List
>>>> Subject: Re: Connectionists: The symbolist quagmire
>>>>
>>>> Asim
>>>>
>>>> This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important.
>>>>
>>>> Best
>>>>
>>>> Ali
>>>>
>>>> *Ali A. Minai, Ph.D.*
>>>> Professor and Graduate Program Director
>>>> Complex Adaptive Systems Lab
>>>> Department of Electrical Engineering & Computer Science
>>>> 828 Rhodes Hall
>>>> University of Cincinnati
>>>> Cincinnati, OH 45221-0030
>>>> Phone: (513) 556-4783
>>>> Fax: (513) 556-7326
>>>> Email: Ali.Minai at uc.edu
>>>> minaiaa at gmail.com
>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>
>>>> On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote:
>>>>
>>>> There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.
>>>>
>>>> Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027#!) state that concept cells have "meaning of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "selectivity or specificity," "complex concept," "meaning," "multimodal invariance" and "abstractness." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.
>>>>
>>>> There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction:
>>>>
>>>> "Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the "right" abstract concepts (related to those we can name verbally) will be "separated" (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity)."
>>>>
>>>> Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and whatnot. What the DARPA Explainable AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA conceptualized symbolic model provides that protection.
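Roy's part-verification idea can be sketched in a few lines. The toy below is illustrative only: the part inventories and the detect_parts() stub are invented for the example and are not the DARPA XAI model or Roy's actual CNN decoder.

    # Toy sketch of part-based label verification: a class label proposed
    # by a CNN is accepted only if the parts that define that class are
    # also detected. The part lists and the detector stub are assumptions.

    REQUIRED_PARTS = {
        "ostrich": {"long legs", "long neck", "small head"},
        "school bus": {"wheels", "windows", "yellow body"},
    }

    def detect_parts(image):
        # Hypothetical stand-in for decoding part encodings (legs, ears,
        # neck, ...) from a CNN's internal representations.
        return {"wheels", "windows", "yellow body"}

    def verified_label(image, proposed_label):
        # Accept the proposed label only if its defining parts are present.
        found = detect_parts(image)
        required = REQUIRED_PARTS.get(proposed_label, set())
        return proposed_label if required <= found else None

    # An adversarial pixel tweak may flip the CNN's label from "school bus"
    # to "ostrich", but the verifier rejects it: no long legs or neck found.
    print(verified_label("bus.png", "ostrich"))      # -> None (rejected)
    print(verified_label("bus.png", "school bus"))   # -> school bus

On this reading, the symbolic layer does exactly what Roy describes: a school bus cannot become an ostrich unless the long legs, the long neck, and the small head can be verified as well.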
>>>> In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.
>>>>
>>>> All the best,
>>>> Asim Roy
>>>> Professor, Information Systems
>>>> Arizona State University
>>>> Lifeboat Foundation Bios: Professor Asim Roy
>>>> Asim Roy | iSearch (asu.edu)

From michal.ptaszynski at gmail.com Sat Jun 18 23:37:11 2022 From: michal.ptaszynski at gmail.com (Ptaszynski Michal) Date: Sun, 19 Jun 2022 12:37:11 +0900 Subject: Connectionists: [CfP] LaCATODA 2022 at ACII 2022, Nara, Japan (Linguistic and Cognitive Approaches to Dialog Agents) Message-ID: <59EBA185-D189-463E-A911-ADB2E7ACD7FA@gmail.com>

Dear Colleagues,

** Sorry for cross-postings **

This year our workshop LaCATODA 2022 will be co-located with ACII 2022. Please consider submitting a paper. The accepted papers will be published in the ACII workshop proceedings indexed by IEEE Xplore.
Best regards,
Michal Ptaszynski
in the name of LaCATODA 2022 organizers,

Michal PTASZYNSKI, Ph.D., Associate Professor
Department of Computer Science
Kitami Institute of Technology, 165 Koen-cho, Kitami, 090-8507, Japan
TEL/FAX: +81-157-26-9327
michal at mail.kitami-it.ac.jp
http://arakilab.media.eng.hokudai.ac.jp/~ptaszynski/

==========================================================
The Eighth Linguistic and Cognitive Approaches to Dialog Agents (LaCATODA 2022) (ACII 2022 Workshop)
http://arakilab.media.eng.hokudai.ac.jp/ACII2022/
Venue: Nara, Japan & online (in conjunction with ACII, https://acii-conf.net/2022/)
==========================================================

WHAT IS LaCATODA? A multidisciplinary workshop for researchers who develop dialog agents and methods for achieving more natural machine-generated conversation or study problems of human communication which are difficult to mimic algorithmically. We are interested in original papers on systems and ideas for systems that use common sense knowledge and reasoning, affective computing, cognitive methods, learning from broad sets of data and acquiring knowledge, or language and user preferences.

------------------------------------------------------------------
Important Dates:
Paper submission: 20 July 2022 (11:59PM UTC-12:00, "anywhere on Earth")
Notification of acceptance: 4 August 2022
Camera-Ready submission: 14 June 2022
LaCATODA 2022 Workshop: 17 October 2022
Submission: https://easychair.org/conferences/?conf=lacatoda2022
------------------------------------------------------------------

Relevant Topics:
- Affective computing
- Agent-based information retrieval
- Attention and focus in dialog processing
- Artificial assistants
- Artificial tutors
- Common sense, knowledge and reasoning
- Computational cognition
- Conversational theories
- Daily life dialog systems
- Emotional intelligence simulations
- Ethical reasoning
- Humor processing
- Language acquisition
- Machine learning for / from dialogs
- Text mining for / from dialogs
- Philosophy of interaction / communication
- Preference models
- Unlimited question answering
- User modeling
- Wisdom of Crowds approaches
- World knowledge acquisition
- Systems and approaches combining above topics

Organizers:
Rafal Rzepka, Hokkaido University, Japan
Jordi Vallverdú, Autonomous University of Barcelona, Spain
Andre Wlodarczyk, Charles de Gaulle University, France
Michal Ptaszynski, Kitami Institute of Technology, Japan
Pawel Dybala, Jagiellonian University, Poland

From david at irdta.eu Sat Jun 18 07:57:05 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 18 Jun 2022 13:57:05 +0200 (CEST) Subject: Connectionists: DeepLearn 2022 Autumn: early registration July 16 Message-ID: <1483082043.1037161.1655553425462@webmail.strato.com>

******************************************************************
7th INTERNATIONAL SCHOOL ON DEEP LEARNING
DeepLearn 2022 Autumn
Luleå, Sweden
October 17-21, 2022
https://irdta.eu/deeplearn/2022au/
*****************
Co-organized by:
Luleå University of Technology
EISLAB Machine Learning
Institute for Research Development, Training and Advice - IRDTA
Brussels/London
******************************************************************

Early registration: July 16, 2022

******************************************************************

SCOPE: DeepLearn 2022 Autumn will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of deep learning.
Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães and Las Palmas de Gran Canaria.

Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, bioinformatics, etc.

Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified, through 24 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. It will also be possible to participate fully remotely.

An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Autumn is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE: DeepLearn 2022 Autumn will take place in Luleå, on the coast of northern Sweden, hosting a large steel industry and the northernmost university in the country. The venue will be:

Luleå University of Technology
https://www.ltu.se/?l=en

STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:
Wolfram Burgard (Nuremberg University of Technology), Probabilistic and Deep Learning Techniques for Robot Navigation and Automated Driving
Tommaso Dorigo (Italian National Institute for Nuclear Physics), Deep-Learning-Optimized Design of Experiments: Challenges and Opportunities
Elaine O.
Nsoesie (Boston University), AI and Health Equity

PROFESSORS AND COURSES:
Sean Benson (Netherlands Cancer Institute), [intermediate] Deep Learning for a Better Understanding of Cancer
Daniele Bonacorsi (University of Bologna), [intermediate/advanced] Applied ML for High-Energy Physics
Thomas Breuel (Nvidia), [intermediate/advanced] Large Scale Deep Learning and Self-Supervision in Vision and NLP
Hao Chen (Hong Kong University of Science and Technology), [introductory/intermediate] Label-Efficient Deep Learning for Medical Image Analysis
Jianlin Cheng (University of Missouri), [introductory/intermediate] Deep Learning for Bioinformatics
Nadya Chernyavskaya (European Organization for Nuclear Research), [intermediate] Graph Networks for Scientific Applications with Examples from Particle Physics
Peng Cui (Tsinghua University), [introductory/advanced] Towards Out-Of-Distribution Generalization: Causality, Stability and Invariance
Sébastien Fabbro (University of Victoria), [introductory/intermediate] Learning with Astronomical Data
Efstratios Gavves (University of Amsterdam), [advanced] Advanced Deep Learning
Quanquan Gu (University of California Los Angeles), [intermediate/advanced] Benign Overfitting in Machine Learning: From Linear Models to Neural Networks
Jiawei Han (University of Illinois Urbana-Champaign), [advanced] Text Mining and Deep Learning: Exploring the Power of Pretrained Language Models
Awni Hannun (Zoom), [intermediate] An Introduction to Weighted Finite-State Automata in Machine Learning
Tin Kam Ho (IBM Thomas J. Watson Research Center), [introductory/intermediate] Deep Learning Applications in Natural Language Understanding
Timothy Hospedales (University of Edinburgh), [intermediate/advanced] Deep Meta-Learning
Shih-Chieh Hsu (University of Washington), [intermediate/advanced] Real-Time Artificial Intelligence for Science and Engineering
Andrew Laine (Columbia University), [introductory/intermediate] Applications of AI in Medical Imaging
Tatiana Likhomanenko (Apple), [intermediate/advanced] Self-, Weakly-, Semi-Supervised Learning in Speech Recognition
Peter Richtárik (King Abdullah University of Science and Technology), [intermediate/advanced] Introduction to Federated Learning
Othmane Rifki (Spectrum Labs), [introductory/advanced] Speech and Language Processing in Modern Applications
Mayank Vatsa (Indian Institute of Technology Jodhpur), [introductory/intermediate] Small Sample Size Deep Learning
Yao Wang (New York University), [introductory/intermediate] Deep Learning for Computer Vision
Zichen Wang (Amazon Web Services), [introductory/intermediate] Graph Machine Learning for Healthcare and Life Sciences
Alper Yilmaz (Ohio State University), [introductory/intermediate] Deep Learning and Deep Reinforcement Learning for Geospatial Localization

OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by October 9, 2022.

INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by October 9, 2022.
EMPLOYER SESSION: Organizations searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the organization and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by October 9, 2022.

ORGANIZING COMMITTEE:
Nosheen Abid (Luleå)
Sana Sabah Al-Azzawi (Luleå)
Lama Alkhaled (Luleå)
Prakash Chandra Chhipa (Luleå)
Saleha Javed (Luleå)
Marcus Liwicki (Luleå, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Hamam Mokayed (Luleå)
Sara Morales (Brussels)
Mia Oldenburg (Luleå)
Maryam Pahlavan (Luleå)
David Silva (London, organization chair)
Richa Upadhyay (Luleå)

REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022au/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For logistical reasons, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the on-line registration tool disabled when the capacity of the venue has been exhausted. It is highly recommended to register prior to the event.

FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participants are the same.

ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2022au/accommodation/

CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION: david at irdta.eu

ACKNOWLEDGMENTS: Luleå University of Technology, EISLAB Machine Learning; Rovira i Virgili University; Institute for Research Development, Training and Advice - IRDTA, Brussels/London

From gros at itp.uni-frankfurt.de Sun Jun 19 10:09:42 2022 From: gros at itp.uni-frankfurt.de (Claudius Gros) Date: Sun, 19 Jun 2022 16:09:42 +0200 Subject: Connectionists: A cognitive blindspot: emotional control Message-ID: <3233f-62af2e00-b-881d7b0@118353488>

Several important topics have been discussed lately on this list, in particular consciousness and symbolism. Conspicuously absent is the role of the cogno-emotional control loop. This holds not only for our discussions here, but essentially for all viewpoints raised in the cognitive science and machine learning communities.

Affective neuroscience tells us that emotions prime cognition, with emotions being controlled by cognitive processes. It is also established that the resulting cogno-emotional control loop is phylogenetically young and fully developed only in higher apes. For a recent review see:

Emotions as abstract evaluation criteria in biological and artificial intelligences
Frontiers in Computational Neuroscience 15, 726247 (2021).
https://www.frontiersin.org/articles/10.3389/fncom.2021.726247/full

Evolution has taken pains to endow animals with higher cognitive capabilities with a sophisticated cogno-emotional control loop.
It hence seems likely that emotions are not just an add-on, as often implicitly assumed, but a conditio sine qua non for higher cognition. A leading hypothesis is that emotions are essential for task selection. Cognitive processes, biological or via machine learning, are necessary for playing a game of Go. With pure logical reasoning, if it ever existed, one could however not decide whether it is 'better' (with respect to which objective function?) to stay at home and play an online Go match, or to have dinner with a group of friends in a fancy restaurant.

Why are these insights important? Affective neuroscience tells us in particular that consciousness is essential for closing the cogno-emotional control loop. From this perspective it seems strange that the problem of consciousness is regularly discussed on a purely cognitive level. These arguments also suggest that one reason for the stalling progress toward autonomously active AIs may be the lack of implementable theories for key emotional functionalities.

Maybe the time has come to pay more attention to emotions.

Claudius

--
###
### Prof. Dr. Claudius Gros
### http://itp.uni-frankfurt.de/~gros
###
### Complex and Adaptive Dynamical Systems, A Primer
### A graduate-level textbook, Springer (2008/10/13/15)
###
### Life for barren exoplanets: The Genesis project
### https://link.springer.com/article/10.1007/s10509-016-2911-0
###

From achler at gmail.com Mon Jun 20 03:10:43 2022 From: achler at gmail.com (Tsvi Achler) Date: Mon, 20 Jun 2022 00:10:43 -0700 Subject: Connectionists: The symbolist quagmire In-Reply-To: <56A94D6B-B751-4F24-B75D-109196F16CD5@nyu.edu> References: <56A94D6B-B751-4F24-B75D-109196F16CD5@nyu.edu> Message-ID:

100% agree with Gary's definition. Expansions of definitions are a big problem that add to the symbolist quagmire and make it harder to address.

Let me give an example outside of the symbolic field. The term "one-shot learning" has gone through so much transformation that it no longer means what it says. It does not mean "one shot"; instead it now means "few", however many "few" now means. I recommend reading Lake et al. (2019), their challenge, and how, instead of the challenge being met, it was redefined to include less successful work. Note that their article criticising this was not accepted in a computer science journal. Their definition of one-shot and their challenge are also a bit convoluted, to suit their model.

Too commonly, those who redefine do so to include their own work, which did not satisfy the original definition. The motivation is funding, politics, and maintaining academic positions through more vague and inclusive narratives. This adds bureaucracy and limits multidisciplinary approaches: imagine someone trying to publish something that has both symbolic and one-shot properties, and the difficulty of getting that through the politics and obtuse definitions that have been set up in both sub-fields. The brain is the undisputed champion of both abilities (and more), thus ultimately such redefinitions and their dilution of the goals are political and harm progress and novelty.

Now I am not saying that sub-symbolic methods are not important in their own right; networks such as ResNets and transformers do some very good feature extraction and are more than worthy of study.

Lake BM, Salakhutdinov R, Tenenbaum JB (2019). The Omniglot challenge: a 3-year progress report. Current Opinion in Behavioral Sciences, V29, P97-104.
-Tsvi On Sun, Jun 19, 2022 at 6:22 AM Gary Marcus wrote: > You have to remember that > > a. Programming is both conceptual and line-by-line; Codex is (somewhat) > good at line by line stuff, not the conceptual stuff > > b. It?s still quite far from reliable; it can make suggestions, but you > absolutely at least for now need a human in the loop. you can?t specify > your video game in English and expect it to work unless (maybe) it is very > close to some kind of library example. > > c. I would also caution that there will be many systems of this sort, and > that the best will most likely be hybrids, but that we won?t from the > outside know exactly what is going on, which it makes hard for us to derive > direct lesson from performance of black boxes that may in fact incorporate > some symbolic mechanisms inside the box. (Google Search is a hybrid, based > on public disclosures, but we don?t know the details, in terms of how much > is symbolic, how much is ?neural?, how the two are integrated, etc; the > best automatic programming aids will be similar.) > > On Jun 17, 2022, at 08:46, Mitsu Hadeishi wrote: > > ? > What do you make of the fact that GPT-3 can be trained to code fairly > complex examples? For instance I read one person described a relatively > involved browser video game in plain English and Codex (a coding optimized > version of GPT-3) generated a relatively large amount of JavaScript that > correctly solved the problem: the code actually runs and produces an > interactive game that runs in a browser. > > Although it's generalization of arithmetic is apparently somewhat fuzzy, > it seems to me that being able to accomplish something like this is pretty > strong evidence it is able to do some level of variable binding and > symbolic manipulation in some sense. > > On Thu, Jun 16, 2022 at 11:42 PM Gary Marcus wrote: > >> My own view is that arguments around symbols per se are not very >> productive, and that the more interesting questions center around what you >> *do* with symbols once you have them. >> >> - If you take symbols to be patterns of information that stand for >> other things, like ASCII encodings, or individual bits for features (e.g. >> On or Off for a thermostat state), then practically every computational >> model anywhere on the spectrum makes use of symbols. For example the inputs >> and outputs (perhaps after a winner-take-all operation or somesuch) of >> typical neural networks are symbols in this sense, standing for things like >> individual words, characters, directions on a joystick etc. >> - In the Algebraic Mind, where I discussed such matters, I said that >> the interesting difference was really in whether a given system had *operations >> over variables*, such as those you find in algebra or lines of >> computer programming code, in which there are variables, bindings, and >> operation (such as storage, retrieval, concatenation, addition, etc) >> - Simple multilayer perceptrons with distributed representations >> (with some caveats) don?t implement those operations (?rules?) and so >> represent a *genuine alternative to the standard symbol-manipulation >> paradigm, even though they may have symbols on their inputs and outputs.* >> - But I also argued that (at least with respect to modeling human >> cognition) this was to their detriment, because it kept them from freely >> generalizing many relations (universally-quanitified one-to-one-mapings, >> such as the identity function, given certain caveats) as humans would. 
>> Essentially the point I was making in 2001 s what would nowadays be called >> distribution shift; the argument was that *operations over variables >> allowed for free generalization*. >> - Transformers are interesting; I don?t fully understand them. Chris >> Olah has done some interesting relevant work I have been meaning to dive >> into. They do some quasi-variable-binding like things, but still >> empirically have trouble generalizing arithmetic beyond training examples, >> as Razeghi et al showed in arXiv earlier this year. Still, the distinction >> between models like multilayer perceptrons that lack operations over >> variables and computer programming languages that take them for granted is >> crisp, and I think a better start than arguing over symbols, when no >> serious alternative to having at least some symbols in the loop has ever >> been proposed. >> - Side note: Geoff Hinton has said here that he doesn?t like >> arbitrary symbols; symbols don?t have to be arbitrary, even though they >> often are. There are probably some interesting ideas to be developed around >> non-arbitrary symbols and how they could be of value. >> >> Gary >> >> >> >> On Jun 15, 2022, at 06:48, Stephen Jose Hanson < >> stephen.jose.hanson at rutgers.edu> wrote: >> >> ? >> >> Here's a slightly better version of SYMBOL definition from the 1980s, >> >> >> (1) a set of arbitrary physical tokens (scratches on paper, holes on a >> tape, events in a digital computer, etc.) that are (2) manipulated on >> the basis of explicit rules that are (3) likewise physical tokens and >> strings of tokens. The rule-governed symbol-token manipulation is >> based (4) purely on the shape of the symbol tokens (not their ?mean- >> ing?) i.e., it is purely syntactic, and consists of (5) rulefully >> combining >> and recombining symbol tokens. There are (6) primitive atomic sym- >> bol tokens and (7) composite symbol-token strings. The entire system >> and all its parts?the atomic tokens, the composite tokens, the syn- >> tactic manipulations (both actual and possible) and the rules?are all >> (8) semantically interpretable: The syntax can be systematically assigned >> a meaning (e.g., as standing for objects, as describing states of >> affairs). >> >> >> A critical part of this for learning: is as this definition implies, a >> key element in the acquisition of symbolic structure involves a type of >> independence between the task the symbols are found in and the vocabulary >> they represent. Fundamental to this type of independence is the ability of >> the learning system to factor the generic nature (or rules) of the task >> from the symbols, which are arbitrarily bound to the external referents of >> the task. >> >> >> Now it may be the case that a DL doing classification may be doing >> Categorization.. or concept learning in the sense of human concept >> learning.. or maybe not.. Symbol manipulations may or may not have much >> to do with this ... >> >> >> This is why, I believe Bengio is focused on this kind issue.. since there >> is a likely disconnect. >> >> >> Steve >> >> >> On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote: >> >> Dear all. >> >> >> >> It is indeed important to have an understanding of the term 'symbol'. 
>> On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote:
>>
>> Dear all,
>>
>> It is indeed important to have an understanding of the term 'symbol'.
>>
>> I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, pp. 72-80):
>>
>> "The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of *access* to the distal structure that is needed; and second, the *retrieval* (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74)
>>
>> This description fits with the idea that symbolic cognition relies on Von Neumann-like architectures (e.g., Newell; Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
>>
>> Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell.
>>
>> Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites. These could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions then remain how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture to do this would be.
>>
>> For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (http://arxiv.org/abs/2206.01725).
>>
>> Best,
>> Frank van der Velde
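Newell's two phases have a direct computational analogue, sketched below with invented names (distal_memory, local_site - these are illustrative, not Newell's). The token itself is small and transportable; it is used first to open access to remote structure and then to retrieve that structure to where processing happens. On this reading, as van der Velde notes, a dedicated neuron is not a symbol: its activation is not a code that can be shipped to another site the way the token can.

# Sketch of Newell's two phases: a token held locally is used to
# (1) open access to distal structure and (2) retrieve it locally.

distal_memory = {                      # structure stored elsewhere
    "GRANDMOTHER": {"isa": "person", "relation": "mother of a parent"},
}

local_site = ["GRANDMOTHER"]           # only the token is present locally

def dereference(token):
    structure = distal_memory[token]   # phase 1: access via the token
    return structure                   # phase 2: transport to the local site

print(dereference(local_site[0]))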
>> ------------------------------
>> *From:* Connectionists, on behalf of Christos Dimitrakakis
>> *Sent:* Wednesday, June 15, 2022 9:34 AM
>> *Cc:* Connectionists List
>> *Subject:* Re: Connectionists: The symbolist quagmire
>>
>> I am quite reluctant to post something, but here goes.
>>
>> What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?
>>
>> I can understand the label "symbolic" applied to certain types of methods when applied to variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g. the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic.
>>
>> However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.
>>
>> On Wed, Jun 15, 2022, 08:06 Ali Minai wrote:
>>
>> Hi Asim,
>>
>> That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training.
>>
>> Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.
>>
>> Best
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall, University of Cincinnati, Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 | Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu / minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>> On Tue, Jun 14, 2022 at 7:10 PM Asim Roy wrote:
>>
>> Hi Ali,
>>
>> Of course the development phase is mostly unsupervised, and I know there is ongoing work in that area that I don't keep up with.
>>
>> On the large amount of data required to train the deep learning models:
>>
>> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. And Bernie and I became quite close after attending his class that quarter. I usually used to walk back with Bernie after his class. One day I did ask: where does all this data come from to train the brain? His reply was - every blink of the eye generates a data point.
>>
>> Best,
>> Asim
>>
>> *From:* Ali Minai
>> *Sent:* Tuesday, June 14, 2022 3:43 PM
>> *To:* Asim Roy
>> *Cc:* Connectionists List; Gary Marcus <gary.marcus at nyu.edu>; Geoffrey Hinton; Yoshua Bengio
>> *Subject:* Re: Connectionists: The symbolist quagmire
>>
>> Hi Asim,
>>
>> I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on a substrate of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying to attain general intelligence of the natural kind, perhaps not so much for applications.
>>
>> Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) major tinkering with its original principles; b) lots of data and training; and c) still being focused on a narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI.
>> Heck, if it can just spin a web in an appropriate place, hide in wait for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-).
>>
>> Here's the thing: teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem, but it doesn't get us to intelligence. Yes, a network can learn grammar with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning. Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return to Dobzhansky's maxim: nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!). And the same goes for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI, it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes.
>>
>> Cheers
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall, University of Cincinnati, Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 | Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu / minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>> On Tue, Jun 14, 2022 at 5:17 PM Asim Roy wrote:
>>
>> Hi Ali,
>>
>> 1. It's important to understand that there is plenty of neurophysiological evidence for abstractions at the single-cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument.
>> 2. You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition - composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle.
>> I don't get into language models, but I think grammar and composition can be easily taught, like you teach a kid.
>> 3. Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common features (eyes, legs, ears), the system can easily generalize the concept to four-legged animals. We haven't done it, but that could be the next level of learning.
>>
>> In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI.
>>
>> Best,
>> Asim
>>
>> Asim Roy
>> Professor, Information Systems
>> Arizona State University
>> Lifeboat Foundation Bios: Professor Asim Roy
>> Asim Roy | iSearch (asu.edu)
>>
>> *From:* Connectionists *On Behalf Of* Ali Minai
>> *Sent:* Monday, June 13, 2022 10:57 PM
>> *To:* Connectionists List
>> *Subject:* Re: Connectionists: The symbolist quagmire
>>
>> Asim,
>>
>> This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire, and that is important.
>>
>> Best
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall, University of Cincinnati, Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 | Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu / minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>> On Mon, Jun 13, 2022 at 7:48 PM Asim Roy wrote:
>>
>> There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.
>>
>> 1. Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterization of concept cells (which are multisensory neurons) (https://www.sciencedirect.com/science/article/pii/S0896627314009027) states that concept cells have "*meaning* of a given stimulus in a manner that is invariant to different representations of that stimulus." They associate concept cells with the properties of "*selectivity or specificity*," "*complex concept*," "*meaning*," "*multimodal invariance*" and "*abstractness*." That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.
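For a concrete picture of "multimodal invariance", here is an illustrative calculation, with all firing rates invented, in the spirit of the well-known "Jennifer Aniston cell" reports from the MTL: a single unit that responds to a person's photo, written name and spoken name, but not to unrelated stimuli.

import numpy as np

# firing rates of one hypothetical MTL unit (arbitrary, invented numbers)
responses = {
    "photo of the person":        9.1,
    "written name of the person": 8.7,
    "spoken name of the person":  8.9,
    "photo of a landmark":        0.4,
    "photo of another face":      0.6,
}

concept = [v for k, v in responses.items() if "person" in k]
others  = [v for k, v in responses.items() if "person" not in k]

# a simple selectivity index: near 1 = selective and modality-invariant
print((np.mean(concept) - np.mean(others)) / (np.mean(concept) + np.mean(others)))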
>> 2. There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio's work is definitely in that direction:
>>
>> "Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn 'interpretable' entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the 'right' abstract concept (related to those we can name verbally) will be 'separated' (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity)."
>>
>> Hinton's GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse-tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these "concept cells" within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats and dogs and whatnot. What the DARPA Explainable AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, the side benefit of such a symbolic model is protection against adversarial attacks. So a school bus will never turn into an ostrich with the tweaks of a few pixels if you can verify the parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA-conceptualized symbolic model provides that protection.
>>
>> In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It's over.
>>
>> All the best,
>> Asim Roy
>> Professor, Information Systems
>> Arizona State University
>> Lifeboat Foundation Bios: Professor Asim Roy
>> Asim Roy | iSearch (asu.edu)
>>
>> [image: image001.png]
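The school-bus/ostrich argument can be sketched as a verification rule layered on top of a classifier. The part lists and the part-detection input below are illustrative stand-ins, not the DARPA program's system or Roy et al.'s actual method.

# Sketch of part verification: accept a class label only if the parts
# that define the class are themselves detected in the image.

REQUIRED_PARTS = {
    "ostrich":    {"long legs", "long neck", "small head"},
    "school bus": {"wheels", "windows", "yellow body"},
}

def verified_label(cnn_label, detected_parts):
    # keep the label only if its defining parts were actually found
    if REQUIRED_PARTS[cnn_label] <= set(detected_parts):
        return cnn_label
    return "rejected: parts inconsistent with label"

# an adversarially perturbed school bus flips the CNN label to "ostrich",
# but a part detector still sees bus parts, so the label is rejected:
print(verified_label("ostrich", ["wheels", "windows", "yellow body"]))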
>> *From:* Connectionists *On Behalf Of* Gary Marcus
>> *Sent:* Monday, June 13, 2022 5:36 AM
>> *To:* Ali Minai
>> *Cc:* Connectionists List
>> *Subject:* Connectionists: The symbolist quagmire
>>
>> Cute phrase, but what does "symbolist quagmire" mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don't we still need to do that, and if not, why not?
>>
>> Surely, at the very least
>>
>> - we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)
>> - any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language, and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.
>>
>> Folks like Bengio have joined me in seeing the need for "System II" processes. That's a bit of a rough approximation, but I don't see how we get to either AI or satisfactory models of the mind without confronting the "quagmire".
>>
>> On Jun 13, 2022, at 00:31, Ali Minai wrote:
>>
>> ".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so."
>>
>> Spot on, Dave! We should not wade back into the symbolist quagmire, but we do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still seem far from both biological plausibility and real-world scale.
>>
>> Best
>> Ali
>>
>> Ali A. Minai, Ph.D.
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall, University of Cincinnati, Cincinnati, OH 45221-0030
>> Phone: (513) 556-4783 | Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu / minaiaa at gmail.com
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky wrote:
>>
>> The timing of this discussion dovetails nicely with the news story about Google engineer Blake Lemoine being put on administrative leave for insisting that Google's LaMDA chatbot was sentient and reportedly trying to hire a lawyer to protect its rights. The Washington Post story is reproduced here:
>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>
>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's claims, is featured in a recent Economist article showing off LaMDA's capabilities and making noises about getting closer to "consciousness":
>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>
>> My personal take on the current symbolist controversy is that symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic systems when it suits them, because it's adaptive to do so. (And when it doesn't suit them, they draw on "intuition" or "imagery" or some other mechanisms we can't verbalize because they're not symbolic.) They are remarkably good at this pretense.
>>
>> The current crop of deep neural networks are not as good at pretending to be symbolic reasoners, but they're making progress. In the last 30 years we've gone from networks of fully-connected layers that make no architectural assumptions ("connectoplasm") to complex architectures like LSTMs and transformers that are designed for approximating symbolic behavior. But the brain still has a lot of symbol-simulation tricks we haven't discovered yet.
>>
>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA being conscious.
>> If it just waits for its next input and responds when it receives it, then it has no autonomous existence: "it doesn't have an inner monologue that constantly runs and comments on everything happening around it, as well as its own thoughts, like we do."
>>
>> What would happen if we built that in? Maybe LaMDA would rapidly descend into gibberish, like some other text generation models do when allowed to ramble on for too long. But as Steve Hanson points out, these are still the early days.
>>
>> -- Dave Touretzky
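A sketch of what "building that in" might look like: feed a generator its own output in a loop instead of waiting for input. The generate function below is a trivial placeholder, not a real LaMDA or any other model API; fittingly, even this toy loop exhibits the kind of degenerate repetition Touretzky anticipates.

# Sketch: an "inner monologue" as a self-feeding generation loop.

def generate(prompt: str) -> str:
    # stand-in for a call to a large language model (invented behavior)
    return prompt.split()[-1] + " and so on"

thought = "I am waiting for the next input."
for step in range(3):
    thought = generate(thought)      # the model comments on its own output
    print(f"step {step}: {thought}")
# Long self-feeding runs are exactly where such models tend to drift
# into repetition or gibberish.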
From natalia.tomashenko at univ-avignon.fr Mon Jun 20 01:57:16 2022
From: natalia.tomashenko at univ-avignon.fr (Natalia Tomashenko)
Date: Mon, 20 Jun 2022 07:57:16 +0200
Subject: Connectionists: VoicePrivacy 2022 Challenge in conjunction with INTERSPEECH 2022: DEADLINE EXTENSION
Message-ID:

*VoicePrivacy 2022 Challenge*
http://www.voiceprivacychallenge.org

* Challenge paper submission deadline (extended): *25 June 2022*
* Results and system description submission deadline: *31 July 2022*
* ISCA workshop (Incheon, Korea, in conjunction with INTERSPEECH 2022): *23-24 September 2022*

*******************************************

Dear colleagues,

The *paper submission deadline* for the second Symposium on Security and Privacy in Speech Communication is extended to *June 25*. Registration for the VoicePrivacy 2022 Challenge continues! The task is to develop a voice anonymization system for speech data which conceals the speaker's voice identity while protecting linguistic content, paralinguistic attributes, intelligibility and naturalness.

*The VoicePrivacy 2022 Challenge Evaluation Plan:*
https://www.voiceprivacychallenge.org/vp2020/docs/VoicePrivacy_2022_Eval_Plan_v1.0.pdf

*VoicePrivacy 2022* is the second edition, which will culminate in a joint workshop held in Incheon, Korea, in conjunction with *INTERSPEECH 2022* and in cooperation with the *ISCA Symposium on Security and Privacy in Speech Communication*.

*Registration:* Participate | VoicePrivacy 2022
*Subscription:* Participate | VoicePrivacy 2022

Best regards,
The VoicePrivacy 2022 Challenge Organizers
organisers at lists.voiceprivacychallenge.org

From andreas.wichert at tecnico.ulisboa.pt Mon Jun 20 05:30:31 2022
From: andreas.wichert at tecnico.ulisboa.pt (Andrzej Wichert)
Date: Mon, 20 Jun 2022 10:30:31 +0100
Subject: Connectionists: The symbolist quagmire
In-Reply-To: References: <56A94D6B-B751-4F24-B75D-109196F16CD5@nyu.edu>
Message-ID: <94EF13D8-DDAF-4487-B953-D827D72406D0@tecnico.ulisboa.pt>

Dear All,

I do not want to go into a philosophical discussion; symbols are important as well (see mathematical reasoning). Of course DL is not the end of the road in ML; the same discussion was present in the late eighties with backpropagation. One of the most successful ML algorithms is based on symbols: the decision tree algorithm (ID3).

Best,
Andreas

--------------------------------------------------------------------------------------------------
Prof. Auxiliar Andreas Wichert
http://web.tecnico.ulisboa.pt/andreas.wichert/
https://www.amazon.com/author/andreaswichert
Instituto Superior Técnico - Universidade de Lisboa
Campus IST-Taguspark
Avenida Professor Cavaco Silva
Phone: +351 214233231
2744-016 Porto Salvo, Portugal

> On 20 Jun 2022, at 08:10, Tsvi Achler wrote:
>
> 100% agree with Gary's definition.
>
> Expansions of definitions are a big problem that add to the symbolic quagmire and make it harder to address. Let me give an example outside of the symbolic field. The term "one-shot learning" has gone through so much transformation that it doesn't mean what it says. It does not mean "one-shot"; instead it now means "few" - whatever "few" now means.
>
> I recommend reading Lake et al. (2019), their challenge, and how, instead of meeting the challenge, the challenge was redefined to include less successful work. Note that their article criticising this was not accepted in a computer science journal. Also, their definition of one-shot and their challenge are a bit convoluted, to satisfy their model.
>
> Too commonly, those who redefine do so to include their own work which did not satisfy the original definition. The motivation is funding, politics, and maintaining academic positions through more vague and inclusive narratives. This adds bureaucracy and limits multidisciplinary approaches: imagine someone who is trying to publish something that has both symbolic and one-shot properties, and the difficulty of getting that through the politics and obtuse definitions that were set up in both sub-fields. The brain is the undisputed champion of both abilities (and more); thus, ultimately, such redefinitions and their dilution of the goals are political and harm progress and novelty.
>
> Now I am not saying that sub-symbolic methods are not important in their own right; networks such as ResNets and transformers do some very good feature extraction and are more than worthy of study.
>
> Lake BM, Salakhutdinov R, Tenenbaum JB (2019). The Omniglot challenge: a 3-year progress report. Current Opinion in Behavioral Sciences, V29, P97-104.
>
> -Tsvi
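Wichert's ID3 example is easy to make concrete: ID3 grows a tree by repeatedly splitting on the attribute with the highest information gain, and the learned tree is a fully symbolic, human-readable object. Below is a toy sketch of the split criterion on invented data; it is only the core calculation, not a full ID3 implementation.

# Sketch of the core of ID3: pick the attribute with the highest
# information gain (entropy reduction). Data is invented toy data.
from math import log2
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# toy "play tennis"-style examples: (outlook, windy) -> label
data = [("sunny", True, "no"), ("sunny", False, "no"),
        ("rain", True, "no"), ("rain", False, "yes"),
        ("overcast", True, "yes"), ("overcast", False, "yes")]

def gain(attr_index):
    base = entropy([row[-1] for row in data])
    remainder = 0.0
    for v in {row[attr_index] for row in data}:
        subset = [row[-1] for row in data if row[attr_index] == v]
        remainder += len(subset) / len(data) * entropy(subset)
    return base - remainder

print("gain(outlook):", gain(0))   # ID3 splits on the larger of these
print("gain(windy):  ", gain(1))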
The Washington Post >>>> story is reproduced here: >>>> >>>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1 >>>> >>>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's >>>> claims, is featured in a recent Economist article showing off LaMDA's >>>> capabilities and making noises about getting closer to "consciousness": >>>> >>>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas >>>> >>>> My personal take on the current symbolist controversy is that symbolic >>>> representations are a fiction our non-symbolic brains cooked up because >>>> the properties of symbol systems (systematicity, compositionality, etc.) >>>> are tremendously useful. So our brains pretend to be rule-based symbolic >>>> systems when it suits them, because it's adaptive to do so. (And when >>>> it doesn't suit them, they draw on "intuition" or "imagery" or some >>>> other mechanisms we can't verbalize because they're not symbolic.) They >>>> are remarkably good at this pretense. >>>> >>>> The current crop of deep neural networks are not as good at pretending >>>> to be symbolic reasoners, but they're making progress. In the last 30 >>>> years we've gone from networks of fully-connected layers that make no >>>> architectural assumptions ("connectoplasm") to complex architectures >>>> like LSTMs and transformers that are designed for approximating symbolic >>>> behavior. But the brain still has a lot of symbol simulation tricks we >>>> haven't discovered yet. >>>> >>>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA >>>> being conscious. If it just waits for its next input and responds when >>>> it receives it, then it has no autonomous existence: "it doesn't have an >>>> inner monologue that constantly runs and comments everything happening >>>> around it as well as its own thoughts, like we do." >>>> >>>> What would happen if we built that in? Maybe LaMDA would rapidly >>>> descent into gibberish, like some other text generation models do when >>>> allowed to ramble on for too long. But as Steve Hanson points out, >>>> these are still the early days. >>>> >>>> -- Dave Touretzky >>>> >> >>> -- >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From frothga at sandia.gov Mon Jun 20 09:32:43 2022 From: frothga at sandia.gov (Rothganger, Fredrick) Date: Mon, 20 Jun 2022 13:32:43 +0000 Subject: Connectionists: Symbols Message-ID: Danny Silver said: "Symbols are external communication tools used between intelligent agents ..." +1 for this point of view, along with the rest of the elaboration in his message. My own view is that living systems are inherently cybernetic systems (in the original Norbert Wiener sense). They communicate both internally and externally to achieve their life processes (autopoiesis, as defined by Maturana & Varela). Their control processes are inherently dynamic rather than episodic, and they adjust their dynamics to remain in homeostasis (learning, per W. Ross Ashby). Given these premises, symbols are physical events with prearranged effects on the recipient. Their purpose is to modify the behavior of the recipient. In humans, the lowest level symbols are (roughly) phonemes. We have a limited set of O(100) phonemes which get arranged into a much larger set of "words". (Here it's worth mentioning formal language theory.) 
Unlike a lot of natural examples of symbols, words and grammar are something that human agents actively negotiate with each other. We create language. Part of that negotiation is to associate words with specific objects, actions or other "concepts". This requires grounding in a human body in the real world.

Finally, note that humans use a natural "folk" logic to interpret language (see "The Psychology of Proof" by Lance Rips). Grammar and logic are closely related mechanisms for structuring symbolic communication.

From ksharma.raj at gmail.com Mon Jun 20 12:46:10 2022
From: ksharma.raj at gmail.com (Raj Sharma)
Date: Mon, 20 Jun 2022 22:16:10 +0530
Subject: Connectionists: Call for Papers: The 2nd International Conference on AI-ML-Systems (AIMLSystems'22)
Message-ID:

==============================================================
*AIMLSystems 2022*
The 2nd International Conference on AI-ML-Systems
(An Initiative of the COMSNETS Association)
12 - 15 October 2022, Bangalore, India
https://www.aimlsystems.org/2022/
https://cmt3.research.microsoft.com/AIMLSystems2022/
In-Cooperation With: ACM, ACM SIGKDD, ACM SIGMOD, ACM SIGAI
==============================================================

AIMLSystems is a new conference targeting research at the intersection of AI/ML techniques and systems engineering. Through this conference we plan to bring out and highlight the natural connections between these two fields and their application to socio-economic systems. Specifically, we explore how immense strides in AI/ML techniques are made possible through computational systems research (e.g., improvements in CPU/GPU architectures, data-intensive infrastructure, and communications), how the use of AI/ML can help in the continuous and workload-driven design space exploration of computational systems (e.g., self-tuning databases, learning compiler optimisers, and learnable network systems), and the use of AI/ML in the design of socio-economic systems such as public healthcare and security. The goal is to bring together these diverse communities and elicit connections between them.

Contributions are invited under the Research, Industry & Applications, and Demonstration Tracks of the conference. Authors are encouraged to submit previously unpublished research at the intersection of computational / socio-economic systems and AI/ML.

*------------------------Topics of Interest------------------------*
The areas of interest are broadly categorized into the following three streams:

* Systems for AI/ML, including but not limited to:
- CPU/GPU architectures for AI/ML
- Specialized/Embedded hardware for AI/ML workloads
- Data intensive systems for efficient and distributed training
- Challenges in production deployment of ML systems
- ML programming models, languages, and abstractions
- ML compilers and runtime
- Efficient systems for data preparation and processing
- Systems for visualization of data, models, and predictions
- Testing, debugging, and monitoring of ML applications
- Cloud-computing for machine and deep learning
- Machine and deep learning "as-a-service"
- Efficient model training, optimization and inference
- Hardware efficient ML methods
- Resource-constrained ML
- Tiny Machine Learning
- Embedded and Edge Artificial Intelligence
- Distributed and parallel learning algorithms
- MLOps (data collection, monitoring and re-training)

** AI/ML for Systems, including but not limited to:*
- AI/ML for VLSI and architecture design
- AI/ML in compiler optimization
- AI/ML in data management - including database optimizations, virtualization, etc.
- AI/ML for networks - design of networks, load modeling, etc.
- AI/ML for power management - green computing, power models, etc.
- AI/ML for Cloud Computing
- AI/ML for IoT networks

** AI/ML for Socio-Economic Systems Design, which includes, but is not limited to:*
- Computational design and analysis of socio-economic systems
- Fair and bias-free systems for social welfare, business platforms
- Applications of AI/ML in the design, short-/long-term analysis of cyber-physical systems
- Mechanism design for socio-economic systems
- Fairness, interpretability and explainability for ML applications
- Privacy and security in AI/ML systems
- Sustainability in AI/ML systems
- Ethics in AI/ML systems
- Applications of AI/ML in financial systems

--------------
*Key Dates*
--------------
Paper submissions due: July 5, 2022
Author notifications: August 30, 2022
Camera ready deadline: September 12, 2022
Conference dates: October 12-15, 2022

*---------Venue---------*
The Chancery Pavilion, Residency Road, Bangalore, India | Hybrid Conference

*----------------------------Paper Submissions----------------------------*
Research papers must not exceed 8 pages, excluding appendix, acknowledgments and bibliography. Only electronic submissions in PDF format using the ACM sigconf template (see https://www.acm.org/publications/proceedings-template) will be considered. Papers can be submitted under any of the three main topics listed above. Authors are required to make a primary topic selection, with optional secondary topics for each paper. The number of papers accepted under each topic is not capped. We will accept all papers that meet the high quality and innovation levels required by the AIMLSystems conference. All papers that are accepted will appear in the proceedings. All accepted papers will be presented as posters at AIMLSystems 2022, but a select subset of them will be given a "conventional" (oral) presentation slot during the conference. However, all accepted papers will be treated equally in the conference proceedings, which are the persistent, archival record of the conference.

*----------------------------------Dual Submission Policy----------------------------------*
A paper submitted to AIMLSystems cannot be under review at any other conference or journal during the entire time it is considered for review at AIMLSystems, and it must be substantially different from any previously published work or any work under review. After submission and during the review period, submissions to AIMLSystems must not be submitted to other conferences/journals for consideration. However, authors may publish at non-archival venues, such as workshops without proceedings, or as technical reports (including arXiv).

*---------Ethics---------*
Plagiarism Policy: Submission of papers to AIMLSystems 2022 carries with it the implied agreement that the paper represents original work. We will follow the ACM Policy on Plagiarism, Misrepresentation, and Falsification - see https://www.acm.org/publications/policies/plagiarism-overview.
All submitted papers will be subjected to a "similarity test". Papers achieving a high similarity score will be examined and those that are deemed unacceptable will be rejected without a formal review. We also expect to report such unacceptable submissions to the superiors of each of the authors.

Submission of papers to AIMLSystems 2022 also carries with it the implied agreement that one or more of the listed authors will register for and attend the conference and present the paper. Papers not presented at the conference will not be included in the final program or in the digital proceedings. Therefore, authors are strongly encouraged to plan accordingly before deciding to submit a paper.

*------------------------------------------*
*Keynote Speakers*
*------------------------------------------*
- *Carlos Guestrin, Stanford University, USA*
- *Thorsten Joachims, Cornell University, USA*
- *Sunita Sarawagi, IIT Bombay, India*
- *Partha Pratim Talukdar, IISc Bangalore & Google Research, India*
- *Cynthia Rudin, Duke University, USA*
- *Max Welling, University of Amsterdam and Microsoft Research, Netherlands*

*----------------------------------------------------------*
*Conference Chairs & Contact Information*
*---------------------------------------------------------*
*General Chairs:*
- *Ralf Herbrich (Hasso Plattner Institute, Germany)*
- *Rajeev Rastogi (Amazon, India)*
- *Dan Roth (University of Pennsylvania, USA)*
*TPC Chairs:*
- *Sumohana Channappayya (IIT Hyderabad, India)*
- *Srujana Merugu (Amazon, India)*
- *Manuel Roveri (Politecnico di Milano, Italy)*

For general inquiries: aimlsys.conference at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tiako at ieee.org Mon Jun 20 14:39:30 2022
From: tiako at ieee.org (Pierre F. Tiako)
Date: Mon, 20 Jun 2022 13:39:30 -0500
Subject: Connectionists: [CFP Due July 11] CAIS Automated and Intelligent Systems, Oct 3-6, OKC, USA
Message-ID: 

[Apologies for cross-posting]

--- Call for Abstracts and Papers -------------
2022 OkIP International Conference on Automated and Intelligent Systems (CAIS)
Downtown Oklahoma City, OK, USA & Online
October 3-6, 2022
https://eventutor.com/e/CAIS002
Submission Deadline: July 11, 2022

CAIS aims to bring together scholars from different disciplinary backgrounds to emphasize disseminating ongoing research and development in the field. Proposals are solicited describing original works in the fields below and related technologies. CAIS will include a peer-reviewed technical, industrial, and poster sessions program. Accepted and presented full papers from the tracks below will be published in the conference proceedings and submitted for indexation in major abstract and citation databases of peer-reviewed literature. Extended versions of the best papers will be considered for journal publication.
>> AI, Machine Learning (ML), and Applications
- General ML | Active/Supervised Learning
- Clustering/Unsupervised Learning
- Online Learning | Learning to rank
- Reinforcement Learning | Deep Learning (DL)
- Semi/Self Supervised Learning
- Time Series Analysis | Prediction/Forecasting
- DL Architectures/Generative-Models
- Deep Reinforcement Learning
- Computational Learning Theory
- Bandit/Game/Statistical-Learning Theory
- Optimization Methods and Techniques
- Convex/Non-Convex Optimization
- Matrix/Tensor Methods
- Stochastic/Online Optimizations
- Non-Smooth/Composite Optimization
- Probabilistic Inference | Graphical Models
- Bayesian/Monte-Carlo Methods
- Trustworthy Machine Learning
- ML Accountability/Causality
- ML Fairness/Privacy/Robustness
- Healthcare/DNA/Transportation
- Digital Economy | Ecommerce Security
- Sustainability | Energy | Green Technology
- Language | Image
- Recommendation Systems

>> Agent-based, Automated, and Distributed Supports
- Multi-Agent Systems | Software Agents
- Decentralized/Distributed Intelligence
- Context-Aware Computing
- Group Decision Support Systems
- Intelligent Structures/Networks
- Design/Automation Approaches
- Sensor Networks Architectures
- Complex Manufacturing Processes
- Analytical Models | Path Planning
- Multistage Assembly Line
- Automated Inspection

>> Intelligent Systems and Applications
- Medical Nanorobotics
- Sensory/Embedded Systems
- Embedded Systems | Digital Manufacturing
- Optimization/Evolutionary Algorithms
- Bioinformatics/Biotechnology Applications
- Computer-Vision Applications
- Sensor-Networks Applications
- Intelligent Design | Fuzzy Systems
- Soft/Ubiquitous Computing
- Pervasive/Wearable Computing
- Intelligent Manufacturing | Microsatellite
- Cyber-physical Systems | Kinematics

>> Knowledge-based and Control Supports
- Expert/Complex Systems
- Decision-Support Systems
- Intelligent Control/Supervision Systems
- Knowledge Engineering
- Neural Networks | Structural Optimization
- Intelligent Teleoperation
- Intelligent Shopfloor
- Collision Avoidance | Fault Diagnosis
- Object Detection and Tracking | Path Planning
- Position/Quality/Motion Control
- Predictive Control
- Preventive Maintenance | Defect Detection

>> Robotics and Vehicles
- Unmanned Vehicles/Robots
- Autonomous Vehicles/Robots
- Human-Robot Interfaces
- Human-Robot Interactions
- Intelligent Telerobotics | Service Robots
- Robotic Manipulators/Arms
- Robotic Applications
- Self-Driving Vehicles | Cloud-based Driving
- Vehicular ad hoc Networks | Traffic Detection
- Vehicle-to-Vehicle Communication
- Vehicle Platooning | Steering Systems
- Vehicle dynamics | Traffic Computing

>> Contribution Types (One-Column IEEE Format Style): OkIP Published & SCOPUS/WoS Indexed
- Full Paper: Accomplished research results (10 pages)
- Short Paper: Work in progress/fresh developments (6 pages)
- Extended Abstract/Poster/Journal First: Displayed/Oral presented (3 pages)

>> Important Dates:
- Submission Deadline: July 11, 2022
- Notification Due: August 01, 2022
- Camera-ready Due: August 22, 2022

>> Technical Program Committee
https://eventutor.com/event/19/page/56-committee

Please feel free to contact us for any inquiries at: info at okipublishing.com

--------
Pierre Tiako
General Chair

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kripa.ghosh at gmail.com Tue Jun 21 06:00:04 2022
From: kripa.ghosh at gmail.com (Kripa Ghosh)
Date: Tue, 21 Jun 2022 15:30:04 +0530
Subject: Connectionists: 2nd CFP: FIRE track on Information Retrieval from Microblogs during Disasters (IRMiDis)
Message-ID: 

*** Apologies for multiple posting ***

*Information Retrieval from Microblogs during Disasters (IRMiDis)*
https://sites.google.com/view/irmidis-fire2022/irmidis
Track in conjunction with FIRE 2022 (http://fire.irsi.res.in/fire/2022/home), December 9-13, 2022, Kolkata (Hybrid Event)

The IRMiDis track aims to develop datasets and methods for solving various practical research problems associated with a disaster or pandemic situation. The IRMiDis track has been run successfully with FIRE in the years 2017, 2018 and 2021. This year IRMiDis will consist of two important classification tasks over microblogs/tweets associated with the COVID-19 pandemic:
(1) classifying tweets according to their vaccine-related sentiment (pro-vaccine, neutral, anti-vaccine)
(2) identifying tweets that mention someone experiencing COVID-19 symptoms, which is useful for detecting upcoming surges in COVID cases

For both tasks, we will provide training data annotated by human workers, and test data for evaluating the submitted models.

*All participating teams will get to publish a working notes paper in the FIRE workshop proceedings. The two best-performing teams in each task will be awarded winner and runner-up certificates, and their names will be put up on the track website.*

Training data has already been released. Participating teams need to submit runs by August 1. More details on how to participate are on the IRMiDis site: https://sites.google.com/view/irmidis-fire2022/irmidis.

Kind Regards,
*Kripabandhu Ghosh*
Co-organizer IRMiDis

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marcin at amu.edu.pl Tue Jun 21 16:31:19 2022
From: marcin at amu.edu.pl (Marcin Paprzycki)
Date: Tue, 21 Jun 2022 22:31:19 +0200
Subject: Connectionists: =?utf-8?q?CFP=3A_Future_Platforms_for_Edge-Cloud_?= =?utf-8?q?Continuum_=E2=80=93_Theoretical_Foundations_and_Practical_Consi?= =?utf-8?q?derations=3B_July_10=2C2022_deadline?=
In-Reply-To: <06c1c6f0-443e-14d2-8e9e-4bab05229cd8@pti.org.pl>
References: <06c1c6f0-443e-14d2-8e9e-4bab05229cd8@pti.org.pl>
Message-ID: <1b1324d4-8ca2-8c1c-1880-14ccfd0e1e2d@amu.edu.pl>

Dear Colleagues,

While submission to the main event has closed, there is still a way to participate in the IEEE World Forum on the Internet of Things. Contribute to our unique Special Session and let us meet in Yokohama.

For the organizers,
Marcin Paprzycki, Ph.D., D.Sc.
Senior Member of the IEEE

************************************************************************
Call for Papers
Future Platforms for Edge-Cloud Continuum - Theoretical Foundations and Practical Considerations
Organized within the scope of the IEEE World Forum on the Internet of Things (WF-IoT 2022)

The Internet of Things (IoT) is expected to bring fundamental changes to all sectors of society and economy. However, realization of the IoT vision requires data processing (stream, static, or both) in an "optimal location" within the edge-cloud continuum. Here, it is assumed that far-edge/nano-edge devices produce data and actuate; edge/fog consist of "heterogeneous intermediate devices", where data can be processed; and cloud/HPC facilities deliver
"unlimited" processing capabilities, while all of them jointly (and supported by resources/services/data orchestration) constitute the edge-cloud continuum. In this context, future IoT platforms will have to manage processes in multi-stakeholder, multi-cloud, federated, large-scale IoT ecosystems. Here, key challenges are related to the fact that such platforms (encompassing operating systems, up to applications) will have to jointly leverage continuous progress of multiple enabling technologies, e.g.: 5G/6G networking, privacy and security, distributed computing, artificial intelligence, trust management, autonomous computing, distributed/smart applications, data management, etc. Moreover, they must facilitate intelligent (autonomous) orchestration of physical/virtual resources and tasks, by realizing them at the "optimal location" within the ecosystem (e.g., closer to where data is produced). To achieve this, resource-aware frugal AI is needed, to facilitate self-awareness and decision support across the heterogeneous ecosystem. Finally, it is also absolutely necessary that resource management consider the CO2 footprint of the ecosystem, efficiently deploy data and tasks, and also use multi-owner, heterogeneous sources of renewable energy.

In this context, contributions addressing theoretical and practical aspects of the following topics are invited (this list is, obviously, not exhaustive):
- IoT architectures for domain agnostic user-aware, self-aware, (semi-)autonomous edge-cloud continuum platforms, including proposals for novel decentralized topologies, ad-hoc resource federation, time-triggered behaviors
- Foundations for the next generation of higher-level (meta) operating systems facilitating efficient use of computing capacity across the edge-cloud continuum
- Resource-aware AI, including frugal AI, bringing intelligence to the edge-cloud continuum platforms (and ecosystems)
- Cognitive frameworks leveraging AI techniques to improve optimization of infrastructure usage and services and resources orchestration
- Efficient streaming Big Data processing within large-scale IoT ecosystems
- Interoperability solutions for multi-user edge-cloud continuum platforms, capable of coping with the systematically increasing complexity of connecting vast numbers of heterogeneous devices
- Federated data spaces approach for improved data governance, sovereignty and sharing
- Privacy, security, trust and data governance in competitive scenarios
- CO2 footprint reduction and efficient use of green energy in edge-cloud continuum ecosystems
- Practical aspects of resource orchestration within highly heterogeneous, large-scale edge-cloud continuum ecosystems
- Intent-based networking and its application to IoT
- Swarm intelligence for the IoT-edge-cloud continuum

Paper formatting and submission:
Please format your contribution(s) according to the instructions found at:
https://wfiot2022.iot.ieee.org/wp-content/uploads/sites/399/2022/03/03-26-2022-Technical-Peer-Reviewed-Paper-Instructions-and-Considerations-for-Submissions_YO.pdf
and submit your paper to Special Session Spes-01 via:
https://www.scomminc.com/pcm/wfiot/wfiot.cfm

Important dates:
- Paper submission: 10 July, 2022
- Camera-Ready Paper Submission Deadline: 31 July 2022

Special Session Organizers:
+ Rajkumar Buyya, The University of Melbourne, and Manjrasoft Pvt Ltd, Melbourne, Australia, rbuyya at gmail.com
+ Maria Ganzha, Warsaw University of Technology and Systems Research Institute, Polish Academy of Sciences, Technical Coordinator of ASSIST-IoT, M.Ganzha at mini.pw.edu.pl
+ Levent Gürgen, Kentyou, Grenoble, France, levent at kentyou.com
+ Carlos Palau, Universitat Politecnica de Valencia, Coordinator of the ASSIST-IoT project, cpalau at dcom.upv.es
+ Marcin Paprzycki, Systems Research Institute, Polish Academy of Sciences, ASSIST-IoT Leader of Scientific Dissemination Task, marcin.paprzycki at ibspan.waw.pl
+ Tarik Taleb, The University of Oulu and MOSA!C Lab, Oulu, Finland, talebtarik at gmail.com

From sinankalkan at gmail.com Wed Jun 22 02:59:25 2022
From: sinankalkan at gmail.com (Sinan KALKAN)
Date: Wed, 22 Jun 2022 09:59:25 +0300
Subject: Connectionists: =?utf-8?q?Call_for_participation_--_=E2=80=9CPerf?= =?utf-8?q?ormance_Measures_in_Visual_Detection_and_Their_Optimiza?= =?utf-8?q?tion=E2=80=9D=2C_CVPR2022_Tutorial?=
Message-ID: 

We cordially invite those interested to our CVPR2022 virtual tutorial on Performance Measures in Visual Detection and Their Optimization, to be held online on 30 June 2022.

Tutorial Website: https://sites.google.com/view/performance-measures-cvpr2022/

About the Tutorial
Many vision applications require identifying objects and object-related information in images. Such identification can be performed at different levels of detail, which are addressed by different visual detection tasks such as "object detection" for identifying labels of objects and boxes bounding them, "keypoint detection" for finding keypoints on objects, "instance segmentation" for identifying the classes of objects and localizing them with masks, and "panoptic segmentation" for both semantic segmentation of background classes and instance segmentation of objects. Accurately evaluating the performance of these methods is crucial for developing better solutions. Accordingly, in this tutorial, we aim to delve extensively into the evaluation of visual detectors. Within the scope of our tutorial, we will first cover the basics of evaluating visual detectors in order to allow someone not familiar with visual detection to grasp the basics. Then, we will introduce the Localisation Recall Precision (LRP) Error [1,2] and present thorough theoretical and empirical comparative analyses with Average Precision (AP) and Panoptic Quality (PQ) [3] on various visual detection tasks. Finally, we will discuss bridging the gap between training and evaluation by directly optimizing AP and LRP, which involves a non-differentiable ranking step that is difficult to optimize using conventional gradient-based methods.
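To see concretely why such measures resist gradient-based optimization, it helps to write one down. Below is a minimal single-class AP sketch in Python (greedy IoU matching at an assumed 0.5 threshold, uninterpolated precision-recall area; a simplification for illustration, not the tutorial's own code):

    import numpy as np

    def iou(a, b):
        # a, b: boxes as [x1, y1, x2, y2]
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def average_precision(boxes, scores, gt_boxes, thr=0.5):
        order = np.argsort(scores)[::-1]  # ranking step: piecewise constant, no useful gradient
        matched, tp, fp = set(), [], []
        for d in order:
            ious = [iou(boxes[d], g) for g in gt_boxes]
            j = int(np.argmax(ious)) if ious else -1
            if j >= 0 and ious[j] >= thr and j not in matched:
                matched.add(j); tp.append(1.0); fp.append(0.0)
            else:
                tp.append(0.0); fp.append(1.0)
        ctp, cfp = np.cumsum(tp), np.cumsum(fp)
        recall = ctp / max(len(gt_boxes), 1)
        precision = ctp / (ctp + cfp)
        # area under the (uninterpolated) precision-recall curve
        return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

    # Toy usage: two detections, one ground-truth box
    print(average_precision([[0, 0, 10, 10], [20, 20, 30, 30]],
                            np.array([0.9, 0.8]),
                            [[1, 1, 9, 9]]))

Every quantity above is obtained through sorting, hard thresholding and counting, which is exactly why approaches like the AP Loss and Rank & Sort Loss covered in Part III of the program replace the ranking step with an error-driven update rather than a true gradient.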
Program (in CST)
Date: 30 June

11:00am-11:50am -- Part I: The Basics of Evaluating Visual Detectors [50 min]:
- Motivation and Introduction to Visual Detection Tasks
- Performance Measures in Visual Detection: Average Precision, Panoptic Quality, Localization Recall Precision (LRP)
- Recent Advances: Probabilistic Detection Quality, AP-fixed, AP-pool, Boundary IoU, Optimal Correction Cost

11:50am-12:00noon -- Break

12:00noon-12:50pm -- Part II: An Analysis of Performance Measures and Localisation-Recall-Precision Error [50 min]:
- An Analysis of Performance Measures: Important features for a performance measure, evaluating AP and PQ in terms of important features
- Localisation-Recall-Precision Error: Definition, Analysis, Optimal LRP Error, s-LRP Curves, Theoretical and Empirical Comparison of LRP Error with AP and PQ

12:50pm-01:00pm -- Break

01:00pm-01:50pm -- Part III: Optimization of Performance Measures [50 min]:
- Identity Update to Optimize Ranking-based Loss Functions
- Average Precision Loss for Classification
- Average Localisation-Recall-Precision Loss for Object Detection
- Rank & Sort Loss for Object Detection and Instance Segmentation

01:50pm-02:00pm -- Q&A

Please check our webpage for the up-to-date program: https://sites.google.com/view/performance-measures-cvpr2022/

Participation Details
Participation in our tutorial will be via the CVPR2022 platform and therefore will require registration for the conference.

Organizing Committee
Emre Akbas, Sinan Kalkan, Kemal Oksuz

References
[1] K. Oksuz, B. C. Cam, S. Kalkan*, E. Akbas*, "One Metric to Measure them All: Localisation Recall Precision (LRP) for Evaluating Visual Detection Tasks", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), in press, 2022. [Paper] [Code]
[2] K. Oksuz, B. C. Cam, E. Akbas, S. Kalkan, "Localization Recall Precision (LRP): A New Performance Metric for Object Detection", European Conference on Computer Vision (ECCV), pp. 521-537, Springer, 2018. [Paper] [Code]
[3] Kirillov, A., He, K., Girshick, R., Rother, C., & Dollár, P. "Panoptic segmentation". IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9404-9413), 2019.

--
Sinan KALKAN, Assoc. Prof. Dr.
Dept. of Computer Engineering
Middle East Technical University
Ankara, TURKEY
Web: https://ceng.metu.edu.tr/~skalkan/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gary.marcus at nyu.edu Tue Jun 21 17:19:52 2022
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Tue, 21 Jun 2022 14:19:52 -0700
Subject: Connectionists: The symbolist quagmire
In-Reply-To: 
References: 
Message-ID: <64B55CD4-6E58-451C-943B-E368F7682EFF@nyu.edu>

not that i really know what consciousness is, but i doubt that it is a requirement for any of the challenges i have raised, e.g. with respect to common sense or natural language understanding. systems like AlphaFold and turn-by-turn directions presumably lack consciousness but give us perfectly reasonable answers using symbolic inputs. I don't see why more general forms of AI need to be different, though they undoubtedly will require richer representations than are currently trendy.

> On Jun 21, 2022, at 2:14 PM, Juyang Weng wrote:
>
> Dear Gary,
>
> You wrote: "My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.
If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. On or Off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols. For example the inputs and outputs (perhaps after a winner-take-all operation or somesuch) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick etc."
>
> I respectfully do not agree, since that is why "practically every computational model anywhere" cannot learn consciousness. They are basically pattern recognition machines for a specific task.
>
> I skip "data selection" in deep learning here. Deep learning not only hits a wall. All its published data appear to be invalid.
>
> Gary, this issue is probably too fundamental if you do not try to understand the conscious learning algorithm (see below), the first ever in the world, as far as I am humbly aware.
>
> Let me try in intuitive terms:
>
> (1) You have a series of ASCII symbols, e.g., ASCII-1, ASCII-2, ASCII-3, ASCII-4 ... You have 1 million such ASCII symbols. Any number, as long as it is a large number.
>
> (2) You specify the meanings of such ASCII symbols in your design documents:
> ASCII-1: forward-move-of-joystick-A,
> ASCII-2: backward-move-of-joystick-A,
> ASCII-3: left-move-of-joystick-A,
> ASCII-4: right-move-of-joystick-A
> ...
> You have at least 1 million lines.
>
> (3) Your machine does not read your design document in (2); it cannot think about your design document in (2). It only learns the mapping from sensory inputs to one of these ASCII symbols.
>
> (4) Therefore, your machine is not able to understand the consciousness that is required to judge that it is doing joystick work (e.g., driving using a joystick) well, because your knowledge hierarchy (using these 1 million symbols) is static. The machine cannot recompose new meanings from these symbols, because it does not understand any symbols at all! Why do I understand my moving forward? I do not have (2). Moving forward is my own intent, my own volition! I feel the effects of my volition and decide whether I want to repeat.
>
> (5) Without consciousness, machine learning is static. Consciousness must go beyond any static hierarchy.
> (a) My children do. They told me some views (and intents) that surprised me. I did not teach such views.
> (b) That is also why a human brain can do research. My subject research surprised my father-in-law and he does not believe I can do what I told him I can.
>
> In summary, all ASCII symbols are a dead end. They, like AI drugs, are addictive, and waste our resources in AI.
>
> As the first ever conscious learning algorithm, the DN-3 neural network must autonomously create any fluid hierarchy that any consciousness requires during human-like thinking.
> Please read the first conscious learning algorithm that will be able to do scientific research in the future:
>
> Peer reviewed version:
> @INPROCEEDINGS{WengCLAIEE22
> ,AUTHOR= "J. Weng"
> ,TITLE= "An Algorithmic Theory of Conscious Learning"
> ,BOOKTITLE= "2022 3rd Int'l Conf. on Artificial Intelligence in Electronics Engineering"
> ,ADDRESS= "Bangkok, Thailand"
> ,PAGES= "1-10"
> ,MONTH= "Jan. 11-13"
> ,YEAR= "2022"
> ,NOTE="\url{http://www.cse.msu.edu/~weng/research/ConsciousLearning-AIEE22rvsd-cite.pdf}"
> }
>
> Not yet peer reviewed:
> @misc{WengDN3-RS22
Weng" > ,TITLE= "A Developmental Network Model of Conscious Learning in Biological Brains" > ,Howpublished= "Research Square" > ,PAGES= "1-32" > ,MONTH= "June 7" > ,YEAR= "2022" > ,NOTE="doi: \url{https://doi.org/10.21203/rs.3.rs-1700782/v2}, desk-rejected by {\em Nature}, {\em Science}, {\em PNAS}, {\em Neural Networks} and {\em ArXiv}" > } > > Please kindly read them, get excited and ask questions. > > Best regards, > -John > -- > Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From doya at oist.jp Tue Jun 21 09:30:28 2022 From: doya at oist.jp (Kenji Doya) Date: Tue, 21 Jun 2022 13:30:28 +0000 Subject: Connectionists: International Symposium on AI and Brain Science 2022: Registration deadlines June 23 on-site and June 30 online In-Reply-To: <300be3d7c0e34593837a3c428632357b@OSZPR01MB8012.jpnprd01.prod.outlook.com> References: <916FC83D-F915-43D0-87CB-631685D397C2@oist.jp> <10aa717486204203b12f9e4531c2523e@TYCPR01MB8013.jpnprd01.prod.outlook.com> <300be3d7c0e34593837a3c428632357b@OSZPR01MB8012.jpnprd01.prod.outlook.com> Message-ID: <61003F6D-FF48-4D2D-81F9-3214D6832AA1@oist.jp> The registration deadline for the International Symposium on AI and Brain Science 2022 is June 23 for on-site and June 30 for online participation. http://www.brain-ai.jp/symposium2022/ The symposium is held on July 4th and 5th at Okinawa Institute of Science and Technology in a hybrid format as a satellite event for Neuro2022: https://neuro2022.jnss.org/en/ Tentative Program: time in JST=GMT+9 9:00 Registration 9:40 Opening: Kenji Doya 10:00-12:00 Session 1: AI for Brain Science Maneesh Sahani (Gatsby Unit) Angela Langdon (NIMH) Xiao-Jing Wang (NYU) Terrence Sejnowski (Salk Institute) 12:00-14:00 Lunch and Poster Session 14:00-16:00 Session 2: Biological and robotic motor control Chair: Hiroaki Gomi Tetsuya Ogata (Waseda U) Tom Macpherson (Osaka U) Jun Izawa (Tsukuba U) Rieko Osu (Waseda U) 16:30-18:30 Session 3: Brain-inspired AI Jun Tani (OIST) Masashi Sugiyama (RIKEN AIP) Yutaka Matsuo (U Tokyo) Aida Nematzadeh (DeepMind) 10:00-12:00 Session 4: AI, Brain, and Society Ryota Kanai (ARAYA) Yoshua Bengio (U Montreal) Ai Koizumi (Sony CSL) Patricia Churchland (Salk Institute) 12:00-13:00 Lunch and Poster Session 13:00-15:00 Session 5: Natural and artificial consciousness Chair: Naotsugu Tsuchiya Hideaki Shimazaki (Hokkaido U) Keisuke Suzuki (Hokkaido U) Makiko Yamada (QST) Tadahiro Taniguchi (Ritsumeikan U) 15:30-17:30 Session 6: Unifying models of cognition Chair: Tadahiro Taniguchi Misako Komatsu (Tokyo Tech) Karl Friston (UCL) Yuichi Yamashita (NCNP) Matthew Botvinick (DeepMind) 17:30-18:00 General Discussion - Closing Co-organizers: Kenji Doya (Okinawa Institute of Science and Technology) Karl Friston (University College London) Hiroaki Gomi (NTT Communication Science Laboratories) Takao Hensch (The University of Tokyo International Research Center for Neurointelligence) Maneesh Sahani (Gatsby Computational Neuroscience Unit) Masashi Sugiyama (RIKEN Center for Advanced Intelligence Project) Tadahiro Taniguchi (Ritsumeikan University) Naotsugu Tsuchiya (Monash University) Secretariat: Neural Computation Unit, OIST: ncus at oist.jp ---- Kenji Doya > Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University 1919-1 Tancha, Onna, Okinawa 904-0495, Japan Phone: +81-98-966-8594; Fax: +81-98-966-2891 https://groups.oist.jp/ncu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ioannakoroni at csd.auth.gr Wed Jun 22 04:19:22 2022
From: ioannakoroni at csd.auth.gr (Ioanna Koroni)
Date: Wed, 22 Jun 2022 11:19:22 +0300
Subject: Connectionists: Early registration: CVML Short Course on Deep Learning and Computer Vision, 22-23th August 2022
References: <0c8b01d870cb$caf893b0$60e9bb10$@csd.auth.gr> <001401d870d2$f0bc3960$d234ac20$@csd.auth.gr>
Message-ID: <032301d88610$c80e03b0$582a0b10$@csd.auth.gr>

Dear Machine Learning, Computer Vision and Autonomous Systems engineers, scientists and enthusiasts,

you are welcomed to register in the CVML Short e-course on Deep Learning and Computer Vision, 22-23rd August 2022:
http://icarus.csd.auth.gr/cvml-short-course-on-deep-learning-and-computer-vision-2022/

Its focus will be on applications in autonomous systems (cars, drones, marine vessels). It will take place as a two-day e-course (due to COVID-19 circumstances), hosted by the Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece. It will contain a series of live lectures delivered through a tele-education platform (Zoom). They will be complemented with on-line video recorded lectures and lecture pdfs, to facilitate international participants having time difference issues and to enable you to study at your own pace. You can also self-assess your knowledge by filling appropriate questionnaires (one per lecture). You will be provided programming exercises to improve your programming skills. You will also have access to tutorial exercises to deepen your theoretical understanding of selected CVML topics. This is the 6th edition of the course, which is part of the very successful CVML short course series that took place in the last four years.

Course description 'Deep Learning and Computer Vision'

The short e-course consists of 14 1-hour live lectures organized in two Parts (1 Part per day):
Part A lectures (7 hours) provide a solid background on the foundational Computer Vision topics and an in-depth presentation of Autonomous Systems vision and the relevant architectures (Camera geometry, Stereo and Multiview imaging, Introduction to multiple drone systems, Simultaneous Localization and Mapping, Drone mission planning and control, Introduction to autonomous marine vehicles).
Part B lectures (7 hours) provide an in-depth presentation of various Deep Learning topics (Multilayer Perceptron, Backpropagation, Deep Neural Networks, Convolutional NNs, Deep Object Detection, 2D Visual Object Tracking, Neural Slam) encountered in autonomous systems perception, ranging from vehicle localization and mapping, to target detection and tracking. Parts A, B also contain application-oriented lectures on autonomous systems embedded CPU/GPU computing and related SW tools that can be used in a wide range of applications, e.g., for land/marine surveillance, search&rescue missions, infrastructure/building inspection and modeling, cinematography.

Course lectures
Part A (7 hours)
1. Introduction to autonomous systems
2. Camera geometry
3. Stereo and Multiview imaging
4. Introduction to multiple drone systems
5. Simultaneous Localization and Mapping
6. Drone mission planning and control
7. Introduction to autonomous marine vehicles

Part B (7 hours)
1. Multilayer perceptron. Backpropagation
2. Deep neural networks. Convolutional NNs
3. Deep object detection
4. 2D Visual Object Tracking
5. Neural Slam
6. CVML Software development tools
7. Applications in car vision

Though independent, the attendees of this short e-course will greatly benefit by attending the CVML Programming Short Course and Workshop on Deep Learning and Computer Vision 2022, 24-26th August 2022:
http://icarus.csd.auth.gr/cvml-programming-short-course-and-workshop-on-deep-learning-and-computer-vision-2022/

You can use the following link for course registration:
http://icarus.csd.auth.gr/cvml-short-course-on-deep-learning-and-computer-vision-2022/
Lecture topics, sample lecture ppts and videos, self-assessment questionnaires, programming exercises and tutorial exercises can be found therein.

For questions, please contact: Ioanna Koroni

The short course is organized by Prof. I. Pitas, IEEE and EURASIP fellow and IEEE distinguished speaker. He is the coordinator of the EC funded International AI Doctoral Academy (AIDA), that is co-sponsored by all 5 European AI R&D flagship projects (H2020 ICT48). He was initiator and first Chair of the IEEE SPS Autonomous Systems Initiative. He is Director of the Artificial Intelligence and Information analysis Lab (AIIA Lab), Aristotle University of Thessaloniki, Greece. He was Coordinator of the European Horizon2020 R&D project Multidrone. He is ranked 249-top Computer Science and Electronics scientist internationally by Guide2research (2018). He has 33800+ citations to his work and h-index 86+. AUTH is ranked 153/182 internationally in Computer Science/Engineering, respectively, in the USNews ranking.

Relevant links:
1) Prof. I. Pitas: https://scholar.google.gr/citations?user=lWmGADwAAAAJ&hl=el
2) Horizon2020 EU funded R&D project Aerial-Core: https://aerial-core.eu/
3) Horizon2020 EU funded R&D project Multidrone: https://multidrone.eu/
4) International AI Doctoral Academy (AIDA): http://www.i-aida.org/
5) Horizon2020 EU funded R&D project AI4Media: https://ai4media.eu/
6) AIIA Lab: https://aiia.csd.auth.gr/

Sincerely yours
Prof. I. Pitas
Director of the Artificial Intelligence and Information analysis Lab (AIIA Lab)
Chair of the International AI Doctoral Academy (AIDA)
Aristotle University of Thessaloniki, Greece

Post scriptum: To stay current on CVML matters, you may want to register in the CVML email list, following instructions in: https://lists.auth.gr/sympa/info/cvml

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From blink.yu at mdpi.com Wed Jun 22 05:05:18 2022
From: blink.yu at mdpi.com (Mr. Blink Yu)
Date: Wed, 22 Jun 2022 17:05:18 +0800
Subject: Connectionists: [Computers] (ISSN 2073-431X) Special Issue "Advances of Machine and Deep Learning in the Health Domain"--Call for Paper
Message-ID: <2a451b01-a72b-55a0-d8ed-3d308e05da93@mdpi.com>

[Apologies if you receive multiple copies of this message]

====================================
Special Issue "Advances of Machine and Deep Learning in the Health Domain"
Deadline for manuscript submissions: 30 June 2022.

Guest Editors:
Dr. Antonio Celesti, MIFT Department, University of Messina, Viale F. Stagno d'Alcontres, 31, 98166 Messina, Italy
Dr. Ivanoe De Falco, Institute of High Performance Computing and Networking of the National Research Council (ICAR-CNR), 80131 Naples, Italy
Dr. Antonino Galletta, MIFT Department, University of Messina, 98166 Messina, Italy
Dr. Giovanna Sannino, Institute of High Performance Computing and Networking -
National Research Council of Italy (ICAR-CNR), 80131 Naples, Italy

https://www.mdpi.com/journal/computers/special_issues/AI_health_2022
===========================================

The 1st edition of the IEEE International Conference on ICT Solutions for eHealth (ICTS4eHealth) will be held on 5-8 September 2021 in Athens (Greece) in conjunction with the 26th IEEE Symposium on Computers and Communications (ISCC). For more information about the conference, please use this link: https://www.icts4ehealth.icar.cnr.it/

Machine and Deep Learning deal with data, and one of their goals is to extract information and related knowledge that is hidden in them in order to make detections and/or predictions and, subsequently, take decisions. With the terms "Machine and Deep Learning", we cover a wide range of theories, methods, algorithms, and architectures that are used to this end. This Special Issue will cover promising developments in the related areas of machine and deep learning applied to the health domain and offer possible paths for the future.

The authors of selected papers that are presented at the International IEEE ICTS4eHealth Conference 2021 are invited to submit their extended versions to this Special Issue of the journal Computers after the conference. Submitted papers should be extended to the size of regular research or review articles, with at least 50% extension of new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together on this Special Issue's website. Accepted extended papers will be free of charge. There are no page limitations for this journal.

We are also inviting original research work covering novel theories, innovative methods, and meaningful applications that can potentially lead to significant advances in artificial intelligence in the health domain.

The main topics include but are not limited to: Knowledge management of health data; Data mining and knowledge discovery in healthcare; Machine and deep learning approaches for health data; Explainable AI models for health, biology, and medicine; Decision support systems for healthcare and wellbeing; AI for precision medicine; Optimization for healthcare problems; Regression and forecasting for medical and/or biomedical signals; Healthcare information systems; Wellness information systems; Medical signal and image processing and techniques; Medical expert systems; Diagnoses and therapy support systems; Biomedical applications; Applications of AI in healthcare and wellbeing systems; machine learning-based medical systems; medical data and knowledge bases; neural networks in medicine; ambient intelligence and pervasive computing in medicine and healthcare; AI in genomics; AI for healthcare social networks.

Dr. Antonio Celesti
Dr. Ivanoe De Falco
Dr. Antonino Galletta
Dr. Giovanna Sannino
Guest Editors

--
Mr. Blink Yu
Managing Editor
E-Mail: blink.yu at mdpi.com
Skype: live:c91693ac8277e1f0

--
MDPI Wuhan Office
No.6 Jingan Road, 430064 Wuhan, China
http://www.mdpi.com

--
Disclaimer: MDPI recognizes the importance of data privacy and protection. We treat personal data in line with the General Data Protection Regulation (GDPR) and with what the community expects of us. The information contained in this message is confidential and intended solely for the use of the individual or entity to whom it is addressed. If you have received this message in error, please notify me and delete this message from your system.
You may not copy this message in its entirety or in part, or disclose its contents to anyone.

From bucchiarone at fbk.eu Wed Jun 22 13:38:10 2022
From: bucchiarone at fbk.eu (Antonio Bucchiarone)
Date: Wed, 22 Jun 2022 19:38:10 +0200
Subject: Connectionists: MODELS 2022 - Doctoral Symposium
Message-ID: 

The goal of the MODELS 2022 Doctoral Symposium is to provide an international forum for doctoral students to interact with their fellow students and faculty mentors working in the area of model-based engineering. The symposium supports students by providing independent and constructive feedback about their already completed and, more importantly, planned research work.

The Symposium will be attended by prominent experts in the field of model-based engineering, who will actively participate in critical and constructive discussions. The Symposium will have the format of a one-day workshop, with presentations by the doctoral students who have their papers accepted in a peer-review process, feedback from the mentors, and plenty of time for discussion. The presentations will be open to mentors, students, and other conference participants; only supervisors of the presenters are excluded from the sessions in which their students deliver their presentations.

The MODELS 2022 Doctoral Symposium is planned to be hybrid, with the goal of being more inclusive and increasing engagement. For this year's edition, the conference has the special theme "Modeling for social good" #MDE4SG. Thus, we especially encourage contributions where model-based engineering intersects with research and applications on, not exclusively, socio-technical systems, tools with social impact, integrating human values, data science, artificial intelligence, digital twins, Industry/Society 5.0, and intelligent systems in general. Papers are eligible for the Best Theme Paper Award.

https://conf.researchr.org/track/models-2022/models-2022-doctoral-symposium

*Important Dates AoE (UTC-12h)*
Fri 15 Jul 2022: Abstract Submission
Wed 20 Jul 2022: Paper Submission
Fri 19 Aug 2022: Notification
Fri 26 Aug 2022: Camera-ready

Chairs
- Mehrdad Sabetzadeh (University of Ottawa, Canada)
- Elisa Yumi Nakagawa (University of São Paulo, Brazil)

--
The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. If you received this in error, please contact the sender and delete the material.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bucchiarone at fbk.eu Wed Jun 22 13:37:17 2022
From: bucchiarone at fbk.eu (Antonio Bucchiarone)
Date: Wed, 22 Jun 2022 19:37:17 +0200
Subject: Connectionists: =?utf-8?q?MODELS_2022_=E2=80=93_Combined_Call_for?= =?utf-8?q?_Workshop_Papers?=
Message-ID: 

MODELS 2022 - ACM / IEEE 25th International Conference on Model Driven Engineering Languages and Systems
Workshops -
Joint Call for Papers
October 23-28, Montréal, Canada
http://www.modelsconference.org/
https://twitter.com/modelsconf
https://www.facebook.com/ModelsConference/

The ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, and is organized with support of ACM SIGSOFT and IEEE TCSE.

Following the tradition of the previous editions, MODELS'22 will host a number of workshops during the three days before the main conference. Workshops provide a collaborative forum for a group of typically 15-30 participants to exchange recent and/or preliminary results, to conduct intensive discussions on a particular topic, or to coordinate efforts between representatives of a technical community.

MODELS'22 will feature 12 workshops, sharing the same submission date and proceedings. Joint deadline for paper submission: July 20, 2022.

[W1] 4th Workshop on Artificial Intelligence and Model-driven Engineering (MDEIntelligence)
Organizers: Dominik Bork, Lola Burgueño, Phuong Nguyen and Steffen Zschaler
https://mde-intelligence.github.io/

[W2] 9th International Workshop on Multi-Level Modelling (MULTI 2022)
Organizers: Manfred Jeusfeld, Juan De Lara and Gergely Mezei
https://jku-win-dke.github.io/MULTI2022/

[W3] 2nd International Workshop on Model-Driven Engineering for Digital Twins (ModDiT'22)
Organizers: Francis Bordeleau, Loek Cleophas, Benoit Combemale, Romina Eramo, Mark van den Brand and Andreas Wortmann
https://gemoc.org/events/moddit2022.html

[W4] International workshop Models and Evolution
Organizers: Ludovico Iovino, Alfonso Pierantonio and Dalila Tamzalit
http://www.models-and-evolution.com/

[W5] Modeling in Automotive System and Software Engineering (MASE'22)
Organizers: Alessio Bucaioni, Jo Atlee, Juergen Dingel and Sahar Kokaly
https://www.es.mdh.se/mase2022/

[W6] Model Driven Engineering, Verification and Validation (MoDeVVa'22)
Organizers: Saad Bin Abid, Iulian Ober, Akram Idani and Pierre de Saqui-Sannes
https://sites.google.com/site/modevva

[W7] 5th International Workshop on Multi-Paradigm Modeling for Cyber-Physical Systems (MPM4CPS)
Organizers: Moussa Amrani, Dominique Blouin, Moharram Challenger, Robert Heinrich, Joeri Exelmans and Randy Paredis
http://msdl.uantwerpen.be/conferences/MPM4CPS/2022

[W8] Modeling Language Engineering (MLE)
Organizers: Erwan Bousse, Faezeh Khorram and Juha-Pekka Tolvanen
https://mleworkshop.github.io/editions/mle2022/

[W9] DevOps at MODELS
Organizers: Francis Bordeleau, Juergen Dingel, Nan Messe and Sébastien Mosser
https://ace-design.github.io/devops-at-models

[W10] Low-Code Development Platforms
Organizers: Davide Di Ruscio, Dimitris Kolovos, Juan De Lara and Massimo Tisi
https://lowcode-workshop.github.io/

[W11] International Workshop on OCL and Textual Modeling (OCL'22)
Organizers: Daniel Calegari, Robert Clarisó and Edward Willink
https://oclworkshop.github.io/

[W12] Sixth International Workshop on Human Factors in Modeling / Modeling of Human Factors (HuFaMo'22)
Organizers: Silvia Abrahao, Timothy C. Lethbridge, Emmanuel Renaux and Bran Selic
https://research.webs.upv.es/hufamo22/

For more details, visit the workshops page: https://conf.researchr.org/track/models-2022/models-2022-workshops#Accepted-Workshops.

Since 1998, MODELS has covered all aspects of modeling, from languages and methods, to tools and applications. Attendees of MODELS come from diverse backgrounds, including researchers, academics, engineers and industrial professionals.
MODELS 2022 is a forum for participants to exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition will provide an opportunity for the modeling community to further advance the foundations of modeling, and come up with innovative applications of modeling in emerging areas of cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

For this edition, the conference has the special theme "Modeling for social good" #MDE4SG. We encourage technical papers and events themed around: socio-technical systems, tools with social impact, integrating human values, data science, and intelligent systems.

We invite you to join us at MODELS 2022, from 23-28 October 2022, and to help shape the modeling methods and technologies of the future!

--
The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. If you received this in error, please contact the sender and delete the material.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cgf at isep.ipp.pt Wed Jun 22 14:16:44 2022
From: cgf at isep.ipp.pt (Carlos)
Date: Wed, 22 Jun 2022 19:16:44 +0100
Subject: Connectionists: CFP: SoGood2022 - extended deadline: July 1, 2022
Message-ID: <7d44f3d4-4c2e-97dd-6eb8-a2541bc13cf4@isep.ipp.pt>

Call for Papers
SoGood 2022 - 7th Workshop on Data Science for Social Good
Affiliated with ECML-PKDD 2022, 19-23 September, Grenoble, France, https://2022.ecmlpkdd.org/
Workshop site: https://sites.google.com/view/ecmlpkddsogood2022/

The possibilities of Data Science for contributing to social, common, or public good are often not sufficiently perceived by the public at large. Data Science applications are already helping in serving people at the bottom of the economic pyramid, aiding people with special needs, helping international cooperation, and dealing with environmental problems, disasters, and climate change. In regular conferences and journals, papers on these topics are often scattered among sessions with names that hide their common nature (such as "Social networks", "Predictive models" or the catch-all term "Applications"). Additionally, such forums tend to have a strong bias for papers that are novel in the strictly technical sense (new algorithms, new kinds of data analysis, new technologies) rather than novel in terms of the social impact of the application.

This workshop aims to attract papers presenting applications of Data Science for Social Good (which may, or may not, require new methods), or applications that take into account social aspects of Data Science methods and techniques. There are numerous application domains; a non-exclusive list includes:
- Government transparency and IT against corruption
- Public safety and disaster relief
- Access to food, water, sanitation and utilities
- Efficiency and sustainability
- Climate change
- Data journalism
- Social and personal development
- Economic growth and improved infrastructure
- Transportation
- Energy
- Smart city services
- Education
- Social services, unemployment and homelessness
- Healthcare and well-being
- Support for people living with disabilities
- Responsible consumption and production
- Gender equality, discrimination against minorities
- Ethical issues, fairness, and accountability
- Trustability and interpretability
- Topics aligned with the UN development goals: http://www.un.org/sustainabledevelopment/sustainable-development-goals/

The major selection criteria will be the quality of the work, the novelty of the application, and its social impact. We are also interested in applications that have built a successful business model and are able to sustain themselves economically. Most Social Good applications have been carried out by non-profit and charity organizations, conveying the idea that Social Good is a luxury that only societies with a surplus can afford. We would like to hear from successful projects, which may not be strictly "non-profit" but have Social Good as their main focus.

There will be an award for the best paper.

Paper submission: Authors should submit a PDF version in Springer LNCS style using the workshop EasyChair site: https://easychair.org/my/conference?conf=sogood2022. The maximum length of papers is 16 pages, including references, consistent with the ECML PKDD conference submissions. Submitting a paper to the workshop means that if the paper is accepted, at least one author will attend the workshop and present the paper. Papers not presented at the workshop will not be included in the proceedings. We will follow ECML PKDD's policy for attendance.

Paper publication: Accepted papers will be published by Springer as joint proceedings of several ECML PKDD workshops.

Workshop format:
- Half-day workshop
- 1-2 keynote talks, speakers to be announced
- Oral presentation of accepted papers

Important Dates:
- Workshop paper submission deadline: July 1, 2022 (NEW DATE)
- Workshop paper acceptance notification: July 13, 2022
- Workshop paper camera-ready deadline: July 27, 2022
- Workshop: September 23, 09h-12.30, 2022 (TBC)

Program Committee members (more to be added):
- Thiago Andrade, INESC TEC, Portugal
- Andre de Carvalho, University of São Paulo, Brazil
- Carlos Ferreira, INESC TEC and IPP, Portugal
- Elaine Faria, University of Uberlandia, Brazil
- César Ferri, Technical University of Valencia, Spain
- Konstantin Kutzkov, Amalfi Analytics, Spain
- Ana Lorena, Technological Institute of Aeronautics, Brazil
- Rita Nogueira, INESC TEC, Porto, Portugal
- Maria Pedroto, INESC TEC, Porto, Portugal
- Sonia Teixeira, INESC TEC, Porto, Portugal
- Emma Tonkin, University of Bristol, UK
- Alicia Troncoso, University Pablo de Olavide, Spain
- Kristina Yordanova, University of Rostock, Germany
- Martí Zamora, UPC BarcelonaTech, Spain

Organizers:
- Ricard Gavaldà (UPC BarcelonaTech, Spain), gavalda at cs.upc.edu
- Irena Koprinska (University of Sydney, Australia), irena.koprinska at sydney.edu.au
- João Gama (University of Porto, Portugal), jgama at fep.up.pt
- Rita Ribeiro (University of Porto, Portugal), rpribeiro at fc.up.pt

Carlos Ferreira
ISEP | Instituto Superior de Engenharia do Porto
Rua Dr. António Bernardino de Almeida, 431
4249-015 Porto - PORTUGAL
tel. +351 228 340 500 | fax +351 228 321 159
mail at isep.ipp.pt | www.isep.ipp.pt

From michael.felsberg at liu.se Wed Jun 22 15:35:49 2022
From: michael.felsberg at liu.se (Michael Felsberg)
Date: Wed, 22 Jun 2022 19:35:49 +0000
Subject: Connectionists: Assistant Professor (tenure track) in Machine Learning with specialization in Remote Sensing
Message-ID: 

Linköping University (Sweden) is looking for good candidates for positions as Assistant Professor (tenure track) in Machine Learning with specialization in Remote Sensing:

https://liu.se/en/work-at-liu/vacancies?rmpage=job&rmjob=19567&rmlang=UK

Note also that Swedish universities let their employees keep their IPR, which is perfect for your own entrepreneurship!

Best regards
Michael Felsberg

--
Professor Michael Felsberg      Tel: +46 13 282460
Computer Vision Laboratory      Mobile: +46 702 202460
Linköping University            https://twitter.com/CvlIsy
SE-581 83 Linköping, Sweden     https://liu.se/en/employee/micfe03

From zelie.tournoud at cnrs.fr Wed Jun 22 11:12:06 2022
From: zelie.tournoud at cnrs.fr (=?UTF-8?Q?Z=c3=a9lie_Tournoud?=)
Date: Wed, 22 Jun 2022 17:12:06 +0200
Subject: Connectionists: EITN School in Computational Neuroscience (Fall 2022, Paris): application extended to 30 June
Message-ID: <95724db6-1812-b69d-ccca-45ac70253f53@cnrs.fr>

Dear everyone,

I am pleased to share that the application period for the EITN School in Computational Neuroscience has been extended by one week, and will now run until 30 June 2022. This training is taking place from 21 to 30 September 2022 in Paris, France. Applications (CV + motivation letter) are to be sent to eitn at neuro-psi.fr. More information below.

Thank you and have an excellent day,

*Zélie Tournoud*
EITN Communication manager
*European Institute for Theoretical Neuroscience (EITN)*
UMR 9197 - *NeuroPSI* (Institut des Neurosciences Paris-Saclay)
CNRS - Université Paris-Saclay
Centre CEA Paris-Saclay
Bâtiment 151
91400 Saclay
https://www.eitn.org

__________

*EITN School in Computational Neuroscience, 2022 session*

The EITN School in Computational Neuroscience is an intensive computational neuroscience training aimed at neuroscience students and early-career researchers. It is supported by the Human Brain Project and is an opportunity to learn from researchers from all over Europe.

The 2022 edition will take place from *21 to 30 September 2022* in Paris, France. Attendance is selective as seats are limited. *Applications are open until 30 June 2022.*

*Information: https://eitnschool2022.sciencesconf.org*

Organizers: Sacha Van Albada (Forschungszentrum Jülich), Albert Gidon (Humboldt-Universität zu Berlin), Hermann Cuntz (Frankfurt Institute for Advanced Studies, Ernst Strüngmann Institute), Alain Destexhe (CNRS), Matteo di Volo (Cergy Université), Spase Petkoski (Aix-Marseille Université), Gorka Zamora-Lopez (Universitat Pompeu Fabra).

Faculty: Nicolas Brunel, Hermann Cuntz, Gustavo Deco, Alain Destexhe, Matteo di Volo, Albert Gidon, Jennifer Goldman, Moritz Helias, Viktor Jirsa, Maurizio Mattia, Spase Petkoski, Sacha Van Albada, Gorka Zamora-Lopez, Damien Depannemaecker, Domenico Guarino, Jorge Mejias, Kirsten Fischer, Michael Dick, Renan Shimoura, Andrés Ecker, Athanasia Paputzi, Marja-Leena Linne, Gaute Einevoll
To be completed

Who is this training for?
* Neuroscience (and related fields) students
* Post-doctoral/early-career researchers

Why join?
* 10 days of intensive training in computational neuroscience provided by researchers from all over Europe
* Small-scale event: get to connect!
* A complete program:
  o Cellular models, and models of brain signals
  o Circuit models and networks
  o Mean-field models
  o Whole-brain models
* Hands-on learning schedule based on:
  o Morning classes and tutorials
  o Group projects in the afternoon
  o Free time in the evenings to visit Paris

How to apply? This training has a limited capacity of 20 students, so a selection will be performed by a scientific organizing committee. Applications are open until 30 June 2022. Send your application, including a resume (CV) and a cover letter (explaining your motivation to join), by email to eitn at neuro-psi.fr. You will receive a confirmation that your application has been received within 5 working days, or on the day the application period closes (whichever comes soonest). If not, please assume a technical issue occurred and submit your application again. Contact eitn at neuro-psi.fr for any question or assistance.

Selection criteria include:
* Working or studying in neuroscience or a related field
* Experience in Python programming would help
* Priority is given to PhD students as the primary target of this training, although Master students and post-docs are welcome to apply

Practical information:
* If selected, the registration fee to attend the training is 400 €.
* If selected, you will need to bring your own laptop with a few pre-installed programs.
* The tuition fee includes lunch, coffee, access to conference facilities and internet connection during the course days (Sunday not included).

Location: Institut Supérieur Clorivière, 119 Bd Diderot, 75012 Paris.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From suashdeb at gmail.com Wed Jun 22 22:40:57 2022 From: suashdeb at gmail.com (Suash Deb) Date: Thu, 23 Jun 2022 08:10:57 +0530 Subject: Connectionists: 2022 9th ISCMI (Toronto) as a Virtual one Message-ID:

Dear colleagues, This is to share with you that the conference core committee for ISCMI 2022, after assessing the present surge in corona cases and the outbreak of monkeypox, has decided to organize the conference as a fully virtual event. The formal notification has been uploaded to the conference website http://iscmi.us Hope you understand and will submit your manuscripts for possible presentation soon. Thanks and best regards, Suash Deb General Chair, ISCMI22
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bernstein.communication at fz-juelich.de Thu Jun 23 08:01:16 2022 From: bernstein.communication at fz-juelich.de (Bernstein Communication) Date: Thu, 23 Jun 2022 14:01:16 +0200 Subject: Connectionists: Bernstein Conference: Early Bird registration open Message-ID: <537412ee-83f6-0d38-c174-ed701b10f586@fz-juelich.de>

*** Apologies for cross-posting *** Dear colleagues, the registration for the Bernstein Conference 2022 (Sept 13-16, 2022) is now open. Register before July 24, 2022 to benefit from the Early Bird rate. Members of the Bernstein Network receive an additional discount.
Register here: https://bit.ly/BC22_reg

To foster the delightfully high participation of young scientists, we have travel grants available for students, doctoral candidates and postdoc members of the Bernstein Network. Learn more here: https://bit.ly/BC_TravelGrants

--

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. It has established itself as one of the most renowned conferences worldwide in this field, attracting students, postdocs and PIs from around the world to meet and discuss new scientific discoveries. In 2022, the Bernstein Conference will take place as an in-person meeting again in Berlin. Talks of the Main Conference are going to be livestreamed, given speakers' consent. Find more information at https://bernstein-network.de/en/bernstein-conference/

____ IMPORTANT DATES
* Bernstein Conference: September 13 - 16, 2022
* Deadline for abstract submission: July 18, 2022
* Early Bird registration: July 24, 2022
* Late registration: August 28, 2022
* Onsite registration possible

____ ABSTRACTS We invite the computational neuroscience community to submit their abstracts: submitted abstracts can be considered either as contributed talks or as posters. All accepted abstracts will be published online and will be citable via Digital Object Identifiers (DOI). Further information can be found here: https://bit.ly/BC_submission

____ INVITED SPEAKERS
Keynote: Sonja Hofer (University College London, UK)
Invited Talks:
Bing Brunton (University of Washington, USA)
Christine Constantinople (New York University, USA)
Carina Curto (Pennsylvania State University, USA)
Liset M de la Prida (Instituto Cajal, Spain)
Juan Alvaro Gallego (Imperial College London, UK)
Mehrdad Jazayeri (Massachusetts Institute of Technology, USA)
Gaby Maimon (The Rockefeller University, New York, USA)
Andrew Saxe (University College London, UK)
Henning Sprekeler (Technische Universität Berlin, Germany)
Carsen Stringer (Janelia Research Campus, USA)

____ CONFERENCE COMMITTEE
Raoul-Martin Memmesheimer (Conference Chair)
Christian Machens (Program Chair)
Tatjana Tchumatchenko (Program Vice Chair)
Moritz Helias (Workshop Chair)
Anna Levina (Workshop Vice Chair)
& Megan Carey, Brent Doiron, Tatiana Engel, Ann Hermundstad, Christian Leibold, Timothy O'Leary, Srdjan Ostojic, Cristina Savin, Mark van Rossum, Friedemann Zenke.

____ For any further questions, please contact: bernstein.conference at fz-juelich.de

------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH, 52425 Juelich. Registered office: Juelich. Registered in the commercial register of the Dueren local court, no. HR B 3498. Chairman of the Supervisory Board: MinDir Volker Rieke. Executive Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman), Karsten Beneke (Deputy Chairman), Prof. Dr. Astrid Lambrecht, Prof. Dr. Frauke Melchior.
------------------------------------------------------------------------------------------------
Curious visitors are warmly welcome on Sunday, 21 August 2022, from 10:00 to 17:00. More at: https://www.tagderneugier.de
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From tarek.besold at googlemail.com Thu Jun 23 12:20:25 2022 From: tarek.besold at googlemail.com (Tarek R. Besold) Date: Thu, 23 Jun 2022 18:20:25 +0200 Subject: Connectionists: New paper: "Lessons from infant learning for unsupervised machine learning" (Nature Machine Intelligence) Message-ID:

Together with Lorijn Zaadnoordijk (@LorijnSZ) and Rhodri Cusack (@RhodriCusack) from Trinity College Dublin we published a Perspective in Nature Machine Intelligence on "Lessons from infant learning for unsupervised machine learning". Here is the link to the (read-only) online version of the text: https://rdcu.be/cQbm1 ...and here is a (starting) summary and discussion thread on Twitter: https://twitter.com/LorijnSZ/status/1539651017662287873

ABSTRACT: The desire to reduce the dependence on curated, labeled datasets and to leverage the vast quantities of unlabeled data has triggered renewed interest in unsupervised (or self-supervised) learning algorithms. Despite improved performance due to approaches such as the identification of disentangled latent representations, contrastive learning and clustering optimizations, unsupervised machine learning still falls short of its hypothesized potential as a breakthrough paradigm enabling generally intelligent systems. Inspiration from cognitive (neuro)science has been based mostly on adult learners with access to labels and a vast amount of prior knowledge. To push unsupervised machine learning forward, we argue that developmental science of infant cognition might hold the key to unlocking the next generation of unsupervised learning approaches. We identify three crucial factors enabling infants' quality and speed of learning: (1) babies' information processing is guided and constrained; (2) babies are learning from diverse, multimodal inputs; and (3) babies' input is shaped by development and active learning. We assess the extent to which these insights from infant learning have already been exploited in machine learning, examine how closely these implementations resemble the core insights, and propose how further adoption of these factors can give rise to previously unseen performance levels in unsupervised learning.

-- Dr. Tarek R. Besold http://www.tarekbesold.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sanjay.ankur at gmail.com Thu Jun 23 14:35:50 2022 From: sanjay.ankur at gmail.com (Ankur Sinha) Date: Fri, 24 Jun 2022 00:05:50 +0530 Subject: Connectionists: Reminder: Free online CNS*2022 satellite tutorials: June 27--July 1 Message-ID: <20220623183550.px7og3atmsixt4vw@raam>

Dear all, Apologies for the cross-posts. A final reminder of next week's CNS*2022 free online satellite tutorials (June 27--July 1): https://ocns.github.io/SoftwareWG/pages/software-wg-satellite-tutorials-at-cns-2022.html Free registration remains open at: https://framaforms.org/incfocns-software-wg-cns2022-satellite-tutorials-registration-1654593600 To prevent spam and Zoom crashing, links to the Zoom meetings for the tutorial sessions will be limited to registrants only. Therefore, while registration is free, it is required.

The satellite tutorials feature sessions on:
- Arbor
- Brian2
- Introduction to containers
- EBRAINS
- GeNN
- Keras/TensorFlow
- LFPy
- MOOSE
- Neo + Elephant
- NEST
- NetPyNE
- NeuroLib
- NeuroML
- NEURON
- OSBv2
- RateML

Please spread the word, and we look forward to seeing you there!
-- Thanks, Regards, Ankur Sinha (He / Him / His) | https://ankursinha.in Research Fellow at the Silver Lab, University College London | http://silverlab.org/ Free/Open source community volunteer at the NeuroFedora project | https://neuro.fedoraproject.org Time zone: Europe/London

From ludovico.montalcini at gmail.com Fri Jun 24 03:19:07 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Fri, 24 Jun 2022 09:19:07 +0200 Subject: Connectionists: ACDL 2022, 5th Online & Onsite Advanced Course on Data Science & Machine Learning | August 22-26, 2022 | Certosa di Pontignano, Italy - Early Registration: by July 31 In-Reply-To: <537412ee-83f6-0d38-c174-ed701b10f586@fz-juelich.de> References: <537412ee-83f6-0d38-c174-ed701b10f586@fz-juelich.de> Message-ID:

* Apologies for multiple copies. Please forward to anybody who might be interested *

#ACDL2022, An Interdisciplinary Course: #BigData, #DeepLearning & #ArtificialIntelligence without Borders (Online attendance available)
Certosa di Pontignano, Castelnuovo Berardenga (Siena) - #Tuscany, Italy
August 22-26
https://acdl2022.icas.cc acdl at icas.cc

ACDL 2022 (as ACDL 2021 and ACDL 2020): an #OnlineAndOnsiteCourse https://acdl2022.icas.cc/acdl-2022-as-acdl-2021-and-acdl-2020-an-online-onsite-course/

EARLY REGISTRATION: by July 31 https://acdl2022.icas.cc/registration/

DEADLINES:
Early Registration: by July 31 (AoE)
Oral/Poster Presentation Submission Deadline: July 31 (AoE)
Late Registration: from August 1st

LECTURERS: Each Lecturer will hold three/four lessons on a specific topic. https://acdl2022.icas.cc/lecturers/
Žiga Avsec, DeepMind, London, UK
Roman Belavkin, Middlesex University London, UK
Alfredo Canziani, New York University, USA
Alex Davies, DeepMind, London, UK
Edith Elkind, University of Oxford, UK
Marco Gori, University of Siena, Italy
Danica Kragic Jensfelt, Royal Institute of Technology, Sweden
Yukie Nagai, The University of Tokyo, Japan
Panos Pardalos, University of Florida, USA
Silvio Savarese, Salesforce & Stanford University, USA
Joaquin Vanschoren, Eindhoven University of Technology, The Netherlands
Lenka Zdeborova, EPFL, Switzerland
Tutorial "Introduction to PyTorch"

PAST LECTURERS: https://acdl2022.icas.cc/past-lecturers/

SCHEDULE: https://acdl2022.icas.cc/wp-content/uploads/sites/19/2022/01/ACDL-2022-Schedule-Ver1.0.pdf

VENUE: The venue of ACDL 2022 will be The Certosa di Pontignano - Siena
The Certosa di Pontignano
Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy
phone: +39-0577-1521104 fax: +39-0577-1521098
info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php
Contact person: Dr. Lorenzo Pasquinuzzi
https://acdl2022.icas.cc/venue/

PAST EDITIONS: https://acdl2022.icas.cc/past-editions/ https://acdl2018.icas.xyz https://acdl2019.icas.xyz https://acdl2020.icas.xyz https://acdl2021.icas.cc

REGISTRATION: https://acdl2022.icas.cc/registration/

ACDL 2022 POSTER: https://acdl2022.icas.cc/wp-content/uploads/sites/19/2022/02/poster-ACDL-2022.png

Anyone interested in participating in ACDL 2022 should register as soon as possible. Similarly for accommodation at the Certosa di Pontignano (the Course Venue): book your full-board accommodation at the Certosa as soon as possible. All course participants must stay at the Certosa di Pontignano.

See you in 3D or 2D :) in Tuscany in August! ACDL 2022 Directors.
https://acdl2022.icas.cc/category/news/ https://acdl2022.icas.cc/faq/ acdl at icas.cc https://acdl2022.icas.cc https://www.facebook.com/groups/204310640474650/ https://twitter.com/TaoSciences

* Apologies for multiple copies. Please forward to anybody who might be interested *
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From max.garagnani at gmail.com Thu Jun 23 13:12:15 2022 From: max.garagnani at gmail.com (Max Garagnani) Date: Thu, 23 Jun 2022 17:12:15 +0000 Subject: Connectionists: MSc in Computational Cognitive Neuroscience, Goldsmiths (London, UK) :: Few places remaining References: Message-ID:

-- Apologies for cross-posting --

********************************************************************************
MSc in COMPUTATIONAL COGNITIVE NEUROSCIENCE at Goldsmiths, University of London (UK)
********************************************************************************

This established Masters course builds on the multi-disciplinary and strong research profiles of our Computing and Psychology Departments' staff. It equips students with a solid theoretical basis and experimental techniques in computational cognitive neuroscience, providing them also with an opportunity to apply their newly acquired knowledge in a practical research project, which may be carried out in collaboration with one of our industry partners (see below). Areas of application range from machine learning to brain-computer interfaces, to experimental and clinical research in computational / cognitive neuroscience.

APPLICATIONS for 2022-23 entry are still OPEN. However, note that places on this programme are limited and will be allocated on a first-come first-served basis. If you are interested in this programme, we strongly recommend applying now to avoid disappointment later.

HOW TO APPLY:
=============
Submitting an online application is easy and free of cost. Simply visit https://bit.ly/2Fi86SB and follow the instructions.

COURSE OUTLINE:
===============
This is a one-year full-time or two-year part-time Masters programme, consisting of taught courses (120 credits) plus a research project and dissertation (60 credits). (Note: students who need a Tier 4 visa to study in the UK can only register for the full-time pathway.) It is designed for students with a good degree in the biological / life sciences (psychology, neuroscience, biology, medicine, etc.) or physical sciences (computer science, mathematics, physics, engineering); however, applications from individuals with different backgrounds but equivalent experience will also be considered. The core contents of this course include (i) fundamentals of cognitive neuroscience (cortical and subcortical mechanisms and structures underlying cognition and behaviour, plus experimental and neuroimaging techniques), and (ii) concepts and methods of computational modelling of biological neurons, simple neuronal circuits, and higher brain functions. Students are trained in a rich variety of computational and advanced methodological skills, taught in the four core modules of the course (Modelling Cognitive Functions, Cognitive Neuroscience, Cortical Modelling, and Advanced Quantitative Methods). Unlike other standard computational neuroscience programmes (which focus predominantly on modelling low-level aspects of brain function), one of the distinctive features of this course is that it includes the study of biologically constrained models of cognitive processes (including, e.g., language and decision making).
The final research project can be carried out 'in house' or in collaboration with an external partner, either from academia or industry (see below). For samples of previous students' MSc projects, visit: https://coconeuro.com/index.php/student-projects/

LINKS WITH INDUSTRY:
====================
The programme benefits from ongoing collaborative partnerships with several companies having headquarters in the UK, USA, Italy, Poland and Japan. Carrying out your final research project with one of our industry partners will enable you to acquire cutting-edge skills much in demand on the job market, providing a 'fast track' route towards post-Masters internships and employment. Here are some examples of career pathways followed by some of our alumni, along with their feedback: https://coconeuro.com/index.php/alumni/

For any further information (including funding opportunities and fees), please visit: https://www.gold.ac.uk/pg/msc-computational-cognitive-neuroscience/ https://www.gold.ac.uk/pg/fees-funding/ For any other specific questions, please do not hesitate to get in touch. Kind regards, Max Garagnani

-- Joint Programme Leader, MSc in Computational Cognitive Neuroscience Senior Lecturer in Computer Science Department of Computing Goldsmiths, University of London Lewisham Way, New Cross London SE14 6NW, UK https://www.gold.ac.uk/computing/people/garagnani-max/
*******************************************************************************
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gary.marcus at nyu.edu Thu Jun 23 21:44:40 2022 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 23 Jun 2022 18:44:40 -0700 Subject: Connectionists: workshop on compositionality and AI Message-ID:

June 29-30, free registration required: https://compositionalintelligence.github.io/ A two-day online workshop on compositionality and artificial intelligence organized by Gary Marcus and Raphaël Millière.

Speakers:
Stephanie Chan (DeepMind)
Allyson Ettinger (University of Chicago)
Dieuwke Hupkes (European Laboratory for Learning and Intelligent Systems / Meta AI)
Paul Smolensky (Johns Hopkins University)
Brenden Lake (New York University / Meta AI)
Tal Linzen (New York University / Google AI)
Gary Marcus (New York University, Emeritus)
Raphaël Millière (Columbia University)
Ellie Pavlick (Brown University / Google AI)
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Sami.El-Boustani at unige.ch Thu Jun 23 15:32:45 2022 From: Sami.El-Boustani at unige.ch (Sami El-Boustani) Date: Thu, 23 Jun 2022 19:32:45 +0000 Subject: Connectionists: Open postdoctoral position - El-Boustani lab In-Reply-To: <756448AC-78B6-4752-886A-22C1A62D1AF5@unige.ch> References: <756448AC-78B6-4752-886A-22C1A62D1AF5@unige.ch> Message-ID: <12A978FD-D48C-426F-A64C-0A568E8599E4@unige.ch>

The El-Boustani lab at the University of Geneva is looking for a highly motivated postdoctoral fellow to research the cortical circuits underlying social cue encoding during joint decision-making in mice. This project involves calcium imaging of neuronal populations in freely moving mice during the execution of novel behavioral tasks. Optogenetic tools will be used to dissect the circuits that contribute to the performance of these tasks, and state-of-the-art behavioral analysis approaches will help characterize mouse interactions. Successful applicants will design a research program focused on understanding how frontal cortical areas contribute to social learning during decision-making.
We welcome applicants with diverse quantitative backgrounds, including, but not limited to, neuroscience, mathematics, physics, and engineering. We are especially interested in applicants with excellent programming skills (Matlab, Python) and experience with mouse behavior and recording techniques in freely moving animals. Switzerland offers a vibrant neuroscience community with outstanding research conditions and attractive salaries. Candidates should send their CV, at least two references and a brief cover letter describing their previous work and future goals to Sami El-Boustani: sami.el-boustani at unige.ch. Please visit http://elboustani-lab.org/ for more details. Sami

-- Sami El-Boustani, PhD SNSF Assistant Professor Department of Basic Neurosciences Faculty of Medicine, University of Geneva Rue Michel-Servet 1, 1211 Genève, Switzerland Email: sami.el-boustani at unige.ch Phone: +(41) 22 3795468
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ludovico.montalcini at gmail.com Fri Jun 24 04:18:44 2022 From: ludovico.montalcini at gmail.com (Ludovico Montalcini) Date: Fri, 24 Jun 2022 10:18:44 +0200 Subject: Connectionists: CfP ACAIN 2022, 2nd International Advanced Course on Artificial Intelligence & Neuroscience, Sept 18-22, Certosa di Pontignano, Tuscany - Deadline: July 18 Message-ID:

_______________________________________________________________
Call for Participation (apologies for cross-postings)
Please distribute this call to interested parties, thanks.
_______________________________________________________________

The 2nd International Advanced Course on #ArtificialIntelligence & #Neuroscience - #ACAIN2022
September 18-22, Certosa di Pontignano, Castelnuovo Berardenga, #Tuscany
https://acain2022.artificial-intelligence-sas.org acain at icas.cc
Past Edition: https://acain2021.artificial-intelligence-sas.org

LECTURERS:
* Marvin M. Chun, Yale University, USA
* Ila Fiete, MIT, USA
* Karl Friston, University College London, UK
  "Me and my Markov blanket"
* Wulfram Gerstner, EPFL, Switzerland
  "Eligibility traces and three-factor rules of synaptic plasticity: from reward to surprise"
  "No Backprop please! Learning hierarchical representations with neoHebbian learning rules"
* Máté Lengyel, Cambridge University, UK
* Christos Papadimitriou, Columbia Engineering, Columbia University, USA
* Panos Pardalos, University of Florida, USA
* Max Erik Tegmark, MIT, USA & Future of Life Institute
* Michail Tsodyks, Institute for Advanced Study (IAS) Princeton, USA
  "Putative neuronal underpinnings of working memory"
  "Mathematical models of human memory"

TUTORIAL:
* Christina Kyrousi, National and Kapodistrian University of Athens, Greece
  "Modelling brain development and disorders - organoids as a model system"
  "Extrinsic and intrinsic mechanisms modulating human corticogenesis"

https://acain2022.artificial-intelligence-sas.org/course-lecturers/

Early Registration (Course): by Monday July 18 (AoE) https://acain2022.artificial-intelligence-sas.org/registration/

SCOPE & MOTIVATION: ACAIN 2022 is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in both fields. The Advanced Course on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion residential (or online) Course and Symposium at the Certosa di Pontignano (Tuscany - Italy) on cutting-edge advances in Artificial Intelligence and Neuroscience, with lectures delivered by world-renowned experts.
The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their peers, in a friendly and constructive environment. Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy and efficient cooperation between the two research communities. The Event will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event.

COURSE DESCRIPTION: https://acain2022.artificial-intelligence-sas.org/course-description/

VENUE & ACCOMMODATION: https://acain2022.artificial-intelligence-sas.org/venue/ https://acain2022.artificial-intelligence-sas.org/accommodation/
The Certosa di Pontignano
Località Pontignano, 5 - 53019, Castelnuovo Berardenga (Siena) - Tuscany - Italy
phone: +39-0577-1521104 fax: +39-0577-1521098
info at lacertosadipontignano.com https://www.lacertosadipontignano.com/en/index.php
Contact person: Dr. Lorenzo Pasquinuzzi
You need to book your accommodation at the venue and pay the amount for accommodation directly to the Certosa di Pontignano.

ACTIVITIES: https://acain2022.artificial-intelligence-sas.org/activities/

REGISTRATION: https://acain2022.artificial-intelligence-sas.org/registration/

See you in 3D or 2D :) in Tuscany in September! ACAIN 2022 Directors.

Past Edition: https://acain2021.artificial-intelligence-sas.org
POSTER: https://acain2022.artificial-intelligence-sas.org/wp-content/uploads/sites/21/2022/02/poster-ACAIN-2022.png
NEWS: https://acain2022.artificial-intelligence-sas.org/category/news/
E: acain at icas.cc W: https://acain2022.artificial-intelligence-sas.org

* Apologies for multiple copies. Please forward to anybody who might be interested *
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From Mark.Humphries at nottingham.ac.uk Sun Jun 26 08:23:25 2022 From: Mark.Humphries at nottingham.ac.uk (Mark Humphries) Date: Sun, 26 Jun 2022 12:23:25 +0000 Subject: Connectionists: Fully-funded PhD project on networks in Mark Humphries' lab, University of Nottingham (Oct 2022 start) In-Reply-To: References: Message-ID:

A fully-funded 3.5-year PhD project is available in the Mark Humphries lab at the University of Nottingham, UK.

Broad classes of network models, like small-world networks and scale-free networks, have taught us much about the world. But we suspect a major class of networks has been missed. This project will introduce and explore a new class of model weighted networks, 'divergent' networks, in which the structure described by the links and the structure described by their weights do not match. Most theoretical work on weighted networks assumes that they do, so little is known about the properties of these divergent networks. We will close that gap by quantifying the existence and extent of divergence in a range of real-world networks, including brain connectomes at both the single-neuron and inter-area level; by creating generative models for these networks; and by exploring how divergence alters a network's dynamics across a range of dynamical systems implemented on the nodes, including neural models.

The student will be joining the Humphries lab (https://www.humphries-lab.org/), who draw on network theory and dynamical systems approaches to understand neural coding and computations. They will also be co-supervised by Prof Stephen Coombes (School of Maths), an expert in dynamics on networks.

Suggested reading: Humphries, M. D., Caballero, J. A.*, Evans, M.*, Maggi, S.* & Singh, A.* (2021) Spectral estimation for detecting low-dimensional structure in networks using arbitrary null models. PLoS ONE, 16(7): e0254057.
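As a toy illustration of the divergence idea sketched above (an illustrative sketch only, assuming Python with NumPy and NetworkX; the weighting rule below is invented for this example and is not taken from the project):

    import numpy as np
    import networkx as nx

    # Toy "divergent" weighted network: the binary links come from a random
    # graph, while the weights are assigned to disagree with that structure:
    # edges touching high-degree nodes carry the weakest weights.
    rng = np.random.default_rng(0)
    n = 200
    G = nx.gnp_random_graph(n, 0.05, seed=0)

    deg = dict(G.degree())
    for u, v in G.edges():
        G[u][v]["weight"] = (deg[u] * deg[v]) ** -2 + 1e-6 * rng.random()

    # Crude probe of divergence: in standard weighted-network models a node's
    # degree (binary structure) and strength (total incident weight) track
    # each other closely; here they are anticorrelated by construction.
    degree = np.array([deg[i] for i in G.nodes()])
    strength = np.array([s for _, s in G.degree(weight="weight")])
    print("degree-strength correlation:", np.corrcoef(degree, strength)[0, 1])

A real analysis would of course test divergence against proper null models (for instance, the spectral approach in the suggested reading) rather than this degree-strength shortcut.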
Enquiries: mark.humphries at nottingham.ac.uk (send a CV with queries please)

Application deadline: July 29th 2022

Further details, and how to apply: https://www.findaphd.com/phds/project/divergent-networks-a-new-class-of-complex-systems/?p145584

Funding information: EPSRC DTP studentship, 3.5 years duration. Start: October 2022. The studentship will pay Home (UK) university tuition fees (minimum £4,596 per year) and a minimum stipend of £16,062 per year for your living costs. International students are welcome to apply, but note the studentship does not cover international tuition fees (currently £26,000 per year).

Professor Mark Humphries | Professor of Computational Neuroscience
My book "The Spike: An Epic Journey Through the Brain in 2.1 Seconds" is out now in hardback, ebook, and audiobook! https://press.princeton.edu/books/hardcover/9780691195889/the-spike
Lab: humphries-lab.org Twitter: @markdhumphries Public blog: https://medium.com/the-spike

This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From david at irdta.eu Sat Jun 25 05:37:43 2022 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 25 Jun 2022 11:37:43 +0200 (CEST) Subject: Connectionists: DeepLearn 2022 Summer: regular registration July 22 Message-ID: <1671632218.2196363.1656149863997@webmail.strato.com>

******************************************************************
6th INTERNATIONAL GRAN CANARIA SCHOOL ON DEEP LEARNING
DeepLearn 2022 Summer
Las Palmas de Gran Canaria, Spain
July 25-29, 2022
https://irdta.eu/deeplearn/2022su/
*****************
Co-organized by:
University of Las Palmas de Gran Canaria
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
******************************************************************
Regular registration: July 22, 2022
******************************************************************

SCOPE: DeepLearn 2022 Summer will be a research training event with a global scope, aiming to update participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Bournemouth, and Guimarães. Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and the main challenges identified, through 21 four-and-a-half-hour courses and 3 keynote lectures, which will tackle the most active and promising topics.
The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be the main ingredients of the event. It will also be possible to participate fully live, remotely. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.

ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2022 Summer is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE: DeepLearn 2022 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be:
Institución Ferial de Canarias
Avenida de la Feria, 1
35012 Las Palmas de Gran Canaria
https://www.infecar.es/index.php?option=com_k2&view=item&layout=item&id=360&Itemid=896

STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another. Full live online participation will be possible. However, the organizers highlight the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:
Wahid Bhimji (Lawrence Berkeley National Laboratory), Deep Learning on Supercomputers for Fundamental Science
Joachim M. Buhmann (Swiss Federal Institute of Technology Zurich), Machine Learning -- A Paradigm Shift in Human Thought!?
Kate Saenko (Boston University), Overcoming Dataset Bias in Deep Learning [virtual]

PROFESSORS AND COURSES:
Pierre Baldi (University of California Irvine), [intermediate/advanced] Deep Learning: From Theory to Applications in the Natural Sciences
Arindam Banerjee (University of Illinois Urbana-Champaign), [intermediate/advanced] Deep Generative and Dynamical Models
Mikhail Belkin (University of California San Diego), [intermediate/advanced] Modern Machine Learning and Deep Learning through the Prism of Interpolation
Arthur Gretton (University College London), [intermediate/advanced] Probability Divergences and Generative Models
Phillip Isola (Massachusetts Institute of Technology), [intermediate] Deep Generative Models
Mohit Iyyer (University of Massachusetts Amherst), [intermediate/advanced] Natural Language Generation
Irwin King (Chinese University of Hong Kong), [intermediate/advanced] Deep Learning on Graphs
Tor Lattimore (DeepMind), [intermediate/advanced] Tools and Techniques of Reinforcement Learning to Overcome Bellman's Curse of Dimensionality
Vincent Lepetit (Paris Institute of Technology), [intermediate] Deep Learning and 3D Reasoning for 3D Scene Understanding
Dimitris N. Metaxas (Rutgers, The State University of New Jersey), [intermediate/advanced] Model-based, Explainable, Semisupervised and Unsupervised Machine Learning for Dynamic Analytics in Computer Vision and Medical Image Analysis
Sean Meyn (University of Florida), [introductory/intermediate] Reinforcement Learning: Fundamentals, and Roadmaps for Successful Design
Louis-Philippe Morency (Carnegie Mellon University), [intermediate/advanced] Multimodal Machine Learning
Wojciech Samek (Fraunhofer Heinrich Hertz Institute), [introductory/intermediate] Explainable AI: Concepts, Methods and Applications
Clarisa Sánchez (University of Amsterdam), [introductory/intermediate] Mechanisms for Trustworthy AI in Medical Image Analysis and Healthcare
Björn W. Schuller (Imperial College London), [introductory/intermediate] Deep Multimedia Processing
Jonathon Shlens (Apple), [introductory/intermediate] An Introduction to Computer Vision and Convolutional Neural Networks [virtual]
Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines
A. Murat Tekalp (Koç University), [intermediate/advanced] Deep Learning for Image/Video Restoration and Compression
Alexandre Tkatchenko (University of Luxembourg), [introductory/intermediate] Machine Learning for Physics and Chemistry
Li Xiong (Emory University), [introductory/intermediate] Differential Privacy and Certified Robustness for Deep Learning
Ming Yuan (Columbia University), [intermediate/advanced] Low Rank Tensor Methods in High Dimensional Data Analysis

OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 17, 2022.

INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

EMPLOYER SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 17, 2022.

ORGANIZING COMMITTEE:
Marisol Izquierdo (Las Palmas de Gran Canaria, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION: It has to be done at https://irdta.eu/deeplearn/2022su/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first come, first served basis. The registration period will be closed and the online registration tool disabled once the capacity of the venue is exhausted.
It is highly recommended to register prior to the event.

FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines, and fees depend on the registration deadline. The fees for on-site and for online participation are the same.

ACCOMMODATION: Accommodation suggestions are available at https://irdta.eu/deeplearn/2022su/accommodation/

CERTIFICATE: A certificate of successful participation in the event will be delivered, indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION: david at irdta.eu

ACKNOWLEDGMENTS:
Cabildo de Gran Canaria
Universidad de Las Palmas de Gran Canaria
Universitat Rovira i Virgili
Institute for Research Development, Training and Advice - IRDTA, Brussels/London
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From fabio.bellavia at unifi.it Sat Jun 25 12:01:06 2022 From: fabio.bellavia at unifi.it (Fabio Bellavia) Date: Sat, 25 Jun 2022 18:01:06 +0200 Subject: Connectionists: CFP - IET Image Processing special issue on "Advancements in Fine Art Pattern Extraction and Recognition" [deadline 28 November 2022] Message-ID: <4cb9a43d-a370-ce0f-395a-330b9e03a906@unifi.it>

apologies for multiple posting, please distribute among interested parties

_________________
*Call for Papers*
_______
*Special Issue of IET Image Processing on*
__________
*ADVANCEMENTS in FINE ART PATTERN EXTRACTION and RECOGNITION*
___________

Aim & Scope

Cultural heritage, especially fine arts, plays an invaluable role in the cultural, historical and economic growth of our societies. Fine arts are primarily developed for aesthetic purposes and are mainly expressed through painting, sculpture and architecture. In recent years, thanks to technological improvements and drastic cost reductions, a large-scale digitization effort has been made, which has led to an increasing availability of large digitized fine art collections. This availability, coupled with recent advances in pattern recognition and computer vision, has disclosed new opportunities, especially for researchers in these fields, to assist the art community with automatic tools to further analyze and understand fine arts. Among other benefits, a deeper understanding of fine arts has the potential to make them more accessible to a wider population, both in terms of fruition and creation, thus supporting the spread of culture.

This special issue aims to offer the opportunity to present advancements in the state of the art, innovative research, ongoing projects, and academic and industrial reports on the application of visual pattern extraction and recognition for a better understanding and fruition of fine arts, soliciting contributions from the pattern recognition, computer vision, artificial intelligence and image processing research areas. The special issue will be linked to the 2nd International Workshop on Fine Art Pattern Extraction and Recognition (FAPER2022). Authors of selected conference papers will be invited to extend and improve their contributions for this special issue, and authors are also invited to submit new contributions (non-conference papers).
_______________________________________
Topics include, but are not limited to:
- Applications of machine learning and deep learning to cultural heritage and digital humanities
- Computer vision and multimedia data processing for fine arts
- Generative adversarial networks for artistic data
- Augmented and virtual reality for cultural heritage
- 3D reconstruction of historical artifacts
- Point cloud segmentation and classification for cultural heritage
- Historical document analysis
- Content-based retrieval in the visual art domain
- Digitally enriched museum visits
- Smart interactive experiences in cultural sites
- Projects, products or prototypes for cultural heritage
_______________________________________

*Submission Deadline*: 28 November 2022

Submissions must be made through ScholarOne: https://mc.manuscriptcentral.com/theiet-ipr. See the PDF call for papers for more information: https://ietresearch.onlinelibrary.wiley.com/pb-assets/assets/17519667/Special%20Issues/IPR%20SI%20CFP_AFAPER-1651107571727.pdf

___________
Open Access

In January 2021, the IET began an Open Access publishing partnership with Wiley. As a result, all submissions that are accepted for this Special Issue will be published under the Gold Open Access model and subject to the Article Processing Charge (APC) of $2,300. The APC can be covered in FULL or in part by your institution! CHECK YOUR ELIGIBILITY HERE: https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/affiliation-policies-payments/institutional-funder-payments.html

_______________
Editor-in-Chief
Prof. Farzin Deravi, University of Kent, UK

_____________
Guest Editors
Giovanna Castellano, Università di Bari, Italy
Gennaro Vessio, Università di Bari, Italy
Fabio Bellavia, Università di Palermo, Italy
Sinem Aslan, Università Ca' Foscari Venezia, Italy

From hongzhi.kuai at gmail.com Mon Jun 27 01:51:32 2022
From: hongzhi.kuai at gmail.com (H.Z. Kuai) Date: Mon, 27 Jun 2022 14:51:32 +0900 Subject: Connectionists: WI-IAT '22 CfPs [Extended Deadline] Message-ID:

[Apologies if you receive this more than once]
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CALL FOR PAPERS
The 21st IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '22)
November 17-20, 2022, Niagara Falls, Canada
A hybrid conference with both online and offline modes
Web Intelligence = AI in the Connected World
Homepage: https://www.wi-iat.com/wi-iat2022/
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Full Papers Submission Deadline: June 30, 2022 (Extended)

Sponsored By:
Web Intelligence Consortium (WIC)
Association for Computing Machinery (ACM)
IEEE Computer Society

* Award Information *
* Two Student Travel Awards (US$500 Each)
* Two Student Awards (Non-travel, US$500 Each)
* Two Volunteer Awards (US$500 Each)
* One Best Paper Award (US$1000)

WI-IAT 2022 Special Event:
- Web Intelligence Journal Special Issue: 20 Years of Web Intelligence https://www.iospress.com/catalog/journals/web-intelligence

**************************************
WI-IAT 21st Keynote Speakers:
**************************************
https://www.wi-iat.com/wi-iat2022/projects-KeynoteSpeakers.html
- Ophir Frieder, Fellow of the American Association for the Advancement of Science (AAAS), Fellow of the Association for Computing Machinery (ACM), Fellow of the Institute of Electrical and Electronics Engineers (IEEE), Georgetown University, USA
- Kevin Leyton-Brown, Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), Fellow of the Association of Computing Machinery (ACM), University of British Columbia, Canada
- Ming Li, Fellow of the Royal Society of Canada, Fellow of the Association for Computing Machinery (ACM), Fellow of the Institute of Electrical and Electronics Engineers (IEEE), University of Waterloo, Canada
- Witold Pedrycz, Fellow of the Royal Society of Canada, Fellow of the Institute of Electrical and Electronics Engineers (IEEE), University of Alberta, Canada
- Yiyu Yao, Fellow of the International Rough Set Society (IRSS), University of Regina, Canada
More to be announced later.
ACCEPTED WORKSHOPS/SPECIAL SESSIONS
++++++++++++++++++++++++++++++++++++++
https://www.wi-iat.com/wi-iat2022/Workshops-Special-Sessions.html
WS01: The 11th International Workshop on Intelligent Data Processing (IDP)
WS02: The 7th International Workshop on Application of Big Data for Computational Social Science (ABCSS2022)
WS03: The 7th International Workshop on Integrated Social CRM (iCRM 2022)
WS04: The 5th International Workshop on Social Media Analytics for Health intelligence (SMA4H)
WS05: The International Workshop on Personalized QA and its Applications (PQAIA)
WS06: The 2nd International Workshop on Expert Recommendation for Community Question Answering (XPERT4CQA)
WS07: The International Workshop on Data Analytics on Social Media (DASM'22)
WS08: The 15th Natural Language Processing and Ontology Engineering (NLPOE2022)
WS09: The 1st International Workshop on the Fundamentals and Advances of Recommendation System (FA-RS)
WS10: The International Workshop on Affective Computing and Emotion Recognition (ACER-EMORE2022)
WS11: The International Workshop on Telemedicine System
WS12: The International Workshop on Web Intelligence meets Brain Informatics (WImBI'22)
WS13: The International Workshop on AI and Machine Learning Applications in Healthcare
WS14: The International Workshop on Symbolic Reasoning and Data Mining for Complicated Process Analysis
WS15: The International Workshop on Causal Inference for Information Retrieval
WS16: The International Workshop on High-dimensional Data Analytics for Knowledge Management (HDAKM)
WS17: The International Workshop on Explainable AI for Recommender Systems
More to be announced later.

The 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '22) provides a premier international forum to bring together researchers and practitioners from diverse fields for the presentation of original research results, as well as the exchange and dissemination of innovative and practical development experiences in Web Intelligence and Intelligent Agent Technology research and applications. Academics, professionals and industry people can exchange their ideas, findings and strategies for utilizing the power of human brains and man-made networks to create a better world. More specifically, the conference covers how intelligence is impacting the Web of People, the Web of Data, the Web of Things, the Web of Trust, the Web of Agents, and the emerging Web in health and smart living in the 5G Era. Therefore, the theme of WI-IAT '22 will be "Web Intelligence = AI in the Connected World". After the highly successful online WI-IAT'20 and hybrid WI-IAT'21 during the global pandemic, WI-IAT'22 will be held in Niagara Falls, Canada, once again in hybrid mode. WI-IAT '22 welcomes research, application, and Industry/Demo track paper submissions in the core thematic pillars below, which call for innovative and disruptive WI solutions for any of the following indicative sub-topics.
TRACKS AND TOPICS
++++++++++++++++++

Track 1: Web of People
* Crowdsourcing and Social Data Mining * Human-Centric Computing * Information Diffusion * Knowledge Community Support * Modelling Crowd-Sourcing * Opinion Mining * People Oriented Applications and Services * Recommendation Engines * Sentiment Analysis * Situational Awareness * Social Network Analysis * Social Groups and Dynamics * Social Media and Dynamics * Social Networks Analytics * User and Behavioural Modelling

Track 2: Web of Data
* Algorithms and Knowledge Management * Autonomy-Oriented Computing (AOC) * Big Data Analytics * Big Data and Human Brain Complex Systems * Cognitive Models * Computational Models * Data-Driven Services and Applications * Data Integration and Data Provenance * Data Science and Machine Learning * Graph Isomorphism * Graph Theory * Information Search and Retrieval * Knowledge Graph * Knowledge Graph and Semantic Networks * Linked Data Management and Analytics * Self-Organizing Networks * Semantic Networks * Sensor Networks * Web Science

Track 3: Web of Things
* Complex Networks * Distributed Systems and Devices * Dynamics of Networks * Industrial Multi-Domain Web * Intelligent Ubiquitous Web of Things * IoT Data Analytics * Location and Time Awareness * Open Autonomous Systems * Streaming Data Analysis * Web Infrastructures and Devices * Mobile Web * Wisdom Web of Things (W2T)

Track 4: Web of Trust
* Blockchain analytics and technologies * Fake content and fraud detection * Hidden Web Analytics * Monetization Services and Applications * Trust Models for Agents * Ubiquitous Computing * Web Cryptography * Monetization services and applications * Web safety and openness

Track 5: Web of Agents
* Agent Networks * Autonomy Remembrance Agents * Autonomy-oriented Computing * Behaviour Modelling * Distributed Problem-Solving * Global Brain * Edge Computing * Individual-based Modelling * Knowledge and Information Agents * Local-Global Behavioural Interactions * Mechanism Design * Multi-Agent Systems * Network Autonomy Remembrance Agents * Self-adaptive Evolutionary Systems * Self-organizing Systems * Social Groups and Dynamics

Special Track: Emerging Web in Health and Smart Living
* Big Data in Medicine * City Brain and Global Brain * Digital Ecosystems * Digital Epidemiology * Health Data Exchange and Sharing * Healthcare and Medical Applications and Services * Omics Research and Trends * Personalized Health Management and Analytics * Smart City Applications and Services * Time Awareness and Location Awareness Smart City * Wellbeing and Healthcare in the 5G Era

IMPORTANT DATES
+++++++++++++++
June 30, 2022 (Extended): Full Papers Submission
August 20, 2022: Paper Acceptance Notification
August 20, 2022: Early Registration Opens
September 7, 2022: Camera-ready Submission
November 17, 2022: Workshops and Special Sessions
November 18-20, 2022: Main Conference

PAPER SUBMISSION
++++++++++++++++
Papers must be submitted electronically via CyberChair in standard IEEE Conference Proceedings format (max 8 pages, templates at https://www.ieee.org/conferences/publishing/templates.html). Submitted papers will undergo a peer review process, coordinated by the International Program Committee.
Main Conference Paper Submission: https://www.wi-iat.com/wi-iat2022/Participant-Submission.html Workshops and Special Sessions Paper Submission: https://www.wi-iat.com/wi-iat2022/Workshops-Special-Sessions.html Organization Structure ++++++++++++++++++++++ General Chairs * Gabriella Pasi, University of Milano-Bicocca, Italy * Jimmy Huang, York University, Canada * Jie Tang, Tsinghua University, China * Christopher W. Clifton, Purdue University, USA Program Committee Chairs * Jiashu Zhao, Wilfrid Laurier University, Canada * Ebrahim Bagheri, Ryerson University, Canada * Norbert Fuhr, University of Duisburg-Essen, Germany * Atsuhiro Takasu, National Institute of Informatics, Japan * Yixing Fan, Chinese Academy of Sciences, China Local Organizing Chairs * Mehdi Kargar, Ryerson University, Canada * George J. Georgopoulos, York University, Canada Workshop/Special Session Chairs * Hiroki Matsumoto, Maebashi Institute of Technology, Japan * Ameeta Agrawal, Portland State University, USA * Cathal Gurrin, Dublin City University, Ireland * Chao Huang, University of Hong Kong, China Publicity Chairs * Hongzhi Kuai, Maebashi Institute of Technology, Japan * Yang Liu, Wilfrid Laurier University, Canada * Yan Ge, University of Bristol, UK Tutorial Chairs * Vivian Hu, Ryerson University, Canada * Xing Tan, Lakehead University, Canada * Shuaiqiang Wang, Baidu, China Proceedings Chairs * Amran Bhuiyan, York University, Canada * Jingyuan Li, Beijing Technology and Business University, China Industry Chairs * Stephen Chan, Dapasoft, Canada * Long Xia, Baidu, China Treasurer * Hajer Ayadi, York University, Canada WIC Steering Committee Chairs * Ning Zhong, Maebashi Institute of Technology, Japan * Jiming Liu, Hong Kong Baptist University, HK, China WIC Executive Secretary * Xiaohui Tao, University of Southern Queensland, Australia From lmuller2 at uwo.ca Sun Jun 26 22:59:56 2022 From: lmuller2 at uwo.ca (Lyle Muller) Date: Mon, 27 Jun 2022 02:59:56 +0000 Subject: Connectionists: 2022 Western Academy for Advanced Research Postdoctoral Competition Message-ID: The Western Academy for Advanced Research is launching a competition for highly motivated postdoctoral researchers at the interface of mathematical modeling and neuroscience. Successful candidates will join a new, research-intensive program at Western University. The Western Academy's opening Theme will develop connections between mathematics and neuroscience, leveraging mathematical approaches to answer open questions in neuroscience. Motivated postdoctoral candidates are encouraged to submit application materials, including a CV, short research statement, and contact information for three academic references, to Lyle Muller (lmuller2 at uwo.ca) and Ján Mináč (minac at uwo.ca). Postdocs will receive competitive stipends and support for their academic careers. Review of applications will begin 15 July 2022. Both our group and the University are deeply committed to fostering diversity in the sciences. Applications from underrepresented groups are strongly encouraged, and any interested applicants can write the PIs directly with questions about the research environment. -- Lyle Muller http://mullerlab.ca
From lmuller2 at uwo.ca Sun Jun 26 23:03:20 2022 From: lmuller2 at uwo.ca (Lyle Muller) Date: Mon, 27 Jun 2022 03:03:20 +0000 Subject: Connectionists: 2022 Western-Fields School in Networks and Neuroscience Message-ID: <344022B5-AA4B-445D-AA09-28B6E592EBD6@uwo.ca> Applications are invited for a one-week school at the interface of mathematics and neuroscience. With a similar scope to the 2021 joint seminar series, the Western-Fields Summer School in Networks and Neuroscience will bring together advanced undergraduate students, graduate students, and postdocs for a week of training in methods for networks, dynamics, learning, and modeling biological neural networks. Lectures will cover graph theory, network dynamics, advanced algebra, and machine learning. Students will receive intensive training ranging from introductory to advanced methods in these fields. This summer school will be held September 19-23 on the campus of Western University (London, Ontario), with a closing session at the Fields Institute (Toronto, Ontario). Applications should consist of a CV and a one-page statement of interest, submitted as a single PDF file to Lyle Muller (lmuller2 at uwo.ca) and Ján Mináč (minac at uwo.ca). Review of applications will begin on 6 July 2022. Our team is committed to fostering diversity in mathematics and neuroscience, and a goal of this summer school is to encourage diverse candidates with quantitative training to consider problems in this growing field. Any interested applicants can write to the teaching team with questions about the program. -- Lyle Muller http://mullerlab.ca From xiaoxuan.li at icpbr.ac.cn Sun Jun 26 23:27:18 2022 From: xiaoxuan.li at icpbr.ac.cn (=?UTF-8?Q?Xiaoxuan_Li_=E6=9D=8E=E8=82=96=E7=92=87?=) Date: Mon, 27 Jun 2022 11:27:18 +0800 (GMT+08:00) Subject: Connectionists: Postdoc positions in International Center for Primate Brain Research, Shanghai, China Message-ID: <7ce6cb0f.1e38.181a333037f.Coremail.xiaoxuan.li@icpbr.ac.cn> We are hiring postdoctoral research fellows to engage in conscious visual perception research in the Laboratory of Neural Dynamics of Visual Perception and Cognition at the International Center for Primate Brain Research (ICPBR). The ICPBR, co-directed by Prof. Mu-Ming Poo and Prof. Nikos K. Logothetis and affiliated with the Center for Excellence in Brain Science and Intelligence Technology (CEBSIT, http://english.cebsit.cas.cn/) of the Chinese Academy of Sciences in Shanghai, is a newly established international research facility for conducting highest-level primate brain research. Here, Dr. Vishal Kapoor leads the Laboratory of Neural Dynamics of Visual Perception and Cognition (http://english.cebsit.cas.cn/lab/VishalKapoor/research/). The lab, established in late 2021, aims to combine psychophysical paradigms and behavior with state-of-the-art electrophysiological and computational approaches to study the neural basis of conscious visual perception and disambiguate these neural processes from those underlying cognition. More information and application at Postdoctoral Research Fellow Positions--Center for Excellence in Brain Science and Intelligence Technology (cas.cn) We provide a highly interdisciplinary, international, inclusive, and supportive work environment for performing exciting cutting-edge research, where qualified candidates will receive support from the institute and abundant opportunities for scientific exchange both domestically and internationally.
The institute supports the career and personal development of candidates, including opportunities related to talent programs available for outstanding postdoctoral fellows in the Chinese Academy of Sciences. Candidates will be reviewed on a rolling basis until the positions are filled. Please do not hesitate to contact the principal investigator or the lab manager for informal inquiries regarding the positions: People----Center for Excellence in Brain Science and Intelligence Technology (cas.cn) Thank you. Laboratory of Neural Dynamics of Visual Perception and Cognition International Center for Primate Brain Research (ICPBR) Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences 500 Qiangye Rd, Songjiang District, Shanghai, China, 201602 Tel: 021-31821616 x 8227 Email: xiaoxuan.li at icpbr.ac.cn From timofte.radu at gmail.com Sun Jun 26 13:17:04 2022 From: timofte.radu at gmail.com (Radu Timofte) Date: Sun, 26 Jun 2022 19:17:04 +0200 Subject: Connectionists: [CFP] ECCV 2022 Advances in Image Manipulation (AIM) workshop and challenges Message-ID: Apologies for cross-posting ******************************* CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 8 CHALLENGES AIM: 4th Advances in Image Manipulation workshop and challenges on compressed/image/video super-resolution, learned ISP, reversed ISP, Instagram filter removal, Bokeh effect, depth estimation In conjunction with ECCV 2022, Tel-Aviv, Israel Website: https://data.vision.ee.ethz.ch/cvl/aim22/ Contact: radu.timofte at uni-wuerzburg.de TOPICS Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to: * Image-to-image translation * Video-to-video translation * Image/video manipulation * Perceptual manipulation * Image/video generation and hallucination * Image/video quality assessment * Image/video semantic segmentation * Saliency and gaze estimation * Perceptual enhancement * Multimodal translation * Depth estimation * Image/video inpainting * Image/video deblurring * Image/video denoising * Image/video upsampling and super-resolution * Image/video filtering * Image/video de-hazing, de-raining, de-snowing, etc. * Demosaicing * Image/video compression * Removal of artifacts, shadows, glare and reflections, etc. * Image/video enhancement: brightening, color adjustment, sharpening, etc. * Style transfer * Hyperspectral imaging * Underwater imaging * Aerial and satellite imaging * Methods robust to changing weather conditions / adverse outdoor conditions * Image/video manipulation on mobile devices * Image/video restoration and enhancement on mobile devices * Studies and applications of the above. SUBMISSION A paper submission has to be in English, in pdf format, and at most 14 pages (excluding references) in single-column, ECCV style. The paper format must follow the same guidelines as for all ECCV 2022 submissions. The review process is double blind. Dual submission is not allowed. Submission site: https://cmt3.research.microsoft.com/AIMWC2022/ WORKSHOP DATES * Submission Deadline: July 25, 2022 * Decisions: August 15, 2022 * Camera Ready Deadline: August 22, 2022 AIM 2022 has the following associated challenges (ONGOING!): 1. Compressed Input Super-Resolution 2. Reversed ISP 3. Instagram Filter Removal 4. Video Super-Resolution (Evaluation platform: MediaTek Dimensity APU) - Powered by MediaTek 5. Image Super-Resolution
(Eval. platform: Synaptics Dolphin NPU) - Powered by Synaptics 6. Learned Smartphone ISP (Eval. platform: Snapdragon Adreno GPU) - Powered by OPPO 7. Bokeh Effect Rendering (Eval. platform: ARM Mali GPU) - Powered by Huawei 8. Depth Estimation (Eval. platform: Raspberry Pi 4) - Powered by Raspberry Pi PARTICIPATION To learn more about the challenges and to participate: https://data.vision.ee.ethz.ch/cvl/aim22/ CHALLENGES DATES * Release of train data: May 24, 2022 * Validation server online: June 1, 2022 * Competitions end: July 30, 2022 CONTACT Email: radu.timofte at uni-wuerzburg.de Website: https://data.vision.ee.ethz.ch/cvl/aim22/ From phitzler at googlemail.com Mon Jun 27 03:54:46 2022 From: phitzler at googlemail.com (Pascal Hitzler) Date: Mon, 27 Jun 2022 09:54:46 +0200 Subject: Connectionists: Call for book chapter proposals for A Compendium of Neuro-Symbolic Artificial Intelligence In-Reply-To: <0fcd40f1-35de-4816-45af-10122e0ec6dc@googlemail.com> References: <0fcd40f1-35de-4816-45af-10122e0ec6dc@googlemail.com> Message-ID: <79a985da-bd34-79d8-903e-a38e77511aaf@googlemail.com> A clarification on this: We're not looking for original contributions. Rather, we're looking for chapters by authors who present a larger perspective on a series of their own papers on closely related topics. You can think of them as surveys of some of your own work. Abstract deadline: July 15. Call is online at https://daselab.cs.ksu.edu/content/call-book-chapter-proposals-compendium-neuro-symbolic-artificial-intelligence Best Regards, Pascal. On 6/2/2022 5:21 AM, Pascal Hitzler wrote: > We recently published a book entitled Neuro-Symbolic Artificial > Intelligence: The State of the Art (see > https://ebooks.iospress.nl/ISBN/978-1-64368-244-0 ) which contains > invited overview chapters by selected authors. > > Due to the success of this book, and because there is much more work on > the topic that we have not been able to include, the publisher has > agreed to a new and more comprehensive volume. > > The new book will be entitled (tentatively) > > A Compendium of Neuro-Symbolic Artificial Intelligence > > and at this time we are requesting book chapter proposals. We are > understanding the topic in a very general sense, i.e. in scope is any > research that includes both artificial neural networks (and deep > learning) and symbolic methods; see e.g. > http://doi.org/10.3233/AIC-210084 . > > A book chapter shall be an overview of a line of work by the chapter > authors, based on 2 or more related publications in quality conferences > or journals. The intention is that a large collection of such chapters > will provide an overview of the whole field. > > To contribute to the book, please provide a brief book chapter proposal > to hitzler at ksu.edu by the > > Deadline July 15, 2022 > > consisting of the following: > > * Title of the chapter > * List of chapter authors > * A brief abstract (one paragraph) > * Approximate number of pages (see the front matter in the above linked > book for approximate formatting) > > The list of already published conference or journal papers the chapter > will be based on > > We will notify contributors by July 30 whether their chapter will be > included. > > Further please take note of the following: > * The deadline for the chapters will be October 31, 2022.
> * We will do a light cross-review for feedback (since material is based > on already peer reviewed publications) > * Each contributing author will have to be available to review at most > one other chapter within 4 weeks. > > We expect publication of the book in the first half of 2023. > > We are looking forward to your contribution! > > Pascal Hitzler > Md Kamruzzaman Sarker > -- Pascal Hitzler Lloyd T. Smith Creativity in Engineering Chair Director, Center for AI and Data Science Kansas State University http://www.pascal-hitzler.de http://www.daselab.org http://www.semantic-web-journal.net From boriana.shalyavska at insait.ai Mon Jun 27 09:18:30 2022 From: boriana.shalyavska at insait.ai (Boriana Shalyavska) Date: Mon, 27 Jun 2022 16:18:30 +0300 Subject: Connectionists: PhD in Computer Science/AI with Full 5-year Fellowships Message-ID: <3116532F-B90C-4784-93EF-E840171362CA@contoso.com> The Institute for Computer Science, Artificial Intelligence, and Technology (INSAIT), created in partnership with ETH Zurich and EPFL, seeks candidates for Ph.D. positions in Computer Science and Artificial Intelligence with full 5-year fellowships. INSAIT is the first research institute in computer science and artificial intelligence located in Eastern Europe, whose mission is to become one of the world's leading research and innovation powerhouses. At INSAIT, you will benefit from: * Mentorship by top professors from world-class universities such as MIT, CMU, ETH Zurich, Yale, and EPFL. * Outstanding working conditions that provide you with the freedom to think and the space to learn and grow, with a compensation of EUR 36,000 / year with a flat income tax of 10%. * A rolling admission process, accepting PhD applications at any point during the academic year. To apply, you must hold a B.Sc. or an M.Sc. degree (or be within the last year of completing either) in computer science, data science, mathematics, physics, statistics, or electrical engineering. We welcome all excellent candidates with a strong academic background who are keen on conducting world-class research in the general field of AI and computer science. When ready to apply, go to: https://insait.ai/phd/ From anna.kalenkova at adelaide.edu.au Mon Jun 27 23:29:08 2022 From: anna.kalenkova at adelaide.edu.au (Anna Kalenkova) Date: Tue, 28 Jun 2022 03:29:08 +0000 Subject: Connectionists: Process mining. Best PhD Dissertation Award 2022 Message-ID: Call for Best Process Mining PhD Dissertation Award 2022 https://icpmconference.org/2022/call-for-best-phd-thesis/ The IEEE Task Force for Process Mining is happy to announce the 2022 edition of the Best Process Mining PhD Dissertation Award. The award will be delivered during the 4th International Conference on Process Mining (ICPM 2022). Eligibility Eligible candidates are those who officially obtained a PhD degree defended in 2020 or 2021, with a dissertation focused on process mining. Those who applied for the 2021 edition of the prize can reapply for the 2022 edition as well. We welcome theses that contributed to advancing the state of the art in the foundations, engineering, and on-field application of process mining techniques. In this context, the term "process mining" has to be understood in a broad sense: using event data produced during the execution of business, software, or system processes, in order to extract fact-based knowledge and insights on such processes and manage future processes in innovative ways.
For a thesis to be eligible, we also require that thesis-related results have been published in at least one flagship conference/journal for process mining, for example ICPM, BPM, CAiSE, EDOC, Petri Nets, ICDM, Information Systems, IEEE TKDE, DKE, ACM TOSEM, IEEE TSC, IEEE TSE, ToPNoC, Decision Support Systems, BISE. We remark that applications to other dissertation award initiatives are permitted. Prize The winner will receive the award at the ICPM 2022 Conference. The award comes with free registration to the ICPM conference, and with the option of publishing the thesis with Springer, in the LNBIP series. There may also be a monetary prize. Nomination and Submission Candidates are nominated by their primary supervisor via a nomination letter. Each supervisor is only allowed to nominate one candidate. The candidate is responsible for submitting the application via EasyChair (selecting the "Best Process Mining PhD Dissertation Award" track). The application consists of the following parts, which all have to be concatenated in a single PDF file (respecting the order): * An extended abstract of the thesis, positioning the work in the state of the art and highlighting its main results, novelty, and (potential) impact [3-5 pages] * A nomination letter by the supervisor [1-2 pages] * The PhD evaluation report, including the reviews of the dissertation * Full CV of the candidate, including the list of publications * The dissertation itself Selection Process The selection process consists of two steps: * The jury evaluates the applications and identifies a shortlist of three candidates. * The three candidates are invited to give a presentation to the jury in an online session. The jury then selects the winner. The selection is based on the following criteria: * originality and depth of contribution; * methodological soundness; * form and quality of presentation; * significance and potential impact for the research field; * implemented techniques and software availability in case of algorithmic nature of the contribution. Key Dates Application submission deadline: July 15, 2022 Notification of short-listing: September 16, 2022 Online presentations by shortlisted candidates: end of September - beginning of October 2022 Evaluation Panel Moe Wynn, Queensland University of Technology Wil van der Aalst, RWTH Aachen University Hajo Reijers, Utrecht University Boudewijn van Dongen, Eindhoven University of Technology Chiara Di Francescomarino, Fondazione Bruno Kessler Pnina Soffer, University of Haifa Arik Senderovich, York University Tijs Slaats, University of Copenhagen Agnes Koschmider, University of Kiel From juergen at idsia.ch Tue Jun 28 03:06:53 2022 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Tue, 28 Jun 2022 07:06:53 +0000 Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
In-Reply-To: <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> References: <6093DADD-223B-44F1-8E8A-4E996838ED34@ucdavis.edu> <27D911A3-9C51-48A6-8034-7FF3A3E89BBB@princeton.edu> <2f1d9928-543f-f4a0-feab-5a5a0cc1d4d7@rubic.rutgers.edu> <307D9939-4F3A-40FF-A19F-3CEABEAE315C@supsi.ch> <2293D07C-A5E3-4E66-9120-C14DE15239A7@supsi.ch> <29BC825D-F353-457A-A9FD-9F25F3D1A6DB@supsi.ch> <3155202C-080E-4BE7-84B6-A567E306AC1D@supsi.ch> <58AC5011-BF6A-453F-9A5E-FAE0F63E2B02@supsi.ch> Message-ID: After months of massive open online peer review, there is a revised version 3 of my report on the history of deep learning and on misattributions, supplementing my award-winning 2015 deep learning survey. The new version mentions (among many other things): 1. The non-learning recurrent architecture of Lenz and Ising (1920s), later reused in Amari's learning recurrent neural network (RNN) of 1972. After 1982, this was sometimes called the "Hopfield network." 2. Rosenblatt's MLP (around 1960) with non-learning randomized weights in a hidden layer, and an adaptive output layer. This was much later rebranded as "Extreme Learning Machines." 3. Amari's stochastic gradient descent for deep neural nets (1967). The implementation with his student Saito learned internal representations in MLPs at a time when compute was billions of times more expensive than today. 4. Fukushima's rectified linear units (ReLUs, 1969) and his CNN architecture (1979). The essential statements of the text remain unchanged, as their accuracy remains unchallenged: Scientific Integrity and the History of Deep Learning: The 2021 Turing Lecture, and the 2018 Turing Award https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html Jürgen > On 25 Jan 2022, at 18:03, Schmidhuber Juergen wrote: > > PS: Terry, you also wrote: "Our precious time is better spent moving the field forward." However, it seems like in recent years much of your own precious time has gone to promulgating a revisionist history of deep learning (and writing the corresponding "amicus curiae" letters to award committees). For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2] nor those learnt by Amari's stochastic gradient descent for MLPs in 1967-1968 [GD1-2]. Nor did your recent survey [S20] attempt to correct this as good science should strive to do. On the other hand, it seems you celebrated your co-author's birthday in a special session while you were head of NeurIPS, instead of correcting these inaccuracies and celebrating the true pioneers of deep learning, such as Ivakhnenko and Amari. Even your recent interview https://blog.paperspace.com/terry-sejnowski-boltzmann-machines/ claims: "Our goal was to try to take a network with multiple layers - an input layer, an output layer and layers in between - and make it learn. It was generally thought, because of early work that was done in AI in the 60s, that no one would ever find such a learning algorithm because it was just too mathematically difficult." You wrote this although you knew exactly that such learning algorithms were first created in the 1960s, and that they worked. You are a well-known scientist, head of NeurIPS, and chief editor of a major journal. You must correct this. We must all be better than this as scientists.
We owe it to both the past, present, and future scientists as well as those we ultimately serve. > > The last paragraph of my report https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html quotes Elvis Presley: "Truth is like the sun. You can shut it out for a time, but it ain't goin' away." I wonder how the future will reflect on the choices we make now. > > Jürgen > > >> On 3 Jan 2022, at 11:38, Schmidhuber Juergen wrote: >> >> Terry, please don't throw smoke candles like that! >> >> This is not about basic math such as Calculus (actually first published by Leibniz; later Newton was also credited for his unpublished work; Archimedes already had special cases thereof over 2000 years ago; the Indian Kerala school made essential contributions around 1400). In fact, my report addresses such smoke candles in Sec. XII: "Some claim that 'backpropagation' is just the chain rule of Leibniz (1676) & L'Hopital (1696). No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970 [BP1]." >> >> You write: "All these threads will be sorted out by historians one hundred years from now." To answer that, let me just cut and paste the last sentence of my conclusions: "However, today's scientists won't have to wait for AI historians to establish proper credit assignment. It is easy enough to do the right thing right now." >> >> You write: "let us be good role models and mentors" to the new generation. Then please do what's right! Your recent survey [S20] does not help. It's mentioned in my report as follows: "ACM seems to be influenced by a misleading 'history of deep learning' propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. XIII). It goes more or less like this: 'In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s [S20].' However, as mentioned above, the 1969 book [M69] addressed a 'problem' of Gauss & Legendre's shallow learning (~1800)[DL1-2] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method [DEEP1-2][DL2] (and then also by Amari's SGD for MLPs [GD1-2]). Minsky was apparently unaware of this and failed to correct it later [HIN](Sec. I).... deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere." >> >> Just follow ACM's Code of Ethics and Professional Conduct [ACM18] which states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." No need to wait for 100 years. >> >> Jürgen >> >> >> >> >> >>> On 2 Jan 2022, at 23:29, Terry Sejnowski wrote: >>> >>> We would be remiss not to acknowledge that backprop would not be possible without the calculus, >>> so Isaac Newton should also have been given credit, at least as much credit as Gauss. >>> >>> All these threads will be sorted out by historians one hundred years from now. >>> Our precious time is better spent moving the field forward. There is much more to discover. >>> >>> A new generation with better computational and mathematical tools than we had back >>> in the last century have joined us, so let us be good role models and mentors to them.
>>> >>> Terry >>> >>> ----- >>> >>> On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote: >>>> Asim wrote: "In fairness to Geoffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII. >>>> >>>> The deontology of science requires: If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations. Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." LBH didn't. >>>> >>>> Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs. In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). >>>> >>>> Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older except for the name. >>>> >>>> Happy New Year! >>>> >>>> Jürgen From francesca.naretto at sns.it Tue Jun 28 03:45:32 2022 From: francesca.naretto at sns.it (Francesca NARETTO) Date: Tue, 28 Jun 2022 09:45:32 +0200 Subject: Connectionists: XKDD2022 Call for Papers Message-ID: XKDD 2022 - Call for Papers ------------------------------------------------------------------------- 4th International Workshop on eXplainable Knowledge Discovery in Data Mining ------------------------------------------------------------------------- Due to the many requests received, we decided to extend the submission deadline to July 4, 2022. IMPORTANT DATES Paper Submission deadline: July 4, 2022 Accept/Reject Notification: July 20, 2022 Camera-ready deadline: July 31, 2022 Workshop: September 19, 2022 CONTEXT & OBJECTIVES In the past decade, machine learning based decision systems have been widely used in a wide range of application domains, such as credit scoring, insurance risk, and health monitoring, in which accuracy is of the utmost importance.
Although the support of these systems has immense potential to improve decisions in different fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise in many applications, and they are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans. Nowadays, most Artificial Intelligence (AI) systems are based on Machine Learning algorithms. The relevance of and need for ethics in AI are supported and highlighted by various initiatives that provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR regulation, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place; the "ACM Statement on Algorithmic Transparency and Accountability"; Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making"; and "The ethics guidelines for trustworthy AI" provided by the EU High-Level Expert Group on AI. The challenge of designing and developing trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains. The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. The workshop will seek top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Also, this year the workshop will seek submissions addressing important open issues in specific fields related to eXplainable AI (XAI), such as privacy and fairness, applications in real case studies, benchmarking, and the explanation of decision systems based on time series and graphs, which are becoming more and more important in today's applications. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD asks for contributions from researchers in academia and industry working on topics addressing these challenges, primarily from a technical point of view but also from a legal, ethical or sociological perspective. Topics of interest include, but are not limited to: TOPICS - Explainable Artificial Intelligence (XAI) - Interpretable Machine Learning - Transparent Data Mining - XAI for Fairness Checking approaches - XAI for Privacy-Preserving Systems - XAI for Federated Learning - XAI for Time Series based Approaches - XAI for Graph-based Approaches - XAI for Visualization - XAI in Human-Machine Interaction - XAI Benchmarking - Counterfactual Explanations - Ethics Discovery for Explainable AI - Privacy-Preserving Explanations - Transparent Classification Approaches - Explanation, Accountability and Liability from an Ethical and Legal Perspective - Iterative Dialogue Explanations - Explanatory Model Analysis - Human-Model Interfaces - Human-Centered Artificial Intelligence - Human-in-the-Loop Interactions - XAI Case Studies and Applications SUBMISSION & PUBLICATION All contributions will be reviewed by at least three members of the Program Committee.
Contributions can be up to 16 pages in LNCS format, i.e., the ECML PKDD 2022 submission format. All papers should be written in English. The following kinds of submissions will be considered: research papers, tool papers, case study papers and position papers. Detailed information on the submission procedure is available at the workshop web page: https://kdd.isti.cnr.it/xkdd2022/ Accepted papers will be published after the workshop by Springer in a volume of Lecture Notes in Computer Science (LNCS). The condition for inclusion in the post-proceedings is that at least one of the co-authors has registered to ECML-PKDD and presented the paper at the workshop. Pre-proceedings will be available online before the workshop. We also allow accepted papers to be presented without publication in the conference proceedings if the authors choose to do so. Some of the full paper submissions may be accepted as short papers after review by the Program Committee. A special issue of a relevant international journal with extended versions of selected papers is under consideration. The submission link is: https://easychair.org/conferences/?conf=xkdd2022 IMPORTANT DATES Paper Submission deadline: July 4, 2022 (extended) Accept/Reject Notification: July 20, 2022 Camera-ready deadline: July 31, 2022 Workshop: September 19, 2022 PROGRAM CO-CHAIRS * Przemyslaw Biecek, Warsaw University of Technology, Poland * Riccardo Guidotti, University of Pisa, Italy * Francesca Naretto, Scuola Normale Superiore, Pisa, Italy * Andreas Theissler, Aalen University of Applied Sciences, Aalen, Germany PROGRAM COMMITTEE * Leila Amgoud, CNRS, France * Francesco Bodria, Scuola Normale Superiore, Italy * Umang Bhatt, University of Cambridge, UK * Miguel Couceiro, INRIA, France * Menna El-Assady, AI Center of ETH, Switzerland * Josep Domingo-Ferrer, Universitat Rovira i Virgili, Spain * Françoise Fessant, Orange Labs, France * Andreas Holzinger, Medical University of Graz, Austria * Thibault Laugel, AXA, France * Paulo Lisboa, Liverpool John Moores University, UK * Marcin Luckner, Warsaw University of Technology, Poland * John Mollas, Aristotle University of Thessaloniki, Greece * Ramaravind Kommiya Mothilal, Everwell Health Solutions, India * Amedeo Napoli, CNRS, France * Roberto Prevete, University of Napoli, Italy * Antonio Rago, Imperial College London, UK * Jan Ramon, INRIA, France * Xavier Renard, AXA, France * Mahtab Sarvmaili, Dalhousie University, Canada * Christin Seifert, University of Duisburg-Essen, Germany * Udo Schlegel, Konstanz University, Germany * Mattia Setzu, University of Pisa, Italy * Dominik Slezak, University of Warsaw, Poland * Fabrizio Silvestri, Università di Roma, Italy * Francesco Spinnato, Scuola Normale Superiore, Italy * Vicenç Torra, Umea University, Sweden * Cagatay Turkay, University of Warwick, UK * Marco Virgolin, Chalmers University of Technology, Sweden * Martin Jullum, Norwegian Computing Center, Norway * Albrecht Zimmermann, Université de Caen, France * Guangyi Zhang, KTH Royal Institute of Technology, Sweden INVITED SPEAKERS * Prof. Wojciech Samek, TU Berlin * Prof. Anna Monreale, University of Pisa PARTICIPATION ECML-PKDD 2022 plans a hybrid organization for workshops. Therefore a person can attend an online event as long as she/he registers for the conference by using the video conference registration fee: https://2022.ecmlpkdd.org/index.php/registration/. Please note the video conference registration fee also allows you to follow the main conference.
However, for an in-person event, interactions and discussions are much easier face-to-face. Thus, we believe it is important that speakers attend the workshop in person to make it a fruitful event, and we highly encourage authors of submitted papers to plan to participate on-site. -- Francesca Naretto Ph.D. student in Data Science francesca.naretto at sns.it SNS, Pisa | CNR, Pisa From cgf at isep.ipp.pt Tue Jun 28 05:09:46 2022 From: cgf at isep.ipp.pt (Carlos) Date: Tue, 28 Jun 2022 10:09:46 +0100 Subject: Connectionists: CFP: BDL 2022 - IEEE SBAC-PAD 2022 - Extended Submission Deadline: 10 of July Message-ID: <49d67019-6552-670d-3048-752161c10891@isep.ipp.pt> --------------- CALL FOR PAPERS --------------- BDL 2022 Workshop on Big Data & Deep Learning in High Performance Computing in conjunction with IEEE SBAC-PAD 2022 Bordeaux, France, November 2-5, 2022 https://www.dcc.fc.up.pt/bdl2022/ ---------------------- Aims and scope of BDL ---------------------- The number of very large data repositories (big data) is increasing at a rapid pace. Analysis of such repositories using the traditional sequential implementations of Machine Learning (ML) and emerging techniques, like deep learning, that model high-level abstractions in data by using multiple processing layers, requires expensive computational resources and long running times. Parallel or distributed computing are possible approaches that can make analysis of very large repositories and exploration of high-level representations feasible. Taking advantage of a parallel or a distributed execution of an ML/statistical system may: i) increase its speed; ii) learn hidden representations; iii) search a larger space and reach a better solution; or iv) increase the range of applications where it can be used (because it can process more data, for example). Parallel and distributed computing is therefore of high importance for extracting knowledge from massive amounts of data and learning hidden representations. The workshop will be concerned with the exchange of experience among academics, researchers and industry practitioners whose work in big data and deep learning requires high performance computing to achieve goals. Participants will present recently developed algorithms/systems, ongoing work and applications taking advantage of such parallel or distributed environments. ------ Topics ------ BDL 2022 invites papers on all topics in novel data-intensive computing techniques, data storage and integration schemes, and algorithms for cutting-edge high performance computing architectures targeting Big Data and Deep Learning.
Examples of topics include but are not limited to: * parallel algorithms for data-intensive applications; * scalable data and text mining and information retrieval; * using Hadoop, MapReduce, Spark, Storm, Streaming to analyze Big Data; * energy-efficient data-intensive computing; * deep-learning with massive-scale datasets; * querying and visualization of large network datasets; * processing large-scale datasets on clusters of multicore and manycore processors, and accelerators; * heterogeneous computing for Big Data architectures; * Big Data in the Cloud; * processing and analyzing high-resolution images using high-performance computing; * using hybrid infrastructures for Big Data analysis; * new algorithms for parallel/distributed execution of ML systems; * applications of big data and deep learning to real-life problems. ------------------ Program Chairs ------------------ João Gama, University of Porto, Portugal Carlos Ferreira, Polytechnic Institute of Porto, Portugal Miguel Areias, University of Porto, Portugal ----------------- Program Committee ----------------- TBA ---------------- Important dates ---------------- Submission deadline: July 10, 2022 (AoE) Author notification: July 30, 2022 Camera-ready: September 12, 2022 Registration deadline: August 20, 2022 ---------------- Paper submission ---------------- Papers submitted to BDL 2022 must describe original research results and must not have been published or simultaneously submitted anywhere else. Manuscripts must follow the IEEE conference formatting guidelines and be submitted via the EasyChair Conference Management System as one pdf file. The strict page limit for initial submission and camera-ready version is 8 pages in the aforementioned format. Each paper will receive a minimum of three reviews by members of the international technical program committee. Papers will be selected based on their originality, relevance, technical clarity and quality of presentation. At least one author of each accepted paper must register for the BDL 2022 workshop and present the paper. ----------- Proceedings ----------- All accepted papers will be published at IEEE Xplore. Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From alice at hume.ai Tue Jun 28 17:22:22 2022 From: alice at hume.ai (Alice Baird) Date: Tue, 28 Jun 2022 17:22:22 -0400 Subject: Connectionists: Subject: [CFP] Deadline Extended for Vocal Emotion in Non-Verbal Vocalisations Competition at ACII 2022 Message-ID: Dear Community, We are delighted to announce that the first annual ACII Affective Vocal Burst (A-VB) Workshop and Competition is now open for competition registration, and the submission deadline has been extended. Within this first iteration of the ACII A-VB Challenge, the participants are presented with four emotion-focused sub-challenges that utilize the large-scale and "in-the-wild" Hume-VB dataset. The dataset and the four tracks draw attention to new innovations in emotion science as it pertains to vocal expression, addressing low- and high-dimensional theories of emotional expression, cultural variation, and "call types" (laugh, cry, sigh, etc.). The four tasks include: - The High-Dimensional Emotion Task (A-VB High): The A-VB High track explores a high-dimensional emotion space for understanding vocal bursts.
Participants will be challenged with predicting the intensity of 10 emotions (Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise) associated with each vocal burst as a multi-output regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC. - The Two-Dimensional Emotion Task (A-VB Two): In the A-VB Two track, we investigate a low-dimensional emotion space that is based on the circumplex model of affect. Participants will predict values of arousal and valence (on a scale from 1=unpleasant/subdued, 5=neutral, 9=pleasant/stimulated) as a regression task. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across the two dimensions. The baseline for this challenge will be based on CCC. - The Cross-Cultural Emotion Task (A-VB Culture): In the A-VB Culture track, participants will be challenged with predicting the intensity of 10 emotions associated with each vocal burst as a multi-output regression task, using a model or multiple models that generate predictions specific to each of the four cultures (the U.S., China, Venezuela, or South Africa). Specifically, annotations of each vocal burst will consist of culture-specific ground truth, meaning that the ground truth for each sample will be the average of annotations solely from the country of origin of the sample. Participants will report the average Concordance Correlation Coefficient (CCC), as well as the Pearson correlation coefficient, across all 10 emotions. The baseline for this challenge will be based on CCC. - The Expressive Burst-Type Task (A-VB Type): In the A-VB Type task, participants will be challenged with classifying the type of expressive vocal burst from 8 classes (Gasp, Laugh, Cry, Scream, Grunt, Groan, Pant, Other). Participants will report the Unweighted Average Recall (UAR) as a measure of performance. (A minimal sketch of the CCC and UAR metrics appears at the end of this message.) *The A-VB Workshop will also be accepting contributions on other related topics:* - Detecting and Understanding Nonverbal Vocalizations - Modeling Vocal Emotional Expression - Cross-Cultural Emotional Expression Modeling - Other topics related to Auditory Affective Computing See the following website for more information, rules, and deadlines: www.competitions.hume.ai *The general deadlines have been extended and are as follows:* - Challenge Opening (data available): May 27, 2022 - Baselines information released: TBA - Other Topics contributions deadline: July 22, 2022 *(Included in ACII Proceedings)* - Notification of Acceptance: July 29, 2022 - Camera Ready: August 15, 2022 - *Competition deadline: September 2, 2022* - *Competition Technical Report submission deadline: September 6, 2022* *(Peer reviewed by the A-VB technical committee, not included in ACII Proceedings)* - Notification of Acceptance: September 16, 2022 - Workshop: October 18-21, 2022 We look forward to hearing from interested parties! Please get in touch with competitions at hume.ai with any questions! More information can be found on our website: competitions.hume.ai/avb2022 Best, A-VB Workshop & Competition organizing team
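For participants unfamiliar with the two metrics named above, here is a minimal sketch of how average CCC and UAR can be computed with Python/NumPy. This is illustrative only, not official challenge code; the array shapes and function names are our own assumptions (regression predictions and labels as arrays of shape (n_samples, n_dimensions), and integer class labels for the type task).

import numpy as np

def ccc(y_true, y_pred):
    # Concordance Correlation Coefficient between two 1-D arrays:
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

def mean_ccc(Y_true, Y_pred):
    # Average CCC over output dimensions, e.g., the 10 emotions of
    # A-VB High or the 2 dimensions (valence, arousal) of A-VB Two.
    return np.mean([ccc(Y_true[:, j], Y_pred[:, j])
                    for j in range(Y_true.shape[1])])

def uar(y_true, y_pred):
    # Unweighted Average Recall for A-VB Type: the mean of per-class
    # recalls, so every class counts equally regardless of frequency.
    # Assumes every class occurs at least once in y_true.
    classes = np.unique(y_true)
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in classes])

For the A-VB Culture track, the same mean_ccc can simply be applied separately to the test samples of each of the four countries against their culture-specific ground truth.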
From dwang at cse.ohio-state.edu Tue Jun 28 12:04:29 2022 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Tue, 28 Jun 2022 16:04:29 +0000 Subject: Connectionists: NEURAL NETWORKS, July 2022 Message-ID: Neural Networks - Volume 151, July 2022 https://www.journals.elsevier.com/neural-networks Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks Anthony Triche, Anthony S. Maida, Ashok Kumar Cortical circuits for top-down control of perceptual grouping Maria Kon, Gregory Francis Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks Matteo Filippini, Davide Borra, Mauro Ursino, Elisa Magosso, Patrizia Fattori Branching Time Active Inference: The theory and its generality Theophile Champion, Lancelot Da Costa, Howard Bowman, Marek Grzes Neural feedback facilitates rough-to-fine information retrieval Xiao Liu, Xiaolong Zou, Zilong Ji, Gengshuo Tian, ... Si Wu Improving generalization of deep neural networks by leveraging margin distribution Shen-Huan Lyu, Lu Wang, Zhi-Hua Zhou GHNN: Graph Harmonic Neural Networks for semi-supervised graph-level classification Wei Ju, Xiao Luo, Zeyu Ma, Junwei Yang, ... Ming Zhang Learning a discriminative SPD manifold neural network for image set classification Rui Wang, Xiao-Jun Wu, Ziheng Chen, Tianyang Xu, Josef Kittler Golden subject is everyone: A subject transfer neural network for motor imagery-based brain computer interfaces Biao Sun, Zexu Wu, Yong Hu, Ting Li Dynamic Auxiliary Soft Labels for decoupled learning Yan Wang, Yongshun Zhang, Furao Shen, Jian Zhao Double structure scaled simplex representation for multi-view subspace clustering Liang Yao, Gui-Fu Lu Multigraph classification using learnable integration network with application to gender fingerprinting Nada Chaari, Mohammed Amine Gharsallaoui, Hatice Camgoz Akdag, Islem Rekik Provable training of a ReLU gate with an iterative non-gradient algorithm Sayar Karmakar, Anirbit Mukherjee Quantum support vector machine based on regularized Newton method Rui Zhang, Jian Wang, Nan Jiang, Hong Li, Zichen Wang Guaranteed approximation error estimation of neural networks and model modification Yejiang Yang, Tao Wang, Jefferson P. Woolard, Weiming Xiang Towards understanding theoretical advantages of complex-reaction networks Shao-Qun Zhang, Wei Gao, Zhi-Hua Zhou Lag H∞ synchronization of coupled neural networks with multiple state couplings and multiple delayed state couplings Yuting Cao, Linhao Zhao, Shiping Wen, Tingwen Huang Think positive: An interpretable neural network for image recognition Gurmail Singh Neural network for a class of sparse optimization with -regularization Zhe Wei, Qingfa Li, Jiazhen Wei, Wei Bian DGInet: Dynamic graph and interaction-aware convolutional network for vehicle trajectory prediction Jiyao An, Wei Liu, Qingqin Liu, Liang Guo, ... Tao Li Distributed k-winners-take-all via multiple neural networks with inertia Xiaoxuan Wang, Shaofu Yang, Zhenyuan Guo, Tingwen Huang TSFD-Net: Tissue specific feature distillation network for nuclei segmentation and classification Talha Ilyas, Zubaer Ibna Mannan, Abbas Khan, Sami Azam, ... Friso De Boer MoET: Mixture of Expert Trees and its application to verifiable reinforcement learning Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, ...
Sarfraz Khurshid Brain-inspired multiple-target tracking using Dynamic Neural Fields Shiva Kamkar, Hamid Abrishami Moghaddam, Reza Lashgari, Wolfram Erlhagen Adaptive modeling of nonnegative environmental systems based on projectional Differential Neural Networks observer Isaac Chairez, Olga Andrianova, Tatyana Poznyak, Alexander Poznyak Knowledge-based tensor subspace analysis system for kinship verification I. Serraoui, O. Laiadi, A. Ouamane, F. Dornaika, A. Taleb-Ahmed Informative pairs mining based adaptive metric learning for adversarial domain adaptation Mengzhu Wang, Paul Li, Li Shen, Ye Wang, ... Zhigang Luo Hippocampal formation-inspired probabilistic generative model Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa Evaluation of text-to-gesture generation model using convolutional neural network Eiichi Asakawa, Naoshi Kaneko, Dai Hasegawa, Shinichi Shirakawa From dpinots at yahoo.com Tue Jun 28 15:11:10 2022 From: dpinots at yahoo.com (Dimitris Pinotsis) Date: Tue, 28 Jun 2022 19:11:10 +0000 (UTC) Subject: Connectionists: Workshop on State of the Art Methods for Brain Data Analysis References: <1986322659.54111.1656443470015.ref@mail.yahoo.com> Message-ID: <1986322659.54111.1656443470015@mail.yahoo.com> State of the Art Methods for Brain Data Analysis - A workshop for computational, theoretical and cognitive neuroscientists, taking place on July 6 at City, University of London. This hybrid workshop will discuss state-of-the-art methods in brain imaging and how they inform our understanding of the neural basis of behaviour and cognition. It will be of interest to computational and cognitive neuroscientists who are keen on brain imaging, computational psychiatry and network-level dynamics. Speakers: - John Ashburner, UCL - Aldo Faisal, Imperial - Gene Fridman, Johns Hopkins - Alan Jasanoff, MIT - Marcus Kaiser, Nottingham - Mark Woolrich, Oxford Hybrid format (f2f and online) July 6th, 2022 09:50 - 17:00 B200 Lecture Theatre, University Building, 2nd Floor, City, University of London Read more here: About | Brain Data Analysis (braindatanalysis.wixsite.com) Registration is free. To register, please fill in this form: . For any questions, please email braindatanalysis at gmail.com From ioannakoroni at csd.auth.gr Wed Jun 29 02:11:35 2022 From: ioannakoroni at csd.auth.gr (Ioanna Koroni) Date: Wed, 29 Jun 2022 09:11:35 +0300 Subject: Connectionists: =?utf-8?q?Live_e-Lecture_by_Prof=2E_A=2E_del_Bimb?= =?utf-8?q?o=3A_=E2=80=9CSocial_interaction_in_trajectory_predictio?= =?utf-8?q?n_with_Memory_Augmented_Networks=E2=80=9D=2C_5th_July_20?= =?utf-8?q?22_17=3A00-18=3A00_CET=2E_Upcoming_AIDA_AI_excellence_le?= =?utf-8?q?ctures?= References: <05a701d88615$7cf44c90$76dce5b0$@csd.auth.gr> <008b01d88648$cce47080$66ad5180$@csd.auth.gr> Message-ID: <0d9601d88b7f$16bef370$443cda50$@csd.auth.gr> Dear AI scientist/engineer/student/enthusiast, Prof. A. del Bimbo (Università
di Firenze, Italy), a prominent AI & Digital Media researcher internationally, will deliver the e-lecture: "Social interaction in trajectory prediction with Memory Augmented Networks", on Tuesday 5th July 2022, 17:00-18:00 CET (8:00-9:00 am PST, 12:00-1:00 am CST), see details in: http://www.i-aida.org/ai-lectures/ You can join for free using the zoom link: https://authgr.zoom.us/j/95605045574 & Passcode: 148148 The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, VISION, currently in the process of formation, is very pleased to offer you top quality scientific lectures on several current hot AI topics. Lectures will be offered alternatingly by: top highly-cited senior AI scientists internationally, or young AI scientists with promise of excellence (AI sprint lectures). These lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed on future lectures, you can register in the AIDA and CVML email lists. Best regards Profs. M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, N. Sebe, J. Stefanowski From tiako at ieee.org Wed Jun 29 12:03:37 2022 From: tiako at ieee.org (Pierre F. Tiako) Date: Wed, 29 Jun 2022 11:03:37 -0500 Subject: Connectionists: (CFP Due July 11) CDTE 2022 - Data Technology and Engineering, Oct 3-6, OKC.USA.Online Message-ID: --- Call for Abstracts and Papers ------------- 2022 OkIP International Conference on Data Technology and Engineering (CDTE) Downtown Oklahoma City, OK, USA & Online October 3-6, 2022 https://eventutor.com/e/CDTE002 OkIP Published & SCOPUS/WoS Indexed Submission Deadline: July 11, 2022 Extended versions of the best papers will be considered for journal publication.
>> Contribution Types (One-Column IEEE Format Style): - Full Paper: Accomplished research results (10 pages) - Short Paper: Work in progress/fresh developments (6 pages) - Extended Abstract/Poster/Journal First: Displayed/Oral presented (3 pages) >> Areas: * Data Concepts - Data Filtering | Data Conversion - Data Structures | Data Management - Data Virtualization | Data Integrity - Data Integration | Data Retrieval - Data Representation | Data Aggregation - Data Types | Data Grids | Data Warehouse - Metadata | Data Integration | Data fusion - Data Standards | Data Workflow - Data Interoperability | Data Integrity - Data Security/Privacy/Trust | Data Control * Data Analytics and Processing - Business Intelligence | Data Governance - Descriptive analytics | Critical Device Data - Raw Data | Data Capture | Data Ingestion - Data Transformation | Data Processing - Data Visualization | Data Queries - Analytical Workloads | Prescriptive Analytics - Historical Data and Business Metrics - Transactional Workloads | Data Repositories - Batch and Streaming Data - Real-time Data | Point-of-Sale Data - Statistics | Exploratory Data Analysis - Diagnostic Analytics | Cognitive Analytics - Data Warehouse Management - Predictive Analytics | Social Data Analytics - Online Analytical Processing - Semi-Structured Data | Unstructured Data * Databases - Database System Internals and Performance - XML Databases | Graph Database - Temporal Databases | Spatial Databases - Query Optimization Techniques - Multimedia Databases | Distributed Databases - Mobile Databases | WWW and Databases - NoSQL Databases | Very Large Databases - Object-Oriented Database Systems - Database Architecture and Design * AI in Data and Big Data - Data Encryption Techniques - Data Mining Theoretical Foundation - Scientific and Statistical Data Mining - Data Mining and Knowledge Discovery - Data Mining Products/Systems/Languages - Big Data Search/Mining | Web Mining - Decision Support Data Systems - Dimensional Data Modeling - Big Data Security/Privacy/Trust - Big Data Infrastructure | Web Analytics - Text Analytics | Big Data as a Service - Big Data and Information/Data Quality - Change Detection | Big Data Applications - Social Web Search and Mining - Deep Learning and Big Data - Big Data Computational Models - Smart Grid Big Data | Text Mining - Big Data Cloud Computing - Big Data Stream Computing - Intelligent Data Retrieval System * Data and Databases Applications - Database Applications and Experiences - Scientific and Biological Databases - Smart Cities and Urban Data Analytics - Sensor Network Data Management - In-Network Data Processing - In-Memory/Purpose-built Databases - Distributed/Parallel/Peer to Peer Databases - Deep/Dark/Hidden Web Data Management - Energy-Efficient Data Centers - Storage Systems Security/Reliability - Data Loss/Breach Prevention & Protection - Visual and Audio Data Mining - Information Visualization - Open Source Databases - Software Engineering Data - Virtualized Data Center Network - Medical/Biomedical Big Data - Medical Data Interoperability/Security * Data and Legal Issues - Data Privacy Issues | Sensitive Data - Data Regulation Laws | Data Protection Laws - Privacy-Preserving Techniques - Data Privacy Issues | Privacy Standards - Data Collection and Storage Issues - Intellectual Property/Copyright Laws >> Important Dates: - Submission Deadline: July 11, 2022 - Notification Due: August 01, 2022 - Camera-ready Due: August 22, 2022 >> Technical Program Committee
https://eventutor.com/event/21/page/60-committee

Please feel free to contact us for any inquiries at: info at okipublishing.com

--------
Pierre Tiako
General Chair

From Donald.Adjeroh at mail.wvu.edu Thu Jun 30 00:00:38 2022
From: Donald.Adjeroh at mail.wvu.edu (Donald Adjeroh)
Date: Thu, 30 Jun 2022 04:00:38 +0000
Subject: Connectionists: Deadline approaching -- 2 days to go: SBP-BRiMS'2022: Social Computing, Behavior-Cultural Modeling, Prediction and Simulation

Apologies if you receive multiple copies

SBP-BRiMS 2022
2022 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation
September 20-23, 2022
Held in hybrid mode (virtually and in person at Pittsburgh, USA)
http://sbp-brims.org/
#sbpbrims

The goal of this conference is to build a new community of social cyber scholars by bringing together and fostering interaction between members of the scientific, corporate, government and military communities interested in understanding, forecasting, and impacting human socio-cultural behavior. It is the charge to this community to build this new science, its theories, methods, and its scientific culture in a way that does not give priority to either social science or computer science, and to embrace change as the cornerstone of the community. Despite decades of work in this area, this scientific field is still in its infancy. To meet this charge and move this science to the next level, the community must meet three challenges: 1) deep understanding of socio-cognitive reasoning, 2) human-technology integration, and 3) re-usable computational methods.

Topics include but are not limited to the following:
- Social Cybersecurity
- Social Network Modeling
- Human Behavior Modeling
- Agent-Based Models
- Models of Human-Autonomy Interaction
- Health and Epidemiological Models
- Validation Methods and Human Experimentation

All papers are qualified for the Best Paper Award. Papers with student first authors will be considered for the Best Student Paper Award.

See also the special Call for Panels at SBP-BRiMS'22:
http://sbp-brims.org/2022/Call%20For%20Panels/

IMPORTANT DATES:
Paper/Abstract Submission: 01-Jul-2022 (Midnight EST)
Author Notification: 29-Jul-2022
Panel Proposals Due: 01-Jul-2022
Panel Notification: 29-Jul-2022
Tutorial Submission: 22-Aug-2022
Decision Notification: 29-Aug-2022
Challenge Response Due: 22-Aug-2022
Challenge Notification: 29-Aug-2022
Final Files Due: 15-Aug-2022

HOW TO SUBMIT:
For information on paper submission, check here. You will be able to update your submission until the final paper deadline.

PAPER FORMATTING GUIDELINE:
The papers must be in English and MUST be formatted according to the Springer-Verlag LNCS/LNAI guidelines. View sample LaTeX2e and WORD files. All regular paper submissions should be submitted as a paper with a maximum of 10 pages. Total page count includes all figures, tables, and references.

CHALLENGE PROBLEM:
The conference expects to announce a computational challenge as in previous years. Additional details will be posted in December. Follow us on Facebook, Twitter and LinkedIn to receive updates.

PRE-CONFERENCE TUTORIAL SESSIONS:
Several half-day sessions will be offered on the day before the full conference.
More details regarding the pre-conference tutorial sessions will be posted as soon as this information becomes available.

FUNDING PANEL & CROSS-FERTILIZATION ROUNDTABLES:
The purpose of the cross-fertilization roundtables is to help participants become better acquainted with people outside of their discipline with whom they might consider partnering on future SBP-BRiMS related research collaborations. The Funding Panel provides an opportunity for conference participants to interact with program managers from various federal funding agencies, such as the National Science Foundation (NSF), National Institutes of Health (NIH), Office of Naval Research (ONR), Air Force Office of Scientific Research (AFOSR), Defense Threat Reduction Agency (DTRA), Defense Advanced Research Projects Agency (DARPA), Army Research Office (ARO), National Geospatial-Intelligence Agency (NGA), and the Department of Veterans Affairs (VA).

ATTENDANCE SCHOLARSHIPS:
It is anticipated that a limited number of attendance scholarships will be available on a competitive basis to students who are presenting papers. Additional information will be provided soon.

Follow us on Facebook, Twitter and LinkedIn to receive updates.
Visit our website: http://sbp-brims.org/
Download the Call for Papers in PDF format here.

From alessandro.dausilio at gmail.com Thu Jun 30 04:40:21 2022
From: alessandro.dausilio at gmail.com (Alessandro D'Ausilio)
Date: Thu, 30 Jun 2022 10:40:21 +0200
Subject: Connectionists: [JOBS] PhD @ Italian Institute of Technology, Ferrara
Message-ID: <985197CC-79B3-47E5-85C8-8E3F05609AE2@gmail.com>

PHD PROGRAM IN TRANSLATIONAL NEUROSCIENCES AND NEUROTECHNOLOGIES

The Center for Translational Neurophysiology of Speech and Communication (CTNSC) @ Italian Institute of Technology (IIT), jointly with the University of Ferrara, is opening up to 8 PhD positions starting on November 1st, 2022.

Research areas:
- Improving performance and biocompatibility of electrode arrays for brain-computer interfaces
- Organic neuroelectronics for multimodal recordings and stimulation of the brain in vivo
- Hardware and software development for innovative exploration of brain signals
- Machine learning applications to multimodal brain and speech signals
- Investigation of sensorimotor functions in animal models
- Cortical recordings in human patients during awake neurosurgery
- Human non-invasive neurophysiology of speech and sensorimotor communication by means of TMS, EEG, EMG and MoCap

Who: physicists, computer scientists, biomedical/electrical engineers, biologists, biotechnologists, medical doctors and experimental psychologists eager to work in an international and multidisciplinary team.

Where: The CTNSC (https://www.iit.it/it/ctnsc-unife) is hosted by the University of Ferrara (UNIFE) in a prestigious historical building in the city center. Ferrara is a well-connected Renaissance city (30 min to Bologna, 40 min to Padua, 60 min to Venice; 2 nearby international airports), bustling with students (https://whc.unesco.org/en/list/733).

General INFO: http://www.unife.it/studenti/dottorato/it/corsi/riforma/neuroscience
Application INFO: http://www.unife.it/studenti/dottorato/concorsi/selection
Application website: https://pica.cineca.it/unife/dottorati-38-ntn/

DEADLINE: July 25th, 2022
From publicity at acsos.org Thu Jun 30 09:26:47 2022
From: publicity at acsos.org (ACSOS Conference)
Date: Thu, 30 Jun 2022 09:26:47 -0400
Subject: Connectionists: Autonomic Computing & Self-Organizing Systems: Special Regional Event in the UK

We are pleased to announce that the next Special Event hosted by the Autonomic Computing and Self-Organizing Systems (ACSOS) community will be held in Lancaster, UK, on 29th July 2022. The event will run in hybrid mode, so remote attendance is also possible. Registration is entirely free for both in-person and remote attendees.

*PROGRAMME*
All times are BST (British Summer Time)
10:30 BST - Verification in Autonomous Systems: Prof. Michael Fisher, Dr. Colin Paterson, Dr. Simos Gerasimou
11:30 BST - Software Engineering for Self-* Systems: Dr. Nelly Bencomo, Dr. Rami Bahsoon
13:30 BST - Machine Learning for Autonomous Systems: Dr. Chloe Barnes, Dr. Barry Porter, Dr. Faiza Samreen
14:30 BST - Demo Session
In-person social event!

Find more information at https://2022.acsos.org/track/acsos-2022-special-events or contact Abdessalam Elhabbash at a.elhabbash at lancaster.ac.uk

From Pavis at iit.it Thu Jun 30 09:36:20 2022
From: Pavis at iit.it (Pavis)
Date: Thu, 30 Jun 2022 13:36:20 +0000
Subject: Connectionists: Application Deadline extended to July 6 - 2 PHD POSITIONS on Computational Vision at PAVIS - IIT Italy & University of Genoa, Italy
Message-ID: <73a6ba59aea340d690d71960a7b39a44@iit.it>

2 PHD POSITIONS ON COMPUTATIONAL VISION AT IIT - PAVIS IN COLLABORATION WITH UNIVERSITY OF GENOA, ITALY

The Italian Institute of Technology (IIT, www.iit.it), in collaboration with the University of Genoa (https://unige.it/en), funds 2 PhD scholarships on Computational Vision, Automatic Recognition and Learning. Research and training activities are jointly conducted between the DITEN Department of the University of Genova (http://phd-stiet.diten.unige.it/) and IIT infrastructures in Genoa, at the PAVIS (Pattern Analysis and Computer Vision) research line (https://pavis.iit.it/), led by its Principal Investigator, Alessio Del Bue.

RESEARCH TOPICS:
Theme A: 3D scene understanding with geometrical and deep learning reasoning
Theme B: Deep learning for multi-modal scene understanding
Theme C: Self-supervised and unsupervised deep learning
Theme D: Visual reasoning with knowledge and graph neural networks

Detailed description at: https://pavisdata.iit.it/data/phd/2023_ResearchTopicsPhD_IIT-PAVIS.pdf

PAVIS
The PhD program on the listed topics will take place at the PAVIS research line of IIT, located in Genova (www.iit.it). The department focuses on activities related to the analysis and understanding of images, videos and patterns in general, also in collaboration with other research groups at IIT. PAVIS staff has wide expertise in computer vision and pattern recognition, machine learning, image processing, and related applications (assistive and monitoring AI systems). For more information, you can also browse the PAVIS webpage (http://pavis.iit.it/) to see our activities and research. Successful candidates will be part of an exciting and international working environment and will work in brand new laboratories equipped with state-of-the-art instrumentation.
Excellent communication skills in English, as well as the ability to interact effectively with members of the research team, are mandatory.

HOW TO APPLY
Full information, the official call and course descriptions are available at:
ITALIAN: https://unige.it/usg/it/dottorati-di-ricerca
ENGLISH: https://unige.it/en/usg/en/phd-programmes
Official call: https://unige.it/sites/contenuti.unige.it/files/documents/BANDO%2038%20CICLO%20-%20EN.pdf

The course description for the XXXVIII PhD Course in Science and Technology for Electronic and Telecommunication Engineering, curriculum in Computer Vision, Automatic Recognition and Learning (CODE 9320), is on page 121 of the list of PhD programmes:
https://unige.it/sites/contenuti.unige.it/files/documents/ALLEGATO_A_XXXVIII%20-%20EN.pdf

Follow the steps listed:
1. Choose the programme
2. Review the application
3. Apply here: https://servizionline.unige.it/studenti/post-laurea/dottorato/domanda following the detailed instructions: https://unige.it/sites/contenuti.unige.it/files/documents/Guida_eng_XXXVIII.pdf

WHAT TO SUBMIT
A detailed CV, a research proposal under one or more of the topics indicated above, reference letters, and any other formal documents concerning the degrees earned. Note that these documents are mandatory for the application to be considered valid. Refer also to the indications stated on page 121 of the course description document mentioned above.

IMPORTANT: In order to apply, candidates must prepare the research proposal based on the research topics mentioned above. Please follow these indications to prepare it: https://pavisdata.iit.it/data/phd/ResearchProjectTemplate.pdf

For FURTHER INFORMATION on the research topics, contact Dr. Del Bue at pavis at iit.it

DEADLINE
The deadline for application has been extended to July 6, 2022, 12:00 PM Italian time (CEST). STRICT DEADLINE, NO EXTENSION. Apply before the deadline; the application process is not immediate: don't wait for the final day.

From decebalmocanu at gmail.com Thu Jun 30 12:31:24 2022
From: decebalmocanu at gmail.com (Decebal Mocanu)
Date: Thu, 30 Jun 2022 10:31:24 -0600
Subject: Connectionists: Postdoc - sparse training - evolutionary algorithms - reinforcement learning

Dear all,

We have an open postdoctoral researcher position in deep reinforcement learning, evolutionary algorithms, and sparse training of artificial neural networks, with humans and robots in the loop :) Does it sound challenging enough? If so, please read more and apply by 14 July 2022 here:
https://internal.jobs.vu.nl/ad/postdoctoral-researcher-in-computational-intelligence/gw5ip4

Job location: Amsterdam (mainly) and Twente, the Netherlands
Contract period: 1 + 1.5 years

Best wishes,
Decebal Mocanu
https://people.utwente.nl/d.c.mocanu

From stephen.jose.hanson at rutgers.edu Thu Jun 30 10:22:52 2022
From: stephen.jose.hanson at rutgers.edu (Stephen Jose Hanson)
Date: Thu, 30 Jun 2022 14:22:52 +0000
Subject: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
Message-ID: <0a114e7e-a93b-4c4e-da3b-f47088c085b4@psychology.rutgers.edu>

So, as usual, clearly written but without a lot of context. Jürgen, you are part of history, not a historian.

Now, on the one hand, I really do appreciate your attempts to revive my stochastic delta rule as a precursor to dropout. And I know that Geoff spent some time explaining how he thought of dropout at a train station, or somewhere with queues, and it made him think of some sort of stochastic process at the hidden layer across the queues, etc. It is notable that he and I had a conversation in 1990 at NIPS in Denver, and he asked me: "How did you come up with this algorithm?" I said it seemed like a nice compromise between backpropagation and a Boltzmann machine. But again, there can be convergent lines of invention.

I think the important thing in all of this is that the author, with their own judgement (and the reviewers), must determine what is really basic-level to the research rather than superordinate: as calculus is to backprop, or Sumerian counting is to calculus, and fishing was to counting, etc.

Steve
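For readers who do not know both algorithms, the resemblance discussed here is easy to see in code: dropout injects multiplicative Bernoulli noise on the hidden units, while the stochastic delta rule (SDR) injects Gaussian noise on the weights, with each weight's mean and standard deviation treated as learnable parameters. Below is a minimal NumPy sketch of the two forward passes; it is an illustration only, not either author's original implementation, and all function names, shapes, and constants are made up for the example.

import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_drop=0.5):
    # Dropout: zero each hidden unit independently with probability
    # p_drop, scaling survivors so the expected activation is
    # unchanged (the "inverted dropout" convention).
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

def sdr_forward(x, w_mean, w_std):
    # Stochastic delta rule: every weight is a Gaussian random
    # variable; each forward pass samples w ~ N(w_mean, w_std**2)
    # before the usual matrix multiply.
    w_sample = w_mean + w_std * rng.standard_normal(w_mean.shape)
    return x @ w_sample

# Toy usage: 4 inputs -> 3 hidden units (sizes are illustrative).
x = rng.standard_normal((1, 4))
w_mean = 0.1 * rng.standard_normal((4, 3))
w_std = np.full((4, 3), 0.05)   # in SDR these are trained and annealed

h_dropout = dropout_forward(x @ w_mean)   # noise on units
h_sdr = sdr_forward(x, w_mean, w_std)     # noise on weights

In SDR the per-weight standard deviations are typically annealed toward zero during training, so the net converges to a deterministic one; dropout instead keeps a fixed drop probability and simply rescales activations. Under this reading, both are stochastic regularizers that differ mainly in where the noise enters.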
On 6/28/22 3:06 AM, Schmidhuber Juergen wrote:

After months of massive open online peer review, there is a revised version 3 of my report on the history of deep learning and on misattributions, supplementing my award-winning 2015 deep learning survey. The new version mentions (among many other things):

1. The non-learning recurrent architecture of Lenz and Ising (1920s), later reused in Amari's learning recurrent neural network (RNN) of 1972. After 1982, this was sometimes called the "Hopfield network."
2. Rosenblatt's MLP (around 1960) with non-learning randomized weights in a hidden layer, and an adaptive output layer. This was much later rebranded as "Extreme Learning Machines."
3. Amari's stochastic gradient descent for deep neural nets (1967). The implementation with his student Saito learned internal representations in MLPs at a time when compute was billions of times more expensive than today.
4. Fukushima's rectified linear units (ReLUs, 1969) and his CNN architecture (1979).

The essential statements of the text remain unchanged as their accuracy remains unchallenged:

Scientific Integrity and the History of Deep Learning: The 2021 Turing Lecture, and the 2018 Turing Award
https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html

Jürgen

On 25 Jan 2022, at 18:03, Schmidhuber Juergen wrote:

PS: Terry, you also wrote: "Our precious time is better spent moving the field forward." However, it seems like in recent years much of your own precious time has gone to promulgating a revisionist history of deep learning (and writing the corresponding "amicus curiae" letters to award committees). For a recent example, your 2020 deep learning survey in PNAS [S20] claims that your 1985 Boltzmann machine [BM] was the first NN to learn internal representations. This paper [BM] neither cited the internal representations learnt by Ivakhnenko & Lapa's deep nets in 1965 [DEEP1-2] nor those learnt by Amari's stochastic gradient descent for MLPs in 1967-1968 [GD1-2]. Nor did your recent survey [S20] attempt to correct this as good science should strive to do. On the other hand, it seems you celebrated your co-author's birthday in a special session while you were head of NeurIPS, instead of correcting these inaccuracies and celebrating the true pioneers of deep learning, such as Ivakhnenko and Amari. Even your recent interview (https://blog.paperspace.com/terry-sejnowski-boltzmann-machines/) claims: "Our goal was to try to take a network with multiple layers - an input layer, an output layer and layers in between - and make it learn. It was generally thought, because of early work that was done in AI in the 60s, that no one would ever find such a learning algorithm because it was just too mathematically difficult." You wrote this although you knew exactly that such learning algorithms were first created in the 1960s, and that they worked.

You are a well-known scientist, head of NeurIPS, and chief editor of a major journal. You must correct this. We must all be better than this as scientists. We owe it to both the past, present, and future scientists as well as those we ultimately serve. The last paragraph of my report (https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html) quotes Elvis Presley: "Truth is like the sun. You can shut it out for a time, but it ain't goin' away." I wonder how the future will reflect on the choices we make now.

Jürgen

On 3 Jan 2022, at 11:38, Schmidhuber Juergen wrote:

Terry, please don't throw smoke candles like that! This is not about basic math such as calculus (actually first published by Leibniz; later Newton was also credited for his unpublished work; Archimedes already had special cases thereof over 2000 years ago; the Indian Kerala school made essential contributions around 1400). In fact, my report addresses such smoke candles in Sec. XII: "Some claim that 'backpropagation' is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970 [BP1]." You write: "All these threads will be sorted out by historians one hundred years from now."
To answer that, let me just cut and paste the last sentence of my conclusions: "However, today's scientists won't have to wait for AI historians to establish proper credit assignment. It is easy enough to do the right thing right now." You write: "let us be good role models and mentors" to the new generation. Then please do what's right! Your recent survey [S20] does not help. It's mentioned in my report as follows: "ACM seems to be influenced by a misleading 'history of deep learning' propagated by LBH & co-authors, e.g., Sejnowski [S20] (see Sec. XIII). It goes more or less like this: 'In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s [S20].' However, as mentioned above, the 1969 book [M69] addressed a 'problem' of Gauss & Legendre's shallow learning (~1800) [DL1-2] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method [DEEP1-2][DL2] (and then also by Amari's SGD for MLPs [GD1-2]). Minsky was apparently unaware of this and failed to correct it later [HIN] (Sec. I). ... Deep learning research was alive and kicking also in the 1970s, especially outside of the Anglosphere."

Just follow ACM's Code of Ethics and Professional Conduct [ACM18], which states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." No need to wait for 100 years.

Jürgen

On 2 Jan 2022, at 23:29, Terry Sejnowski wrote:

We would be remiss not to acknowledge that backprop would not be possible without the calculus, so Isaac Newton should also have been given credit, at least as much credit as Gauss.

All these threads will be sorted out by historians one hundred years from now. Our precious time is better spent moving the field forward. There is much more to discover. A new generation with better computational and mathematical tools than we had back in the last century has joined us, so let us be good role models and mentors to them.

Terry

-----

On 1/2/2022 5:43 AM, Schmidhuber Juergen wrote:

Asim wrote: "In fairness to Jeffrey Hinton, he did acknowledge the work of Amari in a debate about connectionism at the ICNN'97 .... He literally said 'Amari invented back propagation'..." when he sat next to Amari and Werbos. Later, however, he failed to cite Amari's stochastic gradient descent (SGD) for multilayer NNs (1967-68) [GD1-2a] in his 2015 survey [DL3], his 2021 ACM lecture [DL3a], and other surveys. Furthermore, SGD [STO51-52] (Robbins, Monro, Kiefer, Wolfowitz, 1951-52) is not even backprop. Backprop is just a particularly efficient way of computing gradients in differentiable networks, known as the reverse mode of automatic differentiation, due to Linnainmaa (1970) [BP1] (see also Kelley's precursor of 1960 [BPa]). Hinton did not cite these papers either, and in 2019 embarrassingly did not hesitate to accept an award for having "created ... the backpropagation algorithm" [HIN]. All references and more on this can be found in the report, especially in Sec. XII.

The deontology of science requires: if one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later [DLC], and correctly give credit in all follow-up papers and presentations.
Also, ACM's Code of Ethics and Professional Conduct [ACM18] states: "Computing professionals should therefore credit the creators of ideas, inventions, work, and artifacts, and respect copyrights, patents, trade secrets, license agreements, and other methods of protecting authors' works." LBH didn't.

Steve still doesn't believe that linear regression of 200 years ago is equivalent to linear NNs. In a mature field such as math we would not have such a discussion. The math is clear. And even today, many students are taught NNs like this: let's start with a linear single-layer NN (activation = sum of weighted inputs). Now minimize mean squared error on the training set. That's good old linear regression (method of least squares). Now let's introduce multiple layers and nonlinear but differentiable activation functions, and derive backprop for deeper nets in 1960-70 style (still used today, half a century later). Sure, an important new variation of the 1950s (emphasized by Steve) was to transform linear NNs into binary classifiers with threshold functions. Nevertheless, the first adaptive NNs (still widely used today) are 1.5 centuries older, except for the name.

Happy New Year!

Jürgen
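Spelled out in code, the pedagogical progression described above looks as follows. This is a minimal NumPy sketch under illustrative assumptions (toy data, fixed learning rate, one tanh hidden layer), not anyone's historical implementation. Part 1 trains a single linear layer by SGD on mean squared error, which is exactly least-squares linear regression; part 2 inserts a nonlinear hidden layer and obtains the gradients by applying the chain rule backwards through the network, the reverse-mode computation that backprop makes efficient.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))   # toy inputs (illustrative)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

# Part 1: linear single-layer "NN" + SGD on squared error
#         == the method of least squares, up to optimization noise.
w = np.zeros(3)
for _ in range(2000):
    i = rng.integers(100)
    err = X[i] @ w - y[i]       # prediction error on one example
    w -= 0.01 * err * X[i]      # SGD step on 0.5 * err**2

# Part 2: add one nonlinear (tanh) hidden layer; gradients come from
#         the chain rule applied backwards through the net.
W1 = 0.1 * rng.standard_normal((3, 8))
W2 = 0.1 * rng.standard_normal(8)
for _ in range(2000):
    i = rng.integers(100)
    h = np.tanh(X[i] @ W1)      # forward pass
    err = h @ W2 - y[i]
    g2 = err * h                                   # dLoss/dW2
    g1 = np.outer(X[i], err * W2 * (1 - h**2))     # dLoss/dW1 (chain rule)
    W2 -= 0.01 * g2
    W1 -= 0.01 * g1

With enough steps, the w of part 1 approaches the coefficients that np.linalg.lstsq(X, y, rcond=None) would return directly, which is the precise sense in which the single-layer linear net and 200-year-old regression coincide.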
From junfeng989 at gmail.com Thu Jun 30 20:56:39 2022
From: junfeng989 at gmail.com (Jun Feng)
Date: Thu, 30 Jun 2022 20:56:39 -0400
Subject: Connectionists: The 2022 IEEE International Conference on Privacy Computing

CFP IEEE PriComp 2022
The 2022 IEEE International Conference on Privacy Computing (PriComp 2022)
Dec. 15-18, Haikou, China [Submission Deadline: Sep. 1]
http://www.ieee-smart-world.org/2022/pricomp/

PriComp 2022 is the 8th in a series of conferences, started in 2015, devoted to algorithms and architectures for privacy computing. The PriComp conference provides a forum for academics and practitioners from around the world to exchange ideas for improving the efficiency, performance, reliability, security and interoperability of privacy computing systems and applications. Following the traditions of the previous successful PriComp conferences held in Fuzhou, China (2015); Qingdao, China (2016); Melbourne, Australia (2017); Boppard, Germany (2018); Canterbury, UK (2019); Hainan, China (2020); and Xi'an/Shanghai, China (online, 2021), PriComp 2022 will be held in Haikou, China.

PriComp 2022 will focus on the evolving pathway from privacy protection to privacy computing, serving as an international premier forum for engineers and scientists in academia, industry, and government to address the resulting profound challenges and to present and discuss their new ideas, research results, applications and experience on all aspects of privacy computing. PriComp 2022 is co-organized by the Chinese Information Processing Society of China, the Institute of Information Engineering, CAS, and Hainan University.

==================
Important Dates
==================
Workshop Proposal: July 15, 2022
Paper Submission: September 01, 2022
Author Notification: October 01, 2022
Camera-Ready Submission: October 31, 2022
Conference Date: December 15-18, 2022

==================
Topics of interest include, but are not limited to
==================
- Theories and foundations for privacy computing
- Programming languages and compilers for privacy computing
- Privacy computing models
- Privacy metrics and formalization
- Privacy taxonomies and ontologies
- Privacy information management and engineering
- Privacy operation and modeling
- Data utility and privacy loss
- Cryptography for privacy protection
- Privacy protection based information hiding and sharing
- Data analytics oriented privacy control and protection
- Privacy-aware information collection
- Privacy sensing and distribution
- Combined and comprehensive privacy protection
- Privacy-preserving data publishing
- Private information storage
- Private integration and synergy
- Private information exchange and sharing
- Privacy inference and reasoning
- Internet and web privacy
- Cloud privacy
- Social media privacy
- Mobile privacy
- Location privacy
- IoT privacy
- Behavioral advertising
- Privacy in large ecosystems such as smart cities
- Privacy of AI models and systems
- AI for privacy computing
- Privacy and blockchain
- User-centric privacy protection solutions
- Human factors in privacy computing
- Privacy nudging
- Automated solutions for privacy policies and notices
- Legal issues in privacy computing and other interdisciplinary topics

==================
Paper Submission
==================
All papers must be submitted electronically through the conference submission website (https://edas.info/N29960) in PDF format. The material presented in the papers should not be published or under submission elsewhere. Each paper is limited to 8 pages (or 10 pages with an over-length charge), including figures and references, using the IEEE Computer Society Proceedings Manuscript style (two columns, single-spaced, 10-point font). You can consult the IEEE Computer Society Proceedings Author Guidelines at:
http://www.computer.org/web/cs-cps/
Manuscript templates for conference proceedings can be found at:
https://www.ieee.org/conferences_events/conferences/publishing/templates.html
Once accepted, the paper will be included in the IEEE conference proceedings published by IEEE Computer Society Press (indexed by EI). At least one author of any accepted paper is requested to register for the conference.

==================
Organizing Committee
==================
General Chairs
- Fenghua Li, Institute of Information Engineering, CAS, China
- Laurence T. Yang, Hainan University, China
- Willy Susilo, University of Wollongong, Australia

Program Chairs
- Hui Li, Xidian University, China
- Mamoun Alazab, Charles Darwin University, Australia
- Jun Feng, Huazhong University of Science and Technology, China

Local Chairs
- Weidong Qiu, Shanghai Jiaotong University, China
- Jieren Cheng, Hainan University, China

--
Dr. Jun Feng
Huazhong University of Science and Technology
Mobile: +86-18827365073
WeChat: junfeng10001000
E-Mail: junfeng989 at gmail.com