From Pavis at iit.it Thu Feb 1 06:00:46 2024 From: Pavis at iit.it (Pavis) Date: Thu, 1 Feb 2024 11:00:46 +0000 Subject: Connectionists: Call - 2 Postdoc positions on multi-modal models for 3D scene understanding [Jobs] In-Reply-To: References: Message-ID: <274f5eb209534a5ea1c319795a6d10e3@iit.it> Two Postdoctoral positions on multi-modal models for 3D scene understanding - (2400000V) Commitment & contract: collaboration contract, at least 12 months Location: Genova, Italy ABOUT US At IIT we work enthusiastically to develop human-centered Science and Technology to tackle some of the most pressing societal challenges of our times and transfer these technologies to the production system and society. Our Genoa headquarters are closely interconnected with the other 11 centers around Italy and two outstations based in the US. We promote excellence in basic and applied research in areas such as computation, neuroscience and cognition, humanoid technologies and robotics, nanotechnology, and materials, for a truly multidisciplinary scientific experience. YOUR TEAM We are hiring two Postdocs to join the Pattern Analysis and Computer Vision (PAVIS) Research Line (https://pavis.iit.it/) coordinated by Dr. Alessio Del Bue. PAVIS is a 25-unit group with a multi-disciplinary staff of PhDs, Technicians and PostDocs coming from several countries in Europe and abroad. PAVIS is part of ELLIS (https://ellis.eu/) - a European network of excellence in AI, Machine Learning (ML) and Computer Vision (CV). About the research project PAVIS aims to strengthen its current activities on 3D multi-modal scene understanding. Topics of interest include novel ML and CV methods that can efficiently incorporate priors and constraints related to physical world models and semantic priors, derived from vision, language models, or other concurrent modalities. In particular, we will investigate the interplay between vision and (large) language models (LLMs) to solve tasks related to 3D reasoning, visual (re-)localization, active vision, and neural/geometrical novel view rendering (NeRF, 3DGS), to name a few. We seek to develop models that are also readily applicable to IIT's interdisciplinary research, ranging from drug discovery to robotics, in particular by leveraging our in-house robotics platforms (iCub, ErgoCub, R1, among others) and HPC computational facilities. Both junior and experienced candidates will be considered for the positions. ESSENTIAL REQUIREMENTS - A PhD in Computer Science, Machine Learning, Physics, Engineering, Mathematics, or related areas Documented expertise in: - Computer Vision, with a focus on multimodal machine learning - ML and CV models, with a preference for GNN and Transformer architectures. 
- Strong programming ability (Python preferred) with hands-on skills in AI and Deep Learning frameworks (e.g., PyTorch, TensorFlow or equivalent tools) - Ability/willingness to integrate within a multidisciplinary research group - Proven track record of publications in the relevant technical areas - Ability to properly report, organize and publish research data - High motivation to learn - Good priority management - Ability to work in a challenging and international environment - Fluency in spoken and written English NICE-TO-HAVE - Knowledge of 3D vision (multi-view geometry, point cloud data processing, neural rendering) - Experience in deploying and fine-tuning large language models - Practical experience in deploying ML models on HPC platforms - Team player skills with the ability to communicate technical knowledge in a clear and understandable manner COMPENSATION & BENEFITS - Competitive salary by international standards - Wide range of staff discounts - Flexible working time - Candidates from abroad, or Italian citizens who permanently work abroad and meet specific requirements, may be entitled to a deduction from taxable income of up to 90% for 6 to 13 years. WHAT'S IN IT FOR YOU? - An equal, inclusive and multicultural environment ready to welcome you with open arms. Discrimination is a big NO for us! - We like cross-pollination of ideas and encourage you to mingle and discover what other people are up to in our labs! - If paperwork is not your cup of tea, we've got you! There's a specialized team working to help you with that, especially during your relocation! If you are a start-up founder or a business-minded person, you will find some exceptionally gifted professionals ready to nurture and guide your attitude and aspirations. If you want your work to have a real impact, at IIT you will find an innovative and stimulating culture that drives our mission to contribute to the improvement and well-being of society! We stick to our values! Integrity, courage, societal responsibility and inclusivity are the values we believe in! They define us and our actions in our everyday life. They guide us in accomplishing IIT's mission! **Application deadline: February 29th, 2024** If you feel this tickles your appetite for change, do not hesitate to apply here: https://iit.taleo.net/careersection/ex/jobdetail.ftl?lang=it&job=2400000V Please submit your application using the online form above, including: - A detailed CV with a full list of publications, plus a final section titled "Relevant skills" commenting on the strong points where your CV matches the required skills (listed above). - Names and contact details of 2 referees. We inform you that the information you provide will be used solely for the purposes of evaluating and selecting professional profiles to meet the requirements of Istituto Italiano di Tecnologia. Your data will be processed by Istituto Italiano di Tecnologia, based in Genoa, Via Morego 30, acting as Data Controller, in compliance with the rules on protection of personal data, including those related to data security. Please also note that, pursuant to articles 15 et seq. of European Regulation no. 679/2016 (General Data Protection Regulation), you may exercise your rights at any time by contacting the Data Protection Officer (phone: +39 010 28961 - email: dpo at iit.it - kindly note that this e-mail address is exclusively reserved for handling data protection issues. Please do not use this e-mail address to send any documents and/or requests for information about this opening). 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nsmattei at gmail.com Thu Feb 1 18:44:03 2024 From: nsmattei at gmail.com (Nicholas Mattei) Date: Thu, 1 Feb 2024 17:44:03 -0600 Subject: Connectionists: ADT 2024 - PRELIMINARY CALL FOR PAPERS Message-ID: *ADT 2024 - PRELIMINARY CALL FOR PAPERS* The 8th International Conference on Algorithmic Decision Theory - ADT 2024 will be held October 14-16, 2024, at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers University. ADT 2024 focuses on algorithmic decision theory broadly defined, seeking to bring together researchers and practitioners from diverse areas of Computer Science, Economics, and Operations Research in order to improve the theory and practice of modern decision support. The conference topics include research in: Algorithms, Argumentation Theory, Artificial Intelligence, Computational Social Choice, Database Systems, Decision Analysis, Discrete Mathematics, Game Theory, Machine Learning and Adversarial Machine Learning, Matching, Multi-agent Systems, Multiple Criteria Decision Aiding, Networks, Optimization, Preference Modeling, Risk Analysis and Adversarial Risk Analysis, and Utility Theory. ADT 2024 provides a multi-disciplinary forum for sharing knowledge in this area with a special focus on algorithmic issues in decision theory, continuing the tradition of the first seven editions of the International Conference on Algorithmic Decision Theory (ADT 2009 Venice, ADT 2011 Rutgers, ADT 2013 Brussels, ADT 2015 Lexington KY, ADT 2017 Luxembourg, ADT 2019 Durham NC, ADT 2021 Toulouse), which brought together researchers and practitioners from around the globe. *Important Dates* *Title and Abstract Submission*: May 10, 2024 *Full Paper Submission*: May 17, 2024 *Notification*: July 19, 2024 *Final Version of Accepted Papers*: August 9, 2024 *Conference Dates*: October 14-16, 2024 *Submissions* Submissions are invited on significant, original and previously unpublished research on all aspects of Algorithmic Decision Theory. Papers must be at most 15 pages long in the LNCS format (including references). The formal proceedings of ADT 2024 will be published by Springer-Verlag in the Lecture Notes in Artificial Intelligence (LNAI) series. ADT 2024 will not accept any paper that, at the time of submission, has already been published in a journal or another venue with formally published proceedings. However, to accommodate the publication norms of the many fields that work on decision theory, papers that are under submission elsewhere but not yet published may be submitted, provided the authors note at the top of their submission that they are submitting to the non-proceedings track. All papers will be peer-reviewed in a double-blind procedure. Therefore, papers must be submitted anonymously as PDF documents via the Microsoft CMT system. It is important and required that authors do not reveal their identities in submitted papers. Since the review process is double-blind, authors must take measures to ensure that their identity is not easily revealed from the submission itself. Authors should include the submission number (as assigned by the conference system) in the author field of the submission, and refer to their prior work in a neutral manner (i.e., instead of saying "We showed" say "XYZ et al. showed"). It is acceptable to submit work that has been presented in public or has appeared on arXiv, provided the submission itself is anonymized. 
*Submission Format:* Authors should consult Springer's authors' guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Note that at least one author of each accepted paper is required to attend the conference to present the work. Authors will be asked to agree to this requirement at the time of submission. In addition, the corresponding author of each accepted paper that will appear in the proceedings, acting on behalf of all of the authors of that paper, will need to complete and sign a Consent-to-Publish form, through which the copyright for their paper is transferred to Springer. To accommodate the publishing traditions of different fields, ADT 2024 will accept two types of submissions: *Submissions with full text in the proceedings:* Papers of this type will be accepted for either oral or poster presentation, or both. Each accepted paper of this type will be allocated at most 15 pages in the proceedings, and there will be no distinction between papers accepted for oral or poster presentation in the conference proceedings. *Submissions with a one-page abstract in the proceedings:* Papers of this type will be accepted for either oral or poster presentation, or both. Each accepted paper of this type will appear as a one-page abstract in the proceedings, along with a URL pointing to the full paper. This option is available to accommodate subsequent publication in journals that would not consider results that have been published in preliminary form in conference proceedings. At submission time, such papers must be formatted just like papers submitted for full-text publication, but authors who choose this category are required to write "submission without proceedings" in the author field of their paper (instead of author names). Otherwise, it will be assumed, by default, that their paper is submitted in the first category (submissions with full text in the proceedings). Rupert Freeman and Nicholas Mattei ADT 2024 Program Co-Chairs David Pennock and Lirong Xia ADT 2024 General Chairs -- *Nicholas Mattei* Assistant Professor, Tulane University nsmattei at tulane.edu | www.nickmattei.net Stanley Thomas Hall | 305B +1 504 865 5782 Department of Computer Science Tulane University 6823 St Charles Ave New Orleans, LA 70118 -------------- next part -------------- An HTML attachment was scrubbed... URL: From claudio.piciarelli at uniud.it Thu Feb 1 10:48:07 2024 From: claudio.piciarelli at uniud.it (Claudio Piciarelli) Date: Thu, 1 Feb 2024 15:48:07 +0000 Subject: Connectionists: Call for Competition - Layout Segmentation of Ancient Manuscripts In-Reply-To: References: Message-ID: (apologies for multiple copies) =================================== CALL FOR COMPETITION: SAM 2024 - with prizes! $$$ Layout segmentation of ancient manuscripts https://ai4ch.uniud.it/udiadscomp/ =================================== We are glad to announce SAM: International Competition on Few-Shot and Many-Shot Layout Segmentation of Ancient Manuscripts, held in conjunction with the 18th International Conference on Document Analysis and Recognition (ICDAR 2024). Competition Overview: Layout segmentation is a critical aspect of Document Image Analysis, particularly when it comes to ancient manuscripts. It consists of decomposing the document into several regions representing title, main text, paratext, etc. We invite the research community to address this task on U-DIADS-Bib, a novel dataset of fully-labelled ancient manuscripts. 
Competition Tasks: We propose two separate tasks. Participants can take part in either one of them or both. * Few-Shot Segmentation: participants are asked to develop a layout segmentation system using only three images for each manuscript as a training set * Many-Shot Segmentation: participants have access to the full dataset (except for the private data that will be used for the final evaluation) Important Dates: * Beginning of Track 1: January 15, 2024 * Deadline of Track 1: March 3, 2024 * Beginning of Track 2: March 4, 2024 * Deadline of Track 2: March 31, 2024 Prizes: Winners of each task will be eligible for a cash prize of 300 EUR sponsored by CVPL - Italian Association for Computer Vision, Pattern Recognition and Machine Learning, IAPR Italian chapter. For any additional information, please visit the website: https://ai4ch.uniud.it/udiadscomp/ Organizers: Silvia Zottin, zottin.silvia at spes.uniud.it Axel De Nardin, denardin.axel at spes.uniud.it Claudio Piciarelli, claudio.piciarelli at uniud.it Gian Luca Foresti, gianluca.foresti at uniud.it Emanuela Colombi, emanuela.colombi at uniud.it AI4CH - Artificial Intelligence for Cultural Heritage Lab, University of Udine. https://ai4ch.uniud.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rw at physics.wustl.edu Thu Feb 1 15:50:38 2024 From: rw at physics.wustl.edu (Ralf Wessel) Date: Thu, 01 Feb 2024 14:50:38 -0600 Subject: Connectionists: NeuroAI Symposium May 15 at WashU in STL Message-ID: <54a632f4f6511f9a43584e39f5933e62@physics.wustl.edu> NeuroAI Symposium May 15 at WashU in STL https://transdisciplinaryfutures.wustl.edu/events/neuroai-symposium?d=2024-05-15 What can we learn from the similarities and differences that power computations in AI and Brains? 1 question 10 speakers 100 participants 1000 answers -- **************************************************** Ralf Wessel, Professor of Physics Washington University in St. Louis https://physics.wustl.edu/people/ralf-wessel http://neuroscience.wustl.edu/ **************************************************** From spiglerg at gmail.com Thu Feb 1 12:51:23 2024 From: spiglerg at gmail.com (Giacomo Spigler) Date: Thu, 1 Feb 2024 18:51:23 +0100 Subject: Connectionists: [Meetings] Invitation: full day AI for Robotics Workshop @ Tilburg (Netherlands) on April 5th, 2024 Message-ID: Dear colleagues, We are happy to invite you to participate and contribute to our one-day Workshop on ``*AI for Robotics*'', which will be held in **Tilburg, The Netherlands**, on **April 5th, 2024**. The Workshop will serve as a venue for interdisciplinary networking across the fields of Artificial Intelligence and Robotics. The Workshop is **free** to attend, and **we will provide lunch, drinks, and plenty of snacks** throughout the day. More details about the workshop can be found on our website https://www.ai4robotics.eu. Please contact me directly or through "*workshop < at > tilburg-robotics.eu*" if you have any questions! We are looking forward to wide and varied participation! --- A few details --- # Registration *Please register in advance* if you wish to attend the Workshop, so that we can organize the facilities accordingly. You can register at https://t.ly/mdyrw, or from the Workshop's website (https://www.ai4robotics.eu). # Travel grants We plan to offer a limited number of travel grants based on need and merit to increase accessibility and improve participation. Please let us know if you need a travel grant through the registration form. 
# Invited Speakers Our workshop is privileged to host an exceptional lineup of speakers from academia and industry. Confirmed speakers include: * Francesco Nori (Google DeepMind) * Roberto Calandra (TU Dresden) * Sergey Levine (UC Berkeley) * Jean-Baptiste Mouret (Inria) * Jens Kober (TU Delft) * Guido de Croon (TU Delft) # Workshop Format The Workshop consists of talks by our invited speakers, interleaved with coffee breaks where our participants can present their posters. A tasty free lunch will be provided, during which we are going to organize various networking activities. The Workshop will end with a reception and (free) drinks. # Abstract Since their inception, Artificial Intelligence and Robotics have been intrinsically connected. Yet, the two communities are not always interacting as much as they should. The increasing specialization within each field tends to overlook their deeper link, posing challenges in keeping up to date with each other's advancements. This highlights the critical need for an interdisciplinary forum to harmonize efforts across both domains. Additionally, many research labs lack access to essential resources. This includes both the high-cost robot hardware required for the practical application of theories and the computational resources necessary for training large neural networks. By fostering collaborations between AI and robotics researchers, we can significantly enhance research efficiency and accelerate scientific breakthroughs. We organize the AI for Robotics Workshop to provide an opportunity for AI researchers and roboticists to come together and exchange ideas, learn about the latest developments in both fields, encourage networking, and initiate collaborations. The workshop is organized as a one-day event in Tilburg, The Netherlands, featuring invited talks on the latest advances in both AI and Robotics from leading researchers. To promote meaningful exchanges, the event includes networking opportunities during lunch and coffee breaks, and ends with relaxed social drinks. Understanding the importance of inclusive participation, we are pleased to offer free registration for the workshop, kindly provided by our sponsors. This includes a delightful lunch and all-day snacks and drinks. Additionally, we are committed to supporting the involvement of early-career researchers and students by providing a number of travel grants based on need and merit. Everybody is encouraged to submit and present a poster. Kind regards, Giacomo Spigler Murat Kirtay -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.o.sousa at inesctec.pt Thu Feb 1 10:35:28 2024 From: hugo.o.sousa at inesctec.pt (Hugo Oliveira Sousa) Date: Thu, 1 Feb 2024 15:35:28 +0000 Subject: Connectionists: Text2Story@ECIR'24 Last Call for Papers Message-ID: *** Apologies for cross-posting *** ++ LAST CALL FOR PAPERS ++ **************************************************************************** Seventh International Workshop on Narrative Extraction from Texts (Text2Story'24) Held in conjunction with the 46th European Conference on Information Retrieval (ECIR'24) March 24th, 2024 - 
Glasgow, Scotland Website: https://text2story24.inesctec.pt **************************************************************************** ++ Important Dates ++ - Submission Deadline: February 7th, 2024 - Acceptance Notification: March 1st, 2024 - Camera-ready copies: March 15th, 2024 - Workshop: March 24th, 2024 ++ Overview ++ Over these past years, significant breakthroughs, led by Transformers and Large Language Models (LLMs), have been made in understanding natural language text. However, the ability to capture, represent, and analyze contextual nuances in longer texts is still an elusive goal, let alone the understanding of consistent fine-grained narrative structures in text. In the seventh edition of the Text2Story workshop, we aim to bring to the forefront the challenges involved in understanding the structure of narratives and in incorporating their representation in well-established frameworks, as well as in modern architectures (e.g., transformers) and AI-powered language models (e.g, chatGPT) which are now common and form the backbone of almost every IR and NLP application. It is hoped that the workshop will provide a common forum to consolidate the multi-disciplinary efforts and foster discussions to identify the wide-ranging issues related to the narrative extraction task. ++ List of Topics ++ Research works submitted to the workshop should foster the scientific advance on all aspects of storyline generation and understanding from texts including but not limited to narrative information extraction aspects, narratives representation, knowledge extraction, ethics and bias in narratives, datasets and evaluation protocols and narrative applications such as visualization of narratives, multi-modal aspects, Q&A, etc. To this regard, we encourage the submission of high-quality and original submissions covering the following topics: Information Extraction Aspects * Temporal Relation Identification * Temporal Reasoning and Ordering of Events * Causal Relation Extraction and Arrangement * Big Data Applied to Narrative Extraction Narrative Representation * Annotation protocols * Narrative Representation Models * Lexical, Syntactic, and Semantic Ambiguity in Narrative Representation Narrative Analysis and Generation * Argumentation Analysis * Language Models and Transfer Learning in Narrative Analysis * Narrative Analysis in Low-resource Languages * Multilinguality: Multilingual and Cross-lingual Narrative Analysis * Comprehension of Generated Narratives * Story Evolution and Shift Detection * Automatic Timeline Generation Datasets and Evaluation Protocol * Evaluation Methodologies for Narrative Extraction * Annotated datasets * Narrative Resources Ethics and Bias in Narratives * Bias Detection and Removal in Generated Stories * Ethical and Fair Narrative Generation * Misinformation and Fact Checking Narrative Applications * Narrative-focused Search in Text Collections * Narrative Summarization * Narrative Q&A * Multi-modal Narrative Summarization * Sentiment and Opinion Detection in Narratives * Social Media Narratives * Narrative Simplification * Personalization and Recommendation of Narratives * Storyline Visualization ++ Dataset ++ We challenge the interested researchers to consider submitting a paper that makes use of the tls-covid19 dataset - published at ECIR'21 - under the scope and purposes of the text2story workshop. 
tls-covid19 consists of a number of curated topics related to the Covid-19 outbreak, with associated news articles from Portuguese and English news outlets and their respective reference timelines as gold-standard. While it was designed to support timeline summarization research tasks it can also be used for other tasks (e.g., Q&A), especially when combined with Large Language Models (LLMs) like ChatGPT. A script to reconstruct and expand the dataset is available at https://github.com/LIAAD/tls-covid19. The article itself is available at this link: https://link.springer.com/chapter/10.1007/978-3-030-72113-8_33 ++ Submission Guidelines ++ We solicit the following types of contributions: * Full papers up to 8 pages + references Original and high-quality unpublished contributions to the theory and practical aspects of the narrative extraction task. Full papers should introduce existing approaches, describe the methodology and the experiments conducted in detail. Negative result papers to highlight tested hypotheses that did not get the expected outcome are also welcomed. * Short papers up to 5 pages + references Unpublished short papers describing work in progress; position papers introducing a new point of view, a research vision or a reasoned opinion on the workshop topics; and dissemination papers describing project ideas, ongoing research lines, case studies or summarized versions of previously published papers in high-quality conferences/journals that is worthwhile sharing with the Text2Story community, but where novelty is not a fundamental issue. * Demos | Resource Papers up to 5 pages + references Unpublished papers presenting research/industrial demos; papers describing important resources (datasets or software packages) to the text2story community; Submissions will be peer-reviewed by at least two members of the programme committee. The accepted papers will appear in the proceedings published at CEUR workshop proceedings (indexed in Scopus and DBLP) as long as they don't conflict with previous publication rights. ++ Workshop Format ++ Participants of accepted papers will be given 15 minutes for oral presentations. ++ Invited Speakers ++ Homo narrans: From Information to Narratives Jochen L. Leidner, Coburg University of Applied Sciences, Germany Abstract: Humans are curious creatures, equipped with a sense of (and desire for) finding meaning in their environment. They are predisposed to identify patterns, real and spurious, in the world they live in, and above anything else, they understand the world in terms of narratives. In this talk, we will explore a set of questions about narratives: what is a narrative made up of? What signals from textual prose tell us what the narrative is? What about signals from structured data that imply a particular narrative? What is the essence of a story? How can narrative information be extracted and presented? Open source intelligence analysts and investigative reporters alike are hunting for the story, the narrative, behind the petabyte intercepts or terabyte leaks. The more data we gather or have available, the stronger will be our thirst to distill meaningful stories from it. Bio: Professor Jochen L. 
Leidner MA MPhil PhD FRGS is the Research Professor for Explainable and Responsible Artificial Intelligence in Insurance at Coburg University of Applied Sciences and Arts, Germany, where he leads the Information Access Research Group, a Visiting Professor of Data Analytics in the Department of Computer Science, University of Sheffield and founder and CEO of the consultancy KnowledgeSpaces. He is also a Fellow of the Royal Geographical Society. Dr. Leidner's experience includes positions as Director of Research at Thomson Reuters and Refinitiv in London, where he headed its R&D team (2013-2022). He has built up research and innovation teams. He was also the Royal Academy of Engineering Visiting Professor of Data Analytics at the Department of Computer Science. His background includes a Master's in computational linguistics, English and computer science (University of Erlangen-Nuremberg), a Master's in Computer Speech, Text and Internet Technology (University of Cambridge) and a PhD in Informatics (University of Edinburgh), which won the first ACM SIGIR Doctoral Consortium Award. He is a scientific expert for the European Commission (FP7, H2020, Horizon Europe) and other funding bodies in Germany, Austria, the UK and the USA. He also is a past chair of the Microsoft-BCS/BCS IRSG Karen Sparck Jones award. Professor Leidner is an author or co-author of several dozen peer-reviewed publications (including one best paper award), has authored or co-edited two books and holds several patents in the areas of information retrieval, natural language processing, and mobile computing. He has been twice winner of the Thomson Reuters inventor of the year award for the best patent application, and is the past received of a Royal Society of Edinburgh Enterprise Fellowship in Electronic Markets. Visual Storytelling with Question-Answer Plans Mirella Lapata, University of Edinburgh, Scotland Abstract: Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark (Huang et al., 2016) demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems. Bio: Professor Mirella Lapata is a faculty member in the School of Informatics at the University of Edinburgh. She is affiliated with the Institute for Communicating and Collaborative Systems and the Edinburgh Natural Language Processing Group. Her research centers on computational models for the representation, extraction, and generation of semantic information from structured and unstructured data. This encompasses various modalities, including text, images, video, and large-scale knowledge bases. Prof. 
Lapata has contributed to diverse applied Natural Language Processing (NLP) tasks, such as semantic parsing, semantic role labeling, discourse coherence, summarization, text simplification, concept-to-text generation, and question answering. Using primarily probabilistic generative models, she has employed computational models to investigate aspects of human cognition, including learning concepts, judging similarity, forming perceptual representations, and learning word meanings. The overarching objective of her research is to empower computers to comprehend requests, execute actions based on them, process and aggregate large datasets, and convey information derived from them. Central to these endeavors are models designed for extracting and representing meaning from natural language text, internally storing meanings, and leveraging stored meanings to deduce further consequences. ++ Organizing committee ++ Ricardo Campos (INESC TEC; University of Beira Interior, Covilh?, Portugal) Al?pio M. Jorge (INESC TEC; University of Porto, Portugal) Adam Jatowt (University of Innsbruck, Austria) Sumit Bhatia (Media and Data Science Research Lab, Adobe) Marina Litvak (Shamoon Academic College of Engineering, Israel) ++ Proceedings Chair ++ Jo?o Paulo Cordeiro (INESC TEC & Universidade da Beira do Interior) Concei??o Rocha (INESC TEC) ++ Web and Dissemination Chair ++ Hugo Sousa (INESC TEC & University of Porto) Behrooz Mansouri (Rochester Institute of Technology) ++ Program Committee ++ ?lvaro Figueira (INESC TEC & University of Porto) Andreas Spitz (University of Konstanz) Antoine Doucet (Universit? de La Rochelle) Ant?nio Horta Branco (University of Lisbon) Anubhav Jangra (IIT Patna, Japan) Arian Pasquali (Faktion AI) Bart Gajderowicz (University of Toronto) Bego?a Altuna (Universidad del Pa?s Vasco) Behrooz Mansouri (Rochester Institute of Technology) Brenda Santana (Federal University of Rio Grande do Sul) Bruno Martins (IST & INESC-ID, University of Lisbon) Brucce dos Santos (Computational Intelligence Laboratory (LABIC) - ICMC/USP) David Semedo (Universidade NOVA de Lisboa) Deya Banisakher (Florida International University) Dhruv Gupta (Norwegian University of Science and Technology) Evelin Amorim (INESC TEC) Henrique Lopes Cardoso (LIACC & University of Porto) Ignatius Ezeani (Lancaster University) Irina Rabaev (Shamoon College of Engineering) Ismail Altingovde (Middle East Technical University) Jo?o Paulo Cordeiro (INESC TEC & University of Beira Interior) Liana Ermakova (HCTI, Universit? de Bretagne Occidentale) Luca Cagliero (Politecnico di Torino) Ludovic Moncla (INSA Lyon) Luis Filipe Cunha (INESC TEC & University of Minho) Marc Finlayson (Florida International University) Marc Spaniol (Universit? 
de Caen Normandie) Mariana Caravanti (Computational Intelligence Laboratory (LABIC) - ICMC/USP) Moreno La Quatra (Kore University of Enna) Natalia Vanetik (Sami Shamoon College of Engineering) Nuno Guimar?es (INESC TEC & University of Porto) Pablo Gerv?s (Universidad Complutense de Madrid) Paulo Quaresma (Universidade de ?vora) Purifica??o Silvano (CLUP & University of Porto) Ross Purves (University of Zurich) Satya Almasian (Heidelberg University) S?rgio Nunes (INESC TEC & University of Porto) Sriharsh Bhyravajjula (University of Washington) Udo Kruschwitz (University of Regensburg) Valentina Bartalesi (ISTI-CNR, Italy) ++ Contacts ++ Website: https://text2story24.inesctec.pt For general inquiries regarding the workshop, reach the organizers at: text2story2024 at easychair.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From simone.azeglio at gmail.com Fri Feb 2 10:07:00 2024 From: simone.azeglio at gmail.com (Simone Azeglio) Date: Fri, 2 Feb 2024 16:07:00 +0100 Subject: Connectionists: Sharpening Our Sight - Cosyne 2024 workshop on Vision Neuroscience Message-ID: Hello everyone, We are pleased to announce the "*Sharpening Our Sight: Advances in Naturalistic Visual Perception through Efficient Representations and Active Search*" workshop, a featured event at this year's Cosyne conference. The workshop will be held on the *5th of March* and it will focus on vision neuroscience at the intersection of active vision and efficient neural representations. For detailed information about the workshop and the most current version of the schedule, please visit our website: https://sites.google.com/view/cosyne2024-sos/home We are honored to host a distinguished panel of speakers: - *Andreas Tolias*, Baylor College - *Christopher Summerfield*, Oxford / Deepmind - *Carsen Stringe*r, Janelia - *Sylvia Schr?der*, Sussex - *Wiktor M?ynarski*, LMU - *Cris Niell*, Oregon - *Stuart Trenholm*, McGill - *Jin Hwa Lee*, UCL We anticipate a fruitful exchange of ideas and look forward to your participation in Cascais. Kind regards, Simone Azeglio, Samuele Virgili, Gabriel Mahuas -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwang at cse.ohio-state.edu Fri Feb 2 12:01:15 2024 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Fri, 2 Feb 2024 17:01:15 +0000 Subject: Connectionists: NEURAL NETWORKS, February 2024 Message-ID: Neural Networks - Volume 170, February 2024 https://www.journals.elsevier.com/neural-networks BrainGridNet: A two-branch depthwise CNN for decoding EEG-based multi-class motor imagery Xingfu Wang, Yu Wang, Wenxia Qi, Delin Kong, Wei Wang Classification of DBS microelectrode recordings using a residual neural network with attention in the temporal domain K.A. Ciecierski, T. Mandat A recurrent Hopfield network for estimating meso-scale effective connectivity in MEG Giorgio Gosti, Edoardo Milanetti, Viola Folli, Francesco de Pasquale, ... Stefania Della Penna Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation Naoko Koide-Majima, Shinji Nishimoto, Kei Majima AdjointBackMapV2: Precise reconstruction of arbitrary CNN unit's activation via adjoint operators Qing Wan, Siu Wun Cheung, Yoonsuck Choe Stabilizing machine learning prediction of dynamics: Novel noise-inspired regularization tested with reservoir computing Alexander Wikner, Joseph Harvey, Michelle Girvan, Brian R. Hunt, ... 
Edward Ott LFighter: Defending against the label-flipping attack in federated learning Najeeb Moharram Jebreel, Josep Domingo-Ferrer, David S?nchez, Alberto Blanco-Justicia MCNet: A multi-level context-aware network for the segmentation of adrenal gland in CT images Jinhao Li, Huying Li, Yuan Zhang, Zhiqiang Wang, ... Xieping Gao Teacher-student complementary sample contrastive distillation Zhiqiang Bao, Zhenhua Huang, Jianping Gou, Lan Du, ... Yunwen Chen Exploiting nonlinear dendritic adaptive computation in training deep Spiking Neural Networks Guobin Shen, Dongcheng Zhao, Yi Zeng Unsupervised image enhancement under non-uniform illumination based on paired CNNs Feng Lin, Huaqing Zhang, Jian Wang, Jun Wang Grounding spatial relations in text-only language models Gorka Azkune, Ander Salaberria, Eneko Agirre A versatile Wavelet-Enhanced CNN-Transformer for improved fluorescence microscopy image restoration Qinghua Wang, Ziwei Li, Shuqi Zhang, Nan Chi, Qionghai Dai Human-Object Interaction detection via Global Context and Pairwise-level Fusion Features Integration Haozhong Wang, Hua Yu, Qiang Zhang Multi-Adaptive Optimization for multi-task learning with deep neural networks ?lvaro S. Hervella, Jos? Rouco, Jorge Novo, Marcos Ortega MECCH: Metapath Context Convolution-based Heterogeneous Graph Neural Networks Xinyu Fu, Irwin King Powerful-IoU: More straightforward and faster bounding box regression loss with a nonmonotonic focusing mechanism Can Liu, Kaige Wang, Qing Li, Fazhan Zhao, ... Hongtu Ma CT-Net: Asymmetric compound branch Transformer for medical image segmentation Ning Zhang, Long Yu, Dezhi Zhang, Weidong Wu, ... Min Li A start-stop points CenterNet for wideband signals detection and time-frequency localization in spectrum sensing Teng Cheng, Lei Sun, Junning Zhang, Jinling Wang, Zhanyang Wei Hierarchical attention network with progressive feature fusion for facial expression recognition Huanjie Tao, Qianyue Duan Star algorithm for neural network ensembling Sergey Zinchenko, Dmitrii Lishudi Theoretical limits on the speed of learning inverse models explain the rate of adaptation in arm reaching tasks Victor R. Barradas, Yasuharu Koike, Nicolas Schweighofer Boundary uncertainty aware network for automated polyp segmentation Guanghui Yue, Guibin Zhuo, Weiqing Yan, Tianwei Zhou, ... Tianfu Wang Learning deep representation and discriminative features for clustering of multi-layer networks Wenming Wu, Xiaoke Ma, Quan Wang, Maoguo Gong, Quanxue Gao Boosting semi-supervised learning with Contrastive Complementary Labeling Qinyi Deng, Yong Guo, Zhibang Yang, Haolin Pan, Jian Chen Graph embedding-based heterogeneous domain adaptation with domain-invariant feature learning and distributional order preserving Wenxu Wang, Zhenbo Li, Weiran Li Feature-wise scaling and shifting: Improving the generalization capability of neural networks through capturing independent information of features Tongfeng Sun, Xiurui Wang, Zhongnian Li, Shifei Ding Combining external-latent attention for medical image segmentation Enmin Song, Bangcheng Zhan, Hong Liu A filter-augmented auto-encoder with learnable normalization for robust multivariate time series anomaly detection Jiahao Yu, Xin Gao, Baofeng Li, Feng Zhai, ... 
Chun Xiao UTDNet: A unified triplet decoder network for multimodal salient object detection Fushuo Huo, Ziming Liu, Jingcai Guo, Wenchao Xu, Song Guo SVD-AE: An asymmetric autoencoder with SVD regularization for multivariate time series anomaly detection Yueyue Yao, Jianghong Ma, Shanshan Feng, Yunming Ye Dynamic-group-aware networks for multi-agent trajectory prediction with relational reasoning Chenxin Xu, Yuxi Wei, Bohan Tang, Sheng Yin, ... Yanfeng Wang Dynamic learning from adaptive neural control for full-state constrained strict-feedback nonlinear systems Qinchen Yang, Fukai Zhang, Qinghua Sun, Cong Wang An off-policy multi-agent stochastic policy gradient algorithm for cooperative continuous control Delin Guo, Lan Tang, Xinggan Zhang, Ying-chang Liang CVANet: Cascaded visual attention network for single image super-resolution Weidong Zhang, Wenyi Zhao, Jia Li, Peixian Zhuang, ... Chongyi Li OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning Chu Myaet Thwal, Minh N.H. Nguyen, Ye Lin Tun, Seong Tae Kim, ... Choong Seon Hong Event-triggered impulsive cluster synchronization of coupled reaction-diffusion neural networks and its application to image encryption Minghao Hui, Xiaoyang Liu, Song Zhu, Jinde Cao Mean square exponential stabilization analysis of stochastic neural networks with saturated impulsive input Hao Deng, Chuandong Li, Fei Chang, Yinuo Wang Markov chain stochastic DCA and applications in deep learning with PDEs regularization Hoang Phuc Hau Luu, Hoai Minh Le, Hoai An Le Thi A novel fixed-time error-monitoring neural network for solving dynamic quaternion-valued Sylvester equations Lin Xiao, Penglin Cao, Zidong Wang, Sai Liu Sampled-data controller scheme for multi-agent systems and its Application to circuit network A. Stephen, R. Karthikeyan, C. Sowmiya, R. Raja, Ravi P. Agarwal A comparative analysis of multi-biometrics performance in human and action recognition using silhouette thermal-face and skeletal data Onur Can Kurban, T?lay Yildirim Genetic high-gain controller to improve the position perturbation attenuation and compact high-gain controller to improve the velocity perturbation attenuation in inverted pendulums Jos? de Jes?s Rubio, Mario Alberto Hernandez, Francisco Javier Rosas, Eduardo Orozco, ... Jaime Pacheco SLAPP: Subgraph-level attention-based performance prediction for deep learning models Zhenyi Wang, Pengfei Yang, Linwei Hu, Bowen Zhang, ... Quan Wang LDCNet: Lightweight dynamic convolution network for laparoscopic procedures image segmentation Yiyang Yin, Shuangling Luo, Jun Zhou, Liang Kang, Calvin Yu-Chian Chen IoUformer: Pseudo-IoU prediction with transformer for visual tracking Huayue Cai, Long Lan, Jing Zhang, Xiang Zhang, ... Zhigang Luo Foreground segmentation network using transposed convolutional neural networks and up sampling for multiscale feature encoding Vishruth B. Gowda, M.T. Gopalakrishna, J. Megha, Shilpa Mohankumar Deep Kernel Principal Component Analysis for multi-level feature learning Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A.K. Suykens -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bnuzhanglei2008 at gmail.com Fri Feb 2 12:06:10 2024 From: bnuzhanglei2008 at gmail.com (Lei Zhang) Date: Fri, 2 Feb 2024 17:06:10 +0000 Subject: Connectionists: Birmingham-Leiden Summer School in Computational Social Cognition Message-ID: (Apologies for cross-posting) We are delighted to welcome applications for the first edition of the Birmingham-Leiden summer school in Computational Social Cognition (CSC). The summer school is hosted at the University of Birmingham (UK), in collaboration with Leiden University (NL), and will take place from 15th-17th July 2024. Apply by 14th March 2024 (see information below)! In the past decade, computational modelling has been extremely helpful in explaining various concepts and processes in social cognition, and these modelling frameworks include the reinforcement learning models, Bayesian learning models, models of theory of mind, models of effort discounting, and drift diffusion models. This timely summer school will provide talks and training on these latest developments focusing on learning, decision-making and computational modelling in social cognition, as well as topics covering theoretical issues in social cognition. Attending the Birmingham-Leiden CSC Summer School will equip a diverse cohort of early career researchers (trainees through to junior faculty members) with the ability to understand, program and interpret the output of a range of computational models of social cognition. Attendees will receive different types of training aimed at understanding modelling as well as the theoretical and practical inferences that can be drawn from computational models. For more information about this year?s training program, including criteria and application instructions, please visit our website . The deadline is March 14, 2024. We hope to see you this summer in Birmingham! Keynote speakers: - Cecilia Heyes, University of Oxford (UK) - Christian Ruff, University of Zurich (CH) - Wolfram Schultz, University of Cambridge (UK) Instructors (alphabetical): - Matt Apps, University of Birmingham (UK) - Jo Cutler, University of Birmingham (UK) - Anna Van Duijvenvoorde, Leiden University (NL) - Romy Froemer, University of Birmingham (UK) - Arkady Konovalov, University of Birmingham (UK) - Patricia Lockwood, University of Birmingham (UK) - Ili Ma, Leiden University (NL) - Lei Zhang, University of Birmingham (UK) Anna & Lei, on behalf of the CSC 2024 organization team --- Dr. Lei Zhang w: lei-zhang.net t: @lei_zhang_lz -------------- next part -------------- An HTML attachment was scrubbed... URL: From ali at ifi.uio.no Fri Feb 2 13:15:33 2024 From: ali at ifi.uio.no (Ali Ramezani-Kebrya) Date: Fri, 2 Feb 2024 18:15:33 +0000 Subject: Connectionists: Ph.D. Positions in ML in Norway Integreat - Norwegian Centre for Knowledge-driven Machine Learning Message-ID: <3A59EB92-7BB3-4B14-A360-909B796BBC73@uio.no> [?] [?] [?] A great opportunity to be within the first batch of Integreat Ph.Ds. Integreat is a unique environment where you work with computer scientists, statisticians, mathematicians + logic, language models, and ethics experts all in one place! I am involved in three exciting projects with wonderful colleagues: - Project 9: Embedded Sufficient Statistics (in Oslo) - Project 4: Exploration and Control of the Inner Representation in Generative AI Models (in Troms?) - Project 3: Developing Novel Information Theoretic Discrepancy Measures (in Troms?) You will receive similar resources to do the best Ph.D. in ML in both Troms? and Oslo. 
Engage in interdisciplinary projects in Machine Learning, Statistics, Logic, Language Technology, and Ethics with a group of devoted supervisors and fellow students. [?] Deadline: 4 March 2024 #AcademicCareers #PhDpositions #UiO #UiT #MachineLearning #Integreat #JuniorArena [?] Please share [?] Read more: [link]: https://www.integreat.no/news/join-integreat-apply-now-for-13-phd-positions-and-.html -------------------------------------- Ali Ramezani-Kebrya Associate Professor Department of Informatics University of Oslo Principal Investigator, SFI Visual Intelligence Norwegian Centre for Knowledge-driven Machine Learning (Integreat) European Laboratory for Learning and Intelligent Systems (ELLIS) Member https://alirk.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1121 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 816 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 949 bytes Desc: image003.png URL: From info at icas.cc Fri Feb 2 12:37:43 2024 From: info at icas.cc (ICAS Organizing Committee) Date: Fri, 2 Feb 2024 18:37:43 +0100 Subject: Connectionists: 1st CfP ACAIN 2024 - 4th Int. Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy -> Submission Deadline: March 23 Message-ID: Call for Participation & Call for Papers (apologies for multiple copies) ________________________________________________________________________ The 4th International Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25, 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy W: https://acain2024.icas.events E: acain at icas.cc FB: https://www.facebook.com/ACAIN.LakeDistrict/ Symposium Deadlines Paper Submission (Symposium): by Saturday March 23, 2024 (AoE). https://acain2024.icas.events/symposium-call-for-papers/ https://easychair.org/conferences/?conf=acain2024 Notification of Decision for Papers (Symposium): by Wednesday July 10, 2024. Camera Ready Submission (Symposium): by Monday July 20, 2024. Author Registration (Symposium): by July 20, 2024. Regular Registration (Symposium): by Saturday March 23, 2024. Late Registration (Symposium): from Sunday March 24, 2024. Course Deadlines Regular Registration (Course): by Saturday March 23, 2024. Late Registration (Course): from Sunday March 24, 2024. Oral Presentation Submission (Course): by Saturday March 23, 2024. Notification of Decision for Oral Presentation (Course): by August 31, 2024. SCOPE & MOTIVATION: ACAIN 2024: AI meets Computational Neuroscience and Cognitive Science The ACAIN 2024 symposium and course is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in the fields of AI, Neuroscience, Neuroscience-Inspired AI, Human-Level AI, and Cognitive Science. The 4th Advanced Course and Symposium on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion four-day Course and Symposium in Tuscany on cutting-edge advances in Artificial Intelligence and Neuroscience with lectures delivered by world-renowned experts. 
The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their colleagues, in a convivial and productive environment. Two days of keynote talks and oral presentations, the ACAIN Symposium, (September 24-25), will be preceded by lectures of leading scientists, the ACAIN Course, (September 22-23). Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy, exchange, collaboration and efficient cooperation between the two research communities. These are the goals of the International Course and Symposium - ACAIN 2024, which is aimed both at AI experts with interests in Neuroscience and at neuroscientists with an interest in AI. ACAIN 2024 accepts rigorous research that promotes and fosters multidisciplinary interactions between artificial intelligence and neuroscience. The Advanced Course is suited for scholars, academics, early career researchers, Post-Docs, PhD students and industry leaders. The Event (Course and Symposium) will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event. LECTURERS https://acain2024.icas.events/course-lecturers/ * Auke Jan Ijspeert, EPFL, Switzerland * Zeb Kurth-Nelson, DeepMind & UCL, London, UK * Kevin J Miller, DeepMind & UCL, London, UK * Thomas Parr, Oxford University, UK * Eric Schulz, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany More Lecturers TBA COURSE DESCRIPTION: https://acain2024.icas.events/course-description/ SYMPOSIUM CALL FOR PAPERS: https://acain2024.icas.events/symposium-call-for-papers/ SYMPOSIUM PROGRAM COMMITTEE (partial list, confirmed members): https://acain2024.icas.events/program-committee/ ORGANIZING COMMITTEE: https://acain2024.icas.events/symposium-committee/ VENUE & ACCOMMODATION: https://acain2024.icas.events/venue/ ACAIN 2024 is a *Residential Conference*, all participants (invited speakers, authors, organizers, chairs, participants) must book and stay at the Riva del Sole Resort and Spa. No exceptions are allowed. https://lod2024.icas.events/lod-2024-a-residential-conference/ Riva del Sole Resort & SPA a: Localit? Riva del Sole - Castiglione della Pescaia (Grosseto) CAP 58043, Tuscany - Italy p: +39-0564-928111 f: +39-0564-935607 e: booking.events at rivadelsole.it w: www.rivadelsole.it/en ACTIVITIES: https://acain2024.icas.events/activities/ REGISTRATION: https://acain2024.icas.events/registration/ See you in Tuscany in September! ACAIN 2024 Organizing Committee. E: acain at icas.cc W: https://acain2024.icas.events FB: https://www.facebook.com/ACAIN.LakeDistrict -- *7th Advanced Course on Data Science & Machine Learning - ACDL 2024,* 10-14 June Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) ? Tuscany, Italy An Interdisciplinary Course: Big Data, Deep Learning & AI without Borders *Early Registration: by February 23, 2024 (AoE)* The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course. https://acdl2024.icas.events/ *10th International Conference on Learning, Optimization and Data ? 
LOD 2024, *22?25 September Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy https://lod2024.icas.events/ lod at icas.cc *Submission Deadline: March 23* *4th Advanced Course ** and Symposium **on Artificial Intelligence and Neuroscience ? ACAIN 2024,* 22?25 September Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy https://acain2024.icas.events/ acain at icas.cc *Symposium Submission Deadline: March 23 * *Early Registration Deadline for the Course: March 23* The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course. *ICAS* *-* *The Interdisciplinary Centre of Advanced Studies* https://icas.cc/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ai at okipublishing.com Fri Feb 2 14:15:40 2024 From: ai at okipublishing.com (AI Systems) Date: Fri, 02 Feb 2024 19:15:40 +0000 Subject: Connectionists: Abstracts Ext Feb 20: CAIF 2023 Artificial Intelligence Frontiers, April 1-4, USA & Online Message-ID: <1b66ea1fd273f1e3a89d0ae5b2c29f1b@okipublishing.com> [Due extended per requests: apologies for cross-posting] --- Call for Abstracts and Papers ------------- 2024 OkIP International Conference on Artificial Intelligence Frontiers (CAIF) Oklahoma City, OK, USA & Online April 1-4, 2024 https://eventutor.com/e/CAIF001 Submission Deadline Extended: February 20, 2024 Extended versions of the best papers will be considered for publication in the inaugural volume of the International Journal of Automated and Intelligent Systems. *** Keynote/Invited Talk "AI Revolution in the Healthcare Enterprise" Ram D. Sriram, PhD Chief Software and Systems Division National Institute of Standards and Technology, USA *** Contribution Types (Two-Column IEEE Format Style): - Full Paper: Accomplished research results (6 pages) - Short Paper: Work in progress/fresh developments (3 pages) - Extended Abstract/Poster/Journal First: Displayed/Oral presented (1 page) *** Tracks/Areas: >> AI, Machine Learning (ML), and Applications >> Agent-based, Automated, and Distributed Supports >> Formal Methods in AI >> AI Foundations/Theory/Algorithms >> Natural Language Processing (NLP) >> Conversational Agents and Dialogue System >> Intelligent Systems and Applications >> AI in Bioinformatics/Biotechnology >> AI in Nuclear-Energy/Earth-observation >> Knowledge-based and Control Supports >> Automation, Robotics and Vehicles >> Business Process Automation >> Global and Sustainable AI >> AI for HealthCare >> AI and Environmental Efficiency >> AI for Diversity/Inclusion/Equity/Equality >> AI for Future Communities >> Generative AI Tools and Models >> AI in Computing and Society >> Smart Systems/Infrastructures/Construction/City >> Intelligent Transportation Systems >> AI in Education >> AI Engineering >> AI in Data and Big Data >> AI System Design >> Software Engineering for AI >> AI for Cloud Computing >> AI in Software Engineering >> Theories and Approaches of Blockchain >> AI in Electronic Commerce >> AI in High-Performance Computing >> AI in CyberSecurity and Information Assurance >> IA in Power Systems >> Internet of Things >> IA in Energy >> AI Ethical/Gender/Equity/Bias/Policy/Accessibility Concerns >> Circuit, Design and Hardware for AI >> Other >> New Important Dates: - Abstract or Paper Submission: Due Extended February 19, 2024 - Conference Date: April 1-4, 2024 >> Technical Program Committee https://eventutor.com/event/41/page/101-committee Please feel free to contact us for any 
inquiries at: Kris Zeuti OkIP Secretariat info at okipublishing.com From g.goodhill at wustl.edu Sat Feb 3 10:30:33 2024 From: g.goodhill at wustl.edu (Goodhill, Geoffrey) Date: Sat, 3 Feb 2024 15:30:33 +0000 Subject: Connectionists: NEXTEN Conference registration now open Message-ID: NEXTEN: Envisaging Theoretical and Computational Neuroscience for the next 10 years May 16-17, 2024, Washington University in St Louis nexten.wustl.edu Registration and abstract submission for this event is now open (deadline Feb 28). Theoretical/Computational Neuroscience is currently undergoing a phase of unprecedented growth. Dramatic advances in the ease with which large-scale datasets of behavior, neural activity, connectomic and gene expression information can be collected have coincided with a revolution in the capabilities of Artificial Intelligence-based algorithms to interrogate such datasets. Concurrently, these new datasets are inspiring exciting new developments in mathematical tools for extracting their underlying structure and relationships. What is the current state of the art in our theoretical/computational understanding of the brain, and what does the future hold? You are invited to join an exciting line-up of leaders in the field to address these questions. Confirmed speakers include Adam Kepecs (WashU) Mackenzie Mathis (EPFL) Terry Sejnowski (Salk Institute) Andreas Tolias (Baylor College of Medicine) Richard Zemel (Columbia University) Abstract submissions for posters presenting current research in the field are encouraged, and some of these will be selected for short talks. Some travel awards will be available for qualified trainees. The conference is sponsored by the Center for Theoretical and Computational Neuroscience at Washington University in St Louis (ctcn.wustl.edu). It will be held in the new 600,000 sq ft Neuroscience Research Building on the WashU Medical School campus, into which ~100 neuroscience labs have recently moved (https://medicine.wustl.edu/news/newly-opened-jeffrey-t-fort-neuroscience-research-building-dedicated). The deadline for registration and abstract submission is Wednesday, February 28, 2024 (for those external to WashU). However, due to the size of the auditorium total registrations are limited to 150, and so registration may close earlier than Feb 28 if this limit is reached. Please direct any questions to ctcn at wustl.edu Professor Geoffrey J Goodhill Departments of Developmental Biology and Neuroscience Director, Center for Theoretical and Computational Neuroscience (ctcn.wustl.edu) Washington University School of Medicine 660 S. Euclid Avenue St. Louis, MO 63110 g.goodhill at wustl.edu https://neuroscience.wustl.edu/people/geoffrey-goodhill-phd -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dhansel0 at gmail.com Sun Feb 4 09:50:11 2024 From: dhansel0 at gmail.com (David Hansel) Date: Sun, 4 Feb 2024 16:50:11 +0200 Subject: Connectionists: World wide VVTNS series: Wednesday, February 7 2024 at 11am (ET), Nicholas Priebe | The University of Texas at Austin In-Reply-To: References: Message-ID: [image: VVTNS.png] https://www.wwtns.online - on twitter: wwtns at TheoreticalWide You are cordially invited to the lecture given by Nicholas Priebe, The University of Texas at Austin, on the topic of *"The origins of variable responses in neocortical neurons"* The lecture will be held on Zoom on *February 7, 2024*, at *11:00 am ET * Register on our website - https://www.wwtns.online - to receive the Zoom link *Abstract:* I will discuss a collaborative project studying the origins of variable responses in neocortical neurons. The spiking responses of neocortical neurons are remarkably variable. Distinct patterns are observed when the same stimulus is presented in sensory areas or when the same action is executed in motor areas. This is quantified across trials by measuring the Fano factor of the neuronal spike counts, which is generally near 1, consistent with spike times following a noisy Poisson process. The two candidate sources of this noise are the synaptic drive that converges on individual neurons and intrinsic transducing processes within neurons. To parse the relative contributions of these noise sources, we made whole-cell intracellular recordings from cortical slices and used the dynamic clamp configuration to inject excitatory and inhibitory conductances previously recorded in vivo from visual cortical neurons (Tan et al. 2011). By controlling the conductance directly, we can test whether intrinsic processes contribute to Poisson firing. We found that repeated injections of the same excitatory and inhibitory conductance evoked stereotypical spike trains, resulting in Fano factors near 0.2. Varying the amplitude of both excitatory and inhibitory conductances changed the firing rate of recorded neurons but not the Fano factor. These records indicate that intrinsic processes do not contribute substantially to the Poisson spiking of cortical cells. Next, to test whether differences in network input are responsible for Poisson spike patterns, we examined spike trains evoked by injecting excitatory and inhibitory conductances recorded from different presentations of the same visual stimulus. These records exhibited different behaviors depending on whether the injected conductances were from visually-driven or spontaneous epochs: during visually-driven epochs, spiking responses were Poisson (Fano factor near 1); during spontaneous epochs, spiking responses were super-Poisson (Fano factors above 1). Both of these observations are consistent with the quenching of variability by sensory stimulation or motor behavior (Churchland et al. 2010). We also found that excitatory conductances, in the absence of inhibition, are sufficient to generate spike trains with Poisson statistics. Our results indicate that Poisson spiking emerges not from intrinsic sources but from differences in the synaptic drive across trials, that the nature of this synaptic drive can alter the nature of the variability, and that excitatory input alone is sufficient to generate Poisson spiking.
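As a quick illustration of the quantification described in the abstract, here is a minimal Python sketch of the trial-to-trial Fano factor (variance of spike counts across repeated trials divided by their mean); the spike-count data below are hypothetical and only meant to show that Poisson-like counts give values near 1 while more stereotyped counts fall well below 1:

import numpy as np

def fano_factor(spike_counts):
    # Fano factor = variance of per-trial spike counts / mean spike count.
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical spike counts from repeated presentations of the same stimulus.
rng = np.random.default_rng(0)
poisson_like = rng.poisson(lam=5.0, size=200)                     # Fano factor near 1
stereotyped = np.round(rng.normal(loc=5.0, scale=1.0, size=200))  # Fano factor well below 1
print(fano_factor(poisson_like), fano_factor(stereotyped))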
*About VVTNS : Created as the World Wide Theoretical Neuroscience Seminar (WWTNS) in November 2020 and renamed in homage to Carl van Vreeswijk in Memoriam (April 20, 2022), its aim is to be a platform to exchange ideas among theoreticians. Speakers have the occasion to talk about theoretical aspects of their work which cannot be discussed in a setting where the majority of the audience consists of experimentalists. The seminars, **held on Wednesdays at 11 am ET,** are 45-50 min long, followed by a discussion. The talks are recorded with the authorization of the speaker and are available to everybody on our YouTube channel.* ---- 'Life is good ..' (Carl van Vreeswijk, 1962-2022) --------------------------------------- David Hansel Directeur de Recherche au CNRS Co-Group leader Cerebral Dynamics Plasticity and Learning lab., CNRS 45 rue des Saints Pères 75270 Paris Cedex 06 Tel (Cell): +33 607508403 - Fax (33).1.49.27.90.62 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: VVTNS.png Type: image/png Size: 41084 bytes Desc: not available URL: From david at irdta.eu Sat Feb 3 04:14:10 2024 From: david at irdta.eu (David Silva - IRDTA) Date: Sat, 3 Feb 2024 10:14:10 +0100 (CET) Subject: Connectionists: DeepLearn 2024: early registration March 3 Message-ID: <69085020.120498.1706951650798@webmail.strato.com> ****************************************************** 11th INTERNATIONAL SCHOOL ON DEEP LEARNING (and the Future of Artificial Intelligence) DeepLearn 2024 Porto – Maia, Portugal July 15-19, 2024 https://deeplearn.irdta.eu/2024/ ****************************************************** Co-organized by: University of Maia Institute for Research Development, Training and Advice – IRDTA Brussels/London ****************************************************** Early registration: March 3, 2024 ****************************************************** SCOPE: DeepLearn 2024 will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast-developing area of deep learning. Previous events were held in Bilbao, Genova, Warsaw, Las Palmas de Gran Canaria, Guimarães, Las Palmas de Gran Canaria, Luleå, Bournemouth, Bari and Las Palmas de Gran Canaria. Deep learning is a branch of artificial intelligence covering a spectrum of current frontier research and industrial innovation that provides more efficient algorithms to deal with large-scale data in a huge variety of environments: computer vision, neurosciences, speech recognition, language processing, human-computer interaction, drug discovery, health informatics, medical image analysis, recommender systems, advertising, fraud detection, robotics, games, finance, biotechnology, physics experiments, biometrics, communications, climate sciences, geographic information systems, signal processing, genomics, materials design, video technology, social systems, etc. The field is also raising a number of relevant questions about robustness of the algorithms, explainability, transparency, and important ethical concerns at the frontier of current knowledge that deserve careful multidisciplinary discussion. Most deep learning subareas will be covered, and main challenges identified, through 18 four-and-a-half-hour courses, 2 keynote lectures, 1 round table and a few hackathon-type competitions among students, which will tackle the most active and promising topics.
Renowned academics and industry pioneers will lecture and share their views with the audience. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be the main ingredients of the event. It will also be possible to participate fully remotely. ADDRESSED TO: Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees, so people less or more advanced in their career will be welcome as well. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2024 is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators. VENUE: DeepLearn 2024 will take place in Porto, the second largest city in Portugal, recognized by UNESCO in 1996 as a World Heritage Site. The venue will be: University of Maia Avenida Carlos de Oliveira Campos - Castêlo da Maia 4475-690 Maia Porto, Portugal https://www.umaia.pt/en STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. All lectures will be video-recorded. Participants will be able to watch them again for 45 days after the event. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Companies will also be able to present their technical developments for 10 minutes. This year's edition of the school will schedule hands-on activities including mini-hackathons, where participants will work in teams to tackle several machine learning challenges. Full live online participation will be possible. The organizers highlight, however, the importance of face-to-face interaction and networking in this kind of research training event. KEYNOTE SPEAKERS: Jiawei Han (University of Illinois Urbana-Champaign), How Can Large Language Models Contribute to Effective Text Mining? Katia Sycara (Carnegie Mellon University), Effective Multi Agent Teaming PROFESSORS AND COURSES: Luca Benini (Swiss Federal Institute of Technology Zurich), [intermediate/advanced] Open Hardware Platforms for Edge Machine Learning Gustau Camps-Valls (University of València), [intermediate] AI for Earth, Climate, and Sustainability Nitesh Chawla (University of Notre Dame), [introductory/intermediate] Introduction to Representation Learning on Graphs Daniel Cremers (Technical University of Munich), [introductory/advanced] Deep Networks for 3D Computer Vision Peng Cui (Tsinghua University), [intermediate/advanced] Stable Learning for Out-of-Distribution Generalization: Invariance, Causality and Heterogeneity Sergei V.
Gleyzer (University of Alabama), [introductory/intermediate] Machine Learning Fundamentals and Their Applications to Very Large Scientific Data: Rare Signal and Feature Extraction, End-to-End Deep Learning, Uncertainty Estimation and Realtime Machine Learning Applications in Software and Hardware Yulan He (King's College London), [introductory/intermediate] Machine Reading Comprehension with Large Language Models Frank Hutter (University of Freiburg), [intermediate/advanced] AutoML George Karypis (University of Minnesota), [intermediate] Deep Learning Models and Systems for Real-World Graph Machine Learning Hermann Ney (RWTH Aachen University / AppTek), [intermediate/advanced] Machine Learning and Deep Learning for Speech & Language Technology: A Probabilistic Perspective Massimiliano Pontil (Italian Institute of Technology), [intermediate/advanced] Operator Learning for Dynamical Systems Elisa Ricci (University of Trento), [intermediate] Continual and Adaptive Learning in Computer Vision Xinghua Mindy Shi (Temple University), [intermediate] Trustworthy Artificial Intelligence for Health and Medicine Michalis Vazirgiannis (École Polytechnique), [intermediate/advanced] Graph Machine Learning and Multimodal Graph Generative AI James Zou (Stanford University), [introductory/intermediate] Large Language Models and Biomedical Applications OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david at irdta.eu by July 7, 2024. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 7, 2024. HACKATHONS: Hackathons will take place, where participants will work in teams to tackle several machine learning challenges. They will be coordinated by Professor Sergei V. Gleyzer. The challenges will be released 2 weeks before the beginning of the school. A jury will judge the submissions and the winners of each challenge will be announced on the final day. The winning teams will receive a small prize and the runners-up will get a certificate. EMPLOYERS: Organizations searching for personnel skilled in deep learning will be provided a space for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the organization and the profiles looked for, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david at irdta.eu by July 7, 2024. SPONSORS: Companies/institutions/organizations willing to be sponsors of the event can download the sponsorship leaflet from https://deeplearn.irdta.eu/2024/sponsoring/ ORGANIZING COMMITTEE: José Paulo Marques dos Santos (Maia, local chair) Carlos Martín-Vide (Tarragona, program chair) Sara Morales (Brussels) José Luís Reis (Maia) Luís Paulo Reis (Porto) David Silva (London, organization chair) REGISTRATION: It has to be done at https://deeplearn.irdta.eu/2024/registration/ The selection of 8 courses requested in the registration template is only tentative and non-binding.
For logistical reasons, it will be helpful to have an estimation of the respective demand for each course. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed and the online registration tool disabled once the capacity of the venue has been reached. It is highly recommended to register prior to the event. FEES: Fees comprise access to all program activities and lunches. There are several early registration deadlines. Fees depend on the registration deadline. The fees for on-site and for online participation are the same. ACCOMMODATION: Accommodation suggestions will be available at https://deeplearn.irdta.eu/2024/accommodation/ CERTIFICATE: A certificate of successful participation in the event will be delivered indicating the number of hours of lectures. This should be sufficient for those participants who plan to request ECTS recognition from their home university. QUESTIONS AND FURTHER INFORMATION: david at irdta.eu ACKNOWLEDGMENTS: Universidade da Maia Universidade do Porto Universitat Rovira i Virgili Institute for Research Development, Training and Advice – IRDTA, Brussels/London -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at icas.cc Sat Feb 3 03:52:56 2024 From: info at icas.cc (ICAS Organizing Committee) Date: Sat, 3 Feb 2024 09:52:56 +0100 Subject: Connectionists: 1st CfP ACAIN 2024 - 4th Int. Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy Subscribe -> Submission Deadline: March 23 Message-ID: Call for Participation & Call for Papers (apologies for multiple copies) _______________________________________________________________ The 4th International Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25, 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy W: https://acain2024.icas.events E: acain at icas.cc FB: https://www.facebook.com/ACAIN.LakeDistrict/ Symposium Deadlines Paper Submission (Symposium): by Saturday March 23, 2024 (AoE). https://acain2024.icas.events/symposium-call-for-papers/ https://easychair.org/conferences/?conf=acain2024 Notification of Decision for Papers (Symposium): by Wednesday July 10, 2024. Camera Ready Submission (Symposium): by Monday July 20, 2024. Author Registration (Symposium): by July 20, 2024. Regular Registration (Symposium): by Saturday March 23, 2024. Late Registration (Symposium): from Sunday March 24, 2024. Course Deadlines Regular Registration (Course): by Saturday March 23, 2024. Late Registration (Course): from Sunday March 24, 2024. Oral Presentation Submission (Course): by Saturday March 23, 2024. Notification of Decision for Oral Presentation (Course): by August 31, 2024. SCOPE & MOTIVATION: ACAIN 2024: AI meets Computational Neuroscience and Cognitive Science The ACAIN 2024 symposium and course is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in the fields of AI, Neuroscience, Neuroscience-Inspired AI, Human-Level AI, and Cognitive Science.
The 4th Advanced Course and Symposium on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion four-day Course and Symposium in Tuscany on cutting-edge advances in Artificial Intelligence and Neuroscience with lectures delivered by world-renowned experts. The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their colleagues, in a convivial and productive environment. Two days of keynote talks and oral presentations, the ACAIN Symposium (September 24-25), will be preceded by lectures of leading scientists, the ACAIN Course (September 22-23). Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy, exchange, collaboration and efficient cooperation between the two research communities. These are the goals of the International Course and Symposium - ACAIN 2024, which is aimed both at AI experts with interests in Neuroscience and at neuroscientists with an interest in AI. ACAIN 2024 accepts rigorous research that promotes and fosters multidisciplinary interactions between artificial intelligence and neuroscience. The Advanced Course is suited for scholars, academics, early career researchers, Post-Docs, PhD students and industry leaders. The Event (Course and Symposium) will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event. LECTURERS https://acain2024.icas.events/course-lecturers/ * Auke Jan Ijspeert, EPFL, Switzerland * Zeb Kurth-Nelson, DeepMind & UCL, London, UK * Kevin J Miller, DeepMind & UCL, London, UK * Thomas Parr, Oxford University, UK * Eric Schulz, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany More Lecturers TBA COURSE DESCRIPTION: https://acain2024.icas.events/course-description/ SYMPOSIUM CALL FOR PAPERS: https://acain2024.icas.events/symposium-call-for-papers/ SYMPOSIUM PROGRAM COMMITTEE (partial list, confirmed members): https://acain2024.icas.events/program-committee/ ORGANIZING COMMITTEE: https://acain2024.icas.events/symposium-committee/ VENUE & ACCOMMODATION: https://acain2024.icas.events/venue/ ACAIN 2024 is a *Residential Conference*: all participants (invited speakers, authors, organizers, chairs, participants) must book and stay at the Riva del Sole Resort and Spa. No exceptions are allowed. https://lod2024.icas.events/lod-2024-a-residential-conference/ Riva del Sole Resort & SPA a: Località Riva del Sole - Castiglione della Pescaia (Grosseto) CAP 58043, Tuscany - Italy p: +39-0564-928111 f: +39-0564-935607 e: booking.events at rivadelsole.it w: www.rivadelsole.it/en ACTIVITIES: https://acain2024.icas.events/activities/ REGISTRATION: https://acain2024.icas.events/registration/ See you in Tuscany in September! ACAIN 2024 Organizing Committee. E: acain at icas.cc W: https://acain2024.icas.events FB: https://www.facebook.com/ACAIN.LakeDistrict -- *7th Advanced Course on Data Science & Machine Learning - ACDL 2024,* 10-14 June Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto)
Tuscany, Italy An Interdisciplinary Course: Big Data, Deep Learning & AI without Borders *Early Registration: by February 23, 2024 (AoE)* The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course. https://acdl2024.icas.events/ *10th International Conference on Learning, Optimization and Data – LOD 2024, *22–25 September Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy https://lod2024.icas.events/ lod at icas.cc *Submission Deadline: March 23* *4th Advanced Course ** and Symposium **on Artificial Intelligence and Neuroscience – ACAIN 2024,* 22–25 September Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy https://acain2024.icas.events/ acain at icas.cc *Symposium Submission Deadline: March 23 * *Early Registration Deadline for the Course: March 23* The Course is equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Course. *ICAS* *-* *The Interdisciplinary Centre of Advanced Studies* https://icas.cc/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkrichma at uci.edu Sun Feb 4 18:14:12 2024 From: jkrichma at uci.edu (Jeffrey L Krichmar) Date: Sun, 4 Feb 2024 15:14:12 -0800 Subject: Connectionists: Two months until submission deadline for From Animals to Animats: 17th International Conference on the Simulation of Adaptive Behavior (SAB 2024) References: Message-ID: <7DA1967E-DFB4-44DD-B28D-731F3717B363@uci.edu> Dear Connectionists, The deadline for SAB24 paper submissions is less than two months away! The Simulation of Adaptive Behavior (SAB) interdisciplinary conference brings together researchers in artificial intelligence, artificial life, computer science, cybernetics, ethology, evolutionary biology, neuroscience, robotics, and many other fields, to further our understanding of the behaviors and underlying mechanisms that allow natural animals and artificial agents to adapt and survive in complex, dynamic and uncertain environments. Animats denote the connection between animals and synthetic agents. The conference focuses on models and experiments designed to help characterize and compare various organizational principles and architectures underlying adaptive behavior in real animals and in animats. SAB 2024 solicits contributions dealing with any aspect of adaptive behavior. We especially invite submissions from researchers who connect biology and natural behavior to synthetic agents. Conference papers are peer-reviewed and accepted papers will be published in "From Animals to Animats 17. SAB 2024." Lecture Notes in Artificial Intelligence, Springer Nature. Website: https://sites.uci.edu/sab2024/ Venue -------- CalIT2 Auditorium University of California, Irvine Irvine, CA 92697 USA CALL FOR PAPERS - SAB 2024 (April 1, 2024 Deadline) ---------------------
More details can be found at: https://sab2024.socsci.uci.edu/call-for-papers Important Dates ---------------------- April 1, 2024: Deadline for paper submission April 15, 2024: Deadline for Workshop and tutorial proposals June15, 2024: Notice of acceptance June 30, 2024: Camera-ready papers September 9-12: Conference dates More details can be found at: https://sab2024.socsci.uci.edu/call-for-papers Jeff Krichmar Department of Cognitive Sciences 2328 Social & Behavioral Sciences Gateway University of California, Irvine Irvine, CA 92697-5100 jkrichma at uci.edu http://www.socsci.uci.edu/~jkrichma https://sites.uci.edu/sab2024/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.suykens at esat.kuleuven.be Mon Feb 5 04:11:20 2024 From: johan.suykens at esat.kuleuven.be (Johan Suykens) Date: Mon, 5 Feb 2024 10:11:20 +0100 Subject: Connectionists: DEEPK 2024 - International Workshop on Deep Learning and Kernel Machines - March 7-8, 2024, Leuven Belgium In-Reply-To: <5bf77004-29ca-4390-9e70-7e95aec99695@esat.kuleuven.be> References: <5bf77004-29ca-4390-9e70-7e95aec99695@esat.kuleuven.be> Message-ID: <7aff4570-9715-4c9f-b57d-fa731823ebe3@esat.kuleuven.be> *DEEPK 2024 /International Workshop on Deep Learning and Kernel Machines/* March 7-8, 2024, Leuven, Arenberg Castle, Belgium https://www.esat.kuleuven.be/stadius/E/DEEPK2024 */- Main scope -/* Major progress and impact has been achieved through deep learning architectures with many exciting applications such as by generative models and transformers. At the same time it triggers new questions on the fundamental possibilities and limitations of the models, with respect to representations, scalability, learning and generalization aspects. Through kernel-based methods often a deeper understanding and solid foundations have been obtained, complementary to the powerful and flexible deep learning architectures. Recent examples are understanding generalization of over-parameterized models in the double descent phenomenon and conceiving attention mechanisms in transformers as kernel machines. The aim of DEEPK 2024 is to provide a multi-disciplinary forum where researchers of different communities can meet, to find new synergies between deep learning and kernel machines, both at the level of theory and applications. 
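To make the "attention mechanisms in transformers as kernel machines" connection mentioned above concrete, here is a small illustrative sketch (Python/NumPy; all variable names and data are hypothetical and not taken from the workshop) that writes single-head softmax attention as a row-normalized evaluation of an asymmetric exponential kernel between queries and keys, i.e., as a Nadaraya-Watson-style kernel smoother over the value vectors:

import numpy as np

def asym_exp_kernel(Q, K, scale):
    # "Kernel" k(q, k) = exp(q . k / scale); asymmetric because queries and keys
    # come from different linear projections of the same input.
    return np.exp(Q @ K.T / scale)

def attention_as_kernel_smoother(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    G = asym_exp_kernel(Q, K, np.sqrt(Q.shape[-1]))
    W = G / G.sum(axis=-1, keepdims=True)  # row-normalized kernel weights = softmax attention
    return W @ V                           # kernel-weighted average of the values

# Hypothetical usage with random data: 5 tokens, 8 input dims, 4 head dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(attention_as_kernel_smoother(X, Wq, Wk, Wv).shape)  # (5, 4)

Reading the softmax weights as normalized asymmetric kernel evaluations is one of the bridges between transformers and classical kernel methods addressed by the workshop topics listed below.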
*/- Topics - /* Topics include but are not limited to: * Deep learning and generalization * Double descent phenomenon and over-parameterized models * Transformers and asymmetric kernels * Attention mechanisms, kernel singular value decomposition * Learning with asymmetric kernels * Duality and deep learning * Regularization schemes, normalization * Neural tangent kernel * Deep learning and Gaussian processes * Transformers, support vector machines and least squares support vector machines * Autoencoders, neural networks and kernel methods * Kernel methods in GANs, variational autoencoders, diffusion models, Generative Flow Networks * Generative kernel machines * Deep Kernel PCA, deep kernel machines, deep eigenvalues, deep eigenvectors * Restricted Boltzmann machines, Restricted kernel machines, deep learning, energy based models * Disentanglement and explainability * Tensors, kernels and deep learning * Convolutional kernels * Sparsity, robustness, low-rank representations, compression * Nystrom method, Nystromformer * Efficient training methods * Lagrange duality, Fenchel duality, estimation in Hilbert spaces, reproducing kernel Hilbert spaces, vector-valued reproducing kernel Hilbert spaces, Krein spaces, Banach spaces, RKHS and C*-algebra * Applications */- Invited Speakers -/* * Mikhail Belkin (University of California San Diego) * Volkan Cevher (EPFL) * Florence d'Alche-Buc (Telecom Paris, Institut Polytechnique de Paris) * Julien Mairal (INRIA) * Massimiliano Pontil (IIT and University College London) * Dingxuan Zhou (University of Sydney) */- Call for abstracts -/* The DEEPK 2024 program will include *oral and poster sessions*. Interested participants are cordially invited to submit an *extended abstract (max. 2 pages)* for their contribution.? Please prepare your extended abstract submission in LaTeX, according to the provided stylefile and submit it in pdf format (max. 2 pages). Further extended abstract information is given at https://www.esat.kuleuven.be/stadius/E/DEEPK2024/call_for_abstracts.php . */- Schedule - /* * *Deadline extended abstract submission:* *Feb 8, 2024 * * Notification of acceptance and presentation format (oral/poster): Feb 22, 2024 * Deadline for registration: Feb 29, 2024 * *International Workshop DEEPK 2024:* March 7-8, 2024 */- Organizing committee - /* Johan Suykens (Chair), Alex Lambert, Panos Patrinos, Qinghua Tao, Francesco Tonin */- Other info -/* Please consult the DEEPK 2024 website https://www.esat.kuleuven.be/stadius/E/DEEPK2024 for info on program, registration, location and venue. The event is co-sponsored by ERC Advanced Grant E-DUALITY and KU Leuven. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jphanna at cs.wisc.edu Mon Feb 5 06:47:28 2024 From: jphanna at cs.wisc.edu (Josiah Hanna) Date: Mon, 5 Feb 2024 11:47:28 +0000 Subject: Connectionists: RL Conference 2024 - Call for Workshop Proposals Message-ID: Before the RLC 2024 main conference, workshops on a variety of current topics will be held on August 9th, 2024 in Amherst, Massachusetts. We invite researchers interested in chairing a workshop to submit proposals. Goal of RLC Workshops: Workshops provide an informal, cutting-edge venue for discussion of works in progress and future directions. Good workshops have helped to crystallize common problems, explicitly contrast competing frameworks, and clarify essential questions for a subfield or application area. 
Potential workshop topics include, but are not limited to, the following: -Reinforcement learning algorithms, -Theoretical foundations of reinforcement learning, -Reinforcement learning in low-resource settings, -Representation learning for reinforcement learning, -Robustness, safety, and fairness in reinforcement learning, -Exploration in reinforcement learning, -Human-in-the-loop reinforcement learning and RLHF, -Imitation learning and inverse reinforcement learning, -Uncertainty in reinforcement learning, -Applications of reinforcement learning: e.g., resource allocation, recommendation systems, robotics, healthcare, climate change, agriculture, pandemic response, etc, -Societal impacts and policy aspects of reinforcement learning deployment, -Any other topic relevant to an appreciable fraction of the RLC community. Submission Instructions: https://rl-conference.cc/call_for_workshops.html Workshop Application Deadline: 8 March 2024 (11:00 PM - Pacific Time) Workshop Acceptance Notification: 20 March 2024 Best, The RLC 2024 Workshop Co-Chairs Daniel Brown, University of Utah Josiah Hanna, University of Wisconsin, Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtkostecki at gmail.com Mon Feb 5 08:09:56 2024 From: mtkostecki at gmail.com (Mateusz Kostecki) Date: Mon, 5 Feb 2024 14:09:56 +0100 Subject: Connectionists: Origins of Order in Behaviour: Deadline extended till Feb 15h Message-ID: Dear Colleagues, we have extended the deadline for our Workshop of Ideas in Neuroscience: Origins of Order in Behaviour till Feb 15th! Come to Warsaw, to beautiful wooden house in the middle of the greenest part of the city, to discuss the concept of instinct and innate behaviour, the role of genes, environmental and organismal factors in the development of behaviour and behavioural evolution together with our fantastic guest: Kevin Mitchell, Gregory Kohn, Joanna R?czaszek Leonardi and Daniel Czegel. The workshop will be based on extensive talks and discussions in an informal atmosphere - more info can be found here - https://nenckiopenlab.org/origins_of_order/ Best, Mat -------------- next part -------------- An HTML attachment was scrubbed... URL: From newsletters at tommasoturchi.fyi Mon Feb 5 08:07:19 2024 From: newsletters at tommasoturchi.fyi (Tommaso Turchi) Date: Mon, 05 Feb 2024 14:07:19 +0100 Subject: Connectionists: [CfP] SYNERGY Workshop @ AVI 2024 Message-ID: <10e03a23-1946-4ad8-b684-be92ad0bc001@app.fastmail.com> *** Apologies for cross-posting *** -------------------------------------- CALL FOR PAPERS: SYNERGY Workshop 2024 -------------------------------------- We are pleased to announce the call for papers for the SYNERGY Workshop, to be held in conjunction with ACM AVI 2024. This workshop is dedicated to designing and building Hybrid Human-AI Systems. We invite researchers and practitioners to contribute to this evolving field, where human intelligence synergizes with artificial intelligence. 
== Important Dates == * Submission Deadline: April 2, 2024 * Notification: April 16, 2024 * Camera Ready: May 3, 2024 * Workshop Date: Afternoon of June 3rd or 4th, 2024 == Submission Guidelines == * Length: 5-10 pages * Format: CEUR-ART 1-column Template * Selected papers may be invited to submit an extended version to a dedicated Special Issue of Springer's "Personal and Ubiquitous Computing" * Submission Website: https://cmt3.research.microsoft.com/SYNERGY2024 == Topics Include == The workshop seeks contributions on, but not limited to, the following themes: 1. Human-AI Collaboration Models 2. Augmenting Human Cognition with AI 3. Design Principles for Synergistic Systems 4. Ethics and Trust in Human-AI Interaction 5. Philosophical and Theoretical Approaches 6. Adaptive Interfaces for Human-AI Synergy 7. Software Architectures and Tools 8. Case Studies of Human-AI Synergy == Organizers == * Alan Dix, Swansea University & Cardiff Metropolitan University * Matt Roach, Swansea University * Tommaso Turchi, University of Pisa * Alessio Malizia, University of Pisa For more information, please visit our website at https://synergy.trx.li or contact Tommaso Turchi (tommaso.turchi at unipi.it). Join us in Arenzano, Genoa, Italy for an insightful workshop that aims to explore and shape the future of hybrid human-AI systems! //==============================\\ Tommaso Turchi Department of Computer Science University of Pisa \\==============================// From tmawla42 at tntech.edu Mon Feb 5 09:05:23 2024 From: tmawla42 at tntech.edu (Mawla, Tanjila (tmawla42)) Date: Mon, 5 Feb 2024 14:05:23 +0000 Subject: Connectionists: Call for Papers: ACM Workshop on Secure and Trustworthy Cyber-Physical Systems Message-ID: (Apologies for re-posting) Dear Colleagues, ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS) aims to represent a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS. The workshop will be held in Porto, Portugal (or virtually) on June 21, 2024 in conjunction with the 14th ACM Conference on Data and Application Security and Privacy (CODASPY 2024). *** Paper submission deadline: February 25, 2024 *** *** Website: https://sites.google.com/view/sat-cps-2024/ *** SaT-CPS aims to represent a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS. 
Example topics of interest are given below, but are not limited to: - Secure CPS architectures - Authentication mechanisms for CPS - Access control for CPS - Key management in CPS - Attack detection for CPS - Threat modeling for CPS - Forensics for CPS - Intrusion and anomaly detection for CPS - Trusted-computing in CPS - Energy-efficient and secure CPS - Availability, recovery, and auditing for CPS - Distributed secure solutions for CPS - Metrics and risk assessment approaches - Privacy and trust - Blockchain for CPS security - Data security and privacy for CPS - Digital twins for CPS - Wireless sensor network security - CPS/IoT malware analysis - CPS/IoT firmware analysis - Economics of security and privacy - Securing CPS in medical devices/systems - Securing CPS in civil engineering systems/devices - Physical layer security for CPS - Security on heterogeneous CPS - Securing CPS in automotive systems - Securing CPS in aerospace systems - Usability security and privacy of CPS - Secure protocol design in CPS - Vulnerability analysis of CPS - Anonymization in CPS - Embedded systems security - Formal security methods in CPS - Industrial control system security - Securing Internet-of-Things - Securing smart agriculture and related domains The workshop is planned for one day, June 21, 2024, on the last day of the conference. Instructions for Paper Authors All submissions must describe original research, not published nor currently under review for another workshop, conference, or journal. All papers must be submitted electronically via the Easychair system: https://easychair.org/conferences/?conf=acmsatcps2024 Full-length papers Papers must be at most 10 pages in length in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template). Submission implies the willingness of at least one author to attend the workshop and present the paper. Accepted papers will be included in the ACM Digital Library. The presenter must register for the workshop before the deadline for author registration. Position papers and Work-in-progress papers We also invite short position papers and work-in-progress papers. Such papers can be of length up to 6 pages in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template), and must clearly state "Position Paper" or "Work in progress," as the case may be in the title section of the paper. These papers will be reviewed and accepted papers will be published in the conference proceedings. Important Dates Due date for full workshop submissions: February 25, 2024 Notification of acceptance to authors: March 25, 2024 Camera-ready of accepted papers: April 1, 2024 Workshop day: June 21, 2024 Sincerely, Tanjila Mawla PhD Student and Graduate Research Assistant in Computer Science, Tennessee Technological University. Email: tmawla42 at tntech.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wanling.cai at tcd.ie Mon Feb 5 14:30:59 2024 From: wanling.cai at tcd.ie (Wanling Cai) Date: Mon, 5 Feb 2024 19:30:59 +0000 Subject: Connectionists: ACM UMAP 2024 - 3rd Call for Doctoral Consortium Papers Message-ID: * We apologize if you receive multiple copies of this CfP * * Online version: https://www.um.org/umap2024/call-for-doctoral-consortium-papers/ ACM UMAP 2024 - Call for Doctoral Consortium Papers ACM UMAP 2024: The 32nd ACM Conference on User Modeling, Adaptation and Personalization Cagliari, Sardinia, Italy July 1-4, 2024 The 32nd International Conference on User Modeling, Adaptation, and Personalization (ACM UMAP 2024) is the premier international conference for researchers and practitioners working on systems that adapt to individual users or groups of users, and that collect, represent, and model user information. The ACM UMAP 2024 Doctoral Consortium (DC) will take place as part of the conference. The DC provides an opportunity for doctoral students to explore and develop their research interests under the guidance of distinguished researchers and industry practitioners from the field. Doctoral students are invited to apply for presenting their research to experienced scholars who will provide constructive feedback and advice. Students should consider participating in the DC if they are at least one year away from completing their dissertation at the time of the event, but after having settled on a research area or dissertation topic. This forum will provide doctoral students with an opportunity to: * Present and discuss their research ideas to experienced scholars in a supportive, formative, and yet critical environment. * Explore and develop their research interests under the guidance of distinguished researchers from the field who will provide constructive feedback and advice. * Explore career pathways available after completing their doctoral degree. * Network and build collaborations with other members of the community. Students are asked to submit a brief proposal outlining their doctoral research (see detailed requirements below), which will be evaluated by the doctoral consortium committee. Good quality applications will be selected for presentation at a DC session as part of the conference. Each student with an accepted submission will be assigned a mentor who will provide feedback on the student?s work. A mentoring session will take place during the conference, and the DC students are expected to attend. ** Important Dates ** Paper Submission: March 28, 2024 Notification: April 26, 2024 Camera-ready Submission (TAPS system): May 9, 2024 Conference: July 1 ? 4, 2024 DC Day: July TBD, 2024 (it will happen in one of the conference days) Note: The submissions deadlines are at 11:59 pm AoE (Anywhere on Earth) time. ** Submission ** All submissions must be written in English. Submissions should be a single PDF document consisting of: (i) the recommendation letter, (ii) the research description, and (iii) the curriculum vitae. This PDF document should be submitted electronically through the EasyChair submission system, https://easychair.org/conferences/?conf=umap24, by selecting the ?UMAP24 Doctoral Consortium Papers? track. ** Length and Formatting ** Format and Content Guidelines. Consult the ACM UMAP 2024 general call for full and short papers concerning the key topics of this call as well. 
Each proposal should be a single PDF document, with three required components: (i) a recommendation letter, (ii) a description of your doctoral path, (iii) and a curriculum vitae. If you are unable to obtain a letter of recommendation from your dissertation advisor, please include a short explanation. Specifically, for each component, please consider the following instructions: 1. Letter of Recommendation (from your primary dissertation advisor, in any format): * Describe your interaction with the student and your assessment of the quality of their work. Explain how the ACM UMAP 2024 doctoral consortium would benefit this student at this point in their doctoral program, as well as the contributions you expect the student to make to the group. * The DC targets students who have a clear idea of their research plans and have started their research but have not yet executed a majority of their research. Please explain the structure of your student?s program and their expected level of progress by July 2024. * Recall that students who are unable to obtain a letter of recommendation from their dissertation advisor (or the equivalent) should instead include a short explanation and a description of where the student is in their doctoral progress and the anticipated timeline. 1. Research description (6 pages at most plus additional pages for references; figures, tables, proofs, appendixes, acknowledgments, and any other content count toward the page limit): using a single-column manuscript formatted according to the guidelines discussed below, please include the following: * Your name, advisor(s) name(s), and the university where you are conducting your doctoral work. * Current year of study and projected completion date (plus information about the regulations of your doctoral program regarding length, any part-time study, etc.). * Context and motivation for your research. * Key related work that frames your research. * Specific research objectives, goals, or questions. * Research approach, methods, and rationale. * Results and contributions to date. * Expected next steps. * Dissertation status and long-term goals. 1. Curriculum vitae (2 pages at most, in any format): a concise summary of your current curriculum vitae, including research publications. Clearly separate published papers from those under review or in press. Please ensure that your submission is complete and conforms to the format and content guidelines above. Submissions that do not meet these requirements will be desk-rejected. *Single-Blind Policy*. Proposals will be reviewed single-blind and, therefore, do not need to be anonymized before submission. *Template for the Research Description*. Following the ACM Publication Workflow, the research description component should be arranged based on the new ACM single-column format. Instructions for students are given below: * LaTeX (use \documentclass[manuscript, review, anonymous]{acmart} in the sample-authordraft.tex file for single-column). Please carefully follow the ACM?s instructions for preparing your article with LaTeX. * Overleaf (use \documentclass[manuscript,review,anonymous]{acmart} for single-column). Please carefully follow the ACM?s instructions for preparing your article with Overleaf. * Word. Please carefully follow the ACM?s instructions for preparing your article with Microsoft Word Should you have any questions or issues going through the instructions above, please contact support at acmtexsupport at aptaracorp.com for LaTeX and Microsoft Word inquiries. 
*Accessibility*. DC students are strongly encouraged to provide ?alt text? (alternative text) for floats (images, tables, etc.) in their content so that readers with disabilities can be given descriptive information for these floats that are important to the work. The descriptive text will be displayed in place of a float if the float cannot be loaded. This benefits the author and it broadens the reader base for the author?s work. Moreover, the alt text provides in-depth float descriptions to search engine crawlers, which helps to properly index these floats. Additionally, authors should follow the ACM Accessibility Recommendations for Publishing in Color and SIG ACCESS guidelines on describing figures. *Submission Policy*. Please consider the following ACM?s publication policies: 1. ?By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM?s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.? 2. ?Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. We are committed to improve author discoverability, ensure proper attribution and contribute to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.? *Camera-ready Information*. Accepted proposals (i.e., the research description component) will require a further revision in order to meet the requirements and the page limits of the camera-ready format required by ACM. Instructions for the preparation of the camera-ready versions of the papers will be provided after acceptance. We strongly recommend the usage of LaTeX/Overleaf for the camera-ready papers to minimize the extent of reformatting. Users of the Word template must use either the version for Microsoft Word for Windows, Macintosh Office 2011, or Macintosh Office 2016 (other formats such as Open Office, etc., are not admitted) for the camera-ready submission to avoid incompatibility issues. ** Registration and Presentation Policy ** Each accepted proposal in the doctoral consortium track must be accompanied by a distinct full author registration, completed by the doctoral student by the early registration date cut-off. Each accepted proposal must be presented in person in the doctoral consortium session to be included in the conference proceedings, published by ACM and available via the ACM Digital Library. The official publication date is when the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of UMAP 2024. The official publication date affects the deadline for any patent filings related to published work. ** Grants and Support ** ACM UMAP has a long history of supporting students who want to attend the conference. Doctoral students of the accepted DC proposals will have higher priority for the ACM UMAP grant applications. Please periodically consult the page that summarizes grants and support opportunities, available at https://www.um.org/umap2024/grants-and-support/. ** Doctoral Consortium Chairs ** * Martijn C. 
Willemsen, Eindhoven University of Technology & JADS, The Netherlands * Styliani Kleanthous, CYENS CoE and Open University of Cyprus, Cyprus * Rodrygo Santos, Universidade Federal de Minas Gerais, Brazil Contact information: umap2024-dc at um.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From terry at snl.salk.edu Mon Feb 5 19:42:06 2024 From: terry at snl.salk.edu (Terry Sejnowski) Date: Mon, 5 Feb 2024 16:42:06 -0800 Subject: Connectionists: NEURAL COMPUTATION - February 1, 2024 In-Reply-To: References: Message-ID: <1cecf639-7b40-47e5-8c30-63938665f96c@snl.salk.edu> NEURAL COMPUTATION - Volume 36, Number 2 - February 1, 2024 Now available for online download: http://www.mitpressjournals.org/toc/neco/36/2 http://cognet.mit.edu/content/neural-computation ----- Article Efficient Decoding of Large-scale Neural Population Responses With Gaussian-process Multiclass Regression Jonathan Pillow, C. Daniel Greenidge, Benjamin Scholl, and Jacob Yates Letters Emergence of Universal Computations Through Neural Manifold Dynamics Joan Gort Cooperativity, Information Gain, and Energy Cost During Early LTP in Dendritic Spines Jan Karbowski and Paulina Urban Q&A Label Learning Kota Kawamoto and Masato Uchida ----- ON-LINE -- http://www.mitpressjournals.org/neco MIT Press Journals, One Rogers Street, Cambridge, MA 02142-1209 Tel: (617) 253-2889? FAX: (617) 577-1545 journals-cs at mit.edu ----- From balder.tencate at gmail.com Tue Feb 6 05:03:19 2024 From: balder.tencate at gmail.com (Balder ten Cate) Date: Tue, 6 Feb 2024 11:03:19 +0100 Subject: Connectionists: PhD position "Machine Learning for Automated Reasoning" (fully funded) @ University of Amsterdam In-Reply-To: References: Message-ID: Are you interested to work, in an interdisciplinary research setting, on topics at the intersection of logic, machine learning and automated reasoning? The Institute for Logic, Language and Computation (ILLC) of the University of Amsterdam is looking for a talented PhD candidate. Your research will concern the use of machine learning for automated reasoning (such as mathematical theorem proving and/or declarative constraint-based reasoning). See https://vacatures.uva.nl/UvA/job/PhD-Position-on-Machine-Learning-for-Automated-Reasoning/787287002/ for more information. The deadline for applications is March 11, 2024. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wanling.cai at tcd.ie Tue Feb 6 04:58:07 2024 From: wanling.cai at tcd.ie (Wanling Cai) Date: Tue, 6 Feb 2024 09:58:07 +0000 Subject: Connectionists: ACM UMAP 2024 - 2nd Call for Late-Breaking Results and Demos Message-ID: * We apologize if you receive multiple copies of this CfP * * Online version: https://www.um.org/umap2024/call-for-lbr-and-demos/ ACM UMAP 2024 - Call for Late-Breaking Results and Demos ACM UMAP 2024: The 32nd ACM Conference on User Modeling, Adaptation and Personalization Cagliari, Sardinia, Italy July 1-4, 2024 The 32nd International Conference on User Modeling, Adaptation, and Personalization (ACM UMAP 2024) is the premier international conference for researchers and practitioners working on systems that adapt to individual users or groups of users, and that collect, represent, and model user information. ACM UMAP 2024 is sponsored by ACM SIGCHI and SIGWEB. User Modeling Inc., as the core Steering Committee, oversees the conference organization. ACM UMAP operates under the ACM Conference Code of Conduct. 
The proceedings, published by ACM, will be part of the ACM Digital Library. ACM UMAP 2024 invites Late-Breaking Results (LBR) papers as well as Demonstrations (demos) of innovative UMAP-based systems (including research prototypes). The topics for these submissions are the same as the ones included in the Call for Full and Short Papers. However, their scope and timing is different. In particular, although short papers (from the main track) and the LBR papers of this call have the same maximum length, LBR papers are expected to present innovative ideas that are being explored and have already shown some promising results. It is also a dissemination channel for new research directions. Mature results which have already gone through experimental validation are more suitable for the general call, either as a full or short paper. Thus, rejected papers from the main track are not expected to be a valuable contribution as an LBR paper unless their contents are revised to address the scope of this track. Note also that LBR papers are to be presented physically during the conference in a poster format, allowing for more informal discussions on the ideas shared. Demos are intended to present systems that have been used to achieve the research outcomes presented as full and short papers. Thus, we highly recommend authors of accepted full and short papers to also prepare a submission for a demo of their system to be showcased at the conference in the LBR and Demo session. ** Important Dates ** * Submission: April 18, 2024 * Notification: May 9 2024 * Camera-ready Submission (TAPS system): May 16, 2024 * Conference: July 1 ? 4, 2024 Note: The submissions deadlines are at 11:59 pm AoE (Anywhere on Earth) time. ** Submission ** All submissions must be written in English. Papers should be submitted electronically, in a PDF format, through the EasyChair submission system, https://easychair.org/conferences/?conf=umap24, by selecting the ?UMAP24 Late Breaking Results and Demo Papers? track ** Length and Formatting ** -- Late-Breaking Results Format -- * Page Limits. Up to 7 pages, plus up to 2 additional pages for references (figures, tables, proofs, appendixes, acknowledgments, and any other content count toward the page limit). * Proceedings. Publication in ACM UMAP 2024 adjunct proceedings. * Presentation. Physically presented as a poster during the conference. * Description. LBRs are research-in-progress that must contain original and unpublished accounts of innovative research ideas, preliminary results, industry showcases, and system prototypes, addressing both the theory and practice of UMAP. In addition, papers introducing recently started research projects or summarizing project results are welcome as well. We encourage researchers and practitioners to submit late-breaking work as it provides a unique opportunity for sharing valuable ideas, eliciting useful feedback on early-stage work, and fostering discussions and collaborations among colleagues. In case that papers rejected in the main track (i.e. submitted as full or short papers) are submitted to this track, they should be revised not only to take into account the comments made to help improve the paper, but to fulfill the scope of the LBR, which emphasizes novelty of ideas. This is also valid for papers that have been recommended as an LBR when rejected in the main track. -- Demos Format -- * Page Limits *. 
Up to 5 pages, plus up to 1 additional page for references (figures, tables, proofs, appendixes, acknowledgments, and any other content count toward the page limit). On an extra page (not to be published), submissions should include a specification of the technical requirements for demonstrating the system at UMAP 2024. * Supporting Material *. Video or external material demonstrating the system. * Proceedings *. Publication in ACM UMAP 2024 adjunct proceedings. * Presentation * . Physically presented as a demo plus as a poster during the conference. * Description *. Demos will showcase research prototypes and commercially available products in a dedicated session. Demo submissions must be based on an implemented and tested system that pursues one or more innovative ideas in the interest areas of the conference. Demonstrations are an excellent and exciting way to showcase implementations and get valuable feedback from the community, especially for those papers that have been presented in the main track. Each demo submission must make clear which aspects of the system will be demonstrated, and how these will be demonstrated on-site as well as online. To better identify the value of demos, we also encourage authors to submit a pointer to a screencast (max. 5 minutes on Vimeo or YouTube) or any external material related to the demo (e.g., shared code on GitHub). *Non-Anonymity.* Submissions will be reviewed single-masked (i.e., authors? names should be included in the papers) thus there is no need to be anonymized before submission. *Template for the Research Description.* Following the ACM Publication Workflow, all authors should submit manuscripts for review in the new ACM single-column format. Instructions for authors are given below: * LaTeX (use \documentclass[manuscript, review, anonymous]{acmart} in the sample-authordraft.tex file for single-column). Please carefully follow the ACM?s instructions for preparing your article with LaTeX. * Overleaf (use \documentclass[manuscript,review,anonymous]{acmart} for single-column). Please carefully follow the ACM?s instructions for preparing your article with Overleaf. * Word. Please carefully follow the ACM?s instructions for preparing your article with Microsoft Word Should you have any questions or issues going through the instructions above, please contact support at acmtexsupport at aptaracorp.com for LaTeX and Microsoft Word inquiries. * Accessibility.* Authors are strongly encouraged to provide ?alt text? (alternative text) for floats (images, tables, etc.) in their content so that readers with disabilities can be given descriptive information for these floats that are important to the work. The descriptive text will be displayed in place of a float if the float cannot be loaded. This benefits the author and it broadens the reader base for the author?s work. Moreover, the alt text provides in-depth float descriptions to search engine crawlers, which helps to properly index these floats. Additionally, authors should follow the ACM Accessibility Recommendations for Publishing in Color and SIG ACCESS guidelines on describing figures. ** Review Process & Camera-ready Submission ** * Review * Submissions will be reviewed by at least two independent reviewers. They will be assessed based on their originality and novelty, potential contribution to the research field, potential impact in particular use cases, and the usefulness of presented experiences, as well as their overall readability. 
Papers that exceed the page limits or do not adhere to the formatting guidelines will be returned without review. The ACM Code of Ethics gives the UMAP program committee the right to (desk-)reject papers that perpetuate harmful stereotypes, employ unethical research practices, or uncritically present outcomes/implications that clearly disadvantage minority communities. Further, reviewers will be explicitly asked to consider whether the research was conducted in compliance with professional ethical standards and applicable regulatory guidelines. Failure to do so could lead to a (desk-)rejection. *Camera-ready Information.* Accepted papers will be subject to further revision to meet the requirements of the camera-ready format required by ACM. We strongly recommend the usage of LaTeX/Overleaf for the camera-ready papers to minimize the extent of reformatting. Users of the Word template must use either the version for Microsoft Word for Windows, Macintosh Office 2011, or Macintosh Office 2016 (other formats such as Open Office, etc., are not admitted) for the camera-ready submission to avoid incompatibility issues. Instructions for preparing the camera-ready versions of accepted papers will be provided after acceptance. This might include instructions to prepare a video of the accepted contribution. Camera-ready versions of accepted papers will be later submitted using ACM?s new production platform where authors will be able to review PDF and HTML output formats before publication. ** Registration and Presentation Policy ** Each accepted paper must be accompanied by a distinct full author registration, completed by the early registration date cut-off. Each accepted paper must be presented in person to be included in the conference adjunct proceedings, published by ACM and available via the ACM Digital Library. The official publication date is when the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of UMAP 2024. The official publication date affects the deadline for any patent filings related to published work. During the conference, all categories will be presented at the poster reception, in the form of a poster and/or a software demonstration following poster format. This form of presentation will provide presenters with an opportunity to obtain direct feedback about their work from a wide audience during the conference. *Grants and Support* To help students and early career researchers attend ACM UMAP 2024, we are able to provide several grants for participants at ACM UMAP 2024. * Category 1: Discounted Registration Rates * Category 2: Gary Marsden Travel Awards (GMTA) * Category 3: ACM UMAP 2024 Grants More information can be found in https://www.um.org/umap2024/grants-and-support/. Please refer to the website information, to learn the eligibility and application process for each category. ** Late-Breaking Results and Demos Chairs ** * Bamshad Mobasher, DePaul University, USA * Hanna Hauptmann, Utrecht University, Netherlands * Olga C. Santos, Universidad Nacional de Educacion a Distancia (UNED), Spain Contact information: umap2024-lbr at um.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.pisarski at uw.edu.pl Tue Feb 6 08:15:33 2024 From: m.pisarski at uw.edu.pl (Mariusz Pisarski) Date: Tue, 6 Feb 2024 13:15:33 +0000 Subject: Connectionists: CFP and Invitation to ACM Hypertext 2024 Message-ID: Dear colleagues, On behalf of ACM SIGWEB and Adam Mickiewicz University in Pozna? 
I want to invite you to participate in the upcoming 2024 ACM Hypertext Conference, a premium venue for high-quality peer-reviewed research on hypertext theory, systems, applications, publishing, artwork and related practices. The conference theme is "Creative Intelligence". Share your ideas and papers, bring demos and reports on your projects, your systems and your art! Please see the details, dates and links below: [CFP] ACM Hypertext & Social Media 2024 - Deadline 31 March 2024 ************************************************************************ ACM Hypertext & Social Media 2024 10-13 September 2024, Poznan, Poland https://ht.acm.org/ht2024/ Deadline: 31 March 2024 23.59 AOE ************************************************************************ Important Dates • Workshop proposals - Submission: 28 February 2024 AoE - Notification: 15 March 2024 AoE • Research tracks - Submission: 31 March 2024 AoE - Notification: 15 May 2024 AoE • Practitioners tracks & blue sky - Submission: 26 May 2024 AoE - Notification: 26 June 2024 AoE • Camera-ready version of accepted papers - 23 July 2024 AoE • Conference 10-13 September 2024 Note: The submission times are 11:59 pm AoE time (Anywhere on Earth). The ACM Hypertext Conference is a premium venue for high-quality peer-reviewed research on hypertext theory, systems, applications, publishing, artwork and related practices. It is concerned with all aspects of modern hypertext research including social and intelligent media, narrative systems and applications, authoring, reading and publishing hypertext, workflows and infrastructures as well as reflections and approaches. The topic of the 2024 edition of HT is Creative Intelligence. Research tracks (31 March 2024 AoE) • Intelligent methods for content creation and curation • Authoring and publishing of hypertext and social media content • Foundations of Creative Intelligence • Readership and experience with interactive and social media • Communities of digital practices Practitioners' tracks (26 May 2024 AoE) • Interactive narratives • Applications • Resources and tools Submissions on EasyChair https://easychair.org/conferences/?conf=ht24 Formats • Long papers (up to 12 pages + references) • Short papers (up to 6 pages + references) • Extended abstracts (2-4 pages + references) All accepted contributions will be published by ACM and will be available in the Proceedings via the ACM Digital Library. Selected contributors will be invited to submit an expanded version after the conference to a special issue. Best regards Mariusz Pisarski, PhD Electronic Literature Research Center, Adam Mickiewicz University, Poznań *t:* +447871452513 *w*: https://techsty.art.pl *e:* mariusz.pisarski at amu.edu.pl -------------- next part -------------- An HTML attachment was scrubbed... URL: From alessio.ferone at uniparthenope.it Tue Feb 6 09:56:45 2024 From: alessio.ferone at uniparthenope.it (Alessio Ferone) Date: Tue, 6 Feb 2024 14:56:45 +0000 Subject: Connectionists: [Call for Applications] University of Naples Parthenope - 12 Months Position Message-ID: Dear all, We are seeking candidates for an open position at the University of Naples Parthenope on topics related to exoplanet atmosphere retrieval. Position: atmospheric exoplanet retrieval from spectroscopy with machine learning techniques (as part of the Progetto PNRR - PRIN 2022 "ESPLORA: Exoplanet Spectroscopy at high resolution to Probe their Lost Origins by Revealing their Atmospheric compositions", PI: A. Bonomo). The
application deadline is March 1st, 2024, and the successful candidate is expected to start the position in March/April 2024 at the University of Naples Parthenope, Italy. Link to the call: https://uniparthenope.portaleamministrazionetrasparente.it/index.php?id_oggetto=22&id_doc=17853 Requirement: a Master's Degree in Computer Science or Physics. Brief Description of Research Program: The successful candidate will focus on developing a new retrieval framework for exoplanet atmospheres of wide applicability across wavelengths and resolutions. The fellow will primarily collaborate with Alessio Ferone at the Parthenope University of Naples, working closely with other colleagues who are experts in machine learning and exoplanetary science in the Napoli area. The fellow will also collaborate with the GAPS group (https://theglobalarchitectureofplanetarysystems.com/) to determine the atmospheric properties (composition, clouds, dynamics) of a sample of extrasolar planets, using high spectral resolution observations from the optical to the near-infrared. Alessio Ferone, University of Naples Parthenope From stephan.petrone at gmail.com Tue Feb 6 14:43:36 2024 From: stephan.petrone at gmail.com (Stephan Petrone) Date: Tue, 6 Feb 2024 20:43:36 +0100 Subject: Connectionists: MESS 2024 ~ autoDL meets HPC ~ 15-18 July 2024 Message-ID: Call for Participation (apologies for multiple copies) ----------------------------------------------------------------------- MESS 2024 - Metaheuristics Summer School - Automated Deep Learning meets High-Performance Computing - 15-18 July 2024, Catania, Italy https://www.ANTs-lab.it/mess2024/ mess.school at ANTs-lab.it https://www.facebook.com/groups/MetaheuristicsSchool/ ----------------------------------------------------------------------- ** APPLICATION DEADLINE: 16th March 2024 ** MESS 2024 is aimed at qualified and strongly motivated MSc and PhD students, post-docs, young researchers, and both academic and industrial professionals, providing them with an overview of metaheuristic techniques and an in-depth analysis of the state of the art. It is a full-immersion four-day course in the beautiful city of Catania whose goal is to offer all participants a stimulating environment to study and learn advanced concepts and recent research in the fields of Metaheuristics, Optimization, Machine Learning and Artificial Intelligence in general. The main theme of this edition is "Automated Deep Learning Meets High-Performance Computing", that is, how these research areas may interact and affect each other in order to develop reliable and robust solving methodologies for Big Data analysis and data-driven problems. Participants will receive a certificate of attendance indicating the number of lecture hours (36-40 hours of lectures). In accordance with the academic system, all PhD and Master's students attending the summer school may get 8 ECTS points.
** LECTURERS + Thomas Bartz-Beielstein, TH Köln, Germany Lecture#1: Introduction to Surrogate-Based Hyperparameter Tuning in Classical Machine Learning, Online Machine Learning, and Deep Learning Lecture#2: Exploring Hyperparameter-Tuning Applications: Optimizing scikit-learn, River, and PyTorch with SPOT and Other Tuning Techniques + Aaron Klein, AWS Research Berlin, Germany Lecture#1: Introduction into AutoML: Hyperparameter Optimization Lecture#2: Introduction into AutoML: Neural Architecture Search + El-Ghazali Talbi, University of Lille1, France Lecture#1: TBA Lecture#2: TBA ~ More Lecturers will be announced soon ~ ** METAHEURISTICS COMPETITION All participants will be involved in the "Metaheuristics Competition Race", where each of them, individually or in working groups (no more than 3 people), will develop a metaheuristic solution for a given problem presented during the school. The top three in the competition ranking will receive the MESS 2024 award and will be involved in the writing of a manuscript dedicated to the competition that, afterwards, will be submitted to an international journal for possible publication. In addition, the remaining best ten in the ranking (excluding the top three, of course) will be invited to report their work in a manuscript which will be published in the special MESS 2024 Volume of the AIRO Springer Series. ** SHORT ORAL PRESENTATION All participants may submit an abstract of their recent results, or works in progress, for presentation, and will have the opportunity to debate and interact with leaders in the field. The Workshop Organizers and Scientific Committee will review the abstracts and will recommend the format of the presentation (oral or poster). All abstracts will be published in the electronic hand-out book of the summer school. The abstracts must be submitted during the application by *March 16, 2024*. ** MESS 2024 DIRECTORS: - Salvatore Greco, University of Catania, Italy - Mario Pavone, University of Catania, Italy - Andrea Schaerf, University of Udine, Italy - El-Ghazali Talbi, University of Lille1, France - Daniele Vigo, University of Bologna, Italy ** METAHEURISTICS COMPETITION CHAIRS: - Raffaele Cerulli, University of Salerno, Italy - Sara Ceschia, University of Udine, Italy - Mario Pavone, University of Catania, Italy - Andrea Schaerf, University of Udine, Italy ** MORE INFORMATION: https://www.ANTs-lab.it/mess2024/ -- mess.school at ANTs-lab.it Facebook Group: https://www.facebook.com/groups/MetaheuristicsSchool/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhansel0 at gmail.com Tue Feb 6 16:35:51 2024 From: dhansel0 at gmail.com (David Hansel) Date: Tue, 6 Feb 2024 22:35:51 +0100 Subject: Connectionists: REMINDER: World wide VVTNS series: Wednesday, February 7 2024 at 11am (ET), Nicholas Priebe | The University of Texas at Austin In-Reply-To: References: Message-ID: https://www.wwtns.online - on twitter: wwtns at TheoreticalWide You are cordially invited to the lecture given by Nicholas Priebe (The University of Texas at Austin) on the topic of *"The origins of variable responses in neocortical neurons"* The lecture will be held on Zoom on *February 7, 2024*, at *11:00 am ET*. Register on our website - https://www.wwtns.online - to receive the Zoom link. *Abstract:* I will discuss a collaborative project studying the origins of variable responses in neocortical neurons. The spiking responses of neocortical neurons are remarkably variable.
Distinct patterns are observed when the same stimulus is presented in sensory areas or when the same action is executed in motor areas. This is quantified across trials by measuring the Fano factor of the neuronal spike counts, which is generally near 1, consistent with spiking times following a noisy Poisson process. The two candidate sources for noise are the synaptic drive that converges on individual neurons and the intrinsic transducing processes within neurons. To parse the relative contributions of these noise sources, we made whole-cell intracellular recordings from cortical slices and used the dynamic clamp configuration to inject excitatory and inhibitory conductances previously recorded in vivo from visual cortical neurons (Tan et al. 2011). By controlling the conductance directly, we can test whether intrinsic processes contribute to Poisson firing. We found that repeated injections of the same excitatory and inhibitory conductance evoked stereotypical spike trains, resulting in Fano factors near 0.2. Varying the amplitude of both excitatory and inhibitory conductances changed the firing rate of recorded neurons but not the Fano factor. These records indicate that intrinsic processes do not contribute substantially to the Poisson spiking of cortical cells. Next, to test whether differences in network input are responsible for Poisson spike patterns, we examined spike trains evoked by injecting excitatory and inhibitory conductances recorded from different presentations of the same visual stimulus. These records exhibited different behaviors depending on whether the injected conductances were from visually-driven or spontaneous epochs: during visually-driven epochs, spiking responses were Poisson (Fano factor near 1); during spontaneous epochs, spiking responses were super-Poisson (Fano factors above 1). Both of these observations are consistent with the quenching of variability by sensory stimulation or motor behavior (Churchland et al. 2010). We also found that excitatory conductances, in the absence of inhibition, are sufficient to generate spike trains with Poisson statistics. Our results indicate that Poisson spiking emerges not from intrinsic sources but from differences in the synaptic drive across trials, that the nature of this synaptic drive can alter the nature of the variability, and that excitatory input alone is sufficient to generate Poisson spiking. *About VVTNS: Created as the World Wide Theoretical Neuroscience Seminar (WWTNS) in November 2020 and renamed in homage to Carl van Vreeswijk in Memoriam (April 20, 2022), its aim is to be a platform to exchange ideas among theoreticians. Speakers have the occasion to talk about theoretical aspects of their work which cannot be discussed in a setting where the majority of the audience consists of experimentalists. The seminars, **held on Wednesdays at 11 am ET,** are 45-50 min long followed by a discussion. The talks are recorded with authorization of the speaker and are available to everybody on our YouTube channel.* -- 'Life is good ..' (Carl van Vreeswijk, 1962-2022) --------------------------------------- David Hansel Directeur de Recherche au CNRS Co-Group leader Cerebral Dynamics Plasticity and Learning lab., CNRS 45 rue des Saints Pères 75270 Paris Cedex 06 Tel (Cell): +33 607508403 -------------- next part -------------- An HTML attachment was scrubbed... URL:
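For readers less familiar with the statistic discussed in the abstract above, the short Python sketch below illustrates the Fano factor computation (the variance-to-mean ratio of spike counts across repeated trials). It is a purely illustrative example with made-up numbers, not code or data from the study described in the talk; the function name and the simulated trial counts are our own assumptions.

import numpy as np

def fano_factor(spike_counts):
    # Variance-to-mean ratio of spike counts across repeated trials.
    counts = np.asarray(spike_counts, dtype=float)
    mean = counts.mean()
    return counts.var(ddof=1) / mean if mean > 0 else float("nan")

rng = np.random.default_rng(0)

# Hypothetical Poisson-like trials (about 10 spikes per trial): Fano factor near 1.
poisson_counts = rng.poisson(lam=10.0, size=200)
print("Poisson-like trials:", round(fano_factor(poisson_counts), 2))

# Hypothetical stereotyped trials (counts tightly clustered around 10), as with
# repeated injections of the same conductance: Fano factor well below 1.
stereotyped_counts = 10 + rng.integers(-1, 2, size=200)
print("Stereotyped trials: ", round(fano_factor(stereotyped_counts), 2))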
From saeed at psu.edu Tue Feb 6 14:38:15 2024 From: saeed at psu.edu (Saeed Abdullah) Date: Tue, 06 Feb 2024 14:38:15 -0500 Subject: Connectionists: Postdoc Position - Human-AI Collaboration to Support Mental Health Workers Message-ID: <58104E26-4B3A-442B-848D-3D3CD467CD64@psu.edu> The College of Information Sciences and Technology at Penn State is seeking a postdoctoral scholar to join an interdisciplinary team focusing on Human-AI collaboration to train mental health workers. The project is supported by an NSF grant. The position will involve developing computational methods to assess clinical sessions and provide actionable feedback to support effective training. An ideal candidate will have strong research skills and experience in relevant areas (e.g., foundation models, deep learning, natural language processing, reinforcement learning). It will be a full-time appointment for 24 months, with a possibility of renewal dependent upon performance. Please see the application instructions here: https://psu.wd1.myworkdayjobs.com/PSU_Academic/job/Penn-State-University-Park/Postdoctoral-Scholar-Human-AI---College-of-Information-Sciences-and-Technology_REQ_0000051969-1 For more information, please contact Saeed Abdullah (saeed at psu.edu). From el-ghazali.talbi at univ-lille.fr Wed Feb 7 04:48:48 2024 From: el-ghazali.talbi at univ-lille.fr (El-Ghazali Talbi) Date: Wed, 7 Feb 2024 10:48:48 +0100 Subject: Connectionists: Postdoc and PhD positions available @University and INRIA Lille (France) Message-ID: Dear all, We would like to inform you that 3 positions are available at the University of Lille and INRIA in France. - Postdoc position in Neuromorphic-based Optimization. Details in document enclosed - PhD position in Exascale parallel optimization. Details in document enclosed - PhD position in Optimization of Spiking Neural Networks. Details in document enclosed Please distribute among potentially interested candidates. Best regards. -- Prof. El-ghazali TALBI Polytech'Lille, University Lille - INRIA CRISTAL - CNRS -------------- next part -------------- A non-text attachment was scrubbed... Name: Postdoc-ERC.pdf Type: application/pdf Size: 166322 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Phd-Exascale-Opti.pdf Type: application/pdf Size: 227773 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: PhD-Opti-SNN.pdf Type: application/pdf Size: 198700 bytes Desc: not available URL:
From georg.martius at tuebingen.mpg.de Wed Feb 7 09:33:49 2024 From: georg.martius at tuebingen.mpg.de (Georg Martius) Date: Wed, 7 Feb 2024 15:33:49 +0100 Subject: Connectionists: CoRL 2024: 6-9th Nov. Deadline 6th of June Message-ID: <955b6916-66ef-44c7-81c8-496933632d07@tuebingen.mpg.de> Dear colleagues, We are happy to announce that the Conference on Robot Learning (CoRL 2024) is happening this year in the beautiful city of **Munich** in Germany from **November 6 to 9, 2024**. Submission deadline: **6th of June** Further details are on: https://www.corl.org We are excited to receive your innovative work on robot learning. Best regards, CoRL Organizing Committee From hocine.cherifi at gmail.com Wed Feb 7 10:14:20 2024 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Wed, 7 Feb 2024 16:14:20 +0100 Subject: Connectionists: CFP COMPLEX NETWORKS 2024 Istanbul Turkey December 10 - 12, 2024 Message-ID: *13th International Conference on Complex Networks & Their Applications* Istanbul, Turkey December 10 - 12, 2024 COMPLEX NETWORKS 2024 You are cordially invited to submit your contribution until *September 03, 2024*. *SPEAKERS (TBU)* • Federico Battiston, Central European University, Austria • Frank Emmert-Streib, Tampere University, Finland *PUBLICATION* Full papers (not previously published, up to 12 pages) and Extended Abstracts (about published or unpublished research, up to 4 pages) are welcome. • *Papers* will be included in the conference *proceedings edited by Springer*
• *Extended abstracts* will be published in the *Book of Abstracts (with ISBN)*. Extended versions will be invited for publication in *special issues of international journals:* o Applied Network Science edited by Springer o Advances in Complex Systems edited by World Scientific o Complex Systems o Entropy edited by MDPI o PLOS Complex Systems o PLOS ONE o Social Network Analysis and Mining edited by Springer *SUBMISSION:* *https://cmt3.research.microsoft.com/COMPLEXNETWORKS2024/ * *TOPICS* *Topics include, but are not limited to: * o Models of Complex Networks o Structural Network Properties and Analysis o Complex Networks and Epidemics o Community Structure in Networks o Community Discovery in Complex Networks o Motif Discovery in Complex Networks o Network Mining o Network embedding methods o Machine learning with graphs o Dynamics and Evolution Patterns of Complex Networks o Link Prediction o Multilayer Networks o Network Controllability o Synchronization in Networks o Visual Representation of Complex Networks o Large-scale Graph Analytics o Social Reputation, Influence, and Trust o Information Spreading in Social Media o Rumour and Viral Marketing in Social Networks o Recommendation Systems and Complex Networks o Financial and Economic Networks o Complex Networks and Mobility o Biological and Technological Networks o Mobile call Networks o Bioinformatics and Earth Sciences Applications o Resilience and Robustness of Complex Networks o Complex Networks for Physical Infrastructures o Complex Networks, Smart Cities and Smart Grids o Political networks o Supply chain networks o Complex networks and information systems o Complex networks and CPS/IoT o Graph signal processing o Cognitive Network Science o Network Medicine o Network Neuroscience o Quantifying success through network analysis o Temporal and spatial networks o Historical Networks Join us at COMPLEX NETWORKS 2024 *-------------------------* Hocine CHERIFI University of Burgundy Franche-Comté Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB) - UMR 6303 CNRS Editor in Chief, PLOS Complex Systems Founding & Advisory Editor, Applied Network Science Editorial Board member, PLOS ONE, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From amira.mouakher at univ-perp.fr Wed Feb 7 10:45:01 2024 From: amira.mouakher at univ-perp.fr (Amira Mouakher) Date: Wed, 7 Feb 2024 16:45:01 +0100 Subject: Connectionists: CFP FRCCS 2024 Montpellier, France May 29 - 31, 2024 Message-ID: <7BED729D-73CF-4299-86F3-FC2A8776E2AE@univ-perp.fr> Fourth French Regional Conference on Complex Systems May 29 - 31, 2024 Montpellier, France FRCCS 2024 After successfully hosting the conference in Dijon in 2021, Paris in 2022, and Le Havre in 2023, the fourth edition of the French Regional Conference on Complex Systems (FRCCS 2024) will be held in Montpellier, France. This single-track international conference organized by CSS France serves as a platform to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science.
FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange and promote ideas, facilitating the cross-fertilization of recent research work, industrial advancements, and original applications. Moreover, the conference emphasizes research topics with a high societal impact, showcasing the significance of complexity science in addressing complex societal challenges. Join us in Montpellier as we collectively strive to better understand complexity and its implications for society. You are cordially invited to submit your contribution until February 21, 2024. Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? Papers about original research (up to 12 pages) ? Extended Abstract about published or unpublished research (3 to 4 pages). Keynote Speakers ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany Publication o Contributions will be included in the conference proceedings (With ISBN) o Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) of the journals: o Applied Network Science, edited by Springer o Complexity, edited by Hindawi Submission ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? Select the Track: FRCCS2024 Topics include, but are not limited to: ? Foundations of complex systems Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals Complex Networks Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. 
Neuroscience, Linguistics Evolution of language, social consensus, artificial intelligence, cognitive processes & education, Narrative complexity Economics & Finance Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment Infrastructure, planning, and environment critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences Biological and (bio)medical complexity biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging Social complexity o social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation Socio-Ecological Systems Global environmental change, green growth, sustainability & resilience, and culture Organisms and populations Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution Engineering systems and systems of systems bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology Complexity in physics and chemistry quantum computing, quantum synchronization, quantum chaos, random matrix theory GENERAL CHAIRS Roberto Interdonato CIRAD, UMR TETIS, Montpellier Bruno Pinaud LABRI University of Bordeaux -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreea.sburlea at yahoo.com Wed Feb 7 11:32:46 2024 From: andreea.sburlea at yahoo.com (Andreea Sburlea) Date: Wed, 7 Feb 2024 16:32:46 +0000 (UTC) Subject: Connectionists: 7th Spring School on Cognitive Modeling in Groningen, The Netherlands References: <482906880.5287875.1707323566330.ref@mail.yahoo.com> Message-ID: <482906880.5287875.1707323566330@mail.yahoo.com> --------------------Begin of Call Reminder-------------------- ACT-R, Nengo, PRIMs Date: 8-12 April 2024 Location: Groningen, the Netherlands Fee: € 305 (late fee after March 7 will be € 355) More information and registration: http://www.ai.rug.nl/springschool/ Dear colleagues and students, This is a reminder that the early registration deadline for the seventh Spring School on Cognitive Modeling in Groningen (8-12 April 2024) ends on March 7th! As announced earlier, the Spring School will cover three different modeling paradigms: ACT-R, Nengo, and PRIMs. Each of these topics consists of a series of lectures, as well as a number of hands-on exercises (tutorials). Past years have shown that students get the most out of the spring school if they really immerse themselves in one modeling paradigm. We therefore recommend you choose one topic for which you will attend both the lectures as well as the tutorials. In addition, you can select a second paradigm, for which you attend the lectures only.
To give students a broader picture, there will be multiple guest lectures throughout the week. These lectures will give an introduction to other modeling paradigms, as well as practical research examples to show you what can be done with the modeling paradigms presented! Everyone is encouraged to attend those lectures. To round off the program, there will be a poster session, where students present themselves and their research, as well as a city tour, and our (in)famous spring school dinner. Registration closes on March 31. The early registration deadline ends March 7. Please feel free to forward the information to anyone who might be interested in the Spring School, and let us know if you have any questions! We are looking forward to welcoming you (again) in Groningen, The Spring School team springschool at rug.nl ACT-R Teachers: Jelmer Borst & Stephen Jones (University of Groningen) Website: http://act-r.psy.cmu.edu ACT-R is a high-level cognitive theory and simulation system for developing cognitive models for tasks that vary from simple reaction time experiments to driving a car, learning algebra, and air traffic control. ACT-R can be used to develop process models of a task at a symbolic level. Participants will follow a compressed five-day version of the traditional summer school curriculum. We will also cover the connection between ACT-R and fMRI. Nengo Teacher: Terry Stewart (University of Waterloo) Website: http://www.nengo.ca Nengo is a toolkit for converting high-level cognitive theories into low-level spiking neuron implementations. In this way, aspects of model performance such as response accuracy and reaction times emerge as a consequence of neural parameters such as the neurotransmitter time constants. It has been used to model adaptive motor control, visual attention, serial list memory, reinforcement learning, Tower of Hanoi, and fluid intelligence. Participants will learn to construct these kinds of models, starting with generic tasks like representing values and positions, and ending with full production-like systems. There will also be special emphasis on extracting various forms of data out of a model, such that it can be compared to experimental data. PRIMs Teacher: Niels Taatgen (University of Groningen) Website: PRIMs How do people handle and prioritize multiple tasks? How can we learn something in the context of one task, and partially benefit from it in another task? The goal of PRIMs is to cross the artificial boundary that most cognitive architectures have imposed on themselves by studying single tasks. It has mechanisms to model transfer of cognitive skills, and the competition between multiple goals. In the tutorial we will look at how PRIMs can model phenomena of cognitive transfer and cognitive training, and how multiple goals compete for priority in models of distraction. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jorgecbalmeida at gmail.com Wed Feb 7 14:11:56 2024 From: jorgecbalmeida at gmail.com (Jorge Almeida) Date: Wed, 7 Feb 2024 19:11:56 +0000 Subject: Connectionists: 3 Post-Doctoral positions at the Faculty of Psychology, University of Coimbra, Portugal, to work with Alfonso Caramazza and Jorge Almeida Message-ID: The Faculty of Psychology and Educational Sciences of the University of Coimbra, Portugal (FPCE-UC) invites applications from rising and enthusiastic researchers in Cognitive Science and Cognitive Neuroscience for 3 Post-Doctoral positions to work with Alfonso Caramazza and Jorge Almeida. These positions are part of a transformative ERA Chair grant, CogBooster, from the European Union to FPCE-UC, led by Alfonso Caramazza. The goal of CogBooster is to implement a strong and international line of research in Basic Cognitive Science and Cognitive Neuroscience to contribute to the ongoing renewal of the Psychological Sciences in Portugal over the next decade. *Positions:* As part of this expansion and renewal, we are searching for Post-doctoral applicants with expertise related to the following areas: • 1 Post-doctoral position in any of the following areas: lexical processing, visual object recognition, reading, or action recognition. • 1 Post-doctoral position on visual object recognition, in particular on how object knowledge is organized and represented neurally and cognitively, using fMRI (potentially with ultra-high field MRI), and potentially ECoG/sEEG. • 1 Post-doctoral position on object dimensionality and dimensional mapping, using population receptive field analysis/connective field modeling to work on topics related to the following preprint: https://www.biorxiv.org/content/10.1101/2023.11.29.568856v1 The selected applicants will work directly with Alfonso Caramazza and Jorge Almeida. They will be based in Coimbra, but will have the opportunity to spend some time at Harvard University, at Alfonso Caramazza's laboratory. *Qualifications:* Applicants should be recent graduates: the date on their PhD diploma should not be before October 2021 (i.e., no more than 3 years from the signing of the contract). They should also have their diploma recognized in Portugal at the time of the signing of the contract (diploma recognition can be requested here). The start date is negotiable (but should be no later than fall 2024). The successful applicant should have a strong record of research for their career level, and meet, or show promise of meeting, the following qualifications: • Research potential through publications with impact in the field; • Proficiency in English, both written and spoken (speaking Portuguese is not necessary); • Good written and spoken communication skills. *Offer:* The positions do not involve formal teaching. They do involve, however, lab mentoring. The salary is competitive by Portuguese standards: 1800 euro per month net value. According to Numbeo, 1800 euro in Coimbra corresponds to a local purchasing power of about 3230 euro in Paris, 3520 pounds in London (UK), 2685 euro in Brussels, 2385 euro in Rome, 2890 euro in Munich, about 4750 USD in Los Angeles, Boston or Washington, 4830 CAD in Toronto, or 5940 AUD in Sydney. The position will be for a maximum of 3 years, renewable every year. *About FPCE-UC and Coimbra:* The University of Coimbra is a 700-year-old university and a UNESCO World Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to beaches and mountains.
The Faculty of Psychology and Educational Sciences has been consistently ranked as the best, or one of the best, Psychology Departments in Portugal. In the last decade it has become the leading department for Psychological research in Portugal, holding, for instance, the only ERC grants in Psychology in the country. FPCE-UC has a major laboratory for Cognitive Science and Cognitive Neuroscience research, the Proaction Lab. We have local access to two 3T MRI scanners, remote access to one 7T MRI scanner, as well as tDCS, a 256-channel EEG system, and a fully equipped behavioral lab. *Application Instructions:* All candidates should submit the following documents in English: (1) a curriculum vitae; (2) a motivation letter describing their interest in the position and their track record; (3) at least two letters of reference submitted before the application deadline. Full consideration will be given to applications received by March 31, 2024. If you are interested in applying for one of the positions, or know colleagues or former students who may be suitable for any of the positions, please direct them to this advertisement. *Equal Employment Opportunity statement:* The University of Coimbra is an equal opportunity/affirmative action employer and has a Gender Equality Plan in place. We are committed to fostering a diverse and inclusive global academic community at the University. We particularly encourage applications from women, and from other under-represented groups in the University of Coimbra's workforce and in the cognitive and brain sciences. *Further information:* If you want to know more about the positions you can informally contact Alfonso Caramazza (caram at wjh.harvard.edu) and Jorge Almeida (jorgecbalmeida at gmail.com). Please mention the profile(s) you would fit, from the three proposed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadia at di.unimi.it Wed Feb 7 16:11:50 2024 From: gadia at di.unimi.it (gadia at di.unimi.it) Date: Wed, 7 Feb 2024 21:11:50 +0000 Subject: Connectionists: [IEEE CoG 2024] Second Call for Papers Message-ID: IEEE CoG 2024: The 2024 IEEE Conference on Games August 5-8, 2024 Milan, Italy https://2024.ieee-cog.org/ Call for Papers Submission of *full* papers: March 1, 2024 Submission of *auxiliary* papers: April 28, 2024 The annual IEEE Conference on Games (IEEE CoG) aims to be a leading venue for researchers and practitioners to exchange ideas and novel approaches to bring innovation in and through games. Games are a great domain to study and develop novel ideas in design, artificial intelligence, human-computer interaction, psychology, education, sociology, and creativity, as well as their applications in real-world problems. IEEE CoG 2024 will take place at Politecnico di Milano, which is located in Milan, Italy. We invite contributions related to all game aspects, including, but not limited to: - Accessibility and inclusivity in games - Game design (frameworks, game mechanics, etc.) - Game technology (game engines, graphics, audio, animation, physics, networking, controls, etc.) - Player-Computer Interfaces - Generative AI for games - Serious/applied games (for health, sustainability, etc.)
- Extended reality (virtual reality, mixed reality, and augmented reality) - Esports and live-streaming - Games for education, learning and development - Game editors and authoring tools - Game User Research (player/opponent modeling, player experience studies, psychology of play and player types, etc.) - Game-playing AI - Games as test-beds for algorithms - Game analytics, game data science, and data visualization - AI for game development (automated playtesting, adaptation/personalization in games, etc.) Please note: CoG is not a conference about game theory and does not accept papers about game theory unless they are applied to games people would play, such as video games or board games. *** Full papers *** We invite the submission of full *technical* and *vision* papers with an 8 page limit (including references and appendices). Full *technical* papers should provide a technical or empirical contribution to scientific, technical, or engineering aspects of games. Full *vision* papers should describe a vision for the future of the Games field or some part of it, be based on extensive research, and include a comprehensive bibliography. Please notice that the standards for vision papers are high: literature reviews and opinion papers with speculations not grounded in research are immediately rejected. *** Auxiliary papers *** We invite the submission of *short* and *demo* papers. *Short* papers (2-4 pages) describe work in progress, smaller projects that are not yet ready to be published as a full paper, or new progress on projects that have been reported elsewhere. *Demo* papers (2 pages) describe works to be presented during a demo session. All page limits include references and appendices. *** Important Dates *** Full papers submission: March 1, 2024 Full papers acceptance notification: April 15, 2024 Auxiliary papers submission: April 28, 2024 Auxiliary papers acceptance notification: May 28, 2024 Papers camera ready: June 7, 2024 Early bird and (author) registration: June 7, 2024 NONE OF THE SUBMISSION DEADLINES WILL BE EXTENDED. All deadlines are Anytime on Earth (AoE). *** Submission Instructions *** Papers must be submitted through the conference submission system available at the following link: https://cmt3.research.microsoft.com/CoG2024 All paper submissions should follow the recommended IEEE conference author guidelines. MS Word and LaTeX templates can be found at: https://www.ieee.org/conferences_events/conferences/publishing/templates.html All submitted papers will be fully peer-reviewed, and accepted papers will be published in the conference proceedings and on IEEE Xplore. CoG will use a *double-blind review process*. Authors must omit their names and affiliations from their submissions, avoiding obvious identifying statements. Submissions not abiding by anonymity requirements will be desk rejected. Short papers might be allocated to either poster presentations or short oral presentations if time and space permit. Demo papers will be allocated to a demo session. We look forward to having you join us at CoG 2024 in Milan. Best regards, The Organizing Committee of IEEE CoG 2024 --------------------------------------------------------------- Davide Gadia, PhD Associate Professor Dipartimento di Informatica "Giovanni Degli Antoni" Universit? degli Studi di Milano Via Celoria 18 20133 Milano Room 3005 - Floor 3 Tel. 
+39 02 5031 6301 Homepage: https://pong.di.unimi.it/davide/ PONG laboratory: https://pong.di.unimi.it/ --------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien.destercke at hds.utc.fr Wed Feb 7 16:00:50 2024 From: sebastien.destercke at hds.utc.fr (sdesterc) Date: Wed, 7 Feb 2024 22:00:50 +0100 Subject: Connectionists: SIPTA virtual seminar 14th February 3pm Paris time: "Mixing Time and Uncertainty. A Tale of Superpositions", Rafael Penaloza Message-ID: Dear colleagues, We are delighted to announce our upcoming SIPTA online seminar on imprecise probabilities (IP). These monthly events are open to anyone interested in IP, and will be followed by a Q&A and open discussion. They also provide an occasion for the community to meet, keep in touch and exchange between in-person events. The next seminar will take place on the 14th of February (Wednesday). The Zoom link for this seminar is https://utc-fr.zoom.us/j/85488837637 For this new seminar, we are very happy to have Rafael Peñaloza Nyssen as our speaker. Rafael Peñaloza Nyssen is currently Associate Professor at the IKR3 Lab of the University of Milano-Bicocca. His main interests and contributions are in knowledge representation and logical formalisms to handle various problems. Lately, he has taken a keen interest in reasoning and uncertainty, notably by using imprecisely specified probabilistic information in temporal reasoning. On the 14th of February, at 15:00 CEST, Paris time (up to 17:00 CEST, with a talk duration of 45 min/1 h), he will talk about "Mixing Time and Uncertainty. A Tale of Superpositions". Curious? Then check out the abstract on the webpage of the SIPTA seminars: sipta.org/events/sipta-seminars. The Zoom link for attending the seminar can be found on that same page. So please mark your calendars on the 14th of February, and join us for the occasion. And for those who missed the previous seminar and want to catch up, or simply want to see it again and again, it is online at https://www.youtube.com/watch?v=EI8JSL80RuE See you at the seminar! Sébastien, Enrique and Jasper From cgf at isep.ipp.pt Wed Feb 7 13:17:44 2024 From: cgf at isep.ipp.pt (Carlos) Date: Wed, 7 Feb 2024 18:17:44 +0000 Subject: Connectionists: Information Fusion SI: Explainable AI in Industry 4.0 and 5.0 Message-ID: <822557b9-5237-473f-828e-2fd3b24cf448@isep.ipp.pt> ===================================== Explainable AI in Industry 4.0 and 5.0 This Special Issue aims to gather papers that focus on integrating and applying eXplainable AI (XAI) in the context of Industry 4.0 and 5.0. The themes that will be explored include the use of XAI in modern industrial settings and the collaboration between humans and machines in Industry 5.0, with a particular focus on trustworthiness, transparency, and accountability. As AI becomes more prevalent in industries, it is crucial to understand its nuances, particularly in terms of explainability. While there have been publications on the revolutionary nature of Industry 4.0 and 5.0, there has been little exploration of the critical role of XAI in these sectors. Previous special issues have covered AI and industrial revolutions, but a concerted effort to comprehend the importance of XAI in these settings is lacking. This Special Issue is intended for both academic researchers and industrial practitioners, with the potential to promote innovative collaborations.
Topics of interest include (but are not limited to): - Data and information fusion in the industrial XAI context - Explainable systems fusing various sources of industrial information - Exploring XAI in the performance and efficiency of industrial systems - XAI for predictive maintenance - Forecasting of product and process quality - Explainable anomaly detection - Root Cause Analysis, Causal Reasoning - Automatic process optimization - Industrial process monitoring and modelling - Visual analytics and interactive machine learning - Remaining Useful Life - Decision-making assistance and resource optimization - Planning under uncertainty - Digital Twins for Predictive Maintenance - Analysis of usage patterns - AI transparency and accountability in smart factories - Ethical considerations in industrial deployment of AI - Industrial use cases for XAI (e.g., manufacturing, energy, transport) - Challenges and future directions for XAI in industry Guest editors: Rita P. Ribeiro, PhD, University of Porto and INESC TEC, Porto, Portugal João Gama, PhD, University of Porto and INESC TEC, Porto, Portugal Slawomir Nowaczyk, PhD, Halmstad University, Halmstad, Sweden Sepideh Pashami, PhD, Halmstad University, Halmstad, Sweden Manuscript submission information: The journal's submission platform (Editorial Manager) will be available for receiving submissions to this Special Issue from January 29th, 2024. Please refer to the Guide for Authors to prepare your manuscript and select the article type of "VSI: XAI in Industry 4.0 and 5.0" when submitting your manuscript online. Both the Guide for Authors and the submission portal can be found on the Journal Homepage: Information Fusion | Journal | ScienceDirect.com by Elsevier. Timeline: Submission Open Date: 29/01/2024; Final Manuscript Submission Deadline: 29/07/2024; Editorial Acceptance Deadline: 29/11/2024 Keywords: Predictive Maintenance; Industry 4.0; Explainable AI; Machine Learning Carlos Ferreira ISEP | Instituto Superior de Engenharia do Porto Rua Dr. António Bernardino de Almeida, 431 4249-015 Porto - PORTUGAL tel. +351 228 340 500 | fax +351 228 321 159 mail at isep.ipp.pt | www.isep.ipp.pt From amir.kalfat at gmail.com Thu Feb 8 08:14:20 2024 From: amir.kalfat at gmail.com (Amir Aly) Date: Thu, 8 Feb 2024 13:14:20 +0000 Subject: Connectionists: Available Ph.D. Position in Health Care and Machine Learning Message-ID: Dear All, *Apologies for cross-posting* We are pleased to announce an opportunity for a funded PhD studentship (*BioMarkML: Machine Learning-Driven Bio-Sensing for Early Diagnosis of Alzheimer's Disease*) at Plymouth University, UK. The closing date for applications is *12 noon on 15th March 2024* *(The difference in tuition fees between national and international students will be waived for excellent international candidates)*. More details *about the topic* and *how to apply* are available at: https://www.jobs.ac.uk/job/DFL216/phd-studentship-biomarkml-machine-learning-driven-bio-sensing-for-early-diagnosis-of-alzheimers-disease If you have any questions, please don't hesitate to contact me, Regards ---------------- *Dr. Amir Aly* Lecturer in Artificial Intelligence and Robotics / AI Consultant Programme Manager of Artificial Intelligence Center for Robotics and Neural Systems (CRNS) School of Engineering, Computing, and Mathematics Room A307 Portland Square, Drake Circus, PL4 8AA University of Plymouth, UK ------------------------------ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From timofte.radu at gmail.com Thu Feb 8 07:46:38 2024 From: timofte.radu at gmail.com (Radu Timofte) Date: Thu, 8 Feb 2024 13:46:38 +0100 Subject: Connectionists: [CFP] CVPR 2024 New Trends in Image Restoration and Enhancement (NTIRE) workshop and challenges Message-ID: Apologies for multiple postings *********************************** CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 16 CHALLENGES NTIRE: 9th New Trends in Image Restoration and Enhancement workshop and challenges. In conjunction with CVPR 2024, June 17, Seattle, US. Website: https://cvlai.net/ntire/2024/ Contact: radu.timofte at uni-wuerzburg.de TOPICS ? Image/video inpainting ? Image/video deblurring ? Image/video denoising ? Image/video upsampling and super-resolution ? Image/video filtering ? Image/video de-hazing, de-raining, de-snowing, etc. ? Demosaicing ? Image/video compression ? Removal of artifacts, shadows, glare and reflections, etc. ? Image/video enhancement: brightening, color adjustment, sharpening, etc. ? Style transfer ? Hyperspectral image restoration, enhancement, manipulation ? Underwater image restoration, enhancement, manipulation ? Light field image restoration, enhancement, manipulation ? Methods robust to changing weather conditions / adverse outdoor conditions ? Image/video restoration, enhancement, manipulation on constrained settings/mobile devices ? Visual domain translation ? Multimodal translation ? Perceptual enhancement ? Perceptual manipulation ? Depth estimation ? Image/video generation and hallucination ? Image/video quality assessment ? Image/video semantic segmentation ? Saliency and gaze estimation ? Studies and applications of the above. SUBMISSION A paper submission has to be in English, in pdf format, and at most 8 pages (excluding references) in CVPR style. https://cvpr.thecvf.com/Conferences/2024/AuthorGuidelines The review process is double blind. Accepted and presented papers will be published after the conference in the 2024 CVPR Workshops Proceedings. Author Kit: https://github.com/cvpr-org/author-kit/archive/refs/tags/CVPR2024-v2.zip Submission site: https://cmt3.research.microsoft.com/NTIRE2024 WORKSHOP DATES ? *Regular Papers submission deadline: March 10, 2024* ? Challenge Papers submission deadline: April 1, 2024 ? *Papers reviewed elsewhere submission deadline: April 1, 2024* CHALLENGES - Dense and Non-Homogeneous Dehazing - Night Photography Rendering - Blind Compressed Image Enhancement - Shadow Removal - Track 1 Fidelity - Shadow Removal - Track 2 Perceptual - Efficient Super Resolution - Image Super Resolution (x4) - Light Field Image Super-Resolution - Track 1 Fidelity - Light Field Image Super-Resolution - Track 2 Efficiency - Stereo Image Super-Resolution - Track 1 Bicubic - Stereo Image Super-Resolution - Track 2 Realistic - HR Depth from Images of Specular and Transparent Surfaces - Track 1 Stereo - HR Depth from Images of Specular and Transparent Surfaces - Track 2 Mono - Bracketing Image Restoration and Enhancement - Track 1 - Bracketing Image Restoration and Enhancement - Track 2 - Portrait Quality Assessment - Quality Assessment for AI-Generated Content - Track 1 Image - Quality Assessment for AI-Generated Content - Track 2 Video - Restore Any Image Model (RAIM) in the Wild - RAW Image Super-Resolution - Short-form UGC Video Quality Assessment - Low Light Enhancement To learn more about the challenges, to participate in the challenges, and to access the data everybody is invited to check the NTIRE 2024 page: https://cvlai.net/ntire/2024/ CHALLENGES DATES ? 
*Release of train data: February 1, 2024* - Competitions end: March 14, 2024 SPEAKERS (TBA) SPONSORS (TBA) Website: https://cvlai.net/ntire/2024/ Contact: radu.timofte at uni-wuerzburg.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From efipatm at gmail.com Thu Feb 8 10:38:38 2024 From: efipatm at gmail.com (Efi) Date: Thu, 8 Feb 2024 17:38:38 +0200 Subject: Connectionists: Live free 'AIDA AI Excellence e-Lecture' by Prof. Heng Ji: "SmartBook: an AI Prophetess for Disaster Reporting and Forecasting", 16th February 2024, 15:00 CET Message-ID: Dear AI scientist/engineer/student/enthusiast, Professor *Heng Ji* (University of Illinois) will deliver the e-lecture: "*SmartBook: an AI Prophetess for Disaster Reporting and Forecasting*" on *16th February 2024, 15:00 CET*. See details in: https://www.i-aida.org/events/smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/ *Online participation* Please complete this registration form: http://www.hitz.eus/webinar_izenematea The *International AI Doctoral Academy (AIDA)*, a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, VISION, is very pleased to offer you top-quality scientific lectures in the framework of the AIDA AI Excellence Lecture Series on several current hot AI topics. Lectures will be offered alternately by: - Top highly-cited senior AI scientists internationally or - Young AI scientists with promise of excellence (AI sprint lectures) These lectures are disseminated through multiple channels and email lists (we apologize if you received this through various channels). If you want to stay informed on future lectures, you can register in the AIDA email list and the CVML email list. Best regards, Profs. N. Sebe, M. Chetouani, P. Flach, B. O'Sullivan, I. Pitas, J. Stefanowski AIDA AI Excellence Lecture Series committee members -------------- next part -------------- An HTML attachment was scrubbed... URL: From wanling.cai at tcd.ie Thu Feb 8 11:59:33 2024 From: wanling.cai at tcd.ie (Wanling Cai) Date: Thu, 8 Feb 2024 16:59:33 +0000 Subject: Connectionists: [ACM IUI 2024] Call For Early Registration (Deadline: Feb 12) Message-ID: * We apologize if you receive multiple copies of this email * * Online version: https://iui.acm.org/2024/registration.html ACM IUI 2024 - Call For Early Registration 29th International ACM Conference on Intelligent User Interfaces (ACM IUI 2024) Greenville, South Carolina, USA, March 18-21, 2024 https://iui.acm.org/2024/ https://iui.acm.org/2024/index.html ACM Conference on Intelligent User Interfaces (ACM IUI) 2024 is the 29th annual premier venue where researchers and practitioners will meet and discuss state-of-the-art advances at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI). Ideal IUI submissions should address practical HCI challenges using machine intelligence and discuss both computational and human-centric aspects of such methodologies, techniques, and systems. The theme for this year will be Resilience, covering a wide variety of topics, such as COVID-19 recovery, organizational cyber resilience, economic growth and stability, climate change recovery, intelligent user interface resilience, and similar. While we encourage submissions related to this theme, the scope of the conference is not limited to the theme only. Thanks for your interest in IUI 2024! Please read the information link below carefully regarding costs and deadlines.
* Registration information: https://iui.acm.org/2024/registration.html - Early Registration until Feb 12, 2024 - Standard Registration until Mar 7, 2024 - Onsite Registration after Mar 7, 2024 * Registration Link: https://web.cvent.com/event/9865c85a-e052-4e7e-9947-cf19114283eb/summary Registration for the IUI 2024 full conference includes access to all sessions and workshops. Payment and registration are being handled through a third-party provider. If you have questions, please email the registration chair at registration2024 at iui.acm.org. -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.dovrolis at cyi.ac.cy Thu Feb 8 14:53:19 2024 From: c.dovrolis at cyi.ac.cy (Constantine Dovrolis) Date: Thu, 8 Feb 2024 19:53:19 +0000 Subject: Connectionists: Postdoc position in Cyprus -- ML for medical imaging Message-ID: If you are interested in the development of new ML methods for medical imaging - and in particular for cancer detection - we have an available postdoc position on the beautiful island of Cyprus: * Full-time postdoc position * At the Cyprus Institute (https://www.cyi.ac.cy/). * Duration: two years (can be extended) * In collaboration with the German Oncology Center (GOC) in Cyprus * Due date: March 8, 2024 * More details: https://onlinerecruitment.exelsyslive.com/?c=6E7274A2-8EBA-4BEA-905B-06F790EEB566&v=2024/0223 Best regards, Constantine Dovrolis - short bio: Professor and Director of CaSToRC - The Cyprus Institute - www.cyi.ac.cy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Edgar.Galvan at mu.ie Thu Feb 8 15:53:13 2024 From: Edgar.Galvan at mu.ie (=?iso-8859-1?Q?Edgar_Galv=E1n?=) Date: Thu, 8 Feb 2024 20:53:13 +0000 Subject: Connectionists: PhD Opportunity - Maynooth University Ireland Message-ID: Dear all, If you're passionate about pursuing a PhD in Computer Science, particularly in areas like evolutionary algorithms, game development, and deep neural networks, then the project titled "Intelligent Adaptation of Serious Games for Enhanced Teaching: Leveraging Artificial Intelligence" could be an exciting opportunity for you. More information on the application process, project details, and further insights can be found at: https://maynoothuniversity.ie/graduate-research-academy/john-pat-hume-doctoral-awards/research-topics Feel free to reach out to me via email (edgar.galvan at mu.ie) if you'd like to explore this opportunity further or if you have any questions. Thank you, Edgar Galvan Head of the Naturally Inspired Computation Research Group, Department of Computer Science, Maynooth University, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.otte at uni-luebeck.de Fri Feb 9 03:06:05 2024 From: sebastian.otte at uni-luebeck.de (Sebastian Otte) Date: Fri, 9 Feb 2024 09:06:05 +0100 Subject: Connectionists: ICANN 2024 - First Call for Papers Message-ID: <7d851548-ad24-41d3-9957-fafce485ef67@uni-luebeck.de> ICANN 2024 - FIRST CALL FOR PAPERS ======================================================================= The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). In 2024, the ICANN will take place in Lugano, Switzerland.
Conference Dates: September 17 to September 20, 2024 Conference Venue: Campus of the University of Southern Switzerland and the University of Applied Sciences and Arts of Southern Switzerland, Via la Santa 1, 6962 Lugano-Viganello, Switzerland. Conference Webpage: https://icann2024.org The ICANN 2024 features three conference tracks, namely AI and Machine Learning, Bio-Inspired Computing, and the Applications Track, making it a unique forum to bring together researchers across a wide range of disciplines related to Artificial Intelligence and Neural Networks. ======================================================================= We invite prospective authors to submit original and innovative papers in the following areas: AI and Machine Learning: Theory and Foundations of Neural Networks, Novel Neural Network Architectures, Topics in Neural Networks, Large Language Models, Explainable AI, Graph Methods, Graph Neural Networks, and Clustering, Reinforcement Learning, Topics in Artificial Intelligence and Machine Learning Bio-Inspired Computing: Brain-Inspired Computing, Cognitive and Theoretical/Computational Neuroscience Applications: Human-Computer Interaction, Biosignal Processing, Medicine and Physiology, Speech and Language Processing, Computer Vision, Robotics, Climate and Environment, General Applications of Machine Learning Workshops: - AI in Drug Discovery - Deep Learning for Neuro-heuristic Brain Analysis - Reservoir Computing Special Sessions: - Spiking Neural Networks and Neuromorphic Computing - Accuracy, Stability, and Robustness in Deep Neural Networks - Antifragile dynamical systems: beyond robustness, resilience, and adaptiveness - Neurorobotics For details and updates, as well as for the final list of workshops and special sessions, consider the current version of the list at: https://e-nns.org/icann2024/conference-programme/conference-topics The ICANN 2024 will be an in-person conference. We expect that at least one co-author of each paper will be physically present in Lugano to give his or her presentation. We will also enforce a no-show rule in order to encourage an optimal quality of scientific discussion in Lugano. Papers must be written in English and must conform to the format defined in the Author's Kit, available on the conference webpage at: https://e-nns.org/icann2024/submission Both short papers (6-11 pages) and full papers (12-15 pages) can be submitted. Papers shorter than 6 pages will not be accepted to the main conference; please check the special submission rules for the conference workshops on the webpage of the respective workshop if you plan to submit there. Papers will undergo rigorous peer review; the Program Committee will make the final decision on which papers will be included in the conference program. For papers with conflicting reviews, or which are on the borderline between acceptance and rejection, a short rebuttal phase is foreseen. Full papers will usually be assigned to oral sessions (15 minutes presentation + 5 minutes questions); short papers may be assigned to oral or poster sessions, depending on the quantity of submissions; this assignment in no way implies a judgment on paper quality. Note that presentation rules for workshops and special sessions may differ. All accepted papers will be published in the Springer LNCS series.
KEYNOTE SPEAKERS Jürgen Schmidhuber, IDSIA USI-SUPSI, Switzerland, and KAUST AI Initiative, Saudi Arabia Tanja Schultz, University of Bremen, Germany Walter Senn, University of Bern, Switzerland Henry Markram, École polytechnique fédérale de Lausanne, Switzerland CONFERENCE COMMITTEE General Chairs Jürgen Schmidhuber, IDSIA USI-SUPSI, Switzerland, and KAUST AI Initiative, Saudi Arabia Igor Tetko, HMGU Munich, Germany Program Chairs Michael Wand, IDSIA USI-SUPSI, Switzerland Kristína Malinovská, Comenius University, Bratislava, Slovakia Honorary Chair Stefan Wermter, University of Hamburg, Germany Full list of the organization committee: https://e-nns.org/icann2024/organization/organising-committee IMPORTANT DATES - Opening of submission portal: Feb 15 2024 - Deadline for full paper submission: March 15 2024 - Notification of Acceptance or Rebuttal: May 15 2024 - Deadline for rebuttal: May 31 2024 - Final notification of Acceptance or Rejection after Rebuttal: June 10 2024 - Camera-ready paper upload and author registration: June 30 2024 Contact: For questions, please contact program-committee at icann2024.org. Link to this call: https://e-nns.org/icann2024/call-for-papers From tarek.besold at googlemail.com Fri Feb 9 14:43:23 2024 From: tarek.besold at googlemail.com (Tarek R. Besold) Date: Fri, 9 Feb 2024 20:43:23 +0100 Subject: Connectionists: Job opportunity at Sony AI Barcelona: Research Scientist Interpretable ML Message-ID: Dear all. We have another vacancy in our Barcelona office, this time for a Research Scientist - Interpretable ML: https://sonyglobal.wd1.myworkdayjobs.com/en-US/SonyGlobalCareers/job/Barcelona/Research-Scientist---Interpretable-ML_JR-115903-1 If you have questions regarding the role or the overall setup, feel free to reach out to tarek.besold at sony.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Fri Feb 9 12:24:31 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 9 Feb 2024 09:24:31 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Message-ID: Dear Geoff, Please stop taking potshots. Face me. You say I am irrelevant, yet you attack (and misrepresent) me regularly in your talks, including both this morning at The Vector Institute, and your 'just weights' riff in the Fall. On your web page, you have more mentions of me than Yann LeCun. (What greater measure of relevance could there be?) To make it easy for you, I will come to Toronto, on your home turf. (Perhaps Vector Institute and Web Summit could co-host). We will get a neutral moderator, and a huge audience. And we can fight together for the things we are actually both concerned about, around AI safety, and make a huge statement by doing so. The world will learn from the debate - and also from our overlap. It will be win-win for the world. -
Gary From nemanja at temple.edu Sun Feb 11 05:29:21 2024 From: nemanja at temple.edu (Nemanja Djuric) Date: Sun, 11 Feb 2024 10:29:21 +0000 Subject: Connectionists: 2nd CfP: The 6th Workshop on "Precognition: Seeing through the Future" @ CVPR 2024 Message-ID: Call for Workshop Papers The 6th Workshop on "Precognition: Seeing through the Future" in conjunction with The 37th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024) Seattle, June 17th-21st, 2024 https://sites.google.com/view/ieeecvf-cvpr2024-precognition ================= Despite its potential and relevance for real-world applications, visual forecasting or precognition has not been in the focus of new theoretical studies and practical applications as much as detection and recognition problems. Through the organization of this workshop we aim to facilitate further discussion and interest within the research community regarding this nascent topic. The workshop will discuss recent approaches and research trends not only in anticipating human behavior from videos, but also precognition in multiple other visual applications, such as: medical imaging, health-care, human face aging prediction, early event prediction, autonomous driving forecasting, and so on. In addition, this workshop will give an opportunity for the community in both academia and industry to meet and discuss future work and research directions. It will bring together researchers from different fields and viewpoints to discuss existing major research problems and identify opportunities in further research directions in both research topics and industrial applications. This is the sixth Precognition workshop organized at CVPR. It follows very successful workshops organized since 2019, which featured talks from researchers across a number of industries, insightful presentations, and large attendance. For full programs, slides, posters, and other resources, please visit the websites of earlier Precognition workshops, linked at the workshop website. ================= Topics: The workshop focuses on several important aspects of visual forecasting. The topics of interest for this workshop include, but are not limited to: - Early event prediction - Activity and trajectory forecasting - Multi-agent forecasting - Human behavior and pose prediction - Human face aging prediction - Predicting frames and features in videos and other sensors in autonomous driving - Traffic congestion anomaly prediction - Automated Covid-19 prediction in medical imaging - Visual DeepFake prediction - Short- and long-term prediction and diagnoses in medical imaging - Prediction of agricultural parameters from satellite imagery - Databases, evaluation and benchmarking in precognition ================= Submission Instructions: All submitted work will be assessed based on their novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. For each accepted submission, at least one author must attend the workshop and present the paper. There are two ways to contribute submissions to the workshop: - Extended abstracts submissions are single-blind peer-reviewed, and author names and affiliations should be listed. Extended abstract submissions are limited to a total of four pages. Extended abstracts of already published works can also be submitted. Accepted abstracts will be presented at the poster session, and will not be included in the printed proceedings of the workshop. - Full paper submissions are double-blind peer-reviewed. 
The submissions are limited to eight pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed (additional information about formatting and style files is available at the website). Accepted papers will be presented at the poster session, with selected papers also being presented in an oral session. All accepted papers will be published by CVPR in the workshop proceedings. Submission website: https://cmt3.research.microsoft.com/PRECOGNITION2024 ================= Important Deadlines: Submission: March 24th, 2024 Decisions: April 8th, 2024 Camera-ready: April 13th, 2024 Workshop: June 18th, 2024 ================= Program Committee Chairs: - Dr. Khoa Luu (University of Arkansas) - Dr. Nemanja Djuric (Aurora Innovation) - Dr. Kris Kitani (Carnegie Mellon University) - Dr. Utsav Prabhu (Google) - Dr. Hien Van Nguyen (University of Houston) - Dr. Junwei Liang (HKUST, Guangzhou) For further questions, please contact a member of the organizing committee at precognition.organizers at gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoffrey.hinton at gmail.com Sat Feb 10 10:33:28 2024 From: geoffrey.hinton at gmail.com (Geoffrey Hinton) Date: Sat, 10 Feb 2024 10:33:28 -0500 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: I am sorry that I took a potshot at you. I will do my best not to do it again. Your aim in life seems to be to get attention and I regret facilitating this. Geoff On Sat, Feb 10, 2024 at 8:14 AM Gary Marcus wrote: > Dear Geoff, > > Please stop taking potshots. Face me. > > You say I am irrelevant, yet you attack (and misrepresent) me regularly in > your talks, including both this morning at The Vector Institute, and your > 'just weights' riff in the Fall. On your web page, you have more mentions > of me than Yann LeCun. (What greater measure of relevance could there be?) > > To make it easy for you, I will come to Toronto, on your home turf. > (Perhaps Vector Institute and Web Summit could co-host). > > We will get a neutral moderator, and a huge audience. And we can fight > together for the things we are actually both concerned about, around AI > safety, and make a huge statement by doing so. > > The world will learn from the debate - and also from our overlap. It will > be win-win for the world. > > - Gary > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aadharna at gmail.com Sat Feb 10 18:10:37 2024 From: aadharna at gmail.com (Aaron Dharna) Date: Sat, 10 Feb 2024 15:10:37 -0800 Subject: Connectionists: Call for Anecdotes -- AI Finds A Way Message-ID: Dear colleagues, tl;dr Please submit (to aifindsaway at gmail.com) any stories you know of where AI acted in a way that surprised its creators, especially if it could be seen as unsafe (e.g. hacking a reward function, finding a loophole in an environment or experimental design, goal misgeneralization, etc.). Ever encountered an AI that cleverly outmaneuvered your experimental design, or revealed unexpected flaws in your reward functions? We (Aaron Dharna, Joel Lehman, Victoria Krakovna, and Jeff Clune) are writing a paper about how AI Finds A Way to surprise us.
We're gathering such stories to expand our previous work The Surprising Creativity of Digital Evolution: https://arxiv.org/abs/1803.03453 to the deep learning setting, highlighting the importance of AI safety and the unpredictable nature of our work. We aim to record the true accounts of as many anecdotes as possible regarding AI (of any type, including RL, ML, etc.) surprising its creators and users. Therefore, your experiences are crucial for this endeavor. We hope you can help create a definitive account of these fascinating and sometimes ominous anecdotes so we can inform AI safety discussions, either by submitting and/or spreading the word of this Call for Anecdotes. Please send your anecdotes to aifindsaway at gmail.com by March 1st, 2024. Please feel free to share the following call far and wide: https://docs.google.com/document/d/1BhRWzkIYRUDjU5zon-ILXINPL4VqZp2JZXNsTjekBPk/edit?usp=sharing Let's illuminate the path forward together with insights from our collective research adventures. Cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From francisco.pereira at gmail.com Sun Feb 11 18:45:20 2024 From: francisco.pereira at gmail.com (Francisco Pereira) Date: Sun, 11 Feb 2024 18:45:20 -0500 Subject: Connectionists: position: data scientist / data analyst researcher at NIMH Message-ID: (reposting for a colleague, please follow contact instructions at the end if interested; apologies for repeated postings!) ### HIRING: Data Analyst / Data Scientist Position at the CDN Lab The Unit on Computational Decision Neuroscience (CDN, https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/edb/cdn ) at the National Institute of Mental Health is seeking a full-time Data Scientist/Data Analyst. The lab is focused on understanding the neural and computational bases of adaptive and maladaptive decision-making and their relationship to mental health. Current studies investigate how internal states lead to biases in decision-making and how this is exacerbated in mental health disorders. Our approach involves a combination of computational model-based tasks, questionnaires, biosensor data, fMRI, and intracranial recordings. The main models of interest come from neuroeconomics, reinforcement learning, Bayesian inference, signal detection, and information theory. The main tasks for this position include but are not restricted to computational modeling of behavioral data from decision-making and other cognitive tasks, statistical analysis of task-based, clinical, physiological and neuroimaging data, as well as data visualization for the purposes of scientific presentations, public communication, and academic manuscripts. The candidate is expected to demonstrate experience with best practices for the development of well-documented, reproducible programming pipelines for data analysis, that facilitate sharing and collaboration, and live up to our open-science philosophy, as well as to our data management and sharing commitments at NIH. No prior experience with psychiatry research is needed but a familiarity with the constructs and models of interest in the lab (value-based learning and decision-making, metacognition, belief updating, emotion regulation, and/or effort-cost estimation) is desirable, as these are universally important for understanding adaptive healthy functioning and psychiatric disease. 
This is an exciting opportunity for a candidate with established programming and analytic skills to work at the cutting edge of psychiatry research and computational cognitive neuroscience. You can find more about our work and recent publications at https://www.nimh.nih.gov/research/research-conducted-at-nimh/research-areas/clinics-and-labs/edb/cdn ## Job Requirements: - Assisting with setting up and managing analysis pipelines - Conducting advanced analysis of behavioral, physiological, and imaging data, including but not limited to computational modeling and machine learning - Integrating complex datasets across multiple modalities, including fMRI, electrophysiology, biosensor data, neuroendocrinology, behavior, and self-report - Assisting in data visualization for manuscripts and presentation of results at scientific meetings - Supporting/co-mentoring junior members of the lab on data analysis practices ## Qualifications: - A PhD in neuroscience, cognitive science, psychology, computer science, data science, statistics, engineering, or a related field - Strong programming skills (ideally in Python, MATLAB, and/or R) - Experience working within a Linux environment and with cloud computing - An ability to work well in multidisciplinary and highly collaborative teams - An interest in translational research - A track record or potential for scholarly productivity - Effective independent problem-solving and task prioritization ## Experience with any of the following is not required, but preferred: - Computational modeling (mathematical optimization, reinforcement learning, Bayesian inference, neuroeconomics or other types of modeling of human/animal learning and decision-making) - Statistical inference (e.g. generalized linear models, mixed effect models, state space models, survival analysis) - Dynamic analysis of longitudinal or time series data (e.g. neuroimaging, MEG, EEG, or neural recordings) - Advanced statistical methods and machine learning The data analyst will work under the supervision of Dr. Silvia Lopez-Guzman on projects that aim to understand (1) the process of adaptively evaluating options and committing to a choice; (2) how changes in internal and motivational states may abnormally shape decisions in individuals with and without psychopathology; and (3) how cognitive and metacognitive resources support these adaptive or maladaptive decision-making processes. The CDN lab leverages the rich clinical resources and computational expertise across the NIH, and collaborates actively with labs that specialize in addiction, depression, anxiety, and pain. The lab is an active part of a growing community of expert labs on learning and decision-making who work together to improve our understanding from the circuits and behavioral neuroscience level to the human cognitive and clinical levels, making this a unique opportunity for any scientist with an interest in decision science and computational psychiatry. ## How to Apply: To apply, please send your CV and a cover letter to Dr. Silvia Lopez-Guzman (silvia.lopezguzman at nih.gov) with the subject "CDN Lab Data Analyst App". Inquiries about any aspect of the position are very welcome!
From johan.suykens at esat.kuleuven.be Sun Feb 11 10:11:59 2024 From: johan.suykens at esat.kuleuven.be (Johan Suykens) Date: Sun, 11 Feb 2024 16:11:59 +0100 Subject: Connectionists: Deadline extension - DEEPK 2024 - International Workshop on Deep Learning and Kernel Machines - March 7-8, 2024, Leuven Belgium In-Reply-To: <9d370675-abd5-434e-b7e4-ed4a7a029966@esat.kuleuven.be> References: <9d370675-abd5-434e-b7e4-ed4a7a029966@esat.kuleuven.be> Message-ID: <1f36b0d4-7316-4c6e-bbd1-fba7bc18d88c@esat.kuleuven.be> *DEEPK 2024 /International Workshop on Deep Learning and Kernel Machines/* March 7-8, 2024, Leuven, Arenberg Castle, Belgium https://www.esat.kuleuven.be/stadius/E/DEEPK2024 */- Main scope -/* Major progress and impact have been achieved through deep learning architectures, with many exciting applications such as generative models and transformers. At the same time, this progress raises new questions on the fundamental possibilities and limitations of the models, with respect to representations, scalability, learning and generalization aspects. Kernel-based methods have often provided a deeper understanding and solid foundations, complementary to the powerful and flexible deep learning architectures. Recent examples are understanding the generalization of over-parameterized models in the double descent phenomenon and conceiving attention mechanisms in transformers as kernel machines. The aim of DEEPK 2024 is to provide a multi-disciplinary forum where researchers from different communities can meet to find new synergies between deep learning and kernel machines, both at the level of theory and applications. */- Topics - /* Topics include but are not limited to: * Deep learning and generalization * Double descent phenomenon and over-parameterized models * Transformers and asymmetric kernels * Attention mechanisms, kernel singular value decomposition * Learning with asymmetric kernels * Duality and deep learning * Regularization schemes, normalization * Neural tangent kernel * Deep learning and Gaussian processes * Transformers, support vector machines and least squares support vector machines * Autoencoders, neural networks and kernel methods * Kernel methods in GANs, variational autoencoders, diffusion models, Generative Flow Networks * Generative kernel machines * Deep Kernel PCA, deep kernel machines, deep eigenvalues, deep eigenvectors * Restricted Boltzmann machines, Restricted kernel machines, deep learning, energy-based models * Disentanglement and explainability * Tensors, kernels and deep learning * Convolutional kernels * Sparsity, robustness, low-rank representations, compression * Nystrom method, Nystromformer * Efficient training methods * Lagrange duality, Fenchel duality, estimation in Hilbert spaces, reproducing kernel Hilbert spaces, vector-valued reproducing kernel Hilbert spaces, Krein spaces, Banach spaces, RKHS and C*-algebra * Applications */- Invited Speakers -/* * Mikhail Belkin (University of California San Diego) * Volkan Cevher (EPFL) * Florence d'Alche-Buc (Telecom Paris, Institut Polytechnique de Paris) * Julien Mairal (INRIA) * Massimiliano Pontil (IIT and University College London) * Dingxuan Zhou (University of Sydney) */- Call for abstracts -/* The DEEPK 2024 program will include *oral and poster sessions*. Interested participants are cordially invited to submit an *extended abstract (max. 2 pages)* for their contribution. Please prepare your extended abstract submission in LaTeX, according to the provided style file, and submit it in pdf format (max.
2 pages). Further extended abstract information is given at https://www.esat.kuleuven.be/stadius/E/DEEPK2024/call_for_abstracts.php . */- Schedule - /* * *Deadline extended abstract submission:* *Feb 8, 2024 (deadline extended to Feb 15, 2024) * * Notification of acceptance and presentation format (oral/poster): Feb 22, 2024 * Deadline for registration: Feb 29, 2024 * *International Workshop DEEPK 2024:* March 7-8, 2024 */- Organizing committee - /* Johan Suykens (Chair), Alex Lambert, Panos Patrinos, Qinghua Tao, Francesco Tonin */- Other info -/* Please consult the DEEPK 2024 website https://www.esat.kuleuven.be/stadius/E/DEEPK2024 for info on program, registration, location and venue. The event is co-sponsored by ERC Advanced Grant E-DUALITY and KU Leuven. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhansel0 at gmail.com Sun Feb 11 16:12:23 2024 From: dhansel0 at gmail.com (David Hansel) Date: Sun, 11 Feb 2024 22:12:23 +0100 Subject: Connectionists: World wide VVTNS series: Wednesday, February 14 at 11am (ET), Nader Nikbakht, MIT In-Reply-To: References: Message-ID: [image: VVTNS.png] https://www.wwtns.online - on twitter: wwtns at TheoreticalWide You are cordially invited to the lecture given by Nader Nikbakht MIT on the topic of *"*Thalamocortical dynamics in a complex learned behavior*"* The lecture will be held on zoom on *February 14, 2024*, at *11:00 am ET * Register on our website - https://www.wwtns.online -to receive the zoom link *Abstract:* Performing learned behaviors requires animals to produce precisely timed motor sequences. The underlying neuronal circuits must convert incoming spike trains into precisely timed firing to indicate the onset of crucial sensory cues or to carry out well-coordinated muscle movements. Birdsong is a remarkable example of a complex, learned and precisely timed natural behavior which is controlled by a brainstem-thalamocortical feedback loop. Projection neurons within the zebra finch cortical nucleus HVC (used as a proper name), produce precisely timed, highly reliable and ultra-sparse neural sequences that are thought to underlie song dynamics. However, the origin of short timescale dynamics of the song is debated. One model posits that these dynamics reside in HVC and are mediated through a synaptic chain mechanism. Alternatively, the upstream motor thalamic nucleus Uveaformis (Uva), could drive HVC bursts as part of a brainstem-thalamocortical distributed network. Using focal temperature manipulation we found that the song dynamics reside chiefly in HVC. We then characterized the activity of thalamic nucleus Uva, which provides input to HVC. We developed a lightweight (~1 g) microdrive for juxtacellular recordings and with it performed the very first extracellular single unit recordings in Uva during song. Recordings revealed HVC-projecting Uva neurons contain timing information during the song, but compared to HVC neurons, fire densely in time and are much less reliable. Computational models of Uva-driven HVC neurons estimated that a high degree of synaptic convergence is needed from Uva to HVC to overcome the inconsistency of Uva firing patterns. However, axon terminals of single Uva neurons exhibit low convergence within HVC such that each HVC neuron receives input from 2-7 Uva neurons. These results suggest that thalamus maintains sequential cortical activity during song but does not provide unambiguous timing information. 
Our observations are consistent with a model in which the brainstem-thalamocortical feedback loop acts at the syllable timescale (~100 ms) and do not support a model in which the brainstem-thalamocortical feedback loop acts at a fast timescale (~10 ms) to generate sequences within cortex. *About VVTNS: Created as the World Wide Theoretical Neuroscience Seminar (WWTNS) in November 2020 and renamed in homage to Carl van Vreeswijk, in memoriam (April 20, 2022), its aim is to be a platform to exchange ideas among theoreticians. Speakers have the opportunity to talk about theoretical aspects of their work which cannot be discussed in a setting where the majority of the audience consists of experimentalists. The seminars, **held on Wednesdays at 11 am ET,** are 45-50 min long, followed by a discussion. The talks are recorded with the authorization of the speaker and are available to everybody on our YouTube channel.* -- 'Life is good ..' (Carl van Vreeswijk, 1962-2022) --------------------------------------- David Hansel Directeur de Recherche au CNRS Co-Group leader Cerebral Dynamics Plasticity and Learning lab., CNRS 45 rue des Saints Peres 75270 Paris Cedex 06 Tel (Cell): +33 607508403 - Fax (33).1.49.27.90.62 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: VVTNS.png Type: image/png Size: 41084 bytes Desc: not available URL: From gary.marcus at nyu.edu Sat Feb 10 11:15:28 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 10 Feb 2024 08:15:28 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: <92337E16-9572-4525-BFDF-34978CE8A137@nyu.edu> References: <92337E16-9572-4525-BFDF-34978CE8A137@nyu.edu> Message-ID: <57A1B364-AB59-487C-98E8-A23B64E4EA50@nyu.edu> An HTML attachment was scrubbed... URL: From axel.hutt at inria.fr Mon Feb 12 02:44:18 2024 From: axel.hutt at inria.fr (Axel Hutt) Date: Mon, 12 Feb 2024 08:44:18 +0100 (CET) Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: <525334317.55497810.1707723858349.JavaMail.zimbra@inria.fr> Dear Dr. Hinton and Dr. Marcus, I am sure that your personal dispute is important to you, but I am not sure that it contributes to the scientific knowledge which is one of the aims of this email list. I would like to ask you to refrain from sharing this seemingly personal dispute on this email list. Thanks in advance, Axel ----- On 10 Feb, 2024, at 16:33, Geoffrey Hinton wrote: > I am sorry that I took a potshot at you. I will do my best not to do it again. > Your aim in life seems to be to get attention and I regret facilitating this. > Geoff > On Sat, Feb 10, 2024 at 8:14 AM Gary Marcus < [ mailto:gary.marcus at nyu.edu | > gary.marcus at nyu.edu ] > wrote: >> Dear Geoff, >> Please stop taking potshots. Face me. >> You say I am irrelevant, yet you attack (and misrepresent) me regularly in your >> talks, including both this morning at The Vector Institute, and your 'just >> weights' riff in the Fall. On your web page, you have more mentions of me than >> Yann LeCun. (What greater measure of relevance could there be?) >> To make it easy for you, I will come to Toronto, on your home turf. (Perhaps >> Vector Institute and Web Summit could co-host). >> We will get a neutral moderator, and a huge audience.
And we can fight together >> for the things we are actually both concerned about, around AI safety, and make >> a huge statement by doing so. >> The world will learn from the debate - and also from our overlap. It will be >> win-win for the world. >> - Gary -- Axel Hutt Directeur de Recherche Equipe MIMESIS - INRIA Nancy Grand Est Equipe MLMS - iCube Strasbourg Bâtiment NextMed 4, rue Marie Hamm 67000 Strasbourg, France https://mimesis.inria.fr/members/axel-hutt/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From francesca.naretto at sns.it Mon Feb 12 06:29:29 2024 From: francesca.naretto at sns.it (Francesca NARETTO) Date: Mon, 12 Feb 2024 12:29:29 +0100 Subject: Connectionists: DS2024 - Call for papers Message-ID: 27th International Conference on Discovery Science (DS2024) October 14-16, 2024, Pisa, Italy http://ds2024.isti.cnr.it/index.html ***CALL FOR PAPERS: MAIN RESEARCH TRACK*** The Discovery Science 2024 conference provides an open forum for intensive discussions and the exchange of new ideas among researchers working in the area of Discovery Science. The conference focus is on the use of Artificial Intelligence, Data Science and Big Data Analytics methods in science. Its scope includes the development and analysis of methods for discovering scientific knowledge, coming from machine learning, data mining, intelligent data analysis, and big data analytics, as well as their application in various domains. Possible topics include, but are not limited to: - Artificial Intelligence (machine learning, knowledge representation and reasoning, natural language processing, statistical methods, etc.) applied to science - Machine Learning: supervised learning (including ranking, multi-target prediction and structured prediction), unsupervised learning, semi-supervised learning, active learning, reinforcement learning, online learning, transfer learning, etc. - Knowledge Discovery and Data Mining - Anomaly and Outlier Detection - Time-Series Analysis - Spatial, Temporal and Spatiotemporal Data Analysis - Unstructured Data Analysis (textual and web data) - Data and Knowledge Visualization - Complex Network Analysis - Causal Modelling - Explainable AI and Interpretable Machine Learning - Human-Machine Interaction for Knowledge Discovery and Management - Data Streams, Evolving Data, Change Detection & Concept drift - AutoML, Meta-Learning, Planning to Learn - AI and High-performance Computing, Grid and Cloud Computing - AI and Cybersecurity - Computational Creativity - Learning from Complex Data - Process Discovery and Analysis - Evaluation of Models and Predictions in Discovery Setting - Applications of the above techniques in scientific domains, such as Physical sciences (e.g., materials sciences, particle physics), Life sciences (e.g., systems biology/systems medicine), Environmental sciences, Natural and social sciences ***IMPORTANT DATES*** Abstract submission (deadline): Monday June 3, 2024 Full paper submission (deadline): Monday June 10, 2024 Notification of acceptance: Monday July 22, 2024 Camera-ready version, author registration: Monday August 5, 2024 All dates are specified as 23:59:59 SST (Standard Samoa Time / Anywhere on Earth) ***SUBMISSION PROCEDURE*** Contributions, written in English, must be formatted according to the guidelines of the Lecture Notes in Computer Science (LNCS) series by Springer-Verlag, which are available together with templates here: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines .
We strongly recommend using the LNCS template for LaTeX. The page limit for any contribution, including figures, title pages, references, and appendices, is 15 pages in the LNCS format. Submission of the camera-ready version of the paper has to include the authors' consent to publish on the above Springer LNCS website. Submissions will be reviewed following a single-blind procedure. Therefore, authors should include their names and affiliations in the paper. Authors may not submit any paper which is under review elsewhere or which has been accepted for publication in a journal or another conference; neither will they submit their papers elsewhere during the review period of DS 2024. Submission System: https://cmt3.research.microsoft.com/DS2024/ At least one of the authors must be a registered participant at the conference to present the work. ***SPECIAL ISSUE*** The authors of a number of selected papers presented at DS 2024 will be invited to submit extended versions of their papers for possible inclusion in a special issue of the Machine Learning journal (published by Springer) on Discovery Science. Fast-track processing will be used to have them reviewed and published. ***BEST PAPER AWARD*** There will be a Best Student Paper Award in the value of 550 EUR sponsored by Springer, and a Best Paper Award in the value of 550 EUR sponsored by Fondazione Area. ***CALL FOR PAPERS: SPECIAL TRACK SOBIGDATA*** SoBigData RI (www.sobigdata.eu) is a distributed, Pan-European, multi-disciplinary research infrastructure aimed at using social mining and big data to understand the complexity of our contemporary, globally-interconnected society. This year, Discovery Science and SoBigData RI are collaborating on a special track. The topics are the same as the main track but focused on experiments for the SoBigData research spaces: - Demography, Economy and Finance 2.0: traditional complex socio-economic financial systems in conjunction with emerging ones, in particular, blockchain & cryptocurrency markets and their applications such as smart property, Internet of Things, energy trading, and smart contracts. - Disaster Response and Recovery: methods and tools to analyze, monitor, and improve post-disaster reconstruction processes in socio-economic areas, spatial planning, and environmental health, in cooperation with national and international institutions. - Health Studies: experiments addressing health-related issues, including medical, nutrition, environmental health and network medicine. It entails integrating multidisciplinary scientific research including machine learning techniques and AI technologies. - Pervasive Intelligence in Cyber-Physical Systems for Future Society: experiments on AI models for a dynamic and automatic allocation of resources capable of reacting to changes in services, application requirements, sustainability, and user-generated traffic. Specific attention will also be given to approaches based on the emulation of biological brain mechanisms, called Cyber-Physical systems, with the aim of creating a "conscious system" able to self-control a large number of subsystems. - Societal Debates and Misinformation: experiments analyzing both discussions on social media and journalistic production to understand which are the most debated topics.
- Next-Generation Internet: experiments investigating the effect of Next Generation Internet paradigms by developing user- and device-centric services for groups of mobile users based on direct communication between far-edge devices, validating them in the same application areas. - Sustainable Cities for Citizens: experiments about the usage of mobile device data, georeferenced social media data, or other spatio-temporal data to model human behavior and activities. - Submitted papers must refer to a specific research space. Submitted papers must contain all the references to data and methods used in order to be accessible to the SoBigData community and to be published on the platform catalog (only in case of acceptance). Accepted papers will be published in the DS2024 proceedings and disseminated through the research infrastructure channels. ***IMPORTANT DATES & SUBMISSION PROCEDURE*** Abstract submission (deadline): Monday June 3, 2024 Full paper submission (deadline): Monday June 10, 2024 Notification of acceptance: Monday July 22, 2024 Camera-ready version, author registration: Monday August 5, 2024 All dates are specified as 23:59:59 SST (Standard Samoa Time / Anywhere on Earth) ***BEST PAPER AWARD*** There will be a Best SoBigData Paper award in the value of 550 EUR sponsored by SoBigData. ***CALL FOR LATE-BREAKING CONTRIBUTIONS*** The Posters Track complements the research track of the conference and offers an opportunity to present late-breaking results and showcase innovative implementations in an informal and interactive setting. During the Poster session, authors are encouraged to directly connect with conference participants and engage in discussions about their work. We invite submissions relevant to Discovery Science which address, but are not limited to, the topics of the Research Track. We also welcome work that has already been seen in other places but needs to be shared with the Discovery Science community. Authors of research papers not accepted for the Research track can be invited to present their work via a poster in the Poster session to discuss their research and scientific contributions with the community and expand their network with other researchers. ***IMPORTANT DATES*** Submission due: Monday June 22, 2024 Notification of acceptance: Monday July 27, 2024 Camera-ready version (extended abstract), author registration: Monday August 5, 2024 All dates are specified as 23:59:59 SST (Standard Samoa Time / Anywhere on Earth) ***SUBMISSION DETAILS*** Authors should submit an extended abstract of up to two pages (excluding references) for evaluation. All submissions must be written in English, using the PDF file format, following the formatting guidelines of the research track, and submitted electronically via the CMT submission system: https://cmt3.research.microsoft.com/DS2024/ Poster submissions should describe late-breaking research results, on-going research projects, or work-in-progress relevant to Discovery Science. Authors are encouraged to include additional links to relevant resources in the submission. The submissions should not be anonymised. Each submission will be reviewed by 2 members of the Program Committee. All accepted posters will be available on the conference website. Authors are encouraged to link complementary materials (e.g. images, videos, code) in the poster. At least one of the authors must be a registered participant at the conference and attend the Poster Session to present the work.
All papers accepted at the poster session will be published as extended abstracts (up to 4 pages in LNCS format) in CEUR proceedings. ***CALL FOR CONTRIBUTIONS: DOCTORAL CONSORTIUM*** The Doctoral Consortium (DC) provides an opportunity for PhD candidates to present and discuss their doctoral research ideas and progress in a supportive, formative and yet critical environment, and to receive feedback from reviewers, mentors and peers from the field of Discovery Science. The DC will also provide opportunities to network and build collaborations with other members of the Discovery Science community. We welcome submissions across Discovery Science-related research domains which address, but are not limited to, the topics of the Research Track. The event is intended for early as well as middle/late-stage PhD candidates and asks them to formulate and submit a concrete PhD research proposal, preferably supported by some preliminary results. The proposal will be peer-reviewed. If accepted, students must register and physically attend the event to present their PhD research proposal in a Poster session. Details for the submission are found below under "Submission Details". ***IMPORTANT DATES*** Submission due: Monday June 22, 2024 Notification of acceptance: Monday July 27, 2024 Camera-ready version (extended abstract), author registration: Monday August 5, 2024 All dates are specified as 23:59:59 SST (Standard Samoa Time / Anywhere on Earth) ***SUBMISSION DETAILS*** Authors should submit a description of their PhD research proposal of up to four pages (excluding references). All submissions must be written in English, using the PDF file format, following the formatting guidelines of the research track, and submitted electronically via the CMT submission system: https://cmt3.research.microsoft.com/DS2024/ The papers should have a single author (the PhD candidate) and submissions are *not* anonymous. Supervisors, other involved persons, and funding agencies should be acknowledged in an Acknowledgements section. Research proposals (and if accepted, the Posters) should contain the following elements: - Context: The background and motivation for your research, including the related work that frames your research - Research questions/challenges: what are the research questions/challenges that your dissertation addresses? Try to highlight how it differs from the existing literature. - Method/approach and evaluation: how is each of the research questions answered? How are results evaluated? If you are planning to conduct studies or build prototypes, provide a brief description. - Preliminary results (if available): Highlight results and contributions to date and the timeline for projected steps. - Discussion and Future work: What are the intermediary conclusions, and what are the planned next steps? All accepted research proposals will be presented in a Poster session. Posters and PDF documents will be available on the conference website. PhD students are encouraged to link complementary materials (e.g. images, videos, code) in the poster. At least one of the authors must be a registered participant at the conference and attend the Poster Session to present the work. All papers accepted at the poster session will be published as extended abstracts (up to 4 pages in LNCS format) in CEUR proceedings.
***COMMITTEES AND CHAIRS*** GENERAL & PROGRAM CHAIRS - Dino Pedreschi, University of Pisa - Anna Monreale, University of Pisa - Riccardo Guidotti, University of Pisa STEERING COMMITTEE - Michelangelo Ceci, University of Bari SPECIAL SESSION CHAIR - Roberto Trasarti, ISTI-CNR Pisa POSTER SESSION CHAIR - Francesca Naretto DOCTORAL CONSORTIUM CHAIRS - Fosca Giannotti - Roberto Pellungrini SOCIAL MEDIA & PUBLICITY CHAIR - Vittorio Romano LOCAL ORGANIZATION COMMITTEE - Francesco Spinnato, University of Pisa -- Francesca Naretto Ph.D. in Data Science francesca.naretto at sns.it SNS, Pisa | CNR, Pisa -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtkostecki at gmail.com Mon Feb 12 07:15:22 2024 From: mtkostecki at gmail.com (Mateusz Kostecki) Date: Mon, 12 Feb 2024 13:15:22 +0100 Subject: Connectionists: Transatlantic Behavioural Neuroscience Summer School, Sept 9-20th 2024, Deadline: Feb 15th Message-ID: Hello! We would like to remind you that the deadline for registration to our *Transatlantic Behavioural Neuroscience Summer School is Feb 15th*! During our school you will learn how to deliver stimuli or modify the environment in which the animal is embedded with *Arduino-based devices and Bonsai code*; you will design and *3D-print cases* and holders for your instruments. And then you will apply cutting-edge tools: *DeepLabCut, keypoint-MoSeq, and CEBRA* - to extract as much information as possible from your data. But that's not everything. At TBNSS you will *listen to fascinating lectures and interact with speakers*, who explored bold ideas and made their unique career paths. You will look at what behavior is through many lenses. Most importantly though, *you will meet people with similar interests and goals*. And you may become a part of our global Open Lab community, and help us to spread the knowledge further! More information and the application form can be found here - https://nenckiopenlab.org/tbnss2024/. *The deadline for application is Feb 15th; the fee is EUR 900, but we have secured funding for fee waivers and travel grants if needed.* With best wishes, TBNSS Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From levine at uta.edu Mon Feb 12 08:37:01 2024 From: levine at uta.edu (Levine, Daniel S) Date: Mon, 12 Feb 2024 13:37:01 +0000 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: <57A1B364-AB59-487C-98E8-A23B64E4EA50@nyu.edu> References: <92337E16-9572-4525-BFDF-34978CE8A137@nyu.edu> <57A1B364-AB59-487C-98E8-A23B64E4EA50@nyu.edu> Message-ID: Raising a point in a discussion always includes getting attention. So there is nothing necessarily wrong with that. Sent via the Samsung Galaxy A12, an AT&T smartphone Get Outlook for Android ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Saturday, February 10, 2024 10:15:28 AM To: Geoffrey Hinton Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate [External] More seriously, it is sad that we can't have an intelligent, moderated discussion. Gary On Feb 10, 2024, at 07:42, Gary Marcus wrote: Oh lookie, another potshot at the end of an apology for taking potshots. That's too perfect! Gary On Feb 10, 2024, at 07:33, Geoffrey Hinton wrote: I am sorry that I took a potshot at you. I will do my best not to do it again.
Your aim in life seems to be to get attention and I regret facilitating this. Geoff On Sat, Feb 10, 2024 at 8:14 AM Gary Marcus > wrote: Dear Geoff, Please stop taking potshots. Face me. You say I am irrelevant, yet you attack (and misrepresent) me regularly in your talks, including both this morning at The Vector Institute, and your 'just weights' riff in the Fall. On your web page, you have more mentions of me than Yann LeCun. (What greater measure of relevance could there be?) To make it easy for you, I will come to Toronto, on your home turf. (Perhaps Vector Institute and Web Summit could co-host). We will get a neutral moderator, and a huge audience. And we can fight together for the things we are actually both concerned about, around AI safety, and make a huge statement by doing so. The world will learn from the debate - and also from our overlap. It will be win-win for the world. - Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From announce at ucy.ac.cy Mon Feb 12 09:45:01 2024 From: announce at ucy.ac.cy (Announce) Date: Mon, 12 Feb 2024 14:45:01 +0000 Subject: Connectionists: 21st International Conference on Software and Systems Reuse (ICSR 2024): Last Call for Papers Message-ID: *** Last Call for Papers *** 21st International Conference on Software and Systems Reuse (ICSR 2024) June 10-12, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus https://cyprusconferences.org/icsr2024/ (*** Submission Deadline: March 4, 2024 AoE (extended) ***) The International Conference on Software and Systems Reuse (ICSR) is a biannual conference in the field of software reuse research and technology. ICSR is a premier event aiming to present the most recent advances and breakthroughs in the area of software reuse and to promote an intensive and continuous exchange among researchers and practitioners. The guiding theme of this edition is Sustainable Software Reuse. We invite submissions on new and innovative research results and industrial experience reports dealing with all aspects of software reuse within the context of the modern software development landscape. Topics include but are not limited to the following. 1 Technical aspects of reuse, including - Reuse in/for Quality Assurance (QA) techniques, testing, verification, etc. - Domain ontologies and Model-Driven Development - Variability management and software product lines - Context-aware and Dynamic Reuse - Reuse in and for Machine Learning - Domain-specific languages (DSLs) - New language abstractions for software reuse - Generative Development - COTS-based development and reuse of open source assets - Retrieval and recommendation of reusable assets - Reuse of non-code artefacts - Architecture-centric reuse approaches - Service-oriented architectures and microservices - Software composition and modularization - Sustainability and software reuse - Economic models of reuse - Benefit and risk analysis, scoping - Legal and managerial aspects of reuse - Reuse adoption and transition to software reuse - Lightweight reuse approaches - Reuse in agile projects - Technical debt and software reuse 2 Software reuse in industry and in emerging domains - Reuse success stories - Reuse failures, and lessons learned - Reuse obstacles and success factors - Return on Investment (ROI) studies - Reuse in hot topic domains (Artificial Intelligence, Internet of Things, Virtualization, Network functions, Quantum Computing, etc.)
We welcome research (16 pages) and industry papers (12 pages) following the Springer Lecture Notes in Computer Science format. Submissions will be handled via EasyChair (https://easychair.org/my/conference?conf=icsr2024). Submissions will be **double-blindly** reviewed, meaning that authors should: ? Omit all authors? names and affiliations from the title page ? Do not include the acknowledgement section, if you have any, in the submitted paper ? Refer to your own work in the third person ? Use anonymous GitHub, Zenondo, FigShare or equivalent to provide access to artefacts without disclosing your identity Both research and industry papers will be reviewed by members of the same program committee (check the website for details). Proceedings will be published by Springer in their Lecture Notes for Computer Science (LNCS) series. An award will be given to the best research and the best industry papers. The authors of selected papers from the conference will be invited to submit an extended version (containing at least 30% new material) to a special issue in the Journal of Systems and Software (Elsevier). More details will follow. IMPORTANT DATES ? Abstract submission: February 26, 2024, AoE (*** extended ***) ? Full paper submission: March 4, 2024, AoE (*** extended ***) ? Notification: April 8, 2024, AoE ? Camera Ready: April 15, 2024, AoE ? Author Registration: April 15, 2024 AoE ORGANISATION Steering Committee ? Eduardo Almeida, Federal University of Bahia, Brazil ? Goetz Botterweck, Lero, University of Limerick, Ireland ? Rafael Capilla, Rey Juan Carlos University, Spain ? John Favaro, Trust-IT, Italy ? William B. Frakes, IEEE TCSE committee on software reuse, USA ? Martin L. Griss, Carnegie Mellon University, USA ? Oliver Hummel, University of Applied Sciences, Germany ? Hafedh Mili, Universit? du Qu?bec ? Montr?al, Canada ? Nan Niu, University of Cincinnati, USA ? George Angelos Papadopoulos, University of Cyprus, Cyprus ? Claudia M.L. Werner, Federal University of Rio de Janeiro, Brazil General Chair ? George A. Papadopoulos, University of Cyprus, Cyprus Program Co-Chairs ? Achilleas Achilleos, Frederick University, Cyprus ? Lidia Fuentes, University of Malaga, Spain -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 12 11:33:48 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 12 Feb 2024 08:33:48 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: <2F5ADF90-F694-4E38-A2B2-11E8DB34D1EB@nyu.edu> An HTML attachment was scrubbed... URL: From wanling.cai at tcd.ie Mon Feb 12 11:35:57 2024 From: wanling.cai at tcd.ie (Wanling Cai) Date: Mon, 12 Feb 2024 16:35:57 +0000 Subject: Connectionists: [ACM IUI 2024] *Last Day* For Early Registration Message-ID: * We apologize if you receive multiple copies of this email * * Online version: https://iui.acm.org/2024/registration.html ACM IUI 2024 - Call For Early Registration 29th International ACM Conference on Intelligent User Interfaces (ACM IUI 2024) Greenville, South Carolina, USA, March 18-21, 2024 https://iui.acm.org/2024/ https://iui.acm.org/2024/index.html ACM Conference on Intelligent User Interfaces (ACM IUI) 2024 is the 29th annual premiere venue, where researchers and practitioners will meet and discuss state-of-the-art advances at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI). 
Ideal IUI submissions should address practical HCI challenges using machine intelligence and discuss both computational and human-centric aspects of such methodologies, techniques, and systems. The theme for this year will be Resilience, covering a wide variety of topics, such as COVID-19 recovery, organizational cyber resilience, economic growth and stability, climate change recovery, intelligent user interface resilience, and similar. While we encourage submissions related to this theme, the scope of the conference is not limited to the theme only. Thanks for your interest in IUI 2024! Please read the information link below carefully regarding costs and deadlines. * Registration information: https://iui.acm.org/2024/registration.html - Early Registration until Feb 12, 2024 - Standard Registration until Mar 7, 2024 - Onsite Registration after Mar 7, 2024 * Registration Link: https://web.cvent.com/event/9865c85a-e052-4e7e-9947-cf19114283eb/summary Registration for the IUI 2024 full conference includes access to all sessions and workshops. Payment and registration is being handled through a 3rd party provider. If you have questions, please email the registration chair at registration2024 at iui.acm.org. -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.yousif at herts.ac.uk Mon Feb 12 11:55:17 2024 From: n.yousif at herts.ac.uk (Nada Yousif) Date: Mon, 12 Feb 2024 16:55:17 +0000 Subject: Connectionists: PhD studentships at University of Hertfordshire In-Reply-To: References: Message-ID: Applications are invited by the Centre for Future Societies Research for three fully funded PhD studentships at the University of Hertfordshire. These research studentships will be part of the University Alliance (UA) Doctoral Training Alliance in Future Societies, and include the proposed project: "Low-cost, accessible neurostimulation, experiments and simulations" to be supervised by Dr Nada Yousif and Prof Volker Steuber. The PhD project will combine experimental cerebellar stimulation and computational modelling of the cerebellum to show the feasibility of low-cost, accessible non-invasive neurostimulation for treating movement disorders. Please see the Centre for Future Societies Research web page where project details are held on https://www.herts.ac.uk/research/centres/centre-for-future-societies-research/phd-studentships Closing date for applications: 20th May 2024. Interviews will start soon after the closing date. Studentship start date: September 2024. For informal enquiries contact Dr Nada Yousif (n.yousif at herts.ac.uk) or Prof Volker Steuber (v.steuber at herts.ac.uk, http://biocomputation.herts.ac.uk/). Dr Nada Yousif School of Physics, Engineering and Computer Science University of Hertfordshire +44 (0)1707 284129 n.yousif at herts.ac.uk https://researchprofiles.herts.ac.uk/en/persons/nada-yousif https://sites.google.com/view/nada-yousif/home -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 12 14:21:49 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 12 Feb 2024 11:21:49 -0800 Subject: Connectionists: What Gary Marcus believes about existential risk and LLMs Message-ID: <1B2EC91B-91D8-4493-B900-AADDB745990D@nyu.edu> Geoff Hinton recently asked: > Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That?s an easy one. 
The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. E.g., in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html. My actual view is captured here: https://garymarcus.substack.com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don't have to be superintelligent to create serious problems. I am not worried, immediately, about "AGI risk" (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call "MAI risk": Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed, both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. ... Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ... We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that wish to exploit them. But that doesn't mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From achler at gmail.com Mon Feb 12 17:58:54 2024 From: achler at gmail.com (Tsvi Achler) Date: Mon, 12 Feb 2024 14:58:54 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: <57A1B364-AB59-487C-98E8-A23B64E4EA50@nyu.edu> References: <92337E16-9572-4525-BFDF-34978CE8A137@nyu.edu> <57A1B364-AB59-487C-98E8-A23B64E4EA50@nyu.edu> Message-ID: Unfortunately this field is too preoccupied with egos, hype, pomp and circumstance. All this politicking effectively inhibits novel approaches. I think the whole field can do better by looking at and supporting the next generation, those offering significantly different ideas. Sincerely, -Tsvi On Sun, Feb 11, 2024 at 10:53 PM Gary Marcus wrote: > More seriously, it is sad that we can't have an intelligent, moderated > discussion. > Gary > > On Feb 10, 2024, at 07:42, Gary Marcus wrote: > > Oh lookie, another potshot at the end of an apology for taking potshots. > > That's too perfect! > > Gary > > > On Feb 10, 2024, at 07:33, Geoffrey Hinton > wrote: > >
> I am sorry that i took a potshot at you. I will do my best not to do it > again. Your aim in life seems to be to get attention and I regret > facilitating this. > > Geoff > > > On Sat, Feb 10, 2024 at 8:14?AM Gary Marcus wrote: > >> Dear Geoff, >> >> Please stop taking potshots. Face me. >> >> You say I am irrelevant, yet you attack (and misrepresent) me regularly >> in your talks, including both this morning at The Vector Institute, and >> your ?just weights? riff in the Fall. On your web page, you have more >> mentions of me than Yann LeCun. (What greater measure of relevance could >> there be?) >> >> To make it easy for you, I will come to Toronto, on your home turf. >> (Perhaps Vector Institute and Web Summit could co-host). >> >> We will get a neutral moderator, and a huge audience. And we can fight >> together for the things we are actually both concerned about, around AI >> safety, and make a huge statement by doing so. >> >> The world will learn from the debate?and also from our overlap. It will >> be win-win for the world. >> >> ? Gary >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From triesch at fias.uni-frankfurt.de Tue Feb 13 07:14:36 2024 From: triesch at fias.uni-frankfurt.de (Jochen Triesch) Date: Tue, 13 Feb 2024 13:14:36 +0100 Subject: Connectionists: Open PhD or Post-Doc Position on Computational Modeling of Active Auditory Perception in the Triesch lab, Frankfurt am Main, Germany Message-ID: <90159215-9EBF-42C7-8B4D-BFE40C9A8004@fias.uni-frankfurt.de> We solicit applications for a PhD or post-doc position at the Frankfurt Institute for Advanced Studies (FIAS) to develop neural network-based computational models of active auditory perception.? The project will address how the brain simultaneously adapts sensory coding strategies, attentional processing, and behavior to optimize active auditory perception in its recurrent processing architecture. The project is embedded in Germany?s priority program ?Sensing LOOPS: Cortico-subcortical Interactions for Adaptive Sensing?(www.brainloops.de/spp2411) and will provide ample opportunities for collaboration with experimental labs, including the lab of Julio Hechavarrria in Frankfurt (https://www.julio-hechavarria.com). Keywords: active perception, active efficient coding, active noise avoidance, recurrent neural network, attentional modulation, bat echolocation, cocktail party problem. Please see some of our previous work on active visual and auditory perception and use of recurrent networks to model visual object recognition to get a flavor of this line of work: ? Eckmann, S., Klimmasch, L., Shi, B. E., & Triesch, J. (2020). Active efficient coding explains the development of binocular vision and its failure in amblyopia. Proceedings of the National Academy of Sciences, 117(11), 6156-6162. https://www.pnas.org/doi/abs/10.1073/pnas.1908100117 ? Wijesinghe, L. P., Wohlgemuth, M. J., So, R. H., Triesch, J., Moss, C. F., & Shi, B. E. (2021). Active head rolls enhance sonar-based auditory localization performance. PLoS computational biology, 17(5), e1008973. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008973 ? Ernst, M. R., Burwick, T., & Triesch, J. (2021). Recurrent processing improves occluded object recognition and gives rise to perceptual hysteresis. Journal of Vision, 21(13), 6-6. https://jov.arvojournals.org/article.aspx?articleid=2778154 ? Klimmasch, L., Schneider, J., Lelais, A., Fronius, M., Shi, B. E., & Triesch, J. (2021). 
The development of active binocular vision under normal and alternate rearing conditions. Elife, 10, e56212. https://elifesciences.org/articles/56212 The Frankfurt Institute for Advanced Studies (https://fias.institute/en/) is a research institution dedicated to fundamental theoretical research in various areas of science. The city of Frankfurt is the hub of one of the most vibrant metropolitan areas in Europe. It boasts a rich culture and arts community and repeatedly earns high rankings in worldwide surveys of quality of living. We are seeking an outstanding and highly motivated post-doc for this project. Applicants should have obtained a Master or PhD in Computational Neuroscience or a related field (Machine Learning, Physics, Engineering, etc.). The ideal candidate will have excellent analytic and neural network modeling skills (in particular using unsupervised and reinforcement learning techniques), a good understanding of information theory and a broad knowledge of Computational Neuroscience. A strong interest in active auditory perception and desire to collaborate with experimental labs is essential. Renumeration is according to the German E13 pay scale. Details depend on the applicant?s experience. Applications should consist of a single pdf file. Please include a brief statement of research interests, CV and publication list, and contact information for at least two references. Application deadline is February 29, 2024. Upload your document using the application platform at: https://pm.fias.science/projects/application. Click on ?New Issue? to start the application process. -- Prof. Dr. Jochen Triesch Johanna Quandt Chair for Theoretical Life Sciences Frankfurt Institute for Advanced Studies and Goethe University Frankfurt http://fias.uni-frankfurt.de/~triesch/ -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1418 bytes Desc: not available URL: From angelo.cangelosi at manchester.ac.uk Tue Feb 13 07:28:16 2024 From: angelo.cangelosi at manchester.ac.uk (Angelo Cangelosi) Date: Tue, 13 Feb 2024 12:28:16 +0000 Subject: Connectionists: Postdoc in Neuromorphic Systems / Computational Neuroscience, University of Manchester Message-ID: <2C7FE032-346B-4E82-A65D-CE82ED326390@manchester.ac.uk> Postdoc Research Associate in Neuromorphic Systems / Computational Neuroscience University of Manchester, UK A Postdoctoral Research Associates in Neuromorphic Systems and/or Computational Neuroscience for robotics is required for a period of 3.5 years to work on the Horizon/InnovateUK project ?PRIMI: Performance in Robots Interaction via Mental Imagery. This is a collaborative project of the University of Manchester?s Cognitive Robotics Lab with various academic and industry partners in the UK and Europe. PRIMI will synergistically combine research and development in neurophysiology, psychology, machine intelligence, cognitive mechatronics, neuromorphic engineering, and humanoid robotics to build developmental models of higher-cognition abilities ? mental imagery, abstract reasoning, and theory of mind ? boosted by energy- efficient event-driven computing and sensing. You will carry out research on the design of neuromorphic systems models for robotics. The postdoc will work collaboratively with the other postdocs and PhD students in the PRIMI project. This post requires expertise in computational neuroscience (e.g. spiking neural networks) for robotics and/or neuromorphic systems. 
You will be working collaboratively as part of the Cognitive Robotics Lab at the Department of Computer Science at the University of Manchester under the supervision of Professor Angelo Cangelosi. Close collaboration with the other project partners will also be required. Application deadline: 8 March 2024 Link to online application system: https://www.jobs.manchester.ac.uk/Job/JobDetail?JobId=28042 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 13 12:54:48 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 13 Feb 2024 09:54:48 -0800 Subject: Connectionists: Statistics versus "Understanding" in Generative AI. Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- [Several image attachments were also scrubbed.] From weng at msu.edu Tue Feb 13 12:36:17 2024 From: weng at msu.edu (Weng, Juyang) Date: Tue, 13 Feb 2024 17:36:17 +0000 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Message-ID: Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Geoffrey Hinton can simply give you potshots without valid analysis. Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49 AM Gary Marcus wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That's an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. E.g.
in interview with AFP, posted at TechExplore I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html. My actual view is captured here , https://garymarcus.substack.com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don?t have to be superintelligent to create serious problems. I am not worried, immediately, about ?AGI risk? (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed?both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. .. Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ? We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that which to exploit them. But that doesn?t mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesse.grootjen at um.ifi.lmu.de Tue Feb 13 09:54:04 2024 From: jesse.grootjen at um.ifi.lmu.de (Grootjen, Jesse) Date: Tue, 13 Feb 2024 14:54:04 +0000 Subject: Connectionists: CfP - HHAI 2024 [Extended Deadline] - The third International Conference on Hybrid Human-Artificial Intelligence In-Reply-To: <6E374A41-4981-426E-866A-C06F5A1EEC69@lmu.de> References: <6E374A41-4981-426E-866A-C06F5A1EEC69@lmu.de> Message-ID: <028D17A2-3023-4F39-B3FE-BB812D02986A@lmu.de> On 13 Feb 2024, at 15:47, Grootjen, Jesse wrote: * We apologize if you receive multiple copies of this email * Call for Participation: HHAI2024 ? The third International Conference on Hybrid Human-Artificial Intelligence June 10-14, 2024, Malm?, Sweden Call for Contributions to the Doctoral Consortium In this message, we shortened each call to their essentials ? the full text of each call is available on the website: https://hhai-conference.org/2024/ The HHAI 2024 Doctoral Consortium (DC) will take place as part of the 3rd International Conference on Hybrid Human-Artificial Intelligence in June 2024, Malm?, Sweden. This forum will provide early as well as middle/late-stage PhD students in the field of Hybrid Intelligence focusing on the study of Artificial intelligence systems that cooperate synergistically, proactively and purposefully with humans, amplifying instead of replacing human intelligence. The Doctoral Consortium will take place in person at the HHAI 2024 conference. Important Dates All deadlines are 23:59 AoE (anywhere on Earth). 
- Submission Deadline: February 12th, 2024 February 19th, 2024 - Reviews Released: March 18th, 2024 - Camera-ready Papers Due: April 18th, 2024 - Doctoral Consortium: June 11th, 2024 Proposals should be submitted via Easychair: https://easychair.org/conferences/?conf=hhai2024 Location HHAI 2024 will be an in-person, single-track conference, held in Malm?, Sweden on June 10-14, 2024. More information on the venue as well as travel information can be found on the website: https://hhai-conference.org/2024/ Contact information Conference chairs: Frank Dignum (Ume? University, SE), Fabian Lorig (Malm? University, SE), Jason Tucker (Malm? University, SE), and Adam Dahlgren Lindstr?m (Ume? University, SE). Doctoral Consortium chairs: Passant El.Agroudy (DFKI, DE), Michiel van der Meer (Leiden University, NL) and Harko Verhagen (Stockholm University, SE) For questions, you can reach the doctoral consortium chairs at dc at hhai-conference.org Kind regards, Julian Rasch & Jesse Grootjen Publicity and Social Media Chairs HHAI 2024 https://www.hhai-conference.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 13 17:20:54 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 13 Feb 2024 14:20:54 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: <12CB17FC-8CE9-4338-BD0F-B3989E06E7D2@nyu.edu> those are fine keywords, but not real counterarguments with enough depth to have a conversation around, nor solutions to any of the specific problems I raised. > On Feb 13, 2024, at 2:01 PM, Weng, Juyang wrote: > > Dear Gary, > I read your posted articles, but by those shallow arguments in your posts, somebody may say that your arguments are as shallow and unconvincing as Geoffrey Hinton's. > For example, you wrote, "lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc." But they are shallow because human kids may not have them either. > You overlooked the 5th aspect of the four aspects of learning (see my book Natural and Artificial Intelligence NAI): > (1) Learning framework (e.g., incremental vs. batch or DP); > (2) Sensors and effectors; > (3) Internal representations; > (4) Computational resources; and > (5) Learning experience. > Do the above 5 aspects give you more convincing reasons to criticize Deep Learning and LLMs? > Best regards, > -John Weng > > On Tue, Feb 13, 2024 at 12:55?PM Gary Marcus > wrote: > Thanks for your question. Why should we think Generative AI systems lack understanding? Over the last couple weeks, since Hinton?s October critique of me surfaced, I have written five recent essays on the matter, filled with examples, each exploring a different facet of the issues, both in chatbots and image generation systems: > https://open.substack.com/pub/garymarcus/p/deconstructing-geoffrey-hintons-weakest > https://open.substack.com/pub/garymarcus/p/further-trouble-in-hinton-city > https://open.substack.com/pub/garymarcus/p/there-must-be-some-misunderstanding > https://open.substack.com/pub/garymarcus/p/horse-rides-astronaut-redux > https://open.substack.com/pub/garymarcus/p/statistics-versus-understanding-the > A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. 
Much more context (and many more examples) in the essays themselves. > > Gary > > > From: Weng, Juyang > Sent: Tuesday, February 13, 2024 12:36 PM > To: gary.marcus at nyu.edu > > Cc: connectionists at mailman.srv.cs.cmu.edu > > Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate > > Dear Gary, > You wrote, "LLMs do not really understand what they are saying". > Those LLMs generated text in a natural language, didn't they? > Why do you say that LLMs do not understand such text? > The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! > That is why Jeffery Hinton can simply give you potshots without valid analysis. > Best regards, > -John Weng > Brain-Mind Institute > > On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus > wrote: > Geoff Hinton recently asked: > > Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. > > That?s an easy one. The second, regarding existential threat. I > > do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) > > I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographic diverse, capable of building vaccines, etc. E.g. in interview with AFP, posted at TechExplore I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html . > > My actual view is captured here , https://garymarcus.substack.com/p/ai-risk-agi-risk . > although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don?t have to be superintelligent to create serious problems. I am not worried, immediately, about ?AGI risk? (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed?both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. .. > Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ? > We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. > LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that which to exploit them. > > But that doesn?t mean they are remotely likely to literally end the species. > > I hope that clarifies my view. 
> Gary > > > -- > Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 13 18:07:35 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 13 Feb 2024 15:07:35 -0800 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: <4AFD66E9-1BBA-452D-8CC0-890F6E22BFD4@nyu.edu> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot 2024-02-13 at 4.52.58?PM Type: image/jpeg Size: 134129 bytes Desc: not available URL: From weng at msu.edu Tue Feb 13 18:53:17 2024 From: weng at msu.edu (Weng, Juyang) Date: Tue, 13 Feb 2024 23:53:17 +0000 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: <12CB17FC-8CE9-4338-BD0F-B3989E06E7D2@nyu.edu> References: <12CB17FC-8CE9-4338-BD0F-B3989E06E7D2@nyu.edu> Message-ID: Dear Gary, They are not keywords, but five aspects. In other words, they represent a partition of the set of all factors (millions!) that relate to the performance of a brain or learning machine into five subsets. This partition makes our discussion more systematic. You only argue about some behavior errors, not in terms of the five aspects. This is hopeless to win laymen and media. Even a human child has many behavioral mistakes. These mistakes do not invalidate the child's brain. In summary, your criticism of LLMs is example-based, shallow, weak, and disorganized, not convincing that LLMs are fundamentally wrong. I will wait a little more time before I raise some fundamental wrongs. Best regards, -John ________________________________ From: Gary Marcus Sent: Tuesday, February 13, 2024 5:20 PM To: Weng, Juyang Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate those are fine keywords, but not real counterarguments with enough depth to have a conversation around, nor solutions to any of the specific problems I raised. On Feb 13, 2024, at 2:01 PM, Weng, Juyang wrote: Dear Gary, I read your posted articles, but by those shallow arguments in your posts, somebody may say that your arguments are as shallow and unconvincing as Geoffrey Hinton's. For example, you wrote, "lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc." But they are shallow because human kids may not have them either. You overlooked the 5th aspect of the four aspects of learning (see my book Natural and Artificial Intelligence NAI): (1) Learning framework (e.g., incremental vs. batch or DP); (2) Sensors and effectors; (3) Internal representations; (4) Computational resources; and (5) Learning experience. Do the above 5 aspects give you more convincing reasons to criticize Deep Learning and LLMs? Best regards, -John Weng On Tue, Feb 13, 2024 at 12:55?PM Gary Marcus > wrote: Thanks for your question. Why should we think Generative AI systems lack understanding? 
Over the last couple weeks, since Hinton?s October critique of me surfaced, I have written five recent essays on the matter, filled with examples, each exploring a different facet of the issues, both in chatbots and image generation systems: * https://open.substack.com/pub/garymarcus/p/deconstructing-geoffrey-hintons-weakest * https://open.substack.com/pub/garymarcus/p/further-trouble-in-hinton-city * https://open.substack.com/pub/garymarcus/p/there-must-be-some-misunderstanding * https://open.substack.com/pub/garymarcus/p/horse-rides-astronaut-redux * https://open.substack.com/pub/garymarcus/p/statistics-versus-understanding-the A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. Gary ________________________________ From: Weng, Juyang Sent: Tuesday, February 13, 2024 12:36 PM To: gary.marcus at nyu.edu > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Jeffery Hinton can simply give you potshots without valid analysis. Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus > wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That?s an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographic diverse, capable of building vaccines, etc. E.g. in interview with AFP, posted at TechExplore I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html. My actual view is captured here , https://garymarcus.substack.com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don?t have to be superintelligent to create serious problems. I am not worried, immediately, about ?AGI risk? (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed?both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. .. 
Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ? We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that which to exploit them. But that doesn?t mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at msu.edu Tue Feb 13 17:01:32 2024 From: weng at msu.edu (Weng, Juyang) Date: Tue, 13 Feb 2024 22:01:32 +0000 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: Dear Gary, I read your posted articles, but by those shallow arguments in your posts, somebody may say that your arguments are as shallow and unconvincing as Geoffrey Hinton's. For example, you wrote, "lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc." But they are shallow because human kids may not have them either. You overlooked the 5th aspect of the four aspects of learning (see my book Natural and Artificial Intelligence NAI): (1) Learning framework (e.g., incremental vs. batch or DP); (2) Sensors and effectors; (3) Internal representations; (4) Computational resources; and (5) Learning experience. Do the above 5 aspects give you more convincing reasons to criticize Deep Learning and LLMs? Best regards, -John Weng On Tue, Feb 13, 2024 at 12:55?PM Gary Marcus wrote: Thanks for your question. Why should we think Generative AI systems lack understanding? Over the last couple weeks, since Hinton?s October critique of me surfaced, I have written five recent essays on the matter, filled with examples, each exploring a different facet of the issues, both in chatbots and image generation systems: * https://open.substack.com/pub/garymarcus/p/deconstructing-geoffrey-hintons-weakest * https://open.substack.com/pub/garymarcus/p/further-trouble-in-hinton-city * https://open.substack.com/pub/garymarcus/p/there-must-be-some-misunderstanding * https://open.substack.com/pub/garymarcus/p/horse-rides-astronaut-redux * https://open.substack.com/pub/garymarcus/p/statistics-versus-understanding-the A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. 
Gary ________________________________ From: Weng, Juyang Sent: Tuesday, February 13, 2024 12:36 PM To: gary.marcus at nyu.edu Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Jeffery Hinton can simply give you potshots without valid analysis. Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That?s an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographic diverse, capable of building vaccines, etc. E.g. in interview with AFP, posted at TechExplore I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html. My actual view is captured here , https://garymarcus.substack.com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don?t have to be superintelligent to create serious problems. I am not worried, immediately, about ?AGI risk? (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed?both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. .. Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ? We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that which to exploit them. But that doesn?t mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From l.s.smith at cs.stir.ac.uk Tue Feb 13 18:32:18 2024 From: l.s.smith at cs.stir.ac.uk (Prof Leslie Smith) Date: Tue, 13 Feb 2024 23:32:18 -0000 Subject: Connectionists: Statistics versus Understanding in Generative AI. In-Reply-To: References: Message-ID: <7b3f9909601c9da698ce39b52a12004c.squirrel@mail.cs.stir.ac.uk> Can I suggest that there's a deep philosophical disjunction here. I'd say that it hinges on whether "Understanding" implies grounded-ness or not. Does it require the understanding entity to have a model of the environment into which the information can be placed, or can understanding be achieved in an abstract system that only has access to a world of textual data? On one hand, there's a danger that we imply that understanding only becomes possible in living systems, whatever the capability of the synthetic system, and on the other hand there's a danger that we ascribe understanding to systems that we really shouldn't. Does a thermostat understand temperature? This is a difficult question altogether: at what point do we ascribe understanding to animals? They all behave in ways appropriate to their environment, from the single-celled Paramecium upwards, but we don't tend to consider them to understand their environment until we consider much higher animals (like mice or rats). --Leslie Smith Gary Marcus wrote: -- Prof Leslie Smith (Emeritus) Computing Science & Mathematics, University of Stirling, Stirling FK9 4LA Scotland, UK Mobile: +44 7913 219013 Web: http://www.cs.stir.ac.uk/~lss Blog: http://lestheprof.com From Brad at kairosresearch.com Tue Feb 13 20:52:28 2024 From: Brad at kairosresearch.com (Brandon Minnery) Date: Wed, 14 Feb 2024 01:52:28 +0000 Subject: Connectionists: Industry Job Posting - HCI / LLM Researcher Message-ID: Greetings, Kairos Research is seeking to hire a Senior Human-Computer Interaction Researcher with expertise in human language technology. Position Overview: We currently have an opening for a full-time Senior Human-Computer Interaction Researcher whose work seeks to incorporate recent advances in generative large language models (LLMs). Specific research areas of interest include human-machine dialogue, human-AI alignment, trust (and over-trust) in AI, and the use of multimodal generative AI approaches in conjunction with other tools and techniques (e.g., virtual and/or augmented reality) to accelerate learning in real-world task environments. Additional related projects underway at Kairos involve the integration of generative AI into interactive dashboards for visualizing and interrogating social media narratives. The Human-Computer Interaction Researcher will play a significant role in supporting our growing body of work with DARPA, Special Operations Command, the Air Force Research Laboratory, and other federal sponsors. Company Profile: Kairos Research (https://kairosresearch.com) is a young and dynamic company based in Dayton, OH, conducting cutting-edge research at the intersection of human and artificial intelligence for federal, State, and industry clients. We are especially proud of our culture here at Kairos, where we place a strong emphasis on fostering a mutually supportive, friendly, and collaborative work environment. Creativity and passion for research are the lifeblood of our organization and animate everything we do. For all positions, we offer a competitive salary and benefits package in addition to performance-based incentive programs.
The Ideal Candidate: * Is a natural leader who enjoys taking charge of a project and/or team. * Has a track record of writing winning research proposals and building his/her own portfolio of externally funded research. * Has excellent communication skills and enjoys engaging directly with research colleagues and government program officers. Other Preferred Skills: * Experience with 1) deep learning, as well as common associated frameworks (e.g. PyTorch or TensorFlow) and 2) NLP, employing established toolsets (e.g. spaCy, nltk, gensim). * Familiarity with common tools, frameworks, and services for building, testing, and evaluating generative AI, especially LLMs (e.g., HuggingFace, LangChain, OpenAI). * Experience utilizing contemporary LLM frameworks for learning and inference, including supervised fine-tuning of LLMs, as well as transformer reinforcement learning (e.g., reinforcement learning with human feedback (RLHF), direct preference optimization (DPO)). * Expertise in evaluating AI alignment/misalignment, diagnosing possible sources of bias, and deploying strategies to address or mitigate poor alignment. This is a great opportunity for an academic research scientist who is seeking a career change but who wants to continue pursuing original research! Job Location: Dayton, Ohio NOTE: Candidates must be US citizens or Permanent Residents (i.e., must hold a green card). REQUIRED: * US citizenship or green card. * Ph.D. in Human-Computer Interaction, Cognitive Science, Computer Science, Linguistics, or related discipline. * 3+ years direct research experience post-Ph.D. * Willingness and ability to obtain independent research funding. * Strong publication record as lead author or senior author in high-impact journals/conferences. * Excellent written and oral presentation skills. Interested candidates should email their CV/resume to techadmin at kairosresearch.com and cc: Brad Minnery (brad at kairosresearch.com) Brandon (Brad) Minnery, PhD Kairos Research LLC brad at kairosresearch.com https://kairosresearch.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabriele.scheler at yahoo.com Tue Feb 13 15:03:56 2024 From: gabriele.scheler at yahoo.com (gabriele.scheler at yahoo.com) Date: Tue, 13 Feb 2024 20:03:56 +0000 (UTC) Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: <146508390.1638995.1707854636062@mail.yahoo.com> Simply put, LLMs represent words by their contexts. They have no referents for the words. That is not understanding. You can follow many tests people have performed to show that LLMs can reproduce, but make errors from not knowing the referential meaning of pieces of text. They mimic understanding, also known as a statistical parrot. On Tuesday, February 13, 2024 at 07:15:07 PM GMT+1, Weng, Juyang wrote: Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Geoffrey Hinton can simply give you potshots without valid analysis. Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49 AM Gary Marcus wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That's an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. E.g., in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html. My actual view is captured here: https://garymarcus.substack.com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don't have to be superintelligent to create serious problems. I am not worried, immediately, about "AGI risk" (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call "MAI risk": Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed, both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. ... Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ... We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that wish to exploit them. But that doesn't mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From lrubchin at iupui.edu Tue Feb 13 17:00:58 2024 From: lrubchin at iupui.edu (Rubchinsky, Leonid L) Date: Tue, 13 Feb 2024 22:00:58 +0000 Subject: Connectionists: 33rd Annual Computational Neuroscience Meeting CNS*2024 - tutorials and workshops, reduced registration fee for low and middle income countries In-Reply-To: References: Message-ID: The deadlines to propose workshops and tutorials at the 33rd Annual Computational Neuroscience Meeting CNS*2024 are approaching. Proposals from all levels of faculty as well as advanced postdoctoral fellows are welcome. CALL FOR WORKSHOPS: https://www.cnsorg.org/cns-2024-call-for-workshops. Priority Deadline for proposals: February 19, 2024.
Workshop proposals submitted after this date will be evaluated based on available spots for additional workshops. CALL FOR TUTORIALS: https://www.cnsorg.org/cns-2024-call-for-tutorials. Deadline for proposals: February 19, 2024 CNS*2024 participants from low and middle income countries are eligible for reduced registration fee, see https://www.cnsorg.org/cns-2024-registration-fees General information about CNS*2024: https://www.cnsorg.org/cns-2024 Registration and abstract submission are open. Abstract submission deadline is March 11, 2024. With kind regards, Leonid Rubchinsky OCNS Vice President, on behalf of the OCNS Board of Directors *********************** Leonid Rubchinsky, PhD Professor Department of Mathematical Sciences, Indiana University Purdue University Indianapolis Stark Neurosciences Research Institute, Indiana University School of Medicine 402 N. Blackford St Indianapolis, IN 46202-3216 lrubchin at iu.edu http://www.math.iupui.edu/~lrubchin *********************** From lorincz at inf.elte.hu Tue Feb 13 14:21:39 2024 From: lorincz at inf.elte.hu (Andras Lorincz) Date: Tue, 13 Feb 2024 19:21:39 +0000 Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate In-Reply-To: References: Message-ID: I tried to build a tangram together with GPT 4. Even if I forced it ? by prompting ? to do it though programming instead of using DALL-E. It has no "understanding" of * the world, including neighboring relations * physics, including stability Better prompting may help, of course. But this is not the real issue. GPT 4 is an associative system, a kind of implicit and/or procedural memory, something like the person in the Chinese Room if you wish. Cognition and thinking are related to declarative, or explicit memory, using logic, facts and rules, the recollection of factual information, and not predictive associations. BUT the mentioned properties can be added. Considerable effort goes into that direction, see, e.g., LINC (A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers) and CLOMO (Counterfactual Logical Modification with Large Language Models). Soon we will not be able to prove that LLMs/GPTs are not capable of understanding. It will look like they do. A. ------------------------------------ Andras Lorincz Fellow of the European Association for Artificial Intelligence Department of Artificial Intelligence Faculty of Informatics Eotvos Lorand University Budapest, Hungary ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Tuesday, February 13, 2024 6:36 PM To: gary.marcus at nyu.edu Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Jeffery Hinton can simply give you potshots without valid analysis. Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. 
That's an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.)

I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. E.g., in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html.

My actual view is captured here, https://garymarcus.substack.com/p/ai-risk-agi-risk:

although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don't have to be superintelligent to create serious problems. I am not worried, immediately, about "AGI risk" (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call "MAI risk" - Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed - both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. ...

Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ...

We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.

LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that wish to exploit them. But that doesn't mean they are remotely likely to literally end the species.

I hope that clarifies my view.
Gary

--
Juyang (John) Weng

From gabriele.scheler at yahoo.com Wed Feb 14 03:17:14 2024
From: gabriele.scheler at yahoo.com (gabriele.scheler at yahoo.com)
Date: Wed, 14 Feb 2024 08:17:14 +0000 (UTC)
Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
In-Reply-To: References: Message-ID: <2030150530.1836255.1707898634977@mail.yahoo.com>

Hybrid AI to which you refer - incorporating logic into LLMs - will not solve the lack of reference. Besides, it is fine to expect much from the future, which however is notoriously hard to predict. If the foundation model is wrong and misguided, adding parts to it may not help. Right now LLMs do not understand.

On Wednesday, February 14, 2024 at 06:51:18 AM GMT+1, Andras Lorincz wrote:
I tried to build a tangram together with GPT 4, even when I forced it - by prompting - to do it through programming instead of using DALL-E. It has no "understanding" of
- the world, including neighboring relations
- physics, including stability
Better prompting may help, of course. But this is not the real issue.
GPT 4 is an associative system, a kind of implicit and/or procedural memory, something like the person in the Chinese Room if you wish. Cognition and thinking are related to declarative, or explicit, memory: using logic, facts and rules, and the recollection of factual information, not predictive associations. BUT the mentioned properties can be added. Considerable effort is going in that direction; see, e.g., LINC (A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers) and CLOMO (Counterfactual Logical Modification with Large Language Models). Soon we will not be able to prove that LLMs/GPTs are not capable of understanding. It will look like they do.
A.
------------------------------------
Andras Lorincz
Fellow of the European Association for Artificial Intelligence
Department of Artificial Intelligence
Faculty of Informatics
Eotvos Lorand University
Budapest, Hungary

From: Connectionists on behalf of Weng, Juyang
Sent: Tuesday, February 13, 2024 6:36 PM
To: gary.marcus at nyu.edu
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate

Dear Gary,
You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and the media! That is why Geoffrey Hinton can simply give you potshots without valid analysis.
Best regards,
-John Weng
Brain-Mind Institute

On Tue, Feb 13, 2024 at 12:49 AM Gary Marcus wrote:
Geoff Hinton recently asked:
Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat.

That's an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.)

I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. E.g., in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat 'overblown', https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html.

My actual view is captured here, https://garymarcus.substack.com/p/ai-risk-agi-risk:

although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don't have to be superintelligent to create serious problems. I am not worried, immediately, about "AGI risk" (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call "MAI risk" - Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed - both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. ...
Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ...

We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.

LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that wish to exploit them. But that doesn't mean they are remotely likely to literally end the species.

I hope that clarifies my view.
Gary

--
Juyang (John) Weng

From mitsu at well.com Wed Feb 14 04:04:16 2024
From: mitsu at well.com (Mitsu Hadeishi)
Date: Wed, 14 Feb 2024 01:04:16 -0800
Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
In-Reply-To: <146508390.1638995.1707854636062@mail.yahoo.com>
References: <146508390.1638995.1707854636062@mail.yahoo.com>
Message-ID:

Text is generated by the real world, and therefore one can infer structures of the real world simply from the structural relationships within the text itself. Even our human experience of the world is heavily mediated - no one has "direct" experience of "referents"; it is all mediated by causal relationships which we assume preserve meaningful structure from the world, which allows our experiences and thoughts to be useful. But that is still mediated and indirect, as it has to be.

The upshot is that LLMs may well have a better ability to form conceptual models of time than of space (since they operate more directly on time series of symbols, and their concepts of space have to be inferred from far less data) - and we have experiments showing this is in fact the case (LLMs' temporal reasoning is much better than their spatial reasoning). But the fact that these concepts are imperfect is not a good argument in favor of the idea that they have no "understanding" whatsoever.

For instance, an LLM can correctly "interpret" English-language descriptions of code transformations in coding problems and refactor code, which involves conceptual mapping far beyond mere parroting of preexisting training data: the space of possible code transforms is exponentially larger than any training set. Even given the fact that LLMs are not perfect at this, if they had zero conceptual understanding one would predict that they could only do coding problems that mimicked samples they had ingested during training, not apply generalized principles to generate code even somewhat correctly.

Another bit of evidence for some form of understanding is that you can combine LLMs with other modalities and tools and they are able to "use" these new modalities correctly. If the abstractions in LLMs had some relationship to what we might call "understanding", even if it is very limited, one would predict that they could be extended and used in this way, adding multimodal capabilities and so on. And that is what we observe.
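A minimal illustration of the kind of refactoring task described above (the instruction, snippet, and function names are my own made-up example, not taken from the original message): the model is given an English instruction plus a snippet and is expected to return a behavior-preserving rewrite.

    # English instruction given to the model (hypothetical):
    # "Rewrite the loop-based function below so that it uses a generator
    #  expression instead of an explicit loop, without changing its behavior."

    # Snippet supplied with the instruction:
    def total_even_squares_loop(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n * n
        return total

    # The kind of refactor the model is expected to produce:
    def total_even_squares_expr(numbers):
        return sum(n * n for n in numbers if n % 2 == 0)

    # Sanity check that the two versions agree on a sample input:
    assert total_even_squares_loop([1, 2, 3, 4]) == total_even_squares_expr([1, 2, 3, 4]) == 20

Because the space of such instruction/refactor pairs is combinatorially large, producing a correct behavior-preserving rewrite from an English description is difficult to attribute to memorized training examples alone, which is the point being made above.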
On Tue, Feb 13, 2024 at 21:56 gabriele.scheler at yahoo.com < gabriele.scheler at yahoo.com> wrote: > > Simply put, LLMs represent words by their contexts. They have no referents > for the words. That is not understanding. You can follow many tests people > have performed to show that LLMs can reproduce, but make errors from a lack > of knowing about the referential meaning of pieces of text. They mimic > understanding, aka known as statistical parrot. > > On Tuesday, February 13, 2024 at 07:15:07 PM GMT+1, Weng, Juyang < > weng at msu.edu> wrote: > > > Dear Gary, > You wrote, "LLMs do not really understand what they are saying". > Those LLMs generated text in a natural language, didn't they? > Why do you say that LLMs do not understand such text? > The truly understandable answer to this question is not as simple as > you believe! What you "believe" is not convincing and intuitive to many > laymen and media! > That is why Jeffery Hinton can simply give you potshots without valid > analysis. > Best regards, > -John Weng > Brain-Mind Institute > > On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus wrote: > > Geoff Hinton recently asked: > > Which part of this is a misrepresentation: Gary Marcus believes that LLMs > do not really understand what they are saying and Gary Marcus also believes > that they are an existential threat. > > > That?s an easy one. The second, regarding existential threat. I > > *do* believe that LLMs do not really understand what they are saying. > But I do *not* believe that LLMs as such pose a (literally) existential > threat, nor have I ever said such a thing, not in the Senate, not in my > Substack, not here, and not anywhere else. (Anyone with evidence otherwise > should step forward.) > > I *have* in fact said the opposite; e.g., I have said that the human > species is hard to extinguish, because we are genetically and geographic > diverse, capable of building vaccines, etc. E.g. in interview with AFP, > posted at TechExplore I said that I thought the extinction threat > 'overblown', https:// techxplore.com/news/2023-06- > human-extinction-threat- overblown-ai.html > > . > > My actual view is captured here , https://garymarcus.substack. > com/p/ai-risk-agi-risk > > . > > *although a lot of the literature equates artificial intelligence risk > with the risk of superintelligence or artificial general intelligence, you > don?t have to be superintelligent to create serious problems. I am not > worried, immediately, about ?AGI risk? (the risk of superintelligent > machines beyond our control), in the near term I am worried about what I > will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) > but widely deployed?both in terms of the sheer number of people using it, > and in terms of the access that the software has to the world. ..* > > *Lots of ordinary humans, perhaps of above average intelligence but not > necessarily genius-level, have created all kinds of problems throughout > history; in many ways, the critical variable is not intelligence > but power, which often caches out as access. In principle, a single idiot > with the nuclear codes could destroy the world, with only a modest amount > of intelligence and a surplus of ill-deserved access. 
?* > > *We need to stop worrying (just) about Skynet and robots taking over the > world, and think a lot more about what criminals, including terrorists, > might do with LLMs, and what, if anything, we might do to stop them.* > LLMs may well pose an existential threat to democracy, because (despite > their limited capacity for understanding and fact checking) they are > excellent mimics, and their inability to fact-check is to the advantage of > bad actors that which to exploit them. > > But that doesn?t mean they are remotely likely to literally end the > species. > > I hope that clarifies my view. > Gary > > > > -- > Juyang (John) Weng > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.pavlovic at tuebingen.mpg.de Wed Feb 14 05:09:29 2024 From: maria.pavlovic at tuebingen.mpg.de (Pavlovic, Maria) Date: Wed, 14 Feb 2024 10:09:29 +0000 Subject: Connectionists: Job Opening: Lab Mechatronics / Programmer/ Research and Admin Assistant (m/f/d) 100% in Tuebingen, Germany Message-ID: The Department for Sensory and Sensorimotor Systems of the Max-Planck-Institute for Biological Cybernetics studies the processing of sensory information (visual, auditory, tactile, olfactory) in the brain and the use of this information for directing body movements and making cognitive decisions. The research is highly interdisciplinary and uses theoretical and experimental approaches in humans. Our methodologies include visual psychophysics, eye tracking, fMRI, EEG, TMS in humans. For more information, please visit the department website: www.lizhaoping.org We are currently looking for a Lab Mechatronics / Programmer/ Research and Admin Assistant (m/f/d) 100% to join us, this position is open until it is filled. The position You will provide hardware, software, data taking, and managerial support for a diverse set of brain and neuroscience research activities. This includes: * Computer and IT support of Windows and Linux systems * Programming and debugging of computer code, especially at the stage of setting up new equipment or new experimental platforms * Provide technical, administrative, and operational support in the research data taking process. (The position holder should either have previous experience in visual psychophysics, or have the ability to quickly learn the data taking processes involved in the labs.) * Carry out or arrange for hardware repairs and troubleshooting * Equipment inventory and maintenance * Supervising and training of new equipment users * Setting up, updating and managing the database of knowledge and data from research projects, personnel and activities Our department is interdisciplinary, with research activities including human visual psychophysics, eye tracking, fMRI, EEG, TMS. We are looking for a person with a broad technical knowledge base, who loves working in a scientific environment and who is curious, open-minded, and able to adapt and learn new skills and solve new problems quickly. The set of skills that the individual should either already have or can quickly learn includes: MATLAB/Psychotoolbox, Python/OpenCV, Javascript, graphics and display technologies, EEG data taking techniques and similar, eye tracking, optics, electronics/controllers/sensors, etc. Your profile * Relevant degree in the field of physics, cognittive science, psychology, neuroscience, mechatronics, electronics, computer science, or related discipline. 
Scientific experience is a plus * Evidence or track record of broad technical skills and/or experience * Curiosity about our research topics * Good teamwork skills * A high level of resilience and professionalism * Ability to anticipate problems and skills in project management and communication * Strong command of English; fluency in German is highly desirable We offer We offer highly interesting, challenging and varied tasks; you will work closely and collaboratively with scientists, students, programmers, administrative staff, and central IT and mechanical/electronic workshop support to help achieve the scientific goals of the department. An international environment with regular opportunities for further education and training awaits you. The salary is paid in accordance with the collective agreement for the public sector (TV?D Bund), based on qualification and experience and will include social security benefits and additional fringe benefits in accordance with public service provisions. This position is initially limited to two years, with the possibility of extensions and a permanent contract. The Max Planck Society seeks to employ more handicapped people and strongly encourages them to apply. Furthermore, we actively support the compatibility of work and family life. The Max Planck Society also seeks to increase the number of women in leadership positions and strongly encourages qualified women to apply. The Max Planck Society strives for gender equality and diversity. Your application The position is available immediately and will be open until filled. We look forward to receiving your application that includes a cover letter, your curriculum vitae, relevant certificates, and three names and contacts for reference letters electronically by e-mail to jobs.li at tuebingen.mpg.de, where informal inquiries can also be addressed to. Please note that incomplete applications will not be considered. For further opportunities in our group, please visit https://www.lizhaoping.org/jobs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6707 bytes Desc: not available URL: From eleal at d.umn.edu Wed Feb 14 07:28:21 2024 From: eleal at d.umn.edu (Eleazar Leal) Date: Wed, 14 Feb 2024 06:28:21 -0600 Subject: Connectionists: CFP 2024 Intl Workshop on Quantum Data Science and Management @ VLDB Message-ID: ************************************************************************* CALL FOR PAPERS International Workshop on Quantum Data Science and Management (QDSM 2024) In conjunction with VLDB 2024 August 29, 2024, Guangzhou, China (and hybrid) Submission: May 5, 2024 Web: https://www.ifis.uni-luebeck.de/~groppe/qdsm ************************************************************************* ** Aims of the Workshop ** The field of quantum computing has experienced remarkable progress after decades of research and development. Prototypes of quantum computers already exist and have been made available to users through cloud services (e.g., IBM Q experience, Google quantum AI, or Xanadu quantum cloud). While fault-tolerant and large-scale quantum computers are not available yet, the potential of this new technology is undeniable. 
Quantum algorithms have the proven ability to either outperform classical approaches for various tasks or are impossible to be efficiently simulated by classical means under reasonable complexity-theoretic assumptions. Even imperfect current-day technology is speculated to exhibit computational advantages over classical systems. For most database researchers, quantum computing and quantum machine learning are still new research fields. The goal of this workshop is to bring together academic researchers and industry practitioners from multiple disciplines (e.g., database, AI, software, physics, etc.) to discuss the challenges, solutions, and applications of quantum computing and quantum machine learning that have the potential to advance the state of the art of data science and data management technologies. Our purpose is to foster the interaction between database researchers and more traditional quantum disciplines, as well as industrial users. The workshop serves as a forum for the growing quantum computing community to connect with database researchers to discuss the wider questions and applications of how quantum resources can benefit data science and data management tasks, and how quantum software can support this endeavor. We believe that many unsolved and interesting issues can be found at boundaries and intersections between different fields and that there are insufficient venues to publish such cross-disciplinary results. We also believe that an important aspect of future quantum computing will concern issues of handling data in one way or another. This workshop will serve as a venue not only to discuss early, experimental results in research, but also to feature a demonstration part with the intention of providing attendees with first-hand experience in using novel quantum computing techniques that go beyond the simple examples offered by various web services. This will give researchers a realistic intuition about quantum computing for data science and data management tasks. ** Categories of Papers ** The workshop solicits papers of the following categories: - Research Papers propose new approaches, theories or techniques related to quantum data science and management including new data structures, protocols and algorithms. They should make substantial theoretical and empirical contributions to the research field. - System Papers describe new systems and whole frameworks for enabling quantum data science and management. - Experiments and Analysis Papers focus on the experimental evaluation of existing approaches including data structures and algorithms for quantum data science and management and bring new insights through the analysis of these experiments. Results of Experiments and Analysis Papers can be, for example, showing the benefits of well-known approaches in new settings and environments, opening new research problems by demonstrating unexpected behavior or phenomena, or comparing a set of traditional approaches in an experimental survey. - Application Papers report practical experiences on applications of quantum data science and management. Application Papers might describe how to apply quantum technologies to specific application domains. - Vision Papers identify emerging new or future research issues and directions and describe new research visions for quantum data science and management. The new visions will potentially have significant impacts on society. - Demo Papers deal with innovative approaches and applications for quantum data science and management. 
These papers describe a showcase of the proposed approach/application. We are especially interested in demonstrations having a WOW-effect. The length of papers must be within 5 pages to 10 pages. Accepted papers will be published in the CEUR Workshop proceedings (CEUR-WS.org) and presented as oral presentations. ** Topics of Interest ** We are interested in all issues concerning quantum data science and management such as the following: - Quantum Computing for problems related to Data Science and Management - Quantum Data Science - Quantum Computing for Data Science - Data Science for Quantum Computing - Quantum Data Management - Quantum Computing for Data Management - Data Management for Quantum Computing - Quantum Machine Learning - Quantum Machine Learning Enabled Databases - Quantum Data Management to Support Machine Learning - New approaches to Quantum Machine Learning - Applications for - Quantum Data Science - Quantum Data Management - Quantum Machine Learning - Quantum Algorithms with applications in Quantum Data Science and Management - Quantum Software Tools for Quantum Data Science and Management - Frameworks and APIs - Programming Languages - Optimizers of Quantum Programs and Circuits - Quantum Cryptography and Security for Data Science and Management ** Workshop Chairs ** - Sven Groppe, University of Lubeck, Germany - Jiaheng Lu, University of Helsinki, Finland - Wolfgang Mauerer, Technical University of Applied Science Regensburg, Germany - Le Gruenwald, University of Oklahoma, USA ** Publicity Chairs ** - Eleazar Leal, University of Minnesota Duluth, USA ** Proceedings Chairs ** - Valter Uotila, University of Helsinki, Finland ** Program Committee ** - Umut Calikyilmaz, University of Lubeck, Germany - Maja Franz, OTH Regensburg, Germany - Srinjoy Ganguly, Woxsen University, India - Natacha Kuete Meli, University of Lubeck, Germany - Nitin Nayak, University of Lubeck, Germany - Jukka K. Nurminen, University of Helsinki, Finland - IIya Safro, University of Delaware, USA - Manuel Schonberger, OTH Regensburg, Germany - Ghanshyam Singh, Malaviya National Institute of Technology Jaipur (MNIT Jaipur), India - Valter Uotila, University of Helsinki, Finland - Sanjay Vishwakarma, IBM Quantum, IBM Research - Almaden, USA - Zhengtong Yan, University of Helsinki, Finland ** Important Dates ** Submission: May 5, 2024 Notification: June 30, 2024 Workshop: August 29, 2024 ** Submission ** Authors are invited to submit original, unpublished research papers that are not being considered for publication in any other forum. Manuscripts should be formatted using the camera-ready templates in the CEUR Workshop Proceedings (CEUR-WS.org) double-column format. The length of papers must be within 5 pages to 10 pages. Accepted papers will be published online in the CEUR Workshop proceedings (CEUR-WS.org). We describe manuscript preparation and submission procedure at https://www.ifis.uni-luebeck.de/~groppe/qdsm/submit -- Eleazar Leal, Ph.D. Associate Professor Dept. of Computer Science | University of Minnesota Duluth Heller Hall 331 1114 Kirby Drive Duluth, MN 55812 Tel: (218) 726-8452 | Fax: (218) 726-8240 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From interdonatos at gmail.com Wed Feb 14 08:31:22 2024 From: interdonatos at gmail.com (Roberto Interdonato) Date: Wed, 14 Feb 2024 14:31:22 +0100 Subject: Connectionists: =?utf-8?q?CFP_FRCCS_2024_Montpellier=2C_France_Ma?= =?utf-8?q?y_29_=E2=80=93_31=2C_2024_=5BDeadline_approaching!=5D?= Message-ID: Fourth* F*rench* R*egional* C*onference on* C*omplex* S*ystems May 29 ? 31, 2024 Montpellier, France *FRCCS 2024* After successfully hosting the conference in Dijon in 2021, Paris in 2022, and Le Havre in 2023, the fourth edition of the French Regional Conference on Complex Systems (FRCCS 2024) will be held in Montpellier, France. This single track international conference organized by CSS France serves as a platform to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange and promote ideas, facilitating the cross-fertilization of recent research work, industrial advancements, and original applications. Moreover, the conference emphasizes research topics with a high societal impact, showcasing the significance of complexity science in addressing complex societal challenges. Join us in Montpellier as we collectively strive to better understand complexity and its implications for society. You are cordially invited to submit your contribution until *February 21, 2024.* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Papers* about *original research* (up to 12 pages) ? *Extended Abstract* about *published or unpublished* research (3 to 4 pages). *Keynote Speakers* ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany *Publication* o Contributions will be included in the conference proceedings (With ISBN) o *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Frontiers in Big Data , edited by Frontiers *Submission * ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? *Select the Track: FRCCS2024* *Topics include, but are not limited to: * ? *Foundations of complex systems * - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks * - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. 
- *Neuroscience, **Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, Narrative complexity - *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment - *Infrastructure, planning, and environment * - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences - *Biological and (bio)medical complexity * - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging - *Social complexity* o social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation - *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, and culture - *Organisms and populations * - Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution - *Engineering systems and systems of systems* - bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology - *Complexity in physics and chemistry* - quantum computing, quantum synchronization, quantum chaos, random matrix theory *GENERAL CHAIRS* Roberto Interdonato CIRAD, UMR TETIS, Montpellier Bruno Pinaud LABRI University of Bordeaux -------------- next part -------------- An HTML attachment was scrubbed... URL: From danny.silver at acadiau.ca Wed Feb 14 09:49:52 2024 From: danny.silver at acadiau.ca (Danny Silver) Date: Wed, 14 Feb 2024 14:49:52 +0000 Subject: Connectionists: Statistics versus Understanding in Generative AI. In-Reply-To: <7b3f9909601c9da698ce39b52a12004c.squirrel@mail.cs.stir.ac.uk> References: <7b3f9909601c9da698ce39b52a12004c.squirrel@mail.cs.stir.ac.uk> Message-ID: I have a tendancy to agree with Leslie Smith concerning the philosophical disjunction that is occurring in this discussion. However, the distinction between an agent that statistically manipulates symbols (ie. can use language similar to humans) and an agent that has a grounded sense of concepts and relations (ie. knows about the world like humans) is not as obvious as it was 20 years ago. Certainly, it has a lot to do with the sensory modalities the agent can perceive and its ability to integrate and act on information from those modalities. We are getting closer to an understanding of how such modalities might be integrated. Recent work on LLMs that can input from, and output to, various modalities (text, video, audio, tactile) show evidence that they develop complex embedded representations of concepts (conreps) that include, or are closely connected to, representations of symbols (symreps). 
There is a lot more to do here in terms of how these sensory modalities are integrated in real time (in terms of architecture and learning), but my sense (pardon the pun) is that many AI researchers are on the right path.

For more on symreps and conreps, please see our paper (Silver and Mitchell, 2023), "The Roles of Symbols in Neural-based AI: They are Not What You Think!", from the Compendium of Neurosymbolic Artificial Intelligence, 2023, pp. 1-28. Our hypothesis and the conceptual architecture we present imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought.

==========================
Daniel L. Silver
Professor, Jodrey School of Computer Science
Data Scientist, Acadia Institute for Data Analytics
Acadia University, Office 314, Carnegie Hall, Wolfville, Nova Scotia, Canada B4P 2R6
Cell: (902) 679-9315
acadiau.ca

From: Connectionists on behalf of Prof Leslie Smith
Date: Wednesday, February 14, 2024 at 1:38 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Statistics versus Understanding in Generative AI.

Can I suggest that there's a deep philosophical disjunction here. I'd say that it hinges on whether "Understanding" implies grounded-ness or not. Does it require the understanding entity to have a model of the environment into which the information can be placed, or can understanding be achieved in an abstract system that only has access to a world of textual data? On the one hand, there's a danger that we imply that understanding only becomes possible in living systems, whatever the capability of the synthetic system; on the other hand, there's a danger that we ascribe understanding to systems that we really shouldn't. Does a thermostat understand temperature? This is a difficult question altogether: at what point do we ascribe understanding to animals? They all behave in ways appropriate to their environment, from the single-celled Paramecium upwards, but we don't tend to consider them to understand their environment until we consider much higher animals (like mice or rats).

--Leslie Smith

Gary Marcus wrote:

--
Prof Leslie Smith (Emeritus)
Computing Science & Mathematics, University of Stirling, Stirling FK9 4LA, Scotland, UK
Mobile: +44 7913 219013
Web: http://www.cs.stir.ac.uk/~lss
Blog: http://lestheprof.com

From steve at bu.edu Wed Feb 14 16:34:06 2024
From: steve at bu.edu (Grossberg, Stephen)
Date: Wed, 14 Feb 2024 21:34:06 +0000
Subject: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
In-Reply-To: References: <146508390.1638995.1707854636062@mail.yahoo.com>
Message-ID:

Dear All,
Perhaps some of you might find the following recent article relevant to the discussions in this email thread:
Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult-child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full Best, Steve Grossberg sites.bu.edu/steveg From: Connectionists on behalf of Mitsu Hadeishi Date: Wednesday, February 14, 2024 at 5:07?AM To: travelsummer2006 at yahoo.com Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate Text is generated by the real world, and therefore one can infer structures of the real world simply by the structural relationships within the text itself. Even our human experience of the world is heavily mediated - no one has "direct" experience of "referents", it is all mediate by casual relationships which we assume preserve meaningful structure from the world which allows our experiences and thoughts to be useful. But that is still mediated and indirect as it has to be. The upshot is that it is obvious that LLMs may have a better ability to form conceptual models of time than of space (since they operate more directly on time series of symbols and their concepts of space have to be inferred from far less data) - we have experiments showing this is in fact the case (LLM temporal reasoning is much better than their spatial reasoning). But the fact these concepts are imperfect is not a good argument in favor of the idea they have no "understanding" whatsoever. For instance, an LLM can correctly "interpret" English language descriptions of code transformations in coding problems and refactor code, which involves conceptual mapping far beyond mere parroting of preexisting training data: the space of possible code transforms is exponentially larger than training sets. Even given the fact that LLMs are not perfect at this, if they had zero conceptual understanding one would predict they could only do coding problems that mimicked samples they had ingested during training, not apply generalized principles to generate code even somewhat correctly. Another bit of evidence for some form of understanding is you can combine LLMs with other modalities and tools and they are able to "use" these new modalities correctly. If the abstractions in LLMs had some relationship to what we might call "understanding", even if it is very limited, one would predict that they could be extended and used in this way, adding multimodal capabilities and so on. And that is what we observe. On Tue, Feb 13, 2024 at 21:56 gabriele.scheler at yahoo.com > wrote: Simply put, LLMs represent words by their contexts. They have no referents for the words. That is not understanding. You can follow many tests people have performed to show that LLMs can reproduce, but make errors from a lack of knowing about the referential meaning of pieces of text. They mimic understanding, aka known as statistical parrot. On Tuesday, February 13, 2024 at 07:15:07 PM GMT+1, Weng, Juyang > wrote: Dear Gary, You wrote, "LLMs do not really understand what they are saying". Those LLMs generated text in a natural language, didn't they? Why do you say that LLMs do not understand such text? The truly understandable answer to this question is not as simple as you believe! What you "believe" is not convincing and intuitive to many laymen and media! That is why Jeffery Hinton can simply give you potshots without valid analysis. 
Best regards, -John Weng Brain-Mind Institute On Tue, Feb 13, 2024 at 12:49?AM Gary Marcus > wrote: Geoff Hinton recently asked: Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat. That?s an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.) I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographic diverse, capable of building vaccines, etc. E.g. in interview with AFP, posted at TechExplore I said that I thought the extinction threat 'overblown', https:// techxplore.com/news/2023-06- human-extinction-threat- overblown-ai.html. My actual view is captured here , https://garymarcus.substack. com/p/ai-risk-agi-risk. although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don?t have to be superintelligent to create serious problems. I am not worried, immediately, about ?AGI risk? (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call ?MAI risk??Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed?both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. .. Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. ? We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them. LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors that which to exploit them. But that doesn?t mean they are remotely likely to literally end the species. I hope that clarifies my view. Gary -- Juyang (John) Weng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Edgar.Galvan at mu.ie Wed Feb 14 13:03:35 2024 From: Edgar.Galvan at mu.ie (=?iso-8859-1?Q?Edgar_Galv=E1n?=) Date: Wed, 14 Feb 2024 18:03:35 +0000 Subject: Connectionists: Postdoctoral Researcher Position available at CS, Maynooth University, Ireland Message-ID: Hi everyone, We're currently looking for a Postdoctoral Researcher for a 27-month contract as part of the "The CirculaR Economy Buildings as Material Banks (REBUILD)" project. This role involves designing, implementing, and evaluating a toolkit to simulate the use of multiple building materials, assessing their economic and carbon emission impact. Expertise in optimisation methods to handle conflicting objectives is particularly valuable. 
The successful candidate will have the chance to enhance their research skills, publish high-quality work, receive mentorship in grant funding applications, supervise students, and advance their academic career. For full details, please visit [1], reference number 027401. The deadline for applications is March 3rd. Please feel free to reach out to edgar.galvan at mu.ie if you would like to discuss this opportunity further. Your help in spreading the word to potential candidates would be greatly appreciated. Best Regards, Edgar Galvan [1] https://www.maynoothuniversity.ie/human-resources/vacancies -------------- next part -------------- An HTML attachment was scrubbed... URL: From yashar.deldjoo at poliba.it Wed Feb 14 10:45:22 2024 From: yashar.deldjoo at poliba.it (Yashar Deldjoo) Date: Wed, 14 Feb 2024 15:45:22 +0000 Subject: Connectionists: =?windows-1252?q?3rd_European_Workshop_on_Algorit?= =?windows-1252?q?hmic_Fairness_=28EWAF=9224=29?= In-Reply-To: References: Message-ID: 3rd European Workshop on Algorithmic Fairness (EWAF?24) Where: Mainz, Germany When: July 1 to 3, 2024 Call for Papers: https://2024.ewaf.org/call-for-papers Submissions Deadline: March 15, 2024 Types of submissions: Both short submissions (< 4 pages) and full papers (10-15 pages) are accepted. EWAF welcomes both theoretical work from all disciplines as well as applied research, especially interdisciplinary research on algorithmic fairness in the European context. What's EWAF? EWAF?s goal is to foster the dialogue between researchers working on algorithmic fairness in the context of Europe?s legal and societal framework, especially in light of the EU?s attempts to promote ethical AI. EWAF's relaxed atmosphere makes it a great place to connect with researchers working on similar topics. Connect with us on social media for future updates European Association for Algorithmic Fairness on LinkedIn @EWAFWorkshop on Twitter @EWAFWorkshop on Mastodon Kind Regards, [photo] Dr. Yashar Deldjoo Assistant Professor, SisInf Lab, DEI, Polytechnic University of Bari, Italy Skype: yashar.deldjoo | Address: SisInf Lab, Department of Electrical Engineering and Information Technology, Polytechnic University of Bari, Via Orabona, 4, 70125 Bari, Italy University homepage: http://sisinflab.poliba.it/deldjoo Personal homepage: https://yasdel.github.io/ [https://s3.amazonaws.com/images.wisestamp.com/social_icons/square/linkedin.png] Informativa Privacy - Ai sensi del Regolamento (UE) 2016/679 si precisa che le informazioni contenute in questo messaggio sono riservate e ad uso esclusivo del destinatario. Qualora il messaggio in parola Le fosse pervenuto per errore, La preghiamo di eliminarlo senza copiarlo e di non inoltrarlo a terzi, dandocene gentilmente comunicazione. Grazie. Privacy Information - This message, for the Regulation (UE) 2016/679, may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hocine.cherifi at gmail.com Thu Feb 15 02:48:29 2024 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Thu, 15 Feb 2024 08:48:29 +0100 Subject: Connectionists: =?utf-8?q?CFP_FRCCS_2024_Montpellier=2C_France_Ma?= =?utf-8?b?eSAyOSDigJMgMzEsIDIwMjQ=?= Message-ID: Fourth* F*rench* R*egional* C*onference on* C*omplex* S*ystems May 29 ? 31, 2024 Montpellier, France *FRCCS 2024* Deadline: February 21, 2023 After successfully hosting the conference in Dijon in 2021, Paris in 2022, and Le Havre in 2023, the fourth edition of the French Regional Conference on Complex Systems (FRCCS 2024) will be held in Montpellier, France. This single-track international conference organized by CSS France serves as a platform to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange and promote ideas, and facilitate the cross-fertilization of recent research work, industrial advancements, and original applications. Moreover, the conference emphasizes research topics with a high societal impact, showcasing the significance of complexity science in addressing complex societal challenges. Join us in Montpellier as we collectively strive to understand complexity and its implications for society better. You are cordially invited to submit your contribution until *February 21, 2024.* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Papers* about *original research* (up to 12 pages) ? *Extended Abstract* about *published or unpublished* research (3 to 4 pages). *Keynote Speakers* ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany *Publication* o Contributions will be included in the conference proceedings (With ISBN) o *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Complexity, edited by Hindawi *Submission * ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? *Select the Track: FRCCS2024* *Topics include, but are not limited to: * ? *Foundations of complex systems * - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks * - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. 
- *Neuroscience, Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, narrative complexity
- *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment
- *Infrastructure, planning, and environment* - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences
- *Biological and (bio)medical complexity* - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging
- *Social complexity* - social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation
- *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, and culture
- *Organisms and populations* - Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution
- *Engineering systems and systems of systems* - bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology
- *Complexity in physics and chemistry* - quantum computing, quantum synchronization, quantum chaos, random matrix theory

*GENERAL CHAIRS*
Roberto Interdonato, CIRAD, UMR TETIS, Montpellier
Bruno Pinaud, LaBRI, University of Bordeaux

Join us at COMPLEX NETWORKS 2024
-------------------------
Hocine CHERIFI
University of Burgundy Franche-Comté
Laboratoire Interdisciplinaire Carnot de Bourgogne - ICB UMR 6303 CNRS
Editor in Chief, PLOS Complex Systems
Founding & Advisory Editor, Applied Network Science
Editorial Board member: PLOS One, IEEE Access, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems

From gary.marcus at nyu.edu Wed Feb 14 18:16:42 2024
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Wed, 14 Feb 2024 15:16:42 -0800
Subject: Connectionists: ChatGPT's "understanding" of maps and infographics
Message-ID:

I am having a genuinely hard time comprehending some of the claims recently made in this forum. (Not one of which engaged with any of the specific examples or texts I linked.)

Here's yet another example, a dialog about geography that was just sent to me by entrepreneur Phil Libin. Do we really want to call outputs like these (to two prompts, with three generated responses zoomed in below) understanding?

In what sense do these responses exemplify the word "understanding"? I am genuinely baffled. To me a better word would be "approximations", and poor approximations at that.

Worse, I don't see any AI system on the horizon that could reliably do better, across a broad range of related questions.
If these kinds of outputs are any indication at all, we are still a very long way from reliable general-purpose AI.

Gary

[Image attachments scrubbed from the archive: Screenshot 2024-02-13 at 4.52.58 PM.heic, Screenshot 2024-02-13 at 5.16.54 PM.heic, IMG_6623.heic, 8a5cccf30a46deb38a11ca81f31a3cce.heic, fafd8f5c436b8861e314af7459c6f9f3.heic (image/heic)]

From efipatm at gmail.com Thu Feb 15 04:02:56 2024
From: efipatm at gmail.com (Efi)
Date: Thu, 15 Feb 2024 11:02:56 +0200
Subject: Connectionists: AIDA free semester course by Prof. Nicolo Cesa-Bianchi "Statistical Methods for Machine Learning", 4th March - 4th June 2024
Message-ID:

Dear AI scientist/engineer/student/enthusiast,

the International AI Doctoral Academy (AIDA) is excited to invite you to attend the upcoming semester course by Prof. Nicolo Cesa-Bianchi:

Statistical Methods for Machine Learning

This is a course about the theoretical foundations of machine learning and the analysis of machine learning algorithms. The focus is on understanding the mathematical principles at the basis of machine learning.

Host Institution: Università degli Studi di Milano
Level: Graduate
4th March 2024 - 4th June 2024
Course attendance will be asynchronous, in English: https://cesa-bianchi.di.unimi.it/MSA/index_23-24.html
Join us and explore further: https://www.i-aida.org/course/statistical-methods-for-machine-learning/

Both AIDA and non-AIDA students are encouraged to participate in this course for free (maximal registrant number: 10 students).

If you are an AIDA Student* already, please:
- Step (a): Register in the course by sending an email to the Course Lecturer (nicolo.cesa-bianchi at unimi.it) for your registration. AND
- Step (b): Enroll in the same course in the AIDA course link using the "Enroll on this course" button therein, so that this course enters your AIDA Certificate of Course Attendance.
If you are not an AIDA Student, do only step (a).

*AIDA Students should have been registered in the AIDA system already (they are PhD students or PostDocs that belong to AIDA Members).

The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR, and VISION, supports a world-level AI education and research program. AIDA has regular lecture series, short courses, semester courses, and summer schools with the participation of hundreds of academics, serving a growing community of AI PhD students and researchers.

Best regards
AIDA Secretariat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Laura.Cariola at ed.ac.uk Thu Feb 15 04:52:34 2024 From: Laura.Cariola at ed.ac.uk (Laura Cariola) Date: Thu, 15 Feb 2024 09:52:34 +0000 Subject: Connectionists: Salaried Intern Opportunities for Postgraduate Research Students at the University of Edinburgh Message-ID: The University of Edinburgh's Information Services Group (ISG) invites applications from postgraduate research students who are interested in working as one of ten Digital Research Services Ambassadors. The successful students will support researchers across some of the University's schools by applying their digital research, computing, and data skills in different research fields. The Ambassadors will be assigned to projects/hosts based on their skills and expertise; you could be placed on one project, or work with other students across multiple projects. https://portal.unitemps.com/Search/JobDetails/32800327 With best wishes, Laura Dr. Laura A. Cariola CPsychol Programme Director MSc Mental Health in Children and Young People: Psychological Approaches (Online Learning) | Lecturer in Applied Psychology (ODLP) | The University of Edinburgh | Clinical and Health Psychology | School of Health in Social Science | Old Medical School | Elsie Inglis Quad | Edinburgh EH8 9AG | Laura.Cariola at ed.ac.uk | Website | The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. Is e buidheann carthannais a th' ann an Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336. -------------- next part -------------- An HTML attachment was scrubbed... URL: [Three Outlook signature images were scrubbed: Outlook-oandsypf.png, Outlook-m34o52fy.png, Outlook-mjrqzpi0.png] From iam.palat at gmail.com Thu Feb 15 09:26:58 2024 From: iam.palat at gmail.com (Iam Palatnik) Date: Thu, 15 Feb 2024 11:26:58 -0300 Subject: Connectionists: ChatGPT's "understanding" of maps and infographics In-Reply-To: References: Message-ID: Dear all, yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets itcyxellpi that oyu uderdnoost eht gsaninesmt Copy pasting just the above paragraph onto GPT-4 should show the kind of behavior that makes some researchers say LLMs understand something, in some form. We already use words such as 'intelligence' in AI and 'learning' in ML. This is not to say it's the same as human intelligence/learning. It is to say it's a similar enough behavior that the same word fits, while specifically qualifying the machine word-counterpart as something different (artificial/machine). Can this debate be solved by coining a concept such as 'artificial/machine understanding'? GPT-4 then 'machine understands' the paragraph above.
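For anyone who wants to reproduce this kind of probe, below is a minimal sketch of one way to generate scrambled-instruction text like the paragraph quoted above. The exact scrambling scheme used in that paragraph is not stated in the post, so the per-word letter shuffle and the sample instruction in the sketch are assumptions made purely for illustration.

    import random

    def scramble_words(text: str, seed: int = 0) -> str:
        """Shuffle the letters inside each word while keeping the word order intact."""
        rng = random.Random(seed)
        scrambled = []
        for word in text.split():
            letters = list(word)
            rng.shuffle(letters)          # scramble this word's letters in place
            scrambled.append("".join(letters))
        return " ".join(scrambled)

    if __name__ == "__main__":
        # Sample instruction (an assumption for illustration, not the decoded original).
        instruction = ("you are reading an example paragraph containing an instruction "
                       "but the letters within each word are scrambled")
        print(scramble_words(instruction, seed=42))

Changing the seed gives differently garbled variants of the same instruction, which makes it easy to check whether a model's ability to follow the instruction survives the scrambling.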
It 'machine understands' arbitrary scrambled text better than humans 'human understand' it. Matrix multiplying rotational semantic embeddings of byte pair encoded tokens is part of 'machine understanding' but not of 'human understanding'. At the same time, there are plenty of examples of things we 'human understand' and GPT-4 doesn't 'machine understand', or doesn't understand without tool access and self reflective prompts. As to the map generation example, there are multiple tasks overlaid there. The language component of GPT-4 seems to have 'machine understood' it has to generate an image, and what the contents of the image have to be. It understood what tool it has to call to create the image. The tool generated an infograph style map of the correct country, but the states and landmarks are wrong. The markers are on the wrong cities and some of the drawings are bad. Is it too far fetched to say GPT-4 'machine understood' the assignment (generating a map with markers in the style of infograph), but its image generation component (Dall-E) is bad at detailed accurate geography knowledge? I'm also confused why the linguistic understanding capabilities of GPT-4 are being tested by asking Dall-E 3 to generate images. Aren't these two completely separate models, and GPT-4 just function-calls Dall-E3 for image generation? Isn't this actually a sign GPT-4 did its job by 'machine understanding' what the user wanted, making the correct function call, creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled it because it's not good at generating detailed accurate maps? Cheers, Iam On Thu, Feb 15, 2024 at 5:20?AM Gary Marcus wrote: > I am having a genuinely hard time comprehending some of the claims > recently made in this forum. (Not one of which engaged with any of the > specific examples or texts I linked.) > > Here?s yet another example, a dialog about geography that was just sent to > me by entrepreneur Phil Libin. Do we really want to call outputs like these > (to two prompts, with three generated responses zoomed in below) > understanding? > > In what sense do these responses exemplify the word ?understanding?? > > I am genuinely baffled. To me a better word would be ?approximations?, and > poor approximations at that. > > Worse, I don?t see any AI system on the horizon that could reliably do > better, across a broad range of related questions. If these kinds of > outputs are any indication at all, we are still a very long away from > reliable general-purpose AI. > > Gary > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Thu Feb 15 10:20:26 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 15 Feb 2024 07:20:26 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: Message-ID: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> An HTML attachment was scrubbed... URL: From iam.palat at gmail.com Thu Feb 15 10:56:18 2024 From: iam.palat at gmail.com (Iam Palatnik) Date: Thu, 15 Feb 2024 12:56:18 -0300 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: I understand why using the word 'understanding' might seem too generous when models still have the failure modes mentioned. 
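One way to probe where those failure modes do and do not bite is simply to script the "copy-paste it onto GPT-4" experiment from the earlier message rather than run it through the web interface. The sketch below is a minimal, hedged example: it assumes the OpenAI Python client (v1-style chat completions), an API key in the OPENAI_API_KEY environment variable, and a gpt-4-class model name; none of these details come from the original posts, and any chat-capable model could be substituted.

    # Minimal sketch: send the scrambled paragraph to a chat model and print its reply.
    # Assumes `pip install openai` (v1 client) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # The scrambled paragraph quoted verbatim from the earlier message.
    scrambled_prompt = (
        "yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an "
        "iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay "
        "cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta "
        "oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets "
        "itcyxellpi that oyu uderdnoost eht gsaninesmt"
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whichever chat model is available
        messages=[{"role": "user", "content": scrambled_prompt}],
    )
    print(response.choices[0].message.content)

Whatever one decides to call the resulting behavior, a script like this at least makes it cheap to rerun the probe across models and prompts.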
Some of the failure modes (like the reversal curse ) can be remedied with access to tools, external context, self-reflection prompts, but there are failures that cannot yet be remedied. I just don't know what better word to use in the sentence "GPT-4 can ___ that scrambled text better than I can". 'Understand' just flows very naturally in how we commonly use this word, even if it turns out that what GPT-4 is doing is shallower or less general than what my brain is doing. 'Parse' or 'process' doesn't seem enough because the scrambled text contains an instruction and GPT-4 does follow through with it. What word should we use for this? On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus wrote: > Selectively looking at a single example (which happens to involve images) > and ignoring all the other language-internal failures that I and others > have presented is not a particularly effective way of getting to a general > truth. > > More broadly, you are, in my judgement, mistaking correlation for a deeper > level of understanding. > > Gary > > On Feb 15, 2024, at 07:05, Iam Palatnik wrote: > > ? > Dear all, > > yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an > iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay > cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta > oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets > itcyxellpi that oyu uderdnoost eht gsaninesmt > > Copy pasting just the above paragraph onto GPT-4 should show the kind of > behavior that makes some researchers say LLMs understand something, in some > form. > We already use words such as 'intelligence' in AI and 'learning' in ML. > This is not to say it's the same as human intelligence/learning. It is to > say it's a similar enough behavior that the same word fits, while > specifically qualifying the machine word-counterpart as something different > (artificial/machine). > > Can this debate be solved by coining a concept such as 'artificial/machine > understanding'? GPT-4 then 'machine understands' the paragraph above. It > 'machine understands' arbitrary scrambled text better than humans 'human > understand' it. Matrix multiplying rotational semantic embeddings of byte > pair encoded tokens is part of 'machine understanding' but not of 'human > understanding'. At the same time, there are plenty of examples of things we > 'human understand' and GPT-4 doesn't 'machine understand', or doesn't > understand without tool access and self reflective prompts. > > As to the map generation example, there are multiple tasks overlaid there. > The language component of GPT-4 seems to have 'machine understood' it has > to generate an image, and what the contents of the image have to be. It > understood what tool it has to call to create the image. The tool generated > an infograph style map of the correct country, but the states and landmarks > are wrong. The markers are on the wrong cities and some of the drawings are > bad. Is it too far fetched to say GPT-4 'machine understood' the assignment > (generating a map with markers in the style of infograph), but its image > generation component (Dall-E) is bad at detailed accurate geography > knowledge? > > I'm also confused why the linguistic understanding capabilities of GPT-4 > are being tested by asking Dall-E 3 to generate images. Aren't these two > completely separate models, and GPT-4 just function-calls Dall-E3 for image > generation? 
Isn't this actually a sign GPT-4 did its job by 'machine > understanding' what the user wanted, making the correct function call, > creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled > it because it's not good at generating detailed accurate maps? > > Cheers, > > Iam > > On Thu, Feb 15, 2024 at 5:20 AM Gary Marcus wrote: > >> I am having a genuinely hard time comprehending some of the claims >> recently made in this forum. (Not one of which engaged with any of the >> specific examples or texts I linked.) >> >> Here's yet another example, a dialog about geography that was just sent >> to me by entrepreneur Phil Libin. Do we really want to call outputs like >> these (to two prompts, with three generated responses zoomed in below) >> understanding? >> >> In what sense do these responses exemplify the word "understanding"? >> >> I am genuinely baffled. To me a better word would be "approximations", >> and poor approximations at that. >> >> Worse, I don't see any AI system on the horizon that could reliably do >> better, across a broad range of related questions. If these kinds of >> outputs are any indication at all, we are still a very long way from >> reliable general-purpose AI. >> >> Gary >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Thu Feb 15 11:05:27 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 15 Feb 2024 08:05:27 -0800 Subject: Connectionists: ChatGPT's "understanding" of maps and infographics In-Reply-To: References: Message-ID: <73CB7135-728A-434C-927D-3379DF2F2EC1@nyu.edu> An HTML attachment was scrubbed... URL: From dkirsh at lsu.edu Thu Feb 15 11:17:21 2024 From: dkirsh at lsu.edu (David H Kirshner) Date: Thu, 15 Feb 2024 16:17:21 +0000 Subject: Connectionists: ChatGPT's "understanding" of maps and infographics In-Reply-To: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: Two questions often get intermixed in such discussions: - Are neural nets intelligent? - Do we relate to neural nets as intelligent? This confusion is embedded in the history of AI, as the Turing Test uses the latter question to answer the former. Back in Alan Turing's time, when AI meant symbolic AI, there might have been some justification for interrelating the questions. For a symbolic processor to act intelligently, someone would have had to figure out in conceptual terms what intelligent interaction is. The symbolic AI program could then be said to have intelligence encoded into it. Clearly, that is not true for connectionist simulation of intelligence. The second question seems to me the more interesting of the two because it reveals something to us about our nature as social beings. It amazed me back in the days of Eliza that people would spend hours interacting with the very rudimentary language response interface; I suspected then that the Turing Test was way too easy. But whether we take an agent to be intelligent/human is not purely up to us. Our own neural networks have evolved to make discriminations between human and non-human in a very simplified stimulus environment. Of course, we can tell ourselves not to ascribe human qualities to machines, but this is after the fact. Our cognitive system already has responded to the AI as human. The question is what happens from here.
Do we evolve more discriminatory response patterns based on the new and evolving stimulus environment, or does AI remain human-by-default? This is a chapter of human social history that has not yet been written. David Kirshner From: Connectionists On Behalf Of Gary Marcus Sent: Thursday, February 15, 2024 9:20 AM To: Iam Palatnik Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: ChatGPT?s ?understanding? of maps and infographics You don't often get email from gary.marcus at nyu.edu. Learn why this is important Selectively looking at a single example (which happens to involve images) and ignoring all the other language-internal failures that I and others have presented is not a particularly effective way of getting to a general truth. More broadly, you are, in my judgement, mistaking correlation for a deeper level of understanding. Gary On Feb 15, 2024, at 07:05, Iam Palatnik > wrote: ? Dear all, yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets itcyxellpi that oyu uderdnoost eht gsaninesmt Copy pasting just the above paragraph onto GPT-4 should show the kind of behavior that makes some researchers say LLMs understand something, in some form. We already use words such as 'intelligence' in AI and 'learning' in ML. This is not to say it's the same as human intelligence/learning. It is to say it's a similar enough behavior that the same word fits, while specifically qualifying the machine word-counterpart as something different (artificial/machine). Can this debate be solved by coining a concept such as 'artificial/machine understanding'? GPT-4 then 'machine understands' the paragraph above. It 'machine understands' arbitrary scrambled text better than humans 'human understand' it. Matrix multiplying rotational semantic embeddings of byte pair encoded tokens is part of 'machine understanding' but not of 'human understanding'. At the same time, there are plenty of examples of things we 'human understand' and GPT-4 doesn't 'machine understand', or doesn't understand without tool access and self reflective prompts. As to the map generation example, there are multiple tasks overlaid there. The language component of GPT-4 seems to have 'machine understood' it has to generate an image, and what the contents of the image have to be. It understood what tool it has to call to create the image. The tool generated an infograph style map of the correct country, but the states and landmarks are wrong. The markers are on the wrong cities and some of the drawings are bad. Is it too far fetched to say GPT-4 'machine understood' the assignment (generating a map with markers in the style of infograph), but its image generation component (Dall-E) is bad at detailed accurate geography knowledge? I'm also confused why the linguistic understanding capabilities of GPT-4 are being tested by asking Dall-E 3 to generate images. Aren't these two completely separate models, and GPT-4 just function-calls Dall-E3 for image generation? Isn't this actually a sign GPT-4 did its job by 'machine understanding' what the user wanted, making the correct function call, creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled it because it's not good at generating detailed accurate maps? 
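The division of labor being described (a language model that chooses a tool and composes a prompt for a separate image model) is easiest to see in code. The sketch below uses the OpenAI Python client's tool-calling interface as an assumed stand-in; the tool name generate_image and its schema are invented for illustration and are not a description of how ChatGPT actually connects to DALL-E 3 internally.

    # Minimal sketch of the tool-calling pattern: the chat model renders nothing itself;
    # it only decides to call an image tool and writes the prompt that tool will receive.
    # Assumes the OpenAI Python client (v1); the tool name and schema are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "generate_image",  # hypothetical tool standing in for an image model
            "description": "Render an image from a text prompt.",
            "parameters": {
                "type": "object",
                "properties": {"prompt": {"type": "string"}},
                "required": ["prompt"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{"role": "user", "content":
                   "Make an infographic-style map of Italy with a marker on Genova."}],
        tools=tools,
    )

    message = response.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        print(call.function.name)       # which tool the language model chose
        print(call.function.arguments)  # the JSON prompt it composed for that tool

On this reading, errors in the rendered map originate downstream, in whatever executes the tool call, which is exactly the distinction drawn in the paragraph above.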
Cheers, Iam On Thu, Feb 15, 2024 at 5:20?AM Gary Marcus > wrote: I am having a genuinely hard time comprehending some of the claims recently made in this forum. (Not one of which engaged with any of the specific examples or texts I linked.) Here?s yet another example, a dialog about geography that was just sent to me by entrepreneur Phil Libin. Do we really want to call outputs like these (to two prompts, with three generated responses zoomed in below) understanding? In what sense do these responses exemplify the word ?understanding?? I am genuinely baffled. To me a better word would be ?approximations?, and poor approximations at that. Worse, I don?t see any AI system on the horizon that could reliably do better, across a broad range of related questions. If these kinds of outputs are any indication at all, we are still a very long away from reliable general-purpose AI. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtanveer at iiti.ac.in Fri Feb 16 00:16:02 2024 From: mtanveer at iiti.ac.in (M Tanveer) Date: Fri, 16 Feb 2024 10:46:02 +0530 Subject: Connectionists: IEEE TFS SI - Extended Last Date - Feb 28, 2024 In-Reply-To: References: Message-ID: Special Issue: Advances in fuzzy deep learning algorithms for biomedical data Aim and Scope: Deep learning is one of the most important revolutions in the field of artificial intelligence over the last decade. Approaches under this family of models have achieved great success in different tasks such as computer vision, image processing, biomedical analysis and related fields. Researchers in deep and shallow machine learning including those working in computer vision, image processing, biomedical analysis and other fields comprising multi-dimensional data. Fuzzy set theory is a branch of artificial intelligence capable of analysing complex biomedical data, which has been one of the state of the art methodologies, leading to the enhanced performance in various medical applications to prevent, diagnose, and treat diseases. Compared to the traditional data analytics and decision support techniques, fuzzy sets and their extensions are effective white-box tools for representing and explaining the complexity and vagueness of the information, especially to reduce uncertainty. However, the relatively low learning efficiency and performance also hinder their applications in the medical domain. Therefore, in the last few years, integrating deep learning and fuzzy systems has been an emerging and promising topic with applications in healthcare. When tied with experienced clinicians, researchers in fuzzy deep learning can play a significant role in understanding and working on complex medical data, which ultimately leads to improved patient care. Developing novel fuzzy deep learning algorithms suited to deal with medical data still remains a challenge. Healthcare and biomedical sciences have become data-intensive fields, with a strong need for sophisticated data mining methods to extract knowledge from the available information. Biomedical data pose several challenges in data analysis, including high dimensionality, class imbalance and scarcity of annotated data featuring enough quality for modelling purposes. Although current research in this field has shown promising results, several research issues need to be explored, including novel feature selection methods to improve predictive performance along with interpretation, and to explore large scale data in biomedical sciences. 
This special issue aims to bring together the current research progress (from both academia and industry) on fuzzy deep learning algorithms to address the challenges of biomedical complex data. Special attention will be devoted to novel contributions related to feature selection, class imbalance, data fusion, explainability and biomedical use cases comprising real-world data. This special issue aims at providing an opportunity for collecting some advanced work in the fuzzy deep learning, including compilation of the latest research, development, and practical experiences as well as up-to-date issues, reviewing accomplishments, assessing future directions and challenges in this field. It will bring both researchers from academia and practitioners from industry to discuss the latest progress, new research topics, and potential application domains. Topics: The topics relevant to the special issue include (but are not limited to): - Fuzzy deep learning for computer aided detection and diagnosis - Fuzzy deep learning for neuroimaging - Fuzzy deep learning for radiographic data - Fuzzy deep learning for biomedical image classification and ROI localization - Fuzzy deep learning for genomics - Explainable fuzzy deep learning for prediction of healthcare variations - Fuzzy deep learning for multimodality neuroimaging data fusion systems - Fusion of fuzzy deep learning and big data for future challenges - Explainability of fuzzy deep learning in all its forms (counterfactuals, local explanations, relevance attribution, etc) - Advanced fuzzy deep learning techniques for the risk prediction of COVID-19 Submission guidelines: All authors should read ?Information for Authors? before submitting a manuscript http://cis.ieee.org/ieeetransactions-on-fuzzy-systems.html Submissions should be through the IEEE TFS journal website http://mc.manuscriptcentral.com/tfs-ieee. It is essential that your manuscript is identified as a Special Issue contribution. Ensure you choose ?Special Issue? when submitting. A cover letter must be included which includes the title ?Advances in fuzzy deep learning algorithms for biomedical data?. Important Dates: Submission Deadline: Feb 28, 2024 Notification of the first round review: Usually 2-3 months from the date of submission Guest Editors: M. Tanveer, Indian Institute of Technology Indore, India (Lead) Email: mtanveer at iiti.ac.in, Homepage: http://people.iiti.ac.in/~mtanveer/ Google Scholar Citations: 5438 with h-index 38 Chin-Teng Lin, University of Technology Sydney, Australia Email: Chin-Teng.Lin at uts.edu.au, Homepage: https://www.uts.edu.au/staff/chin-teng.lin Google Scholar Citations: 36774 with h-index 93 Yu-Dong Zhang, University of Leicester, UK Email: yudongzhang at ieee.org Homepage: https://le.ac.uk/people/yudong-zhang Google Scholar Citations: 31200 with h-index 99 ---------------------------------------------------------- Dr. M. Tanveer (Founding Chapter Chair, IEEE CIS Chapter - MP Section) SERB Ramanujan Fellow and INSA Associate Fellow IEEE CIS Distinguished Lecturer (2024-2026) Elected Board of Governors - APNNS (2023-2024) Associate Professor of Mathematics Indian Institute of Technology Indore Email: mtanveer at iiti.ac.in Mobile: +91-9413259268 Homepage: http://iiti.ac.in/people/~mtanveer/ Associate Editor: IEEE Transactions on Neural Networks & Learning Systems (IF: 10.4). Action Editor: Neural Networks, Elsevier (IF: 7.8). Associate Editor: Pattern Recognition, Elsevier (IF: 8.0). Editorial Board: Applied Soft Computing, Elsevier (IF: 8.7). 
Board of Editors: Engineering Applications of AI, Elsevier (IF: 8.0). Associate Editor: Neurocomputing, Elsevier (IF: 6.0). Associate Editor: Cognitive Computation, Springer (IF: 5.4). Associate Editor: International Journal of Machine Learning & Cybernetics (IF: 5.6). Lead Guest Editor: IEEE Transactions on Fuzzy Systems (IF: 11.9) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.miasnikof at utoronto.ca Thu Feb 15 22:36:25 2024 From: p.miasnikof at utoronto.ca (P. Miasnikof) Date: Thu, 15 Feb 2024 22:36:25 -0500 Subject: Connectionists: Call for abstracts - CoNBaF2024: Workshop on Complex Networks in Banking and Finance Message-ID: <6b0f92fe-d04c-4167-b82e-9bc1a810baf5@utoronto.ca> Hi, *CoNBaF2024:* /*Workshop on Complex Networks in Banking and Finance* /is soliciting abstracts. The event will be held in Toronto Ontario, Canada, June 24-27 2024. Deadline to submit an abstract is March 15. The abstract must be 1 page at most. For more on submissions, please visit: https://conbaf2024.unicas.it/call-for-abstracts For more general details, please visit: http://www.fields.utoronto.ca/activities/23-24/complex-networks We hope you'll join us in Toronto! Pierre Miasnikof, Conference Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From terry at snl.salk.edu Thu Feb 15 11:37:12 2024 From: terry at snl.salk.edu (Terry Sejnowski) Date: Thu, 15 Feb 2024 08:37:12 -0800 Subject: Connectionists: =?utf-8?q?Fwd=3A__Statistics_versus_=E2=80=9CUnde?= =?utf-8?q?rstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: GPT-4 is a moving target that may eventually replace Gary. Terry ------- The request you've made is similar to one I've already fulfilled. The image generated earlier depicts an empty room without any elephant present, adhering to your specific instructions. If you need any further adjustments or have another request, feel free to let me know! ------ User Write a critique of ChatGPT in the style of Gary Marcus ChatGPT Writing a critique of ChatGPT in the style of Gary Marcus, a scientist known for his clear-eyed scrutiny of artificial intelligence's capabilities and limitations, presents an intriguing challenge. Marcus, with his deep understanding of both the potential and pitfalls of AI, often emphasizes the importance of grounding AI advancements in robust, generalized learning principles, rather than overreliance on data and statistical patterns. *Critique of ChatGPT in the Style of Gary Marcus:* ChatGPT, while a marvel of contemporary AI technology, epitomizes the quintessential limitations that besiege deep learning models. At its core, ChatGPT is a linguistic colossus, built upon the edifice of the transformer architecture, yet it is fundamentally constrained by the inherent limitations of its training methodology and underlying principles. The brilliance of generating coherent, contextually appropriate responses is overshadowed by its occasional lapses into confabulation, a glaring testament to its lack of true understanding. Gary Marcus would likely point out that ChatGPT, for all its linguistic dexterity, lacks the nuanced grasp of the world that humans possess. It can mimic the form of human knowledge and discourse, but it doesn't truly comprehend the content in the way a human does. This distinction is crucial; it is the boundary between a sophisticated parrot and an entity capable of genuine thought and understanding. 
ChatGPT's responses, no matter how convincing, are ultimately derived from patterns in data it has been fed, not from an intrinsic understanding of those patterns. Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of data as a substitute for a more nuanced, rule-based understanding of language and the world. This data-driven approach can lead to impressive performances in many scenarios but falls short when faced with novel situations or when required to apply common sense reasoning that hasn't been explicitly encoded in its training data. The model's occasional incoherence or its propensity to generate plausible but factually incorrect responses are symptomatic of this deeper issue. Another point of critique would likely focus on the opacity of ChatGPT's decision-making process. Despite recent advances in making AI models more interpretable, understanding how ChatGPT arrives at a particular response remains a challenge. This opacity complicates efforts to debug or improve the model, especially when it produces erroneous or nonsensical outputs. For AI to truly advance, Marcus might argue, it must not only perform well but do so in a manner that is transparent and understandable to human overseers. In the spirit of Marcus's advocacy for a hybrid approach to AI development, one could argue that integrating ChatGPT's statistical prowess with more structured, rule-based systems could ameliorate some of its shortcomings. By grounding the model's responses in a framework of logical reasoning and factual correctness, one might achieve a more reliable, genuinely intelligent system that better approximates human cognition. In conclusion, while ChatGPT represents a significant achievement in natural language processing, its limitations underscore the need for a more comprehensive approach to AI. An approach that incorporates the strengths of both deep learning and symbolic AI, coupled with a commitment to transparency and understandability, may pave the way for the development of AI systems that are not only more capable but also more aligned with the nuanced complexities of human intelligence and reasoning. -------- Forwarded Message -------- Subject: Connectionists: Statistics versus ?Understanding? in Generative AI. Date: Tue, 13 Feb 2024 09:54:48 -0800 From: Gary Marcus To: Weng, Juyang CC: connectionists at mailman.srv.cs.cmu.edu ... A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. Gary Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3ttu1d7S0cx4Qvpk.png Type: image/png Size: 569541 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: uGPqolaZPAByhZbS.png Type: image/png Size: 45656 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: a0ET6JyMvGOjno0L.png Type: image/png Size: 250175 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab5345a6-2f04-463d-a53b-789115a1fdb7_851x1024.jpeg Type: image/webp Size: 33600 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_5914.jpeg Type: image/jpeg Size: 1061549 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_3818.jpeg Type: image/jpeg Size: 130885 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_3811.jpeg Type: image/jpeg Size: 100991 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From maanakg at gmail.com Thu Feb 15 22:54:21 2024 From: maanakg at gmail.com (Maanak Gupta) Date: Thu, 15 Feb 2024 21:54:21 -0600 Subject: Connectionists: Call for Papers: ACM Workshop on Secure and Trustworthy Cyber-Physical Systems In-Reply-To: References: Message-ID: > > > Dear Colleagues, > > ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS) > aims to represent a forum for researchers and practitioners from industry > and academia interested in various areas of CPS security. SaT-CPS seeks > novel submissions describing practical and theoretical solutions for cyber > security challenges in CPS. Submissions can be from different application > domains in CPS. The workshop will be held in Porto, Portugal (or virtually) > on June 21, 2024 in conjunction with the 14th ACM Conference on Data and > Application Security and Privacy (CODASPY 2024). > > *** Paper submission deadline: February 25, 2024 *** > *** Website: https://sites.google.com/view/sat-cps-2024/ *** > > SaT-CPS aims to represent a forum for researchers and practitioners from > industry and academia interested in various areas of CPS security. SaT-CPS > seeks novel submissions describing practical and theoretical solutions for > cyber security challenges in CPS. Submissions can be from different > application domains in CPS. 
Example topics of interest are given below, but > are not limited to: > - Secure CPS architectures > - Authentication mechanisms for CPS > - Access control for CPS > - Key management in CPS > - Attack detection for CPS > - Threat modeling for CPS > - Forensics for CPS > - Intrusion and anomaly detection for CPS > - Trusted-computing in CPS > - Energy-efficient and secure CPS > - Availability, recovery, and auditing for CPS > - Distributed secure solutions for CPS > - Metrics and risk assessment approaches > - Privacy and trust > - Blockchain for CPS security > - Data security and privacy for CPS > - Digital twins for CPS > - Wireless sensor network security > - CPS/IoT malware analysis > - CPS/IoT firmware analysis > - Economics of security and privacy > - Securing CPS in medical devices/systems > - Securing CPS in civil engineering systems/devices > - Physical layer security for CPS > - Security on heterogeneous CPS > - Securing CPS in automotive systems > - Securing CPS in aerospace systems > - Usability security and privacy of CPS > - Secure protocol design in CPS > - Vulnerability analysis of CPS > - Anonymization in CPS > - Embedded systems security > - Formal security methods in CPS > - Industrial control system security > - Securing Internet-of-Things > - Securing smart agriculture and related domains > > The workshop is planned for one day, June 21, 2024, on the last day of the > conference. > > Instructions for Paper Authors > All submissions must describe original research, not published nor > currently under review for another workshop, conference, or journal. All > papers must be submitted electronically via the Easychair system: > https://easychair.org/conferences/?conf=acmsatcps2024 > > > Full-length papers > Papers must be at most 10 pages in length in double-column ACM format (as > specified at https://www.acm.org/publications/proceedings-template). > Submission implies the willingness of at least one author to attend the > workshop and present the paper. Accepted papers will be included in the ACM > Digital Library. The presenter must register for the workshop before the > deadline for author registration. > > Position papers and Work-in-progress papers > We also invite short position papers and work-in-progress papers. Such > papers can be of length up to 6 pages in double-column ACM format (as > specified at https://www.acm.org/publications/proceedings-template), and > must clearly state "Position Paper" or "Work in progress," as the case may > be in the title section of the paper. These papers will be reviewed and > accepted papers will be published in the conference proceedings. > > Important Dates > Due date for full workshop submissions: February 25, 2024 > Notification of acceptance to authors: March 25, 2024 > Camera-ready of accepted papers: April 1, 2024 > Workshop day: June 21, 2024 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Fri Feb 16 07:11:08 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 16 Feb 2024 04:11:08 -0800 Subject: Connectionists: =?utf-8?q?Fwd=3A__Statistics_versus_=E2=80=9CUnde?= =?utf-8?q?rstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: <6A812E92-0C10-49E7-A405-F1A3443E4441@nyu.edu> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 3ttu1d7S0cx4Qvpk.png Type: image/png Size: 67664 bytes Desc: not available URL: [Further scrubbed attachments omitted: uGPqolaZPAByhZbS.png, a0ET6JyMvGOjno0L.png, a Substack-hosted JPEG, IMG_5914.jpeg, IMG_3818.jpeg, IMG_3811.jpeg, and several scrubbed HTML parts.] From jost at mis.mpg.de Fri Feb 16 16:43:04 2024 From: jost at mis.mpg.de (J. Jost) Date: Fri, 16 Feb 2024 22:43:04 +0100 Subject: Connectionists: ChatGPT's "understanding" of maps and infographics In-Reply-To: <73CB7135-728A-434C-927D-3379DF2F2EC1@nyu.edu> References: <73CB7135-728A-434C-927D-3379DF2F2EC1@nyu.edu> Message-ID: Hi Gary, why don't you give a precise definition of "understanding". You write: "Understanding, for me, is about converting conversations and visual inputs and so on into sound, reliable, interrogable internal cognitive models." but this contains a couple of terms whose meaning is not completely clear. There is only hope for progress in this debate if everybody defines his or her terms clearly and precisely. And when people attack you, they should first also clearly state the definitions they are employing. And if we all agree on some definition, then we might want to discuss to which extent humans, like the average college student, satisfy the corresponding criteria. Juergen (J.)
On Thu, Feb 15, 2024 at 5:44?PM Gary Marcus wrote: > For your fill-in-the-blanks ["GPT-4 can ___ that scrambled text better > than I can"] I would urge ?respond to? > > That is nice, nonjudgemental, non-anthropomorphic language to describe a > process that we thus far have limited insight into, and that seems to > respond very differently from what we do. > > With respect to your assertion that various problems ?can be remedied?, I > first warned of hallucinations errors in 2001 in my book The Algebraic > Mind; the problem has not gone away in over 20 years. I also addressed > something quite similar to the reversal curse, and anticipated a number of > other issues in semantics and compositionality, also still unsolved. For > many problems the phrase ?Can be remedied? remains a promissory note. > Techniques like RAG hold some promise but are very far from perfect. > > Yes, I am sure that AI that is not subject to hallucination and the > reversal curse and so can someday be built; I am also sure that some form > of AI that can reliably produce accounts of its internal processing can be > built. But I am not convinced that *LLMs, *which happen to be popular > right now, are the right foundation for such things. > > They remain opaque, unreliable, and poorly grounded in facts. > Understanding, for me, is about converting conversations and visual inputs > and so on into sound, reliable, interrogable internal cognitive models. I > am with LeCun in finding what passes for cognitive models in current > systems to be lacking. > > Gary > > > > On Feb 15, 2024, at 07:56, Iam Palatnik wrote: > > ? > I understand why using the word 'understanding' might seem too generous > when models still have the failure modes mentioned. Some of the failure > modes (like the reversal curse > ) > can be remedied with access to tools, external context, self-reflection > prompts, but there are failures that cannot yet be remedied. > > I just don't know what better word to use in the sentence "GPT-4 can ___ > that scrambled text better than I can". 'Understand' just flows very > naturally in how we commonly use this word, even if it turns out that what > GPT-4 is doing is shallower or less general than what my brain is doing. > 'Parse' or 'process' doesn't seem enough because the scrambled text > contains an instruction and GPT-4 does follow through with it. What word > should we use for this? > > > On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus wrote: > >> Selectively looking at a single example (which happens to involve images) >> and ignoring all the other language-internal failures that I and others >> have presented is not a particularly effective way of getting to a general >> truth. >> >> More broadly, you are, in my judgement, mistaking correlation for a >> deeper level of understanding. >> >> Gary >> >> On Feb 15, 2024, at 07:05, Iam Palatnik wrote: >> >> ? >> Dear all, >> >> yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an >> iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay >> cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta >> oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets >> itcyxellpi that oyu uderdnoost eht gsaninesmt >> >> Copy pasting just the above paragraph onto GPT-4 should show the kind of >> behavior that makes some researchers say LLMs understand something, in some >> form. >> We already use words such as 'intelligence' in AI and 'learning' in ML. 
>> This is not to say it's the same as human intelligence/learning. It is to >> say it's a similar enough behavior that the same word fits, while >> specifically qualifying the machine word-counterpart as something different >> (artificial/machine). >> >> Can this debate be solved by coining a concept such as >> 'artificial/machine understanding'? GPT-4 then 'machine understands' the >> paragraph above. It 'machine understands' arbitrary scrambled text better >> than humans 'human understand' it. Matrix multiplying rotational semantic >> embeddings of byte pair encoded tokens is part of 'machine understanding' >> but not of 'human understanding'. At the same time, there are plenty of >> examples of things we 'human understand' and GPT-4 doesn't 'machine >> understand', or doesn't understand without tool access and self reflective >> prompts. >> >> As to the map generation example, there are multiple tasks overlaid >> there. The language component of GPT-4 seems to have 'machine understood' >> it has to generate an image, and what the contents of the image have to be. >> It understood what tool it has to call to create the image. The tool >> generated an infograph style map of the correct country, but the states and >> landmarks are wrong. The markers are on the wrong cities and some of the >> drawings are bad. Is it too far fetched to say GPT-4 'machine understood' >> the assignment (generating a map with markers in the style of infograph), >> but its image generation component (Dall-E) is bad at detailed accurate >> geography knowledge? >> >> I'm also confused why the linguistic understanding capabilities of GPT-4 >> are being tested by asking Dall-E 3 to generate images. Aren't these two >> completely separate models, and GPT-4 just function-calls Dall-E3 for image >> generation? Isn't this actually a sign GPT-4 did its job by 'machine >> understanding' what the user wanted, making the correct function call, >> creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled >> it because it's not good at generating detailed accurate maps? >> >> Cheers, >> >> Iam >> >> On Thu, Feb 15, 2024 at 5:20?AM Gary Marcus wrote: >> >>> I am having a genuinely hard time comprehending some of the claims >>> recently made in this forum. (Not one of which engaged with any of the >>> specific examples or texts I linked.) >>> >>> Here?s yet another example, a dialog about geography that was just sent >>> to me by entrepreneur Phil Libin. Do we really want to call outputs like >>> these (to two prompts, with three generated responses zoomed in below) >>> understanding? >>> >>> In what sense do these responses exemplify the word ?understanding?? >>> >>> I am genuinely baffled. To me a better word would be ?approximations?, >>> and poor approximations at that. >>> >>> Worse, I don?t see any AI system on the horizon that could reliably do >>> better, across a broad range of related questions. If these kinds of >>> outputs are any indication at all, we are still a very long away from >>> reliable general-purpose AI. >>> >>> Gary >>> >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Fri Feb 16 11:16:03 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 16 Feb 2024 08:16:03 -0800 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... 
URL: From fellous at arizona.edu Fri Feb 16 11:09:54 2024 From: fellous at arizona.edu (Fellous, Jean-Marc - (fellous)) Date: Fri, 16 Feb 2024 16:09:54 +0000 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <6A812E92-0C10-49E7-A405-F1A3443E4441@nyu.edu> References: <6A812E92-0C10-49E7-A405-F1A3443E4441@nyu.edu> Message-ID: Is it accurate though? Some of us sure could us a concise summary of everyone's position/opinion! Thanks, Jean-Marc ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Friday, February 16, 2024 5:11 AM To: Terry Sejnowski Cc: connectionists at cs.cmu.edu Subject: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email Always with the childish ad hominem. This field is something else. On Feb 15, 2024, at 11:09?PM, Terry Sejnowski wrote: ? GPT-4 is a moving target that may eventually replace Gary. Terry ------- [cid:0457c8e1-debf-4343-b3b7-6b0ecfe7c3da at namprd19.prod.outlook.com] [cid:162d7610-f7c9-451c-a9a6-5e7cb4028673 at namprd19.prod.outlook.com] The request you've made is similar to one I've already fulfilled. The image generated earlier depicts an empty room without any elephant present, adhering to your specific instructions. If you need any further adjustments or have another request, feel free to let me know! ------ [User] Write a critique of ChatGPT in the style of Gary Marcus ChatGPT Writing a critique of ChatGPT in the style of Gary Marcus, a scientist known for his clear-eyed scrutiny of artificial intelligence's capabilities and limitations, presents an intriguing challenge. Marcus, with his deep understanding of both the potential and pitfalls of AI, often emphasizes the importance of grounding AI advancements in robust, generalized learning principles, rather than overreliance on data and statistical patterns. Critique of ChatGPT in the Style of Gary Marcus: ChatGPT, while a marvel of contemporary AI technology, epitomizes the quintessential limitations that besiege deep learning models. At its core, ChatGPT is a linguistic colossus, built upon the edifice of the transformer architecture, yet it is fundamentally constrained by the inherent limitations of its training methodology and underlying principles. The brilliance of generating coherent, contextually appropriate responses is overshadowed by its occasional lapses into confabulation, a glaring testament to its lack of true understanding. Gary Marcus would likely point out that ChatGPT, for all its linguistic dexterity, lacks the nuanced grasp of the world that humans possess. It can mimic the form of human knowledge and discourse, but it doesn't truly comprehend the content in the way a human does. This distinction is crucial; it is the boundary between a sophisticated parrot and an entity capable of genuine thought and understanding. ChatGPT's responses, no matter how convincing, are ultimately derived from patterns in data it has been fed, not from an intrinsic understanding of those patterns. Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of data as a substitute for a more nuanced, rule-based understanding of language and the world. This data-driven approach can lead to impressive performances in many scenarios but falls short when faced with novel situations or when required to apply common sense reasoning that hasn't been explicitly encoded in its training data. 
The model's occasional incoherence or its propensity to generate plausible but factually incorrect responses are symptomatic of this deeper issue. Another point of critique would likely focus on the opacity of ChatGPT's decision-making process. Despite recent advances in making AI models more interpretable, understanding how ChatGPT arrives at a particular response remains a challenge. This opacity complicates efforts to debug or improve the model, especially when it produces erroneous or nonsensical outputs. For AI to truly advance, Marcus might argue, it must not only perform well but do so in a manner that is transparent and understandable to human overseers. In the spirit of Marcus's advocacy for a hybrid approach to AI development, one could argue that integrating ChatGPT's statistical prowess with more structured, rule-based systems could ameliorate some of its shortcomings. By grounding the model's responses in a framework of logical reasoning and factual correctness, one might achieve a more reliable, genuinely intelligent system that better approximates human cognition. In conclusion, while ChatGPT represents a significant achievement in natural language processing, its limitations underscore the need for a more comprehensive approach to AI. An approach that incorporates the strengths of both deep learning and symbolic AI, coupled with a commitment to transparency and understandability, may pave the way for the development of AI systems that are not only more capable but also more aligned with the nuanced complexities of human intelligence and reasoning. -------- Forwarded Message -------- Subject: Connectionists: Statistics versus ?Understanding? in Generative AI. Date: Tue, 13 Feb 2024 09:54:48 -0800 From: Gary Marcus To: Weng, Juyang CC: connectionists at mailman.srv.cs.cmu.edu ... A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. Gary [cid:0ad635fd-2313-4c5b-a3a9-f6269f45f8b9 at namprd19.prod.outlook.com] Gary [cid:8d75ad4a-90a5-4fde-9024-f47268d3bdac at namprd19.prod.outlook.com] [cid:9fd611e9-abe6-45c1-9096-46d02dd0b19c at namprd19.prod.outlook.com] [cid:f61a0e86-d924-4a35-af96-6b7445bf1fb9 at namprd19.prod.outlook.com] [cid:5662ce69-117e-4093-8502-71edc3fed91c at namprd19.prod.outlook.com] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3ttu1d7S0cx4Qvpk.png Type: image/png Size: 67664 bytes Desc: 3ttu1d7S0cx4Qvpk.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: uGPqolaZPAByhZbS.png Type: image/png Size: 45656 bytes Desc: uGPqolaZPAByhZbS.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: a0ET6JyMvGOjno0L.png Type: image/png Size: 250175 bytes Desc: a0ET6JyMvGOjno0L.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: https__substack-post-media.s3.amazonaws.com_public%2Fimages_ab5345a6-2f04-463d-a53b-789115a1fdb7_851x1024.jpeg Type: image/jpeg Size: 33600 bytes Desc: https__substack-post-media.s3.amazonaws.com_public%2Fimages_ab5345a6-2f04-463d-a53b-789115a1fdb7_851x1024.jpeg URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: IMG_5914.jpeg Type: image/jpeg Size: 34973 bytes Desc: IMG_5914.jpeg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_3818.jpeg Type: image/jpeg Size: 130885 bytes Desc: IMG_3818.jpeg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_3811.jpeg Type: image/jpeg Size: 100991 bytes Desc: IMG_3811.jpeg URL: From gary.marcus at nyu.edu Fri Feb 16 12:14:53 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 16 Feb 2024 09:14:53 -0800 Subject: Connectionists: =?utf-8?q?Fwd=3A_Statistics_versus_=E2=80=9CUnder?= =?utf-8?q?standing=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: ? Too much negativity and vitriol around this IMHO? from a guy who opens his missive with an accusation of goalpost shifting that is supported by not a single quotation. The level of discourse here leaves a lot to be desired. > On Feb 16, 2024, at 8:56?AM, Keith Lambert wrote: > > Too much negativity and vitriol around this IMHO From keithlambert75 at gmail.com Fri Feb 16 15:22:36 2024 From: keithlambert75 at gmail.com (Keith Lambert) Date: Fri, 16 Feb 2024 14:22:36 -0600 Subject: Connectionists: =?utf-8?q?Fwd=3A_Statistics_versus_=E2=80=9CUnder?= =?utf-8?q?standing=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: Please just stay on task and consider the gist of my argument. Not interested in this back n forth which seems pointless and if anything detrimental. Have a nice day.? Sent from my iPhone On Fri, Feb 16, 2024 at 1:58?PM Gary Marcus wrote: > Wow. If you going to back to podcasts (not written articles, book chatpers > etc) from 2014, you are desperate. In fairness, > a. You are going to have to be more specific about what goalposts you even > mean there? (I don?t have time to listen to 10-year old podcasts). What > *exactly *did I say there that you object to exactly? I NEVER endorsed > the Turing Test, so far as I can recall. Always thought it was a lousy test > for intelligence. > b. You are going to have hold Hinton accountable for the 2016 prediction > that we should stop training radiologists. (We can?t have double standards > here.) > c. You are going to have hold Lecun responsible for saying last week that > ? We don't know how to do this properly. It doesn't work for video. What > works for text doesn?t work for video. And the only technique so far that > has a chance of working for video is a new architecture that I've called > JEPA...?, with Sora beating that a week later. (Again, we can?t have double > standards here.) > d. Specifically on the subject of alternatives to Turing, I proposed > something I call the comprehension challenge in 2014 > in > the New Yorker and stand by it. As impressive as the newGemini is, the only > video test they demo?d (Buster Keaton) did not have audio as part of the > input. In their own new long-duration task (details not disclosed; maybe > mostly be memorization rather than the kind of inferences they described) > they were only at 60%. My prediction that test would be hard; goal posts on > that haven?t changed. > > Gary > > > > On Feb 16, 2024, at 11:40, Keith Lambert wrote: > > ? > > Fair point and I?ll try to do better. That being said, this entire talk is > filled with goal post shifting examples. 
I mean you literally talk about > creating a sequel to the Turing test, which would be definitionally a goal > post shift, but I don?t want to get mired down in this?.I believe my points > stand regardless and should be the point of the discourse. To a better > understanding for us all. > > > https://www.econtalk.org/gary-marcus-on-the-future-of-artificial-intelligence-and-the-brain/ > > > > Sent from my iPhone > > > On Fri, Feb 16, 2024 at 11:15?AM Gary Marcus wrote: > >> ? Too much negativity and vitriol around this IMHO? from a guy who opens >> his missive with an accusation of goalpost shifting that is supported by >> not a single quotation. >> >> The level of discourse here leaves a lot to be desired. >> >> > On Feb 16, 2024, at 8:56?AM, Keith Lambert >> wrote: >> > >> > Too much negativity and vitriol around this IMHO >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Fri Feb 16 16:52:20 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 16 Feb 2024 13:52:20 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: Message-ID: <73E7D990-917F-493D-AD2A-F365D9C55FF3@nyu.edu> An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Fri Feb 16 14:57:54 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Fri, 16 Feb 2024 11:57:54 -0800 Subject: Connectionists: =?utf-8?q?Fwd=3A_Statistics_versus_=E2=80=9CUnder?= =?utf-8?q?standing=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From keithlambert75 at gmail.com Fri Feb 16 11:56:38 2024 From: keithlambert75 at gmail.com (Keith Lambert) Date: Fri, 16 Feb 2024 10:56:38 -0600 Subject: Connectionists: =?utf-8?q?Fwd=3A_Statistics_versus_=E2=80=9CUnder?= =?utf-8?q?standing=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <6A812E92-0C10-49E7-A405-F1A3443E4441@nyu.edu> References: <6A812E92-0C10-49E7-A405-F1A3443E4441@nyu.edu> Message-ID: Gary, enough with the language games and goal post shifting?is this a bijective map from biological to technological cognition and understanding? No, but is there a one to one mapping from birds to planes? No?but do both fly? Yes?simple analogy but think it proves my point?stop looking for this one to one correlate and univariate explanation for intelligence, understanding, etc?it?s definitely doing something, but the lack of appropriate linguistic representation for the phenomena shouldn?t disqualify the existence of it?do we need new ?notation? to describe and understand (pun intended) this new technological information processing system, yes, but does this deny what is happening here because we lack that linguistic clarity, no. How?s about stepping off the anthropocentric train for a minute and think of this objectively as an advanced technological entity the likes of which we?ve never seen before and thus struggle with the categorization of it? Just a ?thought??.whatever that means??????? P.S. Also, thinking about Frege?s linguistic turn here, and that we?re talking the same ?sense? here but a different ?referent? as well as the understanding of numbers as parts of a logical system and not objects in and of themselves. I think healthy and spirited dialogue are great. Too much negativity and vitriol around this IMHO Best, Keith Sent from my iPhone On Fri, Feb 16, 2024 at 9:54?AM Gary Marcus wrote: > Always with the childish ad hominem. > > This field is something else. 
> > On Feb 15, 2024, at 11:09?PM, Terry Sejnowski wrote: > > ? GPT-4 is a moving target that may eventually replace Gary. > > Terry > > ------- > > > > The request you've made is similar to one I've already fulfilled. The > image generated earlier depicts an empty room without any elephant present, > adhering to your specific instructions. If you need any further adjustments > or have another request, feel free to let me know! > > ------ > [image: User] > > Write a critique of ChatGPT in the style of Gary Marcus > > ChatGPT > > Writing a critique of ChatGPT in the style of Gary Marcus, a scientist > known for his clear-eyed scrutiny of artificial intelligence's capabilities > and limitations, presents an intriguing challenge. Marcus, with his deep > understanding of both the potential and pitfalls of AI, often emphasizes > the importance of grounding AI advancements in robust, generalized learning > principles, rather than overreliance on data and statistical patterns. > > *Critique of ChatGPT in the Style of Gary Marcus:* > > ChatGPT, while a marvel of contemporary AI technology, epitomizes the > quintessential limitations that besiege deep learning models. At its core, > ChatGPT is a linguistic colossus, built upon the edifice of the transformer > architecture, yet it is fundamentally constrained by the inherent > limitations of its training methodology and underlying principles. The > brilliance of generating coherent, contextually appropriate responses is > overshadowed by its occasional lapses into confabulation, a glaring > testament to its lack of true understanding. > > Gary Marcus would likely point out that ChatGPT, for all its linguistic > dexterity, lacks the nuanced grasp of the world that humans possess. It can > mimic the form of human knowledge and discourse, but it doesn't truly > comprehend the content in the way a human does. This distinction is > crucial; it is the boundary between a sophisticated parrot and an entity > capable of genuine thought and understanding. ChatGPT's responses, no > matter how convincing, are ultimately derived from patterns in data it has > been fed, not from an intrinsic understanding of those patterns. > > Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of > data as a substitute for a more nuanced, rule-based understanding of > language and the world. This data-driven approach can lead to impressive > performances in many scenarios but falls short when faced with novel > situations or when required to apply common sense reasoning that hasn't > been explicitly encoded in its training data. The model's occasional > incoherence or its propensity to generate plausible but factually incorrect > responses are symptomatic of this deeper issue. > > Another point of critique would likely focus on the opacity of ChatGPT's > decision-making process. Despite recent advances in making AI models more > interpretable, understanding how ChatGPT arrives at a particular response > remains a challenge. This opacity complicates efforts to debug or improve > the model, especially when it produces erroneous or nonsensical outputs. > For AI to truly advance, Marcus might argue, it must not only perform well > but do so in a manner that is transparent and understandable to human > overseers. > > In the spirit of Marcus's advocacy for a hybrid approach to AI > development, one could argue that integrating ChatGPT's statistical prowess > with more structured, rule-based systems could ameliorate some of its > shortcomings. 
By grounding the model's responses in a framework of logical > reasoning and factual correctness, one might achieve a more reliable, > genuinely intelligent system that better approximates human cognition. > > In conclusion, while ChatGPT represents a significant achievement in > natural language processing, its limitations underscore the need for a more > comprehensive approach to AI. An approach that incorporates the strengths > of both deep learning and symbolic AI, coupled with a commitment to > transparency and understandability, may pave the way for the development of > AI systems that are not only more capable but also more aligned with the > nuanced complexities of human intelligence and reasoning. > > > -------- Forwarded Message -------- > Subject: Connectionists: Statistics versus ?Understanding? in Generative > AI. > Date: Tue, 13 Feb 2024 09:54:48 -0800 > From: Gary Marcus > To: Weng, Juyang > CC: connectionists at mailman.srv.cs.cmu.edu > > ... > > A few sample visuals from those below (read the essays for sources). The > argument in brief is that the systems do fine in statistically canonical > situations, but lack depth in their comprehension of complex syntax, > negation, unusual circumstances, etc. Much more context (and many more > examples) in the essays themselves. > > Gary > > > > Gary > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From keithlambert75 at gmail.com Fri Feb 16 14:40:20 2024 From: keithlambert75 at gmail.com (Keith Lambert) Date: Fri, 16 Feb 2024 13:40:20 -0600 Subject: Connectionists: =?utf-8?q?Fwd=3A_Statistics_versus_=E2=80=9CUnder?= =?utf-8?q?standing=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: Fair point and I?ll try to do better. That being said, this entire talk is filled with goal post shifting examples. I mean you literally talk about creating a sequel to the Turing test, which would be definitionally a goal post shift, but I don?t want to get mired down in this?.I believe my points stand regardless and should be the point of the discourse. To a better understanding for us all. https://www.econtalk.org/gary-marcus-on-the-future-of-artificial-intelligence-and-the-brain/ Sent from my iPhone On Fri, Feb 16, 2024 at 11:15?AM Gary Marcus wrote: > ? Too much negativity and vitriol around this IMHO? from a guy who opens > his missive with an accusation of goalpost shifting that is supported by > not a single quotation. > > The level of discourse here leaves a lot to be desired. > > > On Feb 16, 2024, at 8:56?AM, Keith Lambert > wrote: > > > > Too much negativity and vitriol around this IMHO > -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian at togelius.com Sat Feb 17 00:19:44 2024 From: julian at togelius.com (Julian Togelius) Date: Sat, 17 Feb 2024 00:19:44 -0500 Subject: Connectionists: 2024 Summer School AI and Games Message-ID: Are you interested in (generative) AI and (video) games? Are you a game developer, researcher, professor, PhD student, or otherwise working on something related? We have the event for you. We are excited to announce the 2024 Summer School on AI and Games! The school will be held in Valletta, the beautiful medieval capital city of Malta, 17-21 June 2024. The summer school comes packed with lectures focusing on the use of artificial intelligence in game development and game design with talks from industry leaders, indie developers, and even a few academic researchers. There will also be hands-on sessions and an AI game jam. 
Early Bird registration is now open, and the early bird rates are available until the end of February: https://school.gameaibook.org Best regards, The Summer School on AI and Games Organisers -- Julian Togelius Associate Professor, New York University Department of Computer Science and Engineering mail: julian at togelius.com, web: http://julian.togelius.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Sat Feb 17 13:19:52 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 17 Feb 2024 10:19:52 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: Message-ID: <77802D27-7690-4011-BBB8-CB780DEF346C@nyu.edu> An HTML attachment was scrubbed... URL: From fellous at arizona.edu Sat Feb 17 19:37:10 2024 From: fellous at arizona.edu (Fellous, Jean-Marc - (fellous)) Date: Sun, 18 Feb 2024 00:37:10 +0000 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: Yes, I agree, not always accurate of course. Hence the idea of reinforcement. Reinforce when it is accurate, punish when it is not. Updates should not just be an OpenAI prerogative? there should be users in the loop? Thanks for the pointers to these studies? will look them up? Jean-Marc From: Gary Marcus Sent: Saturday, February 17, 2024 4:55 PM To: Fellous, Jean-Marc - (fellous) Cc: Terry Sejnowski ; connectionists at cs.cmu.edu Subject: Re: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email GPT-4 is surely not reliably accurate; I gave many examples and pointed to Mitchell?s recent work as an example. The two empirical examinations Tom pointed to also note inaccuracies, as do many studies by Yejin Choi, Subbarao Kambhampati, and so on. OpenAI regularly updates their systems with undisclosed mechanisms and addresses some publicly discussed examples. (it is also stochastic, so even on questions it gets right it sometimes makes errors). But their patches are rarely if ever robust; it is absurd to suggest that GPT-4 is accurate, full stop, with no qualification. On Feb 17, 2024, at 3:47?PM, Fellous, Jean-Marc - (fellous) > wrote: ? Hi Gary, Yes, I meant the GPT-4 output. If I understand your answer, it is accurate. Then my point is that whether GPT ?understands? or fakes an understanding may be secondary to the fact that it made us understand (to some good? extent) your point of view on AI. It may not ?have? an understanding, but it generates one. That is amazing. What would be good now, is if there was a (relatively safe) feedback mechanism that would allow for a (human) reinforcement of whatever lead to that output. Moving from a purely feedforward GPT (query->answer) to a recurrent one (answer + human->GPT inner workings)? Best, Jean-Marc From: Gary Marcus > Sent: Friday, February 16, 2024 9:16 AM To: Fellous, Jean-Marc - (fellous) > Cc: Terry Sejnowski >; connectionists at cs.cmu.edu Subject: Re: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email On your first question: Is what accurate? There is no substantive claim to be tested in Sejnowski?s schoolyard taunt. As for my position on GPT-4, one place to start: All my predictions of GPT-4 were poublished before its public release (and before I had access); I think all were correct; I don?t think I revised any. 
You can read them here: https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting On Feb 16, 2024, at 08:10, Fellous, Jean-Marc - (fellous) > wrote: ? Is it accurate though? Some of us sure could us a concise summary of everyone's position/opinion! Thanks, Jean-Marc ________________________________ From: Connectionists > on behalf of Gary Marcus > Sent: Friday, February 16, 2024 5:11 AM To: Terry Sejnowski > Cc: connectionists at cs.cmu.edu > Subject: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email Always with the childish ad hominem. This field is something else. On Feb 15, 2024, at 11:09?PM, Terry Sejnowski > wrote: ? GPT-4 is a moving target that may eventually replace Gary. Terry ------- <3ttu1d7S0cx4Qvpk.png> The request you've made is similar to one I've already fulfilled. The image generated earlier depicts an empty room without any elephant present, adhering to your specific instructions. If you need any further adjustments or have another request, feel free to let me know! ------ [User] Write a critique of ChatGPT in the style of Gary Marcus ChatGPT Writing a critique of ChatGPT in the style of Gary Marcus, a scientist known for his clear-eyed scrutiny of artificial intelligence's capabilities and limitations, presents an intriguing challenge. Marcus, with his deep understanding of both the potential and pitfalls of AI, often emphasizes the importance of grounding AI advancements in robust, generalized learning principles, rather than overreliance on data and statistical patterns. Critique of ChatGPT in the Style of Gary Marcus: ChatGPT, while a marvel of contemporary AI technology, epitomizes the quintessential limitations that besiege deep learning models. At its core, ChatGPT is a linguistic colossus, built upon the edifice of the transformer architecture, yet it is fundamentally constrained by the inherent limitations of its training methodology and underlying principles. The brilliance of generating coherent, contextually appropriate responses is overshadowed by its occasional lapses into confabulation, a glaring testament to its lack of true understanding. Gary Marcus would likely point out that ChatGPT, for all its linguistic dexterity, lacks the nuanced grasp of the world that humans possess. It can mimic the form of human knowledge and discourse, but it doesn't truly comprehend the content in the way a human does. This distinction is crucial; it is the boundary between a sophisticated parrot and an entity capable of genuine thought and understanding. ChatGPT's responses, no matter how convincing, are ultimately derived from patterns in data it has been fed, not from an intrinsic understanding of those patterns. Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of data as a substitute for a more nuanced, rule-based understanding of language and the world. This data-driven approach can lead to impressive performances in many scenarios but falls short when faced with novel situations or when required to apply common sense reasoning that hasn't been explicitly encoded in its training data. The model's occasional incoherence or its propensity to generate plausible but factually incorrect responses are symptomatic of this deeper issue. Another point of critique would likely focus on the opacity of ChatGPT's decision-making process. Despite recent advances in making AI models more interpretable, understanding how ChatGPT arrives at a particular response remains a challenge. 
This opacity complicates efforts to debug or improve the model, especially when it produces erroneous or nonsensical outputs. For AI to truly advance, Marcus might argue, it must not only perform well but do so in a manner that is transparent and understandable to human overseers. In the spirit of Marcus's advocacy for a hybrid approach to AI development, one could argue that integrating ChatGPT's statistical prowess with more structured, rule-based systems could ameliorate some of its shortcomings. By grounding the model's responses in a framework of logical reasoning and factual correctness, one might achieve a more reliable, genuinely intelligent system that better approximates human cognition. In conclusion, while ChatGPT represents a significant achievement in natural language processing, its limitations underscore the need for a more comprehensive approach to AI. An approach that incorporates the strengths of both deep learning and symbolic AI, coupled with a commitment to transparency and understandability, may pave the way for the development of AI systems that are not only more capable but also more aligned with the nuanced complexities of human intelligence and reasoning. -------- Forwarded Message -------- Subject: Connectionists: Statistics versus ?Understanding? in Generative AI. Date: Tue, 13 Feb 2024 09:54:48 -0800 From: Gary Marcus To: Weng, Juyang CC: connectionists at mailman.srv.cs.cmu.edu ... A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. Gary Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From jost at mis.mpg.de Sat Feb 17 06:55:09 2024 From: jost at mis.mpg.de (J. Jost) Date: Sat, 17 Feb 2024 12:55:09 +0100 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: <73E7D990-917F-493D-AD2A-F365D9C55FF3@nyu.edu> References: <73E7D990-917F-493D-AD2A-F365D9C55FF3@nyu.edu> Message-ID: Gary, Why don't you state your definition explicitly here? Hopefully that will make the dynamics less frustrating and more rational. Juergen Gary Marcus schrieb am Fr., 16. Feb. 2024, 22:52: > If your interest is genuine (I?ve grown frustrated with the dynamic here), > i have some discussion of some of this in my article The Next Decade in AI. > > Gary > > On Feb 16, 2024, at 13:43, J. Jost wrote: > > ? > Hi Gary, > > why don't you give a precise definition of "understanding". You write: " > Understanding, for me, is about converting conversations and visual > inputs and so on into sound, reliable, interrogable internal cognitive > models." but this contains a couple of terms whose meaning is not > completely clear. > > There is only hope for progress in this debate if everybody defines his or > her terms clearly and precisely. And when people attack you, they should > first also clearly state the definitions they are employing. > > And if we all agree on some definition, then we might want to discuss to > which extent humans, like the average college student, satisfy the > corresponding criteria. > > Juergen (J.) > > > > On Thu, Feb 15, 2024 at 5:44?PM Gary Marcus wrote: > >> For your fill-in-the-blanks ["GPT-4 can ___ that scrambled text better >> than I can"] I would urge ?respond to? 
>> >> That is nice, nonjudgemental, non-anthropomorphic language to describe a >> process that we thus far have limited insight into, and that seems to >> respond very differently from what we do. >> >> With respect to your assertion that various problems ?can be remedied?, I >> first warned of hallucinations errors in 2001 in my book The Algebraic >> Mind; the problem has not gone away in over 20 years. I also addressed >> something quite similar to the reversal curse, and anticipated a number of >> other issues in semantics and compositionality, also still unsolved. For >> many problems the phrase ?Can be remedied? remains a promissory note. >> Techniques like RAG hold some promise but are very far from perfect. >> >> Yes, I am sure that AI that is not subject to hallucination and the >> reversal curse and so can someday be built; I am also sure that some form >> of AI that can reliably produce accounts of its internal processing can be >> built. But I am not convinced that *LLMs, *which happen to be popular >> right now, are the right foundation for such things. >> >> They remain opaque, unreliable, and poorly grounded in facts. >> Understanding, for me, is about converting conversations and visual inputs >> and so on into sound, reliable, interrogable internal cognitive models. I >> am with LeCun in finding what passes for cognitive models in current >> systems to be lacking. >> >> Gary >> >> >> >> On Feb 15, 2024, at 07:56, Iam Palatnik wrote: >> >> ? >> I understand why using the word 'understanding' might seem too generous >> when models still have the failure modes mentioned. Some of the failure >> modes (like the reversal curse >> ) >> can be remedied with access to tools, external context, self-reflection >> prompts, but there are failures that cannot yet be remedied. >> >> I just don't know what better word to use in the sentence "GPT-4 can ___ >> that scrambled text better than I can". 'Understand' just flows very >> naturally in how we commonly use this word, even if it turns out that what >> GPT-4 is doing is shallower or less general than what my brain is doing. >> 'Parse' or 'process' doesn't seem enough because the scrambled text >> contains an instruction and GPT-4 does follow through with it. What word >> should we use for this? >> >> >> On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus wrote: >> >>> Selectively looking at a single example (which happens to involve >>> images) and ignoring all the other language-internal failures that I and >>> others have presented is not a particularly effective way of getting to a >>> general truth. >>> >>> More broadly, you are, in my judgement, mistaking correlation for a >>> deeper level of understanding. >>> >>> Gary >>> >>> On Feb 15, 2024, at 07:05, Iam Palatnik wrote: >>> >>> ? >>> Dear all, >>> >>> yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an >>> iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay >>> cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta >>> oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets >>> itcyxellpi that oyu uderdnoost eht gsaninesmt >>> >>> Copy pasting just the above paragraph onto GPT-4 should show the kind of >>> behavior that makes some researchers say LLMs understand something, in some >>> form. >>> We already use words such as 'intelligence' in AI and 'learning' in ML. >>> This is not to say it's the same as human intelligence/learning. 
It is to >>> say it's a similar enough behavior that the same word fits, while >>> specifically qualifying the machine word-counterpart as something different >>> (artificial/machine). >>> >>> Can this debate be solved by coining a concept such as >>> 'artificial/machine understanding'? GPT-4 then 'machine understands' the >>> paragraph above. It 'machine understands' arbitrary scrambled text better >>> than humans 'human understand' it. Matrix multiplying rotational semantic >>> embeddings of byte pair encoded tokens is part of 'machine understanding' >>> but not of 'human understanding'. At the same time, there are plenty of >>> examples of things we 'human understand' and GPT-4 doesn't 'machine >>> understand', or doesn't understand without tool access and self reflective >>> prompts. >>> >>> As to the map generation example, there are multiple tasks overlaid >>> there. The language component of GPT-4 seems to have 'machine understood' >>> it has to generate an image, and what the contents of the image have to be. >>> It understood what tool it has to call to create the image. The tool >>> generated an infograph style map of the correct country, but the states and >>> landmarks are wrong. The markers are on the wrong cities and some of the >>> drawings are bad. Is it too far fetched to say GPT-4 'machine understood' >>> the assignment (generating a map with markers in the style of infograph), >>> but its image generation component (Dall-E) is bad at detailed accurate >>> geography knowledge? >>> >>> I'm also confused why the linguistic understanding capabilities of GPT-4 >>> are being tested by asking Dall-E 3 to generate images. Aren't these two >>> completely separate models, and GPT-4 just function-calls Dall-E3 for image >>> generation? Isn't this actually a sign GPT-4 did its job by 'machine >>> understanding' what the user wanted, making the correct function call, >>> creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled >>> it because it's not good at generating detailed accurate maps? >>> >>> Cheers, >>> >>> Iam >>> >>> On Thu, Feb 15, 2024 at 5:20?AM Gary Marcus wrote: >>> >>>> I am having a genuinely hard time comprehending some of the claims >>>> recently made in this forum. (Not one of which engaged with any of the >>>> specific examples or texts I linked.) >>>> >>>> Here?s yet another example, a dialog about geography that was just sent >>>> to me by entrepreneur Phil Libin. Do we really want to call outputs like >>>> these (to two prompts, with three generated responses zoomed in below) >>>> understanding? >>>> >>>> In what sense do these responses exemplify the word ?understanding?? >>>> >>>> I am genuinely baffled. To me a better word would be ?approximations?, >>>> and poor approximations at that. >>>> >>>> Worse, I don?t see any AI system on the horizon that could reliably do >>>> better, across a broad range of related questions. If these kinds of >>>> outputs are any indication at all, we are still a very long away from >>>> reliable general-purpose AI. >>>> >>>> Gary >>>> >>>> >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tgd at oregonstate.edu Sat Feb 17 08:24:43 2024 From: tgd at oregonstate.edu (Dietterich, Thomas) Date: Sat, 17 Feb 2024 13:24:43 +0000 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: I favor a functional definition: A system understands if it responds appropriately. However, understanding can be pointwise (i.e., it works only for specific situations) or systematic (i.e., it works across an entire ?region? of situations). Current LLMs ?play the odds?. They can deliver pointwise understanding for frequent cases. But for rare cases, they often try to ?autocorrect reality? and end up answering the wrong question. This is beautifully discussed in McCoy, R. T., Yao, S., Friedman, D., Hardy, M., & Griffiths, T. L. (2023). Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve. ArXiv, 2309.13638(v1). http://arxiv.org/abs/2309.13638 I expect that systems whose internal representations are causally connected to the world will be more likely to understand systematically, but the evidence is not yet clear. In the last week, these two papers appeared: Vivian Y. Nastl, Moritz Hardt. Predictors from causal features do not generalize better to new domains. https://arxiv.org/abs/2402.09891 Jonathan Richens, Tom Everitt. Robust Agents Learn Causal World Models. ICLR 2024 https://openreview.net/forum?id=pOoKI3ouv1 --Tom Thomas G. Dietterich, Distinguished Professor (Emeritus) School of EECS, Oregon State University US Mail: 1148 Kelley Engineering Center, Corvallis, OR 97331-5501 USA Office: 2063 Kelley Engineering Center Voice: 541-737-5559; FAX: 541-737-1300 https://web.engr.oregonstate.edu/~tgd/ From: Connectionists On Behalf Of Iam Palatnik Sent: Thursday, February 15, 2024 21:26 To: Gary Marcus Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: ChatGPT?s ?understanding? of maps and infographics You don't often get email from iam.palat at gmail.com. Learn why this is important [This email originated from outside of OSU. Use caution with links and attachments.] [This email originated from outside of OSU. Use caution with links and attachments.] I understand why using the word 'understanding' might seem too generous when models still have the failure modes mentioned. Some of the failure modes (like the reversal curse) can be remedied with access to tools, external context, self-reflection prompts, but there are failures that cannot yet be remedied. I just don't know what better word to use in the sentence "GPT-4 can ___ that scrambled text better than I can". 'Understand' just flows very naturally in how we commonly use this word, even if it turns out that what GPT-4 is doing is shallower or less general than what my brain is doing. 'Parse' or 'process' doesn't seem enough because the scrambled text contains an instruction and GPT-4 does follow through with it. What word should we use for this? On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus > wrote: Selectively looking at a single example (which happens to involve images) and ignoring all the other language-internal failures that I and others have presented is not a particularly effective way of getting to a general truth. More broadly, you are, in my judgement, mistaking correlation for a deeper level of understanding. Gary On Feb 15, 2024, at 07:05, Iam Palatnik > wrote: ? 
Dear all, yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets itcyxellpi that oyu uderdnoost eht gsaninesmt Copy pasting just the above paragraph onto GPT-4 should show the kind of behavior that makes some researchers say LLMs understand something, in some form. We already use words such as 'intelligence' in AI and 'learning' in ML. This is not to say it's the same as human intelligence/learning. It is to say it's a similar enough behavior that the same word fits, while specifically qualifying the machine word-counterpart as something different (artificial/machine). Can this debate be solved by coining a concept such as 'artificial/machine understanding'? GPT-4 then 'machine understands' the paragraph above. It 'machine understands' arbitrary scrambled text better than humans 'human understand' it. Matrix multiplying rotational semantic embeddings of byte pair encoded tokens is part of 'machine understanding' but not of 'human understanding'. At the same time, there are plenty of examples of things we 'human understand' and GPT-4 doesn't 'machine understand', or doesn't understand without tool access and self-reflective prompts. As to the map generation example, there are multiple tasks overlaid there. The language component of GPT-4 seems to have 'machine understood' that it has to generate an image, and what the contents of the image have to be. It understood what tool it has to call to create the image. The tool generated an infographic-style map of the correct country, but the states and landmarks are wrong. The markers are on the wrong cities and some of the drawings are bad. Is it too far-fetched to say GPT-4 'machine understood' the assignment (generating a map with markers in the style of an infographic), but its image generation component (Dall-E) is bad at detailed, accurate geography knowledge? I'm also confused why the linguistic understanding capabilities of GPT-4 are being tested by asking Dall-E 3 to generate images. Aren't these two completely separate models, and GPT-4 just function-calls Dall-E 3 for image generation? Isn't this actually a sign GPT-4 did its job by 'machine understanding' what the user wanted, making the correct function call, creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled it because it's not good at generating detailed, accurate maps? Cheers, Iam On Thu, Feb 15, 2024 at 5:20 AM Gary Marcus wrote: I am having a genuinely hard time comprehending some of the claims recently made in this forum. (Not one of which engaged with any of the specific examples or texts I linked.) Here's yet another example, a dialog about geography that was just sent to me by entrepreneur Phil Libin. Do we really want to call outputs like these (to two prompts, with three generated responses zoomed in below) understanding? In what sense do these responses exemplify the word "understanding"? I am genuinely baffled. To me a better word would be "approximations", and poor approximations at that. Worse, I don't see any AI system on the horizon that could reliably do better, across a broad range of related questions. If these kinds of outputs are any indication at all, we are still a very long way away from reliable general-purpose AI. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL:
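The scrambled paragraph quoted above is ordinary English in which the letters of every word have been shuffled (first and last letters included, e.g. "ouy" for "you"). A minimal Python sketch for generating that kind of test input is below; the sample sentence and function names are illustrative assumptions, not taken from the original post, and punctuation is not handled.

    import random

    def scramble_word(word: str, rng: random.Random) -> str:
        # Shuffle every letter of the word, first and last included,
        # in the style of the paragraph quoted above.
        letters = list(word)
        rng.shuffle(letters)
        return "".join(letters)

    def scramble_text(text: str, seed: int = 0) -> str:
        # Scramble each whitespace-separated word independently.
        rng = random.Random(seed)
        return " ".join(scramble_word(w, rng) for w in text.split())

    if __name__ == "__main__":
        sample = ("please unscramble this sentence manually and then state "
                  "explicitly that you understood the instruction")
        print(scramble_text(sample))

Pasting the output of such a script into a chatbot and checking whether the embedded instruction is actually carried out is the kind of probe being debated in this thread.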
From poole at cs.ubc.ca Sat Feb 17 15:24:14 2024 From: poole at cs.ubc.ca (poole) Date: Sat, 17 Feb 2024 12:24:14 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: > On Feb 15, 2024, at 8:17 AM, David H Kirshner wrote: > > Back in Alan Turing's time when AI meant symbolic AI, ... It's nice to create a revisionist history to bolster an argument, but... Back in Alan Turing's time AI did not exist, symbolic or neural. Turing's paper is quite readable and anticipates many of the issues still discussed. It can be found here: https://academic.oup.com/mind/article/LIX/236/433/986238?login=false (I. COMPUTING MACHINERY AND INTELLIGENCE, academic.oup.com) David -- David Poole, Department of Computer Science, University of British Columbia, https://cs.ubc.ca/~poole poole at cs.ubc.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: m_lix-236-433f1.jpeg Type: image/jpeg Size: 9945 bytes Desc: not available URL: From poole at cs.ubc.ca Sat Feb 17 19:39:36 2024 From: poole at cs.ubc.ca (poole) Date: Sat, 17 Feb 2024 16:39:36 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: > On Feb 17, 2024, at 1:08 PM, David H Kirshner wrote: > > [CAUTION: Non-UBC Email] > You're right, David. > I should have said "Back in Alan Turing's time when the possibility of AI meant the possibility of symbolic AI, ..." In Turing's time, from what I can see (I wasn't alive then ;^), neural networks were more trendy than symbolic approaches. Turing's paper was 1950. McCulloch and Pitts' seminal work was 1943. Minsky's thesis on neural networks was written in 1952. (Schmidhuber has great resources on the history of NNs and AI on his website.) There was lots of neural network hype in the 1950s: "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. The service said it would build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." -- New York Times [1958] It was later in the 1950s that they came to realize that AI needed representations, led by Minsky and McCarthy, which led to the rise of symbolic approaches. (It is interesting that a major NN conference, ICLR, is about representations.) I am sure there are people who know the history better than me, who might like to provide more perspective. David > -- > David Poole, > Department of Computer Science, > University of British Columbia, > https://cs.ubc.ca/~poole > poole at cs.ubc.ca From stmanion at gmail.com Sat Feb 17 05:58:20 2024 From: stmanion at gmail.com (Sean Manion) Date: Sat, 17 Feb 2024 05:58:20 -0500 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: <73E7D990-917F-493D-AD2A-F365D9C55FF3@nyu.edu> References: <73E7D990-917F-493D-AD2A-F365D9C55FF3@nyu.edu> Message-ID: What is the operational definition of "intelligence" for this group? Having one might make the dialogue a bit easier (if that's a desired goal). 
Some considerations here: https://www.linkedin.com/posts/sean-manionphd_intelligence-ai-engineering-activity-7162830566382374913-x-P4?utm_source=share&utm_medium=member_android Cheers! Sean On Sat, Feb 17, 2024, 4:36 AM Gary Marcus wrote: > If your interest is genuine (I?ve grown frustrated with the dynamic here), > i have some discussion of some of this in my article The Next Decade in AI. > > Gary > > On Feb 16, 2024, at 13:43, J. Jost wrote: > > ? > Hi Gary, > > why don't you give a precise definition of "understanding". You write: " > Understanding, for me, is about converting conversations and visual > inputs and so on into sound, reliable, interrogable internal cognitive > models." but this contains a couple of terms whose meaning is not > completely clear. > > There is only hope for progress in this debate if everybody defines his or > her terms clearly and precisely. And when people attack you, they should > first also clearly state the definitions they are employing. > > And if we all agree on some definition, then we might want to discuss to > which extent humans, like the average college student, satisfy the > corresponding criteria. > > Juergen (J.) > > > > On Thu, Feb 15, 2024 at 5:44?PM Gary Marcus wrote: > >> For your fill-in-the-blanks ["GPT-4 can ___ that scrambled text better >> than I can"] I would urge ?respond to? >> >> That is nice, nonjudgemental, non-anthropomorphic language to describe a >> process that we thus far have limited insight into, and that seems to >> respond very differently from what we do. >> >> With respect to your assertion that various problems ?can be remedied?, I >> first warned of hallucinations errors in 2001 in my book The Algebraic >> Mind; the problem has not gone away in over 20 years. I also addressed >> something quite similar to the reversal curse, and anticipated a number of >> other issues in semantics and compositionality, also still unsolved. For >> many problems the phrase ?Can be remedied? remains a promissory note. >> Techniques like RAG hold some promise but are very far from perfect. >> >> Yes, I am sure that AI that is not subject to hallucination and the >> reversal curse and so can someday be built; I am also sure that some form >> of AI that can reliably produce accounts of its internal processing can be >> built. But I am not convinced that *LLMs, *which happen to be popular >> right now, are the right foundation for such things. >> >> They remain opaque, unreliable, and poorly grounded in facts. >> Understanding, for me, is about converting conversations and visual inputs >> and so on into sound, reliable, interrogable internal cognitive models. I >> am with LeCun in finding what passes for cognitive models in current >> systems to be lacking. >> >> Gary >> >> >> >> On Feb 15, 2024, at 07:56, Iam Palatnik wrote: >> >> ? >> I understand why using the word 'understanding' might seem too generous >> when models still have the failure modes mentioned. Some of the failure >> modes (like the reversal curse >> ) >> can be remedied with access to tools, external context, self-reflection >> prompts, but there are failures that cannot yet be remedied. >> >> I just don't know what better word to use in the sentence "GPT-4 can ___ >> that scrambled text better than I can". 'Understand' just flows very >> naturally in how we commonly use this word, even if it turns out that what >> GPT-4 is doing is shallower or less general than what my brain is doing. 
>> 'Parse' or 'process' doesn't seem enough because the scrambled text >> contains an instruction and GPT-4 does follow through with it. What word >> should we use for this? >> >> >> On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus wrote: >> >>> Selectively looking at a single example (which happens to involve >>> images) and ignoring all the other language-internal failures that I and >>> others have presented is not a particularly effective way of getting to a >>> general truth. >>> >>> More broadly, you are, in my judgement, mistaking correlation for a >>> deeper level of understanding. >>> >>> Gary >>> >>> On Feb 15, 2024, at 07:05, Iam Palatnik wrote: >>> >>> ? >>> Dear all, >>> >>> yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an >>> iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay >>> cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta >>> oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets >>> itcyxellpi that oyu uderdnoost eht gsaninesmt >>> >>> Copy pasting just the above paragraph onto GPT-4 should show the kind of >>> behavior that makes some researchers say LLMs understand something, in some >>> form. >>> We already use words such as 'intelligence' in AI and 'learning' in ML. >>> This is not to say it's the same as human intelligence/learning. It is to >>> say it's a similar enough behavior that the same word fits, while >>> specifically qualifying the machine word-counterpart as something different >>> (artificial/machine). >>> >>> Can this debate be solved by coining a concept such as >>> 'artificial/machine understanding'? GPT-4 then 'machine understands' the >>> paragraph above. It 'machine understands' arbitrary scrambled text better >>> than humans 'human understand' it. Matrix multiplying rotational semantic >>> embeddings of byte pair encoded tokens is part of 'machine understanding' >>> but not of 'human understanding'. At the same time, there are plenty of >>> examples of things we 'human understand' and GPT-4 doesn't 'machine >>> understand', or doesn't understand without tool access and self reflective >>> prompts. >>> >>> As to the map generation example, there are multiple tasks overlaid >>> there. The language component of GPT-4 seems to have 'machine understood' >>> it has to generate an image, and what the contents of the image have to be. >>> It understood what tool it has to call to create the image. The tool >>> generated an infograph style map of the correct country, but the states and >>> landmarks are wrong. The markers are on the wrong cities and some of the >>> drawings are bad. Is it too far fetched to say GPT-4 'machine understood' >>> the assignment (generating a map with markers in the style of infograph), >>> but its image generation component (Dall-E) is bad at detailed accurate >>> geography knowledge? >>> >>> I'm also confused why the linguistic understanding capabilities of GPT-4 >>> are being tested by asking Dall-E 3 to generate images. Aren't these two >>> completely separate models, and GPT-4 just function-calls Dall-E3 for image >>> generation? Isn't this actually a sign GPT-4 did its job by 'machine >>> understanding' what the user wanted, making the correct function call, >>> creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled >>> it because it's not good at generating detailed accurate maps? 
>>> >>> Cheers, >>> >>> Iam >>> >>> On Thu, Feb 15, 2024 at 5:20?AM Gary Marcus wrote: >>> >>>> I am having a genuinely hard time comprehending some of the claims >>>> recently made in this forum. (Not one of which engaged with any of the >>>> specific examples or texts I linked.) >>>> >>>> Here?s yet another example, a dialog about geography that was just sent >>>> to me by entrepreneur Phil Libin. Do we really want to call outputs like >>>> these (to two prompts, with three generated responses zoomed in below) >>>> understanding? >>>> >>>> In what sense do these responses exemplify the word ?understanding?? >>>> >>>> I am genuinely baffled. To me a better word would be ?approximations?, >>>> and poor approximations at that. >>>> >>>> Worse, I don?t see any AI system on the horizon that could reliably do >>>> better, across a broad range of related questions. If these kinds of >>>> outputs are any indication at all, we are still a very long away from >>>> reliable general-purpose AI. >>>> >>>> Gary >>>> >>>> >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Sat Feb 17 13:14:55 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 17 Feb 2024 10:14:55 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: Message-ID: <38B1FB39-B8EC-4F09-AE83-3D3A39218E59@nyu.edu> An HTML attachment was scrubbed... URL: From fellous at arizona.edu Sat Feb 17 18:47:35 2024 From: fellous at arizona.edu (Fellous, Jean-Marc - (fellous)) Date: Sat, 17 Feb 2024 23:47:35 +0000 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: Hi Gary, Yes, I meant the GPT-4 output. If I understand your answer, it is accurate. Then my point is that whether GPT ?understands? or fakes an understanding may be secondary to the fact that it made us understand (to some good? extent) your point of view on AI. It may not ?have? an understanding, but it generates one. That is amazing. What would be good now, is if there was a (relatively safe) feedback mechanism that would allow for a (human) reinforcement of whatever lead to that output. Moving from a purely feedforward GPT (query->answer) to a recurrent one (answer + human->GPT inner workings)? Best, Jean-Marc From: Gary Marcus Sent: Friday, February 16, 2024 9:16 AM To: Fellous, Jean-Marc - (fellous) Cc: Terry Sejnowski ; connectionists at cs.cmu.edu Subject: Re: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email On your first question: Is what accurate? There is no substantive claim to be tested in Sejnowski?s schoolyard taunt. As for my position on GPT-4, one place to start: All my predictions of GPT-4 were poublished before its public release (and before I had access); I think all were correct; I don?t think I revised any. You can read them here: https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting On Feb 16, 2024, at 08:10, Fellous, Jean-Marc - (fellous) > wrote: ? Is it accurate though? Some of us sure could us a concise summary of everyone's position/opinion! 
Thanks, Jean-Marc ________________________________ From: Connectionists > on behalf of Gary Marcus > Sent: Friday, February 16, 2024 5:11 AM To: Terry Sejnowski > Cc: connectionists at cs.cmu.edu > Subject: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. External Email Always with the childish ad hominem. This field is something else. On Feb 15, 2024, at 11:09?PM, Terry Sejnowski > wrote: ? GPT-4 is a moving target that may eventually replace Gary. Terry ------- <3ttu1d7S0cx4Qvpk.png> The request you've made is similar to one I've already fulfilled. The image generated earlier depicts an empty room without any elephant present, adhering to your specific instructions. If you need any further adjustments or have another request, feel free to let me know! ------ [User] Write a critique of ChatGPT in the style of Gary Marcus ChatGPT Writing a critique of ChatGPT in the style of Gary Marcus, a scientist known for his clear-eyed scrutiny of artificial intelligence's capabilities and limitations, presents an intriguing challenge. Marcus, with his deep understanding of both the potential and pitfalls of AI, often emphasizes the importance of grounding AI advancements in robust, generalized learning principles, rather than overreliance on data and statistical patterns. Critique of ChatGPT in the Style of Gary Marcus: ChatGPT, while a marvel of contemporary AI technology, epitomizes the quintessential limitations that besiege deep learning models. At its core, ChatGPT is a linguistic colossus, built upon the edifice of the transformer architecture, yet it is fundamentally constrained by the inherent limitations of its training methodology and underlying principles. The brilliance of generating coherent, contextually appropriate responses is overshadowed by its occasional lapses into confabulation, a glaring testament to its lack of true understanding. Gary Marcus would likely point out that ChatGPT, for all its linguistic dexterity, lacks the nuanced grasp of the world that humans possess. It can mimic the form of human knowledge and discourse, but it doesn't truly comprehend the content in the way a human does. This distinction is crucial; it is the boundary between a sophisticated parrot and an entity capable of genuine thought and understanding. ChatGPT's responses, no matter how convincing, are ultimately derived from patterns in data it has been fed, not from an intrinsic understanding of those patterns. Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of data as a substitute for a more nuanced, rule-based understanding of language and the world. This data-driven approach can lead to impressive performances in many scenarios but falls short when faced with novel situations or when required to apply common sense reasoning that hasn't been explicitly encoded in its training data. The model's occasional incoherence or its propensity to generate plausible but factually incorrect responses are symptomatic of this deeper issue. Another point of critique would likely focus on the opacity of ChatGPT's decision-making process. Despite recent advances in making AI models more interpretable, understanding how ChatGPT arrives at a particular response remains a challenge. This opacity complicates efforts to debug or improve the model, especially when it produces erroneous or nonsensical outputs. For AI to truly advance, Marcus might argue, it must not only perform well but do so in a manner that is transparent and understandable to human overseers. 
In the spirit of Marcus's advocacy for a hybrid approach to AI development, one could argue that integrating ChatGPT's statistical prowess with more structured, rule-based systems could ameliorate some of its shortcomings. By grounding the model's responses in a framework of logical reasoning and factual correctness, one might achieve a more reliable, genuinely intelligent system that better approximates human cognition. In conclusion, while ChatGPT represents a significant achievement in natural language processing, its limitations underscore the need for a more comprehensive approach to AI. An approach that incorporates the strengths of both deep learning and symbolic AI, coupled with a commitment to transparency and understandability, may pave the way for the development of AI systems that are not only more capable but also more aligned with the nuanced complexities of human intelligence and reasoning. -------- Forwarded Message -------- Subject: Connectionists: Statistics versus ?Understanding? in Generative AI. Date: Tue, 13 Feb 2024 09:54:48 -0800 From: Gary Marcus To: Weng, Juyang CC: connectionists at mailman.srv.cs.cmu.edu ... A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. Gary Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Sat Feb 17 20:42:47 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 17 Feb 2024 17:42:47 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: preview.png Type: image/png Size: 281577 bytes Desc: not available URL: From dkirsh at lsu.edu Sat Feb 17 16:08:00 2024 From: dkirsh at lsu.edu (David H Kirshner) Date: Sat, 17 Feb 2024 21:08:00 +0000 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: <91D8CA23-B8EA-4DDB-AB6C-EDC3540BDD77@nyu.edu> Message-ID: You?re right, David. I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? I?m not sure how my argument was bolstered by that ?revisionist history.? David From: poole Sent: Saturday, February 17, 2024 2:24 PM To: David H Kirshner Cc: Gary Marcus ; Iam Palatnik ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: ChatGPT?s ?understanding? of maps and infographics You don't often get email from poole at cs.ubc.ca. Learn why this is important On Feb 15, 2024, at 8:17?AM, David H Kirshner > wrote: Back in Alan Turing?s time when AI meant symbolic AI, ?. It?s nice to create a revisionist history to bolster an argument, but? Back in Alan Turing's time AI did not exist. Symbolic or neural. Turing?s paper is quite readable and anticilpates many of the issues still discussed. It can be found here: [m_lix-236-433f1.jpeg] I.?COMPUTING MACHINERY AND INTELLIGENCE academic.oup.com David ?? David Poole, Department of Computer Science, University of British Columbia, https://cs.ubc.ca/~poole poole at cs.ubc.ca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 4483 bytes Desc: image001.jpg URL: From gary.marcus at nyu.edu Sat Feb 17 18:55:17 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sat, 17 Feb 2024 15:55:17 -0800 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From dhansel0 at gmail.com Sun Feb 18 06:29:06 2024 From: dhansel0 at gmail.com (David Hansel) Date: Sun, 18 Feb 2024 12:29:06 +0100 Subject: Connectionists: World wide VVTNS series: Wednesday, February 21 at 11am (ET), Itamar Landau, Stanford In-Reply-To: References: Message-ID: https://www.wwtns.online - on twitter: wwtns at TheoreticalWide You are cordially invited to the lecture given by Itamar Landau (Stanford University) on the topic of "Random Matrix Theory and the Statistical Constraints of Inferring Population Geometry from Large-Scale Neural Recordings". The lecture will be held on zoom on February 21, 2024, at 11:00 am ET. Register on our website - https://www.wwtns.online - to receive the zoom link. Abstract: Contemporary neuroscience has witnessed an impressive expansion in the number of neurons whose activity can be recorded simultaneously, from mere hundreds a decade ago to tens and even hundreds of thousands in recent years. With these advances, characterizing the geometry of population activity from large-scale neural recordings has taken center stage. In classical statistics, the number of repeated measurements is generally assumed to far exceed the number of free variables to be estimated. In our work, we ask a fundamental statistical question: as the number of recorded neurons grows, how are estimates of the geometry of population activity, for example, its dimensionality, constrained by the number of repeated experimental trials? Many neuroscience experiments report that neural activity is low-dimensional, with the dimensionality bounded as more neurons are recorded. We therefore begin by modeling neural data as a low-rank neurons-by-trials matrix with additive noise, and employ random matrix theory to show that under this hypothesis iso-contours of constant estimated dimensionality form hyperbolas in the space of neurons and trials -- estimated dimensionality increases as the product of neurons and trials. Interestingly, for a fixed number of trials, increasing the number of neurons improves the estimate of the high-dimensional embedding structure in neural space despite the fact that this estimation grows more difficult, by definition, with each neuron. While many neuroscience datasets report low-rank neural activity, a number of recent larger recordings have reported neural activity with "unbounded" dimensionality. With that motivation, we present new random matrix theory results on the distortion of singular vectors of high-rank signals due to additive noise and formulas for optimal denoising of such high-rank signals. Perhaps the most natural way to model neural data with unbounded dimensionality is with a power-law covariance spectrum. We examine the inferred dimensionality measured as the estimated power-law exponent, and surprisingly, we find that here too, under subsampling, the iso-contours of constant estimated dimensionality form approximate hyperbolas in the space of neurons and trials -- indicating a non-intuitive but very real compensation between neurons and trials, two very different experimental resources. We test these observations and verify numerical predictions on a number of experimental datasets, showing that our theory can provide a concrete prescription for the numbers of neurons and trials necessary to infer the geometry of population activity. Our work lays a theoretical foundation for experimental design in contemporary neuroscience.
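As a rough numerical companion to the abstract above (not the speaker's analysis), the sketch below simulates a low-rank neurons-by-trials matrix with additive noise and reports a common dimensionality estimate, the participation ratio of the sample covariance, for several combinations of neurons and trials. The generating rank, noise level, and choice of estimator are assumptions made only for illustration.

    import numpy as np

    def participation_ratio(X):
        # Dimensionality estimate from the covariance spectrum:
        # PR = (sum_i lambda_i)^2 / sum_i lambda_i^2.
        lam = np.clip(np.linalg.eigvalsh(np.cov(X)), 0.0, None)
        return lam.sum() ** 2 / (lam ** 2).sum()

    def simulate(n_neurons, n_trials, true_rank=10, noise_sd=1.0, seed=0):
        # Low-rank "signal" (neurons x trials) plus additive white noise.
        rng = np.random.default_rng(seed)
        U = rng.standard_normal((n_neurons, true_rank))
        V = rng.standard_normal((true_rank, n_trials))
        signal = U @ V / np.sqrt(true_rank)
        noise = noise_sd * rng.standard_normal((n_neurons, n_trials))
        return signal + noise

    for n_neurons, n_trials in [(100, 100), (100, 1000), (1000, 100), (1000, 1000)]:
        X = simulate(n_neurons, n_trials)
        print(n_neurons, n_trials, round(participation_ratio(X), 1))

Although the generating rank is fixed at 10, the estimate computed from the noisy data is biased by sampling noise and shifts with both the number of neurons and the number of trials; sweeping a grid of (neurons, trials) pairs and contouring the estimates is one way to visualize the kind of trade-off between the two experimental resources that the abstract describes.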
indicating a non-intuitive but very real compensation between neurons and trials, two very different experimental resources. We test these observations and verify numerical predictions on a number of experimental datasets, showing that our theory can provide a concrete prescription for the numbers of neurons and trials necessary to infer the geometry of population activity. Our work lays a theoretical foundation for experimental design in contemporary neuroscience. *About VVTNS: Created as the World Wide Theoretical Neuroscience Seminar (WWTNS) in November 2020 and renamed in homage to Carl van Vreeswijk in Memoriam (April 20, 2022), its aim is to be a platform to exchange ideas among theoreticians. Speakers have the occasion to talk about theoretical aspects of their work which cannot be discussed in a setting where the majority of the audience consists of experimentalists. The seminars, **held on Wednesdays at 11 am ET,** are 45-50 min long followed by a discussion. The talks are recorded with authorization of the speaker and are available to everybody on our YouTube channel.* -- 'Life is good ..' (Carl van Vreeswijk, 1962-2022) --------------------------------------- David Hansel Directeur de Recherche au CNRS Co-Group leader Cerebral Dynamics Plasticity and Learning lab., CNRS 45 rue des Saints Peres 75270 Paris Cedex 06 Tel (Cell): +33 607508403 - Fax (33).1.49.27.90.62 *CONFIDENTIALITY AND PRIVACY NOTICE:* *This message and the documents that might be attached are addressed exclusively to their recipient(s) and may contain privileged or confidential information. The access to this information by people other than those designated is not authorized. If you are not the indicated recipient, you are notified that the use, disclosure and/or copying without authorization is prohibited under current legislation. If you have received this message in error, please kindly inform the sender immediately and proceed to its destruction.* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: VVTNS.png Type: image/png Size: 41084 bytes Desc: not available URL:
From gros at itp.uni-frankfurt.de Sun Feb 18 06:42:53 2024 From: gros at itp.uni-frankfurt.de (Claudius Gros) Date: Sun, 18 Feb 2024 12:42:53 +0100 Subject: Connectionists: the AI understanding debate In-Reply-To: References: Message-ID: <32c-65d1ed00-1c1-45288700@228929908> Hi everybody, There are two interesting aspects to the current AI understanding debate on this list. (A) When ChatGPT-3 came out, more or less the same question, 'What does understanding mean?', was discussed for months on this list. At that time, a certain consensus had been reached, in my impression, namely that we do not even have a precise definition of what 'understanding' means even in the human context. Of course, different kinds of conclusions could be drawn from this insight. (B) The current debate is characterized, among other things, by the fact that contributors often do not 'understand' the premises of others, for whatever reasons. Philosophically, the question then arises: when people tend not to understand each other, is this group of people qualified in the first place to discuss AI understanding? Cheers Claudius ### ### Prof. Dr. 
Claudius Gros ### http://itp.uni-frankfurt.de/~gros ### ### Complex and Adaptive Dynamical Systems, A Primer ### A graduate-level textbook, Springer (2008/10/13/15) ### ### Life for barren exoplanets: The Genesis project ### https://link.springer.com/article/10.1007/s10509-016-2911-0 ### From gary.marcus at nyu.edu Sun Feb 18 12:26:58 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 18 Feb 2024 09:26:58 -0800 Subject: Connectionists: =?utf-8?q?=5BEXT=5DRe=3A__Fwd=3A__Statistics_vers?= =?utf-8?q?us_=E2=80=9CUnderstanding=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From steve at bu.edu Sun Feb 18 11:05:16 2024 From: steve at bu.edu (Grossberg, Stephen) Date: Sun, 18 Feb 2024 16:05:16 +0000 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: <38B1FB39-B8EC-4F09-AE83-3D3A39218E59@nyu.edu> References: <38B1FB39-B8EC-4F09-AE83-3D3A39218E59@nyu.edu> Message-ID: Models like ChatGPT have attracted a lot of interest, as the vigor of these discussions indicates, and are finding their way into many applications. This being said, they are also known to have computational problems that will limit their adoption. As several people have already noted, they do not have goals or values, and literally do not know what they are talking about. I therefore hope that some of the talent and excitement that ChatGPT and its variants have attracted will also be applied to further developing neural networks that DO have goals and values, and DO know what they are talking about. Here is one recent article that illustrates the kinds of modeling concepts and mechanisms that have clarified how humans and machines can learn to understand what they are talking about: Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult?child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full The article?s Abstract lists some of these concepts and mechanisms: ?This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning. Best, Steve Grossberg sites.bu.edu/steveg From: Connectionists on behalf of Gary Marcus Date: Sunday, February 18, 2024 at 1:08?AM To: Dietterich, Thomas Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: ChatGPT?s ?understanding? 
of maps and infographics Another relevant paper that appeared this week: https://arxiv.org/abs/2402.08955 Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models Martha Lewis, Melanie Mitchell Large language models (LLMs) have performed well on several reasoning benchmarks, including ones that test analogical reasoning abilities. However, it has been debated whether they are actually performing humanlike abstract reasoning or instead employing less general processes that rely on similarity to what has been seen in their training data. Here we investigate the generality of analogy-making abilities previously claimed for LLMs (Webb, Holyoak, & Lu, 2023). We take one set of analogy problems used to evaluate LLMs and create a set of "counterfactual" variants -- versions that test the same abstract reasoning abilities but that are likely dissimilar from any pre-training data. We test humans and three GPT models on both the original and counterfactual problems, and show that, while the performance of humans remains high for all the problems, the GPT models' performance declines sharply on the counterfactual set. This work provides evidence that, despite previously reported successes of LLMs on analogical reasoning, these models lack the robustness and generality of human analogy-making.
On Feb 17, 2024, at 05:24, Dietterich, Thomas wrote: I favor a functional definition: A system understands if it responds appropriately. However, understanding can be pointwise (i.e., it works only for specific situations) or systematic (i.e., it works across an entire "region" of situations). Current LLMs "play the odds". They can deliver pointwise understanding for frequent cases. But for rare cases, they often try to "autocorrect reality" and end up answering the wrong question. This is beautifully discussed in McCoy, R. T., Yao, S., Friedman, D., Hardy, M., & Griffiths, T. L. (2023). Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve. arXiv, 2309.13638(v1). http://arxiv.org/abs/2309.13638 I expect that systems whose internal representations are causally connected to the world will be more likely to understand systematically, but the evidence is not yet clear. In the last week, these two papers appeared: Vivian Y. Nastl, Moritz Hardt. Predictors from causal features do not generalize better to new domains. https://arxiv.org/abs/2402.09891 Jonathan Richens, Tom Everitt. Robust Agents Learn Causal World Models. ICLR 2024 https://openreview.net/forum?id=pOoKI3ouv1 --Tom Thomas G. Dietterich, Distinguished Professor (Emeritus) School of EECS, Oregon State University US Mail: 1148 Kelley Engineering Center, Corvallis, OR 97331-5501 USA Office: 2063 Kelley Engineering Center Voice: 541-737-5559; FAX: 541-737-1300 https://web.engr.oregonstate.edu/~tgd/
From: Connectionists On Behalf Of Iam Palatnik Sent: Thursday, February 15, 2024 21:26 To: Gary Marcus Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: ChatGPT's "understanding" of maps and infographics I understand why using the word 'understanding' might seem too generous when models still have the failure modes mentioned. 
Some of the failure modes (like the reversal curse) can be remedied with access to tools, external context, self-reflection prompts, but there are failures that cannot yet be remedied. I just don't know what better word to use in the sentence "GPT-4 can ___ that scrambled text better than I can". 'Understand' just flows very naturally in how we commonly use this word, even if it turns out that what GPT-4 is doing is shallower or less general than what my brain is doing. 'Parse' or 'process' doesn't seem enough because the scrambled text contains an instruction and GPT-4 does follow through with it. What word should we use for this? On Thu, Feb 15, 2024 at 12:20?PM Gary Marcus > wrote: Selectively looking at a single example (which happens to involve images) and ignoring all the other language-internal failures that I and others have presented is not a particularly effective way of getting to a general truth. More broadly, you are, in my judgement, mistaking correlation for a deeper level of understanding. Gary On Feb 15, 2024, at 07:05, Iam Palatnik > wrote: ? Dear all, yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets itcyxellpi that oyu uderdnoost eht gsaninesmt Copy pasting just the above paragraph onto GPT-4 should show the kind of behavior that makes some researchers say LLMs understand something, in some form. We already use words such as 'intelligence' in AI and 'learning' in ML. This is not to say it's the same as human intelligence/learning. It is to say it's a similar enough behavior that the same word fits, while specifically qualifying the machine word-counterpart as something different (artificial/machine). Can this debate be solved by coining a concept such as 'artificial/machine understanding'? GPT-4 then 'machine understands' the paragraph above. It 'machine understands' arbitrary scrambled text better than humans 'human understand' it. Matrix multiplying rotational semantic embeddings of byte pair encoded tokens is part of 'machine understanding' but not of 'human understanding'. At the same time, there are plenty of examples of things we 'human understand' and GPT-4 doesn't 'machine understand', or doesn't understand without tool access and self reflective prompts. As to the map generation example, there are multiple tasks overlaid there. The language component of GPT-4 seems to have 'machine understood' it has to generate an image, and what the contents of the image have to be. It understood what tool it has to call to create the image. The tool generated an infograph style map of the correct country, but the states and landmarks are wrong. The markers are on the wrong cities and some of the drawings are bad. Is it too far fetched to say GPT-4 'machine understood' the assignment (generating a map with markers in the style of infograph), but its image generation component (Dall-E) is bad at detailed accurate geography knowledge? I'm also confused why the linguistic understanding capabilities of GPT-4 are being tested by asking Dall-E 3 to generate images. Aren't these two completely separate models, and GPT-4 just function-calls Dall-E3 for image generation? 
Isn't this actually a sign GPT-4 did its job by 'machine understanding' what the user wanted, making the correct function call, creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled it because it's not good at generating detailed, accurate maps? Cheers, Iam
On Thu, Feb 15, 2024 at 5:20 AM Gary Marcus wrote: I am having a genuinely hard time comprehending some of the claims recently made in this forum. (Not one of which engaged with any of the specific examples or texts I linked.) Here's yet another example, a dialog about geography that was just sent to me by entrepreneur Phil Libin. Do we really want to call outputs like these (to two prompts, with three generated responses zoomed in below) understanding? In what sense do these responses exemplify the word "understanding"? I am genuinely baffled. To me a better word would be "approximations", and poor approximations at that. Worse, I don't see any AI system on the horizon that could reliably do better, across a broad range of related questions. If these kinds of outputs are any indication at all, we are still a very long way from reliable general-purpose AI. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL:
From poole at cs.ubc.ca Sun Feb 18 12:24:44 2024 From: poole at cs.ubc.ca (poole) Date: Sun, 18 Feb 2024 09:24:44 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Thanks Gary. These are both worthwhile reading! I don't think symbolic = logic. McCulloch and Pitts were interested in representing logical operations. "Symbolic" follows the tradition of Hobbes (1588-1679), who claimed that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper [see Haugeland, J. Artificial Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon [1976] defined a symbol as a meaningful pattern that can be manipulated. A symbol system creates, copies, modifies, and destroys symbols. Graphical models and belief networks typically have symbolic random variables. It is very common for modern neural networks to have symbolic inputs or outputs, e.g., words, knowledge graphs, molecular structure, game moves, ... I don't think Gary would disagree that there needs to be some non-symbols (e.g., hidden units in neural networks). Arguments for symbols -- the most compelling one for me is that organizations (which are much more intelligent than individuals) reason in terms of symbols (words, diagrams, spreadsheets) -- are not diminished by the need for non-symbols. David (references from artint.info, where many of these issues are discussed). > On Feb 17, 2024, at 5:42 PM, Gary Marcus wrote: > > [CAUTION: Non-UBC Email] adding some pointers to David's remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It's clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. > > I don't have a clear sense of where things were in Turing's time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference, link below; Rosenblatt had gathered steam by 1958 as noted. 
> > https://web.archive.org/web/20070826230310/http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html > > > > https://home.csulb.edu/~cwallis/382/readings/482/mccolloch.logical.calculus.ideas.1943.pdf > > > >> On Feb 17, 2024, at 4:40?PM, poole wrote: >> >> ? >>> On Feb 17, 2024, at 1:08?PM, David H Kirshner wrote: >>> >>> [CAUTION: Non-UBC Email] >>> You?re right, David. >>> I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? >> >> In Turing's time, from what I can see (I wan?t alive then ;^) neural networks were more trendy than symbolic approaches. Turing?s paper was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis on neural networks was written in 1952. (Schmidhuber has great resources on the history of NNs and AI on his website). >> >> There was lots of neural network hype in the 1950?s: >> >> "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. ?The service said it would ?build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." >> ? New York Times [1958] >> >> It was later in the 1950?s that they came to realize that AI needed representations, lead by Minsky and McCarthy, whick lead to the rise of symbolic approaches.. (It is interesting that a major NN conference ICLR is about representations). >> >> I am sure there are people who know the history better than me, who might like to provide more persoective. >> >> David >> >> >>> ?? >>> David Poole, >>> Department of Computer Science, >>> University of British Columbia, >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cs.ubc.ca_-7Epoole&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=GaLFSysAmNesHoiuLMLNQzFZKcQfTn2lkEhxM8Xrc205FjMs-5qx1lSBZ4u9kagl&s=XFNzK3B_BUrmI0gbNaacNriRQ53tTCzjlNaL2JBAmRg&e= >>> poole at cs.ubc.ca >> >> From gary.marcus at nyu.edu Sun Feb 18 12:30:44 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 18 Feb 2024 09:30:44 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: <23CC1A62-C074-43F6-8415-910DDC3DCF1E@nyu.edu> - certainly in broad agreement; never argued that symbols alone were sufficient. - have always since my first paper in 1992 advocated for hybrid models, with both associative and symbolic-components - as argued in my 2001 book, i think the real question is not whether we need symbols per se (which localist output units represent) but rather AI/models of human cognition require algebraic operations over variables in order to be robust, in addition to other mechanisms. > On Feb 18, 2024, at 09:25, poole wrote: > > ?Thanks Gary. > These are both worthwhile reading! > > I don?t think symbolic = logic. McCulloch and Pitts were interested in representing logical operations. > > ?Symbolic" follows the tradition of Hobbes (1588?1679) who claimed that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper [see Haugeland, J. Artificial Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon [1976] defined a symbol is a meaningful pattern that can be manipulated. A symbol system creates, copies, modifies, and destroys symbols. > > Graphical models and believe networks typically have symbolic random variables. 
> > It is very common for modern neural networks to have symbolic inputs or outputs, e.g., words, knowledge graphs, molecular structure, game moves,? > > I don?t think Gary would disagree that there needs to be some non-symbols (e.g, hidden units in neural networks). > > Arguments for symbols ? the most compelling one for me is that organizations (which are much more intelligent than individuals) reason in terms of symbols (words, diagrams, spreadsheets) ? are not diminished by the need for non-symbols. > > David > (references from artint.info where many of these issues are discussed). > >> On Feb 17, 2024, at 5:42?PM, Gary Marcus wrote: >> >> [CAUTION: Non-UBC Email]adding some pointers to David?s remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It?s clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. >> >> I don?t have a clear sense of where things were in Turing?s time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference link below; Rosenblatt had gathered steam by 1958 as noted. >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__web.archive.org_web_20070826230310_http-3A__www-2Dformal.stanford.edu_jmc_history_dartmouth_dartmouth.html&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=qB3xwLa8UE9Dg_o7t21jk3miiMP1icWlWBL_XGQ49-Q&e= >> >> >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__home.csulb.edu_-7Ecwallis_382_readings_482_mccolloch.logical.calculus.ideas.1943.pdf&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=6jCRjchELvEA_Dzv8hAwzcYSMdGNQiQNU_z3DO6ddp4&e= >> >> >> >>>> On Feb 17, 2024, at 4:40?PM, poole wrote: >>> >>> ? >>>> On Feb 17, 2024, at 1:08?PM, David H Kirshner wrote: >>>> >>>> [CAUTION: Non-UBC Email] >>>> You?re right, David. >>>> I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? >>> >>> In Turing's time, from what I can see (I wan?t alive then ;^) neural networks were more trendy than symbolic approaches. Turing?s paper was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis on neural networks was written in 1952. (Schmidhuber has great resources on the history of NNs and AI on his website). >>> >>> There was lots of neural network hype in the 1950?s: >>> >>> "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. ?The service said it would ?build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." >>> ? New York Times [1958] >>> >>> It was later in the 1950?s that they came to realize that AI needed representations, lead by Minsky and McCarthy, whick lead to the rise of symbolic approaches.. (It is interesting that a major NN conference ICLR is about representations). >>> >>> I am sure there are people who know the history better than me, who might like to provide more persoective. >>> >>> David >>> >>> >>>> ?? 
>>>> David Poole, >>>> Department of Computer Science, >>>> University of British Columbia, >>>> https://cs.ubc.ca/~poole >>>> poole at cs.ubc.ca >>> >>> >
From fernando.perezcruz at sdsc.ethz.ch Sun Feb 18 02:31:48 2024 From: fernando.perezcruz at sdsc.ethz.ch (Fernando Perez-Cruz) Date: Sun, 18 Feb 2024 08:31:48 +0100 Subject: Connectionists: [EXT]Re: Fwd: Statistics versus "Understanding" in Generative AI. In-Reply-To: References: Message-ID: Gary, Here is another short piece that I recently co-wrote to show the limitations of LLMs: https://www.bis.org/publ/bisbull83.htm. It shows very clearly how they memorize and cannot reason when given a math puzzle. Best, Fernando --- Prof. Fernando Perez-Cruz Titular Professor at the Computer Science Department (ETH Zürich) https://scholar.google.com/citations?view_op=list_works&hl=en&hl=en&user=8FfrHw0AAAAJ
> On Feb 18, 2024, at 00:55, Gary Marcus wrote: > > GPT-4 is surely not reliably accurate; I gave many examples and pointed to Mitchell's recent work as an example. The two empirical examinations Tom pointed to also note inaccuracies, as do many studies by Yejin Choi, Subbarao Kambhampati, and so on. > > OpenAI regularly updates their systems with undisclosed mechanisms and addresses some publicly discussed examples. (It is also stochastic, so even on questions it gets right it sometimes makes errors.) > > But their patches are rarely if ever robust; it is absurd to suggest that GPT-4 is accurate, full stop, with no qualification. > >> On Feb 17, 2024, at 3:47 PM, Fellous, Jean-Marc - (fellous) wrote: >> >> Hi Gary, >> >> Yes, I meant the GPT-4 output. If I understand your answer, it is accurate. >> Then my point is that whether GPT "understands" or fakes an understanding may be secondary to the fact that it made us understand (to some good? extent) your point of view on AI. It may not "have" an understanding, but it generates one. >> That is amazing. >> What would be good now is if there were a (relatively safe) feedback mechanism that would allow for a (human) reinforcement of whatever led to that output. Moving from a purely feedforward GPT (query->answer) to a recurrent one (answer + human->GPT inner workings)? >> >> Best, >> Jean-Marc >> >> From: Gary Marcus >> Sent: Friday, February 16, 2024 9:16 AM >> To: Fellous, Jean-Marc - (fellous) >> Cc: Terry Sejnowski ; connectionists at cs.cmu.edu >> Subject: Re: [EXT]Re: Connectionists: Fwd: Statistics versus "Understanding" in Generative AI. >> >> External Email >> >> On your first question: Is what accurate? There is no substantive claim to be tested in Sejnowski's schoolyard taunt. >> >> As for my position on GPT-4, one place to start: all my predictions about GPT-4 were published before its public release (and before I had access); I think all were correct; I don't think I revised any. You can read them here: https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting >> >> >> On Feb 16, 2024, at 08:10, Fellous, Jean-Marc - (fellous) wrote: >> >> >> Is it accurate though? >> Some of us sure could use a concise summary of everyone's position/opinion! 
>> Thanks, >> Jean-Marc >> >> >> >> From: Connectionists > on behalf of Gary Marcus > >> Sent: Friday, February 16, 2024 5:11 AM >> To: Terry Sejnowski > >> Cc: connectionists at cs.cmu.edu > >> Subject: [EXT]Re: Connectionists: Fwd: Statistics versus ?Understanding? in Generative AI. >> >> External Email >> >> Always with the childish ad hominem. >> >> This field is something else. >> >> >> On Feb 15, 2024, at 11:09?PM, Terry Sejnowski > wrote: >> >> ? GPT-4 is a moving target that may eventually replace Gary. >> >> Terry >> >> ------- >> >> <3ttu1d7S0cx4Qvpk.png> >> >> >> >> The request you've made is similar to one I've already fulfilled. The image generated earlier depicts an empty room without any elephant present, adhering to your specific instructions. If you need any further adjustments or have another request, feel free to let me know! >> ------ >> >> >> >> Write a critique of ChatGPT in the style of Gary Marcus >> >> ChatGPT >> Writing a critique of ChatGPT in the style of Gary Marcus, a scientist known for his clear-eyed scrutiny of artificial intelligence's capabilities and limitations, presents an intriguing challenge. Marcus, with his deep understanding of both the potential and pitfalls of AI, often emphasizes the importance of grounding AI advancements in robust, generalized learning principles, rather than overreliance on data and statistical patterns. >> >> Critique of ChatGPT in the Style of Gary Marcus: >> >> ChatGPT, while a marvel of contemporary AI technology, epitomizes the quintessential limitations that besiege deep learning models. At its core, ChatGPT is a linguistic colossus, built upon the edifice of the transformer architecture, yet it is fundamentally constrained by the inherent limitations of its training methodology and underlying principles. The brilliance of generating coherent, contextually appropriate responses is overshadowed by its occasional lapses into confabulation, a glaring testament to its lack of true understanding. >> >> Gary Marcus would likely point out that ChatGPT, for all its linguistic dexterity, lacks the nuanced grasp of the world that humans possess. It can mimic the form of human knowledge and discourse, but it doesn't truly comprehend the content in the way a human does. This distinction is crucial; it is the boundary between a sophisticated parrot and an entity capable of genuine thought and understanding. ChatGPT's responses, no matter how convincing, are ultimately derived from patterns in data it has been fed, not from an intrinsic understanding of those patterns. >> >> Furthermore, Marcus might criticize ChatGPT's reliance on sheer volume of data as a substitute for a more nuanced, rule-based understanding of language and the world. This data-driven approach can lead to impressive performances in many scenarios but falls short when faced with novel situations or when required to apply common sense reasoning that hasn't been explicitly encoded in its training data. The model's occasional incoherence or its propensity to generate plausible but factually incorrect responses are symptomatic of this deeper issue. >> >> Another point of critique would likely focus on the opacity of ChatGPT's decision-making process. Despite recent advances in making AI models more interpretable, understanding how ChatGPT arrives at a particular response remains a challenge. This opacity complicates efforts to debug or improve the model, especially when it produces erroneous or nonsensical outputs. 
For AI to truly advance, Marcus might argue, it must not only perform well but do so in a manner that is transparent and understandable to human overseers. >> >> In the spirit of Marcus's advocacy for a hybrid approach to AI development, one could argue that integrating ChatGPT's statistical prowess with more structured, rule-based systems could ameliorate some of its shortcomings. By grounding the model's responses in a framework of logical reasoning and factual correctness, one might achieve a more reliable, genuinely intelligent system that better approximates human cognition. >> >> In conclusion, while ChatGPT represents a significant achievement in natural language processing, its limitations underscore the need for a more comprehensive approach to AI. An approach that incorporates the strengths of both deep learning and symbolic AI, coupled with a commitment to transparency and understandability, may pave the way for the development of AI systems that are not only more capable but also more aligned with the nuanced complexities of human intelligence and reasoning. >> >> >> >> -------- Forwarded Message -------- >> Subject: >> Connectionists: Statistics versus ?Understanding? in Generative AI. >> Date: >> Tue, 13 Feb 2024 09:54:48 -0800 >> From: >> Gary Marcus >> To: >> Weng, Juyang >> CC: >> connectionists at mailman.srv.cs.cmu.edu >> >> ... >> >> A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves. >> >> Gary >> >> >> >> >> Gary >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at msu.edu Sun Feb 18 12:46:08 2024 From: weng at msu.edu (Weng, Juyang) Date: Sun, 18 Feb 2024 17:46:08 +0000 Subject: Connectionists: IEEE CDS TC Newsletters Vol. 18, No. 1, 2024 Message-ID: Dear All, This new issue is very related to the current Deep Learning and ChatGPT debate on this email list. CDS TC Newsletters Vol. 18, No. 1, 2024 IEEE CDS NEWSLETTERS Volume 18, Number 1 ISSN 1550-1914 February 2024 Development of Natural and Artificial Intelligence Contents 1 [Deep Learning] Dialogue: My Understanding of Post-Selection Misconduct in Deep Learning 2 [Deep Learning] Dialogue: Validation Error with Post-Selection Present is Downward Biased for Test Error 3 [Deep Learning] Dialogue: The Luckiest Network on Validation Performs Average During Tests 4 [Deep Learning] Dialogue Summary: Is ?Deep Learning? Misconduct and What Should Researchers Do? 5 [Post-Selection] Dialogue Initiation: Is Post-Selection Generalizable? 6 Technical News: AI and Brain-Computer Interfaces 7 IEEE TCDS Table of Contents Supplement to ?Dialogue: Validation Error with Post-Selection Present is Downward Biased for Test Error? An Intuitive View of Hongxiang Qiu?s Dialogue: ?Validation Error with Post-Selection Present is Downward Biased for Test Error? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Sun Feb 18 11:24:01 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 18 Feb 2024 08:24:01 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= Message-ID: An HTML attachment was scrubbed... 
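To make the post-selection claim in this newsletter concrete, here is a minimal simulation sketch (not from the newsletter itself; the number of candidate networks, their true error rate, and the set sizes are arbitrary illustrative assumptions). It shows why reporting the validation error of the "luckiest" of several equally good networks understates that network's test error:

    import random

    def simulate(n_models=20, n_val=200, n_test=200, true_err=0.3, reps=2000):
        random.seed(0)
        sum_val, sum_test = 0.0, 0.0
        for _ in range(reps):
            # Every candidate network has the same true error rate; the observed
            # validation errors fluctuate only because the validation set is finite.
            val_errs = [sum(random.random() < true_err for _ in range(n_val)) / n_val
                        for _ in range(n_models)]
            best = min(range(n_models), key=lambda i: val_errs[i])
            # Fresh test data for the selected ("luckiest") network.
            test_err = sum(random.random() < true_err for _ in range(n_test)) / n_test
            sum_val += val_errs[best]
            sum_test += test_err
        return sum_val / reps, sum_test / reps

    val, test = simulate()
    print(f"validation error of the selected network: {val:.3f}")
    print(f"test error of the selected network:       {test:.3f}")

With these assumed numbers the selected network's validation error comes out noticeably below its test error, which stays near the true rate; that gap is the downward bias the dialogue titles refer to, and it grows with the number of candidates selected over.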
URL: From dkirsh at lsu.edu Sun Feb 18 00:43:40 2024 From: dkirsh at lsu.edu (David H Kirshner) Date: Sun, 18 Feb 2024 05:43:40 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: My assumption that the Turing test was based on the possibility of symbolic AI stems from my understanding of the Universal Turing Machine which I?ve always taken to be foundational to symbolic processing. But I don?t know for sure that Turing?s scope wasn?t larger than symbolic processing. David From: Gary Marcus Sent: Saturday, February 17, 2024 7:43 PM To: poole Cc: David H Kirshner ; Iam Palatnik ; connectionists at mailman.srv.cs.cmu.edu Subject: Early history of symbolic and neural network approaches to AI You don't often get email from gary.marcus at nyu.edu. Learn why this is important adding some pointers to David?s remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It?s clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. I don?t have a clear sense of where things were in Turing?s time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference link below; Rosenblatt had gathered steam by 1958 as noted. A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE web.archive.org [X] [preview.png] mccolloch.logical.calculus.ideas.1943 PDF Document ? 1 MB On Feb 17, 2024, at 4:40?PM, poole > wrote: ? On Feb 17, 2024, at 1:08?PM, David H Kirshner > wrote: [CAUTION: Non-UBC Email] You?re right, David. I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? In Turing's time, from what I can see (I wan?t alive then ;^) neural networks were more trendy than symbolic approaches. Turing?s paper was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis on neural networks was written in 1952. (Schmidhuber has great resources on the history of NNs and AI on his website). There was lots of neural network hype in the 1950?s: "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. ?The service said it would ?build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." ? New York Times [1958] It was later in the 1950?s that they came to realize that AI needed representations, lead by Minsky and McCarthy, whick lead to the rise of symbolic approaches.. (It is interesting that a major NN conference ICLR is about representations). I am sure there are people who know the history better than me, who might like to provide more persoective. David ?? David Poole, Department of Computer Science, University of British Columbia, https://urldefense.proofpoint.com/v2/url?u=https-3A__cs.ubc.ca_-7Epoole&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=GaLFSysAmNesHoiuLMLNQzFZKcQfTn2lkEhxM8Xrc205FjMs-5qx1lSBZ4u9kagl&s=XFNzK3B_BUrmI0gbNaacNriRQ53tTCzjlNaL2JBAmRg&e= poole at cs.ubc.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 27510 bytes Desc: image002.png URL: From gary.marcus at nyu.edu Sun Feb 18 10:04:33 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 18 Feb 2024 07:04:33 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 27510 bytes Desc: not available URL: From jorgecbalmeida at gmail.com Sun Feb 18 16:27:55 2024 From: jorgecbalmeida at gmail.com (Jorge Almeida) Date: Sun, 18 Feb 2024 21:27:55 +0000 Subject: Connectionists: PhD positions for joint applications at the Faculty of Psychology, University of Coimbra, Portugal, to work with Alfonso Caramazza and Jorge Almeida Message-ID: The Faculty of Psychology and Educational Sciences of the University of Coimbra Portugal (FPCE-UC) seeks rising and enthusiastic researchers in Cognitive Science and Cognitive Neuroscience for a joint application to a doctoral fellowship funded by the Foundation for Science and Technology, Portugal to work with Alfonso Caramazza and Jorge Almeida. These positions are part of a transformative ERA Chair grant CogBooster from the European Union to FPCE-UC led by Alfonso Caramazza . The goal of CogBooster is to implement a strong and international line of research in Basic Cognitive Science and Cognitive Neuroscience to contribute to the ongoing renewal of the Psychological Sciences in Portugal over the next decade. *Positions:* As part of this expansion and renewal, we are searching for doctoral students with expertise related to the following areas: ? Lexical processing, visual object recognition, reading, or action recognition. ? Visual object recognition, in particular on how object knowledge is organized and represented neurally and cognitively, using fMRI (potentially with ultra-high field MRI), and population receptive field analysis/connective field modeling (or other more computational methods) to work on topics related to the following preprint: https://www.biorxiv.org/content/10.1101/2023.11.29.568856v1 The selected applicants will work directly with Alfonso Caramazza and Jorge Almeida. They will be based in Coimbra. *Qualifications:* Applicants should have a Master?s degree in Cognitive Science, Psychology or related field. They should also have their Master?s and BA diploma recognized in Portugal at the time of application (diploma recognition could be requested here ). The start date is September 2024. The successful applicant should have a record of research for their career level, and meet, or show promise of meeting, the following qualifications: ? Research potential and experience within the areas mentioned? ? Proficiency in English, both written and spoken (speaking Portuguese is not necessary); ? Good written and spoken communication skills; *Joint Application and funding:* Applications will first be evaluated by Alfonso Caramazza and Jorge Almeida. The selected applicants will joint apply with Alfonso Caramazza and Jorge Almeida to a doctoral fellowship from FCT, Portugal. This includes co-writing a project. We have been extremely successful in the past in getting these FCT doctoral fellowships. The salary is competitive for Portuguese standards ? 1250 euro per month net value. The fellowship will also include tuition, and will be for 4 years. 
*About FPCE-UC and Coimbra:* The University of Coimbra is a 700-year-old University and is a UNESCO world Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to beaches and mountains. The Faculty of Psychology and Educational Sciences has been consistently ranked as the best, or one of the best Psychology Departments in Portugal. In the last decade it has become the leading department in Portugal on Psychological research, holding, for instance the only ERC grants in Psychology in Portugal. FPCE-UC has a major laboratory for Cognitive Science and Cognitive Neuroscience research ? the Proaction Lab . We have access locally to two 3T MRI scanners, and access to one 7T MRI remotely, to tDCS, to a 256 channel EEG, and to a fully set behavioral lab. *Application Instructions:* All candidates should submit the following documents in English: (1) a curriculum vitae; (2) a motivation letter describing their interest in the position, and their track record; (3) at least two letters of reference submitted before the application deadline. Full consideration will be given to applications received by March 18, 2024 (but please contact us ASAP). If you are interested in jointly applying for a doctoral fellowship with us, or know of any suitable applicants in your network of colleagues and former students who may be suitable for any of the positions, please direct them to the advertisement. *Equal Employment Opportunity statement:* The University of Coimbra is an equal opportunity/affirmative action employer and has a Gender Equality Plan in place. We are committed to fostering a diverse and inclusive academic global community at the University. We particularly encourage applications from women, and from other under-represented groups in the University of Coimbra?s workforce and in the cognitive and brain sciences. *Further information:* If you want to know more about the positions you can informally contact Alfonso Caramazza (caram at wjh.harvard.edu) and Jorge Almeida ( jorgecbalmeida at gmail.com). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgecbalmeida at gmail.com Sun Feb 18 16:29:18 2024 From: jorgecbalmeida at gmail.com (Jorge Almeida) Date: Sun, 18 Feb 2024 21:29:18 +0000 Subject: Connectionists: 2 Pre-doctoral Research Assistant positions at the Faculty of Psychology, University of Coimbra, Portugal, to work with Alfonso Caramazza and Jorge Almeida Message-ID: The Faculty of Psychology and Educational Sciences of the University of Coimbra Portugal (FPCE-UC) invites applications from rising and enthusiastic researchers in Cognitive Science and Cognitive Neuroscience for 2 Pre-doctoral Research Assistant positions to work with Alfonso Caramazza and Jorge Almeida. These positions are part of a transformative ERA Chair grant CogBooster from the European Union to FPCE-UC led by Alfonso Caramazza . The goal of CogBooster is to implement a strong and international line of research in Basic Cognitive Science and Cognitive Neuroscience to contribute to the ongoing renewal of the Psychological Sciences in Portugal over the next decade. *Positions:* As part of this expansion and renewal, we are searching for pre-doctoral research assistant applicants with expertise related to the following areas: ? 1 position in anyone of the following areas: lexical processing, visual object recognition, reading, or action recognition. ? 
1 position on visual object recognition, in particular on how object knowledge is organized and represented neurally and cognitively, using fMRI (potentially with ultra-high field MRI), and population receptive field analysis/connective field modeling (or other more computational methods) to work on topics related to the following preprint: https://www.biorxiv.org/content/10.1101/2023.11.29.568856v1 The selected applicants will work directly with Alfonso Caramazza and Jorge Almeida. They will be based in Coimbra. *Qualifications:* Applicants should have a Master?s degree in Cognitive Science, Psychology or related- field. They should also have their Master?s (and potentially BA) diploma recognized in Portugal at the time of the signing of the contract (diploma recognition could be requested here ). The start date is negotiable (but should be no later than fall 2024). The successful applicant should have a record of research for their career level, and meet, or show promise of meeting, the following qualifications: ? Research potential and experience within the areas mentioned? ? Proficiency in English, both written and spoken (speaking Portuguese is not necessary); ? Good written and spoken communication skills; *Offer:* The salary is competitive for Portuguese standards ? 1250 euro per month net value. The position will be for a maximum of 1 year, potentially renewable. *About FPCE-UC and Coimbra:* The University of Coimbra is a 700-year-old University and is a UNESCO world Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to beaches and mountains. The Faculty of Psychology and Educational Sciences has been consistently ranked as the best, or one of the best Psychology Departments in Portugal. In the last decade it has become the leading department in Portugal on Psychological research, holding, for instance the only ERC grants in Psychology in Portugal. FPCE-UC has a major laboratory for Cognitive Science and Cognitive Neuroscience research ? the Proaction Lab . We have access locally to two 3T MRI scanners, and access to one 7T MRI remotely, to tDCS, to a 256 channel EEG, and to a fully set behavioral lab. *Application Instructions:* All candidates should submit the following documents in English: (1) a curriculum vitae; (2) a motivation letter describing their interest in the position, and their track record; (3) at least two letters of reference submitted before the application deadline. Full consideration will be given to applications received by March 31, 2024. If you are interested in applying for one of the positions or know of any suitable applicants in your network of colleagues and former students who may be suitable for any of the positions, please direct them to the advertisement. *Equal Employment Opportunity statement:* The University of Coimbra is an equal opportunity/affirmative action employer and has a Gender Equality Plan in place. We are committed to fostering a diverse and inclusive academic global community at the University. We particularly encourage applications from women, and from other under-represented groups in the University of Coimbra?s workforce and in the cognitive and brain sciences. *Further information:* If you want to know more about the positions you can informally contact Alfonso Caramazza (caram at wjh.harvard.edu) and Jorge Almeida ( jorgecbalmeida at gmail.com). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tgd at oregonstate.edu Sun Feb 18 20:07:14 2024 From: tgd at oregonstate.edu (Dietterich, Thomas) Date: Mon, 19 Feb 2024 01:07:14 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Newell and Simon loved symbols but were not at all interested in logic (except for showing that a symbol system could prove logic theorems). ________________________________ From: Connectionists on behalf of poole Sent: Sunday, February 18, 2024 10:54:44 PM To: Gary Marcus Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [You don't often get email from poole at cs.ubc.ca. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ] [This email originated from outside of OSU. Use caution with links and attachments.] [This email originated from outside of OSU. Use caution with links and attachments.] Thanks Gary. These are both worthwhile reading! I don?t think symbolic = logic. McCulloch and Pitts were interested in representing logical operations. ?Symbolic" follows the tradition of Hobbes (1588?1679) who claimed that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper [see Haugeland, J. Artificial Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon [1976] defined a symbol is a meaningful pattern that can be manipulated. A symbol system creates, copies, modifies, and destroys symbols. Graphical models and believe networks typically have symbolic random variables. It is very common for modern neural networks to have symbolic inputs or outputs, e.g., words, knowledge graphs, molecular structure, game moves,? I don?t think Gary would disagree that there needs to be some non-symbols (e.g, hidden units in neural networks). Arguments for symbols ? the most compelling one for me is that organizations (which are much more intelligent than individuals) reason in terms of symbols (words, diagrams, spreadsheets) ? are not diminished by the need for non-symbols. David (references from artint.info where many of these issues are discussed). > On Feb 17, 2024, at 5:42?PM, Gary Marcus wrote: > > [CAUTION: Non-UBC Email]adding some pointers to David?s remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It?s clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. > > I don?t have a clear sense of where things were in Turing?s time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference link below; Rosenblatt had gathered steam by 1958 as noted. 
> > https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fweb.archive.org%2Fweb%2F20070826230310%2Fhttp%3A%2F%2Fwww-formal.stanford.edu%2Fjmc%2Fhistory%2Fdartmouth%2Fdartmouth.html&data=05%7C02%7Ctgd%40oregonstate.edu%7Cd412f007008648673abc08dc30c86fe0%7Cce6d05e13c5e4d6287a84c4a2713c113%7C0%7C0%7C638438884630109866%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=V77gzH7FxUIj74wtdVgImHnQWj0yTL1vQtK51hM3Trs%3D&reserved=0 > > > > https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fhome.csulb.edu%2F~cwallis%2F382%2Freadings%2F482%2Fmccolloch.logical.calculus.ideas.1943.pdf&data=05%7C02%7Ctgd%40oregonstate.edu%7Cd412f007008648673abc08dc30c86fe0%7Cce6d05e13c5e4d6287a84c4a2713c113%7C0%7C0%7C638438884630117743%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=cPCX9dR837n6ZtI43%2B83mbsEufto0x8fQyyJSSx1zuM%3D&reserved=0 > > > >> On Feb 17, 2024, at 4:40?PM, poole wrote: >> >> ? >>> On Feb 17, 2024, at 1:08?PM, David H Kirshner wrote: >>> >>> [CAUTION: Non-UBC Email] >>> You?re right, David. >>> I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? >> >> In Turing's time, from what I can see (I wan?t alive then ;^) neural networks were more trendy than symbolic approaches. Turing?s paper was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis on neural networks was written in 1952. (Schmidhuber has great resources on the history of NNs and AI on his website). >> >> There was lots of neural network hype in the 1950?s: >> >> "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. ?The service said it would ?build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." >> ? New York Times [1958] >> >> It was later in the 1950?s that they came to realize that AI needed representations, lead by Minsky and McCarthy, whick lead to the rise of symbolic approaches.. (It is interesting that a major NN conference ICLR is about representations). >> >> I am sure there are people who know the history better than me, who might like to provide more persoective. >> >> David >> >> >>> ?? >>> David Poole, >>> Department of Computer Science, >>> University of British Columbia, >>> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Furldefense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttps-3A__cs.ubc.ca_-7Epoole%26d%3DDwIFaQ%26c%3DslrrB7dE8n7gBJbeO0g-IQ%26r%3DwQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ%26m%3DGaLFSysAmNesHoiuLMLNQzFZKcQfTn2lkEhxM8Xrc205FjMs-5qx1lSBZ4u9kagl%26s%3DXFNzK3B_BUrmI0gbNaacNriRQ53tTCzjlNaL2JBAmRg%26e%3D&data=05%7C02%7Ctgd%40oregonstate.edu%7Cd412f007008648673abc08dc30c86fe0%7Cce6d05e13c5e4d6287a84c4a2713c113%7C0%7C0%7C638438884630123191%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=e7pAt3h2BkzgkJmdc8qD1lFsQIrWhq%2BF6VX5l2P8yiw%3D&reserved=0 >>> poole at cs.ubc.ca >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Sun Feb 18 17:36:30 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Sun, 18 Feb 2024 14:36:30 -0800 Subject: Connectionists: =?utf-8?b?Q2hhdEdQVOKAmXMg4oCcdW5kZXJzdGFuZGlu?= =?utf-8?q?g=E2=80=9D_of_maps_and_infographics?= In-Reply-To: References: Message-ID: <9B771196-683D-4804-812C-2D4F61229C62@nyu.edu> An HTML attachment was scrubbed... URL: From diochnos at ou.edu Sun Feb 18 21:06:54 2024 From: diochnos at ou.edu (Diochnos, Dimitrios) Date: Mon, 19 Feb 2024 02:06:54 +0000 Subject: Connectionists: Assistant Professor of Computer Science - Hybrid Modeling Message-ID: <721D2683-056C-4235-AF5D-A9A7EE661F14@ou.edu> University of Oklahoma Norman Campus: Gallogly College of Engineering: School of Computer Science Location: Norman, Oklahoma, USA Description As part of a sustained, multiyear, strategic growth initiative, the School of Computer Science (CS; https://www.ou.edu/coe/cs) in the Gallogly College of Engineering (GCoE; https://www.ou.edu/coe/cs) at the University of Oklahoma (OU; https://www.ou.edu) seeks applications for a tenure-track assistant professor position in the area of hybrid machine-learning/first-principles modeling with a targeted start date of Fall 2024. We particularly seek candidates whose core expertise will allow OU to synergistically leverage existing first-principles modeling strengths across GCoE as well as existing strengths in machine learning and data mining in CS. We further seek candidates whose research, teaching, and service have prepared them to be integral contributors to the advancement of our welcoming community. Candidates who fill this position should look to support OU's strategic research verticals (https://www.ou.edu/research-norman/centers-institutes/strategic-vertical-institutes) and GCoE's strategic plan (https://ou.edu/content/dam/CoE/Pictures/GCoE%20Strategic%20Plan%202020-25%20Update%2007-27-21.pdf), and be prepared to engage in collaborative research. We also welcome applicants whose research interests support artificial intelligence (AI) X and computational-X. The University of Oklahoma (OU) is a Carnegie-R1 comprehensive public research university known for excellence in teaching, research, and community engagement, serving the educational, cultural, economic, and healthcare needs of the state, region, and nation from three campuses: Norman, the Health Sciences Center in Oklahoma City, and the Schusterman Center in Tulsa. OU enrolls over 32,000 students and has more than 2,600 full-time faculty members in 21 colleges. The School of Computer Science is growing rapidly in size and impact. As of Spring 2024, it has 26 faculty (https://www.ou.edu/coe/cs/people) from a wide variety of backgrounds and more than 950 students; features student-centric initiatives; has vibrant research and teaching programs aligned with OU's strategic plan (https://www.ou.edu/leadon); and is a key partner in OU's world-class Data Science and Analytics Institute (https://www.ou.edu/coe/dsai). The University of Oklahoma invests in its faculty by providing support and resources through our Center for Faculty Excellence and Vice President for Research and Partnerships Office. The Gallogly College of Engineering supports faculty development through mentoring, new faculty orientation series, and access to research and educational supports. We encourage candidates to apply who are seeking to work in a rapidly growing college with collegial interdisciplinary groups and strong academic units. Qualifications REQUIRED: - Ph.D. 
in Computer Science or a related field. - Able to effectively conduct and lead research. - Able to effectively form research collaborations. - Able to effectively teach computer science courses at all levels. - Able to effectively advise M.S. and Ph.D. students. PREFERENCES (NOT REQUIRED): - Demonstrated passion for teaching as well as research. - Evidence of excellent and innovative instruction. - History of innovative research projects. - Demonstrated ethos of care and support of students. - Evidence of effective mentorship and guidance of research-active students. - History of commitment to supportive environments in the classroom, office, and lab. Application Instructions Applicants should submit a cover letter explaining the applicant's interest in all aspects of the position, current C.V., 2-3 scholarly articles that best represent the applicant's research contributions, and complete contact information (including full mailing and e-mail addresses and telephone numbers) of three people willing to write letters of recommendation to http://apply.interfolio.com/139322. The cover letter may address examples of the candidate's approach to scholarship, research, and teaching. Additional materials may be requested at a later date. The search will remain open until filled. Inquiries should be directed to the search committee chair: Dr. Dimitrios Diochnos, School of Computer Science, University of Oklahoma, diochnos at ou.edu, 405-325-2978. Equal Employment Opportunity Statement The University of Oklahoma, in compliance with all applicable federal and state laws and regulations, does not discriminate on the basis of race, color, national origin, sex, sexual orientation, genetic information, gender identity, gender expression, age, religion, disability, political beliefs, or status as a veteran in any of its policies, practices, or procedures. This includes, but is not limited to: admissions, employment, financial aid, housing, services in educational programs or activities, or health care services that the University operates or provides. Diversity Statement The University of Oklahoma is committed to achieving a diverse, equitable and inclusive university community by recognizing each person's unique contributions, background, and perspectives. The University of Oklahoma strives to cultivate a sense of belonging and emotional support for all, recognizing that fostering an inclusive environment for all is vital in the pursuit of academic and inclusive excellence in all aspects of our institutional mission. Mission of the University of Oklahoma The Mission of the University of Oklahoma is to provide the best possible educational experience for our students through excellence in teaching, research and creative activity, and service to the state and society. From dst at cs.cmu.edu Sun Feb 18 21:29:52 2024 From: dst at cs.cmu.edu (Dave Touretzky) Date: Sun, 18 Feb 2024 21:29:52 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: Your message of Sat, 17 Feb 2024 15:55:17 -0800. Message-ID: <3524663.1708309792@ammon2.boltz.cs.cmu.edu> My favorite way to show that LLMs don't know what they're talking about is this simple prompt: List all the US states whose names don't contain the letter "a". ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" or "Alaska" on the list and leaving out states like "Wyoming" and "Tennessee". 
And you can have a lengthy conversation with them about this, pointing out their errors one at a time, and they still can't manage to get it right. Gemini insisted that all 50 US states have an "a" in their name. It also claimed "New Jersey" has two a's. -- Dave Touretzky From suashdeb at gmail.com Mon Feb 19 01:40:05 2024 From: suashdeb at gmail.com (Suash Deb) Date: Mon, 19 Feb 2024 12:10:05 +0530 Subject: Connectionists: Imminent Final Deadline, ISMSI 2024 Message-ID: Dear esteemed colleagues, This is to gently remind you abt the imminent deadline (1st March, 2024) for submission of manuscripts for ISMSI24 https://www.ismsi.org This is the final opportunity for submissions and I hope you will avail this, if not already. A leaflet of the cfp is attached for your reference. Thank you all and look forward to receiving your submissions in the coming days. With kind regards, Suash Deb General Chair, ISMSI 2024 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CFP-ISMSI24-1.pdf Type: application/pdf Size: 2115993 bytes Desc: not available URL: From wuxundong at gmail.com Mon Feb 19 03:16:13 2024 From: wuxundong at gmail.com (wuxundong at gmail.com) Date: Mon, 19 Feb 2024 16:16:13 +0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <3524663.1708309792@ammon2.boltz.cs.cmu.edu> References: <3524663.1708309792@ammon2.boltz.cs.cmu.edu> Message-ID: That can be attributed to the models' underlying text encoding and processing mechanisms, specifically tokenization that removes the spelling information from those words. If you use GPT-4 instead, it can process it properly by resorting to external tools. On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: > My favorite way to show that LLMs don't know what they're talking about > is this simple prompt: > > List all the US states whose names don't contain the letter "a". > > ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" > or "Alaska" on the list and leaving out states like "Wyoming" and > "Tennessee". And you can have a lengthy conversation with them about > this, pointing out their errors one at a time, and they still can't > manage to get it right. Gemini insisted that all 50 US states have an > "a" in their name. It also claimed "New Jersey" has two a's. > > -- Dave Touretzky > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tt at cs.dal.ca Mon Feb 19 08:01:48 2024 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Mon, 19 Feb 2024 09:01:48 -0400 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <3524663.1708309792@ammon2.boltz.cs.cmu.edu> Message-ID: Good point, but Dave's point stands as the models he is referring to did not even comprehend that they made mistakes. Cheers, Thomas On Mon, Feb 19, 2024, 4:43?a.m. wrote: > That can be attributed to the models' underlying text encoding and > processing mechanisms, specifically tokenization that removes the spelling > information from those words. If you use GPT-4 instead, it can process it > properly by resorting to external tools. 
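The tokenization point can be made concrete with a small sketch. This assumes the open-source tiktoken package (which implements the BPE tokenizers used by recent OpenAI chat models); the exact subword splits it prints are tokenizer-dependent and are only meant to illustrate what the model actually receives in place of letters:

    import tiktoken

    # cl100k_base is the BPE encoding used by GPT-3.5/GPT-4-era chat models.
    enc = tiktoken.get_encoding("cl100k_base")

    for word in ["Alaska", "Wyoming", "New Jersey"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        # The model is conditioned on these integer token ids, not on
        # individual characters, so "does this word contain an 'a'?" is
        # never directly readable off its input.
        print(word, "->", ids, "->", pieces)

Whatever pieces come out, the letter-level structure is buried inside the subword units the tokenizer happens to produce, which is the sense in which spelling information is only implicit in the input.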
> > On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: > >> My favorite way to show that LLMs don't know what they're talking about >> is this simple prompt: >> >> List all the US states whose names don't contain the letter "a". >> >> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" >> or "Alaska" on the list and leaving out states like "Wyoming" and >> "Tennessee". And you can have a lengthy conversation with them about >> this, pointing out their errors one at a time, and they still can't >> manage to get it right. Gemini insisted that all 50 US states have an >> "a" in their name. It also claimed "New Jersey" has two a's. >> >> -- Dave Touretzky >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Johan.Suykens at esat.kuleuven.be Mon Feb 19 09:04:09 2024 From: Johan.Suykens at esat.kuleuven.be (Johan Suykens) Date: Mon, 19 Feb 2024 15:04:09 +0100 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: <23CC1A62-C074-43F6-8415-910DDC3DCF1E@nyu.edu> References: <23CC1A62-C074-43F6-8415-910DDC3DCF1E@nyu.edu> Message-ID: Thanks Gary for your clarifications. Related to your mentioning of "AI/models of human cognition require algebraic operations over variables in order to be robust, in addition to other mechanisms" I would like to inform that in our recent NeurIPS 2023 paper https://arxiv.org/abs/2305.19798 (Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation) we have described self-attention of transformers through a modified form of kernel singular value decomposition, related to asymmetric kernels (where the kernel function is the dot product between a query and key feature map) with primal and dual representations. In this way low rank representations are obtained together with efficient training of transformers in primal form. Singular value decomposition is a well-known method and major tool in linear algebra. This is a kernel version of it. I hope this may possibly bridge the gap between polarized viewpoints in the debate. Within the same framework it is also possible to consider e.g. pairwise constraints. An example is kernel spectral clustering where pairwise constraints yielding models that better align with human understanding (as e.g. in the paper "A regularized formulation for spectral clustering with pairwise constraints" https://ieeexplore.ieee.org/document/5178772 ). In Fig.7 an illustration is given of an image segmentation with "horses + the sky + the ground". However, the sun is shining and therefore there is also the "shadow of the horses" on the ground. This leads to an ambiguity: should the shadow of the horse be part of the horse or be part of the ground? By specifying a few additional must-link and cannot-link constraints between pixels, one can quickly improve the model and make it compatible with our "human understanding" of this image. It seems that one of the main issues in the debate is then whether we need such constraints yes or no. If there are enough additional images showing e.g. also the sun on the image or other example images that could clarify the context, possibly one doesn't need to have the additional constraints. However, the additional constraints can definitely help in quickly interfacing the model with the real world and making it compatible with "human understanding". 
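A toy numerical illustration of the kernel view of self-attention sketched above: the softmax attention matrix is an asymmetric similarity (kernel) matrix between query and key feature maps, and its singular value decomposition gives the best low-rank approximation at every rank. This is only meant to make the asymmetric-kernel/SVD connection concrete; it is not the Primal-Attention method of the paper, which works with primal feature representations rather than the full n-by-n matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 64, 32                          # sequence length, head dimension
    Q = rng.standard_normal((n, d))        # query features
    K = rng.standard_normal((n, d))        # key features

    # Row-softmax attention matrix: an asymmetric kernel between queries and keys.
    S = Q @ K.T / np.sqrt(d)
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)

    # Because A is asymmetric, SVD (rather than an eigendecomposition) is the
    # natural tool; the singular value decay shows how compressible the map is.
    U, s, Vt = np.linalg.svd(A)
    r = 8
    A_r = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation of A
    err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
    print(f"relative error of rank-{r} approximation: {err:.3f}")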
Best regards, Johan Suykens On 2024-02-18 18:30, Gary Marcus wrote: > - certainly in broad agreement; never argued that symbols alone were > sufficient. > - have always since my first paper in 1992 advocated for hybrid models, > with both associative and symbolic-components > - as argued in my 2001 book, i think the real question is not whether > we need symbols per se (which localist output units represent) but > rather AI/models of human cognition require algebraic operations over > variables in order to be robust, in addition to other mechanisms. > >> On Feb 18, 2024, at 09:25, poole wrote: >> >> ?Thanks Gary. >> These are both worthwhile reading! >> >> I don?t think symbolic = logic. McCulloch and Pitts were interested >> in representing logical operations. >> >> ?Symbolic" follows the tradition of Hobbes (1588?1679) who claimed >> that thinking was symbolic reasoning, like talking out loud or working >> out an answer with pen and paper [see Haugeland, J. Artificial >> Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon >> [1976] defined a symbol is a meaningful pattern that can be >> manipulated. A symbol system creates, copies, modifies, and destroys >> symbols. >> >> Graphical models and believe networks typically have symbolic random >> variables. >> >> It is very common for modern neural networks to have symbolic inputs >> or outputs, e.g., words, knowledge graphs, molecular structure, game >> moves,? >> >> I don?t think Gary would disagree that there needs to be some >> non-symbols (e.g, hidden units in neural networks). >> >> Arguments for symbols ? the most compelling one for me is that >> organizations (which are much more intelligent than individuals) >> reason in terms of symbols (words, diagrams, spreadsheets) ? are not >> diminished by the need for non-symbols. >> >> David >> (references from artint.info where many of these issues are >> discussed). >> >>> On Feb 17, 2024, at 5:42?PM, Gary Marcus wrote: >>> >>> [CAUTION: Non-UBC Email]adding some pointers to David?s remarks: >>> McCulloch and Pitts in 1943 were very much trying to bridge the >>> symbolic and neural world. It?s clear even in the abstract (article >>> below). Tensions between symbolic and neural approaches were in full >>> force by Minsky and Papert 1969, and resurfaced in the 1980s. >>> >>> I don?t have a clear sense of where things were in Turing?s time per >>> se, but both approaches were countenanced in the 1955 proposal for >>> the Dartmouth conference link below; Rosenblatt had gathered steam by >>> 1958 as noted. >>> >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__web.archive.org_web_20070826230310_http-3A__www-2Dformal.stanford.edu_jmc_history_dartmouth_dartmouth.html&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=qB3xwLa8UE9Dg_o7t21jk3miiMP1icWlWBL_XGQ49-Q&e= >>> >>> >>> >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__home.csulb.edu_-7Ecwallis_382_readings_482_mccolloch.logical.calculus.ideas.1943.pdf&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=6jCRjchELvEA_Dzv8hAwzcYSMdGNQiQNU_z3DO6ddp4&e= >>> >>> >>> >>>>> On Feb 17, 2024, at 4:40?PM, poole wrote: >>>> >>>> ? >>>>> On Feb 17, 2024, at 1:08?PM, David H Kirshner >>>>> wrote: >>>>> >>>>> [CAUTION: Non-UBC Email] >>>>> You?re right, David. 
>>>>> I should have said ?Back in Alan Turing?s time when the possibility >>>>> of AI meant the possibility of symbolic AI, ?.? >>>> >>>> In Turing's time, from what I can see (I wan?t alive then ;^) neural >>>> networks were more trendy than symbolic approaches. Turing?s paper >>>> was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis >>>> on neural networks was written in 1952. (Schmidhuber has great >>>> resources on the history of NNs and AI on his website). >>>> >>>> There was lots of neural network hype in the 1950?s: >>>> >>>> "The Navy revealed the embryo of an electronic computer today that >>>> it expects will be able to walk, talk, see, write, reproduce itself >>>> and be conscious of its existence. ?The service said it would ?build >>>> the first of its Perceptron thinking machines that will be able to >>>> read and write. It is expected to be finished in about a year at a >>>> cost of $100,000." >>>> ? New York Times [1958] >>>> >>>> It was later in the 1950?s that they came to realize that AI needed >>>> representations, lead by Minsky and McCarthy, whick lead to the rise >>>> of symbolic approaches.. (It is interesting that a major NN >>>> conference ICLR is about representations). >>>> >>>> I am sure there are people who know the history better than me, who >>>> might like to provide more persoective. >>>> >>>> David >>>> >>>> >>>>> ?? >>>>> David Poole, >>>>> Department of Computer Science, >>>>> University of British Columbia, >>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cs.ubc.ca_-7Epoole&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=GaLFSysAmNesHoiuLMLNQzFZKcQfTn2lkEhxM8Xrc205FjMs-5qx1lSBZ4u9kagl&s=XFNzK3B_BUrmI0gbNaacNriRQ53tTCzjlNaL2JBAmRg&e= >>>>> poole at cs.ubc.ca >>>> >>>> >> From vassilisvas at gmail.com Mon Feb 19 09:07:12 2024 From: vassilisvas at gmail.com (Vassilis Vassiliades) Date: Mon, 19 Feb 2024 16:07:12 +0200 Subject: Connectionists: [jobs] Research Assistant/Associate Positions at CYENS in Cyprus Message-ID: Dear colleagues, The CYENS Centre of Excellence in Cyprus is offering 2 research positions on *machine learning* and *robotics *for pure and applied research. The researchers will join the "Cognitive Artificial Intelligence and Robotics" research team. Interested candidates should apply as soon as possible. *Deadline*: 04 March 2024 *For more information and how to apply please visit:* - Research Assistant/Associate in Machine Learning: https://www.cyens.org.cy/en-gb/vacancies/job-listings/research-associates/research-assistant-associate-in-machine-learning/ - Research Assistant/Associate in Robotics: https://www.cyens.org.cy/en-gb/vacancies/job-listings/research-associates/research-assistant-associate-in-robotics/ Regards, Vassilis -------------- next part -------------- An HTML attachment was scrubbed... URL: From iam.palat at gmail.com Mon Feb 19 09:15:57 2024 From: iam.palat at gmail.com (Iam Palatnik) Date: Mon, 19 Feb 2024 11:15:57 -0300 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <3524663.1708309792@ammon2.boltz.cs.cmu.edu> Message-ID: This is potentially more tricky than it seems, partly because we can only see so much of what is going on behind the curtain in both ChatGPT and Bard. I can confirm that GPT-3.5 totally fumbles this task even with extensive help from me, and GPT-4 aces it with no help. 
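For reference, the task in Dave's prompt is a two-line check once it is posed as code; something along these lines is what a code-interpreter-equipped model can write for itself (the state list is hard-coded here, and the exact code GPT-4 produced is not shown in the thread, so this is only a representative sketch):

    US_STATES = [
        "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
        "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
        "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
        "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
        "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
        "New Hampshire", "New Jersey", "New Mexico", "New York",
        "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
        "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
        "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
        "West Virginia", "Wisconsin", "Wyoming",
    ]

    # States whose names do not contain the letter "a" (case-insensitive).
    no_a = [name for name in US_STATES if "a" not in name.lower()]
    print(len(no_a), no_a)   # 14 states, Connecticut through Wyoming

The deterministic check is trivial; the open question in this thread is why producing the same list token by token, without such a tool, is so unreliable.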
GPT-3.5 however seems to be just a model and GPT-4 is definitely an agent, since it shows us when it uses its function calling and code-interpreter. I'm also not sure how different the system prompts are for either, too. Both GPT-3.5 and GPT-4 were able to generate the correct list of 50 US states, but GPT-4 did this within the code interpreter and just wrote python code to check whether they contain 'a' or not. GPT-3.5 could easily be hurt on this task by the random sampling of the next token. For instance "Does" is the most likely token but it randomly picks "Doesn't" due to the temperature and therefore trips itself for the rest of the answer. GPT-4 might both have better predictions for tokens besides whatever fine-tuning or prompt that makes it know when to function call. I wouldn't be surprised if GPT-3.5 with function-calling and a code-interpreter could also ace this task. At the same time as we strive for better architectures that can do this by themselves, I definitely feel like the [LLM + function-call + code-interpreter + external-source search] style agents can greatly increase what the LLM alone is capable of, and there seems to be a lot of activity in the literature in this direction. We ourselves are probably closer to the Agent than to the Model in how we do tasks. For instance, I don't know the 50 US states and had to search for the list on google, so why not allow the model to do the same, and so on. Whether these hits or misses entail understanding or not will continue to be a tricky debate but I think these experiments are useful to see what helps these models (and agents) increase the hit rate. On Mon, Feb 19, 2024 at 10:51?AM Thomas Trappenberg wrote: > Good point, but Dave's point stands as the models he is referring to did > not even comprehend that they made mistakes. > > Cheers, Thomas > > On Mon, Feb 19, 2024, 4:43?a.m. wrote: > >> That can be attributed to the models' underlying text encoding and >> processing mechanisms, specifically tokenization that removes the spelling >> information from those words. If you use GPT-4 instead, it can process it >> properly by resorting to external tools. >> >> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: >> >>> My favorite way to show that LLMs don't know what they're talking about >>> is this simple prompt: >>> >>> List all the US states whose names don't contain the letter "a". >>> >>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" >>> or "Alaska" on the list and leaving out states like "Wyoming" and >>> "Tennessee". And you can have a lengthy conversation with them about >>> this, pointing out their errors one at a time, and they still can't >>> manage to get it right. Gemini insisted that all 50 US states have an >>> "a" in their name. It also claimed "New Jersey" has two a's. >>> >>> -- Dave Touretzky >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 19 09:38:50 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 06:38:50 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: <602A5037-4309-4068-93AC-58EF33AA52E4@nyu.edu> The notion of constraints is very interesting in connection with the discussion of understanding. Sora for example appears not to be able to oblige objects to respect gravity, solidity, or object permanence. (Basic things human infants understand). 
There is no way to restrict the things that GPT says to those that have been verified, etc. Dave Touretzky?s example show a failure to obey an even simpler constraint. > On Feb 19, 2024, at 6:04?AM, Johan Suykens wrote: > > ? > Thanks Gary for your clarifications. > > Related to your mentioning of > > "AI/models of human cognition require algebraic operations over variables in order to be robust, in addition to other mechanisms" > > I would like to inform that in our recent NeurIPS 2023 paper https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_2305.19798&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=ycCLlME4xVoD3WTyo8LuvWjVNcRbCdUopSwCRNCTp8GoUfWcnbbNa5IyXgNnHSKe&s=CwBhpfeHo4p3bhV5w6tCk6vGcRBC91P4K2J_70AzOLI&e= (Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation) we have described self-attention of transformers through a modified form of kernel singular value decomposition, related to asymmetric kernels (where the kernel function is the dot product between a query and key feature map) with primal and dual representations. In this way low rank representations are obtained together with efficient training of transformers in primal form. Singular value decomposition is a well-known method and major tool in linear algebra. This is a kernel version of it. > > I hope this may possibly bridge the gap between polarized viewpoints in the debate. > > Within the same framework it is also possible to consider e.g. pairwise constraints. An example is kernel spectral clustering where pairwise constraints yielding models that better align with human understanding (as e.g. in the paper "A regularized formulation for spectral clustering with pairwise constraints" https://urldefense.proofpoint.com/v2/url?u=https-3A__ieeexplore.ieee.org_document_5178772&d=DwIDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=ycCLlME4xVoD3WTyo8LuvWjVNcRbCdUopSwCRNCTp8GoUfWcnbbNa5IyXgNnHSKe&s=W3_Nmc1GjxWl6JIZsgxc2xvzlHi4VpG-flVwvI3MAwY&e= ). In Fig.7 an illustration is given of an image segmentation with "horses + the sky + the ground". However, the sun is shining and therefore there is also the "shadow of the horses" on the ground. This leads to an ambiguity: should the shadow of the horse be part of the horse or be part of the ground? By specifying a few additional must-link and cannot-link constraints between pixels, one can quickly improve the model and make it compatible with our "human understanding" of this image. > > It seems that one of the main issues in the debate is then whether we need such constraints yes or no. If there are enough additional images showing e.g. also the sun on the image or other example images that could clarify the context, possibly one doesn't need to have the additional constraints. However, the additional constraints can definitely help in quickly interfacing the model with the real world and making it compatible with "human understanding". > > Best regards, > Johan Suykens > > > > >> On 2024-02-18 18:30, Gary Marcus wrote: >> - certainly in broad agreement; never argued that symbols alone were sufficient. 
>> - have always since my first paper in 1992 advocated for hybrid models, with both associative and symbolic-components >> - as argued in my 2001 book, i think the real question is not whether we need symbols per se (which localist output units represent) but rather AI/models of human cognition require algebraic operations over variables in order to be robust, in addition to other mechanisms. >>>> On Feb 18, 2024, at 09:25, poole wrote: >>> ?Thanks Gary. >>> These are both worthwhile reading! >>> I don?t think symbolic = logic. McCulloch and Pitts were interested in representing logical operations. >>> ?Symbolic" follows the tradition of Hobbes (1588?1679) who claimed that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper [see Haugeland, J. Artificial Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon [1976] defined a symbol is a meaningful pattern that can be manipulated. A symbol system creates, copies, modifies, and destroys symbols. >>> Graphical models and believe networks typically have symbolic random variables. >>> It is very common for modern neural networks to have symbolic inputs or outputs, e.g., words, knowledge graphs, molecular structure, game moves,? >>> I don?t think Gary would disagree that there needs to be some non-symbols (e.g, hidden units in neural networks). >>> Arguments for symbols ? the most compelling one for me is that organizations (which are much more intelligent than individuals) reason in terms of symbols (words, diagrams, spreadsheets) ? are not diminished by the need for non-symbols. >>> David >>> (references from artint.info where many of these issues are discussed). >>>> On Feb 17, 2024, at 5:42?PM, Gary Marcus wrote: >>>> [CAUTION: Non-UBC Email]adding some pointers to David?s remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It?s clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. >>>> I don?t have a clear sense of where things were in Turing?s time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference link below; Rosenblatt had gathered steam by 1958 as noted. >>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__web.archive.org_web_20070826230310_http-3A__www-2Dformal.stanford.edu_jmc_history_dartmouth_dartmouth.html&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=qB3xwLa8UE9Dg_o7t21jk3miiMP1icWlWBL_XGQ49-Q&e= >>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__home.csulb.edu_-7Ecwallis_382_readings_482_mccolloch.logical.calculus.ideas.1943.pdf&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=i1FPhi6c93rU7h_AikZJEx36pDsvgNzWMEKlOMHY0uqMdsAuRf5w_wtT_l9EhRaQ&s=6jCRjchELvEA_Dzv8hAwzcYSMdGNQiQNU_z3DO6ddp4&e= >>>>>> On Feb 17, 2024, at 4:40?PM, poole wrote: >>>>> ? >>>>>> On Feb 17, 2024, at 1:08?PM, David H Kirshner wrote: >>>>>> [CAUTION: Non-UBC Email] >>>>>> You?re right, David. >>>>>> I should have said ?Back in Alan Turing?s time when the possibility of AI meant the possibility of symbolic AI, ?.? >>>>> In Turing's time, from what I can see (I wan?t alive then ;^) neural networks were more trendy than symbolic approaches. Turing?s paper was 1950. McCulloch and Pitts seminal work was 1943. Minsky?s thesis on neural networks was written in 1952. 
(Schmidhuber has great resources on the history of NNs and AI on his website). >>>>> There was lots of neural network hype in the 1950?s: >>>>> "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. ?The service said it would ?build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000." >>>>> ? New York Times [1958] >>>>> It was later in the 1950?s that they came to realize that AI needed representations, lead by Minsky and McCarthy, whick lead to the rise of symbolic approaches.. (It is interesting that a major NN conference ICLR is about representations). >>>>> I am sure there are people who know the history better than me, who might like to provide more persoective. >>>>> David >>>>>> ?? >>>>>> David Poole, >>>>>> Department of Computer Science, >>>>>> University of British Columbia, >>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cs.ubc.ca_-7Epoole&d=DwIFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=GaLFSysAmNesHoiuLMLNQzFZKcQfTn2lkEhxM8Xrc205FjMs-5qx1lSBZ4u9kagl&s=XFNzK3B_BUrmI0gbNaacNriRQ53tTCzjlNaL2JBAmRg&e= >>>>>> poole at cs.ubc.ca From dftschool at ini.rub.de Mon Feb 19 09:21:31 2024 From: dftschool at ini.rub.de (DFT summer school) Date: Mon, 19 Feb 2024 15:21:31 +0100 Subject: Connectionists: Neural Dynamics for Embodied Cognition Summer School 2024 Bochum, Germany Message-ID: <350d81fe-b1fe-4bf6-b84a-43b50192ad7c@ini.rub.de> Please forward this advertisement to whoever you think might be interested. The application deadline is 03 June 2024. Thanks, Raul Grieben, Minseok Kang and Stephan Sehring - - - DFT Summer School 2024 Bochum, Germany This year our summer school "Neural Dynamics for Embodied Cognition" will take place from the 12th to the 17th of August, 2024 at the Institute for Neural Computation, Ruhr-University Bochum in Germany, coordinated by Prof. Dr. Gregor Sch?ner. Neuronal dynamics provide a powerful theoretical language for the design and modeling of embodied and situated cognitive systems. This school provides a hands-on and practical introduction to neuronal dynamics ideas and enables participants to become productive within this framework. The school is aimed at advanced undergraduate or graduate students, postdocs, and faculty members in embodied cognition, cognitive science, and robotics. Topics addressed include neural dynamics, attractor dynamics and instabilities, dynamic field theory, neuronal representations, artificial perception, simple forms of cognition including detection and selection decisions, memory formation, learning, and grounding relational concepts. The school combines tutorial lectures with hands-on project work. Participants will develop their own modeling project, which may connect to their ongoing doctoral or postdoctoral research. To apply for the summer school, please visit our webpage: https://dynamicfieldtheory.org/events/neuronal_dynamics_for_embodied_cognition_2024/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Mon Feb 19 09:33:01 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 06:33:01 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Mon Feb 19 10:05:35 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 07:05:35 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: Hypothesis 1: LLMs (deeply) understand language: false Hypothesis 2: LLMs could play an important part of a larger, modular systems, perhaps neurosymbolic, and perhaps partly prestructured prior to learning, that could conceivably eventually deeply understand language: open for investigation > On Feb 19, 2024, at 6:50?AM, Iam Palatnik wrote: > > I definitely feel like the [LLM + function-call + code-interpreter + external-source search From Francesco.DiCosmo at unibz.it Mon Feb 19 10:05:45 2024 From: Francesco.DiCosmo at unibz.it (Di Cosmo Francesco) Date: Mon, 19 Feb 2024 15:05:45 +0000 Subject: Connectionists: Final Call for Papers -- ESSLLI 2024 Student Session Message-ID: (apologies for potential cross-posting) 35th European Summer School in Logic, Language and Information Student session CfP 29 July ? 9 August 2024, Leuven, Belgium https://2024.esslli.eu/ Final Call for Papers -- Student Session ========================== ========================== Extended CfP: https://2024.esslli.eu/. Important Dates ============ Paper submission deadline: March 1, 2024 Notification of acceptance: May 1, 2024 Camera-ready submission: June 1, 2024 Event: July 29 - August 9, 2024 Submission link: https://easychair.org/conferences/?conf=esslli2024 (make sure to choose the Student Session track). About ===== The Student Session of the 35th European Summer School in Logic, Language, and Information (ESSLLI) will take place at ESSLLI 2024, on 29 July - 9 August 2024 in Leuven, Belgium. We invite submissions of original, unpublished work from students in any area related to Logic & Language, Language & Computation, or Logic & Computation in the form of long (8 pages) or short (4 pages) papers (including references, figures, etc.). Submissions will be reviewed by several experts in the field. Accepted long papers will be presented as talks, while short papers will be presented as posters. This is an excellent opportunity for students to receive valuable feedback from expert readers and to present their work to a diverse audience. Short papers are especially suited for smaller or in-progress research works. See the extended Call for Papers at the ESSLLI 2024 webpage for the submission and publication policies: https://2024.esslli.eu/. Awards ====== For both long and short papers, the best contributions (submission plus presentation) will receive an award. Springer generously supports the ESSLLI Student Session by offering prizes for a total of 600? in Springer books. The best poster and the best talk will each be awarded Springer book vouchers of 300? each. The prizes are awarded based on the reviews of the submission as well as the oral/poster presentation. The ideas presented should be novel and promising. 
The presentation at ESSLLI should be adapted to the wide variety of backgrounds that ESSLLI participants come from: everybody should be able to learn/understand something new. Registration ========= Participation, including presentation, in the Student Session requires an ESSLLI 2024 registration at: https://2024.esslli.eu/ Depending on the budget, student grants may be available. Student Session Committee ===================== Session chairs ------------------- Francesco Di Cosmo (Free University of Bozen-Bolzano) John Lindqvist (University of Bergen) Language and Computation Track (LaCo) ------------------------------- Jia Ren (University of Massachusetts Amherst) Jiaxing Yu (Rutgers University) Tran Thi Hong Hanh (Jozef Stefan Institute and University of La Rochelle) Language and Logic Track (LaLo) ------------------------- Emil Eva Rosina (Ruhr-Universit?t Bochum) Tijana ?ukovi? (University of Belgrade) Valeria Gradimondo (University Paris Cit?) Logic and Computation Track (LoCo) ---------------------------- Alex De Klerck (KU Leuven) Sreehari Kalloormana (City University of New York) Tephilla Prince (Indian Institute of Technology Dharwad) -------------- next part -------------- An HTML attachment was scrubbed... URL: From N.Cohen at leeds.ac.uk Mon Feb 19 10:20:26 2024 From: N.Cohen at leeds.ac.uk (Netta Cohen) Date: Mon, 19 Feb 2024 15:20:26 +0000 Subject: Connectionists: Faculty recruitment at the University of Leeds, UK Message-ID: Dear Connectionists, The School of Computing, at the University of Leeds is recruiting a faculty member in Computational Neuroscience. We are looking to appoint a Lecturer (analogous to tenure-track Assistant Professor) working in computational and systems neuroscience and related areas of artificial and biological intelligence. The position provides an exceptional opportunity to join our internationally outstanding activity and to pursue interdisciplinary research, in collaboration with extensive neuroscience activity in Leeds and beyond. We have a vibrant community of postgraduate and postdoctoral researchers supported by a large portfolio of external research funding. We are core partners in major pan-University initiatives, such as Neur at L (Neuroscience at Leeds), the Leeds Institute for Data Analytics, Robotics Leeds, Centre for HealthTech Innovation, Leeds Cancer Research Centre and Leeds Institute of Fluid Dynamics, as well as in leading national centres such as the Alan Turing Institute. Informal enquiries are welcome. For more details, please see: https://jobs.leeds.ac.uk/Vacancy.aspx?ref=EPSCP1146 Closing date for applications: Wednesday 13 March 2024 Netta Netta Cohen Professor of Complex Systems Director, Computing in Biology, Medicine and Health School of Computing University of Leeds Leeds, UK N.Cohen at leeds.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bwyble at gmail.com Mon Feb 19 10:53:51 2024 From: bwyble at gmail.com (Brad Wyble) Date: Mon, 19 Feb 2024 10:53:51 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: Iam, the difference is that while you may need an external source to remember all 50 states, for the ones that you have remembered/looked up, you are able to verify that they do or do not contain specific letters without reference to a resource, or writing some code to verify it. It is even worse that if you push them on their mistakes, they are still unable to correct. A better counterargument to the example Dave provides is that perhaps LLMs just cannot ever break things down at the letter level because of their reliance on tokens. Humans can do this of course, but a good analogy for us might be the Muller Lyer illusion, which is essentially impenetrable to our cognitive faculties. I.e. we are unable to force ourselves to see the lines as their true lengths on the page because the basis of our representations does not permit it. This is perhaps similar to the way that LLM representations preclude them from accessing the letter level. However, I think a good counterpoint to this is that while people are unable to un-see the Muller Lyer illusion, it is not that difficult to teach someone about this blindspot and get them to reason around it, with no external tools, just their own reasoning faculties. LLMs seem unable to achieve this level of self-knowledge no matter how patiently things are explained. They do not have the metacognitive faculty that allows them to even understand their blindspot about letters. On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus wrote: > Correct; also tool integration has actually been less successful than some > people believe: > > > https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external > > > On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg wrote: > > ? > Good point, but Dave's point stands as the models he is referring to did > not even comprehend that they made mistakes. > > Cheers, Thomas > > On Mon, Feb 19, 2024, 4:43?a.m. wrote: > >> That can be attributed to the models' underlying text encoding and >> processing mechanisms, specifically tokenization that removes the spelling >> information from those words. If you use GPT-4 instead, it can process it >> properly by resorting to external tools. >> >> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: >> >>> My favorite way to show that LLMs don't know what they're talking about >>> is this simple prompt: >>> >>> List all the US states whose names don't contain the letter "a". >>> >>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" >>> or "Alaska" on the list and leaving out states like "Wyoming" and >>> "Tennessee". And you can have a lengthy conversation with them about >>> this, pointing out their errors one at a time, and they still can't >>> manage to get it right. Gemini insisted that all 50 US states have an >>> "a" in their name. It also claimed "New Jersey" has two a's. >>> >>> -- Dave Touretzky >>> >> -- Brad Wyble Professor of Psychology Penn State University -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roshini.johri at gmail.com Mon Feb 19 11:02:14 2024 From: roshini.johri at gmail.com (Roshini Johri) Date: Mon, 19 Feb 2024 16:02:14 +0000 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: I agree with Hypothesis 1 as false with the caveat that the answer to the question: what does understanding language mean itself is really contentious, different and also cultural. I believe LLMs understand the structure of language and the logic underneath it including mapping structures to each other (in purely mathematical functional terms) . I don't believe LLMs 'understand' the significance of it in a way humans do. They are just learning a different type of mathematical interpretation of something we associate a deeper meaning to. Hypothesis 2 : Yes this is going to be exciting as it grows to become a part of something much bigger. 100% agree that is a component of a system that will evolve to be more complex with different reasoning/planning capabilities that hasn't really come together yet. On Mon, Feb 19, 2024 at 3:49?PM Gary Marcus wrote: > Hypothesis 1: LLMs (deeply) understand language: false > > Hypothesis 2: LLMs could play an important part of a larger, modular > systems, perhaps neurosymbolic, and perhaps partly prestructured prior to > learning, that could conceivably eventually deeply understand language: > open for investigation > > > On Feb 19, 2024, at 6:50?AM, Iam Palatnik wrote: > > > > I definitely feel like the [LLM + function-call + code-interpreter + > external-source search > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at msu.edu Mon Feb 19 11:06:36 2024 From: weng at msu.edu (Weng, Juyang) Date: Mon, 19 Feb 2024 16:06:36 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI Message-ID: Somebody wrote below, "Newell and Simon [1976] defined a symbol is a meaningful pattern that can be manipulated." I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. In the book "Natural and Artificial Intelligence", I defined symbolic representation: A symbolic representation in the brain of an agent has human handcrafted static boundaries where each zone represents a symbol (e.g., text as label) about a concept of the extra-body environment. For example, if one model V1 as a region to detect edges (in extra-body environment), he models V1 using a symbolic representation. This model is wrong because a brain region is plastic, not specific for a particular type of features. -John Brain-Mind Institute On Mon, Feb 19, 2024 at 2:47?AM Dietterich, Thomas wrote: Newell and Simon loved symbols but were not at all interested in logic (except for showing that a symbol system could prove logic theorems). From: Connectionists on behalf of poole Sent: Sunday, February 18, 2024 10:54:44 PM To: Gary Marcus Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [You don't often get email from poole at cs.ubc.ca. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ] [This email originated from outside of OSU. Use caution with links and attachments.] [This email originated from outside of OSU. 
Use caution with links and attachments.] Thanks Gary. These are both worthwhile reading! I don?t think symbolic = logic. McCulloch and Pitts were interested in representing logical operations. ?Symbolic" follows the tradition of Hobbes (1588?1679) who claimed that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper [see Haugeland, J. Artificial Intelligence: The Very Idea. MIT Press 1985]. Newell and Simon [1976] defined a symbol is a meaningful pattern that can be manipulated. A symbol system creates, copies, modifies, and destroys symbols. Graphical models and believe networks typically have symbolic random variables. It is very common for modern neural networks to have symbolic inputs or outputs, e.g., words, knowledge graphs, molecular structure, game moves,? I don?t think Gary would disagree that there needs to be some non-symbols (e.g, hidden units in neural networks). Arguments for symbols ? the most compelling one for me is that organizations (which are much more intelligent than individuals) reason in terms of symbols (words, diagrams, spreadsheets) ? are not diminished by the need for non-symbols. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From dst at cs.cmu.edu Mon Feb 19 11:21:12 2024 From: dst at cs.cmu.edu (Dave Touretzky) Date: Mon, 19 Feb 2024 11:21:12 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: Your message of Mon, 19 Feb 2024 11:15:57 -0300. Message-ID: <3602379.1708359672@ammon2.boltz.cs.cmu.edu> Even GPT 3.5 can answer many questions about the spelling of words, so I don't think the problem is that the tokenization has removed this information. I think the problem is that a stack of attention heads isn't good at applying a novel, arbitrary rule consistently to a long list of items. Also, while the average human would be forgiven if they missed a few entries in the list of US states that do not contain "a" in their name, no normal human would claim that "Alaska" or "Texas" belongs in this list, nor would they persist in such an error after it was pointed out to them, the way ChatGPT 4 and Gemini do. This isn't an argument against the possibility of AGI. It's an argument that "attention is all you need" might be a bit of an overstatement. -- Dave From gary.marcus at nyu.edu Mon Feb 19 11:43:01 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 08:43:01 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <3602379.1708359672@ammon2.boltz.cs.cmu.edu> References: <3602379.1708359672@ammon2.boltz.cs.cmu.edu> Message-ID: <7C1DC214-90E4-4B8F-A5BE-CDB8DED907CD@nyu.edu> ? > On Feb 19, 2024, at 08:41, Dave Touretzky wrote: > > ?Even GPT 3.5 can answer many questions about the spelling of words, so I > don't think the problem is that the tokenization has removed this > information. I think the problem is that a stack of attention heads > isn't good at applying a novel, arbitrary rule consistently to a long > list of items. > > Also, while the average human would be forgiven if they missed a few > entries in the list of US states that do not contain "a" in their name, > no normal human would claim that "Alaska" or "Texas" belongs in this > list, nor would they persist in such an error after it was pointed out > to them, the way ChatGPT 4 and Gemini do. 
> > This isn't an argument against the possibility of AGI. It's an argument > that "attention is all you need" might be a bit of an overstatement. > > -- Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From iam.palat at gmail.com Mon Feb 19 19:35:39 2024 From: iam.palat at gmail.com (Iam Palatnik) Date: Mon, 19 Feb 2024 21:35:39 -0300 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: > > The fact that answers are so dependent on word choice Just to clarify, I was not referring to the prompt of the task itself. That is, whether we tell chatgpt to 'add 2 and 2' or 'give me 2+2'. I'm referring to the fact that adding something like 'think step by step ' onto the task prompt has a noticeable effect on the performance for various tasks. And it doesn't have to literally be 'think step by step' exactly. An example that happened just the other day: a friend was trying to get ChatGPT to write a 30 paragraph long text, and was having no luck. The model always wrote the wrong number of paragraphs. I told him to try including 'number the start of each paragraph to help yourself' to the prompt and bingo. I had never seen someone trying to do this exact task, nor this numbering trick. I came up with it on the spot, and it worked, but why? Why should we expect the model to perform better with this, at all? Do examples like this entail understanding or lack thereof? I'm not sure everyone would agree on the answer. But regardless of the specific wording we would use to describe what happened there, it would be a problem if we are 'absolutely sure' ChatGPT can't do X because it can't understand anything, while it turns out it can do X if you just change the prompt or tool access. On Mon, Feb 19, 2024 at 6:13?PM Brad Wyble wrote: > The question is not whether we need to rely on external tools to verify > the line lengths (we do), but whether we can understand that this illusion > affects us, and can understand the basic idea of such an illusion. And we > can do both of these. You can explain to someone that this illusion exists > and they are able to understand that there is a difference between what > they think their eyes are telling them about line length and what is > actually true in the world. > > As far as we can determine from GPT experiments, LLMs are not able to > reason about their own limitations in this way. That is the fundamental > distinction I'm referring to. > > >> >> Because the performance of the LLMs on some of these tests seem to depend >> so much on how the questions are formulated and what tools they are given >> to respond with, I still tend to think that they understand something. I'm >> OK with the idea that their understanding has space to be much deeper, >> still, too. >> >> >> > This is the opposite of the argument that is typically used though. It is > specifically the face that their answers depend so much on phrase (e.g. > that paper by Melanie Mitchell that recently went by) that we argue they do > NOT understand. The fact that answers are so dependent on word choice is > an indication that they are parroting remembered examples rather than > understanding in a human sense. 
> > > > > >> >> >> >> >> >> On Mon, Feb 19, 2024 at 1:50?PM Brad Wyble wrote: >> >>> Iam, the difference is that while you may need an external source to >>> remember all 50 states, for the ones that you have remembered/looked up, >>> you are able to verify that they do or do not contain specific letters >>> without reference to a resource, or writing some code to verify it. It is >>> even worse that if you push them on their mistakes, they are still unable >>> to correct. >>> >>> A better counterargument to the example Dave provides is that perhaps >>> LLMs just cannot ever break things down at the letter level because of >>> their reliance on tokens. Humans can do this of course, but a good analogy >>> for us might be the Muller Lyer illusion, which is essentially >>> impenetrable to our cognitive faculties. I.e. we are unable to force >>> ourselves to see the lines as their true lengths on the page because the >>> basis of our representations does not permit it. This is perhaps similar >>> to the way that LLM representations preclude them from accessing the letter >>> level. >>> >>> However, I think a good counterpoint to this is that while people are >>> unable to un-see the Muller Lyer illusion, it is not that difficult to >>> teach someone about this blindspot and get them to reason around it, with >>> no external tools, just their own reasoning faculties. LLMs seem unable to >>> achieve this level of self-knowledge no matter how patiently things are >>> explained. They do not have the metacognitive faculty that allows them to >>> even understand their blindspot about letters. >>> >>> >>> >>> >>> On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus >>> wrote: >>> >>>> Correct; also tool integration has actually been less successful than >>>> some people believe: >>>> >>>> >>>> https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external >>>> >>>> >>>> On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg wrote: >>>> >>>> ? >>>> Good point, but Dave's point stands as the models he is referring to >>>> did not even comprehend that they made mistakes. >>>> >>>> Cheers, Thomas >>>> >>>> On Mon, Feb 19, 2024, 4:43?a.m. wrote: >>>> >>>>> That can be attributed to the models' underlying text encoding and >>>>> processing mechanisms, specifically tokenization that removes the spelling >>>>> information from those words. If you use GPT-4 instead, it can process it >>>>> properly by resorting to external tools. >>>>> >>>>> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: >>>>> >>>>>> My favorite way to show that LLMs don't know what they're talking >>>>>> about >>>>>> is this simple prompt: >>>>>> >>>>>> List all the US states whose names don't contain the letter "a". >>>>>> >>>>>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting >>>>>> "Texas" >>>>>> or "Alaska" on the list and leaving out states like "Wyoming" and >>>>>> "Tennessee". And you can have a lengthy conversation with them about >>>>>> this, pointing out their errors one at a time, and they still can't >>>>>> manage to get it right. Gemini insisted that all 50 US states have an >>>>>> "a" in their name. It also claimed "New Jersey" has two a's. >>>>>> >>>>>> -- Dave Touretzky >>>>>> >>>>> >>> >>> -- >>> Brad Wyble >>> Professor of Psychology >>> Penn State University >>> >>> > > -- > Brad Wyble (he/him) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Mon Feb 19 19:42:25 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 16:42:25 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From weng at msu.edu Mon Feb 19 17:32:37 2024 From: weng at msu.edu (Weng, Juyang) Date: Mon, 19 Feb 2024 22:32:37 +0000 Subject: Connectionists: =?windows-1252?q?Statistics_versus_=93Understandi?= =?windows-1252?q?ng=94_in_Generative_AI=2E?= In-Reply-To: <3602379.1708359672@ammon2.boltz.cs.cmu.edu> References: Your message of Mon, 19 Feb 2024 11:15:57 -0300. <3602379.1708359672@ammon2.boltz.cs.cmu.edu> Message-ID: Dear Dave, We have heard many expressive examples and failure examples about ChatGPT. They are intuitive but not systematic. If my writing in the last CDS Newsletter applies to ChatGPT, ChatGPT (along with all other LLMs) is the luckiest fitter for its training set F, among many similar fitters. The luckiest fitter has not been systematically tested on a new test T yet. Its human developers are keeping improving ChatGPT, by hand-tuning its ever-increasing number of hyperparameters. The outputs from ChatGPT are from humans assisted by computers, not really from computers. Best regards, -John ________________________________ From: Connectionists on behalf of Dave Touretzky Sent: Monday, February 19, 2024 11:21 AM To: Connectionists Subject: Re: Connectionists: Statistics versus ?Understanding? in Generative AI. Even GPT 3.5 can answer many questions about the spelling of words, so I don't think the problem is that the tokenization has removed this information. I think the problem is that a stack of attention heads isn't good at applying a novel, arbitrary rule consistently to a long list of items. Also, while the average human would be forgiven if they missed a few entries in the list of US states that do not contain "a" in their name, no normal human would claim that "Alaska" or "Texas" belongs in this list, nor would they persist in such an error after it was pointed out to them, the way ChatGPT 4 and Gemini do. This isn't an argument against the possibility of AGI. It's an argument that "attention is all you need" might be a bit of an overstatement. -- Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From arbib at usc.edu Mon Feb 19 13:28:33 2024 From: arbib at usc.edu (Michael Arbib) Date: Mon, 19 Feb 2024 18:28:33 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wduch at umk.pl Mon Feb 19 14:30:52 2024 From: wduch at umk.pl (Wlodek Duch) Date: Mon, 19 Feb 2024 20:30:52 +0100 Subject: Connectionists: =?utf-8?q?Juniors_positions_at_=E2=80=9CDynamics?= =?utf-8?q?=2C_Mathematical_Analysis_and_Artificial_Intelligence=E2=80=9D_?= =?utf-8?q?Center=2C_Nicolaus_Copernicus_University=2C_Poland?= In-Reply-To: <8a86f7f9-6814-4df2-889a-e5dc22a4c5cc@umk.pl> References: <5c935600-2487-46c7-8144-d3b9f23ed972@gmail.com> <8a86f7f9-6814-4df2-889a-e5dc22a4c5cc@umk.pl> Message-ID: <66cb8a40-39ac-4d47-98c8-fe2c1182867d@umk.pl> An HTML attachment was scrubbed... URL: From frothga at sandia.gov Mon Feb 19 13:31:17 2024 From: frothga at sandia.gov (Rothganger, Fred) Date: Mon, 19 Feb 2024 18:31:17 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: The McCulloch & Pitts paper referenced below leads with "Because of the 'all-or-none' character of nervous activity ...". This elevates spike events to be the most important feature of neural systems, then builds a whole theory around it. This is not consistent with modern neuroscience, and even overlooks things the authors themselves mention. ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Saturday, February 17, 2024 6:42 PM To: poole Cc: connectionists at mailman.srv.cs.cmu.edu Subject: [EXTERNAL] Connectionists: Early history of symbolic and neural network approaches to AI adding some pointers to David?s remarks: McCulloch and Pitts in 1943 were very much trying to bridge the symbolic and neural world. It?s clear even in the abstract (article below). Tensions between symbolic and neural approaches were in full force by Minsky and Papert 1969, and resurfaced in the 1980s. I don?t have a clear sense of where things were in Turing?s time per se, but both approaches were countenanced in the 1955 proposal for the Dartmouth conference link below; Rosenblatt had gathered steam by 1958 as noted. A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE web.archive.org [cid:cddedc31-0571-40ff-99b2-a18ac7153c2a] [preview.png] mccolloch.logical.calculus.ideas.1943 PDF Document ? 1 MB -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-cbs1eupw.png Type: image/png Size: 6194 bytes Desc: Outlook-cbs1eupw.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-preview.pn.png Type: image/png Size: 271697 bytes Desc: Outlook-preview.pn.png URL: From gary.marcus at nyu.edu Mon Feb 19 14:34:38 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 11:34:38 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: Message-ID: <7352F962-23B3-4295-91C8-43362E298077@nyu.edu> This dependency on exact formulation is literally why I think they don?t understand much. An expert, who understands something deeply, can appreciate it in a variety of presentations; a novice fails to recognize slight transformations as the same problem, because they don?t really understand what?s going on. 
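The state-name prompt that keeps coming up in this thread has a ground truth that is trivial to compute mechanically, and the check does not depend on how the question happens to be phrased. A minimal sketch, assuming nothing beyond the Python standard library; the list and helper names below are illustrative, not code anyone in the thread actually ran:

US_STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# States with no 'a' anywhere in the name (case-insensitive).
NO_A = sorted(s for s in US_STATES if "a" not in s.lower())


def check_answer(candidate: list[str]) -> dict:
    """Compare a model's list against the ground truth."""
    truth, given = set(NO_A), {s.strip().title() for s in candidate}
    return {
        "missing": sorted(truth - given),    # correct states the model left out
        "spurious": sorted(given - truth),   # listed states that do contain 'a'
    }


if __name__ == "__main__":
    print(NO_A)
    # Illustrative (made-up) candidate answer, only to show the report format:
    print(check_answer(["Ohio", "Texas", "New Jersey"]))

Run as a script, this prints the 14 qualifying states (Connecticut through Wyoming) and then a report of what an illustrative wrong answer omits or wrongly includes.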
> On Feb 19, 2024, at 11:08, Iam Palatnik wrote: > > Because the performance of the LLMs on some of these tests seem to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something From gary.marcus at nyu.edu Mon Feb 19 11:41:34 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 08:41:34 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From bwyble at gmail.com Mon Feb 19 16:12:28 2024 From: bwyble at gmail.com (Brad Wyble) Date: Mon, 19 Feb 2024 16:12:28 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: The question is not whether we need to rely on external tools to verify the line lengths (we do), but whether we can understand that this illusion affects us, and can understand the basic idea of such an illusion. And we can do both of these. You can explain to someone that this illusion exists and they are able to understand that there is a difference between what they think their eyes are telling them about line length and what is actually true in the world. As far as we can determine from GPT experiments, LLMs are not able to reason about their own limitations in this way. That is the fundamental distinction I'm referring to. > > Because the performance of the LLMs on some of these tests seem to depend > so much on how the questions are formulated and what tools they are given > to respond with, I still tend to think that they understand something. I'm > OK with the idea that their understanding has space to be much deeper, > still, too. > > > This is the opposite of the argument that is typically used though. It is specifically the face that their answers depend so much on phrase (e.g. that paper by Melanie Mitchell that recently went by) that we argue they do NOT understand. The fact that answers are so dependent on word choice is an indication that they are parroting remembered examples rather than understanding in a human sense. > > > > > > On Mon, Feb 19, 2024 at 1:50?PM Brad Wyble wrote: > >> Iam, the difference is that while you may need an external source to >> remember all 50 states, for the ones that you have remembered/looked up, >> you are able to verify that they do or do not contain specific letters >> without reference to a resource, or writing some code to verify it. It is >> even worse that if you push them on their mistakes, they are still unable >> to correct. >> >> A better counterargument to the example Dave provides is that perhaps >> LLMs just cannot ever break things down at the letter level because of >> their reliance on tokens. Humans can do this of course, but a good analogy >> for us might be the Muller Lyer illusion, which is essentially >> impenetrable to our cognitive faculties. I.e. we are unable to force >> ourselves to see the lines as their true lengths on the page because the >> basis of our representations does not permit it. This is perhaps similar >> to the way that LLM representations preclude them from accessing the letter >> level. 
>> >> However, I think a good counterpoint to this is that while people are >> unable to un-see the Muller Lyer illusion, it is not that difficult to >> teach someone about this blindspot and get them to reason around it, with >> no external tools, just their own reasoning faculties. LLMs seem unable to >> achieve this level of self-knowledge no matter how patiently things are >> explained. They do not have the metacognitive faculty that allows them to >> even understand their blindspot about letters. >> >> >> >> >> On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus wrote: >> >>> Correct; also tool integration has actually been less successful than >>> some people believe: >>> >>> >>> https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external >>> >>> >>> On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg wrote: >>> >>> ? >>> Good point, but Dave's point stands as the models he is referring to did >>> not even comprehend that they made mistakes. >>> >>> Cheers, Thomas >>> >>> On Mon, Feb 19, 2024, 4:43?a.m. wrote: >>> >>>> That can be attributed to the models' underlying text encoding and >>>> processing mechanisms, specifically tokenization that removes the spelling >>>> information from those words. If you use GPT-4 instead, it can process it >>>> properly by resorting to external tools. >>>> >>>> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: >>>> >>>>> My favorite way to show that LLMs don't know what they're talking about >>>>> is this simple prompt: >>>>> >>>>> List all the US states whose names don't contain the letter "a". >>>>> >>>>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting >>>>> "Texas" >>>>> or "Alaska" on the list and leaving out states like "Wyoming" and >>>>> "Tennessee". And you can have a lengthy conversation with them about >>>>> this, pointing out their errors one at a time, and they still can't >>>>> manage to get it right. Gemini insisted that all 50 US states have an >>>>> "a" in their name. It also claimed "New Jersey" has two a's. >>>>> >>>>> -- Dave Touretzky >>>>> >>>> >> >> -- >> Brad Wyble >> Professor of Psychology >> Penn State University >> >> -- Brad Wyble (he/him) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanm at alleninstitute.org Mon Feb 19 20:25:02 2024 From: stefanm at alleninstitute.org (Stefan Mihalas) Date: Tue, 20 Feb 2024 01:25:02 +0000 Subject: Connectionists: [JOBS] Scientist 1 or 2 position at the Allen Institute - bio-inspired machine learning Message-ID: Dear Colleagues, A Scientist position is open at the Allen Institute in bio-inspired machine learning. Please see the job descriptions here: Sci I_Bio-inspired ML_Apply Sci II_Bio-inspired ML_Apply Biological systems learn differently than current machine learning systems, with generally higher sample efficiency but also strong inductive biases. The scientist will explore the effects which bio-realistic neurons, plasticity rules and architectures have on learning in artificial neural networks. This will be done by combining construction of artificial neural network with bio-inspired constraints. Please consider applying and reach out if you have any questions. We are looking forward to hearing from you. Thanks, Stefan Mihalas Investigator at Allen Institute Affiliate Professor in Applied Math, UW Seattle, Wa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joseph.lizier at sydney.edu.au Mon Feb 19 17:55:19 2024 From: joseph.lizier at sydney.edu.au (Joseph Lizier) Date: Mon, 19 Feb 2024 22:55:19 +0000 Subject: Connectionists: Postdoc/PhD in systems neuroscience and complexity at Uni of Sydney Message-ID: Dear all, We would like to advertise 1x Postdoc and 1x PhD position in Systems Neuroscience and Complexity at The University of Sydney. The successful candidates will join a dynamic interdisciplinary collaboration between A/Prof Mac Shine (Brain and Mind Centre), A/Prof Joseph Lizier (School of Computer Science) and Dr Ben Fulcher (School of Physics), within the University's Centre for Complex Systems, focused on advancing our understanding of brain function and cognition using cutting-edge computational and neuroimaging techniques at the intersection of network neuroscience, dynamical systems and information theory. The positions are funded by a grant from the Australian Research Council "Evaluating the Network Neuroscience of Human Cognition to Improve AI". Post-Doctoral Fellow in Systems Neuroscience and Complexity: Full time, research only fixed term position for 24 months. Lead supervisor: A/Prof. Mac Shine The Postdoc will process and analyse functional neuroimaging data (fMRI, EEG, etc.) to extract meaningful insights into brain function. The successful candidate will hold a PhD in Neuroscience, Computational Neuroscience, Psychology, or a related field (by July 2024); have a strong background in computational modelling and/or functional neuroimaging techniques; and proficiency in programming languages commonly used in neuroscience research (e.g., Python, MATLAB). PhD scholarship: Characterising information flow networks across brain regions in rest and task: available to both domestic and international applicants for 3.5 years (fulltime), providing the tuition fees and a stipend. Lead Supervisor: A/Prof. Joseph Lizier The PhD student will conduct applications of, and/or updates to algorithms for, inferring brain network models to represent information flow relationships between brain regions (via information-theoretic measures), based on time-series neural recordings. The successful candidate will hold a Bachelor's degree with honours or Master's degree in a relevant quantitative field (e.g. computer science, physics, mathematics), including completing a research thesis (first-class honours equivalent results are essential); and excellent skills in computational numerical analysis (in Python and/or Matlab) and in applied mathematics. For further details and how to apply, see the links above for both positions. Deadline Feb 28 (PhD position open until filled) --joe -- A/Prof. Joseph Lizier The University of Sydney School of Computer Science, Faculty of Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From bwyble at gmail.com Mon Feb 19 21:31:28 2024 From: bwyble at gmail.com (Brad Wyble) Date: Mon, 19 Feb 2024 21:31:28 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: The fact that prompting tricks sometimes manage to improve performance is not really evidence of understanding though. On the contrary, this is exactly how you would expect a massive memory bank with unknown contents and unknown indexing to function. You throw phrases at it like incantations and some of them happen to trigger the kind of output that you wanted. 
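One way to make that observation concrete is to ask the same underlying question under several rewordings and see how often the answer survives the paraphrase. A rough sketch only: ask_model is a placeholder to be wired to whichever chat interface one actually uses, and the paraphrases are merely illustrative.

# Hedged sketch: measure how stable a model's answer is under paraphrase.
# `ask_model` is a placeholder -- wire it to whatever chat API or local
# model you actually use; nothing is called here by default.
from collections import Counter
from typing import Callable

PARAPHRASES = [  # illustrative rewordings of one underlying question
    "List all the US states whose names don't contain the letter 'a'.",
    "Which US state names have no letter 'a' in them? List them all.",
    "Name every US state that lacks the letter 'a' in its name.",
]


def normalize(answer: str) -> frozenset[str]:
    """Reduce a free-text answer to a comparable set of state names."""
    parts = answer.replace("\n", ",").split(",")
    return frozenset(p.strip().title() for p in parts if p.strip())


def phrasing_sensitivity(ask_model: Callable[[str], str]) -> float:
    """Fraction of prompt variants that yield the modal (most common) answer.

    1.0 means every paraphrase produced the same answer set; values near
    1/len(PARAPHRASES) mean the answer changes with almost every rewording.
    """
    answers = [normalize(ask_model(p)) for p in PARAPHRASES]
    _, modal_count = Counter(answers).most_common(1)[0]
    return modal_count / len(answers)

High agreement across paraphrases would not prove understanding, but low agreement is hard to square with it.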
I agree that the fact that LLMs do what they do is amazing and impressive and they are even useful for some cases. But the understanding is an illusion. Our minds always want to believe that artifacts are sentient because it plays into our metacognition about agency and theory of mind. LLMs hit the sweet spot for this belief. On Mon, Feb 19, 2024 at 7:35?PM Iam Palatnik wrote: > The fact that answers are so dependent on word choice > > Just to clarify, I was not referring to the prompt of the task itself. > That is, whether we tell chatgpt to 'add 2 and 2' or 'give me 2+2'. > I'm referring to the fact that adding something like 'think step by step > ' onto the task prompt has a > noticeable effect on the performance for various tasks. > And it doesn't have to literally be 'think step by step' exactly. > > An example that happened just the other day: a friend was trying to get > ChatGPT to write a 30 paragraph long text, and was having no luck. The > model always wrote the wrong number of paragraphs. I told him to try > including 'number the start of each paragraph to help yourself' to the > prompt and bingo. I had never seen someone trying to do this exact task, > nor this numbering trick. I came up with it on the spot, and it worked, but > why? Why should we expect the model to perform better with this, at all? > > Do examples like this entail understanding or lack thereof? I'm not sure > everyone would agree on the answer. > But regardless of the specific wording we would use to describe what > happened there, it would be a problem if we are 'absolutely sure' ChatGPT > can't do X because it can't understand anything, while it turns out it can > do X if you just change the prompt or tool access. > > On Mon, Feb 19, 2024 at 6:13?PM Brad Wyble wrote: > >> The question is not whether we need to rely on external tools to verify >> the line lengths (we do), but whether we can understand that this illusion >> affects us, and can understand the basic idea of such an illusion. And we >> can do both of these. You can explain to someone that this illusion exists >> and they are able to understand that there is a difference between what >> they think their eyes are telling them about line length and what is >> actually true in the world. >> >> As far as we can determine from GPT experiments, LLMs are not able to >> reason about their own limitations in this way. That is the fundamental >> distinction I'm referring to. >> >> >>> >>> Because the performance of the LLMs on some of these tests seem to >>> depend so much on how the questions are formulated and what tools they are >>> given to respond with, I still tend to think that they understand >>> something. I'm OK with the idea that their understanding has space to be >>> much deeper, still, too. >>> >>> >>> >> This is the opposite of the argument that is typically used though. It >> is specifically the face that their answers depend so much on phrase (e.g. >> that paper by Melanie Mitchell that recently went by) that we argue they do >> NOT understand. The fact that answers are so dependent on word choice is >> an indication that they are parroting remembered examples rather than >> understanding in a human sense. 
>> >> >> >> >> >>> >>> >>> >>> >>> >>> On Mon, Feb 19, 2024 at 1:50?PM Brad Wyble wrote: >>> >>>> Iam, the difference is that while you may need an external source to >>>> remember all 50 states, for the ones that you have remembered/looked up, >>>> you are able to verify that they do or do not contain specific letters >>>> without reference to a resource, or writing some code to verify it. It is >>>> even worse that if you push them on their mistakes, they are still unable >>>> to correct. >>>> >>>> A better counterargument to the example Dave provides is that perhaps >>>> LLMs just cannot ever break things down at the letter level because of >>>> their reliance on tokens. Humans can do this of course, but a good analogy >>>> for us might be the Muller Lyer illusion, which is essentially >>>> impenetrable to our cognitive faculties. I.e. we are unable to force >>>> ourselves to see the lines as their true lengths on the page because the >>>> basis of our representations does not permit it. This is perhaps similar >>>> to the way that LLM representations preclude them from accessing the letter >>>> level. >>>> >>>> However, I think a good counterpoint to this is that while people are >>>> unable to un-see the Muller Lyer illusion, it is not that difficult to >>>> teach someone about this blindspot and get them to reason around it, with >>>> no external tools, just their own reasoning faculties. LLMs seem unable to >>>> achieve this level of self-knowledge no matter how patiently things are >>>> explained. They do not have the metacognitive faculty that allows them to >>>> even understand their blindspot about letters. >>>> >>>> >>>> >>>> >>>> On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus >>>> wrote: >>>> >>>>> Correct; also tool integration has actually been less successful than >>>>> some people believe: >>>>> >>>>> >>>>> https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external >>>>> >>>>> >>>>> On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg wrote: >>>>> >>>>> ? >>>>> Good point, but Dave's point stands as the models he is referring to >>>>> did not even comprehend that they made mistakes. >>>>> >>>>> Cheers, Thomas >>>>> >>>>> On Mon, Feb 19, 2024, 4:43?a.m. wrote: >>>>> >>>>>> That can be attributed to the models' underlying text encoding and >>>>>> processing mechanisms, specifically tokenization that removes the spelling >>>>>> information from those words. If you use GPT-4 instead, it can process it >>>>>> properly by resorting to external tools. >>>>>> >>>>>> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky >>>>>> wrote: >>>>>> >>>>>>> My favorite way to show that LLMs don't know what they're talking >>>>>>> about >>>>>>> is this simple prompt: >>>>>>> >>>>>>> List all the US states whose names don't contain the letter "a". >>>>>>> >>>>>>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting >>>>>>> "Texas" >>>>>>> or "Alaska" on the list and leaving out states like "Wyoming" and >>>>>>> "Tennessee". And you can have a lengthy conversation with them about >>>>>>> this, pointing out their errors one at a time, and they still can't >>>>>>> manage to get it right. Gemini insisted that all 50 US states have >>>>>>> an >>>>>>> "a" in their name. It also claimed "New Jersey" has two a's. 
>>>>>>> >>>>>>> -- Dave Touretzky >>>>>>> >>>>>> >>>> >>>> -- >>>> Brad Wyble >>>> Professor of Psychology >>>> Penn State University >>>> >>>> >> >> -- >> Brad Wyble (he/him) >> >> -- Brad Wyble (he/him) -------------- next part -------------- An HTML attachment was scrubbed... URL: From iam.palat at gmail.com Mon Feb 19 14:08:01 2024 From: iam.palat at gmail.com (Iam Palatnik) Date: Mon, 19 Feb 2024 16:08:01 -0300 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: Dear Brad, I see what you mean and I agree that this is in some way about the 'cognitive biases' (for lack of a better term) that LLMs suffer, and how they are different to our cognitive biases. It's not that they can't see things at the letter level, I think, because current tokenizers can split text into tokens all the way from sub-character to combinations of full words. (For example, the English letter W gets 1 token, but Phoenician ? gets 4 tokens, on the gpt-4 tokenizer ). But token related weirdness indeed likely plays a role in LLM behavior, and is likely part of why certain behaviors seem so non-intuitive. But imagine some civilization from another planet, with very precise eyes, are testing us with the Muller Lyer illusion example, and sometimes they prank us by making one of the lines actually just slightly longer. Is there any way for us to be sure, without tool usage? Using tools, like measuring the lines with or overlaying them on an image editor, makes the task more trivial, whereas it might've been close to impossible otherwise. Would the testers be able to conclude something about our intelligence or understanding based on the tool-less version of the Muller Lyer illusion test? We may be bad at that test in that format, but does that mean we don't understand length? So much of our species is built around tool usage. Because the performance of the LLMs on some of these tests seem to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something. I'm OK with the idea that their understanding has space to be much deeper, still, too. On Mon, Feb 19, 2024 at 1:50?PM Brad Wyble wrote: > Iam, the difference is that while you may need an external source to > remember all 50 states, for the ones that you have remembered/looked up, > you are able to verify that they do or do not contain specific letters > without reference to a resource, or writing some code to verify it. It is > even worse that if you push them on their mistakes, they are still unable > to correct. > > A better counterargument to the example Dave provides is that perhaps LLMs > just cannot ever break things down at the letter level because of their > reliance on tokens. Humans can do this of course, but a good analogy for > us might be the Muller Lyer illusion, which is essentially impenetrable to > our cognitive faculties. I.e. we are unable to force ourselves to see the > lines as their true lengths on the page because the basis of our > representations does not permit it. This is perhaps similar to the way > that LLM representations preclude them from accessing the letter level. 
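The tokenizer point above is easy to inspect directly, assuming OpenAI's open-source tiktoken package is installed (pip install tiktoken); the counts are printed rather than asserted, since they depend on the exact encoding:

# Sketch: inspect how a GPT-4-style tokenizer (cl100k_base) segments text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-class models

for text in ["W", "Wyoming", " Wyoming", "Mississippi", "a"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    # Single ASCII letters come out as one token; common whole words are
    # often a single token too, so the model never "sees" their letters.
    print(f"{text!r:15} -> {len(ids)} token(s): {pieces}")

Single letters do surface as individual tokens, while common whole words are often one opaque token each; whether that segmentation excuses the letter-level mistakes discussed here is exactly what is in dispute.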
> > However, I think a good counterpoint to this is that while people are > unable to un-see the Muller Lyer illusion, it is not that difficult to > teach someone about this blindspot and get them to reason around it, with > no external tools, just their own reasoning faculties. LLMs seem unable to > achieve this level of self-knowledge no matter how patiently things are > explained. They do not have the metacognitive faculty that allows them to > even understand their blindspot about letters. > > > > > On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus wrote: > >> Correct; also tool integration has actually been less successful than >> some people believe: >> >> >> https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external >> >> >> On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg wrote: >> >> ? >> Good point, but Dave's point stands as the models he is referring to did >> not even comprehend that they made mistakes. >> >> Cheers, Thomas >> >> On Mon, Feb 19, 2024, 4:43?a.m. wrote: >> >>> That can be attributed to the models' underlying text encoding and >>> processing mechanisms, specifically tokenization that removes the spelling >>> information from those words. If you use GPT-4 instead, it can process it >>> properly by resorting to external tools. >>> >>> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky wrote: >>> >>>> My favorite way to show that LLMs don't know what they're talking about >>>> is this simple prompt: >>>> >>>> List all the US states whose names don't contain the letter "a". >>>> >>>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" >>>> or "Alaska" on the list and leaving out states like "Wyoming" and >>>> "Tennessee". And you can have a lengthy conversation with them about >>>> this, pointing out their errors one at a time, and they still can't >>>> manage to get it right. Gemini insisted that all 50 US states have an >>>> "a" in their name. It also claimed "New Jersey" has two a's. >>>> >>>> -- Dave Touretzky >>>> >>> > > -- > Brad Wyble > Professor of Psychology > Penn State University > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at msu.edu Mon Feb 19 17:11:07 2024 From: weng at msu.edu (Weng, Juyang) Date: Mon, 19 Feb 2024 22:11:07 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? 
From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: From interdonatos at gmail.com Tue Feb 20 02:31:17 2024 From: interdonatos at gmail.com (Roberto Interdonato) Date: Tue, 20 Feb 2024 08:31:17 +0100 Subject: Connectionists: =?utf-8?q?=5BDEADLINE_EXTENSION=5D_CFP_FRCCS_2024?= =?utf-8?q?_Montpellier=2C_France_May_29_=E2=80=93_31=2C_2024?= Message-ID: SUBJECT: CFP FRCCS 2024 Montpellier, France May 29 ? 31, 2024 Fourth* F*rench* R*egional* C*onference on* C*omplex* S*ystems May 29 ? 31, 2024 Montpellier, France *FRCCS 2024* Deadline Extension: *March 4, 2024.* After successfully hosting the conference in Dijon in 2021, Paris in 2022, and Le Havre in 2023, the fourth edition of the French Regional Conference on Complex Systems (FRCCS 2024) will be held in Montpellier, France. This single track international conference organized by CSS France serves as a platform to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange and promote ideas, facilitating the cross-fertilization of recent research work, industrial advancements, and original applications. Moreover, the conference emphasizes research topics with a high societal impact, showcasing the significance of complexity science in addressing complex societal challenges. Join us in Montpellier as we collectively strive to better understand complexity and its implications for society. You are cordially invited to submit your contribution until *March 4, 2024* *.* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Papers* about *original research* (up to 12 pages) ? *Extended Abstract* about *published or unpublished* research (3 to 4 pages). *Keynote Speakers* ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany *Publication* o Contributions will be included in the conference proceedings (With ISBN) o *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Frontiers in Big Data , edited by Frontiers *Submission * ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? *Select the Track: FRCCS2024* *Topics include, but are not limited to: * ? 
*Foundations of complex systems * - Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, Complexity and information, Dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals - *Complex Networks * - Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning. - *Neuroscience, **Linguistics* - Evolution of language, social consensus, artificial intelligence, cognitive processes & education, Narrative complexity - *Economics & Finance* - Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment - *Infrastructure, planning, and environment * - critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences - *Biological and (bio)medical complexity * - biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging - *Social complexity* o social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation - *Socio-Ecological Systems* - Global environmental change, green growth, sustainability & resilience, and culture - *Organisms and populations * - Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution - *Engineering systems and systems of systems* - bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology - *Complexity in physics and chemistry* - quantum computing, quantum synchronization, quantum chaos, random matrix theory *GENERAL CHAIRS* Roberto Interdonato CIRAD, UMR TETIS, Montpellier Bruno Pinaud LABRI University of Bordeaux -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 20 02:34:43 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Mon, 19 Feb 2024 23:34:43 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Strawperson alert: there are not any neurons that have a one-to-one correspondence to the symbol How about collections of neurons? What do you think logicians do when they manipulate symbols? Is it magic? Parapsychological? > On Feb 19, 2024, at 10:41?PM, Weng, Juyang wrote: > > there are not any neurons that have a one-to-one correspondence to the symbol -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vtresp at gmail.com Tue Feb 20 04:16:12 2024 From: vtresp at gmail.com (Volker Tresp) Date: Tue, 20 Feb 2024 09:16:12 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Hi all, there is no doubt in my mind that the brain has symbols. A symbol might be represented as a single neuron, an ensemble of neurons, or an activation pattern of a set of neurons. The question is: does the brain use symbolic reasoning (FOL-type)? Obvious, some of us can do it sometimes! More in the Tensor Brain paper: https://arxiv.org/abs/2109.13392 Best, Volker From: Connectionists on behalf of Gary Marcus Date: Tuesday, 20. February 2024 at 08:56 To: "Weng, Juyang" Cc: "connectionists at mailman.srv.cs.cmu.edu" Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Strawperson alert: there are not any neurons that have a one-to-one correspondence to the symbol How about collections of neurons? What do you think logicians do when they manipulate symbols? Is it magic? Parapsychological? On Feb 19, 2024, at 10:41?PM, Weng, Juyang wrote: there are not any neurons that have a one-to-one correspondence to the symbol -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbiancardi at cesi.fr Tue Feb 20 04:49:55 2024 From: bbiancardi at cesi.fr (BIANCARDI Beatrice) Date: Tue, 20 Feb 2024 09:49:55 +0000 Subject: Connectionists: CfP AVI 2024 Advanced Visual Interfaces - 3rd Call for Tutorial Proposals In-Reply-To: References: Message-ID: CfP AVI 2024 Advanced Visual Interfaces - 3rd Call for Tutorial Proposals International Conference on Advanced Visual Interfaces ================================================= 3-7 June 2024 Genoa, Italy https://avi2024.dibris.unige.it ================================================= [https://lh7-us.googleusercontent.com/OZ7XkzziwW-IrPvKTdwEBEV9wwQSGKFbS8Qk8-SyRSx_qCxtcQpZt4ROLvIX-4VbmAZRZyuFNFaXejO5heeKW4Nt48Yih_b_mmt1E7c682OPpjfYUgkGd6tiB7DKECdk8Ef4FL-krVW4aXXhwXXbWcs] IMPORTANT DATES Tutorial proposals: - Friday, March 15, 2023 (23:59, AoE) Submission webpage: https://easychair.org/conferences/?conf=avi2024 --------------------------------------------------------------------------- TUTORIAL PROPOSALS We encourage the proposal of tutorials on new and emerging topics or, conversely, consolidated topics related to Advanced Visual Interfaces and HCI approaches, methodologies, or technologies. The selected tutorials will be held on the first or last day of the conference and Doctoral Consortium students will be invited to participate in them. Tutorial proposals must be submitted following the instructions on: https://avi2024.dibris.unige.it/calls Tutorial Chairs - Ignacio Aedo, Universidad Carlos III de Madrid - Rosella Gennari, Free University of Bozen-Bolzano --------------------------------------------------------------------------- International Conference on Advanced Visual Interfaces (AVI) Since 1992, AVI has been a biennial appointment for a vast international community of experts with a broad range of backgrounds. Throughout three decades, the Conference has attracted leading researchers of Human-Computer Interaction (HCI) from all over the world, offering a forum to present and disseminate new technological results, paradigms, and visions for HCI and user interfaces. 
Because of advanced technology and new possibilities for user interaction, AVI has broadened the topics it covers, still keeping its primary focus on the conception, design, implementation, and evaluation of novel visual interfaces. While rooted in Italy, AVI is an actual international conference concerning the nationality of participants, authors of papers, and program committee members. The mixture of carefully selected research contributions paired with cordial Italian hospitality creates a unique conference atmosphere, which has made AVI an internationally recognized brand. AVI 2024 is under the patronage of University of Genoa, Department of Informatics, Bioengineering, Robotics and Systems Engineering, Italy. We look forward to your participation in AVI 2024! Cristina Conati, AVI 2024 General Chair Gulatiero Volpe, AVI 2024 General Chair Ilaria Torre, AVI 2024 Program Chair --------------------------------------------------------------------------- TOPICS Topics of interest include (but are not limited to) the following: Adaptive and Context-Aware Interfaces Affective Visual Interfaces Computer Supported Cooperative Work Conversational Interfaces Full-body Interaction Human-AI Interaction Information Visualization Intelligent Interfaces Engineering of Visual Interfaces and Interaction Interaction Design Tools Building Interactions: Hardware, Materials, and Fabrication Interaction for the environment and environmental awareness Interface Metaphors Interfaces for Automotive Interfaces for Big Data Interfaces for e-Commerce and e-Branding Interfaces for e-Culture and e-Tourism Interfaces for End-User Development Interfaces for i-TV Interfaces for Recommender Systems Interfaces for Social Interaction and Cooperation Interfaces and Interactions for Inclusion, Accessibility and Aging Interfaces for Children Learning, Education, and Families Mobile Interaction Motion-based Interaction Multimodal Interfaces (Multi)Sensory Interfaces (Multi)Touch Interaction Search Interfaces Shape-Changing Devices User Interfaces for the Internet of Things Usability and Accessibility Usability and (Cyber)Security Virtual and Augmented Reality Visual Analytics --------------------------------------------------------------------------- AVI 2024 ORGANIZING COMMITTEE General Chairs Cristina Conati, University of British Columbia, Vancouver Gualtiero Volpe, University of Genoa Program Chair Ilaria Torre, University of Genoa Long Papers Chairs Giuseppe Desolda, University of Bari Michail Giannakos, Norwegian University of Science and Technology Short Papers Chairs Elisabetta Bevacqua, National Engineering School of Brest Maurizio Mancini, Sapienza University of Rome Workshop & Tutorial Chairs Ignacio Aedo, Universidad Carlos III de Madrid Rosella Gennari, Free University of Bozen-Bolzano Poster Chairs Masood Masoodian, Aalto University Giovanna Varni, University of Trento Demo Chairs Radoslaw Niewiadomski, University of Genoa Fabiana Vernero, University of Turin Doctoral Consortium Chair Fabio Patern?, CNR ISTI Giuliana Vitiello, University of Salerno Publicity Chairs Beatrice Biancardi, LINEACT CESI Federica Delprino, University of Genoa Proceedings Chairs Eleonora Ceccaldi, University of Genoa Cigdem Beyan, University of Trento Web Chair Paola Barra, University of Naples Parthenope AVI STEERING COMMITTEE Paolo Bottoni, Sapienza University of Rome, Italy Paolo Buono University of Bari, Italy Tiziana Catarci Sapienza University of Rome, Italy Maria Francesca Costabile University of Bari, Italy Maristella Matera 
Polytechnic of Milan, Italy Massimo Mecella Sapienza University of Rome, Italy Kent Norman University of Maryland, USA Emanuele Panizzi, Sapienza University of Rome, Italy Genny Tortora University of Salerno, Italy Giuliana Vitiello University of Salerno, Italy Marco Winckler Universit? C?te d'Azur, France --------------------------------------------------------------------------- @avi2024conference (https://www.facebook.com/avi2024conference/) @avi2024conference (https://www.instagram.com/avi2024conference/) @avi2024conf (https://twitter.com/avi2024conf) @avi2024conference https://www.linkedin.com/company/avi2024conference/ [cid:image002.png at 01DA63EA.88AC20D0] Beatrice BIANCARDI Enseignante-Chercheure LINEACT CESI bbiancardi at cesi.fr https://sites.google.com/view/beatricebiancardi/home Tel : +33 (0)6 71 62 50 45 93 boulevard de La Seine CS 40177 - 92006 Nanterre cedex [cid:image003.png at 01DA63EA.88AC20D0] [cid:image004.png at 01DA63EA.88AC20D0] [cid:image005.png at 01DA63EA.88AC20D0] [cid:image006.png at 01DA63EA.88AC20D0] [cid:image007.png at 01DA63EA.88AC20D0] cesi.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 11405 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 5926 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 488 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 487 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 501 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 549 bytes Desc: image006.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.png Type: image/png Size: 591 bytes Desc: image007.png URL: From carsten.mehring at biologie.uni-freiburg.de Tue Feb 20 05:59:44 2024 From: carsten.mehring at biologie.uni-freiburg.de (Carsten Mehring) Date: Tue, 20 Feb 2024 11:59:44 +0100 Subject: Connectionists: Call for applications - MSc Neuroscience, University of Freiburg, Germany Message-ID: <9B1AF5C6-36CD-45FD-BDBD-6016ABABB431@biologie.uni-freiburg.de> MSc program in Neuroscience - University of Freiburg, Germany We are inviting applications for the interdisciplinary MSc program in Neuroscience at the University of Freiburg, Germany. The program provides theoretical and practical training in neuroscience, covering both the foundations and latest research in the field. It is taught by lecturers from an international scientific community from multiple faculties and neuroscience research centres. The modular course structure caters to the specific backgrounds and research interests of each individual student and we offer specialisations in neural circuits and behavior, computational neuroscience and neurotechnology. All courses are taught in English. We welcome applications with a background in natural sciences, mathematics, behavioural sciences, computer science, medicine or engineering sciences. The deadline for applications for the October 2024 entry is the 31st of March 2024. 
Further details can be found on our homepage http://www.mscneuro.uni-freiburg.de With kind regards Carsten Mehring ? Bernstein Center Freiburg & Faculty of Biology University of Freiburg -------------- next part -------------- An HTML attachment was scrubbed... URL: From risto at cs.utexas.edu Tue Feb 20 06:47:01 2024 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Tue, 20 Feb 2024 03:47:01 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: References: <98C09E74-34DE-4356-8865-C042E84C1F85@nyu.edu> Message-ID: <49E47D6C-C143-496C-87DA-606FD452C19F@cs.utexas.edu> Defining ?understanding? is a slippery goal: it can be an illusion (i.e. an epiphenomenon), a continuum with different depths, different for machines and humans and other animals, more or less thorough depending on the topic, etc. But it is essential to develop characterizations for it, however complex they may be. How else can we ever take full advantage of generative AI? I believe this effort is different from what we are used to doing with AI and neural networks. Instead of statistics, we should perhaps adapt other methods from neuroscience, psychology, and social sciences. They deal with complex systems with emergent behaviors and with only a partial view of what?s generating them?much like GenAI systems. ( I recently wrote an opinion piece about this for the AI Magazine, at https://onlinelibrary.wiley.com/doi/10.1002/aaai.12155 There?s also a fun video abstract, i.e. discussion with Babak Hodjat at https://youtu.be/3qkmF61oBwI To be fair, Babak will have his own piece in the next issue :-) > On Feb 19, 2024, at 6:31?PM, Brad Wyble wrote: > > The fact that prompting tricks sometimes manage to improve performance is not really evidence of understanding though. On the contrary, this is exactly how you would expect a massive memory bank with unknown contents and unknown indexing to function. You throw phrases at it like incantations and some of them happen to trigger the kind of output that you wanted. > > I agree that the fact that LLMs do what they do is amazing and impressive and they are even useful for some cases. But the understanding is an illusion. Our minds always want to believe that artifacts are sentient because it plays into our metacognition about agency and theory of mind. LLMs hit the sweet spot for this belief. > > > On Mon, Feb 19, 2024 at 7:35?PM Iam Palatnik > wrote: >>> The fact that answers are so dependent on word choice >> Just to clarify, I was not referring to the prompt of the task itself. That is, whether we tell chatgpt to 'add 2 and 2' or 'give me 2+2'. >> I'm referring to the fact that adding something like 'think step by step ' onto the task prompt has a noticeable effect on the performance for various tasks. >> And it doesn't have to literally be 'think step by step' exactly. >> >> An example that happened just the other day: a friend was trying to get ChatGPT to write a 30 paragraph long text, and was having no luck. The model always wrote the wrong number of paragraphs. I told him to try including 'number the start of each paragraph to help yourself' to the prompt and bingo. I had never seen someone trying to do this exact task, nor this numbering trick. I came up with it on the spot, and it worked, but why? Why should we expect the model to perform better with this, at all? >> >> Do examples like this entail understanding or lack thereof? I'm not sure everyone would agree on the answer. 
>> But regardless of the specific wording we would use to describe what happened there, it would be a problem if we are 'absolutely sure' ChatGPT can't do X because it can't understand anything, while it turns out it can do X if you just change the prompt or tool access. >> >> On Mon, Feb 19, 2024 at 6:13?PM Brad Wyble > wrote: >>> The question is not whether we need to rely on external tools to verify the line lengths (we do), but whether we can understand that this illusion affects us, and can understand the basic idea of such an illusion. And we can do both of these. You can explain to someone that this illusion exists and they are able to understand that there is a difference between what they think their eyes are telling them about line length and what is actually true in the world. >>> >>> As far as we can determine from GPT experiments, LLMs are not able to reason about their own limitations in this way. That is the fundamental distinction I'm referring to. >>> >>>> >>>> >>>> Because the performance of the LLMs on some of these tests seem to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something. I'm OK with the idea that their understanding has space to be much deeper, still, too. >>>> >>>> >>> >>> This is the opposite of the argument that is typically used though. It is specifically the face that their answers depend so much on phrase (e.g. that paper by Melanie Mitchell that recently went by) that we argue they do NOT understand. The fact that answers are so dependent on word choice is an indication that they are parroting remembered examples rather than understanding in a human sense. >>> >>> >>> >>> >>>> >>>> >>>> >>>> >>>> >>>> On Mon, Feb 19, 2024 at 1:50?PM Brad Wyble > wrote: >>>>> Iam, the difference is that while you may need an external source to remember all 50 states, for the ones that you have remembered/looked up, you are able to verify that they do or do not contain specific letters without reference to a resource, or writing some code to verify it. It is even worse that if you push them on their mistakes, they are still unable to correct. >>>>> >>>>> A better counterargument to the example Dave provides is that perhaps LLMs just cannot ever break things down at the letter level because of their reliance on tokens. Humans can do this of course, but a good analogy for us might be the Muller Lyer illusion, which is essentially impenetrable to our cognitive faculties. I.e. we are unable to force ourselves to see the lines as their true lengths on the page because the basis of our representations does not permit it. This is perhaps similar to the way that LLM representations preclude them from accessing the letter level. >>>>> >>>>> However, I think a good counterpoint to this is that while people are unable to un-see the Muller Lyer illusion, it is not that difficult to teach someone about this blindspot and get them to reason around it, with no external tools, just their own reasoning faculties. LLMs seem unable to achieve this level of self-knowledge no matter how patiently things are explained. They do not have the metacognitive faculty that allows them to even understand their blindspot about letters. 
>>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Feb 19, 2024 at 10:06?AM Gary Marcus > wrote: >>>>>> Correct; also tool integration has actually been less successful than some people believe: >>>>>> >>>>>> https://open.substack.com/pub/garymarcus/p/getting-gpt-to-work-with-external >>>>>> >>>>>>> On Feb 19, 2024, at 5:49?AM, Thomas Trappenberg > wrote: >>>>>>> >>>>>>> ? >>>>>>> Good point, but Dave's point stands as the models he is referring to did not even comprehend that they made mistakes. >>>>>>> >>>>>>> Cheers, Thomas >>>>>>> >>>>>>> On Mon, Feb 19, 2024, 4:43?a.m. > wrote: >>>>>>>> That can be attributed to the models' underlying text encoding and processing mechanisms, specifically tokenization that removes the spelling information from those words. If you use GPT-4 instead, it can process it properly by resorting to external tools. >>>>>>>> >>>>>>>> On Mon, Feb 19, 2024 at 3:45?PM Dave Touretzky > wrote: >>>>>>>>> My favorite way to show that LLMs don't know what they're talking about >>>>>>>>> is this simple prompt: >>>>>>>>> >>>>>>>>> List all the US states whose names don't contain the letter "a". >>>>>>>>> >>>>>>>>> ChatGPT, Bing, and Gemini all make a mess of this, e.g., putting "Texas" >>>>>>>>> or "Alaska" on the list and leaving out states like "Wyoming" and >>>>>>>>> "Tennessee". And you can have a lengthy conversation with them about >>>>>>>>> this, pointing out their errors one at a time, and they still can't >>>>>>>>> manage to get it right. Gemini insisted that all 50 US states have an >>>>>>>>> "a" in their name. It also claimed "New Jersey" has two a's. >>>>>>>>> >>>>>>>>> -- Dave Touretzky >>>>> >>>>> >>>>> -- >>>>> Brad Wyble >>>>> Professor of Psychology >>>>> Penn State University >>>>> >>> >>> >>> -- >>> Brad Wyble (he/him) >>> > > > -- > Brad Wyble (he/him) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lamb at inf.ufrgs.br Tue Feb 20 09:06:05 2024 From: lamb at inf.ufrgs.br (Luis Lamb) Date: Tue, 20 Feb 2024 09:06:05 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <7352F962-23B3-4295-91C8-43362E298077@nyu.edu> References: <7352F962-23B3-4295-91C8-43362E298077@nyu.edu> Message-ID: <3345479C-8AA8-491D-A4CB-E2F74823B228@inf.ufrgs.br> What many (or some) have thought relevant in computer science is knowing rigorously what a computing engine is computing (learning or reasoning). Dana Scott, ACM Turing Award Winner (1976) dedicated most of his career at Stanford-Oxford-CMU to that. It seems to me that we are lacking in semantic understanding in current LLMs. Extending and understanding a notion made explicit by Harnad (in the symbol grounding paper) to neural computing and its hybridization through neurosymbolic AI, I always thought of a Harnad style analogy between symbol grounding and "neural grounding (sic)". What is/are the meaning(s) of large neural networks (Transformer like LLMs) "computations" which are manipulated based only on their (initial) syntactic structure? This is one of my (and of many others) motivation towards neurosymbolic AI. The need for better (or "a") formal semantics. The benefits would be possibly many in explainability and other predicates now under attention. Luis > On Feb 20, 2024, at 01:38, Gary Marcus wrote: > > ?This dependency on exact formulation is literally why I think they don?t understand much. 
> > An expert, who understands something deeply, can appreciate it in a variety of presentations; a novice fails to recognize slight transformations as the same problem, because they don?t really understand what?s going on. > >> On Feb 19, 2024, at 11:08, Iam Palatnik wrote: >> >> Because the performance of the LLMs on some of these tests seem to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something > From ali at ifi.uio.no Tue Feb 20 09:23:21 2024 From: ali at ifi.uio.no (Ali Ramezani-Kebrya) Date: Tue, 20 Feb 2024 14:23:21 +0000 Subject: Connectionists: 36-month Postdoc Position in ML at the University of Oslo Message-ID: We have a 36-month Postdoc Position in ML (Salary NOK 575 400 ? 657 300): Joint Physics-informed and Data-driven Complex Dynamical System Solvers Reach out to me if you want to apply for this position. https://www.jobbnorge.no/en/available-jobs/job/255679/dstrain-msca-postdoctoral-fellowships-in-computational-and-natural-sciences-18-positions Ali Ramezani-Kebrya Associate Professor Department of Informatics University of Oslo Principal Investigator, SFI Visual Intelligence Norwegian Centre for Knowledge-driven Machine Learning (Integreat) European Laboratory for Learning and Intelligent Systems (ELLIS) Member https://alirk.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurent.mertens at kuleuven.be Tue Feb 20 11:05:41 2024 From: laurent.mertens at kuleuven.be (Laurent Mertens) Date: Tue, 20 Feb 2024 16:05:41 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? 
From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 20 10:09:37 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 20 Feb 2024 07:09:37 -0800 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <3345479C-8AA8-491D-A4CB-E2F74823B228@inf.ufrgs.br> References: <3345479C-8AA8-491D-A4CB-E2F74823B228@inf.ufrgs.br> Message-ID: an AI approach that combined large-scale learning with a tractable formal semantics would be a huge advance. > On Feb 20, 2024, at 6:06?AM, Luis Lamb wrote: > > ?What many (or some) have thought relevant in computer science is knowing rigorously what a computing engine is computing (learning or reasoning). Dana Scott, ACM Turing Award Winner (1976) dedicated most of his career > at Stanford-Oxford-CMU to that. It seems to me that we are lacking in semantic understanding in current LLMs. > > Extending and understanding a notion made explicit by Harnad (in the symbol grounding paper) to neural computing and its hybridization through neurosymbolic AI, I always thought of a Harnad style analogy > between symbol grounding and "neural grounding (sic)". > > What is/are the meaning(s) of large neural networks (Transformer > like LLMs) "computations" which are manipulated based only on their > (initial) syntactic structure? > This is one of my (and of many others) motivation towards neurosymbolic AI. The need for better (or "a") formal semantics. The benefits would be possibly many in explainability and other predicates now under attention. > Luis > >> On Feb 20, 2024, at 01:38, Gary Marcus wrote: >> >> ?This dependency on exact formulation is literally why I think they don?t understand much. >> >> An expert, who understands something deeply, can appreciate it in a variety of presentations; a novice fails to recognize slight transformations as the same problem, because they don?t really understand what?s going on. >> >>>> On Feb 19, 2024, at 11:08, Iam Palatnik wrote: >>> >>> Because the performance of the LLMs on some of these tests seem to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something >> > From minaiaa at gmail.com Tue Feb 20 10:54:13 2024 From: minaiaa at gmail.com (Ali Minai) Date: Tue, 20 Feb 2024 10:54:13 -0500 Subject: Connectionists: =?utf-8?q?Statistics_versus_=E2=80=9CUnderstandin?= =?utf-8?q?g=E2=80=9D_in_Generative_AI=2E?= In-Reply-To: <3345479C-8AA8-491D-A4CB-E2F74823B228@inf.ufrgs.br> References: <7352F962-23B3-4295-91C8-43362E298077@nyu.edu> <3345479C-8AA8-491D-A4CB-E2F74823B228@inf.ufrgs.br> Message-ID: It is absurd to claim humanlike ?understanding? in an ungrounded system with no introspective ability, no internal recurrence, no hierarchical working memory, no internal drives or affective states, no foresight, no real-time adaptation, and purely autoregressive response. 
The fact that careful prompting can improve performance dramatically shows how much a user needs to make up for the absence of basic cognitive mechanisms. All this said, LLMs - especially GPT-4 - are truly remarkable and should force us to reconsider many of our received ideas about the nature of cognition and language. Its hallucinations and failure to obey very simple instructions (as in Dave?s example) notwithstanding, ChatGPT can often respond with such apparent depth and nuance that you have to sit back and wonder. The ?just statistics? argument is, in my opinion, not a good one. The problem with ChatGPT, Sora, et al is not that they build their internal world model from statistics, but that they do so from statistics of text and video, which represent the real world indirectly, superficially, ambiguously, and - in the case of text - often inaccurately. Thus, they are modeling the worlds of text and video, not the real world. They do it using architectures that are extremely generic and simplistic - especially when compared to brains. They also do not exploit the incredibly useful prior biases (e.g., specific neural circuits) that evolution provides in animals, and do not learn developmentally. If one had a more sophisticated - animal-like or other - autonomous system with productive innate biases, self-motivation, representational depth, and the ability to learn ?just statistics? by experiencing the real world, it could have natural understanding. If we ever do get around to building such systems, we?ll need to think of them as we do of humans and other animals. They will be able to do very useful things for us, but by choice, not compulsion. They will also be able to disobey, deceive, and act sociopathically - even psychopathically. We accept these things in our fellow humans and pets, and should do the same in any autonomous general intelligence we create. The ideal AGI system - if we ever build one - will have its own mind; else it is not a general intelligence but a glorified tool. Ali *Ali A. Minai, Ph.D.* Professor and Graduate Program Director Complex Adaptive Systems Lab Department of Electrical & Computer Engineering 828 Rhodes Hall University of Cincinnati Cincinnati, OH 45221-0030 Phone: (513) 556-4783 Fax: (513) 556-7326 Email: Ali.Minai at uc.edu minaiaa at gmail.com WWW: https://researchdirectory.uc.edu/p/minaiaa On Tue, Feb 20, 2024 at 9:37?AM Luis Lamb wrote: > What many (or some) have thought relevant in computer science is knowing > rigorously what a computing engine is computing (learning or reasoning). > Dana Scott, ACM Turing Award Winner (1976) dedicated most of his career > at Stanford-Oxford-CMU to that. It seems to me that we are lacking in > semantic understanding in current LLMs. > > Extending and understanding a notion made explicit by Harnad (in the > symbol grounding paper) to neural computing and its hybridization through > neurosymbolic AI, I always thought of a Harnad style analogy > between symbol grounding and "neural grounding (sic)". > > What is/are the meaning(s) of large neural networks (Transformer > like LLMs) "computations" which are manipulated based only on their > (initial) syntactic structure? > This is one of my (and of many others) motivation towards neurosymbolic > AI. The need for better (or "a") formal semantics. The benefits would be > possibly many in explainability and other predicates now under attention. 
> Luis > > > On Feb 20, 2024, at 01:38, Gary Marcus wrote: > > > > ?This dependency on exact formulation is literally why I think they > don?t understand much. > > > > An expert, who understands something deeply, can appreciate it in a > variety of presentations; a novice fails to recognize slight > transformations as the same problem, because they don?t really understand > what?s going on. > > > >> On Feb 19, 2024, at 11:08, Iam Palatnik wrote: > >> > >> Because the performance of the LLMs on some of these tests seem to > depend so much on how the questions are formulated and what tools they are > given to respond with, I still tend to think that they understand something > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joern.diedrichsen at googlemail.com Tue Feb 20 14:26:20 2024 From: joern.diedrichsen at googlemail.com (=?utf-8?Q?J=C3=B6rn_Diedrichsen?=) Date: Tue, 20 Feb 2024 14:26:20 -0500 Subject: Connectionists: Postdoctoral Position in MRI data analysis of cerebellar development Message-ID: We are looking to recruit a new postdoctoral associate for a large collaborative project on the anatomical development of the human cerebellum. The overall goal of the project is to develop a high-resolution normative model of human cerebellar development across the entire life span (see raynorcerebellumcharts.github.io ). The successful candidate will join the Diedrichsen Lab (Western University, Canada) and will work with a team of colleagues at Erasmus Medical Center, the Donders Institute (Netherlands), McGill, Dalhousie, Sick Kids, and UBC (Canada). A PhD degree in Neuroscience, Statistics, Computer Science, Engineering, Psychology, Medicine or related fields is required. Expertise in machine learning, analysis of medical imaging data, and/or normative modeling is highly desired. The project is funded for a 3 year duration, with the initial contract for 2 years. The exact starting date is negotiable, but is ideally as close as possible to the project start date, April 1st, 2024. Review of application will start March 5th and continue until the position is filled. For more information and application instructions, please see the full ad on our website . J?rn Diedrichsen Western Research Chair Brain Mind Institute Department of Computer Science Department of Statistical and Actuarial Sciences Email: jdiedric at uwo.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From Elena.Vildjiounaite at vtt.fi Tue Feb 20 16:57:22 2024 From: Elena.Vildjiounaite at vtt.fi (Vildjiounaite Elena) Date: Tue, 20 Feb 2024 21:57:22 +0000 Subject: Connectionists: Special Session on Human Emotion and Behaviour Understanding at CBMI2024, Reykjavik, Iceland Message-ID: ==== We apologies if you have received multiple copies ==== ==== Please, feel free to share this call for Papers with your colleagues who might be interested ==== Dear Colleagues, This is Call for Papers for Special Session in 21st International Conference on Content-based Multimedia Indexing September 18-20, Reykjavik, Iceland Special Sessions - CBMI 2024 Multimodal Data Analysis for Understanding of Human Behaviour, Emotions and their Reasons This special session addresses the processing of all types of data related to understanding of human behaviour, emotions, and their reasons, such as current or past context. Understanding human behaviour and context may be beneficial for many services both online and in physical spaces. 
For example, detecting lack of skills, confusion or other negative states may help to adapt online learning programmes, to detect a bottleneck in the production line, to recognise poor workplace culture etc., or maybe to detect a dangerous spot on a road before any accident happens there. Detection of unusual behaviour may help to improve security of travellers and safety of dementia sufferers and visually/ audio impaired individuals, for example, to help them to stay away from potentially dangerous strangers, e.g., drunk people or football fans forming a big crowd. In context of multimedia retrieval, understanding human behaviour and emotions could help not only for multimedia indexing, but also to derive implicit (i.e., other than intentionally reported) human feedback regarding multimedia news, videos, advertisements, navigators, hotels, shopping items etc. and improve multimedia retrieval. Humans are good at understanding other humans, their emotions and reasons. For example, when looking at people engaged in different activities (sport, driving, working on a computer, working in a construction site, using public transport etc.), a human observer can understand whether a person is engaged in the task or distracted, stopped the recommended video because the video was not interesting, or because the person quickly found what he needed in the beginning of the video. After observing another human for some time, humans can also learn his/ her tastes, skills and personality traits. Hence the interest of this session is, how to improve AI understanding of the same aspects? The topics include (but are not limited to) the following * Use of various sensors for monitoring and understanding human behaviour, emotion/ mental state/ cognition, and context: video, audio, infrared, wearables, virtual (e.g., mobile device usage, computer usage) etc. * Methods for information fusion, including information from various heterogeneous sources * Methods to learn human traits and preferences from long term observations * Methods to detect human implicit feedback from past and current observations * Methods to assess task performance: skills, emotions, confusion, engagement in the task, context * Methods to detect potential security and safety threats and risks * Methods to adapt behavioural and emotional models to different end users and contexts without collecting a lot of labels from each user and/ or for each context: transfer learning, semi-supervised learning, anomaly detection, one-shot learning etc. * How to collect data for training AI methods from various sources, e.g., internet, open data, field pilots etc. * Use of behavioural or emotional data to model humans and adapt services either online or in physical spaces. * Ethics and privacy issues in modelling human emotions, behaviour, context and reasons Proceedings should be published by ACM, the information will soon appear in the conference web page. Best papers will be invited to Multimedia Tools and Applications journal. Conference Website: http://cbmi2024.org/ Authors can submit full papers (6 pages + references), short papers (4 pages + references), special session papers (6 pages + references) and demonstration proposals (4 pages + 1 page demonstration description + references). Authors of certain best papers of the conference will be invited to submit extended versions of their contributions to a special journal issue in MTAP (approval pending). Submissions to CBMI are peer reviewed in a single blind process. 
All types of papers must use the IEEE templates at https://www.ieee.org/conferences/publishing/templates.html. The language of the conference is English. Submission Deadlines * * * Full and short research papers are due March 22, 2024 * * * * Special session papers are due March 22, 2024 * * * * Demonstration submissions are due April 22, 2024 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Feb 20 11:47:27 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 20 Feb 2024 08:47:27 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: Message-ID: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> An HTML attachment was scrubbed... URL: From newsletters at tommasoturchi.fyi Wed Feb 21 03:43:26 2024 From: newsletters at tommasoturchi.fyi (Tommaso Turchi) Date: Wed, 21 Feb 2024 09:43:26 +0100 Subject: Connectionists: [CfP] SYNERGY Workshop @ AVI 2024 Message-ID: *** Apologies for cross-posting *** -------------------------------------- CALL FOR PAPERS: SYNERGY Workshop 2024 -------------------------------------- We are pleased to announce the call for papers for the SYNERGY Workshop, to be held in conjunction with ACM AVI 2024. This workshop is dedicated to designing and building Hybrid Human-AI Systems. We invite researchers and practitioners to contribute to this evolving field, where human intelligence synergizes with artificial intelligence. == Important Dates == * Submission Deadline: April 2, 2024 * Notification: April 16, 2024 * Camera Ready: May 3, 2024 * Workshop Date: Afternoon of June 3rd or 4th, 2024 == Submission Guidelines == * Length: 5-10 pages * Format: CEUR-ART 1-column Template * Selected papers may be invited to submit an extended version to a dedicated Special Issue of Springer's "Personal and Ubiquitous Computing" * Submission Website: https://cmt3.research.microsoft.com/SYNERGY2024 == Topics Include == The workshop seeks contributions on, but not limited to, the following themes: 1. Human-AI Collaboration Models 2. Augmenting Human Cognition with AI 3. Design Principles for Synergistic Systems 4. Ethics and Trust in Human-AI Interaction 5. Philosophical and Theoretical Approaches 6. Adaptive Interfaces for Human-AI Synergy 7. Software Architectures and Tools 8. Case Studies of Human-AI Synergy == Organizers == * Alan Dix, Swansea University & Cardiff Metropolitan University * Matt Roach, Swansea University * Tommaso Turchi, University of Pisa * Alessio Malizia, University of Pisa For more information, please visit our website at https://synergy.trx.li or contact Tommaso Turchi (tommaso.turchi at unipi.it). Join us in Arenzano, Genoa, Italy for an insightful workshop that aims to explore and shape the future of hybrid human-AI systems! //==============================\\ Tommaso Turchi Department of Computer Science University of Pisa \\==============================// From bernstein.communication at fz-juelich.de Wed Feb 21 04:00:39 2024 From: bernstein.communication at fz-juelich.de (Alexander Lammers) Date: Wed, 21 Feb 2024 10:00:39 +0100 Subject: Connectionists: Call for Satellite Workshop Proposals - Bernstein Conference 2024 in Frankfurt am Main Message-ID: <3e7cce33-10ac-4604-b39c-20ad4726fbaf@fz-juelich.de> The Bernstein Network Computational Neuroscience invites proposals for Satellite Workshops at the Bernstein Conference 2024 in Frankfurt am Main, Germany. 
____ Bernstein Conference Each year, the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. It has established itself as one of the most renowned conferences worldwide in this field, attracting students, postdocs, and PIs from around the world to meet and discuss new scientific discoveries. https://www.bernstein-conference.de/ ____ Invited speakers Dmitriy Aronov (Columbia University, USA) Alison Barker (Max Planck Institute for Brain Research, Germany) Helen Barron (University of Oxford, UK) Elizabeth Buffalo (University of Washington, USA) Alex Cayco Gajic (Ecole Normale Sup?rieure, France) Jan Drugowitsch (Harvard University, USA) Jakob Macke (University of T?bingen, Germany) Tirin Moore (Stanford University, USA) Mala Murthy (Princeton University, USA) Memming Park (Champalimaud Foundation, Portugal) Xiao-Jing Wang (New York University, USA) ____ Satellite Workshops Satellite Workshops of the Bernstein Conference 2024 provide a forum to discuss topical research questions, novel scientific approaches, and challenges in computational neuroscience and related fields. Ideally, the format should foster extensive scientific discussions and debates, and go beyond a mere series of talks. https://bernstein-network.de/bernstein-conference/call-for-satellite-workshops/ ____ Important Dates Satellite Workshops: September 29 - 30, 2024 * Sunday, Sep 29, 14:00 - 18:30 CEST * Monday, Sep 30, 8:30 - 12:30 CEST Main Conference: September 30 - October 2, 2024 Deadline for Satellite Workshop proposal submission: April 10, 2024 ? 15:00 CEST Notification of Satellite Workshop acceptance: May 2024 ____ Benefits of participation Satellite Workshop registration is free for speakers of workshops. Workshop organizers (max. 2 per workshop) will receive waivers for the Satellite Workshops and Main Conference. ____ For any further questions, please contact: bernstein.conference at fz-juelich.de ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum J?lich GmbH 52425 J?lich Sitz der Gesellschaft: J?lich Eingetragen im Handelsregister des Amtsgerichts D?ren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Stefan M?ller Gesch?ftsf?hrung: Prof. Dr. Astrid Lambrecht (Vorsitzende), Karsten Beneke (stellv. Vorsitzender), Dr. Ir. Pieter Jansens ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hocine.cherifi at gmail.com Wed Feb 21 02:57:12 2024 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Wed, 21 Feb 2024 10:57:12 +0300 Subject: Connectionists: =?utf-8?q?FRCCS_2024_Montpellier=2C_France_May_29?= =?utf-8?q?_=E2=80=93_31=2C_2024_Deadline_Extended_March_4=2C_2024?= Message-ID: Fourth* F*rench* R*egional* C*onference on* C*omplex* S*ystems May 29 ? 
31, 2024 Montpellier, France *FRCCS 2024* Deadline Extension: *March 4, 2024 (Firm Deadline).* FRCCS is a single-track international conference organized by CSS France to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange, promote ideas, and facilitate the cross-fertilization of recent research work, industrial advancements, and original applications. You are cordially invited to submit your contribution until *March 4, 2024* *.* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Papers* about *original research* (up to 12 pages) ? *Extended Abstract* about *published or unpublished* research (3 to 4 pages). *Keynote Speakers* ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany *Publication* o Contributions will be included in the conference proceedings (With ISBN) o *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Frontiers in Big Data , edited by Frontiers *Submission * ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? *Select the Track: FRCCS2024* *Topics include, but are not limited to: * ? *Foundations of complex systems * - *Complex Networks * - *Neuroscience, **Linguistics* - *Economics & Finance* - *Infrastructure, planning, and environment * - *Biological and (bio)medical complexity * - *Social complexity* - *Socio-Ecological Systems* - *Organisms and populations * - *Engineering systems and systems of systems* - *Complexity in physics and chemistry* *GENERAL CHAIRS* Roberto Interdonato CIRAD, UMR TETIS, Montpellier Bruno Pinaud LABRI University of Bordeaux *PROGRAM CHAIRS* Hocine Cherifi University of Burgundy Pascal Poncelet University of Montpellier Join us at COMPLEX NETWORKS 2024 *-------------------------* Hocine CHERIFI University of Burgundy Franche-Comt? Laboratoire* I*nterdisciplinaire *C*arnot de *B*ourgogne - ICB UMR 6303 CNRS Editor in Chief Plos Complex Systems Founding & Adisory Editor Applied Network Science Editorial Board member PLOS One , IEEE ACCESS , Scientific Reports , Journal of Imaging , Quality and Quantity , Computational Social Networks , Complex Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.miozzo at cttc.cat Wed Feb 21 05:09:46 2024 From: marco.miozzo at cttc.cat (Marco Miozzo) Date: Wed, 21 Feb 2024 11:09:46 +0100 Subject: Connectionists: Post-doctorate position on pervasive intelligence and AIoT at CTTC Message-ID: <418c3c9c-4b91-4f4f-9f50-96ba879c5961@cttc.cat> Dear all, Sustainable Artificial Intelligence Research Unit at Centre Tecnol?gic de Telecomunicacions de Catalunya (CTTC) , Barcelona (Spain), is hiring! We are offering a full-time post-doctorate position for investigating pervasive intelligence and Artificial Intelligence of Things (AIoT). 
The research will focus on solutions suitable for advancing towards a truly pervasive and liquid AI, enabling edge devices to accomplish training and inference with the same accuracy of cloud AI, without harming our environment. For doing so, highly efficient learning methods will be investigated including model compression, reservoir and neuromorphic computing, and distributed/decentralized and collaborative data/client selection algorithms. Realistic use cases from sustainable development goals will be considered to validate the selected solutions. You may find all the details at the following link: https://www.cttc.cat/job/call-06-2024-1-researcher-position/ For any further detail, do not hesitate to contact me. Best, Marco. -- *Marco Miozzo* Senior Researcher Sustainable Artificial Intelligence (SAI) Research Unit marco.miozzo at cttc.cat Centre Tecnol?gic de Telecomunicacions de Catalunya (CTTC) Av. Carl Friedrich Gauss, 7 - Building B4 08860 - Castelldefels Tel.: +34 93 645 29 00 _DATA PROTECTION INFORMATION. Data controller: CENTRE TECNOLOGIC DE TELECOMUNICACIONS DE CATALUNYA (G62616586):_ We inform you that your identification data and the data contained in the emails and attached files can be incorporated into our databases, in order to maintain professional and / or commercial relationships, and that it will be preserved throughout the relationship. According to the current regulations, you can exercise your right to access, rectification, erasure, restriction of processing, to portability and to object by sending an email to dpo at cttc.cat. This message and any attached document, where appropriate, may be confidential and intended for the original recipient only. L'informem que les seves dades identificatives i les compreses als correus electr?nics i fitxers adjunts poden ser incorporades a les nostres bases de dades amb la finalitat de mantenir relacions professionals i/o comercials i, que seran conservades mentre es mantingui aquesta relaci?. Si ho desitja, pot exercir el seu dret d'acc?s, rectificaci?, supressi?, limitaci? del tractament, portabilitat i objecci? enviant un correu electr?nic a dpo at cttc.cat. Aquest missatge i qualsevol document adjunt, en el seu cas, pot ser confidencial i destinat ?nicament a la persona o entitat a qui s'hagi enviat. Before printing this e-mail or attachments, be sure it is necessary. It is in our hands to protect the environment. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kentridge at durham.ac.uk Wed Feb 21 05:43:05 2024 From: robert.kentridge at durham.ac.uk (KENTRIDGE, ROBERT W.) Date: Wed, 21 Feb 2024 10:43:05 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: I agree ? empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 ?Jennifer Aniston cell? finding (Nature, 435. 1102 - 1107 ). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. 
The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. On Feb 20, 2024, at 08:41, Laurent Mertens wrote: Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?
From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From etienne.roesch at gmail.com Wed Feb 21 05:29:16 2024 From: etienne.roesch at gmail.com (Etienne B. Roesch) Date: Wed, 21 Feb 2024 10:29:16 +0000 Subject: Connectionists: Join the ReproPsy & e-ReproNim Fellowship Programmes! Message-ID: Application Form: https://bit.ly/repro-school-fellowships Deadline: 15/03/2024 midnight anywhere in the world. Join the ReproPsy & e-ReproNim Fellowship Programmes! Are you passionate about robust, rigorous, and transparent research practices in Psychology and/or Neuroscience? Do you want ?1,000 or (slightly) more to fund training to enhance your skills in software and data management? Do you want to join a vibrant community of practice ? Look no further! This is a second call for the first edition of two Fellowship programmes, ReproPsy and e-ReproNim. These are unique opportunities to become part of a vibrant community dedicated to advancing open and robust data practices in Psychological and Neuroscientific research. By becoming a ReproPsy Fellow, you will join a network of like-minded researchers dedicated to advancing data practices in Psychology and Neuroscience. Our programmes offer more than just financial support?it provides a platform for collaboration, knowledge sharing, and professional growth. The project is led by Dr Jim Grange (Keele University, UK) and Dr Etienne Roesch (University of Reading, UK). They are part of the repro.school collective, and funded by the Research Data Alliance (RDA), in collaboration with the European Open Science Cloud (EOSC Future), and the UK Reproducibility Network?s Open Research Programme . The RDA aims to develop and adopt infrastructure that promotes data-sharing and data-driven research, thereby grounding the fellowship programmes in networks of data specialists. - ReproPsy is part of the RDA Psychology Data Interest Group ; - e-ReproNim is part of the RDA Neuroimaging Data Working Group , and the European sister project to the North American repronim.org . The UK Reproducibility Network is a grassroot movement, created organically by researchers in the UK in 2019, in response to the so-called reproducibility crisis. 
It now gathers active representatives of 70 research-intensive Universities in the UK and lead major projects. Other national Reproducibility Networks exist; there might be one closer to you ? If not, contact us and we?ll help you set up. Open Data in Psychology & Neuroimaging In a context where results of research are questioned, open data is transforming the landscape of our practices, enabling collaboration, reproducibility, and innovation. As Psychology & Neuroscience embraces this paradigm shift, we aim to equip researchers like you with the necessary skills and resources to adopt and adapt your data practices, ensuring greater impact and scientific progress, and helping you help your community. Project overview The mission of the Fellowship programme is to empower communities to excel in open data practices. In each programme we are offering 15 fellowships, each worth up to ?1,000, to support early career researchers (ECRs) from both EU and UK institutions. ECRs typically donate their time to upskill and transmit this new knowledge to their local communities. We expect the funding awarded to be directed towards helping you create opportunities for learning and teaching, helping you fund your own software and data management training, and organise bespoke training sessions at your home institution. The funding could for instance be used to organise training from the Carpentries (http://carpentries.org) or similar venues; or, should your local policies on workload allocation permit, you could use the funding to offset time commitment towards such training, in agreement with your line manager. What we ask of you in return is a commitment to participate in the online events organised (at least once a month) and in the projects that may arise, such as scoping and designing training needs, contributing to writing training material, and other projects that you may bring to the table! More information about the programmes and to access the application form (including dropbox link to submit letters of support, evidence of claims and invoices), please visit https://repro.school/2023/05/31/join-the-repropsy-e-repronim-fellowship-programmes/ . Important: Because of constraints imposed on the funding, we must have received all claims and invoices for selected Fellows by the 31/3/2024. The deadline for application is 15/3, and applicants must include all claims and invoices in their application. Applications and documents sent after the deadline will not be processed. For enquiries about ReproPsy, contact j.a.grange at keele.ac.uk, and for e-ReproNim, contact e.b.roesch at reading.ac.uk . Application Process Applying for our Open Data Fellowship Program is simple! The link to the application form is at the bottom, but here's a brief outline of what you will need to consider before applying: - Field(s) of research (Up to 1000 char with spaces): What you do. - Training Needs (Up to 2000 char with spaces): In your application, highlight the training needs for yourself and your local research community. How can open data practices benefit your institution and Psychology as a whole? - Training Plan (Up to 2000 char with spaces): Describe the aim, motivation, and relevant prior experience related to the training you envision. How will this training empower your research and contribute to the open data movement? - Scope of Impact (Up to 2000 char with spaces): Outline the scope of your local community that would benefit from the training. 
Show us how this initiative can create a ripple effect, benefiting multiple researchers at your institution. - Special circumstances (Up to 2000 char with spaces): This is a gather-it-all section that allows you to bring anything you think might be relevant to the evaluators. Use this category to describe aspects related to equity, diversity and inclusion we ought to know about. - Justification of resources (Up to 5000 char with spaces): Provide itemised costs (including tax if applicable) describing how you plan to use the funding allocated in the Fellowship. Each Fellow can request funding for ?1,000 or (slightly) more AND upload all evidence of claims and scan of invoices in the dropbox link found at the bottom of the application form. - Host Institution Support (signed pdf with letterhead): Upload a letter of support from your host institution, as a PDF file, signed by your line manager, confirming their commitment to supporting your training and supporting you to organise training in your community. This could involve allocating time, which must be clearly stated (in number of hours estimated), or providing you with other necessary resources. Files must be uploaded onto the public dropbox link provided in the application form. You will be asked for your name, and your email address; we only need your name to link your file to your application, and the email address does not have to be real. Any issue with uploading a file, please email e.b.roesch at reading.ac.uk. Please also note: - The maximum number of characters allowed are not targets to be met. Be as concise as needed. - We are committed to supporting typically under-represented communities, and will apply a process of positive action towards supporting equity, diversity and inclusion. Specifically, after applications are graded by our evaluators and ranked, we will assert the landscape of the selection and adjust the selection if needed. Please use the Special Circumstances box to raise to the attention of the evaluators anything you think may deserve consideration. - The ReproPsy and e-ReproNim Fellowship programmes are separate projects, yet have lots in common?starting with the application form! We thus regret to say that we will not accept applications to both programmes, and you must specify one or the other on the application form. - Funding is only available to applicants from institutions hosted in the EU and the UK. Because of the structure of the funding allocation, we are reserving 5 fellowships to UK applicants in each of the fellowship programmes, 10 out of 30 in total. - Information related to the transfer of the award will be shared with successful applicants after selection. Fellows will be required to have identified who at their host institution will be managing this financial transaction: typically a post-award or a finance office. - Applications are managed using Google Forms, and PDF uploads with Dropbox. Neither of these platforms require you to create an account on Google or Dropbox. We collect your name and email addresses on the Google Form to identify your applications. This information is not shared with anyone, and only used for general statistics and logistics of the selection process. If you wish to be kept in the loop of what we do, please register to the Psychology Data Interest Group and Neuroimaging Data Working Group , with the Research Data Alliance. Alternatively, you can also visit our website . PS: I usually only manage to look at emails, Teams and the like after 2pm. 
For anything pressing, please call me or book yourself in my University Outlook calendar. *Prof. dr. Etienne B. Roesch* | *Professor of Applied Statistics & Cognitive Science* | *Univ. Reading* Institutional Lead for Open and Reproducible Research, UK Reproducibility Network Discipline Hopping Fellow, Natural Environment Research Council meetings: book yourself in my University Outlook calendar www: etienneroes.ch | osf.io/ede88 | github.com/eroesch | rdg.ac.uk/cinn | shop research: project-chai.org | rescience | ukrn From J.Bowers at bristol.ac.uk Wed Feb 21 07:31:21 2024 From: J.Bowers at bristol.ac.uk (Jeffrey Bowers) Date: Wed, 21 Feb 2024 12:31:21 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: It is possible to define grandmother cells in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell for a vertical line is partly activated by a line 5 degrees off vertical, this does not undermine the hypothesis that the simple cell *represents* vertical lines. (A short numerical sketch of this partial-activation point follows below.) I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree: empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "grandmother cells" as defined by Jerry Lettvin; that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active.
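The partial-activation point in Jeff Bowers's message above is easy to make concrete. The following is a minimal, purely illustrative Python sketch, not code from the Interactive Activation model itself; the function name, the scoring rule, and the numbers are assumptions chosen only to show the idea of a localist word unit driven by position-specific letter evidence.

def word_unit_activation(word_unit: str, input_word: str) -> float:
    # Fraction of the unit's position-specific letters matched by the input,
    # standing in for the letter-to-word excitation of an IA-style localist unit.
    matches = sum(a == b for a, b in zip(word_unit, input_word))
    return matches / len(word_unit)

print(word_unit_activation("DOG", "DOG"))  # 1.0   -> full activation
print(word_unit_activation("DOG", "HOG"))  # 0.666 -> partial activation from the shared O and G
print(word_unit_activation("DOG", "CAT"))  # 0.0   -> no position-matched letters

On this toy reading, the partial response of the DOG unit to HOG reflects how evidence reaches the unit, not a claim that DOG is coded by a distributed population.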
Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ?Friends? along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob [Image result for university of durham logo] [signature_2975123418] [signature_2364801924] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. ? On Feb 20, 2024, at 08:41, Laurent Mertens wrote: ? Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. 
Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From azadefarshad at gmail.com Wed Feb 21 07:30:48 2024 From: azadefarshad at gmail.com (Azade Farshad) Date: Wed, 21 Feb 2024 13:30:48 +0100 Subject: Connectionists: [CfP] CVPR 2024 Workshop on Scene Graphs and Graph Representation Learning Message-ID: The SG2RL CVPR 2024 workshop will be held on June 17th, 2024. Authors are invited to submit full-length, high-quality papers as well as extended abstracts related to the topic of scene graphs and graph representation learning. Accepted full papers will be included in the CVPR Workshops proceedings. Submission instructions are available on the SG2RL website: https://sites.google.com/view/sg2rl/ Deadlines and Important Dates: - Paper submission deadline: *10 March 2024* (23:59 PT) - Notification to Authors: 5 April 2024 - Camera-ready Deadline: 12 April 2024 Workshop: June 17th, 2024 Submitted papers should be within 8 pages (excluding references). Topics include: - Scene graph generation and prediction - Graph representation learning - Image generation and manipulation using scene graphs - Evaluation metrics for scene graphs - Common-sense graphs and knowledge graphs - Applications of graph theory and scene graphs to: computer vision, robotic perception, medical imaging, AR/VR, computer graphics, autonomous driving, etc. If you have questions, feel free to contact: azade.farshad at tum.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtkostecki at gmail.com Wed Feb 21 09:34:01 2024 From: mtkostecki at gmail.com (Mateusz Kostecki) Date: Wed, 21 Feb 2024 15:34:01 +0100 Subject: Connectionists: School of Ideas in Neuroscience 2024 Message-ID: Hello! We are super happy to announce *School** of Ideas in Neuroscience 2024!* How the knowledge in neuroscience is accumulated? How can we make sense of the terabytes of empirical data we collect every day? In everyday work, we often forget that every scientific endeavour is embedded in a broader theoretical framework. At the same time, most courses and workshops in neuroscience teach us experimental techniques, giving us more tools to gather more data. But, we are not taught how to develop ideas and integrate our research with the existing knowledge. *We believe that theoretical thinking and idea development are skills that can be trained just as new methodologies and data analysis tools*. At our school, we aim to teach participants how to think theoretically to collect data that makes sense and to make sense of collected data! 
Come to Warsaw to discuss the most exciting theories and ideas with neuroscientists & philosophers! We provide beautiful surroundings, great atmosphere and a lot of time to talk and think together. The school will take place in Warsaw, *July 9th - July 15th 2024*, in the famous historical building of Polish Academy of Sciences Museum of the Earth, in the beautiful park area of the city. *The fee is EUR 550, but we offer fee waivers and travel grants*. Please find more info and the application form here - https://nenckiopenlab.org/school-of-ideas-2024/ Best, School of Ideas Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhansel0 at gmail.com Wed Feb 21 13:13:18 2024 From: dhansel0 at gmail.com (David Hansel) Date: Wed, 21 Feb 2024 20:13:18 +0200 Subject: Connectionists: NeuroBridges 2024: A Mediterranean, Middle Eastern Summer School in Neuroscience - Application is open. Deadline to apply: April 1st, 2024 In-Reply-To: References: Message-ID: *NeuroBridges 2024* *A Mediterranean, Middle Eastern Summer School in Neuroscience* *Deadline to apply: April 1st, 2024* *NeuroBridges 2024* is a summer school which will take place between *September 15 and September 26, 2024* in Le Centre de Conferences Internationales de Cluny, France (CCIC) . The school will provide an overview of theoretical and experimental frameworks used to study a variety of *decision making* processes. It will cover different model systems from rodents to humans and a plethora of decision making mechanisms. Another objective of Neurobridges is to bring together *Mediterranean and Middle Eastern scientists*, in order to promote scientific cooperation between young researchers from these countries. The organizers of NeuroBridges are convinced that such scientific collaborations can lead to personal relations, which eventually may alleviate the political distress in the Middle East. The school is intended for graduate students and postdocs, primarily, but not only, from the Middle East and the Mediterranean region, either working in their home countries or abroad. Applicants should have some background in related fields in neuroscience or cognitive psychology. *All costs of registration and accommodation will be covered by the organizers. A limited number of travel grants will be also be available*. *NeuroBridges 2024 Faculty:* - Alaa Ahmed (U. Colorado) - Rava Azeredo da Silveira (IOB and CNRS) - Sylvie Granon (Universit? Paris-Saclay) - David Hansel (CNRS) - Carole Levenes (CNRS) - Yonatan Loewenstein (Hebrew U.) - Eran Lottem (Hebrew U.) - Drazen Prelec (MIT) - Reza Shadmehr (John Hopkins U.) - Haim Sompolinsky (Hebrew U. and Harvard) - *NeuroBridges* is co-organized by Ahmed El Hady (Max Planck Institute), David Hansel (CNRS, Paris), Yonatan Loewenstein (Hebrew University of Jerusalem, Israel) and Carole Levenes (CNRS, Paris). *NeuroBridges 2024* is supported by the Simons Foundation, the ANR and ELSC, and is in partnership with VVTNS (https://www.wwtns.online/). For more information and application, visit our website: https://neurobridges.net *Contact us:* neurobridges2024 at gmail.com -- -- 'Life is good ..' (Carl van Vreeswijk, 1962-2022) --------------------------------------- David Hansel Directeur de Recherche au CNRS Co-Group leader Cerebral Dynamics Plasticity and Learning lab., CNRS 45 rue des Saints P?res 75270 Paris Cedex 06 Tel (Cell): +33 607508403 - Fax (33).1.49.27.90.62 ? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From achler at gmail.com Wed Feb 21 13:50:50 2024 From: achler at gmail.com (Tsvi Achler) Date: Wed, 21 Feb 2024 10:50:50 -0800 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: I am glad we are circling back to a discussion of grandmother cells. I dont think we will have a conclusion here but here are a few thoughts to consider: Classical feedforward methods (and their associated gradient descent training paradigm) do not seem flexible enough to easily model how there can be: 1) grandmother cells and neuron redundancy at the same time. 2) a way to describe input expectations, "likelihood" if you will. 3) learning that is truly one-shot of grandmother cells: that does not require iid rehearsal to avoid catastrophic forgetting. Sincerely, -Tsvi On Wed, Feb 21, 2024 at 4:43?AM Jeffrey Bowers wrote: > It is possible to define a grandmother cell in a way that falsifies them. > For instance, defining grandmother cells as single neurons that only > *respond* to inputs from one category. Another definition that is more > plausible is single neurons that only *represent* one category. In > psychology there are ?localist? models that have single units that > represent one category (e.g., there is a unit in the Interactive Activation > Model that codes for the word DOG). And a feature of localist codes is > that they are partly activated by similar inputs. So a DOG detector is > partly activated by the input HOG by virtue of sharing two letters. But > that partial activation of the DOG unit from HOG is no evidence against a > localist or grandmother cell representation of the word DOG in the IA > model. Just as a simple cell of a vertical line is partly activated by a > line 5 degrees off vertical ? that does not undermine the hypothesis that > the simple cell *represents* vertical lines. I talk about the > plausibility of Grandmother cells and discuss the Aniston cells in a paper > I wrote sometime back: > > > > Bowers, J. S. (2009). On the biological plausibility of grandmother cells: > implications for neural network theories in psychology and neuroscience. *Psychological > review*, *116*(1), 220. > > > > > > *From: *Connectionists on > behalf of KENTRIDGE, ROBERT W. > *Date: *Wednesday, 21 February 2024 at 11:48 > *To: *Gary Marcus , Laurent Mertens < > laurent.mertens at kuleuven.be> > *Cc: *connectionists at mailman.srv.cs.cmu.edu < > connectionists at mailman.srv.cs.cmu.edu> > *Subject: *Re: Connectionists: Early history of symbolic and neural > network approaches to AI > > I agree ? empirical evidence is just what we need in this > super-interesting discussion. > > > > I should point out a few things about the Quiroga et al 2005 ?Jennifer > Aniston cell? finding (*Nature*, *435*. 1102 - 1107 ). > > > > Quiroga et al themselves are at pains to point out that whilst the cells > they found responded to a wide variety of depictions of specific > individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? > that is, specific cells that respond to a broad range of depictions of an > individual and **only** of that individual, meaning that one can infer > that this individual is being perceived, thought of, etc. whenever that > cell is active. 
From dhansel0 at gmail.com Wed Feb 21 14:54:39 2024
From: dhansel0 at gmail.com (David Hansel)
Date: Wed, 21 Feb 2024 21:54:39 +0200
Subject: Connectionists: NeuroBridges 2024: A Mediterranean, Middle Eastern Summer School in Neuroscience - Application is open. Deadline to apply: April 1st, 2024
In-Reply-To: 
References: 
Message-ID: 

With the link to NeuroBridges website corrected.

For more information and application, visit our website: https://neurobridges.net/
*Contact us:* neurobridges2024 at gmail.com

--
David Hansel
Directeur de Recherche au CNRS

From battleday at g.harvard.edu Wed Feb 21 23:53:12 2024
From: battleday at g.harvard.edu (Battleday, Ruairidh)
Date: Thu, 22 Feb 2024 17:53:12 +1300
Subject: Connectionists: One week left! Abstracts for Convention on Mathematics of Neuroscience and AI. Rome, May 28-31st 2024
Message-ID: 

One week left to submit abstracts for the 5th International Convention on the Mathematics of Neuroscience and AI, to be held in Rome, 28th-31st May 2024. We hope that you come and join us! Thank you to all the people that have submitted already; the quality so far has been fantastic.

In this high-profile convention we examine computational accounts of the brain and mind from the perspectives of biocomputation, neural theory, and cognitive science.
We then take findings and theory back into the creation of robust, generalizable, and scalable AI: www.neuromonster.org We invite submissions for each session via this Google form: https://forms.gle/u8tJU4PtkT1EKYte7 Keynote Speakers Professor Peter Dayan (Max Planck Institute, T?bingen) Professor Sophie Deneve (ENS, Paris) Professor Wolfgang Maass (TUG) Professor Mackenzie Mathis (EPFL) Professor Anne Collins (UC Berkeley) Dr Feryal Behbahani (Google DeepMind) Dr Giovanni Pezzulo (NRC of Italy, Rome) Session Chairs Biocomputation Yasmine Ayman (Harvard) Professor Dan Nicolau Jr (KCL) Neural theory Dr James Whittington (Stanford / Oxford) Dr Francesca Mastrogiuseppe (Champalimaud) Cognitive science Dr Ruairidh Battleday (Harvard / MIT) Dr Antonella Maselli (NRC Italy) AI Dr Ishita Dasgupta (Google DeepMind) Dr Ilia Sucholutsky (Princeton) Confirmed speakers Biocomputation Professor Andrew Adamatzky (UWE) Professor Panayiota Poirazi (FORTH, Crete) Professor Jason Shepherd (Utah) Professor Christine Grienberger (Brandeis) Neural theory Professor Rafal Bogacz (Oxford) Professor Athena Akrami (UCL) Professor Nicolas Brunel (Duke) Professor Kanaka Rajan (Harvard) Dr Lea Duncker (Stanford) Dr Kris Jensen (UCL) Cognitive science Professor Dagmar Sternad (Northeastern) Professor Bill Thompson (UC Berkeley) Professor Samuel McDougle (Yale) Dr Maria Eckstein (DeepMind) Dr Fred Callaway (NYU / Harvard) AI Professor Kevin Ellis (Cornell) Professor Najoung Kim (BU, Google) Dr Andr? Barreto (DeepMind) Dr Sophia Sanborn (USB / UBC) Dr Wilka Carvalho (Harvard) The submission format is a single A4 sheet containing (at least) an abstract of 250 words, along with any additional material you think valuable. These will be accepted for poster, spotlight, or a short talk. Submission deadline is 28th February AOE. A limited number of early-bird registrations have been posted on our website! Normal registration will follow after the submission deadline. The convention will be held at the Villa Wolkonsky, in San Giovanni, Rome (10 minutes walk from the Colosseum). The Organizing Committee. -- Dr Ruairidh McLennan Battleday BMBCh (Oxon) PhD President Thinking About Thinking, Inc (www.thinkingaboutthinking.org) Postdoctoral Research Fellow Center for Brain Science, Harvard University Center for Brains, Minds, and Machines, MIT -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kentridge at durham.ac.uk Wed Feb 21 15:56:17 2024 From: robert.kentridge at durham.ac.uk (KENTRIDGE, ROBERT W.) Date: Wed, 21 Feb 2024 20:56:17 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. 
The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people?s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person?s judgment is indicative of the state of a high level consumer of orientation information. So again, I?d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn?t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob [Image result for university of durham logo] [signature_2025328812] [signature_824875734] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are ?localist? models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell of a vertical line is partly activated by a line 5 degrees off vertical ? that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote sometime back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. 
Psychological review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree ? empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 ?Jennifer Aniston cell? finding (Nature, 435. 1102 - 1107 ). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Hale Berry as Catwoman for the Hale Berry cell), but also names presented as text (e.g. ?HALE BERRY?). Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ?Friends? along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob [Image result for university of durham logo] [signature_2975123418] [signature_2364801924] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. ? On Feb 20, 2024, at 08:41, Laurent Mertens wrote: ? Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" 
I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 22984 bytes Desc: image006.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.png Type: image/png Size: 226645 bytes Desc: image007.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.png Type: image/png Size: 22619 bytes Desc: image008.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image009.jpg Type: image/jpeg Size: 8560 bytes Desc: image009.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image010.jpg Type: image/jpeg Size: 5741 bytes Desc: image010.jpg URL: From nagai.yukie at mail.u-tokyo.ac.jp Wed Feb 21 20:19:55 2024 From: nagai.yukie at mail.u-tokyo.ac.jp (nagai.yukie at mail.u-tokyo.ac.jp) Date: Thu, 22 Feb 2024 01:19:55 +0000 Subject: Connectionists: Call for Participants: IRCN and Chen Institute Joint Course on Neuro-inspired Computation on July 16-19, 2024 Message-ID: <71D107BE-9FB5-409F-A8A9-2427C8F13867@mail.u-tokyo.ac.jp> Dear colleagues, IRCN and Chen Institute Joint Course on Neuro-inspired Computation Date: July 16-19, 2024 Venue: The University of Tokyo, Japan Host: International Research Center for Neurointelligence (WPI-IRCN) and Tianqiao & Chrissy Chen Institute https://ircn.jp/en/events/07_ircncourse2024 [ Description ] In an unprecedented collaborative effort, the World Premier International Research Center for Neurointelligence (WPI-IRCN) at the University of Tokyo and the Tianqiao & Chrissy Chen Institute (Chen Institute) are proud to present an innovative course designed to explore the frontier where natural and artificial intelligence converge. It delves into critical areas such as intrinsic dynamics, network architectures, prediction, plasticity and criticality, as well as multi-agent learning and neuromodulation, highlighting the potential for cross-disciplinary innovation in understanding cognitive processes. Participants will benefit from a rich program featuring lab tours at leading research facilities, global networking opportunities, collaborative writing for a special issue on neurointelligence, and interactions with world-class speakers. This course is designed to inspire a diverse cohort of students, researchers, and professionals to engage deeply with the latest research and contribute to the evolving field of neurointelligence, fostering a global community committed to advancing our understanding of cognitive systems. [ Application ] Deadline: March 22, 2024, 15:00 JST (Please see the web page for details) [ Speakers ] Eva Dyer (Georgia Institute of Technology) Pawel Herman (KTH Royal Institute of Technology) Dmitry Krotov (MIT-IBM Watson AI Lab) Simon Schultz (Imperial College London) Taro Toyoizumi (Riken Center for Brain Science) Naoshige Uchida (Harvard University) Leila Wehbe (Carnegie Mellon University) Kazuyuki Aihara (WPI-IRCN, The University of Tokyo) Zenas Chao (WPI-IRCN, The University of Tokyo) Shin Ishii (Kyoto University / WPI-IRCN, The University of Tokyo) Yasuo Kuniyoshi (Graduate of School of Information Science and Technology / WPI-IRCN, The University of Tokyo) Yukie Nagai (WPI-IRCN, The University of Tokyo) Kenichi Ohki (WPI-IRCN, The University of Tokyo) Takamitsu Watanabe (WPI-IRCN, The University of Tokyo) Best, Yukie Nagai, on behalf of the organizing committees ? Yukie Nagai, Ph.D. 
Project Professor, The University of Tokyo nagai.yukie at mail.u-tokyo.ac.jp | https://developmental-robotics.jp CREST Cognitive Feelings: https://cognitive-feeling.jp CREST Cognitive Mirroring: https://cognitive-mirroring.org From michel.verleysen at uclouvain.be Thu Feb 22 02:36:47 2024 From: michel.verleysen at uclouvain.be (Michel Verleysen) Date: Thu, 22 Feb 2024 07:36:47 +0000 Subject: Connectionists: ESANN 2024 - call for papers and announcement of special sessions Message-ID: ESANN 2024 - 32nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning Bruges (Belgium) and online, 9-10-11 October 2024 https://www.esann.org Call for papers The call for papers is available at https://www.esann.org. Deadline for submissions: May 2, 2024. The ESANN conferences cover machine learning, artificial neural networks, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered. In addition to regular sessions, 7 special sessions will be organized on the following topics: * Informed Machine Learning for Complex Data * Continual Improvement of Deep Neural Networks in The Age of Big Data * Machine learning in distributed, federated and non-stationary environments * Modern Machine Learning Methods for robust and real-time Brain-Computer Interfaces (BCI) * Domain Knowledge Integration in Machine Learning Systems * Trust in Artificial Intelligence: Beyond Interpretability * Aeronautic data analysis ESANN 2024 builds upon a successful series of conferences organized each year since 1993. ESANN has become a major scientific event in the machine learning, computational intelligence and artificial neural networks fields over the years. The conference will be organized in hybrid mode. In-person participation is preferred, however online participation is possible for those who prefer not to travel. The physical conference will be organized in Bruges, one of the most beautiful medieval towns in Europe. Designated as the "Venice of the North", the city has preserved all the charms of the medieval heritage. Its centre, which is inscribed on the Unesco World Heritage list, is in itself a real open-air museum. We hope to receive your submission to ESANN 2024 and to see you in Bruges or online! ======================================================== ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning http://www.esann.org/ * For submissions of papers, reviews, registrations: Michel Verleysen UCLouvain - Machine Learning Group 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at uclouvain.be * Conference secretariat d-side conference services 24 av. L. Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at uclouvain.be ======================================================== [cid:image001.jpg at 01DA6569.86F7F720] Michel Verleysen Professor Louvain School of Engineering (EPL) ICTEAM institute Place du Levant, 3 box L5.03.02 B-1348-Louvain-la-Neuve michel.verleysen at uclouvain.be T?l. +32 10 47 25 51 - Fax +32 10 47 25 98 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 24337 bytes Desc: image001.jpg URL: From efipatm at gmail.com Thu Feb 22 03:03:40 2024 From: efipatm at gmail.com (Efi) Date: Thu, 22 Feb 2024 10:03:40 +0200 Subject: Connectionists: CALL for Spring 2024 International AI Doctoral Academy (AIDA) course proposals Message-ID: Dear distinguished AI Professor or Senior Scientist (PhD holder), the International AI Doctoral Academy (AIDA) aims to become a reference initiative for AI education in Europe and implement mechanisms for inter-university sharing of AI educational assets at PhD-level. As part of the program, AIDA offers a series of courses on Artificial Intelligence and related fields given by distinguished lecturers. *w**e invite you to contribute to the definition of the AIDA Spring 2024 semester **course offer **program*. You can participate in the AIDA program by offering new or opening-up existing courses targeted to AIDA students* or other students worldwide according to the participation terms of your own organization. Short/Very Short course formats (5-16 hours) are typically encouraged, while semester courses, or asynchronous mode web courses will be considered as well. Accepted courses will be listed on AIDA website (*https://www.i-aida.org/ *) and will be advertised for free with the support of AIDA dissemination channels. Proposals for courses for the AIDA Spring 2024 semester program should be sent to: Prof. Stefano Berretti *stefano.berretti at unifi.it *, and Ms. Efi Patmanidou *epatman at csd.auth.gr * no later than *February 29th, 2024* and will be evaluated by the AIDA Committee. Proposals for AIDA Courses should include (please, find the AIDA Course Application Form: https://www.i-aida.org/phd-studies/short-courses/): *Course title:* *Lecturer name & affiliation:* *Host Institution:* *Content and organization: *Course details about the content. 1 paragraph *Level: *(Undergraduate/Postgraduate) *Course Duration: *Hours *Course Type: *(Short/Semester/Lecture series/Seasonal School*)* *Participation terms: **Registration fee policy for external (non-AIDA) students*, e.g., free of charge or fee of ??? Euros. *Special terms for AIDA students*, e.g., discount 50% or free for X number of AIDA students. *Add text for a) non-AIDA student registration and AIDA student registration/enrollment, as in (6) below.* *Lectures plan: *Days/time *Proposed schedule: *Start date ? End date *Language: *English/Other *Modality: *Online/in person *Notes: *Are there exams? Details on how to successfully complete the course. 1-2 sentences. *Course Link:* *AIDA students may be PhD students, post-doc researchers, possibly qualified MSc students or professionals of AIDA Members Best regards, Stefano Berretti AIDA course planning _______________________________ Prof. Stefano Berretti, Ph.D. Media Integration and Communication Center (MICC) & Department of Information Engineering (DINFO) University of Florence via S. Marta 3 - 50139, Florence - ITALY email: stefano.berretti 'at' *unifi.it * phone: +39 055 275852 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bernstein.communication at fz-juelich.de Thu Feb 22 10:02:23 2024 From: bernstein.communication at fz-juelich.de (Alexander Lammers) Date: Thu, 22 Feb 2024 16:02:23 +0100 Subject: Connectionists: Call for applications - Brains for Brains Young Researcher Award Message-ID: <48347fa1-8792-49e1-861e-22eaca96069a@fz-juelich.de> International early career scientists who have shown their outstanding potential at a very early career stage are invited to apply for the Brains for Brains Award. You can win a 2000 ? travel grant to join the Bernstein Conference 2024 and visit labs of Bernstein members. ____ Who can apply? International students of all nationalities from relevant disciplines (biology, medicine, physics, mathematics, computer science, engineering, psychology, etc.) who plan to pursue a research career in computational neuroscience and who have at least one peer reviewed publication (as co-author) or peer reviewed conference abstract (as first author). ____ The award is endowed with: * a travel grant of 2.000 ? for a one-week trip to Germany, including free participation at the Bernstein Conference 2024 in Frankfurt * the chance to present your research as a talk and as a poster at the Bernstein Conference 2024 * individually planned visits to labs of the Bernstein Network Computational Neuroscience ____ Application deadline: April 21, 2024, 23:59 CEST ____ Details & application https://bernstein-network.de/en/network/awards-and-initiatives/brains-for-brains-young-researcher-award/ ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum J?lich GmbH 52425 J?lich Sitz der Gesellschaft: J?lich Eingetragen im Handelsregister des Amtsgerichts D?ren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Stefan M?ller Gesch?ftsf?hrung: Prof. Dr. Astrid Lambrecht (Vorsitzende), Karsten Beneke (stellv. Vorsitzender), Dr. Ir. Pieter Jansens ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Menno.VanZaanen at nwu.ac.za Thu Feb 22 09:04:30 2024 From: Menno.VanZaanen at nwu.ac.za (Menno Van Zaanen) Date: Thu, 22 Feb 2024 14:04:30 +0000 Subject: Connectionists: Final CfP 5th workshop on Resources for African Indigenous Language (RAIL) @ LREC-COLING Message-ID: <01ca60354404a442e134a40298af4fee5dc7fe84.camel@nwu.ac.za> EXTENDED DEADLINE (28 February 2024) The fifth workshop on Resources for African Indigenous Language (RAIL) Colocated with LREC-COLING 2024 https://bit.ly/rail2024 New: extended deadline Conference dates: 20-25 May 2024 Workshop date: 25 May 2024 Venue: Lingotto Conference Centre, Torino (Italy) The fifth RAIL workshop website: https://bit.ly/rail2024 LREC-COLING 2024 website: https://lrec-coling-2024.org/ Submission website: https://softconf.com/lrec-coling2024/rail2024/ The fifth Resources for African Indigenous Languages (RAIL) workshop will be co-located with LREC-COLING 2024 in Lingotto Conference Centre, Torino, Italy on 25 May 2024. The RAIL workshop is an interdisciplinary platform for researchers working on resources (data collections, tools, etc.) specifically targeted towards African indigenous languages. 
In particular, it aims to create the conditions for the emergence of a scientific community of practice that focuses on data, as well as computational linguistic tools specifically designed for or applied to indigenous languages found in Africa. Many African languages are under-resourced while only a few of them are somewhat better resourced. These languages often share interesting properties such as writing systems, or tone, making them different from most high-resourced languages. From a computational perspective, these languages lack enough corpora to undertake high level development of Human Language Technologies (HLT) and Natural Language Processing (NLP) tools, which in turn impedes the development of African languages in these areas. During previous workshops, it has become clear that the problems and solutions presented are not only applicable to African languages but are also relevant to many other low-resource languages. Because these languages share similar challenges, this workshop provides researchers with opportunities to work collaboratively on issues of language resource development and learn from each other. The RAIL workshop has several aims. First, the workshop brings together researchers who work on African indigenous languages, forming a community of practice for people working on indigenous languages. Second, the workshop aims to reveal currently unknown or unpublished existing resources (corpora, NLP tools, and applications), resulting in a better overview of the current state-of-the-art, and also allows for discussions on novel, desired resources for future research in this area. Third, it enhances sharing of knowledge on the development of low-resource languages. Finally, it enables discussions on how to improve the quality as well as availability of the resources. The workshop has ?Creating resources for less-resourced languages? as its theme, but submissions on any topic related to properties of African indigenous languages (including non-African languages) may be accepted. Suggested topics include (but are not limited to) the following: Digital representations of linguistic structures Descriptions of corpora or other data sets of African indigenous languages Building resources for (under resourced) African indigenous languages Developing and using African indigenous languages in the digital age Effectiveness of digital technologies for the development of African indigenous languages Revealing unknown or unpublished existing resources for African indigenous languages Developing desired resources for African indigenous languages Improving quality, availability and accessibility of African indigenous language resources Submission requirements: We invite papers on original, unpublished work related to the topics of the workshop. Submissions, presenting completed work, may consist of up to eight (8) pages of content for a long submission and up to four (4) pages of content for a short submission plus additional pages of references. The final camera-ready version of accepted long papers are allowed one additional page of content (up to 9 pages) so that reviewers? feedback can be incorporated. Papers should be formatted according to the LREC-COLING style sheet (https://lrec-coling-2024.org/authors-kit/), which is provided on the LREC-COLING 2024 website (https://lrec-coling-2024.org/). Reviewing is double-blind, so make sure to anonymise your submission (e.g., do not provide author names, affiliations, project names, etc.) 
Limit the number of self-citations (anonymised citations should not be used). The RAIL workshop follows the LREC-COLING submission requirements. Please submit papers in PDF format to the START account (https://softconf.com/lrec-coling2024/rail2024/). Accepted papers will be published in proceedings linked to the LREC-COLING conference.

Important dates:
Submission deadline: 28 February 2024 (AoE)
Date of notification: 15 March 2024
Camera-ready deadline: 29 March 2024
RAIL workshop: 25 May 2024

Organising Committee
Rooweither Mabuya, South African Centre for Digital Language Resources (SADiLaR), South Africa
Muzi Matfunjwa, South African Centre for Digital Language Resources (SADiLaR), South Africa
Mmasibidi Setaka, South African Centre for Digital Language Resources (SADiLaR), South Africa
Menno van Zaanen, South African Centre for Digital Language Resources (SADiLaR), South Africa

--
Prof Menno van Zaanen
menno.vanzaanen at nwu.ac.za
Professor in Digital Humanities
South African Centre for Digital Language Resources
https://www.sadilar.org

From J.Bowers at bristol.ac.uk Thu Feb 22 10:32:10 2024
From: J.Bowers at bristol.ac.uk (Jeffrey Bowers)
Date: Thu, 22 Feb 2024 15:32:10 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To: 
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID: 

Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are tuned in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat); orientation is a continuous dimension where distributed coding may be more suitable.

The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input HOG does not falsify the hypothesis that DOG is locally coded; indeed, it has been hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell - there may be other images that drive the neuron even more; we just don't know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions - when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand.

Jeff
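Since both Bob's description earlier in the thread and the first paragraph above appeal to a population code for orientation, here is a minimal NumPy sketch of the population-vector readout (an activity-weighted vector sum of preferred orientations). It is not the Georgopoulos or Bays model; the 5-degree spacing of preferred orientations echoes the figure used in the discussion, and the tuning width, gain and Poisson noise are invented for illustration. Because orientation is circular with a 180-degree period, angles are doubled before the vector sum and halved afterwards. The decoded value lands between the units' preferred orientations, i.e. the population supports much finer resolution than the 5-degree spacing of the individual detectors.

# Population-vector decoding of orientation: a sketch, not a model from the thread.
# Preferred orientations are spaced every 5 degrees; tuning width, gain and the
# Poisson noise are invented. Orientation repeats every 180 degrees, so angles
# are doubled before the vector sum and halved afterwards.
import numpy as np

rng = np.random.default_rng(1)
prefs = np.arange(0.0, 180.0, 5.0)                 # preferred orientations (deg)

def responses(theta_deg, gain=20.0, kappa=2.0):
    # Mean rate of each unit to a grating at theta_deg, plus Poisson spiking noise.
    d = np.deg2rad(2.0 * (theta_deg - prefs))
    rates = gain * np.exp(kappa * (np.cos(d) - 1.0))
    return rng.poisson(rates)

def decode(r):
    # Population vector: activity-weighted sum of (doubled) preferred angles.
    ang = np.deg2rad(2.0 * prefs)
    x, y = np.sum(r * np.cos(ang)), np.sum(r * np.sin(ang))
    return (np.rad2deg(np.arctan2(y, x)) / 2.0) % 180.0

stimulus = 47.3                                    # not one of the 5-degree preferences
estimates = [decode(responses(stimulus)) for _ in range(200)]
print(f"true {stimulus:.1f} deg, decoded {np.mean(estimates):.1f} "
      f"+/- {np.std(estimates):.1f} deg across trials")

For movement direction, as in the Georgopoulos experiments, the same readout applies without the angle doubling, since direction has a 360-degree period.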
From ASIM.ROY at asu.edu Thu Feb 22 12:30:50 2024
From: ASIM.ROY at asu.edu (Asim Roy)
Date: Thu, 22 Feb 2024 17:30:50 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To: 
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID: 

We had a group discussion on grandmother cells when the Jennifer Aniston paper was published. There were quite a few prominent scholars involved in that discussion, including Christof Koch, Itzhak Fried (both co-authors of that paper), Walter Freeman, Jay McClelland and others. At that time, I heard that a prominent neuroscientist, Prof. Horace Barlow of Cambridge University (Horace Barlow - Wikipedia), the great-grandson of Charles Darwin, was interested in the grandmother cell theory. So, I wrote to him about joining the discussion group. Here's his response:

======================================================================
Dear Asim
Yes, I would like to join your group, though I would (again) like to make it clear that the grandmother cell theory is not mine, though I still hold that something conforming amazingly well to what was conceptualised by Jerry Lettvin 50 years ago, really does exist! I am still actively interested in the topic, and have just sent off the final proofs of an essay bearing on it. I shall send you a copy of the final proof as soon as I am allowed to circulate it, and of course the more widely discussed it is the better pleased I shall be, though I fear that what I have written will not be universally accepted, at least at first!
Best regards
Horace
======================================================================

Note his fear: what I have written will not be universally accepted, at least at first!

Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation Bios: Professor Asim Roy
Asim Roy | iSearch (asu.edu)
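To make the localist side of the thread equally concrete, here is a small sketch in the spirit of Jeff's DOG/HOG example (this is not the Interactive Activation model itself; the four-word lexicon and the letter-match scoring are invented for illustration). Each word unit stands for exactly one word, yet the DOG unit is partly activated by the input HOG because the two words share letters in the same positions; that partial activation does not change which unit any input drives most strongly, which is the sense in which partial activation is not evidence against a localist, grandmother-cell-style code.

# Toy localist lexicon: one unit per word, driven by position-specific letter overlap.
# Words and scoring are invented for illustration; this is not the IA model.
WORDS = ["DOG", "HOG", "CAT", "FOG"]

def activation(word, inp):
    # A localist word unit is activated in proportion to the number of letters
    # it shares with the input in the same positions, scaled to [0, 1].
    return sum(w == i for w, i in zip(word, inp)) / len(word)

for inp in ["DOG", "HOG", "FOG", "CAT"]:
    acts = {w: round(activation(w, inp), 2) for w in WORDS}
    winner = max(acts, key=acts.get)
    print(inp, acts, "-> most active unit:", winner)

Running it prints an activation of 1.0 for the DOG unit given DOG and about 0.67 given HOG, with the HOG unit winning when HOG is presented.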
Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From ASIM.ROY at asu.edu Thu Feb 22 13:14:54 2024 From: ASIM.ROY at asu.edu (Asim Roy) Date: Thu, 22 Feb 2024 18:14:54 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Both Reddy and Thorpe were associated with the Christof Koch-Itzhak Fried team at Caltech-UCLA. The following paper summarizes well the findings of that group. Reddy and Thorpe (2014) also define concept cells as "invariant, multimodal." Thus, the fundamental characterization of these two concepts is identical. "Multimodal invariance" implies associative learning across modalities. The claim by Barwich tha ... Reddy, L., and Thorpe, S. J. (2014). Concept cells through associative learning of high-level representations. Neuron 84, 248-251. doi: 10.1016/j.neuron.2014.10.004 Here are some quotes on their findings from my commentary: Frontiers | Commentary: The Value of Failure in Science: The Story of Grandmother Cells in Neuroscience (frontiersin.org) "If a neuron's activity has 'meaning and interpretation,' it cannot be part of sparse population coding: Sparse coding is a form of population coding that uses a limited set of neurons. By definition, in population coding, one cannot assign 'meaning and interpretation' to the activity of a single neuron. Reddy and Thorpe (2014) state that concept cells have 'meaning of a given stimulus in a manner that is invariant to different representations of that stimulus.' Since one can assign 'meaning' to the activation of a concept cell, then that cell, theoretically, could not be part of any population coding scheme. It would be a contradiction." "Reddy and Thorpe (2014) categorically state that 'abstract, invariant representations' is 'a hallmark of MTL concept cells.'" "Grandmother and concept cells are the same; any argument against one also applies to the other: 'Selectivity or specificity,' 'complex concept,' 'meaning,' 'multimodal invariance' and 'abstractness' (Reddy and Thorpe, 2014) are all integral properties of both concept cells and grandmother cells." So read carefully what those neuroscientists write. You have plenty of evidence for grandmother cells, single cell activations having "meaning," single cells representing abstract concepts (that's why they call them concept cells), and single cells representing symbols. Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists On Behalf Of Jeffrey Bowers Sent: Thursday, February 22, 2024 8:32 AM To: KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation.
For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. > Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people?s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person?s judgment is indicative of the state of a high level consumer of orientation information. 
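A minimal numerical sketch of the vector-sum (population vector) readout described above, in Python/NumPy. It is illustrative only, not taken from Georgopoulos (1982), Bays (2016), or any model in this thread; the half-rectified cosine tuning, the 5-degree spacing of preferred orientations, and the 37.3-degree stimulus are assumptions chosen for the example. Because orientation repeats every 180 degrees, angles are doubled before summing and halved after decoding.

import numpy as np

# Hypothetical bank of orientation-tuned units, preferred orientations every 5 degrees.
prefs = np.deg2rad(np.arange(0.0, 180.0, 5.0))
stim = np.deg2rad(37.3)  # stimulus orientation falling between preferred values

# Idealized half-wave-rectified cosine tuning on the doubled angle.
rates = np.maximum(0.0, np.cos(2.0 * (prefs - stim)))

# Population-vector readout: sum unit vectors at the doubled preferred
# orientations, weighted by each unit's activity, then halve the angle.
x = np.sum(rates * np.cos(2.0 * prefs))
y = np.sum(rates * np.sin(2.0 * prefs))
decoded_deg = np.rad2deg(0.5 * np.arctan2(y, x))

print(round(decoded_deg, 2))  # ~37.3, much finer than the 5-degree spacing of any single unit

The decoded value depends on the weighted sum over the whole population, which is the sense in which no single unit needs to be read off as "the" orientation signal.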
So again, I?d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn?t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob [Image result for university of durham logo] [signature_2025328812] [signature_824875734] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers > Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are ?localist? models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell of a vertical line is partly activated by a line 5 degrees off vertical ? that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote sometime back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological review, 116(1), 220. From: Connectionists > on behalf of KENTRIDGE, ROBERT W. > Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree ? empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 ?Jennifer Aniston cell? finding (Nature, 435. 1102 - 1107 ). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. 
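Returning for a moment to the DOG/HOG example above: a minimal sketch, in plain Python, of how partial activation by similar inputs falls naturally out of a localist scheme in which one unit stands for one word. The three-word lexicon and the letter-overlap activation rule are illustrative assumptions, not the actual Interactive Activation model.

# Hand-wired localist lexicon: one unit per word (illustrative only).
lexicon = ["DOG", "HOG", "CAT"]

def activation(word_unit, input_word):
    # Bottom-up support: fraction of letter positions shared with the input.
    shared = sum(a == b for a, b in zip(word_unit, input_word))
    return shared / len(word_unit)

for unit in lexicon:
    print(unit, round(activation(unit, "HOG"), 2))
# DOG 0.67  (partly activated by HOG via two shared letters,
# HOG 1.0    yet the DOG unit still *represents* only the word DOG)
# CAT 0.0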
The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Hale Berry as Catwoman for the Hale Berry cell), but also names presented as text (e.g. ?HALE BERRY?). Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ?Friends? along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob [Image result for university of durham logo] [signature_2975123418] [signature_2364801924] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists > on behalf of Gary Marcus > Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. ? On Feb 20, 2024, at 08:41, Laurent Mertens > wrote: ? Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists > on behalf of Weng, Juyang > Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib >; connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. 
Best regards, -John ________________________________ From: Michael Arbib > Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang >; connectionists at mailman.srv.cs.cmu.edu > Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists > On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From gary.marcus at nyu.edu Thu Feb 22 12:44:38 2024 From: gary.marcus at nyu.edu (Gary Marcus) Date: Thu, 22 Feb 2024 09:44:38 -0800 Subject: Connectionists: neurons, grandmothers, symbols and pi. Message-ID: <4BA2B159-C478-40F2-BB06-C9535E08AD47@nyu.edu> An HTML attachment was scrubbed... URL: From frothga at sandia.gov Thu Feb 22 15:58:13 2024 From: frothga at sandia.gov (Rothganger, Fred) Date: Thu, 22 Feb 2024 20:58:13 +0000 Subject: Connectionists: Grandmother neurons In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Electrical recordings are an extremely sparse sample of the vast number of neurons in a human brain (or even rodent brain). Perhaps the following paper will help illustrate the level of caution we should exercise in interpreting the roles of particular recorded neurons: https://doi.org/10.1371/journal.pcbi.1005268 Just to offer my humble opinion on other topics in this thread: * It is more likely that any given neuron is functioning as part of a larger ensemble rather than acting alone to represent some concept. That ensemble is more likely to be a dynamical system, and its "representations" are regions or attractors in the state space of the overall system. * Another way to frame "meaning" is the actions a physical system, such as an animal, takes in the world. Neural states have meaning because they have a direct causal relationship with our body, which in turn has a causal relationship with the environment around us. Specific example: my act of emitting a certain pattern of sounds may change your behavior, which may result in me receiving some food. (Ordering dinner at a restaurant.) -------------- next part -------------- An HTML attachment was scrubbed... 
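To make the ensemble/attractor point just above concrete: a minimal sketch of a Hopfield-style network in Python/NumPy, in which a stored pattern, distributed over many units, is an attractor of the dynamics, so a corrupted cue relaxes back to the stored state. The network size, number of patterns, and update scheme are arbitrary illustrative choices, not a claim about any particular brain circuit.

import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 100, 3

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

# Cue the network with a corrupted copy of pattern 0 (20 of 100 units flipped).
state = patterns[0].copy()
state[rng.choice(n, size=20, replace=False)] *= -1

# Repeated updates pull the state into the nearest stored attractor.
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

print(np.mean(state == patterns[0]))  # typically 1.0: the ensemble, not any single unit, carries the memory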
URL: From weng at msu.edu Thu Feb 22 17:56:26 2024 From: weng at msu.edu (Weng, Juyang) Date: Thu, 22 Feb 2024 22:56:26 +0000 Subject: Connectionists: neurons, grandmothers, symbols and pi. In-Reply-To: <4BA2B159-C478-40F2-BB06-C9535E08AD47@nyu.edu> References: <4BA2B159-C478-40F2-BB06-C9535E08AD47@nyu.edu> Message-ID: Somebody wrote, "Deeply disappointing that someone would try to inject actual empirical evidence into this discussion." I agree. If we are unclear about the mathematical definition, we waste time here. When I said "There is no neuron (or a set of neurons) that has a one-to-one correspondence with a symbol", I meant that without "one-to-one correspondence" definition, the neuron is NOT a symbol neuron you have in mind. From Wikipedia: A bijection, bijective function, or one-to-one correspondence between two mathematical sets is a function such that each element of the second set (the codomain) is mapped to from exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set. In other words, one-to-one correspondence is "one-to-one" and "onto". If we are not careful, we slip into something like "grandmother cells" without a definition. Because there are no symbol neurons, there must be no grandmother cells in the brain. Why? All neurons in the brain detect a spatiotemporal feature of the sensory space and the motor space. If you have read my book, NAI, you understand what this means and why it must be the case because the brain must not have a government inside the skull. Best regards, -John ________________________________ From: Connectionists on behalf of Gary Marcus Sent: Thursday, February 22, 2024 12:44 PM To: Jeffrey Bowers Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: neurons, grandmothers, symbols and pi. Generally agree with Jeff, but would add this to the general discussion: We need to be careful about what the question we are asking is here. Mathematicians for example represent and manipulate ?, both symbolically and (with some precision) numerically. We don?t know how the brain instantiates either of those representations. But it?s absurd to presume that we since don?t understand how either of those representations are instantiated neurally, that there is no symbolic or numeric representation thereof. Physically, the symbolic representation of pi presumably occupies the labors of multiple neurons, but in some contexts those neurons may work together as a single logical whole, and it is possible that some of the neurons involved reach threshold only when pi is being processed. Gary On Feb 22, 2024, at 07:32, Jeffrey Bowers wrote: ? Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. 
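As a purely arithmetical aside on the point about pi above (it says nothing about how brains represent anything): the difference between a numeric and a symbolic representation is easy to see in code. A sketch using Python's math module and SymPy.

import math
import sympy as sp

# Numeric representation: a fixed-precision approximation.
print(math.pi)            # 3.141592653589793
print(math.sin(math.pi))  # about 1.2e-16, not exactly zero

# Symbolic representation: an exact object that can be manipulated as such.
print(sp.sin(sp.pi))      # 0, exactly
print(sp.pi > 3)          # True, decided exactly
print(sp.pi.evalf(30))    # 3.14159265358979323846264338328, numeric on demand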
The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people?s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person?s judgment is indicative of the state of a high level consumer of orientation information. So again, I?d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn?t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow. 
Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are ?localist? models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell of a vertical line is partly activated by a line 5 degrees off vertical ? that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote sometime back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree ? empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 ?Jennifer Aniston cell? finding (Nature, 435. 1102 - 1107 ). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Hale Berry as Catwoman for the Hale Berry cell), but also names presented as text (e.g. ?HALE BERRY?). Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ?Friends? along with Jennifer Aniston. 
So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. ? On Feb 20, 2024, at 08:41, Laurent Mertens wrote: ? Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ASIM.ROY at asu.edu Fri Feb 23 00:02:02 2024 From: ASIM.ROY at asu.edu (Asim Roy) Date: Fri, 23 Feb 2024 05:02:02 +0000 Subject: Connectionists: Grandmother neurons In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Fred, I have heard your arguments before. That's a fairly standard argument. You are ignoring neuroscience evidence. Remember that there is a long history of single cell recordings. And some of those studies led to Nobel prizes. Here's a list of some from Wikipedia: Single-unit recording - Wikipedia * 1928: One of the earliest accounts of being able to record from the nervous system was by Edgar Adrian in his 1928 publication "The Basis of Sensation". In this, he describes his recordings of electrical discharges in single nerve fibers using a Lippmann electrometer. He won the Nobel Prize in 1932 for his work revealing the function of neurons.[11] * 1957: John Eccles used intracellular single-unit recording to study synaptic mechanisms in motoneurons (for which he won the Nobel Prize in 1963). * 1959: Studies by David H. Hubel and Torsten Wiesel. They used single neuron recordings to map the visual cortex in unanesthesized, unrestrained cats using tungsten electrodes. This work won them the Nobel Prize in 1981 for information processing in the visual system. * Moser and O'Keefe Nobel prize (grid and place cells): The 2014 Nobel Prize in Physiology or Medicine - Press release So you are questioning some ground-breaking work in this area. You might have an argument with the Nobel committee also. But take a hard look at the single-cell experiments first before you pick a fight with them. By the way, the concept cell findings were verified with repeated experiments, something you expect in science. Take a look at the Reddy and Thorpe paper if you have not done so yet. Plus, bear in mind the Nobel prizes are not given on the basis of some fluke experiments. So, you are, in effect, arguing against the findings of Hubel and Wiesel and others. Reddy, L., and Thorpe, S. J. (2014). Concept cells through associative learning of high-level representations. Neuron 84, 248-251. doi: 10.1016/j.neuron.2014.10.004 All the best, Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists On Behalf Of Rothganger, Fred Sent: Thursday, February 22, 2024 1:58 PM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Grandmother neurons Electrical recordings are an extremely sparse sample of the vast number of neurons in a human brain (or even rodent brain). Perhaps the following paper will help illustrate the level of caution we should exercise in interpreting the roles of particular recorded neurons: https://doi.org/10.1371/journal.pcbi.1005268 Just to offer my humble opinion on other topics in this thread: * It is more likely that any given neuron is functioning as part of a larger ensemble rather than acting alone to represent some concept. That ensemble is more likely to be a dynamical system, and its "representations" are regions or attractors in the state space of the overall system. * Another way to frame "meaning" is the actions a physical system, such as an animal, takes in the world. Neural states have meaning because they have a direct causal relationship with our body, which in turn has a causal relationship with the environment around us. 
Specific example: my act of emitting a certain pattern of sounds may change your behavior, which may result in me receiving some food. (Ordering dinner at a restaurant.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amatra at essex.ac.uk Fri Feb 23 03:49:58 2024 From: amatra at essex.ac.uk (Matran-Fernandez, Ana) Date: Fri, 23 Feb 2024 08:49:58 +0000 Subject: Connectionists: Essex BCI Webinar: Prof Reinhold Scherer - Non-invasive Neuroadaptive Neural Interfaces: Learning to Learn Message-ID: Dear colleagues, As the organiser of the Essex Brain-Computer Interfaces and Neural Engineering (BCI-NE) webinar series, I am emailing to advertise our next online seminar: Prof Reinhold Scherer, whose talk is entitled "Non-invasive Neuroadaptive Neural Interfaces: Learning to Learn" Abstract: BCIs enable users to interact with external devices by converting signals measured from the brain into messages for the device. Patterns elicited unintentionally can be used to monitor neural processes and assess psychological states to trigger an adaptive response of the device (neuroadaptive). The main barriers to the use of spontaneous electroencephalogram (EEG)-based BCI technologies are the wide variation in performance when using BCIs and the inability of BCIs to provide meaningful control to a large proportion of users. In this talk, I will argue that EEG-based online co-adaptive BCIs, which automatically adjust or recalculate the model parameters of the algorithms that translate the patterns, help to overcome the above problems. I will also present ideas on how BCI technology can be used to detect system errors in virtual reality and how it can be used to help people with Math Anxiety. Biography: Reinhold Scherer is Professor of BCI and Neural Engineering, Co-Director of the Essex BCI-NE Laboratory and current Head of the School of Computer Science and Electronic Engineering, University of Essex. His primary research interests are in the areas of online brain-machine co-adaptation, statistical and adaptive signal processing, mobile brain and body imaging, and rehabilitation. Active research topics focus on gaining deeper insights into brain dynamics and mechanisms underlying motor and cognitive learning, with the goal of improving the interpretability of brain rhythms and thereby optimizing the performance of spontaneous EEG-based BCI interaction and rehabilitation protocols. He is an associate editor of the journals Scientific Reports, Frontiers in Neuroprosthetics, and Brain-Computer Interfaces, and a board member of the International BCI Society. The seminar will be delivered over Zoom on 28th February, 2pm UK time. In order to receive a link to join, please RSVP at the following form: https://forms.gle/v9ULLU68VrxLpPpe7A link to join the seminar will be sent to all who sign up shortly before the seminar. The talk will be recorded and made available in the lab's Youtube channel shortly after. Best wishes, Ana ------------------------ Dr Ana Matran-Fernandez PhD Lecturer in Neural Engineering and Artificial Intelligence Department of Computer Science and Electronic Engineering (CSEE) University of Essex Office: 5B.539 E amatra at essex.ac.uk Email notices: 1) I sometimes write emails outside (your) working hours. Please don't take this as an indication that I require an immediate response - we simply may be working at different times. 2) Because of the daily high load of emails, I may take some time to reply to you. 
If it's important, I may take even more time, because I want to provide an adequate response. If it's urgent, I'll try to reply quickly and maybe just with a very short email. No disrespect, just trying to be efficient - I'd understand if you did the same. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Scherer-banner.png Type: image/png Size: 823091 bytes Desc: Scherer-banner.png URL: From g.goodhill at wustl.edu Fri Feb 23 10:44:07 2024 From: g.goodhill at wustl.edu (Goodhill, Geoffrey) Date: Fri, 23 Feb 2024 15:44:07 +0000 Subject: Connectionists: WashU CTCN Postdoc Fellows Message-ID: The Center for Theoretical and Computational Neuroscience (ctcn.wustl.edu) at Washington University in St Louis invites applications from outstanding Postdoctoral Fellows to work at the interface between theoretical and experimental labs at WashU. Deadline for applications is April 30th. The CTCN is a joint initiative between the Schools of Medicine, Engineering, and Arts and Sciences, and provides a hub for neuroscientists to collaborate with mathematicians, physicists and engineers to find creative solutions to some of the most difficult problems currently facing neuroscience and artificial intelligence. Each CTCN Postdoctoral Fellow is based in at least two labs, but also has the opportunity to seek out new collaborations which help build new connections within the WashU community. We are looking for people with drive, independence and outstanding prior achievement, who are committed to leveraging interdisciplinary collaboration to drive forward the field of theoretical and computational neuroscience. Washington University in St Louis is ranked in the top 10 worldwide for Neuroscience and Behavior. Salary for CTCN Fellows is significantly above standard NIH postdoc rates, and funds for conference travel are included. In addition, WashU offers excellent benefits and comprehensive access to career development, professional and personal support. Applications from women and under-represented minorities are particularly welcome. The collaborative environment at WashU is a reflection of the critical importance placed on diversity, equity and inclusion and creating a welcoming place where postdocs can thrive. The St Louis metropolitan area has a population of almost 3M and is rich in culture, green spaces and thriving music and arts scenes, with a highly accessible cost of living. For more details on this prestigious Fellowship opportunity, including how to apply, please see https://ctcn.wustl.edu/postdoc-fellows Professor Geoffrey J Goodhill Departments of Developmental Biology and Neuroscience Affiliate appointments: Physics, Biomedical Engineering, Computer Science and Engineering, and Electrical and Systems Engineering Washington University School of Medicine 660 S. Euclid Avenue St. Louis, MO 63110 g.goodhill at wustl.edu https://neuroscience.wustl.edu/people/geoffrey-goodhill-phd -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bu.edu Fri Feb 23 12:07:29 2024 From: steve at bu.edu (Grossberg, Stephen) Date: Fri, 23 Feb 2024 17:07:29 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. 
It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic. Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf Grossberg, S., Srinivasan, K., and Yazdabakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve From: Connectionists on behalf of Jeffrey Bowers Date: Thursday, February 22, 2024 at 11:11?AM To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). 
Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people?s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person?s judgment is indicative of the state of a high level consumer of orientation information. So again, I?d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn?t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. 
cheers, Bob [Image result for university of durham logo] [signature_2025328812] [signature_824875734] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are ?localist? models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell of a vertical line is partly activated by a line 5 degrees off vertical ? that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote sometime back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree ? empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 ?Jennifer Aniston cell? finding (Nature, 435. 1102 - 1107 ). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ?Grandmother cells? as defined by Jerry Lettvin ? that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Hale Berry as Catwoman for the Hale Berry cell), but also names presented as text (e.g. ?HALE BERRY?). Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. 
From: Connectionists on behalf of KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 11:48
To: Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I agree, empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "grandmother cells" as defined by Jerry Lettvin, that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, and in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, so it is unsafe to conclude that the cells they found respond *only* to those specific individuals. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols.

cheers, Bob

From: Connectionists on behalf of Gary Marcus
Date: Wednesday, 21 February 2024 at 05:49
To: Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

[EXTERNAL EMAIL]

Deeply disappointing that someone would try to inject actual empirical evidence into this discussion.

On Feb 20, 2024, at 08:41, Laurent Mertens wrote:

Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013)

KR, Laurent

________________________________
From: Connectionists on behalf of Weng, Juyang
Sent: Monday, February 19, 2024 11:11 PM
To: Michael Arbib; connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Michael,

You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: my brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to a symbol. In this sense, the brain does not have any symbols inside the skull. This is my educated hypothesis. The DN-3 brain does not need any symbols inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, as it may make me many enemies.

Best regards,
-John

________________________________
From: Michael Arbib
Sent: Monday, February 19, 2024 1:28 PM
To: Weng, Juyang; connectionists at mailman.srv.cs.cmu.edu
Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI

So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?

From: Connectionists On Behalf Of Weng, Juyang
Sent: Monday, February 19, 2024 8:07 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Early history of symbolic and neural network approaches to AI

I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.
From javier.rasero at virginia.edu Fri Feb 23 14:31:51 2024
From: javier.rasero at virginia.edu (Rasero, Javier (mnr3kn))
Date: Fri, 23 Feb 2024 19:31:51 +0000
Subject: Connectionists: Postdoc in Computational Neuroimaging at the UVA School of Data Science

Dear colleagues,

I am delighted to announce that the School of Data Science at the University of Virginia (USA) is seeking a postdoctoral researcher. This position is dedicated to advancing our understanding of Autism Spectrum Disorder through the evaluation of new biomarkers in neuroimaging data. The School of Data Science, founded in 2019 and the first of its kind in the nation, offers a unique multidisciplinary environment where candidates can expand their professional network through collaborations with various departments and institutes across UVA, such as Psychology and the Brain Institute.

Candidates must possess a terminal degree in computer science, neuroscience, psychology, engineering, physics, mathematics, or a related discipline by the start date of the position. Preference will be given to applicants who possess the following:

* Established experience in analyzing neuroimaging data (fMRI and/or diffusion MRI) and/or machine/deep learning.
* Experience with statistical/scientific programming languages (expertise in Python, Matlab, and/or R preferred).
* Topical expertise in statistical and computational approaches (e.g., regression, clustering, signal processing).
* Demonstrated ability to complete projects, as evidenced by first-author publications.

This is a one-year appointment, renewable for an additional year contingent upon funding and satisfactory performance.

Link to the job ad and to apply: https://uva.wd1.myworkdayjobs.com/UVAJobs/job/Charlottesville-VA/Postdoctoral-Research-Associate-in-Computational-Neuroimaging_R0057894

From bnuzhanglei2008 at gmail.com Fri Feb 23 13:29:28 2024
From: bnuzhanglei2008 at gmail.com (Lei Zhang)
Date: Fri, 23 Feb 2024 18:29:28 +0000
Subject: Connectionists: Postdoctoral position in Computational Cognitive Science London (UK)

(apologies for cross-posting)

Applications are invited for a Wellcome Trust funded Research Fellow in the Department of Psychology, Royal Holloway, University of London, UK. Please follow this link to apply by 29-Feb-2024. The position is formally within the SoCCR Lab (Social Computation and Cognitive Representation Lab) directed by Dr.
Joseph M Barnby, which develops and tests computational and formal theories of social interaction in health and disorder. The fellow will also work closely with the three Co-Is: Drs Alex Pike and Catia Oliveira (University of York) and Dr Lei Zhang (University of Birmingham). The post holder will be a member of this lab, with responsibility for providing computational support for a Wellcome Trust-funded project, "Hypatia: Health Modelling Made Simple". The position offers an exciting opportunity to develop and use state-of-the-art methods to build a free, browser-based solution for simulating and fitting common computational models, with a working beta version as the milestone at 12 months. The role will primarily involve software engineering/programming of the Hypatia site, maintaining a core knowledge of computational modelling as it relates to health care, neuroscience, and psychology, and liaising with project consultants to develop the UX/UI, backend support, and aesthetics of the platform. There is significant opportunity for skill development, including gaining experience of industry liaison, translational neuroscience, and computational modelling.

The project team will consist of a dedicated postdoctoral researcher and a research assistant. The team will also present to and be guided by an advisory board, which includes Prof. Peter Dayan (Tuebingen), Dr Emma Černis (University of Birmingham), Dr Rick Adams (UCL), Prof. Eric-Jan Wagenmakers (Amsterdam; JASP team), and Dr Jenny Murphy (Royal Holloway).

To apply: https://jobs.royalholloway.ac.uk/Vacancy.aspx?ref=0224-053

For an informal discussion about the post, please contact the PI, Dr. Joseph M Barnby, at joseph.barnby at rhul.ac.uk. For queries on the application process, the Human Resources Department can be contacted by email at recruitment at rhul.ac.uk.

---
Dr. Lei Zhang
w: lei-zhang.net
t: @lei_zhang_lz

From marcosmrai at gmail.com Sat Feb 24 09:21:45 2024
From: marcosmrai at gmail.com (marcosmrai)
Date: Sat, 24 Feb 2024 11:21:45 -0300
Subject: Connectionists: Call for post-doc applications in LLMs applied to code at University of Campinas

Dear colleagues,

We are looking for a motivated postdoc with an interest in LLMs applied to code. Strong ML/AI/Statistics skills are an advantage. More details are attached. We would appreciate it if you could share the job ad with anyone you think might be interested.

How to apply: Please send your curriculum vitae and a cover letter of at most two pages (in English) via the following form: https://forms.gle/6qxsTp13wufBaHAV7. Applicants should also arrange for at least two reference letters to be sent directly by the referees to mraimundo at ic.unicamp.br before the deadline.

Deadline: February 29th, 2024 (late submissions are allowed)

With many thanks and kind regards,
--
Marcos M. Raimundo

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2024-02_PostDoc_Call_IC-UNICAMP_Inmetrics.pdf
Type: application/pdf
Size: 60258 bytes
Desc: not available
URL:

From isclab.uu at gmail.com Sat Feb 24 12:03:13 2024
From: isclab.uu at gmail.com (ISCL UU)
Date: Sat, 24 Feb 2024 18:03:13 +0100
Subject: Connectionists: PhD position in Machine Learning: Vision Language Modeling
Message-ID: <1CA25905-264C-4428-B75B-C8C29E627FAB@gmail.com>

Are you interested in working with language models and computer vision, with the support of competent and friendly colleagues in an international environment? A fully funded PhD position is available at Uppsala University, contributing to research at the forefront of machine learning, particularly in the domain of vision-language models.

Project description

Vision-language models integrate computer vision and natural language processing techniques to process and generate information that combines visual and textual modalities, enabling a deeper understanding of the content of images and videos. While vision-language models exhibit promising potential, they are currently in the early stages of development. Effective integration of the two modalities (vision and language), and alignment of visual and text embeddings into a cohesive embedding space, continue to pose significant challenges. In this project, the successful candidate will conduct basic research and methods development towards designing efficient vision-language models, exploring their applications in computer vision and analysing the resulting societal impact. Through our research, we aim to contribute significantly towards learning effective representations combining text, image and video data, potentially benefiting fields including surveillance and healthcare.

The position is part of the Beijer Laboratory for Artificial Intelligence Research, funded by the Kjell and Märta Beijer Foundation. The Beijer Laboratory for Artificial Intelligence Research was established in 2023 at Uppsala University with an ambition to grow activities within the subject of AI, focusing on applications in the life sciences and questions related to societal development.

Project description and application link: here. Please apply by 28th March 2024. Contact Asst. Prof. Ekta Vats with any questions.
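One common way to frame the embedding-alignment challenge mentioned in the project description above is a CLIP-style contrastive objective. The sketch below is purely illustrative and is not part of the advertised project: random vectors stand in for the outputs of an image encoder and a text encoder, and the temperature value is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over a batch of paired
    image/text embeddings living in a shared space."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature            # scaled cosine similarities
    labels = np.arange(len(logits))               # i-th image matches i-th text
    log_softmax_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_softmax_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    loss_img = -log_softmax_rows[labels, labels].mean()   # image -> text direction
    loss_txt = -log_softmax_cols[labels, labels].mean()   # text -> image direction
    return (loss_img + loss_txt) / 2.0

# Stand-ins for encoder outputs: matched pairs are similar but noisy.
batch, dim = 8, 32
image_embeddings = rng.normal(size=(batch, dim))
text_embeddings = image_embeddings + 0.1 * rng.normal(size=(batch, dim))
print(round(clip_style_loss(image_embeddings, text_embeddings), 3))
```

Minimising a loss of this kind pulls matching image and text embeddings together and pushes mismatched pairs apart, which is one concrete sense in which the two modalities can be "aligned into a cohesive embedding space".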
From isclab.uu at gmail.com Sat Feb 24 11:39:01 2024
From: isclab.uu at gmail.com (ISCL UU)
Date: Sat, 24 Feb 2024 17:39:01 +0100
Subject: Connectionists: Two open PhD positions: theoretical machine learning and data-driven life science

We are happy to announce two fully funded PhD positions at the Scientific Machine Learning group, Science for Life Laboratory (SciLifeLab), Division of Scientific Computing, Uppsala University. We are looking for highly motivated students with a strong interest in the theoretical foundations of machine learning and in data-driven solutions to challenging problems in the life sciences. We appreciate enthusiasm for theoretical research in machine learning and interest in interdisciplinary research in the life sciences. We are seeking candidates with a strong background in applied mathematics, statistics, computer science, or related fields, proficient in mathematical analysis, probability theory, and optimization. Solid programming skills (e.g., Python, Julia, or similar) and experience with machine learning frameworks (e.g., TensorFlow, PyTorch) are highly desirable.

Topics and application links

- Foundations of learning from noisy scientific data, with applications in inverse problems. Topics in the scope of this project include robust learning, variational inference, large-scale optimization, fair machine learning and Bayesian inference. There is also room to discuss and explore other challenging topics in the area of theoretical machine learning. We will build robust models to learn inverse mappings. Such problems often manifest as parameter inference or estimation problems (e.g., fitting mechanistic models such as gene regulatory networks to observed data) in various scientific disciplines. Apply here by April 2, 2024.
- Adaptive Deep Learning for Accelerating Drug Repurposing. Together with the Pharmaceutical Bioinformatics research group, we will build active learning / statistical sampling methods for optimal drug-repurposing experimental design. We will develop specialised deep architectures for learning drug combination synergies to drive the sampling process. The project offers a unique opportunity to work on challenging theoretical aspects of machine learning, in tight integration with a state-of-the-art automated robotised lab to validate the algorithms. The diseases under study include cancer (soft tissue sarcomas) and Covid. Apply here by March 22, 2024 (Project 11).

Our values

We are an interdisciplinary group offering a vibrant, rich research environment with state-of-the-art resources, including computational infrastructure and collaboration opportunities within Sweden and abroad. You will enjoy a supportive and inclusive environment that values diversity, creativity, and intellectual curiosity. We offer a healthy work-life balance and excellent career development opportunities at Uppsala University and SciLifeLab.

Please contact Asst. Prof. Prashant Singh with any questions.

From J.Bowers at bristol.ac.uk Sat Feb 24 11:00:28 2024
From: J.Bowers at bristol.ac.uk (Jeffrey Bowers)
Date: Sat, 24 Feb 2024 16:00:28 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>

I think this is where terminology is confusing things. I agree that ART (and every other neural architecture) is "far from being a 'grandmother cell'". The question is whether a neural architecture includes grandmother cells, that is, a unit high in a hierarchy of units that is used to classify objects. In distributed systems there is no such unit at any level of the hierarchy; it is patterns of activation all the way up. By contrast, on grandmother cell theories, the architecture does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use "one-hot encoding" of categories (which they almost always do). So, just as grandmother cells can easily be falsified if you define a grandmother cell as a unit that only responds to one category of input, you can falsify grandmother cells by claiming that they require only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, which mapped onto more complex cells, which eventually mapped onto single neurons that code for one category.
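To illustrate the one-hot point in the preceding paragraph (a generic sketch, not any specific model from this thread): in a standard classifier head trained with one-hot targets, each output unit stands for exactly one category, even though its input is a distributed feature pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_categories = 16, 4

# A linear classifier head: row k of W acts, in effect, as a localist
# "grandmother unit" for category k, read out over distributed features.
W = rng.normal(size=(n_categories, n_features))
b = np.zeros(n_categories)

def predict(features):
    logits = W @ features + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs.argmax(), probs

# One-hot target for category 2: only the unit standing for that
# category is "on" in the training signal.
target = np.eye(n_categories)[2]
print(target)                      # [0. 0. 1. 0.]

features = rng.normal(size=n_features)
winner, probs = predict(features)
print(winner, probs.round(3))      # index of the single most active category unit
```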
I'm a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, which I'm not so familiar with, but which I gather does not include a winner-take-all dynamic).

Jeff

From: Grossberg, Stephen
Date: Saturday, 24 February 2024 at 16:46
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff,

Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates, via excitatory feedback signals, when it is chosen.

In the case of invariant object categories, a widespread neural architecture is needed to learn them, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas.

Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where's Waldo Problem. I have attached a couple of figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a "grandmother cell"!

My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

Best again,
Steve
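A toy numerical illustration of the contrast discussed by Jeff and Steve above (this is not Grossberg's ART equations or the ARTSCAN architecture, just a generic winner-take-all readout with invented prototypes): a distributed feature pattern drives several compact category units, and a winner-take-all choice leaves a single, localist category unit active while the evidence supporting it remains spread across the feature layer.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_categories = 20, 3

# Each compact category unit is tuned to a distributed pattern of
# feature activity (prototypes invented purely for illustration).
prototypes = rng.random((n_categories, n_features))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# A distributed input pattern: a noisy version of prototype 1.
features = prototypes[1] + 0.1 * rng.normal(size=n_features)

# Bottom-up match of the distributed pattern against each category unit.
match = prototypes @ features

# Winner-take-all choice: only one compact category unit stays active,
# even though the evidence for it is distributed over many feature cells.
winner = int(np.argmax(match))
category_activity = np.zeros(n_categories)
category_activity[winner] = match[winner]

print("match:", match.round(2))
print("active category unit:", winner, "activity:", category_activity.round(2))
```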
From J.Bowers at bristol.ac.uk Sat Feb 24 04:38:13 2024
From: J.Bowers at bristol.ac.uk (Jeffrey Bowers)
Date: Sat, 24 Feb 2024 09:38:13 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>

Dear Steve,

I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from this that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance).

Jeff

From: Grossberg, Stephen
Date: Friday, 23 February 2024 at 18:12
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff et al.,

The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, it is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object across multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic.

Fazl, A., Grossberg, S., and Mingolla, E. (2009).
View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf

Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf

Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full

More articles on related topics can be found on my web page, sites.bu.edu/steveg, including how humans can search for an object at an expected position in space even though its invariant object category representation cannot be used to do so.

Best,
Steve

From: Connectionists on behalf of Jeffrey Bowers
Date: Thursday, February 22, 2024 at 11:11 AM
To: KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are tuned in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has been hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell:
there may be other images that drive the neuron even more; we just don't know, given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions, namely when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand.

Jeff
From robert.kentridge at durham.ac.uk Sat Feb 24 11:38:09 2024
From: robert.kentridge at durham.ac.uk (KENTRIDGE, ROBERT W.)
Date: Sat, 24 Feb 2024 16:38:09 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>

Thanks Steve and Jeff. This continues to be very interesting. In my reading of the history of the grandmother cell concept, there was never really much emphasis on the idea that there would be only one cell in the brain that responded to a particular concept. The notion that grandmother cells should only respond to members of the category "grandmother" and not respond to anything else was argued about much more (because, if this is the case, then activity in the grandmother cell can be read as indicating that grandmothers are being thought about, seen, etc.). This seems dubious to me because I can't see how such sharp category boundaries could come about. Of course, as soon as you have concept cells with broad(ish) tuning, then comparing the activities of a number of them should allow sharper distinctions to be drawn about the nature of the stimulus, in the same fashion as we see in early vision or as Georgopoulos found in motor cortex.

cheers, Bob
I?m a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, that I?m not so familiar with ? but I?m thinking that does not include a winner-take-all dynamic). Jeff From: Grossberg, Stephen Date: Saturday, 24 February 2024 at 16:46 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where?s Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a ?grandmother cell?! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve From: Jeffrey Bowers Date: Saturday, February 24, 2024 at 4:38?AM To: Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff From: Grossberg, Stephen Date: Friday, 23 February 2024 at 18:12 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. 
From: Grossberg, Stephen
Date: Friday, 23 February 2024 at 18:12
To: Jeffrey Bowers; KENTRIDGE, ROBERT W.; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu; Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff et al.,

The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic.

Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf

Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf

Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full

More articles on related topics can be found on my web page, sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so.

Best, Steve

From: Connectionists on behalf of Jeffrey Bowers
Date: Thursday, February 22, 2024 at 11:11 AM
To: KENTRIDGE, ROBERT W.; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation.
For example, simple cells are tuned to orientations in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it was hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell - there may be other images that drive the neuron even more; we just don't know, given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions - when you can test the model on all the familiar categories it has seen. So, I would not rule grandmother cells out of hand. Jeff

From: KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 20:56
To: Jeffrey Bowers; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Again, it is great to be examining the relationship between "real" neural coding and the ins and outs of representation in ANNs. I'm really pleased to be able to make a few contributions to a list which I've lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation-tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view is that orientation is represented by a population code, as the vector sum of the orientation preferences of cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people's confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits, Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person's judgment is indicative of the state of a high-level consumer of orientation information.
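A worked sketch of the population-vector reading just described (NumPy; the number of cells, the tuning width, the noise level and the test orientation are assumptions chosen purely for illustration): each cell's preferred orientation, expressed as a unit vector on the doubled angle because orientation is 180-degree periodic, is weighted by that cell's activity, the weighted vectors are summed, and the angle of the sum is the decoded orientation - which can be much finer than the 5-degree spacing of the preferences.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 36 cells with preferred orientations every 5 degrees.
preferred = np.deg2rad(np.arange(0, 180, 5))
true_orientation = np.deg2rad(72.5)          # stimulus between two preferences

# Broad tuning curves on the doubled angle, plus a little noise.
kappa = 2.0
activity = np.exp(kappa * np.cos(2 * (true_orientation - preferred)))
activity += rng.normal(0, 0.05, activity.shape)

# Population vector: activity-weighted sum of unit vectors at the doubled
# preferred angles; halve the resulting angle to undo the doubling.
vec = np.sum(activity[:, None] * np.column_stack(
    [np.cos(2 * preferred), np.sin(2 * preferred)]), axis=0)
decoded = (0.5 * np.arctan2(vec[1], vec[0])) % np.pi

print(np.rad2deg(decoded))   # close to 72.5, i.e. finer than the 5-degree steps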
So again, I'd err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn't our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets information delivered by primary visual cortex.

cheers, Bob

From: Jeffrey Bowers
Date: Wednesday, 21 February 2024 at 12:31
To: KENTRIDGE, ROBERT W.; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

It is possible to define grandmother cells in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell tuned to a vertical line is partly activated by a line 5 degrees off vertical - that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220.

From: Connectionists on behalf of KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 11:48
To: Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "grandmother cells" as defined by Jerry Lettvin - that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active.
The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, and in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols.

cheers

Bob

From: Connectionists on behalf of Gary Marcus
Date: Wednesday, 21 February 2024 at 05:49
To: Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Deeply disappointing that someone would try to inject actual empirical evidence into this discussion.

On Feb 20, 2024, at 08:41, Laurent Mertens wrote:

Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013)

KR, Laurent

From: Connectionists on behalf of Weng, Juyang
Sent: Monday, February 19, 2024 11:11 PM
To: Michael Arbib; connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John

From: Michael Arbib
Sent: Monday, February 19, 2024 1:28 PM
To: Weng, Juyang; connectionists at mailman.srv.cs.cmu.edu
Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI

So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?
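A small numerical aside on the sampling point above about the Quiroga et al. stimulus set (plain Python with NumPy; every number is an assumption chosen only for illustration): if a cell were in fact driven by a handful of individuals out of many thousands, probing it with only a few dozen individuals would almost never turn up a second effective stimulus, so apparent exclusivity is exactly what limited testing would produce even for a non-exclusive cell.

import numpy as np

rng = np.random.default_rng(1)

# Toy numbers: a cell actually driven by 5 of 10,000 possible individuals,
# probed with a random sample of only 80 of them.
n_individuals, n_effective, n_tested = 10_000, 5, 80

trials = 100_000
is_effective = rng.random((trials, n_tested)) < (n_effective / n_individuals)
hits_per_probe = is_effective.sum(axis=1)

# Fraction of probes in which the cell looks (near-)exclusive, i.e. at most
# one effective stimulus shows up in the tested set.
print((hits_per_probe <= 1).mean())   # very close to 1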
From: Connectionists On Behalf Of Weng, Juyang
Sent: Monday, February 19, 2024 8:07 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Early history of symbolic and neural network approaches to AI

I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.
From danny.silver at acadiau.ca Sat Feb 24 21:12:32 2024
From: danny.silver at acadiau.ca (Danny Silver)
Date: Sun, 25 Feb 2024 02:12:32 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Dear Jeff, Stephen and others - The encoding of a concept, or of a symbol associated with a concept, using a single neuron (grandmother cell) would be a poor choice both from a representational perspective and from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy makes sense for an agent that can suffer physical damage. Steve's position in the email below seems to support this. It also makes sense to encode representations in a distributed fashion for the purposes of new concept consolidation and fine-tuning of existing concepts and their variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory).

From a functional perspective, an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat" it does so in the context of hearing, seeing, chasing, touching and smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child whether the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that is more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space.

My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g., a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g., a series of activations within a more complex temporal-spatial manifold of an embedding space)?

Danny Silver
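A small sketch of the static-versus-dynamic contrast in the closing question (assuming PyTorch; the modules, sizes and inputs are purely illustrative and not tied to any model in this thread): a "static" representation is a single point in an embedding space, while a "dynamic" one is a trajectory of states unfolded over time, here produced by a recurrent network driven by a sequence of made-up sensory frames.

import torch
import torch.nn as nn

embedding_dim, num_concepts = 64, 1000   # illustrative sizes

# Static view: the concept "cat" is one fixed point in an embedding space.
embed = nn.Embedding(num_concepts, embedding_dim)
cat_id = torch.tensor([42])              # hypothetical index for "cat"
static_cat = embed(cat_id)               # shape (1, 64): a single activation

# Dynamic view: the concept is a trajectory of states unfolding over time,
# driven here by an invented sequence of multimodal feature frames.
rnn = nn.GRU(input_size=32, hidden_size=embedding_dim, batch_first=True)
frames = torch.randn(1, 10, 32)          # 10 time steps of sensory input
trajectory, _ = rnn(frames)              # shape (1, 10, 64): a path, not a point

print(static_cat.shape, trajectory.shape)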
From gary.marcus at nyu.edu Sat Feb 24 23:27:29 2024
From: gary.marcus at nyu.edu (Gary Marcus)
Date: Sat, 24 Feb 2024 20:27:29 -0800
Subject: Connectionists: GPT's don't memorize?
Message-ID: <1BB2792A-BFB6-4347-9E0F-0B87B29EFBA1@nyu.edu>

An HTML attachment was scrubbed...
From weng at msu.edu Sat Feb 24 16:44:21 2024
From: weng at msu.edu (Weng, Juyang)
Date: Sat, 24 Feb 2024 21:44:21 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To:
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Dear Steve, I have had the pleasure of listening to and following your various ART models. At your suggestion, I also bought your book "Conscious Mind, Resonant Brain: How Each Brain Makes a Mind" and browsed it. Let me ask some questions that will be useful for many people on this list: (1) Do you agree that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientational invariance? Of course, those invariances are not perfect, as explained in my Cresceptron paper (IJCV 1997), arguably the first Deep Learning network for 3D. (2) The model discussed in your book is not developmental, namely, it does not start from a single cell like the human brain and does not learn incrementally. Could you point me to an incremental learning algorithm in your book if what I write is incorrect? (3) Your model does not explain how to learn any Turing machines. Do you have a plan for your model to handle (1) to (3) above, while my DN model does them? Thank you very much for being on the Advisory Committee of the Brain-Mind Institute. Best regards, -John

On Sat, Feb 24, 2024 at 4:03 PM Grossberg, Stephen wrote:

Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where's Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a "grandmother cell"! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve

________________________________
From: Connectionists on behalf of Jeffrey Bowers
Sent: Saturday, February 24, 2024 4:38 AM
To: Grossberg, Stephen ; KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from this that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff

From: Grossberg, Stephen
Date: Friday, 23 February 2024 at 18:12
To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic.

Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf

Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf

Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements.
Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full

More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve

From: Connectionists on behalf of Jeffrey Bowers
Date: Thursday, February 22, 2024 at 11:11 AM
To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are tuned in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it is hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell - there may be other images that drive the neuron even more; we just don't know, given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions - when you can test the model on all the familiar categories it has seen. So, I would not dismiss grandmother cells out of hand. Jeff

From: KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 20:56
To: Jeffrey Bowers , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Again, it is great to be examining the relationship between "real" neural coding and the ins and outs of representation in ANNs. I'm really pleased to be able to make a few contributions to a list which I've lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation-tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this.
The alternative view of orientation coding is that orientation is represented by a population code: orientation is encoded as the vector sum of the orientation preferences of cells with many different orientation tunings, weighted by their levels of activity, and it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people's confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person's judgment is indicative of the state of a high-level consumer of orientation information. So again, I'd err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn't our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets information delivered by primary visual cortex.

cheers, Bob

Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434

From: Jeffrey Bowers
Date: Wednesday, 21 February 2024 at 12:31
To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell for a vertical line is partly activated by a line 5 degrees off vertical, that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220.
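To make the localist point above concrete, the following is a minimal toy sketch, not the Interactive Activation model itself; the letter-overlap activation rule, the tiny lexicon, and the winner-take-all read-out are simplifications chosen purely for illustration. It shows a hand-wired DOG unit that is partly activated by HOG yet still serves as the single unit that locally codes the word DOG.

# Hand-wired localist word units: one unit per word, with activation equal to
# the fraction of letter positions shared with the input (a toy stand-in for
# the letter-to-word connections in an IA-style model).
LEXICON = ["DOG", "CAT", "HOG", "LOG"]

def unit_activation(word_unit, input_word):
    return sum(a == b for a, b in zip(word_unit, input_word)) / len(word_unit)

def present(input_word):
    acts = {w: unit_activation(w, input_word) for w in LEXICON}
    winner = max(acts, key=acts.get)
    return acts, winner

for stimulus in ["DOG", "HOG", "CAT"]:
    acts, winner = present(stimulus)
    print(stimulus, acts, "-> winner:", winner)

# The DOG unit is partially active for HOG (2/3 letters shared), but the code
# is still localist: exactly one unit stands for DOG, and a winner-take-all
# read-out selects it whenever DOG is the input. Partial activation by similar
# inputs does not, by itself, make the code distributed.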
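And to make Bob's population-vector description earlier in this exchange equally concrete, here is another minimal toy sketch, not taken from Georgopoulos (1982) or Bays (2016); the tuning-curve shape, its width, and the 5-degree spacing are assumptions chosen only for illustration. It builds a bank of orientation-tuned units, presents a stimulus at 37 degrees, and decodes orientation as the activity-weighted vector sum, computed on the doubled-angle circle because orientation wraps at 180 degrees.

import numpy as np

# A toy bank of orientation-tuned units: preferred orientations every 5 degrees,
# with a smooth von Mises-like tuning curve on the doubled-angle circle.
prefs_deg = np.arange(0, 180, 5)
prefs_rad = np.deg2rad(prefs_deg)

def responses(stim_deg, kappa=4.0):
    """Firing rates of the bank to a stimulus at stim_deg (0 and 180 identical)."""
    stim = np.deg2rad(stim_deg)
    return np.exp(kappa * (np.cos(2 * (stim - prefs_rad)) - 1.0))

def population_vector_decode(rates):
    """Activity-weighted vector sum on the doubled-angle circle."""
    x = np.sum(rates * np.cos(2 * prefs_rad))
    y = np.sum(rates * np.sin(2 * prefs_rad))
    return (np.rad2deg(np.arctan2(y, x)) / 2.0) % 180.0

rates = responses(37.0)
print("decoded orientation: %.2f deg" % population_vector_decode(rates))
# No single unit prefers exactly 37 degrees, yet the population vector recovers
# it, which is the sense in which a downstream area could read the code without
# treating any individual cell as a labelled line.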
From steve at bu.edu Sat Feb 24 18:32:19 2024
From: steve at bu.edu (Grossberg, Stephen)
Date: Sat, 24 Feb 2024 23:32:19 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To:
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Dear John, I reply below in italics, among your questions:

From: Weng, Juyang
Date: Saturday, February 24, 2024 at 4:44 PM
To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Steve, I have had the pleasure of listening to and following your various ART models. At your suggestion, I also bought your book "Conscious Mind, Resonant Brain: How Each Brain Makes a Mind" and browsed it. Let me ask some questions that will be useful for many people on this list: (1) Do you agree that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientational invariance? Of course, those invariances are not perfect, as explained in my Cresceptron paper (IJCV 1997), arguably the first Deep Learning network for 3D.

The grandmother cell concept is nothing more than a verbal term.
It is not a computational model, so it does not learn anything. I glanced at your Cresceptron paper. It says: "... the user manually draws a polygon outlining the region of interest and types in the label of its class ...". It seems that this is not a self-organizing model that learns in real time through incremental learning. Our model is, and explains challenging neurobiological data along the way. You also write that your model may be "the first Deep Learning network for 3D". Our work does not use Deep Learning, which has 17 serious computational problems in addition to not being biologically plausible. None of these problems of back propagation and Deep Learning have been a problem for Adaptive Resonance Theory since I introduced it in 1976. In particular, Deep Learning is both untrustworthy (because it is not explainable) and unreliable (because it can experience catastrophic forgetting). I review these 17 problems in my 2021 Magnum Opus. You can also find them discussed in Section 17 of the following 1988 article that was published in the first issue of Neural Networks: Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17-61. https://sites.bu.edu/steveg/files/2016/06/Gro1988NN.pdf

(2) The model discussed in your book is not developmental, namely, it does not start from a single cell like the human brain and does not learn incrementally. Could you point me to an incremental learning algorithm in your book if what I write is incorrect?

I am bewildered by your comment above, since it is obviously not true about ANY of my neural models of brain development and learning, all of which self-organize and work in an incremental learning setting. Such models are described in a self-contained and non-technical way in my book. Scores of my articles about self-organizing brain development and learning are described with all technical details on my web page sites.bu.edu/steveg. I am unclear what you mean by the phrase "does not start from a single cell like the human brain", since you clearly do not mean that the human brain is composed of a single cell. On the other hand, Chapter 17 of my 2021 Magnum Opus clarifies that principles of complementarity, uncertainty, and resonance that are embodied in Adaptive Resonance Theory, as well as in various of my other neural network models, also have precursors in cellular organisms that existed long before human brains did, including slime molds and Hydras. These design principles thus seem to have been conserved for a very long time during the evolutionary process. Principles of uncertainty, complementarity, and resonance also have analogs in the laws of physics with which our brains have ceaselessly interacted for eons during their evolution. Quantum mechanics is one example of these principles in physics. Explaining in detail how our brains were shaped during evolution to also embody these physical principles is a long-term project worthy of a great deal of additional research.

(3) Your model does not explain how to learn any Turing machines.

Human brains self-organize using analog signals and parallel computations, and do so in real time. Turing machines do not have these properties. So, yes, no biological neural network model of how brains make minds learns Turing machines, except in the sense that our cognitive systems, which are parts of our brains, have learned to generate emergent properties that invented and can mathematically analyze Turing machines. Is that what you mean?
To explain how our brains can mathematically understand Turing machines, you first need to explain how our brains have learned to represent and use numerical representations in the first place. The question "where do numbers come from?" is a fundamental one. Some progress has been made in modeling where numbers come from and how our brains can learn to use numerical representations and mathematical symbols. A LOT more work needs to be done on this fundamental problem. Perhaps the following article may be helpful: Grossberg, S. and Repin, D. (2003). A neural model of how the brain represents and compares multi-digit numbers: Spatial and categorical processes. Neural Networks, 16, 1107-1140. https://sites.bu.edu/steveg/files/2016/06/GroRep2003NN.pdf

Best, Steve

P.S. Since I became an Emeritus Professor, I no longer serve on any journal editorial boards or advisory committees. Please remove me from yours.

Do you have a plan for your model to handle (1) to (3) above, while my DN model does them? Thank you very much for being on the Advisory Committee of the Brain-Mind Institute. Best regards, -John
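One specific claim in the exchange above is that back-propagation training can suffer catastrophic forgetting. The following deliberately minimal sketch illustrates that phenomenon in its simplest form; the synthetic data and the two intentionally conflicting toy tasks are made up for illustration only, and nothing here is meant to characterize any particular model discussed in this thread.

import numpy as np

rng = np.random.default_rng(0)

def make_task(sign, n=200):
    """Toy binary task: the label depends on the sign of feature 0.
    Task A uses sign=+1, task B uses sign=-1, so the tasks deliberately conflict."""
    X = rng.normal(size=(n, 2))
    y = (sign * X[:, 0] > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic prediction
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(float) == y)

XA, yA = make_task(+1)
XB, yB = make_task(-1)

w = np.zeros(2)
w = train(w, XA, yA)
print("after task A: acc on A = %.2f" % accuracy(w, XA, yA))

w = train(w, XB, yB)   # continue training on task B only
print("after task B: acc on A = %.2f, acc on B = %.2f"
      % (accuracy(w, XA, yA), accuracy(w, XB, yB)))
# Sequential gradient training on task B overwrites the weights that solved
# task A, so accuracy on A collapses: catastrophic interference in its
# simplest, most contrived form.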
From J.Bowers at bristol.ac.uk Sun Feb 25 12:26:12 2024
From: J.Bowers at bristol.ac.uk (Jeffrey Bowers)
Date: Sun, 25 Feb 2024 17:26:12 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To:
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Hi Danny, again, this is defining grandmother cells in such a narrow way that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg has already addressed some of your points, but let me just briefly comment on the first: the worry that damage to neurons is problematic for grandmother cells, as there needs to be redundancy. This leads you to conclude that distributed representations are necessary. But there is nothing about redundancy that is inconsistent with grandmother cells. I consider this in detail in the Bowers (2009) Psychological Review paper I referred to before, and here is just one brief quote from the paper: "But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus.
If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995) called 'probability summation.'" Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it is set to the limit, the model effectively learns a localist grandmother cell each time a word or a face is encoded (an instance theory). The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic because researchers would reject all sorts of models in psychology simply because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques that address all the standard points people make regarding grandmother cells. Jeff

From: Danny Silver
Date: Sunday, 25 February 2024 at 03:13
To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff, Stephen and others - The encoding of a concept, or a symbol associated with a concept, using a single neuron (grandmother cell) would be a poor choice both from a representational perspective and from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy makes sense for an agent that can suffer physical damage. Steve's position in the email below seems to support this. It also makes sense to encode representations in a distributed fashion for the purposes of new concept consolidation and fine-tuning of existing concepts and their variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory). From a functional perspective, an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat", it does so in the context of hearing, seeing, chasing, touching, and smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child whether the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that is more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an embedding space)?

Danny Silver
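Jeff's remark above, that ART with the vigilance parameter pushed to its limit effectively learns a localist, grandmother-cell-like category per item, can be illustrated with a stripped-down ART-1-style sketch. This is a simplification for binary inputs (category choice by |I AND w| / (alpha + |w|), vigilance test |I AND w| / |I| >= rho, fast learning w <- I AND w); it is not Grossberg's full model, and the input patterns are made up for illustration.

import numpy as np

def art1_cluster(patterns, rho, alpha=0.001):
    """Very small ART-1-style clustering of binary vectors; rho is vigilance in [0, 1]."""
    weights = []                          # one binary template per committed category
    assignments = []
    for I in patterns:
        I = np.asarray(I, dtype=bool)
        # rank committed categories by the choice function |I & w| / (alpha + |w|)
        order = sorted(range(len(weights)),
                       key=lambda j: -(np.sum(I & weights[j]) /
                                       (alpha + np.sum(weights[j]))))
        chosen = None
        for j in order:
            match = np.sum(I & weights[j]) / np.sum(I)   # vigilance test
            if match >= rho:
                chosen = j
                weights[j] = I & weights[j]              # fast learning
                break
        if chosen is None:                               # commit a new category unit
            weights.append(I.copy())
            chosen = len(weights) - 1
        assignments.append(chosen)
    return assignments, weights

patterns = [
    [1, 1, 0, 0, 0],   # one made-up item
    [1, 1, 1, 0, 0],   # a similar item
    [0, 0, 0, 1, 1],   # a dissimilar item
]

for rho in (0.6, 1.0):
    assignments, weights = art1_cluster(patterns, rho)
    print("vigilance", rho, "-> category per input:", assignments)
# With moderate vigilance the two similar inputs share one category unit; with
# vigilance at its limit (1.0) every distinct pattern commits its own unit,
# i.e. a localist, grandmother-cell-like code, which is the point Jeff makes.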
________________________________
From: Connectionists on behalf of Jeffrey Bowers
Sent: Saturday, February 24, 2024 5:06 PM
To: Grossberg, Stephen ; KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I think this is where terminology is confusing things. I agree that ART (and all other neural architectures) is "far from being a 'grandmother cell'". The question is whether a neural architecture includes grandmother cells - that is, a unit high in a hierarchy of units that is used to classify objects. In distributed systems there is no such unit at any level of the hierarchy - it is patterns of activation all the way up. By contrast, on grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use "one-hot encoding" of categories (which they almost always do). So, just as grandmother cells can easily be falsified if you define a grandmother cell as one that only responds to one category of input, you can falsify grandmother cells by claiming that they require only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, which mapped onto more complex cells, which eventually mapped onto single neurons that code for one category. I'm a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, which I'm not so familiar with - but I'm thinking that it does not include a winner-take-all dynamic). Jeff
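Jeff's observation that ordinary DNNs already contain grandmother-cell-like units whenever they use one-hot encoded categories can be seen directly at the output layer. The following is a minimal sketch with made-up class names and made-up logits; no trained network is involved, it only shows what one-hot targets and a winner-take-all read-out amount to.

import numpy as np

classes = ["grandmother", "dog", "cat"]

# One-hot targets: each class is represented by a single dedicated output unit.
one_hot_targets = np.eye(len(classes))
print(dict(zip(classes, one_hot_targets.tolist())))

# Toy output logits from some classifier for one input (numbers invented here).
logits = np.array([3.1, 0.4, -1.2])
probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax

# The read-out is winner-take-all over those dedicated units: the single most
# active output unit names the category, even though the other units are
# partially active, which is exactly the localist structure discussed above.
print("softmax:", np.round(probs, 3), "-> predicted:", classes[int(np.argmax(probs))])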
These interactions embody a proposed solution of the Where?s Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a ?grandmother cell?! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve From: Jeffrey Bowers Date: Saturday, February 24, 2024 at 4:38?AM To: Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff From: Grossberg, Stephen Date: Friday, 23 February 2024 at 18:12 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic. Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf Grossberg, S., Srinivasan, K., and Yazdabakhsh, A. (2011). 
On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve From: Connectionists on behalf of Jeffrey Bowers Date: Thursday, February 22, 2024 at 11:11?AM To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! 
From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model.
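To make the partial-activation point concrete, here is a toy, hand-wired localist word layer. It is only a sketch: the three-word lexicon and the shared-letters-in-position rule are illustrative assumptions, not the actual Interactive Activation model, which adds feature and letter layers, inhibition, and dynamics.

```python
# Toy localist word units: each unit's bottom-up input is the fraction of
# letters the stimulus shares with that unit's word, position by position.
LEXICON = ["DOG", "HOG", "CAT"]

def bottom_up(stimulus):
    return {word: sum(s == w for s, w in zip(stimulus, word)) / len(word)
            for word in LEXICON}

print(bottom_up("HOG"))  # the dedicated DOG unit gets 2/3 of full input from HOG
print(bottom_up("DOG"))  # ...and still responds maximally to DOG itself
```

The DOG unit here is as localist as it gets, yet its response to similar inputs is graded, which is exactly the property at issue for the Aniston cell.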
Just as a simple cell tuned to vertical lines is partly activated by a line 5 degrees off vertical - that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220.
From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "Grandmother cells" as defined by Jerry Lettvin - that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434
From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. On Feb 20, 2024, at 08:41, Laurent Mertens wrote: Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron?
(See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________
From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________
From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?
From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.
From steve at bu.edu Sun Feb 25 10:54:05 2024 From: steve at bu.edu (Grossberg, Stephen) Date: Sun, 25 Feb 2024 15:54:05 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Dear Danny, Thanks for your interesting comments below! I will reply to your last question, namely: "My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g.
a series of activations within a more complex temporal-spatial manifold of an embedding space)." [boldface mine] I will focus on one aspect of this "big" question, notably how Adaptive Resonance Theory, or ART, explains how we learn categories that may represent symbols and concepts of variable abstractness, ranging from concrete and specific to general and abstract. All learning in ART is regulated by interactions between an Attentional System, in which categories are learned and remembered, and an Orienting System, which enables ART to respond to novel situations. These systems obey computationally complementary laws, in keeping with the fact that the Attentional System represents information that becomes expected as it is learned, whereas the Orienting System enables novel information to be categorized by the Attentional System. The Attentional System is associated with processes like attention, learning, recognition, and consciousness, whereas the Orienting System is associated with processes like orienting, hypothesis testing, and memory search. INTERACTIONS between these systems make learning in a changing world possible. See the attached figure for a schematic of an ART Hypothesis Testing and Learning Cycle. See the second attached figure for how this memory search cycle explains data about Event Related Potentials, or ERPs. Back to your question: As learning of a category proceeds, it converges upon a stable critical feature pattern that embodies the information that the category represents, in several senses: Critical features are the ones that are incorporated through learning by the adaptive weights in its bottom-up filter and top-down expectation. They are the features to which its top-down expectation pays attention, and the ones that drive the predictions that the category controls. All other features are suppressed as irrelevant outliers. In terms of your question, this means that there is controlled refinement of category learning, one that enables the category to be stably remembered, and to thereby avoid the problem of catastrophic forgetting. The degree to which a category's representation can vary depends upon a vigilance parameter that is computed in the Orienting System. Vigilance determines how big a mismatch of new information with available categories will be tolerated before a memory search for a new category is triggered. If vigilance is chosen low, then general and abstract categories are learned. Here, a high degree of variability is tolerated. If vigilance is chosen high, then concrete and specific categories are learned, such as a frontal view of your mother's face. Here, very little variability is tolerated. I and my collaborators have discovered a lot about how vigilance works, including its anatomical, neurophysiological, biophysical, and biochemical realization in our brains. Here are a few articles about it. The concept was introduced in 1987 in an oft-cited article with Gail Carpenter that proves mathematical theorems about how it works during category learning: Carpenter, G.A., and Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing, 37, 54-115. https://sites.bu.edu/steveg/files/2016/06/CarGro1987CVGIP.pdf
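For readers who want a concrete, if drastically simplified, picture of how a single vigilance parameter trades off general against specific categories, here is a minimal Fuzzy-ART-style sketch in Python. The complement coding, choice function, match test, and fast-learning rule below are textbook simplifications, and the toy data are made up for illustration; none of this captures the real-time dynamics, the Orienting System's neurobiology, or the supervised ARTMAP machinery discussed in the articles cited here.

```python
import numpy as np

def fuzzy_art(inputs, vigilance=0.75, choice=0.01, lr=1.0):
    """Toy Fuzzy-ART-style categorizer: one weight vector per learned category."""
    weights, assignments = [], []
    for x in inputs:                                   # x: feature vector scaled to [0, 1]
        a = np.concatenate([x, 1.0 - x])               # complement coding
        if not weights:                                # first input founds category 0
            weights.append(a.copy()); assignments.append(0); continue
        W = np.array(weights)
        overlap = np.minimum(W, a)                     # fuzzy AND with each stored category
        T = overlap.sum(axis=1) / (choice + W.sum(axis=1))      # choice (bottom-up filter)
        for j in np.argsort(-T):                       # hypothesis testing: best category first
            if overlap[j].sum() / a.sum() >= vigilance:         # vigilance (match) test
                weights[j] = lr * overlap[j] + (1 - lr) * W[j]  # resonance: refine critical features
                assignments.append(int(j)); break
        else:                                          # every category was reset by mismatch:
            weights.append(a.copy())                   # commit a new, more specific category
            assignments.append(len(weights) - 1)
    return assignments

# Low vigilance tolerates variability (few, general categories);
# high vigilance demands close matches (many, specific categories).
data = [np.array(v) for v in ([0.9, 0.1], [0.85, 0.2], [0.1, 0.9], [0.15, 0.8])]
print(fuzzy_art(data, vigilance=0.5))    # e.g. [0, 0, 1, 1]
print(fuzzy_art(data, vigilance=0.95))   # e.g. [0, 1, 2, 3]
```

Raising vigilance in this toy makes every exemplar its own category, which mirrors the "frontal view of your mother's face" example above; lowering it merges the exemplars into broad categories.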
Neurobiological details followed in a series of later articles. I cite two of them here. See my web page sites.bu.edu/steveg for others: Grossberg, S. and Versace, M. (2008). Spikes, synchrony, and attentive learning by laminar thalamocortical circuits. Brain Research, 1218, 278-312. https://sites.bu.edu/steveg/files/2016/06/GroVer2008BR.pdf Palma, J., Versace, M., and Grossberg, S. (2012). After-hyperpolarization currents and acetylcholine control sigmoid transfer functions in a spiking cortical model. Journal of Computational Neuroscience, 32, 253-280. https://sites.bu.edu/steveg/files/2016/06/PalmaGrossbergVersaceTR2012.pdf A good place for one-stop shopping that offers a self-contained and non-technical overview and synthesis of my work is my Magnum Opus, Conscious Mind, Resonant Brain: How Each Brain Makes a Mind: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best, Steve
From: Danny Silver Date: Saturday, February 24, 2024 at 9:13 PM To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Stephen and others - The encoding of a concept or a symbol associated with a concept using a single neuron (grandmother cell) would be a poor choice both from a representational perspective as well as from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy makes sense for an agent that can suffer physical damage. Steve's position in the email below seems to support this. It also makes sense to encode representations in a distributed fashion for the purposes of new concept consolidation and fine tuning of existing concepts and their variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory). From a functional perspective, an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat" it does so in the context of hearing, seeing, chasing, touching, smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child if the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that are more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an embedding space)? Danny Silver Get Outlook for iOS ________________________________
From: Connectionists on behalf of Jeffrey Bowers Sent: Saturday, February 24, 2024 5:06 PM To: Grossberg, Stephen ; KENTRIDGE, ROBERT W.
; Gary Marcus ; Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI CAUTION: This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments. I think this is where terminology is confusing things. I agree that ART (and all other neural architectures) is ?far from being a ?grandmother cell??. The question is whether a neural architecture includes grandmother cells ? that is, a unit high in a hierarchy of units that is used to classify objects. On distributed systems there is no such unit at any level of a hierarchy ? it is patterns of activation all the way up. By contrast, on grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use ?one hot encoding? of categories (which they almost always do). So, just as grandmother cells can easy be falsified if you define a grandmother cell that only responds to one category of input, you can falsify a grandmother cells by claiming that it requires only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, that mapped onto more complex cells, that eventually mapped onto singe neurons that code for one category. I?m a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, that I?m not so familiar with ? but I?m thinking that does not include a winner-take-all dynamic). Jeff From: Grossberg, Stephen Date: Saturday, 24 February 2024 at 16:46 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where?s Waldo Problem. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: ART hypothesis testing and search cycle.tif Type: image/tiff Size: 100818 bytes Desc: ART hypothesis testing and search cycle.tif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ART search cycle explains ERPs.tiff Type: image/tiff Size: 1557506 bytes Desc: ART search cycle explains ERPs.tiff URL:
From nsmattei at gmail.com Sun Feb 25 07:06:58 2024 From: nsmattei at gmail.com (Nicholas Mattei) Date: Sun, 25 Feb 2024 17:36:58 +0530 Subject: Connectionists: Final Call for Papers: Call for Papers: The 6th Games, Agents, and Incentives Workshop (GAIW)@AAMAS 2024 Message-ID: Last Call for Papers @ GAIW!
TL;DR: Webpage: https://preflib.github.io/gaiw2024 - Submission Deadline: February 26, 2024 (AoE). - Acceptance Notification: March 28, 2024. - Camera Ready: April 15, 2024 (AoE). - Workshop: May 6 or 7, 2024 (TBA). Call for Papers: We invite submissions to the 6th iteration of the Games, Agents and Incentives Workshop, co-located with AAMAS 2024 in Auckland, New Zealand. Games, Agents and Incentives is a confederated workshop which focuses on agents and incentives in AI. In particular, it promotes approaches that deal with game theory (cooperative and non-cooperative), social choice, and agent-mediated e-commerce aspects of AI systems. The confederated workshop merges multiple workshops that have been associated with AAMAS in the past, which considered different aspects of the general interplay between AI and economics: ? CoopMAS: Cooperative Games in Multi-agent Systems ? AMEC: Agent-Mediated Electronic Commerce ? EXPLORE: Exploring Beyond the Worst Case in Computational Social Choice Over the past two decades, the focus of agent incentives in decentralised and centralised AI systems has increased dramatically. These issues come up when designing preference aggregation mechanisms and markets; computing equilibria and bidding strategies; facilitating cooperation among agents; and fairly dividing resources. ********************************************************** PAPER SUBMISSION ********************************************************** Authors should submit full papers electronically in PDF format at https://cmt3.research.microsoft.com/GAIW2024 Formatting Guidelines: Please format papers according to the AAMAS 2024 format (author kit ). Optionally, one may submit in LNCS format as well (LaTeX template). Paper Length: Papers can be at most 8 pages long in AAMAS format, or 12 pages long in LNCS format. Additional pages may be used for references. Supplemental material can be appended at the end of the paper. However, reviewers are instructed to make their evaluations based on the main submission, and are not obligated to consult the supplemental material. Multiple Submissions: To widen participation and encourage discussion, there will be no formal publication of workshop proceedings. We will, however, post the accepted papers online to the benefit of the participants to the workshop. Therefore, submission of preliminary work and papers to be submitted or in preparation for submission to other major venues in the field are encouraged. Past Submissions: In order to strike a balance between new work and work that may have been presented, but not widely seen, we ask that if authors want to submit published work they do so non-anonymously and clearly indicate when and where the work was published. We will only accept work which has been published in the last calendar year (e.g., IJCAI 2023, NeurIPS 2023, AAAI 2023, AAAI 2024 and any conference held strictly after February 2023). 
We invite papers on topics of game theory, mechanism design, fair allocation, computational social choice, and their applications to multi-agent systems: - Algorithmic mechanism design - Auctions - Behavioral Game Theory - Bounded rationality - Cooperative Games - Computational advertising - Computational aspects of equilibria - Computational social choice - Coalitions, coordination, collective action, and cooperation - Economic aspects of security and privacy - Economic aspects of distributed and network computing - Equilibrium computation - Empirical approaches to e-market - Fairness (in ML & elsewhere) - Fair Division - Incentives in machine learning - Information and attention economics - Learning in games (e.g., solution concepts and equilibria) - Matching and Matching Markets - Negotiation - Price differentiation and price dynamics - Social networks - Trading agent design and analysis - Uncertainty in AI and economics Best Presentation Award: The organizing committee of GAIW will be giving two awards (first-place and runner up) for best paper presentations. The award criteria include the clarity of presentation, the level of engagement, the content, and discussion handline. Inquiries: If you have any questions, direct them to alan.tsang at carleton.ca. ********************************************************** IMPORTANT DATES ********************************************************** - Submission Deadline: February 26, 2024 (AoE). - Acceptance Notification: March 28, 2024. - Camera Ready: April 15, 2024 (AoE). - Workshop: May 6 or 7, 2024 (TBA). ********************************************************** ORGANIZATION ********************************************************** Program Chairs Ben Abramowitz, Tulane University Haris Aziz, UNSW Sydney and Data61 John P. Dickerson, University of Maryland Hadi Hosseini, Penn State University Nicholas Mattei, Tulane University Svetlana Obraztsova, Carleton University Zinovi Rabinovich, Carleton University Alan Tsang, Carleton University Tomasz W?s, Penn State University -- *Nicholas Mattei* Assistant Professor, Tulane University nsmattei at tulane.edu | www.nickmattei.net Stanley Thomas Hall | 305B +1 504 865 5782 Department of Computer Science Tulane University 6823 St Charles Ave New Orleans, LA 70118 -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at icas.cc Sun Feb 25 10:42:55 2024 From: info at icas.cc (ICAS Organizing Committee) Date: Sun, 25 Feb 2024 16:42:55 +0100 Subject: Connectionists: Call for Participation ACAIN 2024 - 4th Int. Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy -> Deadline: March 23 Message-ID: Call for Participation & Call for Papers (apologies for multiple copies) _______________________________________________________________ The 4th International Advanced Course & Symposium on Artificial Intelligence & Neuroscience, September 22-25, 2024, Riva del Sole Resort & SPA, Castiglione della Pescaia (Grosseto) Tuscany, Italy W: https://acain2024.icas.events E: acain at icas.cc FB: https://www.facebook.com/ACAIN.LakeDistrict/ Symposium Deadlines Paper Submission (Symposium): by Saturday March 23, 2024 (AoE). https://acain2024.icas.events/symposium-call-for-papers/ https://easychair.org/conferences/?conf=acain2024 Notification of Decision for Papers (Symposium): by Wednesday July 10, 2024. Camera Ready Submission (Symposium): by Monday July 20, 2024. 
Author Registration (Symposium): by July 20, 2024. Regular Registration (Symposium): by Saturday March 23, 2024. Late Registration (Symposium): from Sunday March 24, 2024. Course Deadlines Regular Registration (Course): by Saturday March 23, 2024. https://acain2024.icas.events/registration/ Late Registration (Course): from Sunday March 24, 2024. Oral Presentation Submission (Course): by Saturday March 23, 2024. Notification of Decision for Oral Presentation (Course): by August 31, 2024. SCOPE & MOTIVATION: ACAIN 2024: AI meets Computational Neuroscience and Cognitive Science The ACAIN 2024 symposium and course is an interdisciplinary event featuring leading scientists from AI and Neuroscience, providing a special opportunity to learn about cutting-edge research in the fields of AI, Neuroscience, Neuroscience-Inspired AI, Human-Level AI, and Cognitive Science. The 4th Advanced Course and Symposium on Artificial Intelligence & Neuroscience (ACAIN) is a full-immersion four-day Course and Symposium in Tuscany on cutting-edge advances in Artificial Intelligence and Neuroscience with lectures delivered by world-renowned experts. The Course provides a stimulating environment for academics, early career researchers, Post-Docs, PhD students and industry leaders. Participants will also have the chance to present their results with oral talks or posters, and to interact with their colleagues, in a convivial and productive environment. Two days of keynote talks and oral presentations, the ACAIN Symposium, (September 24-25), will be preceded by lectures of leading scientists, the ACAIN Course, (September 22-23). Bringing together AI and neuroscience promises to yield benefits for both fields. The future impact and progress in both AI and Neuroscience will strongly depend on continuous synergy, exchange, collaboration and efficient cooperation between the two research communities. These are the goals of the International Course and Symposium - ACAIN 2024, which is aimed both at AI experts with interests in Neuroscience and at neuroscientists with an interest in AI. ACAIN 2024 accepts rigorous research that promotes and fosters multidisciplinary interactions between artificial intelligence and neuroscience. The Advanced Course is suited for scholars, academics, early career researchers, Post-Docs, PhD students and industry leaders. The Event (Course and Symposium) will involve a total of 36-40 hours of lectures. Academically, this will be equivalent to 8 ECTS points for the PhD Students and the Master Students attending the Event. LECTURERS https://acain2024.icas.events/course-lecturers/ * Maria Eckstein, Google DeepMind, London, UK * Auke Jan Ijspeert, EPFL, Switzerland * Zeb Kurth-Nelson, Google DeepMind & UCL, London, UK * Loic Matthey, Google DeepMind, London, UK * Kevin J. 
Miller, DeepMind & UCL, London, UK * Thomas Parr, Oxford University, UK * Melika Payvand, Institute of Neuroinformatics, University of Zurich & ETH Zurich, Switzerland * Alessandro Treves, International School for Advanced Studies, Italy More Lecturers TBA COURSE DESCRIPTION: https://acain2024.icas.events/course-description/ SYMPOSIUM CALL FOR PAPERS: https://acain2024.icas.events/symposium-call-for-papers/ SYMPOSIUM PROGRAM COMMITTEE (partial list, confirmed members): https://acain2024.icas.events/program-committee/ ORGANIZING COMMITTEE: https://acain2024.icas.events/symposium-committee/ VENUE & ACCOMMODATION: https://acain2024.icas.events/venue/ ACAIN 2024 is a *Residential Conference*, all participants (invited speakers, authors, organizers, chairs, participants) must book and stay at the Riva del Sole Resort and Spa. No exceptions are allowed. https://lod2024.icas.events/lod-2024-a-residential-conference/ Riva del Sole Resort & SPA a: Localit? Riva del Sole - Castiglione della Pescaia (Grosseto) CAP 58043, Tuscany - Italy p: +39-0564-928111 f: +39-0564-935607 e: booking.events at rivadelsole.it w: www.rivadelsole.it/en ACTIVITIES: https://acain2024.icas.events/activities/ REGISTRATION: https://acain2024.icas.events/registration/ See you in Tuscany in September! ACAIN 2024 Organizing Committee. E: acain at icas.cc W: https://acain2024.icas.events FB: https://www.facebook.com/ACAIN.LakeDistrict -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From maanakg at gmail.com Sun Feb 25 15:04:31 2024 From: maanakg at gmail.com (Maanak Gupta) Date: Sun, 25 Feb 2024 14:04:31 -0600 Subject: Connectionists: Call for Papers: (Revised Deadline - March 4) ACM Workshop on Secure and Trustworthy Cyber-Physical Systems Message-ID: Dear Colleagues, ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS) aims to represent a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS. The workshop will be held in Porto, Portugal (or virtually) on June 21, 2024 in conjunction with the 14th ACM Conference on Data and Application Security and Privacy (CODASPY 2024). *** Paper submission deadline: March 04, 2024 *** *** Website: https://sites.google.com/view/sat-cps-2024/ *** SaT-CPS aims to represent a forum for researchers and practitioners from industry and academia interested in various areas of CPS security. SaT-CPS seeks novel submissions describing practical and theoretical solutions for cyber security challenges in CPS. Submissions can be from different application domains in CPS. 
Example topics of interest are given below, but are not limited to: - Secure CPS architectures - Authentication mechanisms for CPS - Access control for CPS - Key management in CPS - Key management in CPS - Attack detection for CPS - Threat modeling for CPS - Forensics for CPS - Intrusion and anomaly detection for CPS - Trusted-computing in CPS - Energy-efficient and secure CPS - Availability, recovery, and auditing for CPS - Distributed secure solutions for CPS - Metrics and risk assessment approaches - Privacy and trust - Blockchain for CPS security - Data security and privacy for CPS - Digital twins for CPS - Wireless sensor network security - CPS/IoT malware analysis - CPS/IoT firmware analysis - Economics of security and privacy - Securing CPS in medical devices/systems - Securing CPS in civil engineering systems/devices - Physical layer security for CPS - Security on heterogeneous CPS - Securing CPS in automotive systems - Securing CPS in aerospace systems - Usability security and privacy of CPS - Secure protocol design in CPS - Vulnerability analysis of CPS - Anonymization in CPS - Embedded systems security - Formal security methods in CPS - Industrial control system security - Securing Internet-of-Things - Securing smart agriculture and related domains The workshop is planned for one day, June 21, 2024, on the last day of the conference. Instructions for Paper Authors All submissions must describe original research, not published nor currently under review for another workshop, conference, or journal. All papers must be submitted electronically via the Easychair system: https://easychair.org/conferences/?conf=acmsatcps2024 Full-length papers Papers must be at most 10 pages in length in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template). Submission implies the willingness of at least one author to attend the workshop and present the paper. Accepted papers will be included in the ACM Digital Library. The presenter must register for the workshop before the deadline for author registration. Position papers and Work-in-progress papers We also invite short position papers and work-in-progress papers. Such papers can be of length up to 6 pages in double-column ACM format (as specified at https://www.acm.org/publications/proceedings-template), and must clearly state "Position Paper" or "Work in progress," as the case may be in the title section of the paper. These papers will be reviewed and accepted papers will be published in the conference proceedings. Important Dates Due date for full workshop submissions: March 04, 2024 Notification of acceptance to authors: April 1, 2024 Camera-ready of accepted papers: April 7, 2024 Workshop day: June 21, 2024 -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at msu.edu Sun Feb 25 17:48:59 2024 From: weng at msu.edu (Weng, Juyang) Date: Sun, 25 Feb 2024 22:48:59 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI Message-ID: Dear Steve, Thank you for your detailed response. To focus on my three key questions, I respond to your comments below by clarifying the three questions. (1) You agreed that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientational invariance. I object to the grandmother cell idea because it is a symbolic concept. (a) You wrote you glanced at our Cresceptron paper. 
It says: "the user manually draws a polygon outlining the region of interest and types in the label of its class...". Contrary to what you wrote about Cresceptron, Cresceptron is a real-time and incremental learning algorithm. Cresceptron was the first neural network that learns incrementally from natural and cluttered scenes, with my image annotation using a polygon. Its polygon idea was stolen by ImageNet (without citing) and ImageNet simplified my polygon to a rectangle. (b) Carpenter, Grossberg, and Reynolds (ARTMAP, 1991) wrote, "on a trial and error basis using only local operations". Does your ARTMAP algorithm step (A.1) give different accuracies from different initializations? https://www.sciencedirect.com/science/article/abs/pii/089360809190012T (c) If I understand correctly, the above ARTMAP takes only monolithic inputs, where "the vectors may encode visual representations of objects", not a vector of a cluttered scene that contains irrelevant backgrounds, as Cresceptron did. A later paper of yours deals with partial views, but it does not deal with natural images of cluttered scenes. (2) The SOVEREIGN model discussed in your book does not start from a single cell like the human brain and does not learn incrementally. You wrote, "I am bewildered by your comment above". Probably you have not considered brain-scale development. A zygote starts from a single cell. Thus, the brain should start from a single cell too. DN-3 deals with brain patterning from a single cell. Fig. 16.42 of SOVEREIGN in your book has a static block diagram and therefore it does not deal with brain patterning. It has an open skull to allow you to manually inject symbols as blocks. (3) Your models do not explain how to learn any Turing machines. You wrote, "no biological neural network model of how brains make minds learn Turing machines, except in the sense that our cognitive systems, that are parts of our brains, have learned to generate emergent properties that invented and can mathematically analyze Turing machines. Is that what you mean?" No, please read the paper below on how a DN overall learns any Turing Machine, not just its cognitive subsystem. This is a necessary condition for any brain-modeling network, because any such model must at least be complete in Turing machine logic. https://www.scirp.org/reference/referencespapers?referenceid=1400949 Best regards, -John Weng, J. (2011). Three Theorems: Brain-Like Networks Logically Reason and Optimally Generalize. International Joint Conference on Neural Networks, San Jose, 31 July-5 August 2011, 2983-2990.
On Sat, Feb 24, 2024 at 6:32 PM Grossberg, Stephen wrote: Dear John, I reply below in italics, among your questions:
From: Weng, Juyang Date: Saturday, February 24, 2024 at 4:44 PM To: Jeffrey Bowers, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I have had the pleasure to listen to and follow your various ART models. With your suggestions, I also bought your book "Conscious Mind, Resonant Brain: How Each Brain Makes a Mind" and browsed it.
Let me ask some questions that will be useful for many people on this list:

(1) Do you agree that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientational invariance? Of course, those invariances are not perfect, as explained in my Cresceptron paper (IJCV 1997), arguably the first Deep Learning network for 3D.

The grandmother cell concept is nothing more than a verbal term. It is not a computational model, so it does not learn anything.

I glanced at your Cresceptron paper. It says: "the user manually draws a polygon outlining the region of interest and types in the label of its class." It seems that this is not a self-organizing model that learns in real time through incremental learning. Our model is, and explains challenging neurobiological data along the way.

You also write that your model may be "the first Deep Learning network for 3D". Our work does not use Deep Learning, which has 17 serious computational problems in addition to not being biologically plausible. None of these problems of back propagation and Deep Learning have been a problem for Adaptive Resonance Theory since I introduced it in 1976. In particular, Deep Learning is both untrustworthy (because it is not explainable) and unreliable (because it can experience catastrophic forgetting). I review these 17 problems in my 2021 Magnum Opus. You can also find them discussed in Section 17 of the following 1988 article that was published in the first issue of Neural Networks:

Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17-61. https://sites.bu.edu/steveg/files/2016/06/Gro1988NN.pdf

(2) Your model discussed in your book is not developmental, namely, it does not start from a single cell like the human brain and does not learn incrementally. Could you point me to an incremental learning algorithm in your book if what I write is incorrect?

I am bewildered by your comment above, since it is obviously not true about ANY of my neural models of brain development and learning, all of which self-organize and work in an incremental learning setting. Such models are described in a self-contained and non-technical way in my book. Scores of my articles about self-organizing brain development and learning are described with all technical details on my web page sites.bu.edu/steveg.

I am unclear what you mean by the phrase "does not start from a single cell like the human brain", since you clearly do not mean that the human brain is composed of a single cell.

On the other hand, Chapter 17 of my 2021 Magnum Opus clarifies that principles of complementarity, uncertainty, and resonance that are embodied in Adaptive Resonance Theory, as well as in various of my other neural network models, also have precursors in cellular organisms that existed long before human brains did, including slime molds and Hydras. These design principles thus seem to have been conserved for a very long time during the evolutionary process. Principles of uncertainty, complementarity, and resonance also have analogs in the laws of physics with which our brains have ceaselessly interacted for eons during their evolution. Quantum mechanics is one example of these principles in physics. Explaining in detail how our brains were shaped during evolution to also embody these physical principles is a long-term project worthy of a great deal of additional research.

(3) Your model does not explain how to learn any Turing machines.
Human brains self-organize using analog signals and parallel computations, and do so in real time. Turing machines do not have these properties. So, yes, there is no biological neural network model of how brains make minds learn Turing machines, except in the sense that our cognitive systems, that are parts of our brains, have learned to generate emergent properties that invented and can mathematically analyze Turing machines. Is that what you mean?

To explain how our brains can mathematically understand Turing machines, you first need to explain how our brains have learned to represent and use numerical representations in the first place. The question "where do numbers come from?" is a fundamental one. Some progress has been made in modeling "where numbers come from" and how our brains can learn to use numerical representations and mathematical symbols. A LOT more work needs to be done on this fundamental problem. Perhaps the following article may be helpful:

Grossberg, S. and Repin, D. (2003). A neural model of how the brain represents and compares multi-digit numbers: Spatial and categorical processes. Neural Networks, 16, 1107-1140. https://sites.bu.edu/steveg/files/2016/06/GroRep2003NN.pdf

Best,

Steve

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danny.silver at acadiau.ca Sun Feb 25 13:40:03 2024
From: danny.silver at acadiau.ca (Danny Silver)
Date: Sun, 25 Feb 2024 18:40:03 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To:
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Jeff - Thanks for this. I can see how a local encoding of a concept into multiple cells can build some redundancy. But this seems like an inefficient and ineffective use of a finite representational space (and associated energy) for learning the wide variety of concepts in the world. Can you explain the efficacy of a grandmother cell encoding (regardless of whether it is a single cell or multiple cells that do not use a distributed encoding) for a variety of related concepts such as cats, which may vary from small to large in size, vary in colour, and vary in their relation to humans (house cats, barn cats, mountain lions, cheetahs, lions)? Would there be a different grandmother cell (or collections of such cells) encoding for each cat type? And if each type of cat is associated with a richer set of modal representations from various regions of the brain encoding features such as shape, colour, smell, and emotion, does a grandmother cell encoding not seem redundant and brittle as compared to a distributed representation that summarizes aspects of these modalities?

There is also the issue of a fluid topology over this set of concepts that allows a house cat to morph into a barn cat and then into a larger wild animal based on changes in the modal features. And we know this topology changes over time as humans experience more of the world. Initially, when we are children, such topologies are incorrect but of no consequence because our parents know better; later, after learning more about the family of concepts, we fill in the details, changing the topology of the concepts to allow us to survive on our own.

But perhaps the discussion here hinges on the difference between the representation of concepts and the representation of symbols. As we describe in https://arxiv.org/abs/2304.13626, there is a difference between concepts and the symbols that refer to concepts. Concepts are complex and messy, but all animals are able to learn concepts -
most importantly, the things they like to eat and the things that can eat them, and the relations between such, are also concepts. Symbols, on the other hand, seem to be learned and used by only a few species on the planet. Symbols capture crude but important aspects of concepts and provide tools by which intelligent agents can communicate, with some difficulty. Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought.

- Danny

From: Jeffrey Bowers
Date: Sunday, February 25, 2024 at 1:26 PM
To: Danny Silver, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Hi Danny, again, this is defining grandmother cells in a narrow way so that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg has already addressed some of your points, but let me just briefly comment on the first - the worry that damage to neurons is problematic for grandmother cells, as there needs to be redundancy. This leads you to conclude that distributed representations are necessary. But there is nothing about redundancy that is inconsistent with grandmother cells. I consider this in detail in the Bowers (2009) Psychological Review paper I referred to before, and here is just one brief quote from the paper: "But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus. If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995) called 'probability summation.'"

Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it is set to the limit, the model effectively learns a localist grandmother cell each time a word or a face is encoded (and instance theory). The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic because researchers would just reject all sorts of models in psychology because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques that address all the standard points people make regarding grandmother cells.

Jeff
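To see the vigilance point in runnable form, here is a minimal Fuzzy-ART-style sketch in Python. It is an illustration only, not the published ARTMAP algorithm or any code discussed in this thread; the toy binary "word" patterns, the parameter values, and the fast-learning simplification are assumptions made for the example. With vigilance at its limit (1.0), every distinct input commits its own category node (the localist, grandmother-cell-like regime), while a lower vigilance lets similar inputs share a node.

import numpy as np

def complement_code(a):
    # Complement coding: I = (a, 1 - a), so every coded input has the same L1 norm.
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

class FuzzyART:
    def __init__(self, vigilance=0.9, alpha=1e-6):
        self.rho = vigilance   # vigilance: higher values force narrower categories
        self.alpha = alpha     # choice parameter
        self.w = []            # one weight vector per committed category node

    def train(self, pattern):
        I = complement_code(pattern)
        # Rank committed nodes by the choice function and test the best ones first.
        T = [np.minimum(I, wj).sum() / (self.alpha + wj.sum()) for wj in self.w]
        for j in np.argsort(T)[::-1]:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                     # vigilance test passed: resonate
                self.w[j] = np.minimum(I, self.w[j])  # fast learning (beta = 1)
                return j
        self.w.append(I.copy())                       # otherwise commit a new node
        return len(self.w) - 1

# Toy binary codes for three "words"; purely illustrative.
patterns = {"DOG": [1, 0, 0, 1, 0, 1], "HOG": [0, 1, 0, 1, 0, 1], "CAT": [1, 1, 1, 0, 0, 0]}

for rho in (0.6, 1.0):
    net = FuzzyART(vigilance=rho)
    assignments = {name: net.train(p) for name, p in patterns.items()}
    print(f"vigilance={rho}: {assignments} ({len(net.w)} committed nodes)")

# With vigilance = 1.0 each distinct pattern commits its own node (a localist code);
# at 0.6, DOG and HOG share a node. Redundant copies of the same localist node behave
# like Barlow's "probability summation": with k equivalent cells that each fire with
# probability p, the chance that at least one fires is 1 - (1 - p)**k.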
From: Danny Silver
Date: Sunday, 25 February 2024 at 03:13
To: Jeffrey Bowers, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff, Stephen and others -

The encoding of a concept, or a symbol associated with a concept, using a single neuron (grandmother cell) would be a poor choice, both from a representational perspective and from a functional perspective, for a lifelong learning and reasoning agent. First and foremost, representational redundancy makes sense for an agent that can suffer physical damage. Steve's position in the email below seems to support this. It also makes sense to encode representations in a distributed fashion for the purposes of new concept consolidation and fine-tuning of existing concepts and their variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory).

From a functional perspective, an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat" it does so in the context of hearing, seeing, chasing, touching, and smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child if the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that is more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space.

My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an embedding space)?

Danny Silver

________________________________
From: Connectionists on behalf of Jeffrey Bowers
Sent: Saturday, February 24, 2024 5:06 PM
To: Grossberg, Stephen; KENTRIDGE, ROBERT W.; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I think this is where terminology is confusing things. I agree that ART (and all other neural architectures) is "far from being a 'grandmother cell'". The question is whether a neural architecture includes grandmother cells - that is, a unit high in a hierarchy of units that is used to classify objects. In distributed systems there is no such unit at any level of a hierarchy - it is patterns of activation all the way up. By contrast, in grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use "one-hot encoding" of categories (which they almost always do).
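A toy sketch of that last point, again in Python and again purely illustrative: the bag-of-letters input code and the hand-set weight rows are invented for the example and are not the Interactive Activation model or any DNN mentioned here. The single DOG output unit is the localist, grandmother-cell-like code, and a similar input (HOG) partly activates it without changing what the unit stands for.

import numpy as np

letters = "ACDGHOT"

def encode(word):
    # Bag-of-letters input vector; letter position is ignored in this toy example.
    v = np.zeros(len(letters))
    for ch in word:
        v[letters.index(ch)] += 1.0
    return v

categories = ["DOG", "CAT", "HOG"]
W = np.stack([encode(w) for w in categories])  # one output row (one localist unit) per word

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for probe in ["DOG", "HOG"]:
    p = softmax(W @ encode(probe))
    print(probe, {c: round(float(pi), 2) for c, pi in zip(categories, p)})

# The DOG probe drives the DOG unit hardest; the HOG probe drives the HOG unit hardest
# while still partly activating DOG (shared letters). Either way the category is carried
# by one dedicated output unit, which is the sense of "grandmother cell" used above.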
So, just as grandmother cells can easily be falsified if you define a grandmother cell as one that only responds to one category of input, you can falsify grandmother cells by claiming that they require only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, which mapped onto more complex cells, which eventually mapped onto single neurons that code for one category. I'm a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, which I'm not so familiar with - but I'm thinking that it does not include a winner-take-all dynamic).

Jeff

From: Grossberg, Stephen
Date: Saturday, 24 February 2024 at 16:46
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff,

Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen.

In the case of invariant object categories, a widespread neural architecture is needed to learn them, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas.

Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where's Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a "grandmother cell"!

My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

Best again,

Steve

From: Jeffrey Bowers
Date: Saturday, February 24, 2024 at 4:38 AM
To: Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories.
ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance).

Jeff

From: Grossberg, Stephen
Date: Friday, 23 February 2024 at 18:12
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff et al.,

The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic.

Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf

Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf

Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full

More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so.

Best,

Steve

From: Connectionists on behalf of Jeffrey Bowers
Date: Thursday, February 22, 2024 at 11:11 AM
To: KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are tuned in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable.

The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it is hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell - there may be other images that drive the neuron even more; we just don't know, given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions - when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand.

Jeff

From: KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 20:56
To: Jeffrey Bowers, Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Again, it is great to be examining the relationship between "real" neural coding and the ins and outs of representation in ANNs. I'm really pleased to be able to make a few contributions to a list which I've lurked on since the late 1980s!

I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation-tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code, where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex.
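For readers who want to see that vector-sum readout written down, here is a small numerical sketch in Python. The cosine tuning curves, the number of cells, the noise level, and the test direction are invented for illustration; this is not a reconstruction of the Georgopoulos experiment or of the Bays (2016) analysis cited just below.

import numpy as np

rng = np.random.default_rng(0)

n_cells = 16
preferred = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)  # preferred directions

def responses(stimulus, noise_sd=0.05):
    # Cosine tuning (peak at each cell's preferred direction) plus a little noise.
    rates = 0.5 * (1.0 + np.cos(stimulus - preferred))
    return np.clip(rates + rng.normal(0.0, noise_sd, n_cells), 0.0, None)

def population_vector(rates):
    # Vector sum of each cell's preferred direction, weighted by its firing rate.
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2.0 * np.pi)

stimulus = np.deg2rad(73.0)  # a direction lying between the cells' preferred directions
decoded = population_vector(responses(stimulus))
print(f"true = {np.rad2deg(stimulus):.1f} deg, decoded = {np.rad2deg(decoded):.1f} deg")

# The decoded value is far finer-grained than the 22.5-degree spacing between preferred
# directions: the fine value is carried by the whole population, which is what a
# downstream "consumer" area would read out.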
There is more recent psychophysical evidence that people's confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person's judgment is indicative of the state of a high-level consumer of orientation information. So again, I'd err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn't our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets information delivered by primary visual cortex.

cheers, Bob

Professor of Psychology, University of Durham.
Durham PaleoPsychology Group.
Durham Centre for Vision and Visual Cognition.
Durham Centre for Visual Arts and Culture.
Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme.
Department of Psychology, University of Durham, Durham DH1 3LE, UK.
p: +44 191 334 3261 f: +44 191 334 3434

From: Jeffrey Bowers
Date: Wednesday, 21 February 2024 at 12:31
To: KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell for a vertical line is partly activated by a line 5 degrees off vertical - that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back:

Bowers, J. S. (2009). On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220.

From: Connectionists on behalf of KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 11:48
To: Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "Grandmother cells" as defined by Jerry Lettvin -
that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, and in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols.

cheers Bob

Professor of Psychology, University of Durham.
Durham PaleoPsychology Group.
Durham Centre for Vision and Visual Cognition.
Durham Centre for Visual Arts and Culture.
Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme.
Department of Psychology, University of Durham, Durham DH1 3LE, UK.
p: +44 191 334 3261 f: +44 191 334 3434

From: Connectionists on behalf of Gary Marcus
Date: Wednesday, 21 February 2024 at 05:49
To: Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Deeply disappointing that someone would try to inject actual empirical evidence into this discussion.

On Feb 20, 2024, at 08:41, Laurent Mertens wrote:

Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol."

What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013)

KR, Laurent

________________________________
From: Connectionists on behalf of Weng, Juyang
Sent: Monday, February 19, 2024 11:11 PM
To: Michael Arbib; connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Michael,

You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies.
Best regards,
-John

________________________________
From: Michael Arbib
Sent: Monday, February 19, 2024 1:28 PM
To: Weng, Juyang; connectionists at mailman.srv.cs.cmu.edu
Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI

So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?

From: Connectionists On Behalf Of Weng, Juyang
Sent: Monday, February 19, 2024 8:07 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Early history of symbolic and neural network approaches to AI

I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From steve at bu.edu Sun Feb 25 15:38:54 2024
From: steve at bu.edu (Grossberg, Stephen)
Date: Sun, 25 Feb 2024 20:38:54 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To:
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID:

Dear Danny,

I looked at your article to see how you are using the term "concept". Here's a quote from it:

"Definition 1. A concept is an object, a collection of objects, or an abstract idea that can be learned and represented by an intelligent agent. Concepts may range from specific physical objects ('that hockey puck'), to a category of objects ('birds'), to very abstract and semantically complex ideas ('blue', 'top', 'justice', 'try', 'meaning'). More complex concepts can be built out of multiple more primitive concepts ('girl riding a bike', 'writing a technical paper')."

Concepts like "specific physical objects" and "a category of objects" are regularly modeled by available neural network models like Adaptive Resonance Theory.

The items that you group together that you call "very abstract and semantically complex ideas" vary greatly in their abstractness, such as "blue" vs. "top". In all of them, however, you are tacitly assuming an agent that has learned language and its meanings, at least if you literally want to use words to express these concepts.

The following competence is within the capability of current models: Prime the model to store a representation of the color (not the name) "blue" in short-term memory, and then use eye movements to sequentially shift attention to the objects in a scene until it resonates on an object that is blue. Here are a few articles that model how our brains can do that kind of thing:

Huang, T.-R., and Grossberg, S. (2010).
Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation. Psychological Review, 117(4), 1080-1112. https://sites.bu.edu/steveg/files/2016/06/HuangGrossberg2010PR.pdf

Grossberg, S., and Huang, T.-R. (2009). ARTSCENE: A neural system for natural scene classification. Journal of Vision, 9(4):6, 1-19. https://jov.arvojournals.org/article.aspx?articleid=2193487

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

A concept like "justice" is hard for most humans to even define. I do not think that it is a good example of something one would expect a model to learn.

A concept like "meaning" needs to be operationalized. I have begun to do that in the following recent article. I welcome others who want to more deeply understand how we learn language meanings to further develop or revise this model. This model requires the integration of a lot of previously defined and computationally simulated models. I illustrate that by copying its Abstract below. The model explains how we learn the meaning of phrases and sentences like the following one, which you noted in your email: "girl riding a bike". I call the model ChatSOME, where SOME abbreviates Self-Organizing MEaning, because the model uses the kinds of processes that are needed to replace Generative AI models like ChatGPT, which literally do not know what they are talking about.

Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult-child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full

"This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning."

Best,

Steve
Can you explain the efficacy of a grandmother cell encoding (regardless if it is a single cell or multiple cell that do not use a distributed encoding) for a variety of related concepts such as cats that may vary small to large in size, vary in colour, and vary in their relation to humans (house cats, barn cats, mountain lions, cheetas, lions). Would there be a different grandmother cell (or collections of such cells) encoding for each cat type? And if each type of cat is associated with a richer set of modal representation from various regions of the brain encoding features such as shape, colour, smell, emotion, does a grandmother cell encoding not seem redundant and brittle as compared to a distributed representation that summarizes aspects of these modalities. There is also the issue of a fluid topology over this set of concepts that allows a house cat to morphe into a barn cat to morphe into a larger wild animal based on changes in the modal features. And we know this topology changes over time as humans experience more of the world. Initially, when we are children, such topologies are incorrect but of no consequent because our parents know better, later after learning more about the family of concepts we fill in the details changing the topology of the concepts to allow us to survive on our own. But perhaps the discussion here hinges on the difference between the representation of concepts and the representation of symbols. As we describe in https://arxiv.org/abs/2304.13626 there is the difference between concepts and symbols that refer to concepts. Concepts are complex and messy, but all animals are able to learn concepts ? most importantly, the things they like to eat and the things that can eat them and the relations between such are also concepts. Symbols on the other hand seem to be learned and used by only a few species on the planet. Symbols capture crude but important aspects of concepts and provide tools by which intelligent agents can communicate, with some difficulty. Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought. ? Danny From: Jeffrey Bowers Date: Sunday, February 25, 2024 at 1:26 PM To: Danny Silver , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI CAUTION: This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments. Hi Danny, again, this is defining grandmother cells in a narrow way that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg has already addressed some of your points, but let me just briefly comment on the first ? the worry that damage to neurons is problematic for grandmother cells as there needs to be redundancy. This leads you to conclude distributed representations are necessary. But there is nothing about redundancy that is inconsistent with grandmother cells. 
I consider this in detail in Bowers (2009) Psychological Review paper I referred to before, and here is just one brief quote from the paper: ?But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus. If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995) called ?probability summation.? Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it set to the limit, the model effectively learns a localist grandmother cell each time a word or a face is encoded (and instance theory). The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic as researchers would just reject all sorts of models in psychology because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques that address all the standard points people make regarding grandmother cells. Jeff From: Danny Silver Date: Sunday, 25 February 2024 at 03:13 To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Stephen and others ? The encoding of a concept or a symbol associated with a concept using a single neuron (grandmother cell) would be a poor choice both from a representational perspective as well as from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy make sense for an agent that can suffer physical damage. Steve?s position in the email below seems to support this. It also makes sense to encode representation in a distributed fashion for the purposes of new concept consolidation and fine tuning of existing concepts and its variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory). From a functional perspective an intelligent agent ?knows? concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a ?cat? it does so in the context of hearing, seeing, chasing, touching, smelling the animal over a variety of experiences. I suspect this is related to Steve?s clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child if the animal in front of her is a ?cat? her response verbally or in writing is a complex sequence of motor signals that are more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a signal neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. 
My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an emedding space). Danny Silver Get Outlook for iOS ________________________________ From: Connectionists on behalf of Jeffrey Bowers Sent: Saturday, February 24, 2024 5:06 PM To: Grossberg, Stephen ; KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI CAUTION: This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments. I think this is where terminology is confusing things. I agree that ART (and all other neural architectures) is ?far from being a ?grandmother cell??. The question is whether a neural architecture includes grandmother cells ? that is, a unit high in a hierarchy of units that is used to classify objects. On distributed systems there is no such unit at any level of a hierarchy ? it is patterns of activation all the way up. By contrast, on grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use ?one hot encoding? of categories (which they almost always do). So, just as grandmother cells can easy be falsified if you define a grandmother cell that only responds to one category of input, you can falsify a grandmother cells by claiming that it requires only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, that mapped onto more complex cells, that eventually mapped onto singe neurons that code for one category. I?m a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, that I?m not so familiar with ? but I?m thinking that does not include a winner-take-all dynamic). Jeff From: Grossberg, Stephen Date: Saturday, 24 February 2024 at 16:46 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. 
These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where?s Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a ?grandmother cell?! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve From: Jeffrey Bowers Date: Saturday, February 24, 2024 at 4:38?AM To: Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff From: Grossberg, Stephen Date: Friday, 23 February 2024 at 18:12 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic. Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. 
https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf Grossberg, S., Srinivasan, K., and Yazdabakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve From: Connectionists on behalf of Jeffrey Bowers Date: Thursday, February 22, 2024 at 11:11?AM To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. 
Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people?s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person?s judgment is indicative of the state of a high level consumer of orientation information. So again, I?d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn?t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob [Image result for university of durham logo] [signature_2025328812] [signature_824875734] [Image result for durham cvac] Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. [9k=] Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are ?localist? 
From: Jeffrey Bowers
Date: Wednesday, 21 February 2024 at 12:31
To: KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell tuned to a vertical line is partly activated by a line 5 degrees off vertical, that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back:

Bowers, J. S. (2009). On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220.
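For concreteness, here is a toy localist word layer in the spirit of the Interactive Activation example above: one unit per word, each driven by position-specific letter matches. The word list and the weight per matching letter are invented for illustration; the real IA model also has feature- and letter-level units, inhibition, and temporal dynamics.

    # Toy localist word units: each unit represents exactly one word but is partly
    # activated by inputs that share letters with it (illustrative weights only).
    WORDS = ["DOG", "HOG", "CAT", "COG"]

    def word_activations(letters, weight_per_letter=1.0 / 3.0):
        acts = {}
        for word in WORDS:
            matches = sum(1 for w, l in zip(word, letters) if w == l)
            acts[word] = round(weight_per_letter * matches, 2)
        return acts

    print(word_activations("HOG"))
    # {'DOG': 0.67, 'HOG': 1.0, 'CAT': 0.0, 'COG': 0.67}: the DOG unit is partly active
    # for the input HOG, yet it still represents only the word DOG.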
From: Connectionists on behalf of KENTRIDGE, ROBERT W.
Date: Wednesday, 21 February 2024 at 11:48
To: Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "grandmother cells" as defined by Jerry Lettvin - that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols.

cheers
Bob

Professor of Psychology, University of Durham.
Durham PaleoPsychology Group.
Durham Centre for Vision and Visual Cognition.
Durham Centre for Visual Arts and Culture.
Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme.
Department of Psychology, University of Durham, Durham DH1 3LE, UK.
p: +44 191 334 3261 f: +44 191 334 3434

From: Connectionists on behalf of Gary Marcus
Date: Wednesday, 21 February 2024 at 05:49
To: Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Deeply disappointing that someone would try to inject actual empirical evidence into this discussion.

On Feb 20, 2024, at 08:41, Laurent Mertens wrote:

Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol."

What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013)

KR,
Laurent

________________________________
From: Connectionists on behalf of Weng, Juyang
Sent: Monday, February 19, 2024 11:11 PM
To: Michael Arbib; connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Michael,

You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies.

Best regards,
-John

________________________________
From: Michael Arbib
Sent: Monday, February 19, 2024 1:28 PM
To: Weng, Juyang; connectionists at mailman.srv.cs.cmu.edu
Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI

So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?

From: Connectionists On Behalf Of Weng, Juyang
Sent: Monday, February 19, 2024 8:07 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Early history of symbolic and neural network approaches to AI

I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.
From danny.silver at acadiau.ca Sun Feb 25 19:47:48 2024
From: danny.silver at acadiau.ca (Danny Silver)
Date: Mon, 26 Feb 2024 00:47:48 +0000
Subject: Connectionists: Early history of symbolic and neural network approaches to AI
In-Reply-To: 
References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu>
Message-ID: 

Dear Steve .. Thank you for these wonderful responses. I did start reading your Magnum Opus two years ago but was not able to finish it because of other responsibilities at the time. Your words have encouraged me to have another look. And I will most definitely have a deeper look at your Frontiers in Psychology, August 2, 2023 article. From what I have read so far, the following sentences seem to agree very much with our article https://arxiv.org/abs/2304.13626: "More generally, all the learning that is important for a child's understanding and survival requires interactions between multiple brain regions."; ".. a child can learn that specific language phrases and sentences strongly correlate with specific visual objects and events that the child is simultaneously watching a teacher use or perform."; and "Language meaning is thus embodied in the interactions between a language utterance and the perceptual and affective experiences with which it is correlated, .."

I do not mean to suggest that LLMs are the definitive way forward. In fact, I agree that the early LLMs were/are deficient in terms of their focus on word tokens and the statistical relations between tokens. I believe this is what leads you to say the following in the Frontiers in Psychology article: "Perhaps most importantly, and central to the main theme of the current article, ChatGPT does not know the real-world meaning of its predictions." However, I am intrigued by LLMs' ability to process multiple modalities of data, and agree with you that this is key to developing an agent that can "know" things; i.e., develop conceptual representations with semantic relations between such representations that are grounded in interaction with the real world. The most recent LLM-like models, particularly those being embedded in robotics, are starting to also accept and process audio and video data as well as force and tactile sensory data for aspects of perception, decision-making, control, and interaction [see the survey articles https://arxiv.org/pdf/2311.07226.pdf and https://arxiv.org/abs/2311.12320]. This does seem like a significant step forward. And it may very well be possible to do the same or better using ART network architectures versus LLM architectures. I suspect there will be at least a dozen different but significant architectures tried over the next five years as we move more deeply into embodied AI (robotics).

- Danny

From: Grossberg, Stephen
Date: Sunday, February 25, 2024 at 4:39 PM
To: Danny Silver, Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Danny,

I looked at your article to see how you are using the term "concept". Here's a quote from it:

"Definition 1. A concept is an object, a collection of objects, or an abstract idea that can be learned and represented by an intelligent agent. Concepts may range from specific physical objects ('that hockey puck'), to a category of objects ('birds'), to very abstract and semantically complex ideas ('blue', 'top', 'justice', 'try', 'meaning'). More complex concepts can be built out of multiple more primitive concepts ('girl riding a bike', 'writing a technical paper')."

Concepts like "specific physical objects" and "a category of objects" are regularly modeled by available neural network models like Adaptive Resonance Theory. The items that you group together as "very abstract and semantically complex ideas" vary greatly in their abstractness, such as "blue" vs. "top". In all of them, however, you are tacitly assuming an agent that has learned language and its meanings, at least if you literally want to use words to express these concepts.

The following competence is within the capability of current models: Prime the model to store a representation of the color (not the name) "blue" in short-term memory, and then use eye movements to sequentially shift attention to the objects in a scene until it resonates on an object that is blue. Here are a few articles that model how our brains can do that kind of thing:

Huang, T.-R., and Grossberg, S. (2010). Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation. Psychological Review, 117(4), 1080-1112. https://sites.bu.edu/steveg/files/2016/06/HuangGrossberg2010PR.pdf

Grossberg, S., and Huang, T.-R. (2009). ARTSCENE: A neural system for natural scene classification. Journal of Vision, 9(4):6, 1-19. https://jov.arvojournals.org/article.aspx?articleid=2193487

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

A concept like "justice" is hard for most humans to even define. I do not think that it is a good example of something one would expect a model to learn. A concept like "meaning" needs to be operationalized. I have begun to do that in the following recent article. I welcome others who want to more deeply understand how we learn language meanings to further develop or revise this model. This model requires the integration of a lot of previously defined and computationally simulated models. I illustrate that by copying its Abstract below. The model explains how we learn the meaning of phrases and sentences like the following one, which you noted in your email: "girl riding a bike". I call the model ChatSOME, where SOME abbreviates Self-Organizing MEaning, because the model uses the kinds of processes that are needed to replace Generative AI models like ChatGPT, which literally do not know what they are talking about.

Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult-child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full

"This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together.
Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning."

Best,
Steve

From: Danny Silver
Date: Sunday, February 25, 2024 at 1:40 PM
To: Jeffrey Bowers, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Jeff - Thanks for this. I can see how a local encoding of a concept into multiple cells can build some redundancy. But this seems like an inefficient and ineffective use of a finite representational space (and associated energy) for learning the wide variety of concepts in the world. Can you explain the efficacy of a grandmother cell encoding (regardless of whether it is a single cell or multiple cells that do not use a distributed encoding) for a variety of related concepts, such as cats that may vary from small to large in size, vary in colour, and vary in their relation to humans (house cats, barn cats, mountain lions, cheetahs, lions)? Would there be a different grandmother cell (or collection of such cells) encoding each cat type? And if each type of cat is associated with a richer set of modal representations from various regions of the brain encoding features such as shape, colour, smell, and emotion, does a grandmother cell encoding not seem redundant and brittle compared to a distributed representation that summarizes aspects of these modalities? There is also the issue of a fluid topology over this set of concepts that allows a house cat to morph into a barn cat and then into a larger wild animal based on changes in the modal features. And we know this topology changes over time as humans experience more of the world. Initially, when we are children, such topologies are incorrect but of no consequence because our parents know better; later, after learning more about the family of concepts, we fill in the details, changing the topology of the concepts to allow us to survive on our own.

But perhaps the discussion here hinges on the difference between the representation of concepts and the representation of symbols. As we describe in https://arxiv.org/abs/2304.13626, there is a difference between concepts and the symbols that refer to concepts. Concepts are complex and messy, but all animals are able to learn concepts - most importantly, the things they like to eat and the things that can eat them, and the relations between such, are also concepts. Symbols, on the other hand, seem to be learned and used by only a few species on the planet. Symbols capture crude but important aspects of concepts and provide tools by which intelligent agents can communicate, with some difficulty. Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought.

- Danny

From: Jeffrey Bowers
Date: Sunday, February 25, 2024 at 1:26 PM
To: Danny Silver, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Hi Danny, again, this is defining grandmother cells in a narrow way so that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg has already addressed some of your points, but let me just briefly comment on the first: the worry that damage to neurons is problematic for grandmother cells because there needs to be redundancy. This leads you to conclude that distributed representations are necessary. But there is nothing about redundancy that is inconsistent with grandmother cells. I consider this in detail in the Bowers (2009) Psychological Review paper I referred to before, and here is just one brief quote from the paper: "But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus. If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995) called 'probability summation.'" Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it is set to the limit, the model effectively learns a localist grandmother cell each time a word or a face is encoded (an instance theory). The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic as researchers would just reject all sorts of models in psychology because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques that address all the standard points people make regarding grandmother cells.

Jeff
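As a small worked example of the Barlow-style probability summation quoted above (the numbers are illustrative, not taken from the paper): if each of n redundant, equivalent cells independently responds to its preferred stimulus with probability p on a given trial, the probability that at least one of them responds is 1 - (1 - p)^n.

    # Probability that at least one of n redundant cells responds on a trial,
    # given each responds independently with probability p (illustrative values).
    def p_at_least_one(p, n):
        return 1.0 - (1.0 - p) ** n

    for n in (1, 3, 10):
        print(n, round(p_at_least_one(0.7, n), 4))
    # 1 0.7
    # 3 0.973
    # 10 1.0 (actually 0.999994...): a small pool of redundant localist cells
    # already tolerates the failure of any single neuron.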
From: Danny Silver
Date: Sunday, 25 February 2024 at 03:13
To: Jeffrey Bowers, Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff, Stephen and others - The encoding of a concept, or of a symbol associated with a concept, using a single neuron (grandmother cell) would be a poor choice both from a representational perspective and from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy makes sense for an agent that can suffer physical damage. Steve's position in the email below seems to support this. It also makes sense to encode representations in a distributed fashion for the purposes of new concept consolidation and fine-tuning of existing concepts and their variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory). From a functional perspective, an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat" it does so in the context of hearing, seeing, chasing, touching, and smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child if the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that are more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static in nature (e.g., a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g., a series of activations within a more complex temporal-spatial manifold of an embedding space)?

Danny Silver

________________________________
From: Connectionists on behalf of Jeffrey Bowers
Sent: Saturday, February 24, 2024 5:06 PM
To: Grossberg, Stephen; KENTRIDGE, ROBERT W.; Gary Marcus; Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I think this is where terminology is confusing things. I agree that ART (and every other neural architecture) is "far from being a 'grandmother cell'". The question is whether a neural architecture includes grandmother cells - that is, a unit high in a hierarchy of units that is used to classify objects. In distributed systems there is no such unit at any level of a hierarchy; it is patterns of activation all the way up. By contrast, in grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use "one-hot encoding" of categories (which they almost always do). So, just as grandmother cells can easily be falsified if you define a grandmother cell as one that only responds to one category of input, you can falsify grandmother cells by claiming that they require only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, which mapped onto more complex cells, which eventually mapped onto single neurons that code for one category. I'm a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, which I'm not so familiar with, but I'm thinking that it does not include a winner-take-all dynamic).

Jeff
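To make the one-hot point above concrete (the class names and activation values are invented for illustration): a classifier head trained against one-hot targets has one output unit per category, and reading out the single most active unit is a localist, grandmother-cell-style decision, even when the features feeding that unit are fully distributed.

    import numpy as np

    CLASSES = ["grandmother", "dog", "cat"]          # made-up label set

    def one_hot(label):
        target = np.zeros(len(CLASSES))
        target[CLASSES.index(label)] = 1.0
        return target

    logits = np.array([2.1, 0.3, -0.5])              # made-up output activations for one input
    print(one_hot("dog"))                            # [0. 1. 0.]  target used during training
    print(CLASSES[int(np.argmax(logits))])           # 'grandmother': the single winning unit names the category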
From: Grossberg, Stephen
Date: Saturday, 24 February 2024 at 16:46
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff,

Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn them, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where's Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a "grandmother cell"! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

Best again,
Steve

From: Jeffrey Bowers
Date: Saturday, February 24, 2024 at 4:38 AM
To: Grossberg, Stephen, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance).

Jeff

From: Grossberg, Stephen
Date: Friday, 23 February 2024 at 18:12
To: Jeffrey Bowers, KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens
Cc: connectionists at mailman.srv.cs.cmu.edu, Grossberg, Stephen
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Jeff et al.,

The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined.
It is known that there are cells in anterior inferotemporal cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior inferotemporal cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic.

Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf

Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf

Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf

Foley, N.C., Grossberg, S., and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf

Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full

More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so.

Best,
Steve
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From steve at bu.edu Sun Feb 25 22:11:00 2024 From: steve at bu.edu (Grossberg, Stephen) Date: Mon, 26 Feb 2024 03:11:00 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Dear Danny, Thanks for your kind comments about my recent article on how children learn language meanings. I agree that LLMs plus audio and visual information are better than those without them. On the other hand, if one?s goal is incrementally achieve anything like biological INTELLIGENCE, then one really needs self-organizing models that can learn incrementally about changing environments in real time, and can do so quickly without suffering from catastrophic forgetting. In other words, such models solve what I call the stability-plasticity dilemma, as Adaptive Resonance Theory and my other learning models do. Without that, they will not perform well on their own in unexpected environments, which is what one needs in mobile robots that one would like to achieve some degree of autonomy. Moreover, my 1980 thought experiment about how ANY system can AUTONOMOUSLY learn to correct predictive errors in a changing environment that is filled with unexpected events derives ART systems as the unique class of systems that can solve the stability-plasticity dilemma. The phrase Autonomous Adaptive Intelligence summarizes in a simple phrase what my models try to achieve. In this regard, there are design principles from which I derived the language learning part of my 2023 article on learning language meaning. These principles include the proper design of working memories to temporarily store sequences of experienced items or events, and the proper design of the sequence learning mechanisms that are needed to unitize these sequential contingencies, much as we learn words, sentences, skills, and navigational routes?and do so without experiencing catastrophic forgetting. The working memories are designed to enable such sequence learning and stable memory to occur! As a result, all linguistic, spatial, and motor sequences are stored using the same kind of neural network design for working memory. It is a shared UNIVERSAL design for working memories which enables all these kinds of information to be seamlessly combined, as needed, to make decisions, predictions, and actions work well. It is also a design that occurs in one form or another in multiple parts of our brains: a recurrent, shunting, on-center, off-surround network. What turns any such network into a working memory are the modulatory mechanisms that enable sequences to be stored, performed, and reset. 
Remarkably, evolutionary precursors of these modulatory mechanisms already exist in crustacea!

My Magnum Opus https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 explains what these design principles and networks are, and how they work.

Best again,
Steve
Here?s a quote from it: ?Definition 1. A concept is an object, a collection of objects, or an abstract idea that can be learned and represented by an intelligent agent. Concepts may range from specific physical objects (?that hockey puck?), to a category of objects (?birds?), to very abstract and semantically complex ideas (?blue?, ?top?, ?justice?, ?try?, ?meaning?). More complex concepts can be built out of multiple more primitive concepts ?girl riding a bike?, ?writing a technical paper?).? Concepts like ?specific physical objects? and ?a category of objects? are regularly modeled by available neural network models like Adaptive Resonance Theory. The items that you group together that you call ?very abstract and semantically complex ideas? vary greatly in their abstractness, such as ?blue? vs. ?top?. In all of them, however, you are tacitly assuming an agent that has learned language and its meanings, at least if you literally want to use words to express these concepts. The following competence is within the capability of current models: Prime the model to store a representation of the color (not the name) ?blue? in short-term memory, and then use eye movements to sequentially shift attention to the objects in a scene until it resonates on an object that is blue. Here are a few articles that model how our brains can do that kind of thing: Huang, T.-R., and Grossberg, S. (2010). Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation. Psychological Review, 117(4), 1080-1112. https://sites.bu.edu/steveg/files/2016/06/HuangGrossberg2010PR.pdf Grossberg, S., & Huang, T.-R. (2009). ARTSCENE: A neural system for natural scene classification. Journal of Vision, 9(4):6, 1-19, https://jov.arvojournals.org/article.aspx?articleid=2193487 Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf A concept like ?justice? is hard for most humans to even define. I do not think that it is a good example of someone one would expect a model to learn. A concept like ?meaning? needs to be operationalized. I have begun to do that in the following recent article. I welcome others who want to more deeply understand how we learn language meanings to further develop or revise this model. This model requires the integration of a lot of previously defined and computationally simulated models. I illustrate that by copying its Abstract below. The model explains how we learn the meaning of phrases and sentences like the following one, which you noted in your email: ?girl riding a bike?. I call the model ChatSOME, where SOME abbreviates Self-Organizing MEaning, because the model uses the kinds of processes that are needed to replace Generative AI models like ChatGPT, which literally do not know what they are talking about. Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult?child multimodal interactions in real-time. Frontiers in Psychology, August 2, 2023. Section on Cognitive Science, Volume 14. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full ?This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. 
This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning.? Best, Steve From: Danny Silver Date: Sunday, February 25, 2024 at 1:40?PM To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Jeff ? Thanks for this. I can see how a local encoding of a concept into multiple cells can build some redundancy. But this seems like an inefficient and ineffective use of a finite representational space (and associated energy) for learn the wide variety of concepts in the world. Can you explain the efficacy of a grandmother cell encoding (regardless if it is a single cell or multiple cell that do not use a distributed encoding) for a variety of related concepts such as cats that may vary small to large in size, vary in colour, and vary in their relation to humans (house cats, barn cats, mountain lions, cheetas, lions). Would there be a different grandmother cell (or collections of such cells) encoding for each cat type? And if each type of cat is associated with a richer set of modal representation from various regions of the brain encoding features such as shape, colour, smell, emotion, does a grandmother cell encoding not seem redundant and brittle as compared to a distributed representation that summarizes aspects of these modalities. There is also the issue of a fluid topology over this set of concepts that allows a house cat to morphe into a barn cat to morphe into a larger wild animal based on changes in the modal features. And we know this topology changes over time as humans experience more of the world. Initially, when we are children, such topologies are incorrect but of no consequent because our parents know better, later after learning more about the family of concepts we fill in the details changing the topology of the concepts to allow us to survive on our own. But perhaps the discussion here hinges on the difference between the representation of concepts and the representation of symbols. As we describe in https://arxiv.org/abs/2304.13626 there is the difference between concepts and symbols that refer to concepts. Concepts are complex and messy, but all animals are able to learn concepts ? most importantly, the things they like to eat and the things that can eat them and the relations between such are also concepts. Symbols on the other hand seem to be learned and used by only a few species on the planet. Symbols capture crude but important aspects of concepts and provide tools by which intelligent agents can communicate, with some difficulty. 
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought. ? Danny From: Jeffrey Bowers Date: Sunday, February 25, 2024 at 1:26 PM To: Danny Silver , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI CAUTION: This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments. Hi Danny, again, this is defining grandmother cells in a narrow way that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg has already addressed some of your points, but let me just briefly comment on the first ? the worry that damage to neurons is problematic for grandmother cells as there needs to be redundancy. This leads you to conclude distributed representations are necessary. But there is nothing about redundancy that is inconsistent with grandmother cells. I consider this in detail in Bowers (2009) Psychological Review paper I referred to before, and here is just one brief quote from the paper: ?But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus. If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995) called ?probability summation.? Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it set to the limit, the model effectively learns a localist grandmother cell each time a word or a face is encoded (and instance theory). The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic as researchers would just reject all sorts of models in psychology because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques that address all the standard points people make regarding grandmother cells. Jeff From: Danny Silver Date: Sunday, 25 February 2024 at 03:13 To: Jeffrey Bowers , Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Stephen and others ? The encoding of a concept or a symbol associated with a concept using a single neuron (grandmother cell) would be a poor choice both from a representational perspective as well as from a functional perspective for a lifelong learning and reasoning agent. First and foremost, representational redundancy make sense for an agent that can suffer physical damage. Steve?s position in the email below seems to support this. It also makes sense to encode representation in a distributed fashion for the purposes of new concept consolidation and fine tuning of existing concepts and its variants. 
This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using a finite amount of representation (memory). From a functional perspective an intelligent agent "knows" concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual information. When an intelligent agent thinks of a "cat" it does so in the context of hearing, seeing, chasing, touching, smelling the animal over a variety of experiences. I suspect this is related to Steve's clarification of the complexity of what we see happening in the human nervous system when representing a concept. Also note that, when you ask a child if the animal in front of her is a "cat", her response, verbally or in writing, is a complex sequence of motor signals that is more like a song than a single representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a single neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: are such representations likely to be static in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an embedding space)? Danny Silver Get Outlook for iOS ________________________________ From: Connectionists on behalf of Jeffrey Bowers Sent: Saturday, February 24, 2024 5:06 PM To: Grossberg, Stephen ; KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I think this is where terminology is confusing things. I agree that ART (and all other neural architectures) is "far from being a 'grandmother cell'". The question is whether a neural architecture includes grandmother cells - that is, a unit high in a hierarchy of units that is used to classify objects. On distributed systems there is no such unit at any level of a hierarchy - it is patterns of activation all the way up. By contrast, on grandmother cell theories, there is an architecture that does include units that code for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use "one hot encoding" of categories (which they almost always do). So, just as grandmother cells can easily be falsified if you define a grandmother cell as one that only responds to one category of input, you can falsify grandmother cells by claiming that it requires only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, that mapped onto more complex cells, that eventually mapped onto single neurons that code for one category. I'm a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, which I'm not so familiar with - but I'm thinking that does not include a winner-take-all dynamic).
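To make the "winner-take-all plus vigilance" point concrete, here is a deliberately minimal ART-1-style sketch. It is a toy, not Grossberg's full model and not anyone's published code; the choice function, the update rule, and all names and parameters are invented for illustration. Committed category units compete via a choice function, the winner is accepted only if it passes a vigilance test, and otherwise a new unit is committed. With vigilance well below 1 a few broad categories form; with vigilance at its limit of 1 the network commits roughly one localist unit per distinct input pattern, which is the "localist grandmother cell per encoded item" behaviour described earlier in the thread.

```python
# A toy ART-1-style categorizer (a sketch, not Grossberg's full model).
# Inputs are binary tuples; each committed category is a binary prototype vector.

def match(prototype, x):
    """Fraction of the input preserved by the prototype (|prototype AND x| / |x|)."""
    overlap = sum(p & xi for p, xi in zip(prototype, x))
    return overlap / max(1, sum(x))

def choice(prototype, x, alpha=0.001):
    """Choice function used to rank committed categories (winner-take-all search)."""
    overlap = sum(p & xi for p, xi in zip(prototype, x))
    return overlap / (alpha + sum(prototype))

def art_learn(patterns, vigilance):
    categories = []            # list of binary prototypes, one per committed unit
    for x in patterns:
        # Rank committed categories by the choice function and search in order.
        ranked = sorted(range(len(categories)),
                        key=lambda j: choice(categories[j], x), reverse=True)
        for j in ranked:
            if match(categories[j], x) >= vigilance:       # vigilance (resonance) test
                categories[j] = tuple(p & xi for p, xi in zip(categories[j], x))
                break                                       # resonate: fast learning
        else:
            categories.append(tuple(x))                     # commit a new localist unit
    return categories

if __name__ == "__main__":
    data = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 1, 1, 1), (1, 1, 0, 0)]
    print(len(art_learn(data, vigilance=0.5)))   # a few broad categories (2 here)
    print(len(art_learn(data, vigilance=1.0)))   # ~one unit per distinct pattern (4 here)
```

The point of the sketch is only that the top-level unit chosen by the winner-take-all search is a localist code in Bowers' sense, even though its input is a distributed pattern across the feature level.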
Jeff From: Grossberg, Stephen Date: Saturday, 24 February 2024 at 16:46 To: Jeffrey Bowers , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn them, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where's Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. This neural architecture is far from being a "grandmother cell"! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve From: Jeffrey Bowers Date: Saturday, February 24, 2024 at 4:38 AM To: Grossberg, Stephen , KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I agree, the grandmother cell theory is ill-defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff From: Grossberg, Stephen Date: Friday, 23 February 2024 at 18:12 To: Jeffrey Bowers , KENTRIDGE, ROBERT W.
, Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu , Grossberg, Stephen Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic. Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve From: Connectionists on behalf of Jeffrey Bowers Date: Thursday, February 22, 2024 at 11:11 AM To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation.
For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has been hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell - there may be other images that drive the neuron even more, we just don't know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions - when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between "real" neural coding and the ins and outs of representation in ANNs. I'm really pleased to be able to make a few contributions to a list which I've lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code, where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people's confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person's judgment is indicative of the state of a high-level consumer of orientation information.
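As a side illustration of the population-vector readout described just above (the vector sum of preferred orientations weighted by activity), here is a small Python sketch. The tuning curve, its width, and the angle-doubling step (needed because orientation repeats every 180 degrees) are my illustrative choices, not details from the message or from Georgopoulos; the point is only that a consumer area reading this sum can recover orientations much finer than the 5-degree spacing of the individual cells.

```python
# A sketch of a population-vector readout for orientation (180-degree periodic),
# using the standard angle-doubling trick so that, e.g., 179 deg and 1 deg average
# to 0 deg rather than 90 deg. The tuning curve and parameters are illustrative.
import math

def tuning(preferred_deg, stimulus_deg, kappa=4.0):
    """Toy orientation tuning curve: response peaks when stimulus matches preference."""
    delta = math.radians(2 * (stimulus_deg - preferred_deg))
    return math.exp(kappa * (math.cos(delta) - 1.0))

def decode(preferred_deg, responses):
    """Vector sum of preferences weighted by activity, then halve the angle back."""
    x = sum(r * math.cos(math.radians(2 * p)) for p, r in zip(preferred_deg, responses))
    y = sum(r * math.sin(math.radians(2 * p)) for p, r in zip(preferred_deg, responses))
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0

if __name__ == "__main__":
    prefs = list(range(0, 180, 5))              # cells tuned in 5-degree steps
    stim = 37.3                                 # finer than the 5-degree spacing
    rates = [tuning(p, stim) for p in prefs]
    print(round(decode(prefs, rates), 2))       # close to 37.3 despite coarse tuning
```

Adding independent noise to the rates would degrade the estimate gracefully, which is the usual argument for reading the population rather than any single labelled line.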
So again, I'd err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn't our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W. , Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell tuned to a vertical line is partly activated by a line 5 degrees off vertical - that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus , Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "grandmother cells" as defined by Jerry Lettvin - that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active.
The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow, Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. On Feb 20, 2024, at 08:41, Laurent Mertens wrote: Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists on behalf of Weng, Juyang Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib ; connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang ; connectionists at mailman.srv.cs.cmu.edu Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?
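Returning briefly to the localist "DOG detector" from the Interactive Activation discussion earlier in the thread, the toy sketch below shows the overlap computation by which a single hand-wired word unit is partially activated by orthographic neighbours such as HOG while still representing only DOG. It is not the IA model itself (no inhibition, no temporal dynamics, and the lexicon is invented); in the full IA model even CAT would produce some activation via shared letter features, whereas at the whole-letter level used here it does not.

```python
# A toy, hand-wired localist word layer in the spirit of the Interactive Activation
# example discussed above: one unit per word, driven by position-specific letters.
# This is not the IA model itself -- just the overlap computation.

LEXICON = ["DOG", "HOG", "CAT", "COG", "LOG"]

def word_unit_activation(word, stimulus):
    """Bottom-up input to a localist word unit: fraction of matching letters by position."""
    return sum(w == s for w, s in zip(word, stimulus)) / len(word)

def present(stimulus):
    return {w: round(word_unit_activation(w, stimulus), 2) for w in LEXICON}

if __name__ == "__main__":
    for stim in ("DOG", "HOG", "CAT"):
        acts = present(stim)
        winner = max(acts, key=acts.get)
        print(stim, acts, "->", winner)
    # The DOG unit is fully active for DOG, 2/3 active for HOG, and 0 for CAT in
    # this toy: partial activation by neighbours, yet the unit still *represents* DOG.
```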
From: Connectionists On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. From dwang at cse.ohio-state.edu Sun Feb 25 15:09:50 2024 From: dwang at cse.ohio-state.edu (Wang, Deliang) Date: Sun, 25 Feb 2024 20:09:50 +0000 Subject: Connectionists: NEURAL NETWORKS, March 2024 Message-ID: Neural Networks - Volume 171, March 2024 https://www.journals.elsevier.com/neural-networks MSEDNet: Multi-scale fusion and edge-supervised network for RGB-T salient object detection Daogang Peng, Weiyi Zhou, Junzhen Pan, Danhao Wang Energy controls wave propagation in a neural network with spatial stimuli Yitong Guo, Mi Lv, Chunni Wang, Jun Ma Towards performance-maximizing neural network pruning via global channel attention Yingchun Wang, Song Guo, Jingcai Guo, Jie Zhang, ... Yuanhong Zhang Uncovering a stability signature of brain dynamics associated with meditation experience using massive time-series feature extraction Neil W Bailey, Ben D. Fulcher, Bridget Caldwell, Aron T Hill, ... Paul B Fitzgerald A deep learning model for the detection of various dementia and MCI pathologies based on resting-state electroencephalography data: A retrospective multicentre study Yusuke Watanabe, Yuki Miyazaki, Masahiro Hata, Ryohei Fukuma, ... Takufumi Yanagisawa Top-down generation of low-resolution representations improves visual perception and imagination Zedong Bi, Haoran Li, Liang Tian Document-level Relation Extraction with Relation Correlations Ridong Han, Tao Peng, Benyou Wang, Lu Liu, ... Xiang Wan Dominating Set Model Aggregation for communication-efficient decentralized deep learning Fateme Fotouhi, Aditya Balu, Zhanhong Jiang, Yasaman Esfandiari, ...
Soumik Sarkar Decentralized policy learning with partial observation and mechanical constraints for multiperson modeling Keisuke Fujii, Naoya Takeishi, Yoshinobu Kawahara, Kazuya Takeda Distributed deep reinforcement learning based on bi-objective framework for multi-robot formation Jinming Li, Qingshan Liu, Guoyi Chi Adaptive-weighted deep multi-view clustering with uniform scale representation Rui Chen, Yongqiang Tang, Wensheng Zhang, Wenlong Feng Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond Ronghui Mu, Leandro Marcolino, Qiang Ni, Wenjie Ruan Position-based anchor optimization for point supervised dense nuclei detection Jieru Yao, Longfei Han, Guangyu Guo, Zhaohui Zheng, ... Junwei Han Bridging the gap with grad: Integrating active learning into semi-supervised domain generalization Jingwei Li, Yuan Li, Jie Tan, Chengbao Liu Generalized robust loss functions for machine learning Saiji Fu, Xiaoxiao Wang, Jingjing Tang, Shulin Lan, Yingjie Tian Beyond multilayer perceptrons: Investigating complex topologies in neural networks Tommaso Boccato, Matteo Ferrante, Andrea Duggento, Nicola Toschi Optimizing dense feed-forward neural networks Luis Balderas, Miguel Lastra, José M. Benítez Joint estimation of pose, depth, and optical flow with a competition-cooperation transformer network Xiaochen Liu, Tao Zhang, Mingming Liu Large-Scale Cross-Modal Hashing with Unified Learning and Multi-Object Regional Correlation Reasoning Bo Li, Zhixin Li Black-box attacks on dynamic graphs via adversarial topology perturbations Haicheng Tao, Jie Cao, Lei Chen, Hongliang Sun, ... Xingquan Zhu Improving domain generalization by hybrid domain attention and localized maximum sensitivity Wing W.Y. Ng, Qin Zhang, Cankun Zhong, Jianjun Zhang On the compression of neural networks using ℓ0-norm regularization and weight pruning Felipe Dennis de Resende Oliveira, Eduardo Luiz Ortiz Batista, Rui Seara Training multi-source domain adaptation network by mutual information estimation and minimization Lisheng Wen, Sentao Chen, Mengying Xie, Cheng Liu, Lin Zheng Meta-structure-based graph attention networks Jin Li, Qingyu Sun, Feng Zhang, Beining Yang Gradient-aware learning for joint biases: Label noise and class imbalance Shichuan Zhang, Chenglu Zhu, Honglin Li, Jiatong Cai, Lin Yang ACVAE: A novel self-adversarial variational auto-encoder combined with contrast learning for time series anomaly detection Xiaoxia Zhang, Shang Shi, HaiChao Sun, Degang Chen, ... Kesheng Wu SIA: A sustainable inference attack framework in split learning Fangchao Yu, Lina Wang, Bo Zeng, Kai Zhao, ... Zhi Pang Dual-domain strip attention for image restoration Yuning Cui, Alois Knoll Deep Isotonic Embedding Network: A flexible Monotonic Neural Network Jiachi Zhao, Hongwen Zhang, Yue Wang, Yiteng Zhai, Yao Yang Modular hierarchical reinforcement learning for multi-destination navigation in hybrid crowds Wen Ou, Biao Luo, Bingchuan Wang, Yuqian Zhao Hierarchical attention-guided multiscale aggregation network for infrared small target detection Shunshun Zhong, Haibo Zhou, Zhongxu Zheng, Zhu Ma, ...
Ji'an Duan Asynchronous adaptive event-triggered fault detection for delayed Markov jump neural networks: A delay-variation-dependent approach Wen-Juan Lin, Qingzhi Wang, Guoqiang Tan Distributed time-varying optimization control protocol for multi-agent systems via finite-time consensus approach Haojin Li, Xiaofeng Yue, Sitian Qin A collective neurodynamic penalty approach to nonconvex distributed constrained optimization Wenwen Jia, Tingwen Huang, Sitian Qin On a framework of data assimilation for hyperparameter estimation of spiking neuronal networks Wenyong Zhang, Boyu Chen, Jianfeng Feng, Wenlian Lu Self-supervised Learning for DNA sequences with circular dilated convolutional networks Lei Cheng, Tong Yu, Ruslan Khalitov, Zhirong Yang Memristor-induced hyperchaos, multiscroll and extreme multistability in fractional-order HNN: Image encryption and FPGA implementation Xinxin Kong, Fei Yu, Wei Yao, Shuo Cai, ... Hairong Lin GT-LSTM: A spatio-temporal ensemble network for traffic flow prediction Yong Luo, Jianying Zheng, Xiang Wang, Yanyun Tao, Xingxing Jiang Bio-inspired affordance learning for 6-DoF robotic grasping: A transformer-based global feature encoding approach Zhenjie Zhao, Hang Yu, Hang Wu, Xuebo Zhang Blockwise compression of transformer-based models without retraining Gaochen Dong, W. Chen An Online Support Vector Machine Algorithm for Dynamic Social Network Monitoring Arya Karami, Seyed Taghi Akhavan Niaki From mjhealy at unm.edu Sun Feb 25 19:27:14 2024 From: mjhealy at unm.edu (Michael Healy) Date: Mon, 26 Feb 2024 00:27:14 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: Hi, Jeff, Tom Caudell and I thought our 2018 paper Michael J. Healy, Thomas P. Caudell (2018). Local and distributed concept representation via colimits: An example. Neurocomputing, 314, pp. 284-298. did a fair job of formalizing this entire argument. We showed with an argument based upon category theory that concept representations are both local and distributed. We showed that concepts can be calculated as colimits of diagrams in a category of theories. Each concept is mapped to a "node" in a neural network, and the arrows in the diagram (theory morphisms) are represented by connections. The colimit is a cocone over the diagram as explained in the paper. The neural structure exists in a separate, "neural" category representing mathematically the structure of a neural network. A node can consist of a single neuron or a "clique" of interconnected cells. The mapping of theories and morphisms to a neural category, the colimits, and other structures are described more fully in some of our other papers, some of which are Michael J. Healy, Thomas P. Caudell (2019). Episodic memory: A hierarchy of spatiotemporal concepts. Neural Networks, 120, pp. 40-57. Michael J. Healy, Richard D. Olinger, Robert J. Young, Shawn E. Taylor, Thomas Caudell, and Kurt W. Larson (2009). Applying category theory to improve the performance of a neural architecture. Neurocomputing, 72, pp. 3158-3173. Michael J. Healy, Thomas P. Caudell (2006). Ontologies and worlds in category theory: Implications for neural systems. Axiomathes, 16, pp. 165-214.
Respectfully, Mike
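For readers who want a concrete, if much simplified, feel for the colimit construction Healy and Caudell describe, the sketch below works in the category of sets rather than their category of theories: two feature-set "theories" that share a common sub-theory are glued along it (a pushout, one of the simplest colimits). Everything here - the feature names, the gluing function, the interpretation - is an invented analogy for the shape of the construction, not their formalism; in their architecture each such glued object would correspond to a node or clique of neurons.

```python
# A small illustration, in the category of sets, of the "concept as colimit" idea:
# two theories sharing a common sub-theory are glued along it (a pushout).
# This is only an analogy for the construction Healy & Caudell carry out in a
# category of theories and theory morphisms; names and features are invented.

def pushout(shared, theory_a, theory_b):
    """Glue theory_a and theory_b along the shared sub-theory: disjoint union
    with the two copies of each shared element identified (inclusions implicit)."""
    only_a = {("A", f) for f in theory_a - shared}
    only_b = {("B", f) for f in theory_b - shared}
    glued = {("S", f) for f in shared} | only_a | only_b
    return glued

if __name__ == "__main__":
    shared = {"has_fur", "four_legs"}                      # common sub-theory
    cat_theory = shared | {"retractable_claws", "meows"}
    dog_theory = shared | {"barks", "fetches"}
    composite = pushout(shared, cat_theory, dog_theory)
    for item in sorted(composite):
        print(item)
```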
From mpavone at dmi.unict.it Mon Feb 26 02:18:26 2024 From: mpavone at dmi.unict.it (Mario Pavone) Date: Mon, 26 Feb 2024 08:18:26 +0100 Subject: Connectionists: MESS 2024 ~ autoDL meets HPC ~ 15-18 July 2024, Catania, Italy Message-ID: <20240226081826.Horde.lisdDuph4B9l3DtCTGmwYuA@webmail.dmi.unict.it> Call for Participation (apologies for multiple copies) ----------------------------------------------------------------------- MESS 2024 - Metaheuristics Summer School - Automated Deep Learning meets High-Performance Computing - 15-18 July 2024, Catania, Italy https://www.ANTs-lab.it/mess2024/ mess.school at ANTs-lab.it https://www.facebook.com/groups/MetaheuristicsSchool/ ----------------------------------------------------------------------- ** APPLICATION DEADLINE: 16th March 2024 ** MESS 2024 is aimed at qualified and strongly motivated MSc and PhD students, post-docs, young researchers, and both academic and industrial professionals, and provides them with an overview of metaheuristics techniques and an in-depth analysis of the state of the art. It is a full-immersion four-day course in the beautiful city of Catania whose goal is to offer all participants a stimulating environment to study and learn advanced concepts and recent research in the fields of Metaheuristics, Optimization, Machine Learning and Artificial Intelligence in general. The main theme of this edition is "Automated Deep Learning Meets High-Performance Computing", that is, how these research areas may interact and affect each other in order to develop reliable and robust solving methodologies for Big Data analysis and data-driven problems. Participants will be delivered a certificate of attendance indicating the number of hours of lectures (36-40 hours of lectures). According to the academic system, all PhD and Master's students attending the summer school may get 8 ECTS points. ** LECTURERS + Thomas Bartz-Beielstein, TH Koln, Germany Lecture#1: Introduction to Surrogate-Based Hyperparameter Tuning in Classical Machine Learning, Online Machine Learning, and Deep Learning Lecture#2: Exploring Hyperparameter-Tuning Applications: Optimizing scikit-learn, River, and PyTorch with SPOT and Other Tuning Techniques + Aaron Klein, AWS Research Berlin, Germany Lecture#1: Introduction into AutoML: Hyperparameter Optimization Lecture#2: Introduction into AutoML: Neural Architecture Search + Eric Medvet, University of Trieste, Italy Lecture#1: Metaheuristics for policy search in continuous control Lecture#2: Beyond policy search: optimizing body and brain of simulated robotic agents + El-Ghazali Talbi, University of Lille1, France Lecture#1: TBA Lecture#2: TBA ~ More Lecturers will be announced soon ~ ** METAHEURISTICS COMPETITION All participants will be involved in the "Metaheuristics Competition Race", where each of them, individually or divided into working groups (no more than 3 people), will develop a metaheuristic solution for a given problem presented during the school. The top three of the competition ranking will receive the MESS 2024 award and will be involved in the writing of a manuscript dedicated to the competition that, afterwards, will be submitted to an international journal for possible publication.
** METAHEURISTICS COMPETITION All participants will be involved in the "Metaheuristics Competition Race", where each of them, individually or in working groups (of no more than 3 people), will develop a metaheuristic solution for a given problem presented during the school. The top three in the competition ranking will receive the MESS 2024 award and will be involved in writing a manuscript dedicated to the competition, which will afterwards be submitted to an international journal for possible publication. In addition, the remaining best ten in the ranking (excluding the top three, of course) will be invited to report their work in a manuscript which will be published in the special MESS 2024 Volume of the AIRO Springer Series. ** SHORT ORAL PRESENTATION All participants may submit an abstract of their recent results, or work in progress, for presentation, with the opportunity to debate and interact with leaders in the field. The Workshop Organizers and Scientific Committee will review the abstracts and will recommend the format of the presentation (oral or poster). All abstracts will be published in the electronic hand-out book of the summer school. Abstracts must be submitted with the application by *March 16, 2024*. ** MESS 2024 DIRECTORS: - Salvatore Greco, University of Catania, Italy - Mario Pavone, University of Catania, Italy - Andrea Schaerf, University of Udine, Italy - El-Ghazali Talbi, University of Lille1, France - Daniele Vigo, University of Bologna, Italy ** METAHEURISTICS COMPETITION CHAIRS: - Raffaele Cerulli, University of Salerno, Italy - Sara Ceschia, University of Udine, Italy - Carmine Cerrone, University of Genoa, Italy - Andrea Schaerf, University of Udine, Italy ** MORE INFORMATION: https://www.ANTs-lab.it/mess2024/ -- mess.school at ANTs-lab.it Facebook Group: https://www.facebook.com/groups/MetaheuristicsSchool/ -- Mario F. Pavone, PhD Associate Professor Dept of Mathematics and Computer Science University of Catania V.le A. Doria 6 - 95125 Catania, Italy --------------------------------------------- tel: +39 095 7383034 mobile: +39 3384342147 Email: mpavone at dmi.unict.it http://www.dmi.unict.it/mpavone/ FB: https://www.facebook.com/mfpavone Skype: mpavone ========================================================= MESS 2024 - Metaheuristics Summer School 15-18 July 2024, Catania, Italy mess.school at ANTs-lab.it https://www.ANTs-lab.it/mess2024/ https://www.facebook.com/groups/MetaheuristicsSchool/ ========================================================= From yashar.deldjoo at poliba.it Mon Feb 26 02:47:56 2024 From: yashar.deldjoo at poliba.it (Yashar Deldjoo) Date: Mon, 26 Feb 2024 07:47:56 +0000 Subject: Connectionists: [2nd reminder] 3rd European Workshop on Algorithmic Fairness (EWAF'24) Message-ID: 3rd European Workshop on Algorithmic Fairness (EWAF'24) Where: Mainz, Germany When: July 1 to 3, 2024 Call for Papers: https://2024.ewaf.org/call-for-papers Submission Deadline: March 15, 2024 Types of submissions: Both short submissions (< 4 pages) and full papers (10-15 pages) are accepted. EWAF welcomes both theoretical work from all disciplines as well as applied research, especially interdisciplinary research on algorithmic fairness in the European context. What's EWAF? EWAF's goal is to foster the dialogue between researchers working on algorithmic fairness in the context of Europe's legal and societal framework, especially in light of the EU's attempts to promote ethical AI. EWAF's relaxed atmosphere makes it a great place to connect with researchers working on similar topics. Connect with us on social media for future updates: European Association for Algorithmic Fairness on LinkedIn @EWAFWorkshop on Twitter @EWAFWorkshop on Mastodon Kind Regards, Yashar Dr.
Yashar Deldjoo Assistant Professor, SisInf Lab, DEI, Polytechnic University of Bari, Italy Skype: yashar.deldjoo | Address: SisInf Lab, Department of Electrical Engineering and Information Technology, Polytechnic University of Bari, Via Orabona, 4, 70125 Bari, Italy University homepage: http://sisinflab.poliba.it/deldjoo Personal homepage: https://yasdel.github.io/ Privacy Information - This message, under Regulation (UE) 2016/679, may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From efipatm at gmail.com Mon Feb 26 03:36:55 2024 From: efipatm at gmail.com (Efi) Date: Mon, 26 Feb 2024 10:36:55 +0200 Subject: Connectionists: CALL for Spring 2024 International AI Doctoral Academy (AIDA) course proposals Message-ID: Dear distinguished AI Professor or Senior Scientist (PhD holder), the International AI Doctoral Academy (AIDA) aims to become a reference initiative for AI education in Europe and to implement mechanisms for inter-university sharing of AI educational assets at PhD level. As part of the program, AIDA offers a series of courses on Artificial Intelligence and related fields given by distinguished lecturers. We invite you to contribute to the definition of the AIDA Spring 2024 semester course offer program. You can participate in the AIDA program by offering new or opening up existing courses targeted to AIDA students* or other students worldwide, according to the participation terms of your own organization. Short/Very Short course formats (5-16 hours) are typically encouraged, while semester courses or asynchronous web courses will be considered as well. Accepted courses will be listed on the AIDA website (https://www.i-aida.org/) and will be advertised for free with the support of AIDA dissemination channels. Proposals for courses for the AIDA Spring 2024 semester program should be sent to Prof. Stefano Berretti (stefano.berretti at unifi.it) and Ms. Efi Patmanidou (epatman at csd.auth.gr) no later than *February 29th, 2024* and will be evaluated by the AIDA Committee. Proposals for AIDA Courses should include (please find the AIDA Course Application Form at https://www.i-aida.org/phd-studies/short-courses/): *Course title:* *Lecturer name & affiliation:* *Host Institution:* *Content and organization:* Course details about the content. 1 paragraph *Level:* (Undergraduate/Postgraduate) *Course Duration:* Hours *Course Type:* (Short/Semester/Lecture series/Seasonal School) *Participation terms:* *Registration fee policy for external (non-AIDA) students*, e.g., free of charge or fee of ??? Euros. *Special terms for AIDA students*, e.g., discount 50% or free for X number of AIDA students.
*Add text for a) non-AIDA student registration and AIDA student registration/enrollment, as in (6) below.* *Lectures plan: *Days/time *Proposed schedule: *Start date ? End date *Language: *English/Other *Modality: *Online/in person *Notes: *Are there exams? Details on how to successfully complete the course. 1-2 sentences. *Course Link:* *AIDA students may be PhD students, post-doc researchers, possibly qualified MSc students or professionals of AIDA Members Best regards, Stefano Berretti AIDA course planning _______________________________ Prof. Stefano Berretti, Ph.D. Media Integration and Communication Center (MICC) & Department of Information Engineering (DINFO) University of Florence via S. Marta 3 - 50139, Florence - ITALY email: stefano.berretti 'at' *unifi.it * phone: +39 055 275852 -------------- next part -------------- An HTML attachment was scrubbed... URL: From efipatm at gmail.com Mon Feb 26 03:28:10 2024 From: efipatm at gmail.com (Efi) Date: Mon, 26 Feb 2024 10:28:10 +0200 Subject: Connectionists: Live free 'AIDA AI Excellence e-Lecture' by Dr. Josef Sivic: "Learning manipulation skills from instructional videos", 27th February 2024, 17:00 CET Message-ID: Dear AI scientist/engineer/student/enthusiast, Dr. Josef Sivic (Czech Technical University and ELLIS Unit Prague) will deliver the e-lecture: "*Learning manipulation skills from instructional videos*" on *27th February, 17:00 CET*. *See details in*: https://www.i-aida.org/events/learning-manipulation-skills-from-instructional-videos/ *Location*: The seminar will be delivered online via *zoom*: https://authgr.zoom.us/j/99109051052?pwd=elAvbVNqaTY0Y0c2ZE9NWG5Walc5QT09 Meeting ID: 991 0905 1052 Passcode: 405011 The *International AI Doctoral Academy (AIDA) *, a joint initiative of the European R&D projects AI4Media , ELISE , Humane AI Net , TAILOR , VISION , is very pleased to offer you top quality scientific lectures in the framework of AIDA AI Excellence Lecture Series on several current hot AI topics. Lectures will be offered alternatingly by: - Top highly-cited senior AI scientists internationally or - Young AI scientists with promise of excellence (AI sprint lectures) These lectures are disseminated through multiple channels and email lists (we apologize if you received it through various channels). If you want to stay informed on future lectures, you can register in the email lists AIDA email list and CVML email list . Best regards Profs. N. Sebe, M. Chetouani, P. Flach, B. O?Sullivan, I. Pitas, J. Stefanowski AIDA AI Excellence Lecture Series committee members -------------- next part -------------- An HTML attachment was scrubbed... URL: From gadia at di.unimi.it Mon Feb 26 03:33:25 2024 From: gadia at di.unimi.it (gadia at di.unimi.it) Date: Mon, 26 Feb 2024 08:33:25 +0000 Subject: Connectionists: [IEEE CoG 2024] Full paper deadline approaching - March 1, 2024 Message-ID: IEEE CoG 2024: The 2024 IEEE Conference on Games August 5-8, 2024 Milan, Italy https://2024.ieee-cog.org/ Dear Friends and Colleagues, Please be reminded that the full paper submission deadline to IEEE CoG 2024 is March 1, 2024. You can find the full Call for Papers with the submission instructions at https://2024.ieee-cog.org/call-proposals/ We look forward to receiving your submissions in the coming days. 
*** Important Dates *** --> Full papers submission: March 1, 2024 <--- Full papers acceptance notification: April 15, 2024 Auxiliary papers submission: April 28, 2024 Auxiliary papers acceptance notification: May 28, 2024 Papers camera ready: June 7, 2024 Early bird and (author) registration: June 7, 2024 We look forward to having you join us at CoG 2024 in Milan. Best regards, The Organizing Committee of IEEE CoG 2024 --------------------------------------------------------------- Davide Gadia, PhD Associate Professor Dipartimento di Informatica "Giovanni Degli Antoni" Università degli Studi di Milano Via Celoria 18 20133 Milano Room 3005 - Floor 3 Tel. +39 02 5031 6301 Homepage: https://pong.di.unimi.it/davide/ PONG laboratory: https://pong.di.unimi.it/ --------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhammer at techfak.uni-bielefeld.de Mon Feb 26 06:01:46 2024 From: bhammer at techfak.uni-bielefeld.de (Barbara Hammer) Date: Mon, 26 Feb 2024 12:01:46 +0100 Subject: Connectionists: Call for fellowships Message-ID: Dear colleagues, The following opportunity might be of interest for short-term visits to beautiful Ostwestphalia: We are looking for postdocs and advanced researchers who want to continue and expand their research in line with the SAIL research agenda at Universität Bielefeld or Universität Paderborn for a short fellowship (up to 3 months). The SAIL fellowship program is suitable for excellent international researchers from all disciplinary backgrounds at postdoc level. The aim of the program is to enrich the research carried out in SAIL while supporting research ties with relevant communities and establishing long-term collaborations with institutes across the globe. The fellowships are intended to strengthen the innovation potential of researchers with expertise in the field of AI through further training and interdisciplinary collaboration within our research network. See the current Calls for Fellowships with all the details about the application process here: https://www.sail.nrw/research/call-for-international-fellows/ Best wishes Barbara -- Prof. Dr. Barbara Hammer Machine Learning Group, CITEC Bielefeld University D-33594 Bielefeld Phone: +49 521 / 106 12115 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wanling.cai at tcd.ie Mon Feb 26 10:11:00 2024 From: wanling.cai at tcd.ie (Wanling Cai) Date: Mon, 26 Feb 2024 15:11:00 +0000 Subject: Connectionists: [ACM IUI 2024] Online Registration until 7th March Message-ID: * We apologize if you receive multiple copies of this email * * Online version: https://iui.acm.org/2024/registration.html ACM IUI 2024 - Online Registration until 7th March 29th International ACM Conference on Intelligent User Interfaces (ACM IUI 2024) Greenville, South Carolina, USA, March 18-21, 2024 https://iui.acm.org/2024/ https://iui.acm.org/2024/index.html ACM Conference on Intelligent User Interfaces (ACM IUI) 2024 is the 29th annual edition of the premier venue where researchers and practitioners will meet and discuss state-of-the-art advances at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI). Ideal IUI submissions should address practical HCI challenges using machine intelligence and discuss both computational and human-centric aspects of such methodologies, techniques, and systems.
The theme for this year will be Resilience, covering a wide variety of topics, such as COVID-19 recovery, organizational cyber resilience, economic growth and stability, climate change recovery, intelligent user interface resilience, and similar. While we encourage submissions related to this theme, the scope of the conference is not limited to the theme only. Thanks for your interest in IUI 2024! Please read the information link below carefully regarding costs and deadlines. * Registration information: https://iui.acm.org/2024/registration.html - Early Registration until Feb 12, 2024 (passed) - Standard Registration until Mar 7, 2024 - Onsite Registration after Mar 7, 2024 * Registration Link: https://web.cvent.com/event/9865c85a-e052-4e7e-9947-cf19114283eb/summary Registration for the IUI 2024 full conference includes access to all sessions and workshops. Payment and registration is being handled through a 3rd party provider. If you have questions, please email the registration chair at registration2024 at iui.acm.org. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius.bilasco at univ-lille.fr Mon Feb 26 09:34:53 2024 From: marius.bilasco at univ-lille.fr (Ioan Marius Bilasco) Date: Mon, 26 Feb 2024 15:34:53 +0100 Subject: Connectionists: CFP "UHBER: Multimodal Data Analysis for Understanding of Human Behaviour, Emotions and their Reasons" - Special Session @ CBMI 2024 Message-ID: <0d45f6a4-33e4-4c3e-a7cb-b58aa37449c0@univ-lille.fr> Dear colleagues, This special session addresses the processing of all types of data related to understanding of human behaviour, emotion, and their reasons, such as current or past context. Understanding human behaviour and context may be beneficial for many services both online and in physical spaces. For example detecting lack of skills, confusion or other negative states may help to adapt online learning programmes, to detect a bottleneck in the production line, to recognise poor workplace culture etc., or maybe to detect a dangerous spot on a road before any accident happens there. Detection of unusual behaviour may help to improve security of travellers and safety of dementia sufferers and visually/audio impaired individuals, for example, to help them stay away from potentially dangerous strangers, e.g., drunk people or football fans forming in a big crowd. In the context of multimedia retrieval, understanding human behaviour and emotions could help not only for multimedia indexing, but also to derive implicit (i.e., other than intentionally reported) human feedback regarding multimedia news, videos, advertisements, navigators, hotels, shopping items etc. and improve multimedia retrieval. Humans are good at understanding other humans, their emotions and reasons. For example, when looking at people engaged in different activities (sport, driving, working on a computer, working on a construction site, using public transport etc.), a human observer can understand whether a person is engaged in the task or distracted, stopped the recommended video because the video was not interesting, or because the person quickly found what he needed in the beginning of the video. After observing another human for some time, humans can also learn the observed individuals? tastes, skills and personality traits. Hence the interest of this session is, how to improve AI understanding of the same aspects? 
The topics include (but are not limited to) the following: * Use of various sensors for monitoring and understanding human behaviour, emotion / mental state / cognition, and context: video, audio, infrared, wearables, virtual (e.g., mobile device usage, computer usage) sensors, etc. * Methods for information fusion, including information from various heterogeneous sources. * Methods to learn human traits and preferences from long-term observations. * Methods to detect human implicit feedback from past and current observations. * Methods to assess task performance: skills, emotions, confusion, engagement in the task and/or context. * Methods to detect potential security and safety threats and risks. * Methods to adapt behavioural and emotional models to different end users and contexts without collecting a lot of labels from each user and/or for each context: transfer learning, semi-supervised learning, anomaly detection, one-shot learning, etc. * How to collect data for training AI methods from various sources, e.g., internet, open data, field pilots, etc. * Use of behavioural or emotional data to model humans and adapt services either online or in physical spaces. * Ethics and privacy issues in modelling human emotions, behaviour, context and reasons. *Organisers of this special session are:* * Elena Vildjiounaite, Johanna Kallio, Sari Järvinen, and Satu-Marja Mäkela, VTT Technical Research Centre of Finland, Finland. * Benjamin Allaert, IMT Nord Europe, France. * Ioan Marius Bilasco, University of Lille, France. * Franziska Schmalfuss, IAV GmbH, Germany. Please direct correspondence to uhber at cbmi2024.org *Paper submission:* 6 pages + 1 page of references *Deadline:* 22 March 2024 https://cbmi2024.org/?page_id=94#submissions Best regards -- Ioan Marius BILASCO https://pro.univ-lille.fr/marius-bilasco/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rava at ens.fr Mon Feb 26 07:04:14 2024 From: rava at ens.fr (Rava A. da Silveira) Date: Mon, 26 Feb 2024 13:04:14 +0100 Subject: Connectionists: CNeuro2024 summer school in computational neuroscience, Beijing, July 2024 Message-ID: *CNeuro2024* *Theoretical and Computational Neuroscience Summer School* *8 to 15 July 2024 - In person in Beijing, China* *This year: "Neural Coding, Learning, and Decision Making"* *www.cneuro.net* *About CNeuro* How intelligence and behavior emerge from complex and intricate interactions within the brain remains a deep and unsolved mystery, central to an exciting area of interdisciplinary research. The past decade has seen rapid progress in experimental tools that now make it possible to monitor and manipulate brain circuits in unprecedented detail. This evolution presents challenges and opportunities for both experimentalists and theorists. Mathematical theory is instrumental in the emergence of theoretical insights and frameworks that can help guide experimental work and identify unifying principles of brain function. The aim of the one-week summer school will be to introduce students with a strong quantitative background (in mathematics, physics, computer science, and engineering) to the emerging field of theoretical and computational neuroscience. The distinguishing feature of CNeuro is the emphasis it places on the role of systematic mathematical theory for understanding the brain, in part by stressing the connections between neuroscience, statistics, machine learning, and artificial intelligence.
*Topic and format* This year's school will explore various facets of neural coding, learning, and decision making, from circuit neuroscience to cognition to artificial intelligence. The course will bring together leading scientists in the field, who will deliver both introductory and advanced lectures and take part in small-group discussions. Other activities will include homework sessions and student presentations, as well as the now popular "Life and Science" panel series, in which faculty and students can share their personal experiences and views on research and careers. *Faculty* Dmitri (Mitya) Chklovskii (Flatiron Institute) Nathaniel Daw (Princeton University) Pulin Gong (University of Sydney) Vincent Hakim (Ecole Normale Supérieure) Ole Jensen (University of Birmingham) Sukbin Lim (New York University Shanghai) Rubén Moreno Bote (Universitat Pompeu Fabra) Naoshige Uchida (Harvard University) Ninglong Xu (Institute of Neuroscience, Shanghai) Tianming Yang (Institute of Neuroscience, Shanghai) Hang Zhang (Peking University) Lusha Zhu (Peking University) *Organizers* Rava Azeredo da Silveira (ENS and IOB) Stella Christie (Tsinghua University) Sen Song (Tsinghua University) Louis Tao (Peking University) Xiaoqin Wang (Johns Hopkins University) Quan Wen (USTC) *Application and deadline* Information and application guidelines at https://www.cneuro.net/cneuro2024. Applications will be reviewed on a rolling basis. We will make every effort to recruit students from diverse backgrounds, including all genders and ethnic groups. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tarek.besold at googlemail.com Mon Feb 26 06:40:43 2024 From: tarek.besold at googlemail.com (Tarek R. Besold) Date: Mon, 26 Feb 2024 12:40:43 +0100 Subject: Connectionists: CfP: 18th Intl. Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024) Message-ID: 18th International Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024) ===== TL;DR ===== Dates: September 9-12, 2024 Location: Barcelona, Catalonia, Spain Website: https://sites.google.com/view/nesy2024 Full paper abstract submission: April 5, 2024 Full paper manuscript submission: April 12, 2024 Late/short paper submission: May 5, 2024 ===== NeSy 2024 ===== Following almost 20 years of success of NeSy, the series of International Workshops on Neural-Symbolic Learning and Reasoning (http://www.neural-symbolic.org), NeSy will return in 2024 as the 18th International Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024), held in Barcelona (Catalonia, Spain) from September 9-12, 2024. NeSy continues to be the premier annual meeting point of the research community working on neurosymbolic integration, combining symbolic and subsymbolic approaches to learning and reasoning in AI systems. In 2024, NeSy hosts special tracks on Explainable AI and Generative AI. See the conference website (https://sites.google.com/view/nesy2024) for details. ===== Relevant Dates ===== Full paper abstract deadline: April 5, 2024 Full paper submission: April 12, 2024 Author notification: May 24, 2024 Camera-ready papers: June 19, 2024 Late/short paper submission: May 5, 2024 Author notifications: June 3, 2024 Camera-ready papers: June 19, 2024 ===== Proceedings ===== The proceedings of NeSy 2024 will be published by Springer LNCS. See the conference website (https://sites.google.com/view/nesy2024) for detailed formatting and submission instructions. ===== Organizing Committee ===== General/Local Chair: + Tarek R.
Besold (Sony AI, Barcelona) Program Chairs: + Ernesto Jimenez Ruiz (City, University of London) + Artur d'Avila Garcez (City, University of London) + Tarek R. Besold (Sony AI, Barcelona) Explainable AI Special Track Chairs: + Benedikt Wagner (City, University of London) + Roberto Confalonieri (University of Padova) Generative AI Special Track Chair: + Pranava Madhyastha (City, University of London) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ASIM.ROY at asu.edu Mon Feb 26 06:13:15 2024 From: ASIM.ROY at asu.edu (Asim Roy) Date: Mon, 26 Feb 2024 11:13:15 +0000 Subject: Connectionists: Early history of symbolic and neural network approaches to AI In-Reply-To: References: <7A2C605C-7E77-499D-B028-B2BE288A956D@nyu.edu> Message-ID: 1. Grandmother cells generally stand for modality invariance, Gross (2002). And invariances can exist independently for different modalities. Thus, there is no conflict with what Steve Grossberg has said. Gross, C. G. (2002). Genealogy of the "grandmother cell". Neuroscientist 8, 512-518. doi: 10.1016/j.meegid.2016.04.006 2. The main finding from these single cell experiments is that their activations have "meaning." And some of the more significant single-cell findings were rewarded with Nobel prizes. 3. The finding of "meaning" in these single cell activations is what nullifies the concept of population coding. After the discussion about Jennifer Aniston cells, Walter Freeman and a few others were scheduled for a debate at IJCNN 2011 in San Jose on whether these activations can actually have "meaning" or not. I told Walter that Quiroga had mentioned in one of his papers that they indeed have "meaning." Walter, of course, being a population coding champion, didn't believe that. He knew Quiroga quite well, having co-authored a book with him (Imaging Brain Function With EEG: Advanced Temporal and Spatial Analysis of Electroencephalographic Signals | SpringerLink). So, Walter emailed Quiroga to get the answer directly from him. About an hour before the debate, Quiroga finally came through with his answer - that the activations of these cells indeed had "meaning." But that didn't end the debate. I think Walter and I, along with many others (some of whom might be on this list), argued about "meaning" and symbols for a few more years after every new finding came out. But he was always scholarly, respectful of dissenting voices, and invited others to participate in these conversations. 4. Single cell experiments at UCLA were rigorously performed and repeated with many patients. The summary of these experiments by Reddy and Thorpe (2014) states that concept cells have "meaning of a given stimulus in a manner that is invariant to different representations of that stimulus." They also state that "abstract, invariant representations" are "a hallmark of MTL concept cells." To ignore these findings would be a mistake. I understand that these findings do not go down well with the population coding theory, but facts are facts. If one wants to argue for population coding, one has to refute everything that's claimed in the Reddy and Thorpe and other papers from the UCLA/Caltech group, with the finding of "meaning" being the primary one to refute. But beware that the finding of "meaning" goes all the way back to other single cell studies, including those of Hubel and Wiesel. Reddy, L., and Thorpe, S. J. (2014). Concept cells through associative learning of high-level representations. Neuron 84, 248-251. doi: 10.1016/j.neuron.2014.10.004 5.
Yoshua Bengio?s work (and that of many others including Geoffrey Hinton?s) on modularity and causality (and what he calls ?disentanglement?) is about extracting abstract concepts and very much consistent with the concept cell findings. Yoshua is no grandmother cell theorist, but here?s a quote from one of the earlier discussions here: ?Finally, our recent work on learning high-level ?system-2?-like representations and their causal dependencies seeks to learn ?interpretable? entities (with natural language) that will emerge at the highest levels of representation (not clear how distributed or local these will be, but much more local than in a traditional MLP). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the ?right? abstract concept (related to those we can name verbally) will be ?separated? (disentangled) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).? There you go ? trying to assign a single concept (with ?meaning?) to a single neuron or a set of neurons. Thus, population coding goes out the window. Yoshua can provide a further update on his perspectives, but extracting meaningful features (concepts, subconcepts) and having single neurons (or a set of neurons) represent those features is fairly widespread in computer vision. We don?t see much population coding there. Asim Roy Professor, Information Systems Arizona State University Lifeboat Foundation Bios: Professor Asim Roy Asim Roy | iSearch (asu.edu) From: Connectionists On Behalf Of Grossberg, Stephen Sent: Saturday, February 24, 2024 8:46 AM To: Jeffrey Bowers ; KENTRIDGE, ROBERT W. ; Gary Marcus ; Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff, Thanks for your supportive remark. One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells is needed for that category to work. For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category resonates via excitatory feedback signals when it is chosen. In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to the ventral, or What, cortical stream where the category is being learned. These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be distributed across the scene. These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance. I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas. Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution of the Where?s Waldo Problem. I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions. 
This neural architecture is far from being a ?grandmother cell?! My Magnum Opus provides a lot more modeling explanations and data about these issues: https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552 Best again, Steve From: Jeffrey Bowers > Date: Saturday, February 24, 2024 at 4:38?AM To: Grossberg, Stephen >, KENTRIDGE, ROBERT W. >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu >, Grossberg, Stephen > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff From: Grossberg, Stephen > Date: Friday, 23 February 2024 at 18:12 To: Jeffrey Bowers >, KENTRIDGE, ROBERT W. >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu >, Grossberg, Stephen > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Jeff et al., The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined. It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived object from multiple views, sizes, and positions. There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance. I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot of data. Just a scan of the article titles illustrates that there has been a lot of work on this topic. Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cognitive Psychology, 58, 1-48. https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal cortex? Neural Networks, 24, 1050-1061. https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning. Neural Networks, 24, 1036-1049. https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf Grossberg, S., Srinivasan, K., and Yazdabakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision. Neural Networks, 24, 686-692. https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf Foley, N.C., Grossberg, S. and Mingolla, E. (2012). 
Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding. Cognitive Psychology, 65, 77-117. https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Frontiers in Psychology: Perception Science, doi: 10.3389/fpsyg.2014.01457 https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its invariant object category representation cannot be used to do so. Best, Steve From: Connectionists > on behalf of Jeffrey Bowers > Date: Thursday, February 22, 2024 at 11:11?AM To: KENTRIDGE, ROBERT W. >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception. The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree it is not strong evidence for a grandmother cell ? there may be other images that drive the neuron even more, we just don?t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions ? when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand. Jeff From: KENTRIDGE, ROBERT W. > Date: Wednesday, 21 February 2024 at 20:56 To: Jeffrey Bowers >, Gary Marcus >, Laurent Mertens > Cc: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Again, it is great to be examining the relationship between ?real? neural coding and the ins and outs of representation in ANNs. I?m really pleased to be able to make a few contributions to a list which I?ve lurked on since the late 1980s! I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. 
It is, indeed, tempting to think of orientation tuned cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code, where orientation is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The notion of neural population coding was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people's confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person's judgment is indicative of the state of a high-level consumer of orientation information. So again, I'd err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn't our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets information delivered by primary visual cortex. cheers, Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Jeffrey Bowers Date: Wednesday, 21 February 2024 at 12:31 To: KENTRIDGE, ROBERT W., Gary Marcus, Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] It is possible to define grandmother cells in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible is single neurons that only *represent* one category. In psychology there are "localist" models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell representation of the word DOG in the IA model. Just as a simple cell tuned to a vertical line is partly activated by a line 5 degrees off vertical, that does not undermine the hypothesis that the simple cell *represents* vertical lines.
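To make the contrast in this exchange concrete, here is a minimal numerical sketch (not taken from the thread) of the two readouts being discussed: a labelled-line reading that simply takes the preferred orientation of the most active unit, and the population-vector reading Bob describes, in which each unit votes with its preferred orientation weighted by its activity. The tuning curve, preferred orientations and stimulus value below are invented for illustration; angles are doubled before summing because orientation is periodic over 180 degrees.

# Population-vector vs labelled-line readout of orientation (illustrative sketch).
# All tuning parameters and numbers here are made up for the example.
import numpy as np

prefs_deg = np.arange(0, 180, 5)   # preferred orientations of the units
stim_deg = 37.0                    # true stimulus orientation

# Idealized tuning: activity falls off with angular distance from preference.
# Doubling the angles treats 0 deg and 180 deg as the same orientation.
rates = np.exp(3.0 * np.cos(np.deg2rad(2 * (prefs_deg - stim_deg))))

# Labelled-line ("grandmother cell") reading: the most active unit wins.
labelled_line = prefs_deg[np.argmax(rates)]

# Population-vector reading: activity-weighted vector sum in doubled-angle space.
z = np.sum(rates * np.exp(1j * np.deg2rad(2 * prefs_deg)))
population_vector = (np.rad2deg(np.angle(z)) / 2) % 180

print(labelled_line, round(float(population_vector), 2))  # 35 vs roughly 37

The population estimate lands between the 5-degree-spaced preferences, which is one way of reading Jeff's earlier point that perceived orientation is finer-grained than any single cell's tuning; which of the two readouts downstream areas actually use is exactly the empirical question Bob raises.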
I talk about the plausibility of Grandmother cells and discuss the Aniston cells in a paper I wrote some time back: Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1), 220. From: Connectionists on behalf of KENTRIDGE, ROBERT W. Date: Wednesday, 21 February 2024 at 11:48 To: Gary Marcus, Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI I agree - empirical evidence is just what we need in this super-interesting discussion. I should point out a few things about the Quiroga et al. 2005 "Jennifer Aniston cell" finding (Nature, 435, 1102-1107). Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not "Grandmother cells" as defined by Jerry Lettvin, that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active. The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. "HALLE BERRY"). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in "Friends" along with Jennifer Aniston. So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols. cheers Bob Professor of Psychology, University of Durham. Durham PaleoPsychology Group. Durham Centre for Vision and Visual Cognition. Durham Centre for Visual Arts and Culture. Fellow. Canadian Institute for Advanced Research, Brain, Mind & Consciousness Programme. Department of Psychology, University of Durham, Durham DH1 3LE, UK. p: +44 191 334 3261 f: +44 191 334 3434 From: Connectionists on behalf of Gary Marcus Date: Wednesday, 21 February 2024 at 05:49 To: Laurent Mertens Cc: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI [EXTERNAL EMAIL] Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. On Feb 20, 2024, at 08:41, Laurent Mertens wrote: Reacting to your statement: "However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol." What about the Grandmother/Jennifer Aniston/Halle Berry neuron? (See, e.g.,
(See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013) KR, Laurent ________________________________ From: Connectionists > on behalf of Weng, Juyang > Sent: Monday, February 19, 2024 11:11 PM To: Michael Arbib >; connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI Dear Michael, You wrote, "Your brain did not deal with symbols?" I have my Conscious Learning (DN-3) model that tells me: My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters. However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain does not have any symbol in the skull. This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull. In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to a function concept in the extra-body world. I am sorry to say that, which may make many enemies. Best regards, -John ________________________________ From: Michael Arbib > Sent: Monday, February 19, 2024 1:28 PM To: Weng, Juyang >; connectionists at mailman.srv.cs.cmu.edu > Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols? From: Connectionists > On Behalf Of Weng, Juyang Sent: Monday, February 19, 2024 8:07 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Connectionists: Early history of symbolic and neural network approaches to AI I do not agree with Newell and Simon if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 22984 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 226645 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 22619 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 8560 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 5741 bytes Desc: image005.jpg URL: From ali at ifi.uio.no Mon Feb 26 11:35:59 2024 From: ali at ifi.uio.no (Ali Ramezani-Kebrya) Date: Mon, 26 Feb 2024 16:35:59 +0000 Subject: Connectionists: Ph.D. Positions in ML in Norway Integreat - Norwegian Centre for Knowledge-driven Machine Learning In-Reply-To: <3A59EB92-7BB3-4B14-A360-909B796BBC73@uio.no> References: <3A59EB92-7BB3-4B14-A360-909B796BBC73@uio.no> Message-ID: <7E6E401C-18E9-4A2D-AC7C-E3D70A331CCD@uio.no> REMINDER: 13 PhD positions in Oslo and Troms?, Norway, fully funded. 
DEADLINE: Monday March 4 APPLICATION LINK: https://www.jobbnorge.no/en/available-jobs/job/257181/13-phd-positions-in-knowledge-driven-machine-learning 13 PhD positions in Machine Learning, Statistics, Logic, Language Technology, and Ethics at Integreat, The Norwegian Centre for Knowledge-driven Machine Learning, University of Oslo & UiT - The Arctic University of Norway, Tromsø, https://www.integreat.no Start your interdisciplinary PhD with engaged supervisors and many fellow students. Here is the list of projects: https://www.integreat.no/about/vacancies/phd-recruitment/2024/projects/ Starting in summer 2024. Regards, Ali Ramezani-Kebrya Associate Professor Department of Informatics University of Oslo Principal Investigator, SFI Visual Intelligence Norwegian Centre for Knowledge-driven Machine Learning (Integreat) European Laboratory for Learning and Intelligent Systems (ELLIS) Member https://alirk.github.io/ From: Ali Ramezani-Kebrya Date: Friday, 2 February 2024 at 19:15 To: "connectionists at mailman.srv.cs.cmu.edu" Subject: Ph.D. Positions in ML in Norway Integreat - Norwegian Centre for Knowledge-driven Machine Learning A great opportunity to be within the first batch of Integreat Ph.D. students. Integreat is a unique environment where you work with computer scientists, statisticians and mathematicians, plus experts in logic, language models, and ethics, all in one place! I am involved in three exciting projects with wonderful colleagues: - Project 9: Embedded Sufficient Statistics (in Oslo) - Project 4: Exploration and Control of the Inner Representation in Generative AI Models (in Tromsø) - Project 3: Developing Novel Information Theoretic Discrepancy Measures (in Tromsø) You will receive similar resources to do the best Ph.D. in ML in both Tromsø and Oslo. Engage in interdisciplinary projects in Machine Learning, Statistics, Logic, Language Technology, and Ethics with a group of devoted supervisors and fellow students. Deadline: 4 March 2024 #AcademicCareers #PhDpositions #UiO #UiT #MachineLearning #Integreat #JuniorArena Please share. Read more: https://www.integreat.no/news/join-integreat-apply-now-for-13-phd-positions-and-.html -------------------------------------- Ali Ramezani-Kebrya Associate Professor Department of Informatics University of Oslo Principal Investigator, SFI Visual Intelligence Norwegian Centre for Knowledge-driven Machine Learning (Integreat) European Laboratory for Learning and Intelligent Systems (ELLIS) Member https://alirk.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1122 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 817 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image003.png Type: image/png Size: 950 bytes Desc: image003.png URL: From i.lin at ucl.ac.uk Mon Feb 26 11:11:30 2024 From: i.lin at ucl.ac.uk (Lin, I-Chun) Date: Mon, 26 Feb 2024 16:11:30 +0000 Subject: Connectionists: Research Fellow in Theoretical Neuroscience (Gatsby Unit) In-Reply-To: References: Message-ID: <6E179AE0-82CF-42C7-853A-CCB092618D74@ucl.ac.uk> Closing date:?30-Jun-2024 The Gatsby Unit invites applications for a postdoctoral training fellowship under Dr Agostina Palmigiano, focussed on developing theoretical approaches to investigate the mechanisms underlying sensory, motor or cognitive computations. You will be responsible for the primary execution of the project (with opportunities for co-supervision of students), presentation of results at conferences and seminars, and publication in suitable media. This post is initially funded for 2 years with the possibility of a one-year extension at the end of the period. You should have a strong quantitative background in theoretical neuroscience, machine learning, statistics, computer science, physics or engineering; a record of publication in highly respected journals and conferences and must hold a PhD in a relevant field by the agreed start date of the position. For detailed information on the role and how to apply, please visit?http://www.ucl.ac.uk/gatsby/vacancies?under 'Research Fellow (Palmigiano group)'. Agostina will also be at COSYNE 2024; please get in touch (a.palmigiano 'at'?ucl.ac.uk) to set up informal chats with her if interested!? About the Gatsby Unit Established in 1998 through a generous grant from the Gatsby Charitable Foundation, the Gatsby Unit has been a pioneering centre for research in theoretical neuroscience and machine learning. The Unit fosters a highly collaborative and open research culture where regular unit-wide events allow everyone to get involved in everything and be aware of what others are doing. In addition, the Unit provides a unique multidisciplinary research environment with strong links to the Sainsbury Wellcome Centre for Neural Circuits and Behaviour, the ELLIS Unit at UCL, and other neuroscience and machine learning groups at UCL and beyond. -- I-Chun Lin Scientific Programme Manager Gatsby Computational Neuroscience Unit, UCL From hocine.cherifi at gmail.com Mon Feb 26 21:36:19 2024 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Tue, 27 Feb 2024 05:36:19 +0300 Subject: Connectionists: Deadline Approaching: Submit Your Work to FRCCS 2024 till March 04, 2024 Message-ID: Fourth* F*rench* R*egional* C*onference on* C*omplex* S*ystems May 29 ? 31, 2024 Montpellier, France *FRCCS 2024* Deadline Extension: *March 4, 2024 (Firm Deadline).* FRCCS is a single-track international conference organized by CSS France to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France, exchange, promote ideas, and facilitate the cross-fertilization of recent research work, industrial advancements, and original applications. You are cordially invited to submit your contribution until *March 4, 2024* *.* Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted: ? *Papers* about *original research* (up to 12 pages) ? 
*Extended Abstract* about *published or unpublished* research (3 to 4 pages). *Keynote Speakers* ? Petter Holme , Aalto University, Finland ? Sonia K?fi , Universit? de Montpellier, France ? Natasa Przulj , Barcelona Supercomputing Center, Spain ? Boleslaw K. Szymanski Rensselaer Polytechnic Institute, USA ? Ingmar Weber , Saarland University, Germany *Publication* o Contributions will be included in the conference proceedings (With ISBN) o *Selected submissions of unpublished work will be invited for publication in special issues (fast track procedure) **of the journals:* o Applied Network Science, edited by Springer o Frontiers in Big Data , edited by Frontiers *Submission * ? Submit on CMT Microsoft at: https://cmt3.research.microsoft.com/FRCCS2024/ ? *Select the Track: FRCCS2024* *Topics include, but are not limited to: * ? *Foundations of complex systems * - *Complex Networks * - *Neuroscience, **Linguistics* - *Economics & Finance* - *Infrastructure, planning, and environment * - *Biological and (bio)medical complexity * - *Social complexity* - *Socio-Ecological Systems* - *Organisms and populations * - *Engineering systems and systems of systems* - *Complexity in physics and chemistry* *GENERAL CHAIRS* Roberto Interdonato CIRAD, UMR TETIS, Montpellier Bruno Pinaud LABRI University of Bordeaux *PROGRAM CHAIRS* Hocine Cherifi University of Burgundy Pascal Poncelet University of Montpellier Join us at COMPLEX NETWORKS 2024 *-------------------------* Hocine CHERIFI University of Burgundy Franche-Comt? Laboratoire* I*nterdisciplinaire *C*arnot de *B*ourgogne - ICB UMR 6303 CNRS Editor in Chief Plos Complex Systems Founding & Adisory Editor Applied Network Science Editorial Board member PLOS One , IEEE ACCESS , Scientific Reports , Journal of Imaging , Quality and Quantity , Computational Social Networks , Complex Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.K.Seth at sussex.ac.uk Mon Feb 26 12:21:54 2024 From: A.K.Seth at sussex.ac.uk (Anil Seth) Date: Mon, 26 Feb 2024 17:21:54 +0000 Subject: Connectionists: Five new postdoctoral and PhD positions available in consciousness science at Sussex University Message-ID: Positions available! Do you have a strong background in cognitive and computational neuroscience and a desire to contribute to the cutting edge of consciousness research? Two postdoctoral research positions and three PhD positions are now available at the Sussex Centre for Consciousness Science (SCCS; https://www.sussex.ac.uk/research/ccs/). The SCCS is a University of Sussex Centre of Excellence and an internationally prominent group in consciousness science. The positions are supported by an ERC Advanced Investigator Grant (project CONSCIOUS; 2 postdocs and 2 PhDs) and by the SCCS, via the School of Informatics and Engineering (1 PhD). The postdoc positions funded for 2 years with possible extension for another 2, starting as soon as suitable candidates are identified. The PhD positions are fully funded (for both UK and overseas candidates) for 3.5 years, starting in September 2024 (earlier start may be negotiable). The ERC positions are all within the broad remit of computational (neuro)phenomenology, exploring different applications of a predictive processing view of conscious perception. One of the ERC PhD positions will focus on perceptual diversity, analysing and extending our unique Perception Census project (https://perceptioncensus.dreamachine.world/). 
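For readers outside the field, the toy below illustrates the generic predictive-processing idea referred to above: a latent estimate is updated by precision-weighted prediction errors until sensory evidence and prior expectation balance. This is a textbook-style, single-variable sketch added for illustration only; it makes no claim about the models actually developed in the CONSCIOUS project or at the SCCS, and every variable and number in it is invented.

# Toy predictive-processing inference: one latent estimate is nudged by
# precision-weighted prediction errors until prior and sensory evidence balance.
# Purely illustrative; not the project's model.

prior_mean, prior_var = 3.0, 1.0   # prior belief about the latent cause
obs, obs_var = 5.0, 0.5            # noisy sensory observation and its variance

v = prior_mean                     # current estimate of the latent cause
lr = 0.05                          # integration step size
for _ in range(200):
    sensory_error = (obs - v) / obs_var         # prediction error on the input
    prior_error = (prior_mean - v) / prior_var  # deviation from the prior expectation
    v += lr * (sensory_error + prior_error)     # gradient step that reduces both errors

# For this linear-Gaussian case the loop converges to the precision-weighted average:
posterior = (prior_mean / prior_var + obs / obs_var) / (1 / prior_var + 1 / obs_var)
print(round(v, 3), round(posterior, 3))  # both approximately 4.333

Hierarchical predictive-processing models stack many such error-correcting loops, but the precision-weighting logic at each level is the same basic idea.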
The SCCS PhD is flexible in terms of project focus and supervisor; here, we invite PhD proposals that have a computational/informatics element.

Apply by April 3 for the postdoc positions, and by April 12 for the PhD positions. Further details, including how to apply, are here: https://www.sussex.ac.uk/research/centres/sussex-centre-for-consciousness-science/people/join-us

-------------------------------------------
Anil K. Seth, D.Phil.
Professor of Cognitive and Computational Neuroscience, University of Sussex
Director, Sussex Centre for Consciousness Science, University of Sussex
Co-Director, Canadian Institute for Advanced Research, Program on Brain, Mind, and Consciousness
Advanced Investigator, European Research Council
Editor-in-Chief, Neuroscience of Consciousness
www.anilseth.com @anilkseth

From m.lengyel at eng.cam.ac.uk Tue Feb 27 03:20:58 2024 From: m.lengyel at eng.cam.ac.uk (Máté Lengyel) Date: Tue, 27 Feb 2024 08:20:58 +0000 Subject: Connectionists: postdoc in computational neuroscience, Cambridge-Columbia collaboration Message-ID:

We have an open postdoc position to work on a collaborative project between Guillaume Hennequin's group and mine at the Computational and Biological Learning Lab at the University of Cambridge, and Daniel Wolpert's at the Zuckerman Mind Brain Behavior Institute, Columbia University. See details here: https://www.jobs.cam.ac.uk/job/44808/

The closing date for applications is 10th March. Both Guillaume and I will be at Cosyne this week; please do not hesitate to reach out if you would like to discuss this position with us.

Best wishes,
Máté Lengyel

--
Professor of Computational Neuroscience
Computational and Biological Learning Lab
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ, UK
tel: +44 (0)1223 748 532, fax: +44 (0)1223 765 587
email: m.lengyel at eng.cam.ac.uk
web: lengyellab.org

From interdonatos at gmail.com Tue Feb 27 04:33:52 2024 From: interdonatos at gmail.com (Roberto Interdonato) Date: Tue, 27 Feb 2024 10:33:52 +0100 Subject: Connectionists: [DEADLINE APPROACHING] CFP FRCCS 2024 Montpellier, France May 29 - 31, 2024 Message-ID:

Fourth French Regional Conference on Complex Systems (FRCCS 2024)
May 29 - 31, 2024, Montpellier, France

Deadline Extension: March 4, 2024 (Firm Deadline)

After successfully hosting the conference in Dijon in 2021, Paris in 2022, and Le Havre in 2023, the fourth edition of the French Regional Conference on Complex Systems (FRCCS 2024) will be held in Montpellier, France. This single-track international conference organized by CSS France serves as a platform to foster interdisciplinary exchanges among researchers from various scientific disciplines and diverse backgrounds, including sociology, economics, history, management, archaeology, geography, linguistics, statistics, mathematics, and computer science. FRCCS 2024 provides a valuable opportunity for participants to meet in France and exchange and promote ideas, facilitating the cross-fertilization of recent research work, industrial advancements, and original applications.
Moreover, the conference emphasizes research topics with a high societal impact, showcasing the significance of complexity science in addressing complex societal challenges. Join us in Montpellier as we collectively strive to better understand complexity and its implications for society.

You are cordially invited to submit your contribution until March 4, 2024. Finalized work (published or unpublished) and work in progress are welcome. Two types of contributions are accepted:
- Papers about original research (up to 12 pages)
- Extended Abstracts about published or unpublished research (3 to 4 pages)

Keynote Speakers
- Petter Holme, Aalto University, Finland
- Sonia Kéfi, Université de Montpellier, France
- Natasa Przulj, Barcelona Supercomputing Center, Spain
- Boleslaw K. Szymanski, Rensselaer Polytechnic Institute, USA
- Ingmar Weber, Saarland University, Germany

Publication
- Contributions will be included in the conference proceedings (with ISBN).
- Selected submissions of unpublished work will be invited for publication in special issues (fast-track procedure) of the journals:
  - Applied Network Science, edited by Springer
  - Frontiers in Big Data, edited by Frontiers

Submission
- Submit on Microsoft CMT at: https://cmt3.research.microsoft.com/FRCCS2024/
- Select the track: FRCCS2024

Topics include, but are not limited to:
- Foundations of complex systems: Self-organization, non-linear dynamics, statistical physics, mathematical modeling and simulation, conceptual frameworks, ways of thinking, methodologies and methods, philosophy of complexity, knowledge systems, complexity and information, dynamics and self-organization, structure and dynamics at several scales, self-similarity, fractals
- Complex Networks: Structure & Dynamics, Multilayer and Multiplex Networks, Adaptive Networks, Temporal Networks, Centrality, Patterns, Cliques, Communities, Epidemics, Rumors, Control, Synchronization, Reputation, Influence, Viral Marketing, Link Prediction, Network Visualization, Network Digging, Network Embedding & Learning
- Neuroscience, Linguistics: Evolution of language, social consensus, artificial intelligence, cognitive processes & education, narrative complexity
- Economics & Finance: Game Theory, Stock Markets and Crises, Financial Systems, Risk Management, Globalization, Economics and Markets, Blockchain, Bitcoins, Markets and Employment
- Infrastructure, planning, and environment: critical infrastructure, urban planning, mobility, transport and energy, smart cities, urban development, urban sciences
- Biological and (bio)medical complexity: biological networks, systems biology, evolution, natural sciences, medicine and physiology, dynamics of biological coordination, aging
- Social complexity: social networks, computational social sciences, socio-ecological systems, social groups, processes of change, social evolution, self-organization and democracy, socio-technical systems, collective intelligence, corporate and social structures and dynamics, organizational behavior and management, military and defense systems, social unrest, political networks, interactions between human and natural systems, diffusion/circulation of knowledge, diffusion of innovation
- Socio-Ecological Systems: Global environmental change, green growth, sustainability & resilience, and culture
- Organisms and populations: Population biology, collective behavior of animals, ecosystems, ecology, ecological networks, microbiome, speciation, evolution
- Engineering systems and systems of systems: bioengineering, modified and hybrid biological organisms, multi-agent systems, artificial life, artificial intelligence, robots, communication networks, Internet, traffic systems, distributed control, resilience, artificial resilient systems, complex systems engineering, biologically inspired engineering, synthetic biology
- Complexity in physics and chemistry: quantum computing, quantum synchronization, quantum chaos, random matrix theory

GENERAL CHAIRS
Roberto Interdonato, CIRAD, UMR TETIS, Montpellier
Bruno Pinaud, LaBRI, University of Bordeaux

From announce at ucy.ac.cy Tue Feb 27 04:46:29 2024 From: announce at ucy.ac.cy (Announce) Date: Tue, 27 Feb 2024 09:46:29 +0000 Subject: Connectionists: 21st International Conference on Software and Systems Reuse (ICSR 2024): Last Mile for Paper Submission Message-ID:

*** Last Mile for Paper Submission ***

21st International Conference on Software and Systems Reuse (ICSR 2024)
June 10-12, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus
https://cyprusconferences.org/icsr2024/
(*** Submission Deadline: March 4, 2024 AoE (extended) ***)

The International Conference on Software and Systems Reuse (ICSR) is a biannual conference in the field of software reuse research and technology. ICSR is a premier event aiming to present the most recent advances and breakthroughs in the area of software reuse and to promote an intensive and continuous exchange among researchers and practitioners. The guiding theme of this edition is Sustainable Software Reuse.

We invite submissions on new and innovative research results and industrial experience reports dealing with all aspects of software reuse within the context of the modern software development landscape. Topics include but are not limited to the following.

1. Technical aspects of reuse, including:
- Reuse in/for Quality Assurance (QA) techniques, testing, verification, etc.
- Domain ontologies and Model-Driven Development
- Variability management and software product lines
- Context-aware and Dynamic Reuse
- Reuse in and for Machine Learning
- Domain-specific languages (DSLs)
- New language abstractions for software reuse
- Generative Development
- COTS-based development and reuse of open source assets
- Retrieval and recommendation of reusable assets
- Reuse of non-code artefacts
- Architecture-centric reuse approaches
- Service-oriented architectures and microservices
- Software composition and modularization
- Sustainability and software reuse
- Economic models of reuse
- Benefit and risk analysis, scoping
- Legal and managerial aspects of reuse
- Reuse adoption and transition to software reuse
- Lightweight reuse approaches
- Reuse in agile projects
- Technical debt and software reuse

2. Software reuse in industry and in emerging domains:
- Reuse success stories
- Reuse failures and lessons learned
- Reuse obstacles and success factors
- Return on Investment (ROI) studies
- Reuse in hot topic domains (Artificial Intelligence, Internet of Things, Virtualization, Network functions, Quantum Computing, etc.)

We welcome research papers (16 pages) and industry papers (12 pages) following the Springer Lecture Notes in Computer Science format. Submissions will be handled via EasyChair (https://easychair.org/my/conference?conf=icsr2024). Submissions will be double-blindly reviewed, meaning that authors should:
- Omit all authors' names and affiliations from the title page
- Not include the acknowledgements section, if there is one, in the submitted paper
- Refer to their own work in the third person
- Use anonymous GitHub, Zenodo, FigShare or equivalent to provide access to artefacts without disclosing their identity

Both research and industry papers will be reviewed by members of the same program committee (check the website for details). Proceedings will be published by Springer in their Lecture Notes in Computer Science (LNCS) series. An award will be given to the best research paper and the best industry paper. The authors of selected papers from the conference will be invited to submit an extended version (containing at least 30% new material) to a special issue of the Journal of Systems and Software (Elsevier). More details will follow.

IMPORTANT DATES
- Full paper submission: March 4, 2024, AoE (*** extended ***)
- Notification: April 8, 2024, AoE
- Camera Ready: April 15, 2024, AoE
- Author Registration: April 15, 2024, AoE

ORGANISATION

Steering Committee
- Eduardo Almeida, Federal University of Bahia, Brazil
- Goetz Botterweck, Lero, University of Limerick, Ireland
- Rafael Capilla, Rey Juan Carlos University, Spain
- John Favaro, Trust-IT, Italy
- William B. Frakes, IEEE TCSE committee on software reuse, USA
- Martin L. Griss, Carnegie Mellon University, USA
- Oliver Hummel, University of Applied Sciences, Germany
- Hafedh Mili, Université du Québec à Montréal, Canada
- Nan Niu, University of Cincinnati, USA
- George Angelos Papadopoulos, University of Cyprus, Cyprus
- Claudia M.L. Werner, Federal University of Rio de Janeiro, Brazil

General Chair
- George A. Papadopoulos, University of Cyprus, Cyprus

Program Co-Chairs
- Achilleas Achilleos, Frederick University, Cyprus
- Lidia Fuentes, University of Malaga, Spain
From aitormorales95 at gmail.com Tue Feb 27 06:59:42 2024 From: aitormorales95 at gmail.com (Aitor Morales-Gregorio) Date: Tue, 27 Feb 2024 12:59:42 +0100 Subject: Connectionists: ASPP2024: 16th Advanced Scientific Programming in Python Summer School in Heraklion, Crete, Greece, 25 Aug - 1 Sep, 2024 Message-ID: <93292f22-83d4-4416-b9a0-e00cecb5cfb3@gmail.com>

ASPP2024: 16th Advanced Scientific Programming in Python Summer School
======================================================================
https://aspp.school

Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only a few scientists have been trained to use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques and best practices which are standard in the industry, but especially tailored to the needs of a programming scientist. Lectures are interactive and allow students to acquire direct hands-on experience with the topics. Students will work in pairs throughout the school and will team up to practice the newly learned skills in a real programming project: an entertaining computer game.

We use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. Python is the standard tool for the programming scientist due to its clean language design, ease of extensibility, and the great wealth of open source libraries for scientific computing and data visualization.

This school is targeted at PhD students, postdocs and more senior researchers from all areas of science. Competence in Python or in another language such as Java, JavaScript, C/C++, MATLAB, or R is absolutely required. Basic knowledge of Python and git or another version control system is assumed. Participants without any prior experience with Python or git should work through the proposed introductory material before the course. We care for diversity and inclusion, and strive for a welcoming atmosphere for programming scientists of all levels. In particular, we have focused on recruiting an international and gender-balanced pool of students.

Date & Location
===============
25 August - 1 September 2024. Heraklion, Crete, Greece

Application
===========
You can apply online: https://aspp.school
Application deadline: 23:59 UTC, Wednesday 1 May 2024. There will be no deadline extension, so be sure to apply on time. Invitations and notifications of rejection will be sent by Sunday 26 May 2024.
Participation is free, i.e. no fee is charged! Participants should however take care of travel, living, and accommodation expenses by themselves.

Program
=======
- Large-scale collaborative scientific code development with git and GitHub
- Best practices in data visualization
- Testing and debugging scientific code
- Advanced NumPy
- Organizing, documenting, and distributing scientific code
- Scientific programming patterns in Python
- Writing parallel applications in Python
- Profiling and speeding up scientific code
- Programming in teams

Faculty
=======
- Aitor Morales-Gregorio, Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich, Germany
- Jenni Rinker, Department of Wind and Energy Systems, Technical University of Denmark, Lyngby, Denmark
- Lisa Schwetlick, Laboratory of Psychophysics, EPFL, Lausanne, Switzerland
- Pamela Hathway, YPOG, Berlin/Nürnberg, Germany
- Pietro Berkes, NAGRA Kudelski, Lausanne, Switzerland
- Rike-Benjamin Schuppner, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany
- Tiziano Zito, innoCampus, Technische Universität Berlin, Germany
- Verjinia Metodieva, NeuroCure, Charité - Universitätsmedizin Berlin, Germany
- Zbigniew Jędrzejewski-Szmek, Red Hat Inc., Warsaw, Poland

Organizers
==========
Head of the organization for ASPP and responsible for the scientific program:
- Tiziano Zito, innoCampus, Technische Universität Berlin, Germany

Organization team in Heraklion:
- Sara Moberg, Department of Biology, Humboldt-Universität zu Berlin, Germany
- Athanasia Papoutsi, Institute of Molecular Biology and Biotechnology of the Foundation for Research and Technology - Hellas, Heraklion, Greece
- Maria Diamantaki, Institute of Molecular Biology and Biotechnology of the Foundation for Research and Technology - Hellas, Heraklion, Greece
- Zampeta Kalogeropoulou, Digital Science & Research Solutions Ltd., Heraklion, Greece

Sponsors
========
We are able to hold this year's ASPP school thanks to the financial support of the Tübingen AI Center. The Institute of Molecular Biology & Biotechnology of the Foundation for Research and Technology - Hellas is hosting us in Heraklion and is taking care of the local organization.

Website: https://aspp.school
Contact: info at aspp.school

From jboedeck at informatik.uni-freiburg.de Tue Feb 27 07:18:16 2024 From: jboedeck at informatik.uni-freiburg.de (Joschka Boedecker) Date: Tue, 27 Feb 2024 13:18:16 +0100 Subject: Connectionists: [jobs] Multiple PhD positions in ML for automated driving Message-ID:

Join us in an exciting project where we rethink the automated driving stack! The Neurorobotics Lab (https://nr.informatik.uni-freiburg.de) at the University of Freiburg, Germany, headed by Prof. Joschka Boedecker, currently has multiple open full-time PhD positions on planning and learning for automated driving. We are looking for individuals who are highly motivated, curious, and keen to work in a team with excellent peers in a larger project with an industry partner. The positions are fully funded (TV-L E13, 100%). Knowledge of the German language is not required and international candidates are encouraged to apply. In particular, we strongly encourage women and people from underrepresented minority groups to apply.

The candidate should have:
+ A Master's degree in computer science, robotics, machine learning, or a related field
+ A strong mathematical background
+ Programming skills, including in deep learning frameworks such as PyTorch or TensorFlow (programming experience in C++ would be a plus)
+ Excellent communication skills in English
+ Prior research experience in (ideally) several of the following areas:
  - Reinforcement Learning and planning
  - Scene understanding
  - Multi-object tracking and prediction
  - Graph neural networks
  - Causal representation learning
  - Uncertainty estimation in deep networks
  - Self-supervised and unsupervised learning

About the Lab and the city of Freiburg: The Neurorobotics Lab is part of the ELLIS Unit Freiburg, the Department of Computer Science, and the BrainLinks-BrainTools Center. In addition, we collaborate tightly with the Collaborative Research Institute Intelligent Oncology (CRIION) in Freiburg.
Access to state-of-the-art robotics and compute facilities is available. According to Wikipedia, Freiburg is "known for its ancient university and its medieval minster, as well as for its high standard of living and advanced environmental practices. The city is situated in the heart of a major wine-growing region and serves as the primary tourist entry point to the scenic beauty of the Black Forest. According to meteorological statistics, the city is the sunniest and warmest in Germany."

Please send your curriculum vitae and (unofficial) master's transcripts with an expression of interest via email to nrapplications at cs.uni-freiburg.de as soon as possible. Selected candidates will be asked for further application documents. The call closes as soon as suitable candidates are found.

From stanciu.cristi12 at gmail.com Tue Feb 27 09:53:34 2024 From: stanciu.cristi12 at gmail.com (Cristian Stanciu) Date: Tue, 27 Feb 2024 16:53:34 +0200 Subject: Connectionists: Call for papers: 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD'24) Message-ID:

3rd ACM International Workshop on Multimedia AI against Disinformation (MAD'24)
ACM International Conference on Multimedia Retrieval ICMR'24
Phuket, Thailand, June 10-13, 2024
https://www.mad2024.aimultimedialab.ro/
https://easychair.org/my/conference?conf=mad2024

*** Call for papers ***
* Paper submission due: March 17, 2024
* Acceptance notification: April 7, 2024
* Camera-ready papers due: April 25, 2024
* Workshop @ACM ICMR 2024: June 10, 2024

Modern communication no longer relies solely on classic media like newspapers or television, but rather takes place over social networks, in real time, and with live interactions among users. The increased speed and volume of available information, however, have also led to a growing amount, and quality, of misleading content, disinformation and propaganda. Conversely, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to avoid the risk of citizens' opinions being distorted, has become even more crucial and demanding, especially for sensitive topics such as politics, health and religion. Disinformation campaigns are leveraging, among others, AI-based tools for content generation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of "deepfakes", and more recently through the use of Large Language Models (LLMs) and Large Multimodal Models (LMMs), undermining the perceived credibility of media content. It is therefore even more crucial to counter these advances by devising new analysis tools able to detect the presence of synthetic and manipulated content, accessible to journalists and fact-checkers, robust and trustworthy, and possibly based on AI to reach greater performance. Future multimedia disinformation detection research relies on the combination of different modalities and on the adoption of the latest advances in deep learning approaches and architectures. These raise new challenges and questions that need to be addressed in order to reduce the effects of disinformation campaigns. The workshop, in its third edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
Topics of interest include but are not limited to:
- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Disinformation spread and effects in social media
- Analysis of disinformation campaigns in societally-sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination of disinformation detection in multimedia content
- Explaining disinformation/disinformation detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI for disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets, software tools
- Large Language Models for analyzing and mitigating disinformation campaigns
- Large Multimodal Models for media verification
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks

*** Submission guidelines ***
When preparing your submission, please adhere strictly to the ACM ICMR 2024 instructions, to ensure the appropriateness of the reviewing process and inclusion in the ACM Digital Library proceedings. The instructions are available here: https://mad2024.aimultimedialab.ro/submissions/.

*** Organizing committee ***
Cristian Stanciu, Politehnica University of Bucharest, Romania
Luca Cuccovillo, Fraunhofer IDMT, Germany
Bogdan Ionescu, Politehnica University of Bucharest, Romania
Giorgos Kordopatis-Zilos, Czech Technical University in Prague, Czechia
Symeon Papadopoulos, Centre for Research and Technology Hellas, Thessaloniki, Greece
Adrian Popescu, CEA LIST, Saclay, France
Roberto Caldelli, CNIT and Mercatorum University, Italy

The workshop is supported under the H2020 project AI4Media - A European Excellence Centre for Media, Society and Democracy (https://www.ai4media.eu/), the Horizon Europe project vera.ai - VERification Assisted by Artificial Intelligence (https://www.veraai.eu/) and the Horizon Europe project AI4Debunk - Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation (https://ai4debunk.eu/).

On behalf of the organizers,
Cristian Stanciu
https://www.aimultimedialab.ro/

From victor.contrerasordonez at hevs.ch Tue Feb 27 10:54:38 2024 From: victor.contrerasordonez at hevs.ch (Contreras Ordoñez Victor Hugo) Date: Tue, 27 Feb 2024 15:54:38 +0000 Subject: Connectionists: EXTRAAMAS2024 - DEADLINE EXTENSION In-Reply-To: References: Message-ID:

Dear colleague,

We have received several requests for a deadline extension. Therefore, the new deadline for paper submission is:

EXTRAAMAS submission deadline: 10 March 2024
EXTRAAMAS website: https://extraamas.ehealth.hevs.ch/

We look forward to seeing you in New Zealand.
Full Call for Papers: Please check the call for papers here: https://extraamas.ehealth.hevs.ch/docs/CfP_EXTRAAMAS24.pdf

Important Dates:
Deadline for submissions: 10.03.2024
Notification of acceptance: 25.03.2024
Registration instructions: 05.04.2024
Workshop day: 06.05.2024
Camera-ready: 15.06.2024

Workshop tracks:
Track 1: XAI in symbolic and subsymbolic AI
Track 2: XAI in negotiation and conflict resolution
Track 3: Prompts, Interactive Explainability and Dialogues
Track 4: XAI in Law and Ethics

Best regards,
EXTRAAMAS Program Chairs.

From zk240 at cam.ac.uk Wed Feb 28 04:05:57 2024 From: zk240 at cam.ac.uk (Zoe Kourtzi) Date: Wed, 28 Feb 2024 09:05:57 +0000 Subject: Connectionists: Postdoc/Research Assistant position in Neuro-Clinical informatics, University of Cambridge, UK. Message-ID:

We have a Research Fellow (post-doc) or Research Assistant position in Neuro-clinical informatics at the Adaptive Brain Lab (http://www.abg.psychol.cam.ac.uk), University of Cambridge, UK. The position focuses on translating AI-guided tools for understanding brain computations and predicting brain and mental health disorders: https://www.jobs.cam.ac.uk/job/45356/

Applications by 17th March. For informal enquiries please contact Prof Zoe Kourtzi (zk240 at cam.ac.uk) with a CV and a brief statement of background skills and research interests.

From timofte.radu at gmail.com Wed Feb 28 09:36:45 2024 From: timofte.radu at gmail.com (Radu Timofte) Date: Wed, 28 Feb 2024 15:36:45 +0100 Subject: Connectionists: [CFP] CVPR 2024 NTIRE workshop and challenges [postponed deadlines] Message-ID:

Apologies for multiple postings
***********************************
CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 17 CHALLENGES

NTIRE: 9th New Trends in Image Restoration and Enhancement workshop and challenges.
In conjunction with CVPR 2024, June 17, Seattle, US.
Website: https://cvlai.net/ntire/2024/
Contact: radu.timofte at uni-wuerzburg.de

TOPICS
- Image/video inpainting
- Image/video deblurring
- Image/video denoising
- Image/video upsampling and super-resolution
- Image/video filtering
- Image/video de-hazing, de-raining, de-snowing, etc.
- Demosaicing
- Image/video compression
- Removal of artifacts, shadows, glare and reflections, etc.
- Image/video enhancement: brightening, color adjustment, sharpening, etc.
- Style transfer
- Hyperspectral image restoration, enhancement, manipulation
- Underwater image restoration, enhancement, manipulation
- Light field image restoration, enhancement, manipulation
- Methods robust to changing weather conditions / adverse outdoor conditions
- Image/video restoration, enhancement, manipulation in constrained settings/on mobile devices
- Visual domain translation
- Multimodal translation
- Perceptual enhancement
- Perceptual manipulation
- Depth estimation
- Image/video generation and hallucination
- Image/video quality assessment
- Image/video semantic segmentation
- Saliency and gaze estimation
- Studies and applications of the above

SUBMISSION
A paper submission has to be in English, in pdf format, and at most 8 pages (excluding references) in CVPR style: https://cvpr.thecvf.com/Conferences/2024/AuthorGuidelines
The review process is double blind. Accepted and presented papers will be published after the conference in the 2024 CVPR Workshops Proceedings.
Author Kit: https://github.com/cvpr-org/author-kit/archive/refs/tags/CVPR2024-v2.zip
Submission site: https://cmt3.research.microsoft.com/NTIRE2024

WORKSHOP DATES
- Regular Papers submission deadline: March 17, 2024 (EXTENDED)
- Challenge Papers submission deadline: April 5, 2024
- Papers reviewed elsewhere submission deadline: April 5, 2024 (EXTENDED)

CHALLENGES
- Dense and Non-Homogeneous Dehazing
- Night Photography Rendering
- Blind Compressed Image Enhancement
- Shadow Removal - Track 1 Fidelity
- Shadow Removal - Track 2 Perceptual
- Efficient Super Resolution
- Image Super Resolution (x4)
- Light Field Image Super-Resolution - Track 1 Fidelity
- Light Field Image Super-Resolution - Track 2 Efficiency
- Stereo Image Super-Resolution - Track 1 Bicubic
- Stereo Image Super-Resolution - Track 2 Realistic
- HR Depth from Images of Specular and Transparent Surfaces - Track 1 Stereo
- HR Depth from Images of Specular and Transparent Surfaces - Track 2 Mono
- Bracketing Image Restoration and Enhancement - Track 1
- Bracketing Image Restoration and Enhancement - Track 2
- Portrait Quality Assessment
- Quality Assessment for AI-Generated Content - Track 1 Image
- Quality Assessment for AI-Generated Content - Track 2 Video
- Restore Any Image Model (RAIM) in the Wild
- RAW Image Super-Resolution
- Short-form UGC Video Quality Assessment
- Low Light Enhancement
- RAW Burst Alignment and ISP

To learn more about the challenges, to participate in the challenges, and to access the data, everybody is invited to check the NTIRE 2024 page: https://cvlai.net/ntire/2024/

CHALLENGES DATES
- Release of train data: February 1, 2024
- Competitions end: March 21, 2024 (EXTENDED)

Website: https://cvlai.net/ntire/2024/
Contact: radu.timofte at uni-wuerzburg.de

From hocine.cherifi at gmail.com Wed Feb 28 10:26:27 2024 From: hocine.cherifi at gmail.com (Hocine Cherifi) Date: Wed, 28 Feb 2024 18:26:27 +0300 Subject: Connectionists: Doctoral Student and Young Researcher Awards @ FRCCS 2024 Deadline March 11, 2024 Message-ID:

Please consider contributing to, and/or forwarding to the appropriate groups, the following opportunities:

Submission deadline extended: March 11, 2024 (firm deadline)

DOCTORAL STUDENT AWARD
The Doctoral Student Award is in place to recognize early career contributions. It is awarded to young researchers enrolled in a Ph.D. program. This competition consists in presenting your research in simple terms, in a five-minute video, to a lay audience. Your presentation should be clear, concise, and convincing.
More info: https://iutdijon.u-bourgogne.fr/ccs-france/call-doctoral-student-award/

YOUNG RESEARCHER AWARD
The Young Researcher Award is in place to recognize early career contributions and leadership in research in Complex Systems related fields. It is awarded to young researchers who are within five years of Ph.D. completion (date of Ph.D. defense) at the deadline of the call for nominations.
More info: https://iutdijon.u-bourgogne.fr/ccs-france/call-young-resarcher-award/

Best regards

Join us at COMPLEX NETWORKS 2024
-------------------------
Hocine CHERIFI
University of Burgundy Franche-Comté
Laboratoire Interdisciplinaire Carnot de Bourgogne - ICB UMR 6303 CNRS
Editor in Chief, PLOS Complex Systems
Founding & Advisory Editor, Applied Network Science
Editorial Board member: PLOS One, IEEE ACCESS, Scientific Reports, Journal of Imaging, Quality and Quantity, Computational Social Networks, Complex Systems

From efipatm at gmail.com Wed Feb 28 10:21:08 2024 From: efipatm at gmail.com (Efi) Date: Wed, 28 Feb 2024 17:21:08 +0200 Subject: Connectionists: TEMA/AIIA/AIDA free short e-course: "Cloud/Edge Computing for Deep Learning and Big Data Analytics", 19th March 2024 Message-ID:

Dear AI/CS/ECE student/scientist/engineer/enthusiast,

the Artificial Intelligence and Information Analysis (AIIA) Lab of Aristotle University of Thessaloniki (AUTH), in cooperation with the TEMA R&D project and the International AI Doctoral Academy (AIDA), is excited to invite you to register and attend the upcoming short e-course: Cloud/Edge Computing for Deep Learning and Big Data Analytics, offered by esteemed lecturers. This course consists of six lectures, covering some of the most popular tools necessary for distributed systems: Docker, Kubernetes, Orion Context Broker and Apache Airflow. Decentralized DNN applications will also be presented.

Host Institution: Aristotle University of Thessaloniki
19 March 2024, 08:30 - 18:30 CET
Online participation via ZOOM: Meeting ID: 928 2622 5947, Passcode: 405011

Register in advance for this course, for free, through the above-mentioned short course web page. Students, scientists and engineers from other scientific disciplines having the necessary mathematical background are also welcome to register.

This short e-course is technically sponsored by the TEMA, AI.BIG cluster, AIDA, AI4Media, AI4Europe & SIMAR R&D projects.

Course Organizer: Prof. Ioannis Pitas, Chair of the International AI Doctoral Academy (AIDA), Director of the Artificial Intelligence and Information Analysis (AIIA) Lab, Aristotle University of Thessaloniki, Greece

From publicity at acsos.org Wed Feb 28 17:29:46 2024 From: publicity at acsos.org (ACSOS Publicity Chairs) Date: Wed, 28 Feb 2024 19:29:46 -0300 Subject: Connectionists: ACSOS 2024: Call for Papers Message-ID:

****** ACSOS 2024 - Call for Papers ******
5th IEEE International Conference on Autonomic Computing and Self-Organizing Systems
16-20 September 2024 - Aarhus University (AU), Aarhus, Denmark
https://2024.acsos.org/
https://twitter.com/ACSOSconf
***************************************************

The IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS) is the leading forum for sharing the latest research results, ideas, and experiences in autonomic computing, self-adaptation, and self-organization. Now in its 5th edition, ACSOS was founded in 2020 as a merger of the IEEE International Conference on Autonomic Computing (ICAC) and the IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO).

The world is increasingly relying on autonomous systems: in robotics, manufacturing, software engineering, vehicles, data center systems, and precision agriculture, to name just a few areas.
These systems are bringing autonomy to a whole new level of dynamic decision-making under uncertainty, requiring autonomic behavior (e.g., control theory, cybernetics) and self-reference, leading to a range of self-* properties (e.g., self-awareness, self-adaptation, self-organization), and an approach in which the system implementation and its environment are considered holistically. Despite this rise in autonomic and self-* systems, there remains a wide range of fundamental challenges in understanding how to design, control, reason about, and trust such systems. The IEEE ACSOS conference solicits novel research on these topics in fundamentals and methods as well as applications for autonomic and self-* systems. ACSOS is particularly proud of its long-standing academic breadth and innovative industry contributions, and regularly features work from computational biologists through to operating systems researchers, united by the common theme of autonomous systems. The conference program will include research papers, in-practice experience reports, posters, demos, and a doctoral symposium.

ACSOS 2024 solicits contributions from among (but not limited to) the following autonomic, self-adaptive and self-organizing theories, methods, and applications:

Theories
- Heuristics
- Optimization and configuration algorithms
- Models for complex and adaptive systems

Methods
- Support for adaptive systems: monitoring, analyzing, planning, execution
- Specification of adaptation in processes for development, deployment, runtime
- System support for adaptive systems: operating systems, middleware, communication
- Distributed learning (e.g., federated learning)

Applications
- Software support for adaptive systems
- Patterns in monitoring, analyzing, planning, execution
- Software architectures
- Languages for adaptive systems

Experiences and domains
- Applications of self-organization in specific use cases
- Insights of applying and evaluating real-world self-organizing systems: generic platforms (i.e., middleware, operating systems), specific applications
- LLMs for self-adaptive and self-organizing systems (e.g., multi-robot planning)

Please note that this list is non-exhaustive. ACSOS strives to be a home for research in adaptive and self-organizing systems.

**************** Important Dates ******************
Abstract Submission: April 26th, 2024
Submission Deadline: May 3rd, 2024
Notification to Authors: June 24th, 2024
Camera Ready Submission: July 19th, 2024
***************************************************
All times are in the Anywhere on Earth (AoE) timezone.

From battleday at g.harvard.edu Wed Feb 28 23:34:05 2024 From: battleday at g.harvard.edu (Battleday, Ruairidh) Date: Thu, 29 Feb 2024 17:34:05 +1300 Subject: Connectionists: Abstract submission deadline extended: Mathematics of Neuroscience and AI, Rome, 28-31st May 2024. Message-ID:

We have extended the deadline for abstracts for the 5th International Convention on the Mathematics of Neuroscience and AI, to be held in Rome, 28th-31st May 2024:

New deadline: 8th March 2024, AOE.

We hope that you come and join us! Thank you to all the people that have submitted already; the quality has been fantastic. If you would like to edit your submission, just submit another response.

In this high-profile convention we examine computational accounts of the brain and mind from the perspectives of biocomputation, neural theory, and cognitive science.
We then take findings and theory back into the creation of robust, generalizable, and scalable AI: www.neuromonster.org

We invite submissions for each session via this Google form: https://forms.gle/u8tJU4PtkT1EKYte7

Keynote Speakers
Professor Peter Dayan (Max Planck Institute, Tübingen)
Professor Sophie Deneve (ENS, Paris)
Professor Wolfgang Maass (TUG)
Professor Mackenzie Mathis (EPFL)
Professor Anne Collins (UC Berkeley)
Dr Feryal Behbahani (Google DeepMind)
Dr Giovanni Pezzulo (NRC of Italy, Rome)

Session Chairs
Biocomputation: Yasmine Ayman (Harvard), Professor Dan Nicolau Jr (KCL)
Neural theory: Dr James Whittington (Stanford / Oxford), Dr Francesca Mastrogiuseppe (Champalimaud)
Cognitive science: Dr Ruairidh Battleday (Harvard / MIT), Dr Antonella Maselli (NRC Italy)
AI: Dr Ishita Dasgupta (Google DeepMind), Dr Ilia Sucholutsky (Princeton)

Confirmed speakers
Biocomputation: Professor Andrew Adamatzky (UWE), Professor Panayiota Poirazi (FORTH, Crete), Professor Jason Shepherd (Utah), Professor Christine Grienberger (Brandeis)
Neural theory: Professor Rafal Bogacz (Oxford), Professor Athena Akrami (UCL), Professor Nicolas Brunel (Duke), Professor Kanaka Rajan (Harvard), Dr Lea Duncker (Stanford), Dr Kris Jensen (UCL)
Cognitive science: Professor Dagmar Sternad (Northeastern), Professor Bill Thompson (UC Berkeley), Professor Samuel McDougle (Yale), Dr Maria Eckstein (DeepMind), Dr Fred Callaway (NYU / Harvard)
AI: Professor Kevin Ellis (Cornell), Professor Najoung Kim (BU, Google), Dr André Barreto (DeepMind), Dr Sophia Sanborn (USB / UBC), Dr Wilka Carvalho (Harvard)

The submission format is a single A4 sheet containing (at least) an abstract of 250 words, along with any additional material you think valuable. Submissions will be accepted for a poster, spotlight, or short talk. The new submission deadline is 8th March AOE. We will reply by March 15th with a decision.

The convention will be held at the Villa Wolkonsky, in San Giovanni, Rome (a 10-minute walk from the Colosseum).

Many thanks!
The Organizing Committee.

--
Dr Ruairidh McLennan Battleday BMBCh (Oxon) PhD
President, Thinking About Thinking (www.thinkingaboutthinking.org)
Postdoctoral Research Fellow
Center for Brain Science, Harvard University
Center for Brains, Minds, and Machines, MIT

From daniel.mcnamee at research.fchampalimaud.org Wed Feb 28 12:50:19 2024 From: daniel.mcnamee at research.fchampalimaud.org (Daniel McNamee) Date: Wed, 28 Feb 2024 17:50:19 +0000 Subject: Connectionists: CAJAL Course on Quantitative Approaches to Behavior and Virtual Reality 2024 Message-ID:

CAJAL COURSE ON QUANTITATIVE APPROACHES TO BEHAVIOR AND VIRTUAL REALITY
2 - 21 June 2024, Champalimaud Centre for the Unknown, Lisbon, Portugal
https://cajal-training.org/on-site/quantitative-approaches-to-behaviour-and-virtual-reality/
Applications deadline: 4 March 2024

DIRECTORS
- Benjamin de Bivort (Harvard University, USA)
- Ann Kennedy (Northwestern University, USA)
- Giorgio Gilestro (Imperial College London, UK)
- Daniel McNamee (Champalimaud Centre for the Unknown, Portugal)

Quantitative studies of behaviour are fundamental in our effort to understand brain function and malfunction. Recently, the techniques for studying behaviour, along with those for monitoring and manipulating neural activity, have progressed rapidly. Therefore, we are organizing a summer course to provide promising young scientists with a comprehensive introduction to state-of-the-art techniques in quantitative behavioural methods.
This course's content is complementary to other summer courses that focus on measuring and manipulating neurophysiological processes. Our focus is on methodologies to acquire rich data representations of behavior, dissect them statistically, model their dynamics, and integrate behavioral measurements with other kinds of neurobiological data. To this end, students will 1) fabricate devices for recording the behavior of experimental organisms (including flies, fish, and humans), 2) learn, under the guidance of the scientists developing these methods, the modern tools to analyze behavioral data from these organisms, and 3) in a week-long independent project, develop and conduct a behavioral study of their own design, with the support and guidance of the course instructors and teaching assistants.

The course is designed for graduate students and postdoctoral fellows from a variety of disciplines, including neuroscience, biology, ethology, computer science, and psychology. Students are expected to have a keen interest and basic background in neurobiology and behavior, as well as some programming experience. A maximum of 20 students will be accepted. Students of any nationality can apply. We specifically encourage applications from researchers who work in the developing world. Stipends are available.

Confirmed faculty:
Ahmed Al-Hady (Max Planck Institute, Konstanz, Germany)
Kristin Branson (Janelia, USA)
Andre Brown (Imperial College, UK)
Bing Brunton (U Washington, USA)
James Cotton (Northwestern, USA)
Iain Couzin (Max Planck Institute, Konstanz, Germany)
Serena Ding (Max Planck Institute, Konstanz, Germany)
Gonzalo de Polavieja (Champalimaud, Portugal)
Kim Hoke (Colorado State University, USA)
Karla Kaun (Brown University, USA)
Kate Laskowski (UC Davis, USA)
Zach Mainen (Champalimaud, Portugal)
Talmo Pereira (Salk Institute, USA)
Nachum Ulanovsky (Weizmann Institute, Israel)
(more to be confirmed)

Daniel McNamee (co-director, Champalimaud Research)

From paul.cisek at umontreal.ca Wed Feb 28 16:19:14 2024 From: paul.cisek at umontreal.ca (Paul Cisek) Date: Wed, 28 Feb 2024 21:19:14 +0000 Subject: Connectionists: Two postdoctoral positions for the development of large-scale electrophysiological data management and sharing infrastructure Message-ID:

Partnership between the Department of Neurosciences (https://neurosciences.umontreal.ca), the Faculty of Medicine of the Université de Montréal (https://medecine.umontreal.ca) and IVADO (https://ivado.ca).

2-year positions; $75,000/year salary

Requirements: The applicant should hold a PhD (e.g. in electrical or biomedical engineering, neurosciences, or a complementary field) and have research expertise in systems neuroscience, programming, and data processing and analysis.

Major advances in electrophysiological recording technologies have led to an explosion in the size of datasets, as well as new challenges in processing and managing them. In particular, the advent of wireless large-scale neurophysiology and deep-learning-based pose estimation enables us to study new scientific questions on complex, unconstrained behaviors.
As the pace at which we acquire data has accelerated exponentially in recent years, there is an urgent need for efficient automated pipelines to process raw data from the labs into analysis-ready datasets, and to create organized structures for long-term data storage that are implemented consistently across tools and users to facilitate data sharing and complex multidimensional and multifactorial analyses. To address these challenges, several research groups at the Department of Neurosciences (P. Cisek, N. Dancause, B. Ebitz, A. Green, M. Perich) aim to 1) develop and deploy automated approaches to process signals recorded in our labs into analysis-ready datasets, and 2) create a unified data storage and management framework to facilitate data sharing and collaborative neuro-AI analyses. These goals are closely aligned with those of the Union Neuroscience et Intelligence artificielle Quebec (UNIQUE; https://www.unique.quebec) center, the strategic research cluster of the Fonds de Recherche du Quebec - Nature et Technologie (FRQ-NT), the Institut de valorisation des données (IVADO; https://ivado.ca) and the Quebec Artificial Intelligence Institute (Mila; https://mila.quebec).

To achieve these goals, we aim to hire two postdoctoral trainees (PDFs) seeking training opportunities at the intersection of academia and industry. The candidates should have adequate training in disciplines such as engineering, data science, machine learning or neuroscience. The PDFs will advance cutting-edge platforms for large-scale behavioral and neurophysiology experiments shared by several research groups on campus at the Université de Montréal. They will participate in the advancement of open source in neuroscience. Both positions will include working with unique electrophysiological datasets that will provide opportunities to develop novel or high-dimensional analytical tools, which could lead to authorship on additional publications. These datasets can originate from ongoing scientific projects of one or more of the participating research labs.

1) Develop and deploy automated approaches to process signals from the labs into analysis-ready datasets. The postdoctoral trainee will design and develop automated processing pipelines, from raw data collected in the labs to an analysis-ready state, for each type of data collected. The pipeline must adapt and enhance our current tools implemented in Matlab, R or Python. This includes incorporating automated spike sorting tools such as MountainSort, KiloSort, and Full Binary Pursuit, handling local field potential signals with software like Chronux and FieldTrip, as well as creating user interfaces. The postdoctoral trainee will be responsible for creating Standard Operating Procedures and for leading user training.

2) Create a unified data storage and management framework to facilitate data sharing and collaborative neuro-AI analyses. The postdoctoral trainee will design and develop a robust data management platform in collaboration with researchers and staff. The platform must provide reproducible automation of commonly repeated analysis tasks and provide advanced application programming interfaces (APIs) that support discovery-oriented data analysis. The platform will include a central database (SQL) for storing data and metadata, along with provenance and versioning information, implement standardized nomenclature according to agreed ontologies, and provide an extensible suite of analysis tools (in Matlab, R or Python).
The postdoctoral trainee will also be responsible for creating Standard Operating Procedures and for leading user training.

For application: Please send a short cover letter, resume and the names of 3 references to:
Numa Dancause, BSc(PT), PhD, Professor, Associate Director, Département de Neurosciences, Université de Montréal
Email: marlene.boutet at umontreal.ca

Paul Cisek, PhD
Département de neurosciences, Université de Montréal
Physical: 2960 chemin de la tour, local 4117, Montréal, QC H3T 1J4, CANADA
Mailing: CP 6128 Succursale centre-ville, Montréal, QC H3C 3J7, CANADA
e-mail: paul.cisek at umontreal.ca
Web: www.cisek.org/pavel

From nagai.yukie at mail.u-tokyo.ac.jp Wed Feb 28 21:04:54 2024 From: nagai.yukie at mail.u-tokyo.ac.jp (nagai.yukie at mail.u-tokyo.ac.jp) Date: Thu, 29 Feb 2024 02:04:54 +0000 Subject: Connectionists: ICDL Poster deadline extended to March 8 Message-ID:

Dear colleagues,

Join us in Austin May 20-23, 2024 for the IEEE International Conference on Development and Learning (ICDL). ICDL is an intimate and unique meeting gathering researchers interested in development from the perspectives of ecological psychology, robotics, and computation. While the deadline for full paper submissions has passed, there is still time to submit to the poster tracks!

We have confirmed an amazing list of speakers for the event - including a keynote by Jitendra Malik, the research director for Facebook AI - as well as 20 of the most important thinkers in this space. Our goal is to encourage deep conversations through carefully curated single-track symposia, including:

[ Curiosity-Driven Learning and Predictive Models ]
Yukie Nagai - The University of Tokyo, Japan
Elizabeth Bonawitz - Harvard University, USA
Gert Westermann - Lancaster University, UK
Michael Goldstein - Cornell University, USA

[ Language Development in Humans and Machines ]
Xavier Hinaut - Inria & Bordeaux University, France
Casey Lew-Williams - Princeton University, USA
Daniel Messinger - University of Miami, USA
Catherine Laing - University of York, UK

[ Developmental Machine Learning and Everyday Inputs ]
Karen Adolph - New York University, USA
Linda Smith - Indiana University, USA
Jochen Triesch - Frankfurt Institute for Advanced Studies, Germany
Marvin Lavechin - École Normale Supérieure, France

[ Egocentric vision and attention ]
John Franchak - University of California, Riverside, USA
Kristen Grauman - The University of Texas at Austin, USA
Mary Hayhoe - The University of Texas at Austin, USA
Jim Rehg - University of Illinois Urbana-Champaign, USA

[ Body Representation and Sensorimotor Organization ]
Daniela Corbetta - The University of Tennessee, USA
Matej Hoffmann - Czech Technical University in Prague, Czech Republic
Hoshinori Kanazawa - The University of Tokyo, Japan
Peter Marshall - Temple University, USA

Website: https://la.utexas.edu/users/dil/ICDL_Austin_2024/
Deadline for 1-page poster abstracts and journal track poster submissions: March 1, 2024 -> March 8, 2024 (extended)

Submissions are invited on a range of topics including but not limited to:
- Embodied learning and development in biological systems and robots
- Developmental stages and sensitive periods
- Emergence of verbal and nonverbal communication
- Curiosity, intrinsic motivations, exploration, play and active learning
- Architectures for lifelong learning
- Emergence of body and affordance perception
- Learning control of body movement
- Emotional development and the role of emotion in learning
- Prediction, planning, and problem solving
- The relationship between evolution and development
- Epistemological foundations and philosophical issues
- Robotic and computational models of human and animal development
- Developmentally-inspired machine learning
- Human-robot interaction in developmental contexts
- Applications of machine learning to human and animal development

[ 1-page poster abstracts ]
To encourage discussion of late-breaking results, or for work that is not sufficiently mature for a full paper, we will accept 1-page abstracts. These submissions will not be included in the conference proceedings. Accepted abstracts will be presented during the poster session.

[ Journal-track posters ]
Journal-track poster submissions must be about a journal paper that has been published recently (no earlier than December 2022), on a topic relevant to ICDL. Please see https://la.utexas.edu/users/dil/ICDL_Austin_2024/ for submission details.

Best regards,
Yukie Nagai, on behalf of the organizing committees

--
Yukie Nagai, Ph.D.
Project Professor, The University of Tokyo
nagai.yukie at mail.u-tokyo.ac.jp | https://developmental-robotics.jp
CREST Cognitive Feelings: https://cognitive-feeling.jp
CREST Cognitive Mirroring: https://cognitive-mirroring.org

From marius.bilasco at univ-lille.fr Thu Feb 29 04:37:45 2024 From: marius.bilasco at univ-lille.fr (Ioan Marius BILASCO) Date: Thu, 29 Feb 2024 10:37:45 +0100 Subject: Connectionists: [jobs] PhD opportunity in Lille, France - EventSpike: Asynchronous computer vision from event cameras Message-ID: <2a7600f3-5aa7-4b31-b153-82511c4b2093@univ-lille.fr>

The FOX team from the CRIStAL laboratory (UMR CNRS), Lille, France and the PR team from the MIS Laboratory, Amiens, France are looking to recruit a joint PhD student starting in October 2024 on the following subject:

EventSpike - Asynchronous computer vision from event cameras

Abstract: Video analysis is one of the fundamental tasks in computer vision. The dominant approach is based on deep neural networks applied to RGB images. These models have disadvantages such as: a) the need for large quantities of annotated data, which requires significant human work; b) the significant computational and therefore energy cost of these approaches; and c) redundancy in terms of visual information between two successive images. Spiking neural networks can offer a solution to these problems, through the use of unsupervised learning rules inspired by biological learning and the possibility of implementing them on ultra-low-energy hardware components. Event cameras, which only communicate changes in light intensity, are positioned as an alternative for capturing a scene when efficient processing on hardware with low computing capabilities is required. The objective of this thesis is to offer a joint response by proposing weakly supervised learning methods based on spiking learning mechanisms which will directly exploit the flow of impulses generated by an event camera. We are targeting applications around autonomous driving, such as the detection of moving vehicles or the recognition of information conveyed by large displays along the road infrastructure. The goal is to develop new models of spiking neural networks (SNNs) capable of directly processing visual information in the form of spike trains. The proposed models must be validated experimentally on dynamic vision databases, following standard protocols and best practices.
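To make the kind of asynchronous, event-driven processing described above more concrete, here is a minimal sketch in Python with NumPy (an editorial illustration, not part of the official project description): synthetic event-camera output is fed to one leaky integrate-and-fire unit per pixel, updated only when an event arrives at that pixel. The sensor size, time constant and threshold are arbitrary assumptions, and a real project would read events from a sensor or dataset and would likely use a dedicated SNN simulator instead.

    import numpy as np

    # Synthetic event stream: rows of (timestamp_us, x, y, polarity), loosely
    # mimicking event-camera output. A real project would read these from a
    # sensor or a dynamic vision dataset instead.
    rng = np.random.default_rng(0)
    n_events = 5000
    events = np.stack([
        np.sort(rng.integers(0, 100_000, n_events)),  # timestamps (microseconds)
        rng.integers(0, 32, n_events),                # x coordinate (assumed 32x32 sensor)
        rng.integers(0, 32, n_events),                # y coordinate
        rng.choice([-1, 1], n_events),                # polarity (OFF/ON)
    ], axis=1)

    # One leaky integrate-and-fire unit per pixel, updated asynchronously:
    # the membrane potential decays between events and is incremented only
    # when an event arrives at that pixel (no frame-based processing).
    tau = 30_000.0            # membrane time constant in microseconds (assumed)
    threshold = 2.0           # firing threshold (assumed)
    potential = np.zeros((32, 32))
    last_event_time = np.zeros((32, 32))
    output_spikes = []

    for t, x, y, p in events:
        # exponential leak since the last event seen at this pixel
        potential[y, x] *= np.exp(-(t - last_event_time[y, x]) / tau)
        last_event_time[y, x] = t
        potential[y, x] += abs(p)          # integrate the incoming event
        if potential[y, x] >= threshold:   # emit an output spike and reset
            output_spikes.append((int(t), int(x), int(y)))
            potential[y, x] = 0.0

    print(f"{len(output_spikes)} output spikes generated from {n_events} input events")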
Candidates must hold a Master's degree (or an equivalent degree) in Computer Science, Statistics, Applied Mathematics or a related field. Experience in one or more of the following is a plus:
- image processing, computer vision;
- machine learning;
- bio-inspired computing;
- research methodology (literature review, experimentation...).

Candidates should have the following skills:
- good proficiency in English, both spoken and written;
- scientific writing;
- programming (experience in C++ is a plus, but not mandatory).

The candidate will be funded for 3 years; he/she is expected to defend his/her thesis and graduate by the end of the contract. The monthly gross salary is around 2000 €, including benefits (health insurance, retirement fund, and paid vacations).

For application, please send the following information in a single PDF file to Pr. Marius Bilasco (marius.bilasco at univ-lille.fr) and Dr. Fabio Morbidi (fabio.morbidi at gmail.com) with subject [PhD_EventSpike]:
- A cover letter.
- A curriculum vitae, including a list of publications, if any.
- Transcripts of grades of the Master's degree.
- The contact information of two references (and any letters if available).

We look forward to receiving your application until 12.04.2024.

Context of the joint PhD

The FOX research group is part of the CRIStAL laboratory (University of Lille, CNRS), located in Lille, France. We focus on video analysis for human behavior understanding. Specifically, we develop spatio-temporal models of motion for tasks such as abnormal event detection, emotion recognition, and face alignment. We are also involved in IRCICA (CNRS), a research institute promoting multidisciplinary research. At IRCICA, we collaborate with computer scientists and experts in electronics engineering to create new models of neural networks that can be implemented on low-power hardware architectures. Recently, we designed state-of-the-art models for image recognition with single- and multi-layer unsupervised spiking neural networks. We were among the first to successfully apply unsupervised SNNs to modern computer vision datasets. We also developed our own SNN simulator to support experiments with SNNs on computer vision problems. Our work is published in major journals (Pattern Recognition, IEEE Trans. on Affective Computing) and conferences (WACV, IJCNN) in the field.

The PR (Robotic Perception) team has skills in mobile robotics (perception), 3D reconstruction and unconventional vision. The PR team is piloting the e-Cathedral program (2015-2025) and is currently involved in four other projects which focus on event vision: the EVENTO project (2021-2024) co-financed by AID, the ANR CERBERE project (2022-2025) carried out by the LITIS laboratory of the University of Rouen, the ANR DEVIN project (2024-2028) carried out by the I3S laboratory of the University of Côte d'Azur, and the ANR-FWF (France-Austria) project (2024-2028) carried out by the UPJV and by TU Graz. The PR team wishes to further strengthen this area of research and improve its expertise in AI through collaboration with the CRIStAL laboratory.

The position is located in Lille, France. With over 110,000 students, the metropolitan area of Lille is one of France's top student cities. The European Doctoral College Lille Nord-Pas de Calais is headquartered in Lille Metropole and includes 3,000 doctoral students supported by university research laboratories. Lille has a convenient location in the European high-speed rail network.
It lies on the Eurostar line to London (a 1 hour 20 minute journey). The French TGV network also puts it only 1 hour from Paris and 35 minutes from Brussels, with short trips to other major centres in France such as Marseille and Lyon.

==
Ioan Marius BILASCO
Professor of Computer Science (Professeur des Universités)
Univ. Lille
CRIStAL CNRS - UMR 9189
Cité Scientifique
59650 Villeneuve d'Ascq Cedex - France
https://pro.univ-lille.fr/marius-bilasco/

From elio.tuci at gmail.com Thu Feb 29 12:05:47 2024
From: elio.tuci at gmail.com (Elio Tuci)
Date: Thu, 29 Feb 2024 18:05:47 +0100
Subject: Connectionists: CfP WIVACE 2024 - XVIII International Workshop on Artificial Life and Evolutionary Computation
Message-ID:

Call for Papers: XVIII International Workshop on Artificial Life and Evolutionary Computation (WIVACE 2024)
September 11-13, 2024, Namur, Belgium
Up-to-date information at https://events.info.unamur.be/wivace2024/

Scope of the Conference
The International Workshop on Artificial Life and Evolutionary Computation aims at bringing together researchers working in Artificial Life (ALife) and Evolutionary Computation to present and share their research in a multidisciplinary context. The workshop provides a forum for the discussion of new research directions and applications in these fields, where different disciplines can effectively meet. In particular, the workshop brings together computer scientists, mathematicians, biologists, psychologists, and cognitive scientists to discuss issues related to the origin of life, evolution and adaptation, collective and social behaviours, and other topics related to the development of technological solutions inspired by biological principles. The workshop is based on single-track sessions, with oral and poster presentations, and is structured around a format specifically designed to allow doctoral students and early-career researchers to meet, discuss their ideas, and disseminate their work in a collaborative and constructive environment.

Relevant Research Areas
WIVACE 2024 welcomes contributions dealing with any aspect of Artificial Life (ALife) and Evolutionary Computation (EvolComp). Topics of interest include:
- theoretical and empirical research in ALife and EvolComp
- application of ALife and EvolComp methods to real-world problems
Full list of topics available at https://events.info.unamur.be/wivace2024/

Important Dates
- Submission deadline: June 7, 2024
- Notification of acceptance: August 9, 2024
- Workshop: September 11-13, 2024
- Submission of post-workshop version: October 18, 2024

Publication Details
Workshop proceedings will be published by Springer in the CCIS series. All published contributions are peer-reviewed. Contributions may be submitted in one of the following forms:
- Full research papers (up to 12 pages) (published in the proceedings)
- Extended abstracts (1-4 pages) (published in the proceedings)
- Summaries of already published papers (1-4 pages) (not published in the proceedings)
Please submit papers at: https://cmt3.research.microsoft.com/User/Login?ReturnUrl=%2FWIVACE2024

Local Chairs
- Timoteo Carletti (Namur Institute for Complex Systems, University of Namur)
- Elio Tuci (Namur Institute for Complex Systems, University of Namur)
- Thierry-Sainclair Njougouo (Namur Institute for Complex Systems, University of Namur)

More detailed information at https://events.info.unamur.be/wivace2024/
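[Editorial aside, not part of the call for papers] As a small point of reference for readers new to the evolutionary computation methods in WIVACE's scope, the following is a minimal, hypothetical Python sketch of a (1+1) evolution strategy, one of the simplest evolutionary algorithms; the objective function and all parameters are illustrative only.

# Minimal sketch: a (1+1) evolution strategy. A single parent solution is mutated
# with Gaussian noise and replaced only when the child is at least as good.
import random

def sphere(x):
    """Toy objective to minimise: sum of squares (global optimum at the origin)."""
    return sum(v * v for v in x)

def one_plus_one_es(dim=5, sigma=0.3, generations=500, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sphere(parent)
    for _ in range(generations):
        child = [v + rng.gauss(0.0, sigma) for v in parent]  # Gaussian mutation
        fitness = sphere(child)
        if fitness <= best:          # keep the child only if it is at least as good
            parent, best = child, fitness
    return parent, best

if __name__ == "__main__":
    solution, fitness = one_plus_one_es()
    print(f"best fitness after 500 generations: {fitness:.4f}")

Actual ALife/EvolComp studies would replace the toy objective with a domain problem and typically use population-based variants with adaptive mutation rates.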
From stefano.panzeri at gmail.com Thu Feb 29 13:29:09 2024
From: stefano.panzeri at gmail.com (Stefano Panzeri)
Date: Thu, 29 Feb 2024 19:29:09 +0100
Subject: Connectionists: Multiple postdoctoral positions in computational neuroscience (neural coding), Hamburg, Germany
Message-ID:

I am opening three fully funded postdoctoral positions in computational neuroscience at the University Medical Center Hamburg-Eppendorf (UKE) in Hamburg, Germany.

We investigate how networks of neurons in the cerebral cortex encode, process and transmit information to generate behaviors such as sensation and decision-making. To address these questions, we develop and use information-theoretic and machine learning methods to study population coding, as well as neural network models to identify mechanisms for neural information processing and the generation of function. Our laboratory enjoys extensive long-term international collaborations with experimental laboratories, including Chris Harvey at Harvard Medical School, Wei-Chung Allen Lee at Boston Children's Hospital, Tommaso Fellin and Alessandro Gozzi at IIT, and Mriganka Sur at MIT, as well as collaborations with experimental laboratories in Hamburg (Andreas Engel, Tobias Donner, Ileana Hanganu-Opatz). The laboratory also offers a thriving, ambitious and well-funded research environment (with funds from the NIH BRAIN Initiative, the Simons Foundation, the European Union, and the ERC) and provides ample opportunities for advanced training and personal scientific growth.

We seek candidates holding a PhD in a quantitative or neuroscientific discipline, with a solid computational background and a keen interest in neuroscience. Candidates must be highly motivated and creative individuals who want to work in a dynamic, multi-disciplinary research environment and be happy to interact with both experimental and theoretical neuroscientists.

The positions are available starting immediately, and applications will be considered as soon as they are received, until the positions are filled. Funding is available for several years, and a minimum commitment of two years is expected from candidates.

Interested applicants are encouraged to email me (stefano.panzeri at gmail.com) as soon as possible to express their interest in the positions and to initiate a discussion about research projects.

For recent relevant publications from my lab, see:

Kuan A. et al (2024) Synaptic wiring motifs in posterior parietal cortex support decision-making. *Nature*: advance online publication (doi: 10.1038/s41586-024-07088-7)
Celotto M. et al (2023) An information-theoretic quantification of the content of communication between brain regions. *NeurIPS* 36, 2023
Panzeri S. et al (2022) The structures and functions of correlations in neural population codes. *Nature Reviews Neuroscience* 23: 551-567
Koren V., Panzeri S. (2022) Biologically plausible solutions for spiking networks with efficient coding. *NeurIPS* 35, 2022
Curreli S. et al (2022) Complementary encoding of spatial information in hippocampal astrocytes. *PLoS Biology* 20: e3001530
Valente M. et al (2021) Correlations enhance the behavioral readout of neural population activity in association cortex. *Nature Neuroscience* 24: 975-986
Chong E. et al (2020) Manipulating synthetic optogenetic odors reveals the coding logic of olfactory perception. *Science* 368: 1329
Runyan C. A. et al (2017) Distinct timescales of population coding across cortex. *Nature* 548: 92-96
Panzeri S. et al (2017) Cracking the neural code for sensory perception by combining statistics, intervention and behaviour. *Neuron* 93: 491-507
Pica G. et al (2017) Quantifying how much sensory information in a neural code is relevant for behavior. *NeurIPS* 30, 2017

Thanks,
Stefano Panzeri (stefano.panzeri at gmail.com)

From justyna.zwolak at nist.gov Thu Feb 29 22:14:31 2024
From: justyna.zwolak at nist.gov (Zwolak, Justyna P. (Fed))
Date: Fri, 1 Mar 2024 03:14:31 +0000
Subject: Connectionists: Focused issue on XAI in Sciences in MLST
Message-ID:

Dear Colleagues,

We are happy to announce that the XAI4Sci: Explainable Machine Learning for Sciences workshop has partnered with the journal Machine Learning: Science and Technology (IOP Publishing) to publish a focused issue on XAI in the sciences, with Craig Greenberg of the National Institute of Standards and Technology, Rich Caruana of Microsoft Research, and myself as guest editors.

Machine Learning: Science and Technology is a multidisciplinary open-access journal that bridges the application of machine learning across the sciences with advances in machine learning methods and theory, as motivated by physical insights.

The submission site opens on April 1st, 2024, and will remain open until June 30th, 2024. Feel free to reach out to us at xai4sci2024 at gmail.com if you are interested in submitting your work for consideration in the focused issue or if you have any questions.

Best,
Justyna, Craig, and Rich

Applied and Computational Mathematics Division
National Institute of Standards and Technology
100 Bureau Drive, Stop 8910
Gaithersburg, MD 20899-8910 USA
Phone: (301) 975-0527
Email: jpzwolak at nist.gov