Connectionists: Workshop CFP - Human Aligned AI: Towards Algorithms that Humans Can Trust

Grossberg, Stephen steve at bu.edu
Thu Jun 13 11:00:51 EDT 2024


Dear Antonio and colleagues,

I just saw the announcement below of your workshop on Human Aligned AI, notably your statement that:

“The rapid advancement of Artificial Intelligence (AI) brings forth critical considerations for its trustworthiness, including safety and security, fairness, privacy, and explainability, demanding a human-aligned approach” [boldface mine].

I am writing to bring to your attendees' attention that a human-aligned solution to this problem has been available for several decades.

The following article discusses this problem and a solution to it:

Grossberg, S. (2020). A path towards Explainable AI and autonomous adaptive intelligence: Deep Learning, Adaptive Resonance, and models of perception, emotion, and action.
Frontiers in Neurorobotics, June 25, 2020.
https://www.frontiersin.org/articles/10.3389/fnbot.2020.00036/full

The Abstract of the article explains the extent to which a trustworthy and explainable biological neural network solution to the problem exists:

“Biological neural network models whereby brains make minds help to understand autonomous adaptive intelligence. This article summarizes why the dynamics and emergent properties of such models for perception, cognition, emotion, and action are explainable, and thus amenable to being confidently implemented in large-scale applications. Key to their explainability is how these models combine fast activations, or short-term memory (STM) traces, and learned weights, or long-term memory (LTM) traces. Visual and auditory perceptual models have explainable conscious STM representations of visual surfaces and auditory streams in surface-shroud resonances and stream-shroud resonances, respectively. Deep Learning is often used to classify data. However, Deep Learning can experience catastrophic forgetting: At any stage of learning, an unpredictable part of its memory can collapse. Even if it makes some accurate classifications, they are not explainable and thus cannot be used with confidence. Deep Learning shares these problems with the back propagation algorithm, whose computational problems due to non-local weight transport during mismatch learning were described in the 1980s. Deep Learning became popular after very fast computers and huge online databases became available that enabled new applications despite these problems. Adaptive Resonance Theory, or ART, algorithms overcome the computational problems of back propagation and Deep Learning. ART is a self-organizing production system that incrementally learns, using arbitrary combinations of unsupervised and supervised learning and only locally computable quantities, to rapidly classify large non-stationary databases without experiencing catastrophic forgetting. ART classifications and predictions are explainable using the attended critical feature patterns in STM on which they build. 
The LTM adaptive weights of the fuzzy ARTMAP algorithm induce fuzzy IF-THEN rules that explain what feature combinations predict successful outcomes. ART has been successfully used in multiple large-scale real-world applications, including remote sensing, medical database prediction, and social media data clustering. Also explainable are the MOTIVATOR model of reinforcement learning and cognitive-emotional interactions, and the VITE, DIRECT, DIVA, and SOVEREIGN models for reaching, speech production, spatial navigation, and autonomous adaptive intelligence. These biological models exemplify complementary computing and use local laws for match learning and mismatch learning that avoid the problems of Deep Learning.”
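For readers unfamiliar with ART, the category choice, vigilance test, and local match learning that the abstract describes can be sketched in a few lines. The class below is a minimal, illustrative fuzzy ART sketch assembled from the standard published equations; the class name, parameter values, and example inputs are my own assumptions, not code from the cited article:

```python
class FuzzyART:
    """Minimal fuzzy ART sketch (illustrative, not the author's code).

    Each stored weight vector is an LTM trace for one learned category.
    A vigilance test decides whether an input resonates with an existing
    category or commits a new one, so old memories are refined rather
    than overwritten -- the mechanism that avoids catastrophic forgetting.
    """

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: how closely an input must match a category
        self.alpha = alpha  # choice parameter (small, breaks ties toward larger categories)
        self.beta = beta    # learning rate; 1.0 gives fast, one-shot learning
        self.w = []         # one LTM weight vector per committed category

    def _complement_code(self, x):
        # Complement coding [x, 1 - x] preserves amplitude information.
        return list(x) + [1.0 - v for v in x]

    def train(self, x):
        i = self._complement_code(x)
        norm_i = sum(i)
        # Category choice: T_j = |i AND w_j| / (alpha + |w_j|), with fuzzy AND = min.
        scores = [
            (sum(min(a, b) for a, b in zip(i, w)) / (self.alpha + sum(w)), j)
            for j, w in enumerate(self.w)
        ]
        # Search categories in order of choice value.
        for _, j in sorted(scores, reverse=True):
            inter = [min(a, b) for a, b in zip(i, self.w[j])]
            if sum(inter) / norm_i >= self.rho:
                # Resonance: local match learning moves w_j toward i AND w_j.
                self.w[j] = [self.beta * m + (1 - self.beta) * c
                             for m, c in zip(inter, self.w[j])]
                return j
        # No category passes vigilance: commit a new one (incremental learning).
        self.w.append(list(i))
        return len(self.w) - 1
```

With high vigilance, two dissimilar inputs land in distinct categories, and re-presenting either one refines its own category without erasing the other — every quantity used is locally computable, as the abstract emphasizes.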

Best,

Steve

Stephen Grossberg
Wang Professor of Cognitive and Neural Systems
Director, Center for Adaptive Systems
Emeritus Professor of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering
Boston University
sites.bu.edu/steveg/
steve at bu.edu
http://en.wikipedia.org/wiki/Stephen_Grossberg
http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en
https://sites.bu.edu/steveg/files/2021/08/Grossberg-CV-8-14-21.pdf
https://youtu.be/9n5AnvFur7I
https://www.youtube.com/watch?v=_hBye6JQCh4
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Antonio Cinà <antonio.cina at unige.it>
Date: Thursday, June 13, 2024 at 10:21 AM
To: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>, aixia at aixia.it <aixia at aixia.it>, Martina Mattioli <martina.mattioli at unive.it>, Teresa Scantamburlo <teresa.scantamburlo at unive.it>
Subject: Connectionists: Workshop CFP - Human Aligned AI: Towards Algorithms that Humans Can Trust

[Apologies if you receive multiple copies of this CFP]



Call for papers: Workshop on "Human Aligned AI: Towards Algorithms that Humans Can Trust" at ICMLA 2024 - https://www.icmla-conference.org/icmla24/workshops.html

Accepted papers will be included in the main conference proceedings.



International Conference on Machine Learning and Applications (ICMLA 2024).

18-20 December 2024, Miami, Florida, USA - https://www.icmla-conference.org/icmla24/



DESCRIPTION:

The rapid advancement of Artificial Intelligence (AI) brings forth critical considerations for its trustworthiness, including safety and security, fairness, privacy, and explainability, demanding a human-aligned approach. This workshop discusses the imperative for embedding these principles in the design and deployment of AI systems to ensure they align with human values and societal norms. We examine the challenges and strategies towards developing trustworthy AI, preventing unintended consequences, and addressing potential security vulnerabilities. Furthermore, we argue for the necessity of a framework for the responsible development of AI, advocating for interdisciplinary collaboration and regulatory oversight. The goal is to foster AI technologies that enhance human well-being, uphold privacy and dignity, and contribute to a secure and equitable future.



TOPICS OF INTEREST:

Trustworthy AI

- Adversarial attacks and defenses on machine learning and deep learning

- Formal verification of machine learning and deep learning models

- Privacy-preserving machine learning and deep learning

- Explainability and Fairness

- Theoretical foundations of Human Aligned AI

Applications of Trustworthy AI

- Generative AI

- Healthcare

- Cybersecurity

- Transportation

- Robotics

- Industry



SUBMISSION:

Prospective authors must submit their papers through the ICMLA portal, following the instructions provided at https://www.icmla-conference.org/icmla24/howtosubmit.html and selecting "Workshop: Human Aligned AI: Towards Algorithms that Humans Can Trust".

Each paper will undergo peer review for acceptance.



IMPORTANT DATES:

Submission of papers: 31 July 2024

Notification of acceptance: 31 August 2024

ICMLA conference: 18-20 December 2024

https://www.icmla-conference.org/icmla24/keydates.html



SPECIAL SESSION ORGANISERS:

Luca Oneto, University of Genoa

Battista Biggio, University of Cagliari

Noemi Greco, Google

Davide Anguita, University of Genoa

Fabio Roli, University of Genoa

Maura Pintor, University of Cagliari

Luca Demetrio, University of Genoa

Antonio Emanuele Cinà, University of Genoa

Ambra Demontis, University of Cagliari



ACKNOWLEDGEMENTS:

ELSA Project https://www.elsa-ai.eu/


Antonio Emanuele Cinà
Assistant Professor @ University of Genoa, DIBRIS
Via All'Opera Pia, 13, 16145 Genova GE
