Connectionists: Workshop Call for Contribution on Trustworthy AI in the Wild - Extended Deadline
Malte Schilling
mschilli at techfak.uni-bielefeld.de
Fri Sep 3 12:10:52 EDT 2021
We would like to advertise our workshop on Trustworthy AI and invite contributions; early work is welcome and encouraged.
Regards
Barbara Hammer, Malte Schilling, Laurenz Wiskott
==========================================================
2021 KI in Berlin, to be held virtually
Workshop on Trustworthy AI in the wild
Half-day workshop on 27th of September 2021
AI solutions are starting to have an enormous impact on our lives: they are a key enabler of the future digital industry, a potential game-changer for experimentation and discovery in science, and a prevalent technology in everyday services such as internet search and human-machine communication. Moreover, AI is involved in addressing humanity's grand challenges, examples being AI-based environment-friendly mobility concepts, augmentation of human capabilities by intelligent assistive systems in an ageing society, and support in developing medical therapies and vaccines.
Yet, the very nature of AI technologies brings a number of novel threats that need to be addressed for trustworthy AI: many machine learning models act as black boxes, which can lead to unexpected behavior, for example, when human and machine perception differ considerably. As models are trained on real-life data, there is the risk that such AI models allow unauthorized access to sensitive information contained in that data. Furthermore, data biases, caused by spurious correlations in the data, can be captured by ML models and their predictions, leading to systematic disadvantages for specific individuals or groups (e.g. ethnic groups). The ubiquity of AI in virtually every aspect of life therefore has an enormous impact on the way in which we as a society communicate, work, decide, and interact.
Hence, novel concepts for guaranteeing the security, safety, privacy, and fairness of AI, and for creating AI systems that support humans rather than incapacitating them, are of utmost importance and constitute the research area of Trustworthy AI. Trustworthy AI aims for technologies that not only solve a previously defined task but also allow insight into the functioning of the underlying system. Why did the system act in a certain way and not choose a different solution? Which features were important for the decision, and how sure is the system of its choice, i.e. can I trust this decision?
The workshop aims, first, at understanding machine-learning-based approaches towards explainable AI solutions. Second, it focuses on how we can make AI solutions more trustworthy. The goal of the workshop is to discuss existing concepts of trustworthy AI and to provide a platform for formulating new ideas and proposals to overcome existing limitations.
The workshop is aimed at interested researchers with a background in machine learning (supervised and unsupervised learning, reinforcement learning), traditional AI techniques (reasoning, planning), robotics (humanoids, probabilistic robotics), HMI/HRI, or multi-agent systems (coordination and cooperation).
TOPICS OF INTEREST
--------------------------------
- Testable criteria for trustworthy AI technologies
- How to guarantee the privacy and security of AI technologies?
- Legal, ethical, and societal implications of AI solutions
- How can systems be autonomous and safe? In interaction with humans, systems should be guaranteed to act safely and not endanger humans or other agents.
- human agency, autonomy, and oversight
- technical robustness and safety: reliability, resilience to attacks, reproducibility
- privacy and data governance
- transparency and explainability
- diversity, non-discrimination and fairness
- accountability
OVERVIEW PROGRAM
--------------------------------
Overall, the workshop aims at a multidisciplinary perspective on key aspects and challenges of Trustworthy AI, in particular when AI is in interaction with humans. The presentations will therefore reflect the diversity of approaches and topics, and there will be ample time for discussion.
The half-day workshop will consist of invited talks from AI and machine learning, and we are open to contributed talks. Overall, we plan for two sessions of three half-hour talks each. The first session will be concluded by short presentations for the posters (poster flashlights).
Confirmed Speakers:
- Prof. Marc Toussaint, Head of the Learning & Intelligent Systems Lab at the EECS Faculty of TU Berlin
- Prof. Isabel Valera, Department of Computer Science of Saarland University, Saarbrücken
SUBMISSION
-------------------
Participants are invited to submit a contribution (via email: mschilli at techfak.uni-bielefeld.de) as an extended abstract (maximum two pages). Contributions will be reviewed and selected by the organizers. Contributions can be made in three categories:
- for contributed talks (of around 15 minutes plus discussion);
- for poster presentation (using a spatial chat platform for our virtual poster session);
- for presentation of proposed or planned projects or starting initiatives in the area of trustworthy AI (this will be part of the poster session; an initiative may present one or more posters describing its plans and goals in order to invite discussion).
The workshop contributions will appear as online proceedings on the workshop webpage.
We want to give researchers a chance to present their ongoing or planned work, but we also want to provide a forum for relevant work that has recently been published in journals or at other conferences.
IMPORTANT DATES
----------------------------
September 10, 2021: submission deadline (extended)
September 27, 2021: day of the workshop
FURTHER INFORMATION
------------------------------------
For further information, please contact Malte Schilling (mschilli at techfak.uni-bielefeld.de) and see the website containing more information: https://dataninja.nrw/?page_id=343
Organizers:
Barbara Hammer (CITEC, Bielefeld University, Germany),
Malte Schilling (Machine Learning Group, Bielefeld University),
Laurenz Wiskott (Institute for Neural Computation, Ruhr-University Bochum).