Connectionists: BMVA Technical meeting : 2nd CfP Cognitively inspired explainable perception-based AI
Serge Thill
serge.thill at his.se
Mon Nov 13 11:46:52 EST 2017
Dear all,
this is just a reminder about the meeting below. The deadline for contribution summaries is approaching (Dec 1).
cheers,
Serge
Begin forwarded message:
From: Andrew Gilbert <a.gilbert at surrey.ac.uk>
Subject: Re: BMVA Technical meeting : 2nd CfP Cognitively inspired explainable perception-based AI
Date: 19 October 2017 at 16:19:57 BST
To: <BMVA at JISCMAIL.AC.UK>
Reply-To: <a.gilbert at surrey.ac.uk>
BMVA Technical Meeting: Cognitively inspired explainable perception-based AI
One-day BMVA symposium in London, UK, on Wednesday 7 February 2018
Chairs: Serge Thill, University of Plymouth, and Maria Riveiro, University of Skövde
Keynote speakers: Alessandra Sciutti, Italian Institute of Technology; Brad Hayes, University of Colorado Boulder; and Yiannis Demiris, Imperial College
www.bmva.org/meetings
Call for Papers:
AI systems are increasingly present in everyday society, from simple computer systems to agents such as autonomous vehicles and social robots. In this context, several researchers have noted that it is critical to understand how human users perceive such systems - in particular, the degree to which they understand how a system works, and what mental models they build of its underlying algorithms. "Explainable AI" thus refers to AI systems that behave, or provide information, in ways that make their workings comprehensible to the human user.
For this meeting, we are interested in AI systems that operate at least somewhat autonomously based on real-world sensory data (in particular, based on machine vision). This includes robotics and autonomous vehicles, but can also cover disembodied systems such as decision support systems.
We are particularly interested in contributions that give detailed consideration to the fact that these AI systems sense the environment (often through machine vision), and that consider the possible role of understanding human cognitive mechanisms in the design of such systems. Relevant human cognitive mechanisms include, for example, how humans themselves perceive and interpret information (which may be relevant to the design of explainable information processing in a machine), and how humans interact with other intelligent agents, including their expectations of such interactions - which may impose constraints on the design of explainable systems that human users perceive as intelligent, interactive agents.
Submission Deadline:
All those interested in presenting at this meeting are invited to submit a summary of their talk at https://goo.gl/forms/OByON7vvlUX0xtDt1 by 1 December 2017 (firm deadline).
For queries, please contact the organisers: serge.thill at plymouth.ac.uk, maria.riveiro at his.se
Registration:
Book online at www.bmva.org/meetings
£16 for BMVA Members, £36 for Non-Members, lunch included
Thanks for reading
Andrew Gilbert