Connectionists: [CfP] ICML Workshop on Models of Human Feedback for AI Alignment

Christos Dimitrakakis christos.dimitrakakis at gmail.com
Fri May 17 14:17:14 EDT 2024


Hello everyone,

We are pleased to announce the Models of Human Feedback for AI Alignment 
Workshop at ICML 2024, taking place on July 26 in Vienna, Austria.

The workshop will discuss crucial questions in AI alignment and 
learning from human feedback, including how to model human feedback, 
how to learn from diverse human feedback, and how to ensure alignment 
despite misspecified human models.

Call for Papers: https://sites.google.com/view/mhf-icml2024/call-for-papers
Submission Portal: 
https://openreview.net/group?id=ICML.cc/2024/Workshop/MFHAIA

Key dates:
Submission deadline: May 31st (AoE)
Acceptance notification: June 17th
Workshop: July 26th

We invite submissions related to the theme of the workshop.

Topics include but are not limited to:

Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning, ...)
Reinforcement Learning with Human Feedback (Fine-tuning LLMs, ...)
Human-AI Alignment, AI Safety, Cooperative AI
Robotics (Human-AI Collaboration, ...)
Preference Learning, Learning to Rank (Recommendation Systems, ...)
Computational Social Choice (Preference Aggregation, ...)
Operations Research (Assortment Selection, ...)
Behavioral Economics (Bounded Rationality, ...)
Cognitive Science (Effort in Decision-Making, ...)


Best,

Christos Dimitrakakis


