[AI Seminar] AI Lunch -- Yair Zick -- April 5th, 2016
vitercik at cs.cmu.edu
Wed Mar 30 22:10:20 EDT 2016
Dear faculty and students,
We look forward to seeing you this Tuesday, April 5th, at noon in NSH
3305 for AI lunch. To learn more about the seminar and lunch, or to
volunteer to give a talk, please visit the AI Lunch webpage
<http://www.cs.cmu.edu/~aiseminar/>. *We are looking for someone to give a
talk on May 10th.*
On Tuesday, Yair Zick <http://www.cs.cmu.edu/~yairzick/> will give a talk
titled "Towards a Value Theory for Algorithmic Transparency."
*Abstract*: Algorithmic systems that employ machine learning play an
ever-increasing role in making substantive decisions in modern society,
ranging from online personalization to insurance and credit decisions to
predictive policing. But their decision-making processes are often opaque
-- it is difficult to explain why a certain decision was made -- which
raises concerns about the inadvertent introduction of harms.
We describe a new research agenda, applying game-theoretic centrality to
the algorithmic transparency problem. We develop a formal foundation to
improve the transparency of such decision-making systems that operate over
large volumes of personal information about individuals.
First, we describe an axiomatic approach to measuring feature importance in
datasets; that is, we derive a function that uniquely satisfies a set of
reasonable properties for the measurement of feature influence.
Next, we introduce a family of Quantitative Input Influence (QII) measures
that capture the degree of influence of inputs on outputs of machine
learning algorithms. Our causal QII measures carefully account for
correlations among inputs and capture input influence on aggregate effects
on groups of individuals (e.g., disparate impact based on race). The QII
measures also capture the joint and marginal influence of a set of inputs
on outputs using an aggregation method with a strong theoretical
justification. Apart from demonstrating general trends in a system, QII
guides the construction of personalized transparency reports that provide
insights into an individual's classification outcomes. Our empirical
validation demonstrates that our QII measures are a useful transparency
mechanism when black box access to the learning system is available; in
particular, they provide better explanations than standard associative
measures for a host of scenarios that we consider.
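The aggregation method referenced above is, in the underlying papers, the Shapley value from cooperative game theory: each input's influence is its average marginal contribution to the output over random orderings of the inputs. The following is a minimal, hypothetical sketch of that idea -- the toy classifier, feature names, and sampling scheme are illustrative assumptions, not the authors' actual system:

```python
import random

# Hypothetical stand-in for a black-box classifier over three binary
# inputs: approve iff income is high and debt is low. The third input
# (zip_even) is deliberately irrelevant.
def classify(income_high, debt_low, zip_even):
    return 1 if (income_high and debt_low) else 0

FEATURES = ["income_high", "debt_low", "zip_even"]

def shapley_influence(classify, point, population, n_samples=2000):
    """Monte Carlo estimate of Shapley-style input influence at `point`.

    For each random ordering of the features, start from a randomly
    drawn individual and fix the features one at a time to their values
    at `point`; a feature's marginal contribution is the change in the
    classifier's output when that feature is fixed.
    """
    influence = {f: 0.0 for f in FEATURES}
    for _ in range(n_samples):
        order = random.sample(FEATURES, len(FEATURES))
        current = dict(random.choice(population))  # fully randomized start
        prev = classify(**current)
        for f in order:
            current[f] = point[f]          # intervene: fix f to its value
            out = classify(**current)
            influence[f] += out - prev     # marginal contribution of f
            prev = out
    return {f: total / n_samples for f, total in influence.items()}

random.seed(0)
population = [{f: random.randint(0, 1) for f in FEATURES}
              for _ in range(500)]
point = {"income_high": 1, "debt_low": 1, "zip_even": 0}
result = shapley_influence(classify, point, population)
print(result)
```

For this toy model the two causally relevant inputs receive roughly equal positive influence, while the irrelevant input's influence is near zero -- the kind of distinction, between causal influence and mere association, that the talk's QII measures are designed to make precise.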
This work is based on two papers: Amit Datta, Anupam Datta, Ariel D.
Procaccia, and Yair Zick, "Influence in Classification via Cooperative
Game Theory," which appeared in the 24th International Joint Conference
on Artificial Intelligence (IJCAI 2015); and Anupam Datta, Shayak Sen,
and Yair Zick, "Algorithmic Transparency via Quantitative Input
Influence: Theory and Experiments with Learning Systems," to appear in
the 37th IEEE Symposium on Security and Privacy.
Ellen and Ariel