Apr
Seminar: Variable importance, cohort Shapley and algorithmic fairness
Professor Art Owen, Department of Statistics, School of Humanities and Sciences, Stanford University, USA
While the talk does not involve difficult mathematics, it introduces several new ideas about variable importance and the Shapley value, and connects them to the data in the fairness example.
The Doodle form to reserve time slots for meetings with Art is available here:
https://doodle.com/bp/malgorzatabogdan/meetings-with-art-owen
Abstract:
There are several enormous literatures on variable importance, with little interaction among them. Variable importance intersects with explainable AI and also algorithmic fairness, as when a protected variable like race or gender is important for some high-stakes outcome. The usual ways to study variable importance change some but not all of a subject's variables. This can create unrealistic or even impossible combinations that in our view should not be used. We propose a solution that simply reweights existing data instead of creating new data. It is based on the Shapley value from algorithmic game theory. We illustrate it on the COMPAS data set, which rated whether a subject was likely to commit another crime if released. We can compute, for every subject and every variable, how important that variable was for the decision about the subject. This cohort Shapley method can also be used in privacy settings by quantifying how powerful a variable is for identifying a specific subject. This talk is based on recent joint work with Ben Seiler, Masayoshi Mase and Naofumi Hama. The opinions expressed are my own, and not those of Stanford, the National Science Foundation, or Hitachi, Ltd.
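To give a flavour of the idea of reweighting existing data rather than creating new combinations, here is a minimal toy sketch of a cohort-style Shapley computation. It is not the speakers' implementation; the data, variable names and the choice of "agrees exactly on a variable" as the similarity rule are illustrative assumptions. The value of a variable subset is taken to be the mean outcome over the cohort of real subjects who match the target subject on those variables, so no impossible variable combinations are ever formed.

```python
from itertools import combinations
from math import factorial

# Toy data (illustrative only): rows are subjects, columns are binary
# variables; y is the observed outcome for each subject.
X = [
    (0, 0, 1),
    (0, 1, 1),
    (1, 0, 0),
    (1, 1, 0),
    (0, 0, 0),
]
y = [1.0, 1.0, 0.0, 0.0, 1.0]

def cohort_mean(target, S):
    """Mean outcome over the cohort of subjects that agree with
    `target` on every variable index in S (real rows only)."""
    members = [i for i, row in enumerate(X)
               if all(row[j] == X[target][j] for j in S)]
    return sum(y[i] for i in members) / len(members)

def cohort_shapley(target, n_vars):
    """Exact Shapley value of each variable for one subject,
    using the cohort mean as the value function."""
    phi = [0.0] * n_vars
    for j in range(n_vars):
        others = [k for k in range(n_vars) if k != j]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = (factorial(len(S)) * factorial(n_vars - len(S) - 1)
                     / factorial(n_vars))
                phi[j] += w * (cohort_mean(target, S + (j,))
                               - cohort_mean(target, S))
    return phi

phi = cohort_shapley(0, 3)
print(phi)
```

By the Shapley efficiency property, the per-variable values for a subject sum to the difference between that subject's fully matched cohort mean and the overall mean, which is what makes the attributions add up to something interpretable per subject.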
About the event
Location:
EC3:207
Contact:
jonas [dot] wallin [at] stat [dot] lu [dot] se