The recent spread of machine learning methods into critical decision-making, especially in public policy domains, has necessitated a focus on their intelligibility and transparency. The literature on intelligibility in machine learning offers a range of methods for identifying the model variables important for making predictions, but measures of predictor importance may be poorly understood by human users, leaving unexplained the crucial matter of why a given predictor is important. There is a critical need for tools that interpret predictor importances in a way that helps users understand, trust, and act on model predictions. We describe a prototype system for achieving these goals and discuss a particular use case: early intervention systems for police departments, which model officers' risk of having "adverse incidents" with the public.
| Publication status | Published - 1 Oct 2017 |
| Event | IEEE Information Visualization Conference 2017 |
| Abbreviated title | IEEE InfoVis 2017 |
| Location | Phoenix, United States of America |
| Duration | 1 Oct 2017 → 6 Oct 2017 |