Visualizing Meta-Explanations in Early Intervention Systems for Police Departments

Damon Crockett, Joe Walsh, Klaus Ackermann, Andrea Navarrete, Rayid Ghani

Research output: Contribution to conference › Poster › peer-review


The recent spread of machine learning methods into critical decision-making, especially in public policy domains, has necessitated a focus on their intelligibility and transparency. The literature on intelligibility in machine learning offers a range of methods for identifying the model variables important for making predictions, but measures of predictor importance may be poorly understood by human users, leaving the crucial matter unexplained: namely, why the predictor in question is important. There is a critical need for tools that interpret predictor importances in a way that helps users understand, trust, and act on model predictions. We describe a prototype system for achieving these goals and discuss a particular use case: early intervention systems for police departments, which model officers’ risk of having “adverse incidents” with the public.
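The abstract refers to measures of predictor importance without specifying one. As a minimal illustrative sketch (not the paper’s actual method), permutation importance scores a predictor by shuffling its values across rows and measuring how much a model’s accuracy drops; the toy data and model below are hypothetical:

```python
import random

# Hypothetical toy data: each row is (feature vector, adverse-incident label).
# The features and model are illustrative only, not the paper's actual system.
rows = [([1, 0], 1), ([1, 1], 1), ([0, 0], 0), ([0, 1], 0)] * 25
random.seed(0)

def predict(x):
    # Toy model: only the first feature influences the prediction.
    return 1 if x[0] >= 0.5 else 0

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

def permutation_importance(data, feature_idx, n_repeats=10):
    """Average drop in accuracy when one feature's column is shuffled."""
    base = accuracy(data)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x, _ in data]
        random.shuffle(col)
        shuffled = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                    for (x, y), v in zip(data, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / len(drops)

print(permutation_importance(rows, 0))  # first feature: large accuracy drop
print(permutation_importance(rows, 1))  # second feature: no drop (unused)
```

A score like this says *that* a predictor matters, but not *why*; the gap between such numbers and a human-interpretable explanation is exactly what the meta-explanation visualizations described here aim to bridge.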
Original language: English
Publication status: Published - 1 Oct 2017
Event: IEEE Information Visualization Conference 2017 - Phoenix, United States of America
Duration: 1 Oct 2017 – 6 Oct 2017


Conference: IEEE Information Visualization Conference 2017
Abbreviated title: IEEE InfoVis 2017
Country/Territory: United States of America
