
Fairness and Usefulness


The value we get from using a model to guide care is an interplay of the model's output, the intervention policy, the capacity to deliver the intervention, and the benefit/harm of the intervention itself, as outlined in this HAI post.
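As a rough illustration of this interplay, the sketch below computes the net benefit of acting on a model's predictions under a capacity limit. All names and numbers here (risk threshold, capacity, benefit and harm values, the simulated data) are hypothetical placeholders, not parameters taken from the HAI post or our papers.

```python
import numpy as np

# Illustrative only: net benefit of model-guided care depends on the model's output,
# the intervention policy (who is flagged), capacity (how many can be treated),
# and the benefit/harm of the intervention itself.
rng = np.random.default_rng(0)
n = 1000
risk = rng.uniform(0, 1, size=n)             # model-predicted risk per patient (simulated)
event = rng.uniform(0, 1, size=n) < risk     # adverse event if left untreated (simulated)

threshold = 0.7            # intervention policy: flag patients above this risk
capacity = 100             # at most this many interventions can be delivered
benefit_if_event = 1.0     # utility gained by treating a patient who would have had the event
harm_per_treatment = 0.05  # harm/cost incurred by every treatment delivered

by_risk = np.argsort(-risk)                           # highest risk first
treated = by_risk[risk[by_risk] >= threshold][:capacity]

net_benefit = benefit_if_event * event[treated].sum() - harm_per_treatment * len(treated)
print(f"treated {len(treated)} patients, net benefit = {net_benefit:.1f}")
```

Changing the threshold, the capacity, or the benefit/harm values changes the net benefit even when the model itself is untouched, which is the point of the interplay described above.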

When we reviewed the existing state of affairs for responsible adoption of AI, we found that across 15 community-generated guidelines there are 220 items "to report", which is too heavy a reporting burden to be practical. We also found that there is very limited guidance on how to assess fairness or utility/usefulness (paper in JAMA Open).

To address this problem, we joined the founding team of The Coalition for Health AI, a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science whose mission is to provide guidelines for an ever-evolving landscape of health AI tools, so that they ensure high-quality care, increase credibility amongst users, and meet health care needs.

In parallel, to bridge the gaps we found, we developed a framework for estimating usefulness (JAMIA paper), as well as a way to assess fairness in terms of the consequences of using a model to guide care (BMJ Informatics). Finally, to demonstrate application in practice, we conducted a fairness audit (fairness audit paper, in press, attached), which required 115 person-hours over 8-10 months.
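In the same illustrative spirit, assessing fairness in terms of consequences can be sketched as comparing the realized net benefit of model-guided care across groups, rather than comparing model metrics alone. The group labels, policy, and utility values below are again hypothetical and are not drawn from the audit itself.

```python
import numpy as np

# Illustrative only: compare consequences (net benefit) of model-guided care by group.
rng = np.random.default_rng(1)
n = 1000
risk = rng.uniform(0, 1, size=n)            # model-predicted risk (simulated)
event = rng.uniform(0, 1, size=n) < risk    # adverse event if left untreated (simulated)
group = rng.integers(0, 2, size=n)          # hypothetical binary group label

treated = np.argsort(-risk)[:100]           # policy: treat the 100 highest-risk patients
benefit, harm = 1.0, 0.05                   # assumed utility values

for g in (0, 1):
    members = treated[group[treated] == g]
    nb = benefit * event[members].sum() - harm * len(members)
    print(f"group {g}: treated={len(members)}, "
          f"net benefit per group member={nb / (group == g).sum():.3f}")
```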

We are now working on wrapping these efforts into a reusable framework and tool, supported by a Gordon and Betty Moore Foundation proposal on a "virtual model deployment" (VMD), to enable such analyses routinely.