Fairness and Usefulness
The value we get from using a model to guide care is an interplay of the model's output, the intervention policy, the available capacity, and the benefit/harm of the intervention itself, as outlined in this HAI blog post. In reviewing the existing state of affairs for responsible adoption of AI, we found that across 15 community-generated guidelines there are 220 items "to report", with very limited guidance on how to assess fairness or utility/usefulness (JAMA Open).
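As a concrete illustration, this interplay can be sketched as a toy simulation. All names (`expected_utility`, `benefit_tp`, `harm_fp`) and numbers here are hypothetical, assumed for illustration only; this is not the published framework:

```python
import numpy as np

def expected_utility(y_true, y_score, threshold, benefit_tp, harm_fp, capacity=None):
    """Toy utility of model-guided care: flag patients whose risk score
    exceeds a threshold, intervene on at most `capacity` of the
    highest-risk flags, and score each correct intervention as a benefit
    and each unnecessary one as a harm."""
    order = np.argsort(-y_score)              # highest risk first
    flagged = order[y_score[order] >= threshold]
    if capacity is not None:
        flagged = flagged[:capacity]          # limited intervention capacity
    tp = int(y_true[flagged].sum())           # correct interventions
    fp = len(flagged) - tp                    # unnecessary interventions
    return tp * benefit_tp - fp * harm_fp

# Example: 4 patients, capacity for 3 interventions
u = expected_utility(np.array([1, 1, 0, 0]),
                     np.array([0.9, 0.8, 0.7, 0.1]),
                     threshold=0.5, benefit_tp=1.0, harm_fp=0.5, capacity=3)
# two true positives, one false positive -> 2*1.0 - 1*0.5 = 1.5
```

Sweeping the threshold, capacity, or benefit/harm parameters in a simulation like this shows how the same model can be useful under one deployment policy and useless under another.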
To address this problem, we joined the founding team of the Coalition for Health AI, a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. Its mission is to provide guidelines for an ever-evolving landscape of health AI tools, to ensure high-quality care, increase credibility among users, and meet health care needs.
To bridge the gaps we found, we developed a framework for estimating usefulness (JAMIA) and made it broadly available as a Python library for usefulness simulations of machine learning models in healthcare (JBI). We also developed a way to assess fairness in terms of the consequences of using a model to guide care (BMJ Informatics). Finally, to demonstrate application in practice, we conducted a fairness audit (Frontiers in Digital Health), which required 115 person-hours over 8-10 months.
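A minimal sketch of assessing fairness in terms of consequences, with hypothetical function and field names assumed for illustration (this is not the BMJ Informatics method itself): tally, per subgroup, who a deployed threshold flags and who it misses, so downstream benefits and harms can be compared across groups.

```python
import numpy as np

def group_consequences(y_true, y_score, group, threshold):
    """Tally, per subgroup, the consequences of acting on model flags:
    correct interventions (tp), unnecessary ones (fp), and missed
    patients (fn)."""
    out = {}
    for g in np.unique(group):
        m = group == g
        flagged = y_score[m] >= threshold
        actual = y_true[m].astype(bool)
        tp = int((actual & flagged).sum())
        out[g] = {
            "tp": tp,
            "fp": int(flagged.sum()) - tp,
            "fn": int((actual & ~flagged).sum()),
        }
    return out
```

Comparing these counts, or the utilities they imply, across groups surfaces whether one subgroup disproportionately bears the false positives or the missed cases under a given deployment policy.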