
Healthcare AI Blog

Jul 27, 2022

Ensuring the Fairness of Algorithms that Predict Patient Disease Risk

Decision-support tools for helping physicians follow clinical guidelines are increasingly using artificial intelligence, highlighting the need to remove bias from underlying algorithms.

Read here
Jun 13, 2022

Healthcare Algorithms Don’t Always Need to Be Generalizable

AIMI Co-Director Nigam Shah questions the need for generalizable models and proposes instead sharing recipes for creating useful local models.

Read here
Jun 13, 2022

How Do We Ensure that Healthcare AI is Useful?

In healthcare, predictive models need to be more than good predictors. Stanford scholars suggest a framework for determining a model’s worth.

Read here
Mar 28, 2022

New AI-Driven Algorithm Can Detect Autism in Brain “Fingerprints”

Led by AIMI faculty Kaustubh Supekar, Stanford scholars have created an algorithm that uses functional MRI scans to find patterns of neural activity in the brain that indicate autism.

Read here
Mar 3, 2022

Trust is AI’s Most Critical Contribution to Health Care

AI can reveal remarkable medical insights, but only if patients and doctors have faith in it. Thus, trust has become AI’s singular goal, says Stanford's James Zou.

Read here
Feb 15, 2022

Deploying AI in Healthcare: Separating the Hype from the Helpful

AIMI Co-Director Nigam Shah assesses the state of AI in healthcare and encourages executives to think beyond the model.

Read here
Dec 1, 2021

Broadening the Use of Quantitative MRI, a New Approach to Diagnostics

A promising technology is held back by lack of quality data, but with a newly released dataset, Stanford researchers are about to set it free.

Read here
Aug 23, 2021

“Flying in the Dark”: Hospital AI Tools Aren’t Well Documented

A new study reveals that hospital AI models are poorly documented, leaving users blind to potential problems such as flawed training data and calibration drift.

Read here
Aug 2, 2021

The Open-Source Movement Comes to Medical Datasets

Hoping to spur crowd-sourced AI applications in health care, Stanford’s AIMI center is expanding its free repository of datasets for researchers around the world.

Read here
Jul 19, 2021

De-Identifying Medical Patient Data Doesn’t Protect Our Privacy

AIMI Co-Director Nigam Shah makes the case that de-identifying health records used for research doesn’t offer anonymity and hinders the learning health system.

Read here
Jun 1, 2021

Agile NLP for Clinical Text: COVID-19 and Beyond

With Trove, weakly supervised NLP of clinical text is fast, adaptive, shareable, and high performing.

Read here
Mar 16, 2021

Should AI Models Be Explainable? That Depends.

AIMI Co-Director Nigam Shah advocates for clarity about the different types of interpretability and the contexts in which each is useful.

Read here
Nov 2, 2020

When Algorithmic Fairness Fixes Fail: The Case for Keeping Humans in the Loop

Attempts to adjust clinical prediction algorithms to make them fair can also make them less accurate.

Read here
Apr 1, 2020

Algorithm Helps Detect Heart Abnormalities

A Stanford AIMI-led team of researchers is using artificial intelligence to detect abnormalities in the heart through an algorithm that assesses the rate at which the heart pumps blood.

Read here
Nov 29, 2018

AI Rivals Radiologist-level X-ray Screening for Certain Lung Diseases

In a matter of seconds, a new algorithm read chest X-rays for several possible maladies, performing with accuracy comparable to or better than that of radiologists.

Read here