2023 AIMI-HAI Partnership Grants

The AIMI-HAI Partnership Grant funds new and ambitious ideas that reimagine artificial intelligence in healthcare, using real clinical datasets and targeting near-term clinical applications.

We are delighted to announce the 2023 funded projects. Visit the Call for Proposals page for criteria and eligibility. If you have any questions, please contact hai-grants@lists.stanford.edu.


CBT-AI Companion: An Application for Improving Mental Health Treatment

PI: Johannes Eichstaedt

Abstract: We seek to develop a "CBT-AI Companion," an LLM-based application designed to enhance mental health treatment. Addressing the prevalent but undertreated problems of depression and anxiety, the project aims to increase the effectiveness of psychotherapy, particularly cognitive behavioral therapy (CBT). Traditional methods often see low compliance with practicing therapy skills, which are crucial for treatment success. The project leverages large language models (LLMs) to support patients in practicing cognitive and behavioral skills, offering immediate feedback and personalized experiences based on the patient’s context and stressors. This approach is expected to improve clinical outcomes through stronger engagement in skill practice.
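
As a loose illustration of the feedback loop such a companion implies, the sketch below prompts a general-purpose LLM to respond to one cognitive-restructuring exercise. The model name, system prompt, and cbt_feedback helper are hypothetical placeholders, not the project's design; any chat-completion API would serve.

```python
# A loose sketch of one feedback turn, assuming a generic chat-completion API
# (OpenAI's Python SDK here). The model name, prompt, and cbt_feedback helper
# are hypothetical illustrations, not the project's system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a supportive coach helping a patient practice cognitive "
    "restructuring. Given an automatic negative thought and the patient's "
    "attempted reframe, gently note any cognitive distortions and suggest "
    "one concrete improvement. Do not give medical advice."
)

def cbt_feedback(thought: str, reframe: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Thought: {thought}\nReframe: {reframe}"},
        ],
    )
    return response.choices[0].message.content

print(cbt_feedback(
    "I failed one exam, so I'm a failure at everything.",
    "One exam doesn't define me; I passed my other courses.",
))
```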

Bridging the Modality Gap: Diffusion Implicit Bridges for Inter-Modality Medical Image Translation

PI: Sergios Gatidis

Abstract: Modern machine learning algorithms for medical image analysis perform well on tasks limited to a single imaging modality or contrast. However, these algorithms face limitations when processing imaging data that spans different modalities. In this project, we aim to address this limitation by developing machine learning algorithms that can translate between different medical imaging modalities. We will base our work on diffusion models, a class of machine learning models used successfully for the generation and analysis of image data in various domains. With this project, we expect to open new possibilities in machine learning-based processing and analysis of medical imaging data and to build algorithms that are applicable to a broader range of clinical situations and a larger number of patient groups.
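
For readers unfamiliar with the bridge idea in the title, the sketch below shows the general recipe in the spirit of dual diffusion implicit bridges (Su et al., 2023): deterministically encode a source-modality image into a shared noise latent with one diffusion model, then decode that latent with a model trained on the target modality. The noise schedule and placeholder denoisers here are illustrative assumptions, not the project's algorithm.

```python
# A minimal sketch of diffusion-bridge translation: DDIM encoding with a
# source-modality denoiser, then DDIM decoding with a target-modality one.
# The denoisers and schedule below are toy placeholders, not trained models.
import torch

def ddim_step(x, t_now, t_next, eps_model, alphas_cumprod):
    """One deterministic DDIM update from step t_now to t_next (either direction)."""
    a_now, a_next = alphas_cumprod[t_now], alphas_cumprod[t_next]
    eps = eps_model(x, t_now)                           # predicted noise
    x0 = (x - (1 - a_now).sqrt() * eps) / a_now.sqrt()  # predicted clean image
    return a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps

@torch.no_grad()
def translate(x_src, eps_src, eps_tgt, alphas_cumprod, T=50):
    x = x_src
    for t in range(T - 1):              # encode: source image -> shared latent
        x = ddim_step(x, t, t + 1, eps_src, alphas_cumprod)
    for t in reversed(range(1, T)):     # decode: shared latent -> target image
        x = ddim_step(x, t, t - 1, eps_tgt, alphas_cumprod)
    return x

# Toy usage with zero-noise placeholders standing in for trained U-Nets:
alphas_cumprod = torch.linspace(0.9999, 0.98, 50).cumprod(dim=0)
dummy = lambda x, t: torch.zeros_like(x)
out = translate(torch.randn(1, 1, 64, 64), dummy, dummy, alphas_cumprod)
```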

Development of AI-Enabled Quadruped Robots (Pupper) for Improved Pediatric Patient Experience and Healthcare Outcomes

PI: Karen Liu

Abstract: This project focuses on Pupper, an AI-enabled robotic dog developed at Stanford, aimed at improving the hospital experience for pediatric patients facing social isolation, depression, and/or anxiety. Unlike traditional quadrupeds, Pupper is approachable, cost-effective, and safe, making it well suited for interaction with children. It offers an engaging alternative to conventional sedation methods, potentially reducing healthcare costs and medication risks. With its computer vision and agility capabilities, Pupper has also shown promise as a physical therapy motivator and a source of emotional support. This research will progress along two parallel paths: technical enhancement of Pupper, including AI advances in computer vision, autonomous gait, and speech processing; and clinical studies assessing Pupper’s impact in pediatric care. These studies will focus on mitigating social isolation, reducing anxiety and/or depression, and facilitating physical therapy participation among hospitalized children.

Developing AI for Automated Skill Assessment in Open Surgical Procedures

PI: Serena Yeung

Abstract: Surgical interventions are a major form of treatment in modern healthcare, with open procedures being the dominant form of surgery worldwide. Surgeon skill is a key factor affecting patient outcomes, yet current methods for assessing skill are primarily qualitative and difficult to scale. Our project develops AI as an engine for automated skill assessment in open surgical procedures. Whereas most prior work has focused on AI for laparoscopic procedures, open procedures present greater challenges due to their larger and more complex field of view. We will develop methods for providing complementary forms of feedback from surgical video, including kinematic analysis and action quality assessment through video question answering. Finally, we will evaluate the utility of our AI methods through pilot studies with surgical trainees. Our project aims to demonstrate the feasibility of AI for contributing quantitative, scalable skill assessment and feedback to surgical education.
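
To give a flavor of what quantitative, video-derived feedback can look like, the sketch below extracts a wrist trajectory from recorded video with an off-the-shelf hand tracker (MediaPipe, standing in for whatever the project actually uses) and computes simple kinematic proxies such as path length and mean speed, which are commonly used in the skill-assessment literature. The file name and the specific metrics are illustrative assumptions, not the study's protocol.

```python
# A toy sketch of kinematic skill proxies from video, assuming an off-the-shelf
# hand tracker (MediaPipe) stands in for the project's own methods. The file
# name and metrics are illustrative, not the study's protocol.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
cap = cv2.VideoCapture("procedure.mp4")  # hypothetical recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

trajectory = []  # wrist positions in normalized image coordinates
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
        trajectory.append((wrist.x, wrist.y))
cap.release()

traj = np.asarray(trajectory)
if len(traj) > 1:
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    # Shorter, smoother paths tend to correlate with higher skill ratings;
    # these are crude proxies, not validated skill scores.
    print(f"path length: {steps.sum():.3f} (image units)")
    print(f"mean speed:  {steps.mean() * fps:.3f} (image units/s)")
```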