Recordings of the AIMI Symposium and BOLD-AIR Summit can be viewed on the AIMI YouTube Channel. Breakout Sessions were not recorded.
When: August 3, 2021, 11:50am - 12:40pm PDT
Location: via Zoom
Symposium attendees will have the opportunity to meet and chat with the AIMI Center and Stanford HAI teams during live Zoom breakout sessions. The purpose of the breakouts is to encourage focused discussion and networking around various topics within AI in medicine & health.
Participants will need to have the latest version of Zoom installed on their computer or phone and be logged into their Zoom account in order to participate. Breakout sessions will not be recorded.
The breakout sessions have limited seating; if you are unable to enter the Zoom meeting, please try to join again during the session.
|AI Education Programs||John Robichaux - Director of Education, Stanford HAI||We are seeing an unprecedented explosion in the number and types of AI education programs offered across diverse audiences-- policymakers, executives, thought leaders and influencers, managers and professionals, government officials, and university and K-12 populations, to name just a few. Moreover, some audiences seek educational offerings around AI that do not require technical AI skills, focusing instead on AI's impacts within business, policy, and society, while others hunger for the technical and engineering skills needed to accelerate their AI capabilities. Finally, the landscape is full of organizations with AI competencies matching-- or exceeding-- those of many universities. In these breakout sessions, participants will discuss current trends they are seeing in AI education, share best practices they have learned along the way, and explore the complications of this rapidly changing landscape. In line with HAI's mission, special attention will be given to how best to amplify AI's benefits broadly across society, while simultaneously reducing its harms, as we build the next generation of AI education programs, no matter the audience or their core interests.|
|AI Policy & Regulation||Russell Wald - Director of AI Policy, Stanford HAI; Daniel Zhang - AI Index Researcher, Stanford HAI||How to scale up responsible implementation of AI solutions to improve health and reduce inefficiencies is a topic of growing interest for policymakers. Advances in AI and machine learning have transformed the healthcare industry by deriving insights from vast amounts of data, though cases of bias and discrimination, as well as the lack of transparency of algorithms, have created a trust problem with AI deployment. Policymakers and regulators are taking note and exploring ways to facilitate healthcare innovation while prioritizing patient safety. This session aims to engage with stakeholders to explore the benefits and risks of AI applications in healthcare through a policy lens, and invites participants to discuss the appropriate role and function of local, state, and federal governments in the deployment of health-related AI technology.|
|Computer Vision for Automated Medical Diagnosis||Yuyin Zhou, PhD - Postdoctoral Scholar, Stanford AIMI||Over the past few decades, medical imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), mammography, ultrasound, and X-ray, have been used for the early detection, diagnosis, and treatment of diseases. Various medical image analysis problems, such as medical image registration, detection of anatomical and cellular structures, and tissue segmentation, have achieved state-of-the-art performance through the application of computer vision techniques. However, how to safely and reliably deploy these technologies to facilitate disease diagnosis and treatment planning remains an open problem. This session aims to foster discussion as a step toward building autonomous clinical decision-making systems with a higher-level understanding of medical computer vision.|
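One of the tasks mentioned above, tissue segmentation, is typically evaluated with the Dice overlap coefficient. As a toy illustration only (the 4x4 "scan", threshold, and reference mask below are invented for the sketch), a minimal threshold-based segmentation and Dice score might look like:

```python
# Toy sketch: segment a tiny "scan" by intensity thresholding and score the
# result against a reference mask with the Dice coefficient, the standard
# overlap metric in medical image segmentation. All values are illustrative.

def threshold_segment(image, cutoff):
    """Binary mask: 1 where intensity is at or above the cutoff."""
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

def dice(pred, truth):
    """Dice = 2 * |pred AND truth| / (|pred| + |truth|)."""
    tp = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    p_sum = sum(p for row in pred for p in row)
    t_sum = sum(t for row in truth for t in row)
    return 2 * tp / (p_sum + t_sum)

image = [
    [0.1, 0.2, 0.8, 0.9],
    [0.1, 0.7, 0.9, 0.8],
    [0.0, 0.1, 0.6, 0.2],
    [0.0, 0.0, 0.1, 0.1],
]
truth = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
pred = threshold_segment(image, 0.5)
print(dice(pred, truth))  # 1.0 when the threshold recovers the mask exactly
```

Real segmentation models replace the threshold with a learned network, but the evaluation metric is the same.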
|Equitable AI: What is it and how do we get there?||Maxwell Cheong - IT Project Manager, Stanford Radiology||The rapid advancement of Artificial Intelligence (AI) in medicine is bringing unprecedented opportunities and benefits, as well as risks, to all stakeholders. While various aspects of AI in medicine have made significant progress - from cloud-based data sharing to skin cancer detection apps - many questions remain as to what it takes to ensure equitable access to these lifesaving technologies. In this breakout session, let's dive deep into this important topic by first examining the current state and definition of equitable AI: are AI data sharing, research, and development processes transparent? Which agencies should govern them and develop related guidelines? How should patient consent be incorporated into AI program development and healthcare workflows? Then, let's discuss what long-term goals we should put in place to ensure all stakeholders will benefit equally moving forward.|
|Explainable AI in Medicine||Alokkumar Jha, PhD - Instructor, Stanford; Christian Bluethgen, MD, MSc - Postdoctoral Scholar, Stanford AIMI||Medical decision making is a high-stakes field. Therefore, imaging AI models that are intended to contribute to medical decision making need to be trusted by decision makers. Beyond extensive validation in studies and meeting the necessary regulatory requirements, trust can be established by delivering explanations for a model's output rather than a final conclusion alone. With a focus on radiology, this session is intended to discuss the purpose, methods, and ramifications of interpretability and explainability for imaging AI models.|
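One common family of explanation methods delivers per-feature attributions alongside the prediction. As a minimal sketch, assuming a toy linear classifier (the weights and "pixel" intensities below are hypothetical), an input-times-gradient attribution looks like this:

```python
# Toy "input x gradient" attribution sketch for a linear classifier.
# For f(x) = sum(w_i * x_i), the gradient of the output w.r.t. feature x_i
# is just w_i, so the attribution for each feature is x_i * w_i.

def predict(weights, x):
    """Score of the toy linear model."""
    return sum(w * xi for w, xi in zip(weights, x))

def input_x_gradient(weights, x):
    """Per-feature attribution: input value times its gradient (= weight)."""
    return [xi * w for xi, w in zip(x, weights)]

weights = [0.8, -0.1, 0.3]   # hypothetical model weights
x = [1.0, 2.0, 0.5]          # hypothetical feature ("pixel") values
attributions = input_x_gradient(weights, x)
print(attributions)
```

For deep imaging models the gradient is obtained by backpropagation rather than read off directly, and attributions over pixels are typically rendered as saliency maps, but the idea of explaining an output via per-input contributions is the same.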
|Key Concepts in AI Safety & Value||Alaa Youssef, PhD - Postdoctoral Scholar, Stanford AIMI||Machine learning (ML) has achieved remarkable success in imaging applications, including image classification and generation, and in natural language processing. Despite these advances, most ML applications cannot be deployed without risking system failure when encountering a previously unknown scenario. The risk of ML applications failing in complex, new clinical environments has implications for patient safety and for clinicians' and the public's trust in the value of AI-enabled healthcare. To mitigate these unintended outcomes, we need to develop monitoring systems that supervise the performance of deployed ML applications, identify potential unintended behaviours, and ensure these applications operate safely and reliably.|
|Launching & Leading an AI Center||Johanna Kim, MBA, MPH - Executive Director, Stanford AIMI||This session will walk through the formation and development of the Stanford AIMI Center and address attendees' questions about launching and leading an AI center. Johanna's expertise lies in developing strategic partnerships, facilitating cross-discipline collaboration, and directing research and clinical operations. Prior to Stanford, she held leadership roles in academic medicine, garnering extensive experience in launching programs and teams from the ground up and leading major strategic initiatives.|
|Self-Supervised Learning||Rogier van der Sluijs, PhD - Postdoctoral Scholar, Stanford AIMI||Self-supervised learning (SSL) is an emerging area of deep learning research that aims to maximize the use of unlabeled data. Through concepts such as contrastive learning and knowledge distillation, we can now train highly accurate vision models with a fraction of the labels required by fully supervised models. The medical domain may be an ideal target for SSL, given the abundance of unlabeled imaging data stored in PACS systems.|
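The contrastive learning concept mentioned above can be sketched with a minimal InfoNCE-style loss: two augmented "views" of the same image should have similar embeddings, while embeddings of other images are pushed apart. The 2-D vectors and temperature below are toy assumptions; real SSL pipelines apply this to encoder outputs over large batches.

```python
# Minimal contrastive (InfoNCE-style) loss on toy embeddings.
# The loss is the cross-entropy of identifying the positive pair among
# one positive and several negatives, using cosine similarity as the logit.

import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Negative log-probability of picking the positive among all candidates."""
    logits = [cosine(anchor, positive) / temperature] + [
        cosine(anchor, n) / temperature for n in negatives
    ]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

anchor = [1.0, 0.0]                     # embedding of view 1 of an image
positive = [0.9, 0.1]                   # embedding of view 2 of the same image
negatives = [[0.0, 1.0], [-1.0, 0.2]]   # embeddings of other images
loss = info_nce(anchor, positive, negatives)
print(loss)  # small, since the positive is near the anchor
```

Training an encoder to minimize this loss over many such triples is what lets SSL extract signal from unlabeled archives like PACS.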
|Statistical Thinking for Deep Learning||Jin Long, PhD - Senior Biostatistician, Stanford AIMI||Jin is a senior biostatistician working in fields related to AI-human interaction, clinical trial and experiment design, and statistical modeling, machine learning, and artificial intelligence using medical and imaging data. He leads data analysis and statistical model development, participates in developing and writing grant proposals, and writes and publishes scientific manuscripts both independently and collaboratively.|