“Can ChatGPT diagnose me?” How large language models will transform clinical care
How might large language models such as ChatGPT transform a patient's or consumer's ability to self-diagnose, or support clinicians in diagnostic decision-making? What are the critical ethical, equity, and regulatory considerations for safe and effective use of these and related tools?
This event is co-hosted by the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) and Stanford Institute for Human-Centered Artificial Intelligence (HAI).
We are grateful to our co-sponsors, the Gordon and Betty Moore Foundation and GSR Ventures, for their support of this event.
Recordings
Recordings have been posted and are freely available to the public on the AIMI Center's YouTube channel. Subscribe to our channel and select "all notifications" to be notified when new videos are available.
Agenda
1:45 PM | Check-In (in person)

2:00 PM | Welcome & Introductory Remarks: In what ways do we see large language models impacting patient care and diagnosis?

2:15 PM | Large Language Models 101: What are LLMs? How do they work? What data were they trained on? How should we think about general-purpose LLMs like ChatGPT vs. health-specific foundation models?

2:30 PM | Panel 1: Direct-to-Consumer Uses of ChatGPT/LLMs: How might LLMs transform a consumer's ability to better diagnose and manage their health conditions? A high percentage of patients currently use online search engines to self-diagnose when they experience new symptoms. In what ways do ChatGPT and other LLMs change how a patient interacts with online health information to decide whether or not to seek care?

3:00 PM | Panel 2: How might LLMs/ChatGPT support clinicians in diagnostic decision-making? In what ways might these LLMs support, or even replace, clinicians in the diagnostic process? What types of products would clinicians find useful? What pitfalls do clinicians need to be wary of when using these tools?

3:30 PM | Panel 3: The Ethics, Equity, and Regulation of LLMs/ChatGPT: In what ways might these new tools affect diagnostic disparities in the US? How might pre-existing biases in the training data perpetuate disparities, and how might we mitigate these effects? How should these tools be classified from a regulatory perspective (e.g., as a medical device or as a general wellness tool), and how might that classification influence their safety, efficacy, and usefulness? What other ethical considerations arise (e.g., patient consent, data privacy, and the risk of security or adversarial attacks)?

4:00 PM | Networking Reception (in person)