Stanford community & AIMI affiliates only
The widespread adoption of machine learning in critical decision making underscores the need to characterize model behavior and improve its reliability and generalization. For example, a common pitfall with supervised models is that, despite achieving high accuracy on validation data, they tend to be over-confident even when making wrong predictions, which can lead to unexpected behavior on unseen test data. In this context, prediction calibration strategies, which adjust a model's predictions to improve its error distribution, have become popular. In this talk, we will explore the role of prediction calibration in addressing several critical challenges in practical deep model design, ranging from high-quality uncertainty estimation and model transfer under distribution shifts to robust model design and explainability methods. Benefits of prediction calibration will be illustrated with examples from computer vision, healthcare, and scientific machine learning.
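To make the over-confidence problem concrete, here is a minimal NumPy sketch (not taken from the talk; the function names, bin count, and grid-search fitting are illustrative assumptions) of two standard calibration tools: expected calibration error, which measures the gap between a model's confidence and its accuracy, and temperature scaling, which rescales logits on held-out data to shrink that gap.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; T > 1 softens over-confident predictions."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |accuracy - confidence| over confidence bins, weighted by bin size."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick T on held-out data by minimizing negative log-likelihood (grid search)."""
    def nll(T):
        p = softmax(logits, T)[np.arange(len(labels)), labels]
        return -np.log(p + 1e-12).mean()
    return min(grid, key=nll)

# Synthetic over-confident model: ~80% accurate, but near-100% confident.
rng = np.random.default_rng(0)
n, k = 2000, 3
labels = rng.integers(0, k, n)
pred = np.where(rng.random(n) < 0.7, labels, rng.integers(0, k, n))
logits = np.eye(k)[pred] * 8.0  # sharply peaked logits

ece_raw = expected_calibration_error(softmax(logits), labels)
T = fit_temperature(logits, labels)
ece_cal = expected_calibration_error(softmax(logits, T), labels)
```

On this synthetic model, the fitted temperature comes out well above 1 and the calibration error drops sharply, even though the predicted classes (and hence accuracy) are unchanged; this is why temperature scaling is a popular post-hoc fix for the over-confidence pitfall described above.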
Jay Thiagarajan is a machine learning researcher in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. His research broadly spans machine learning and artificial intelligence for applications in computer vision, healthcare, graph modeling, and scientific data analysis. He received his PhD from Arizona State University in 2013. He is currently the PI on multiple machine learning projects funded by the DOE and the Office of Science. He has published over 150 peer-reviewed articles and multiple book chapters on machine learning and its applications. He received the LLNL Early Career Recognition Award in 2020. He serves on the applied math visioning committee of the DOE Advanced Scientific Computing Research (ASCR) program.