Below is a list of active and ongoing projects from our lab group members. To learn more, click on the project links, or reach out to us via email.
Automatically Staging Osteoarthritis from X-rays and MRIs
Osteoarthritis (OA) is a leading cause of disability in older adults. Progress towards the development of disease-modifying drugs and rehabilitation strategies is hindered by many factors, including the lack of objective and accurate tools to assess disease progression. Large observational studies have collected yearly imaging data on thousands of patients for nearly a decade, but reliance on radiologists to process these data has so far prevented the OA research community from generating new insights into the natural progression of the disease. Rapid training set construction with technologies like Coral is crucial to facilitating these insights.
The initial goal of this project is to develop a framework for automatically and reliably staging osteoarthritis based on knee X-rays. The follow-up goal is to automatically assess knee joint abnormalities, including bone marrow lesions, from MRI data. Lastly, automatic segmentation of specific structures in the knee (e.g., cartilage) from MRI scans would be the most impactful contribution.
Cross-Modal Weak Supervision: Leveraging Text Data at Training Time to Train Image Classifiers More Efficiently
Arguably the largest development bottleneck in machine learning today is getting labeled training data. One promising direction is the use of weaker supervision that is noisier and lower-quality, but can be provided more efficiently and at a higher level by domain experts and then denoised automatically. In one current project, Snorkel, users write labeling functions to express heuristics that can generate noisy labels. These labeling functions are often easy to write over text, but less so over images. However, in many important cases we have both images and text available at training time: for example, in radiology applications, we want to train an image classifier, but also have unstructured text reports available at training time. In this project, we are exploring how this text data can be used to help more easily provide weak supervision for the end image model.
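As a rough sketch of the idea (plain Python, not the actual Snorkel API; the function names and label conventions here are ours), a labeling function over a radiology text report might look like this, with the resulting weak label then used to supervise the paired image:

```python
# Hypothetical labeling functions over radiology report text.
# Label convention (illustrative): 1 = abnormal, 0 = normal, -1 = abstain.

ABNORMAL, NORMAL, ABSTAIN = 1, 0, -1

def lf_mentions_fracture(report: str) -> int:
    """Vote 'abnormal' if the report mentions a fracture."""
    return ABNORMAL if "fracture" in report.lower() else ABSTAIN

def lf_no_acute_findings(report: str) -> int:
    """Vote 'normal' on the common boilerplate phrase for a clear study."""
    return NORMAL if "no acute" in report.lower() else ABSTAIN

def weak_label(report: str) -> int:
    """Majority vote over labeling functions, ignoring abstentions.
    (A real system would instead model LF accuracies and correlations
    to denoise the votes, rather than simple majority vote.)"""
    votes = [lf(report) for lf in (lf_mentions_fracture, lf_no_acute_findings)]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)
```

The point is that these heuristics are easy to express over text, and the noisy labels they emit can train the downstream image classifier even though no labeling function ever looks at pixels.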
Learning to Compose Domain-Specific Transformations for Data Augmentation
One of the cornerstone techniques used with deep learning in practice is data augmentation: transforming data points in class-preserving ways (e.g. rotations, small crops, etc) to artificially increase the size of labeled training sets. Data augmentation provides significant performance gains and can be viewed as a way for domain experts to easily inject knowledge about task- and domain-specific invariants. In work to date (see links above), we have explored methods for automatically learning data augmentation models given basic transformation operations provided by domain experts. In current work, we are exploring both the theoretical foundations of data augmentations and applications to medical imaging such as mammogram and histopathology image classification.
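A minimal sketch of the setup described above (the operation names and sampling policy are ours, not the code from the linked work): domain experts supply class-preserving transformation operations, and augmentation applies random short compositions of them; a learned augmentation model would choose the sequence rather than sampling uniformly.

```python
import random
import numpy as np

def hflip(img):
    """Horizontal flip, a class-preserving transformation for most images."""
    return img[:, ::-1]

def rot90(img):
    """90-degree rotation."""
    return np.rot90(img)

def brighten(img):
    """Small fixed brightness shift, clipped to the valid [0, 1] range."""
    return np.clip(img + 0.05, 0.0, 1.0)

TRANSFORMS = [hflip, rot90, brighten]

def augment(img, max_ops=3, seed=None):
    """Apply a random composition of 1..max_ops transformation operations."""
    rng = random.Random(seed)
    for op in rng.choices(TRANSFORMS, k=rng.randint(1, max_ops)):
        img = op(img)
    return img
```

Because each operation preserves the class label, any composition does too, so augmented copies can be added to the training set with the original labels.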
- Utilize machine vision techniques to classify de-identified chest radiographs for misplaced endotracheal tubes, central lines, and pneumothorax.
- Develop a deep learning model that can accurately classify an imaging sequence according to modality, body region, imaging technique, imaging plane, phase and type of contrast, and MR pulse sequence.
- Evaluate a convolutional neural network model that can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of current state-of-the-art feature-extraction-based automated bone age assessment models
- Develop a deep-learning classifier for evaluating pediatric brain MRI
- Use deep learning to predict "brain age" using MRI data
- Investigate deep learning in "superhuman" imaging tasks, including PE prediction on chest X-rays and stroke detection on head CT
- Develop a convolutional neural network model that can predict pathology/genomic information from imaging examinations in pediatric cancer
- Using deep learning for rapid histopathology diagnosis in the operative setting
- Deep learning to identify facial features from cross sectional imaging
- Utilize a deep learning method for emergent imaging finding detection (multi-modality)
- Investigate whether scanner-level deep learning models can improve detection at the time of image acquisition
- Computer vision for computer-aided detection (CAD) in FDG and bone scans
- Automated fetal brain ultrasound diagnosis and evaluation with deep learning
- Deep learning for musculoskeletal tumor identification on plain films, with histopathological confirmation
- Deep learning for imaging follow-up in clinical trials
- Real-time detection and diagnosis during video cystoscopy with deep learning