Evaluation of deep learning approaches in an integrated PET/MRI scanner to generate pelvis attenuation maps and characterize prostate cancer
Mentor: Ciprian Catana, M.D., Ph.D. (MGH)
Multimodal imaging has become a critical resource across medicine, as it allows clinicians to diagnose a disease and monitor its development and progression at both the molecular and anatomical levels. While combined positron emission tomography and computed tomography (PET/CT) is the most widely implemented form of multimodal imaging today, hybrid positron emission tomography and magnetic resonance imaging (PET/MRI) has recently emerged. Quantitative PET data combined with simultaneously acquired MRI information has the potential to provide clinicians with tumor metabolic information (PET) coupled with clear anatomical detail (MRI) of the relevant soft-tissue compartments. PET/MRI has gained traction in recent years largely because MRI eliminates exposure to ionizing radiation and offers excellent soft-tissue contrast.

For certain applications such as prostate cancer (PCa), PET/MRI has the potential to become the leading imaging modality, as it could allow clinicians to more confidently discriminate clinically relevant from non-life-threatening PCa lesions. However, before it can be used to guide patient management, a remaining methodological challenge must be addressed: a method to perform PET attenuation correction based on the MR data needs to be developed and evaluated. Attenuation correction is especially important when imaging the pelvis, as bone tissue and air pockets surrounding the prostate are often misclassified as soft tissue (see figure), leading to PET quantification bias and artifacts. In this project, we will compare the performance of several deep learning approaches for generating pelvis attenuation maps from MR images, using data acquired from PCa patients.
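To illustrate why µ-map errors propagate into PET quantification: the attenuation correction factor for a line of response is the exponential of the line integral of µ along that path. The sketch below, using numpy and illustrative (not clinical reference) coefficient values near 511 keV, shows how treating bone as soft tissue underestimates the correction factor and therefore biases the reconstructed activity downward.

```python
import numpy as np

# Approximate linear attenuation coefficients at 511 keV (cm^-1);
# these values are illustrative assumptions, not clinical reference data.
MU = {"air": 0.0, "soft_tissue": 0.096, "bone": 0.17}

def attenuation_correction_factor(mu_along_lor, step_cm):
    """ACF = exp( integral of mu dl ) along one line of response,
    approximated here by a Riemann sum over voxel samples."""
    return float(np.exp(np.sum(mu_along_lor) * step_cm))

# 10 cm path treated as uniform soft tissue (the misclassified mu-map):
soft_only = np.full(10, MU["soft_tissue"])
acf_misclassified = attenuation_correction_factor(soft_only, step_cm=1.0)

# Same path, but 3 cm of it is actually bone:
true_path = np.concatenate([np.full(7, MU["soft_tissue"]),
                            np.full(3, MU["bone"])])
acf_true = attenuation_correction_factor(true_path, step_cm=1.0)

# Misclassifying bone as soft tissue yields too small a correction,
# so the reconstructed PET activity along this line is underestimated.
relative_bias = 1.0 - acf_misclassified / acf_true
```

This toy calculation ignores scatter, randoms, and the full reconstruction model; it only isolates the attenuation term that the project's deep-learning µ-maps aim to improve.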
After attenuation is properly accounted for, both radiomics and deep learning approaches can be employed to identify the most relevant imaging features from each modality and combine them into a multimodal classification model that best characterizes primary prostate tumor aggressiveness.
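One common way to combine modality-specific features is early fusion: concatenate the radiomics and deep feature vectors, then feed them to a classifier. The sketch below is a minimal illustration with hypothetical feature names and random placeholder weights (in practice the features would be extracted from real images and the weights learned from labeled patient data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, pre-extracted and standardized features for one lesion:
pet_radiomics = rng.normal(size=8)    # e.g., SUV statistics, texture metrics
mri_deep_feats = rng.normal(size=16)  # e.g., CNN embedding of the MR image

# Early fusion: concatenate features from both modalities.
fused = np.concatenate([pet_radiomics, mri_deep_feats])

# Logistic classifier head. The weights here are random placeholders;
# a real model would learn them from labeled tumor-aggressiveness data.
w = rng.normal(size=fused.size)
b = 0.0

def predict_aggressive(features, weights, bias):
    """Sigmoid score in (0, 1): probability-like output for
    'clinically significant' vs. indolent disease."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

score = predict_aggressive(fused, w, b)
```

The design choice shown (concatenation plus a linear head) is the simplest fusion baseline; the project may instead compare learned fusion schemes.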
The left image displays a pelvis MR-based attenuation (µ) map. This map was created from the segmentation of three tissue classes (background air, water, and fatty soft tissue) and will eventually be used in the reconstruction of a PET image. The right image is the corresponding pelvis CT scan. This side-by-side comparison clearly shows bone being misclassified as soft tissue on the MR-based attenuation map (left image); this is especially evident in the regions surrounding the bilateral acetabula. Overall, the figure demonstrates the need for a more precise and accurate method to generate pelvis attenuation maps. Source:
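The three-class µ-map described above amounts to a label-to-coefficient lookup. A minimal numpy sketch follows; the coefficient values are approximate 511 keV figures used only for illustration, and the class-label convention is an assumption.

```python
import numpy as np

# Assumed label convention: 0 = background air, 1 = water/soft tissue, 2 = fat.
# Approximate 511 keV linear attenuation coefficients (cm^-1); illustrative only.
MU_PER_CLASS = np.array([0.0, 0.096, 0.086])

def mu_map_from_segmentation(labels):
    """Map an integer tissue-class image to a continuous mu-map
    via numpy integer-array indexing."""
    return MU_PER_CLASS[labels]

# Toy 4x4 'pelvis slice': air border, soft-tissue core, one fat voxel.
labels = np.array([[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 2, 0],
                   [0, 0, 0, 0]])
mu = mu_map_from_segmentation(labels)
# Note: there is no bone class in this scheme, so any bone voxel would
# receive the soft-tissue value -- the misclassification the figure shows.
```

Replacing this fixed lookup with a continuous, learned MR-to-µ mapping is precisely what the deep-learning approaches in this project are evaluated on.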