Accurate Treatment Planning in Radiotherapy Using Deep Learning

Last year, our AI team published the deepMRAC study in Radiology, which evaluated the feasibility of deep learning-based pseudo-CT generation for PET/MR attenuation correction. We demonstrated that a pseudo-CT generated by learning MR information could significantly improve PET reconstruction in PET/MR, leading to less than 1% uncertainty in brain FDG PET quantification. Building on that work, we investigated the feasibility, applicability, and robustness of deep learning-based pseudo-CT generation in MR-guided radiation therapy. With the new method, published as the deepMTP framework, we demonstrated the high clinical value of deep learning pseudo-CT in radiation therapy: it saves radiation dose while providing high-quality treatment planning equivalent to the standard clinical method.

The figure shows an example of a patient with a right frontal brain tumor adjacent to the chiasm and right optic nerve. In the fused MR and CT images (a, b), deepMTP produced a treatment plan with a PTV and isodose lines around the tumor region similar to those of the CT-based treatment plan (CTTP). The plan was designed to avoid the adjacent chiasm and optic nerves. The dose-volume histogram (DVH; c) showed highly similar dose curves for the PTV, chiasm, and right optic nerve between CTTP (solid lines) and deepMTP (dashed lines).

We have shown that deep learning approaches applied to MR-based treatment planning in radiation therapy can produce plans comparable to those of CT-based methods. Combined with the improved soft tissue contrast and resolution of MR, the further development and clinical evaluation of such approaches could provide accurate dose coverage, reduce treatment-unrelated radiation dose, and streamline the workflow for MR-only treatment planning. Our study suggests that deep learning approaches such as deepMTP will substantially impact future treatment planning in the brain and elsewhere in the body.

Deep Learning to Accelerate Quantitative MR Imaging

In a recent study published in Magnetic Resonance in Medicine, our AI team at the University of Wisconsin introduced a novel AI framework for accelerating MR parameter mapping and demonstrated its efficacy, efficiency, and robustness in addressing the challenges of rapid quantitative MR imaging. Among many MR techniques, quantitative mapping of MR parameters has long been a powerful tool for improved assessment of various diseases. In contrast to conventional MRI, parameter mapping provides increased sensitivity to tissue pathologies along with more specific information on tissue composition and microstructure. However, standard approaches for estimating MR parameters usually require repeated acquisitions with varying imaging parameters, leading to long scan times. Accelerated methods are therefore highly desirable and remain a topic of great interest in the MR community.
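To see why conventional parameter mapping is slow, consider the standard T2 estimation workflow: the same anatomy is imaged repeatedly at several echo times, and a mono-exponential decay is fit at each pixel. The sketch below illustrates this with a log-linear least-squares fit in numpy; the function name, echo times, and tissue values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def fit_t2(signals, echo_times):
    """Estimate S0 and T2 by a log-linear least-squares fit of the
    mono-exponential decay model S(TE) = S0 * exp(-TE / T2)."""
    te = np.asarray(echo_times, dtype=float)
    s = np.asarray(signals, dtype=float)
    # log(S) = log(S0) - TE / T2  ->  ordinary linear regression in TE
    slope, intercept = np.polyfit(te, np.log(s), 1)
    return np.exp(intercept), -1.0 / slope

# Simulated multi-echo signal for a tissue with T2 = 40 ms
te = np.array([10.0, 20.0, 40.0, 80.0])   # echo times in ms
signal = 1000.0 * np.exp(-te / 40.0)      # noiseless decay
s0, t2 = fit_t2(signal, te)
print(round(s0), round(t2))               # -> 1000 40
```

Every additional echo time in such a protocol is another full acquisition, which is exactly the scan-time cost that accelerated methods aim to cut.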

To reconstruct MR parameter maps from less data, we invented MANTIS, which stands for Model-Augmented Neural neTwork with Incoherent k-space Sampling. MANTIS combines end-to-end convolutional neural network (CNN) mapping, model augmentation that promotes data consistency, and incoherent k-space undersampling into a synergistic framework. The CNN mapping converts a series of undersampled images directly into MR parameter maps, an efficient form of cross-domain transform learning. Signal model fidelity is enforced through a pathway connecting the undersampled k-space and the estimated parameter maps, ensuring that the algorithm produces parameter maps consistent with the acquired k-space measurements. A randomized k-space undersampling strategy is tailored to create incoherent sampling patterns that are well suited to the reconstruction network and adequate for learning robust image features.
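Two of these building blocks, incoherent undersampling and the data-consistency (model-augmentation) pathway, can be sketched in a few lines of numpy. This is a minimal illustration under assumed conventions, not the published MANTIS implementation; the mask parameters and function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def incoherent_mask(shape, accel=4, center_lines=8):
    """Random line-skipping mask: keep a fully sampled low-frequency
    center plus randomly chosen phase-encoding lines (~1/accel overall)."""
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    mask[rng.random(ny) < 1.0 / accel, :] = True
    c = ny // 2
    mask[c - center_lines // 2 : c + center_lines // 2, :] = True
    return mask

def data_consistency(image, kspace_measured, mask):
    """Overwrite the estimate's k-space with the acquired samples
    wherever data was actually measured, then return to image space."""
    k = np.fft.fft2(image)
    k[mask] = kspace_measured[mask]
    return np.fft.ifft2(k)

# Toy example: undersample a synthetic image, then enforce consistency
img = rng.random((64, 64))
mask = incoherent_mask(img.shape, accel=4)
k_meas = np.fft.fft2(img) * mask
recon = data_consistency(img, k_meas, mask)
```

In the real framework the input to the consistency step would be the CNN's estimate rather than the ground-truth image; the point is that measured k-space samples are never contradicted by the network output.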

Our study demonstrated that the proposed MANTIS framework is a promising approach for rapid T2 mapping in knee imaging, with up to 8-fold acceleration. In future work, MANTIS can be extended to other types of parameter mapping, such as T1 relaxation time, diffusion, and perfusion, given appropriate models and training data sets. We expect AI methods, including MANTIS, to advance quantitative MR imaging and bring more value to MR.

Obtaining Improved Performance for Chemical Shift-Encoded Imaging

Dr. Alexey Samsonov and Dr. Julia Velikina are leading experts in MR image reconstruction and quantitative imaging in the University of Wisconsin imaging group. In a recent study published in Magnetic Resonance in Medicine, the team introduced a novel technique that improves the performance of chemical shift-encoded fat-water imaging by incorporating advanced MR pulse sequences. The new method exploits the insensitivity of fat tissue to the magnetization transfer (MT) effect and uses this prior information to generate much-improved water-fat separation across many body parts and applications. With improved accuracy and robustness, chemical shift-encoded fat-water imaging can bring more value in complex and challenging clinical imaging environments.

Accurate and Dose-Saving Positron Emission Tomography Imaging using Deep Learning

In a recent paper published in EJNMMI Physics, our AI team at the University of Wisconsin proposed a deep learning algorithm to address the challenge of simultaneously accurate and dose-saving positron emission tomography (PET) imaging.

PET is a non-invasive imaging modality that directly provides biomarkers of physiology. Accurate PET activity is calculated by reconstructing the photon signal emitted from the PET tracer after correcting for scatter and attenuation as photons travel through tissue. The conventional method for performing this correction requires additional transmission images, such as computed tomography (CT) images, exposing patients to additional radiation. In this work, our AI team invented a data-driven method that corrects PET signals using a pseudo-CT image generated from the PET image itself with deep learning. This technique avoids acquiring additional transmission images, reduces the radiation dose, and increases imaging robustness against subject motion. Our brain imaging experiments demonstrated no significant difference between the deep learning method and clinical standard methods, with an average deviation of less than 1% in most brain regions.
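Once a pseudo-CT is available, attenuation correction needs a map of linear attenuation coefficients at the PET photon energy (511 keV), conventionally derived from CT numbers with a bilinear model. The sketch below illustrates that conversion step only; the coefficients are round illustrative numbers, not the calibrated values of any scanner or of the published method.

```python
import numpy as np

def hu_to_mu511(hu):
    """Map CT numbers (HU) to linear attenuation coefficients (cm^-1)
    at 511 keV with a bilinear model: a water-scaled segment below
    0 HU and a shallower slope for bone-like values above 0 HU.
    Coefficients are illustrative, not clinically calibrated values
    (real ones depend on the CT tube voltage)."""
    hu = np.asarray(hu, dtype=float)
    mu_water = 0.096                       # cm^-1, water at 511 keV
    soft = mu_water * (hu + 1000.0) / 1000.0
    bone = mu_water + hu * 5.0e-5          # shallower slope above 0 HU
    return np.where(hu <= 0.0, soft, bone)

# Air (-1000 HU), water (0 HU), and dense bone (+1000 HU)
mu = hu_to_mu511([-1000.0, 0.0, 1000.0])
print(mu)
```

Replacing the acquired CT with a deep-learning pseudo-CT leaves this downstream conversion unchanged, which is what makes the pseudo-CT a drop-in substitute in the correction pipeline.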

Figure: raw PET image, deep learning pseudo-CT image for PET correction, and real CT image.

SUSAN: Smart AI for Efficient Image Synthesis and Segmentation

In addressing the challenge of creating a generalizable deep learning segmentation technique for magnetic resonance imaging (MRI), the UW-Madison Radiology AI team implemented an approach that seamlessly incorporates highly efficient image-to-image translation with adversarial learning into the segmentation algorithm. This novel technique, SUSAN, is now published in Magnetic Resonance in Medicine.

Segmentation is a fundamental step in medical image analysis. While many deep learning methods address segmentation challenges in medical images, training highly efficient deep learning models typically requires a large amount of training data, which can be extremely expensive and time-consuming to collect. MRI offers various image contrasts, making the standard approach of training a separate deep learning model for each image contrast inefficient and unscalable. Our method SUSAN, which stands for Segmenting Unannotated image Structure using Adversarial Network, was invented to segment different image contrasts using only one set of standard training data. SUSAN adheres to the smart AI philosophy: understand data more efficiently and use information cost-effectively.