SANTIS: Novel Machine Learning Method Accelerates MRI

Over the past few years, machine learning has demonstrated the ability to improve image quality when reconstructing undersampled MRI data, providing new opportunities to further improve the performance of rapid MRI. Compared to conventional rapid imaging techniques, machine learning-based methods reformulate image reconstruction as a feature-learning task, inferring undersampled image structures from a large image database. Such a data-driven approach has been shown to efficiently remove artifacts from undersampled images, translate k-space information into images, estimate missing k-space data, and reconstruct MR parameter maps. Many pioneering works from various research groups, listed below, have greatly influenced the medical image reconstruction community. Now, our artificial intelligence team has invented a new method that further advances machine learning-based image reconstruction by rethinking three essential reconstruction components: efficiency, accuracy, and robustness. This new framework, named Sampling-Augmented Neural neTwork with Incoherent Structure (SANTIS), was recently accepted for publication in Magnetic Resonance in Medicine.

Hammernik et al. (New York University / Graz University of Technology): Variational Network
Wang et al. (Paul C. Lauterbur Research Center for Imaging): End-to-end CNN
Zhu et al. (Harvard University): AUTOMAP
Schlemper et al. (Imperial College London): Cascade Network
Han et al. (Korea Advanced Institute of Science and Technology): Domain Adaptation Network
Akçakaya et al. (University of Minnesota): RAKI
Mardani et al. (Stanford University): GANCS
Eo et al. (Yonsei University): KIKI-net
Biswas et al. (University of Iowa): MoDL-SToRM
Quan et al. (Ulsan National Institute of Science and Technology): Cyclic Network
Liu et al. (University of Wisconsin): MANTIS

Our SANTIS framework uses a unique recipe: a data cycle-consistent adversarial network that combines efficient end-to-end convolutional neural network mapping, data fidelity enforcement, and adversarial training to reconstruct undersampled MR images. 1) Reconstruction efficiency is ensured by an end-to-end convolutional neural network that directly removes image artifacts and noise through highly efficient multi-scale deep feature learning; many modern convolutional neural network designs provide flexibility for network selection and implementation. 2) Reconstruction accuracy is maintained by adversarial training with an additional adversarial loss and a data consistency loss, so that the reconstructed images retain a natural-looking appearance and preserve image texture and detail at high undersampling rates. 3) Reconstruction robustness is enforced by a training strategy that employs sampling augmentation with extensive variation of the undersampling patterns, allowing the trained network to learn a wide range of aliasing artifact structures and remove undersampling artifacts more faithfully.
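To make this recipe concrete, here is a minimal sketch, assuming a PyTorch setup, of how the three ingredients could be combined into a single training loss. The function and argument names, the loss weights, and the FFT-based data-consistency term are illustrative assumptions, not the published SANTIS implementation.

```python
import torch
import torch.nn.functional as F

def combined_loss(recon, target, kspace_meas, mask, disc_logits_fake,
                  lam_adv=0.01, lam_dc=1.0):
    """Illustrative combination of the three training terms described above.

    recon, target    : reconstructed / fully sampled images, shape (B, 1, H, W)
    kspace_meas      : acquired undersampled k-space, complex, shape (B, 1, H, W)
    mask             : binary sampling mask, shape (B, 1, H, W)
    disc_logits_fake : discriminator output for the reconstructed images
    """
    # 1) efficiency: end-to-end CNN mapping trained with a pixel-wise fidelity term
    pixel_loss = F.l1_loss(recon, target)

    # 2) accuracy: adversarial term encouraging natural-looking texture ...
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))

    #    ... plus a data-consistency term that ties the reconstruction to the
    #    k-space samples that were actually acquired
    kspace_recon = torch.fft.fft2(recon.to(torch.complex64), norm="ortho")
    dc_loss = torch.mean(torch.abs(mask * (kspace_recon - kspace_meas)))

    return pixel_loss + lam_adv * adv_loss + lam_dc * dc_loss
```

The third ingredient, sampling augmentation, would then correspond to drawing a fresh undersampling mask for each training example (for instance, `mask = (torch.rand(1, 1, H, W) < 1.0 / R).float()` for an acceleration factor R), so that the network sees a wide variety of aliasing patterns during training.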

SANTIS was evaluated for reconstructing undersampled knee images acquired with a Cartesian k-space sampling scheme and undersampled liver images acquired with a non-repeating golden-angle radial sampling scheme. SANTIS demonstrated superior reconstruction performance on both datasets, with significantly improved robustness and efficiency compared to several reference methods. We believe SANTIS represents a novel concept for deep learning-based image reconstruction and may further increase the value of MRI by enabling improved rapid image acquisition and reconstruction, one of the goals of our research team.

An example of SANTIS reconstruction for a knee dataset (proton density-weighted fast spin echo at 3T, GE Premier) at acceleration factor R = 5. Left: fully sampled image; Middle: undersampled image reconstructed by zero-filling; Right: image reconstructed using SANTIS.

Deep Learning Empowers Lung MR Imaging for Pulmonary Function Quantification

Deep Learning enables accurate and robust lung tissue classification for assessing pulmonary functional and structural differences between disease cohorts

Dr. Wei Zha, an imaging scientist in the Pulmonary and Metabolic Imaging Center led by Dr. Sean Fain at UW-Madison, has invented a deep learning approach that provides fast, reproducible, and robust quantification of pulmonary structure and function using oxygen-enhanced (OE) MRI. This novel deep learning technology has great potential to create useful imaging biomarkers for assessing pulmonary diseases and was recently published in the Journal of Magnetic Resonance Imaging.

Timeline of OE MR Imaging Protocol

OE MRI using a 3D radial ultrashort echo time sequence supports quantitative respiratory function assessment for lung diseases. In contrast to typical lung imaging using hyperpolarized noble gas or fluorinated gas, this OE technique requires neither specialized multinuclear hardware nor expensive specialty gases, while providing full-chest images of regional ventilation at isotropic spatial resolution. Despite these rapid advances in pulmonary OE MRI, a quantification tool for extracting potential biomarkers and regional image features remains to be developed. The analysis of pulmonary OE MR images is challenging because of modality-specific complexities, including coil inhomogeneity, arbitrary intensity values, local magnetic susceptibility, and the reduced proton density caused by the large fraction of air space in the normal lung.

This newly invented deep learning method uses an efficient convolutional encoder-decoder architecture and multi-plane consensus labeling to identify 3D image features and patterns, resulting in accurate and robust classification and segmentation of pulmonary tissues on OE MR images. Subsequent analysis of the classified tissues yielded robust quantification of lung structure and function and revealed significant differences between pulmonary disease cohorts.
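As a rough illustration of the multi-plane consensus idea, the sketch below (a NumPy toy, not the published code) assumes the 2D encoder-decoder has already been applied slice by slice along the axial, coronal, and sagittal orientations, and the three resulting label volumes are merged by a per-voxel majority vote; the class count and vote rule are assumptions.

```python
import numpy as np

def multiplane_consensus(axial_pred, coronal_pred, sagittal_pred, n_classes=3):
    """Merge per-orientation label volumes into one consensus segmentation.

    Each input is an integer label volume of identical shape (D, H, W),
    produced by running a 2D encoder-decoder slice by slice along one
    orientation and stacking the predictions back into 3D.
    """
    votes = np.zeros(axial_pred.shape + (n_classes,), dtype=np.int32)
    for pred in (axial_pred, coronal_pred, sagittal_pred):
        for c in range(n_classes):
            # accumulate each orientation's vote for class c at every voxel
            votes[..., c] += (pred == c)
    # the consensus label is the class that received the most votes per voxel
    return votes.argmax(axis=-1)
```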


Accurate Treatment Planning in Radiotherapy Using Deep Learning

Last year, our deepMRAC study published in Radiology evaluated the feasibility of deep learning-based pseudo-CT generation for PET/MR attenuation correction; in that work, our AI team demonstrated that pseudo-CT generated by learning from MR information could significantly improve PET reconstruction in PET/MR, leading to less than 1% uncertainty in brain FDG PET quantification. To assess its clinical value further, we investigated the feasibility, applicability, and robustness of deep learning-based pseudo-CT generation in MR-guided radiation therapy. With the new method, published as the deepMTP framework, we demonstrated the high clinical value of deep learning pseudo-CT in radiation therapy for saving radiation dose and providing high-quality treatment planning equivalent to the standard clinical method.
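For readers curious what learning a pseudo-CT from MR information looks like in practice, the sketch below shows one generic supervised training step that regresses CT Hounsfield units from MR slices with an L1 loss. The tiny network, loss choice, and names are placeholder assumptions for illustration; the actual deepMRAC and deepMTP networks are deeper and trained differently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPseudoCTNet(nn.Module):
    """A deliberately small stand-in for an MR-to-CT encoder-decoder (sketch only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # regress a Hounsfield-unit map
        )

    def forward(self, mr_slice):
        return self.net(mr_slice)

def train_step(model, optimizer, mr_batch, ct_batch):
    """One supervised step: map MR intensities to the paired CT in Hounsfield units."""
    optimizer.zero_grad()
    pseudo_ct = model(mr_batch)
    loss = F.l1_loss(pseudo_ct, ct_batch)  # pixel-wise fidelity to the real CT
    loss.backward()
    optimizer.step()
    return loss.item()
```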

An example of a patient with a right frontal brain tumor adjacent to the chiasm and right optic nerve. deepMTP provided a treatment plan with similar PTV coverage and isodose lines around the tumor region compared with the CT-based treatment plan (CTTP), shown in the fused MR and CT images (a, b). The plan was designed to avoid the adjacent chiasm and optic nerves. The DVH (c) shows highly similar dose curves for the PTV, chiasm, and right optic nerve between CTTP (solid lines) and deepMTP (dashed lines).

We have shown that deep learning approaches applied to MR-based treatment planning in radiation therapy can produce plans comparable to CT-based methods. Combined with the improved soft-tissue contrast and resolution of MR, the further development and clinical evaluation of such approaches could provide accurate dose coverage, reduce treatment-unrelated dose, and streamline the workflow for MR-only treatment planning. Our study suggests that deep learning approaches such as deepMTP will substantially impact future treatment planning in the brain and elsewhere in the body.

SUSAN: Smart AI for Efficient Image Synthesis and Segmentation

In addressing the challenge of creating a generalizable deep learning segmentation technique for magnetic resonance imaging (MRI), the UW-Madison Radiology AI team developed an approach that seamlessly incorporates highly efficient adversarial image-to-image translation into the segmentation algorithm. This novel technique, SUSAN, is now published in Magnetic Resonance in Medicine.

Segmentation is a fundamental step in medical image analysis. While many deep learning methods address segmentation challenges in medical images, training highly efficient deep learning models typically requires a large amount of training data, which can be extremely expensive and time-consuming to collect. MRI offers many different image contrasts, making the standard approach of training a separate deep learning model for each image contrast inefficient and unscalable. Our method, SUSAN (Segmenting Unannotated image Structure using Adversarial Network), was invented to segment different image contrasts using only one set of standard training data. SUSAN adheres to the smart-AI philosophy of understanding data more efficiently and using information cost-effectively.
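One way to read this idea is as an inference path in which an adversarially trained generator first translates an unannotated contrast into the contrast the segmentation network was trained on, after which that single segmenter is reused. The sketch below follows that reading; the contrast names, `translator`, and `segmenter` are hypothetical placeholders rather than the published SUSAN pipeline.

```python
import torch

def segment_unannotated_contrast(image_t2, translator, segmenter):
    """Conceptual SUSAN-style inference (a sketch under the assumptions above).

    image_t2   : image of an unannotated contrast, shape (B, 1, H, W)
    translator : adversarially trained image-to-image generator (e.g., T2 -> T1)
    segmenter  : segmentation network trained only on the annotated contrast
    """
    with torch.no_grad():
        synthetic = translator(image_t2)  # image-to-image translation step
        logits = segmenter(synthetic)     # reuse the single trained segmenter
    return logits.argmax(dim=1)           # per-pixel label map
```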

Deep Learning Approach Can Detect Cartilage Lesions within the Knee Joint

The UW-Madison musculoskeletal imaging team developed a fully automated medical imaging diagnostic system using artificial intelligence. Using a deep learning approach, the system achieved high diagnostic accuracy and reproducibility for detecting cartilage degeneration in 175 patients with knee pain, while reducing the subjectivity, variability, and errors associated with human interpretation of knee MR images. This study is now published in Radiology. For more details, please check out the official link or my ResearchGate page.

Illustration of the CNN architecture for the DL-based cartilage lesion detection system. Our proposed method (left) consists of segmentation and classification CNNs connected in a cascaded fashion to create a fully automated processing pipeline. Examples (right) demonstrate successful classification of different types of cartilage lesions.
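To illustrate the cascaded design in the caption above, here is a minimal sketch of how a segmentation CNN and a classification CNN might be chained at inference time. The centroid-based cropping, patch size, and function names are simplifying assumptions for illustration, not the published pipeline.

```python
import torch

def detect_cartilage_lesion(mr_slice, seg_cnn, cls_cnn, patch=64):
    """Sketch of a cascaded segmentation -> classification pipeline.

    mr_slice : image tensor of shape (1, 1, H, W)
    returns  : lesion probability for the segmented cartilage region,
               or None if no cartilage was found on this slice
    """
    with torch.no_grad():
        # step 1: the segmentation CNN localizes cartilage on the slice
        cartilage = seg_cnn(mr_slice).argmax(dim=1)[0]   # (H, W) label map
        ys, xs = torch.nonzero(cartilage, as_tuple=True)
        if ys.numel() == 0:
            return None
        # step 2: crop a patch around the centroid of the segmented cartilage
        cy, cx = int(ys.float().mean()), int(xs.float().mean())
        half = patch // 2
        y0 = max(0, min(cy - half, mr_slice.shape[2] - patch))
        x0 = max(0, min(cx - half, mr_slice.shape[3] - patch))
        crop = mr_slice[:, :, y0:y0 + patch, x0:x0 + patch]
        # step 3: the classification CNN grades the cropped cartilage patch
        return torch.sigmoid(cls_cnn(crop))
```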