SANTIS: Novel Machine Learning Method Accelerates MRI

Over the past few years, machine learning has demonstrated its ability to improve image quality when reconstructing undersampled MRI data, providing new opportunities to further improve the performance of rapid MRI. Compared with conventional rapid imaging techniques, machine learning-based methods reformulate image reconstruction as a feature-learning task, inferring undersampled image structures from a large image database. Such a data-driven approach has been shown to efficiently remove artifacts from undersampled images, translate k-space information into images, estimate missing k-space data, and reconstruct MR parameter maps. Many pioneering works from various research groups have greatly influenced the medical image reconstruction community. Now, our artificial intelligence team has invented a new method that further advances machine learning-based image reconstruction by rethinking the essential reconstruction components of efficiency, accuracy, and robustness. This new framework, named Sampling-Augmented Neural neTwork with Incoherent Structure (SANTIS), was recently accepted for publication in Magnetic Resonance in Medicine.

Hammernik et al. (New York University / Graz University of Technology): Variational Network
Wang et al. (Paul C. Lauterbur Research Center for Biomedical Imaging): End-to-end CNN
Zhu et al. (Harvard University): AUTOMAP
Schlemper et al. (Imperial College London): Cascade Network
Han et al. (Korea Advanced Institute of Science and Technology): Domain Adaptation Network
Akçakaya et al. (University of Minnesota): RAKI
Mardani et al. (Stanford University): GANCS
Eo et al. (Yonsei University): KIKI-net
Biswas et al. (University of Iowa): MoDL-SToRM
Quan et al. (Ulsan National Institute of Science and Technology): Cyclic Network
Liu et al. (University of Wisconsin): MANTIS

Our SANTIS framework uses a unique recipe: a data cycle-consistent adversarial network that combines efficient end-to-end convolutional neural network mapping, data fidelity enforcement, and adversarial training for reconstructing undersampled MR images. 1) Reconstruction efficiency was ensured by an end-to-end convolutional neural network that directly removes image artifacts and noise using highly efficient multi-scale deep feature learning; many modern convolutional neural network designs provide flexibility in network selection and implementation. 2) Reconstruction accuracy was maintained by adversarial training using an additional adversarial loss and a data consistency loss, so that the reconstructed images have a natural-looking appearance and preserve image texture and detail even at high undersampling rates. 3) Reconstruction robustness was enforced by a training strategy employing sampling augmentation with extensive variation of the undersampling patterns; the trained network learns a wide range of aliasing artifact structures and can therefore remove undersampling artifacts more faithfully.
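To make these three ingredients concrete, here is a minimal PyTorch sketch of what a single training step could look like: an image-domain reconstruction loss from the end-to-end CNN, an adversarial term, a k-space data-consistency term, and a freshly drawn undersampling mask per iteration (sampling augmentation). The networks `recon_net` and `discriminator`, the loss weights, and the mask generator are illustrative placeholders, not the published SANTIS implementation.

```python
# Minimal sketch (PyTorch) of a SANTIS-style training step. All network
# definitions, loss weights, and the mask generator are illustrative
# placeholders, not the authors' implementation.
import torch
import torch.nn.functional as F

def random_cartesian_mask(kspace_shape, accel=5, device="cpu"):
    """Draw a fresh random column-wise undersampling mask (sampling augmentation)."""
    cols = (torch.rand(kspace_shape[-1], device=device) < 1.0 / accel).float()
    return cols.view(1, 1, 1, -1).expand(kspace_shape)

def santis_style_loss(recon_net, discriminator, full_img, w_adv=0.01, w_dc=1.0):
    # Simulate undersampling with a new mask every iteration.
    kspace = torch.fft.fft2(full_img)
    mask = random_cartesian_mask(kspace.shape, device=full_img.device)
    zero_filled = torch.fft.ifft2(kspace * mask).abs()

    recon = recon_net(zero_filled)                      # end-to-end artifact removal

    loss_pixel = F.l1_loss(recon, full_img)             # image-domain fidelity
    loss_adv = -discriminator(recon).mean()             # adversarial ("natural look") term
    recon_k = torch.fft.fft2(recon.to(torch.complex64))
    loss_dc = ((recon_k - kspace) * mask).abs().mean()  # k-space data consistency

    # The discriminator is updated in a separate step (omitted for brevity).
    return loss_pixel + w_adv * loss_adv + w_dc * loss_dc
```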

SANTIS was evaluated for reconstructing undersampled knee images acquired with a Cartesian k-space sampling scheme and undersampled liver images acquired with a non-repeating golden-angle radial sampling scheme. SANTIS demonstrated superior reconstruction performance on both datasets, with significantly improved robustness and efficiency compared to several reference methods. We believe SANTIS represents a novel concept for deep learning-based image reconstruction and may further increase the value of MRI by enabling improved rapid image acquisition and reconstruction, one of the goals of our research team.
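For context, the non-repeating golden-angle radial scheme mentioned above rotates each successive spoke by roughly 111.25 degrees (180 degrees divided by the golden ratio), so any contiguous set of spokes covers k-space nearly uniformly. A small illustrative sketch of how such view angles can be generated (not tied to any particular pulse sequence):

```python
# Projection angles for a non-repeating golden-angle radial acquisition.
import numpy as np

GOLDEN_ANGLE_DEG = 180.0 / ((1.0 + np.sqrt(5.0)) / 2.0)   # ~111.246 degrees

def golden_angle_spokes(n_spokes):
    """Return the polar angle (degrees, modulo 180) of each radial spoke."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE_DEG) % 180.0

print(golden_angle_spokes(5))   # -> [0.0, 111.25, 42.49, 153.74, 84.98] (approximately)
```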

One example of SANTIS reconstruction for a knee dataset (proton density-weighted fast spin echo @ 3T GE Premier) at acceleration factor R=5. Left: fully sampled image; Middle: undersampled image reconstructed with zero-filling; Right: image reconstructed using SANTIS.

Deep Learning Empowers Lung MR Imaging for Pulmonary Function Quantification

Deep Learning enables accurate and robust lung tissue classification for assessing pulmonary functional and structural differences between disease cohorts

Dr. Wei Zha, an imaging scientist in the Pulmonary and Metabolic Imaging Center led by Dr. Sean Fain at UW-Madison, has invented a deep learning approach that provides fast, reproducible, and robust quantification of pulmonary structure and function using oxygen-enhanced (OE) MRI. This novel deep learning technology has great potential to yield useful imaging biomarkers for assessing pulmonary diseases and was recently published in the Journal of Magnetic Resonance Imaging.

Timeline of OE MR Imaging Protocol

OE MRI using a 3D radial ultrashort echo time sequence supports quantitative respiratory function assessment for lung diseases. In contrast to typical lung imaging using hyperpolarized noble gases or fluorinated gases, this novel OE technique does not require specialized multinuclear hardware or expensive specialty gases, while providing full-chest images of regional ventilation at isotropic spatial resolution. Despite these rapid advances in pulmonary OE MRI, quantification tools for extracting potential biomarkers and regional image features remain to be developed. The analysis of pulmonary OE MR images remains challenging due to modality-specific complexities, including coil inhomogeneity, arbitrary intensity values, local magnetic susceptibility, and reduced proton density caused by the large fraction of air space in the normal lung.

This newly invented deep learning method uses an efficient convolutional encoder-decoder structure and multi-plane consensus labeling to identify 3D image features and patterns, resulting in accurate and robust classification and segmentation of pulmonary tissues on OE MR images. Subsequent analysis using the classified tissues yielded robust quantification of lung structure and function and revealed significant differences between pulmonary disease cohorts.
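The multi-plane consensus idea can be illustrated with a short sketch: a 2D segmenter (here a placeholder callable `segment_slice`, standing in for the encoder-decoder network) is applied slice-by-slice along the axial, coronal, and sagittal orientations of the 3D volume, and the final label map is obtained by per-voxel majority voting. This is a simplified illustration under assumed interfaces, not the published pipeline.

```python
# Multi-plane consensus labeling sketch: segment along three orientations,
# then take a per-voxel majority vote. `segment_slice` is an assumed
# placeholder for a trained 2D encoder-decoder segmentation network.
import numpy as np

def segment_plane(volume, axis, segment_slice):
    """Segment every 2D slice along `axis` and reassemble a 3D label volume."""
    labels = np.zeros(volume.shape, dtype=np.int64)
    for i in range(volume.shape[axis]):
        pred = segment_slice(np.take(volume, i, axis=axis))   # 2D label map
        idx = [slice(None)] * 3
        idx[axis] = i
        labels[tuple(idx)] = pred
    return labels

def consensus_labels(volume, segment_slice, n_classes=3):
    """Per-voxel majority vote over axial (0), coronal (1), and sagittal (2) predictions."""
    votes = np.stack([segment_plane(volume, ax, segment_slice) for ax in (0, 1, 2)])
    one_hot = np.eye(n_classes, dtype=np.int64)[votes]        # (3, X, Y, Z, n_classes)
    return one_hot.sum(axis=0).argmax(axis=-1)                # consensus label volume
```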


Accurate and Dose-Saving Positron Emission Tomography Imaging using Deep Learning

In a recent paper published in EJNMMI Physics, our AI team at the University of Wisconsin proposed a deep learning algorithm to address the challenge of achieving simultaneously accurate and dose-saving positron emission tomography (PET) imaging.

PET is a non-invasive imaging modality that directly provides biomarkers of physiology. Accurate PET activity is calculated by reconstructing the photon signal emitted from the PET tracer after correcting for the scattering and attenuation the photons experience as they travel through tissue. The conventional way to perform such a correction requires additional transmission images, such as computed tomography (CT) images, and therefore exposes patients to additional radiation. In this work, our AI team invented a data-driven method that corrects the PET signal using a pseudo-CT image generated from the PET image itself with deep learning. This novel technique avoids acquiring additional transmission images, reduces the radiation dose, and increases imaging robustness against subject motion. Our experiments in brain imaging showed minimal differences between our deep learning method and the clinical standard method, with an average deviation of less than 1% in most brain regions.
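As a rough illustration of this workflow, the sketch below assumes a trained image-to-image network (`pseudo_ct_net`) that maps the uncorrected PET image to a CT-like image in Hounsfield units, which is then converted to a 511 keV attenuation map using a simplified water-equivalent scaling before re-reconstruction. The network, the single-slope HU conversion, and the `reconstruct` call are stand-ins for illustration, not the published method.

```python
# Pseudo-CT attenuation-correction sketch. The HU-to-mu conversion below is a
# simplified single-slope scaling (bone normally needs a separate slope), and
# `pseudo_ct_net` / `reconstruct` are assumed placeholder callables.
import numpy as np

MU_WATER_511KEV = 0.096   # cm^-1, linear attenuation coefficient of water at 511 keV

def hu_to_mu(pseudo_ct_hu):
    """Simplified HU -> attenuation-coefficient map (water-equivalent scaling)."""
    mu = MU_WATER_511KEV * (1.0 + pseudo_ct_hu / 1000.0)
    return np.clip(mu, 0.0, None)

def attenuation_correct(uncorrected_pet, pseudo_ct_net, reconstruct):
    pseudo_ct = pseudo_ct_net(uncorrected_pet)     # deep-learning pseudo-CT (in HU)
    mu_map = hu_to_mu(pseudo_ct)                   # 511 keV attenuation map
    return reconstruct(uncorrected_pet, mu_map)    # re-reconstruct with correction applied
```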

Left: raw PET image; Middle: deep learning pseudo-CT image for PET correction; Right: real CT image.

Deep Learning Approach Can Detect Cartilage Lesions within the Knee Joint

The UW-Madison musculoskeletal imaging team developed a fully-automated medical imaging diagnostic system using artificial intelligence. The system, based on a deep learning approach, achieved high diagnostic accuracy and reproducibility for detecting cartilage degeneration in 175 patients with knee pain, while reducing the subjectivity, variability, and errors associated with human interpretation of knee MR images. This study is now published in Radiology. For more details, please check out the official link or my ResearchGate page.

Illustration of the CNN architecture for the DL-based cartilage lesion detection system. Our proposed method (Left) consisted of segmentation and classification CNNs, which were connected in a cascaded fashion to create a fully-automated processing pipeline. Examples (Right) are shown to demonstrate the successful classification of different types of cartilage lesions.
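A minimal sketch of how such a cascaded pipeline can be organized: a segmentation network localizes cartilage on a slice, patches centered on the segmented cartilage are extracted, and a classification network scores each patch for a lesion. Both networks, the patch size, and the decision threshold are illustrative placeholders rather than the published architecture.

```python
# Cascaded segmentation -> classification sketch for lesion detection.
# `segment_net` and `classify_net` are assumed placeholder callables.
import numpy as np

def extract_patches(image, cartilage_mask, patch=32, stride=16):
    """Yield (row, col, patch) windows centered on segmented cartilage pixels."""
    h, w = image.shape
    half = patch // 2
    for r in range(half, h - half, stride):
        for c in range(half, w - half, stride):
            if cartilage_mask[r, c]:
                yield r, c, image[r - half:r + half, c - half:c + half]

def detect_lesions(image, segment_net, classify_net, threshold=0.5):
    """Stage 1: segment cartilage; Stage 2: classify patches along the cartilage."""
    mask = segment_net(image) > 0.5            # binary cartilage mask
    hits = []
    for r, c, patch in extract_patches(image, mask):
        p_lesion = classify_net(patch)         # probability the patch contains a lesion
        if p_lesion > threshold:
            hits.append((r, c, float(p_lesion)))
    return hits
```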

AI achieves state-of-the-art performance for fully-automated medical image segmentation

The April 2018 Editor's Pick of the top-notch medical imaging journal Magnetic Resonance in Medicine (MRM) is from Fang Liu and Richard Kijowski, researchers at the University of Wisconsin-Madison. Their paper presents a novel approach to automatically segmenting knee joint structures in magnetic resonance images, combining the power of recently developed deep learning-based artificial intelligence with a highly efficient 3D deformable model approach. An interview with Fang and Rick about their current and upcoming projects is available at blog.ismrm.org/2018/04/13/qa-with-and-fang-liu-and-richard-kijowski/