Deep Learning Publications

2021
Schilling KG, Rheault F, Petit L, Hansen CB, Nath V, Yeh F-C, Girard G, Barakovic M, Rafael-Patino J, Yu T, et al. Tractography Dissection Variability: What Happens When 42 Groups Dissect 14 White Matter Bundles on the Same Dataset? Neuroimage. 2021;243:118502.
White matter bundle segmentation using diffusion MRI fiber tractography has become the method of choice to identify white matter fiber pathways in vivo in human brains. However, like other analyses of complex data, there is considerable variability in segmentation protocols and techniques. This can result in different reconstructions of the same intended white matter pathways, which directly affects tractography results, quantification, and interpretation. In this study, we aim to evaluate and quantify the variability that arises from different protocols for bundle segmentation. Through an open call to users of fiber tractography, including anatomists, clinicians, and algorithm developers, 42 independent teams were given processed sets of human whole-brain streamlines and asked to segment 14 white matter fascicles on six subjects. In total, we received 57 different bundle segmentation protocols, which enabled detailed volume-based and streamline-based analyses of agreement and disagreement among protocols for each fiber pathway. Results show that even when given the exact same sets of underlying streamlines, the variability across protocols for bundle segmentation is greater than all other sources of variability in the virtual dissection process, including variability within protocols and variability across subjects. In order to foster the use of tractography bundle dissection in routine clinical settings, and as a fundamental analytical tool, future endeavors must aim to resolve and reduce this heterogeneity. Although external validation is needed to verify the anatomical accuracy of bundle dissections, reducing heterogeneity is a step towards reproducible research and may be achieved through the use of standard nomenclature and definitions of white matter bundles and well-chosen constraints and decisions in the dissection process.
Abdelmoula WM, Lopez BG-C, Randall EC, Kapur T, Sarkaria JN, White FM, Agar JN, Wells WM, Agar NYR. Peak Learning of Mass Spectrometry Imaging Data Using Artificial Neural Networks. Nat Commun. 2021;12(1):5544.
Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications and clinical diagnosis. Despite many solutions being developed, the large data size and high dimensional nature of MSI, especially 3D datasets, still pose computational and memory complexities that hinder accurate identification of biologically relevant molecular patterns. Moreover, the subjectivity in the selection of parameters for conventional pre-processing approaches can lead to bias. Therefore, we assess if a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. The resulting msiPL method learns and visualizes the underlying non-linear spectral manifold, revealing biologically relevant clusters of tissue anatomy in a mouse kidney and tumor heterogeneity in human prostatectomy tissue, colorectal carcinoma, and glioblastoma mouse model, with identification of underlying m/z peaks. The method is applied for the analysis of MSI datasets ranging from 3.3 to 78.9 GB, without prior pre-processing and peak picking, and acquired using different mass spectrometers at different centers.
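To illustrate the core mechanism of a variational autoencoder like the one underlying msiPL, the sketch below shows the encode / reparameterize / decode path and the KL term of the loss for a single spectrum. It is a minimal numpy illustration with hypothetical layer sizes and untrained random weights, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 1000 m/z bins compressed to a 5-D latent space.
n_mz, n_latent = 1000, 5

# Randomly initialized single-layer encoder/decoder weights (training omitted).
W_enc_mu = rng.normal(0, 0.01, (n_latent, n_mz))
W_enc_logvar = rng.normal(0, 0.01, (n_latent, n_mz))
W_dec = rng.normal(0, 0.01, (n_mz, n_latent))

def encode(spectrum):
    """Map a spectrum to the mean and log-variance of a Gaussian posterior."""
    return W_enc_mu @ spectrum, W_enc_logvar @ spectrum

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct the spectrum from the latent code."""
    return W_dec @ z

spectrum = rng.random(n_mz)
mu, logvar = encode(spectrum)
z = reparameterize(mu, logvar)
recon = decode(z)

# KL divergence of the latent posterior from a standard-normal prior.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

In the full method, the low-dimensional latent manifold (here `z`) is what gets clustered to reveal tissue structure, and peak identification follows from analyzing the learned mapping back to m/z bins.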
Fichtinger G, Mousavi P, Ungi T, Fenster A, Abolmaesumi P, Kronreif G, Ruiz-Alzola J, Ndoye A, Diao B, Kikinis R. Design of an Ultrasound-Navigated Prostate Cancer Biopsy System for Nationwide Implementation in Senegal. J Imaging. 2021;7(8):154.
This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have been independently validated previously in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports on the results of the design process of NaviPBx. Our approach concentrates on "frugal technology", intended to be affordable for low-middle income (LMIC) countries. Our project promises the wide-scale application of prostate biopsy and will foster time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.
Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper S, Aerts HJWL, Homeyer A, Lewis R, Akbarzadeh A, Bontempi D, et al. NCI Imaging Data Commons. Cancer Res. 2021;81(16):4188-93.
The National Cancer Institute (NCI) Cancer Research Data Commons (CRDC) aims to establish a national cloud-based data science infrastructure. Imaging Data Commons (IDC) is a new component of CRDC supported by the Cancer Moonshot℠. The goal of IDC is to enable a broad spectrum of cancer researchers, with and without imaging expertise, to easily access and explore the value of de-identified imaging data and to support integrated analyses with non-imaging data. We achieve this goal by co-locating versatile imaging collections with cloud-based computing resources and data exploration, visualization, and analysis tools. The IDC pilot was released in October 2020 and is being continuously populated with radiology and histopathology collections. IDC provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research.
Bastos DCDA, Juvekar P, Tie Y, Jowkar N, Pieper S, Wells WM, Bi WL, Golby A, Frisken S, Kapur T. Challenges and Opportunities of Intraoperative 3D Ultrasound With Neuronavigation in Relation to Intraoperative MRI. Front Oncol. 2021;11:656519.
Introduction: Neuronavigation greatly improves the surgeon's ability to approach, assess and operate on brain tumors, but tends to lose its accuracy as the surgery progresses and substantial brain shift and deformation occurs. Intraoperative MRI (iMRI) can partially address this problem but is resource intensive and workflow disruptive. Intraoperative ultrasound (iUS) provides real-time information that can be used to update neuronavigation and provide real-time information regarding the resection progress. We describe the intraoperative use of 3D iUS in relation to iMRI, and discuss the challenges and opportunities in its use in neurosurgical practice. Methods: We performed a retrospective evaluation of patients who underwent image-guided brain tumor resection in which both 3D iUS and iMRI were used. The study was conducted between June 2020 and December 2020 when an extension of a commercially available navigation software was introduced in our practice enabling 3D iUS volumes to be reconstructed from tracked 2D iUS images. For each patient, three or more 3D iUS images were acquired during the procedure, and one iMRI was acquired towards the end. The iUS images included an extradural ultrasound sweep acquired before dural incision (iUS-1), a post-dural opening iUS (iUS-2), and a third iUS acquired immediately before the iMRI acquisition (iUS-3). iUS-1 and preoperative MRI were compared to evaluate the ability of iUS to visualize tumor boundaries and critical anatomic landmarks; iUS-3 and iMRI were compared to evaluate the ability of iUS for predicting residual tumor. Results: Twenty-three patients were included in this study. Fifteen patients had tumors located in eloquent or near eloquent brain regions; the majority of patients had low grade gliomas (11); gross total resection was achieved in 12 patients; postoperative temporary deficits were observed in five patients.
In twenty-two cases, iUS was able to define tumor location and tumor margins, and to indicate relevant landmarks for orientation and guidance. In sixteen cases, white matter fiber tracts computed from preoperative dMRI were overlaid on the iUS images. In nineteen patients, the extent of resection (gross total or subtotal) was predicted by iUS and confirmed by iMRI. The remaining four patients, in whom iUS was not able to evaluate the presence or absence of residual tumor, were recurrent cases with a previous surgical cavity that hindered good contact between the US probe and the brain surface. Conclusion: This recent experience at our institution illustrates the practical benefits, challenges, and opportunities of 3D iUS in relation to iMRI.
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain Adaptation for Segmentation of Critical Structures for Prostate Cancer Therapy. Sci Rep. 2021;11(1):11480.
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step to generate a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning and uncertainty guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS, show significant performance gain with the combination of those techniques compared to pure TL and the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon's signed-rank test). Results on a different task and data (Pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
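The uncertainty-guided self-learning step described above can be illustrated with a toy sketch: an ensemble of models predicts class probabilities on unlabeled target-domain data, and only low-uncertainty pixels are kept as pseudo-labels for further training. This is a hypothetical numpy illustration of the general idea (random "predictions", arbitrary threshold), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical: 3 ensemble members emit class probabilities for 4 classes
# over a 10x10 image from the unlabeled target domain.
n_models, n_classes, h, w = 3, 4, 10, 10
logits = rng.normal(size=(n_models, n_classes, h, w))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

mean_prob = probs.mean(axis=0)           # ensemble-averaged probabilities
pseudo_label = mean_prob.argmax(axis=0)  # hard pseudo-label per pixel

# Predictive entropy of the ensemble mean as an uncertainty measure.
entropy = -(mean_prob * np.log(mean_prob + 1e-12)).sum(axis=0)

# Keep only confident pixels for self-training; the threshold is a free
# parameter (here: the median entropy, chosen arbitrarily).
threshold = np.quantile(entropy, 0.5)
confident_mask = entropy < threshold
```

The selected `(pseudo_label, confident_mask)` pairs would then serve as extra supervision when fine-tuning the transferred model on the target site.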
Madore B, Preiswerk F, Bredfeldt JS, Zong S, Cheng C-C. Ultrasound-based Sensors to Monitor Physiological Motion. Med Phys. 2021;48(7):3614-22.
PURPOSE: Medical procedures can be difficult to perform on anatomy that is constantly moving. Respiration displaces internal organs by up to several centimeters with respect to the surface of the body, and patients often have limited ability to hold their breath. Strategies to compensate for motion during diagnostic and therapeutic procedures require reliable information to be available. However, current devices often monitor respiration indirectly, through changes on the outline of the body, and they may be fixed to floors or ceilings, and thus unable to follow a given patient through different locations. Here we show that small ultrasound-based sensors referred to as "organ configuration motion" (OCM) sensors can be fixed to the abdomen and/or chest and provide information-rich, breathing-related signals. METHODS: By design, the proposed sensors are relatively inexpensive. Breathing waveforms were obtained from tissues at varying depths and/or using different sensor placements. Validation was performed against breathing waveforms derived from magnetic resonance imaging (MRI) and optical tracking signals in five and eight volunteers, respectively. RESULTS: Breathing waveforms from different modalities were scaled so they could be directly compared. Differences between waveforms were expressed in the form of a percentage, as compared to the amplitude of a typical breath. Expressed in this manner, for shallow tissues, OCM-derived waveforms on average differed from MRI and optical tracking results by 13.1% and 15.5%, respectively. CONCLUSION: The present results suggest that the proposed sensors provide measurements that properly characterize breathing states. While OCM-based waveforms from shallow tissues proved similar in terms of information content to those derived from MRI or optical tracking, OCM further captured depth-dependent and position-dependent (i.e., chest and abdomen) information. 
In time, the richer information content of OCM-based waveforms may enable better respiratory gating to be performed, to allow diagnostic and therapeutic equipment to perform at their best.
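The waveform comparison reported above (differences expressed as a percentage of a typical breath's amplitude) can be sketched as follows. This is a hypothetical numpy illustration with synthetic sine-wave "breathing" signals, not the study's actual data or processing pipeline:

```python
import numpy as np

t = np.linspace(0, 30, 600)                    # 30 s sampled at 20 Hz
mri = np.sin(2 * np.pi * t / 5.0)              # reference breathing waveform
noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
ocm = 0.8 * np.sin(2 * np.pi * t / 5.0) + noise  # sensor waveform, different gain

def rescale(w, ref):
    """Least-squares scale and offset so two waveforms are directly comparable."""
    A = np.vstack([w, np.ones_like(w)]).T
    a, b = np.linalg.lstsq(A, ref, rcond=None)[0]
    return a * w + b

ocm_scaled = rescale(ocm, mri)

# Express the residual as a percentage of the amplitude of a typical breath.
breath_amplitude = mri.max() - mri.min()
pct_diff = 100.0 * np.mean(np.abs(ocm_scaled - mri)) / breath_amplitude
```

With real data, `pct_diff` computed this way would correspond to the 13-15% figures quoted in the abstract; here the synthetic signals differ only by gain and noise, so the number comes out much smaller.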
He J, Zhang F, Xie G, Yao S, Feng Y, Bastos DCA, Rathi Y, Makris N, Kikinis R, Golby AJ, et al. Comparison of Multiple Tractography Methods for Reconstruction of the Retinogeniculate Visual Pathway Using Diffusion MRI. Hum Brain Mapp. 2021;42(12):3887-904.
The retinogeniculate visual pathway (RGVP) conveys visual information from the retina to the lateral geniculate nucleus. The RGVP has four subdivisions, including two decussating and two nondecussating pathways that cannot be identified on conventional structural magnetic resonance imaging (MRI). Diffusion MRI tractography has the potential to trace these subdivisions and is increasingly used to study the RGVP. However, it is not yet known which fiber tracking strategy is most suitable for RGVP reconstruction. In this study, four tractography methods are compared, including constrained spherical deconvolution (CSD) based probabilistic (iFOD1) and deterministic (SD-Stream) methods, and multi-fiber (UKF-2T) and single-fiber (UKF-1T) unscented Kalman filter (UKF) methods. Experiments use diffusion MRI data from 57 subjects in the Human Connectome Project. The RGVP is identified using regions of interest created by two clinical experts. Quantitative anatomical measurements and expert anatomical judgment are used to assess the advantages and limitations of the four tractography methods. Overall, we conclude that UKF-2T and iFOD1 produce the best RGVP reconstruction results. The iFOD1 method can better quantitatively estimate the percentage of decussating fibers, while the UKF-2T method produces reconstructed RGVPs that are judged to better correspond to the known anatomy and have the highest spatial overlap across subjects. Overall, we find that it is challenging for current tractography methods to both accurately track RGVP fibers that correspond to known anatomy and produce an approximately correct percentage of decussating fibers. We suggest that future algorithm development for RGVP tractography should take both of these points into consideration.
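A spatial-overlap comparison of the kind used above is typically computed as a Dice coefficient between voxelized tract reconstructions. A minimal numpy sketch with two hypothetical binary masks (the paper's exact overlap metric and masks are not reproduced here):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary voxel masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical voxelized RGVP reconstructions from two tractography methods:
# two 4x4x4 blocks offset by one voxel along the first axis.
m1 = np.zeros((8, 8, 8), dtype=bool); m1[2:6, 2:6, 2:6] = True
m2 = np.zeros((8, 8, 8), dtype=bool); m2[3:7, 2:6, 2:6] = True
overlap = dice(m1, m2)
```

Averaging such pairwise overlaps across subjects gives the per-method spatial consistency that the study uses to rank the four tracking strategies.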
Zhang F, Breger A, Cho KIK, Ning L, Westin C-F, O'Donnell LJ, Pasternak O. Deep Learning Based Segmentation of Brain Tissue from Diffusion MRI. Neuroimage. 2021;233:117934.
Segmentation of brain tissue types from diffusion MRI (dMRI) is an important task, required for quantification of brain microstructure and for improving tractography. Current dMRI segmentation is mostly based on anatomical MRI (e.g., T1- and T2-weighted) segmentation that is registered to the dMRI space. However, such inter-modality registration is challenging due to more image distortions and lower image resolution in dMRI as compared with anatomical MRI. In this study, we present a deep learning method for diffusion MRI segmentation, which we refer to as DDSeg. Our proposed method learns tissue segmentation from high-quality imaging data from the Human Connectome Project (HCP), where registration of anatomical MRI to dMRI is more precise. The method is then able to predict a tissue segmentation directly from new dMRI data, including data collected with different acquisition protocols, without requiring anatomical data and inter-modality registration. We train a convolutional neural network (CNN) to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy in regions of tissue boundary. To further improve accuracy, our method adds diffusion kurtosis imaging (DKI) parameters that characterize non-Gaussian water molecule diffusion to the conventional diffusion tensor imaging parameters. The DKI parameters are calculated from the recently proposed mean-kurtosis-curve method that corrects implausible DKI parameter values and provides additional features that discriminate between tissue types. We demonstrate high tissue segmentation accuracy on HCP data, and also when applying the HCP-trained model on dMRI data from other acquisitions with lower resolution and fewer gradient directions.
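One way to emphasize tissue-boundary regions in a segmentation loss, as the augmented target loss above aims to do, is to up-weight voxels whose neighborhood contains more than one tissue class. The sketch below is a hypothetical 2-D illustration of that general idea in numpy; the paper's actual loss formulation differs in detail:

```python
import numpy as np

def boundary_weight_map(labels, w_boundary=5.0):
    """Up-weight pixels whose 4-neighborhood contains another tissue class."""
    lbl = np.asarray(labels)
    boundary = np.zeros_like(lbl, dtype=bool)
    boundary[:-1, :] |= lbl[:-1, :] != lbl[1:, :]   # neighbor below
    boundary[1:, :] |= lbl[1:, :] != lbl[:-1, :]    # neighbor above
    boundary[:, :-1] |= lbl[:, :-1] != lbl[:, 1:]   # neighbor right
    boundary[:, 1:] |= lbl[:, 1:] != lbl[:, :-1]    # neighbor left
    return np.where(boundary, w_boundary, 1.0)

# Two tissue classes side by side; the weight map peaks along their interface.
labels = np.zeros((6, 6), dtype=int)
labels[:, 3:] = 1
weights = boundary_weight_map(labels)
```

Multiplying a per-voxel cross-entropy by such a map makes errors at tissue interfaces cost more, which is where dMRI-based segmentation is hardest.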
Sedghi A, O'Donnell LJ, Kapur T, Learned-Miller E, Mousavi P, Wells WM. Image Registration: Maximum Likelihood, Minimum Entropy and Deep Learning. Med Image Anal. 2021;69:101939.
In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. By an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form, and using coordinate ascent, or iterative model refinement. We also describe a method for feature based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can be used for maximum profile likelihood registration to discharge the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
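The connection above between registration and joint entropy can be made concrete with a toy example: compute the entropy of the joint intensity histogram of two images and search for the transformation that minimizes it. This is a hypothetical 1-D-shift illustration in numpy, not the paper's profile-likelihood machinery:

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Entropy of the joint intensity histogram of two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
# The "moving" image is the fixed image shifted 3 pixels to the right.
moving = np.roll(fixed, 3, axis=1)

# Exhaustive search over horizontal shifts: the joint entropy is minimized
# when the images are brought back into alignment.
best_shift = min(range(-5, 6),
                 key=lambda s: joint_entropy(fixed, np.roll(moving, s, axis=1)))
```

When the images are aligned, the joint histogram collapses toward a sharp curve and its entropy drops, which is the quantity the maximum-profile-likelihood analysis shows is (an upper bound on) what registration minimizes.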
Nitsch J, Sack J, Halle MW, Moltz JH, Wall A, Rutherford AE, Kikinis R, Meine H. MRI-Based Radiomic Feature Analysis of End-Stage Liver Disease for Severity Stratification. Int J Comput Assist Radiol Surg. 2021;16(3):457-66.
PURPOSE: We aimed to develop a predictive model of disease severity for cirrhosis using MRI-derived radiomic features of the liver and spleen and compared it to the existing disease severity metrics of MELD score and clinical decompensation. The MELD score is computed solely from blood parameters, and it has not previously been investigated whether image-derived features can reflect severity and potentially complement the calculated score. METHODS: This was a retrospective study of eligible patients with cirrhosis ([Formula: see text]) who underwent a contrast-enhanced MR screening protocol for hepatocellular carcinoma (HCC) screening at a tertiary academic center from 2015 to 2018. Radiomic feature analyses were used to train four prediction models for assessing the patient's condition at the time of scan: MELD score, MELD score [Formula: see text] 9 (median score of the cohort), MELD score [Formula: see text] 15 (the inflection between the risk and benefit of transplant), and clinical decompensation. Liver and spleen segmentations were used for feature extraction, followed by cross-validated random forest classification. RESULTS: Radiomic features of the liver and spleen were most predictive of clinical decompensation (AUC 0.84), which the MELD score could predict with an AUC of 0.78. Using liver or spleen features alone had slightly lower discrimination ability (AUC of 0.82 for liver and AUC of 0.78 for spleen features only), although this was not statistically significant in our cohort. When radiomic prediction models were trained to predict continuous MELD scores, there was poor correlation. When stratifying risk by splitting our cohort at the median MELD 9 or at MELD 15, our models achieved AUCs of 0.78 or 0.66, respectively.
CONCLUSIONS: We demonstrated that MRI-based radiomic features of the liver and spleen have the potential to predict the severity of liver cirrhosis, using decompensation or MELD status as imperfect surrogate measures for disease severity.
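Radiomic pipelines like the one above typically start from first-order statistics of the voxel intensities inside an organ segmentation. The sketch below computes a few such features for a hypothetical set of liver voxel intensities; it is an illustrative numpy example, not the study's feature set or extraction software:

```python
import numpy as np

def first_order_features(intensities, bins=16):
    """A few first-order radiomic features of the voxels inside a mask."""
    x = np.asarray(intensities, dtype=float)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(7)
liver_voxels = rng.normal(100, 15, size=5000)  # hypothetical liver intensities
features = first_order_features(liver_voxels)
```

Feature vectors of this kind, computed per organ (here liver; analogously spleen), are what the cross-validated random forest classifier is trained on to predict decompensation or MELD strata.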
