Deep Learning Publications

Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper S, Aerts HJWL, Homeyer A, Lewis R, Akbarzadeh A, Bontempi D, et al. NCI Imaging Data Commons. Cancer Res. 2021;81(16):4188-93.
The National Cancer Institute (NCI) Cancer Research Data Commons (CRDC) aims to establish a national cloud-based data science infrastructure. Imaging Data Commons (IDC) is a new component of CRDC supported by the Cancer Moonshot℠. The goal of IDC is to enable a broad spectrum of cancer researchers, with and without imaging expertise, to easily access and explore the value of de-identified imaging data and to support integrated analyses with non-imaging data. We achieve this goal by co-locating versatile imaging collections with cloud-based computing resources and data exploration, visualization, and analysis tools. The IDC pilot was released in October 2020 and is being continuously populated with radiology and histopathology collections. IDC provides access to curated imaging collections, accompanied by documentation, a user forum, and a growing number of analysis use cases that aim to demonstrate the value of a data commons framework applied to cancer imaging research.
Bastos DCDA, Juvekar P, Tie Y, Jowkar N, Pieper S, Wells WM, Bi WL, Golby A, Frisken S, Kapur T. Challenges and Opportunities of Intraoperative 3D Ultrasound With Neuronavigation in Relation to Intraoperative MRI. Front Oncol. 2021;11:656519.
Introduction: Neuronavigation greatly improves the surgeon's ability to approach, assess, and operate on brain tumors, but tends to lose its accuracy as the surgery progresses and substantial brain shift and deformation occur. Intraoperative MRI (iMRI) can partially address this problem but is resource intensive and workflow disruptive. Intraoperative ultrasound (iUS) provides real-time information that can be used to update neuronavigation and to monitor resection progress. We describe the intraoperative use of 3D iUS in relation to iMRI, and discuss the challenges and opportunities in its use in neurosurgical practice. Methods: We performed a retrospective evaluation of patients who underwent image-guided brain tumor resection in which both 3D iUS and iMRI were used. The study was conducted between June 2020 and December 2020, when an extension of a commercially available navigation software was introduced in our practice enabling 3D iUS volumes to be reconstructed from tracked 2D iUS images. For each patient, three or more 3D iUS images were acquired during the procedure, and one iMRI was acquired towards the end. The iUS images included an extradural ultrasound sweep acquired before dural incision (iUS-1), a post-dural-opening iUS (iUS-2), and a third iUS acquired immediately before the iMRI acquisition (iUS-3). iUS-1 and preoperative MRI were compared to evaluate the ability of iUS to visualize tumor boundaries and critical anatomic landmarks; iUS-3 and iMRI were compared to evaluate the ability of iUS to predict residual tumor. Results: Twenty-three patients were included in this study. Fifteen patients had tumors located in eloquent or near-eloquent brain regions; the majority of patients (11) had low-grade gliomas; gross total resection was achieved in 12 patients; postoperative temporary deficits were observed in five patients.
In twenty-two cases, iUS was able to define tumor location and tumor margins and to indicate relevant landmarks for orientation and guidance. In sixteen cases, white matter fiber tracts computed from preoperative diffusion MRI were overlaid on the iUS images. In nineteen patients, the extent of resection (gross total or subtotal) was predicted by iUS and confirmed by iMRI. In the remaining four patients, iUS was not able to evaluate the presence or absence of residual tumor; these were recurrent cases in which a previous surgical cavity hindered good contact between the US probe and the brain surface. Conclusion: This recent experience at our institution illustrates the practical benefits, challenges, and opportunities of 3D iUS in relation to iMRI.
Zhang F, Breger A, Cho KIK, Ning L, Westin C-F, O'Donnell LJ, Pasternak O. Deep Learning Based Segmentation of Brain Tissue from Diffusion MRI. Neuroimage. 2021;233:117934.
Segmentation of brain tissue types from diffusion MRI (dMRI) is an important task, required for quantification of brain microstructure and for improving tractography. Current dMRI segmentation is mostly based on anatomical MRI (e.g., T1- and T2-weighted) segmentation that is registered to the dMRI space. However, such inter-modality registration is challenging due to more image distortions and lower image resolution in dMRI as compared with anatomical MRI. In this study, we present a deep learning method for diffusion MRI segmentation, which we refer to as DDSeg. Our proposed method learns tissue segmentation from high-quality imaging data from the Human Connectome Project (HCP), where registration of anatomical MRI to dMRI is more precise. The method is then able to predict a tissue segmentation directly from new dMRI data, including data collected with different acquisition protocols, without requiring anatomical data and inter-modality registration. We train a convolutional neural network (CNN) to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy in regions of tissue boundary. To further improve accuracy, our method adds diffusion kurtosis imaging (DKI) parameters that characterize non-Gaussian water molecule diffusion to the conventional diffusion tensor imaging parameters. The DKI parameters are calculated from the recently proposed mean-kurtosis-curve method that corrects implausible DKI parameter values and provides additional features that discriminate between tissue types. We demonstrate high tissue segmentation accuracy on HCP data, and also when applying the HCP-trained model on dMRI data from other acquisitions with lower resolution and fewer gradient directions.
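The augmented target loss and the mean-kurtosis-curve features are described in the paper itself; as a rough illustration of the boundary-emphasis idea, the sketch below up-weights the per-voxel loss at tissue boundaries. The function names, the neighborhood rule, and the weight value are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def boundary_weights(labels, w_boundary=4.0):
    """Return per-voxel weights that up-weight voxels whose 4-neighborhood
    contains a different tissue class (a simple stand-in for boundary
    emphasis; the weight of 4.0 is an arbitrary illustrative choice)."""
    pad = np.pad(labels, 1, mode="edge")
    # Stack the up/down/left/right neighbors of every voxel.
    nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                   pad[1:-1, :-2], pad[1:-1, 2:]])
    on_boundary = (nb != labels).any(axis=0)
    return np.where(on_boundary, w_boundary, 1.0)

def weighted_cross_entropy(probs, labels, weights, eps=1e-12):
    """Mean cross-entropy of predicted class probabilities (shape C x H x W)
    against integer labels (shape H x W), scaled by per-voxel weights."""
    p_true = np.take_along_axis(probs, labels[None, ...], axis=0)[0]
    return float(np.mean(weights * -np.log(p_true + eps)))
```

In a training loop, `weighted_cross_entropy` would replace a plain cross-entropy so that misclassifications near tissue interfaces cost more than those deep inside a homogeneous region.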
Sedghi A, O'Donnell LJ, Kapur T, Learned-Miller E, Mousavi P, Wells WM. Image Registration: Maximum Likelihood, Minimum Entropy and Deep Learning. Med Image Anal. 2021;69:101939.
In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. By an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form, and using coordinate ascent, or iterative model refinement. We also describe a method for feature-based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We further show that this approach can be used for maximum profile likelihood registration to obviate the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
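The entropy bound mentioned in the abstract can be motivated by a standard asymptotic argument (sketched here for intuition only; the paper's derivation handles the profile/groupwise case): for samples x_1, ..., x_N drawn from a density p, the normalized maximized log-likelihood satisfies

```latex
\frac{1}{N}\sum_{i=1}^{N}\log p_{\hat\theta}(x_i)
\;\xrightarrow[N\to\infty]{}\;
\mathbb{E}_{p}\!\left[\log p_{\hat\theta}(x)\right]
= -\Bigl(H(p) + D_{\mathrm{KL}}\bigl(p \,\|\, p_{\hat\theta}\bigr)\Bigr),
```

so maximizing the likelihood over model and registration parameters minimizes H(p) + D_KL(p || p_θ̂), which, since KL divergence is nonnegative, is an upper bound on the entropy H(p) of the data-generating distribution.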