Publications

2020
Fan Zhang, Thomas Noh, Parikshit Juvekar, Sarah F Frisken, Laura Rigolo, Isaiah Norton, Tina Kapur, Sonia Pujol, William Wells, Alex Yarmarkovich, Gordon Kindlmann, Demian Wassermann, Raul San Jose Estepar, Yogesh Rathi, Ron Kikinis, Hans J Johnson, Carl-Fredrik Westin, Steve Pieper, Alexandra J Golby, and Lauren J O'Donnell. 3/2020. “SlicerDMRI: Diffusion MRI and Tractography Research Software for Brain Cancer Surgery Planning and Visualization.” JCO Clin Cancer Inform, 4, Pp. 299-309.
PURPOSE: We present SlicerDMRI, an open-source software suite that enables research using diffusion magnetic resonance imaging (dMRI), the only modality that can map the white matter connections of the living human brain. SlicerDMRI enables analysis and visualization of dMRI data and is aimed at the needs of clinical research users. SlicerDMRI is built upon and deeply integrated with 3D Slicer, a National Institutes of Health-supported open-source platform for medical image informatics, image processing, and three-dimensional visualization. Integration with 3D Slicer provides many features of interest to cancer researchers, such as real-time integration with neuronavigation equipment, intraoperative imaging modalities, and multimodal data fusion. One key application of SlicerDMRI is in neurosurgery research, where brain mapping using dMRI can provide patient-specific maps of critical brain connections as well as insight into the tissue microstructure that surrounds brain tumors. PATIENTS AND METHODS: In this article, we focus on a demonstration of SlicerDMRI as an informatics tool to enable end-to-end dMRI analyses in two retrospective imaging data sets from patients with high-grade glioma. Analyses demonstrated here include conventional diffusion tensor analysis, advanced multifiber tractography, automated identification of critical fiber tracts, and integration of multimodal imagery with dMRI. RESULTS: We illustrate the ability of SlicerDMRI to perform both conventional and advanced dMRI analyses as well as to enable multimodal image analysis and visualization. We provide an overview of the clinical rationale for each analysis along with pointers to the SlicerDMRI tools used in each. CONCLUSION: SlicerDMRI provides open-source and clinician-accessible research software tools for dMRI analysis. SlicerDMRI is available for easy automated installation through the 3D Slicer Extension Manager.
Yuanqian Gao, Kiyoshi Takagi, Takahisa Kato, Naoyuki Shono, and Nobuhiko Hata. 2/2020. “Continuum Robot With Follow-the-Leader Motion for Endoscopic Third Ventriculostomy and Tumor Biopsy.” IEEE Trans Biomed Eng, 67, 2, Pp. 379-90.
BACKGROUND: In a combined endoscopic third ventriculostomy (ETV) and endoscopic tumor biopsy (ETB) procedure, an optimal tool trajectory is mandatory to minimize trauma to surrounding cerebral tissue. OBJECTIVE: This paper presents a wire-driven multi-section robot actuated by push-pull wires. The robot is tested for its ability to attain follow-the-leader (FTL) motion to place surgical instruments through narrow passages while minimizing trauma to tissues. METHODS: A wire-driven continuum robot with six sub-sections was developed and its kinematic model was proposed to achieve FTL motion. An accuracy test to assess the robot's ability to attain FTL motion along a set of elementary curved trajectories was performed. We also used a hydrocephalus ventricular model created from human subject data to generate five ETV/ETB trajectories and conducted a study assessing the accuracy of the FTL motion along these clinically desirable trajectories. RESULTS: In the test with elementary curved paths, the maximal deviation of the robot increased from 0.47 mm at a 30° turn to 1.78 mm at a 180° turn in a simple C-shaped curve. S-shaped FTL motion had smaller deviation, ranging from 0.16 to 0.18 mm. In the phantom study, the greatest tip deviation was 1.45 mm, and the greatest path deviation was 1.23 mm. CONCLUSION: We present the application of a continuum robot with FTL motion to perform a combined ETV/ETB procedure. The validation study using human subject data indicated that the accuracy of FTL motion is relatively high. The study indicated that FTL motion may be a useful tool for combined ETV and ETB.
Christian Wachinger, Matthew Toews, Georg Langs, William Wells, and Polina Golland. 2/2020. “Keypoint Transfer for Fast Whole-Body Segmentation.” IEEE Trans Med Imaging, 39, 2, Pp. 273-82.
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable field-of-view.
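The voting-based keypoint labeling step (ii) above can be sketched as a majority vote: each matched training keypoint proposes the organ label it carries, and the test keypoint takes the most frequent one. This is only an illustrative sketch with hypothetical labels, not the authors' probabilistic implementation:

```python
from collections import Counter

def vote_keypoint_label(matched_training_labels):
    """Majority vote over the organ labels of matched training keypoints.

    Returns (label, confidence), where confidence is the fraction of votes
    for the winning label. Illustrative only, not the published method.
    """
    votes = Counter(matched_training_labels)
    label, count = votes.most_common(1)[0]
    return label, count / len(matched_training_labels)

# Example: a test keypoint matched to five training keypoints
label, conf = vote_keypoint_label(["liver", "liver", "spleen", "liver", "liver"])
print(label, conf)  # liver 0.8
```

The winning label's vote fraction can then serve as a weight in the subsequent probabilistic transfer of organ segmentations.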
Joeky T Senders, Patrick Staples, Alireza Mehrtash, David J Cote, Martin JB Taphoorn, David A Reardon, William B Gormley, Timothy R Smith, Marike L Broekman, and Omar Arnaout. 2/2020. “An Online Calculator for the Prediction of Survival in Glioblastoma Patients Using Classical Statistics and Machine Learning.” Neurosurgery, 86, 2, Pp. E184-E192.
BACKGROUND: Although survival statistics in patients with glioblastoma multiforme (GBM) are well-defined at the group level, predicting individual patient survival remains challenging because of significant variation within strata. OBJECTIVE: To compare statistical and machine learning algorithms in their ability to predict survival in GBM patients and deploy the best performing model as an online survival calculator. METHODS: Patients undergoing an operation for a histopathologically confirmed GBM were extracted from the Surveillance Epidemiology and End Results (SEER) database (2005-2015) and split into a training and hold-out test set in an 80/20 ratio. Fifteen statistical and machine learning algorithms were trained based on 13 demographic, socioeconomic, clinical, and radiographic features to predict overall survival, 1-yr survival status, and compute personalized survival curves. RESULTS: In total, 20,821 patients met our inclusion criteria. The accelerated failure time model demonstrated superior performance in terms of discrimination (concordance index = 0.70), calibration, interpretability, predictive applicability, and computational efficiency compared to Cox proportional hazards regression and other machine learning algorithms. This model was deployed through a free, publicly available software interface (https://cnoc-bwh.shinyapps.io/gbmsurvivalpredictor/). CONCLUSION: The development and deployment of survival prediction tools require a multimodal assessment rather than a single metric comparison. This study provides a framework for the development of prediction tools in cancer patients, as well as an online survival calculator for patients with GBM. Future efforts should improve the interpretability, predictive applicability, and computational efficiency of existing machine learning algorithms, increase the granularity of population-based registries, and externally validate the proposed prediction tool.
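The concordance index reported in the abstract above measures how well predicted risks rank observed survival times. A minimal sketch of Harrell's C, assuming fully observed (uncensored) times for simplicity; the study's registry data include censoring, which the full definition handles via permissible-pair rules:

```python
def concordance_index(times, predicted_risk):
    """Harrell's C for fully observed survival times (no censoring).

    Counts pairs in which the subject with the shorter survival time has
    the higher predicted risk; ties in predicted risk count as 0.5.
    Illustrative sketch only, not the study's evaluation code.
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied survival times are not permissible pairs
            permissible += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if predicted_risk[shorter] > predicted_risk[longer]:
                concordant += 1.0
            elif predicted_risk[shorter] == predicted_risk[longer]:
                concordant += 0.5
    return concordant / permissible

print(concordance_index([2, 5, 9], [0.9, 0.4, 0.1]))  # 1.0 (perfect ranking)
```

A value of 0.5 corresponds to random ranking; the 0.70 reported above indicates moderate discrimination.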
Christian Herz, Kyle MacNeil, Peter A Behringer, Junichi Tokuda, Alireza Mehrtash, Parvin Mousavi, Ron Kikinis, Fiona M Fennessy, Clare M Tempany, Kemal Tuncali, and Andriy Fedorov. 2/2020. “Open Source Platform for Transperineal In-Bore MRI-Guided Targeted Prostate Biopsy.” IEEE Trans Biomed Eng, 67, 2, Pp. 565-76.
OBJECTIVE: Accurate biopsy sampling of the suspected lesions is critical for the diagnosis and clinical management of prostate cancer. Transperineal in-bore MRI-guided prostate biopsy (tpMRgBx) is a targeted biopsy technique that was shown to be safe, efficient, and accurate. Our goal was to develop an open source software platform to support evaluation, refinement, and translation of this biopsy approach. METHODS: We developed SliceTracker, a 3D Slicer extension to support tpMRgBx. We followed a modular design of the implementation to enable customization of the interface and interchange of image segmentation and registration components to assess their effect on the processing time, precision, and accuracy of the biopsy needle placement. The platform and supporting documentation were developed to enable the use of the software by an operator with minimal technical training to facilitate translation. Retrospective evaluation studied registration accuracy, the effect of the prostate segmentation approach, and re-identification time of biopsy targets. Prospective evaluation focused on the total procedure time and biopsy targeting error (BTE). RESULTS: Evaluation utilized data from 73 retrospective and ten prospective tpMRgBx cases. Mean landmark registration error for retrospective evaluation was 1.88 ± 2.63 mm, and was not sensitive to the approach used for prostate gland segmentation. Prospectively, we observed a target re-identification time of 4.60 ± 2.40 min and a BTE of 2.40 ± 0.98 mm. CONCLUSION: SliceTracker is a modular and extensible open-source platform for supporting image processing aspects of the tpMRgBx procedure. It has been successfully utilized to support clinical research procedures at our site.
Haoyin Zhou and Jayender Jagadeesan. 2/2020. “Real-Time Dense Reconstruction of Tissue Surface From Stereo Optical Video.” IEEE Trans Med Imaging, 39, 2, Pp. 400-12.
We propose an approach to reconstruct a dense three-dimensional (3D) model of the tissue surface from stereo optical videos in real-time. The basic idea is to first extract 3D information from video frames by using stereo matching, and then to mosaic the reconstructed 3D models. To handle the common low-texture regions on tissue surfaces, we propose effective post-processing steps for the local stereo matching method to enlarge the radius of constraint, which include outlier removal, hole filling, and smoothing. Since the tissue models obtained by stereo matching are limited to the field of view of the imaging modality, we propose a model mosaicking method by using a novel feature-based simultaneous localization and mapping (SLAM) method to align the models. Low-texture regions and the varying illumination condition may lead to a large percentage of feature matching outliers. To solve this problem, we propose several algorithms to improve the robustness of the SLAM, which mainly include 1) a histogram voting-based method to roughly select possible inliers from the feature matching results; 2) a novel 1-point RANSAC-based perspective-n-point (PnP) algorithm, called DynamicR1PPnP, to track the camera motion; and 3) a GPU-based iterative closest points (ICP) and bundle adjustment (BA) method to refine the camera motion estimation results. Experimental results on ex vivo and in vivo data showed that the reconstructed 3D models have high-resolution texture with an accuracy error of less than 2 mm. Most algorithms are highly parallelized for GPU computation, and the average runtime for processing one key frame is 76.3 ms on stereo images with 960×540 resolution.
Guoqiang Xie, Fan Zhang, Laura Leung, Michael A Mooney, Lorenz Epprecht, Isaiah Norton, Yogesh Rathi, Ron Kikinis, Ossama Al-Mefty, Nikos Makris, Alexandra J Golby, and Lauren J O'Donnell. 1/2020. “Anatomical Assessment of Trigeminal Nerve Tractography Using Diffusion MRI: A Comparison of Acquisition B-Values and Single- and Multi-Fiber Tracking Strategies.” Neuroimage Clin, 25, Pp. 102160.
BACKGROUND: The trigeminal nerve (TGN) is the largest cranial nerve and can be involved in multiple inflammatory, compressive, ischemic or other pathologies. Currently, imaging-based approaches to identify the TGN mostly rely on T2-weighted magnetic resonance imaging (MRI), which provides localization of the cisternal portion of the TGN where the contrast between nerve and cerebrospinal fluid (CSF) is high enough to allow differentiation. The course of the TGN within the brainstem as well as anterior to the cisternal portion, however, is more difficult to display on traditional imaging sequences. An advanced imaging technique, diffusion MRI (dMRI), enables tracking of the trajectory of TGN fibers and has the potential to visualize anatomical regions of the TGN not seen on T2-weighted imaging. This may allow a more comprehensive assessment of the nerve in the context of pathology. To date, most work in TGN tracking has used clinical dMRI acquisitions with a b-value of 1000 s/mm² and conventional diffusion tensor MRI (DTI) tractography methods. Though higher b-value acquisitions and multi-tensor tractography methods are known to be beneficial for tracking brain white matter fiber tracts, there have been no studies conducted to evaluate the performance of these advanced approaches on nerve tracking of the TGN, in particular on tracking different anatomical regions of the TGN. OBJECTIVE: We compare TGN tracking performance using dMRI data with different b-values, in combination with both single- and multi-tensor tractography methods. Our goal is to assess the advantages and limitations of these different strategies for identifying the anatomical regions of the TGN. METHODS: We proposed seven anatomical rating criteria including true and false positive structures, and we performed an expert rating study of over 1000 TGN visualizations, as follows. We tracked the TGN using high-quality dMRI data from 100 healthy adult subjects from the Human Connectome Project (HCP).
TGN tracking performance was compared across dMRI acquisitions with b = 1000 s/mm², b = 2000 s/mm² and b = 3000 s/mm², using single-tensor (1T) and two-tensor (2T) unscented Kalman filter (UKF) tractography. This resulted in a total of six tracking strategies. The TGN was identified using an anatomical region-of-interest (ROI) selection approach. First, in a subset of the dataset we identified ROIs that provided good TGN tracking performance across all tracking strategies. Using these ROIs, the TGN was then tracked in all subjects using the six tracking strategies. An expert rater (GX) visually assessed and scored each TGN based on seven anatomical judgment criteria. These criteria included the presence of multiple expected anatomical segments of the TGN (true positive structures), specifically branch-like structures, cisternal portion, mesencephalic trigeminal tract, and spinal cord tract of the TGN. False positive criteria included the presence of any fibers entering the temporal lobe, the inferior cerebellar peduncle, or the middle cerebellar peduncle. Expert rating scores were analyzed to compare TGN tracking performance across the six tracking strategies. Intra- and inter-rater validation was performed to assess the reliability of the expert TGN rating result. RESULTS: The TGN was selected using two anatomical ROIs (Meckel's Cave and cisternal portion of the TGN). The two-tensor tractography method had significantly better performance on identifying true positive structures, while generating more false positive streamlines in comparison to the single-tensor tractography method. TGN tracking performance was significantly different across the three b-values for almost all structures studied. Tracking performance was reported in terms of the percentage of subjects achieving each anatomical rating criterion.
Tracking of the cisternal portion and branching structure of the TGN was generally successful, with the highest performance of over 98% using two-tensor tractography and b = 1000 or b = 2000. However, tracking the smaller mesencephalic and spinal cord tracts of the TGN was quite challenging (highest performance of 37.5% and 57.07%, using two-tensor tractography with b = 1000 and b = 2000, respectively). False positive connections to the temporal lobe (over 38% of subjects for all strategies) and cerebellar peduncles (100% of subjects for all strategies) were prevalent. High joint probability of agreement was obtained in the inter-rater (on average 83%) and intra-rater validation (on average 90%), showing a highly reliable expert rating result. CONCLUSIONS: Overall, the results of the study suggest that researchers and clinicians may benefit from tailoring their acquisition and tracking methodology to the specific anatomical portion of the TGN that is of the greatest interest. For example, tracking of branching structures and TGN-T2 overlap can be best achieved with a two-tensor model and an acquisition using b = 1000 or b = 2000. In general, b = 1000 and b = 2000 acquisitions provided the best-rated tracking results. Further research is needed to improve both sensitivity and specificity of the depiction of the TGN anatomy using dMRI.
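The joint probability of agreement used for the intra- and inter-rater validation above is simply the fraction of rated items on which two sets of scores coincide. A minimal sketch with hypothetical rating scores:

```python
def joint_agreement(ratings_a, ratings_b):
    """Joint probability of agreement between two raters:
    the fraction of items that received identical scores."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical scores for five rated TGN visualizations
print(joint_agreement([1, 1, 0, 2, 1], [1, 0, 0, 2, 1]))  # 0.8
```

Unlike chance-corrected measures such as Cohen's kappa, this statistic does not account for agreement expected by chance.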
Sarah Frisken, Ma Luo, Parikshit Juvekar, Adomas Bunevicius, Ines Machado, Prashin Unadkat, Melina M Bertotti, Matt Toews, William M Wells, Michael I Miga, and Alexandra J Golby. 1/2020. “A Comparison of Thin-Plate Spline Deformation and Finite Element Modeling to Compensate for Brain Shift during Tumor Resection.” Int J Comput Assist Radiol Surg, 15, 1, Pp. 75-85.
PURPOSE: Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS: Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. 
RESULTS: The initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS: In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be because we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements, or because of modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
Adomas Bunevicius, Katharina Schregel, Ralph Sinkus, Alexandra Golby, and Samuel Patz. 1/2020. “REVIEW: MR Elastography of Brain Tumors.” Neuroimage Clin, 25, Pp. 102109.
MR elastography allows non-invasive quantification of the shear modulus of tissue, i.e., tissue stiffness and viscosity, information that offers the potential to guide presurgical planning for brain tumor resection. Here, we review brain tumor MRE studies with particular attention to clinical applications. Studies that investigated MRE in patients with intracranial tumors, both malignant and benign as well as primary and metastatic, were queried from the Pubmed/Medline database in August 2018. Reported tumor and normal appearing white matter stiffness values were extracted and compared as a function of tumor histopathological diagnosis and MRE vibration frequencies. Because different studies used different elastography hardware, pulse sequences, reconstruction inversion algorithms, and different symmetry assumptions about the mechanical properties of tissue, effort was directed to ensure that similar quantities were used when making inter-study comparisons. In addition, because different methodologies and processing pipelines will necessarily bias the results, when pooling data from different studies, whenever possible, tumor values were compared with the same subject's contralateral normal appearing white matter to minimize any study-dependent bias. The literature search yielded 10 studies with a total of 184 primary and metastatic brain tumor patients. The group mean tumor stiffness, as measured with MRE, correlated with intra-operatively assessed stiffness of meningiomas and pituitary adenomas. Pooled data analysis showed significant overlap between shear modulus values across brain tumor types. When adjusting for the same patient normal appearing white matter shear modulus values, meningiomas were the stiffest tumor type. MRE is increasingly being examined for potential in brain tumor imaging and might have value for surgical planning.
However, significant overlap of shear modulus values between a number of different tumor types limits applicability of MRE for diagnostic purposes. Thus, further rigorous studies are needed to determine specific clinical applications of MRE for surgical planning, disease monitoring and molecular stratification of brain tumors.
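For intuition on the quantities compared in this review: in a purely elastic, isotropic medium the shear modulus follows from tissue density and shear wave speed (vibration frequency × wavelength). Actual MRE inversions recover a complex-valued modulus capturing both stiffness and viscosity, so this textbook relation is only a sketch with illustrative numbers:

```python
def shear_modulus(density_kg_m3, frequency_hz, wavelength_m):
    """Elastic shear modulus mu = rho * (f * lambda)^2, in Pa.

    Valid only for a purely elastic, isotropic medium; real MRE
    reconstruction recovers a complex shear modulus G' + iG''.
    """
    speed = frequency_hz * wavelength_m  # shear wave speed in m/s
    return density_kg_m3 * speed ** 2

# 60 Hz vibration, 2.5 cm shear wavelength, brain-like density ~1000 kg/m^3
print(shear_modulus(1000.0, 60.0, 0.025) / 1000.0)  # 2.25 kPa
```

The resulting value of a few kPa is in the range typically reported for brain tissue, which is why small inter-study methodological differences can shift tumor-versus-tissue comparisons substantially.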
Shun Yao, Einat Liebenthal, Parikshit Juvekar, Adomas Bunevicius, Matthew Vera, Laura Rigolo, Alexandra J Golby, and Yanmei Tie. 1/2020. “Sex Effect on Presurgical Language Mapping in Patients With a Brain Tumor.” Front Neurosci, 14, Pp. 4.
Differences between males and females in brain development and in the organization and hemispheric lateralization of brain functions have been described, including in language. Sex differences in language organization may have important implications for language mapping performed to assess, and minimize neurosurgical risk to, language function. This study examined the effect of sex on the activation and functional connectivity of the brain, measured with presurgical functional magnetic resonance imaging (fMRI) language mapping in patients with a brain tumor. We carried out a retrospective analysis of data from neurosurgical patients treated at our institution who met the criteria of pathological diagnosis (malignant brain tumor), tumor location (left hemisphere), and fMRI paradigms [sentence completion (SC); antonym generation (AG); and resting-state fMRI (rs-fMRI)]. Forty-seven patients (22 females, mean age = 56.0 years) were included in the study. Across the SC and AG tasks, females relative to males showed greater activation in limited areas, including the left inferior frontal gyrus classically associated with language. In contrast, males relative to females showed greater activation in extended areas beyond the classic language network, including the supplementary motor area (SMA) and precentral gyrus. The rs-fMRI functional connectivity of the left SMA in the females was stronger with inferior temporal pole (TP) areas, and in the males with several midline areas. The findings are overall consistent with theories of greater reliance on specialized language areas in females relative to males, and generalized brain areas in males relative to females, for language function. Importantly, the findings suggest that sex could affect fMRI language mapping. Thus, considering sex as a variable in presurgical language mapping merits further investigation.
2019
Fan Zhang, Nico Hoffmann, Suheyla Cetin Karayumak, Yogesh Rathi, Alexandra J Golby, and Lauren J O'Donnell. 10/2019. “Deep White Matter Analysis: Fast, Consistent Tractography Segmentation Across Populations and dMRI Acquisitions.” Med Image Comput Comput Assist Interv, 11766, Pp. 599-608.
We present a deep learning tractography segmentation method that allows fast and consistent white matter fiber tract identification across healthy and disease populations and across multiple diffusion MRI (dMRI) acquisitions. We create a large-scale training tractography dataset of 1 million labeled fiber samples (54 anatomical tracts are included). To discriminate between fibers from different tracts, we propose a novel 2D multi-channel feature descriptor (FiberMap) that encodes spatial coordinates of points along each fiber. We learn a CNN tract classification model based on FiberMap and obtain a high tract classification accuracy of 90.99%. The method is evaluated on a test dataset of 374 dMRI scans from three independently acquired populations across health conditions (healthy control, neuropsychiatric disorders, and brain tumor patients). We perform comparisons with two state-of-the-art white matter tract segmentation methods. Experimental results show that our method obtains a highly consistent segmentation result, where over 99% of the fiber tracts are successfully detected across all subjects under study, most importantly, including patients with space occupying brain tumors. The proposed method leverages deep learning techniques and provides a much faster and more efficient tool for large data analysis than methods using traditional machine learning techniques.
Pelin Aksit Ciris, Jr-yuan George Chiou, Daniel I Glazer, Tzu-Cheng Chao, Clare M Tempany-Afdhal, Bruno Madore, and Stephan E. Maier. 2019. “Accelerated Segmented Diffusion-Weighted Prostate Imaging for Higher Resolution, Higher Geometric Fidelity, and Multi-b Perfusion Estimation.” Invest Radiol, 54, 4, Pp. 238-46.
PURPOSE: The aim of this study was to improve the geometric fidelity and spatial resolution of multi-b diffusion-weighted magnetic resonance imaging of the prostate. MATERIALS AND METHODS: An accelerated segmented diffusion imaging sequence was developed and evaluated in 25 patients undergoing multiparametric magnetic resonance imaging examinations of the prostate. A reduced field of view was acquired using an endorectal coil. The number of sampled diffusion weightings, or b-factors, was increased to allow estimation of tissue perfusion based on the intravoxel incoherent motion (IVIM) model. Apparent diffusion coefficients measured with the proposed segmented method were compared with those obtained with conventional single-shot echo-planar imaging (EPI). RESULTS: Compared with single-shot EPI, the segmented method resulted in faster acquisition with 2-fold improvement in spatial resolution and a greater than 3-fold improvement in geometric fidelity. Apparent diffusion coefficient values measured with the novel sequence demonstrated excellent agreement with those obtained from the conventional scan (R = 0.91 for bmax = 500 s/mm² and R = 0.89 for bmax = 1400 s/mm²). The IVIM perfusion fraction was 4.0% ± 2.7% for normal peripheral zone, 6.6% ± 3.6% for normal transition zone, and 4.4% ± 2.9% for suspected tumor lesions. CONCLUSIONS: The proposed accelerated segmented prostate diffusion imaging sequence achieved improvements in both spatial resolution and geometric fidelity, along with concurrent quantification of IVIM perfusion.
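The IVIM model referenced above describes the diffusion-weighted signal as a bi-exponential mixture of a fast pseudo-diffusion (perfusion) pool and a slow tissue pool. A minimal sketch with synthetic parameter values and a simple two-point segmented estimate of the perfusion fraction; this is illustrative only, not the authors' fitting pipeline:

```python
import math

def ivim_signal(b, f, d_star, d, s0=1.0):
    """IVIM bi-exponential model: S(b) = S0*(f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return s0 * (f * math.exp(-b * d_star) + (1.0 - f) * math.exp(-b * d))

def perfusion_fraction(b1, s1, b2, s2, s0):
    """Segmented estimate of f: fit a mono-exponential to two high-b points
    (where the fast pseudo-diffusion pool has decayed), extrapolate to b = 0,
    and take the relative gap to the measured S(0)."""
    d = math.log(s1 / s2) / (b2 - b1)  # tissue diffusivity
    s0_tissue = s1 * math.exp(b1 * d)  # tissue-only signal extrapolated to b = 0
    return 1.0 - s0_tissue / s0

# Synthetic voxel: f = 5%, D* = 0.01 mm^2/s, D = 0.001 mm^2/s (b in s/mm^2)
s0 = ivim_signal(0, 0.05, 0.01, 0.001)
s500 = ivim_signal(500, 0.05, 0.01, 0.001)
s1400 = ivim_signal(1400, 0.05, 0.01, 0.001)
f_hat = perfusion_fraction(500, s500, 1400, s1400, s0)
print(abs(f_hat - 0.05) < 0.005)  # True: recovers the simulated fraction
```

The segmented approach works because at b ≳ 200 s/mm² the pseudo-diffusion term has essentially vanished, so the high-b decay reflects tissue diffusion alone.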
Lauren J O'Donnell, Alessandro Daducci, Demian Wassermann, and Christophe Lenglet. 2019. “Advances in Computational and Statistical Diffusion MRI.” NMR Biomed., 32, 4, Pp. e3805.
Computational methods are crucial for the analysis of diffusion magnetic resonance imaging (MRI) of the brain. Computational diffusion MRI can provide rich information at many size scales, including local microstructure measures such as diffusion anisotropies or apparent axon diameters, whole-brain connectivity information that describes the brain's wiring diagram and population-based studies in health and disease. Many of the diffusion MRI analyses performed today were not possible five, ten or twenty years ago, due to the requirements for large amounts of computer memory or processor time. In addition, mathematical frameworks had to be developed or adapted from other fields to create new ways to analyze diffusion MRI data. The purpose of this review is to highlight recent computational and statistical advances in diffusion MRI and to put these advances into context by comparison with the more traditional computational methods that are in popular clinical and scientific use. We aim to provide a high-level overview of interest to diffusion MRI researchers, with a more in-depth treatment to illustrate selected computational advances.
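As one concrete example of the "local microstructure measures such as diffusion anisotropies" mentioned in this review, fractional anisotropy (FA) is computed from the eigenvalues of the fitted diffusion tensor; a minimal sketch with illustrative eigenvalues:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * sqrt(sum((li - mean)^2)) / sqrt(sum(li^2)).
    Ranges from 0 (isotropic) to 1 (fully anisotropic)."""
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

print(round(fractional_anisotropy(1.0, 1.0, 1.0), 3))  # 0.0 (isotropic)
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 3))  # 0.799 (anisotropic)
```

High FA in white matter reflects the preferential diffusion of water along axon bundles, which is what makes tractography possible.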
Jie Luo, Alireza Sedghi, Karteek Popuri, Dana Cobzas, Miaomiao Zhang, Frank Preiswerk, Matthew Toews, Alexandra Golby, Masashi Sugiyama, William M Wells III, and Sarah Frisken. 2019. “On the Applicability of Registration Uncertainty.” In MICCAI 2019, LNCS 11765: Pp. 410-9. Shenzhen, China: Springer.
Estimating the uncertainty in (probabilistic) image registration enables, e.g., surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and misplace unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify the registration uncertainty is using summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics as well as means to exploit them. Distinctively, in this paper, we study two rarely examined topics: (1) whether those summary statistics of the transformation distribution most informatively represent the registration uncertainty; (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainty: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional way of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
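The distinction between transformation uncertainty Ut and label uncertainty Ul drawn in this abstract can be illustrated with a toy 1-D Monte Carlo experiment (hypothetical numbers, not from the paper): a registered point can have a clearly nonzero parameter spread while its label remains almost certain, so Ut alone can overstate Ul:

```python
import random
import statistics

# Toy 1-D registration: a voxel lands at position t, where t is drawn from
# the (uncertain) translation distribution. Label map: x < 1.0 -> "tumor".
random.seed(0)
samples = [random.gauss(0.0, 0.4) for _ in range(10000)]

# Transformation uncertainty Ut: spread of the translation parameter.
Ut = statistics.stdev(samples)

# The label depends on how often the point crosses the boundary at x = 1.0;
# here the "tumor" label is nearly certain, i.e., label uncertainty is low.
p_tumor = sum(t < 1.0 for t in samples) / len(samples)

print(round(Ut, 2))    # parameter spread close to 0.4
print(p_tumor > 0.95)  # True: label nearly certain despite nonzero Ut
```

Moving the label boundary close to the distribution mean would instead make the label highly uncertain for the same Ut, which is why a parameter-spread summary statistic alone cannot quantify label uncertainty.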
Wenya Linda Bi, Ahmed Hosny, Matthew B Schabath, Maryellen L Giger, Nicolai J Birkbak, Alireza Mehrtash, Tavis Allison, Omar Arnaout, Christopher Abbosh, Ian F Dunn, Raymond H Mak, Rulla M Tamimi, Clare M Tempany, Charles Swanton, Udo Hoffmann, Lawrence H Schwartz, Robert J Gillies, Raymond Y Huang, and Hugo JWL Aerts. 2019. “Artificial Intelligence in Cancer Imaging: Clinical Challenges and Applications.” CA Cancer J Clin, 69, 2, Pp. 127-57.Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Walid M Abdelmoula, Michael S Regan, Begona GC Lopez, Elizabeth C Randall, Sean Lawler, Ann C Mladek, Michal O Nowicki, Bianca M Marin, Jeffrey N Agar, Kristin R Swanson, Tina Kapur, Jann N Sarkaria, William Wells, and Nathalie YR Agar. 2019. “Automatic 3D Nonlinear Registration of Mass Spectrometry Imaging and Magnetic Resonance Imaging Data.” Anal Chem, 91, 9, Pp. 6206-16.Abstract
Multimodal integration between mass spectrometry imaging (MSI) and radiology-established modalities such as magnetic resonance imaging (MRI) would allow the investigations of key questions in complex biological systems such as the central nervous system. Such integration would provide complementary multiscale data to bridge the gap between molecular and anatomical phenotypes, potentially revealing new insights into molecular mechanisms underlying anatomical pathologies presented on MRI. Automatic coregistration between 3D MSI/MRI is a computationally challenging process due to dimensional complexity, MSI data sparsity, lack of direct spatial-correspondences, and nonlinear tissue deformation. Here, we present a new computational approach based on stochastic neighbor embedding to nonlinearly align 3D MSI to MRI data, identify and reconstruct biologically relevant molecular patterns in 3D, and fuse the MSI datacube to the MRI space. We demonstrate our method using multimodal high-spectral resolution matrix-assisted laser desorption ionization (MALDI) 9.4 T MSI and 7 T in vivo MRI data, acquired from a patient-derived, xenograft mouse brain model of glioblastoma following administration of the EGFR inhibitor drug erlotinib. Results show the distribution of some identified molecular ions of the EGFR inhibitor erlotinib, a phosphatidylcholine lipid, and cholesterol, which were reconstructed in 3D and mapped to the MRI space. The registration quality was evaluated on two normal mouse brains using the Dice coefficient for the regions of brainstem, hippocampus, and cortex. The method is generic and can therefore be applied to hyperspectral images from different mass spectrometers and integrated with other established in vivo imaging modalities such as computed tomography (CT) and positron emission tomography (PET).
J Nitsch, J Klein, P Dammann, K Wrede, O Gembruch, JH Moltz, H Meine, U Sure, R. Kikinis, and D Miller. 2019. “Automatic and Efficient MRI-US Segmentations for Improving Intraoperative Image Fusion in Image-guided Neurosurgery.” Neuroimage Clin, 22, Pp. 101766.Abstract
Knowledge of the exact tumor location and of the structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain, based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces navigation accuracy based on preMRI. Intraoperative ultrasound (iUS) is therefore used as real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures, and their segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were performed with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases the accuracy, robustness, and speed of the overall registration process compared to purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean target registration error (TRE) from 16.9 mm to 3.8 mm using our intensity-based registration, and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, this is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s, and 12.0 s when segmentations were included.
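The two figures of merit quoted above can be sketched in a few lines (a minimal illustration with hypothetical function names; the paper's actual evaluation operates on full volumetric masks and expert landmark sets):

```python
import math

def dice(a, b):
    """Dice overlap between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def mean_tre(moving_landmarks, fixed_landmarks):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding landmark pairs placed by an expert."""
    dists = [math.dist(p, q) for p, q in zip(moving_landmarks, fixed_landmarks)]
    return sum(dists) / len(dists)
```

A Dice coefficient of 0.74, as reported for the falx/tentorium segmentations, means the overlapping volume equals 74% of the average of the two mask volumes.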
Alireza Mehrtash, Mohsen Ghafoorian, Guillaume Pernelle, Alireza Ziaei, Friso G Heslinga, Kemal Tuncali, Andriy Fedorov, Ron Kikinis, Clare M Tempany, William M Wells, Purang Abolmaesumi, and Tina Kapur. 2019. “Automatic Needle Segmentation and Localization in MRI with 3D Convolutional Neural Networks: Application to MRI-targeted Prostate Biopsy.” IEEE Trans Med Imaging., 38, 4, Pp. 1026-36.Abstract
Image guidance improves tissue sampling during biopsy by allowing the physician to visualize the tip and trajectory of the biopsy needle relative to the target in MRI, CT, ultrasound, or other relevant imagery. This paper reports a system for fast automatic needle tip and trajectory localization and visualization in MRI that has been developed and tested in the context of an active clinical research program in prostate biopsy. To the best of our knowledge, this is the first reported system for this clinical application, and also the first reported system that leverages deep neural networks for segmentation and localization of needles in MRI across biomedical applications. Needle tip and trajectory were annotated on 583 T2-weighted intra-procedural MRI scans acquired after needle insertion for 71 patients who underwent a transperineal MRI-targeted biopsy procedure at our institution. The images were divided into two independent training-validation and test sets at the patient level. A deep 3-dimensional fully convolutional neural network model was developed, trained, and deployed on these samples. The accuracy of the proposed method, as tested on previously unseen data, was 2.80 mm on average in needle tip detection and 0.98° in needle trajectory angle. An observer study was designed in which independent annotations by a second observer, blinded to the original observer, were compared to the output of the proposed method. The resultant error was comparable to the measured inter-observer concordance, reinforcing the clinical acceptability of the proposed method. The proposed system has the potential for deployment in clinical routine.
Karol Miller, Grand R Joldes, George Bourantas, Simon K Warfield, Damon E Hyde, Ron Kikinis, and Adam Wittek. 2019. “Biomechanical Modeling and Computer Simulation of the Brain during Neurosurgery.” Int J Numer Method Biomed Eng, Pp. e3250.Abstract
Computational biomechanics of the brain for neurosurgery is an emerging area of research recently gaining in importance and practical applications. This review paper presents the contributions of the Intelligent Systems for Medicine Laboratory and its collaborators to this field, discussing the modeling approaches adopted and the methods developed for obtaining the numerical solutions. We adopt a physics-based modeling approach and describe the brain deformation in mechanical terms (such as displacements, strains, and stresses), which can be computed using a biomechanical model, by solving a continuum mechanics problem. We present our modeling approaches related to geometry creation, boundary conditions, loading, and material properties. From the point of view of solution methods, we advocate the use of fully nonlinear modeling approaches, capable of capturing very large deformations and nonlinear material behavior. We discuss finite element and meshless domain discretization, the use of the total Lagrangian formulation of continuum mechanics, and explicit time integration for solving both time-accurate and steady-state problems. We present the methods developed for handling contacts and for warping 3D medical images using the results of our simulations. We present two examples to showcase these methods: brain shift estimation for image registration and brain deformation computation for neuronavigation in epilepsy treatment.
G Fan, H Liu, Z. Wu, Y. Li, C Feng, D Wang, J Luo, WM Wells, and S He. 2019. “Deep Learning-Based Automatic Segmentation of Lumbosacral Nerves on CT for Spinal Intervention: A Translational Study.” AJNR Am J Neuroradiol, 40, 6, Pp. 1074-81.Abstract
BACKGROUND AND PURPOSE: 3D reconstruction of a targeted area ("safe" triangle and Kambin triangle) may benefit the viability assessment of transforaminal epidural steroid injection, especially at the L5/S1 level. However, manual segmentation of lumbosacral nerves for 3D reconstruction is time-consuming. The aim of this study was to investigate the feasibility of deep learning-based segmentation of lumbosacral nerves on CT and the reconstruction of the safe triangle and Kambin triangle. MATERIALS AND METHODS: A total of 50 cases of spinal CT were manually labeled for lumbosacral nerves and bones using Slicer 4.8. The ratio of training/validation/testing was 32:8:10. A 3D U-Net was adopted to build the model SPINECT for automatic segmentations of lumbosacral structures. The Dice score, pixel accuracy, and Intersection over Union were computed to assess the segmentation performance of SPINECT. The areas of Kambin and safe triangles were measured to validate the 3D reconstruction. RESULTS: The results revealed successful segmentation of lumbosacral bone and nerve on CT. The average pixel accuracy for bone was 0.940, and for nerve, 0.918. The average Intersection over Union for bone was 0.897 and for nerve, 0.827. The Dice score for bone was 0.945, and for nerve, it was 0.905. There were no significant differences in the quantified Kambin triangle or safe triangle between manually segmented images and automatically segmented images (P > .05). CONCLUSIONS: Deep learning-based automatic segmentation of lumbosacral structures (nerves and bone) on routine CT is feasible, and SPINECT-based 3D reconstruction of safe and Kambin triangles is also validated.
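Dice and Intersection over Union (IoU) measure the same overlap on different scales and are related by Dice = 2·IoU / (1 + IoU). A quick check (illustrative code, not from the paper) shows the reported nerve scores are internally consistent: an IoU of 0.827 corresponds to a Dice of about 0.905.

```python
def dice_from_iou(iou):
    """Convert Intersection over Union to the equivalent Dice score."""
    return 2 * iou / (1 + iou)

def iou_from_dice(dice):
    """Inverse conversion: IoU = Dice / (2 - Dice)."""
    return dice / (2 - dice)

# The abstract's nerve metrics agree with each other:
# IoU 0.827 -> Dice ~0.905, and Dice 0.905 -> IoU ~0.826.
```

The same holds for the bone scores: an IoU of 0.897 converts to a Dice of about 0.946, matching the reported 0.945 up to rounding.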
