Computation Core

William Wells, PhD
Core Lead
Bruno Madore, PhD
Project Lead

The Computation Core is leveraging recent progress in ultrasound-to-ultrasound (US-US) registration and in hybrid US-MRI technology to develop synergistic software and hardware aimed at improving surgical and interventional guidance in the presence of tissue deformation or motion, issues that complicate treatment monitoring and comparison with pre-operative images and treatment plans. Our approach to deformation problems in image-guided therapy (IGT) leverages our recent work in feature-based US-US registration, in which image content is modeled in terms of local scale-invariant image features, i.e., distinctive patterns of echogenic anatomical tissue that can be automatically extracted from images and used as the basis for registration. Our solution for motion in IGT is built upon our recent developments in hybrid US-MRI technology, which acquires MRI and ultrasound simultaneously to exploit the relative strengths of MRI (high spatial resolution and excellent soft-tissue contrast) and US (high frame rate). Much of the proposed research provides solutions to registration problems for IGT applications, such as tissue deformation fields, and we believe it is important in this context to characterize the potential uncertainties in these solutions, much as error bars accompany other estimation problems. To this end, we are developing registration-with-uncertainty algorithms that incorporate random-process models of spatial uncertainty. The technology is evaluated in the context of our testbed clinical projects, image-guided neurosurgery and abdominal cryotherapy, in the AMIGO suite, our advanced interventional suite that includes intraoperative 3T MRI, ultrasound, and PET/CT.
The hybrid US-MRI approach enables rapid updates to MRI images to accommodate, e.g., breathing motion during cryoablation procedures. In addition, US-US registration algorithms facilitate improvements in US-updated neurosurgical guidance and have potential IGT applications in our program and elsewhere, for example in prostate biopsies. To facilitate dissemination of these algorithms to the broader IGT community, we distribute software components in the open-source SlicerIGT platform. Our projects are:
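As a minimal sketch of the last step of feature-based registration: once scale-invariant features have been extracted and matched between two US volumes, a transform can be estimated from the correspondences by least squares. The example below is illustrative only (the actual pipeline also handles deformation and outlier matches); it recovers a rigid transform from matched keypoint coordinates using the standard Kabsch/Procrustes method.

```python
import numpy as np

def estimate_rigid_transform(fixed_pts, moving_pts):
    """Least-squares rigid alignment (Kabsch/Procrustes) from matched keypoints.

    fixed_pts, moving_pts: (N, 3) arrays of corresponding feature locations.
    Returns R (3x3 rotation) and t (3,) such that fixed ~= moving @ R.T + t.
    """
    fc, mc = fixed_pts.mean(axis=0), moving_pts.mean(axis=0)
    H = (moving_pts - mc).T @ (fixed_pts - fc)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T          # guard against reflection
    t = fc - R @ mc
    return R, t

# Toy check: recover a known rotation and translation from noiseless matches.
rng = np.random.default_rng(0)
moving = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
fixed = moving @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(fixed, moving)
```

In practice, robust estimation (e.g., RANSAC over the matches) replaces the direct least-squares solve when some correspondences are wrong.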

Registration algorithms for MRI and US, with emphasis on uncertainty and algorithm performance. We continue algorithm developments, based on Gaussian random fields (GRFs), aimed at characterizing uncertainty and accuracy in image registration and in tissue deformation estimation from implanted trackers. We are also developing algorithms that estimate surgical tissue deformations from our feature-based US-US registration technology. Finally, we translate the developed algorithms into AMIGO using the SlicerIGT platform by providing extensions that visualize deformed MRI based on intraoperative US, the associated registration uncertainty, and integrated laser surface scanning for neurosurgery. (Contact: William Wells)
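A minimal sketch of the GRF idea, using a 1D Gaussian-process regression as a stand-in for the full spatial model: sparse displacement observations (e.g., from implanted trackers) are interpolated, and the predictive standard deviation supplies the "error bars." The squared-exponential kernel and all parameter values below are illustrative choices, not the Core's actual model.

```python
import numpy as np

def gp_displacement_field(x_obs, d_obs, x_query, ell=20.0, sigma_f=2.0, sigma_n=0.5):
    """Gaussian random field (GP) interpolation of sparse 1D displacements.

    Returns the predictive mean and standard deviation at x_query; the
    standard deviation plays the role of an "error bar" on the estimate.
    """
    def k(a, b):  # squared-exponential covariance
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = k(x_obs, x_obs) + sigma_n**2 * np.eye(len(x_obs))  # noisy observations
    Ks = k(x_query, x_obs)
    mean = Ks @ np.linalg.solve(K, d_obs)
    var = sigma_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

x_obs = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # tracker positions (mm)
d_obs = np.array([0.0, 1.2, 2.0, 1.1, 0.2])        # observed displacements (mm)
x_query = np.array([0.0, 12.5, 50.0, 87.5, 200.0])
mean, std = gp_displacement_field(x_obs, d_obs, x_query)
# Far from any observation, the uncertainty reverts to the prior level sigma_f.
```

The same construction extends to 3D vector-valued displacement fields, where the predictive covariance quantifies spatially varying registration uncertainty.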

Technology for simultaneous US-MRI acquisition for monitoring procedures. We are developing machine-learning techniques that use high-bandwidth US data to estimate motion and deformation in MRI images. We are also further generalizing the hybrid US-MRI approach by exploiting information from 256 independent channels of a custom-built, MR-compatible 256-element 2D US transducer array provided by an industrial partner. We are developing a pre-scan calibration (“learning”) phase that employs simultaneously acquired MRI and US data. We will deploy online deformation-corrected updates of MR images as they become available from the scanner, for monitoring cryoablations. (Contact: Bruno Madore)
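A toy sketch of the calibration ("learning") phase under strong simplifying assumptions: per-frame US channel features are paired with MRI-derived motion values acquired simultaneously, a linear (ridge) predictor is fit, and a new US frame is then mapped to a motion estimate. The real system uses machine-learning models on actual 256-channel data; everything below is synthetic.

```python
import numpy as np

# Synthetic stand-in for the pre-scan calibration phase.
rng = np.random.default_rng(1)
n_frames, n_channels = 2000, 256
W_true = rng.normal(size=(n_channels, 1))                # unknown US->motion mapping
X = rng.normal(size=(n_frames, n_channels))              # US features per frame
y = X @ W_true + 0.01 * rng.normal(size=(n_frames, 1))   # MRI-derived motion labels

# Calibration: fit a ridge-regression predictor from the paired data.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

# Monitoring: a new US frame yields an immediate motion estimate, which can
# drive deformation-corrected MR updates between MRI acquisitions.
x_live = rng.normal(size=(1, n_channels))
motion_estimate = (x_live @ W).item()
```

The design point this illustrates is that inference is a single matrix-vector product, cheap enough to keep up with the US frame rate.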

Software and Documentation

3D Slicer, a comprehensive open-source platform for medical image analysis, contains several modules and functions contributed by the Computation Core. These include the source code for the paper “Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features” (Med Image Anal. 2013 Apr;17(3):271-82).


MRI acquired to guide Gynecologic Brachytherapy Catheter Placement


3D Slicer

Full Publication List in PubMed

Select Recent Publications

Alireza Sedghi, Lauren J O'Donnell, Tina Kapur, Erik Learned-Miller, Parvin Mousavi, and William M Wells. 4/2021. “Image Registration: Maximum Likelihood, Minimum Entropy and Deep Learning.” Med Image Anal, 69, Pp. 101939.
In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. By an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form, and using coordinate ascent, or iterative model refinement. We also describe a method for feature based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can be used for maximum profile likelihood registration to discharge the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
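The link between alignment and joint entropy can be illustrated numerically: the joint histogram of two images becomes most concentrated, and hence lowest in entropy, when the images are in register. The toy 2D example below (a circularly shifted copy of a synthetic image, not the paper's method) finds the shift that minimizes joint entropy.

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    """Joint entropy of two equally sized images from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Synthetic smooth "anatomy": a 2D random walk built from cumulative sums.
rng = np.random.default_rng(2)
img = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1)

# Misalign by a known 5-pixel shift, then search for the shift that
# brings the pair back into register by minimizing joint entropy.
shifted = np.roll(img, 5, axis=1)
scores = {dx: joint_entropy(img, np.roll(shifted, -dx, axis=1)) for dx in range(10)}
best = min(scores, key=scores.get)
```

At the correct shift the two images coincide, the joint histogram collapses onto its diagonal, and the joint entropy reaches its minimum.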
Alireza Mehrtash, William M Wells, Clare M Tempany, Purang Abolmaesumi, and Tina Kapur. 12/2020. “Confidence Calibration and Predictive Uncertainty Estimation for Deep Medical Image Segmentation.” IEEE Trans Med Imaging, 39, 12, Pp. 3868-78.
Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated, i.e., they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) We systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) We propose model ensembling for confidence calibration of the FCNs trained with batch normalization and Dice loss; 3) We assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
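Calibration quality of the kind studied here is commonly quantified with the expected calibration error (ECE): the gap between mean confidence and accuracy, averaged over confidence bins weighted by bin size. A small sketch with synthetic predictions (the paper's experiments use real segmentation networks):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: confidence-vs-accuracy gap per bin, weighted by bin occupancy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

rng = np.random.default_rng(3)

# Well-calibrated predictions: confidence equals the true success probability.
conf_cal = rng.uniform(0.5, 1.0, size=200_000)
correct_cal = (rng.uniform(size=conf_cal.size) < conf_cal).astype(float)
ece_calibrated = expected_calibration_error(conf_cal, correct_cal)   # near 0

# Overconfident predictions: report 0.99 but are right only 80% of the time.
conf_over = np.full(10_000, 0.99)
correct_over = (rng.uniform(size=10_000) < 0.8).astype(float)
ece_overconfident = expected_calibration_error(conf_over, correct_over)  # near 0.19
```

The overconfident case mirrors the failure mode described in the abstract: high reported confidence on both correct and erroneous predictions yields a large ECE.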
Shenyan Zong, Guofeng Shen, Chang-Sheng Mei, and Bruno Madore. 12/2020. “Improved PRF-Based MR Thermometry Using k-Space Energy Spectrum Analysis.” Magn Reson Med, 84, 6, Pp. 3325-32.
PURPOSE: Proton resonance frequency (PRF) thermometry encodes information in the phase of MRI signals. A multiplicative factor converts phase changes into temperature changes, and this factor includes the TE. However, phase variations caused by B0 and/or B1 inhomogeneities can effectively change TE in ways that vary from pixel to pixel. This work presents how spatial phase variations affect temperature maps and how to correct for corresponding errors. METHODS: A method called "k-space energy spectrum analysis" was used to map regions in the object domain to regions in the k-space domain. Focused ultrasound heating experiments were performed in tissue-mimicking gel phantoms under two scenarios: with and without proper shimming. The second scenario, with deliberately de-adjusted shimming, was meant to emulate B0 inhomogeneities in a controlled manner. The TE errors were mapped and compensated for using k-space energy spectrum analysis, and corrected results were compared with reference results. Furthermore, a volunteer was recruited to help evaluate the magnitude of the errors being corrected. RESULTS: The in vivo abdominal results showed that the TE and heating errors being corrected can readily exceed 10%. In phantom results, a linear regression between reference and corrected temperature results provided a slope of 0.971 and an R² of 0.9964. Analysis based on the Bland-Altman method provided a bias of -0.0977°C and 95% limits of agreement that were 0.75°C apart. CONCLUSION: Spatially varying TE errors, such as those caused by B0 and/or B1 inhomogeneities, can be detected and corrected using the k-space energy spectrum analysis method, for increased accuracy in proton resonance frequency thermometry.
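The PRF conversion at the heart of this correction can be written as ΔT = Δφ / (2π γ α B0 TE), so any error in the effective TE propagates directly into the temperature estimate. A numerical illustration with standard textbook constants (the KESA correction itself is not reproduced here, only the sensitivity to a biased TE):

```python
import numpy as np

gamma = 42.58e6    # proton gyromagnetic ratio, Hz/T
alpha = -0.01e-6   # PRF thermal coefficient, -0.01 ppm/degC
B0 = 3.0           # field strength, T
TE = 0.010         # nominal echo time, s

def phase_to_temperature(dphi, te=TE):
    """Convert a phase change (rad) into a temperature change (degC)."""
    return dphi / (2.0 * np.pi * gamma * alpha * B0 * te)

# Phase change corresponding to a true +10 degC heating event:
dphi = 2.0 * np.pi * gamma * alpha * B0 * TE * 10.0
dT_nominal = phase_to_temperature(dphi)              # exact: 10 degC
dT_biased = phase_to_temperature(dphi, te=1.1 * TE)  # ~10% underestimate
```

A pixel whose effective TE is 10% longer than assumed thus reports roughly a 10% temperature error, the scale of bias the abstract reports correcting.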
Fan Zhang, Thomas Noh, Parikshit Juvekar, Sarah F Frisken, Laura Rigolo, Isaiah Norton, Tina Kapur, Sonia Pujol, William Wells, Alex Yarmarkovich, Gordon Kindlmann, Demian Wassermann, Raul San Jose Estepar, Yogesh Rathi, Ron Kikinis, Hans J Johnson, Carl-Fredrik Westin, Steve Pieper, Alexandra J Golby, and Lauren J O'Donnell. 3/2020. “SlicerDMRI: Diffusion MRI and Tractography Research Software for Brain Cancer Surgery Planning and Visualization.” JCO Clin Cancer Inform, 4, Pp. 299-309.
PURPOSE: We present SlicerDMRI, an open-source software suite that enables research using diffusion magnetic resonance imaging (dMRI), the only modality that can map the white matter connections of the living human brain. SlicerDMRI enables analysis and visualization of dMRI data and is aimed at the needs of clinical research users. SlicerDMRI is built upon and deeply integrated with 3D Slicer, a National Institutes of Health-supported open-source platform for medical image informatics, image processing, and three-dimensional visualization. Integration with 3D Slicer provides many features of interest to cancer researchers, such as real-time integration with neuronavigation equipment, intraoperative imaging modalities, and multimodal data fusion. One key application of SlicerDMRI is in neurosurgery research, where brain mapping using dMRI can provide patient-specific maps of critical brain connections as well as insight into the tissue microstructure that surrounds brain tumors. PATIENTS AND METHODS: In this article, we focus on a demonstration of SlicerDMRI as an informatics tool to enable end-to-end dMRI analyses in two retrospective imaging data sets from patients with high-grade glioma. Analyses demonstrated here include conventional diffusion tensor analysis, advanced multifiber tractography, automated identification of critical fiber tracts, and integration of multimodal imagery with dMRI. RESULTS: We illustrate the ability of SlicerDMRI to perform both conventional and advanced dMRI analyses as well as to enable multimodal image analysis and visualization. We provide an overview of the clinical rationale for each analysis along with pointers to the SlicerDMRI tools used in each. CONCLUSION: SlicerDMRI provides open-source and clinician-accessible research software tools for dMRI analysis. SlicerDMRI is available for easy automated installation through the 3D Slicer Extension Manager.
Cheng-Chieh Cheng, Frank Preiswerk, and Bruno Madore. 6/2020. “Multi-pathway Multi-echo Acquisition and Neural Contrast Translation to Generate a Variety of Quantitative and Qualitative Image Contrasts.” Magn Reson Med, 83, 6, Pp. 2310-21.
PURPOSE: Clinical exams typically involve acquiring many different image contrasts to help discriminate healthy from diseased states. Ideally, 3D quantitative maps of all of the main MR parameters would be obtained for improved tissue characterization. Using data from a 7-min whole-brain multi-pathway multi-echo (MPME) scan, we aimed to synthesize several 3D quantitative maps (T1 and T2) and qualitative contrasts (MPRAGE, FLAIR, T1-weighted, T2-weighted, and proton density [PD]-weighted). The ability of MPME acquisitions to capture large amounts of information in a relatively short amount of time suggests it may help reduce the duration of neuro MR exams. METHODS: Eight healthy volunteers were imaged at 3.0T using a 3D isotropic (1.2 mm) MPME sequence. Spin-echo, MPRAGE, and FLAIR scans were performed for training and validation. MPME signals were interpreted through neural networks for predictions of different quantitative and qualitative contrasts. Predictions were compared to reference values at voxel and region-of-interest levels. RESULTS: Mean absolute errors (MAEs) for T1 and T2 maps were 216 ms and 11 ms, respectively. In ROIs containing white matter (WM) and thalamus tissues, the mean T1/T2 predicted values were 899/62 ms and 1139/58 ms, consistent with reference values of 850/66 ms and 1126/58 ms, respectively. For qualitative contrasts, signals were normalized to those of WM, and MAEs for MPRAGE, FLAIR, T1-weighted, T2-weighted, and PD-weighted contrasts were 0.14, 0.15, 0.13, 0.16, and 0.05, respectively. CONCLUSIONS: Using an MPME sequence and neural-network contrast translation, whole-brain results were obtained with a variety of quantitative and qualitative contrasts in ~6.8 min.
Sarah Frisken, Ma Luo, Parikshit Juvekar, Adomas Bunevicius, Ines Machado, Prashin Unadkat, Melina M Bertotti, Matt Toews, William M Wells, Michael I Miga, and Alexandra J Golby. 1/2020. “A Comparison of Thin-Plate Spline Deformation and Finite Element Modeling to Compensate for Brain Shift during Tumor Resection.” Int J Comput Assist Radiol Surg, 15, 1, Pp. 75-85.
PURPOSE: Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS: Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. 
RESULTS: The mean initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS: In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be due to the fact that we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements or it may be due to modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
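The thin-plate spline side of this comparison is simple enough to sketch: matched feature displacements act as interpolation constraints, and the spline extends them smoothly everywhere else with no preprocessing. The 2D, single-component toy implementation below uses the standard TPS kernel U(r) = r² log r (real use would be 3D and vector-valued).

```python
import numpy as np

def tps_fit(ctrl, disp):
    """Fit a 2D thin-plate spline that interpolates sparse displacements.

    ctrl: (N, 2) control points (e.g., matched iUS feature locations).
    disp: (N,) one displacement component observed at each control point.
    """
    n = len(ctrl)
    r2 = np.sum((ctrl[:, None, :] - ctrl[None, :, :])**2, axis=-1)
    K = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + 1e-300), 0.0)  # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), ctrl])                     # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([disp, np.zeros(3)]))
    return sol[:n], sol[n:]                                    # weights, affine

def tps_eval(ctrl, w, a, pts):
    r2 = np.sum((pts[:, None, :] - ctrl[None, :, :])**2, axis=-1)
    K = np.where(r2 > 0, 0.5 * r2 * np.log(r2 + 1e-300), 0.0)
    return K @ w + a[0] + pts @ a[1:]

# Sparse feature displacements (mm) drive the model; the spline interpolates
# them exactly and extends them smoothly in between.
ctrl = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
disp = np.array([0.0, 0.5, 0.2, 0.8, 1.5])
w, a = tps_fit(ctrl, disp)
```

Exact interpolation of the driving displacements is precisely why, as the study notes, the spline ignores tissue physics: nothing constrains its behavior away from the features, unlike the biomechanically constrained FEM.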
Christian Wachinger, Matthew Toews, Georg Langs, William Wells, and Polina Golland. 2/2020. “Keypoint Transfer for Fast Whole-Body Segmentation.” IEEE Trans Med Imaging, 39, 2, Pp. 273-82.
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable field-of-view.
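Step (ii), voting-based keypoint labeling, can be sketched as nearest-neighbor descriptor matching followed by a majority vote over the matched training labels. The descriptors and organ labels below are synthetic and deliberately clustered for illustration; they are not the paper's features.

```python
import numpy as np
from collections import Counter

# Synthetic, clustered descriptors stand in for real keypoint descriptors.
rng = np.random.default_rng(4)
train_labels = np.array(["liver"] * 10 + ["kidney"] * 10 + ["spleen"] * 10)
centers = {"liver": np.zeros(8), "kidney": np.full(8, 5.0), "spleen": np.full(8, -5.0)}
train_desc = np.vstack([centers[l] + 0.1 * rng.normal(size=8) for l in train_labels])

def label_by_voting(test_desc, k=5):
    """Match a test keypoint to its k nearest training keypoints and vote."""
    dist = np.linalg.norm(train_desc - test_desc, axis=1)
    nearest = np.argsort(dist)[:k]
    return Counter(train_labels[nearest]).most_common(1)[0][0]

# A test keypoint resembling a "liver" training keypoint gets the liver label.
test_desc = train_desc[3] + 0.01 * rng.normal(size=8)
predicted = label_by_voting(test_desc)
```

Because labeling reduces to descriptor lookups and votes, no atlas registration or training phase is needed, which is the source of the speed-up the abstract reports.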
Jian Wang, William M Wells, Polina Golland, and Miaomiao Zhang. 12/2019. “Registration Uncertainty Quantification via Low-dimensional Characterization of Geometric Deformations.” Magn Reson Imaging, 64, Pp. 122-31.
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by much fewer parameters, which dramatically reduces the computational complexity of model inferences. To further avoid heavy computation loads introduced by random sampling algorithms, we approximate a marginal posterior by using Laplace's method at the optimal solution of log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than the state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
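Laplace's method as used here can be illustrated in one dimension: find the mode of the negative log-posterior, then approximate the posterior as a Gaussian whose variance is the inverse Hessian at the mode. The log-posterior below is a synthetic stand-in for a registration objective over a single transformation parameter, not the paper's bandlimited model.

```python
import numpy as np

# Synthetic 1D negative log-posterior over one transformation parameter theta.
def neg_log_post(theta):
    return 0.5 * (theta - 2.0)**2 / 0.3**2 + 0.1 * np.cos(5.0 * theta)

# Step 1: locate the MAP estimate (dense grid here; gradient-based in practice).
grid = np.linspace(0.0, 4.0, 40001)
theta_map = grid[np.argmin(neg_log_post(grid))]

# Step 2: Laplace approximation N(theta_map, H^-1), with the Hessian H of the
# negative log-posterior estimated by central finite differences at the mode.
h = 1e-4
H = (neg_log_post(theta_map + h) - 2.0 * neg_log_post(theta_map)
     + neg_log_post(theta_map - h)) / h**2
sigma = 1.0 / np.sqrt(H)   # approximate posterior standard deviation
```

In a low-dimensional (e.g., bandlimited) parameterization this Hessian is a small matrix, which is what makes the approach so much cheaper than sampling the full deformation posterior.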
Jie Luo, Alireza Sedghi, Karteek Popuri, Dana Cobzas, Miaomiao Zhang, Frank Preiswerk, Matthew Toews, Alexandra Golby, Masashi Sugiyama, William M Wells III, and Sarah Frisken. 10/2019. “On the Applicability of Registration Uncertainty.” In MICCAI 2019, LNCS 11765: Pp. 410-9. Shenzhen, China: Springer.
Estimating the uncertainty in (probabilistic) image registration enables, e.g., surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and misplace unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify the registration uncertainty is using summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics as well as means to exploit them. Distinctively, in this paper, we study two rarely examined topics: (1) whether those summary statistics of the transformation distribution most informatively represent the registration uncertainty; and (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainty: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional practice of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
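The distinction between Ut and Ul can be made concrete with a toy 1D example: a point deep inside a homogeneous structure keeps its label under every plausible transform, so its label uncertainty is near zero even when the transformation uncertainty is large. This construction is illustrative only, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1D label map: a single "tumor" segment occupying positions [50, 150).
labels = np.zeros(200, dtype=int)
labels[50:150] = 1

# Posterior samples of a translation: large transformation uncertainty Ut.
shifts = rng.normal(0.0, 8.0, size=1000)
Ut = shifts.std()

def label_uncertainty(x):
    """Ul at position x: entropy of the label under the sampled transforms."""
    sampled = labels[np.clip((x + shifts).astype(int), 0, 199)]
    p = sampled.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

Ul_interior = label_uncertainty(100)  # deep inside the segment: ~0 despite large Ut
Ul_boundary = label_uncertainty(50)   # at the boundary: high
```

Summarizing uncertainty by Ut alone would flag both points as equally unreliable, which is exactly the conflation of Ut and Ul the paper argues against.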