Computation Core

William Wells, PhD
Core Lead
Bruno Madore, PhD
Project Lead

The computation project leverages recent progress in ultrasound-to-ultrasound (US-US) registration and in hybrid US-MRI technology to develop synergistic software and hardware aimed at improving surgical and interventional guidance in the presence of tissue deformation or motion, issues that complicate treatment monitoring and comparisons to pre-operative images and treatment plans. Our approach to addressing deformation problems in image-guided therapy (IGT) builds on our recent work in feature-based US-US registration, in which image content is modeled in terms of local scale-invariant image features, i.e., distinctive patterns of echogenic anatomical tissue that can be automatically extracted from images and used as the basis for registration. Our solution for motion in IGT builds on our recent developments in hybrid US-MRI technology, which acquires MRI and ultrasound simultaneously to exploit the relative strengths of MRI (high spatial resolution and excellent soft-tissue contrast) and US (high frame rate). Much of the proposed research provides solutions to registration problems for IGT applications, such as tissue deformation fields, and we believe it is important in this context to characterize the potential uncertainties in these solutions, much as error bars are provided in other estimation problems. To this end we are developing registration-with-uncertainty algorithms that incorporate random-process models of spatial uncertainty. The technology is evaluated in the context of our testbed clinical projects, image-guided neurosurgery and abdominal cryotherapy, in the AMIGO suite, our advanced interventional suite that includes intra-operative 3T MRI, ultrasound, and PET/CT.
The hybrid US-MRI approach enables rapid updates to MRI images to accommodate, e.g., breathing motion during cryoablation procedures. In addition, US-US registration algorithms improve US-updated neurosurgical guidance and have potential IGT applications in our program and elsewhere, for example in prostate biopsies. To facilitate dissemination of these algorithms to the broader IGT community, we distribute software components in the open-source SlicerIGT platform. Our projects are:

Registration algorithms for MRI and US, with emphasis on uncertainty and algorithm performance. We continue algorithm developments, based on Gaussian Random Fields (GRFs), aimed at characterizing uncertainty and accuracy in image registration and in tissue deformation estimation from implanted trackers. We are also developing algorithms that estimate surgical tissue deformations using our feature-based US-US registration technology. Finally, we translate the developed algorithms into AMIGO using the SlicerIGT platform, providing extensions that visualize deformed MRI based on intraoperative US, the associated registration uncertainty, and integrated laser surface scanning for neurosurgery. (Contact: William Wells)
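The idea behind GRF-based registration uncertainty can be sketched in a few lines of Gaussian-process regression: sparse landmark displacements are interpolated into a dense field, and the posterior variance acts as a per-point "error bar" on the deformation. This is an illustrative toy, not the project's implementation; the function name and parameter values are ours.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query,
                   length_scale=10.0, sigma_f=1.0, sigma_n=0.1):
    """Interpolate sparse displacements under a Gaussian-process (GRF) prior.

    Returns the posterior mean displacement at each query point together
    with a per-point standard deviation, i.e. an "error bar" on the
    estimated deformation.
    """
    def k(a, b):  # squared-exponential covariance
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    K_s = k(x_query, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha                       # posterior mean displacement
    v = np.linalg.solve(L, K_s.T)
    var = sigma_f**2 - np.sum(v**2, axis=0)  # posterior variance
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy example: one displacement component measured at 4 landmarks in 2D.
x_tr = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
y_tr = np.array([1.0, 2.0, 0.5, 1.5])
mean, std = gp_interpolate(x_tr, y_tr, np.array([[5.0, 5.0]]))
```

The uncertainty grows away from the landmarks, which is exactly the behavior that lets a surgeon judge where an interpolated deformation can be trusted.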

Technology for simultaneous US-MRI acquisition for monitoring procedures. We are developing machine-learning techniques that use high-bandwidth US data to estimate motion and deformation in MRI images. We are also generalizing the hybrid US-MRI approach by exploiting information from 256 independent channels of a custom-built, MR-compatible, 256-element 2D US transducer array provided by an industrial partner. We are developing a pre-scan calibration ("learning") phase that employs simultaneously acquired MRI and US data. We will deploy on-line deformation-corrected updates of MR images as they become available from the scanner, for monitoring cryoablations. (Contact: Bruno Madore)

Software and Documentation

3D Slicer, a comprehensive open-source platform for medical image analysis, contains several modules and functions that we have contributed for Computation. These include the source code for the paper "Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features" (Med Image Anal. 2013 Apr;17(3):271-82).

Data

MRI acquired to guide Gynecologic Brachytherapy Catheter Placement

Links

3D Slicer

Full Publication List

In the NIH/NLM database and in our Abstracts Database.

Select Recent Publications

Alireza Mehrtash, William M Wells, Clare M Tempany, Purang Abolmaesumi, and Tina Kapur. 7/2020. “Confidence Calibration and Predictive Uncertainty Estimation for Deep Medical Image Segmentation.” IEEE Trans Med Imaging, PP.
Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated, i.e., they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) We systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) We propose model ensembling for confidence calibration of FCNs trained with batch normalization and Dice loss; 3) We assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
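The ensembling step can be illustrated with a minimal sketch (our own toy code, not the paper's): softmax outputs of several independently trained models are averaged in probability space, and calibration is then measured with the standard expected calibration error (ECE).

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_probabilities(member_logits):
    """Average the softmax outputs of M models; shape (M, n_voxels, n_classes).

    Averaging in probability (not logit) space is the usual ensembling
    step for confidence calibration: disagreement between members pulls
    the mean confidence down toward better-calibrated values.
    """
    return softmax(member_logits, axis=-1).mean(axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard ECE: |accuracy - confidence| weighted by bin occupancy."""
    conf = probs.max(axis=-1)
    correct = (probs.argmax(axis=-1) == labels).astype(float)
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        in_bin = (conf > lo) & (conf <= lo + 1.0 / n_bins)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

# Two toy members: one overconfident, one not; four voxels, two classes.
logits = np.array([
    [[4.0, 0.0], [0.0, 4.0], [3.0, 0.0], [0.0, 3.0]],
    [[1.0, 0.0], [0.0, 1.0], [2.0, 0.0], [0.0, 2.0]],
])
probs = ensemble_probabilities(logits)
ece = expected_calibration_error(probs, labels=np.array([0, 1, 0, 1]))
```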
Shenyan Zong, Guofeng Shen, Chang-Sheng Mei, and Bruno Madore. 6/2020. “Improved PRF-Based MR Thermometry Using k-Space Energy Spectrum Analysis.” Magn Reson Med.
PURPOSE: Proton resonance frequency (PRF) thermometry encodes information in the phase of MRI signals. A multiplicative factor converts phase changes into temperature changes, and this factor includes the TE. However, phase variations caused by B0 and/or B1 inhomogeneities can effectively change TE in ways that vary from pixel to pixel. This work presents how spatial phase variations affect temperature maps and how to correct for the corresponding errors. METHODS: A method called "k-space energy spectrum analysis" was used to map regions in the object domain to regions in the k-space domain. Focused ultrasound heating experiments were performed in tissue-mimicking gel phantoms under two scenarios: with and without proper shimming. The second scenario, with deliberately de-adjusted shimming, was meant to emulate B0 inhomogeneities in a controlled manner. The TE errors were mapped and compensated for using k-space energy spectrum analysis, and corrected results were compared with reference results. Furthermore, a volunteer was recruited to help evaluate the magnitude of the errors being corrected. RESULTS: The in vivo abdominal results showed that the TE and heating errors being corrected can readily exceed 10%. In phantom results, a linear regression between reference and corrected temperature results provided a slope of 0.971 and an R2 of 0.9964. Analysis based on the Bland-Altman method provided a bias of -0.0977°C and 95% limits of agreement that were 0.75°C apart. CONCLUSION: Spatially varying TE errors, such as those caused by B0 and/or B1 inhomogeneities, can be detected and corrected using the k-space energy spectrum analysis method, for increased accuracy in proton resonance frequency thermometry.
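The multiplicative factor mentioned in the abstract can be made concrete with a small sketch of the standard PRF conversion (constant values and function names are ours, not the paper's). Because TE sits in the denominator, a pixelwise effective-TE error of 10% propagates directly into a roughly 10% temperature error, which is the error class the k-space energy spectrum analysis method corrects.

```python
import numpy as np

GAMMA_HZ_PER_T = 42.576e6   # proton gyromagnetic ratio (Hz/T)
ALPHA_PPM_PER_C = -0.01     # PRF thermal coefficient, approx. -0.01 ppm/degC

def prf_delta_temperature(delta_phase_rad, te_s, b0_t=3.0):
    """Convert a PRF phase change (radians) into a temperature change (degC).

    deltaT = delta_phi / (2*pi * gamma * alpha * B0 * TE).
    A mis-estimated effective TE scales the result one-for-one.
    """
    alpha = ALPHA_PPM_PER_C * 1e-6
    return delta_phase_rad / (2 * np.pi * GAMMA_HZ_PER_T * alpha * b0_t * te_s)

# At 3T with TE = 10 ms, 1 degC of heating shifts the phase by about -0.08 rad.
dT = prf_delta_temperature(-0.0803, te_s=0.01, b0_t=3.0)
```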
Jian Wang, William M Wells, Polina Golland, and Miaomiao Zhang. 2019. “Registration Uncertainty Quantification via Low-dimensional Characterization of Geometric Deformations.” Magn Reson Imaging, 64, Pp. 122-31.
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by much fewer parameters, which dramatically reduces the computational complexity of model inferences. To further avoid heavy computation loads introduced by random sampling algorithms, we approximate a marginal posterior by using Laplace's method at the optimal solution of log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than the state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
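Laplace's method itself is easy to illustrate in one dimension (a toy sketch under our own naming, far simpler than the paper's bandlimited deformation space): the posterior is replaced by a Gaussian centered at its mode, with variance equal to the inverse second derivative of the negative log-posterior at that mode, avoiding any random sampling.

```python
import numpy as np

def laplace_approximation(neg_log_post, theta0, step=1e-4, iters=200, lr=0.1):
    """Approximate a 1-D posterior by a Gaussian at its mode (Laplace's method).

    Finds the mode of exp(-neg_log_post) by gradient descent with a
    numerical derivative, then sets the variance to the inverse of the
    numerical second derivative at the mode.
    """
    t = float(theta0)
    for _ in range(iters):
        g = (neg_log_post(t + step) - neg_log_post(t - step)) / (2 * step)
        t -= lr * g
    h = (neg_log_post(t + step) - 2 * neg_log_post(t)
         + neg_log_post(t - step)) / step**2
    return t, 1.0 / h  # mode (mean) and variance of the Gaussian fit

# For a Gaussian negative log-posterior the approximation is exact:
nlp = lambda x: 0.5 * (x - 2.0) ** 2 / 0.25   # mean 2.0, variance 0.25
mode, var = laplace_approximation(nlp, theta0=0.0)
```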
Cheng-Chieh Cheng, Frank Preiswerk, and Bruno Madore. 6/2020. “Multi-pathway Multi-echo Acquisition and Neural Contrast Translation to Generate a Variety of Quantitative and Qualitative Image Contrasts.” Magn Reson Med, 83, 6, Pp. 2310-21.
PURPOSE: Clinical exams typically involve acquiring many different image contrasts to help discriminate healthy from diseased states. Ideally, 3D quantitative maps of all of the main MR parameters would be obtained for improved tissue characterization. Using data from a 7-min whole-brain multi-pathway multi-echo (MPME) scan, we aimed to synthesize several 3D quantitative maps (T1 and T2) and qualitative contrasts (MPRAGE, FLAIR, T1-weighted, T2-weighted, and proton density [PD]-weighted). The ability of MPME acquisitions to capture large amounts of information in a relatively short amount of time suggests it may help reduce the duration of neuro MR exams. METHODS: Eight healthy volunteers were imaged at 3.0T using a 3D isotropic (1.2 mm) MPME sequence. Spin-echo, MPRAGE, and FLAIR scans were performed for training and validation. MPME signals were interpreted through neural networks for predictions of different quantitative and qualitative contrasts. Predictions were compared to reference values at voxel and region-of-interest levels. RESULTS: Mean absolute errors (MAEs) for T1 and T2 maps were 216 ms and 11 ms, respectively. In ROIs containing white matter (WM) and thalamus tissues, the mean predicted T1/T2 values were 899/62 ms and 1139/58 ms, consistent with reference values of 850/66 ms and 1126/58 ms, respectively. For qualitative contrasts, signals were normalized to those of WM, and MAEs for MPRAGE, FLAIR, T1-weighted, T2-weighted, and PD-weighted contrasts were 0.14, 0.15, 0.13, 0.16, and 0.05, respectively. CONCLUSIONS: Using an MPME sequence and neural-network contrast translation, whole-brain results were obtained with a variety of quantitative and qualitative contrasts in ~6.8 min.
Jie Luo, Alireza Sedghi, Karteek Popuri, Dana Cobzas, Miaomiao Zhang, Frank Preiswerk, Matthew Toews, Alexandra Golby, Masashi Sugiyama, William M Wells III, and Sarah Frisken. 2019. “On the Applicability of Registration Uncertainty.” In MICCAI 2019, LNCS 11765: Pp. 410-9. Shenzhen, China: Springer.
Estimating the uncertainty in (probabilistic) image registration enables, e.g., surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and misplace unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify registration uncertainty is via summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics as well as means to exploit them. Distinctively, in this paper we study two rarely examined questions: (1) whether those summary statistics of the transformation distribution most informatively represent the registration uncertainty; and (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainty: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional practice of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
Jie Luo, Matthew Toews, Inês Machado, Sarah Frisken, Miaomiao Zhang, Frank Preiswerk, Alireza Sedghi, Hongyi Ding, Steve Pieper, Polina Golland, Alexandra Golby, Masashi Sugiyama, and William M Wells III. 2018. “A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation.” In MICCAI 2018, LNCS 11073: Pp. 30-38. Springer.
A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a registration method is very challenging, due to factors such as the tumor resection, the complexity of brain pathology and the demand for fast computation. We propose a novel feature-driven active registration framework. Here, landmarks and their displacement are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the result. We retrospectively demonstrate our registration framework as a robust and accurate brain shift compensation solution on clinical data.
Inês Machado, Matthew Toews, Elizabeth George, Prashin Unadkat, Walid Essayed, Jie Luo, Pedro Teodoro, Herculano Carvalho, Jorge Martins, Polina Golland, Steve Pieper, Sarah Frisken, Alexandra Golby, William Wells, and Yangming Ou. 2019. “Deformable MRI-Ultrasound Registration using Correlation-based Attribute Matching for Brain Shift Correction: Accuracy and Generality in Multi-site Data.” Neuroimage, 202, Pp. 116094.
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (US) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy US. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. To improve accuracy of registration, we use high-dimensional texture attributes instead of image intensities and propose to replace the standard difference-based attribute matching with correlation-based attribute matching. We also present a strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images. We optimize key parameters across independent MR-iUS brain tumor datasets acquired at three different institutions, with a total of 43 tumor patients and 758 corresponding landmarks to validate the registration algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, our algorithm was able to reduce landmark errors prior to registration in three data sets (5.37 ± 4.27, 4.18 ± 1.97 and 6.18 ± 3.38 mm, respectively) to a consistently low level (2.28 ± 0.71, 2.08 ± 0.37 and 2.24 ± 0.78 mm, respectively). Our algorithm is compared to 15 other algorithms that have been previously tested on MR-iUS registration and it is competitive with the state-of-the-art on multiple datasets. We show that our algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved while sticking to a fixed set of parameters for multi-site data (generality). 
In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that stick to fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). We further characterized landmark errors according to brain regions and tumor types, a topic so far missing in the literature. We found that landmark errors were higher in high-grade than low-grade glioma patients, and higher in tumor regions than in other brain regions.
S Frisken, M Luo, I Machado, P Unadkat, P Juvekar, A Bunevicius, M Toews, WM Wells, MI Miga, and AJ Golby. 2019. “Preliminary Results Comparing Thin Plate Splines with Finite Element Methods for Modeling Brain Deformation during Neurosurgery using Intraoperative Ultrasound.” Proc SPIE Int Soc Opt Eng, 10951.
Brain shift compensation attempts to model the deformation of the brain which occurs during the surgical removal of brain tumors to enable mapping of presurgical image data into patient coordinates during surgery and thus improve the accuracy and utility of neuro-navigation. We present preliminary results from clinical tumor resections that compare two methods for modeling brain deformation, a simple thin plate spline method that interpolates displacements and a more complex finite element method (FEM) that models physical and geometric constraints of the brain and its material properties. Both methods are driven by the same set of displacements at locations surrounding the tumor. These displacements were derived from sets of corresponding matched features that were automatically detected using the SIFT-Rank algorithm. The deformation accuracy was tested using a set of manually identified landmarks. The FEM method requires significantly more preprocessing than the spline method but both methods can be used to model deformations in the operating room in reasonable time frames. Our preliminary results indicate that the FEM deformation model significantly out-performs the spline-based approach for predicting the deformation of manual landmarks. While both methods compensate for brain shift, this work suggests that models that incorporate biophysics and geometric constraints may be more accurate.
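A minimal version of the thin plate spline interpolant compared above can be written directly from its definition (illustrative only; the clinical pipeline is driven by SIFT-Rank feature matches, not these toy landmarks). The spline solves a small linear system for radial weights plus an affine part, and exactly interpolates the driving displacements.

```python
import numpy as np

def U(r):
    """Thin plate spline radial basis in 2D: U(r) = r^2 log r, U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r**2 * np.log(r)
    return np.where(r > 0, out, 0.0)

def tps_fit(points, values):
    """Fit a 2D thin plate spline f(x) = sum_i w_i U(|x - p_i|) + a0 + a.x.

    Solves the standard TPS system [[K, P], [P.T, 0]] [w; a] = [v; 0],
    which interpolates `values` exactly at `points`.
    """
    n = len(points)
    K = U(np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), points])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]  # radial weights w, affine coefficients a

def tps_eval(points, w, a, x):
    """Evaluate the fitted spline at query points x (shape (m, 2))."""
    r = np.linalg.norm(x[:, None, :] - points[None, :, :], axis=-1)
    return U(r) @ w + a[0] + x @ a[1:]

# Interpolate one displacement component measured at 5 landmarks.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
disp = np.array([0.0, 0.2, 0.1, 0.3, 0.15])
w, a = tps_fit(pts, disp)
```

The simplicity of this solve, versus meshing and material modeling, is why the spline needs far less preprocessing than the FEM approach, at the cost of ignoring biophysics.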
Bojan Kocev, Horst Karl Hahn, Lars Linsen, William M Wells, and Ron Kikinis. 2019. “Uncertainty-aware Asynchronous Scattered Motion Interpolation using Gaussian Process Regression.” Comput Med Imaging Graph, 72, Pp. 1-12.
We address the problem of interpolating randomly non-uniformly spatiotemporally scattered uncertain motion measurements, which arises in the context of soft tissue motion estimation. Soft tissue motion estimation is of great interest in the field of image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To perform the interpolation of the motion measurements in an uncertainty-aware optimal unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function and show through an extensive evaluation that it outperforms the state-of-the-art GP models that have been deployed previously for similar tasks. The employment of GP regression enables the quantification of uncertainty in the interpolation result, which would allow the amount of uncertainty present in the registered navigation information governing the decisions of the surgeon or intervention specialist to be conveyed.