Computation Research Publications

2021
Sedghi A, O'Donnell LJ, Kapur T, Learned-Miller E, Mousavi P, Wells WM. Image Registration: Maximum Likelihood, Minimum Entropy and Deep Learning. Med Image Anal. 2021;69:101939.
In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. By an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form and using coordinate ascent, or iterative model refinement. We also describe a method for feature-based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We further show that this approach can be used for maximum profile likelihood registration, removing the need for well-registered training data, by using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
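For readers unfamiliar with entropy-based alignment, the sketch below is a minimal Python illustration of the joint-entropy cost that this family of methods is shown to bound, evaluated over candidate shifts of a toy image pair. It is not the authors' implementation; the image sizes, histogram bin count, and shift range are arbitrary.
```python
# Minimal sketch (not the paper's code): joint-entropy cost for pairwise registration.
import numpy as np

def joint_entropy(fixed, moving, bins=32):
    """Estimate the joint entropy (in nats) of two aligned image arrays."""
    hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p = hist / hist.sum()              # joint probability estimate
    p = p[p > 0]                       # avoid log(0)
    return -np.sum(p * np.log(p))

# Toy usage: the shift that re-aligns the pair should minimize the cost.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(64, 64))
moving = np.roll(fixed, shift=3, axis=1) + 0.05 * rng.normal(size=(64, 64))
costs = {s: joint_entropy(fixed, np.roll(moving, -s, axis=1)) for s in range(6)}
best_shift = min(costs, key=costs.get)  # expected to be 3
```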
2020
Mehrtash A, Wells WM, Tempany CM, Abolmaesumi P, Kapur T. Confidence Calibration and Predictive Uncertainty Estimation for Deep Medical Image Segmentation. IEEE Trans Med Imaging. 2020;39(12):3868-78.
Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated, i.e., they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) we systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) we propose model ensembling for confidence calibration of FCNs trained with batch normalization and Dice loss; 3) we assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
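As a concrete illustration of the calibration ideas above, the sketch below averages the softmax outputs of an ensemble and measures calibration with the standard expected calibration error (ECE). It is a generic sketch, not the paper's evaluation protocol; array shapes and the bin count are assumptions.
```python
# Minimal sketch (illustrative, not the paper's code): ensemble averaging and ECE.
import numpy as np

def ensemble_probs(member_probs):
    """member_probs: (M, N, C) softmax outputs of M models -> (N, C) mean probabilities."""
    return np.mean(member_probs, axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: (N, C) predicted probabilities; labels: (N,) integer class labels."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin's |accuracy - confidence| gap by its occupancy
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```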
Kikinis R, Wells WM. Detection of Brain Metastases with Deep Learning Single-Shot Detector Algorithms. Radiology. 2020;295(2):416-7.
Zong S, Shen G, Mei C-S, Madore B. Improved PRF-Based MR Thermometry Using k-Space Energy Spectrum Analysis. Magn Reson Med. 2020;84(6):3325-32.
PURPOSE: Proton resonance frequency (PRF) thermometry encodes information in the phase of MRI signals. A multiplicative factor converts phase changes into temperature changes, and this factor includes the TE. However, phase variations caused by B0 and/or B1 inhomogeneities can effectively change TE in ways that vary from pixel to pixel. This work presents how spatial phase variations affect temperature maps and how to correct for the corresponding errors. METHODS: A method called "k-space energy spectrum analysis" was used to map regions in the object domain to regions in the k-space domain. Focused ultrasound heating experiments were performed in tissue-mimicking gel phantoms under two scenarios: with and without proper shimming. The second scenario, with deliberately de-adjusted shimming, was meant to emulate B0 inhomogeneities in a controlled manner. The TE errors were mapped and compensated for using k-space energy spectrum analysis, and corrected results were compared with reference results. Furthermore, a volunteer was recruited to help evaluate the magnitude of the errors being corrected. RESULTS: The in vivo abdominal results showed that the TE and heating errors being corrected can readily exceed 10%. In phantom results, a linear regression between reference and corrected temperature results provided a slope of 0.971 and R of 0.9964. Analysis based on the Bland-Altman method provided a bias of -0.0977°C and 95% limits of agreement that were 0.75°C apart. CONCLUSION: Spatially varying TE errors, such as those caused by B0 and/or B1 inhomogeneities, can be detected and corrected using the k-space energy spectrum analysis method, for increased accuracy in proton resonance frequency thermometry.
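The sketch below illustrates the standard PRF conversion from phase change to temperature change, the relation in which TE appears; it is a generic illustration, not the paper's correction method, and the field strength, TE, and PRF coefficient are assumed typical values.
```python
# Minimal sketch (standard PRF relation, not the paper's correction code).
import numpy as np

GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio, rad/s/T
ALPHA = -0.01e-6              # PRF change coefficient, approx. -0.01 ppm/degC

def phase_to_temperature_change(delta_phase, b0=3.0, te=0.012):
    """delta_phase in radians, b0 in tesla, te in seconds -> temperature change in degC."""
    return delta_phase / (GAMMA * ALPHA * b0 * te)

# Example: a -0.05 rad phase change at 3 T with TE = 12 ms is roughly +0.5 degC.
print(phase_to_temperature_change(-0.05))
```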
Zhang F, Noh T, Juvekar P, Frisken SF, Rigolo L, Norton I, Kapur T, Pujol S, Wells W, Yarmarkovich A, et al. SlicerDMRI: Diffusion MRI and Tractography Research Software for Brain Cancer Surgery Planning and Visualization. JCO Clin Cancer Inform. 2020;4:299-309.
PURPOSE: We present SlicerDMRI, an open-source software suite that enables research using diffusion magnetic resonance imaging (dMRI), the only modality that can map the white matter connections of the living human brain. SlicerDMRI enables analysis and visualization of dMRI data and is aimed at the needs of clinical research users. SlicerDMRI is built upon and deeply integrated with 3D Slicer, a National Institutes of Health-supported open-source platform for medical image informatics, image processing, and three-dimensional visualization. Integration with 3D Slicer provides many features of interest to cancer researchers, such as real-time integration with neuronavigation equipment, intraoperative imaging modalities, and multimodal data fusion. One key application of SlicerDMRI is in neurosurgery research, where brain mapping using dMRI can provide patient-specific maps of critical brain connections as well as insight into the tissue microstructure that surrounds brain tumors. PATIENTS AND METHODS: In this article, we focus on a demonstration of SlicerDMRI as an informatics tool to enable end-to-end dMRI analyses in two retrospective imaging data sets from patients with high-grade glioma. Analyses demonstrated here include conventional diffusion tensor analysis, advanced multifiber tractography, automated identification of critical fiber tracts, and integration of multimodal imagery with dMRI. RESULTS: We illustrate the ability of SlicerDMRI to perform both conventional and advanced dMRI analyses as well as to enable multimodal image analysis and visualization. We provide an overview of the clinical rationale for each analysis along with pointers to the SlicerDMRI tools used in each. CONCLUSION: SlicerDMRI provides open-source and clinician-accessible research software tools for dMRI analysis. SlicerDMRI is available for easy automated installation through the 3D Slicer Extension Manager.
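As a small, generic illustration of the "conventional diffusion tensor analysis" step mentioned above, a diffusion tensor can be fitted by linear least squares on log-signals. This is not SlicerDMRI code; the gradient table and b-value below are toy values.
```python
# Minimal sketch (illustrative, not SlicerDMRI code): linear least-squares DTI fit.
import numpy as np

def fit_tensor(signals, s0, bvals, bvecs):
    """Return the 3x3 diffusion tensor from signals S = S0 * exp(-b g^T D g)."""
    g = np.asarray(bvecs, dtype=float)
    # Design matrix for the 6 unique tensor elements (xx, yy, zz, xy, xz, yz).
    B = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ]) * np.asarray(bvals)[:, None]
    y = -np.log(np.asarray(signals) / s0)
    d = np.linalg.lstsq(B, y, rcond=None)[0]
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# Toy usage with 6 gradient directions and one b-value (values are illustrative).
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [0.707, 0.707, 0], [0.707, 0, 0.707], [0, 0.707, 0.707]])
bvals = np.full(6, 1000.0)                   # s/mm^2
D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])   # white-matter-like tensor
signals = np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
print(fit_tensor(signals, 1.0, bvals, bvecs))
```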
Cheng C-C, Preiswerk F, Madore B. Multi-pathway Multi-echo Acquisition and Neural Contrast Translation to Generate a Variety of Quantitative and Qualitative Image Contrasts. Magn Reson Med. 2020;83(6):2310-21.
PURPOSE: Clinical exams typically involve acquiring many different image contrasts to help discriminate healthy from diseased states. Ideally, 3D quantitative maps of all of the main MR parameters would be obtained for improved tissue characterization. Using data from a 7-min whole-brain multi-pathway multi-echo (MPME) scan, we aimed to synthesize several 3D quantitative maps (T1 and T2) and qualitative contrasts (MPRAGE, FLAIR, T1-weighted, T2-weighted, and proton density [PD]-weighted). The ability of MPME acquisitions to capture large amounts of information in a relatively short amount of time suggests it may help reduce the duration of neuro MR exams. METHODS: Eight healthy volunteers were imaged at 3.0T using a 3D isotropic (1.2 mm) MPME sequence. Spin-echo, MPRAGE, and FLAIR scans were performed for training and validation. MPME signals were interpreted through neural networks for predictions of different quantitative and qualitative contrasts. Predictions were compared to reference values at voxel and region-of-interest levels. RESULTS: Mean absolute errors (MAEs) for T1 and T2 maps were 216 ms and 11 ms, respectively. In ROIs containing white matter (WM) and thalamus tissues, the mean T1/T2 predicted values were 899/62 ms and 1139/58 ms, consistent with reference values of 850/66 ms and 1126/58 ms, respectively. For qualitative contrasts, signals were normalized to those of WM, and MAEs for MPRAGE, FLAIR, T1-weighted, T2-weighted, and PD-weighted contrasts were 0.14, 0.15, 0.13, 0.16, and 0.05, respectively. CONCLUSIONS: Using an MPME sequence and neural-network contrast translation, whole-brain results were obtained with a variety of quantitative and qualitative contrasts in ~6.8 min.
Frisken S, Luo M, Juvekar P, Bunevicius A, Machado I, Unadkat P, Bertotti MM, Toews M, Wells WM, Miga MI, et al. A Comparison of Thin-Plate Spline Deformation and Finite Element Modeling to Compensate for Brain Shift during Tumor Resection. Int J Comput Assist Radiol Surg. 2020;15(1):75-85.
PURPOSE: Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain, while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS: Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. RESULTS: The initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS: In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be due to the fact that we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements, or it may be due to modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
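The spline side of this comparison can be conveyed generically as scattered-data interpolation of feature displacements. The snippet below is an assumed illustration using SciPy's thin-plate-spline interpolator with made-up coordinates, not either group's implementation.
```python
# Minimal sketch (assumption: the spline model is of this general form).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
feature_pts = rng.uniform(0, 100, size=(24, 3))      # matched iUS feature locations (mm)
displacements = rng.normal(0, 2.5, size=(24, 3))     # measured shift at each feature (mm)

# Thin-plate-spline interpolation of the 3D displacement field.
tps = RBFInterpolator(feature_pts, displacements, kernel='thin_plate_spline')

query = rng.uniform(0, 100, size=(5, 3))              # e.g., landmark locations
predicted_shift = tps(query)                          # (5, 3) displacement estimates
```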
Wachinger C, Toews M, Langs G, Wells W, Golland P. Keypoint Transfer for Fast Whole-Body Segmentation. IEEE Trans Med Imaging. 2020;39(2):273-82.
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with highly variable field-of-view.
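The matching and voting steps can be illustrated with a minimal sketch; random descriptors stand in for real 3D keypoints, and the neighbor count and label set are arbitrary. This is not the authors' implementation.
```python
# Minimal sketch (illustrative only): keypoint matching followed by voting-based labeling.
import numpy as np
from collections import Counter

def transfer_labels(test_desc, train_desc, train_labels, k=5):
    """Assign each test keypoint the organ label voted by its k nearest training keypoints."""
    labels = []
    for d in test_desc:
        dist = np.linalg.norm(train_desc - d, axis=1)   # descriptor distances
        nearest = np.argsort(dist)[:k]                  # k best matches
        labels.append(Counter(train_labels[nearest]).most_common(1)[0][0])
    return np.array(labels)

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(200, 64))
train_labels = rng.integers(0, 4, size=200)             # 4 toy organ labels
test_desc = rng.normal(size=(10, 64))
print(transfer_labels(test_desc, train_desc, train_labels))
```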
2019
Wang J, Wells WM, Golland P, Zhang M. Registration Uncertainty Quantification via Low-dimensional Characterization of Geometric Deformations. Magn Reson Imaging. 2019;64:122-31.
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by far fewer parameters, which dramatically reduces the computational complexity of model inference. To further avoid the heavy computational load introduced by random sampling algorithms, we approximate the marginal posterior using Laplace's method at the optimum of the log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
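A generic Laplace approximation of the kind referenced above can be sketched as follows; a toy objective stands in for the bandlimited registration posterior, and none of the paper's model details are reproduced.
```python
# Minimal sketch (generic Laplace approximation, not the paper's model): Gaussian
# posterior approximation centered at the MAP estimate, with covariance given by
# the inverse Hessian of the negative log-posterior.
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    # Toy stand-in for a registration log-posterior over low-dimensional coefficients.
    return 0.5 * np.sum(theta**2) + 0.1 * np.sum(np.cos(theta))

def numerical_hessian(f, x, eps=1e-4):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

theta_map = minimize(neg_log_posterior, x0=np.zeros(4)).x            # MAP estimate
cov = np.linalg.inv(numerical_hessian(neg_log_posterior, theta_map))  # Laplace covariance
```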
Luo J, Sedghi A, Popuri K, Cobzas D, Zhang M, Preiswerk F, Toews M, Golby A, Sugiyama M, Wells WM III, et al. On the Applicability of Registration Uncertainty, in MICCAI 2019. Vol LNCS 11765. Shenzhen, China: Springer; 2019:410-9.
Estimating the uncertainty in (probabilistic) image registration enables, e.g., surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and misplace unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify the registration uncertainty is through summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics as well as means to exploit them. Distinctively, in this paper, we study two rarely examined topics: (1) whether those summary statistics of the transformation distribution most informatively represent the registration uncertainty; (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainty: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional way of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, et al. Deformable MRI-Ultrasound Registration using Correlation-based Attribute Matching for Brain Shift Correction: Accuracy and Generality in Multi-site Data. Neuroimage. 2019;202:116094.
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (US) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy US. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. To improve accuracy of registration, we use high-dimensional texture attributes instead of image intensities and propose to replace the standard difference-based attribute matching with correlation-based attribute matching. We also present a strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images. We optimize key parameters across independent MR-iUS brain tumor datasets acquired at three different institutions, with a total of 43 tumor patients and 758 corresponding landmarks to validate the registration algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, our algorithm was able to reduce landmark errors prior to registration in three data sets (5.37 ± 4.27, 4.18 ± 1.97 and 6.18 ± 3.38 mm, respectively) to a consistently low level (2.28 ± 0.71, 2.08 ± 0.37 and 2.24 ± 0.78 mm, respectively). Our algorithm is compared to 15 other algorithms that have been previously tested on MR-iUS registration and it is competitive with the state-of-the-art on multiple datasets. We show that our algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved while sticking to a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that stick to fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). We further characterized landmark errors according to brain regions and tumor types, a topic so far missing in the literature. We found that landmark errors were higher in high-grade than low-grade glioma patients, and higher in tumor regions than in other brain regions.
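The correlation-based attribute matching at the heart of this method can be conveyed with a minimal sketch of a Pearson correlation between attribute vectors; this is a generic illustration, not the authors' implementation, and the vector lengths are arbitrary.
```python
# Minimal sketch (illustrative): correlation-based similarity between attribute vectors.
import numpy as np

def attribute_correlation(a, b):
    """Pearson correlation between two high-dimensional attribute vectors."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
a = rng.normal(size=48)                    # e.g., texture attributes of an MR patch
b = 0.8 * a + 0.2 * rng.normal(size=48)    # e.g., attributes of a candidate iUS patch
print(round(attribute_correlation(a, b), 2))
```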
Zaffino P, Pernelle G, Mastmeyer A, Mehrtash A, Zhang H, Kikinis R, Kapur T, Spadea MF. Fully Automatic Catheter Segmentation in MRI with 3D Convolutional Neural Networks: Application to MRI-guided Gynecologic Brachytherapy. Phys Med Biol. 2019;64(16):165008.
External-beam radiotherapy followed by High Dose Rate (HDR) brachytherapy is the standard-of-care for treating gynecologic cancers. The enhanced soft-tissue contrast provided by Magnetic Resonance Imaging (MRI) makes it a valuable imaging modality for diagnosing and treating these cancers. However, in contrast to Computed Tomography (CT) imaging, the appearance of the brachytherapy catheters, through which radiation sources are inserted to reach the cancerous tissue later on, is often variable across images. This paper reports, for the first time, a new deep-learning-based method for fully automatic segmentation of multiple closely spaced brachytherapy catheters in intraoperative MRI. 
The data represent 50 gynecologic cancer patients treated by MRI-guided HDR brachytherapy. For each patient, a single intraoperative MRI was used. In total, 826 catheters in the images were manually segmented by an expert radiation physicist who is also a trained radiation oncologist. The number of catheters per patient ranged between 10 and 35. A deep 3-dimensional Convolutional Neural Network (CNN) model was developed and trained. To make the learning process more robust, the network was trained 5 times, each time using a different combination of patients shown to the network. Finally, each test case was processed by the 5 networks, and the final segmentation was generated by voting on the 5 candidate segmentations. Four-fold validation was performed, and all patients were segmented.
An average distance error of 2.0±3.4 mm was achieved. The false-positive and false-negative catheter rates were 6.7% and 1.5%, respectively. The average Dice score was 0.60±0.17. The algorithm is available in the open-source software platform 3D Slicer, allowing for wide-scale testing and research discussion. In conclusion, to the best of our knowledge, fully automatic segmentation of multiple closely spaced catheters from intraoperative MR images was achieved for the first time in gynecological brachytherapy.
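The per-voxel voting used to fuse the five candidate segmentations can be sketched generically; the mask shapes below are toy values, and this is not the published pipeline.
```python
# Minimal sketch (illustrative): fusing candidate segmentations by per-voxel majority vote.
import numpy as np

def majority_vote(candidates):
    """candidates: (M, D, H, W) binary masks -> fused binary mask."""
    candidates = np.asarray(candidates)
    needed = candidates.shape[0] // 2 + 1           # strict majority
    return (candidates.sum(axis=0) >= needed).astype(np.uint8)

rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(5, 8, 16, 16))     # five toy candidate masks
fused = majority_vote(masks)                        # voxels with at least 3 votes
```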
Abdelmoula WM, Regan MS, Lopez BGC, Randall EC, Lawler S, Mladek AC, Nowicki MO, Marin BM, Agar JN, Swanson KR, et al. Automatic 3D Nonlinear Registration of Mass Spectrometry Imaging and Magnetic Resonance Imaging Data. Anal Chem. 2019;91(9):6206-16.
Multimodal integration between mass spectrometry imaging (MSI) and radiology-established modalities such as magnetic resonance imaging (MRI) would allow the investigation of key questions in complex biological systems such as the central nervous system. Such integration would provide complementary multiscale data to bridge the gap between molecular and anatomical phenotypes, potentially revealing new insights into molecular mechanisms underlying anatomical pathologies presented on MRI. Automatic coregistration between 3D MSI/MRI is a computationally challenging process due to dimensional complexity, MSI data sparsity, lack of direct spatial correspondences, and nonlinear tissue deformation. Here, we present a new computational approach based on stochastic neighbor embedding to nonlinearly align 3D MSI to MRI data, identify and reconstruct biologically relevant molecular patterns in 3D, and fuse the MSI datacube to the MRI space. We demonstrate our method using multimodal high-spectral resolution matrix-assisted laser desorption ionization (MALDI) 9.4 T MSI and 7 T in vivo MRI data, acquired from a patient-derived xenograft mouse brain model of glioblastoma following administration of the EGFR inhibitor drug erlotinib. Results show the distribution of some identified molecular ions of the EGFR inhibitor erlotinib, a phosphatidylcholine lipid, and cholesterol, which were reconstructed in 3D and mapped to the MRI space. The registration quality was evaluated on two normal mouse brains using the Dice coefficient for the regions of brainstem, hippocampus, and cortex. The method is generic and can therefore be applied to hyperspectral images from different mass spectrometers and integrated with other established in vivo imaging modalities such as computed tomography (CT) and positron emission tomography (PET).
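The Dice coefficient used above to evaluate registration quality is a standard overlap measure; a minimal sketch with toy masks follows (not the paper's evaluation code).
```python
# Minimal sketch (standard definition): Dice overlap between two binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), dtype=bool); b[10:26, 10:26] = True
print(round(dice(a, b), 3))   # overlap of two shifted squares
```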
Frisken S, Luo M, Machado I, Unadkat P, Juvekar P, Bunevicius A, Toews M, Wells WM, Miga MI, Golby AJ. Preliminary Results Comparing Thin Plate Splines with Finite Element Methods for Modeling Brain Deformation during Neurosurgery using Intraoperative Ultrasound. Proc SPIE Int Soc Opt Eng. 2019;10951:1095120.
Brain shift compensation attempts to model the deformation of the brain that occurs during the surgical removal of brain tumors, to enable mapping of presurgical image data into patient coordinates during surgery and thus improve the accuracy and utility of neuro-navigation. We present preliminary results from clinical tumor resections that compare two methods for modeling brain deformation: a simple thin plate spline method that interpolates displacements, and a more complex finite element method (FEM) that models physical and geometric constraints of the brain and its material properties. Both methods are driven by the same set of displacements at locations surrounding the tumor. These displacements were derived from sets of corresponding matched features that were automatically detected using the SIFT-Rank algorithm. The deformation accuracy was tested using a set of manually identified landmarks. The FEM method requires significantly more preprocessing than the spline method, but both methods can be used to model deformations in the operating room in reasonable time frames. Our preliminary results indicate that the FEM deformation model significantly outperforms the spline-based approach for predicting the deformation of manual landmarks. While both methods compensate for brain shift, this work suggests that models that incorporate biophysics and geometric constraints may be more accurate.
Kocev B, Hahn HK, Linsen L, Wells WM, Kikinis R. Uncertainty-aware Asynchronous Scattered Motion Interpolation using Gaussian Process Regression. Comput Med Imaging Graph. 2019;72:1-12.
We address the problem of interpolating randomly non-uniformly spatiotemporally scattered uncertain motion measurements, which arises in the context of soft tissue motion estimation. Soft tissue motion estimation is of great interest in the field of image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To perform the interpolation of the motion measurements in an uncertainty-aware, optimal, unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function and show through an extensive evaluation that it outperforms the state-of-the-art GP models that have been deployed previously for similar tasks. The employment of GP regression enables the quantification of uncertainty in the interpolation result, which makes it possible to convey to the surgeon or intervention specialist how much uncertainty is present in the registered navigation information that informs their decisions.
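A generic GP interpolation of scattered motion samples can be sketched with an off-the-shelf library; the paper's non-constant-mean prior and specific covariance model are not reproduced here, and the coordinates and kernel settings below are assumptions.
```python
# Minimal sketch (generic GP regression, not the paper's model): interpolating
# scattered motion samples with an uncertainty estimate at each query point.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 50, size=(40, 4))                     # (x, y, z, t) sample locations
y = np.sin(X[:, 0] / 10) + 0.1 * rng.normal(size=40)      # one motion component

kernel = RBF(length_scale=[10.0, 10.0, 10.0, 5.0]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

Xq = rng.uniform(0, 50, size=(5, 4))
mean, std = gp.predict(Xq, return_std=True)               # interpolated motion and its uncertainty
```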
Ciris PA, Chiou J-yuan G, Glazer DI, Chao T-C, Tempany-Afdhal CM, Madore B, Maier SE. Accelerated Segmented Diffusion-Weighted Prostate Imaging for Higher Resolution, Higher Geometric Fidelity, and Multi-b Perfusion Estimation. Invest Radiol. 2019;54(4):238-46.
PURPOSE: The aim of this study was to improve the geometric fidelity and spatial resolution of multi-b diffusion-weighted magnetic resonance imaging of the prostate. MATERIALS AND METHODS: An accelerated segmented diffusion imaging sequence was developed and evaluated in 25 patients undergoing multiparametric magnetic resonance imaging examinations of the prostate. A reduced field of view was acquired using an endorectal coil. The number of sampled diffusion weightings, or b-factors, was increased to allow estimation of tissue perfusion based on the intravoxel incoherent motion (IVIM) model. Apparent diffusion coefficients measured with the proposed segmented method were compared with those obtained with conventional single-shot echo-planar imaging (EPI). RESULTS: Compared with single-shot EPI, the segmented method resulted in faster acquisition with 2-fold improvement in spatial resolution and a greater than 3-fold improvement in geometric fidelity. Apparent diffusion coefficient values measured with the novel sequence demonstrated excellent agreement with those obtained from the conventional scan (R = 0.91 for bmax = 500 s/mm² and R = 0.89 for bmax = 1400 s/mm²). The IVIM perfusion fraction was 4.0% ± 2.7% for normal peripheral zone, 6.6% ± 3.6% for normal transition zone, and 4.4% ± 2.9% for suspected tumor lesions. CONCLUSIONS: The proposed accelerated segmented prostate diffusion imaging sequence achieved improvements in both spatial resolution and geometric fidelity, along with concurrent quantification of IVIM perfusion.
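The IVIM perfusion fraction reported above comes from fitting a biexponential signal model over the sampled b-values; the sketch below is a generic fit with assumed, plausible parameter values, not the authors' fitting code.
```python
# Minimal sketch (standard IVIM signal model): estimating the perfusion fraction f.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 50, 100, 200, 400, 800, 1400], dtype=float)        # s/mm^2
true = dict(s0=1.0, f=0.05, d_star=10e-3, d=1.2e-3)                  # plausible toy values
signal = ivim(b, **true) + 0.005 * np.random.default_rng(0).normal(size=b.size)

p0 = [1.0, 0.1, 5e-3, 1e-3]
bounds = ([0, 0, 1e-3, 1e-4], [2, 0.5, 1e-1, 3e-3])
(s0, f, d_star, d), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f"perfusion fraction f = {100 * f:.1f}%")
```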
Yengul SS, Barbone PE, Madore B. Dispersion in Tissue-Mimicking Gels Measured with Shear Wave Elastography and Torsional Vibration Rheometry. Ultrasound Med Biol. 2019;45(2):586-604.
Dispersion, or the frequency dependence of mechanical parameters, is a primary confounding factor in elastography comparisons. We present a study of dispersion in tissue-mimicking gels over a wide frequency band using a combination of ultrasound shear wave elastography (SWE) and a novel torsional vibration rheometry that allows independent mechanical measurement of SWE samples. Frequency-dependent complex shear modulus was measured in homogeneous gelatin hydrogels of two different bloom strengths while controlling for confounding factors such as temperature, water content and material aging. Furthermore, both techniques measured the same physical samples, thereby eliminating possible variation caused by batch-to-batch gel variation, sample geometry differences and boundary artifacts. The wide-band measurement, from 1 to 1800 Hz, captured a 30%-50% increase in the storage modulus and a nearly linear increase with frequency of the loss modulus. The magnitude of the variation suggests that accounting for dispersion is essential for meaningful comparisons between SWE implementations.
Mehrtash A, Ghafoorian M, Pernelle G, Ziaei A, Heslinga FG, Tuncali K, Fedorov A, Kikinis R, Tempany CM, Wells WM, et al. Automatic Needle Segmentation and Localization in MRI with 3D Convolutional Neural Networks: Application to MRI-targeted Prostate Biopsy. IEEE Trans Med Imaging. 2019;38(4):1026-36.
Image guidance improves tissue sampling during biopsy by allowing the physician to visualize the tip and trajectory of the biopsy needle relative to the target in MRI, CT, ultrasound, or other relevant imagery. This paper reports a system for fast automatic needle tip and trajectory localization and visualization in MRI that has been developed and tested in the context of an active clinical research program in prostate biopsy. To the best of our knowledge, this is the first reported system for this clinical application, and also the first reported system that leverages deep neural networks for segmentation and localization of needles in MRI across biomedical applications. Needle tip and trajectory were annotated on 583 T2-weighted intra-procedural MRI scans acquired after needle insertion for 71 patients who underwent a transperineal MRI-targeted biopsy procedure at our institution. The images were divided into two independent training-validation and test sets at the patient level. A deep 3-dimensional fully convolutional neural network model was developed, trained and deployed on these samples. The accuracy of the proposed method, as tested on previously unseen data, was 2.80 mm on average for needle tip detection and 0.98° for needle trajectory angle. An observer study was designed in which independent annotations by a second observer, blinded to the original observer, were compared to the output of the proposed method. The resultant error was comparable to the measured inter-observer concordance, reinforcing the clinical acceptability of the proposed method. The proposed system has the potential for deployment in clinical routine.
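The trajectory-angle error reported above corresponds to the angle between predicted and annotated needle direction vectors; a minimal sketch with illustrative vectors follows.
```python
# Minimal sketch (standard geometry): angle between two needle direction vectors.
import numpy as np

def trajectory_angle_deg(v_pred, v_true):
    v1 = np.asarray(v_pred, float) / np.linalg.norm(v_pred)
    v2 = np.asarray(v_true, float) / np.linalg.norm(v_true)
    # abs() makes the measure insensitive to the sign convention of the direction.
    return np.degrees(np.arccos(np.clip(abs(np.dot(v1, v2)), -1.0, 1.0)))

print(trajectory_angle_deg([0.02, 0.01, 1.0], [0.0, 0.0, 1.0]))   # ~1.3 degrees
```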
Cheng C-C, Preiswerk F, Hoge WS, Kuo T-H, Madore B. Multipathway Multi-echo (MPME) Imaging: All Main MR Parameters Mapped Based on a Single 3D Scan. Magn Reson Med. 2019;81(3):1699-1713.
PURPOSE: Quantitative parameter maps, as opposed to qualitative grayscale images, may represent the future of diagnostic MRI. A new quantitative MRI method is introduced here that requires a single 3D acquisition, allowing good spatial coverage to be achieved in relatively short scan times. METHODS: A multipathway multi-echo sequence was developed, and at least 3 pathways with 2 TEs were needed to generate T1, T2, T2*, B0, and B1 maps. The method required the central k-space region to be sampled twice, with the same sequence but with 2 very different nominal flip angle settings. Consequently, scan time was only slightly longer than that of a single scan. The multipathway multi-echo data were reconstructed into parameter maps, for phantom as well as brain acquisitions, in 5 healthy volunteers at 3 T. Spatial resolution, matrix size, and FOV were 1.2 × 1.0 × 1.2 mm³, 160 × 192 × 160, and 19.2 × 19.2 × 19.2 cm³ (whole brain), acquired in 11.5 minutes with minimal acceleration. Validation was performed against T1, T2, and T2* maps calculated from gradient-echo and spin-echo data. RESULTS: In Bland-Altman plots, bias and limits of agreement for T1 and T2 results in vivo and in phantom were -2.9/±125.5 ms (T1 in vivo), -4.8/±20.8 ms (T2 in vivo), -1.5/±18.1 ms (T1 in phantom), and -5.3/±7.4 ms (T2 in phantom), for regions of interest including given brain structures or phantom compartments. Due to relatively high noise levels, the current implementation of the approach may prove more useful for region of interest-based as opposed to pixel-based interpretation. CONCLUSIONS: We proposed a novel approach to quantitatively map MR parameters based on a multipathway multi-echo acquisition.
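The Bland-Altman bias and limits of agreement used above are straightforward to compute; the sketch below uses simulated measurements purely for illustration.
```python
# Minimal sketch (standard Bland-Altman quantities): bias and 95% limits of agreement.
import numpy as np

def bland_altman(method_a, method_b):
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(0)
ref = rng.uniform(600, 1200, size=30)            # e.g., reference T1 values (ms), simulated
new = ref + rng.normal(-3, 60, size=30)          # e.g., MPME-derived T1 values (ms), simulated
bias, lo, hi = bland_altman(new, ref)
```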
2018
Luo J, Toews M, Machado I, Frisken S, Zhang M, Preiswerk F, Sedghi A, Ding H, Pieper S, Golland P, et al. A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation, in MICCAI 2018. Vol LNCS 11073. Springer; 2018:30-38.
A reliable Ultrasound (US)-to-US registration method to compensate for brain shift would substantially improve Image-Guided Neurological Surgery. Developing such a registration method is very challenging, due to factors such as the tumor resection, the complexity of brain pathology and the demand for fast computation. We propose a novel feature-driven active registration framework. Here, landmarks and their displacements are first estimated from a pair of US images using corresponding local image features. Subsequently, a Gaussian Process (GP) model is used to interpolate a dense deformation field from the sparse landmarks. Kernels of the GP are estimated by using variograms and a discrete grid search method. If necessary, the user can actively add new landmarks based on the image context and visualization of the uncertainty measure provided by the GP to further improve the result. We retrospectively demonstrate our registration framework as a robust and accurate brain shift compensation solution on clinical data.
