Publications by Year: 2019

Jian Wang, William M Wells, Polina Golland, and Miaomiao Zhang. 12/2019. “Registration Uncertainty Quantification via Low-dimensional Characterization of Geometric Deformations.” Magn Reson Imaging, 64, Pp. 122-31.
This paper presents an efficient approach to quantifying image registration uncertainty based on a low-dimensional representation of geometric deformations. In contrast to previous methods, we develop a Bayesian diffeomorphic registration framework in a bandlimited space, rather than a high-dimensional image space. We show that a dense posterior distribution on deformation fields can be fully characterized by many fewer parameters, which dramatically reduces the computational complexity of model inference. To further avoid the heavy computational load introduced by random sampling algorithms, we approximate the marginal posterior using Laplace's method at the optimal solution of the log-posterior distribution. Experimental results on both 2D synthetic data and real 3D brain magnetic resonance imaging (MRI) scans demonstrate that our method is significantly faster than state-of-the-art diffeomorphic registration uncertainty quantification algorithms, while producing comparable results.
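For readers unfamiliar with Laplace's method referenced above, the general form of the approximation (standard notation, not the paper's specific parameterization) replaces the posterior over deformation parameters θ with a Gaussian centered at the mode, whose covariance is the inverse Hessian of the negative log-posterior:

```latex
% Laplace (Gaussian) approximation of the posterior at its mode \hat{\theta}:
p(\theta \mid I_s, I_t) \;\approx\; \mathcal{N}\!\left(\hat{\theta},\; H^{-1}\right),
\qquad
H \;=\; -\,\nabla_{\theta}^{2} \log p(\theta \mid I_s, I_t)\Big|_{\theta=\hat{\theta}}
```

where I_s and I_t denote the images being registered and θ the bandlimited deformation parameters.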
Haoyin Zhou, Tao Zhang, and Jayender Jagadeesan. 12/2019. “Re-weighting and 1-Point RANSAC-Based PnP Solution to Handle Outliers.” IEEE Trans Pattern Anal Mach Intell, 41, 12, Pp. 3022-33.
The ability to handle outliers is essential for performing the perspective-n-point (PnP) approach in practical applications, but conventional RANSAC+P3P or P4P methods have high time complexities. We propose a fast PnP solution named R1PPnP to handle outliers by utilizing a soft re-weighting mechanism and the 1-point RANSAC scheme. We first present a PnP algorithm, which serves as the core of R1PPnP, for solving the PnP problem in outlier-free situations. The core algorithm is an optimization process that minimizes an objective function defined with respect to a random control point. Then, to reduce the impact of outliers, we propose a reprojection error-based re-weighting method and integrate it into the core algorithm. Finally, we employ the 1-point RANSAC scheme to try different control points. Experiments with synthetic and real-world data demonstrate that R1PPnP is faster than RANSAC+P3P or P4P methods, especially when the percentage of outliers is large, while remaining accurate. Besides, comparisons with outlier-free synthetic data show that R1PPnP is among the most accurate and fastest PnP solutions, which usually serve as the final refinement step of RANSAC+P3P or P4P. Compared with REPPnP, which is the state-of-the-art PnP algorithm with an explicit outlier-handling mechanism, R1PPnP is slower but does not suffer from the limitation on the percentage of outliers that affects REPPnP.
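A minimal Python sketch of the soft re-weighting idea described above (illustrative only; the threshold and weighting rule here are assumptions, not the authors' R1PPnP implementation):

```python
import numpy as np

def soft_reweight(reproj_errors, inlier_threshold):
    """Down-weight correspondences with large reprojection error.

    Points with error <= inlier_threshold keep full weight; larger errors
    are penalized inversely, so gross outliers contribute little to the
    next pose-refinement iteration.
    """
    errors = np.asarray(reproj_errors, dtype=float)
    weights = np.ones_like(errors)
    mask = errors > inlier_threshold
    weights[mask] = inlier_threshold / errors[mask]
    return weights

# Example: four correspondences, the last one a gross outlier.
print(soft_reweight([0.5, 1.0, 2.0, 50.0], inlier_threshold=2.0))
# -> [1.   1.   1.   0.04]
```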
Inês Machado, Matthew Toews, Elizabeth George, Prashin Unadkat, Walid Essayed, Jie Luo, Pedro Teodoro, Herculano Carvalho, Jorge Martins, Polina Golland, Steve Pieper, Sarah Frisken, Alexandra Golby, William Wells, and Yangming Ou. 11/2019. “Deformable MRI-Ultrasound Registration using Correlation-based Attribute Matching for Brain Shift Correction: Accuracy and Generality in Multi-site Data.” Neuroimage, 202, Pp. 116094.
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (US) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy US. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. To improve accuracy of registration, we use high-dimensional texture attributes instead of image intensities and propose to replace the standard difference-based attribute matching with correlation-based attribute matching. We also present a strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images. We optimize key parameters across independent MR-iUS brain tumor datasets acquired at three different institutions, with a total of 43 tumor patients and 758 corresponding landmarks to validate the registration algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, our algorithm was able to reduce landmark errors prior to registration in three data sets (5.37 ± 4.27, 4.18 ± 1.97 and 6.18 ± 3.38 mm, respectively) to a consistently low level (2.28 ± 0.71, 2.08 ± 0.37 and 2.24 ± 0.78 mm, respectively). Our algorithm is compared to 15 other algorithms that have been previously tested on MR-iUS registration and it is competitive with the state-of-the-art on multiple datasets. We show that our algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved while adhering to a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that use fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). We further characterized landmark errors according to brain regions and tumor types, a topic so far missing in the literature. We found that landmark errors were higher in high-grade than low-grade glioma patients, and higher in tumor regions than in other brain regions.
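The difference between the standard difference-based matching and the correlation-based matching adopted above can be illustrated with a short sketch (hypothetical attribute vectors a and b at a candidate correspondence; not the authors' code):

```python
import numpy as np

def difference_similarity(a, b):
    """Difference-based matching: negative sum of squared differences."""
    return -float(np.sum((a - b) ** 2))

def correlation_similarity(a, b):
    """Correlation-based matching: Pearson correlation between attribute
    vectors, insensitive to a local offset or scaling between modalities."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```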
JP Guenette, N Ben-Shlomo, J Jayender, RT Seethamraju, V Kimbrell, N-A Tran, RY Huang, CJ Kim, JI Kass, CE Corrales, and TC Lee. 11/2019. “MR Imaging of the Extracranial Facial Nerve with the CISS Sequence.” AJNR Am J Neuroradiol, 40, 11, Pp. 1954-9.
BACKGROUND AND PURPOSE: MR imaging is not routinely used to image the extracranial facial nerve. The purpose of this study was to determine the extent to which this nerve can be visualized with a CISS sequence and to determine the feasibility of using that sequence for locating the nerve relative to tumor. MATERIALS AND METHODS: Thirty-two facial nerves in 16 healthy subjects and 4 facial nerves in 4 subjects with parotid gland tumors were imaged with an axial CISS sequence protocol that included 0.8-mm isotropic voxels on a 3T MR imaging system with a 64-channel head/neck coil. Four observers independently segmented the 32 healthy subject nerves. Segmentations were compared by calculating average Hausdorff distance values and Dice similarity coefficients. RESULTS: The primary bifurcation of the extracranial facial nerve into the superior temporofacial and inferior cervicofacial trunks was visible on all 128 segmentations. The mean of the average Hausdorff distances was 1.2 mm (range, 0.3-4.6 mm). Dice coefficients ranged from 0.40 to 0.82. The relative position of the facial nerve to the tumor could be inferred in all 4 tumor cases. CONCLUSIONS: The facial nerve can be seen on CISS images from the stylomastoid foramen to the temporofacial and cervicofacial trunks, proximal to the parotid plexus. Use of a CISS protocol is feasible in the clinical setting to determine the location of the facial nerve relative to tumor.
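The two agreement measures reported above can be computed as in the following sketch (hypothetical inputs; not the study's code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def average_hausdorff(points_a, points_b):
    """Symmetric average Hausdorff distance between two point sets (N x 3, M x 3)."""
    d = cdist(points_a, points_b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```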
Jie Luo, Alireza Sedghi, Karteek Popuri, Dana Cobzas, Miaomiao Zhang, Frank Preiswerk, Matthew Toews, Alexandra Golby, Masashi Sugiyama, William M Wells III, and Sarah Frisken. 10/2019. “On the Applicability of Registration Uncertainty.” In MICCAI 2019, LNCS 11765: Pp. 410-9. Shenzhen, China: Springer.
Estimating the uncertainty in (probabilistic) image registration enables, e.g., surgeons to assess the operative risk based on the trustworthiness of the registered image data. If surgeons receive inaccurately calculated registration uncertainty and misplace unwarranted confidence in the alignment solutions, severe consequences may result. For probabilistic image registration (PIR), the predominant way to quantify the registration uncertainty is using summary statistics of the distribution of transformation parameters. The majority of existing research focuses on trying out different summary statistics as well as means to exploit them. Distinctively, in this paper, we study two rarely examined topics: (1) whether those summary statistics of the transformation distribution most informatively represent the registration uncertainty; (2) whether utilizing the registration uncertainty is always beneficial. We show that there are two types of uncertainties: the transformation uncertainty, Ut, and the label uncertainty, Ul. The conventional way of using Ut to quantify Ul is inappropriate and can be misleading. Through a real-data experiment, we also share a potentially critical finding that making use of the registration uncertainty may not always be an improvement.
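A schematic illustration of the distinction between the two uncertainties (array shapes and variable names are assumptions for illustration, not the paper's code): given K transformation samples from a probabilistic registration, Ut can be summarized from the spread of the sampled displacements, while Ul is derived from the labels those transformations actually produce.

```python
import numpy as np

def transformation_uncertainty(displacements):
    """displacements: (K, X, Y, Z, 3) sampled displacement fields.
    Returns per-voxel standard deviation of displacement magnitude (Ut)."""
    magnitudes = np.linalg.norm(displacements, axis=-1)   # (K, X, Y, Z)
    return magnitudes.std(axis=0)

def label_uncertainty(warped_labels, num_labels):
    """warped_labels: (K, X, Y, Z) label maps warped by each sampled transform.
    Returns per-voxel entropy of the label distribution (Ul)."""
    probs = np.stack([(warped_labels == l).mean(axis=0) for l in range(num_labels)])
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=0)
```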
Karol Miller, Grand R Joldes, George Bourantas, Simon K Warfield, Damon E Hyde, Ron Kikinis, and Adam Wittek. 10/2019. “Biomechanical Modeling and Computer Simulation of the Brain during Neurosurgery.” Int J Numer Method Biomed Eng, 35, 10, Pp. e3250.
Computational biomechanics of the brain for neurosurgery is an emerging area of research recently gaining in importance and practical applications. This review paper presents the contributions of the Intelligent Systems for Medicine Laboratory and its collaborators to this field, discussing the modeling approaches adopted and the methods developed for obtaining the numerical solutions. We adopt a physics-based modeling approach and describe the brain deformation in mechanical terms (such as displacements, strains, and stresses), which can be computed using a biomechanical model, by solving a continuum mechanics problem. We present our modeling approaches related to geometry creation, boundary conditions, loading, and material properties. From the point of view of solution methods, we advocate the use of fully nonlinear modeling approaches, capable of capturing very large deformations and nonlinear material behavior. We discuss finite element and meshless domain discretization, the use of the total Lagrangian formulation of continuum mechanics, and explicit time integration for solving both time-accurate and steady-state problems. We present the methods developed for handling contacts and for warping 3D medical images using the results of our simulations. We present two examples to showcase these methods: brain shift estimation for image registration and brain deformation computation for neuronavigation in epilepsy treatment.
Fan Zhang, Nico Hoffmann, Suheyla Cetin Karayumak, Yogesh Rathi, Alexandra J Golby, and Lauren J O'Donnell. 10/2019. “Deep White Matter Analysis: Fast, Consistent Tractography Segmentation Across Populations and dMRI Acquisitions.” Med Image Comput Comput Assist Interv, 11766, Pp. 599-608.
We present a deep learning tractography segmentation method that allows fast and consistent white matter fiber tract identification across healthy and disease populations and across multiple diffusion MRI (dMRI) acquisitions. We create a large-scale training tractography dataset of 1 million labeled fiber samples (54 anatomical tracts are included). To discriminate between fibers from different tracts, we propose a novel 2D multi-channel feature descriptor (FiberMap) that encodes spatial coordinates of points along each fiber. We learn a CNN tract classification model based on FiberMap and obtain a high tract classification accuracy of 90.99%. The method is evaluated on a test dataset of 374 dMRI scans from three independently acquired populations across health conditions (healthy control, neuropsychiatric disorders, and brain tumor patients). We perform comparisons with two state-of-the-art white matter tract segmentation methods. Experimental results show that our method obtains a highly consistent segmentation result, where over 99% of the fiber tracts are successfully detected across all subjects under study, including, most importantly, patients with space-occupying brain tumors. The proposed method leverages deep learning techniques and provides a much faster and more efficient tool for large data analysis than methods using traditional machine learning techniques.
Luca Canalini, Jan Klein, Dorothea Miller, and Ron Kikinis. 10/2019. “Segmentation-based Registration of Ultrasound Volumes for Glioma Resection in Image-guided Neurosurgery.” Int J Comput Assist Radiol Surg, 14, 10, Pp. 1697-1713.
PURPOSE: In image-guided surgery for glioma removal, neurosurgeons usually plan the resection on images acquired before surgery and use them for guidance during the subsequent intervention. However, after the surgical procedure has begun, the preplanning images become unreliable due to the brain shift phenomenon, caused by modifications of anatomical structures and imprecisions in the neuronavigation system. To obtain an updated view of the resection cavity, a solution is to collect intraoperative data, which can be additionally acquired at different stages of the procedure in order to provide a better understanding of the resection. A spatial mapping between structures identified in subsequent acquisitions would be beneficial. We propose here a fully automated segmentation-based registration method to register ultrasound (US) volumes acquired at multiple stages of neurosurgery. METHODS: We chose to segment sulci and falx cerebri in US volumes, which remain visible during resection. To automatically segment these elements, we first trained a convolutional neural network on manually annotated structures in volumes acquired before the opening of the dura mater and then applied it to segment corresponding structures in different surgical phases. Finally, the obtained masks are used to register US volumes acquired at multiple resection stages. RESULTS: Our method reduces the mean target registration error (mTRE) between volumes acquired before the opening of the dura mater and during resection from 3.49 mm (± 1.55 mm) to 1.36 mm (± 0.61 mm). Moreover, the mTRE between volumes acquired before opening the dura mater and at the end of the resection is reduced from 3.54 mm (± 1.75 mm) to 2.05 mm (± 1.12 mm). CONCLUSION: The segmented structures proved to be good candidates for registering US volumes acquired at different neurosurgical phases. Therefore, our solution can compensate for brain shift in neurosurgical procedures involving intraoperative US data.
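The mean target registration error (mTRE) used above is simply the mean distance between corresponding landmarks after mapping; a minimal sketch (hypothetical transform function, not the paper's code):

```python
import numpy as np

def mean_target_registration_error(fixed_landmarks, moving_landmarks, transform):
    """fixed_landmarks, moving_landmarks: (N, 3) corresponding points;
    transform maps a moving-space point into fixed space."""
    mapped = np.array([transform(p) for p in moving_landmarks])
    return np.linalg.norm(mapped - fixed_landmarks, axis=1).mean()
```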
Ananya Panda, Verena C Obmann, Wei-Ching Lo, Seunghee Margevicius, Yun Jiang, Mark Schluchter, Indravadan J Patel, Dean Nakamoto, Chaitra Badve, Mark A Griswold, Irina Jaeger, Lee E Ponsky, and Vikas Gulani. 9/2019. “MR Fingerprinting and ADC Mapping for Characterization of Lesions in the Transition Zone of the Prostate Gland.” Radiology, 292, 3, Pp. 685-94.
BACKGROUND: Preliminary studies have shown that MR fingerprinting-based relaxometry combined with apparent diffusion coefficient (ADC) mapping can be used to differentiate normal peripheral zone from prostate cancer and prostatitis. The utility of relaxometry and ADC mapping for the transition zone (TZ) is unknown. PURPOSE: To evaluate the utility of MR fingerprinting combined with ADC mapping for characterizing TZ lesions. MATERIALS AND METHODS: TZ lesions that were suspicious for cancer in men who underwent MRI with T2-weighted imaging and ADC mapping (b values, 50-1400 sec/mm²), MR fingerprinting with steady-state free precession, and targeted biopsy (60 in-gantry and 15 cognitive targeting) between September 2014 and August 2018 in a single university hospital were retrospectively analyzed. Two radiologists blinded to Prostate Imaging Reporting and Data System (PI-RADS) scores and pathologic diagnosis drew regions of interest on cancer-suspicious lesions and contralateral visually normal TZs (NTZs) on MR fingerprinting and ADC maps. Linear mixed models compared two-reader means of T1, T2, and ADC. Generalized estimating equations logistic regression analysis was used to evaluate both MR fingerprinting and ADC in differentiating NTZ, cancers and noncancers, clinically significant (Gleason score ≥ 7) cancers from clinically insignificant lesions (noncancers and Gleason 6 cancers), and characterizing PI-RADS version 2 category 3 lesions. RESULTS: In 67 men (mean age, 66 years ± 8 [standard deviation]) with 75 lesions, targeted biopsy revealed 37 cancers (six PI-RADS category 3 cancers and 31 PI-RADS category 4 or 5 cancers) and 38 noncancers (31 PI-RADS category 3 lesions and seven PI-RADS category 4 or 5 lesions). The T1, T2, and ADC of NTZ (1800 msec ± 150, 65 msec ± 22, and [1.13 ± 0.19] × 10⁻³ mm²/sec, respectively) were higher than those in cancers (1450 msec ± 110, 36 msec ± 11, and [0.57 ± 0.13] × 10⁻³ mm²/sec, respectively; P < .001 for all). The T1, T2, and ADC in cancers were lower than those in noncancers (1620 msec ± 120, 47 msec ± 16, and [0.82 ± 0.13] × 10⁻³ mm²/sec, respectively; P = .001 for T1 and ADC and P = .03 for T2). The area under the receiver operating characteristic curve (AUC) for T1 plus ADC was 0.94 for separation. T1 and ADC in clinically significant cancers (1440 msec ± 140 and [0.58 ± 0.14] × 10⁻³ mm²/sec, respectively) were lower than those in clinically insignificant lesions (1580 msec ± 120 and [0.75 ± 0.17] × 10⁻³ mm²/sec, respectively; P = .001 for all). The AUC for T1 plus ADC was 0.81 for separation. Within PI-RADS category 3 lesions, T1 and ADC of cancers (1430 msec ± 220 and [0.60 ± 0.17] × 10⁻³ mm²/sec, respectively) were lower than those of noncancers (1630 msec ± 120 and [0.81 ± 0.13] × 10⁻³ mm²/sec, respectively; P = .006 for T1 and P = .004 for ADC). The AUC for T1 was 0.79 for differentiating category 3 lesions. CONCLUSION: MR fingerprinting-based relaxometry combined with apparent diffusion coefficient mapping may improve transition zone lesion characterization. © RSNA, 2019
Paolo Zaffino, Guillaume Pernelle, Andre Mastmeyer, Alireza Mehrtash, Hongtao Zhang, Ron Kikinis, Tina Kapur, and Maria Francesca Spadea. 8/2019. “Fully Automatic Catheter Segmentation in MRI with 3D Convolutional Neural Networks: Application to MRI-guided Gynecologic Brachytherapy.” Phys Med Biol, 64, 16, Pp. 165008.
External-beam radiotherapy followed by High Dose Rate (HDR) brachytherapy is the standard-of-care for treating gynecologic cancers. The enhanced soft-tissue contrast provided by Magnetic Resonance Imaging (MRI) makes it a valuable imaging modality for diagnosing and treating these cancers. However, in contrast to Computed Tomography (CT) imaging, the appearance of the brachytherapy catheters, through which radiation sources are inserted to reach the cancerous tissue later on, is often variable across images. This paper reports, for the first time, a new deep-learning-based method for fully automatic segmentation of multiple closely spaced brachytherapy catheters in intraoperative MRI. Represented in the data are 50 gynecologic cancer patients treated by MRI-guided HDR brachytherapy. For each patient, a single intraoperative MRI was used. 826 catheters in the images were manually segmented by an expert radiation physicist who is also a trained radiation oncologist. The number of catheters in a patient ranged between 10 and 35. A deep 3-dimensional Convolutional Neural Network (CNN) model was developed and trained. In order to make the learning process more robust, the network was trained 5 times, each time using a different combination of training patients. Finally, each test case was processed by the 5 networks and the final segmentation was generated by voting on the obtained 5 candidate segmentations. 4-fold validation was executed and all the patients were segmented. An average distance error of 2.0 ± 3.4 mm was achieved. False positive and false negative catheters were 6.7% and 1.5%, respectively. The average Dice score was 0.60 ± 0.17. The algorithm is available for use in the open source software platform 3D Slicer, allowing for wide-scale testing and research discussion. In conclusion, to the best of our knowledge, fully automatic segmentation of multiple closely spaced catheters from intraoperative MR images was achieved for the first time in gynecologic brachytherapy.
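The 5-network fusion step described above amounts to per-voxel voting; a minimal sketch (the voting threshold here is an assumption, not necessarily the rule used in the paper):

```python
import numpy as np

def vote_segmentation(candidate_masks, min_votes=3):
    """candidate_masks: (K, X, Y, Z) binary candidate segmentations.
    A voxel is labeled catheter if at least min_votes of the K networks agree."""
    votes = np.sum(candidate_masks, axis=0)
    return (votes >= min_votes).astype(np.uint8)
```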
Niravkumar A Patel, Gang Li, Weijian Shang, Marek Wartenberg, Tamas Heffter, Everette C Burdette, Iulian Iordachita, Junichi Tokuda, Nobuhiko Hata, Clare M Tempany, and Gregory S. Fischer. 6/2019. “System Integration and Preliminary Clinical Evaluation of a Robotic System for MRI-Guided Transperineal Prostate Biopsy.” J Med Robot Res, 4, 2.
This paper presents the development, preclinical evaluation, and preliminary clinical study of a robotic system for targeted transperineal prostate biopsy under direct interventional magnetic resonance imaging (MRI) guidance. The clinically integrated robotic system is developed based on a modular design approach, comprising a surgical navigation application, robot control software, MRI robot controller hardware, and a robotic needle placement manipulator. The system provides enabling technologies for MRI-guided procedures. It can be easily transported and set up to support the clinical workflow of interventional procedures, and the system is readily extensible and reconfigurable to other clinical applications. Preclinical evaluation of the system is performed with phantom studies in a 3 Tesla MRI scanner, rehearsing the proposed clinical workflow, and demonstrating an in-plane targeting error of 1.5 mm. The robotic system has been approved by the institutional review board (IRB) for clinical trials. A preliminary clinical study was conducted with patient consent, demonstrating targeting errors at two biopsy target sites of 4.0 mm and 3.7 mm, which is sufficient to target clinically significant tumor foci. First-in-human trials to evaluate the system's effectiveness and accuracy for MR image-guided prostate biopsy are underway.
Walid M Abdelmoula, Michael S Regan, Begona GC Lopez, Elizabeth C Randall, Sean Lawler, Ann C Mladek, Michal O Nowicki, Bianca M Marin, Jeffrey N Agar, Kristin R Swanson, Tina Kapur, Jann N Sarkaria, William Wells, and Nathalie YR Agar. 5/2019. “Automatic 3D Nonlinear Registration of Mass Spectrometry Imaging and Magnetic Resonance Imaging Data.” Anal Chem, 91, 9, Pp. 6206-16.
Multimodal integration between mass spectrometry imaging (MSI) and radiology-established modalities such as magnetic resonance imaging (MRI) would allow the investigation of key questions in complex biological systems such as the central nervous system. Such integration would provide complementary multiscale data to bridge the gap between molecular and anatomical phenotypes, potentially revealing new insights into molecular mechanisms underlying anatomical pathologies presented on MRI. Automatic coregistration between 3D MSI/MRI is a computationally challenging process due to dimensional complexity, MSI data sparsity, lack of direct spatial correspondences, and nonlinear tissue deformation. Here, we present a new computational approach based on stochastic neighbor embedding to nonlinearly align 3D MSI to MRI data, identify and reconstruct biologically relevant molecular patterns in 3D, and fuse the MSI datacube to the MRI space. We demonstrate our method using multimodal high-spectral-resolution matrix-assisted laser desorption ionization (MALDI) 9.4 T MSI and 7 T in vivo MRI data, acquired from a patient-derived xenograft mouse brain model of glioblastoma following administration of the EGFR inhibitor drug erlotinib. Results show the distribution of some identified molecular ions of the EGFR inhibitor erlotinib, a phosphatidylcholine lipid, and cholesterol, which were reconstructed in 3D and mapped to the MRI space. The registration quality was evaluated on two normal mouse brains using the Dice coefficient for the regions of brainstem, hippocampus, and cortex. The method is generic and can therefore be applied to hyperspectral images from different mass spectrometers and integrated with other established in vivo imaging modalities such as computed tomography (CT) and positron emission tomography (PET).
Elizabeth C Randall, Giorgia Zadra, Paolo Chetta, Begona GC Lopez, Sudeepa Syamala, Sankha S Basu, Jeffrey N Agar, Massimo Loda, Clare M Tempany, Fiona M Fennessy, and Nathalie YR Agar. 5/2019. “Molecular Characterization of Prostate Cancer with Associated Gleason Score using Mass Spectrometry Imaging.” Mol Cancer Res, 17, 5, Pp. 1155-65.
Diagnosis of prostate cancer is based on histological evaluation of tumor architecture using a system known as the 'Gleason score'. This diagnostic paradigm, while the standard of care, is time-consuming, shows intra-observer variability and provides no information about the altered metabolic pathways, which result in altered tissue architecture. Characterization of the molecular composition of prostate cancer and how it changes with respect to the Gleason score (GS) could enable a more objective and faster diagnosis. It may also aid in our understanding of disease onset and progression. In this work, we present mass spectrometry imaging for identification and mapping of lipids and metabolites in prostate tissue from patients with known prostate cancer with GS from 6 to 9. A gradient of changes in the intensity of various lipids was observed, which correlated with increasing GS. Interestingly, these changes were identified in both regions of high tumor cell density, and in regions of tissue that appeared histologically benign, possibly suggestive of pre-cancerous metabolomic changes. A total of 31 lipids, including several phosphatidylcholines, phosphatidic acids, phosphatidylserines, phosphatidylinositols and cardiolipins were detected with higher intensity in GS (4+3) compared with GS (3+4), suggesting they may be markers of prostate cancer aggression. Results obtained through mass spectrometry imaging studies were subsequently correlated with a fast, ambient mass spectrometry method for potential use as a clinical tool to support image-guided prostate biopsy. Implications: In this study we suggest that metabolomic differences between prostate cancers with different Gleason scores can be detected by mass spectrometry imaging.
Wenya Linda Bi, Ahmed Hosny, Matthew B Schabath, Maryellen L Giger, Nicolai J Birkbak, Alireza Mehrtash, Tavis Allison, Omar Arnaout, Christopher Abbosh, Ian F Dunn, Raymond H Mak, Rulla M Tamimi, Clare M Tempany, Charles Swanton, Udo Hoffmann, Lawrence H Schwartz, Robert J Gillies, Raymond Y Huang, and Hugo JWL Aerts. 3/2019. “Artificial Intelligence in Cancer Imaging: Clinical Challenges and Applications.” CA Cancer J Clin, 69, 2, Pp. 127-57.
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Martin T King, Paul L Nguyen, Ninjin Boldbaatar, David D Yang, Vinayak Muralidhar, Clare M Tempany, Robert A Cormack, Mark D Hurwitz, Warren W Suh, Mark M Pomerantz, Anthony V D'Amico, and Peter F Orio. 3/2019. “Evaluating the Influence of Prostate-specific Antigen Kinetics on Metastasis in Men with PSA Recurrence after Partial Gland Therapy.” Brachytherapy, 18, 2, Pp. 198-203.
PURPOSE: Although current Delphi Consensus guidelines do not recommend a specific definition of biochemical recurrence after partial gland therapy, these guidelines acknowledge that serial prostate-specific antigen (PSA) tests remain the best marker for monitoring disease after treatment. The purpose of this study was to determine whether PSA velocity at failure per the Phoenix (nadir + 2 ng/mL) definition is associated with metastasis and prostate cancer-specific mortality (PCSM) in a cohort of patients who experienced PSA failure after partial gland therapy. METHODS: Between 1997 and 2007, 285 patients with favorable risk prostate cancer underwent partial prostate brachytherapy to the peripheral zone. PSA velocity was calculated for 94 patients who experienced PSA failure per the Phoenix (nadir + 2) definition. Fine and Gray competing risks regression was performed to determine whether PSA velocity and other clinical factors were associated with metastasis and PCSM. RESULTS: The median time to PSA failure was 4.2 years (interquartile range: 2.2, 7.9), and the median followup time after PSA failure was 6.5 years (3.5-9.7). Seventeen patients developed metastases, and five experienced PCSM. On multivariate analysis, PSA velocity ≥3.0 ng/mL/year (adjusted hazard ratio 5.97; [2.57, 13.90]; p < 0.001) and PSA nadir (adjusted hazard ratio 0.39; [0.24, 0.64]; p < 0.001) were significantly associated with metastasis. PSA velocity ≥3.0 ng/mL/year was also associated with PCSM (HR 15.3; [1.8, 128.0]; p = 0.012) on univariate analysis. CONCLUSIONS: Rapid PSA velocity at PSA failure after partial gland treatment may be prognostic for long-term outcomes.
Wei Huang, Yiyi Chen, Andriy Fedorov, Xia Li, Guido H Jajamovich, Dariya I Malyarenko, Madhava P Aryal, Peter S LaViolette, Matthew J Oborski, Finbarr O'Sullivan, Richard G Abramson, Kourosh Jafari-Khouzani, Aneela Afzal, Alina Tudorica, Brendan Moloney, Sandeep N Gupta, Cecilia Besa, Jayashree Kalpathy-Cramer, James M Mountz, Charles M Laymon, Mark Muzi, Paul E Kinahan, Kathleen Schmainda, Yue Cao, Thomas L Chenevert, Bachir Taouli, Thomas E Yankeelov, Fiona Fennessy, and Xin Li. 3/2019. “The Impact of Arterial Input Function Determination Variations on Prostate Dynamic Contrast-Enhanced Magnetic Resonance Imaging Pharmacokinetic Modeling: A Multicenter Data Analysis Challenge, Part II.” Tomography, 5, 1, Pp. 99-109.
This multicenter study evaluated the effect of variations in arterial input function (AIF) determination on pharmacokinetic (PK) analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using the shutter-speed model (SSM). Data acquired from eleven prostate cancer patients were shared among nine centers. Each center used a site-specific method to measure the individual AIF from each data set and submitted the results to the managing center. These AIFs, their reference tissue-adjusted variants, and a literature population-averaged AIF, were used by the managing center to perform SSM PK analysis to estimate Ktrans (volume transfer rate constant), ve (extravascular, extracellular volume fraction), kep (efflux rate constant), and τi (mean intracellular water lifetime). All other variables, including the definition of the tumor region of interest and precontrast T1 values, were kept the same to evaluate parameter variations caused by variations in only the AIF. Considerable PK parameter variations were observed with within-subject coefficient of variation (wCV) values of 0.58, 0.27, 0.42, and 0.24 for Ktrans, ve, kep, and τi, respectively, using the unadjusted AIFs. Use of the reference tissue-adjusted AIFs reduced variations in Ktrans and ve (wCV = 0.50 and 0.10, respectively), but had smaller effects on kep and τi (wCV = 0.39 and 0.22, respectively). kep is less sensitive to AIF variation than Ktrans, suggesting it may be a more robust imaging biomarker of prostate microvasculature. With low sensitivity to AIF uncertainty, the SSM-unique τi parameter may have advantages over the conventional PK parameters in a longitudinal study.
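For reference, the within-subject coefficient of variation reported above is defined (standard form, not specific to this study) as the within-subject standard deviation normalized by the mean:

```latex
% Within-subject coefficient of variation of a PK parameter across the
% repeated (per-site) AIF determinations:
\mathrm{wCV} \;=\; \frac{\sigma_{w}}{\mu},
\qquad \sigma_{w}^{2} = \text{within-subject variance}, \quad \mu = \text{mean parameter value}
```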
S Frisken, M Luo, I Machado, P Unadkat, P Juvekar, A Bunevicius, M Toews, WM Wells, MI Miga, and AJ Golby. 2/2019. “Preliminary Results Comparing Thin Plate Splines with Finite Element Methods for Modeling Brain Deformation during Neurosurgery using Intraoperative Ultrasound.” Proc SPIE Int Soc Opt Eng, 10951, Pp. 1095120.
Brain shift compensation attempts to model the deformation of the brain which occurs during the surgical removal of brain tumors to enable mapping of presurgical image data into patient coordinates during surgery and thus improve the accuracy and utility of neuro-navigation. We present preliminary results from clinical tumor resections that compare two methods for modeling brain deformation, a simple thin plate spline method that interpolates displacements and a more complex finite element method (FEM) that models physical and geometric constraints of the brain and its material properties. Both methods are driven by the same set of displacements at locations surrounding the tumor. These displacements were derived from sets of corresponding matched features that were automatically detected using the SIFT-Rank algorithm. The deformation accuracy was tested using a set of manually identified landmarks. The FEM method requires significantly more preprocessing than the spline method but both methods can be used to model deformations in the operating room in reasonable time frames. Our preliminary results indicate that the FEM deformation model significantly outperforms the spline-based approach for predicting the deformation of manual landmarks. While both methods compensate for brain shift, this work suggests that models that incorporate biophysics and geometric constraints may be more accurate.
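A minimal sketch of thin plate spline interpolation of sparse displacements, using SciPy's RBFInterpolator (variable names are illustrative, and this is not the authors' pipeline):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_displacement_field(control_points, displacements, query_points):
    """control_points: (P, 3) matched-feature locations; displacements: (P, 3)
    measured shifts at those locations; query_points: (Q, 3) positions where
    the deformation is needed. Returns (Q, 3) interpolated displacements."""
    tps = RBFInterpolator(control_points, displacements, kernel='thin_plate_spline')
    return tps(query_points)
```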
Francesco Alessandrino, Mehdi Taghipour, Elmira Hassanzadeh, Alireza Ziaei, Mark Vangel, Andriy Fedorov, Clare M Tempany, and Fiona M Fennessy. 1/2019. “Predictive Role of PI-RADSv2 and ADC Parameters in Differentiating Gleason Pattern 3 + 4 and 4 + 3 Prostate Cancer.” Abdom Radiol (NY), 44, 1, Pp. 279-85.
PURPOSE: To compare the predictive roles of qualitative (PI-RADSv2) and quantitative (ADC metrics) assessment in differentiating Gleason pattern (GP) 3 + 4 from the more aggressive GP 4 + 3 prostate cancer (PCa) using radical prostatectomy (RP) specimen as the reference standard. METHODS: We retrospectively identified treatment-naïve peripheral (PZ) and transitional zone (TZ) Gleason Score 7 PCa patients who underwent multiparametric 3T prostate MRI (DWI with b values of 0 and 1400 and, where unavailable, 0 and 500) and subsequent RP from 2011 to 2015. For each lesion identified on MRI, a PI-RADSv2 score was assigned by a radiologist blinded to pathology data. A PI-RADSv2 score ≤ 3 was defined as "low risk," a PI-RADSv2 score ≥ 4 as "high risk" for clinically significant PCa. Mean tumor ADC, ADC of adjacent normal tissue, and their ratio were calculated. Stepwise regression analysis using tumor location, the ADC metrics, b value, and low vs. high PI-RADSv2 score was performed to differentiate GP 3 + 4 from 4 + 3. RESULTS: 119 out of 645 cases initially identified met eligibility requirements. 76 lesions were GP 3 + 4, 43 were 4 + 3. ADC was significantly different between the two GP groups (p = 0.001). PI-RADSv2 score ("low" vs. "high") was not significantly different between the two GP groups (p = 0.17). Regression analysis selected two of the ADC metrics (p = 0.03 and p = 0.0007) as the best predictors to differentiate GP 4 + 3 from 3 + 4. Estimated sensitivity, specificity, and accuracy of the predictive model in differentiating GP 4 + 3 from 3 + 4 were 37, 82, and 66%, respectively. CONCLUSIONS: ADC metrics could differentiate GP 3 + 4 from 4 + 3 PCa with high specificity and moderate accuracy, while PI-RADSv2 did not differentiate between these patterns.
Pelin Aksit Ciris, Jr-yuan George Chiou, Daniel I Glazer, Tzu-Cheng Chao, Clare M Tempany-Afdhal, Bruno Madore, and Stephan E. Maier. 2019. “Accelerated Segmented Diffusion-Weighted Prostate Imaging for Higher Resolution, Higher Geometric Fidelity, and Multi-b Perfusion Estimation.” Invest Radiol, 54, 4, Pp. 238-46.
PURPOSE: The aim of this study was to improve the geometric fidelity and spatial resolution of multi-b diffusion-weighted magnetic resonance imaging of the prostate. MATERIALS AND METHODS: An accelerated segmented diffusion imaging sequence was developed and evaluated in 25 patients undergoing multiparametric magnetic resonance imaging examinations of the prostate. A reduced field of view was acquired using an endorectal coil. The number of sampled diffusion weightings, or b-factors, was increased to allow estimation of tissue perfusion based on the intravoxel incoherent motion (IVIM) model. Apparent diffusion coefficients measured with the proposed segmented method were compared with those obtained with conventional single-shot echo-planar imaging (EPI). RESULTS: Compared with single-shot EPI, the segmented method resulted in faster acquisition with a 2-fold improvement in spatial resolution and a greater than 3-fold improvement in geometric fidelity. Apparent diffusion coefficient values measured with the novel sequence demonstrated excellent agreement with those obtained from the conventional scan (R = 0.91 for bmax = 500 s/mm² and R = 0.89 for bmax = 1400 s/mm²). The IVIM perfusion fraction was 4.0% ± 2.7% for normal peripheral zone, 6.6% ± 3.6% for normal transition zone, and 4.4% ± 2.9% for suspected tumor lesions. CONCLUSIONS: The proposed accelerated segmented prostate diffusion imaging sequence achieved improvements in both spatial resolution and geometric fidelity, along with concurrent quantification of IVIM perfusion.
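For context, the IVIM model referenced above describes the diffusion-weighted signal as a biexponential mixture of a perfusion (pseudo-diffusion) compartment with fraction f and a tissue diffusion compartment (standard formulation, not specific to this sequence):

```latex
% Intravoxel incoherent motion (IVIM) signal model:
\frac{S(b)}{S_0} \;=\; f\, e^{-b D^{*}} \;+\; (1 - f)\, e^{-b D}
```

where D* is the pseudo-diffusion coefficient associated with perfusion and D is the tissue diffusion coefficient.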
Lauren J O'Donnell, Alessandro Daducci, Demian Wassermann, and Christophe Lenglet. 2019. “Advances in Computational and Statistical Diffusion MRI.” NMR Biomed., 32, 4, Pp. e3805.
Computational methods are crucial for the analysis of diffusion magnetic resonance imaging (MRI) of the brain. Computational diffusion MRI can provide rich information at many size scales, including local microstructure measures such as diffusion anisotropies or apparent axon diameters, whole-brain connectivity information that describes the brain's wiring diagram and population-based studies in health and disease. Many of the diffusion MRI analyses performed today were not possible five, ten or twenty years ago, due to the requirements for large amounts of computer memory or processor time. In addition, mathematical frameworks had to be developed or adapted from other fields to create new ways to analyze diffusion MRI data. The purpose of this review is to highlight recent computational and statistical advances in diffusion MRI and to put these advances into context by comparison with the more traditional computational methods that are in popular clinical and scientific use. We aim to provide a high-level overview of interest to diffusion MRI researchers, with a more in-depth treatment to illustrate selected computational advances.
