TRD 3: Intraoperative Devices

Nobuhiko Hata, PhD, Jayender Jagadeesan, PhD, Oliver Jonas, PhD, Kemal Tuncali, MD

Led by Nobuhiko Hata, the focus areas of the Intraoperative Devices TRD are:

  • An intraoperative needle guidance system for prostate biopsy that models needle deflection and derives optimal insertion paths for in-bore MRI-guided prostate interventions.

  • An augmented reality guided navigation system for thoracoscopic lung surgery with compensation for lung deformation.

  • Implantable microdevices with integrated retrievability for difficult-to-access tissues, such as the brain.

Select Publications

Kobayashi S, King F, Hata N. Automatic Segmentation of Prostate and Extracapsular Structures in MRI to Predict Needle Deflection in Percutaneous Prostate Intervention. Int J Comput Assist Radiol Surg. 2022.
PURPOSE: Understanding the three-dimensional anatomy of percutaneous intervention in prostate cancer is essential to avoid complications. Recently, attempts have been made to use machine learning to automate the segmentation of functional structures such as the prostate gland, rectum, and bladder. However, little material is available for segmenting the extracapsular structures that are known to cause needle deflection during percutaneous interventions. This research aims to explore the feasibility of automatic segmentation of the prostate and extracapsular structures to predict needle deflection. METHODS: Using pelvic magnetic resonance imaging (MRI), a 3D U-Net was trained and optimized for the prostate and extracapsular structures (bladder, rectum, pubic bone, pelvic diaphragm muscle, bulbospongiosus muscle, bulb of the penis, ischiocavernosus muscle, crus of the penis, transverse perineal muscle, obturator internus muscle, and seminal vesicle). Segmentation accuracy was validated by feeding intra-procedural MRIs into the 3D U-Net to segment the prostate and extracapsular structures in the image. The segmented structures were then used to predict the deflected needle path in in-bore MRI-guided biopsy using a model-based approach. RESULTS: The 3D U-Net yielded high Dice scores for parenchymal organs (0.61-0.83), such as the prostate, bladder, rectum, bulb of the penis, and crus of the penis, but lower scores for muscle structures (0.03-0.31), except the obturator internus muscle (0.71). The 3D U-Net showed higher Dice scores for functional structures (p < 0.001) and complication-related structures (p < 0.001). The segmentation of extracapsular anatomies helped to predict the deflected needle path in MRI-guided prostate interventions with an accuracy of 0.9 to 4.9 mm. CONCLUSION: Our segmentation method using 3D U-Net provided an accurate anatomical understanding of the prostate and extracapsular structures. In addition, our method was suitable for segmenting functional and complication-related structures. Finally, 3D images of the prostate and extracapsular structures could simulate the needle pathway to predict needle deflections.
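For reference, the Dice score used to validate these segmentations is a standard overlap measure between a predicted and a ground-truth mask; a minimal sketch (illustrative only, not the authors' code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1D "masks": 2 overlapping voxels out of 3 + 3 labeled voxels.
a = np.array([1, 1, 1, 0, 0, 0])
b = np.array([0, 1, 1, 1, 0, 0])
print(dice_score(a, b))  # 2*2/(3+3) ≈ 0.667
```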
Ehdaie B, Tempany CM, Holland F, Sjoberg DD, Kibel AS, Trinh Q-D, Durack JC, Akin O, Vickers AJ, Scardino PT, et al. MRI-Guided Focused Ultrasound Focal Therapy for Patients With Intermediate-Risk Prostate Cancer: A Phase 2b, Multicentre Study. Lancet Oncol. 2022;23(7):910-8.
BACKGROUND: Men with grade group 2 or 3 prostate cancer are often considered ineligible for active surveillance; some patients with grade group 2 prostate cancer who are managed with active surveillance will have early disease progression requiring radical therapy. This study aimed to investigate whether MRI-guided focused ultrasound focal therapy can safely reduce treatment burden for patients with localised grade group 2 or 3 intermediate-risk prostate cancer. METHODS: In this single-arm, multicentre, phase 2b study conducted at eight health-care centres in the USA, we recruited men aged 50 years and older with unilateral, MRI-visible, primary, intermediate-risk, previously untreated prostate adenocarcinoma (prostate-specific antigen ≤20 ng/mL, grade group 2 or 3; tumour classification ≤T2) confirmed on combined biopsy (combining MRI-targeted and systematic biopsies). MRI-guided focused ultrasound energy, sequentially titrated to temperatures sufficient for tissue ablation (about 60-70°C), was delivered to the index lesion and a planned margin of 5 mm or more of normal tissue, using real-time magnetic resonance thermometry for intraoperative monitoring. Co-primary outcomes were oncological outcomes (absence of grade group 2 and higher cancer in the treated area at 6-month and 24-month combined biopsy; when 24-month biopsy data were not available and grade group 2 or higher cancer had occurred in the treated area at 6 months, the 6-month biopsy results were included in the final analysis) and safety (adverse events up to 24 months) in all patients enrolled in the study. This study is registered with ClinicalTrials.gov, NCT01657942, and is no longer recruiting. FINDINGS: Between May 4, 2017, and Dec 21, 2018, we assessed 194 patients for eligibility and treated 101 patients with MRI-guided focused ultrasound. Median age was 63 years (IQR 58-67) and median concentration of prostate-specific antigen was 5·7 ng/mL (IQR 4·2-7·5). 
Most cancers were grade group 2 (79 [78%] of 101). At 24 months, 78 (88% [95% CI 79-94]) of 89 men had no evidence of grade group 2 or higher prostate cancer in the treated area. No grade 4 or grade 5 treatment-related adverse events were reported, and only one grade 3 adverse event (urinary tract infection) was reported. There were no treatment-related deaths. INTERPRETATION: 24-month biopsy outcomes show that MRI-guided focused ultrasound focal therapy is safe and effectively treats grade group 2 or 3 prostate cancer. These results support focal therapy for select patients and its use in comparative trials to determine if a tissue-preserving approach is effective in delaying or eliminating the need for radical whole-gland treatment in the long term. FUNDING: Insightec and the National Cancer Institute.
Zhou H, Jayender J. EMDQ: Removal of Image Feature Mismatches in Real-Time. IEEE Trans Image Process. 2022;31:706-20.
This paper proposes a novel method for removing image feature mismatches in real-time that can handle both rigid and smooth deforming environments. Image distortion, parallax and object deformation may cause the pixel coordinates of feature matches to have non-rigid deformations, which cannot be represented using a single analytical rigid transformation. To solve this problem, we propose an algorithm based on the re-weighting and 1-point RANSAC strategy (R1P-RNSC), which operates under the assumption that a non-rigid deformation can be approximately represented by multiple rigid transformations. R1P-RNSC is fast but suffers from the drawback that local smoothing information cannot be considered, thus limiting its accuracy. To solve this problem, we propose a non-parametric algorithm based on the expectation-maximization algorithm and the dual quaternion-based representation (EMDQ). EMDQ generates dense and smooth deformation fields by interpolating among the feature matches, simultaneously removing mismatches that are inconsistent with the deformation field. It relies on the rigid transformations obtained by R1P-RNSC to improve its accuracy. The experimental results demonstrate that EMDQ has superior accuracy compared to other state-of-the-art mismatch removal methods. The ability to build correspondences for all image pixels using the dense deformation field is another contribution of this paper.
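EMDQ itself relies on re-weighting, 1-point sampling, and dual-quaternion interpolation; as a much-simplified stand-in, a classic two-point RANSAC with a single rigid 2D model illustrates the underlying idea of flagging feature matches that are inconsistent with a rigid fit (synthetic data and a hypothetical threshold, not the paper's algorithm):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_inliers(src, dst, thresh=2.0, iters=100, seed=0):
    """Return the match set consistent with the best sampled rigid model."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(src), 2, replace=False)
        R, t = fit_rigid_2d(src[[i, j]], dst[[i, j]])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inl = resid < thresh
        if inl.sum() > best.sum():
            best = inl
    return best

# Synthetic matches: a pure rotation + translation, plus one gross mismatch.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (20, 2))
th = 0.1
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = src @ R.T + np.array([5.0, -3.0])
dst[3] += 30.0  # injected mismatch
inliers = ransac_inliers(src, dst)
print(inliers)
```

EMDQ goes further by replacing the single rigid model with a dense, smoothly interpolated field of rigid transforms, so it also handles deforming tissue.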
Dominas C, Bhagavatula S, Stover EH, Deans K, Larocca C, Colson YL, Peruzzi PP, Kibel AS, Hata N, Tsai LL, et al. The Translational and Regulatory Development of an Implantable Microdevice for Multiple Drug Sensitivity Measurements in Cancer Patients. IEEE Trans Biomed Eng. 2022;69(1):412-21.
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow. METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients. RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress. CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents. SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.
Zhou H, Jayender J. EMDQ-SLAM: Real-time High-resolution Reconstruction of Soft Tissue Surface from Stereo Laparoscopy Videos. Med Image Comput Comput Assist Interv. 2021;12904:331-40.
We propose a novel stereo laparoscopy video-based non-rigid SLAM method called EMDQ-SLAM, which can incrementally reconstruct three-dimensional (3D) models of soft tissue surfaces in real-time and preserve high-resolution color textures. EMDQ-SLAM uses the expectation maximization and dual quaternion (EMDQ) algorithm combined with SURF features to track the camera motion and estimate tissue deformation between video frames. To overcome the problem of accumulative errors over time, we have integrated a g2o-based graph optimization method that combines the EMDQ mismatch removal and as-rigid-as-possible (ARAP) smoothing methods. Finally, the multi-band blending (MBB) algorithm has been used to obtain high-resolution color textures with real-time performance. Experimental results demonstrate that our method outperforms two state-of-the-art non-rigid SLAM methods: MISSLAM and DefSLAM. Quantitative evaluation shows an average error in the range of 0.8-2.2 mm for different cases.
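The texture step can be illustrated with simple feathered (weighted-average) blending of overlapping mosaic tiles, a much-simplified stand-in for the multi-band blending used in the paper (toy images):

```python
import numpy as np

def feather_blend(img_a, img_b, mask_a, mask_b):
    """Weighted-average blending of two overlapping tiles.

    mask_* are per-pixel weights (1 inside a tile, 0 outside); overlap
    pixels get the weight-normalized average of both sources.
    """
    w = mask_a + mask_b
    out = (img_a * mask_a + img_b * mask_b) / np.maximum(w, 1e-9)
    return np.where(w > 0, out, 0.0)  # leave uncovered pixels at 0

# Tile A covers columns 0-2, tile B covers columns 1-3; they overlap in 1-2.
a = np.full((2, 4), 10.0); ma = np.zeros((2, 4)); ma[:, :3] = 1.0
b = np.full((2, 4), 30.0); mb = np.zeros((2, 4)); mb[:, 1:] = 1.0
m = feather_blend(a, b, ma, mb)
print(m[0])  # [10. 20. 20. 30.]
```

Multi-band blending improves on this by blending low frequencies over wide regions and high frequencies over narrow ones, which hides seams without blurring texture detail.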
Xu Z, Yan J, Luo J, Wells W, Li X, Jagadeesan J. Unimodal Cyclic Regularization for Training Multimodal Image Registration Networks. Proc IEEE Int Symp Biomed Imaging. 2021;2021.
The loss function of an unsupervised multimodal image registration framework has two terms, i.e., a metric for similarity measure and regularization. In the deep learning era, researchers proposed many approaches to automatically learn the similarity metric, which has been shown effective in improving registration performance. However, for the regularization term, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration, to constrain the deformation field of multimodal registration. In the experiment of abdominal CT-MR registration, the proposed method yields better results over conventional regularization methods, especially for severely deformed local regions.
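A typical hand-crafted regularizer of the kind this work contrasts with is a smoothness penalty on the estimated deformation field; a toy example (illustrative only, not the proposed cyclic method):

```python
import numpy as np

def smoothness_penalty(field):
    """Hand-crafted regularizer: mean squared spatial finite difference
    of a 2D displacement field of shape (H, W, 2)."""
    gy = np.diff(field, axis=0)  # differences along rows
    gx = np.diff(field, axis=1)  # differences along columns
    return (gy ** 2).mean() + (gx ** 2).mean()

# A globally constant shift is perfectly smooth; a noisy field is penalized.
H, W = 8, 8
constant = np.ones((H, W, 2)) * 3.0
noisy = constant + np.random.default_rng(0).normal(0, 1, (H, W, 2))
print(smoothness_penalty(constant))  # 0.0
print(smoothness_penalty(noisy))     # > 0
```

Such a penalty imposes the same artificial smoothness everywhere; the cyclic approach instead learns task-specific deformation priors from unimodal registration.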
Banach A, King F, Masaki F, Tsukada H, Hata N. Visually Navigated Bronchoscopy Using Three Cycle-Consistent Generative Adversarial Network for Depth Estimation. Med Image Anal. 2021;73:102164.
[Background] Electromagnetically Navigated Bronchoscopy (ENB) is currently the state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address the aforementioned issue of CT-to-body divergence. [Materials and Methods] We extended and validated an unsupervised learning method to generate a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and to register the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into a navigated bronchoscopic system based on 3D Slicer and accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of the tracking and the Target Registration Error (TRE) of the total navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. [Results] The ATE using 3cGAN was 6.2 +/- 2.9 mm. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method ranged from 11.7 to 40.5 mm. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05). [Conclusion] VNB using 3cGAN to generate the depth maps was technically and clinically feasible. While the accuracy of tracking by 3cGAN was acceptable, the TRE warrants further investigation and improvement.
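The TRE reported above is simply the residual Euclidean distance at target points after registration; a minimal sketch with toy coordinates (not the study's evaluation code):

```python
import numpy as np

def target_registration_error(pred_targets, true_targets):
    """Per-target Euclidean distance after registration; returns mean and SD."""
    d = np.linalg.norm(pred_targets - true_targets, axis=1)
    return d.mean(), d.std()

# Two toy 3D targets, each 5 mm from its registered prediction.
pred = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
true = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 5.0]])
mean, sd = target_registration_error(pred, true)
print(mean, sd)  # 5.0 0.0
```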
Zhou H, Jayender J. Real-Time Nonrigid Mosaicking of Laparoscopy Images. IEEE Trans Med Imaging. 2021;40(6):1726-36.
The ability to extend the field of view of laparoscopy images can help the surgeons to obtain a better understanding of the anatomical context. However, due to tissue deformation, complex camera motion and significant three-dimensional (3D) anatomical surface, image pixels may have non-rigid deformation and traditional mosaicking methods cannot work robustly for laparoscopy images in real-time. To solve this problem, a novel two-dimensional (2D) non-rigid simultaneous localization and mapping (SLAM) system is proposed in this paper, which is able to compensate for the deformation of pixels and perform image mosaicking in real-time. The key algorithm of this 2D non-rigid SLAM system is the expectation maximization and dual quaternion (EMDQ) algorithm, which can generate smooth and dense deformation field from sparse and noisy image feature matches in real-time. An uncertainty-based loop closing method has been proposed to reduce the accumulative errors. To achieve real-time performance, both CPU and GPU parallel computation technologies are used for dense mosaicking of all pixels. Experimental results on in vivo and synthetic data demonstrate the feasibility and accuracy of our non-rigid mosaicking method.
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain Adaptation for Segmentation of Critical Structures for Prostate Cancer Therapy. Sci Rep. 2021;11(1):11480.
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step to generate a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gain with the combination of those techniques compared to pure transfer learning (TL) and the combination of TL with simple self-learning (statistically significant for all structures using a Wilcoxon signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
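The uncertainty-guided self-learning step can be illustrated with a toy deep-ensemble filter that keeps only the pseudo-labels on which the ensemble members agree (hypothetical variance threshold, not the authors' implementation):

```python
import numpy as np

def confident_pseudo_labels(ensemble_probs, var_thresh=0.01):
    """Select low-disagreement pseudo-labels for self-training.

    ensemble_probs: (n_models, n_voxels) foreground probabilities from a
    deep ensemble. Returns (pseudo_labels, keep_mask); high inter-model
    variance is used as a proxy for predictive uncertainty.
    """
    mean = ensemble_probs.mean(axis=0)
    var = ensemble_probs.var(axis=0)
    keep = var < var_thresh  # drop voxels where the ensemble disagrees
    return (mean > 0.5).astype(int), keep

# Three "models": they agree on voxels 0 and 2, disagree on voxel 1.
probs = np.array([[0.90, 0.2, 0.10],
                  [0.92, 0.8, 0.12],
                  [0.88, 0.5, 0.08]])
labels, keep = confident_pseudo_labels(probs)
print(labels, keep)
```

Only the retained voxels would contribute to the self-learning loss on the target domain, which is what keeps noisy pseudo-labels from degrading the adapted model.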
Wang D, Zhang T, Li M, Bueno R, Jayender J. 3D Deep Learning Based Classification of Pulmonary Ground Glass Opacity Nodules With Automatic Segmentation. Comput Med Imaging Graph. 2021;88:101814.
Classifying ground-glass lung nodules (GGNs) into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) on diagnostic CT images is important to evaluate the therapy options for lung cancer patients. In this paper, we propose a joint deep learning model where the segmentation can better facilitate the classification of pulmonary GGNs. Based on our observation that masking the nodule to train the model results in better lesion classification, we propose to build a cascade architecture with both segmentation and classification networks. The segmentation model works as a trainable preprocessing module to provide the classification-guided 'attention' weight map to the raw CT data to achieve better diagnosis performance. We evaluate our proposed model and compare with other baseline models for 4 clinically significant nodule classification tasks, defined by a combination of pathology types, using 4 classification metrics: Accuracy, Average F1 Score, Matthews Correlation Coefficient (MCC), and Area Under the Receiver Operating Characteristic Curve (AUC). Experimental results show that the proposed method outperforms other baseline models on all the diagnostic classification tasks.
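The cascade's attention step amounts to weighting the raw CT intensities by the segmentation output before they reach the classifier; a schematic sketch on 2D toy data (not the published network):

```python
import numpy as np

def apply_attention(ct_volume, seg_prob):
    """Weight raw CT intensities by a segmentation-derived attention map,
    so the downstream classifier focuses on the lesion region."""
    return ct_volume * seg_prob

# Toy 4x4 "CT slice" with a 2x2 "nodule" attention region.
ct = np.full((4, 4), 100.0)
prob = np.zeros((4, 4))
prob[1:3, 1:3] = 1.0  # segmentation says: nodule here
attended = apply_attention(ct, prob)
print(attended.sum())  # only the 4 nodule pixels survive: 400.0
```

In the paper the segmentation network producing this weight map is trainable end-to-end with the classifier, so the attention itself is optimized for diagnostic performance.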