TRD 3: Intraoperative Devices

Nobuhiko Hata, PhD; Jayender Jagadeesan, PhD; Oliver Jonas, PhD; Kemal Tuncali, MD

Led by Nobuhiko Hata, the focus areas of the Intraoperative Devices TRD are:

  • An intraoperative needle guidance system for prostate biopsy that models needle deflection and derives optimal insertion paths for in-bore MRI-guided prostate interventions.

  • An augmented reality guided navigation system for thoracoscopic lung surgery with compensation for lung deformation.

  • Implantable microdevices with integrated retrievability for difficult-to-access tissues, such as the brain.

Select Publications

Dominas C, Bhagavatula S, Stover EH, Deans K, Larocca C, Colson YL, Peruzzi PP, Kibel AS, Hata N, Tsai LL, et al. The Translational and Regulatory Development of an Implantable Microdevice for Multiple Drug Sensitivity Measurements in Cancer Patients. IEEE Trans Biomed Eng. 2021;PP.
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow. METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients. RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress. CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents. SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.
Zhou H, Jayender J. Real-Time Nonrigid Mosaicking of Laparoscopy Images. IEEE Trans Med Imaging. 2021;40(6):1726-36.
The ability to extend the field of view of laparoscopy images can help surgeons obtain a better understanding of the anatomical context. However, due to tissue deformation, complex camera motion, and complex three-dimensional (3D) anatomical surfaces, image pixels may undergo non-rigid deformation, and traditional mosaicking methods cannot work robustly on laparoscopy images in real-time. To solve this problem, a novel two-dimensional (2D) non-rigid simultaneous localization and mapping (SLAM) system is proposed in this paper, which is able to compensate for the deformation of pixels and perform image mosaicking in real-time. The key algorithm of this 2D non-rigid SLAM system is the expectation maximization and dual quaternion (EMDQ) algorithm, which can generate a smooth and dense deformation field from sparse and noisy image feature matches in real-time. An uncertainty-based loop closing method is proposed to reduce the accumulative errors. To achieve real-time performance, both CPU and GPU parallel computation technologies are used for dense mosaicking of all pixels. Experimental results on in vivo and synthetic data demonstrate the feasibility and accuracy of our non-rigid mosaicking method.
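The core step described above, turning sparse, noisy feature matches into a dense deformation field, can be illustrated with a much-simplified sketch. The real EMDQ algorithm blends local rigid transforms with dual quaternions and down-weights outliers via expectation maximization; the toy version below only does Gaussian-weighted averaging of per-match displacement vectors, and all function and variable names are illustrative, not from the paper.

```python
import numpy as np

def dense_deformation_from_matches(src_pts, dst_pts, grid_shape, sigma=20.0):
    """Interpolate sparse match displacements into a dense 2D deformation field.

    Simplified sketch of the EMDQ idea: each pixel's displacement is a
    distance-weighted average of the sparse match displacements. (EMDQ itself
    blends dual-quaternion rigid transforms and handles outliers with EM.)
    """
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).astype(float)           # (h, w, 2) pixel coords
    disp = dst_pts - src_pts                                   # (n, 2) match displacements
    # squared distance from every pixel to every source match point
    d2 = ((grid[:, :, None, :] - src_pts[None, None, :, :]) ** 2).sum(-1)
    wgt = np.exp(-d2 / (2 * sigma ** 2))                       # (h, w, n) Gaussian weights
    wgt /= wgt.sum(-1, keepdims=True) + 1e-12                  # normalize per pixel
    return (wgt[..., None] * disp[None, None, :, :]).sum(2)    # (h, w, 2) dense field

# two matches: one pixel moves 2 px right, another moves 3 px down
src = np.array([[10.0, 10.0], [50.0, 50.0]])
dst = np.array([[12.0, 10.0], [50.0, 53.0]])
field = dense_deformation_from_matches(src, dst, (64, 64))
```

Near each match the field reproduces that match's displacement, and it decays smoothly in between, which is the property the mosaicking pipeline needs to warp all pixels consistently.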
Wang D, Li M, Ben-Shlomo N, Corrales EC, Cheng Y, Zhang T, Jayender J. A Novel Dual-Network Architecture for Mixed-Supervised Medical Image Segmentation. Comput Med Imaging Graph. 2021;89:101841.
In medical image segmentation tasks, deep learning-based models usually require densely and precisely annotated datasets for training, which are time-consuming and expensive to prepare. One possible solution is to train with a mixed-supervised dataset, where only part of the data is densely annotated with segmentation maps and the rest is annotated in some weak form, such as bounding boxes. In this paper, we propose a novel network architecture called Mixed-Supervised Dual-Network (MSDN), which consists of two separate networks for the segmentation and detection tasks, respectively, and a series of connection modules between the layers of the two networks. These connection modules extract and transfer useful information from the detection task to help the segmentation task. We exploit a variant of the recently designed 'Squeeze and Excitation' technique in the connection module to boost information transfer between the two tasks. Compared with existing models with a shared backbone and multiple branches, our model has a flexible and trainable feature-sharing mechanism and is thus more effective and stable. We conduct experiments on 4 medical image segmentation datasets, and the results show that the proposed MSDN model outperforms multiple baselines.
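The 'Squeeze and Excitation' mechanism the connection modules adapt can be sketched in a few lines: global-average-pool each channel ("squeeze"), pass the channel summary through a small two-layer bottleneck ("excitation"), and rescale the channels with the resulting sigmoid gates. The numpy version below is a minimal illustration, not MSDN's actual connection module; `w1` and `w2` stand in for learned weight matrices.

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    """Squeeze-and-Excitation channel gating on a (C, H, W) feature map.

    z    = per-channel global average ("squeeze")
    gate = sigmoid(w2 @ relu(w1 @ z))  ("excitation" bottleneck)
    out  = feat rescaled channel-wise by gate
    """
    z = feat.mean(axis=(1, 2))                    # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                   # bottleneck + ReLU: (C/r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))        # sigmoid gates: (C,)
    return feat * gate[:, None, None]             # channel-wise rescale

rng = np.random.default_rng(0)
C, r = 8, 2                                       # channels and reduction ratio
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1       # stand-ins for learned weights
w2 = rng.standard_normal((C, C // r)) * 0.1
out = squeeze_excitation(feat, w1, w2)
```

Because each gate lies in (0, 1), the module can only attenuate channels, which is what makes it a cheap, differentiable way to reweight which detection features flow into the segmentation stream.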
Wang D, Zhang T, Li M, Bueno R, Jayender J. 3D Deep Learning Based Classification of Pulmonary Ground Glass Opacity Nodules With Automatic Segmentation. Comput Med Imaging Graph. 2021;88:101814.
Classifying ground-glass lung nodules (GGNs) into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) on diagnostic CT images is important to evaluate the therapy options for lung cancer patients. In this paper, we propose a joint deep learning model where the segmentation can better facilitate the classification of pulmonary GGNs. Based on our observation that masking the nodule to train the model results in better lesion classification, we propose to build a cascade architecture with both segmentation and classification networks. The segmentation model works as a trainable preprocessing module to provide the classification-guided 'attention' weight map to the raw CT data to achieve better diagnosis performance. We evaluate our proposed model and compare with other baseline models for 4 clinically significant nodule classification tasks, defined by a combination of pathology types, using 4 classification metrics: Accuracy, Average F1 Score, Matthews Correlation Coefficient (MCC), and Area Under the Receiver Operating Characteristic Curve (AUC). Experimental results show that the proposed method outperforms other baseline models on all the diagnostic classification tasks.
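The cascade idea above, using the segmentation output as an 'attention' weight on the raw CT before classification, reduces to an elementwise reweighting. The sketch below is a toy illustration under assumed conventions: `soft_mask` is the segmentation network's probability map, and the `floor` parameter (our addition, not from the paper) keeps some surrounding context rather than hard-masking everything outside the nodule.

```python
import numpy as np

def attention_masked_input(ct_volume, soft_mask, floor=0.1):
    """Weight raw CT intensities by a soft segmentation probability map.

    Voxels inside the nodule (mask ~ 1) pass through unchanged; background
    voxels are attenuated to `floor` of their value instead of zeroed, so the
    downstream classifier still sees some anatomical context.
    """
    weights = floor + (1.0 - floor) * soft_mask   # maps [0, 1] -> [floor, 1]
    return ct_volume * weights

ct = np.array([[100.0, -50.0],
               [40.0, 0.0]])
mask = np.array([[1.0, 0.0],
                 [0.5, 1.0]])
out = attention_masked_input(ct, mask)
```

In the paper's cascade the segmentation module is trainable, so gradients from the classification loss can sharpen the attention map; this static sketch only shows the forward weighting.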
Xu Z, Luo J, Yan J, Li X, Jayender J. F3RNet: Full-resolution Residual Registration Network for Deformable Image Registration. Int J Comput Assist Radiol Surg. 2021;16(6):923-32.
PURPOSE: Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most deep learning approaches use the so-called mono-stream high-to-low, low-to-high network structure and can achieve satisfactory overall registration results. However, accurate alignments for some severely deformed local regions, which are crucial for pinpointing surgical targets, are often overlooked. Consequently, these approaches are not sensitive to some hard-to-align regions, e.g., intra-patient registration of deformed liver lobes. METHODS: We propose a novel unsupervised registration network, namely full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration. The other stream learns the deep multi-scale residual representations to obtain robust recognition. We also factorize the 3D convolution to reduce the training parameters and enhance network efficiency. RESULTS: We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches. CONCLUSION: By combining the high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet can achieve accurate overall and local registration. The run time for registering a pair of images is less than 3 s using a GPU. In future works, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.
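The parameter savings from factorizing 3D convolutions, mentioned in the METHODS above, can be made concrete with a quick count. The layer layout below (a k×k×1 in-plane convolution followed by a 1×1×k through-plane convolution) is one common factorization and is illustrative; the paper's exact decomposition may differ.

```python
def conv3d_params(c_in, c_out, k):
    """Weights in a full k*k*k 3D convolution (bias ignored)."""
    return c_in * c_out * k ** 3

def factorized_params(c_in, c_out, k):
    """Weights after factorizing into a k*k*1 in-plane convolution
    followed by a 1*1*k through-plane convolution (one assumed layout)."""
    return c_in * c_out * k * k + c_out * c_out * k

full = conv3d_params(32, 32, 3)        # 32*32*27 = 27648 weights
fact = factorized_params(32, 32, 3)    # 9216 + 3072 = 12288 weights
```

For a typical 3×3×3 layer this assumed factorization cuts the weight count by more than half, which is the kind of reduction that helps keep a full-resolution stream trainable on GPU memory budgets.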