The ability to extend the field of view of laparoscopic images can help surgeons obtain a better understanding of the anatomical context. However, due to tissue deformation, complex camera motion, and pronounced three-dimensional (3D) anatomical surfaces, image pixels may undergo non-rigid deformation, and traditional mosaicking methods cannot work robustly on laparoscopic images in real time. To solve this problem, a novel two-dimensional (2D) non-rigid simultaneous localization and mapping (SLAM) system is proposed in this paper, which is able to compensate for the deformation of pixels and perform image mosaicking in real time. The key algorithm of this 2D non-rigid SLAM system is the expectation maximization and dual quaternion (EMDQ) algorithm, which can generate a smooth and dense deformation field from sparse and noisy image feature matches in real time. An uncertainty-based loop closing method is proposed to reduce the accumulated errors. To achieve real-time performance, both CPU and GPU parallel computation technologies are used for dense mosaicking of all pixels. Experimental results on in vivo and synthetic data demonstrate the feasibility and accuracy of our non-rigid mosaicking method.
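The core idea of that step, turning sparse and noisy feature matches into a smooth dense deformation field, can be illustrated with a much simpler scheme than EMDQ itself. The sketch below is hypothetical and is not the authors' algorithm: it interpolates the sparse displacement vectors over the image grid with Gaussian weights. EMDQ additionally performs EM-based outlier rejection and blends local rigid transforms via dual quaternions, both omitted here for brevity.

```python
import numpy as np

def dense_deformation_from_matches(src_pts, dst_pts, grid_shape, sigma=40.0):
    """Illustrative sketch (NOT the actual EMDQ algorithm): build a dense
    2D deformation field from sparse feature matches by Gaussian-weighted
    averaging of the sparse displacement vectors."""
    disp = dst_pts - src_pts                      # (N, 2) sparse displacements
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (h*w, 2)
    # Gaussian weight between every grid pixel and every matched feature point
    d2 = ((grid[:, None, :] - src_pts[None, :, :]) ** 2).sum(axis=2)
    wgt = np.exp(-d2 / (2.0 * sigma ** 2))
    wgt /= wgt.sum(axis=1, keepdims=True) + 1e-12  # normalize per pixel
    field = wgt @ disp                             # (h*w, 2) dense displacements
    return field.reshape(h, w, 2)

# Toy usage: two matches that both translate by (+5, 0); the interpolated
# field should then be (+5, 0) everywhere.
src = np.array([[10.0, 10.0], [50.0, 40.0]])
dst = src + np.array([5.0, 0.0])
field = dense_deformation_from_matches(src, dst, grid_shape=(64, 64))
```

A real system would additionally down-weight or reject mismatched features before interpolation, since a single outlier displacement would otherwise bend the field around it.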
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow. METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients. RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress. CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents. SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.
Optimal resection of breast tumors requires removing cancer with a rim of normal tissue while preserving uninvolved regions of the breast. Surgical and pathological techniques that permit rapid molecular characterization of tissue could facilitate such resections. Mass spectrometry (MS) is increasingly used in the research setting to detect and classify tumors and has the potential to detect cancer at surgical margins. Here, we describe the ex vivo intraoperative clinical application of MS using a liquid micro-junction surface sample probe (LMJ-SSP) to assess breast cancer margins. In a midpoint analysis of a registered clinical trial, surgical specimens from 21 women with treatment-naïve invasive breast cancer were prospectively collected and analyzed at the time of surgery with subsequent histopathological determination. Normal and tumor breast specimens from the lumpectomy resected by the surgeon were smeared onto glass slides for rapid analysis. Lipidomic profiles were acquired from these specimens using LMJ-SSP MS in negative ionization mode within the operating suite, and post-surgery analysis of the data revealed five candidate ions separating tumor from healthy tissue in this limited dataset. More data are required before these ions can be considered candidate markers. Here, we present an application of ambient MS within the operating room to analyze breast cancer tissue and surgical margins. Lessons learned from these promising initial studies are being used to further evaluate the five candidate biomarkers and to further refine and optimize intraoperative MS as a tool for surgical guidance in breast cancer.
Introduction: Neuronavigation greatly improves the surgeon's ability to approach, assess, and operate on brain tumors, but tends to lose its accuracy as the surgery progresses and substantial brain shift and deformation occur. Intraoperative MRI (iMRI) can partially address this problem but is resource intensive and workflow disruptive. Intraoperative ultrasound (iUS) provides real-time information that can be used to update neuronavigation and to monitor resection progress. We describe the intraoperative use of 3D iUS in relation to iMRI, and discuss the challenges and opportunities in its use in neurosurgical practice. Methods: We performed a retrospective evaluation of patients who underwent image-guided brain tumor resection in which both 3D iUS and iMRI were used. The study was conducted between June 2020 and December 2020, when an extension of a commercially available navigation software was introduced in our practice enabling 3D iUS volumes to be reconstructed from tracked 2D iUS images. For each patient, three or more 3D iUS images were acquired during the procedure, and one iMRI was acquired towards the end. The iUS images included an extradural ultrasound sweep acquired before dural incision (iUS-1), a post-dural-opening iUS (iUS-2), and a third iUS acquired immediately before the iMRI acquisition (iUS-3). iUS-1 and preoperative MRI were compared to evaluate the ability of iUS to visualize tumor boundaries and critical anatomic landmarks; iUS-3 and iMRI were compared to evaluate the ability of iUS to predict residual tumor. Results: Twenty-three patients were included in this study. Fifteen patients had tumors located in eloquent or near-eloquent brain regions; the majority of patients (11) had low-grade gliomas; gross total resection was achieved in 12 patients; and postoperative temporary deficits were observed in five patients.
In twenty-two cases, iUS was able to define tumor location and tumor margins and to indicate relevant landmarks for orientation and guidance. In sixteen cases, white matter fiber tracts computed from preoperative diffusion MRI (dMRI) were overlaid on the iUS images. In nineteen patients, the extent of resection (gross total or subtotal) was predicted by iUS and confirmed by iMRI. The remaining four patients, in whom iUS was not able to evaluate the presence or absence of residual tumor, were recurrent cases with a previous surgical cavity that hindered good contact between the US probe and the brain surface. Conclusion: This recent experience at our institution illustrates the practical benefits, challenges, and opportunities of 3D iUS in relation to iMRI.
OBJECTIVE: Accurate biopsy sampling of suspected lesions is critical for the diagnosis and clinical management of prostate cancer. Transperineal in-bore MRI-guided prostate biopsy (tpMRgBx) is a targeted biopsy technique that was shown to be safe, efficient, and accurate. Our goal was to develop an open source software platform to support evaluation, refinement, and translation of this biopsy approach. METHODS: We developed SliceTracker, a 3D Slicer extension to support tpMRgBx. We followed a modular design in the implementation to enable customization of the interface and interchange of image segmentation and registration components to assess their effect on the processing time, precision, and accuracy of the biopsy needle placement. The platform and supporting documentation were developed to enable use of the software by an operator with minimal technical training, to facilitate translation. Retrospective evaluation studied registration accuracy, the effect of the prostate segmentation approach, and re-identification time of biopsy targets. Prospective evaluation focused on the total procedure time and biopsy targeting error (BTE). RESULTS: Evaluation utilized data from 73 retrospective and ten prospective tpMRgBx cases. Mean landmark registration error for retrospective evaluation was 1.88 ± 2.63 mm and was not sensitive to the approach used for prostate gland segmentation. Prospectively, we observed a target re-identification time of 4.60 ± 2.40 min and a BTE of 2.40 ± 0.98 mm. CONCLUSION: SliceTracker is a modular and extensible open source platform for supporting the image processing aspects of the tpMRgBx procedure. It has been successfully utilized to support clinical research procedures at our site.
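For reference, the landmark registration error reported above is conventionally computed as the mean ± SD of the Euclidean distances between corresponding anatomical landmarks after registration. A minimal sketch (function name and data are hypothetical, not taken from SliceTracker):

```python
import numpy as np

def landmark_registration_error(fixed_pts, moved_pts):
    """Mean and standard deviation of Euclidean distances between
    corresponding landmarks after registration (hypothetical helper;
    the metric reported as 1.88 +/- 2.63 mm in the study)."""
    d = np.linalg.norm(fixed_pts - moved_pts, axis=1)
    return d.mean(), d.std()

# Toy usage: one landmark off by a 3-4-5 displacement, one matched exactly,
# so the distances are 5.0 mm and 0.0 mm.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
moved = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]])
mean_err, sd_err = landmark_registration_error(fixed, moved)
```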
Patient-mounted needle guide devices for percutaneous ablation are vulnerable to patient motion. The objective of this study is to develop and evaluate a software system for an MRI-compatible patient-mounted needle guide device that can adaptively compensate for displacement of the device due to patient motion using a novel image-based automatic device-to-image registration technique. We have developed a software system for an MRI-compatible patient-mounted needle guide device for percutaneous ablation. It features fully automated image-based device-to-image registration to track the device position, and a device controller to adjust the needle trajectory to compensate for the displacement of the device. We performed: (a) a phantom study using a clinical MR scanner to evaluate registration performance; (b) simulations using intraoperative time-series MR data acquired in 20 clinical cases of MRI-guided renal cryoablation to assess its impact on motion compensation; and (c) a pilot clinical study in three patients to test its feasibility during the clinical procedure. The fiducial registration error (FRE), target registration error (TRE), and success rate of device-to-image registration were [Formula: see text] mm, [Formula: see text] mm, and 98.3% for the phantom images. The simulation study showed that motion compensation reduced the targeting error for needle placement from 8.2 mm to 5.4 mm (p < 0.0005) in patients under general anesthesia (GA), and from 14.4 mm to 10.0 mm ([Formula: see text]) in patients under monitored anesthesia care (MAC). The pilot study showed that the software registered the device successfully in a clinical setting. Our simulation study demonstrated that the software system could significantly improve targeting accuracy in patients treated under both MAC and GA. Intraprocedural image-based device-to-image registration was feasible.
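The device-to-image registration step can be illustrated with a standard rigid least-squares (Kabsch) fit to corresponding fiducial points, from which the FRE is the mean residual distance after alignment. This is a generic sketch under the assumption of point-based fiducials; it is not the system's actual fully automatic image-based algorithm:

```python
import numpy as np

def register_rigid(device_fids, image_fids):
    """Rigid point-based registration (Kabsch/least-squares sketch):
    find rotation R and translation t mapping device-frame fiducials
    onto image-frame fiducials, and report the fiducial registration
    error (FRE) as the mean residual distance."""
    mu_d = device_fids.mean(axis=0)
    mu_i = image_fids.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (device_fids - mu_d).T @ (image_fids - mu_i)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction to guarantee a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_i - R @ mu_d
    fre = np.linalg.norm((device_fids @ R.T + t) - image_fids, axis=1).mean()
    return R, t, fre

# Toy usage: a device displaced by a pure translation of (1, 2, 3) mm
# should be recovered with R = identity and near-zero FRE.
device_fids = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
image_fids = device_fids + np.array([1.0, 2.0, 3.0])
R, t, fre = register_rigid(device_fids, image_fids)
```

In the adaptive-compensation setting described above, such a registration would be re-run on each new intraoperative image so the controller can update the needle trajectory against the device's current pose.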