Segmentation of brain tissue types from diffusion MRI (dMRI) is an important task, required for quantification of brain microstructure and for improving tractography. Current dMRI segmentation is mostly based on anatomical MRI (e.g., T1- and T2-weighted) segmentation that is registered to the dMRI space. However, such inter-modality registration is challenging because dMRI has greater image distortion and lower image resolution than anatomical MRI. In this study, we present a deep learning method for diffusion MRI segmentation, which we refer to as DDSeg. Our proposed method learns tissue segmentation from high-quality imaging data from the Human Connectome Project (HCP), where registration of anatomical MRI to dMRI is more precise. The method can then predict a tissue segmentation directly from new dMRI data, including data collected with different acquisition protocols, without requiring anatomical data or inter-modality registration. We train a convolutional neural network (CNN) to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy in tissue boundary regions. To further improve accuracy, our method adds diffusion kurtosis imaging (DKI) parameters, which characterize non-Gaussian water molecule diffusion, to the conventional diffusion tensor imaging parameters. The DKI parameters are calculated with the recently proposed mean-kurtosis-curve method, which corrects implausible DKI parameter values and provides additional features that discriminate between tissue types. We demonstrate high tissue segmentation accuracy on HCP data, and also when applying the HCP-trained model to dMRI data from other acquisitions with lower resolution and fewer gradient directions.
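The boundary-focused weighting behind an augmented target loss can be illustrated with a minimal sketch. The actual DDSeg loss, network, and weighting scheme are not specified in this abstract, so the 4-neighbor boundary detection and the weight factor below are illustrative assumptions only:

```python
import numpy as np

def boundary_weights(labels, w_boundary=4.0):
    """Weight map that up-weights voxels whose 4-neighbors differ in label.

    `labels` is a 2D integer segmentation map. The weighting scheme and
    the factor `w_boundary` are illustrative assumptions, not the
    published augmented target loss.
    """
    w = np.ones(labels.shape, dtype=float)
    boundary = np.zeros(labels.shape, dtype=bool)
    # A voxel lies on a boundary if any 4-neighbor has a different label.
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    w[boundary] = w_boundary
    return w

def weighted_cross_entropy(probs, labels, weights):
    """Mean voxel-wise cross-entropy scaled by the boundary weight map.

    `probs` has shape (H, W, n_classes) with rows summing to 1.
    """
    eps = 1e-12
    h, wd = labels.shape
    p_true = probs[np.arange(h)[:, None], np.arange(wd)[None, :], labels]
    return float(np.mean(-weights * np.log(p_true + eps)))
```

During training, segmentation errors near a tissue interface then contribute more to the loss than errors deep inside a homogeneous region, which is the intent described for the augmented target loss.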
OBJECTIVE: Accurate biopsy sampling of suspected lesions is critical for the diagnosis and clinical management of prostate cancer. Transperineal in-bore MRI-guided prostate biopsy (tpMRgBx) is a targeted biopsy technique that has been shown to be safe, efficient, and accurate. Our goal was to develop an open-source software platform to support evaluation, refinement, and translation of this biopsy approach. METHODS: We developed SliceTracker, a 3D Slicer extension to support tpMRgBx. We followed a modular design in the implementation to enable customization of the interface and interchange of the image segmentation and registration components, in order to assess their effect on the processing time, precision, and accuracy of biopsy needle placement. The platform and supporting documentation were developed to enable use of the software by an operator with minimal technical training, to facilitate translation. Retrospective evaluation studied registration accuracy, the effect of the prostate segmentation approach, and the re-identification time of biopsy targets. Prospective evaluation focused on the total procedure time and the biopsy targeting error (BTE). RESULTS: Evaluation utilized data from 73 retrospective and 10 prospective tpMRgBx cases. The mean landmark registration error for the retrospective evaluation was 1.88 ± 2.63 mm and was not sensitive to the approach used for prostate gland segmentation. Prospectively, we observed a target re-identification time of 4.60 ± 2.40 min and a BTE of 2.40 ± 0.98 mm. CONCLUSION: SliceTracker is a modular and extensible open-source platform supporting the image processing aspects of the tpMRgBx procedure. It has been successfully utilized to support clinical research procedures at our site.
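The mean landmark registration error reported above is an aggregate of point-to-point distances between corresponding landmarks. A generic sketch of that metric (not SliceTracker's actual implementation) might look like:

```python
import numpy as np

def landmark_error(reference, registered):
    """Mean and SD of Euclidean distances between corresponding landmarks.

    `reference` and `registered` are (N, 3) point arrays in mm. This is
    a generic sketch of a landmark registration error metric, not the
    SliceTracker code.
    """
    d = np.linalg.norm(reference - registered, axis=1)
    return float(d.mean()), float(d.std())
```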
Patient-mounted needle guide devices for percutaneous ablation are vulnerable to patient motion. The objective of this study was to develop and evaluate a software system for an MRI-compatible patient-mounted needle guide device that can adaptively compensate for displacement of the device due to patient motion, using a novel image-based automatic device-to-image registration technique. We have developed a software system for an MRI-compatible patient-mounted needle guide device for percutaneous ablation. It features fully automated image-based device-to-image registration to track the device position, and a device controller that adjusts the needle trajectory to compensate for displacement of the device. We performed: (a) a phantom study using a clinical MR scanner to evaluate registration performance; (b) simulations using intraoperative time-series MR data acquired in 20 clinical cases of MRI-guided renal cryoablation to assess the impact on motion compensation; and (c) a pilot clinical study in three patients to test feasibility during the clinical procedure. The fiducial registration error (FRE), target registration error (TRE), and success rate of device-to-image registration were [Formula: see text] mm, [Formula: see text] mm, and 98.3% for the phantom images. The simulation study showed that motion compensation reduced the targeting error for needle placement from 8.2 mm to 5.4 mm (p < 0.0005) in patients under general anesthesia (GA), and from 14.4 mm to 10.0 mm ([Formula: see text]) in patients under monitored anesthesia care (MAC). The pilot study showed that the software registered the device successfully in a clinical setting. Our simulation study demonstrated that the software system could significantly improve targeting accuracy in patients treated under both MAC and GA. Intraprocedural image-based device-to-image registration was feasible.
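The rigid alignment underlying device-to-image registration, and the FRE metric reported above, can be sketched generically with the Kabsch algorithm. The actual system registers the device from image features automatically; this sketch only illustrates the point-based alignment and its residual error:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (Kabsch) mapping `moving` to `fixed`.

    Both inputs are (N, 3) arrays of corresponding points. This is a
    generic point-based sketch, not the system's image-based method.
    """
    mu_f, mu_m = fixed.mean(0), moving.mean(0)
    H = (moving - mu_m).T @ (fixed - mu_f)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_f - R @ mu_m
    return R, t

def fre(fixed, moving, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    resid = fixed - (moving @ R.T + t)
    return float(np.sqrt((resid ** 2).sum(1).mean()))
```

TRE is computed the same way, but at target points that were not used to drive the registration.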
PURPOSE: To develop and evaluate an approach to estimate the respiratory-induced motion of lesions in the chest and abdomen. MATERIALS AND METHODS: The proposed approach uses the motion of an initial reference needle inserted into a moving organ to estimate the lesion (target) displacement caused by respiration. The needle's position is measured using an inertial measurement unit (IMU) sensor externally attached to the hub of the initially placed reference needle. Data obtained from the IMU sensor and the target motion are used to train a learning-based approach to estimate the position of the moving target. An experimental platform was designed to mimic respiratory motion of the liver. Liver motion profiles of human subjects provided inputs to the experimental platform. Variables including the insertion angle, target depth, target motion velocity, and target proximity to the reference needle were evaluated by measuring the error of the estimated target position and the processing time. RESULTS: The mean error of the estimated target position ranged between 0.86 and 1.29 mm. The maximum training and testing processing time was 5 ms, which is suitable for real-time target motion estimation using the needle position sensor. CONCLUSION: The external motion of an initially placed reference needle inserted into a moving organ can be used as a measurable and accessible surrogate signal to estimate in real time the position of a target that moves with respiration; this technique could then be used to guide the placement of subsequently inserted needles directly into the target.
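As a hedged illustration of the learning-based estimator (the abstract does not describe the actual model), a plain linear least-squares map from IMU readings to target displacement can stand in; such a fit trains and evaluates in well under the reported 5 ms budget:

```python
import numpy as np

def fit_surrogate(sensor, target):
    """Fit a linear map from IMU sensor readings to target displacement.

    `sensor` is (N, d) and `target` is (N, k); a column of ones is
    appended for the intercept. A plain least-squares fit is an
    illustrative stand-in for the paper's learning-based estimator.
    """
    X = np.hstack([sensor, np.ones((sensor.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, target, rcond=None)
    return W

def predict_target(W, sensor):
    """Estimate target displacement for new sensor readings."""
    X = np.hstack([sensor, np.ones((sensor.shape[0], 1))])
    return X @ W
```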
Brain shift during tumor resection compromises the spatial validity of registered preoperative imaging data that are critical to image-guided procedures. One current clinical solution to mitigate these effects is to reimage using intraoperative magnetic resonance (iMR) imaging. Although iMR has demonstrated benefits in accounting for preoperative-to-intraoperative tissue changes, its cost and encumbrance have limited its widespread adoption. While iMR will likely continue to be employed for challenging cases, a cost-effective model-based brain shift compensation strategy is desirable as a complementary technology for standard resections. We performed a retrospective study of [Formula: see text] tumor resection cases, comparing iMR measurements with the intraoperative brain shift compensation predicted by our model-based strategy, driven by sparse intraoperative cortical surface data. For quantitative assessment, homologous subsurface targets near the tumors were selected on preoperative MR and iMR images. Once rigidly registered, intraoperative shift measurements were determined and subsequently compared to their model-predicted counterparts as estimated by the brain shift correction framework. When considering moderate and high shift ([Formula: see text], [Formula: see text] measurements per case), the alignment error due to brain shift was reduced from [Formula: see text] to [Formula: see text], representing a [Formula: see text] correction. These first steps toward validation are promising for model-based strategies.
OBJECTIVE: The purpose of this article is to report our intermediate to long-term outcomes with image-guided percutaneous hepatic tumor cryoablation and to evaluate its technical success, technique efficacy, local tumor progression, and adverse event rate. MATERIALS AND METHODS: Between 1998 and 2014, 299 hepatic tumors (243 metastases and 56 primary tumors; mean diameter, 2.5 cm; median diameter, 2.2 cm; range, 0.3-7.8 cm) in 186 patients (95 women; mean age, 60.9 years; range, 29-88 years) underwent cryoablation during 236 procedures using CT (n = 126), MRI (n = 100), or PET/CT (n = 10) guidance. Technical success, technique efficacy at 3 months, local tumor progression (mean follow-up, 2.5 years; range, 2 months to 14.6 years), and adverse event rates were calculated. RESULTS: The technical success rate was 94.6% (279/295). The technique efficacy rate was 89.5% (231/258) and was greater for tumors smaller than 4 cm (93.4%; 213/228) than for larger tumors (60.0%; 18/30) (p < 0.0001). Local tumor progression occurred in 23.3% (60/258) of tumors and was significantly more common after the treatment of tumors 4 cm or larger (63.3%; 19/30) compared with smaller tumors (18.0%; 41/228) (p < 0.0001). Adverse events followed 33.8% (80/236) of procedures and were grade 3-5 in 10.6% (25/236) of cases. Grade 3 or greater adverse events more commonly followed the treatment of larger tumors (19.5%; 8/41) compared with smaller tumors (8.7%; 17/195) (p = 0.04). CONCLUSION: Image-guided percutaneous cryoablation of hepatic tumors is efficacious; however, tumors smaller than 4 cm are more likely to be treated successfully and without an adverse event.
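The size dependence reported above (technique efficacy of 213/228 for tumors smaller than 4 cm versus 18/30 for larger tumors, p < 0.0001) can be checked with a standard two-proportion z-test. The article's actual statistical method is not stated in the abstract, so the choice of test here is an assumption:

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """Two-sided two-proportion z-test with pooled standard error.

    Returns (z, p_value). A generic illustration; the article's exact
    statistical test is not stated in the abstract.
    """
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value
```

Plugging in the efficacy counts above gives a z-statistic well above 5, consistent with the reported p < 0.0001.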