
Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning

dc.contributor.author: Liu, Lianli
dc.contributor.author: Shen, Liyue
dc.contributor.author: Johansson, Adam
dc.contributor.author: Balter, James M
dc.contributor.author: Cao, Yue
dc.contributor.author: Vitzthum, Lucas
dc.contributor.author: Xing, Lei
dc.date.accessioned: 2024-05-01T18:29:49Z
dc.date.available: 2025-05-01 14:29:48
dc.date.available: 2024-05-01T18:29:49Z
dc.date.issued: 2024-04
dc.identifier.citation: Liu, Lianli; Shen, Liyue; Johansson, Adam; Balter, James M; Cao, Yue; Vitzthum, Lucas; Xing, Lei (2024). "Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning." Medical Physics 51(4): 2526-2537.
dc.identifier.issn: 0094-2405
dc.identifier.issn: 2473-4209
dc.identifier.uri: https://hdl.handle.net/2027.42/192909
dc.description.abstract:
Background: Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve magnetic resonance (MR)-guided radiation treatment precision. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which are time-consuming and challenging to acquire in clinical settings.
Purpose: To investigate volumetric reconstruction of MRI from sparse samples of two orthogonal slices, aided by sparse priors of two static 3D MRI, through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy.
Methods: A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset, where the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRI with different breathing motion states, prior information of patient breathing motion was embedded into the network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices, and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired using a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRI with inhale and exhale states, respectively, were selected from the first 30 s of the time series for prior embedding and augmentation. The remaining 4.5-min time series was used for volumetric reconstruction evaluation, where we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability of supporting 3D target motion tracking.
Results: Across the seven patients evaluated, the peak signal-to-noise ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structural similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min time period, gross tumor volume (GTV) motion estimated by deforming a reference state MRI to model-reconstructed and ground truth MRI showed good consistency. The 95th-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, which is less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions.
Conclusion: A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI were needed to train the model to learn prior information of patient breathing motion for sparse image reconstruction. The proposed model has the potential to support 3D motion tracking during MR-guided radiotherapy for improved treatment precision, and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.
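As a rough illustration of the coordinate-network idea described in the abstract, the following Python sketch (PyTorch) shows a NeRP-style model that maps 4D coordinates (x, y, z, motion state) to voxel intensities, with a Fourier-feature encoding in the spirit of Tancik et al. The layer sizes, encoding bandwidth, loss, and training schedule are illustrative assumptions, not the authors' published configuration.

# Minimal NeRP-style sketch. Assumptions: PyTorch; the Fourier-feature scale,
# layer widths, and optimizer settings are illustrative, not from the paper.
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Encode 4D coordinates (x, y, z, motion state) as random Fourier features."""
    def __init__(self, in_dim=4, n_features=128, scale=10.0):
        super().__init__()
        # Fixed (untrained) random projection matrix.
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, coords):
        proj = 2.0 * torch.pi * coords @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)


class NeRP(nn.Module):
    """MLP mapping encoded coordinates to a single voxel intensity."""
    def __init__(self, n_features=128, hidden=256, depth=4):
        super().__init__()
        layers = [FourierFeatures(4, n_features),
                  nn.Linear(2 * n_features, hidden), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers.append(nn.Linear(hidden, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):  # coords: (N, 4), normalized to [-1, 1]
        return self.net(coords).squeeze(-1)


def fit(model, coords, intensities, steps=2000, lr=1e-4):
    """Fit the network to (coordinate, intensity) pairs with an L2 loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(coords) - intensities) ** 2)
        loss.backward()
        opt.step()
    return model

# Stage 1 (prior embedding): fit the exhale and inhale 3D MRI volumes, tagging
# their voxels with two motion-state coordinates (e.g., m = -1 and m = +1).
# Stage 2 (sparse reconstruction): starting from the stage-1 weights, fit only
# the voxels sampled on the two orthogonal cine slices at the current motion
# state, then query the network on a dense 3D grid to recover the full volume.

In this two-stage scheme, prior augmentation to intermediate motion states amounts to evaluating the stage-1 network at interpolated values of the motion coordinate before the stage-2 fine-tuning on the two orthogonal cine slices.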
dc.publisher: IEEE
dc.publisher: Wiley Periodicals, Inc.
dc.subject.other: deep learning
dc.subject.other: motion management
dc.subject.other: MR-guided radiotherapy
dc.subject.other: image reconstruction
dc.title: Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning
dc.type: Article
dc.rights.robots: IndexNoFollow
dc.subject.hlbsecondlevel: Medicine (General)
dc.subject.hlbtoplevel: Health Sciences
dc.description.peerreviewed: Peer Reviewed
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/192909/1/mp16845.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/192909/2/mp16845_am.pdf
dc.identifier.doi: 10.1002/mp.16845
dc.identifier.source: Medical Physics
dc.identifier.citedreference: Qureshi AH, Mousavian A, Paxton C, Yip MC, Fox D. NeRP: neural rearrangement planning for unknown objects. arXiv preprint arXiv:2106.01352. 2021.
dc.identifier.citedreference: Harris W, Yin FF, Cai J, Ren L. Volumetric cine magnetic resonance imaging (VC-MRI) using motion modeling, free-form deformation and multi-slice undersampled 2D cine MRI reconstructed with spatio-temporal low-rank decomposition. Quant Imaging Med Surg. 2020;10(2):432-450.
dc.identifier.citedreference: Liu L, Shen L, Johansson A, et al. Real time volumetric MRI for 3D motion tracking via geometry-informed deep learning. Med Phys. 2022;49(9):6110-6119.
dc.identifier.citedreference: Xiao H, Ni R, Zhi S, et al. A dual-supervised deformation estimation model (DDEM) for constructing ultra-quality 4D-MRI based on a commercial low-quality 4D-MRI for liver cancer radiation therapy. Med Phys. 2022;49(5):3159-3170.
dc.identifier.citedreference: Feng L, Axel L, Chandarana H, Block KT, Sodickson DK, Otazo R. XD-GRASP: golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing. Magn Reson Med. 2016;75(2):775-788.
dc.identifier.citedreference: Keijnemans K, Borman PTS, van Lier A, Verhoeff JJC, Raaymakers BW, Fast MF. Simultaneous multi-slice accelerated 4D-MRI for radiotherapy guidance. Phys Med Biol. 2021;66(9).
dc.identifier.citedreference: Eslami SA, Jimenez Rezende D, Besse F, et al. Neural scene representation and rendering. Science. 2018;360(6394):1204-1210.
dc.identifier.citedreference: Sitzmann V, Zollhöfer M, Wetzstein G. Scene representation networks: continuous 3D-structure-aware neural scene representations. Adv Neural Inf Process Syst. 2019;32.
dc.identifier.citedreference: Sitzmann V, Martel J, Bergman A, Lindell D, Wetzstein G. Implicit neural representations with periodic activation functions. Adv Neural Inf Process Syst. 2020;33:7462-7473.
dc.identifier.citedreference: Chen Y, Liu S, Wang X. Learning continuous image representation with local implicit image function. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021.
dc.identifier.citedreference: Shen L, Pauly J, Xing L. NeRP: implicit neural representation learning with prior embedding for sparsely sampled image reconstruction. IEEE Trans Neural Netw Learn Syst. 2022.
dc.identifier.citedreference: Dupont E, Goliński A, Alizadeh M, Teh YW, Doucet A. COIN: compression with implicit neural representations. arXiv preprint arXiv:2103.03123. 2021.
dc.identifier.citedreference: Shen T, Gao J, Yin K, Liu M-Y, Fidler S. Deep marching tetrahedra: a hybrid representation for high-resolution 3D shape synthesis. Adv Neural Inf Process Syst. 2021;34:6087-6101.
dc.identifier.citedreference: Vasudevan V, Shen L, Huang C, et al. Implicit neural representation for radiation therapy dose distribution. Phys Med Biol. 2022;67(12):125014.
dc.identifier.citedreference: Liu L, Shen L, Yang Y, et al. Modeling linear accelerator (Linac) beam data by implicit neural representation learning for commissioning and quality assurance applications. Med Phys. 2023;50(5):3137-3147.
dc.identifier.citedreference: Keiper TD, Tai A, Chen X, et al. Feasibility of real-time motion tracking using cine MRI during MR-guided radiation therapy for abdominal targets. Med Phys. 2020;47(8):3554-3566.
dc.identifier.citedreference: Tancik M, Srinivasan P, Mildenhall B, et al. Fourier features let networks learn high frequency functions in low dimensional domains. Adv Neural Inf Process Syst. 2020;33:7537-7547.
dc.identifier.citedreference: Liu L, Johansson A, Cao Y, Lawrence TS, Balter JM. Volumetric prediction of breathing and slow drifting motion in the abdomen using radial MRI and multi-temporal resolution modeling. Phys Med Biol. 2021;66(17):175028.
dc.identifier.citedreference: Ulyanov D, Vedaldi A, Lempitsky V. Deep image prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2018.
dc.identifier.citedreference: Johansson A, Balter JM, Cao Y. Abdominal DCE-MRI reconstruction with deformable motion correction for liver perfusion quantification. Med Phys. 2018;45(10):4529-4540.
dc.identifier.citedreference: Liu L, Johansson A, Cao Y, Kashani R, Lawrence TS, Balter JM. Modeling intra-fractional abdominal configuration changes using breathing motion-corrected radial MRI. Phys Med Biol. 2021;66(8):085002.
dc.identifier.citedreference: Seregni M, Paganelli C, Lee D, et al. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI. Phys Med Biol. 2016;61(2):872.
dc.identifier.citedreference: Stemkens B, Tijssen RH, De Senneville BD, Lagendijk JJ, Van Den Berg CA. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy. Phys Med Biol. 2016;61(14):5335.
dc.identifier.citedreference: Bruijnen T, Stemkens B, Lagendijk JJ, Van Den Berg CA, Tijssen RH. Multiresolution radial MRI to reduce IDLE time in pre-beam imaging on an MR-Linac (MR-RIDDLE). Phys Med Biol. 2019;64(5).
dc.identifier.citedreference: Bjerre T, Crijns S, af Rosenschöld PM, et al. Three-dimensional MRI-linac intra-fraction guidance using multiple orthogonal cine-MRI planes. Phys Med Biol. 2013;58(14):4943.
dc.identifier.citedreference: Mostafaei F, Tai A, Omari E, et al. Variations of MRI-assessed peristaltic motions during radiation therapy. PLoS One. 2018;13(10):e0205917.
dc.identifier.citedreference: Johansson A, Balter JM, Cao Y. Gastrointestinal 4D MRI with respiratory motion correction. Med Phys. 2021;48(5):2521-2527.
dc.identifier.citedreference: Li W, Purdie TG, Taremi M, et al. Effect of immobilization and performance status on intrafraction motion for stereotactic lung radiotherapy: analysis of 133 patients. Int J Radiat Oncol Biol Phys. 2011;81(5):1568-1575.
dc.identifier.citedreference: Wysocka B, Kassam Z, Lockwood G, et al. Interfraction and respiratory organ motion during conformal radiotherapy in gastric cancer. Int J Radiat Oncol Biol Phys. 2010;77(1):53-59.
dc.identifier.citedreference: Balter JM, Ten Haken RK, Lawrence TS, Lam KL, Robertson JM. Uncertainties in CT-based radiation therapy treatment planning associated with patient breathing. Int J Radiat Oncol Biol Phys. 1996;36(1):167-174.
dc.identifier.citedreference: Aruga T, Itami J, Aruga M, et al. Target volume definition for upper abdominal irradiation using CT scans obtained during inhale and exhale phases. Int J Radiat Oncol Biol Phys. 2000;48(2):465-469.
dc.identifier.citedreference: Jayachandran P, Minn AY, Van Dam J, Norton JA, Koong AC, Chang DT. Interfractional uncertainty in the treatment of pancreatic cancer with radiation. Int J Radiat Oncol Biol Phys. 2010;76(2):603-607.
dc.identifier.citedreference: Wojcieszynski AP, Rosenberg SA, Brower JV, et al. Gadoxetate for direct tumor therapy and tracking with real-time MRI-guided stereotactic body radiation therapy of the liver. Radiother Oncol. 2016;118(2):416-418.
dc.identifier.citedreference: de Muinck Keizer D, Pathmanathan A, Andreychenko A, et al. Fiducial marker based intra-fraction motion assessment on cine-MR for MR-linac treatment of prostate cancer. Phys Med Biol. 2019;64(7):07NT02.
dc.identifier.citedreference: Ginn JS, Ruan D, Low DA, Lamb JM. An image regression motion prediction technique for MRI-guided radiotherapy evaluated in single-plane cine imaging. Med Phys. 2020;47(2):404-413.
dc.identifier.citedreference: Huttinga NR, Bruijnen T, Van Den Berg CA, Sbrizzi A. Real-time non-rigid 3D respiratory motion estimation for MR-guided radiotherapy using MR-MOTUS. IEEE Trans Med Imaging. 2021;41(2):332-346.
dc.identifier.citedreference: Huttinga NR, Bruijnen T, van den Berg CA, Sbrizzi A. Nonrigid 3D motion estimation at high temporal resolution from prospectively undersampled k-space data using low-rank MR-MOTUS. Magn Reson Med. 2021;85(4):2309-2326.
dc.identifier.citedreference: Romaguera LV, Mezheritsky T, Mansour R, Carrier J-F, Kadoury S. Probabilistic 4D predictive model from in-room surrogates using conditional generative networks for image-guided radiotherapy. Med Image Anal. 2021;74:102250.
dc.identifier.citedreference: Shao H-C, Li T, Dohopolski MJ, Wang J, et al. Real-time MRI motion estimation through an unsupervised k-space-driven deformable registration network (KS-RegNet). Phys Med Biol. 2022;67(13):135012.
dc.identifier.citedreference: Feng L, Tyagi N, Otazo R. MRSIGMA: magnetic resonance signature matching for real-time volumetric imaging. Magn Reson Med. 2020;84(3):1280-1292.
dc.identifier.citedreference: Mickevicius NJ, Paulson ES. Simultaneous acquisition of orthogonal plane cine imaging and isotropic 4D-MRI using super-resolution. Radiother Oncol. 2019;136:121-129.
dc.identifier.citedreference: Han P, Chen J, Xiao J, et al. Single projection driven real-time multi-contrast (SPIDERM) MR imaging using pre-learned spatial subspace and linear transformation. Phys Med Biol. 2022;67(13):135008.
dc.working.doi: NO
dc.owningcollname: Interdisciplinary and Peer-Reviewed

