
Three-dimensional self super-resolution for pelvic floor MRI using a convolutional neural network with multi-orientation data training

dc.contributor.authorFeng, Fei
dc.contributor.authorAshton-Miller, James A.
dc.contributor.authorDeLancey, John O.L.
dc.contributor.authorLuo, Jiajia
dc.date.accessioned2022-03-07T03:12:39Z
dc.date.available2023-03-06 22:12:38en
dc.date.available2022-03-07T03:12:39Z
dc.date.issued2022-02
dc.identifier.citationFeng, Fei; Ashton-Miller, James A.; DeLancey, John O.L.; Luo, Jiajia (2022). "Three-dimensional self super-resolution for pelvic floor MRI using a convolutional neural network with multi-orientation data training." Medical Physics 49(2): 1083-1096.
dc.identifier.issn0094-2405
dc.identifier.issn2473-4209
dc.identifier.urihttps://hdl.handle.net/2027.42/171860
dc.description.abstractPurpose: High-resolution pelvic magnetic resonance (MR) imaging is important for the high-resolution and high-precision evaluation of pelvic floor disorders (PFDs), but the data acquisition time is long. Because high-resolution three-dimensional (3D) MR data of the pelvic floor are difficult to obtain, MR images are usually acquired in three orthogonal planes: axial, sagittal, and coronal. The in-plane resolution of the MR data in each plane is high, but the through-plane resolution is low. Thus, we aimed to achieve 3D super-resolution using a convolutional neural network (CNN) approach that captures the intrinsic similarity of low-resolution 3D MR data from three orientations. Methods: We used a two-dimensional (2D) super-resolution CNN model to solve the 3D super-resolution problem. The residual-in-residual dense block network (RRDBNet) was used as our CNN backbone. For a given set of low through-plane resolution pelvic floor MR data in the axial, coronal, or sagittal scan plane, we applied the RRDBNet sequentially to perform super-resolution on its two projected low-resolution views. Three datasets were used in the experiments, including two private datasets and one public dataset. In the first dataset (dataset 1), MR data acquired from 34 subjects in three planes were used to train our super-resolution model, and low-resolution MR data from nine subjects were used for testing. The second dataset (dataset 2) included a sequence of relatively high-resolution MR data acquired in the coronal plane. The public MR dataset (dataset 3) was used to demonstrate the generalization ability of our model. To show the effectiveness of RRDBNet, we used datasets 1 and 2 to compare RRDBNet with interpolation and enhanced deep super-resolution (EDSR) methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. Because 3D MR data from one view have two projected low-resolution views, different super-resolution orders were compared in terms of PSNR and SSIM. Finally, to demonstrate the impact of super-resolution on the image analysis task, we used datasets 2 and 3 to compare the performance of our method with interpolation on 3D geometric model reconstruction of the urinary bladder. Results: A CNN-based method was used to learn the intrinsic similarity among MR acquisitions from different scan planes. Through-plane super-resolution for pelvic MR images was achieved without using high-resolution 3D data, which is useful for the analysis of PFDs.
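The multi-orientation idea described in the abstract can be sketched as follows. A stack with low through-plane (z) resolution exposes two projected low-resolution views: z-x slices and z-y slices. A trained 2D model upscales each view along z, and the two resulting volumes are fused. Here `sr2d` is a hypothetical stand-in for the trained 2D network (RRDBNet in the paper), and averaging the two view-wise results is a simplification of the sequential application the authors describe; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def sr2d(img, factor):
    # Stand-in for a learned 2D super-resolution model (e.g., RRDBNet).
    # Here: nearest-neighbor repetition along the first (through-plane) axis.
    return np.repeat(img, factor, axis=0)

def through_plane_sr(vol, factor):
    """Upscale the z axis of a (z, y, x) volume using its two
    projected low-resolution views, then average the results."""
    z, y, x = vol.shape
    # View 1: iterate over y; each slice is a (z, x) image.
    v1 = np.stack([sr2d(vol[:, j, :], factor) for j in range(y)], axis=1)
    # View 2: iterate over x; each slice is a (z, y) image.
    v2 = np.stack([sr2d(vol[:, :, i], factor) for i in range(x)], axis=2)
    # Fuse the two upscaled volumes (simple average in this sketch).
    return 0.5 * (v1 + v2)
```

With a real 2D network in place of `sr2d`, the fusion step is where design choices diverge: the paper compares different orders of sequential application rather than averaging.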
dc.publisherWiley Periodicals, Inc.
dc.publisherSpringer International Publishing
dc.subject.otherdeep learning
dc.subject.otherMRI
dc.subject.other3D super-resolution
dc.titleThree-dimensional self super-resolution for pelvic floor MRI using a convolutional neural network with multi-orientation data training
dc.typeArticle
dc.rights.robotsIndexNoFollow
dc.subject.hlbsecondlevelMedicine (General)
dc.subject.hlbtoplevelHealth Sciences
dc.description.peerreviewedPeer Reviewed
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/171860/1/mp15438.pdf
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/171860/2/mp15438_am.pdf
dc.identifier.doi10.1002/mp.15438
dc.identifier.sourceMedical Physics
dc.identifier.citedreferenceNational Cancer Institute Clinical Proteomic Tumor Analysis Consortium (CPTAC). Radiology Data from the Clinical Proteomic Tumor Analysis Consortium Uterine Corpus Endometrial Carcinoma [CPTAC-UCEC] Collection [Data set]. The Cancer Imaging Archive; 2018.
dc.identifier.citedreferenceLim B, Son S, Kim H, Nah S, Lee KM. Enhanced deep residual networks for single image super-resolution. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2017: 1132-1140.
dc.identifier.citedreferencePeng C, Lin WA, Liao H, Chellappa R, Zhou SK. SAINT: spatially aware interpolation network for medical slice synthesis. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020: 7747-7756.
dc.identifier.citedreferenceJog A, Carass A, Prince JL. Self super-resolution for magnetic resonance images. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W, eds. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. Springer International Publishing; 2016: 553-560.
dc.identifier.citedreferenceZhao C, Carass A, Dewey BE, et al. A deep learning based anti-aliasing self super-resolution algorithm for MRI. In: Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C, Fichtinger G, eds. Medical Image Computing and Computer Assisted Intervention - MICCAI 2018. Springer International Publishing; 2018: 100-108.
dc.identifier.citedreferenceZhao C, Shao M, Carass A, et al. Applications of a deep learning method for anti-aliasing and super-resolution in MRI. Magn Reson Imaging. 2019; 64: 132-141.
dc.identifier.citedreferenceWang X, Yu K, Wu S, et al. ESRGAN: enhanced super-resolution generative adversarial networks. In: Leal-Taixé L, Roth S, eds. Computer Vision - ECCV 2018 Workshops. Springer International Publishing; 2018: 100-108.
dc.identifier.citedreferenceSzegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: 31st AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence, Palo Alto; 2017.
dc.identifier.citedreferenceZhou W, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004; 13: 600-612.
dc.identifier.citedreferenceWang Z, Bovik AC. Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Process Mag. 2009; 26: 98-117.
dc.identifier.citedreferenceClark K, Vendt B, Smith K, et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging. 2013; 26: 1045-1057.
dc.identifier.citedreferenceChaudhari AS, Fang Z, Kogan F, et al. Super-resolution musculoskeletal MRI using deep learning. Magn Reson Med. 2018; 80: 2139-2154.
dc.identifier.citedreferenceDelbracio M, Sapiro G. Burst deblurring: removing camera shake through Fourier burst accumulation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015: 2385-2393.
dc.identifier.citedreferenceChen Y, Xie Y, Zhou Z, Shi F, Christodoulou AG, Li D. Brain MRI super resolution using 3D deep densely connected neural networks. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018: 739-742.
dc.identifier.citedreferencePham C, Ducournau A, Fablet R, Rousseau F. Brain MRI super-resolution using deep 3D convolutional networks. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017); 2017: 197-200.
dc.identifier.citedreferenceNeubert A, Bourgeat P, Wood J, et al. Simultaneous super-resolution and contrast synthesis of routine clinical magnetic resonance images of the knee for improving automatic segmentation of joint cartilage: data from the Osteoarthritis Initiative. Med Phys. 2020; 47: 4939-4948.
dc.identifier.citedreferenceSood RR, Shao W, Kunder C, et al. 3D registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med Image Anal. 2021; 69: 101957.
dc.identifier.citedreferenceDu J, He Z, Wang L, et al. Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network. Neurocomputing. 2020; 392: 209-220.
dc.identifier.citedreferenceGeorgescu M-I, Ionescu RT, Verga N. Convolutional neural networks with intermediate loss for 3D super-resolution of CT and MRI scans. IEEE Access. 2020; 8: 49112-49124.
dc.identifier.citedreferenceZhao C, Carass A, Dewey BE, Prince JL. Self super-resolution for magnetic resonance images using deep networks. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018: 365-368.
dc.identifier.citedreferenceLedig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017: 105-114.
dc.identifier.citedreferenceChen Y, Shi F, Christodoulou AG, Xie Y, Zhou Z, Li D. Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network. In: Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C, Fichtinger G, eds. Medical Image Computing and Computer Assisted Intervention - MICCAI 2018. Springer International Publishing; 2018: 91-99.
dc.identifier.citedreferenceYou C, Li G, Zhang Y, et al. CT super-resolution GAN constrained by the identical, residual, cycle learning ensemble (GAN-CIRCLE). IEEE Trans Med Imaging. 2020; 39: 188-203.
dc.identifier.citedreferenceHoyte L, Ye W, Brubaker L, et al. Segmentations of MRI images of the female pelvic floor: a study of inter- and intra-reader reliability. J Magn Reson Imaging. 2011; 33: 684-691.
dc.identifier.citedreferenceAkhondi-Asl A, Hoyte L, Lockhart ME, Warfield SK. A logarithmic opinion pool based STAPLE algorithm for the fusion of segmentations with associated reliability weights. IEEE Trans Med Imaging. 2014; 33: 1997-2009.
dc.identifier.citedreferenceFeng F, Ashton-Miller JA, DeLancey JOL, Luo J. Convolutional neural network-based pelvic floor structure segmentation using magnetic resonance imaging in pelvic organ prolapse. Med Phys. 2020; 47: 4281-4293.
dc.identifier.citedreferenceLarson KA, Luo JJ, Guire KE, Chen LY, Ashton-Miller JA, DeLancey JOL. 3D analysis of cystoceles using magnetic resonance imaging assessing midline, paravaginal, and apical defects. Int Urogynecol J. 2012; 23: 285-293.
dc.identifier.citedreferenceChen L, Ashton-Miller JA, DeLancey JOL. A 3D finite element model of anterior vaginal wall support to evaluate mechanisms underlying cystocele formation. J Biomech. 2009; 42: 1371-1377.
dc.identifier.citedreferenceLuo J, Chen L, Fenner DE, Ashton-Miller JA, DeLancey JOL. A multi-compartment 3-D finite element model of rectocele and its interaction with cystocele. J Biomech. 2015; 48: 1580-1586.
dc.identifier.citedreferenceLuo J, Smith TM, Ashton-Miller JA, DeLancey JOL. In vivo properties of uterine suspensory tissue in pelvic organ prolapse. J Biomech Eng. 2014; 136: 021016-1-021016-6.
dc.identifier.citedreferenceTimofte R, De Smet V, Van Gool L. Anchored neighborhood regression for fast example-based super-resolution. In: 2013 IEEE International Conference on Computer Vision; 2013: 1920-1927.
dc.identifier.citedreferenceSchulter S, Leistner C, Bischof H. Fast and accurate image upscaling with super-resolution forests. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015: 3791-3799.
dc.identifier.citedreferenceDong C, Loy CC, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell. 2016; 38: 295-307.
dc.working.doiNOen
dc.owningcollnameInterdisciplinary and Peer-Reviewed


