Multi-scale cascaded networks for synthesis of mammogram to decrease intensity distortion and increase model-based perceptual similarity

dc.contributor.authorJiang, Gongfa
dc.contributor.authorHe, Zilong
dc.contributor.authorZhou, Yuanpin
dc.contributor.authorWei, Jun
dc.contributor.authorXu, Yuesheng
dc.contributor.authorZeng, Hui
dc.contributor.authorWu, Jiefang
dc.contributor.authorQin, Genggeng
dc.contributor.authorChen, Weiguo
dc.contributor.authorLu, Yao
dc.date.accessioned2023-03-03T21:10:38Z
dc.date.available2023-03-03T21:10:38Z
dc.date.issued2023-02
dc.identifier.citationJiang, Gongfa; He, Zilong; Zhou, Yuanpin; Wei, Jun; Xu, Yuesheng; Zeng, Hui; Wu, Jiefang; Qin, Genggeng; Chen, Weiguo; Lu, Yao (2023). "Multi-scale cascaded networks for synthesis of mammogram to decrease intensity distortion and increase model-based perceptual similarity." Medical Physics 50(2): 837-853.
dc.identifier.issn0094-2405
dc.identifier.issn2473-4209
dc.identifier.urihttps://hdl.handle.net/2027.42/175933
dc.description.abstractPurpose: Synthetic digital mammogram (SDM) is a 2D image generated from digital breast tomosynthesis (DBT) and used as a substitute for a full-field digital mammogram (FFDM) to reduce the radiation dose for breast cancer screening. Previous deep learning-based methods used FFDM images as the ground truth and trained a single neural network to directly generate SDM images with appearances (e.g., intensity distribution, textures) similar to the FFDM images. However, FFDM images have a different texture pattern from DBT. This difference may make the training of the neural network unstable and result in high intensity distortion, which makes it hard to decrease intensity distortion and increase perceptual similarity (e.g., generate similar textures) at the same time. Clinically, radiologists want a 2D synthesized image that looks like an FFDM image and preserves local structures, such as masses and microcalcifications (MCs), in DBT, because radiologists have long been trained to read FFDM images and local structures are important for diagnosis. In this study, we proposed to use a deep convolutional neural network to learn the transformation that generates SDM from DBT.
Method: To decrease intensity distortion and increase perceptual similarity, a multi-scale cascaded network (MSCN) is proposed to generate low-frequency structures (e.g., intensity distribution) and high-frequency structures (e.g., textures) separately. The MSCN consists of two cascaded sub-networks: the first sub-network predicts the low-frequency part of the FFDM image; the second sub-network generates a full SDM image with textures similar to the FFDM image, based on the prediction of the first sub-network. A mean-squared error (MSE) objective function is used to train the first sub-network, termed the low-frequency network, to generate a low-frequency SDM image. A gradient-guided generative adversarial network objective function is used to train the second sub-network, termed the high-frequency network, to generate a full SDM image with textures similar to the FFDM image.
Results: 1646 cases with FFDM and DBT were retrospectively collected from the Hologic Selenia system for the training and validation datasets, and 145 cases with masses or MC clusters were independently collected from the Hologic Selenia system for the testing dataset. For comparison, the baseline network has the same architecture as the high-frequency network and directly generates a full SDM image. Compared to the baseline method, the proposed MSCN improves the peak signal-to-noise ratio from 25.3 to 27.9 dB, improves the structural similarity from 0.703 to 0.724, and significantly increases the perceptual similarity.
Conclusions: The proposed method can stabilize the training and generate SDM images with lower intensity distortion and higher perceptual similarity.
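The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two sub-networks are represented as plain callables, the DBT-to-2D projection is a simple mean over slices, and `low_pass` is a box blur standing in for whatever low-/high-frequency decomposition the paper actually uses; all function names here are hypothetical.

```python
import numpy as np

def low_pass(img, k=5):
    """Box blur as a stand-in for the low-frequency decomposition
    (the exact filter used in the paper is not specified here)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def cascaded_synthesis(dbt_slices, low_freq_net, high_freq_net):
    """Two-stage MSCN-style synthesis.

    Stage 1 (MSE-trained sub-network) predicts the low-frequency SDM;
    stage 2 (GAN-trained sub-network) refines it into a full SDM with
    high-frequency texture, conditioned on the stage-1 prediction.
    """
    projection = dbt_slices.mean(axis=0)   # crude 2D projection of the DBT stack
    low = low_freq_net(projection)         # stage 1: low-frequency network
    full = high_freq_net(projection, low)  # stage 2: high-frequency network
    return low, full
```

A toy run wires the stages together with identity-like placeholders: using `low_pass` as the stage-1 "network" and a stage-2 callable that adds back the high-frequency residual reconstructs the projection exactly, which illustrates how the cascade splits and recombines the frequency bands.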
dc.publisherWiley Periodicals, Inc.
dc.subject.otherdeep learning
dc.subject.othersynthetic mammogram
dc.subject.otherbreast cancer
dc.subject.otherdigital breast tomosynthesis (DBT)
dc.subject.othergenerative adversarial networks (GAN)
dc.titleMulti-scale cascaded networks for synthesis of mammogram to decrease intensity distortion and increase model-based perceptual similarity
dc.typeArticle
dc.rights.robotsIndexNoFollow
dc.subject.hlbsecondlevelMedicine (General)
dc.subject.hlbtoplevelHealth Sciences
dc.description.peerreviewedPeer Reviewed
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/175933/1/mp16007.pdf
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/175933/2/mp16007_am.pdf
dc.identifier.doi10.1002/mp.16007
dc.identifier.sourceMedical Physics
dc.identifier.citedreferenceGilbert FJ, Tucker L, Gillan MG, et al. Accuracy of digital breast tomosynthesis for depicting breast cancer subgroups in a UK retrospective reading study (TOMMY Trial). Radiology. 2015; 277 ( 3 ): 697 - 706.
dc.identifier.citedreferenceWang X, Yu K, Wu S, et al. ESRGAN: enhanced super-resolution generative adversarial networks. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Vol 11133. Springer; 2019.
dc.identifier.citedreferenceJiang G, Lu Y, Wei J, Xu Y. Synthesize mammogram from digital breast tomosynthesis with gradient guided cGANs. In: Shen D, Liu T, Peters TM, et al., eds. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science. Springer International Publishing; 2019: 801 - 809. https://doi.org/10.1007/978-3-030-32226-7_89
dc.identifier.citedreferenceJiang G, Wei J, Xu Y, et al. Synthesis of mammogram from digital breast tomosynthesis using deep convolutional neural network with gradient guided cGANs. IEEE Trans Med Imaging. 2021; 40 ( 8 ): 1 - 1. https://doi.org/10.1109/TMI.2021.3071544
dc.identifier.citedreferenceBlau Y, Michaeli T. The Perception-Distortion Tradeoff. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2018.
dc.identifier.citedreferenceJohnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution. In: Leibe B, Matas J, Sebe N, Welling M, eds. Computer Vision – ECCV 2016. Springer International Publishing; 2016: 694 - 711.
dc.identifier.citedreferenceYang Q, Yan P, Zhang Y, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans Med Imaging. 2018; 37 ( 6 ): 1348 - 1357. https://doi.org/10.1109/TMI.2018.2827462
dc.identifier.citedreferenceMirza M, Osindero S. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
dc.identifier.citedreferenceSmith AP, Chen B, Jing Z, inventors; Hologic Inc, assignee. Mammography/tomosynthesis systems and methods automatically deriving breast characteristics from breast x-ray images and automatically adjusting image processing parameters accordingly. US patent 8170320B2. May 1, 2012.
dc.identifier.citedreferenceZhang R, Isola P, Efros AA, Shechtman E, Wang O. The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2018: 586 - 595.
dc.identifier.citedreferenceZhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process. 2017; 26 ( 7 ): 3142 - 3155. https://doi.org/10.1109/TIP.2017.2662206
dc.identifier.citedreferenceBa JL, Kiros JR, Hinton GE. Layer normalization. arXiv:1607.06450, 2016.
dc.identifier.citedreferenceSimonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2015.
dc.identifier.citedreferenceDeng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009: 248 - 255. https://doi.org/10.1109/CVPR.2009.5206848
dc.identifier.citedreferenceKingma DP, Ba J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2017.
dc.identifier.citedreferenceLi H, Jiang G, Zhang J, et al. Fully convolutional network ensembles for white matter hyperintensities segmentation in MR images. NeuroImage. 2018; 183: 650 - 665. https://doi.org/10.1016/j.neuroimage.2018.07.005
dc.identifier.citedreferenceSkaane P, Bandos AI, Eben EB, et al. Two-view digital breast tomosynthesis screening with synthetically reconstructed projection images: comparison with digital breast tomosynthesis with full-field digital mammographic images. Radiology. 2014; 271 ( 3 ): 655 - 663.
dc.identifier.citedreferenceRatanaprasatporn L, Chikarmane SA, Giess CS. Strengths and weaknesses of synthetic mammography in screening. RadioGraphics. 2017; 37 ( 7 ): 1913 - 1927.
dc.identifier.citedreferenceAujero MP, Gavenonis SC, Benjamin R, Zhang Z, Holt JS. Clinical performance of synthesized two-dimensional mammography combined with tomosynthesis in a large screening population. Radiology. 2017; 283 ( 1 ): 70 - 76.
dc.identifier.citedreferenceGastounioti A, McCarthy AM, Pantalone L, Synnestvedt M, Kontos D, Conant EF. Effect of mammographic screening modality on breast density assessment: digital mammography versus digital breast tomosynthesis. Radiology. 2019; 291 ( 2 ): 320 - 327.
dc.identifier.citedreferenceDestounis SV, Santacroce A, Arieno A. Update on breast density, risk estimation, and supplemental screening. Am J Roentgenol. 2020; 214 ( 2 ): 296 - 305.
dc.identifier.citedreferenceMackenzie A, Thomson EL, Mitchell M, et al. Virtual clinical trial to compare cancer detection using combinations of 2D mammography, digital breast tomosynthesis and synthetic 2D imaging. Eur Radiol. 2022; 32 ( 2 ): 806 - 814.
dc.identifier.citedreferenceKhanani S, Xiao L, Jensen MR, et al. Comparison of breast density assessments between synthesized C-ViewTM & intelligent 2DTM mammography. Br J Radiol. 2022; 95: 20211259.
dc.identifier.citedreferenceHorvat JV, Keating DM, Rodrigues-Duarte H, Morris EA, Mango VL. Calcifications at digital breast tomosynthesis: imaging features and biopsy techniques. Radiographics. 2019; 39 ( 2 ): 307 - 318.
dc.identifier.citedreferenceZeng B, Yu K, Gao L, Zeng X, Zhou Q. Breast cancer screening using synthesized two-dimensional mammography: a systematic review and meta-analysis. Breast. 2021; 59: 270 - 278.
dc.identifier.citedreferenceKang HJ, Chang JM, Lee J, et al. Replacing single-view mediolateral oblique (MLO) digital mammography (DM) with synthesized mammography (SM) with digital breast tomosynthesis (DBT) images: comparison of the diagnostic performance and radiation dose with two-view DM with or without MLO-DBT. Eur J Radiol. 2016; 85 ( 11 ): 2042 - 2048.
dc.identifier.citedreferenceYou C, Zhang Y, Gu Y, et al. Comparison of the diagnostic performance of synthesized two-dimensional mammography and full-field digital mammography alone or in combination with digital breast tomosynthesis. Breast Cancer. 2020; 27 ( 1 ): 47 - 53.
dc.identifier.citedreferenceAbdullah P, Alabousi M, Ramadan S, et al. Synthetic 2D mammography versus standard 2D digital mammography: a diagnostic test accuracy systematic review and meta-analysis. AJR Am J Roentgenol. 2021; 217 ( 2 ): 314 - 325.
dc.identifier.citedreferenceHodgson R, Heywang-Köbrunner SH, Harvey SC, et al. Systematic review of 3D mammography for breast cancer screening. Breast. 2016; 27: 52 - 61. https://doi.org/10.1016/j.breast.2016.01.002
dc.identifier.citedreferenceSvahn TM, Houssami N, Sechopoulos I, Mattsson S. Review of radiation dose estimates in digital breast tomosynthesis relative to those in two-view full-field digital mammography. Breast. 2015; 24 ( 2 ): 93 - 99. https://doi.org/10.1016/j.breast.2014.12.002
dc.identifier.citedreferencevan Schie G, Wallis MG, Leifland K, Danielsson M, Karssemeijer N. Mass detection in reconstructed digital breast tomosynthesis volumes with a computer-aided detection system trained on 2D mammograms. Med Phys. 2013; 40 ( 4 ): 041902. https://doi.org/10.1118/1.4791643
dc.identifier.citedreferenceKim ST, Kim DH, Ro YM. Generation of conspicuity-improved synthetic image from digital breast tomosynthesis. In: 2014 19th International Conference on Digital Signal Processing. IEEE; 2014: 395 - 399. https://doi.org/10.1109/ICDSP.2014.6900693
dc.identifier.citedreferenceHomann H, Bergner F, Erhard K. Computation of synthetic mammograms with an edge-weighting algorithm. In: Medical Imaging 2015: Physics of Medical Imaging. Vol 9412. SPIE; 2015. https://doi.org/10.1117/12.2081797
dc.identifier.citedreferenceWei J, Chan H-P, Helvie MA, et al. Synthesizing mammogram from digital breast tomosynthesis. Phys Med Biol. 2019; 64 ( 4 ): 045011. https://doi.org/10.1088/1361-6560/aafcda
dc.identifier.citedreferenceRuth C, Smith A, Stein J, inventors; Hologic Inc, assignee. System and method for generating a 2D image from a tomosynthesis data set. US patent 7760924B2. July 20, 2010.
dc.identifier.citedreferenceNelson JS, Wells JR, Baker JA, Samei E. How does c-view image quality compare with conventional 2D FFDM? Med Phys. 2016; 43 ( 5 ): 2538 - 2547. https://doi.org/10.1118/1.4947293
dc.identifier.citedreferenceBarca P, Lamastra R, Aringhieri G, Tucciariello RM, Traino A, Fantacci ME. Comprehensive assessment of image quality in synthetic and digital mammography: a quantitative comparison. Australas Phys Eng Sci Med. 2019; 42 ( 4 ): 1141 - 1152. https://doi.org/10.1007/s13246-019-00816-8
dc.identifier.citedreferenceRonneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International Publishing; 2015: 234 - 241.
dc.identifier.citedreferenceIsola P, Zhu J-Y, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2017.
dc.working.doiNOen
dc.owningcollnameInterdisciplinary and Peer-Reviewed

