ETRI-Knowledge Sharing Platform


Deep learning-based multimodal fusion network for segmentation and classification of breast cancers using B-mode and elastography ultrasound images
Cited 28 times in Scopus · Downloaded 38 times
Authors
Sampa Misra, Chiho Yoon, Kwang-Ju Kim, Ravi Managuli, Richard G. Barr, Jongduk Baek, Chulhong Kim
Issue Date
2023-11
Citation
Bioengineering & Translational Medicine, v.8, no.6, pp.1-13
ISSN
2380-6761
Publisher
American Institute of Chemical Engineers
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1002/btm2.10480
Abstract
Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods to segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which optimal weights are assigned to the different imaging modalities through a weighted-skip connection method that emphasizes each modality's importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF method can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on real-world clinical data show that the proposed method outperforms existing single- and multimodal methods. The proposed method correctly predicted seven benign patients as benign in three out of five trials and six malignant patients as malignant in five out of five trials. The proposed method could potentially enhance the classification accuracy of radiologists for breast cancer detection in US images.
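The abstract names two fusion mechanisms: a weighted-skip connection that blends B-mode and SE-mode encoder features inside the segmentation U-Net, and EmbraceNet-style ensembling of CNN features in the classification framework. The two PyTorch sketches below illustrate those ideas only; the class names, feature dimensions, and modality probabilities are assumptions, not the authors' released implementation.

```python
# Minimal sketch of a weighted-skip connection: a learnable weight per
# modality decides how much each encoder's feature map contributes to
# the skip connection fed to the shared U-Net decoder. Illustrative only.
import torch
import torch.nn as nn

class WeightedSkip(nn.Module):
    def __init__(self):
        super().__init__()
        # One logit per modality; softmax yields positive weights summing
        # to 1, so the decoder receives a convex combination of features.
        self.logits = nn.Parameter(torch.zeros(2))

    def forward(self, feat_bmode, feat_semode):
        w = torch.softmax(self.logits, dim=0)
        return w[0] * feat_bmode + w[1] * feat_semode

skip = WeightedSkip()
b = torch.randn(1, 64, 32, 32)  # B-mode encoder features (assumed shape)
s = torch.randn(1, 64, 32, 32)  # SE-mode encoder features (assumed shape)
print(skip(b, s).shape)         # torch.Size([1, 64, 32, 32])
```

For the classification side, EmbraceNet (Choi and Lee, 2019) docks each modality's feature vector to a common size and then, for every output coordinate, stochastically samples which modality supplies that coordinate. A compact sketch of that embracement step, again with assumed dimensions:

```python
import torch
import torch.nn as nn

class Embrace(nn.Module):
    def __init__(self, dim_b, dim_s, dim_out):
        super().__init__()
        # Docking layers project each modality to the shared dimension.
        self.dock_b = nn.Linear(dim_b, dim_out)
        self.dock_s = nn.Linear(dim_s, dim_out)

    def forward(self, feat_b, feat_s, p=(0.5, 0.5)):
        db, ds = self.dock_b(feat_b), self.dock_s(feat_s)
        # Sample, per output coordinate, which modality supplies it.
        probs = torch.tensor(p, device=db.device).expand(db.size(1), -1)
        pick = torch.multinomial(probs, 1).squeeze(1)  # 0 = B-mode, 1 = SE-mode
        mask = (pick == 0).float().unsqueeze(0)        # broadcast over batch
        return mask * db + (1.0 - mask) * ds

emb = Embrace(dim_b=256, dim_s=256, dim_out=128)
fused = emb(torch.randn(4, 256), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 128]); such features feed the decision network (DN)
```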
KSP Keywords
B-MODE, Benign and Malignant, Breast cancer Detection, Clinical data, Computer-aided diagnosis (CAD) systems, Convolutional neural network (CNN), Decision network, Fusion method, Malignant lesions, Medical Imaging, Real-world
This work is distributed under the terms of the Creative Commons License (CC BY).