ETRI Knowledge Sharing Platform


Details

Journal Article
Deep learning-based multimodal fusion network for segmentation and classification of breast cancers using B-mode and elastography ultrasound images
Cited 4 times in Scopus
Authors
Sampa Misra, 윤치호, 김광주, Ravi Managuli, Richard G. Barr, 백종덕, 김철홍
Publication Date
November 2023
Source
Bioengineering & Translational Medicine, v.8 no.6, pp.1-13
ISSN
2380-6761
Publisher
American Institute of Chemical Engineers
DOI
https://dx.doi.org/10.1002/btm2.10480
Research Project
22ZD1100, Program for Advancing ICT Convergence Technologies Based on Regional Industry in the Daegyeong Region, 문기영
Abstract
Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods to segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which optimal weights are assigned to the different imaging modalities through a weighted-skip connection method to emphasize their importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF method can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on real-world clinical data showed that the proposed method outperforms existing single- and multimodal methods. The proposed method predicted seven benign patients as benign in three out of five trials, and six malignant patients as malignant in five out of five trials. The proposed method could potentially enhance the classification accuracy of radiologists for breast cancer detection in US images.
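The two fusion ideas named in the abstract — a weighted-skip connection that blends per-modality encoder features, and EmbraceNet-style ensembling of per-modality CNN features — can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions, not the authors' implementation: the function names, the scalar blend weight, and the per-dimension modality sampling are hypothetical stand-ins for the learned components in the paper.

```python
import numpy as np

def weighted_skip(b_mode_feat: np.ndarray, se_feat: np.ndarray, w: float) -> np.ndarray:
    """Hypothetical weighted-skip connection: blend encoder features from the
    two modalities with a weight w (a plain float here; learned in the paper)."""
    return w * b_mode_feat + (1.0 - w) * se_feat

def embracenet_fuse(features: list, probs: list,
                    rng: np.random.Generator) -> np.ndarray:
    """EmbraceNet-style fusion (simplified): for each embedding dimension,
    sample which modality contributes that component, so the fused vector
    mixes complementary information from all modalities."""
    d = features[0].shape[0]
    # Draw one modality index per embedding slot, weighted by modality probs.
    choice = rng.choice(len(features), size=d, p=probs)
    return np.stack(features)[choice, np.arange(d)]

# Illustrative usage with toy 8-dimensional "CNN features".
rng = np.random.default_rng(0)
b_feat, se_feat = np.ones(8), np.zeros(8)
fused = embracenet_fuse([b_feat, se_feat], [0.5, 0.5], rng)   # mix of 0s and 1s
blended = weighted_skip(b_feat, se_feat, 0.7)                  # all 0.7
```

In the actual model the fused vector would feed the decision network (DN) for the benign/malignant classification; here the dimension-wise sampling simply shows how complementary B-mode and SE-mode features can be combined into a single embedding.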
KSP Suggested Keywords
B-MODE, Benign and Malignant, Breast Cancer(BC), Breast cancer detection, Breast lesion, Clinical data, Computer-aided diagnosis (CAD) systems, Convolution neural network(CNN), Decision networks, Learning-based, Malignant lesions