ETRI Knowledge Sharing Platform



Journal Article: Efficient Privacy-Preserving Machine Learning for Blockchain Network
Cited 93 times in Scopus; downloaded 122 times
김현일, 김승현, 황정연, 서창호
IEEE Access, v.7, pp.136481-136495
19HH5400, Development of Unconscious Augmented Authentication and Privacy-Preserving Blockchain ID Management Technology for O2O Services, 김수형
Blockchain has emerged as a trustworthy, secure, decentralized, and distributed network for many applications in areas such as banking, finance, insurance, healthcare, and business. Recently, many communities in blockchain networks have sought to deploy machine learning models to extract meaningful knowledge from geographically distributed, large-scale data owned by each participant. To run a learning model without data centralization, distributed machine learning (DML) for blockchain networks has been studied. While several schemes have been proposed, privacy and security have not been sufficiently addressed, and, as we show later, there are vulnerabilities in their architectures and limitations in terms of efficiency. In this paper, we propose a privacy-preserving DML model for a permissioned blockchain that resolves the privacy, security, and performance issues in a systematic way. We develop a differentially private stochastic gradient descent method and an error-based aggregation rule as core primitives. Our model can handle any type of differentially private learning algorithm in which non-deterministic functions must be defined. The proposed error-based aggregation rule effectively prevents attacks by an adversarial node that tries to degrade the accuracy of DML models. Our experimental results show that the proposed model provides stronger resilience against adversarial attacks than other aggregation rules under a differentially private scenario. Finally, we show that the proposed model has high usability because of its low computational complexity and low transaction latency.
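The differentially private stochastic gradient descent primitive mentioned above can be illustrated with a minimal sketch in the standard DP-SGD style: each per-example gradient is clipped to a fixed L2 norm bound and calibrated Gaussian noise is added before averaging. This is a generic illustration of the technique, not the paper's exact algorithm; the function name `dp_sgd_step` and all parameter values are hypothetical.

```python
import numpy as np

def dp_sgd_step(w, grad_fn, batch, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One differentially private SGD step: clip each per-example
    gradient to L2 norm <= clip, sum, add Gaussian noise scaled by
    sigma * clip, average, and take a gradient step."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for x, y in batch:
        g = grad_fn(w, x, y)
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
    g_sum = np.sum(clipped, axis=0)
    # Noise calibrated to the clipping bound (sensitivity of the sum).
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    g_priv = (g_sum + noise) / len(batch)
    return w - lr * g_priv
```

Because each example's influence on the sum is bounded by `clip`, the Gaussian noise masks any single participant's contribution, which is what lets the update satisfy differential privacy.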
KSP Suggested Keywords
Adversarial Attacks, Distributed machine learning, Experiment results, Geographically distributed, Large-scale data, Low Computational Complexity, Proposed model, Stochastic gradient descent method, aggregation rules, distributed network, learning algorithms
This work is available under the Creative Commons Attribution (CC BY) license.