ETRI Knowledge Sharing Platform

Improved contrastive learning model via identification of false-negatives in self-supervised learning
Cited 1 time in Scopus
Authors
Joonsun Auh, Changsik Cho, Seon‐tae Kim
Issue Date
2024-12
Citation
ETRI Journal, v.46, no.6, pp.1020-1029
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.4218/etrij.2023-0285
Abstract
Self-supervised learning is a method that learns data representations from unlabeled data. It is efficient because it can exploit large-scale unlabeled data, and continued research has brought its performance close to that of supervised learning. Contrastive learning, a class of self-supervised algorithms, uses data similarity to perform instance-level learning in an embedding space. However, it suffers from false negatives: samples that belong to the same class as the anchor but are incorrectly treated as negatives during representation learning. These misclassifications cause a loss of information and degrade model performance. This study employs cosine similarity together with the temperature parameter to identify false negatives and mitigate their impact, improving the performance of the contrastive learning model. The proposed method achieved a performance improvement of up to 2.7% over the existing algorithm on the CIFAR-100 dataset, and improvements were also observed on other datasets such as CIFAR-10 and ImageNet.
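The abstract describes using cosine similarity and the temperature parameter inside a contrastive (InfoNCE-style) loss to identify and exclude likely false negatives. A minimal sketch of that general idea, assuming a simple similarity-threshold rule; the function names, the threshold value, and the exact filtering rule here are illustrative assumptions, not the paper's actual method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (plain lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives,
                     temperature=0.5, fn_threshold=0.9):
    """InfoNCE-style loss with a hypothetical false-negative filter.

    Negatives whose cosine similarity to the anchor exceeds
    fn_threshold are treated as likely false negatives (probable
    same-class samples) and excluded from the denominator, so they
    no longer push same-class embeddings apart.
    """
    pos_term = math.exp(cosine(anchor, positive) / temperature)
    kept = [n for n in negatives if cosine(anchor, n) < fn_threshold]
    neg_sum = sum(math.exp(cosine(anchor, n) / temperature) for n in kept)
    return -math.log(pos_term / (pos_term + neg_sum))
```

With filtering enabled, a near-duplicate of the anchor placed in the negative set is dropped, so the loss is lower than when every negative is kept; the temperature controls how sharply high-similarity negatives dominate the denominator.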
KSP Keywords
Cosine similarity, Data class, Data representation, Data similarity, Embedding space, Improved performance, Supervised learning algorithm, learning models, loss of information, performance improvement, unlabeled data
This work is distributed under the terms of the Korea Open Government License (KOGL) Type 4: Type 1 + Commercial Use Prohibition + Change Prohibition.