ETRI Knowledge Sharing Platform

Mitigating False Negatives in Contrastive Learning via Progressive Hyperparameter Adjustment
Authors
Joonsun Auh, Changsik Cho, Seon-Tae Kim
Issue Date
2026-01
Citation
IEEE Access, v.14, pp.10276-10285
ISSN
2169-3536
Publisher
IEEE
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1109/ACCESS.2026.3654091
Abstract
This study introduces a technique that gradually increases the cosine similarity threshold and temperature of contrastive learning models to enhance their performance. The conventional contrastive learning approach uses a fixed temperature, which fails to adequately capture changes in the data distribution during training. Moreover, only one positive sample is generated per anchor within each training batch, so false negatives inevitably occur whenever the batch size exceeds the number of classes. To address these limitations, we propose gradually increasing the values of these hyperparameters during training, allowing the model to learn data representations flexibly and to identify false negative samples. During the initial phase of training, a low temperature is employed to sharpen the distinctions between samples, promoting hard negative separation. As training progresses, the temperature is gradually increased, allowing semantically similar samples to be positioned closer together. Additionally, the cosine similarity between the anchor and every other sample is computed, and samples are classified as positive or negative based on whether this value exceeds a threshold. This threshold is likewise raised gradually, so that as training progresses only samples that are highly similar to the anchor are trained as positive pairs. The proposed method outperforms conventional contrastive learning on the CIFAR-10, CIFAR-100, and ImageNet-2012 datasets and across various experimental settings, including ResNet-18 and ResNet-50 backbones. We show that it is possible to compensate for the limitations of contrastive learning.
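The two mechanisms described above — a linearly increasing temperature and a linearly increasing cosine-similarity threshold that relabels suspected false negatives as positives — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the schedule endpoints, the linear ramp, and the SupCon-style averaging over the positive set are assumptions chosen for clarity.

```python
import numpy as np

def linear_schedule(epoch, total_epochs, start, end):
    """Linearly ramp a hyperparameter from `start` to `end` over training.
    (Illustrative; the paper's exact schedule may differ.)"""
    frac = min(max(epoch / total_epochs, 0.0), 1.0)
    return start + (end - start) * frac

def nt_xent_with_relabeling(anchor, positive, negatives, temperature, sim_threshold):
    """NT-Xent-style loss for one anchor in which in-batch samples whose
    cosine similarity to the anchor exceeds `sim_threshold` are relabeled
    as additional positives, then averaged SupCon-style."""
    l2 = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = l2(anchor), l2(positive), l2(negatives)
    sim_p = float(p @ a)                  # similarity to the augmented view
    sim_n = n @ a                         # similarities to other batch samples
    relabeled = sim_n >= sim_threshold    # suspected false negatives
    logits = np.concatenate(([sim_p], sim_n)) / temperature
    m = logits.max()                      # numerically stable log-sum-exp
    log_denom = m + np.log(np.exp(logits - m).sum())
    pos_logits = np.concatenate(([sim_p], sim_n[relabeled])) / temperature
    return -np.mean(pos_logits - log_denom)
```

In a training loop, one would recompute both hyperparameters each epoch, e.g. `temperature = linear_schedule(epoch, epochs, 0.1, 0.5)` and `sim_threshold = linear_schedule(epoch, epochs, 0.6, 0.9)` (endpoint values here are placeholders, not those reported in the paper).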
Keyword
Contrastive learning, cosine similarity, false negatives, self-supervised learning, temperature parameter
KSP Keywords
Batch size, CIFAR-10, Cosine similarity, Data Distribution, False negative, Initial phase, Learning approach, Positive and negative, Similar samples, Temperature parameter, learning models
This work is distributed under the terms of the Creative Commons License (CC BY).