This study introduces a technique that gradually increases the temperature and cosine similarity threshold of contrastive learning models during training to enhance their performance. Conventional contrastive learning uses a fixed temperature, which fails to adequately capture changes in the data distribution during training. Moreover, because only one positive sample is generated for each anchor within a training batch, false negative samples inevitably occur when the batch size exceeds the number of classes. To address these limitations, we propose gradually increasing the values of these hyperparameters during training so that the model learns data representations flexibly and identifies false negative samples. In the initial phase of training, a low temperature is employed to distinguish differences between samples, promoting the separation of hard negatives. As training progresses, the temperature is gradually increased, allowing semantically similar samples to be positioned closer together. In addition, the cosine similarity between the anchor and every other sample is computed, and each sample is treated as positive or negative depending on whether this value exceeds a threshold. This threshold is likewise increased gradually, so that as training progresses, samples that are highly similar to the anchor are trained as positive pairs. The proposed method outperforms conventional contrastive learning on the CIFAR-10, CIFAR-100, and ImageNet-2012 datasets and across various experimental settings, such as ResNet-18 and ResNet-50 backbones. We show that it is possible to compensate for the limitations of contrastive learning.
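The two scheduling ideas in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified NT-Xent-style loss for a single anchor, written with NumPy under assumed linear schedules and illustrative hyperparameter ranges (`tau_min`, `tau_max`, `thr_min`, `thr_max` are placeholders, not values from the paper). Negatives whose cosine similarity to the anchor exceeds the scheduled threshold are relabeled as positives, mimicking the false-negative handling described above.

```python
import numpy as np

def scheduled_params(epoch, total_epochs,
                     tau_min=0.1, tau_max=0.5,
                     thr_min=0.7, thr_max=0.9):
    """Linearly increase temperature and cosine-similarity threshold
    over training (illustrative ranges, not the paper's values)."""
    t = epoch / max(total_epochs - 1, 1)
    tau = tau_min + t * (tau_max - tau_min)
    thr = thr_min + t * (thr_max - thr_min)
    return tau, thr

def contrastive_loss(z, pos_index, anchor=0, tau=0.1, thr=0.9):
    """NT-Xent-style loss for one anchor: samples whose cosine
    similarity to the anchor exceeds `thr` are treated as extra
    positives instead of negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z[anchor]                                # cosine similarities
    mask = np.ones(len(z), dtype=bool)
    mask[anchor] = False                               # exclude the anchor itself
    # relabel near-duplicates of the anchor as positives
    positives = {pos_index} | {i for i in np.where(mask)[0] if sim[i] > thr}
    log_den = np.log(np.sum(np.exp(sim[mask] / tau)))  # shared denominator
    # average -log p(positive) over all (relabelled) positives
    losses = [log_den - sim[p] / tau for p in positives]
    return float(np.mean(losses))
```

Early in training a low `tau` sharpens the softmax and pushes hard negatives apart; as `tau` and `thr` grow, the loss becomes more tolerant of semantically similar samples while the rising threshold restricts relabeling to only the most anchor-like samples.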
Keywords
Contrastive learning, cosine similarity, false negatives, self-supervised learning, temperature parameter
This work is distributed under the terms of the Creative Commons License (CC BY).