EMCLR: Expectation Maximization Contrastive Learning Representations

  • Meng Liu
  • Ran Yi*
  • Lizhuang Ma*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

Abstract

One of the bottlenecks of self-supervised contrastive learning is the degenerate constant solution, in which all samples are mapped to a single point in representation space. To prevent such collapse, the mainstream paradigm uses negative samples, forcing negative pairs apart. However, this approach incurs O(N²) time and space complexity, limiting extensibility, scalability, and efficiency. We observe that current negative-requiring objectives can be decomposed into alignment and uniformity terms, where the uniformity term dominates the O(N²) complexity. To reduce the complexity, inspired by the classical EM algorithm, we derive for each batch an embedding matrix with an optimally uniform distribution and discard the uniformity term from the objective. Specifically, given the stacked embedding matrices of the two views, we first compute the optimal solution for one view with the proposed algorithm, and then align the other view's embedding matrix with this optimal solution. This learning paradigm avoids model collapse without ad-hoc negative pairs and reduces the quadratic complexity to linear. Extensive experiments on CIFAR-10/100 and STL-10 show that the proposed method achieves comparable results with O(N) complexity.
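For intuition only, below is a minimal PyTorch sketch of an alignment-only objective in the spirit described above: a fixed, decorrelated target is computed from one view's batch embeddings, and the other view is simply aligned to it, so the per-batch cost grows linearly in N rather than quadratically. The `whiten` stand-in, the squared-L2 alignment form, and the batch/dimension sizes are assumptions made for illustration; the paper's actual EM-derived optimally uniform solution is not reproduced here.

```python
import torch
import torch.nn.functional as F


def whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Illustrative stand-in for an "optimally uniform" target:
    # ZCA-whiten the batch so features are decorrelated, then re-normalize.
    # This is an assumption for the sketch, NOT the paper's EM-derived solution.
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)                   # (d, d) batch covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)
    inv_sqrt = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T
    return F.normalize(z @ inv_sqrt, dim=1)              # unit-norm rows, shape (N, d)


def alignment_only_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    # Alignment term only: pull each embedding of view 1 toward a fixed
    # target computed from view 2. There are no pairwise negative terms,
    # so time and memory grow linearly with the batch size N.
    target = whiten(z2).detach()                         # fixed target, no gradient
    z1 = F.normalize(z1, dim=1)
    return (2.0 - 2.0 * (z1 * target).sum(dim=1)).mean()  # squared L2 on the sphere


# Toy usage with two augmented views of the same batch (N=256, d=128).
z_view1 = torch.randn(256, 128, requires_grad=True)
z_view2 = torch.randn(256, 128)
loss = alignment_only_loss(z_view1, z_view2)
loss.backward()
```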

Original language: English
Title of host publication: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728163277
DOIs
State: Published - 2023
Externally published: Yes
Event: 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023 - Rhodes Island, Greece
Duration: 4 Jun 2023 - 10 Jun 2023

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2023-June
ISSN (Print): 1520-6149

Conference

Conference: 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Country/Territory: Greece
City: Rhodes Island
Period: 4/06/23 - 10/06/23

Keywords

  • Contrastive learning
  • Representation learning
  • Self-supervised learning
