Variational Distillation for Multi-View Learning

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

The Information Bottleneck (IB) provides an information-theoretic principle for multi-view learning by revealing the various components contained in each viewpoint. Capturing the distinct roles of these components is necessary for achieving view-invariant and predictive representations, but this remains under-explored due to the technical intractability of modeling and organizing innumerable mutual information (MI) terms. Recent studies show that sufficiency and consistency play key roles in multi-view representation learning and can be preserved via a variational distillation framework. However, when generalized to arbitrary viewpoints, this strategy fails because the mutual information terms of consistency become complicated. This paper presents Multi-View Variational Distillation (MV$^{2}$D), which tackles the above limitations for generalized multi-view learning. Uniquely, MV$^{2}$D recognizes useful consistent information and prioritizes diverse components by their generalization ability, guiding an analytical and scalable solution that achieves both sufficiency and consistency. Additionally, by rigorously reformulating the IB objective, MV$^{2}$D tackles the difficulties in MI optimization and fully realizes the theoretical advantages of the information bottleneck principle. We extensively evaluate our model on diverse tasks to verify its effectiveness; the considerable gains provide key insights into achieving generalized multi-view representations under a rigorous information-theoretic principle.
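For background, the standard single-view IB objective (due to Tishby et al., not the paper's reformulation) learns a stochastic representation $z$ of an input $x$ with target $y$ by trading predictiveness against compression; the abstract's reference to "innumerable MI terms" arises because each extra view $x^{(i)}$ multiplies the sufficiency and consistency constraints of this form:

```latex
% Standard IB objective: keep z predictive of y while compressing x.
% beta > 0 controls the strength of compression; I(.;.) is mutual information.
\max_{p(z \mid x)} \; I(z; y) \;-\; \beta \, I(z; x)
```

In the multi-view setting sketched by the abstract, analogous MI terms must be balanced across every pair of viewpoints, which is what makes a direct optimization intractable and motivates a variational treatment.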

Original language: English
Pages (from-to): 4551-4566
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 7
DOIs
State: Published - 1 Jul 2024

Keywords

  • Multi-view learning
  • information bottleneck
  • knowledge distillation
  • mutual information
  • variational inference
