基于拓扑一致性对抗互学习的知识蒸馏

Translated title of the contribution: Topology-guided Adversarial Deep Mutual Learning for Knowledge Distillation

Xuan Lai, Yan Yun Qu, Yuan Xie, Yu Long Pei

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Existing knowledge distillation methods based on deep mutual learning have two limitations: only the discrepancy between the teacher network and the student network is used to supervise knowledge transfer, with no other constraints, and supervision is purely result-driven, neglecting process-driven supervision. This paper proposes a topology-guided adversarial deep mutual learning network (TADML). The method trains multiple classification sub-networks on the same task simultaneously, with each sub-network learning from the others. Moreover, it uses an adversarial network to adaptively measure the differences between pairs of sub-networks and to optimize the features without changing the model structure. Experimental results on three classification datasets (CIFAR10, CIFAR100, and Tiny-ImageNet) and the person re-identification dataset Market1501 show that the method achieves the best results among comparable model compression methods.
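The abstract describes the training scheme only at a high level. The PyTorch sketch below is a minimal illustration of the two ingredients it names, mutual learning between peer sub-networks and an adversarial, adaptive measure of their pairwise feature discrepancy; the function name `mutual_learning_step`, the `(features, logits)` network interface, and the loss weights `kl_weight` and `adv_weight` are assumptions for illustration, not the paper's TADML implementation.

```python
# Minimal sketch (illustrative, not the authors' TADML code): two peer
# sub-networks learn mutually via KL divergence on softened predictions,
# while a discriminator adaptively measures their feature discrepancy.
import torch
import torch.nn.functional as F

def mutual_learning_step(net_a, net_b, disc, x, y,
                         opt_nets, opt_disc, kl_weight=1.0, adv_weight=0.1):
    # Assumed interface: each sub-network returns (features, logits);
    # disc maps a feature vector to a single real-valued logit.
    feat_a, logits_a = net_a(x)
    feat_b, logits_b = net_b(x)

    # Discriminator update: learn to tell net_a features from net_b features.
    d_a = disc(feat_a.detach())
    d_b = disc(feat_b.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_a, torch.ones_like(d_a))
              + F.binary_cross_entropy_with_logits(d_b, torch.zeros_like(d_b)))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # Sub-network update: task loss + mutual KL + adversarial feature alignment.
    ce = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    kl = (F.kl_div(F.log_softmax(logits_a, dim=1),
                   F.softmax(logits_b, dim=1).detach(), reduction="batchmean")
          + F.kl_div(F.log_softmax(logits_b, dim=1),
                     F.softmax(logits_a, dim=1).detach(), reduction="batchmean"))
    # Each network tries to make its features indistinguishable from its
    # peer's, so the discriminator acts as an adaptive discrepancy measure.
    adv = (F.binary_cross_entropy_with_logits(disc(feat_b), torch.ones_like(d_b))
           + F.binary_cross_entropy_with_logits(disc(feat_a), torch.zeros_like(d_a)))
    loss = ce + kl_weight * kl + adv_weight * adv
    opt_nets.zero_grad()
    loss.backward()
    opt_nets.step()
    return loss_d.item(), loss.item()
```

On this reading, result-driven supervision comes from the task and KL terms on the outputs, while the adversarial term supplies process-driven supervision on intermediate features, consistent with the abstract's claim that features are optimized without changing the model structure.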

Original language: Chinese (Simplified)
Pages (from-to): 102-110
Number of pages: 9
Journal: Zidonghua Xuebao/Acta Automatica Sinica
Volume: 49
Issue number: 1
State: Published - Jan 2023
