Abstract
Multi-exit architectures consist of a backbone and branch classifiers that offer shortened inference pathways to reduce the run-time of deep neural networks. In this paper, we analyze branching patterns that differ in how they allocate computational complexity across the branch classifiers: constant-complexity branching keeps all branches the same, while complexity-increasing and complexity-decreasing branching place more complex branches later or earlier in the backbone, respectively. Through extensive experiments on multiple backbones and datasets, we find that complexity-decreasing branches are more effective than constant-complexity or complexity-increasing branches, achieving the best accuracy-cost trade-off. We investigate the cause by using knowledge consistency to probe the effect of adding branches onto a backbone. Our findings show that complexity-decreasing branching disrupts the backbone's feature-abstraction hierarchy the least, which explains the effectiveness of this branching pattern.
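The shortened inference pathways the abstract describes can be illustrated with a minimal sketch, assuming a common confidence-threshold exit rule; the stage/branch functions, the threshold value, and the unit-cost accounting below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of early-exit inference in a multi-exit architecture.
# Stages and branches are stand-in callables; threshold 0.9 is assumed.

def multi_exit_infer(x, stages, branches, threshold=0.9):
    """Run backbone stages in order. After each stage, evaluate the
    attached branch classifier and exit early once its confidence
    reaches `threshold`; otherwise continue down the backbone."""
    h, cost = x, 0
    label, confidence = None, 0.0
    for stage, branch in zip(stages, branches):
        h = stage(h)          # one backbone stage of computation
        cost += 1             # crude proxy for compute spent so far
        label, confidence = branch(h)
        if confidence >= threshold:
            break             # shortened inference pathway: exit here
    return label, confidence, cost

# Toy example: three identity stages; branch confidence grows with depth,
# so inference exits at the second branch (0.95 >= 0.9) with cost 2.
stages = [lambda h: h] * 3
branches = [lambda h, c=c: ("cat", c) for c in (0.5, 0.95, 0.99)]
label, conf, cost = multi_exit_infer([0.0], stages, branches)
```

The complexity patterns studied in the paper would correspond to how much computation each element of `branches` performs relative to its position in `stages`.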
| Field | Value |
|---|---|
| Original language | English |
| Article number | 103900 |
| Journal | Computer Vision and Image Understanding |
| Volume | 239 |
| DOIs | |
| State | Published - Feb 2024 |
Keywords
- Branch classifiers
- Knowledge consistency
- Model compression and acceleration
- Multi-exit architectures
Title: A closer look at branch classifiers of multi-exit architectures