Abstract
Unpaired image dehazing has attracted increasing attention, since the paired training data required by supervised dehazing methods are either costly to capture in the real world or, when synthesized, cause performance degradation on real hazy scenes. Existing unpaired dehazing methods are all built on CycleGAN-like frameworks with a pixel-to-pixel constraint, which leads to burdensome model complexity and unstable training. In this paper, we propose a novel single-GAN model for unpaired image dehazing (SinGAN-Dehaze) that dispenses with the cycle-consistency constraint. Specifically, cycle-consistency is decoupled into content-consistency and style-consistency, and the pixel-to-pixel mapping is replaced by a patch-to-patch semantic mapping. Content-consistency is ensured by capturing local distinctive representations and global contextual dependencies. Style-consistency is achieved by forcing the high-frequency information distribution of the dehazing result to be close to that of a clear image with a similar style. Extensive experiments demonstrate that our method achieves superior performance for unpaired image dehazing, in terms of both objective metrics and visual quality, on synthetic and real hazy scenarios.
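The style-consistency idea above, matching the high-frequency distribution of the dehazing result to that of a style-similar clear image, can be sketched as follows. This is a minimal illustration only: the box-blur high-pass filter and the first/second-moment distance are assumptions for the sketch, not the paper's actual loss formulation.

```python
import numpy as np

def high_freq(img, k=3):
    """High-frequency residual: image minus a k x k box blur (an illustrative high-pass filter)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    blur = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            blur += padded[dy:dy + h, dx:dx + w]
    blur /= k * k
    return img - blur

def style_loss(dehazed, clear_ref):
    """Hypothetical distance between high-frequency distributions: match mean and std (moment matching)."""
    hf_d, hf_c = high_freq(dehazed), high_freq(clear_ref)
    return abs(hf_d.mean() - hf_c.mean()) + abs(hf_d.std() - hf_c.std())
```

The loss is zero when the two high-frequency distributions share the same first two moments, so pulling a dehazed output toward a clear reference image of similar style drives their texture statistics together.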
| Original language | English |
|---|---|
| Article number | 129888 |
| Journal | Neurocomputing |
| Volume | 636 |
| DOIs | |
| State | Published - 1 Jul 2025 |
Keywords
- Contrastive learning
- Disentangled representation learning
- Style transfer
- Unpaired image dehazing