TY - GEN
T1 - Joint-attention discriminator for accurate super-resolution via adversarial training
AU - Chen, Rong
AU - Xie, Yuan
AU - Luo, Xiaotong
AU - Qu, Yanyun
AU - Li, Cuihua
N1 - Publisher Copyright:
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2019/10/15
Y1 - 2019/10/15
N2 - Tremendous progress has been made in single image super-resolution (SR), where existing deep SR models achieve impressive performance on objective criteria such as PSNR and SSIM. However, most SR methods are limited in visual perception; for example, their results tend to look overly smooth. Generative adversarial network (GAN) based methods produce better visual quality than most deep SR models but perform poorly on objective criteria. To trade off objective and subjective SR performance, we design a joint-attention discriminator with which a GAN improves SR performance in PSNR and SSIM while maintaining visual quality comparable to non-attention GAN-based SR models. The joint-attention discriminator contains dense channel-wise attention and cross-layer attention blocks. The former is applied in the shallow layers of the discriminator for channel-wise weighted combination of feature maps. The latter is employed to select feature maps in the middle and deep layers for effective discrimination. Extensive experiments are conducted on six benchmark datasets, and the results show that our proposed discriminator, combined with different generators, achieves more realistic visual results.
AB - Tremendous progress has been made in single image super-resolution (SR), where existing deep SR models achieve impressive performance on objective criteria such as PSNR and SSIM. However, most SR methods are limited in visual perception; for example, their results tend to look overly smooth. Generative adversarial network (GAN) based methods produce better visual quality than most deep SR models but perform poorly on objective criteria. To trade off objective and subjective SR performance, we design a joint-attention discriminator with which a GAN improves SR performance in PSNR and SSIM while maintaining visual quality comparable to non-attention GAN-based SR models. The joint-attention discriminator contains dense channel-wise attention and cross-layer attention blocks. The former is applied in the shallow layers of the discriminator for channel-wise weighted combination of feature maps. The latter is employed to select feature maps in the middle and deep layers for effective discrimination. Extensive experiments are conducted on six benchmark datasets, and the results show that our proposed discriminator, combined with different generators, achieves more realistic visual results.
KW - Cross-layer Attention
KW - Dense Channel-wise Attention
KW - Generative Adversarial Network
KW - Image Super-resolution
KW - Joint-attention Discriminator
UR - https://www.scopus.com/pages/publications/85074867584
U2 - 10.1145/3343031.3351008
DO - 10.1145/3343031.3351008
M3 - Conference contribution
AN - SCOPUS:85074867584
T3 - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
SP - 711
EP - 719
BT - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
T2 - 27th ACM International Conference on Multimedia, MM 2019
Y2 - 21 October 2019 through 25 October 2019
ER -