TY - JOUR
T1 - Learning Representations for High-Dynamic-Range Image Color Transfer in a Self-Supervised Way
AU - Huang, Yifei
AU - Qiu, Sheng
AU - Wang, Changbo
AU - Li, Chenhui
N1 - Publisher Copyright:
© 1999-2012 IEEE.
PY - 2021
Y1 - 2021
N2 - Reference-based color transfer between images is a fundamental operation in image editing. However, existing approaches pay little attention to high-dynamic-range (HDR) images, and designing an appropriate representation of HDR images that achieves satisfying color transfer is challenging. In this paper, we propose an innovative high-dynamic-range image color transfer generative adversarial network (HDRCTGAN) that encodes the original image into fine representations, allowing the color of the reference image to be transferred to the target image. We learn these fine representations through a generative adversarial network (GAN) in a self-supervised way: the proposed method requires only unlabeled HDR images, rather than the large number of ground-truth pairs demanded by supervised learning. HDRCTGAN consists of a generator that transfers the color of the reference image to the target image in the feature domain and a discriminator that suppresses artifacts introduced by the generator. We also design a loss function that ensures HDRCTGAN possesses two required properties: (a) high fidelity and (b) self-identity. The proposed approach yields pleasing visual results. We have carried out HDR-specific evaluations, including both objective quantitative experiments with HDR metrics and subjective user studies conducted on HDR display devices, to demonstrate the effectiveness of our method. Furthermore, we have verified the applicability of the proposed approach in several applications, such as color transfer of HDR images captured by smartphones, color transfer of fabric images, and reference-based grayscale image colorization.
KW - Self-supervised learning
KW - color transfer
KW - generative adversarial network
KW - image manipulation
UR - https://www.scopus.com/pages/publications/85098123690
U2 - 10.1109/TMM.2020.2981994
DO - 10.1109/TMM.2020.2981994
M3 - Article
AN - SCOPUS:85098123690
SN - 1520-9210
VL - 23
SP - 176
EP - 188
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
M1 - 9042237
ER -