TY - GEN
T1 - Domain Adaptation with One-step Transformation
AU - Peng, Xishuai
AU - Li, Yuanxiang
AU - Murphey, Yi Lu
AU - Wei, Xian
AU - Luo, Jianhua
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Robust performance across different driving environments is a crucial property of autonomous vehicle driving systems. However, computer-vision-based modules suffer from performance degradation when there is a distribution discrepancy between the data captured in practice and the training data. In this paper, we address this problem by learning a one-step transformation that bridges the discrepancy from the source domain to the target domain. Since the feature space learned from labeled source data is well trained, the target data are first mapped directly into this feature space. To account for the domain discrepancy, the distributions of the source and target features then need to be further aligned. We model this alignment as a one-step transformation and implement it as a one-layer convolutional neural network. To learn the one-step transformation effectively, a new adversarial loss function is proposed that simultaneously minimizes the Wasserstein distance between the involved domains and the prediction error. Experiments are conducted on six datasets, including challenging traffic-related data, e.g., traffic sign images and pedestrian fisheye images captured by cameras installed in a moving vehicle. The results demonstrate the effectiveness of the proposed method in comparison with eight other classical recognition methods.
AB - Robust performance across different driving environments is a crucial property of autonomous vehicle driving systems. However, computer-vision-based modules suffer from performance degradation when there is a distribution discrepancy between the data captured in practice and the training data. In this paper, we address this problem by learning a one-step transformation that bridges the discrepancy from the source domain to the target domain. Since the feature space learned from labeled source data is well trained, the target data are first mapped directly into this feature space. To account for the domain discrepancy, the distributions of the source and target features then need to be further aligned. We model this alignment as a one-step transformation and implement it as a one-layer convolutional neural network. To learn the one-step transformation effectively, a new adversarial loss function is proposed that simultaneously minimizes the Wasserstein distance between the involved domains and the prediction error. Experiments are conducted on six datasets, including challenging traffic-related data, e.g., traffic sign images and pedestrian fisheye images captured by cameras installed in a moving vehicle. The results demonstrate the effectiveness of the proposed method in comparison with eight other classical recognition methods.
KW - adversarial loss function
KW - autonomous vehicle driving system
KW - computer vision
KW - convolutional neural network
UR - https://www.scopus.com/pages/publications/85062778940
U2 - 10.1109/SSCI.2018.8628835
DO - 10.1109/SSCI.2018.8628835
M3 - Conference contribution
AN - SCOPUS:85062778940
T3 - Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018
SP - 539
EP - 546
BT - Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018
A2 - Sundaram, Suresh
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th IEEE Symposium Series on Computational Intelligence, SSCI 2018
Y2 - 18 November 2018 through 21 November 2018
ER -