TY - GEN
T1 - Enhancing Robustness of Lane Detection Through Dynamic Smoothness
AU - Qiu, Zengyu
AU - Zhao, Jing
AU - Sun, Shiliang
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2022
Y1 - 2022
N2 - Many lane detection methods consider only single-frame information and ignore the contextual information shared across consecutive frames, which makes them insufficiently robust when lane visual cues are missing. In practice, lane detection operates in a dynamic environment. If the semantic relationship between multiple frames is learned, complementary information from adjacent frames can compensate for the missing visual information in certain frames, thereby improving detection accuracy. Based on this idea, we propose a convolutional GRU (ConvGRU) model that fuses lane feature information from consecutive frames and enhances the semantic information of the current frame. Moreover, because existing lane datasets lack complex scenarios, we generate four more challenging lane scene datasets from the original TuSimple dataset using a style transfer algorithm to verify the robustness of the model. Across these complex lane scenes, our method achieves state-of-the-art performance in terms of accuracy, precision, and F1-measure. Our code is available at https://github.com/Cuibaby/ConvGRULane.
KW - Autonomous driving
KW - ConvGRU
KW - Environment perception
KW - Lane detection
KW - Semantic segmentation
UR - https://www.scopus.com/pages/publications/85130913335
U2 - 10.1007/978-981-16-9492-9_16
DO - 10.1007/978-981-16-9492-9_16
M3 - Conference contribution
AN - SCOPUS:85130913335
SN - 9789811694912
T3 - Lecture Notes in Electrical Engineering
SP - 148
EP - 161
BT - Proceedings of 2021 International Conference on Autonomous Unmanned Systems, ICAUS 2021
A2 - Wu, Meiping
A2 - Niu, Yifeng
A2 - Gu, Mancang
A2 - Cheng, Jin
PB - Springer Science and Business Media Deutschland GmbH
T2 - International Conference on Autonomous Unmanned Systems, ICAUS 2021
Y2 - 24 September 2021 through 26 September 2021
ER -