TY - JOUR
T1 - MFIALane: Multiscale Feature Information Aggregator Network for Lane Detection
T2 - IEEE Transactions on Intelligent Transportation Systems
AU - Qiu, Zengyu
AU - Zhao, Jing
AU - Sun, Shiliang
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2022/12/1
Y1 - 2022/12/1
N2 - Lane detection differs from general object detection in that lane lines are usually long and narrow in road images, and more attention to image features at different scales is required to reason about lane lines under occlusion, degradation, and bad weather. However, most existing semantic segmentation-based lane detection methods focus on enlarging the convolutional receptive field by aggregating information vertically and horizontally within the same feature map, which may ignore important information contained in multi-scale features. Besides, the high-level semantic information of whether a lane exists is not fully utilized: such methods often add a module at the final stage of the network output to determine lane existence, which is dispensable for their networks. Based on the above analysis, we design a novel lane detection network based on semantic segmentation, which consists of a Multi-scale Feature Information Aggregator (MFIA) module and a Channel Attention (CA) module. Extensive experiments on the TRLane, generated Lane, BDD100K, TuSimple, VIL-100, and CULane datasets show that our approach achieves state-of-the-art performance (our code will be available at https://github.com/Cuibaby/MFIALane). In addition, considering that different perception tasks in autonomous driving can share the feature extraction network, we also conduct experiments on drivable area segmentation on the BDD100K dataset. Our approach achieves good results compared to many existing methods, showing that the proposed model is capable of simultaneously handling multiple perception tasks in autonomous driving scenarios.
AB - Lane detection differs from general object detection in that lane lines are usually long and narrow in road images, and more attention to image features at different scales is required to reason about lane lines under occlusion, degradation, and bad weather. However, most existing semantic segmentation-based lane detection methods focus on enlarging the convolutional receptive field by aggregating information vertically and horizontally within the same feature map, which may ignore important information contained in multi-scale features. Besides, the high-level semantic information of whether a lane exists is not fully utilized: such methods often add a module at the final stage of the network output to determine lane existence, which is dispensable for their networks. Based on the above analysis, we design a novel lane detection network based on semantic segmentation, which consists of a Multi-scale Feature Information Aggregator (MFIA) module and a Channel Attention (CA) module. Extensive experiments on the TRLane, generated Lane, BDD100K, TuSimple, VIL-100, and CULane datasets show that our approach achieves state-of-the-art performance (our code will be available at https://github.com/Cuibaby/MFIALane). In addition, considering that different perception tasks in autonomous driving can share the feature extraction network, we also conduct experiments on drivable area segmentation on the BDD100K dataset. Our approach achieves good results compared to many existing methods, showing that the proposed model is capable of simultaneously handling multiple perception tasks in autonomous driving scenarios.
KW - Deep learning
KW - autonomous driving
KW - lane detection
KW - semantic segmentation
UR - https://www.scopus.com/pages/publications/85137936447
U2 - 10.1109/TITS.2022.3195742
DO - 10.1109/TITS.2022.3195742
M3 - Article
AN - SCOPUS:85137936447
SN - 1524-9050
VL - 23
SP - 24263
EP - 24275
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 12
ER -