Robust Road Sign Feature Extraction Through Data Curation and Multi-Task Learning for Global Map Creation
-
https://doi.org/10.14419/enqwz712
Received date: June 27, 2025
Accepted date: August 5, 2025
Published date: August 12, 2025
-
Keywords
Road Sign; Feature Extraction; Data Curation; YOLOv7; Multi-Task Learning; Global Map
-
Abstract
This research presents a comprehensive approach to enhancing road sign feature extraction for global map creation through strategic improvements in data quality, feature learning, and network architecture. Designed to address core challenges in HERE Technologies' map creation pipeline (U.S. patent application no. 18/988,231), our approach significantly improves the Stage 1 component of that three-stage framework by replacing the previous YOLOv7-based implementation with a more robust and effective solution. The methodology centers on three key innovations: (1) an intelligent data curation and filtering strategy that reduces annotation noise by 37% and improves overall data quality without extensive manual re-annotation; (2) novel self-supervised pretext tasks that develop rich feature representations of road sign characteristics such as color, shape, and contextual positioning; and (3) a multi-headed network architecture that preserves geometric understanding while enabling simultaneous optimization of detection, segmentation, and classification tasks. These innovations collectively address critical map creation challenges, including domain divergence between different imagery sources, class imbalance across sign types, data scarcity for rare classes, and noisy training samples. Evaluation metrics demonstrate substantial improvements, with the enhanced system achieving 92% precision and 93% mAP@0.5 for detection, and processing inputs 64.29% faster than the previous implementation while simultaneously performing multiple tasks. The approach significantly improves performance in challenging scenarios, with a 53% improvement in adverse lighting conditions and 31% higher accuracy in poor weather.
By focusing on fundamental improvements in data quality, feature representation, and architectural design rather than simply adopting newer base models, this work establishes a foundation for more efficient and accurate feature extraction that enables faster global expansion of map coverage without sacrificing quality.
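The data-curation step described above (filtering noisy annotations without manual re-annotation) can be sketched in pure Python. This is an illustrative sketch only, assuming a simple agreement-based filter that flags ground-truth boxes with no sufficiently overlapping box in an independent reference set (e.g., predictions from a trusted model); the function names and the 0.5 IoU threshold are assumptions, not details from the paper.

```python
# Hypothetical sketch of agreement-based annotation filtering.
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_noisy_annotations(annotations, reference_boxes, min_iou=0.5):
    """Split annotations into those corroborated by at least one
    reference box (kept) and those with no match (flagged as noise)."""
    kept, flagged = [], []
    for box in annotations:
        if any(iou(box, ref) >= min_iou for ref in reference_boxes):
            kept.append(box)
        else:
            flagged.append(box)  # candidate annotation noise for review
    return kept, flagged
```

In practice the flagged set would be routed to a cheaper review or re-weighting step rather than discarded outright, which is one way a pipeline could reduce annotation noise without full manual re-annotation.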
-
References
- Behrendt, K., L. Novak, and R. Botros. 2017. “A deep learning approach to traffic lights: Detection, tracking, and classification.” IEEE Int. Conf. Robot. Autom., 1370–1377. Singapore. https://doi.org/10.1109/ICRA.2017.7989163.
- Du, L., W. Chen, S. Fu, H. Kong, C. Li, and Z. Pei. 2019. “Real-time detection of vehicle and traffic light for intelligent and connected vehicles based on YOLOv3 network.” 5th Int. Conf. Transp. Inf. Saf., 388–392. Liverpool, UK. https://doi.org/10.1109/ICTIS.2019.8883761.
- Jensen, M. B., K. Nasrollahi, and T. B. Moeslund. 2017. “Evaluating State-of-the-art Object Detector on Challenging Traffic Light Data.” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work., 882–888. https://doi.org/10.1109/CVPRW.2017.122.
- Jensen, M. B., M. P. Philipsen, A. Møgelmose, T. B. Moeslund, and M. M. Trivedi. 2016. “Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives.” IEEE Trans. Intell. Transp. Syst., 17 (7): 1800–1815. https://doi.org/10.1109/TITS.2015.2509509.
- Jocher, G. n.d. “GitHub - ultralytics/yolov5.” Accessed June 1, 2022. https://github.com/ultralytics/yolov5.
- Li, Z., Q. Zeng, Y. Liu, J. Liu, and L. Li. 2021. “An improved traffic lights recognition algorithm for autonomous driving in complex scenarios.” Int. J. Distrib. Sens. Networks, 17 (5). https://doi.org/10.1177/15501477211018374.
- Liu, S., L. Qi, H. Qin, J. Shi, and J. Jia. 2018. “Path Aggregation Network for Instance Segmentation.” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 8759–8768. https://doi.org/10.1109/CVPR.2018.00913.
- Peng, J., M. Xu, and Y. Yan. 2021. “Automatic Recognition of Pointer Meter Reading Based on Yolov4 and Improved U-net Algorithm.” IEEE Int. Conf. Electron. Technol. Commun. Inf., 52–57. Changchun, China. https://doi.org/10.1109/ICETCI53161.2021.9563496.
- Possatti, L. C., R. Guidolini, V. B. Cardoso, R. F. Berriel, T. M. Paixão, C. Badue, A. F. De Souza, and T. Oliveira-Santos. 2019. “Traffic Light Recognition Using Deep Learning and Prior Maps for Autonomous Cars.” Int. Jt. Conf. Neural Networks (IJCNN). Budapest, Hungary. https://doi.org/10.1109/IJCNN.2019.8851927.
- Wang, C. Y., H. Y. Mark Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh. 2020. “CSPNet: A New Backbone that can Enhance Learning Capability of CNN.” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work., 1571–1580. IEEE Computer Society. https://doi.org/10.1109/CVPRW50498.2020.00203.
- Wang, Q., Q. Zhang, X. Liang, Y. Wang, C. Zhou, and V. I. Mikulovich. 2022. “Traffic Lights Detection and Recognition Method Based on the Improved YOLOv4 Algorithm.” Sensors, 22 (1): 200. https://doi.org/10.3390/s22010200.
- Yan, S., X. Liu, W. Qian, and Q. Chen. 2021. “An End-to-End Traffic Light Detection Algorithm Based on Deep Learning.” Int. Conf. Secur. Pattern Anal. Cybern., 370–373. https://doi.org/10.1109/SPAC53836.2021.9539934.
- Guo, J., You, R., Huang, L., 2020. Mixed vertical-and-horizontal-text traffic sign detection and recognition for street-level scene. IEEE Access 8, 69413–69425. https://doi.org/10.1109/ACCESS.2020.2986500.
- Zhou, S., Qiu, J., 2021. Enhanced SSD with interactive multi-scale attention features for object detection. Multimed. Tools Appl. 80, 11539–11556. https://doi.org/10.1007/s11042-020-10191-2.
- Zhang, J., Sun, J., Wang, J., Yue, X.-G., 2021. Visual object tracking based on residual network and cascaded correlation filters. J. Ambient Intell. Human Comput. 12 (8), 8427–8440. https://doi.org/10.1007/s12652-020-02572-0.
- Temel, D., Chen, M.H., AlRegib, G., 2020. Traffic sign detection under challenging conditions: a deeper look into performance variations and spectral characteristics. IEEE Trans. Intell. Transp. Syst. 21 (9), 3663–3673. https://doi.org/10.1109/TITS.2019.2931429.
- Kamal, U., Tonmoy, T.I., Das, S., Hasan, M.K., 2019. Automatic traffic sign detection and recognition using SegU-Net and a modified Tversky loss function with L1-constraint. IEEE Trans. Intell. Transp. Syst. 1–13. https://doi.org/10.1109/TITS.2019.2911727.
- Wong, A., Shafiee, M.J., St. Jules, M., 2018. MicronNet: a highly compact deep convolutional neural network architecture for real-time embedded traffic sign classification. IEEE Access 6, 59803–59810. https://doi.org/10.1109/ACCESS.2018.2873948.
- Avramović, A., Sluga, D., Tabernik, D., Skočaj, D., Stojnić, V., Ilc, N., 2020. Neural-network-based traffic sign detection and recognition in high-definition images using region focusing and parallelization. IEEE Access 8, 189855–189868. https://doi.org/10.1109/ACCESS.2020.3031191.
- Lee, H.S., Kim, K., 2018. Simultaneous traffic sign detection and boundary estimation using convolutional neural network. IEEE Trans. Intell. Transp. Syst. 19(5), 1652–1663. https://doi.org/10.1109/TITS.2018.2801560.
- G. Dsilva, A. Bhoir, X. Jin and K. Gao, "Systems and Methods for Road Sign Detection from Street Level Imagery Using a Multi-Stage Neural Network," U.S. Patent Application 18/988,231, Dec. 19, 2024.
-
How to Cite
D’silva, G., & Bharadi, D. V. A. (2025). Robust Road Sign Feature Extraction Through Data Curation and Multi-Task Learning for Global Map Creation. International Journal of Basic and Applied Sciences, 14(4), 368-377. https://doi.org/10.14419/enqwz712
