A Review of Deep Learning-Based Lane Detection Methods in Complex Environments
https://doi.org/10.14419/wb7z2179
Received date: June 27, 2025
Accepted date: August 12, 2025
Published date: August 20, 2025
Keywords: Lane Detection; Complex Environments; Temporal Information Fusion; Global Context Integration
Abstract
Lane detection is pivotal for enhancing the safety and functionality of Advanced Driver Assistance Systems (ADAS) and autonomous driving. Traditional image processing methods, while efficient, struggle in complex environments characterized by occlusions, lighting variations, and road clutter. Deep learning, particularly Convolutional Neural Networks (CNNs), has revolutionized lane detection by enabling automatic feature extraction from raw data, yet challenges persist in handling environmental variability and feature sparsity. This paper comprehensively reviews lane detection methodologies, encompassing both traditional techniques (e.g., Hough transforms, edge detection) and modern deep learning approaches. It emphasizes the critical role of integrating global and local contextual information to improve accuracy in challenging scenarios. Deep learning methods are categorized into three paradigms based on lane representation: segmentation-based, point-based, and parametric models. The review further explores how temporal feature fusion (leveraging consecutive video frames) mitigates occlusions and missing features, while spatial feature fusion captures long-range dependencies for holistic scene understanding. Key findings reveal that temporal-spatial fusion significantly enhances robustness, though real-time performance and adaptability to extreme conditions remain limitations. The paper concludes by identifying future research directions, prioritizing efficient architectures for real-time deployment and improved resilience in dynamic, unstructured environments.
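To make the classical baseline concrete: the traditional pipeline the abstract mentions detects lane markings by extracting edge pixels and then voting them into a (rho, theta) accumulator via the Hough transform, where each straight line satisfies rho = x·cos(theta) + y·sin(theta). The sketch below is a minimal NumPy-only illustration of that voting step on a synthetic edge map (a single diagonal line standing in for a lane boundary); the function name and the toy input are illustrative, not from the reviewed papers.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote each edge pixel into a (rho, theta) accumulator (classical Hough transform)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted by `diag` so indices are non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Synthetic "edge map" standing in for Canny output: the diagonal line y = x
edges = np.zeros((64, 64), dtype=np.uint8)
for i in range(64):
    edges[i, i] = 1

acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
theta_deg = np.degrees(thetas[theta_idx])
rho = rho_idx - diag
print(round(theta_deg), rho)  # y = x satisfies rho = 0 at theta = 135 degrees
```

In a real pipeline the binary edge map would come from a Canny detector, the accumulator peak would be thresholded to return several candidate lane lines, and a fitting step (e.g., RANSAC) would reject outlier votes; as the abstract notes, it is precisely this hand-crafted chain that breaks down under occlusion and clutter, motivating the learned approaches surveyed.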
References
- Y. Zhang, Z. Tu, and F. Lyu, “A Review of Lane Detection Based on Deep Learning Methods,” Mech. Eng. Sci., vol. 5, no. 2, May 2024, doi: 10.33142/mes.v5i2.12721.
- Q. Zou, H. Jiang, Q. Dai, Y. Yue, L. Chen, and Q. Wang, “Robust Lane Detection from Continuous Driving Scenes Using Deep Neural Networks,” IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 41–54, Jan. 2020, doi: 10.1109/TVT.2019.2949603.
- Md. R. Haque, Md. M. Islam, K. S. Alam, and H. Iqbal, “A Computer Vision based Lane Detection Approach,” Int. J. Image Graph. Signal Process., vol. 11, no. 3, pp. 27–34, Mar. 2019, doi: 10.5815/ijigsp.2019.03.04.
- J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986, doi: 10.1109/TPAMI.1986.4767851.
- R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Commun ACM, vol. 15, no. 1, pp. 11–15, 1972, doi: 10.1145/361237.361242.
- X. He et al., “Monocular Lane Detection Based on Deep Learning: A Survey,” Dec. 11, 2024, arXiv: arXiv:2411.16316. doi: 10.48550/arXiv.2411.16316.
- L. Deng, H. Cao, and Q. Lan, “Dynamically Enhanced lane detection with multi-scale semantic feature fusion,” Comput. Electr. Eng., vol. 118, p. 109426, Sep. 2024, doi: 10.1016/j.compeleceng.2024.109426.
- V. Devane, G. Sahane, H. Khairmode, and G. Datkhile, “Lane Detection Techniques using Image Processing,” ITM Web Conf., vol. 40, p. 03011, 2021, doi: 10.1051/itmconf/20214003011.
- S. Sultana, B. Ahmed, M. Paul, M. R. Islam, and S. Ahmad, “Vision-Based Robust Lane Detection and Tracking in Challenging Conditions,” IEEE Access, vol. 11, pp. 67938–67955, 2023, doi: 10.1109/ACCESS.2023.3292128.
- R. K. Megalingam, N. C. Pradeep, A. Reghu, S. A. Sreemangalam, A. Ayaaz, and A. Hegde Kota, “Lane Detection Using Hough Transform and Kalman Filter,” in 2024 International Conference on E-mobility, Power Control and Smart Systems (ICEMPS), Thiruvananthapuram, India: IEEE, Apr. 2024, pp. 01–05. doi: 10.1109/ICEMPS60684.2024.10559324.
- Y. Zhang, Z. Lu, X. Zhang, J.-H. Xue, and Q. Liao, “Deep Learning in Lane Marking Detection: A Survey,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 7, pp. 5976–5992, Jul. 2022, doi: 10.1109/TITS.2021.3070111.
- N. Sukumar and P. Sumathi, “A Robust Vision-based Lane Detection using RANSAC Algorithm,” in 2022 IEEE Global Conference on Computing, Power and Communication Technologies (GlobConPT), New Delhi, India: IEEE, Sep. 2022, pp. 1–5. doi: 10.1109/GlobConPT57482.2022.9938320.
- U. Khamdamov, A. Abdullayev, M. Mukhiddinov, and S. Xalilov, “Algorithms of Multidimensional Signals Processing based on Cubic Basis Splines for Information Systems and Processes,” J. Appl. Sci. Eng., vol. 24, no. 2, pp. 141–150, 2021, doi: 10.6180/jase.202104_24(2).0003.
- D. Neven, B. D. Brabandere, S. Georgoulis, M. Proesmans, and L. V. Gool, “Towards End-to-End Lane Detection: an Instance Segmentation Approach,” Feb. 15, 2018, arXiv: arXiv:1802.05591. doi: 10.48550/arXiv.1802.05591.
- X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang, “Spatial As Deep: Spatial CNN for Traffic Scene Understanding,” Dec. 17, 2017, arXiv: arXiv:1712.06080. Accessed: Jul. 29, 2024. [Online]. Available: http://arxiv.org/abs/1712.06080
- Y. Li et al., “Nighttime lane markings recognition based on Canny detection and Hough transform,” in 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Jun. 2016, pp. 411–415. doi: 10.1109/RCAR.2016.7784064.
- H. Lyu, Z. Zhu, and S. Fu, “ENet-SAD–A CNN-based lane detection for recognizing various road conditions,” in Third International Conference on Algorithms, Network, and Communication Technology (ICANCT 2024), SPIE, Mar. 2025, pp. 268–276. doi: 10.1117/12.3060154.
- T. Zheng et al., “RESA: Recurrent Feature-Shift Aggregator for Lane Detection,” Mar. 25, 2021, arXiv: arXiv:2008.13719. doi: 10.48550/arXiv.2008.13719.
- R. Liu, Z. Yuan, T. Liu, and Z. Xiong, “End-to-end Lane Shape Prediction with Transformers,” in 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA: IEEE, Jan. 2021, pp. 3693–3701. doi: 10.1109/WACV48630.2021.00374.
- Y. Dong, S. Patil, B. Van Arem, and H. Farah, “A hybrid spatial–temporal deep learning architecture for lane detection,” Comput.-Aided Civ. Infrastruct. Eng., vol. 38, no. 1, pp. 67–86, Jan. 2023, doi: 10.1111/mice.12829.
- C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 2, pp. 113–125, Feb. 1998, doi: 10.1109/34.659930.
- J. He, S. Sun, D. Zhang, G. Wang, and C. Zhang, “Lane Detection for Track-following Based on Histogram Statistics,” in 2019 IEEE International Conference on Electron Devices and Solid-State Circuits (EDSSC), Xi’an, China: IEEE, Jun. 2019, pp. 1–2. doi: 10.1109/EDSSC.2019.8754094.
- S. Annadurai, Fundamentals of Digital Image Processing. Pearson Education India, 2007.
- X. Zhang and X. Zhu, “Autonomous path tracking control of intelligent electric vehicles based on lane detection and optimal preview method,” Expert Syst. Appl., vol. 121, pp. 38–48, May 2019, doi: 10.1016/j.eswa.2018.12.005.
- L. Han, Y. Tian, and Q. Qi, “Research on edge detection algorithm based on improved sobel operator,” MATEC Web Conf., vol. 309, p. 03031, 2020, doi: 10.1051/matecconf/202030903031.
- M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981, doi: 10.1145/358669.358692.
- H. Chen, M. Wang, and Y. Liu, “BSNet: Lane Detection via Draw B-spline Curves Nearby,” Jan. 17, 2023, arXiv: arXiv:2301.06910. Accessed: Nov. 02, 2024. [Online]. Available: http://arxiv.org/abs/2301.06910
- L. Tabelini, R. Berriel, T. M. Paixão, C. Badue, A. F. De Souza, and T. Oliveira-Santos, “PolyLaneNet: Lane Estimation via Deep Polynomial Regression,” Jul. 14, 2020, arXiv: arXiv:2004.10924. Accessed: Jul. 27, 2024. [Online]. Available: http://arxiv.org/abs/2004.10924
- R. Raguram, J.-M. Frahm, and M. Pollefeys, “A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus,” in Computer Vision – ECCV 2008, D. Forsyth, P. Torr, and A. Zisserman, Eds., in Lecture Notes in Computer Science, vol. 5303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, pp. 500–513. doi: 10.1007/978-3-540-88688-4_37.
- Z. Feng, S. Guo, X. Tan, K. Xu, M. Wang, and L. Ma, “Rethinking efficient lane detection via curve modeling,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17062–17070. Accessed: May 19, 2025. [Online]. Available: http://openaccess.thecvf.com/content/CVPR2022/html/Feng_Rethinking_Efficient_Lane_Detection_via_Curve_Modeling_CVPR_2022_paper.html
- T. Zheng et al., “CLRNet: Cross Layer Refinement Network for Lane Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA: IEEE, Jun. 2022, pp. 888–897. doi: 10.1109/CVPR52688.2022.00097.
- A. Vaswani et al., “Attention is All you Need,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2017. Accessed: Jun. 26, 2025. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
- S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image Segmentation Using Deep Learning: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 7, pp. 3523–3542, Jul. 2022, doi: 10.1109/TPAMI.2021.3059968.
- A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, “ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation,” Jun. 07, 2016, arXiv: arXiv:1606.02147. doi: 10.48550/arXiv.1606.02147.
- H. Xu, S. Wang, X. Cai, W. Zhang, X. Liang, and Z. Li, “CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending,” Jul. 23, 2020, arXiv: arXiv:2007.12147. doi: 10.48550/arXiv.2007.12147.
- H. Abualsaud, S. Liu, D. Lu, K. Situ, A. Rangesh, and M. M. Trivedi, “LaneAF: Robust Multi-Lane Detection with Affinity Fields,” Aug. 20, 2021, arXiv: arXiv:2103.12040. doi: 10.48550/arXiv.2103.12040.
- Z. Chen, Y. Liu, M. Gong, B. Du, G. Qian, and K. Smith-Miles, “Generating Dynamic Kernels via Transformers for Lane Detection,” in 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France: IEEE, Oct. 2023, pp. 6812–6821. doi: 10.1109/ICCV51070.2023.00629.
- R. Girshick, “Fast R-CNN,” Sep. 27, 2015, arXiv: arXiv:1504.08083. doi: 10.48550/arXiv.1504.08083.
- X. Li, J. Li, X. Hu, and J. Yang, “Line-CNN: End-to-End Traffic Line Detection With Line Proposal Unit,” IEEE Trans. Intell. Transp. Syst., vol. 21, no. 1, pp. 248–258, Jan. 2020, doi: 10.1109/TITS.2019.2890870.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
- L. Tabelini, R. Berriel, T. M. Paixão, C. Badue, A. F. D. Souza, and T. Oliveira-Santos, “Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection,” Nov. 18, 2020, arXiv: arXiv:2010.12035. doi: 10.48550/arXiv.2010.12035.
- Z. Qin, H. Wang, and X. Li, “Ultra Fast Structure-aware Deep Lane Detection,” Aug. 04, 2020, arXiv: arXiv:2004.11757. Accessed: Jul. 30, 2024. [Online]. Available: http://arxiv.org/abs/2004.11757
- Z. Qin, P. Zhang, and X. Li, “Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal Classification,” Jun. 15, 2022, arXiv: arXiv:2206.07389. Accessed: Jul. 31, 2024. [Online]. Available: http://arxiv.org/abs/2206.07389
- L. Chen et al., “PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark,” Jul. 19, 2022, arXiv: arXiv:2203.11089. doi: 10.48550/arXiv.2203.11089.
- M. Wang, Y. Zhang, W. Feng, L. Zhu, and S. Wang, “Video Instance Lane Detection via Deep Temporal and Geometry Consistency Constraints,” in Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal: ACM, Oct. 2022, pp. 2324–2332. doi: 10.1145/3503161.3547914.
- L. Zhuang, T. Jiang, M. Qiu, A. Wang, and Z. Huang, “Transformer Generates Conditional Convolution Kernels for End-to-End Lane Detection,” IEEE Sens. J., vol. 24, no. 17, pp. 28383–28396, Sep. 2024, doi: 10.1109/JSEN.2024.3430234.
- T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature Pyramid Networks for Object Detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI: IEEE, Jul. 2017, pp. 936–944. doi: 10.1109/CVPR.2017.106.
- J. Han et al., “Laneformer: Object-Aware Row-Column Transformers for Lane Detection,” Proc. AAAI Conf. Artif. Intell., vol. 36, no. 1, pp. 799–807, Jun. 2022, doi: 10.1609/aaai.v36i1.19961.
- Y. Zhang et al., “VIL-100: A New Dataset and A Baseline Model for Video Instance Lane Detection,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada: IEEE, Oct. 2021, pp. 15661–15670. doi: 10.1109/ICCV48922.2021.01539.
- Z. Qiu, J. Zhao, and S. Sun, “MFIALane: Multiscale Feature Information Aggregator Network for Lane Detection,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 12, pp. 24263–24275, Dec. 2022, doi: 10.1109/TITS.2022.3195742.
- D. Jin, D. Kim, and C.-S. Kim, “Recursive Video Lane Detection,” Aug. 22, 2023, arXiv: arXiv:2308.11106. doi: 10.48550/arXiv.2308.11106.
- K. Zhou, L. Li, W. Zhou, Y. Wang, H. Feng, and H. Li, “LaneTCA: Enhancing Video Lane Detection with Temporal Context Aggregation,” Aug. 25, 2024, arXiv: arXiv:2408.13852. doi: 10.48550/arXiv.2408.13852.
- L. Liu, X. Chen, S. Zhu, and P. Tan, “CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution,” Feb. 10, 2023, arXiv: arXiv:2105.05003. Accessed: Nov. 02, 2024. [Online]. Available: http://arxiv.org/abs/2105.05003
- Q. Chang and Y. Tong, “A Hybrid Global-Local Perception Network for Lane Detection,” Proc. AAAI Conf. Artif. Intell., vol. 38, no. 2, pp. 981–989, Mar. 2024, doi: 10.1609/aaai.v38i2.27858.
- W. Han and J. Shen, “Decoupling the Curve Modeling and Pavement Regression for Lane Detection,” Sep. 19, 2023, arXiv: arXiv:2309.10533. Accessed: Jul. 25, 2024. [Online]. Available: http://arxiv.org/abs/2309.10533
- Z. Lv, D. Han, W. Wang, and D. Z. Chen, “A Siamese Transformer with Hierarchical Refinement for Lane Detection”.
- K. Zhou, “Lane2Seq: Towards Unified Lane Detection via Sequence Generation,” Feb. 26, 2024, arXiv: arXiv:2402.17172. Accessed: Jun. 11, 2024. [Online]. Available: http://arxiv.org/abs/2402.17172
- A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” Jun. 03, 2021, arXiv: arXiv:2010.11929. Accessed: Jun. 24, 2024. [Online]. Available: http://arxiv.org/abs/2010.11929
- O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” May 18, 2015, arXiv: arXiv:1505.04597. Accessed: Aug. 07, 2024. [Online]. Available: http://arxiv.org/abs/1505.04597
- L. Tabelini, R. Berriel, A. F. De Souza, C. Badue, and T. Oliveira-Santos, “Lane Marking Detection and Classification using Spatial-Temporal Feature Pooling,” in 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy: IEEE, Jul. 2022, pp. 1–7. doi: 10.1109/IJCNN55064.2022.9892478.
How to Cite
Huang, S., Zin, N. A. M., & Hamzah, M. H. I. (2025). A Review of Deep Learning-Based Lane Detection Methods in Complex Environments. International Journal of Basic and Applied Sciences, 14(4), 549–561. https://doi.org/10.14419/wb7z2179
