A Novel Multi-Sensor Fusion SLAM Framework for Anti-Interference

  • Authors

    • Yu Ji, School of Engineering and Technology, Panyapiwat Institute of Management, Nonthaburi 11120, Thailand
    • Jian Qu, School of Engineering and Technology, Panyapiwat Institute of Management, Nonthaburi 11120, Thailand, https://orcid.org/0000-0002-1658-5088
    https://doi.org/10.14419/g10zn898

    Received date: July 31, 2025

    Accepted date: August 7, 2025

    Published date: August 16, 2025

  • Keywords

    Multi-Sensor Fusion; ROS; SLAM; CLAHE Algorithm; LiDAR Noise Filtering
  • Abstract

    Nowadays, autonomous driving places high demands on robustness. SLAM, a key technology in the field of autonomous driving, requires a lightweight and explainable algorithm framework when dealing with perception-degraded scenes. At present, most multi-sensor fusion SLAM algorithms achieve anti-interference by training neural networks or by adding new odometry constraints, but these methods do not satisfy the lightweight and explainability requirements. We use the data communication mechanism of the ROS (Robot Operating System) platform to integrate the CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm, a LiDAR noise filtering algorithm, and RTAB-Map (a multi-sensor fusion algorithm based on graph optimization). We then conducted experiments in three designed perception-degradation scenes and quantified the results with the MME indicator. The results show that the MME of our framework in the perception-degraded environments was reduced by an average of 0.107, demonstrating that the proposed framework outperforms RTAB-Map in some perception-degradation scenes.
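
    As a rough illustration of how such preprocessing can be chained in front of RTAB-Map purely through ROS topic communication, the sketch below shows a minimal CLAHE image-enhancement node. The topic names, node name, and CLAHE parameters (clipLimit, tileGridSize) are illustrative assumptions, not the authors' configuration.

    #!/usr/bin/env python
    # Hypothetical sketch: apply CLAHE to incoming camera frames and republish
    # them so a downstream SLAM node can subscribe to the enhanced stream.
    # Topic names and CLAHE parameters are assumptions for illustration.
    import cv2
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    class ClaheNode:
        def __init__(self):
            self.bridge = CvBridge()
            # clipLimit / tileGridSize are typical defaults, not values from the paper.
            self.clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            self.pub = rospy.Publisher("/camera/image_clahe", Image, queue_size=1)
            rospy.Subscriber("/camera/image_raw", Image, self.callback, queue_size=1)

        def callback(self, msg):
            # Convert ROS image to OpenCV, equalize only the luminance channel,
            # then republish with the original timestamp for sensor synchronization.
            bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            enhanced = cv2.cvtColor(cv2.merge((self.clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
            out = self.bridge.cv2_to_imgmsg(enhanced, encoding="bgr8")
            out.header = msg.header
            self.pub.publish(out)

    if __name__ == "__main__":
        rospy.init_node("clahe_preprocessor")
        ClaheNode()
        rospy.spin()

    RTAB-Map would then be launched with its RGB input remapped to the enhanced topic, and a LiDAR noise-filtering node would be chained into the point-cloud topic in the same way, leaving the SLAM back end itself unmodified.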

  • References

    1. Grisetti, G., Kummerle, R., Stachniss, C., & Burgard, W. (2010). A tutorial on graph-based SLAM. IEEE Intelligent Transportation Systems Magazine, 2(4). https://doi.org/10.1109/MITS.2010.939925.
    2. Liang, M., Yang, B., Chen, Y., Hu, R., & Urtasun, R. (2019). Multi-task multi-sensor fusion for 3D object detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019-June. https://doi.org/10.1109/CVPR.2019.00752.
    3. Tang, Q., Liang, J., & Zhu, F. (2023). A comparative review on multi-modal sensors fusion based on deep learning. In Signal Processing (Vol. 213). https://doi.org/10.1016/j.sigpro.2023.109165.
    4. Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D. L., & Han, S. (2023). BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation. Proceedings - IEEE International Conference on Robotics and Automation, 2023-May. https://doi.org/10.1109/ICRA48891.2023.10160968.
    5. Huang, Y., Shan, T., Chen, F., & Englot, B. (2022). DiSCo-SLAM: Distributed Scan Context-Enabled Multi-Robot LiDAR SLAM with Two-Stage Global-Local Graph Optimization. IEEE Robotics and Automation Letters, 7(2). https://doi.org/10.1109/LRA.2021.3138156.
    6. Labbé, M., & Michaud, F. (2019). RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics, 36(2). https://doi.org/10.1002/rob.21831.
    7. Phan, H. A., Nguyen, P. V., Khuat, T. H. T., Van, H. D., Tran, D. H. Q., Dang, B. L., Bui, T. T., Thanh, V. N. T., & Duc, T. C. (2023). A Sensor Fusion Approach for Improving Implementation Speed and Accuracy of RTAB-Map Algorithm Based Indoor 3D Mapping. Proceedings of JCSSE 2023 - 20th International Joint Conference on Computer Science and Software Engineering. https://doi.org/10.1109/JCSSE58229.2023.10201983.
    8. Yang, S., Song, Y., Kaess, M., & Scherer, S. (2016). Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments. IEEE International Conference on Intelligent Robots and Systems, 2016-November. https://doi.org/10.1109/IROS.2016.7759204.
    9. Pire, T., Fischer, T., Castro, G., De Cristóforis, P., Civera, J., & Jacobo Berlles, J. (2017). S-PTAM: Stereo Parallel Tracking and Mapping. Robotics and Autonomous Systems, 93. https://doi.org/10.1016/j.robot.2017.03.019.
    10. Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., & Rus, D. (2020). LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. IEEE International Conference on Intelligent Robots and Systems. https://doi.org/10.1109/IROS45743.2020.9341176.
    11. Tuna, T., Nubert, J., Nava, Y., Khattak, S., & Hutter, M. (2024). X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments. IEEE Transactions on Robotics, 40. https://doi.org/10.1109/TRO.2023.3335691.
    12. Xanthidis, M., Skaldebø, M., Haugaløkken, B., Evjemo, L., Alexis, K., & Kelasidi, E. (2024). ResiVis: A Holistic Underwater Motion Planning Approach for Robust Active Perception Under Uncertainties. IEEE Robotics and Automation Letters, 9(11), 9391–9398. https://doi.org/10.1109/LRA.2024.3455893.
    13. Chrysanthidis, G. (2023). lidar_noise_filtering. GitHub. https://github.com/ch-geo/lidar_noise_filtering (accessed 05/03/2025).
    14. Yu, L., Yang, E., & Yang, B. (2022). AFE-ORB-SLAM: Robust Monocular VSLAM Based on Adaptive FAST Threshold and Image Enhancement for Complex Lighting Environments. Journal of Intelligent and Robotic Systems: Theory and Applications, 105(2). https://doi.org/10.1007/s10846-022-01645-w.
    15. Lin, X., Yang, X., Yao, W., Wang, X., Ma, X., & Ma, B. (2024). Graph-based adaptive weighted fusion SLAM using multimodal data in complex underground spaces. ISPRS Journal of Photogrammetry and Remote Sensing, 217, 101–119. https://doi.org/10.1016/j.isprsjprs.2024.08.007.
    16. Sabry, M., Osman, M., Hussein, A., Mehrez, M. W., Jeon, S., & Melek, W. (2022). A Generic Image Processing Pipeline for Enhancing Accuracy and Robustness of Visual Odometry. Sensors, 22(22). https://doi.org/10.3390/s22228967.
    17. Frosi, M., & Matteucci, M. (2022). ART-SLAM: Accurate Real-Time 6DoF LiDAR SLAM. IEEE Robotics and Automation Letters, 7(2), 2692–2699. https://doi.org/10.1109/LRA.2022.3144795.
    18. Ferrari, S., Giammarino, L. D., Brizi, L., & Grisetti, G. (2024). MAD-ICP: It is All About Matching Data – Robust and Informed LiDAR Odometry. IEEE Robotics and Automation Letters, 9(11), 9175–9182. https://doi.org/10.1109/LRA.2024.3456509.
    19. Li, M., & Mourikis, A. I. (2013). High-precision, consistent EKF-based visual-inertial odometry. The International Journal of Robotics Research, 32(6). https://doi.org/10.1177/0278364913481251.
    20. Lin, J., & Zhang, F. (2022). R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package. Proceedings - IEEE International Conference on Robotics and Automation. https://doi.org/10.1109/ICRA46639.2022.9811935.
    21. Jia, Y., Luo, H., Zhao, F., Jiang, G., Li, Y., Yan, J., Jiang, Z., & Wang, Z. (2021). Lvio-Fusion: A Self-adaptive Multi-sensor Fusion SLAM Framework Using Actor-critic Method. IEEE International Conference on Intelligent Robots and Systems. https://doi.org/10.1109/IROS51168.2021.9635905.
    22. Lee, J., Komatsu, R., Shinozaki, M., Kitajima, T., Asama, H., An, Q., & Yamashita, A. (2024). Switch-SLAM: Switching-Based LiDAR-Inertial-Visual SLAM for Degenerate Environments. IEEE Robotics and Automation Letters, 9(8), 7270–7277. https://doi.org/10.1109/LRA.2024.3421792.
    23. Li, Y., Zhang, W., Zhang, Z., Shi, X., Li, Z., Zhang, M., & Chi, W. (2025). An adaptive compensation strategy for sensors based on the degree of degradation. Biomimetic Intelligence and Robotics, 100235. https://doi.org/10.1016/j.birob.2025.100235.
    24. Hu, X., Wu, J., Jia, M., Yan, H., Jiang, Y., Jiang, B., Zhang, W., He, W., & Tan, P. (2025). MapEval: Towards Unified, Robust and Efficient SLAM Map Evaluation Framework. IEEE Robotics and Automation Letters, 10(5), 4228–4235. https://doi.org/10.1109/LRA.2025.3548441.
    25. Ding, S., & Qu, J. (2023). Research on Multi-tasking Smart Cars Based on Autonomous Driving Systems. SN Computer Science, 4(3), 292. https://doi.org/10.1007/s42979-023-01740-1.
    26. Li, Y., & Qu, J. (2024). A novel neural network architecture and cross-model transfer learning for multi-task autonomous driving. Data Technologies and Applications, 58(5), 693–717. https://doi.org/10.1108/DTA-08-2022-0307.
    27. Reke, M., Peter, D., Schulte-Tigges, J., Schiffer, S., Ferrein, A., Walter, T., & Matheis, D. (2020). A self-driving car architecture in ROS2. 2020 International SAUPEC/RobMech/PRASA Conference, SAUPEC/RobMech/PRASA 2020. https://doi.org/10.1109/SAUPEC/RobMech/PRASA48453.2020.9041020.
    28. Geneva, P., Eckenhoff, K., & Huang, G. (2018). Asynchronous Multi-Sensor Fusion for 3D Mapping and Localization. Proceedings - IEEE Inter-national Conference on Robotics and Automation. https://doi.org/10.1109/ICRA.2018.8460204.
    29. Eros, E., Dahl, M., Bengtsson, K., Hanna, A., & Falkman, P. (2019). A ROS2 based communication architecture for control in collaborative and intelligent automation systems. Procedia Manufacturing, 38. https://doi.org/10.1016/j.promfg.2020.01.045.
    30. Chen, K., Hoque, R., Dharmarajan, K., Llontopl, E., Adebola, S., Ichnowski, J., Kubiatowicz, J., & Goldberg, K. (2023). FogROS2-SGC: A ROS2 Cloud Robotics Platform for Secure Global Connectivity. IEEE International Conference on Intelligent Robots and Systems. https://doi.org/10.1109/IROS55552.2023.10341719.
    31. Patoliya, J., & Mewada, H. (2019). Comprehensive study and investigation of ROS for computer vision applications using Raspberry Pi. In International Journal of Engineering & Technology (Vol. 8, Issue 3). https://doi.org/10.14419/ijet.v8i3.29694.
    32. Bi, C., Shi, S., & Qu, J. (2024). Enhancing Autonomous Driving: A Novel Approach of Mixed Attack and Physical Defense Strategies. ASEAN Journal of Scientific and Technological Reports, 28(1), e254093. https://doi.org/10.55164/ajstr.v28i1.254093.
    33. Shi, S., & Qu, J. (2024). Multi-Task in Autonomous Driving through RDNet18-CA with LiSHTL-S Loss Function. ECTI Transactions on Computer and Information Technology, 18(2), 158–173.
    34. Hu, X., Zheng, L., Wu, J., Geng, R., Yu, Y., Wei, H., Tang, X., Wang, L., Jiao, J., & Liu, M. (2024). PALoc: Advancing SLAM Benchmarking With Prior-Assisted 6-DoF Trajectory Generation and Uncertainty Estimation. IEEE/ASME Transactions on Mechatronics, 29(6), 4297–4308. https://doi.org/10.1109/TMECH.2024.3362902.
    35. Wang, L., Zhong, X., Xu, Z., Chai, K., Zhao, A., Zhao, T., Jiang, C., Wang, Q., & Zhang, F. (2025). LEMON-Mapping: Loop-enhanced large-scale multi-session point cloud merging and optimization for globally consistent mapping. arXiv preprint arXiv:2505.10018.
    36. Ding, S., & Qu, J. (2022). A Study on Safety Driving of Intelligent Vehicles Based on Attention Mechanisms. ECTI Transactions on Computer and Information Technology, 16(4). https://doi.org/10.37936/ecti-cit.2022164.248674.
    37. Gu, H., & Qu, J. (2025). Semantic-Aware Path Planning by Using Dynamic Weighted Dijkstra for Autonomous Driving. International Journal of Basic and Applied Sciences, 14(3), 345–360. https://doi.org/10.14419/96xxwj87.
    38. Sizintsev, M., Rajvanshi, A., Chiu, H. P., Kaighn, K., Samarasekera, S., & Snyder, D. P. (2019). Multi-Sensor Fusion for Motion Estimation in Visually-Degraded Environments. 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2019. https://doi.org/10.1109/SSRR.2019.8848958.
    39. Tang, J., Liu, S., Liu, L., Yu, B., & Shi, W. (2020). LoPECS: A Low-Power Edge Computing System for Real-Time Autonomous Driving Services. IEEE Access, 8, 30467–30479. https://doi.org/10.1109/ACCESS.2020.2970728.
    40. Lu, G., Yang, H., Li, J., Kuang, Z., & Yang, R. (2023). A Lightweight Real-Time 3D LiDAR SLAM for Autonomous Vehicles in Large-Scale Urban Environment. IEEE Access, 11, 12594–12606. https://doi.org/10.1109/ACCESS.2023.3241800.

  • How to Cite

    Ji, Y., & Qu, J. (2025). A Novel Multi-Sensor Fusion SLAM Framework for Anti-Interference. International Journal of Basic and Applied Sciences, 14(4), 462–474. https://doi.org/10.14419/g10zn898