Countering AI-Driven Adversaries: An Adaptive Deception Framework for Resilient Web Security

  • Authors

    • Saeed Serwan Abdulsattar Dept of Electrical and Electronics Engineering, University of Bahrain
    • Hani Al-Balasmeh Dept of Informatics Engineering, College of Engineering, University of Technology, Bahrain https://orcid.org/0000-0003-3643-0769
    • Mohammed Majed Mohammed Al-Khalidy Dept of Electrical and Electronics Engineering, University of Bahrain
    • Fayzeh Abdulkareem Jaber Dept of Computer Studies, University of Technology, Bahrain
    • Rahmeh Abdulkareem Jaber Dept of Business Administration, University of Technology, Bahrain
    https://doi.org/10.14419/h8f10w38

    Received date: October 12, 2025

    Accepted date: November 8, 2025

    Published date: November 12, 2025

  • Keywords: Cybersecurity; Deception Framework; Behavioral Analytics; Bot Detection; Web Security; Machine Learning
  • Abstract

    This paper presents a novel deception-based cybersecurity framework that redefines web defense through adaptive, embedded traps designed to detect and contain AI-driven automated adversaries. With the rapid advancement of machine learning (ML) and large language models (LLMs), traditional web security measures, such as CAPTCHAs, honeypots, and Web Application Firewalls (WAFs), have become increasingly ineffective. Modern bots can simulate human browsing, execute JavaScript, and evade detection through adaptive algorithms. The proposed framework introduces invisible, dynamically generated traps within the Document Object Model (DOM) and JavaScript layers of web applications to exploit the behavioral disparities between genuine users and automated systems.
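As a minimal illustrative sketch (not the authors' implementation), such a DOM-layer trap could be produced by a server-side helper that emits a visually hidden decoy form and an off-screen anchor pointing at a randomized endpoint; the endpoint pattern, field name, and styling below are assumptions:

```python
import secrets

def generate_decoy_trap() -> str:
    """Return a hidden form plus an off-screen anchor pointing at a
    randomized decoy endpoint. Genuine users never see or reach these
    elements; DOM-crawling bots that submit every form or follow every
    link will trigger them. All names here are illustrative."""
    token = secrets.token_hex(8)            # fresh token per page load
    endpoint = f"/api/internal-{token}"     # randomized decoy URL
    return (
        # Off-screen anchor: invisible to humans, present in the DOM.
        f'<a href="{endpoint}" style="position:absolute;left:-9999px" '
        f'tabindex="-1" aria-hidden="true">archive</a>'
        # Hidden form: auto-filling or auto-submitting bots will hit it.
        f'<form action="{endpoint}" method="post" style="display:none">'
        f'<input type="text" name="website" value="">'
        f'</form>'
    )
```

Because the decoy endpoint changes on every render, an adversary cannot learn a static blocklist of trap URLs, which is one way the "dynamically generated" property described above could be realized.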

    These traps include hidden forms, off-screen anchor links, and randomized decoy endpoints that are imperceptible to human users but detectable by automated bots. Interactions with these traps trigger behavioral analysis routines that compute a Non-Human Interaction Likelihood (NHIL) score, a novel session-level metric that applies sigmoid activation functions to measure behavioral abnormality across multiple parameters. Based on this score, the system classifies, logs, and isolates non-human activity in real time.
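The abstract does not reproduce the NHIL formula, but a session-level metric of the kind described could be sketched as a weighted sum of behavioral signals squashed through a sigmoid; the signal names, weights, bias, and 0.9 threshold below are illustrative assumptions, not the paper's parameters:

```python
import math

def nhil_score(signals: dict, weights: dict, bias: float = 0.0) -> float:
    """Non-Human Interaction Likelihood: sigmoid of a weighted sum of
    per-session abnormality signals, yielding a value in (0, 1)."""
    z = bias + sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-session signals: 1.0 = fully bot-like, 0.0 = human-like.
session = {
    "trap_interactions": 1.0,       # touched a hidden form or decoy link
    "inter_event_regularity": 0.8,  # suspiciously uniform event timing
    "mouse_entropy_deficit": 0.6,   # little or no pointer movement
}
weights = {
    "trap_interactions": 4.0,       # trap contact weighs heaviest
    "inter_event_regularity": 1.5,
    "mouse_entropy_deficit": 1.0,
}

score = nhil_score(session, weights, bias=-2.0)
is_bot = score > 0.9  # threshold is an illustrative assumption
```

The sigmoid keeps the score bounded in (0, 1), so a single fixed threshold can drive the classify/log/isolate decision regardless of how many signals contribute.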

    An experimental evaluation in a test web environment demonstrated perfect classification performance, with an F1-score of 1.0, achieving complete detection accuracy without false positives or degradation in user experience. Page load latency increased by less than five milliseconds, confirming the framework's lightweight and seamless integration.

    By merging adversarial design, behavioral analytics, and adaptive deception, the proposed system establishes a resilient, intelligence-driven approach to web security. It transforms defensive architecture from reactive to proactive, detecting, engaging, and neutralizing automated adversaries at the interaction layer, and offers a scalable model for the next generation of deception-centric cybersecurity.

  • References

    1. A. Vaswani et al., “Attention is all you need,” in Advances in Neural Information Processing Systems, vol. 30, 2017, pp. 5998–6008.
    2. Z. Zhang, X. Wang, and W. Zhu, “Automated machine learning on graphs: A survey,” arXiv preprint arXiv:2103.00742v4, Dec. 2021.
    3. D. W. Otter, J. R. Medina, and J. K. Kalita, “A Survey of the Usages of Deep Learning for Natural Language Processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, Feb. 2021, https://doi.org/10.1109/TNNLS.2020.2979670.
    4. J. Zhang et al., “When LLMs Meet Cybersecurity: A Systematic Literature Review,” arXiv preprint arXiv:2405.03644, Dec. 2024, https://doi.org/10.1186/s42400-025-00361-w.
    5. N. Kshetri, “Cybercrime and Privacy Threats of Large Language Models,” IT Professional, vol. 25, no. 3, pp. 9–13, May–June 2023, https://doi.org/10.1109/MITP.2023.3275489.
    6. S. Sivakorn, I. Polakis, and A. Keromytis, “I am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs,” 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 2016, pp. 388–403, https://doi.org/10.1109/EuroSP.2016.37.
    7. E. Bursztein et al., “How Good Are Humans at Solving CAPTCHAs? A Large Scale Evaluation,” 2010 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 2010, pp. 399–413, https://doi.org/10.1109/SP.2010.31.
    8. A. Earles et al., “Empirical study & evaluation of modern CAPTCHAs,” arXiv:2307.12108v1, 2023.
    9. N. Rathour, K. Kaur, S. Bansal, and C. Bhargava, “A Cross Correlation Approach for Breaking of Text CAPTCHA,” 2018 International Conference on Intelligent Circuits and Systems (ICICS), Phagwara, India, 2018, pp. 6–10, https://doi.org/10.1109/ICICS.2018.00014.
    10. J. Yan and A. S. El Ahmad, “Breaking Visual CAPTCHAs with Naive Pattern Recognition Algorithms,” Twenty-Third Annual Computer Security Applications Conference (ACSAC 2007), Miami Beach, FL, USA, 2007, pp. 279–291, https://doi.org/10.1109/ACSAC.2007.47.
    11. M. Tang et al., “Research on Deep Learning Techniques in Breaking Text-Based Captchas and Designing Image-Based Captcha,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 10, pp. 2522–2537, Oct. 2018, https://doi.org/10.1109/TIFS.2018.2821096.
    12. A. Plesner, T. Vontobel, and R. Wattenhofer, “Breaking reCAPTCHAv2,” 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), Osaka, Japan, 2024, pp. 1047–1056, https://doi.org/10.1109/COMPSAC61105.2024.00142.
    13. K. Nagendran et al., “Web Application Firewall Evasion Techniques,” 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 2020, pp. 194–199, https://doi.org/10.1109/ICACCS48705.2020.9074217.
    14. M. Amouei, M. Rezvani, and M. Fateh, “RAT: Reinforcement-Learning-Driven and Adaptive Testing for Vulnerability Discovery in Web Application Firewalls,” IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 5, pp. 3371–3386, Sept.–Oct. 2022, https://doi.org/10.1109/TDSC.2021.3095417.
    15. B. I. Mukhtar and M. A. Azer, “Evaluating the ModSecurity Web Application Firewall Against SQL Injection Attacks,” 2022 International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 2022, pp. 327–332.
    16. P. Patel, A. Dalvi, and I. Siddavatam, “Exploiting Honeypot for Cryptojacking: The other side of the story of honeypot deployment,” 2022 6th International Conference on Computing, Communication, Control and Automation (ICCUBEA), Pune, India, 2022, pp. 1–5, https://doi.org/10.1109/ICCUBEA54992.2022.10010904.
    17. M. Tsikerdekis et al., “Approaches for Preventing Honeypot Detection and Compromise,” 2018 Global Information Infrastructure and Networking Symposium (GIIS), Thessaloniki, Greece, 2018, pp. 1–6, https://doi.org/10.1109/GIIS.2018.8635603.
    18. J. Qu, X. Ma, and J. Li, “TrafficGPT: Breaking the Token Barrier for Efficient Long Traffic Analysis and Generation,” arXiv:2403.05822v2 [cs.LG], 18 Mar. 2024.
    19. N. Lu, “Large Language Models can be Guided to Evade AI Generated Text Detection,” arXiv:2305.10847v6 [cs.CL], 15 May 2024.
    20. F. Daniel, C. Cappiello, and B. Benatallah, “Bots Acting Like Humans: Understanding and Preventing Harm,” IEEE Internet Computing.
    21. D. B. Acharya, K. Kuppan, and B. Divya, “Agentic AI: Autonomous Intelligence for Complex Goals—A Comprehensive Survey,” IEEE Access, vol. 13, pp. 18912–18936, 2025, https://doi.org/10.1109/ACCESS.2025.3532853.
    22. S. Kusal et al., “AI-Based Conversational Agents: A Scoping Review From Technologies to Future Directions,” IEEE Access, vol. 10, pp. 92337–92356, 2022, https://doi.org/10.1109/ACCESS.2022.3201144.
    23. V. Papaspirou, I. Kantzavelou, L. Maglaras, and H. Janicke, “A novel two-factor honeytoken authentication mechanism,” 2021 International Conference on Computer Communications and Networks (ICCCN), 2021, pp. 1–8, https://doi.org/10.1109/ICCCN52240.2021.9522319.
    24. A. Javadpour et al., “A comprehensive survey on cyber deception techniques to improve honeypot performance,” Computers & Security, vol. 140, Art. no. 103792, May 2024, https://doi.org/10.1016/j.cose.2024.103792.
    25. Z. Moric, V. Dakić, and D. Regvart, “Advancing Cybersecurity with Honeypots and Deception Strategies,” Informatics, vol. 12, no. 1, p. 14, 2025, https://doi.org/10.3390/informatics12010014.
    26. J.-H. Cho et al., “Toward Proactive, Adaptive Defense: A Survey on Moving Target Defense,” IEEE Communications Surveys & Tutorials, vol. 22, no. 1, pp. 709–745, Firstquarter 2020, https://doi.org/10.1109/COMST.2019.2963791.
    27. W. Soussi, M. Christopoulou, G. Xilouris, and G. Gür, “Moving Target Defense as a Proactive Defense Element for Beyond 5G,” IEEE Communications Standards Magazine, vol. 5, no. 3, pp. 72–79, Sept. 2021, https://doi.org/10.1109/MCOMSTD.211.2000087.
    28. J. Pawlick, E. Colbert, and Q. Zhu, “A game-theoretic taxonomy and survey of defensive deception for cybersecurity and privacy,” ACM Computing Surveys, vol. 52, no. 3, pp. 1–32, 2019, https://doi.org/10.1145/3337772.
    29. C. Lam, J. J. Ding, and J.-C. Liu, “XML Document Parsing: Operational and Performance Characteristics,” Computer, vol. 41, no. 9, pp. 30–37, Sept. 2008, https://doi.org/10.1109/MC.2008.403.
    30. M. S. Kang, S. B. Lee, and V. D. Gligor, “The Crossfire Attack,” 2013 IEEE Symposium on Security and Privacy, Berkeley, CA, USA, 2013, pp. 127–141, https://doi.org/10.1109/SP.2013.19.
    31. Y. Xu et al., “Threats to online surveys: Recognizing, detecting, and preventing survey bots,” Social Work Research, vol. 46, no. 4, pp. 1–12, Oct. 2022, https://doi.org/10.1093/swr/svac023.
    32. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986, https://doi.org/10.1038/323533a0.
    33. L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, no. 3, pp. 338–353, 1965, https://doi.org/10.1016/S0019-9958(65)90241-X.
    34. D. R. Cox, “The regression analysis of binary sequences,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 20, no. 2, pp. 215–242, 1958, https://doi.org/10.1111/j.2517-6161.1958.tb00292.x.
  • Downloads

  • How to Cite

    Abdulsattar, S. S., Al-Balasmeh, H., Al-Khalidy, M. M. M., Jaber, F. A., & Jaber, R. A. (2025). Countering AI-Driven Adversaries: An Adaptive Deception Framework for Resilient Web Security. International Journal of Basic and Applied Sciences, 14(7), 296–304. https://doi.org/10.14419/h8f10w38