ChatGPT-Based Preparation Tool for Industrial Engineer Certification Examinations

  • Authors

    • Cheng-Wen Chang Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
    • Chen-Chi Wang Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
    • Yung-Chun Chang Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
    • Chun-Ying Lin Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
    • Jui-Chan Huang Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
    https://doi.org/10.14419/vet88k67

    Received date: December 18, 2025

    Accepted date: January 6, 2026

    Published date: January 9, 2026

  • Keywords

    AI Language Model; Chinese Institute of Industrial Engineers (CIIE); ChatGPT; Answer Accuracy; Chain of Thought (CoT); Tree of Thought (ToT).
  • Abstract

    This study aims to evaluate the performance of artificial intelligence language models in the Industrial Engineer Certification Examination and to analyze their accuracy across different subjects. A total of 750 past exam questions (from 2019 to 2024) across five subjects, namely Quality Management, Production and Operations Management, Operations Research, Engineering Economics, and Human Factors Engineering, were tested using ChatGPT-3.5 and ChatGPT-4o. The models' responses were compared with the standard answers to compute accuracy rates and identify performance differences. Additionally, the Chain of Thought (CoT) and Tree of Thought (ToT) reasoning frameworks were applied to incorrect responses from GPT-4o to assess improvements in reasoning. The results show that ChatGPT-4o achieved significantly higher accuracy and reasoning coherence than ChatGPT-3.5, particularly in Quality Management and Production Management. Overall, this study demonstrates the potential of generative AI as an effective tool for professional education and exam preparation, offering insights for future model optimization and educational integration.
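    The per-subject accuracy comparison described in the abstract can be sketched as follows. This is a minimal illustration only: the record format (subject, model answer, standard answer) and the answer normalization are assumptions for the sketch, not the authors' actual evaluation pipeline.

    ```python
    from collections import defaultdict

    def accuracy_by_subject(records):
        """Compute per-subject accuracy from (subject, model_answer, standard_answer) tuples."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for subject, model_answer, standard_answer in records:
            total[subject] += 1
            # Normalize letter choices before comparing with the standard answer.
            if model_answer.strip().upper() == standard_answer.strip().upper():
                correct[subject] += 1
        return {subject: correct[subject] / total[subject] for subject in total}

    # Hypothetical records, shaped like the study's comparison of model responses
    # against standard answers across subjects.
    records = [
        ("Quality Management", "B", "B"),
        ("Quality Management", "c", "A"),
        ("Operations Research", "D", "D"),
    ]
    print(accuracy_by_subject(records))
    # → {'Quality Management': 0.5, 'Operations Research': 1.0}
    ```

    The same tally, run once per model (ChatGPT-3.5 vs. ChatGPT-4o), yields the accuracy differences the study reports.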

  • References

    1. A. Gocen, F. Aydemir. (2020). Artificial Intelligence in Education and Schools. Research on Education and Media Vol. 12, N. 1. https://doi.org/10.2478/rem-2020-0003.
    2. A. Vakilzadeh, S. Pourahmad Ghalejoogh, M. Hatami. (2023). Evaluating the potential of large language model AI as project management assistants: A comparative simulation to evaluate GPT-3.5, GPT-4, and Google-Bard ability to pass the PMI's PMP test. SSRN. https://doi.org/10.2139/ssrn.4568800.
    3. Chinese Institute of Industrial Engineers. (2024). Certification Examination Overview. https://www.ciie.org.tw/about1-c10h3.
    4. J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, and D. Zhou. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837.
    5. J. G. Meyer, R. J. Urbanowicz, P.C.N. Martin, K. O’Connor, R. Li, P. C. Peng, T. J. Bright, N. Tatonetti, K. J. Won, G. Gonzalez-Hernandez & J. H. Moore. (2023). ChatGPT and large language models in academia: Opportunities and challenges. BioData Mining, 16(1), 20. https://doi.org/10.1186/s13040-023-00339-9.
    6. M. Alfertshofer, C.C. Hoch, P.F. Funk, K. Hollmann, B. Wollenberg, S. Knoedler, L. Knoedler. (2024). Sailing the seven seas: a multinational comparison of ChatGPT’s performance on medical licensing examinations. Annals of Biomedical Engineering 52 (6), 1542-1545. https://doi.org/10.1007/s10439-023-03338-3.
    7. M. Lewandowski, P. Łukowicz, D. Świetlik, W. B. Rybak. (2023). ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology. Clinical and Experimental Dermatology, 49(7), 686–691. https://doi.org/10.1093/ced/llad255.
    8. P. P. Ray. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154. https://doi.org/10.1016/j.iotcps.2023.04.003.
    9. S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan. (2024). Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.
  • How to Cite

    Chang, C.-W., Wang, C.-C., Chang, Y.-C., Lin, C.-Y., & Huang, J.-C. (2026). ChatGPT-Based Preparation Tool for Industrial Engineer Certification Examinations. International Journal of Basic and Applied Sciences, 15(1), 44-48. https://doi.org/10.14419/vet88k67