Automated Ranking Assessment based on Completeness and Correctness of a Computer Program Solution

  • Authors

    • S. Suhailan
    • M. K. Yusof
    • A. F. A. Abidin
    • S. A. Fadzli
    • M. S. Mat Deris
    • S. Abdul Samad
  • DOI: https://doi.org/10.14419/ijet.v7i3.28.23438
  • Keywords: program features, automated assessment, ranking features.
  • Abstract: Many automated programming assessment methods require a program to be represented as a set of calculated features. To assess how well a program answers a computational programming question, two main factors must be considered when extracting these features: the incompleteness of the program and the correctness of its solution. Common features rely on matching the program against solution templates to assess its correctness, but the incomplete programs that novice learners typically submit make it difficult for such techniques to parse the program's structure. This research proposes scoring features based on the sequence and ratio of matched instruction templates, which place programs into a ranked list of candidate solutions to a programming question. The features were evaluated against a manual rubric assessment of 67 incomplete Java programs. The results show that the proposed features are highly correlated with the manual rubric assessment (rho = 0.9142086, S = 4299.5, p-value < 2.2e-16). Thus, the proposed features can be used to automatically rank computer programs against the expected instruction templates of a solution, and the resulting ranking can be used to identify the students who are struggling most, for example during a programming lab exercise session.
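
    The exact feature definitions are not reproduced on this page, so the following Java sketch only illustrates the general idea of the two features named in the abstract: a template ratio (how many expected instruction templates appear in a submission) and a template sequence score (how many of them appear in the expected order). The template set, the regular-expression matching, and the equal-weight combination of the two scores are illustrative assumptions, not the authors' published method.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.regex.Pattern;

      // Minimal sketch of instruction-template scoring for ranking incomplete
      // programs. The template set, the regex matching and the equal weighting
      // below are illustrative assumptions, not the published feature definitions.
      public class TemplateRankingSketch {

          // Ratio feature: fraction of expected templates found anywhere in the source.
          static double ratioScore(String source, List<Pattern> templates) {
              long matched = templates.stream()
                      .filter(t -> t.matcher(source).find())
                      .count();
              return (double) matched / templates.size();
          }

          // Sequence feature: fraction of templates matched in the expected order.
          // Walk the source lines and advance to the next expected template whenever
          // the current line matches it (greedy in-order matching).
          static double sequenceScore(String source, List<Pattern> templates) {
              String[] lines = source.split("\\R");
              int next = 0;
              for (String line : lines) {
                  if (next < templates.size() && templates.get(next).matcher(line).find()) {
                      next++;
                  }
              }
              return (double) next / templates.size();
          }

          public static void main(String[] args) {
              // Hypothetical templates for a "sum of N numbers" exercise.
              List<Pattern> templates = new ArrayList<>();
              templates.add(Pattern.compile("Scanner\\s+\\w+\\s*="));    // read input
              templates.add(Pattern.compile("for\\s*\\(|while\\s*\\(")); // loop header
              templates.add(Pattern.compile("\\w+\\s*\\+="));            // accumulate
              templates.add(Pattern.compile("System\\.out\\.print"));    // print result

              // An incomplete submission: no closing braces and no output statement.
              String submission =
                      "Scanner in = new Scanner(System.in);\n" +
                      "int sum = 0;\n" +
                      "for (int i = 0; i < n; i++) {\n" +
                      "    sum += in.nextInt();\n";

              double ratio = ratioScore(submission, templates);
              double sequence = sequenceScore(submission, templates);
              double score = (ratio + sequence) / 2.0; // assumed equal weighting
              System.out.printf("ratio=%.2f sequence=%.2f score=%.2f%n", ratio, sequence, score);
          }
      }

    Scores computed this way can be sorted to produce the ranking list described in the abstract, with the lowest-scoring submissions flagging the students who need the most help during a lab session. A statement-level parser such as the one in [30] could replace the line-based regular-expression matching used in this sketch.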


  • References

    1. [1] M. Joy, 2010, “Automated Assessment,” University of Warwick. https://www2.warwick.ac.uk/fac/sci/dcs/research/edtech/automatedassessment/.

      [2] S. Safei, A. S. Shibghatullah, and B. Mohd Aboobaider, 2014, “A Perspective of Automated Programming Error Feedback Approaches,” Journal of Theoretical and Applied Information Technology, 70(1), 121–129.

      [3] B. E. Vaessen, F. J. Prins, and J. Jeuring, 2014, “University Students’ Achievement Goals and Help-Seeking Strategies in an Intelligent Tutoring System,” Computers and Education, 72, 196–208.

      [4] Y. Udagawa, “A Novel Technique for Retrieving Source Code Duplication,” Proceedings of the Ninth International Conference on Systems, 2014, pp. 172–177.

      [5] B. Biegel, Q. D. Soetens, W. Hornig, S. Diehl, and S. Demeyer, “Comparison of Similarity Metrics for Refactoring Detection,” Proceedings of the 8th Working Conference on Mining Software Repositories, 2011, pp. 53–62.

      [6] E. Stankov, M. Jovanov, A. M. Bogdanova, and M. Gusev, 2013, “A New Model for Semiautomatic Student Source Code Assessment,” Journal of Computing and Information Technology, 21(3), 185–194.

      [7] M. Mojzeš, M. Rost, J. Smolka, and M. Virius, “Feature Space for Statistical Classification of Java Source Code Patterns,” Proceedings of the 15th International Carpathian Control Conference, 2014, pp. 357–361.

      [8] C. Fernandez-Medina, J. R. Pérez-Pérez, V. M. Álvarez-García, and M. D. P. Paule-Ruiz, “Assistance in Computer Programming Learning Using Educational Data Mining and Learning Analytics,” Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education, 2013, pp. 237–242.

      [9] S. Sharma, C. S. Sharma, and V. Tyagi, “Plagiarism Detection Tool ‘Parikshak’,” Proceedings of the International Conference on Communication, Information and Computing Technology, 2015, pp. 1–7.

      [10] S. Schleimer, D. S. Wilkerson, and A. Aiken, “Winnowing: Local Algorithms for Document Fingerprinting,” Proceedings of the ACM SIGMOD International Conference on Management of Data, 2003, pp. 76–85.

      [11] U. Bandara and G. Wijayarathna, 2013, “Source Code Author Identification with Unsupervised Feature Learning,” Pattern Recognition Letters, 34(3), 330–334.

      [12] R. Lange and S. Mancoridis, “Using Code Metric Histograms and Genetic Algorithms to Perform Author Identification for Software Forensics,” Proceedings of the Genetic and Evolutionary Computation Conference, 2007, pp. 2082–2089.

      [13] T. Wang, X. Su, Y. Wang, and P. Ma, 2007, “Semantic Similarity-Based Grading of Student Programs,” Information and Software Technology, 49(2), 99–107.

      [14] C. M. Tang, Y. T. Yu, and C. K. Poon, “A Review of the Strategies for Output Correctness Determination in Automated Assessment of Student Programs,” Proceedings of the 14th Global Chinese Conference on Computers in Education, 2010, pp. 584–591.

      [15] P. Denny, A. Luxton-Reilly, E. Tempero, and J. Hendrickx, “Understanding the Syntax Barrier for Novices,” Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, 2011, pp. 208–212.

      [16] A. Papancea, J. Spacco, and D. Hovemeyer, “An Open Platform for Managing Short Programming Exercises,” Proceedings of the Ninth Annual International ACM Conference on International Computing Education Research, 2013, pp. 47–51.

      [17] D. S. Morris, “Automatic Grading of Student’s Programming Assignments: An Interactive Process and Suite of Programs,” Proceedings of the 33rd ASEE/IEEE Frontiers in Education Conference, 2003, pp. 1–6.

      [18] C. M. Tang, Y. T. Yu, and C. K. Poon, “An Approach Towards Automatic Testing of Student Programs Using Token Patterns,” Proceedings of the 17th International Conference on Computers in Education, 2009, pp. 188–190.

      [19] P. Garg, S. Sangwan, and R. K. Garg, 2014, “Design an Expert System for Ranking of Software Metrics,” International Journal for Research in Applied Science and Engineering Technology, 2(8), 109–117.

      [20] H. M. Manoj and A. N. Nandakumar, 2014, “A Survey on Modelling of Software Metrics for Ranking Code Reusability in Object Oriented Design Stage,” International Journal of Engineering Research and Technology, 3(12), 538–544.

      [21] M. Suarez and R. Sison, 2008, “Automatic Construction of a Bug Library for Object-Oriented Novice Java Programmer Errors,” Intelligent Tutoring Systems, 5091, 184–193.

      [22] E. R. Sykes, 2005, “Qualitative Evaluation of the Java Intelligent Tutoring System,” Journal of Systemics, Cybernetics and Informatics, 3(5), 49–60.

      [23] R. Singh, S. Gulwani, and A. Solar-Lezama, “Automated Feedback Generation for Introductory Programming Assignments,” Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, 2013, pp. 15–26.

      [24] A. Bagini, 2011, “Automatic Assessment of Java Programming Patterns for Novices,” University of Western Australia.

      [25] C.-H. Hsiao, M. Cafarella, and S. Narayanasamy, 2014, “Using Web Corpus Statistics for Program Analysis,” ACM SIGPLAN Notices, 49(10), 49–65.

      [26] G. Frantzeskou, E. Stamatatos, S. Gritzalis, C. E. Chaski, and B. S. Howald, 2007, “Identifying Authorship by Byte-Level N-Grams: The Source Code Author Profile (SCAP) Method,” International Journal of Digital Evidence, 6(1), 1–18.

      [27] A. Pektaş, “Proposal of n-gram Based Algorithm for Malware Classification,” Proceedings of the Fifth International Conference on Emerging Security Information, Systems and Technologies, 2011, pp. 14–18.

      [28] T. Abou-Assaleh, N. Cercone, V. Keselj, and R. Sweidan, “Detection of New Malicious Code Using N-Grams Signatures,” Proceedings of the Second Annual Conference on Privacy, Security and Trust, 2004, pp. 193–196.

      [29] A. Jadalla and A. Elnagar, 2008, “PDE4Java: Plagiarism Detection Engine for Java Source Code: A Clustering Approach,” International Journal of Business Intelligence and Data Mining, 3(2), 121–135.

      [30] S. Safei, S. Abdul Samad, M. A. Burhanuddin, and A. H. Nazirah, 2017, “Program Statement Parser for Computational Programming Feedback,” Journal of Engineering and Applied Science, 12(5), 7057–7062.

  • How to Cite

    Suhailan, S., Yusof, M. K., Abidin, A. F. A., Fadzli, S. A., Mat Deris, M. S., & Abdul Samad, S. (2018). Automated Ranking Assessment based on Completeness and Correctness of a Computer Program Solution. International Journal of Engineering & Technology, 7(3.28), 278–283. https://doi.org/10.14419/ijet.v7i3.28.23438