The Best Effort System to Score Subjective Answers of Tests in a Large Group
Keywords: automatic essay scoring, automatic scoring system, content-based scoring, Internet-based scoring system, short answer scoring, subjective-type evaluation
Subjective tests can improve the quality of education by measuring cognitive abilities, but their biggest drawbacks are a lack of fairness, consistency, and accuracy in grading. To address these drawbacks, we propose a best-effort system that first scores the subjective answers that match a correct answer table compiled by committee members, and then classifies the remaining answers into groups of similar answers so that the latest automatic scoring systems and human graders can assign a reasonable credit to each group.
In the scoring system, the groups of similar answers are evaluated by human raters and by the latest automatic scoring systems, such as syntax-tree-comparison grading and syntax- and semantic-tree-oriented grading. All the scores for each similar answer are summed, and the average for each is stored in a similar answer table. Finally, the system grades each applicant's answers using both the correct answer table and the similar answer table. This paper proposes an algorithm for the best-effort scoring system that incorporates the latest automatic scoring systems so that grading is as fair, consistent, and accurate as possible.
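The pipeline described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function and table names are assumptions, and exact-text grouping stands in for the syntax/semantic-tree similarity comparison the paper references.

```python
from statistics import mean

def best_effort_score(answers, correct_table, similar_table, raters):
    """Sketch of the best-effort pipeline (names are hypothetical).

    answers:       {answer_id: answer_text}
    correct_table: {answer_text: credit} built by committee members
    similar_table: {answer_text: averaged credit}, filled in as a cache
    raters:        callables (human or automatic graders) mapping text -> credit
    """
    scores = {}
    ungraded = []

    # Step 1: score answers that match the committee's correct answer table.
    for aid, text in answers.items():
        if text in correct_table:
            scores[aid] = correct_table[text]
        else:
            ungraded.append((aid, text))

    # Step 2: classify the remaining answers into groups of similar answers.
    # (Exact-text grouping here; the paper uses tree-based similarity.)
    groups = {}
    for aid, text in ungraded:
        groups.setdefault(text, []).append(aid)

    # Step 3: every rater scores each group once; the average credit is
    # stored in the similar answer table and applied to all group members.
    for text, members in groups.items():
        if text not in similar_table:
            similar_table[text] = mean(rater(text) for rater in raters)
        for aid in members:
            scores[aid] = similar_table[text]

    return scores
```

For example, with a correct answer worth 5 credits and two raters scoring an unmatched answer 3 and 4, every applicant who gave that unmatched answer receives the averaged 3.5 credits, so identical answers are always graded identically.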