Keywords: Automated Grading; Java; machine learning; grade modeling
Full text PDF: http://dspace.library.uu.nl:8080/handle/1874/337629
This study explores the influence of Java code features on the prediction accuracy of manual grades. In particular, we follow the high-level definition of the feature grammar proposed by Aggarwal and Srikant and provide a new low-level interpretation. Through a series of experiments we explore the influence of changing the feature granularity. To use predictive models, the source code of Java solutions has to be converted into suitable input; to this end we develop a feature generation tool for Java code, named JFEX. The grading of the solutions follows a distinctive grading rubric that is more solution-oriented than the originally proposed rubric. We empirically test whether the algorithm-oriented features are able to capture these grade distinctions and verify whether this improves grading accuracy in comparison to test-case-based predictions. Ultimately, this work did not provide significant evidence that feature modeling improves automated test-case-based grading. However, we did find encouraging evidence that the features have the capacity to improve test-case accuracy: a subset of the selected features appeared highly relevant to the problem at hand, and classification modeling that respects the ordinal ordering of the grade levels emerged as the best candidate to realize the potential of the features.

Advisors/Committee Members: Jeuring, J.T., Feelders, A.J.
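The abstract mentions that Java solutions must first be converted into suitable numeric input for the predictive models. JFEX's actual feature grammar is not specified here; purely as an illustrative sketch (the class name `FeatureSketch`, the keyword set, and the regex-based counting are all assumptions, not the tool's real design), converting source code into a coarse feature vector might look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: count occurrences of a few syntactic keywords as
// coarse-grained features. A real extractor like JFEX would parse the code
// and derive features from its grammar rather than from raw tokens.
public class FeatureSketch {
    static Map<String, Integer> extract(String source) {
        Map<String, Integer> features = new LinkedHashMap<>();
        for (String kw : new String[]{"for", "while", "if", "return"}) {
            Matcher m = Pattern.compile("\\b" + kw + "\\b").matcher(source);
            int count = 0;
            while (m.find()) {
                count++;
            }
            features.put(kw, count);
        }
        return features;
    }

    public static void main(String[] args) {
        String solution = "int sum(int[] a){int s=0;for(int x:a){s+=x;}return s;}";
        // Each solution becomes one feature vector, e.g. {for=1, while=0, if=0, return=1}
        System.out.println(extract(solution));
    }
}
```

Varying which constructs are counted, and at what level of the grammar, corresponds to the feature-granularity changes explored in the experiments.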