
A validity argument for the use of scores from a web-search-permitted and web-source-based integrated writing test

by Hee Sung Jun

Institution: Iowa State University
Year: 2014
Keywords: Bilingual, Multilingual, and Multicultural Education; Educational Assessment, Evaluation, and Research
Record ID: 2024502
Full text PDF: http://lib.dr.iastate.edu/etd/13899
http://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=4906&context=etd


Abstract

The field of language assessment has seen a recent surge of literature on assessment tasks that integrate two or more skills, such as reading and writing. Source-based writing is also attracting growing interest in both first and second language studies, particularly around issues of source selection and source language use. The purpose of this study is to build a validity argument for the use of scores from a web-search-permitted and web-source-based integrated writing test. Scores from the test are intended to serve as final exam scores in an academic writing course for international undergraduate students at a large research university in the US. The construct the test is intended to measure, web-researching-to-write or web-source-based writing, is defined by the course syllabus and teaching/learning activities. Seven inferences make up the validity argument: domain description, evaluation, generalization, explanation, extrapolation, utilization, and implication. This chain of inferences connects the target language use domain and observations of performance to scores and leads ultimately to the consequences of test use. Each inference is supported by a warrant, each warrant rests on one or more assumptions, and each assumption is backed by evidence. Mixed methods were used to collect and analyze the data that served as backing: 48 Camtasia screen capture recordings, 50 test essays, 40 post-test test-taker questionnaire responses, 6 post-test test-taker interviews, 9 follow-up test-taker questionnaire responses, 9 follow-up test-taker interviews, 5 instructor interviews, and documents. All of the assumptions underlying the seven inferences were at least partially supported by this backing, so the overall validity argument can be upheld. Further research is suggested to produce additional backing for the comparatively weaker inferences. This study contributes to validation research in language assessment by providing an example of a validity argument constructed for low-stakes, classroom-based testing. It also introduces the web-search-permitted and web-source-based integrated writing test as one with the potential to be adopted by various stakeholders, opening up new possibilities for research on integrated language assessment tasks.