
Data quality in probability-based online panels: Nonresponse, attrition, and panel conditioning

by B. Struminskaya




Institution: Universiteit Utrecht
Department:
Year: 2014
Keywords: survey methodology; online panels; data quality; survey quality; nonresponse; attrition; panel conditioning
Record ID: 1262878
Full text PDF: http://dspace.library.uu.nl:8080/handle/1874/301751


Abstract

Online panels – surveys administered over the Internet in which persons are asked to complete surveys regularly – offer cost reductions compared to surveys that use more traditional modes of data collection (face-to-face, telephone, and mail). However, some characteristics of online panels may cause errors that threaten data quality. For example, excluding non-Internet users may result in coverage error; if persons selected for the study cannot be reached or do not want to participate, nonresponse error may result; and study participants may stop participating in later waves (attrition). Furthermore, respondents may learn to answer dishonestly or to answer filter questions negatively to reduce the burden of participation. The main question of this dissertation is how good the data collected in probability-based online panels are (i.e., panels in which respondents are selected by researchers through statistical procedures of random sampling). The five studies in this dissertation address questions of data quality using data from a probability-based, telephone-recruited online panel of Internet users in Germany. To assess the quality of the final estimates from the online panel, we compared its data to data from two high-quality face-to-face reference surveys. We found several differences among the surveys; however, most of these differences averaged a few percentage points. We took the analysis further by studying mode-system effects (i.e., differences in estimates that result from the whole process by which they were collected). We found that the online panel and two further reference surveys differed on attitudinal measures. For factual questions, however, the reference surveys differed from the online panel but not from each other. Our overall conclusion is that the data from the online panel are fairly comparable to the data from high-quality face-to-face surveys. This dissertation also concentrated on the processes that can cause errors in data collected in the online panel. We found that participation in the panel is selective: previous experience with the Internet and online surveys predicted willingness to participate and actual participation in the panel. Incentives and the fieldwork agencies that performed the recruitment also influenced the decision to participate. To study why panel members chose to discontinue participation, we contrasted the role of incentives and non-reward motivation. We found that respondents who viewed surveys as long, difficult, or too personal were more likely to attrite, and that incentives (although negatively related to attrition) did not compensate for this burdensome experience. To find out whether respondents who had been in the panel longer would answer differently from respondents with a shorter panel tenure, we conducted two experiments. We found limited evidence of advantageous learning and no evidence of disadvantageous learning. The results of this dissertation provide additional…