Enhancing response time thresholds with response behaviors for detecting disengaged examinees

Periodical: Large-scale Assessments in Education
Volume: 8
Year: 2020
Issue number: 5
Relates to study/studies: PISA 2012

Abstract

The item responses of examinees who rapid-guess, that is, who do not spend enough time reading and engaging with an item, will not reflect their true ability on that item. Rapid-disengagement refers to rapidly selecting a response to multiple-choice items (i.e., rapid-guessing), omitting items, or providing short, unrelated answers to open-ended items in an unreasonably short time. Such rapid responses need to be identified to support the validity of arguments made from the test data. Detection of rapid-guessing behaviors is typically based on identifying a threshold that represents the minimum response time required for a student to have thoughtfully considered a given item. This study investigates whether response behaviors can improve the detection of rapid-disengagement through two approaches: (a) using response behaviors to decide on the size of the threshold, and (b) using response behaviors, in addition to response times, as a condition for detecting disengaged examinees, referred to as enhanced methods. Process data and item responses from the PISA 2012 computer-based mathematics assessment were used to examine both approaches under threshold values varying from very small (5 s) to very large (60 s). Results suggested that response behaviors can provide meaningful input for establishing the size of the threshold and that, while enhanced methods outperformed response-time-only methods in recognizing rapid-disengagement in some cases, no clear pattern was observed as to when such improvement occurs. This study makes a unique contribution by inspecting the response behaviors of disengaged examinees and providing guidelines on using response behaviors to decide on the size of the threshold. It also suggests response behavior categories applicable to many item and response types, making them suitable for use in digitally-based large-scale assessments.
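The two approaches described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the article's actual implementation: the field names (`response_time`, `n_actions`, `answered`), the default threshold, and the behavior condition (few logged actions or an omitted answer) are all assumptions chosen for clarity.

```python
# Hedged sketch: flagging a response as rapid-disengagement using a
# response-time threshold (baseline) or a threshold combined with a
# response-behavior condition (an "enhanced" method). All names and
# cutoffs are illustrative, not taken from the article.
from dataclasses import dataclass


@dataclass
class ItemResponse:
    response_time: float  # seconds spent on the item
    n_actions: int        # interactions logged in the process data
    answered: bool        # False if the item was omitted


def is_rapid_disengaged(r: ItemResponse,
                        threshold: float = 5.0,
                        use_behavior: bool = False,
                        min_actions: int = 2) -> bool:
    """Flag a response as rapid-disengagement.

    Baseline: response time below the threshold.
    Enhanced (use_behavior=True): additionally require that the process
    data show little interaction (few logged actions or an omitted answer).
    """
    rapid = r.response_time < threshold
    if not use_behavior:
        return rapid
    low_engagement = (not r.answered) or (r.n_actions < min_actions)
    return rapid and low_engagement


# Example: a 3-second response with several logged actions is flagged by
# the time-only rule but not by the enhanced rule.
resp = ItemResponse(response_time=3.0, n_actions=5, answered=True)
print(is_rapid_disengaged(resp))                     # time-only method
print(is_rapid_disengaged(resp, use_behavior=True))  # enhanced method
```

Under this sketch, the enhanced method can only remove flags that the time-only rule produces; the abstract's finding that enhancement helps only "in some cases" corresponds to whether those removed flags were in fact engaged responses.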