A threshold for a q-sorting methodology for computer-adaptive surveys

Sarah Sabbaghan

Research output: Contribution to conference › Paper

Abstract

© 2017 Proceedings of the 25th European Conference on Information Systems, ECIS 2017. All rights reserved.

Computer-Adaptive Surveys (CAS) are multi-dimensional instruments in which the questions asked of respondents depend on the questions asked previously. Because of this complexity, little work has been done on developing methods for validating the content and construct validity of CAS. We have created a new q-sorting technique in which the hierarchies that independent raters develop are transformed into a quantitative form, and that quantitative form is tested to determine the inter-rater reliability of the individual branches in the hierarchy. The hierarchies are then successively transformed to test whether they branch in the same way. The objective of this paper is to identify suitable measures and a "good enough" threshold for demonstrating the similarity of two CAS trees. To find suitable measures, we perform a set of bootstrap simulations that track how various statistics change as a hypothetical CAS deviates from a "true" version. We find that three measures of association, Goodman and Kruskal's lambda, Cohen's kappa, and Goodman and Kruskal's gamma, together provide information useful for assessing construct validity in CAS. In future work we are interested both in finding "good enough" thresholds for assessing the overall similarity between tree hierarchies and in diagnosing causes of disagreement between them.
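The three association measures named in the abstract can all be computed from two raters' category assignments. The sketch below is illustrative only, not the paper's implementation: the function names and the toy rating data are our own, and each statistic is implemented from its textbook definition in plain Python.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def gk_lambda(a, b):
    """Goodman-Kruskal lambda: proportional reduction in error
    when predicting b's category from a's category."""
    n = len(a)
    e1 = n - max(Counter(b).values())       # errors ignoring a (guess b's mode)
    e2 = 0                                  # errors knowing a's category
    for cat in set(a):
        bs = [y for x, y in zip(a, b) if x == cat]
        e2 += len(bs) - max(Counter(bs).values())
    return (e1 - e2) / e1 if e1 else 1.0

def gk_gamma(a, b):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over concordant and
    discordant pairs; assumes ordinal (numeric) categories."""
    conc = disc = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (conc + disc) if conc + disc else 0.0

# Hypothetical branch labels assigned by two independent raters
rater1 = [1, 1, 2, 2, 3, 3, 1, 2]
rater2 = [1, 2, 2, 2, 3, 3, 1, 1]
print(cohen_kappa(rater1, rater2))  # ≈ 0.619
print(gk_lambda(rater1, rater2))    # 0.6
print(gk_gamma(rater1, rater2))     # ≈ 0.882
```

Because the three statistics penalise different kinds of disagreement (kappa exact matches, lambda predictability, gamma ordinal concordance), reporting them together gives a fuller picture than any one alone, which is consistent with the abstract's claim that the three measures jointly inform construct validity.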
Original language: English
Publication status: Published - 1 Jan 2017
Externally published: Yes

