The Reliability of Crowdsourcing: Latent Trait Modeling with Mechanical Turk

Citation data:

CONFERENCE: Seaver College Research And Scholarly Achievement Symposium

Repository URL:

Authors: Baucum, Matt; Rouse, Steven, Dr.; Miller-Perrin, Cindy; Mancuso, Elizabeth, Dr.
Keywords: statistics; crowdsourcing; methodology; Applied Statistics; Quantitative Psychology
Poster Description:
Mechanical Turk, an online crowdsourcing platform, has recently received increased attention in the social sciences as studies continue to suggest its viability as a source of reliable experimental data. Given the ease with which large samples can be gathered quickly and inexpensively, it is worth examining whether Mechanical Turk can provide accurate experimental data for methodologies that require such large samples. One such methodology is Item Response Theory (IRT), a psychometric paradigm that characterizes test items through a mathematical relationship between a respondent’s latent ability and the probability of item endorsement. To test whether Mechanical Turk can serve as a reliable source of data for IRT modeling (also known as latent trait modeling), the researchers administered a verbal reasoning scale to 500 Mechanical Turk workers and compared the resulting IRT model with that of an existing normative sample. Although the item characteristic curves differed significantly between the two samples, the models agreed closely on the fit of participants’ response patterns and on participant ability estimates. Future research should attempt to extend these findings to other variations of Item Response Theory models.
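The ability–endorsement relationship that IRT formalizes can be sketched with the two-parameter logistic (2PL) model, one common IRT form. The abstract does not state which IRT model the researchers fitted, so the model choice and all parameter values below are purely illustrative:

```python
import math

def item_characteristic_curve(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) IRT model: the probability that a
    respondent with latent ability `theta` endorses an item with
    discrimination `a` and difficulty `b`.

    P(theta) = 1 / (1 + exp(-a * (theta - b)))
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative parameters (not from the study): the endorsement
# probability rises with ability, and equals 0.5 when theta == b.
p_low = item_characteristic_curve(theta=-1.0, a=1.2, b=0.0)
p_mid = item_characteristic_curve(theta=0.0, a=1.2, b=0.0)
p_high = item_characteristic_curve(theta=1.0, a=1.2, b=0.0)
print(p_low, p_mid, p_high)
```

Plotting this function over a range of ability values traces out an item characteristic curve; comparing such curves across the Mechanical Turk and normative samples is the kind of comparison the poster describes.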