Crowdsourcing as research method: hype, hopes, and hazards
By Isabell Stamm, Lina Eklund, and Wanda Liebermann; published by Mark Carrigan on 2016-09-20T19:25:59Z

Crowdsourcing is often hyped as a key element in handling the big data revolution and thereby creating new forms of collective knowledge production. This paper aims to demystify the promises of crowdsourcing as a research method by drawing attention to the ways in which crowdsourcing affects the meanings of scientific knowledge production. Our analysis is informed by the proceedings of a symposium on crowdsourcing in the humanities and social sciences held at UC Berkeley in 2015, a content analysis of the websites of ten crowdsourcing service providers, and a critical reflection on the discourse of digital methods. We show that the translation of crowdsourcing from a business process into a research method is not trivial. As a sociotechnical practice, crowdsourcing is distinct from other methods, yet it carries the epistemological assumptions underlying those other kinds of methods. While there is no doubt that crowdsourcing can be useful as a research method, the knowledge it produces cannot readily be assessed by traditional research standards. Crowdsourcing rests upon an unpredictable composition of the crowd, the malleability of tasks, and the in-built ambiguity of platforms, leaving room for contradictory fantasies about the crowd. The challenge is to create a fit between the assumed characteristics and capabilities of crowd-taskers and the assigned task. Consequently, the image researchers hold of the crowd and its capacities, together with their ideas about what good research is, must act as an implicit compass, one requiring much greater reflexivity than other kinds of methods demand. Hence, we find that crowdsourcing does not per se affect the meanings of scientific knowledge production; rather, the image of the crowd that researchers apply shapes the methodological design in all its detail.
On a pragmatic note, clarifying how researchers understand and draw on platforms, crowd-taskers, and tasks has to be the starting point for any crowdsourcing project. These reflections help researchers make decisions about the design of the crowdsourcing process and, eventually, assess the validity of the data and interpretations. Ultimately, we conclude that crowdsourcing does not fit within a single epistemological paradigm, which complicates the assessment of the quality and value of the knowledge produced.