‘Making such bargain’: Transcribe Bentham and the quality and cost-effectiveness of crowdsourced transcription

Tim Causer, Kris Grint, Anna-Maria Sichani, and Melissa Terras

§1. Introduction and context

Research and cultural heritage institutions have, in recent years, given increasing consideration to crowdsourcing in order to improve access to, and the quality of, their digital resources. Such crowdsourcing tasks take many forms, ranging from tagging, identifying, text-correcting, annotating, and transcribing information, often creating new data in the process. Those considering launching their own cultural heritage crowdsourcing initiative are now able to draw upon a rich body of evaluative research, dealing with the quantity of contributions made by volunteers, the motivations of those who participate in such projects, the establishment and design of crowdsourcing initiatives, and the public engagement value of so doing (Haythornthwaite, 2009; Dunn and Hedges, 2012; Causer and Wallace, 2012; Romeo and Blaser, 2011; Holley, 2009). Scholars have also sought to posit general models for successful crowdsourcing for cultural heritage, and attempts have also been made to assess the quality of data produced through such initiatives (Noordegraaf et al., 2014; Causer and Terras, 2014b; Dunn and Hedges, 2013; McKinley, 2015; Nottamkandath et al., 2014). All of these studies are enormously important in understanding how to launch and run a successful humanities crowdsourcing programme. However, there is a shortage of detailed evaluations of whether or not humanities crowdsourcing—specifically crowdsourced transcription—produces data of a high enough standard to be used in scholarly work, and whether or not it is an economically viable and sustainable endeavour.