UNIVERSITY OF PATRAS DEPARTMENT OF ECONOMICS

Ph.D. Thesis

Title:

Quality of work in online labor markets: An empirical study in paid crowdsourcing environments

by

Vaggelis Mourelatos
Department of Economics
University of Patras


University of Patras, Department of Economics

Title:

Quality of work in online labor markets: An empirical study in paid crowdsourcing environments

by

Vaggelis Mourelatos
Department of Economics
University of Patras

Approved by a three-member committee

…………………………… Manolis Tzagarakis, Assistant Professor

………………………… Efthalia Dimara, Professor

………………………… Nikos Karacapilidis, Professor


…………………………….

Mourelatos Vaggelis

Diploma in Economics, Department of Economics, University of Patras (2008)
Master's thesis in Business Engineering, School of Electrical & Computer Engineering, National Technical University of Athens, Athens (2011)

Patras, 2019
© 2019 Vaggelis Mourelatos. All rights reserved.


ABSTRACT

Over the years, a great number of websites offering crowdsourcing services have emerged (e.g. Amazon Mechanical Turk, Crowdflower, Microworkers), focusing on specific tasks that range from general-purpose simple chores to research and development assignments. As the number of crowdsourcing websites is rapidly increasing, research efforts concentrate on examining and analysing this new way of providing labor while, at the same time, addressing the problems that may arise.

The overall purpose of our research, conducted at the Department of Economics of the University of Patras, is to investigate various economic aspects of this new form of work, which is currently becoming ever more important. The emphasis is placed in particular on the issue of quality in such labor markets. To that end, as a first step, we review and analyze existing websites providing crowdsourcing services in an attempt to establish a framework that allows systematic discussion, comparison and assessment of their overall performance.

Moreover, despite the increased popularity of crowdsourcing, little is known about the quality of its output and the determinants of task-specific outcomes. In this thesis, we therefore also investigate the impact of cognitive and non-cognitive skills on the quality of a task-specific outcome by conducting an experiment on a popular crowdsourcing platform. Using linear regression models and controlling for a wide set of individual characteristics and country-specific indicators, we find that worker performance depends on cognitive skills, personality traits and work effort. In particular, micro-workers with higher levels of neuroticism perform worse, a finding in line with related studies of traditional labor markets. These results provide insights into the role of worker attributes in online labor markets.
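The kind of linear specification described above, regressing a task-quality measure on cognitive and non-cognitive characteristics, can be illustrated with a minimal sketch. The variable names and the synthetic data below are purely illustrative assumptions, not the thesis's actual dataset or estimates:

```python
import numpy as np

# Illustrative sketch of an OLS regression of task quality on worker
# attributes. The data are synthetic: we assume (hypothetically) that
# neuroticism lowers quality while cognitive skill and effort raise it.
rng = np.random.default_rng(0)
n = 500
cognitive = rng.normal(size=n)      # cognitive-skill score (assumed)
neuroticism = rng.normal(size=n)    # Big Five neuroticism score (assumed)
effort = rng.normal(size=n)         # work-effort proxy (assumed)

quality = (0.5 * cognitive - 0.3 * neuroticism + 0.4 * effort
           + rng.normal(scale=0.5, size=n))

# OLS via least squares: quality = b0 + b1*cognitive + b2*neuroticism + b3*effort
X = np.column_stack([np.ones(n), cognitive, neuroticism, effort])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
print(dict(zip(["const", "cognitive", "neuroticism", "effort"], beta.round(2))))
```

Under these assumed coefficients, the estimated sign on neuroticism comes out negative, mirroring the direction of the finding reported above; the actual models in the thesis additionally control for individual characteristics and country-specific indicators.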

Thesis Supervisor: Manolis Tzagarakis, Assistant Professor, Department of Economics, University of Patras


ACKNOWLEDGEMENTS

First and foremost, I would like to thank my professor Manolis Tzagarakis for giving me the opportunity to learn from him. His research interests in the "economic aspects of online communities" not only broadened my understanding of the internet but also gave me many relevant ideas for this thesis. Through working with him, both on this thesis and on other projects, I have come to appreciate the true value of the internet as a means of connecting people and mobilizing them for a specific purpose. A great teacher inspires, and I have certainly been inspired by Professor Tzagarakis through our conversations.

In addition, I would like to thank the entire staff of the Department of Economics for all their support during my studies, especially professors Nikolaos Giannakopoulos and Efthalia Dimara. They not only gave me the opportunity to develop my learning skills, but also went out of their way to accommodate my research questions during this thesis.

Finally, I would like to say a heartfelt thank you to my mum, my dad, my brother Haris and my girlfriend Pepita for always believing in me and encouraging me to follow my dreams. I would also like to thank my whole family and my closest friends Nikos, Giorgos and Eleutheria for helping in whatever way they could during this challenging period.


Extended Summary in Greek

Introduction

This thesis analyzes the process and techniques of crowdsourcing and crowdfunding as a new model for finding and providing labor, in which individuals and organizations draw on a pool of internet users to obtain the services or ideas they need.

More specifically, this doctoral thesis explores the steadily growing phenomenon of online work through the lens of the websites that provide a crowdsourcing environment. The following chapters present a microscopic and a macroscopic analysis of the factors that affect the quality of work within these online work environments.

The microscopic analysis focuses on the characteristics of participants in online labor markets, with emphasis on their cognitive and non-cognitive traits, while the macroscopic analysis centers on the online platforms that provide crowdsourcing and crowdfunding services on the internet, and on their characteristics.

In general, this new and innovative form of online work (crowdsourcing) has been growing steadily in recent years, as evidenced by the large number of platforms created in the last five years and by the large numbers of website visits and registered users. This rapid growth and sudden popularity have resulted in considerable complexity, as there is great variation and diversity in the characteristics both of crowdsourcing participants and of the platforms that provide such services.

Consequently, our research seeks to address the need to stabilize the quality of work found in paid crowdsourcing work environments by presenting useful findings on the factors that affect the performance of individuals, as well as of platforms, in online crowdsourcing environments.


The evolution of the Web

The World Wide Web, commonly known as the Web, is not synonymous with the internet; it is the best-known part of the internet and can be defined as a techno-social system in which people can communicate over technological networks. The idea of a techno-social system refers to a system that enhances human cognition, communication and cooperation. Cognition is the necessary prerequisite for communication, and communication the prerequisite for cooperation. In other words, cooperation requires communication, and communication requires cognition/knowledge.

Over the years, however, the characteristics and form of the Web changed. The original World Wide Web (Web 1.0) linked data and information that could be published online only by those who knew HTML (HyperText Markup Language). The internet user was a passive recipient of information. This changed with the advent of Web 2.0, the interactive Web. Internet users could now interact with one another, exchange information and shape the content of a website. Web 2.0 connects people to each other and is largely the result of how easy building websites has become.

The first Web 2.0 tools were blogs, on which internet users can leave comments. Other examples of Web 2.0 tools are wikis and social networking sites.

Online labor markets

In recent years, a large number of online labor markets have emerged that allow workers to offer their labor for pay to a corresponding 'pool' of labor buyers. The creators of these online markets act as intermediaries, providing institutional support and correcting informational asymmetries. Online labor markets are markets where (1) labor is exchanged for money, (2) the product of that labor is delivered over telecommunication networks, and (3) the allocation of labor and money is determined by a collection of buyers and providers of labor operating within a price system.

Online Labor Markets (OLMs) fall into two broad categories: ''spot'' and ''contest''. No online market is truly ''spot'' in the sense of a commodity market, but certain OLMs feature agreements between buyers and providers at prices fixed for a specific period. Examples of spot markets are the online platforms Odesk, Elance, iFreelance and Guru. On these websites, workers create online profiles and labor buyers


post jobs and wait for workers to apply, or actively seek out applicants themselves.

In contest markets, buyers propose competitions for creative and information goods, such as a logo (e.g. 99 Designs and CrowdSPRING), solutions to engineering problems (e.g. Innocentive) or legal research (e.g. Article One Partners). In this category of OLMs, participants create their own versions of the good and buyers select a winner from a 'pool' of contestants. In some markets, the buyer must agree to pick and pay a winner before posting the contest. In other, higher-risk markets, where a solution may not be feasible, the buyer is not obliged to select a winner.

The nature and principles of online labor markets differ from those of traditional markets in at least two respects. First, in these markets there is no single labor ''good/product'' with a directly observable quality and a single prevailing price; both the jobs offered and the workers have their idiosyncrasies. As a result, firms and workers alike find it hard to identify a good match, even when collaborations do form. It is difficult for each side to know exactly what it will get when entering into a contract. Such informational asymmetries between buyers and providers of labor, when combined with strategic behavior, can undermine the quality of work; if severe enough, they can even prevent the markets from continuing to exist. Second, labor is a service delivered over a period of time and is often accompanied by relationship-specific investments in human capital (e.g. learning a particular skill for a particular job). This creates a host of incentive problems that make it difficult for the parties involved to cooperate fully (Williamson 1979).

In traditional labor markets, external intermediaries such as temp agencies, unions and testing services profit by providing information. The creators of online labor markets do the same, although their scope is broader and covers more categories. They also provide infrastructure such as payment and listing systems, communication facilities and search technology: services usually provided either by the government or by the contracting parties themselves.

All of the above describe the enormous impact that the Web and its evolution have had on labor markets. Precisely for this reason, researchers


began in the late 1990s to examine in depth various questions, such as whether we would at some point witness the emergence of exclusively online labor markets, where geographically dispersed workers and buyers could contract for work to be delivered ''online''. Such markets would be an unprecedented development, since labor markets have always been geographically bound and fragmented.

Researchers held different views: Malone predicted the emergence of many markets with these characteristics (Malone & Laubacher 1998), while Autor had doubts, arguing that informational asymmetries would make the emergence of such markets relatively unlikely. He instead predicted the appearance of external intermediaries who would use their own reputation to provide broad information about workers, such as their ability, qualifications, reliability and work ethic, to buyers who would be unwilling to hire workers based solely on demographic characteristics and self-assessments.

In the roughly ten years that followed, we saw the emergence of a number of truly global online labor markets, as Malone had predicted. By 2009, more than 2 million worker accounts had been created across various online markets, with more than 700 million dollars in gross wages paid out to workers (Frei, 2009; Horton & Hilton 2010). Nevertheless, and in line with Autor's views, these markets did not appear 'out of nowhere', but within carefully constructed platforms that adopted the policies and principles of crowdsourcing.

Origin and definition of crowdsourcing

Crowdsourcing is a form of collective online activity in which an entity (the ''crowdsourcer'') proposes, via an open call, that a large group of individuals (the ''crowd'') voluntarily undertake a task. The term was first coined by Jeff Howe in 2006 in an article in Wired magazine entitled ''The Rise of Crowdsourcing'', according to which:

''Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer production (when the job is performed collaboratively), but it is also often undertaken by individuals. The crucial prerequisite is that it


takes the form of an open call and involves a large network of potential workers.''

Howe was inspired and influenced by James Surowiecki and his book ''The Wisdom of Crowds''. There, Surowiecki describes the notion of collective intelligence: ''under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them'' (Selzer & Mahmoudi, 2006). For exactly this reason, and on the basis of the above, crowdsourcing works as a tool that taps into this collective intelligence or into external capabilities. The word crowdsourcing itself reflects its definition, being a combination of the words ''crowd'' and ''outsourcing'', referring to the participants in crowdsourcing and to outsourcing as a commercial practice. The definition comprises four critical elements: the platform, the crowd, the open call and the assignment of the task.

It should also be remembered that crowdsourcing technology has become more sophisticated, connecting freelancers and enthusiasts with companies looking to complete projects or simple tasks. For crowdsourcing to work, the requester of a job must identify the most suitable crowdsourcing website, based on the features and tools it provides. Online crowdsourcing platforms are thus web-based communities containing people with different qualifications (workers) ready to work on crowdsourcing campaigns. An individual requester or a company can use these platforms, on which clients can outsource a wide variety of creative work at lower cost than through traditional means. These online labor marketplaces have certain characteristics that affect the quality of the work produced, which are analyzed in Chapter 3.

A key ingredient of crowdsourcing is also the nature of the connection between the requester of a job online and the potential workers. In other words, crowdsourcing consists of an open call over the internet for a creative idea, the solution or evaluation of a problem, or any other type of business issue, allowing anyone (from the crowd) to submit a solution. One of the characteristics that differentiate the people in the crowd is that they must be compensated, since they act almost voluntarily. In this sense, the term 'open' denotes the act of a company, an institution/organization or an individual requester that assigns a variety of tasks via online platforms to an undefined (and often large) network of people, bringing together all potential


participants without being restricted to experts or pre-selected candidates, and this participation is not subject to discrimination (Schenk & Guittard 2011). In this way, anyone can respond to the call (individuals can participate, as can companies, non-profit organizations or communities of individuals).

Whitla explains this quite clearly, stating that the call can be of one of the following three types:

• a truly open call, in which anyone interested may participate
• a call restricted to a community with specific knowledge and experience
• a combination of the two, in which an open call is issued

For this reason, another essential characteristic of crowdsourcing is the ''crowd'' itself, which, as most scholars agree, consists of a large number of people called workers (Kozinets et al. 2008). Many scholars also agree that the crowd must be heterogeneous in its characteristics, such as demographics, and especially in the knowledge and skills of its members (Selzer & Mahmoudi 2012). Accordingly, several attempts have been made to analyze demographic characteristics in relation to theories of motivation. For example, Ipeirotis examined one of the largest online crowdsourcing platforms (Amazon Mechanical Turk) and showed that it consists mainly of young people, predominantly women from small working-class households in the USA and India. Ipeirotis also linked the demographic and socio-economic characteristics of the crowd to their motives for participating in crowdsourcing environments. His research showed, for instance, that motives differed considerably between Indian and US workers: very few Indians participate in Mechanical Turk simply to ''kill time'', and most Indians treat Mechanical Turk as a primary source of income (which is hardly surprising if one considers the average income of an Indian worker relative to that of an American) (Ipeirotis 2010). Even now, however, a large part of the world's population has no internet access, especially at the high connection speeds that would enable them to participate in online crowdsourcing projects. As a result, the diversity of the crowd is limited; certain age groups or nationalities, for example, are under-represented.


Last but not least, the most fundamental element of crowdsourcing is the nature, types and individual characteristics of the online jobs or tasks. Scholars hold differing views on the work or task the crowd is meant to undertake. It can range from simple classifications to the generation of an idea or the development of a new product (such tasks are also known as Human Intelligence Tasks, HITs). Even Howe himself does not explain the task in his definition of crowdsourcing, and he later acknowledged that the task need not be carried out by the company but can be performed by the workers alone. Regardless of its type, a task submitted by a requester to a crowdsourcing platform needs to have a clear goal and to be aligned with the principles of crowdsourcing in order to achieve the desired result. Chapter 3, which contains our research on the characteristics of online labor markets, presents in detail all the possible types of an online crowdsourcing task.

Taking all the above into account, the mutually beneficial nature of crowdsourcing is evident. Its advantages rest on the benefits that the crowdsourcing process yields both for requesters (a company or an individual) and for workers. Through crowdsourcing, as a model for finding business solutions online, a company gains access to ideas, innovations, information and external knowledge, which it uses to create value (Aitamurto et al. 2011; Sloane 2011), while also being able to draw on inexpensive labor.

The monetary prize, or any other kind of compensation, represents only a small fraction of the cost the company would have incurred had it hired a professional advertising agency or handled the necessary task in-house with its own resources. Crowdsourcing is therefore particularly valuable when the task is carried out at a lower cost than performing it entirely in-house would entail, and when the result turns out to be better and more tailored to the client's needs (Whitla, 2009; Selzer & Mahmoudi, 2012). Likewise, crowdsourcing reduces the cost of generating and producing ideas relative to the corresponding cost in the regular labor market (Brabham, 2008). Workers, on the other hand, may take part in crowdsourcing projects because they derive enjoyment from completing the task, wish to share their knowledge and talents, seek social recognition, or want to become part of a community in a way that yields financial rewards (Mladenow et al., 2014; Kozinets et al. 2008).


An effective crowdsourcing process therefore involves the use of the internet and consists of a clearly defined crowd, a task with a clear goal, a clearly defined crowdsourcer, clearly defined compensation (value) received by the crowdsourcer, an online participatory assignment process, and an open call of varying scope.

In conclusion, it is evident that crowdsourcing has evolved beyond its original definition. Its process has many layers and involves several steps before the final result is reached. For this reason, our research took into account a growing number of characteristics in order to provide answers and reach clear conclusions regarding the questions and objective of this doctoral thesis.

Types of crowdsourcing

In general, crowdsourcing covers four areas:

Crowd labor

Crowd labor allows one to seek freelancers who will complete part or all of an online project. Individuals can be sought to perform specific tasks at a set price, as on the website Fiverr, or projects can be posted as contests or as work-for-hire, with talented freelancers competing as a result. Amazon Mechanical Turk allows participants to split a project into a huge number of tasks and assign them to separate online workers.

Open innovation / crowd creativity

These crowdsourcing companies allow various people to contribute to projects. HitRECord is a crowdsourcing company for art and video that lets anyone publish and collaborate on artistic projects, the results of which have competed at film festivals.

Access to distributed knowledge or expertise

Wikipedia is the best-known example of a crowdsourcing website, used to access and share knowledge from various sources; there are, however, companies that apply the approach to more specific business purposes. This can include customer feedback or beta-testing processes.


Crowdfunding

Companies and entrepreneurs turn to the public to fund ideas. Anyone interested in this type of funding can look at the websites devoted to crowdfunding and business crowdfunding. Although there is probably no single way to categorize the crowdsourcing landscape, the most popular classifications, produced by practitioners and researchers, define crowdsourcing characterized by monetary reward (i.e. paid crowdsourcing) according to the following four variables:

✓ the type of work performed
✓ the motivation for participating
✓ how its applications operate
✓ the problems crowdsourcing attempts to solve

More specifically:

First, crowdsourcing based on the type of work performed by the crowd, and on the way individuals within the crowd communicate and collaborate with one another, is categorized into:

• Social production crowds: a large group of individuals lend and contribute their separate talents to create a product (for example, Wikipedia or Linux).
• Averaging crowds: these provide averaged opinions on complex issues, which may be more accurate than the opinion of any single individual (as happens, for example, in the stock market).
• Data-mine crowds: a large crowd of people, without any awareness on the part of its members, generates a body of behavioral data that affords more insightful views of market patterns (for example, the recommendation systems of Amazon and eBay).
• Networking crowds: a crowd of people sharing information through a common communication system such as Facebook or Twitter.
• Transactional crowds: a group coordinated mainly around point-to-point transactions (for example, eBay and Innocentive).


This categorization is useful because it enables an understanding of the different capabilities that crowds possess and the many ways in which they can work together, or each separately, to carry out a project (Carr 2010). Crowdsourcing can also be categorized by the motivation that drives crowds to participate in a crowdsourcing application. More specifically:

• Communals: they tie their identity to the crowd and develop social capital through participation in the website.
• Utilizers: they develop social capital by advancing their individual skills through the website.
• Aspirers: they help select content in crowdsourcing contests but do not contribute any new content themselves.
• Lurkers: they simply watch.

This particular categorization focuses more on the members of the crowd than on the problems crowdsourcing can solve (Martineu, 2012). In addition, crowdsourcing can be categorized according to how its various applications operate. Thus we have:

• Crowd wisdom: the ''collective intelligence'' of individuals inside or even outside an organization is used to solve difficult problems (Innocentive is the classic example).
• Crowd creation: the skills and deep knowledge of a crowd of people are harnessed to create new products.
• Crowd voting: the community votes for its favorite idea or product (Threadless is Howe's primary example).
• Crowdfunding: there is a variety of different types of funding platforms on the market (some reward-based and some equity-based), serving different purposes.

Finally, according to Darren Brabham, none of the above categorizations of crowdsourcing focuses on the kind of problem an organization is trying to solve when it turns to the crowd. His own taxonomy is in fact centered on the kind of problem, and its characteristics, that crowdsourcing is called upon to solve (Brabham, 2013).


In this direction, we have:

• Knowledge discovery and management: an organization tasks a crowd with finding and collecting information into a common format (examples include Peer-to-Patent, peertopatent.org, SeeClickFix, and more recently BeMyEye by Gianluca Petreli). It is ideal for gathering and organizing information and for reporting problems.
• Broadcast search: an organization tasks a crowd with solving empirical problems (e.g. Innocentive, Goldcorp Challenge). Ideal for problems with empirically provable solutions, such as scientific challenges.
• Peer-vetted creative production: organizations task a crowd with creating and selecting creative ideas (e.g. Threadless, Doritos contest).
• Distributed human intelligence tasking: suited not to producing designs, finding information or developing solutions, but to processing data. Large data problems are broken down into smaller tasks requiring human intelligence, and members of the crowd are compensated/rewarded for processing pieces of data. Monetary compensation is the incentive for participation. Amazon Mechanical Turk is the perfect example.

Taking all the above into account, it becomes readily apparent that the nature of crowdsourcing has complex dimensions and encompasses a large number of processes, procedures and characteristics. Despite the multitude of definitions of crowdsourcing, one constant concerns the posting of problems to the general public and the open call for contributions to their solution.

Purpose and object of the research

The purpose of this doctoral thesis is to investigate in depth which factors have an effect on the quality of work on online crowdsourcing workplaces. To investigate this issue, the research examined the characteristics of the composition of the workforce (i.e. the workers) and of the technological applications provided (i.e. the crowdsourcing platforms). This led to the formulation of two research questions (RQs), which are subsequently outlined through the presentation of the findings of the quantitative research, conducted in the form of an experimental research design. More specifically, the research questions are summarized as follows:


RQ1. To what extent do the characteristics of an online crowdsourcing platform affect the number and overall performance of the tasks carried out on these platforms? (Macroscopic analysis)

RQ2. To what extent do the characteristics and behavioral intentions of a worker affect his or her overall performance when participating in online crowdsourcing tasks? (Microscopic analysis)

Ultimately, the aim of this research is to present evidence on the explanatory role of various characteristics, whether these concern the platforms that provide crowdsourcing services or the workers who take part in such online campaigns. For this reason, the research focuses exclusively on the concept of crowdsourcing and, by extension, on online work environments, taking into account recent developments and the fact that more and more companies use the technological capabilities of the internet to create value with their customers in these environments. Accordingly, this research adopts a different approach from the aforementioned studies on quality of work and attempts to incorporate new dimensions into the broader research agenda, such as the role of workers' personal characteristics, using these new dimensions as a guide for predicting their online performance.

The research gap and the contribution of this doctoral thesis

In recent years, crowdsourcing environments have attracted the interest of researchers from various fields who aspire to investigate, analyze, and understand this new form of work. Accordingly, many researchers, coming mainly from the social sciences, such as economics and psychology, but also from applied sciences, such as computer science and engineering, have conducted studies using this new mode of online work.

In particular, a growing number of psychologists use crowdsourcing platforms to conduct valid studies on large and diverse samples, precisely because they can draw on large pools of participants with varied characteristics and on reliable procedures. Through this mode of online data collection, psychologists' studies focus on measuring and interpreting the personal characteristics of the crowd, grounded in an established scientific framework of behavior (Moriarty, 2010; Bates & Lanza, 2013).


Similarly, the work of economists aims to test basic labor-economic theories, using experimental economic research and analyzing the factors of this new online business model, with the help of the advantages of crowdsourcing: for example, online research costs less than traditional questionnaire-based research, data are collected from a wider cross-section of the population, and data collection is faster.

The primary goal of economists is to observe and examine the overall impact of online crowdsourcing communities, so that online work can become a tool for broader economic development (Horton & Chilton, 2010; Chandler & Horton, 2011).

Finally, in recent years crowdsourcing has increasingly become a focal point for scholars in computer science and engineering. Research in these fields seeks to answer various questions, mainly concerning a taxonomic framework for crowdsourcing processes that enables effective and efficient management of the crowd (Geiger et al., 2011; Feller et al., 2010; Doan, 2011), and to identify effective ways of maximizing the utility of crowdsourcing practices based on co-creation and user-innovation theories (Schenk & Guittard, 2011; Leimeister, 2009).

Taking into account all of the aforementioned studies on crowdsourcing, it is readily apparent that quality of work is the center of attention in online labor markets. Researchers have used a rich variety of terms, strategies, and approaches in order to analyze its determinants and their impact on online work in crowdsourcing environments. To this end, previous studies have linked quality of work to the design of quality assurance techniques, involving their implementation by online platforms, and to theories of motivation and behavior aimed at increasing worker productivity.

Despite the growing attention paid to quality of work, and the tendency of authors to focus on various dimensions of quality in online labor markets, the factors that affect the performance of the parties involved in crowdsourcing, and that thus shape the final outcome, are still not clear.

Consequently, there is a pressing need to further investigate the characteristics of this new form of work, so that it can be understood more deeply and so that efficient quality assurance techniques can be proposed which will improve the ability not only of the individual but also of the platform to better grasp the digital content of crowdsourcing, resulting in higher levels of work quality.

For precisely this reason, this doctoral thesis focuses on the quality of online work by analyzing in depth, through a macroscopic analysis, the relationship between the characteristics of online crowdsourcing platforms and their overall performance, and, through a microscopic examination, the relationship and impact of workers' characteristics (for example, cognitive and non-cognitive skills) and of their socio-economic status on the quality of their output.

Macroscopic Analysis

This section presents the macroscopic approach of the quantitative study on the performance of online work. It includes a definition of the concept of online platform performance, as well as a summary of its most important results. The macroscopic results on performance will later be combined with the findings concerning the performance of the individuals who take part in crowdsourcing, in order to gain insights into what ultimately affects the quality of online work.

Introduction

The volume of knowledge shared online through the World Wide Web is growing at a geometric rate. In today's Web environment, users exchange knowledge and opinions using discussion forums, social networks, and a variety of collaborative support systems. The pervasive presence of such a Web, and the massive-scale communication among users, make it possible to characterize these environments as exhibitors of ''collective intelligence'' (Malone, 2009), defined as ''universally distributed intelligence, constantly enhanced, coordinated in real time, resulting in the effective mobilization of individual skills'' (Levy, 2009). While in the aforementioned environments collective intelligence emerges in a relatively indirect way, there are various efforts to explicitly and directly harness and exploit such collective intelligence in today's Web setting.

Owing to the aforementioned tremendous growth of Web 2.0 and its participatory nature, a vast, highly knowledgeable workforce has entered the online labor pool, enabling the development of new forms of markets and innovative models of online work (Kim & Lee, 2006).


Crowdsourcing can thus be viewed as a further evolution of outsourcing. Ever since Jeff Howe first introduced the term ''crowdsourcing'' in 2006 (Howe, 2006), defining it as ''...the act of taking a job traditionally performed by a designated employee of a company and outsourcing it to an undefined, generally large group of people in the form of an open call'', crowdsourcing has evolved into a defining part of the modern internet, where everything is designed to benefit from the online world. Every day thousands of workers categorize images, write articles, translate texts, or perform various other kinds of tasks in such environments. Crowdsourcing, as a term, denotes a strategic model for attracting an interested and motivated crowd of individuals capable of providing solutions that are better and more effective, in quality and quantity, than those that even traditional firms can produce (Brabham, 2008).

Today the term has become synonymous with online work. With the growth of large crowdsourcing and crowdfunding websites, such as Amazon Mechanical Turk and Kickstarter respectively, it is possible to gain easy and fast access to a vast human workforce with a broad base of knowledge and skills, which can be used to tackle problems requiring human intelligence. Although many have questioned the quality of the results and the professionalism of the workers on such online platforms (Poetz & Schreier, 2012), new developments in this field are continuous and rapid. For precisely this reason, there is a need for an analysis of the anatomy of such online platforms and of the characteristics that significantly affect their performance.

Although online firms are not identical to traditional ones, there is growing evidence of their convergence, owing to the general integration of the internet and its features in both cases (Straub et al., 2004). Nor should we forget that an online platform is in fact a business on the internet, and all businesses aim to improve their profitability in order to maintain their viability and competitiveness (Alsyouf, 2007).

In our case, the dataset consists of online firms (based on the principles of the e-business model) that have internet users as potential customers, and the crowdsourcing communities of Web 2.0 as their field of work activity (Hoegg et al., 2006). For these reasons, their main strategies focus on how to improve their conversion rate, that is, the share of visitors who take up an online job (Rappa, 2000). Nevertheless, it is still not clear which factors act upon a crowdsourcing website's effort to attract targeted traffic so as to achieve financial success.

Building on the above, the research presented in this chapter investigates certain aspects of crowdsourcing platforms that determine their impact on competitiveness and performance. The study monitored and examined 174 crowdsourcing and crowdfunding platforms over a period of five years, in order to assess whether their traffic volume is indeed affected by factors related to their characteristics and practices (such as the type of services offered, the region of establishment, and the use of different digital marketing strategies).

Bearing in mind the definition that ''firm performance'' generally consists of a company's overall output, regardless of whether it operates online or not (Richard et al., 2009), and also that online firms rely heavily on web metrics to estimate their performance, we collected website performance metrics for each online platform as indicators of its performance (Benwell et al., 2010).

The aforementioned data were collected from a well-known website called Alexa, which provides a detailed picture of website traffic. In the context of Webometrics, the process of measuring various aspects of websites, including their popularity and usage patterns, Alexa has been shown to outperform similar services such as Google Trends for Websites and Compete (Vaughan & Yang, 2013). Regarding the characteristics of the crowdsourcing platforms, we examined each website individually and collected all the data required for the analysis (Kim et al., 2010).

Some information and data were not directly available on the websites, which affected the scope and completion time of our research. As a result, we had a small amount of missing traffic information for some websites, which was ultimately treated as not available (N/A). In addition to the web characteristics of each online firm, we constructed further variables by analyzing each website one by one against various criteria. To this end, we created an account on each crowdsourcing website so as to have full access to its features.

The study in this chapter concerns a sample of the top one hundred crowdsourcing websites per year over a five-year period. Analyzing the time-invariant variables with OLS, we found evidence of a strong relationship between the performance of crowdsourcing firms and groups of specific characteristics. Moreover, using a fixed-effects analysis over time, which controls for the influence of time-invariant variables, we found that a website's productivity and its penetration into mobile devices (smartphones and tablets) have a strong effect on its performance over time.
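The two estimation steps just described can be sketched as follows. This is a minimal illustration, not the thesis code: the panel data are hypothetical toy numbers, and the fixed-effects step is implemented as the standard within (demeaning) transformation, which removes all time-invariant site characteristics before running OLS on what remains.

```python
# Sketch of (a) simple OLS and (b) the within transformation used for
# fixed effects. `panel` maps (site, year) -> (sessions, mobile_share);
# all values are illustrative, not the thesis dataset.
from collections import defaultdict

def ols_slope(x, y):
    """OLS of y on x with an intercept; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def within_transform(panel):
    """Demean each variable by site, removing time-invariant effects."""
    groups = defaultdict(list)
    for (site, year), obs in panel.items():
        groups[site].append(obs)
    demeaned_x, demeaned_y = [], []
    for (site, year), (sessions, mobile) in panel.items():
        obs = groups[site]
        mean_s = sum(o[0] for o in obs) / len(obs)
        mean_m = sum(o[1] for o in obs) / len(obs)
        demeaned_y.append(sessions - mean_s)
        demeaned_x.append(mobile - mean_m)
    return demeaned_x, demeaned_y

panel = {
    ("siteA", 2012): (100.0, 0.10), ("siteA", 2016): (140.0, 0.30),
    ("siteB", 2012): (50.0, 0.05),  ("siteB", 2016): (70.0, 0.15),
}
x, y = within_transform(panel)
_, fe_slope = ols_slope(x, y)  # within-site effect of mobile penetration
```

A positive `fe_slope` would correspond to the finding that, within a given site, greater mobile penetration goes together with more sessions over time.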

Data analysis

Aiming to estimate the aforementioned performance of the online platforms more accurately, we collected data from alexa.com on the web sessions that took place on them and characterized their traffic from 2012 to 2016. By the term session we mean a unique user interaction taking place on a website. By definition, sessions comprise the multiple screen or page views, events, social interactions, and e-commerce transactions that occur on a website within a given time frame (Stevanovic et al., 2011). In the context of the analysis, sessions are used as the definitive unit of measurement of website traffic, and for this purpose we used Alexa's monthly counts of visitor sessions. We collected the monthly sessions of each website and computed the average of the website's unique sessions for each year.

In our study, sessions are divided into two categories: sessions carried out by users who accessed the website from their personal computer, hereafter referred to as ''Desktop Sessions'', and sessions carried out via mobile devices and smartphones, referred to as ''Mobile Sessions''. The sum of these two categories is referred to as ''Overall Sessions''. The reason for this distinction is twofold. First, the way in which users access a website is an important factor in the analysis presented in this chapter. Second, since recent studies have shown an upward trend of crowdsourcing on mobile devices, we investigated its overall mobile penetration and its evolution over time (Eagle, 2009; Chatzimilioudis et al., 2012; Gupta et al., 2012).
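The aggregation just described can be sketched in a few lines. The monthly figures below are invented for illustration; the helper simply averages the monthly Overall Sessions and computes the mobile share of yearly traffic, mirroring the Desktop/Mobile/Overall split defined above.

```python
# Sketch of the per-site, per-year session aggregation described above.
# `monthly` holds hypothetical (desktop, mobile) session counts per month.

def yearly_session_stats(monthly):
    """Return (overall_avg, mobile_penetration) for one site-year:
    the mean of monthly Overall Sessions (desktop + mobile), and the
    mobile share of the year's total sessions."""
    overall = [d + m for d, m in monthly]
    total_mobile = sum(m for _, m in monthly)
    return sum(overall) / len(overall), total_mobile / sum(overall)

monthly = [(900, 100), (850, 150), (800, 200)]  # three months of toy data
avg, penetration = yearly_session_stats(monthly)
```

In this toy run the average Overall Sessions stay flat while the mobile share rises, which is exactly the pattern (falling desktop, rising mobile) that the chapter later reports.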

We are therefore confident that, in our study, the number of sessions carried out on a crowdsourcing website is an effective proxy for its effectiveness and economic growth, and it constitutes our dependent variable (Plaza, 2011). The traffic data of these websites were likewise collected from alexa.com (a website's rank is based on visitor counts over a rolling three-month period), in values from 2012 to 2016 (Lo & Sedhain, 2006), as a relative measure of a website's popularity, while the data concerning their anatomy and characteristics were collected one by one, following the steps of our methodology.

Finally, we obtained the aforementioned data from alexa.com (Alexa Certified Metrics) because the data were openly and directly accessible, and its on-site statistics have been shown to be almost identical to those of Google Analytics, which, however, offers restricted access per website (Plaza, 2011; Zahran et al., 2014).

Methodology

In this section we present in detail the specific procedures we followed to select cases and data. The selection of websites was based on a two-stage selection methodology. First, to build our sample, we followed the steps below for selecting candidate crowdsourcing and crowdfunding websites.

To this end:

• We reviewed all the organizations cited in scientific articles indexed by Science Direct and Google Scholar, searching for the term ''crowdsourcing''.
• We used the three most popular search engines, Google, Bing, and Yahoo!, to find online crowdsourcing platforms that interested parties are likely to encounter. We examined the first one hundred (100) websites returned by each search.
• We examined the Wikipedia entries for ''crowdsourcing'' and ''crowdfunding''. The Alexa search covered the top-ranked crowdsourcing and crowdfunding websites, as well as those most closely related to them.

This resulted in a large number of platforms offering the above services. To select the final set of websites for our study, however, we applied the following three criteria:

a) Language. All crowdsourcing websites examined had to provide their services in English. This facilitated both the evaluation of the services these websites provided and the understanding of their use.

b) Presentation of the services offered. The websites had to provide the required information so as to facilitate their evaluation.


c) Availability of evaluation information. The information necessary to complete the evaluation had to be available. Many websites do not disclose all the required information and were therefore excluded from our analysis.

This two-stage methodology resulted in a set of 174 websites, which were examined over a five-year period (2012-2016). In line with the method mentioned earlier, in order to collect their characteristics and incorporate them into our analysis, we carried out a content analysis in which we examined each website individually, gathering the relevant information for the following group of variables (Krippendorff, 2004).
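The second stage of the selection can be sketched as a simple filter over the candidate list. The records and field names here are hypothetical, chosen only to illustrate how the three inclusion criteria narrow the candidates down to the final sample.

```python
# Sketch of the second selection stage: filtering candidate sites by the
# three inclusion criteria (English services, documented services,
# evaluation information available). All records are illustrative.

def passes_criteria(site):
    """True only if the site satisfies all three inclusion criteria."""
    return (site["language"] == "en"
            and site["services_documented"]
            and site["evaluation_info_available"])

candidates = [
    {"name": "siteA", "language": "en", "services_documented": True,
     "evaluation_info_available": True},
    {"name": "siteB", "language": "de", "services_documented": True,
     "evaluation_info_available": True},   # fails the language criterion
    {"name": "siteC", "language": "en", "services_documented": False,
     "evaluation_info_available": True},   # fails the documentation criterion
]
sample = [s["name"] for s in candidates if passes_criteria(s)]
```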

The selected websites were evaluated against a set of criteria intended to capture various aspects of the services offered. These criteria cover both technical and functional characteristics of the examined websites. Below we present these characteristics in more detail. More specifically:

Type of services provided. The websites' services were grouped into the following ten categories (Schenk & Guittard, 2011):

a) Micro-tasks/simple jobs, which are considered the smallest unit of work in a virtual assembly line, e.g. categorization, tagging, web research, transcription, etc.

b) Crowdfunding, which is the collection of monetary capital from backers (the crowd) to finance an initiative (project). Crowdfunding has its roots in the concept of crowdsourcing, the broader notion that an individual reaches a goal by receiving and drawing on small contributions from many parties. Crowdfunding is the application of precisely this concept: raising funds through small contributions from many parties in order to finance a particular project or venture.

c) Mobile crowdsourcing services, which are crowd-based applications for mobile phones.

d) Content-generation services, in which content is created by the crowd. This method is becoming increasingly popular because it offers an alternative approach to content creation and content curation.

e) Data-entry services, which are projects that use many different modi operandi, e.g. Excel, Word, electronic data processing, typing, coding, and clerical tasks.


f) High-knowledge-intensity services, which are specialized services in particular fields, e.g. healthcare, law, insurance, data management, consulting, market research, and cloud services.

g) Software development services, which aim to have software implemented by the crowd.

h) Web design and graphic design services, which use the crowd's contributions to create artistic projects.

i) Translation services, which aim at translation from the source language into the desired target language.

j) Product reviews and testing, carried out by the crowd.

Quality and reliability. This group of variables records which techniques a website applies to ensure the quality of the output provided by the workers. It also includes the fraud-detection techniques a platform provides in order to ensure the reliability of the workers (Wang et al., 2011).

Region. This indicates the region of origin in which the platform operates. Based on our sample, there are four main categories: North America, Europe, Australia, and Asia.

Web footprint. This variable reflects the strategies a platform employs as a digital marketing tool and comprises three categories: social networks, video streaming-sharing communities, and blogs/forums (Thackeray et al., 2008).

Traffic. This variable consists of the different origins of the visits made to each crowdsourcing platform. For example, platforms with a strong presence in search engines (e.g. the Google search engine) managed to increase their sessions over time, resulting in improved economic activity (Ortega & Aguillo, 2010; Tierney & Pan, 2012).
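Before such categorical characteristics can enter a regression, they have to be turned into numeric regressors. The sketch below shows one conventional way to do this, as 0/1 dummy variables; the category lists and the example record are assumptions for illustration, not the thesis coding scheme.

```python
# Sketch of encoding the categorical platform characteristics described
# above as 0/1 dummy regressors for the OLS analysis. The category lists
# and the example record are illustrative assumptions.

REGIONS = ["North America", "Europe", "Australia", "Asia"]
FOOTPRINT = ["social_networks", "video_streaming", "blogs_forums"]

def encode_platform(record):
    """Turn one platform record into a flat dict of 0/1 regressors."""
    row = {f"region_{r}": int(record["region"] == r) for r in REGIONS}
    for channel in FOOTPRINT:
        row[f"uses_{channel}"] = int(channel in record["footprint"])
    row["has_quality_control"] = int(record["quality_control"])
    return row

example = {"region": "Europe",
           "footprint": {"social_networks", "blogs_forums"},
           "quality_control": True}
row = encode_platform(example)
```

In an actual estimation, one region dummy would typically be dropped as the reference category to avoid perfect collinearity with the intercept.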

The characteristics of a crowdsourcing website can have a significant effect on its overall performance as a virtual marketplace. In this chapter we investigated the determinants of performance across various crowdsourcing environments. We focused on analyzing the impact of the specific characteristics a crowdsourcing platform provides in shaping the online crowdsourcing market, using the most reliable dataset from Alexa and applying ordinary least squares for the time-invariant effects and a fixed-effects analysis for the time-varying ones. A key finding of our research is that, although traffic in such environments is increasing, desktop sessions follow the opposite course over time. There is also evidence of an effect of the type of service a website provides, the quality control mechanisms used, and the digital marketing strategies adopted on the overall performance of a website as an online business entity. In this vein, the analysis showed that an effective way for a website to increase its performance is to equip its workers' profiles with information about their past work performance, and to give job requesters the ability to administer skill and practice tests to the workers they hire.

Furthermore, our analysis reveals that tools for detecting workers who deliberately deliver ''poor quality'' work play an important role in an online platform's effort to increase its performance, because prospective requesters are willing to use them to search for the best online workers. Last but not least, our analysis confirms that, since these websites operate online, they must adopt various principles of digital marketing, because digital marketing strategies have a significant and strong effect on their overall sessions (desktop and mobile).

Over the years, various crowdsourcing websites have emerged, evolved, and come to offer a wide variety of services to end users. Each of these websites exhibits a range of features that facilitate users' tasks. In this direction, we investigated how website performance, measured in sessions, relates to these websites' characteristics, estimating empirical models in order to identify the key issues in online platform performance both at a specific point in time and over time. The central point of the study in this chapter is the need to understand in depth how the various crowdsourcing features of an online platform affect its performance, and to draw conclusions about the real issues and challenges that a website faces in its effort to increase its overall sessions and turnover.

Overall, the research shows that the productivity of a crowdsourcing website is higher when it offers a framework for specific jobs that can easily be standardized, such as mobile tasks or graphic design jobs. Our research also confirmed that quality management practices are highly effective in predicting the performance of a web-enabled firm, just as in traditional firms. This makes it necessary to further investigate the quality control techniques that can be used by crowdsourcing websites (Nair, 2006). Finally, as expected, the geographic location of an online crowdsourcing platform does not play a significant role in its overall performance, in contrast to research on traditional labor markets (Folta et al., 2006).

Such observations are important if we consider the basic principles and advantages of the use of the internet by firms in general. Nowadays, web-based innovative firms in particular, such as online crowdsourcing platforms, can easily transcend the limits of size and location and compete in the global electronic marketplace (Cronin, 1997).

This research also highlighted the important role played by mobile devices and smartphones as an alternative way of participating in the online outsourcing of jobs. Given the rapid growth of such devices in the context of crowdsourcing, future research should investigate more deeply the role of these devices in crowdsourcing tasks (Miao et al., 2016). More specifically, questions arise as to whether the use of such devices affects the type of tasks preferred and, even more importantly, how they affect worker performance, especially in location-dependent tasks.

Finally, through a deeper understanding of the factors affecting its performance, and by leveraging the technological capabilities of computing systems, which are nowadays seamlessly embedded in physical and social contexts, crowdsourcing can easily evolve into an increasingly popular and attractive mode of online work, enabling a wide range of applications and services to be completed in a short time and at very low cost.

Microscopic Analysis

This section presents the microscopic approach of the quantitative study on work performance in crowdsourcing environments. It includes the design of an experiment, conducted on one of the best-known crowdsourcing platforms, aimed at a better understanding of the individual characteristics of workers in such environments and of the effects these have on the final quality of their work.

For a long time, economists have been making serious efforts to identify the determinants of individual behavior in the labor market (Goldin & Katz, 2008). Evidence from economics and psychology emphasizes the role of cognitive traits, as well as personality traits, in explaining an individual's performance on a task of a specific type across various projects (Borghans et al., 2008). Although online labor markets are not identical to traditional ones (Horton, 2010), there is growing evidence of the quasi-employment nature of the relationship between requesters and micro-workers during their online collaboration (Chen & Horton, 2016).

Moreover, just like the traditional labor market, online labor markets are not homogeneous across the crowd or over time, since the profile of micro-workers is associated with demographic, human-capital, and income-related factors (Ipeirotis, 2010; Ross et al., 2010; Farrell et al., 2017). For this reason, the present study also sought to investigate workers' individual performance on a job in a global, highly heterogeneous environment by focusing on cognitive skills and personality traits. Task performance in crowdsourcing environments is usually measured in terms of ''output quality'', which refers to a subjective judgment of whether the submitted work satisfies the criteria of the job requester. Once the job is completed, the requester grants the monetary reward according to the fulfillment of the predefined specifications concerning output quality (Felstiner, 2011).
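The acceptance rule just described can be sketched as a small function: a hypothetical requester scores a submission against predefined specifications and pays the reward only if all of them are met. The task, the specification checks, and the reward amount are all assumptions for illustration, not details of the thesis experiment.

```python
# Sketch of a requester's acceptance rule: pay the full reward if the
# submission meets every predefined specification, nothing otherwise.

def review_submission(submission, specs, reward):
    """Return the payment due for one submission."""
    meets_all = all(check(submission) for check in specs.values())
    return reward if meets_all else 0.0

# Illustrative specs for an image-tagging task (assumed, not from the thesis).
specs = {
    "min_tags": lambda s: len(s["tags"]) >= 3,     # at least three tags
    "no_empty": lambda s: all(t.strip() for t in s["tags"]),
}
payment = review_submission({"tags": ["dog", "park", "ball"]}, specs, 0.50)
```

All-or-nothing payment is only one possible rule; real platforms also support partial acceptance or bonuses, but the binary case matches the accept/reject judgment described above.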

Researchers in computer science and decision-making (Allahbakhsh et al., 2013; Kokkodis & Ipeirotis, 2016) stress that overall output quality also depends both on the design and requirements of the project and on the workers' attributes (skills, expertise, and reputation). Regarding the design of the online project, we can distinguish two types. The first refers to tasks requiring many qualifications and high specialization (e.g. computer programming, software development), and the second refers to tasks requiring less specialized skills (data entry, internet search, or administrative support).

Methodologically, the empirical analysis of online labor markets requires microdata on job performance collected from various experimental settings, where performance is usually measured by variables such as counts and/or proportions of sorted image groups (Mason & Watts, 2009), correct answers in photo-labeling tasks (Chandler & Horton, 2011), or the number of objects recognized in an image (Chandler & Kapelner, 2013). The role of worker attributes in the analysis of individual job performance in crowdsourcing markets has only recently attracted the attention of economists (Mason & Watts, 2009; Chandler & Horton, 2011; Chandler & Kapelner, 2013; Pallais, 2014; Horton & Zeckhauser, 2016; Pallais & Sands, 2016).
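Quality-of-outcome measures of this kind reduce to comparing each worker's submission against gold-standard answers. A minimal sketch with hypothetical data (real platforms use richer scoring rules; worker and label names here are invented for illustration):

```python
# Score each worker's submission as the share of answers matching a
# gold standard -- the "proportion correct" quality measure used in
# the experiments cited above.
gold = ["cat", "dog", "car", "tree", "dog"]

submissions = {
    "worker_1": ["cat", "dog", "car", "bush", "dog"],
    "worker_2": ["cat", "cat", "car", "tree", "cat"],
}

def outcome_quality(answers, gold):
    correct = sum(a == g for a, g in zip(answers, gold))
    return correct / len(gold)

scores = {w: outcome_quality(a, gold) for w, a in submissions.items()}
print(scores)  # worker_1 scores 0.8, worker_2 scores 0.6
```

A requester would typically pay out only above some quality threshold computed this way.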

Furthermore, preliminary evidence from computer-science research (Downs et al., 2010; Kazai et al., 2011; 2012; Mourelatos & Tzagarakis, 2016) shows that outcome quality in crowdsourcing jobs varies with demographics (gender, age, and origin), human-capital characteristics (education, computer skills, and previous work experience) and personality traits (Big Five Personality Test). As Kokkodis & Ipeirotis (2016) point out, workers' value in an online labor market is highly heterogeneous, comprising a set of observable characteristics (e.g. qualifications, education, work history, certifications) as well as latent ones (e.g. expertise and ability). Although a number of drawbacks arise in measuring and analyzing job performance in online experiments (Horton et al., 2011; Chen & Konstan, 2015), recent experimental evidence suggests that micro-workers, compared with participants in offline experiments, are equally honest, with similar preferences and effort levels, even in settings with low monetary rewards (Farrell et al., 2017).

Nevertheless, none of these studies investigates worker performance through a single-task approach applied in a global, highly heterogeneous environment with the focus on the role of cognitive skills and personality traits.

For the purposes of our analysis, we therefore conducted an online experiment on the crowdsourcing platform microworkers.com, where workers' performance is based on their correct answers after listening to a music sample with lyrics. We also collected information on cognitive skills (education, computer skills and English proficiency), personality traits (John & Srivastava, 1999) and various demographic characteristics (gender, age and country of origin). The influence of personality traits (openness, conscientiousness, agreeableness, extraversion, neuroticism) is well established in economic models of individual behavior as a determinant of overall labor-market performance; in online labor markets, however, relevant evidence is scarce.

To the best of our knowledge, our study is one of the first attempts to use an online experiment to directly examine the relationship between personality traits and individual performance in crowdsourcing activities and environments. This approach helps us explore the mechanisms behind the relationship between personality traits and job performance and better understand how personality traits can explain the quality level of a worker's output in online labor markets.

Using ordinary least squares (OLS) linear regression models with a broad set of explanatory variables, we found that a worker's levels of extraversion and neuroticism exert a statistically significant and robust effect on individual performance in online environments. More specifically, micro-workers with higher levels of neuroticism gave fewer correct answers in our online experiment, while micro-workers with higher levels of extraversion achieved better performance and outcome quality.
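The OLS setup can be sketched as follows. This is a minimal illustration on synthetic data, not the thesis dataset: variable names are placeholders for the survey measures, the simulated coefficients merely mimic the reported signs, and the actual models include a much wider set of controls.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic worker data: standardized Big Five scores and controls.
neuroticism  = rng.normal(size=n)
extraversion = rng.normal(size=n)
education    = rng.integers(0, 2, size=n)  # 1 = tertiary education
female       = rng.integers(0, 2, size=n)

# Outcome: number of correct answers, driven negatively by neuroticism
# and positively by extraversion (matching the reported signs), plus noise.
correct = (10 - 0.8 * neuroticism + 0.5 * extraversion
           + 0.3 * education + rng.normal(size=n))

# Design matrix with an intercept, then OLS via least squares.
X = np.column_stack([np.ones(n), neuroticism, extraversion, education, female])
beta, *_ = np.linalg.lstsq(X, correct, rcond=None)

for name, b in zip(["const", "neuroticism", "extraversion",
                    "education", "female"], beta):
    print(f"{name:12s} {b:+.3f}")
```

On data generated this way, the estimated coefficient on neuroticism comes out negative and that on extraversion positive, mirroring the qualitative result above.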

This influence is confirmed when the analysis is carried out on different subgroups of micro-workers (i.e. workers coming from developed and from less developed countries). We also found that in less developed countries additional personality traits appear to affect an online worker's performance (e.g. conscientiousness and extraversion): micro-workers with higher levels of conscientiousness delivered better results, while those with higher levels of extraversion exhibited lower work quality.

Finally, allowing the effect of personality traits to vary by gender and education, we found that neuroticism has a negative effect specifically on men's performance. We also found that men without tertiary education but with higher levels of conscientiousness perform better in online tasks, and that women with higher levels of openness provide better work quality.
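Letting a trait's effect vary by gender amounts to adding interaction terms to the regression. A hedged sketch on synthetic data (again, the thesis's actual specification includes many more controls; the simulated coefficient just mimics the reported male-only neuroticism penalty):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600

neuroticism = rng.normal(size=n)
male        = rng.integers(0, 2, size=n)

# Simulate a neuroticism penalty that applies only to men.
correct = 10 - 0.9 * neuroticism * male + rng.normal(size=n)

# Interaction model: the effect of neuroticism for women is b1;
# for men it is b1 + b3, where b3 is the interaction coefficient.
X = np.column_stack([np.ones(n), neuroticism, male, neuroticism * male])
b, *_ = np.linalg.lstsq(X, correct, rcond=None)

effect_women = b[1]
effect_men   = b[1] + b[3]
print(f"neuroticism effect, women: {effect_women:+.3f}")
print(f"neuroticism effect, men:   {effect_men:+.3f}")
```

The gender-specific effect is read off as the sum of the main and interaction coefficients, which is how the subgroup results above can be recovered from a single pooled regression.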

Discussion and Conclusions

The contribution of this thesis is twofold. First, by conducting the macroscopic analysis of crowdsourcing platforms we reached several useful results. In general, the characteristics of many websites (as potential online labor markets) have significant effects on their overall performance. The study revealed that the type of work an online crowdsourcing platform handles, the quality-control mechanisms it employs and the digital marketing strategies it adopts are decisive factors for its overall performance.

Accordingly, the analysis showed that an effective way for a crowdsourcing platform to increase its performance and productivity is to provide requesters with additional information about the workers participating in the online labor market, such as their performance in similar past projects, and to enable requesters to run skill and practice tests among potential workers so as to preselect those who will ultimately take part in a project. In addition, providing effective mechanisms for detecting undesirable behavior (e.g. malicious workers) and integrating digital marketing strategies have a significant positive effect on the productivity and performance of an online crowdsourcing platform. Last but not least, the research revealed that mobile phones and tablets will be a key element of the crowdsourcing process in the near future.

Regarding the microscopic analysis, the study showed that the skills of the individuals participating in crowdsourcing projects emerge as critical factors for achieving high-quality work. The research therefore also includes a comprehensive and rigorous attempt to understand the role of cognitive and non-cognitive skills and their impact on workers' performance in online labor markets.

Knowing that online workers come from different socioeconomic backgrounds, with different skills and different traits, we decided to investigate more deeply the individual characteristics that may be associated with overall performance.

To this end, by conducting an online crowdsourcing experiment on the microworkers.com platform, the study collected detailed information on the workers' cognitive skills, personality traits and various socioeconomic characteristics, and showed that neuroticism and extraversion exert a statistically significant negative and positive effect, respectively, on individual performance. Workers in online labor markets with higher levels of neuroticism provide low-quality work, while those with higher levels of extraversion provide high-quality work, a finding that is in line with related studies of traditional labor markets.

Moreover, the job performance of online workers coming from less developed countries is better for those with higher levels of conscientiousness and worse for those with higher levels of extraversion.

Taking all the above results into account, it is evident that the characteristics of individuals participating in crowdsourcing environments have a significant impact on their work behavior and performance. For this reason, researchers in the near future should focus more on the workers themselves and on a deeper understanding of the characteristics that play an important role in ultimately shaping their productivity, so that requesters can quickly and effectively observe a worker's behavioral pattern and select workers for crowdsourcing projects with greater confidence.

Hence, as a whole, this thesis offers the research community useful insights and strategies for a better and more effective assessment and improvement of work quality in crowdsourcing environments, identifying the key factors that significantly affect productivity, both from the side of the platforms providing crowdsourcing services (macroscopic analysis) and from the side of the individuals participating in such online labor markets (microscopic analysis).


TABLE OF CONTENTS

Chapter 1 Crowdsourcing in Economics: An Introduction ...... 35
1.1 Introduction ...... 35
1.2 The Evolution of the Web ...... 35
1.3 Online Labor Markets ...... 38
1.4 Crowdsourcing ...... 40
1.4.1 Background and History ...... 40
1.4.2 Origin and Definition ...... 41
1.4.3 Types of Crowdsourcing ...... 44
1.4.4 Issues on Crowdsourcing ...... 47
1.5 Purpose and Scope of Research ...... 48
1.6 Thesis Outline ...... 48
Chapter 2 Crowdsourcing in Economics: A Literature Review ...... 50
2.1 Introduction ...... 50
2.2 Literature Review ...... 51
2.3 Research Gap and Thesis Contribution ...... 66
Chapter 3 An investigation of factors affecting the visits of online crowdsourcing and labor platforms ...... 68
3.1 Introduction ...... 68
3.2 Theoretical Background ...... 70
3.3 Data Analysis ...... 72
3.3.1 Data Source ...... 72
3.3.2 Methodology ...... 73
3.3.3 Descriptive Statistics ...... 75
3.4 Empirical Analysis ...... 85
3.4.1 Overview ...... 85
3.4.2 Empirical Model ...... 86
3.4.3 Estimation Results ...... 86
3.4.4 Conclusions ...... 93
Chapter 4 Personality Traits and Performance in Online Labor Markets ...... 95
4.1 Introduction and Theoretical Background ...... 95
4.2 Experimental Framework ...... 97
4.2.1 Design ...... 97
4.2.2 Hypotheses ...... 99
4.3 Data Analysis ...... 100
4.3.1 Summary statistics ...... 100
4.3.2 Sample characteristics ...... 102
4.4 Empirical analysis ...... 107
4.4.1 Empirical model ...... 107
4.4.2 Estimation results ...... 108
4.5 Conclusions ...... 115
Chapter 5 Discussion and Conclusions ...... 117
5.1 Limitations ...... 117
5.2 Conclusions and Recommendations ...... 118
5.3 Future Work ...... 120


Chapter 1 Crowdsourcing in Economics: An Introduction

1.1 Introduction

This thesis discusses crowdsourcing and crowdfunding as a new sourcing model of labor, in which individuals or organizations use contributions from internet users to obtain needed services or ideas.

This research explores the growing phenomenon of people working online by focusing on a specific issue within crowdsourcing and crowdfunding environments, namely the quality of work. In the following chapters we present a microscopic and a macroscopic analysis of the quality of work performed in such online labor environments.

The microscopic approach considers quality as a function of the individual crowdsourcing workers and focuses on their cognitive and non-cognitive characteristics, whereas the macroscopic approach expresses quality in terms of the sites providing crowdsourcing services.

In general, this new and innovative type of online labor has been growing steadily in recent years, as evidenced by the large number of platforms created in the last five years and their high numbers of page views and registered users.

Its rapid growth and sudden popularity result in considerable complexity, owing to the great variation in characteristics and traits among crowdsourcing participants and crowdsourcing platforms.

As a result, our research tries to address the need for standardization of the quality of work performed in paid crowdsourcing environments by presenting useful findings on the factors that affect the performance of both individuals and platforms in online crowdsourcing environments (Ipeirotis & Horton, 2011).

1.2 The Evolution of the Web

The World Wide Web, commonly known as the web, is not synonymous with the internet; it is the most prominent part of the internet and can be defined as a techno-social system in which humans interact on the basis of technological networks. The notion of a techno-social system refers to a system that enhances human cognition, communication, and cooperation: cognition is the necessary prerequisite for communication, and communication the precondition for cooperation. In other words, cooperation needs communication and communication needs cognition (Fuchs et al., 2010).


The web is the largest transformable information construct; the idea was first introduced by Tim Berners-Lee in 1989 (Berners-Lee et al., 2001; Aghaei et al., 2012). In the past two decades much progress has been made on the web and its related technologies. In general, the initial version of the web (Web 1.0) is referred to as the first generation of the World Wide Web. At this stage, websites were interconnected only by hyperlinks, forming a set of static sites that did not yet provide interactive content (the web of cognition), whereas in the second stage of development (Web 2.0) web browsers presented dynamic websites allowing interaction between users and sites (the web of communication).

More particularly, Web 1.0 was mainly a read-only web with a static and somewhat mono-directional form. This version of the web gave businesses the opportunity only to provide catalogs or brochures presenting their products and services, allowing customers only to read them and contact the company. In effect, the catalogs and brochures resembled advertisements in newspapers and magazines, and most owners of e-commerce websites employed shopping-cart applications in various shapes and forms (Naik & Shivalingaiah, 2008). Furthermore, websites consisted of static HTML pages that were updated infrequently; their main goal was to publish information for anyone at any time and to establish an online presence. The sites were not interactive, amounting to little more than brochure-ware: users and visitors could only view them, without any impact or contribution, and the linking structure was weak. The core protocols of Web 1.0 were HTTP, HTML and URI (Cormode & Krishnamurthy, 2008).

In the following years a new version of the web, known as Web 2.0, was introduced. More particularly, the term Web 2.0 was officially coined in 2004 by Dale Dougherty, vice-president of O'Reilly Media, in a conference brainstorming session between O'Reilly and MediaLive International¹. Tim O'Reilly defines Web 2.0 as follows:

“Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them.”

In general, Web 2.0 introduced the opportunity of writing as well as reading, becoming bi-directional for the user. In other words, users of Web 2.0 have more interaction with less control. Moreover, Web 2.0 is not merely a new version of Web 1.0: flexible web design, creative reuse, updates, and collaborative content creation and modification were facilitated through this version. One outstanding feature of Web 2.0 is its support for collaboration and for gathering collective intelligence based on the wisdom of crowds, which refers to the idea that "large groups of people are collectively smarter than even individual experts when it comes to problem solving, decision making, innovating and predicting" (Surowiecki & Silverman, 2007).

¹ Tim Berners-Lee. The World Wide Web: A very short personal history, 1998.

Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making (Levy, 1997). For that reason Web 2.0 is also known as the wisdom web, the people-centric web, the participative web, and the read-write web (O'Reilly, 2005; Bonabeau, 2009).

In order to better understand the evolution of the web and the progress of its technologies and services, the following table (Table 1) and figure (Figure 1) present a comparison of Web 1.0 and Web 2.0 across several features.

Table 1. A Comparison of Web 1.0 and Web 2.0

Web 1.0                     Web 2.0
Reading                     Reading and Writing
Companies                   Communities
Client-Server               Peer to Peer
HTML, Portals               XML, RSS
Taxonomy                    Tags
Owning                      Sharing
IPOs                        Trade Sales
Netscape                    Google
Web Forms                   Web Applications
Screen Scraping             APIs
Dial up                     Broadband
Hardware costs              Bandwidth costs
Lectures                    Conversation
Advertising                 Word of mouth
Services sold over the web  Web services
Information portals         Platforms

This rapid rise of the web, and mainly of the technologies and services it includes, such as blogs, really simple syndication (RSS), wikis, mashups, tags, folksonomies and tag clouds, had a great impact on people's daily routine and created suitable conditions for new economic processes to emerge through several types of online labor markets, such as crowdsourcing and crowdfunding environments (Zimmer, 2008).


Figure 1. The evolution of the Web. Drawn by http://www.iblognet.com

1.3 Online Labor Markets

In recent years, a number of online labor markets have emerged that allow workers from around the world to sell their labor to an equally global pool of buyers. The creators of these markets play the role of labor-market intermediary by providing institutional support and remedying informational asymmetries.

Online labor markets are markets where (1) labor is exchanged for money, (2) the product of that labor is delivered "over a wire", and (3) the allocation of labor and money is determined by a collection of buyers and sellers operating within a price system (Horton, 2010).

Such markets fall into two broad categories: "spot" and "contest". No labor market is truly "spot" in the sense of a commodity market, but certain OLMs feature buyer/seller agreements to trade at agreed prices for certain durations. Examples of spot markets include the oDesk, Elance, iFreelance and Guru platforms. On these websites, workers create online profiles, and buyers post jobs and wait for workers to apply and/or actively solicit applicants.

In contest markets, buyers propose contests for informational goods such as logos (e.g., 99Designs and CrowdSPRING), solutions to engineering problems (e.g., InnoCentive) and legal research (e.g., Article One Partners). In this version of OLMs, the participants create their own versions of the good and the buyer selects a winner from a pool of competitors. In some markets, the buyer must agree to select and pay a winner before posting a contest; in other high-stakes markets, where a solution may be unlikely, the buyer is under no obligation to select a winner.

The nature and principles of online labor markets differ from traditional ones in at least two respects. First, there is no single "commodity" of labor with an immediately observable quality and a single prevailing price: both jobs and workers are idiosyncratic. This makes it difficult for firms and workers to find a good match, and even when matches are formed, it is difficult for either party to know precisely what they are getting when they enter into contracts. Buyer/seller information asymmetries, when combined with opportunities for strategic behavior, can impede markets; if sufficiently severe, they can prevent markets from existing (Rothschild & Stiglitz, 1976; Autor, 2001). Second, labor is a service delivered over time, often accompanied by relationship-specific investments in human capital (e.g., learning a particular skill for a particular job), which creates a number of incentive issues that make it hard for the parties to cooperate fully (Williamson, 1979).

In traditional labor markets, third-party intermediaries such as temp agencies, unions and testing services profit from supplying information (Autor, 2008). The creators of online labor markets do the same thing, though their scope is wider and more comprehensive. They also provide infrastructure such as payment and record-keeping systems, communications infrastructure and search technology, functions typically provided by a government or by the parties themselves.

The aforementioned describes the major effects that the web and its evolution are having on labor markets. For that reason, researchers began in the late 1990s to examine in depth several issues, such as whether we might see the emergence of entirely online labor markets, where geographically dispersed workers and employers could make contracts for work sent "down a wire." Such markets would be an unprecedented development, as labor markets have always been geographically segmented.

Researchers were of mixed opinions: Malone predicted the emergence of such an “E- lance” market (Malone & Laubacher, 1998), while Autor was skeptical, arguing that informational asymmetries would make such markets unlikely (Autor, 2001). Instead, Autor predicted the emergence of third-party intermediaries that could use their own reputation to convey “high bandwidth” information about workers—such as ability, skills, reliability and work ethic—to buyers who would be unwilling to hire workers based solely on demographic characteristics and self-reports.

In the approximately ten years since, we have witnessed the emergence of a number of truly global online labor markets, as Malone predicted. By 2009, over 2 million worker accounts had been created across different markets, with over $700 million in gross wages paid to workers (Frei, 2009; Horton & Chilton, 2010). However, consistent with Autor's position, these markets have emerged not "in the wild" but within the context of highly structured platforms, which have mainly adopted the policies and principles of paid crowdsourcing.

1.4 Crowdsourcing

1.4.1 Background and History

In 1714, the British Government was stuck for a solution to what it called "the Longitude Problem", which made sailing difficult and perilous, killing thousands of seamen every year. Seeking innovation, the government offered £20,000 to anyone who could invent a solution (£20,000 in 1714 is around $4.7 million in 2010 dollars). The contest, considered almost unsolvable, was won by John Harrison, the son of a carpenter, who invented the marine chronometer (an accurate, vacuum-sealed pocket watch). The aristocracy was hesitant to award Harrison the prize but eventually paid him the £20,000. Known as the "Longitude Prize", this is possibly the first ever example of crowdsourcing, and it highlights one of crowdsourcing's principles: innovation and creativity can come from anywhere.

Similarly, some decades later, in 1936, Toyota held a contest to redesign its logo. It received 27,000 entries; the winning design consisted of the three Japanese katakana letters for "Toyoda" in a circle, which was later modified by Risaburo Toyoda to "Toyota".

Furthermore, in 1955, Joseph Cahill, Premier of the Australian state of New South Wales, held a contest offering £5,000 for the best design of a building for part of Sydney's Harbour. The contest received 233 entries from 32 countries around the world, and the winning design became one of the world's most innovative landmarks. Architectural contests continue to be a popular model for getting buildings designed.

Lastly, in the period 2001-2005 several examples of crowdsourcing emerged. In particular, during this period innovative websites became popular, such as:

- Wikipedia, which is based on crowdsourced knowledge, and

- YouTube, which is based on crowdsourced entertainment/TV.

All the above-mentioned examples were inorganic crowdsourcing attempts, lacking a conceptual framework and strategy. For that reason, these campaigns pointed to the need for an integrated and targeted framework containing adequate policies and strategies for crowdsourcing.


Figure 2. The history/genesis of Crowdsourcing. Drawn by http://crowdsourcingba333.weebly.com/index.html

1.4.2 Origin and Definition

The term crowdsourcing, a portmanteau of crowd and outsourcing, was first coined by Jeff Howe in 2006 in an article for Wired magazine called "The Rise of Crowdsourcing". According to this:

"Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers." (Howe, 2006)

Howe was inspired and influenced by James Surowiecki and his book "The Wisdom of Crowds". Therein Surowiecki describes the concept of collective intelligence: "under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them" (Selzer & Mahmoudi, 2006). Hence, based on the aforementioned, crowdsourcing functions as a tool to access this collective intelligence or external competences.

In general, the word crowdsourcing itself reflects its definition, being a combination of the words "crowd" and "outsourcing", referring respectively to the participants of crowdsourcing and to outsourcing as a business practice (Prpic et al., 2015). The definition contains four essential parts: the platform, an open call, the crowd and the task.

Recall that crowdsourcing technology has grown sophisticated, connecting freelancers and enthusiasts with companies looking for project or simple-task completion. For crowdsourcing to be executed, the requester of a job must choose the most suitable crowdsourcing site, depending on the features and tools provided. Crowdsourcing platforms are web-based communities of individuals with different skills who are ready to work on crowdsourcing campaigns. Through these platforms, a requester or a company can solicit a wide variety of creative work at lower cost than by traditional means. These online labor marketplaces have different characteristics, which we analyze in Chapter 3.

One basic component of crowdsourcing is also the nature of the connection between the requesters of an online job and the potential workers. In other words, crowdsourcing consists of making an "open online call" for a creative idea, problem solving, evaluation, or any other type of business issue, and letting anyone in the crowd submit solutions (Ribiere & Tuggle, 2010). One characteristic of the people in the crowd is that they participate voluntarily and are compensated for their contribution. The adjective "open" signifies that a company, institution or individual requester outsources a variety of jobs through online platforms to an undefined (and generally large) network of people (i.e. the crowd), bringing together potential participants without being limited to experts or preselected candidates; participation is non-discriminatory (Schenk & Guittard, 2011). Hence, everybody can answer the call: individuals can participate, as can firms, non-profit organizations or communities of individuals. This can take the form of peer production (when the job is performed collaboratively), but jobs are also often undertaken by sole individuals. The crucial prerequisite is the use of an open-call format and the wide network of potential workers (Estellés & González, 2012; Kazai, 2011).

Whitla explains this clearly by indicating that the call can be of one of three types (Whitla, 2009):

• a true open call, where any interested party can participate;
• a call limited to a community with specific knowledge and expertise;
• a combination of both, where an open call is made but those who may participate are controlled.

Therefore, one more essential part of crowdsourcing is the "crowd", which most scholars characterize as a large group of people called workers (Kozinets et al., 2008). Furthermore, many scholars agree that the crowd should be heterogeneous in characteristics such as demographics, and especially in skills and knowledge (Selzer & Mahmoudi, 2012). For that reason, many attempts have been made to analyze the demographic characteristics of the crowd in relation to motivation theories. For example, Ipeirotis examined one of the biggest crowdsourcing platforms, Amazon Mechanical Turk, and showed that it consists of young, mainly female workers from small families, based in the USA and India. Moreover, Ipeirotis linked the crowd's demographic and socioeconomic characteristics to their incentives for participating in crowdsourcing environments. For instance, his research showed that motivation was quite different across Indian and US workers: very few Indian workers participate on Mechanical Turk to "kill time", and significantly more Indians treat Mechanical Turk as a primary source of income, which is not surprising given the average income level of an Indian worker versus that of US workers (Ipeirotis, 2010). However, even now a large part of the world's population does not have access to the internet, especially high-speed connections, which would enable them to take part in online crowdsourcing projects; this limits the diversity of the crowd, as certain age groups or nationalities might be underrepresented.

Last but not least, the most crucial element of crowdsourcing is the nature, the types and the individual features of the "online jobs or tasks". Scholars hold different opinions about the task or job the crowd is asked to solve: it can range from simple sorting tasks to idea generation or new product development (such tasks are also known as Human Intelligence Tasks, or HITs). Even Howe does not specify the task in his crowdsourcing definition, and he later acknowledged that the task need not have been performed by the company originally but can be performed uniquely by the workers. Irrespective of the type of job, the task submitted by a requester on a crowdsourcing platform needs to have a clear objective and to be in line with crowdsourcing principles in order to yield the desired outcome. In Chapter 3, which includes our research on the characteristics of online labor markets, we present in detail all the possible types of an online crowdsourcing job.

Taking all the above into consideration, the win-win nature of crowdsourcing for both sides is obvious. Its advantages rest on the benefits that arise from the crowdsourcing process for both requesters (e.g. companies or individuals) and workers. Through crowdsourcing, as an online problem-solving business model, the company gets access to ideas, innovations, information and external knowledge, which it uses to generate value (Aitamurto et al., 2011; Sloane, 2011), while also gaining the opportunity to exploit a cheap workforce. The prize money, or recompense of any kind, is just a small fraction of the cost which companies would have incurred if they had, for instance, hired a professional advertising agency or performed the task internally. Thereby, crowdsourcing is especially worthwhile if the task is solved at a lower cost than it could have been internally, and if the solution turns out to be better and more adapted to customer needs (Whitla, 2009; Selzer & Mahmoudi, 2012). Accordingly, crowdsourcing reduces the costs of generating ideas and realizing them compared to the respective cost in the traditional labor market (Brabham, 2008). On the other hand, individual workers may take part in crowdsourcing projects because they have fun carrying out the task, desire to share their knowledge and talents, long for social recognition, or want to be part of a community, in addition to the financial rewards (Mladenow et al., 2014; Kozinets et al., 2008).

Hence, an efficient crowdsourcing procedure entails the use of the Internet and consists of a clearly defined crowd, a task with a clear goal, a clearly defined crowdsourcer, a clearly defined compensation (value) received by the crowdsourcer, an online process of a participative type, and an open call of variable extent.


All in all, it is obvious that crowdsourcing has developed since its initial definition. Its procedure is multi-level and comprises several steps leading to the final outcome. Hence, an increasing number of characteristics must be taken into consideration in our research in order to draw safe conclusions about our research questions. A graphical depiction of the paid crowdsourcing structure and procedure is shown in Figure 3.

Figure 3. General structure of the paid crowdsourcing procedure.

1.4.3 Types of Crowdsourcing

In general, crowdsourcing covers four areas:

Crowd Labor: Crowdsourcing labor lets you seek freelancers to complete all or part of a project online. You can either seek people to perform specific tasks at a set price, as on the crowdsourcing site Fiverr, or you can post projects as contests or work for hire and have talented freelancers compete. Amazon Mechanical Turk allows you to split up projects that involve a huge number of tasks that cannot be done by computer, such as classifying photos, and pay pennies per task.

Open Innovation/Crowd Creativity: These crowdsourcing companies allow multiple people to collaborate on projects. HitRECord is an art and video crowdsourcing company that lets people post and collaborate on artistic projects, whose results have competed in film festivals. Chaordix uses a similar approach, but for product innovation.

Access Distributed Knowledge or Experience: Wikipedia is the most common example of a crowdsourcing website used to access and share knowledge from multiple sources; however, there are companies that foster this for more specific business purposes. This can also include customer feedback or beta testing.


Crowdsourcing Funding: Companies and solopreneurs are turning to the public for funding of ideas, through a growing number of crowdfunding and business crowdfunding sites.

While there is probably no single right way to categorize the crowdsourcing landscape, the most popular classifications by industry experts and researchers define crowdsourcing with monetary rewards (i.e. paid crowdsourcing) according to the following four variables:

• Based on the type of labor performed
• Based on the motivation to participate
• Based on how applications function
• Based on the problems that crowdsourcing is trying to solve

More specifically:

Firstly, crowdsourcing based on the type of labor performed by the crowd, and on the way individuals in the crowd communicate and collaborate with one another, is categorized into:

• Social-production crowds – a large group of individuals lends their distinct talents to the creation of some product (for example Wikipedia or Linux).
• Averaging crowds – provide an average judgment on some complex matter that can be, in some cases, more accurate than the judgment of any one individual (for example the stock market).
• Data-mine crowds – a large group of people, without any knowledge of its members, produces a set of behavioral data that allows one to gain insight into market patterns (for example eBay's or Amazon's recommendation systems).
• Networking crowds – a group that trades information through a shared communication system such as Facebook or Twitter.
• Transactional crowds – a group that coordinates mainly around point-to-point transactions (for example eBay and Innocentive).

This categorization is useful as it allows us to understand the different abilities crowds possess and the many ways they can work together or in isolation to perform a task (Carr, 2010).

Moreover, crowdsourcing can be categorized based on the motivation that drives crowds to participate in crowdsourcing applications:

• Communals – mesh their identities with the crowd and develop social capital through participation on the site
• Utilizers – develop social capital by developing their individual skills through the site


• Aspirers – help select content in crowdsourcing contests but do not contribute original content themselves
• Lurkers – simply observe

This categorization focuses more on the crowd members than on the problems that crowdsourcing may solve (Martineau, 2012).

In addition, crowdsourcing can be categorized based on how various applications function (Howe, 2008):

• Crowd wisdom – using the "collective intelligence" of people within or outside an organization to solve complex problems (Innocentive is the classic example).
• Crowd creation – leveraging the ability and insights of a crowd of people to create new products; since Howe's original definition this is an area that has evolved significantly (for example Quirky's co-creation community).
• Crowd voting – where the community votes for their favorite idea or product (Threadless is Howe's original example).
• Crowd funding – there is a proliferation of crowdfunding platforms in the market of different types (rewards-based such as Kickstarter and equity-based such as CrowdCube) serving different purposes.

Finally, according to Daren Brabham, all of the above segmentations of crowdsourcing fail to focus on the kind of problem an organization wants to solve when it turns to a crowd. His problem-centric segmentation is instead based on the type of problems that crowdsourcing is best suited to solve (Brabham, 2013):

• Knowledge discovery and management – an organization tasks a crowd with finding and collecting information into a common format (examples: Peer-to-Patent at peertopatent.org, SeeClickFix, or the recently launched BeMyEye by Gianluca Petrelli). Ideal for information gathering, organization and reporting problems.
• Broadcast search – an organization tasks a crowd with solving empirical problems (e.g. Innocentive, the Goldcorp Challenge). Ideal for ideation problems with empirically provable solutions, such as scientific challenges.
• Peer-vetted creative production – an organization tasks a crowd with creating and selecting creative ideas (e.g. Threadless, the Doritos contest).
• Distributed human intelligence tasking – appropriate not for producing designs, finding information, or developing solutions, but for processing data. Large data problems are decomposed into small tasks requiring human intelligence, and individuals in the crowd are compensated for processing bits of data. Monetary compensation is a common motivator for participation. Amazon Mechanical Turk is the perfect example.


Taking all the above into consideration, it is noticeable that crowdsourcing has complex dimensions and involves a large number of processes, procedures and features. Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public and an open call for contributions to help solve the problem.

Researchers have used crowdsourcing systems like Mechanical Turk to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation. Notable examples include using the crowd to create speech and language databases (Callison-Burch & Dredze, 2010; McGraw, Glass, & Seneff, 2011), and using the crowd to conduct user studies (Kittur, Chi & Suh, 2008). Crowdsourcing systems provide these researchers with the ability to gather large amounts of data. Additionally, using crowdsourcing techniques, researchers can collect data from populations and demographics they may not have had access to locally, which improves the validity and value of their work.

1.4.4 Issues on Crowdsourcing

At least on paper, crowdsourcing seems like a great solution. What many users (i.e. requesters or companies) have discovered, however, is that there are many unforeseen, even fatal, difficulties encountered while engaging in crowdsourcing efforts. In some cases these challenges completely eclipse the benefits, potentially resulting in an unfulfilled promise and a disillusioned cooperation between requester and worker, which is reflected in the level of the final crowdsourcing labor outcome. As in traditional labor markets, it is difficult to define "labor quality" with precision. Prior studies show that we can never specify all of the characteristics that might be related to quality, nor do we have measures for many of those that we can name. Hence, many factors can contribute under the term "labor quality". Nevertheless, the inference about quality depends upon the assumed relationship between quality and labor compensation (Fuchs, 1964). The aforementioned quality issue is complex and difficult to manage because crowdsourcing’s characteristics do not allow efficient monitoring of the online job. Moreover, it is known that online labor markets (OLMs) providing crowdsourcing services lack confidentiality and communication between the participants of an online project. Last but not least, crowdsourcing is based on the principle that someone’s ideas may be the solution to an online problem. Nevertheless, many studies show a big gap between ideas and real solutions to a task. In other words, the risk/reward ratio in many cases appears unfavorable, so participants (i.e. workers) are less likely to put in a substantial amount of work when there are many competitors, resulting in “low quality” outcomes.


1.5 Purpose and Scope of Research

The purpose of this thesis is to closely investigate which factors have a great impact on the quality of work in crowdsourcing online labor marketplaces. In order to explore this issue, this research examines the characteristics of the composition of the labor force (i.e. the workers) and of the provided technological applications (i.e. the crowdsourcing platforms). To this end, the following two research questions (RQs) have been formulated; they are addressed by presenting the findings of quantitative studies conducted within an experimental research design. More specifically:

RQ1: To what extent do an online crowdsourcing platform’s characteristics affect its overall performance, as reflected in the total number of crowdsourcing sessions conducted over time? (Macroscopic analysis)

RQ2: To what extent do a worker’s characteristics and behavioral intentions affect his overall performance when he participates in crowdsourcing online jobs? (Microscopic analysis)

Ultimately, the purpose of this study is to present some evidence for the important role of several characteristics, whether they concern the platforms which provide crowdsourcing services or the workers who participate in such online campaigns.

Thus, this study will focus solely on the concept of crowdsourcing and therefore on online labor environments, taking into account the recent development whereby more and more companies use the technological possibilities of the Internet to create value with online consumers. By doing this, the study takes a different approach from the aforementioned studies on quality of work and tries to incorporate into the overall research new dimensions of examination, such as the role of workers’ personality traits, using them as a guide for predicting online performance.

1.6 Thesis Outline

The present research is subdivided into five chapters. Following the introductory chapter, a categorized literature review of crowdsourcing is presented in chapter 2. This is followed by chapter 3, which offers a theoretical discussion of estimating, in a macroscopic way, the quality of work in crowdsourcing environments, and develops hypotheses highlighting the relationship between a crowdsourcing online platform’s characteristics and its general performance. Afterwards, the conceptual framework is presented based on the preceding theoretical findings. Chapter 4 describes the microscopic approach of this study, presenting the methodology, the analysis and the findings of our crowdsourcing experiment, in which we examined the impact of individuals’ characteristics on performance. The thesis will then discuss the main findings in

chapter 5, illustrate the main limitations as well as the theoretical contributions, and conclude by outlining recommendations for future research.


Chapter 2 Crowdsourcing in Economics: A Literature Review

2.1 Introduction

In recent years crowdsourcing environments have attracted the interest of researchers from various fields, who aspire to survey, analyze, comprehend and improve this new form of labor. Hence, many scientific researchers, mainly from social sciences such as economics and psychology as well as applied sciences such as computer science and engineering, have conducted studies using this new form of online labor.

In particular, psychologists are increasingly using crowdsourcing platforms to conduct scientifically valid studies on large and diverse samples, as they have the opportunity to exploit large pools of participants with diverse characteristics and reliable procedures. With this new way of online data collection, psychologists’ studies focus on measuring and interpreting the crowd’s personality traits and behavior within a traditional behavioral-science context (Moriarty, 2010; Bates & Lanza, 2013).

Similarly, economists aim at testing basic labor economic theories, conducting experimental economic research and analyzing the factors of this new online business model, with the help of crowdsourcing’s advantages (i.e. online research is less expensive than traditional research, data are collected from a wider range of the population, and data collection is quicker). The primary goal of economists is to monitor and examine the overall economic impact of crowdsourcing online communities, so that online work can become a tool for economic development (Horton & Chilton, 2010; Chandler & Horton, 2011).

Last but not least, in recent years crowdsourcing has also become an increasingly focal point for scholars and practitioners in the field of computer science and engineering. These researchers are seeking to provide answers to several questions, mainly concerning a taxonomic framework of crowdsourcing processes for an efficient and effective management of the crowd (Geiger et al. 2011; Feller et al. 2010; Doan, 2011) and guidelines for utility maximization of crowdsourcing practices based on co-creation and user innovation theories (Schenk & Guittard, 2011; Leimeister, 2009) (Figure 4).


Figure 4. The academic representations of crowdsourcing.

2.2 Literature Review

While crowdsourcing is becoming increasingly appealing to the public and to the research community, a major challenge arises for paid crowdsourcing environments: the online quality of work, viewed through the performance of each crowdsourcing stakeholder (i.e. workers and online platforms). By this term we mean the subjective judgment of whether the submitted work meets the requester’s criteria (Allahbakhsh et al. 2013). This result is the combination of two aspects: the decision that a requester makes about the online platform which will run his crowdsourcing project, and the hired workers who will participate in the task. These two parts of crowdsourcing and their characteristics form the quality of the final output.

As a consequence, strategies exploiting crowdsourcing are increasingly being applied in the area of Quality of Experience (QoE). Scholars from various scientific fields mainly focus on seven dimensions of the quality of crowdsourcing work:

✓ Cheat detection on crowdsourcing platforms.
✓ Gamification as a technique for quality assurance in crowdsourcing environments.
✓ The relationship between incentives and quality in crowdsourcing environments.
✓ Standardized frameworks of task design and efficient crowdsourcing techniques.
✓ The impact of workers’ cognitive skills on performance.
✓ The effect of workers’ online training techniques on performance.
✓ The impact of workers’ online behavior and personality traits on performance.


In particular, scholars aim to address questions related to which factors affect the quality of work in crowdsourced tasks when these are performed under different incentives. The focus is, in particular, on whether or not different incentives are associated with different factors influencing the quality of work (Allahbakhsh et al. 2013). Towards this, many researchers have conducted experiments in which the same crowdsourcing task was submitted under three different incentive schemes and the quality of the work received was compared.

Therefore, many articles suggest that the level of the quality of workers’ results closely relates to the incentives of the participants, as well as to the environment where the experiments take place (Kaufmann et al., 2011). In the existing literature, motivation for participating in Internet-based online marketplaces such as crowdsourcing can be divided into two categories: intrinsic and extrinsic (Thompson et al. 1999). Intrinsic motivation exists if an individual is activated by seeking the fulfillment generated by the activity itself (e.g. acting just for fun). In the case of extrinsic motivation, the activity is just an instrument for achieving a certain desired outcome (e.g. acting for money or to avoid sanctions). Thompson suggested that extrinsic motivation is generally stronger than intrinsic motivation concerning the use of the Internet, while Brabham showed that the possibility of earning money (extrinsic motivation) is the most dominant factor for participating in crowdsourcing platforms, followed by the generated fun (intrinsic motivation) (Brabham, 2008). His research is in line with Redi and Povoa, who showed that monetary reward can have a multifaceted impact on the quality of online labor (Redi & Povoa, 2014).

More particularly, Horton showed in his studies that a requester who attracts the crowd with high monetary rewards may attract target-earner workers, who will try to reach some self-imposed earnings goal rather than respond to the currently offered wage (Horton & Chilton, 2010). In other words, higher online wages do not always guarantee greater quality of online labor. On the other hand, one of the interesting findings of Aker’s study on crowdsourcing’s economic aspects is that their results do not confirm previous studies which concluded that an increase in payment attracts more noise. They suggest that a fair wage policy by the requester can have a great positive impact on the quality of work. They also find that a worker’s country of origin may have a partial impact on his performance, in line with Mourelatos and Tzagarakis’ result that workers coming from developing countries such as Bangladesh, Nepal and Sri Lanka exhibit sloppy behavior on crowdsourcing tasks, resulting in a poor overall quality of results (Mourelatos & Tzagarakis, 2016). For that reason, Singla made the first serious attempt at presenting optimal pricing policies, which are central to maximizing the quality of work, by determining the right monetary incentives using a regret-minimization approach (Singla & Krause, 2013).

In the context of this framework, many crowdsourcing experiments were conducted in different environments: i) in controlled laboratory settings with university

students, ii) on popular social networking sites (e.g. Facebook) and iii) on crowdsourcing platforms (e.g. Microworkers.com, Amazon Mechanical Turk), in order for researchers to examine the patterns of variation exhibited by data on quality characteristics among different incentives2 (Mourelatos & Tzagarakis, 2016; Hossfeld et al. 2014). The results showed that paid users are more likely to commit to the execution of a crowdsourcing task. However, they may not perform it as reliably as volunteer users do, driven by their intrinsic motivation, indicating that there are cases in crowdsourcing environments where intrinsic factors seem to dominate the extrinsic ones.

In addition, Rogstadius presented a study in which the effect of extrinsic and intrinsic motivators on task performance was estimated. The study concluded that work accuracy can be improved significantly through intrinsic motivators, especially in cases where extrinsic motivation is low (Rogstadius et al. 2011). In general, existing research points out the positive role that intrinsic and extrinsic motivation play in crowdsourced tasks. However, the question of how motivational aspects can be influenced or triggered by the design choices of the crowdsourcing requester, i.e. how a task has to be designed so as to motivate only those specific groups of workers who can guarantee a desired level of quality of work, is not addressed. Table 1 summarizes workers’ motivations in crowdsourcing according to the abovementioned research.

Types of Motivation

Intrinsic (predispositions in the person, e.g. drives, needs, desires): fun, joy, gaming, interest, satisfaction, self-actualization, self-reinforcement.

Extrinsic (external reinforcements, additional to personal predispositions): usability, sociability, material/financial capital, money, rewards, prizes, medals, credit points.

Table 1. Types of Motivation in Crowdsourcing Environments/Platforms.

As a result, many research approaches have focused on the task design (through which the requester describes the task that should be completed), consisting of several components (task definition, user interface, granularity and compensation policy) which obviously affect the quality of the worker’s result. Hence, tasks must be designed to maximize the likelihood and ease with which workers can provide useful responses (Chandler et al. 2014). Although comparatively little research has been done on task design itself, there is a large literature on survey design that is relevant to requesters, and which may be useful when considering data quality issues identified in pilot testing. While specific design considerations largely depend on the researcher’s goals, task design can be improved iteratively through pilot testing, and a number of principles exist that can improve the quality of data collected on crowdsourcing

2 The experiments aim to investigate intrinsic as well as extrinsic motivations: while the experiments in the laboratory and in online labor environments are extrinsically motivated, the experiments on social networking sites are intrinsically motivated.

marketplaces. In particular, crowd members are heterogeneous, and requesters can take advantage of this by preselecting, through qualification tests, the workers who are most capable of performing specific tasks (Li et al. 2014). It is obvious that the design of a crowdsourcing task has a great impact on the overall quality of labor. All the different approaches converge on two standpoints. Firstly, the task in its final form must be fully understandable and meaningful in order to trigger higher levels of worker effort, prevailing over the generally monotonous nature of crowdsourcing jobs (Chandler & Kapelner, 2013). Secondly, the task must not involve complex actions, because these may increase the cognitive demands on workers, thereby having a detrimental effect on the performance of their work (Finnerty et al. 2013).
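The task-design components named above (task definition, user interface, granularity, compensation policy) and the preselection of workers through qualification tests can be sketched, for illustration only, as a small data structure and a filter; all names, fields and the 0.8 threshold below are hypothetical assumptions, not part of any platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class TaskDesign:
    """Illustrative container for the four task-design components
    discussed in the text (all field names are hypothetical)."""
    definition: str    # clear description of the work to be done
    interface: str     # e.g. "web form", "image annotator"
    granularity: str   # e.g. "microtask" vs. "macrotask"
    reward_usd: float  # compensation policy

def preselect(workers, min_score=0.8):
    """Keep only workers whose qualification-test score meets a
    requester-chosen threshold (exploiting crowd heterogeneity)."""
    return [w for w in workers if w["qual_score"] >= min_score]

task = TaskDesign("Label 10 images as cat or dog",
                  "image annotator", "microtask", 0.05)
pool = [{"id": "w1", "qual_score": 0.9},
        {"id": "w2", "qual_score": 0.6}]
print([w["id"] for w in preselect(pool)])  # ['w1']
```

A requester would tune the threshold per task type; a stricter filter trades a smaller worker pool for (presumably) higher output quality.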

Equally important is another approach, by Van and Costa, who first introduced the relationship between a worker’s online training and performance by proposing a working-task framework which evaluates a worker’s performance and provides in-task training through state-of-the-art learning models (Costa et al. 2011) and programmatically generated test tasks (Van et al. 2014). For example, Dontcheva showed that in an image-editing task, workers without prior experience of editing images gain new skills through interactive step-by-step tutorials and test their knowledge by improving real-world images submitted by requesters (Dontcheva et al. 2014). Lastly, Gadiraju advanced the research on workers’ online training by separating crowdsourcing training into (i) implicit training, where workers are provided training when they give erroneous responses to questions with previously known answers, and (ii) explicit training, where workers are required to go through a training phase before they attempt to work on the task itself (Gadiraju et al. 2015).

As is known, most crowdsourcing tasks take the form of microjobs, viz. jobs that do not require many working skills or much completion time from workers (Brabham, 2013). For that reason, jobs in crowdsourcing environments suffer from a “boring and routine nature” which often results in poor outputs (Chandler & Kapelner, 2013).

Under these conditions, in 2012 Eickhoff examined the potential of adding gamification to microtask interfaces as a means of improving both worker engagement and effectiveness, and as a technique for quality assessment (Eickhoff et al. 2012). He designed a game-based task that is able to achieve high quality at significantly lower pay rates, while facing fewer malicious submissions.

By the term gamification we mean the process of adding games or game-like elements to something (such as an online task) so as to encourage the participation of individuals (Aparicio et al. 2012; Robson et al. 2015).

In general, gamification techniques are intended to leverage people's natural desires for socializing, learning, mastery, competition, achievement, status, self-expression, altruism, or closure, or simply their response to the framing of a situation as game or play. Early gamification strategies mainly use rewards for players who accomplish desired tasks, or competition, to engage players (Lieberoth, 2015).

In this context, researchers have conducted a series of experiments on crowdsourcing online platforms by integrating gamification techniques into several tasks (e.g. image labeling, transcription etc.) and measuring their impact on the overall quality of labor. For example, Mekler, in her paper “Disassembling Gamification: The Effects of Points and Meaning on User Motivation and Performance”, examined the effects of gamification on performance in an image annotation task. By providing the task with a game frame, individuals were inspired to work harder and to put more effort into labeling the images, leading to an increase in quality (Mekler et al. 2013). In the same line, Feyisetan proposed that gamification through a personalized game experience leads to better accuracy and lower costs than conventional approaches that use only monetary incentives (Feyisetan et al. 2015). Similar studies have also been conducted in mobile crowdsourcing environments. Mobile crowdsourcing describes the collation of a large group of people’s views and/or observations, with the crowdsourcing activities processed on mobile phones or other handheld mobile devices (Eagle, 2009). Gamification seems to have more controversial effects on performance in this type of crowdsourcing. For example, Dergousoff and Mandryk deployed a task game on Android, rewarding players for participation in micro-experiments with earned in-game currency. Their findings demonstrate that, in the case of tasks requiring cognitively skilled workers, the crowdsourcing game had a negative impact on performance (Dergousoff & Mandryk, 2015). Similarly, Sigala, applying crowdsourcing gamification processes in tourism, highlights the relation between gamification, workers’ cognitive capabilities and personality, and the outcome (Sigala, 2015).
However, the literature so far does not provide any conclusive results regarding the impact of the various gamification design elements on online users’ outcomes, and there are no general guidelines on how to design an interesting and effective game suitable in every case, given several factors such as the type of the crowdsourcing job and the characteristics of the potential workers (Hossfeld et al. 2014).

Although the interest in gamification is growing steadily, all the abovementioned studies raise the question: does gamification work in achieving high quality of labor in crowdsourcing environments? The use of gamification techniques must be attentive and diligent, because there is a risk that extrinsic rewards undermine intrinsic motivations and hence, in essence, undermine gamification itself, which is an attempt to foster the emergence of intrinsic motivations (Hamari et al. 2014; Kaufman et al. 2016). Gamification has been an effective approach for increasing crowdsourcing participation and the online quality of crowdsourced work; however, differences exist between the different types of crowdsourcing online jobs. Research conducted in the context of crowdsourcing homogeneous tasks has most commonly used simple gamification implementations, such as points and

leaderboards, whereas crowdsourcing implementations that seek diverse and creative contributions employ gamification with a richer set of mechanics (Morschheuser et al. 2017).

Recall that crowdsourcing relies on a large and relatively cheap workforce of workers with various characteristics. The anonymity of the workers encourages some of them to cheat the employers in order to maximize their income, resulting in unreliable outcomes. In other words, some workers submit incorrect results in order to maximize their income by completing as many jobs as possible; others simply do not work correctly. Therefore, many techniques have been developed and proposed by scholars to detect and control cheating workers and invalid work results.

Cheat detection and quality assessment techniques were first introduced in the studies of Hsueh, Melville & Sindhwani and of Hirth, Hoßfeld & Tran-Gia. More particularly, the first suggested approach is based on a majority decision process to eliminate incorrect results. In the first step of the process, the employer submits his task to the crowdsourcing platform; the platform then duplicates the task, and i different workers participate in its completion. They submit their individual results, which might be correct or incorrect. The crowdsourcing platform performs a majority decision, and the result that most of the workers submitted is assumed to be correct and is returned to the employer (Hsueh et al. 2009; Hirth et al. 2011).
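The majority-decision step just described can be sketched as follows; this is a minimal illustration, assuming one categorical answer per worker, and not the actual implementation of any cited platform (the answer values are hypothetical):

```python
from collections import Counter

def majority_decision(results):
    """Given the answers submitted by the i workers assigned to a
    duplicated task, return the most frequent answer (assumed correct
    and returned to the employer) together with the agreement rate."""
    counts = Counter(results)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(results)

# Five workers labeled the same image; one submitted an incorrect result.
answer, agreement = majority_decision(["cat", "cat", "dog", "cat", "cat"])
print(answer, agreement)  # cat 0.8
```

A low agreement rate can additionally be used as a signal that the task itself is ambiguous rather than that a single worker cheated.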

In the second suggested quality control technique, i.e. the control group approach, the employer submits the main task to the crowdsourcing platform and the task is chosen by a worker, who submits the required task result. The crowdsourcing platform then generates new validation tasks for this result: the result of the main task is given to a group of j other workers, who rate it according to given criteria. In the following step, the ratings of the different workers are returned to the crowdsourcing platform, which calculates the overall rating of the main task. The main task is considered valid if the majority of the control group decides that the task was done correctly. The main worker is then paid and the result is returned to the employer (Hirth et al. 2013).
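A minimal sketch of the control group decision rule: j raters score the main worker's result, and the result is accepted only if a majority of the control group approves it. The 1–5 rating scale and the pass mark are illustrative assumptions, not details from Hirth et al.

```python
def control_group_accepts(ratings, pass_mark=3):
    """Accept the main task result if most control-group raters
    score it at or above `pass_mark` (here, an assumed 1-5 scale)."""
    approvals = sum(1 for r in ratings if r >= pass_mark)
    return approvals > len(ratings) / 2

# Four of five raters approve, so the main worker would be paid.
print(control_group_accepts([5, 4, 4, 2, 3]))  # -> True
```

The design trade-off compared with the majority decision approach is cost: only one worker performs the (possibly expensive) main task, while the cheaper validation tasks are crowdsourced.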

Moreover, based on a range of experiments, other researchers conclude that cheaters are less frequently encountered in novel tasks that involve creativity and abstract thinking. Crowd filtering as a cheat control mechanism was shown to have a significant impact on the observed cheater rates; while filtering by origin or by means of a recruitment step greatly reduced the amount of cheating, the batch processing times multiplied (Eickhoff & de Vries, 2013).

On the other hand, many scholars question the effectiveness of the above-mentioned quality assurance techniques and highlight the adoption of preference tests and gold-standard questions for detecting and monitoring the behavior of cheating workers. Preference tests refer to diagnostics, qualification tests or benchmarks consisting of pre-task questions which can be used as a proxy for the future quality of a worker's outcome (Buchholz & Latorre 2011). Gold-standard questions (or ground-truth questions) refer to questions, posed to workers either before or during the online job, to which the requester already knows the right answer; they can be used as proof or disproof of a worker's degree of task engagement (Venetis & Garcia-Molina, 2012). Finally, in his review paper, Difallah divides all these quality control processes into two major categories: a priori cheater dissuasion and a posteriori quality control (Difallah et al. 2012).
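The gold-standard mechanism can be illustrated with a short sketch: a worker's answers are scored against the questions whose correct answers the requester already knows, and the worker is trusted only above an accuracy threshold. The 80% threshold and all names below are illustrative assumptions.

```python
def gold_accuracy(worker_answers, gold):
    """Share of gold-standard questions the worker answered correctly."""
    hits = sum(1 for q, truth in gold.items()
               if worker_answers.get(q) == truth)
    return hits / len(gold)

def is_trusted(worker_answers, gold, threshold=0.8):
    """Flag a worker as trustworthy if gold accuracy meets an
    (assumed) 80% threshold; the requester only pays trusted workers."""
    return gold_accuracy(worker_answers, gold) >= threshold

# q1-q3 are hidden gold questions mixed into a batch; q4 is a real task.
gold = {"q1": "A", "q2": "C", "q3": "B"}
diligent = {"q1": "A", "q2": "C", "q3": "B", "q4": "D"}
cheater = {"q1": "B", "q2": "C", "q3": "A", "q4": "D"}
print(is_trusted(diligent, gold), is_trusted(cheater, gold))  # -> True False
```

Because the gold questions are indistinguishable from real tasks, a worker cannot selectively answer only them carefully, which is what makes them a useful engagement signal.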

As of today, the crowdsourcing market is flourishing and it is strongly based on monetary incentives. Because of this, it may attract more and more cheaters and thus give rise to novel cheating schemes. The literature so far has shown that many current quality control mechanisms can fail naively in detecting well-organized spammers. Hence, based on the presented overview, cheat detection and quality assurance techniques need further investigation, grounded in our understanding of which types of online jobs attract more cheaters and which task features have to be controlled (e.g. reward, design etc.).

As mentioned before, the design and the principles of crowdsourcing tasks vary. Hence, each task obviously requires particular working skills for its completion, but in online labor markets specific task features attract workers with a particular group of characteristics. In this direction, many scholars have examined the exact relationship between the characteristics of an online job and the factors that affect an individual's decision to select and participate in that task.

The characteristics of the online workforce are divided into two groups: the cognitive characteristics, which include demographic information (e.g. age, gender, ethnicity etc.) and working abilities (e.g. educational level, computer competence, language skills etc.), and the non-cognitive characteristics, which include information on the personality traits of the potential workers.

Concerning the first category, Schulze, examining the aforementioned relation, found that there is little influence of education level, age, and gender on workers' task choice and, subsequently, work effort (Schulze et al. 2011). For this reason, Morris, in his research, discusses the importance of priming a worker for participation in a crowdsourcing job and the need for techniques that trigger and strengthen a worker's cognitive profile, so that he works on the appropriate online job (Morris et al. 2012). Thus, Alagarai proposed the use of cognitively inspired features, such as graphics and music, in task design as a powerful technique for maximizing the performance of crowd workers by increasing their priming perception of the task's requirements (Alagarai et al. 2014). Moreover, Hassan and Curry introduced and evaluated the capability tracing technique, which measures the latent capabilities of workers in order to make inferences about their future performance on several heterogeneous tasks (Hassan & Curry, 2013). Clearly, the research on the relationship between a worker's cognitive skills and their impact on his overall performance needs further investigation and deeper understanding, so that a framework can be developed from which researchers can begin to further define key uses and characteristics associated with the phenomenon of crowdsourcing. For example, Erickson proposes that the theoretical and practical application of crowdsourcing will benefit from investigating the link between organizational need and desired crowd characteristics (Erickson et al. 2012).

Lately, several scholars have begun to understand that another group of factors which may emerge as a critical condition for achieving high quality of work in crowdsourcing tasks is workers' non-cognitive skills. Crowdsourcing is characterized by its large and anonymous workforce, consisting of individuals with different personalities. As in traditional labor markets, each employee's personality is better suited to jobs with specific characteristics, and this link is a crucial condition for higher levels of performance and productivity (Mount et al. 1998; Hogan & Holland, 2003; Murphy, 2005).

Similarly, for online labor marketplaces such as crowdsourcing environments, researchers have adopted theories and practices for estimating online performance through personality predictors in order to investigate more deeply the link between a crowd's diversity and work accuracy. Kazai first used behavioral observations (task completion time, fraction of useful labels, label accuracy) to define five worker types: Spammer, Sloppy, Incompetent, Competent, Diligent (Kazai et al. 2011), and then utilized psychological models and tests to measure the personality traits of online workers. She found that, on an image labeling task, a worker's degree of openness and conscientiousness relates significantly to his work accuracy (Kazai et al. 2012), while Mourelatos and Tzagarakis, by launching a transcription task, showed that a worker's emotional stability and extraversion levels have a significant impact on performance and that these effects vary significantly with the workers' socio-economic status (Mourelatos & Tzagarakis, 2016). These results indicate that different personality traits have different effects on the quality of results, depending on the characteristics and features of the online job.

By understanding the exact relationship between a worker's cognitive and non-cognitive abilities and his online performance, the goal of the research community will be to propose efficient techniques that achieve a close match between the features of online jobs and the characteristics of workers, resulting in outputs of high quality. Last but not least, all the aforementioned studies are summarized in Table 2 below, which lists for each study the authors, the paper title, the methodology category and the major findings.


1. Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., & Dustdar, S. (2013), "Quality control in crowdsourcing systems: Issues and directions" [Incentives and quality on crowdsourcing]: Different incentives are associated with different factors influencing the online quality of work.

2. Teo, T. S. H., Lim, V. K. G., & Lai, R. Y. C. (1999), "Intrinsic and extrinsic motivation in Internet usage" [Incentives and quality on crowdsourcing]: Definition of intrinsic and extrinsic online incentives.

3. Brabham, D. (2008), "Crowdsourcing as a Model for Problem Solving" [Incentives and quality on crowdsourcing]: The possibility of earning money (extrinsic motivation) is the most dominant factor for participating in crowdsourcing platforms.

4. Redi, J., & Povoa, I. (2014), "Crowdsourcing for Rating Image Aesthetic Appeal: Better a Paid or a Volunteer Crowd?" [Incentives and quality on crowdsourcing]: Monetary reward can have a multifaceted impact on the quality of online labor.

5. Hossfeld, T., Keimel, C., Hirth, M., Gardlo, B., Habigt, J., Diepold, K., & Tran-Gia, P. (2014), "Best Practices for QoE Crowdtesting: QoE Assessment With Crowdsourcing" [Incentives and quality on crowdsourcing]: Paid users are more likely to commit to the execution of a crowdsourcing task; however, they may not perform it as reliably as volunteer users may do, driven by their intrinsic motivation.

6. Mourelatos, E., & Tzagarakis, M. (2016), "Investigating Factors Influencing the Quality of Crowdsourced Work under Different Incentives: Some Empirical Results" [Incentives and quality on crowdsourcing]: The most reliable results were provided by volunteer users, driven by their intrinsic motivation.

7. Singla, A., & Krause, A. (2013), "Truthful incentives in crowdsourcing tasks using regret minimization mechanisms" [Incentives and quality on crowdsourcing]: Present optimal pricing policies, central to maximizing the quality of work, by determining the right monetary incentives using a regret minimization approach.

8. Rogstadius, J., Kostakos, V., Kittur, A., Smus, B., Laredo, J., & Vukovic, M. (2011), "An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets" [Incentives and quality on crowdsourcing]: Online work accuracy can be improved significantly through intrinsic motivators, especially in cases where extrinsic motivation is low.

9. Kaufmann, N., Schulze, T., & Veit, D. (2011), "More than fun and money. Worker Motivation in Crowdsourcing – A Study on Mechanical Turk" [Incentives and quality on crowdsourcing]: The incentives of crowdsourcing participants vary with the environment in which the experiments take place.

10. Aker, A., El-Haj, M., Albakour, M. D., & Kruschwitz, U. (2012), "Assessing Crowdsourcing Quality through Objective Tasks" [Incentives and quality on crowdsourcing]: Their results do not confirm previous studies which concluded that an increase in payment attracts more noise; they also find that a worker's country of origin has a partial impact on his performance.

11. Chandler, D., & Kapelner, A. (2013), "Breaking monotony with meaning: Motivation in crowdsourcing markets" [Task design and quality on crowdsourcing]: Explores the relationship between the "meaningfulness" of a task and worker effort.

12. Chandler, J., Paolacci, G., & Mueller, P. (2014), "Risks and rewards of crowdsourcing marketplaces" [Task design and quality on crowdsourcing]: Tasks must be designed to maximize the likelihood and ease with which workers can provide useful responses.

13. Li, H., Zhao, B., & Fuxman, A. (2014), "The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing" [Task design and quality on crowdsourcing]: Crowd members are heterogeneous, and requesters can take advantage of this by preselecting, through qualification tests, the workers most capable of performing specific tasks.

14. Finnerty, A., Kucherbaev, P., Tranquillini, S., & Convertino, G. (2013), "Keep it simple: reward and task design in crowdsourcing" [Task design and quality on crowdsourcing]: The complexity of a task can increase the cognitive demands on the worker, thereby having a detrimental effect on the performance of his work.

15. Van Pelt, C. R., Cox, R., Sorokin, A., & Juster, M. (2014), "Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated workers" [Online training techniques and performance]: Propose a working task framework which evaluates a worker's performance and provides in-task training through programmatically generated test tasks.

16. Dontcheva, M., Morris, R. R., Brandt, J. R., & Gerber, E. M. (2014), "Combining crowdsourcing and learning to improve engagement and performance" [Online training techniques and performance]: Workers gained skills through interactive step-by-step tutorials and tested their knowledge by improving real-world images submitted by requesters.

17. Costa, J., Silva, C., Antunes, M., & Ribeiro, B. (2011), "On using crowdsourcing and active learning to improve classification performance" [Online training techniques and performance]: Evaluate the improvement of performance with the use of active learning methods through state-of-the-art models.

18. Gadiraju, U., Fetahu, B., & Kawase, R. (2015), "Training workers for improving performance in crowdsourcing microtasks" [Online training techniques and performance]: Definition and exploitation of implicit and explicit training.

19. Feyisetan, O., Simperl, E., Van Kleek, M., & Shadbolt, N. (2015), "Improving paid microtasks through gamification and adaptive furtherance incentives" [Gamification as a quality assurance technique on crowdsourcing]: Gamification through a personalized game experience leads to better accuracy and lower costs than conventional approaches that use only monetary incentives.

20. Eickhoff, C., Harris, C. G., de Vries, A. P., & Srinivasan, P. (2012), "Quality through flow and immersion: gamifying crowdsourced relevance assessments" [Gamification as a quality assurance technique on crowdsourcing]: They designed a game-based task that is able to achieve high quality at significantly lower pay rates, facing fewer malicious submissions.

21. Mekler, E. D., Brühlmann, F., Opwis, K., & Tuch, A. N. (2013), "Disassembling gamification: the effects of points and meaning on user motivation and performance" [Gamification as a quality assurance technique on crowdsourcing]: Game-framed tasks increase workers' work effort and quality of labor.

22. Dergousoff, K., & Mandryk, R. L. (2015), "Mobile gamification for crowdsourcing data collection: Leveraging the freemium model" [Gamification as a quality assurance technique on crowdsourcing]: They deployed a task game on Android, rewarding players for participation in microexperiments with earned in-game currency.

23. Hossfeld, T., Keimel, C., & Timmerer, C. (2014), "Crowdsourcing quality-of-experience assessments" [Gamification as a quality assurance technique on crowdsourcing]: There are no general guidelines on how to design an interesting game for QoE assessment.

24. Sigala, M. (2015), "Gamification for crowdsourcing marketing practices: Applications and benefits in tourism" [Gamification as a quality assurance technique on crowdsourcing]: Demonstrates that gamification can be used in crowdsourcing as a factor influencing customer behaviour through marketing practices.

25. Kaufman, G., Flanagan, M., & Punjasthitkul, S. (2016), "Investigating the Impact of 'Emphasis Frames' and Social Loafing on Player Motivation and Performance in a Crowdsourcing Game" [Gamification as a quality assurance technique on crowdsourcing]: They highlight distinct intrinsic motivational factors, used to describe an online game in which players provide descriptive metadata "tags" for digitized images.

26. Hamari, J., Koivisto, J., & Sarsa, H. (2014), "Does gamification work? A literature review of empirical studies on gamification" [Gamification as a quality assurance technique on crowdsourcing]: They created a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances.

27. Morschheuser, B., Hamari, J., Koivisto, J., & Maedche, A. (2017), "Gamified crowdsourcing: Conceptualization, literature review, and future agenda" [Gamification as a quality assurance technique on crowdsourcing]: Gamification has been an effective approach for increasing crowdsourcing participation and the quality of the crowdsourced work; however, crowdsourcing of homogeneous tasks has most commonly used simple gamification implementations, such as points and leaderboards, whereas crowdsourcing implementations that seek diverse and creative contributions employ gamification with a richer set of mechanics.

28. Hirth, M., Hoßfeld, T., & Tran-Gia, P. (2011), "Cost-optimal validation mechanisms and cheat-detection for crowdsourcing platforms" [Cheat detection on crowdsourcing platforms]: They denote two crowd-based approaches to validate the submitted work (the majority decision approach and the control group approach).

29. Hsueh, P., Melville, P., & Sindhwani, V. (2009), "Data quality from crowdsourcing: a study of annotation selection criteria" [Cheat detection on crowdsourcing platforms]: They rely on a combination of gold-standard labels and majority voting to ensure result quality.

30. Eickhoff, C., & de Vries, A. P. (2013), "Increasing cheat robustness of crowdsourcing tasks" [Cheat detection on crowdsourcing platforms]: They design and formulate tasks (in types and interface) in such a way that they are less attractive for cheaters.

31. Buchholz, S., & Latorre, J. (2011), "Crowdsourcing Preference Tests, and How to Detect Cheating" [Cheat detection on crowdsourcing platforms]: Through crowdsourced preference tests and gold questions, they monitored and studied the behavior of cheaters and proposed metrics (i.e. withholding payment) for post-hoc exclusion of workers.

32. Hirth, M., Hoßfeld, T., & Tran-Gia, P. (2013), "Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms" [Cheat detection on crowdsourcing platforms]: They analyze the costs and accuracy of the two crowd-based approaches to validate submitted work (the majority decision approach and the control group approach).

33. Difallah, D. E., Demartini, G., & Cudré-Mauroux, P. (2012), "Mechanical Cheat: Spamming Schemes and Adversarial Techniques on Crowdsourcing Platforms" [Cheat detection on crowdsourcing platforms]: They review techniques currently used to detect spammers and malicious workers, whether bots or humans randomly or semi-randomly completing tasks; they then describe the limitations of existing techniques by proposing approaches that individuals, or groups of individuals, could use to attack a task on existing crowdsourcing platforms.

34. Venetis, P., & Garcia-Molina, H. (2012), "Quality control for comparison microtasks" [Cheat detection on crowdsourcing platforms]: They study error masking techniques (e.g. voting, gold-standard questions) for the detection of bad workers.

35. Schulze, T., Seedorf, S., Geiger, D., Kaufmann, N., & Schader, M. (2011), "Exploring task properties in crowdsourcing – an empirical study on Mechanical Turk" [Cognitive skills and performance]: There is little influence of education level, age, and gender on workers' task choice and work effort.

36. Morris, R. R., Dontcheva, M., & Gerber, E. M. (2012), "Priming for better performance in microtask crowdsourcing environments" [Cognitive skills and performance]: They studied the effects of priming on task performance in microtask platforms, finding that by using primes such as images and music the performance of crowd workers can be improved in the short term.

37. Alagarai Sampath, H., Rajeshuni, R., & Indurkhya, B. (2014), "Cognitively inspired task design to improve user performance on crowdsourcing platforms" [Cognitive skills and performance]: The use of cognitively inspired features, such as graphics and music, in task design is a powerful technique for maximizing the performance of crowd workers.

38. Hassan, U., & Curry, E. (2013), "A capability requirements approach for predicting worker performance in crowdsourcing" [Cognitive skills and performance]: They introduce and evaluate capability tracing, a technique for measuring the latent capabilities of workers so as to make inferences about their performance on heterogeneous tasks.

39. Erickson, L., Petrick, I., & Trauth, E. (2012), "Hanging with the right crowd: Matching crowdsourcing need to crowd characteristics" [Cognitive skills and performance]: They develop preliminary guidelines for matching the right crowd to the right job.

40. Morris, R. R., Dontcheva, M., Finkelstein, A., & Gerber, E. (2013), "Affect and creative performance on crowdsourcing platforms" [Behavior & personality traits and performance]: Those primed to feel happy exhibit enhanced creative performance, whereas those who merely report feeling happy exhibit impaired creative performance.

41. Kazai, G., Kamps, J., & Milic-Frayling, N. (2011), "Worker types and personality traits in crowdsourcing relevance labels" [Behavior & personality traits and performance]: They used behavioral observations (HIT completion time, fraction of useful labels, label accuracy) to define five worker types: Spammer, Sloppy, Incompetent, Competent, Diligent.

42. Kazai, G., Kamps, J., & Milic-Frayling, N. (2012), "The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy" [Behavior & personality traits and performance]: They found that a worker's degree of openness and conscientiousness relates significantly to accuracy.

43. Mourelatos, E., & Tzagarakis, M. (2016), "Worker's Cognitive Abilities and Personality Traits as Predictors of Effective Task Performance in Crowdsourcing Tasks" [Behavior & personality traits and performance]: Workers coming from developing countries show sloppy behavior in crowdsourcing tasks, resulting in poor overall quality of results.

Table 2. Related literature on quality of work in paid crowdsourcing online labor markets.

2.3 Research Gap and Thesis Contribution
Taking into consideration all the aforementioned studies on crowdsourcing, it is notable that the quality of labor in online marketplaces is at the center of attention. Researchers have used a broad variety of terms, strategies and approaches in order to analyze the factors influencing the online labor of crowdsourcing. Thus, prior studies have linked quality of work with the design of quality assurance techniques and their application by the online platforms, and with motivation and behavioral theories to increase workers' productivity.


Despite the increasing attention paid to quality of work, and the tendency of authors to focus on several dimensions of quality in online labor markets, the particular factors that affect the performance of the engaged parties of crowdsourcing (i.e. requesters, workers and online platforms), and hence the final outcome, are still unclear.

As a consequence, further investigation of the characteristics of this new way of labor is crucial in order to understand this area of study more deeply and to propose efficient quality assurance strategies which will improve the ability of individuals and online platforms to better comprehend crowdsourcing digital content, resulting in higher levels of quality of work.

For that reason, this thesis focuses on the quality of online labor by examining in depth the relationship between the characteristics of crowdsourcing online platforms and their overall performance through a macroscopic analysis (Chapter 3), and the relation between workers' skills (i.e. cognitive and non-cognitive abilities) and socio-economic status and their quality of results through a microscopic examination (Chapter 4).


Chapter 3 An investigation of factors affecting the visits of online crowdsourcing and labor platforms

In this chapter the macroscopic approach to the quantitative study of performance is presented. This includes a definition of the concept of a crowdsourcing online platform's performance as well as an outline of its possible measurements. The macroscopic results on performance will be combined with the subsequent findings regarding the performance of crowdsourcing individuals in order to derive hypotheses for the study. The established relationships between the discussed concepts are then visualized in a research conceptual model presented in this chapter.

3.1 Introduction The volume of knowledge shared online via the World Wide Web (WWW) is increasing exponentially. In today's WWW environment, users exchange knowledge and opinions by using discussion fora, social networks, as well as a variety of collaboration support systems. The ubiquity of the WWW and users' large-scale interaction make it possible to characterize these environments as exhibiting "collective intelligence" (Malone, 2009), defined as "a universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills" (Levy, 1997). While in the abovementioned environments collective intelligence emerges rather implicitly, there are attempts to explicitly harness and exploit such collective intelligence in today's WWW settings.

With the tremendous growth of Web 2.0 and its participatory nature, a huge, highly knowledgeable workforce has entered the online labor market, making possible the development of new forms of marketplaces and innovative models of online labor (Kim & Lee, 2006). A new approach to using this workforce and its wisdom is referred to as crowdsourcing, which relies on the motto "Everyone Knows Something" (Adamic et al. 2008). Crowdsourcing can be viewed as a further development of outsourcing. Since Jeff Howe introduced the term "Crowdsourcing" for the first time in 2006 (Howe, 2006), defining it as "…the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call", crowdsourcing has become a pivotal part of the current Internet, where everything is designed to take advantage of the networked world. Every day thousands of workers categorize images, write articles, translate texts or perform several other types of tasks in such environments. Crowdsourcing, as a term, is a strategic model to attract an interested, motivated crowd of individuals capable of providing solutions superior in quality and quantity to those that even traditional forms of business can provide (Brabham, 2008). Today

the term is equivalent to online labor. With the growth of major crowdsourcing and crowdfunding websites such as Amazon Mechanical Turk and Kickstarter respectively, a huge human workforce with a large knowledge base can be easily accessed and utilized to tackle problems that require human intelligence. Although much has been said about the quality of results and the professionalism of workers on such online platforms (Poetz & Schreier 2012), development in this field is constant and rapid. Therefore, analysis is needed to understand the anatomy of such online platforms and the characteristics which significantly affect their performance.

Over the past two decades a great deal of attention has been paid to the development and use of measures of performance that are useful in motivating and reporting on the performance of businesses and organizations. More specifically, economists have made serious attempts to identify the major determinants of business performance by analyzing the production function side of typical firms and drawing distinctions between "hard" and "soft" performance criteria (Dalton et al. 1980), while others argue that the structure within each industry is related to company performance, and that for its appropriate measurement we must define the segment scope (e.g. products), the vertical scope (e.g. what activities are performed by the firms versus suppliers and channels) and the geographic scope in which the firm operates (Porter, 1979; Porter, 1986).

Although web-enabled firms are not identical to traditional ones, there is growing evidence of their convergence, due to the general embodiment of the Internet and its principles in both cases (Straub et al. 2004). Moreover, we must not forget that an online platform is in fact a type of online business, and all businesses aim at improving their profitability in order to remain sustainable and competitive (Alsyouf, 2007). In our case, the dataset consists of web-enabled firms (based on e-business model principles), having web users as potential clients and Web 2.0 crowdsourcing communities as their field of activity (Hoegg et al. 2006). For these reasons, their main strategies focus on how to improve their conversion rate, i.e. the percentage of their visitors who take a desired action (Rappa, 2000). Yet it is unclear which factors act on a crowdsourcing website's effort to attract targeted traffic, so as to achieve financial success.

Based on the above, the research reported in this chapter investigates several aspects of crowdsourcing platforms and determines their impact on platform performance and competitiveness. The study monitored 174 crowdsourcing and crowdfunding platforms over a period of five years in order to assess whether their turnover is affected by factors related to their characteristics and practices (such as their type of services, quality assurance mechanisms, region of establishment and the usage of different digital marketing strategies). Based on the definition that "business performance", in general, comprises the actual output of a firm regardless of whether it is web-enabled or not (Richard et al. 2009), and that online firms are closely engaged with web metrics to gauge their performance, we gathered several website performance

metrics for each crowdsourcing platform as indicators of its performance (Benwell et al. 2010).

The abovementioned data were drawn from a well-known website called Alexa, which provides analytical insights into website traffic. In the context of Webometrics, the process of measuring various aspects of websites, including their popularity and usage patterns, Alexa has been shown to outperform other similar services such as Google Trends for Websites and Compete (Vaughan and Yang, 2013). Regarding the characteristics of crowdsourcing platforms, we reviewed every site and manually collected all the data necessary for the analysis (Kim et al. 2010). In the end we faced very few drawbacks. Some information and data were not immediately available on the websites, which influenced the scope and the completion time of our research; thus we had some missing values, especially for traffic, which were ultimately treated as unavailable (N/A). Regarding the web-enabled firm-specific characteristics, we obtained the variables by analyzing each website one by one based on several criteria. For that purpose, we set up an account on each crowdsourcing website in order to have full access to its characteristics.

The study of this chapter concerns a sample of the top-ranking one hundred crowdsourcing websites in each year over a five-year period. Analyzing the time-invariant variables with OLS and quantile regression (due to the large deviation in performance in our database) indicates that there is a strong correlation between the performance of such websites and specific characteristic groups, while our fixed-effects model, controlling for the effects of time-invariant variables, revealed that a website's traffic and mobile penetration have a great impact on its performance over time.
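The within (fixed-effects) estimator underlying this panel analysis can be illustrated with a pure-NumPy sketch: the outcome and regressors are demeaned within each platform, wiping out time-invariant platform effects, and OLS is then run on the demeaned data. The toy panel below is fabricated solely for illustration; it is not drawn from the thesis dataset.

```python
import numpy as np

def within_transform(values, entity_ids):
    """Subtract each entity's own mean from its observations,
    removing time-invariant, entity-specific effects."""
    out = np.empty_like(values, dtype=float)
    for e in np.unique(entity_ids):
        mask = entity_ids == e
        out[mask] = values[mask] - values[mask].mean(axis=0)
    return out

# Toy panel: 2 platforms x 3 years, one regressor (e.g. traffic),
# with a within-platform slope of 2 built into the data.
entities = np.array([0, 0, 0, 1, 1, 1])
x = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([2.0, 4.0, 6.0, 25.0, 27.0, 29.0])

beta, *_ = np.linalg.lstsq(within_transform(x, entities),
                           within_transform(y, entities), rcond=None)
print(round(float(beta[0]), 6))  # -> 2.0
```

Note that the within transformation absorbs each platform's different level (the second platform's y is shifted upward), so the estimator recovers only the common within-platform slope, exactly the property the fixed-effects model exploits.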

3.2 Theoretical Background In recent years, many crowdsourcing studies have been conducted, providing the beginnings of a literature on the most fundamental issues raised by crowdsourcing. Crowdsourcing, in general, has attracted the interest of researchers from various fields, who aspire to survey, analyze, comprehend and improve its models, issues and systems of control (Saxton et al. 2013).

Therefore, many research efforts have as their focal point the development of mechanisms that make quality control and cheat detection possible (Donmez et al. 2009), or the suggestion of a taxonomy for crowdsourcing systems on the World Wide Web (Doan et al. 2011), while others have tried to answer important questions (e.g. Which tasks are paid the most? When are the users of my platform active?) by analyzing the anatomy of a crowdsourcing platform (Hirth et al. 2011). Regarding the “business nature” of crowdsourcing websites, research has focused on very specific economic aspects. Ipeirotis in 2010 analyzed the demographics of crowdsourcing participants on Amazon Mechanical Turk, which is probably the most popular crowdsourcing online platform. He revealed that the scope of participation in such environments is strongly related to the participant’s country of origin and its economic characteristics (Ipeirotis 2010). Moreover, Rogstadius in 2011 examined the interaction between intrinsic and extrinsic motivation on job performance, introducing for the first time the “crowding out” effect, meaning that once extrinsic motivation becomes stronger than intrinsic motivation, accuracy converges to equal or lower levels regardless of the level of extrinsic motivation provided (Rogstadius et al. 2011). Furthermore, Brabham, drawing on gratifications theories, suggests that an individual’s motivation for turning to crowdsourcing online platforms varies depending on the extent of the individual’s need for making money, developing creative skills, and leveraging experience into freelance work (Brabham, 2010). Last but not least, researchers have also investigated models of workers supplying labor to paid crowdsourcing projects in an attempt to estimate workers’ reservation wages (Horton et al. 2010).

As defined above and highlighted in figure 5, crowdsourcing, being an online marketplace, exhibits several new characteristics of labor which need further investigation. Researchers are beginning to understand more and more the power of its “business nature”, forming appropriate conditions for further economic analysis of the online platforms providing crowdsourcing and crowdfunding services. The study in this chapter presents a macroscopic analysis in order to clarify several factors, such as an online platform’s region of origin, the quality and reliability strategy it follows, the services it provides and its digital marketing tactics, which have a great impact on a crowdsourcing website’s online activity and, by extension, on its performance, financial sustainability and competitiveness.

Figure 5. The Business Process of the collaborative workforce of crowdsourcing.


3.3 Data Analysis

3.3.1 Data Source

In order to estimate more accurately the aforementioned performance of the crowdsourcing online platforms, we gathered data from alexa.com regarding the web sessions that occurred on them and characterize their traffic from 2012 to 2016. By the term session, we mean a user’s unique interaction that takes place on a website. By definition, sessions contain the multiple screen or page views, events, social interactions and ecommerce transactions that occur on a website within a given time frame (Stevanovic et al. 2011). In the context of the analysis, sessions are used as an absolute measure of a website’s traffic; for this, Alexa’s monthly visitors’ sessions metric is used. We gathered each website’s monthly sessions and calculated the website’s average number of unique sessions in each year.
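The aggregation from monthly session counts to yearly averages can be sketched as follows (a minimal pandas illustration; the column names and figures are hypothetical, not the actual Alexa data):

```python
import pandas as pd

# Hypothetical monthly session counts for one website (illustrative values).
monthly = pd.DataFrame({
    "website":  ["example-crowd.com"] * 4,
    "year":     [2012, 2012, 2013, 2013],
    "sessions": [1_000_000, 1_200_000, 1_400_000, 1_600_000],
})

# Average monthly sessions per website and year, as used for the panel.
yearly = (monthly.groupby(["website", "year"])["sessions"]
                 .mean()
                 .reset_index(name="avg_sessions"))
```

Each website-year cell of the panel is then the mean of that year's monthly figures.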

In our study, sessions are divided into two types: sessions initiated by individuals accessing a website via desktop computers, which from now on will be referred to as “Desktop Sessions”, and sessions initiated via mobile devices and smartphones, which will be referred to as “Mobile Sessions”; their sum will be referred to as “Overall Sessions”. The reason for this division is twofold. Firstly, the way users access the website is an important factor in the analysis presented in this chapter. Secondly, having in mind that recent studies have shown an upward trend of mobile crowdsourcing, we investigated the general mobile penetration (and its evolution in time) in our websites dataset (Eagle, 2009; Chatzimilioudis et al. 2012; Gupta et al. 2012).

For this reason, we assert in our study that the number of sessions that occurred on a crowdsourcing website is an efficient estimator of its performance and economic growth, and it constitutes our dependent variable (Plaza, 2011). Data regarding the traffic of these websites are also drawn from alexa.com (a site’s ranking is based on the measure of unique visitors over a rolling 3-month period) for the years 2012 to 2016 (Lo & Sedhain, 2006) as a relative measure of a website’s popularity, while the data regarding their anatomy and characteristics have been collected by the author one by one, following the steps of our methodology.

Finally, we obtained the aforementioned data from alexa.com (Alexa’s Certified Metrics) because the data were openly available and its on-site analytics have been shown to be almost the same as those of Google Analytics, which offers limited access for each website (Plaza, 2011; Zahran et al. 2014).


3.3.2 Methodology

In this section, we detail the specific procedures we undertook for selecting cases and gathering the data. Our website selection was based on a two-stage selection methodology. Firstly, to build our sample, we undertook the following steps in order to search for candidate crowdsourcing and crowdfunding websites:

• We looked at all organizations noted in academic articles found via Science Direct, searching on the term “crowdsourcing.”
• We used the three most popular search engines - Google, Bing and Yahoo! - to identify crowdsourcing online platforms that stakeholders are likely to encounter; we looked at the first 100 sites returned by each search.
• We looked at the Wikipedia entries for “crowdsourcing” and “crowdfunding”, and searched Alexa for the top-ranked crowdsourcing and crowdfunding websites and their most related ones.

As a result, we gathered a great number of crowdsourcing and crowdfunding online platforms. In selecting the final number of websites for our research, we were guided by the following three criteria:
a) Language: All crowdsourcing websites reviewed had to present their services in English. This facilitated the work of assessing the services provided and comprehending their use.
b) Presentation of services provided: Websites had to provide the information required in order to facilitate their review.
c) Availability of information: The information needed for completing the review had to be offered. Many websites do not disclose all the information required, and such websites were excluded from our analysis.

This two-stage method resulted in 174 candidate crowdsourcing and crowdfunding websites, which were examined over a five-year time period (2012-2016). In line with our method noted earlier, to gather their characteristics and embody them in our analysis, we conducted a content analysis by reviewing each website separately and assembling the appropriate information for the following groups of variables (Krippendorff, 2004).

The websites selected were assessed against a number of criteria, which aim to capture various aspects of the services offered. These criteria cover technical as well as operational features of the websites reviewed. Below we present the criteria in greater detail:

Type of service provided. Services provided by websites were grouped into the following ten categories (Schenk & Guittard, 2011):

a. Microworks/Simple tasks, which are considered the smallest unit of work in a virtual assembly line, e.g. categorization, tagging, Web research, transcription, etc.
b. Crowdfunding, which is the collection of finance from backers (the crowd) to fund an initiative (project). Crowdfunding has its origins in the concept of crowdsourcing, which is the broader concept of an individual reaching a goal by receiving and leveraging small contributions from many parties. Crowdfunding is the application of this concept to collect funds through small contributions by many parties in order to finance a particular project or venture.
c. Mobile crowdsourcing services, which are applications for mobile phones based on the “crowd”.
d. Content Generation services, in which content is generated by the crowd. This method is becoming increasingly popular because it offers an alternative to content creation and content curation.
e. Data Entry services, which are projects using many different modi operandi, e.g. Excel, Word, electronic data processing, typing, coding and clerical assignments.
f. High knowledge intensity services, which are specialized services in particular fields, e.g. health, law, insurance, consultancies, data management, market research and cloud applications.
g. Program developing services, which focus on having software implemented by the crowd.
h. Web and graphic design services, which use the crowd’s contribution in the creation of artistic projects.
i. Translation services, which aim at translating content from a source language into a target language.
j. Product reviews and testing, in which such tasks are conducted by the crowd.

Quality & Reliability. This group of variables is used to report which techniques the website employs to ensure the quality of results provided by workers. It also includes the techniques a platform provides for cheat detection in order to ensure workers’ reliability (Wang et al. 2011).

Region. Indicates the region of origin the platform is operating in (Ross et al. 2010). Based on our sample, we have four basic categories: North America, Europe, Australia and Asia.

Online Imprint. This variable reflects the strategies a platform uses as a tool of digital marketing and includes three categories: social networks, video streaming-sharing communities and blogs/forums (Thackeray et al. 2008).

Traffic Acquisition. This variable consists of the different origins of the visits occurring on each crowdsourcing online platform. For example, online platforms with a good presence on search engines (e.g. the Google search engine) managed to increase their sessions over time, resulting in an improvement of their economic activity (Ortega & Aguillo, 2010; Tierney & Pan, 2012).


3.3.3 Descriptive Statistics

Our sample comprises 174 of the most well-known and high-ranked English-language crowdsourcing and crowdfunding websites from 2012 to 2016.

Table 3 presents the summary statistics of our dataset. Column 1 shows the number of observations for each variable. It is noticeable that this number differs among variables, because a number of websites either shut down or changed their field of service during the course of our research. Columns 2, 3, 4 and 5 present the mean, minimum and maximum values of each variable and the standard deviation, respectively.

Variables                          Code_Name       Obs    Mean    Min    Max     Std. Dev.
                                                   [1]    [2]     [3]    [4]     [5]
Desktop Sessions (in millions)
  (Dependent)                      lnsessions      775    12.70   6.91   20.44   2.45
Alexa Rankings (in thousands)      alexacategory   782    3.41    1      5       1.28
Mobile Sessions (in millions)      lnmsessions     663    11.89   2.99   19.92   2.42
Mobile Penetration (%)             mobilepen       776    .28     .00    .80     .19

Websites' Type of Services/Tasks
Microtasks                         mwk             870    .17     0      1       .37
Crowdfunding                       crf             870    .36     0      1       .48
Tasks on Mobile                    mcw             870    .04     0      1       .20
Content Generation                 cntg            870    .09     0      1       .28
Data Entry                         dte             870    .07     0      1       .25
High Tech                          hts             870    .21     0      1       .41
Program Developing                 pdvp            870    .07     0      1       .25
Graphic Design                     dsns            870    .13     0      1       .34
Translation                        trs             870    .04     0      1       .20
Reviews & Testing Products         rtp             870    .06     0      1       .24

Websites' Quality Assurance
No Quality Assurance               qr1             870    .33     0      1       .47
Reviews & Ratings                  qr2             870    .33     0      1       .47
Workers Profile                    qr3             870    .45     0      1       .50
Skills & Practice Tests            qr4             870    .11     0      1       .32
Spamming Tools                     qr5             870    .25     0      1       .43

Websites' Region
North America                      NorthAmerica    870    .69     0      1       .46
Europe                             Europe          870    .22     0      1       .41
Australia                          Australia       870    .02     0      1       .15
Asia                               Asia            870    .07     0      1       .25

Websites' Online Imprint
Social Networks                    Social          835    .14     0      1       .35
Blogs and Forums                   Blogforum       835    .78     0      1       .41
Video Communities                  VideoCom        835    .07     0      1       .26

Traffic Acquisition
Google                             google          870    .24     0      .47     .08
Youtube                            youtube         870    .04     0      .32     .06
Facebook                           facebook        870    .06     0      .25     .06

Table 3. Descriptive Statistics for the Whole Sample
Source: Dataset with results drawn from alexa.com from 2012 to 2016. Author's calculations.
Notes: The whole sample consists of 174 online platforms. Desktop sessions and mobile sessions are in logarithmic values. Alexa ranking is a categorical variable which shows whether an online platform/website is very high, above average, average, below average or very low in the Alexa leaderboard.

Recall that our database consists of 174 crowdsourcing websites, assuming that crowdfunding is a unique subcategory of crowdsourcing services. Table 4 presents the descriptive statistics for this categorical variable. Of all our websites, approximately 33% are engaged in crowdfunding services, approximately 19% in high tech services and approximately 13% each in graphic design and microtask services.

Websites' Type of Services     Freq.    Percent    Cum.
Microtasks                     22       12.64      12.64
Crowdfunding                   57       32.76      45.40
Tasks on Mobile                4        2.30       47.70
Content Generation             5        2.87       50.57
Data Entry                     8        4.60       55.17
High Tech                      33       18.97      74.14
Program Developing             4        2.30       76.44
Graphic Design                 23       13.22      89.66
Translation                    7        4.02       93.68
Reviews & Testing Products     11       6.32       100.00
Total                          174      100.00

Table 4. Descriptive Statistics for Websites' Type of Services.
Source: Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.

Additionally, table 5 describes the relationship between a website’s region of origin and its basic provided service. The data would seem to suggest that crowdsourcing online platforms located in North America are more specialized in tasks on mobile (i.e. all mobile crowdsourcing websites are located in the USA and Canada, indicating a high degree of satisfaction with mobile services in general by their users) (Turel & Serenko 2006), in data entry (87.50%) and in reviews and testing products (81.82%). On the other hand, crowdsourcing websites located in Europe mostly provide translation services (42.86%) and microtasks (36.36%). In Australia the most common provided service is graphic design (13.04%) and in Asia program developing services (25.00%). Taking into account the abovementioned statistical data, we can surmise that websites have somehow formulated their framework of operation depending on their location and its demographic characteristics. For example, although crowdfunding relaxes geographic constraints among funders (Agrawal et al. 2010), a growing trend of such projects has been observed in the USA in recent years, the key reasons being the over a billion dollars spent by millions of individual crowdfunding backers and a large-scale action by the US Congress to encourage crowdfunding as a source of capital for new ventures (Burtch et al., 2011). Hence, a variety of geographic effects on funding have been identified that play an important role in its efficient implementation (Mollick, 2014).

                               Region (%)
Websites' Type of Services     North America    Europe    Australia    Asia
Microtasks                     54.55            36.36     0.00         9.09
Crowdfunding                   63.16            24.56     1.75         10.53
Tasks on Mobile                100.00           0.00      0.00         0.00
Content Generation             80.00            20.00     0.00         0.00
Data Entry                     87.50            12.50     0.00         0.00
High Tech                      78.79            18.18     0.00         3.03
Program Developing             75.00            0.00      0.00         25.00
Graphic Design                 69.57            13.04     13.04        4.35
Translation                    42.86            42.86     0.00         14.29
Reviews & Testing Products     81.82            18.18     0.00         0.00
Total                          68.97            21.84     2.30         6.90

Table 5. Relationship between the online platforms' region of origin and their type of services
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.

In order to better understand the impact of a website’s region of origin on its crowdsourcing process, we also investigated the relationship between the online platforms’ quality mechanisms and their location (table 6). According to table 6, approximately one out of three websites located in North America, Europe and Asia has not yet adopted any quality control mechanism in its crowdsourcing process. It is noteworthy that “workers’ profiles” and “spamming tools” seem to be the most widely adopted mechanisms of quality control and assurance by the websites, regardless of location.


                    Quality Mechanisms (%)
Region              No Quality    Reviews      Workers'    Skills &          Spamming
                    Mechanisms    & Ratings    Profiles    Practice Tests    Tools
North America       36.04         4.50         21.62       8.11              29.73
Europe              37.84         0.00         40.54       0.00              21.62
Australia           0.00          0.00         75.00       0.00              25.00
Asia                33.33         8.33         41.67       8.33              8.33
Total               35.37         3.66         28.66       6.10              26.22

Table 6. Relationship between the online platforms' quality mechanisms and their region of origin
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.

Effective quality control plays an important role in determining the success of any crowdsourcing procedure, because it is increasingly associated with quick, cheap and easy machine learning strategies (Lease, 2011). Therefore, trying to understand these mechanisms more deeply, we analyze them with respect to the websites’ provided services (table 7). One major finding is that in the case of mobile crowdsourcing there is zero use of the existing quality control mechanisms, suggesting that they are largely unsuitable for mobile crowdsourcing environments. This shows that, although crowdsourcing grows rapidly on smartphones, several aspects of its implementation, such as quality control, remain at an early stage (Zhu et al. 2014). Last but not least, based on the statistics of table 7, we can easily ascertain which quality mechanism is most suitable depending on the provided type of service. For example, online platforms with crowdfunding and microtask services mainly use “workers’ profiles” and “spamming tools” as quality assurance mechanisms.

                          Quality Mechanisms (%)
Websites' Type of         No Quality    Reviews      Workers'    Skills &          Spamming
Services                  Mechanisms    & Ratings    Profiles    Practice Tests    Tools
Microtasks                33.33         4.76         23.81       9.52              28.57
Crowdfunding              33.93         0.00         39.29       0.00              26.79
Tasks on Mobile           100.00        0.00         0.00        0.00              0.00
Content Generation        33.33         0.00         0.00        0.00              66.67
Data Entry                28.57         0.00         14.29       0.00              57.14
High Tech                 41.94         6.45         19.35       9.68              22.58
Program Developing        0.00          50.00        0.00        0.00              50.00
Graphic Design            30.43         4.35         39.13       8.70              17.39
Translation               28.57         0.00         28.57       28.57             14.29
Reviews & Testing
Products                  50.00         0.00         20.00       10.00             20.00
Total                     35.37         3.66         28.66       6.10              26.22

Table 7. Relationship between the online platforms' quality mechanisms and their type of services
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.


Our data collection refers to a five-year period of time (2012-2016). For that reason, some of our variables, such as an online platform’s overall sessions (desktop and mobile), Alexa ranking and mobile penetration, are time-variant, meaning that their values change over time. Table 8 presents the summary statistics of these time-variant variables of our database.

                                              Years
Time-variant Variables            2012        2013        2014        2015        2016
Overall Sessions (in millions)    6.696       7.601       8.003       9.229       11.333
                                  (47.950)    (52.001)    (52.208)    (54.708)    (62.990)
Mobile Sessions (in millions)     211         1.719       2.780       4.347       6.437
                                  (936)       (11.484)    (18.506)    (27.467)    (37.493)
Mobile Penetration (%)            .059        .176        .299        .393        .478
                                  (.080)      (.128)      (.132)      (.129)      (.129)
Alexa Ranking (in thousands)      255.878     254.179     175.167     224.538     229.194
                                  (296.280)   (282.825)   (199.953)   (298.290)   (322.428)

Table 8. Summary statistics for time-variant variables.
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174. Standard deviations are given in parentheses.

Desktop sessions are positively correlated with mobile sessions occurring on crowdsourcing and crowdfunding websites over time (without, however, taking into consideration the within variation among crowdsourcing websites), while at the same time the sum of these sessions (overall sessions) moves in the opposite direction from the values of the Alexa ranking (figure 6 & figure 7); recall that lower Alexa ranking values indicate higher-ranked positions.

Figure 6. The positive association of websites’ session by desktop and sessions by mobile.


Figure 7. The negative association of websites’ overall sessions and alexa rankings.

This means that a crowdsourcing website in a high-ranked traffic position may achieve high performance. For example, a crowdsourcing platform may receive a large number of unique visitors, thereby occupying a high-ranked position, and this pool of users in turn generates sessions.

It is also notable that the average mobile penetration in our sample, which is calculated by dividing the average mobile sessions by the average desktop sessions, is 28% on average (for the period 2012-2016) and its value is growing rapidly over time (i.e. in 2012 mobile penetration was approximately 6%, while in 2016 it reached almost 48%) (Table 8). Figure 8 shows the percentage increase of mobile penetration in crowdsourcing and crowdfunding platforms in each region over time. More specifically, the largest percentage increases of mobile penetration from 2012 to 2016 in crowdsourcing websites occurred in Australia (40.68%) and Asia (19.95%), while in North America the increase was 9.67% and in Europe 16.91%.
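The penetration measure described above reduces to a simple ratio; a toy computation (with made-up figures, not the thesis averages) illustrates it:

```python
# Hypothetical yearly averages, in millions of sessions (illustrative only).
avg_mobile_sessions  = 2.8
avg_desktop_sessions = 10.0

# Mobile penetration as defined in the text: mobile over desktop sessions.
mobile_penetration = avg_mobile_sessions / avg_desktop_sessions
```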

Our study is in line with Google reports that smartphone penetration in Australia was 66% in 2013, rising dramatically from just 19% in 2007, with a similar impact on the use of mobile devices for the completion of crowdsourcing services by Australians (King, 2014). Similarly, in contrast to desktop computers, mobile phone penetration in India has been very high (about 50%) in recent years. Many of these phones are simple, “candy-bar” style phones capable of surfing the web at a low cost of mobile Internet, making it easy for workers in Asia to complete crowdsourcing tasks quickly (Narula et al. 2011; Vashistha et al. 2015).


Figure 8. The average percentage change of mobile penetration in crowdsourcing websites over time by region

The average Alexa ranking of the websites under examination fluctuates over time (table 8). Figure 9 and table 8 show growing traffic on crowdsourcing and crowdfunding online platforms in 2014. Our Alexa ranking analysis also revealed that this traffic gradually declined over the following years, while desktop sessions show an increasing trend. A possible interpretation is that in 2014 crowdsourcing gained, for several reasons, growing attention, which did not translate into permanent users of its services. Hence, the total number of crowdsourcing participants decreased over the following years, while its core supporters increased their use.

Figure 9. Distribution of websites’ Alexa rankings over time. On the y-axis, the closer a website’s distribution is to zero, the higher its position in the Alexa ranking.


Having categorized the websites’ average Alexa ranking values over time (based on the quantile values of their distribution), we conducted a cross-tab analysis. Table 9 reveals that 30% of the crowdsourcing and crowdfunding online platforms in Australia manage to occupy high-ranked traffic positions, while the respective online platforms in Europe and Asia fail to attract attention. In North America, the percentages are almost equally distributed among the categories of the Alexa rankings.

                    Alexa Rankings (%)
Region              Very Low    Below Average    Average    Above Average    Very High
North America       21.39       25.59            25.59      16.45            10.97
Europe              35.71       22.62            26.79      11.31            3.57
Australia           0.00        15.00            15.00      40.00            30.00
Asia                40.43       31.91            17.02      0.00             10.64

Table 9. Relationship between the online platforms' average Alexa rankings and their region of origin.
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.

Furthermore, table 10 presents the relationship between the websites’ average Alexa ranking values and their “quality control” mechanisms. It is notable that online platforms with no embodied mechanisms of quality control and assurance hold the low-traffic positions of the Alexa ranking (approximately 70% are located below average). Hence, our initial results raise the question of quality control in crowdsourcing systems as a critical issue and also affirm the need for a framework characterizing various dimensions of quality assurance in such online environments (Allahbakhsh et al. 2013). Moreover, based on table 10, we can see that the impact on a website’s traffic differs depending on its quality control mechanism.

                            Alexa Rankings (%)
Quality Mechanisms          Very Low    Below Average    Average    Above Average    Very High
No Quality                  47.35       20.41            15.51      8.98             7.76
Reviews & Ratings           36.51       37.04            11.11      12.23            3.11
Workers' Profiles           15.81       26.51            28.37      15.35            13.95
Skills & Practice Tests     15.91       13.64            61.36      4.55             4.55
Spamming Tools              11.44       28.86            25.87      20.90            12.94

Table 10. Relationship between the online platforms' average Alexa rankings and their quality mechanisms
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Number of observations (online platforms under examination) equals 174.

In order to have a better view of the desktop sessions accomplished on the crowdsourcing websites under examination (our dependent variable), we split the sample into three major categories based on quantile values. The low-sessions category includes websites in the left tail of the distribution (i.e. the bottom 25% of our sample), the intermediate-sessions category includes websites between the 25th and 75th percentiles of the distribution, and the high-sessions category consists of websites with a great number of sessions (i.e. from the 75th to the 100th percentile of the distribution) (Figure 10).

Figure 10. Distribution of online platforms’ desktop sessions
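The three-way split at the 25th and 75th percentiles can be sketched as follows (a hypothetical pandas illustration with simulated session counts, not the thesis dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated desktop-session counts for 200 hypothetical websites.
sessions = pd.Series(rng.lognormal(mean=12, sigma=2, size=200))

# Cut the distribution at its 25th and 75th percentiles into three groups.
q25, q75 = sessions.quantile([0.25, 0.75])
category = pd.cut(
    sessions,
    bins=[-np.inf, q25, q75, np.inf],
    labels=["Low", "Intermediate", "High"],
)
```

By construction, roughly a quarter of the websites fall into each tail category and half into the intermediate one.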

Afterwards, we investigated the relationship between our dependent variable and several groups of independent variables through a cross-tab analysis. As can be seen from table 11, columns 1, 2 and 3, there are some major differences in websites’ performance (i.e. desktop sessions) depending on their quality control mechanisms, region of origin and type of services. For example, regarding the quality mechanisms group of variables, the results demonstrate that websites with spamming tools or with profiles carrying working information about their workers achieve high sessions in approximately 40% of cases (i.e. roughly two out of five). With respect to the region category, the results show that 70% of the crowdsourcing websites located in Australia manage to achieve high performance, while in the other regions the percentages are almost equally distributed among the subcategories of performance. Last but not least, with reference to websites’ provided services, the statistics would seem to suggest that it is mainly online platforms with mobile crowdsourcing (75%) and graphic design services (40%) that attain high values of performance. On the contrary, crowdsourcing online platforms primarily providing program developing services have low levels of performance at a rate of 60%, indicating that although efforts are made, potential users will not switch easily to crowdsourcing environments for IT (i.e. information technology) solutions (Peng et al. 2014).


Moreover, columns 4 and 5 show t-tests comparing the percentages in each group of variables between crowdsourcing online platforms with high and low levels of performance. It can be seen from table 11 (column 5) that there are several statistically significant differences, at the 1%, 5% and 10% significance levels, between the percentages of crowdsourcing websites with low and high levels of desktop sessions in each group of variables.

                               Desktop Sessions (%)                    T-test
Group of Variables             Low        Intermediate    High        Difference [3]-[1]    |t|
                               [1]        [2]             [3]         [4]                   [5]
Alexa Rankings (in thousands)  553,560    133,995         63,975      -489,585              21.01***
Mobile Sessions (in millions)  56         1,466           12,162      12,106                3.88***

Quality Mechanisms
No Quality                     40.69      29.66           29.66       -11.03                6.33***
Reviews & Ratings              50.00      40.00           10.00       -40.00                8.80***
Workers' Profiles              14.89      45.96           39.15       24.26                 9.26***
Skills & Practice Tests        14.00      62.00           24.00       10.00                 2.48**
Spamming Tools                 14.42      46.05           39.53       25.11                 3.79***

Region
North America                  22.17      43.00           34.83       12.66                 1.96*
Europe                         28.42      48.42           23.16       -5.26                 2.42*
Australia                      5.00       25.00           70.00       65.00                 2.81**
Asia                           30.00      35.00           35.00       5.00                  .59

Websites' Type of Services
Microtasks                     36.36      38.18           25.45       -10.91                4.06***
Crowdfunding                   20.00      43.16           36.84       16.84                 2.74***
Tasks on Mobile                0.00       25.00           75.00       75.00                 1.72***
Content Generation             0.00       80.00           20.00       20.00                 2.88*
Data Entry                     37.50      40.00           22.50       -15.00                3.14*
High Tech                      16.97      47.27           35.76       18.79                 1.29
Program Developing             60.00      30.00           10.00       -50.00                1.96*
Graphic Design                 27.83      32.17           40.00       12.17                 .13
Translation                    14.29      68.57           17.14       2.85                  .25
Reviews & Testing Products     30.91      45.45           23.64       -7.27                 1.71
Total                          23.68      43.22           33.10       9.42

Table 11. Relationship between the online platforms' desktop sessions (dependent) and several groups of variables (independent). Samples t-tests for comparison of websites' Alexa Rankings, Mobile Sessions, Quality Assurance Mechanisms, Region of Origin & Services, with low and high amount of desktop sessions.
Source: Dataset with results drawn from alexa.com. Author's calculations.
Notes: Statistical significance: *** 1%, ** 5% and * 10%.
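Comparisons of this kind (column 5 of table 11) can be reproduced with a standard two-sample t-test; the sketch below uses simulated 0/1 indicators for a single characteristic, with group shares chosen purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 1 = the platform has the characteristic (e.g. spamming tools), 0 = it does not.
# Shares of 15% vs. 40% are hypothetical, not the thesis percentages.
low_group  = rng.binomial(1, 0.15, size=200)   # low-session platforms
high_group = rng.binomial(1, 0.40, size=200)   # high-session platforms

# Two-sample t-test on the difference in shares between the two groups.
t_stat, p_value = stats.ttest_ind(high_group, low_group)
```

A large |t| (and small p-value) indicates that the characteristic's prevalence differs significantly between low- and high-performance platforms.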


3.4 Empirical Analysis

3.4.1 Overview

Our primary goals in this chapter are to encourage crowdsourcing online platforms to think more broadly about their performance and to stimulate research aimed at enhancing our understanding of the connection between service qualities in general and performance. Towards investigating the impact of the observed groups of variables on websites’ desktop sessions, we derived an econometric model which aims at predicting performance based on websites’ traffic (Alexa rankings), mobile sessions, quality control and assurance mechanisms, region of origin, type of services, social imprint (digital marketing strategies) and traffic acquisition.

In particular, we utilize an OLS linear regression model, which is applied to each annual cross-section separately. This model estimates how, on average, the above groups of variables affect crowdsourcing online platforms’ desktop sessions and therefore their performance. For example, it can address questions like: “Is the usage of mobile in crowdsourcing tasks important in achieving high levels of performance?” or “Does the location of a crowdsourcing website affect its performance?”. In linear regression, each regression coefficient represents the increase or decrease in the response variable produced by a one-unit increase in the predictor variable associated with that coefficient.

Nevertheless, we also wanted a more comprehensive picture of the predictors’ effect on the response variable, in order to investigate whether, and to what extent, their effect exists at low and high values of the dependent variable and how it compares to their effect at the median. For this reason, we also used quantile regression for our estimation. The quantile regression parameter estimates the change in a specified quantile of the response variable produced by a one-unit change in the predictor variable (Bassett et al. 2002). This allows comparing how some quantiles of the desktop sessions may be more affected by certain websites’ characteristics than other quantiles, which is reflected in the change in the size of the regression coefficient (Buchinsky 1998).

Last but not least, our database also consists of repeated observations for several websites’ characteristics, with a reasonable amount of variation of our key X variables within each group over time. For this reason, we also utilize a fixed effects model relying on this within-group variation. Fixed effects or within estimators have a long tradition in the production function literature; in fact, they were introduced to economics in this context (Mundlak, 1961; Hoch, 1962). In this study, by using a FE model, we tried to measure the impact of several time-variant variables on crowdsourcing online platforms’ performance over time (from 2012 to 2016).
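The within estimator can be sketched directly: demean each variable within its entity (here, website) to sweep out time-invariant heterogeneity, then run pooled OLS on the demeaned data. The synthetic panel below deliberately correlates the regressor with the unobserved website effect, so pooled OLS is biased while the within estimator is not:

```python
import numpy as np

# Within (fixed-effects) estimator sketch on a synthetic panel.
rng = np.random.default_rng(2)
n_sites, n_years = 100, 5
site_effect = rng.normal(0.0, 2.0, n_sites)                       # unobserved v_i
x = rng.normal(0.0, 1.0, (n_sites, n_years)) + site_effect[:, None]  # x correlated with v_i
y = 0.55 * x + site_effect[:, None] + rng.normal(0.0, 0.2, (n_sites, n_years))

x_dm = x - x.mean(axis=1, keepdims=True)   # within-entity demeaning
y_dm = y - y.mean(axis=1, keepdims=True)
beta_fe = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
beta_pooled = (x * y).sum() / (x ** 2).sum()   # ignores v_i, hence biased here
print(beta_fe, beta_pooled)  # FE recovers ~0.55; pooled OLS is pulled upward
```

The true slope of 0.55 is hypothetical and unrelated to the thesis estimates; the point is only that demeaning removes the bias induced by the time-invariant effect.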


3.4.2 Empirical Model

According to the abovementioned, our econometric specification with no consideration for repeated observations is of the following general form:

Yit = α + βDtᵀ + γMSit + δARit + εQAit + ζRit + ηSit + θSIit + ιTAit + εit (1)

where Yit is the logarithmic value of the aforementioned desktop sessions (dependent variable) for the i-th online platform in the sample at the t-th time period. Dt is a vector of time fixed effects (2012-2016). MSit is a vector of the mobile sessions on the i-th online platform over time, and ARit is a vector that gives the impact of the i-th online platform’s Alexa ranking on Yit at time t. QAit is a vector which includes the quality assurance mechanisms of the i-th online platform (No quality assurance mechanisms, Reviews & Ratings, Workers’ Profile, Skills & Practice Tests, Spamming Detector Tools), Rit is a vector for the region of the i-th online platform (North America, Europe, Australia, Asia) and Sit is a vector of the services being provided by the i-th online platform (Microtasks, Crowdfunding, Tasks on Mobile, Content Generation, Data Entry, High Tech, Program Developing, Graphic Design, Translation, Reviews & Testing Products). Last but not least, SIit is a vector which includes the effect of the i-th online platform’s social imprint on the desktop sessions (No Social Imprint, Social Networks, Blogs and Forums) and TAit is a vector which includes the Traffic Acquisition (Google, Youtube, Facebook) of the i-th online platform. εit is the random disturbance for the i-th case at the t-th time period, with E(εit) = 0 and E(εit²) = σ²εt.
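To make the encoding of model (1) concrete, the sketch below shows how the categorical regressors can be turned into dummy variables with the reference groups used in the chapter (North America for region, 2012 for year). The column names and values are illustrative, not the actual dataset schema:

```python
import numpy as np
import pandas as pd

# Hypothetical three-row slice of a panel like the one behind model (1).
df = pd.DataFrame({
    "desktop_sessions": [1.2e6, 3.4e5, 8.9e5],
    "region": ["North America", "Europe", "Asia"],
    "year": [2012, 2013, 2014],
})
df["log_desktop"] = np.log(df["desktop_sessions"])   # dependent variable in logs

# One dummy per category, then drop the reference groups explicitly.
dummies = pd.get_dummies(df[["region", "year"]].astype(str))
X = dummies.drop(columns=["region_North America", "year_2012"])
print(X.columns.tolist())
```

Every coefficient on the remaining dummies is then interpreted relative to the dropped reference category, exactly as in the notes to Table 12.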

3.4.3 Estimation Results

If we apply model (1) to our data set, we obtain the OLS linear regression model and the coefficients for the 25th, 50th and 75th quantiles shown in Table 12.

OLS Regression

The results suggest that there are marked differences across the distribution of the desktop sessions of crowdsourcing platforms with respect to their characteristics. In particular, the first column in table 12 presents the coefficients for the OLS linear regression model. The results show that several independent variables are statistically significant. For example, the more mobile sessions an online platform has, the higher its overall performance (at the 1% level of significance). Moreover, all levels of websites’ traffic ranking are statistically significantly related to performance. More specifically, crowdsourcing websites in below-average or very low ranked positions achieve low levels of performance compared with websites in high-ranked positions of traffic (in all cases the right-hand variable is negatively related to the left-hand variable at the 1% level of significance).

Among the quality control mechanisms used by the crowdsourcing platforms, we find that “workers’ profiles”, “skill and practice tests” and “spamming tools” have a strong effect on the desktop sessions of a crowdsourcing website at the 1% level of significance. For example, online platforms with embedded job profiles for their workers and spamming tools are approximately 55% and 45% more productive, respectively, than websites with no quality control and assurance mechanisms. The data would seem to suggest that offering requesters the opportunity to conduct skills and practice tests in order to select their hired workers is the most effective quality control mechanism, because it has the greatest impact on the overall crowdsourcing output (r = 1.112 & p-value = .000).
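A small check on this reading of the coefficients: with a log dependent variable, a dummy coefficient b is only an approximate percentage effect; the exact semi-elasticity is exp(b) - 1 (Halvorsen & Palmquist, 1980). The coefficients below are the OLS estimates from Table 12:

```python
import math

# Dummy coefficients from Table 12 (OLS column); the "approx" column is the
# naive 100*b reading used in the text, the "exact" column is exp(b)-1.
for name, b in [("Workers' Profiles", 0.547),
                ("Spamming Detector Tools", 0.453),
                ("Skills & Practice Tests", 1.112)]:
    print(f"{name}: approx {100 * b:.1f}%, exact {100 * (math.exp(b) - 1):.1f}%")
```

For large coefficients the two readings diverge noticeably (for b = 1.112 the exact effect is roughly 204%, not 111%), so the percentages quoted in the text should be taken as approximations.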

Regarding the effect of a website’s region of origin (i.e. the region in which a crowdsourcing platform is hosted), our results revealed that only crowdsourcing platforms located in Australia are statistically significantly more productive (by 40.7%) than websites in North America (at the 10% level of significance). It is notable, though, that crowdsourcing websites in Europe and Asia also have more desktop sessions than websites in North America (5.4% and 33.9% respectively). A possible interpretation is that North America hosts crowdsourcing projects with more serious and professional characteristics, meaning that their total number may be smaller than in Asia, Europe and Australia (where microtasks are more common), but with more financial transactions (especially crowdfunding projects) (Weinstein, 2013).

Furthermore, the findings of the OLS regression also showed a strong correlation between a website’s types of crowdsourcing service and its performance. More particularly, taking websites with microtasks as the reference group, table 12 shows that websites with mobile tasks, content generation tasks, data entry jobs and reviews & testing products services are more productive at the 1% level of significance, websites with crowdfunding projects and high tech tasks are more productive at the 5% level of significance, and websites with program developing tasks are more productive at the 10% level of significance. These results indicate that a website’s provided crowdsourcing service has a strong effect on its performance, and that further research into this emerging set of new business models, focusing on involving the crowd in activities such as concept development, problem solving or production, is needed (Geerts, 2009).

In addition, as can be seen from table 12, regarding the “websites’ online imprint” group of variables, our study suggests that crowdsourcing websites with profiles in popular and well-known social networks such as Facebook, Twitter and YouTube are approximately 36% more productive than websites with no online imprint in such environments (at the 10% level of significance), while there seems to be no effect of a website’s respective profiles in blogs and forums on the desktop sessions.


Last but not least, our data analysis reveals a strong correlation between the desktop sessions of a crowdsourcing website and its traffic acquisition strategies (i.e. the source of a website’s incoming traffic). For example, for each 1% increase in incoming traffic from Facebook, crowdsourcing websites achieve on average 15.5% higher performance (at the 1% level of significance). Similarly, crowdsourcing websites’ desktop sessions increase with incoming traffic from YouTube and Google at the 1% and 5% levels of significance respectively. Our results confirm websites’ need to leverage traffic acquisition strategies to connect with the right audiences, namely people who are most likely to become paying customers (in our case requesters) (Shuen, 2008).

Quantile Regression

For further analysis, quantile regression makes it possible to statistically examine the extent to which the crowdsourcing websites’ characteristics (explanatory variables) are valued differently across the distribution of their desktop sessions (dependent variable).

The remaining columns in table 12 (columns 2, 3 & 4) present the regression coefficients for the 25th, 50th and 75th quantiles respectively. The results for the 25th quantile concern websites with low levels of performance, while the results for the 75th quantile concern websites with high levels of performance. Recall that the quantile regressions were estimated to determine whether the impact of the independent variables varied for websites at different points in the distribution of their desktop sessions. The results of the quantile regressions showed that the position of a crowdsourcing website in the desktop sessions’ distribution did, in fact, significantly affect the impact that various independent variables had on its level of performance.

More specifically, the quantile analysis shows that the effect of the average number of a website’s mobile sessions is positive and significantly different from zero throughout the conditional distribution of desktop sessions. Hence, the quantile analysis broadly confirms the OLS result of a strong, positive effect of mobile sessions, which holds throughout the conditional distribution. Similarly, regarding a website’s traffic values (Alexa ranking), the effect on performance remains negative and strong (at the 1% level of significance) across the 25th, 50th and 75th quantiles.

Concerning the “Websites’ Quality Assurance” group of variables, the quantile results reveal that, in websites with high levels of performance (75th quantile), all quality mechanisms have a strong impact on their desktop sessions, while in websites with low levels of performance (25th quantile), only “Workers’ Profiles” and “Skills & Practice Tests” have a significant positive effect on desktop sessions (at the 1% level of significance). A possible explanation is that an efficient strategy for a crowdsourcing website to achieve high levels of performance is to combine all the existing mechanisms of quality control and assurance in an appropriate way.

Furthermore, the quantile regressions show that, among websites with low levels of performance, only websites from Australia have a significantly increased number of desktop sessions (at the 1% level of significance) compared to those located in North America, while the equivalent effect of websites located in Asia appears from the 50th quantile onwards (r = .243 & p-value = .037 and r = .478 & p-value = .097).

Additionally, it is noticeable that, among crowdsourcing websites with high levels of desktop sessions, high tech and program developing services have a great impact on performance (at the 1% level of significance), while at low levels crowdfunding, tasks on mobile, content generation and reviews & testing products services have the greatest impact on performance, taking websites with microtasks services as the reference group. These results confirm the significant role of a website’s provided service in its level of performance.

The 50th quantile regression also points to the nuanced positive effect of a website’s imprint in social networks on its desktop sessions (r = .348 & p-value = .057). Lastly, in line with prior results, we found that, like the OLS, the quantile results show significant effects of a website’s traffic acquisition across the performance distribution. In particular, among online platforms with high levels of performance, the effect of incoming traffic from YouTube is positive and very strong (r = 2.047 & p-value = .004), while among online platforms with low levels of desktop sessions we also find a strong effect of incoming traffic from Facebook users (r = .325 & p-value = .002). The effect of incoming traffic from Google appears at the 50th quantile and is weak (r = .015 & p-value = .077). Our results possibly indicate that crowdsourcing websites prefer the quick, cheap and easy way of YouTube videos and Facebook advertisements for attracting and engaging potential requesters, rather than adopting more efficient but costly digital marketing strategies and tools for better search engine optimization (e.g. Google AdWords).

                                     OLS                Quantile Regression
                                     Mean               0.25               0.50               0.75
Independent variables                [1]                [2]                [3]                [4]
Constant                             15.082*** (.380)   14.754*** (.142)   15.232*** (.567)   16.568*** (.403)

Desktop sessions in time (years)
  2013                               .053 (.159)        .106 (.142)        .108 (.125)        .013 (.111)
  2014                               .083 (.182)        .106 (.141)        .060 (.149)        -.067 (.134)
  2015                               .275 (.210)        .501 (.167)        .377 (.173)        .243 (.161)
  2016                               .143 (.238)        .301 (.192)        .288 (.179)        .177 (.186)

Mobile Sessions                      2.201*** (.403)    1.384*** (.291)    1.610*** (.333)    2.001*** (.350)

Alexa ranking
  Above Average                      -1.883*** (.170)   -1.744*** (.044)   -1.780*** (.132)   -2.282*** (.171)
  Average                            -3.625*** (.172)   -3.470*** (.166)   -3.477*** (.137)   -3.913*** (.176)
  Below Average                      -4.731*** (.172)   -4.900*** (.180)   -4.680*** (.138)   -4.995*** (.180)
  Very Low                           -5.876*** (.202)   -6.045*** (.175)   -6.078*** (.158)   -6.363*** (.217)

Websites’ Quality Assurance
  Reviews & Ratings                  .265 (.302)        -.016 (.117)       .018 (.096)        .152** (.070)
  Workers’ Profiles                  .547*** (.136)     .309*** (.106)     .233** (.106)      .321*** (.080)
  Skills & Practice Tests            1.112*** (.334)    .510*** (.119)     .614*** (.112)     .413*** (.101)
  Spamming Detector Tools            .453*** (.140)     .068 (.110)        .006 (.080)        .087*** (.082)

Websites’ Region
  Europe                             .054 (.096)        -.018 (.097)       .086 (.102)        .005 (.069)
  Australia                          .407* (.217)       .447*** (.141)     .249* (.165)       .382*** (.121)
  Asia                               .339 (.215)        .309 (.371)        .243** (.079)      .478* (.269)

Websites’ Type of Services/Tasks
  Crowdfunding Tasks                 .304** (.159)      .414*** (.140)     .386** (.160)      .038 (.146)
  Tasks on Mobile Applications       1.394*** (.276)    1.643*** (.252)    1.288*** (.195)    .604** (.309)
  Content Generation Tasks           .531*** (.197)     .732*** (.221)     .442** (.193)      .384** (.202)
  Data Entry Tasks                   .952*** (.344)     .084 (.284)        .080 (.240)        .688 (.702)
  High Tech Tasks                    .700** (.166)      .674** (.126)      .734*** (.187)     .533*** (.144)
  Program Developing Tasks           .722* (.383)       .195 (.308)        -.037 (.276)       1.241*** (.292)
  Graphic Design Tasks               -.098 (.179)       .008 (.164)        .002 (.187)        .077 (.138)
  Translation Tasks                  .285 (.190)        .613** (.272)      .371** (.168)      .071 (.160)
  Reviews & Testing Products Tasks   .860*** (.247)     .586*** (.197)     .618*** (.208)     .492 (.416)

Websites’ Online Imprint
  Social Networks                    .361* (.240)       .095 (.130)        .348* (.198)       .114 (.326)
  Blogs and Forums                   -.055 (.183)       -.066 (.120)       -.024 (.167)       -.162 (.314)

Traffic Acquisition
  Google                             .022** (.009)      .029 (.019)        .015* (.007)       .017 (.028)
  Youtube                            2.424*** (.879)    2.332*** (.867)    1.360** (.697)     2.047*** (.670)
  Facebook                           .155*** (.059)     .325*** (.053)     .263*** (.058)     .071 (.073)

R-squared                            .762               .575               .570               .567
Observations                         738                738                738                738

Table 12. The determinants of the desktop sessions being made in crowdsourcing platforms
Source: Dataset with results drawn from Alexa.com. Authors’ calculations.
Notes: Dependent variable: Websites’ Desktop Sessions (in logarithmic values). In parentheses, heteroskedasticity-corrected standard errors. For the Websites’ Quality Assurance categorical variable the reference group is websites with no quality assurance mechanisms; for the Websites’ Region categorical variable the reference group is websites located in North America; for the Websites’ Type of Services/Tasks categorical variable the reference group is websites providing Microtasks/Simple Tasks Services; for the Websites’ Online Imprint categorical variable the reference group is websites with no online imprint; for Desktop sessions in time the reference year is 2012; and for Alexa Ranking the reference group is websites with very high ranking (much traffic). Statistical significance: *** 1%, ** 5% and * 10%.

Panel Data Analysis (Fixed Effects Model)

Returning to the general panel model in equation (1), if we take into consideration the repeated observations, assume that the coefficients of the time-invariant variables do not change over time, and treat the unobserved heterogeneity (εit = vi + uit), the general model becomes:

ΔYit = c + βΔDtᵀ + γARit + δΔMSit + vi + uit (2)

Equation (2) is the usual fixed effects model (FEM), which does not explicitly include the time-invariant observed variables and their coefficients. If we apply model (2) to our data set, we obtain the Fixed Effects model shown in table 13.
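One way to see how vi is swept out in practice is the least-squares dummy variable (LSDV) form of the FE estimator: adding one intercept dummy per website is numerically equivalent to within-demeaning. A sketch on synthetic data (the slope 0.556 is chosen only to echo the mobile-sessions coefficient of Table 13):

```python
import numpy as np

# LSDV sketch: one dummy per entity reproduces the within (FE) estimate.
rng = np.random.default_rng(3)
n_sites, n_years = 50, 5
v = rng.normal(0.0, 1.0, n_sites)                       # unobserved website effect
ms = rng.normal(0.0, 1.0, (n_sites, n_years))           # regressor (e.g. mobile sessions)
y = 0.556 * ms + v[:, None] + rng.normal(0.0, 0.1, (n_sites, n_years))

# Stack observations site-by-site and append one intercept dummy per site.
D = np.kron(np.eye(n_sites), np.ones((n_years, 1)))     # entity dummies
X = np.column_stack([ms.ravel(), D])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)

# The analytic within estimator, for comparison.
ms_dm = ms - ms.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)
beta_within = (ms_dm * y_dm).sum() / (ms_dm ** 2).sum()
print(beta[0], beta_within)  # the two estimates coincide
```

This equivalence (the Frisch-Waugh-Lovell result) is why the time-invariant dummies can be dropped from equation (2) without changing the estimated time-variant coefficients.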


Fixed Effects Model
Factors                              Coef.              [95% Conf. Interval]
Constant                             8.207*** (.439)    7.340     9.074

Desktop sessions in time (years)
  2013                               -.550*** (.094)    -.735     -.365
  2014                               -.681*** (.113)    -.903     -.459
  2015                               -.655*** (.130)    -.911     -.399
  2016                               -.807*** (.133)    -1.070    -.543

Alexa ranking
  Above Average                      -.538*** (.103)    -.741     -.334
  Average                            -1.058*** (.145)   -1.343    -.772
  Below Average                      -1.355*** (.157)   -1.664    -1.045
  Very Low                           -1.759*** (.197)   -2.148    -1.371

Mobile Sessions                      .556*** (.042)     .473      .639

R-squared                            .945
Observations                         659

Table 13. Panel Regression for desktop sessions in crowdsourcing platforms with Fixed Effects
Source: Dataset with results drawn from Alexa.com. Author’s calculations.
Notes: Dependent variable: Websites’ Desktop Sessions (in logarithmic values). In parentheses, heteroskedasticity-corrected standard errors. For Desktop sessions in time the reference year is 2012 and for Alexa Ranking the reference group is websites with very high ranking (much traffic). Statistical significance: *** 1%, ** 5% and * 10%.

In contrast to prior results, we found that, unlike the OLS, the FE model shows significant effects of time on the average performance of crowdsourcing online platforms. More specifically, taking the average desktop sessions of crowdsourcing websites in 2012 as the reference group, the FE model shows that, as time passes, the average websites’ performance is reduced with an increasing trend. For example, from 2012 to 2013, the websites lost on average approximately 55% of their average desktop sessions, and from 2012 to 2014 approximately 68%. It is certainly worrying that, within five years (2012-2016), crowdsourcing online platforms lost approximately 80% of their working activity, on average.


Furthermore, it is notable that, on average, a 1% increase in a website’s mobile sessions results in a 55.6% increase in its desktop sessions, across all time periods (at the 1% level of significance). Last but not least, the FE model confirms the OLS result that, over time, crowdsourcing online platforms in below-average or very low ranked positions achieve low levels of performance compared with websites in high-ranked positions of traffic (in all cases the right-hand variable is negatively related to the left-hand variable at the 1% level of significance).

Discussion

A crowdsourcing website’s characteristics can have important effects on its overall performance as a virtual market. In this article, we investigated the performance determinants of several crowdsourcing environments. We focused on analyzing the effects of specific features which an online platform provides in order to configure its online crowdsourcing market, using the most reliable dataset given by Alexa and utilizing OLS and quantile regressions for time-invariant effects and a Fixed Effects (FE) model for time-variant ones. One major finding of our research is that, although the traffic in such environments increases, the desktop sessions move in the opposite direction over time. Furthermore, we were able to provide indications regarding the effects of a website’s type of provided service, the quality control mechanisms used and the adopted digital marketing strategies on its overall performance as an online business entity. Hence, the analysis indicates that an effective way for a website to increase its performance is to enrich the profiles of its workers with information about their past job performance and to give the requesters of a job the opportunity to conduct skill and practice tests among hired workers. Moreover, our analysis revealed that tools for spam detection play a significant role in crowdsourcing platforms’ effort to increase their performance, because potential requesters are more willing to use platforms that offer them. Last but not least, this research confirms that, since these websites conduct their activities online, they ought to adopt several principles of digital marketing, because digital marketing strategies have a significant and strong effect on their overall sessions (desktop and mobile).

3.4.4 Conclusions

Over the years, several crowdsourcing sites have emerged and have evolved to offer a diverse set of services to end-users. Each of these sites exhibits a range of characteristics that aim to facilitate the user’s tasks. How users respond to the characteristics of crowdsourcing websites is at the heart of this research. Towards this end, we investigated how the performance of websites, as measured in sessions, is correlated with their website characteristics. We estimated several models (including OLS, quantile regression and a Fixed Effects model) in order to identify several key issues in the performance of crowdsourcing platforms, both at a particular point in time and over time.


The focal point of the study reported in this chapter was the need to understand deeply how several crowdsourcing characteristics of an online platform affect its performance, and to draw conclusions about the real issues and challenges which lie in its effort to increase its overall sessions and turnover.

Overall, the study shows that a crowdsourcing website attracts more attention when it offers a framework for certain jobs which can be easily standardized, such as tasks on mobile or graphic design tasks (Beck, 1999). Moreover, our study confirmed that quality management practices are strong predictors of a web-enabled firm’s performance, as in traditional firms, making it necessary to further investigate the quality control techniques that can be used by crowdsourcing websites (Nair, 2006). Finally, as we expected, the geographical location of a web-enabled crowdsourcing platform does not play an important role in its overall performance, unlike in studies of the traditional labor market (Folta et al. 2006). Such observations make sense, considering the basic principles and advantages of the use of the Internet by firms in general. Nowadays, web-enabled innovative companies such as crowdsourcing online platforms can easily use the Internet to transcend the limits of size and location and compete in the global electronic marketplace (Cronin, 1997).

The research also revealed the significant role of mobile devices and smartphones as an alternative way of participating in outsourcing jobs on the Internet. Given the rapid increase of such devices in the context of crowdsourcing, future research should undertake a deeper investigation of the role of such devices in crowdsourcing tasks (Miao et al. 2016). In particular, it should address questions related to whether their use influences the type of tasks preferred and, more importantly, how they affect the performance of workers, especially in tasks that are location dependent. Finally, by understanding more deeply the factors affecting its performance, and by leveraging the technological capabilities of computing systems which have nowadays become more intimately embedded in physical and social contexts, crowdsourcing can easily evolve and become an increasingly popular and attractive form of online labor, enabling a wide range of applications (Goncalves et al. 2017).


Chapter 4 Personality Traits and Performance in Online Labor Markets

In this chapter the microscopic approach to the quantitative study of performance will be presented. This includes an investigation and analysis of the impact of an individual’s cognitive and non-cognitive skills on the quality of a task-specific outcome, by means of an experiment conducted on a popular crowdsourcing platform. Using linear regression models and controlling for a wide set of individual characteristics and country-specific indicators, we found that the performance of workers depends on cognitive skills, personality traits and work effort. The established relationships between the discussed concepts will then be visualized in a research conceptual model, presented in this section.

4.1 Introduction and Theoretical Background

Recall that the expansion of Web 2.0 has affected the functioning of the traditional labor market and contributed to the creation of the online labor market (Malone & Laubacher, 1998; Autor, 2001; Horton, 2010). The implications of the online labor market are numerous and relate to income distribution, firm-specific economic outcomes and geography (Agrawal et al. 2015). Crowdsourcing, as one of the most widespread and emerging tools of this changing digital environment, relates to both sides of the labor market, since online marketplaces allow requesters (i.e., individuals and/or organizations) to offer online paid opportunities to the motivated crowd of individuals (i.e., micro-workers) who are interested in and capable of providing solutions to a wide range of human related tasks (Kittur et al., 2013). In other words, crowdsourcing is a mechanism that optimally reallocates resources through labor matching and productivity outcomes (Pallais, 2014; Pallais & Sands, 2016).

Economists have long made serious attempts to identify the determinants of individual labor market behavior by analyzing the supply side of the typical labor market (Goldin & Katz, 2008) and, more recently, by adopting a task-specific framework which effectively uncovers workers’ unobserved attributes in exploring individual productivity (Autor & Handel, 2013). Evidence from economics and psychology highlights the role of cognitive skills and personality traits in explaining an individual’s task-specific performance in several socio-economic outcomes (Borghans et al. 2008). Although online labor markets are not identical to traditional ones (Horton, 2010), there is growing evidence of a quasi-employment nature in the relationship between requesters and micro-workers during their online interaction (Chen and Horton, 2016). Moreover, similarly to the traditional labor market, online

labor markets are not homogeneous across the crowd and over time, since the profiles of micro-workers relate to demographic, human capital and income-related factors (Ipeirotis, 2010; Ross et al. 2010; Farell et al. 2017). The present study attempts to investigate workers’ task performance in a global, highly heterogeneous environment by focusing on the role of cognitive skills and personality traits.

Task performance in crowdsourcing environments is usually measured by the term “quality of results”, which refers to the subjective judgment of whether the submitted work meets the requesters’ criteria. Upon task completion, the requester disburses the monetary reward according to the fulfillment of the predetermined requirements regarding the quality of results (Felstiner, 2011). Computer and decision-making scientists (Allahbakhsh et al. 2013; Kokkodis & Ipeirotis, 2016) point out that the overall outcome quality depends on task design (user interface, task definition, type of task, compensation policy) and workers’ attributes (abilities, expertise and reputation). Regarding task design, two types can be identified. The first refers to highly skill-intensive tasks (i.e., computer programming or software development) and the second to less skill-intensive tasks (data entry, internet research, or administrative support).

Methodologically, the empirical analysis of online labor markets requires microdata regarding task performance drawn from relevant field experiments, where task performance is usually measured by variables such as counts and/or proportions of image sets sorted (Mason & Watts, 2009), correct answers in labeling photographic images (Chandler & Horton, 2011) or the number of identified tumors in an image (Chandler & Kapelner, 2013). The role of workers’ attributes in analyzing an individual’s task performance in crowdsourcing markets has only recently gained attention from economists (Mason & Watts, 2009; Chandler & Horton, 2011; Chandler & Kapelner, 2013; Pallais, 2014; Horton & Zeckhauser, 2016; Pallais & Sands, 2016). In addition, preliminary evidence from computer scientists (Downs et al. 2010; Kazai et al. 2011; 2012; Mourelatos & Tzagarakis 2016) shows that the quality of results in crowdsourcing tasks varies according to demographics (gender, age and origin), human capital characteristics (education, IT competence and occupation) and personality traits (Big Five personality traits, BF). As Kokkodis & Ipeirotis (2016) highlight, workers' value in an online labor market is highly heterogeneous and is composed of a set of observable (e.g., skills, education, work history, certifications) and latent characteristics (e.g., expertise and ability). Although a number of shortcomings arise in measuring and analyzing task performance in online experiments (Horton et al. 2011; Chen & Konstan, 2015), recent experimental evidence suggests that micro-workers, compared to participants in offline experiments, are equally honest, with similar preferences and effort levels, even in low pay environments (Farell et al. 2017). However, none of these studies investigate workers’ performance within a single-task approach applied to a global and highly heterogeneous environment where the focus is on the role of cognitive skills and personality traits.


For analytical purposes, we conduct an online experiment using the micro-workers.com platform, where the performance of micro-workers is based on their correct answers after listening to a music sample with lyrics. We also collect information on cognitive skills (education, computer competency and English proficiency), personality traits (John & Srivastava, 1999) and several socio-demographic characteristics (gender, age and country of origin). The impact of personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) is well established in economic models of individual behavior regarding task performance, using data drawn from surveys (Cunha et al. 2010) or controlled laboratory experiments (Cubel et al. 2016). However, relevant evidence in online labor markets is scarce. To the best of our knowledge, our study is the first attempt to use an online experiment to directly test the relationship between personality traits and individual productivity in crowdsourcing activities. This exercise allows us to explore the mechanisms behind the relationship between personality traits and task performance, and to understand further how personality traits can explain the level of quality of results and its distribution among micro-workers.

Using linear regression models with a wide set of explanatory variables and augmented specifications allowing for a differentiated impact of personality traits on micro-workers’ performance, we found that neuroticism exerts a statistically significant and robust impact. That is, micro-workers with higher levels of neuroticism provide a smaller number of correct answers in our online experiment (on average, 2 out of the 31 correct answers). This effect is confirmed when the analysis is performed on different sub-samples of micro-workers (developed and less-developed countries). Furthermore, we found that in less developed countries additional facets of personality seem to affect performance (i.e., conscientiousness and extraversion). Micro-workers with higher levels of conscientiousness provide better results, while those with higher levels of extraversion report lower levels of quality of work. The overall finding regarding neuroticism is confirmed when we apply a two-step robustness analysis.

Lastly, allowing the impact of personality traits to vary according to gender and education, we found that neuroticism has a negative impact especially on male performance, that males with non-tertiary education perform better in online tasks for higher levels of conscientiousness, and that females with higher levels of openness provide better quality of work.

4.2 Experimental Framework

4.2.1 Design

For the purposes of our analysis we design an online experiment using the micro-workers.com crowdsourcing platform and we collect data from 250 micro-workers.


The task was programmed using the music platform Soundcloud. The task belongs to a family of performance-based tasks and traces out the ability of workers to provide results that meet specific requirements in order to get paid, replicating the context of a real-world workplace. More specifically, a micro-workers.com “open call” campaign was launched for 15 hours, during which interested workers could participate. Each participant was paid $0.82 and the overall cost of the project amounted to $205 (N = 250). The offered wage is based on the minimum job completion time and equals the average wage of the crowdsourcing platform, to avoid self-selection biases.

Workers had access to a link to a music sample with lyrics (56 words in English) and provided in a text box as many correct words as they could. Once an answer is submitted it cannot be changed, and thus we are able to directly measure individual productivity. The task was preceded by an online survey on demographic characteristics (i.e., age, gender and country of birth), cognitive skills (i.e., education level, computer and English competency) and personality traits (i.e., extraversion, agreeableness, conscientiousness, neuroticism and openness). In addition, we collected information on work effort (i.e., number of repeats and time of task completion) using the Soundcloud platform3. Furthermore, we used a web crawler4 to obtain more information about workers’ experience in online labor markets (i.e., registration year, number of tasks completed and total earnings from previous online jobs on this platform).

Two issues need to be clarified. First, to collect information on personality traits we needed a questionnaire short and quick enough to keep participants from getting bored or spending too much time on their answers. Thus, we administered the well-known 44-item Big Five Inventory (John et al. 1991; 2008), a mid-sized questionnaire that ensures an accurate measure of each personality trait using responses on a five-point Likert scale, i.e., from 1: Disagree to 5: Agree. The Big Five Inventory is designed through factor analysis so that each trait is orthogonal to the rest (McCrae and Costa, 1999). Second, to capture a worker’s involvement in crowdsourcing activities we define experience by subtracting the year of registration on micro-workers.com from the year of the experiment (2015).

In addition, we define the general success rate by dividing the number of completed tasks by the total number of tasks that each worker has ever attempted on micro-workers.com. Furthermore, we define a micro-worker’s adjusted earnings by dividing the sum of annual earnings from all previously completed online jobs by the country-specific average GDP per capita (in PPP) over the years that each worker has been active on the crowdsourcing platform.

3 We obtained the effort variables (i.e., repeated action and completion time) using the metrics of our experimental framework and the music-based platform soundcloud.com. We calculated completion time as the number of minutes from the moment a worker signs in to the task until the final answer is submitted. Concerning a subject’s repeated action, the Soundcloud platform in its professional mode provides statistics on who (by ID) played the music sample and how many times; we then only had to match the Soundcloud IDs with those in our experimental task. 4 Crawlers are URL discovery tools with which one can easily obtain data from the web programmatically (Pant et al. 2004).
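The three derived measures (experience, general success rate and adjusted earnings) reduce to simple arithmetic on the platform metadata. The sketch below illustrates the calculations on a hypothetical worker record; the field names and values are illustrative, not micro-workers.com’s actual API.

```python
# Illustrative computation of the three derived worker measures.
# Field names and values are hypothetical; the experiment year is 2015.

EXPERIMENT_YEAR = 2015

def derive_measures(worker, avg_gdp_per_capita_ppp):
    """Return (experience, success_rate, adjusted_earnings) for one worker.

    avg_gdp_per_capita_ppp: country-specific average GDP per capita (PPP)
    over the worker's active years on the platform.
    """
    # Experience: years since registration on micro-workers.com.
    experience = EXPERIMENT_YEAR - worker["registration_year"]
    # General success rate: completed tasks over all tasks ever attempted.
    success_rate = worker["tasks_completed"] / worker["tasks_attempted"]
    # Adjusted earnings: total platform earnings relative to home-country
    # average GDP per capita (PPP).
    adjusted_earnings = worker["total_earnings"] / avg_gdp_per_capita_ppp
    return experience, success_rate, adjusted_earnings

worker = {"registration_year": 2012, "tasks_completed": 197,
          "tasks_attempted": 200, "total_earnings": 410.0}
print(derive_measures(worker, avg_gdp_per_capita_ppp=4100.0))
# → (3, 0.985, 0.1)
```

The same arithmetic applies per worker across the whole sample; only the GDP denominator varies by country and active years.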

4.2.2 Hypotheses

In this section we describe the basic hypotheses concerning the relationship between performance and personality traits in our experimental crowdsourcing task. We expect that the effects of personality on performance in the present study may differ from the evidence found in previous offline studies that use survey data.

More specifically, extraversion refers to an individual’s concentration of interest on an external object, indicating that those with higher values of this trait enjoy activities that involve positive feelings and experiences (Clark and Watson, 1991). In conventional labor markets extraversion relates positively to performance in high-autonomy (Barrick and Mount, 1991) and creative tasks (Rothman and Coetzer, 2003). Since our experiment was conducted on the Soundcloud platform, a working environment with music, photos and colored graphics, we expect that extraverted workers will be assertive and energetic in completing the task, resulting in a high-quality overall performance.

Agreeableness is a personality trait that reflects behavioral outcomes which define a person as kind, sympathetic, cooperative, warm and considerate. People who score high on this dimension are empathetic and altruistic, while a low agreeableness score relates to selfish behavior and a lack of empathy (Graziano and Eisenberg, 1997). Evidence suggests that agreeableness is positively correlated with performance in tasks requiring cooperative interchange with others (Witt et al., 2002). However, since our task does not involve interpersonal interaction, we expect that agreeableness should not play any significant role in micro-workers’ performance.

Neuroticism refers to low emotional stability and predictability and reflects negative aspects of personality such as sadness, anxiety and insecurity. Empirical evidence from typical labor markets suggests that an individual’s performance is positively related to emotional stability (Hörmann and Maschke, 1996; Salgado, 1997). Hence, we expect that workers with high levels of neuroticism will obtain low scores in our online experiment.

Conscientiousness refers to an individual’s ability to be careful, responsible and efficient. Empirical studies show a positive correlation with labor market outcomes, i.e., wages and promotions (Tett et al., 1991), and, more generally, that cognitive ability and conscientiousness help explain the process through which human capital is translated into performance effectiveness (Ng and Feldman, 2010).


Thus, we expect that micro-workers with higher values in conscientiousness will exhibit higher performance.

Lastly, openness refers to an individual’s active imagination, aesthetic sensitivity and attentiveness to feelings, preference for variety, intellectual curiosity and independence of judgment. Higher values of openness seem to correlate positively with task performance, especially in creative and artistic jobs. However, empirical evidence suggests that, in tasks with piece-rate payments, higher values in openness are associated with lower performance (Muller and Schwieren, 2012). In our case, because of the nature of our task (i.e., it requires no creativity, artistic skill or aesthetic sensitivity), we expect high values of openness to be associated with low performance, although the effect may not be statistically significant.

A summary of our testable hypotheses is as follows:

✓ Hypothesis 1. Extraversion is positively associated with online performance.
✓ Hypothesis 2. Neuroticism is negatively associated with online performance.
✓ Hypothesis 3. Conscientiousness is positively associated with online performance.
✓ Hypothesis 4. Openness has a negative relationship with online performance.
✓ Hypothesis 5. Agreeableness is positively associated with online performance.

4.3 Data Analysis

4.3.1 Summary statistics

Table 14 presents summary statistics for both dependent and independent variables. More specifically, the average number of correct answers is approximately 33.86 with a standard deviation of 9.34. Regarding personality traits, we use normalized scales obtained by grouping the 44-item Big Five Inventory. According to our data, the mean score of (a) Openness is 3.684, suggesting a high tendency toward creativity and active imagination; (b) Conscientiousness is 3.330, signifying little thoroughness in our crowdsourcing task; (c) Extraversion is 3.324, exhibiting an average disposition on this trait; (d) Agreeableness is 3.319, indicating that our workers seem to be rather empathetic and altruistic; and (e) Neuroticism is 3.118, suggesting that workers tend to be relaxed. Regarding cognitive skills, we observe that most micro-workers have completed tertiary education (67.6%), have advanced computer skills (81.6%) and are competent in the English language (81.2%).

In addition, most of our workers are male (67%) and young (30 years of age on average). Concerning the effort variables, we observe that the average completion time is 5.82 minutes, most of our micro-workers (82%) repeated the task fewer than four times, indicating limited cheating, half of the workforce has successfully completed the tasks undertaken during their participation on the micro-workers platform, and their average experience with the platform is 2.88 years. In addition, the indicator for micro-workers in the upper part of the GDP-adjusted relative income distribution (relative earnings at or above the 75th percentile) has a mean of 0.26. Lastly, 21.6% of the workers in the sample (54 of 250) come from low performing countries (i.e., Sri Lanka, India, Bangladesh, Philippines, Nepal, Algeria, Malaysia and Tunisia).

                               Obs.   Mean    Min.   Max.   S.D.
Quality of results
Total Correct Answers          250    33.86   10     52     9.344
Non-cognitive skills (normalized)
Openness                       250    3.684   2.20   5.00   0.532
Conscientiousness              250    3.330   1.56   4.44   0.434
Extraversion                   250    3.324   1.25   4.88   0.455
Agreeableness                  250    3.319   1.78   4.44   0.412
Neuroticism                    250    3.118   1.00   4.75   0.463
Cognitive skills
Tertiary education             250    0.676   0      1      0.469
Computer competence            250    0.816   0      1      0.388
English competence             250    0.812   0      1      0.391
Demographics
Age                            250    30.1    18     62     8.282
Female                         250    0.332   0      1      0.472
Effort variables
Completion time (in minutes)   250    5.820   3      10     1.675
Repeated action                250    0.820   0      1      0.385
Success Rate                   250    0.480   0      1      0.501
Experience (in years)          250    2.876   1      6      1.999
Relative Income Position       250    0.256   0      1      0.437
Country grouping
Low performers                 250    0.216   0      1      0.481

Table 14. Summary Statistics
Source: Dataset with results drawn from Micro-workers.com. Author’s calculations.
Notes: Age and crowdsourcing experience are measured in years. Task completion time is measured in minutes. The high repeated action variable contains workers with more than three repetitions of the task before the final submission. The high general success rate variable contains workers with a general success rate greater than or equal to its median (>=.985), and the high relative income position variable workers with a relative income position greater than or equal to its 3rd quantile (>=.114). The low performers variable contains workers from Sri Lanka, India, Bangladesh, Philippines, Nepal, Algeria, Malaysia and Tunisia.

To better understand the relationship between quality of results and personality traits, Figures 11 and 12 show the distribution of each personality trait and its relation to performance using a non-parametric local polynomial smoothing regression. We observe that performance (score) in online labor markets relates positively to openness, extraversion and agreeableness, negatively to neuroticism, while no relationship is found for conscientiousness. In addition, there is no evidence of extreme values.
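A local polynomial smoother of the kind used for these figures fits a weighted low-degree regression around each evaluation point. The following is a minimal degree-one (local linear) sketch with a tricube kernel on toy data; it is an illustration of the method only, as the actual figures were presumably produced with a statistical package.

```python
def local_linear_smooth(x, y, x0, bandwidth):
    """Local polynomial (degree-1) smoother at point x0, tricube kernel.

    A minimal sketch of non-parametric local polynomial smoothing;
    real analyses would rely on a statistical package.
    """
    # Tricube kernel weights centred at x0; points outside the bandwidth get 0.
    w = [max(0.0, 1 - (abs(xi - x0) / bandwidth) ** 3) ** 3 for xi in x]
    # Weighted least squares for intercept and slope: solve the 2x2 system.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx * swx
    a = (swxx * swy - swx * swxy) / det   # local intercept
    b = (sw * swxy - swx * swy) / det     # local slope
    return a + b * x0                     # fitted value at x0

# A smoother fit to exactly linear data recovers the line at any point.
xs = [i / 10 for i in range(50)]
ys = [2.0 + 0.5 * xi for xi in xs]
print(round(local_linear_smooth(xs, ys, x0=2.0, bandwidth=1.5), 6))
# → 3.0
```

Evaluating the smoother over a grid of trait values traces out the curves shown in Figure 12.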


Figure 11. Distribution of each personality trait variable

Figure 12. Relationship between personality traits and performance (score)

4.3.2 Sample characteristics

Before we proceed with the empirical modeling of the relationship between performance and personality traits, we explore whether personality traits differ (t-test) across several sources of individual heterogeneity (i.e., gender, age, cognitive skills and crowdsourcing income). Results are presented in Table 15. There is no evidence of personality differences between males and females or between workers of high and low English competence (Figures 13 and 14). However, personality traits differ according to age, education, computer competence, relative income position and country.


Figure 13. Personality Traits Density Distribution by Gender

Figure 14. Personality Traits Density Distribution by English Competence

Figure 15 shows that workers under the age of thirty have statistically significantly higher levels of agreeableness and neuroticism than those who are older (|t|=2.035, p=0.042 and |t|=2.535, p=0.005, respectively). Concerning the educational level, we find that workers with tertiary education tend to be more extraverted than workers with lower education (Figure 16, |t|=2.461, p=0.007).


Figure 15. Personality Traits Density Distribution by Age

Figure 16. Personality Traits Density Distribution by Education

In addition, workers with high levels of computer competence have statistically significantly higher values in openness, extraversion and agreeableness (at the 5%, 1% and 1% levels of significance, respectively) than those with low computer skills (Figure 17, Table 15). Moreover, workers whose relative earnings position is above the 75th percentile of the corresponding distribution seem to be less open, less extraverted and to have lower levels of neuroticism than those whose relative earnings position is lower (Figure 18, Table 15). Lastly, regarding the workers’ country of origin, Table 15 shows that individuals coming from low performing countries have statistically significantly lower levels of openness (|t|=1.838, p=0.033) and extraversion (|t|=1.985, p=0.024) than those originating from high performing countries (Figure 19).

Figure 17. Personality Traits Density Distribution by Computer Competence

Figure 18. Personality Traits Density Distribution by Relative Income Position


Figure 19. Personality Traits Density Distribution by Performing Country

The following table (Table 15) presents the personality traits across the heterogeneous groups (t-tests).

Paired Variables          [1]      [2]      Difference [1]-[2]   t-test
Gender                    Males    Females
Openness                  3.667    3.718    -0.051               0.713
Conscientiousness         3.340    3.311    0.029                0.504
Extraversion              3.318    3.334    -0.016               0.265
Agreeableness             3.339    3.278    0.061                1.102
Neuroticism               3.124    3.105    0.019                0.290

Age                       Younger  Older
Openness                  3.680    3.696    -0.016               0.201
Conscientiousness         3.321    3.361    -0.040               0.602
Extraversion              3.317    3.344    -0.027               0.392
Agreeableness             3.348    3.222    0.126                2.035b
Neuroticism               3.157    2.982    0.175                2.535a

Education                 High     Low
Openness                  3.691    3.669    0.022                0.305
Conscientiousness         3.330    3.332    -0.002               0.044
Extraversion              3.372    3.222    0.150                2.461a
Agreeableness             3.337    3.283    0.055                0.971
Neuroticism               3.136    3.079    0.057                0.918

Computer Competence       High     Low
Openness                  3.715    3.546    0.169                1.962b
Conscientiousness         3.342    3.280    0.062                0.866
Extraversion              3.384    3.057    0.327                4.568a
Agreeableness             3.356    3.157    0.199                3.004a
Neuroticism               3.124    3.090    0.034                0.451

English Competence        High     Low
Openness                  3.621    3.649    -0.043               0.500
Conscientiousness         3.333    3.319    0.014                0.194
Extraversion              3.334    3.277    0.057                0.784
Agreeableness             3.323    3.300    -0.023               0.348
Neuroticism               3.102    3.186    -0.084               1.130

Relative Income Position  High     Low
Openness                  3.563    3.726    -0.163               2.132b
Conscientiousness         3.282    3.347    -0.065               1.010
Extraversion              3.238    3.352    -0.114               1.744c
Agreeableness             3.292    3.328    -0.036               0.618
Neuroticism               2.990    3.161    -0.171               2.579a

Countries                 High Performing  Low Performing
Openness                  3.716    3.567    0.149                1.838b
Conscientiousness         3.346    3.274    0.072                1.083
Extraversion              3.353    3.215    0.138                1.985b
Agreeableness             3.321    3.311    0.010                0.169
Neuroticism               3.113    3.134    -0.021               0.300

Table 15. Personality traits across heterogeneous groups (t-tests).
Source: Dataset with results drawn from Microworkers.com. Author’s calculations.
Notes: Statistical significance: a 1%, b 5% and c 10%.
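Each of these group comparisons can be reproduced with a two-sample t-statistic. The sketch below computes the Welch (unequal-variances) version on toy trait scores; it illustrates the mechanics of the method, not the thesis’s exact routine, which may have assumed equal variances.

```python
import math

def welch_t(sample1, sample2):
    """Two-sample Welch t-statistic on two lists of scores.

    A minimal sketch of the group comparisons reported in Table 15.
    Returns |t| so the convention matches the reported absolute values.
    """
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)  # standard error of the mean difference
    return abs(m1 - m2) / se

# Toy example: trait scores for two small illustrative groups.
young = [3.4, 3.5, 3.1, 3.6, 3.3]
old = [3.0, 2.9, 3.2, 3.1, 2.8]
print(round(welch_t(young, old), 3))
```

The same statistic, applied to each trait and each pair of groups, yields one cell of the t-test column above.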

4.4 Empirical analysis

4.4.1 Empirical model

To investigate the impact of personality traits on micro-workers’ performance in crowdsourcing environments, we adopt a linear regression model in which the dependent variable measures quality of work. To isolate the effects of personality traits and other “environmental” factors, our empirical specification is:

Q_i = α + β_k NC_ik + γ D_i + δ C_i + θ IC_i + e_i     (1)

where Q_i measures the quality of results (Total Correct Answers) of worker i, NC_ik is a vector of non-cognitive skills (where k refers to the Big Five variables, k=1,...,5), D_i is a vector of demographic characteristics (age and gender), C_i is a vector of cognitive skills (education, computer and English levels), IC_i is a vector of task effort-specific variables (time and number of repeats of the task, the general success rate, crowdsourcing experience and the worker’s relative income position) and e_i is the disturbance term. This specification, although general, is expected to provide evidence on the role of the Big Five variables in workers’ performance through the vector of estimated coefficients β_k. For analytical purposes, we will also estimate model (1) for certain groups of micro-workers and we will allow more flexible specifications by introducing interaction effects between personality traits and several sources of individual heterogeneity (i.e., gender and education). For interpretation purposes, Big Five scores are standardized in all specifications to have a zero mean and a standard deviation of one.
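Estimating a specification like (1) amounts to standardizing the Big Five scores and running OLS. The sketch below implements both steps from scratch on synthetic data with two standardized regressors; it is a minimal illustration that omits the robust standard errors and country fixed effects reported in the tables.

```python
def standardize(xs):
    """Z-score a list: mean 0, standard deviation 1 (population sd)."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return [(x - m) / sd for x in xs]

def ols(X, y):
    """OLS coefficients via the normal equations (Gaussian elimination).

    X: list of rows, each row already including a leading 1 for the
    intercept. A minimal sketch of estimating a model like (1).
    """
    k = len(X[0])
    # Build the augmented system (X'X) b = X'y.
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
         + [sum(X[i][r] * y[i] for i in range(len(X)))] for r in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    # Back substitution.
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (A[r][k] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

# Toy check: noiseless data generated as y = 30 + 2*z1 - 1.5*z2 is recovered.
z1 = standardize([1, 2, 3, 4, 5, 6])
z2 = standardize([5, 3, 6, 2, 4, 1])
y = [30 + 2 * a - 1.5 * b_ for a, b_ in zip(z1, z2)]
X = [[1.0, a, b_] for a, b_ in zip(z1, z2)]
print([round(c, 6) for c in ols(X, y)])
# → [30.0, 2.0, -1.5]
```

With standardized regressors, each slope reads directly as the change in correct answers per one-standard-deviation change in the trait, which is how the coefficients in the tables below are interpreted.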

4.4.2 Estimation results

This section presents estimates from several specifications of the baseline model (1), for sub-groups of micro-workers according to their origin, and from richer specifications with interaction effects that identify possible differentiated effects of personality traits on performance for specific sources of individual heterogeneity.

Baseline model

Table 16 presents the results of our baseline model. According to the results in Column 1, extraversion and neuroticism exert a statistically significant effect on workers’ performance, while the rest of the personality traits do not seem to correlate with performance. More specifically, an increase of one standard deviation in the level of extraversion is associated with an increase in performance of 2.38 total correct answers. On the other hand, an increase of one standard deviation in the level of neuroticism induces a decrease in a worker’s performance of about 1.86 total correct answers. These findings corroborate the hypotheses that extraversion has a positive effect on online job outcomes while neuroticism is negatively associated with online performance. Furthermore, the inclusion of demographic variables (Column 2) does not affect these estimates. However, the addition of cognitive variables (Column 3) alters the effect of extraversion on workers’ performance, indicating a possible correlation between personality traits and cognitive skills. Workers with tertiary education and high computer competence provide more correct answers than workers with non-tertiary education and low computer competence, respectively. The effect of neuroticism, however, is not affected, in terms of either significance or magnitude. The same holds for the augmented specification (Column 4), where several task-effort variables are taken into consideration. Micro-workers who spent more time completing the task and who repeated it more before their final submission seem to perform better. Finally, the last specification also reveals that workers over thirty years old provided worse results than the younger ones.
Thus, only H1 and H2 seem not to be rejected by the data, indicating that a more extraverted nature results in a better online job outcome and that more neurotic individuals perform worse in online crowdsourcing tasks.


Independent variables        [1]        [2]        [3]        [4]
Constant                     37.831a    39.882a    27.688a    17.592a
                             (0.302)    (1.594)    (1.584)    (2.661)
Openness                     -0.778     -0.800     -0.613     -0.359
                             (0.570)    (0.583)    (0.523)    (0.487)
Conscientiousness            -0.227     -0.131     0.277      0.460
                             (0.626)    (0.632)    (0.515)    (0.475)
Extraversion                 2.387a     2.384a     1.075b     0.844a
                             (0.839)    (0.826)    (0.567)    (0.481)
Agreeableness                0.645      0.617      0.297      0.398
                             (0.630)    (0.635)    (0.380)    (0.348)
Neuroticism                  -1.860b    -1.957a    -1.633a    -1.667c
                             (0.703)    (0.666)    (0.463)    (0.462)
Age                                     -0.078     -0.071     -0.086b
                                        (0.051)    (0.045)    (0.043)
Female                                  0.717      0.641      1.437
                                        (1.117)    (1.023)    (1.121)
Tertiary education                                 6.327a     4.597a
                                                   (1.072)    (0.736)
Computer competence (H)                            6.078a     4.439a
                                                   (1.552)    (1.321)
English competence (H)                             1.734      0.448
                                                   (1.170)    (1.001)
Completion time                                               1.796a
                                                              (0.359)
Repeated action (H)                                           1.467c
                                                              (0.816)
Success Rate (H)                                              0.962
                                                              (0.901)
Experience                                                    0.191
                                                              (0.267)
Relative Income Position                                      2.802
                                                              (3.744)
R-squared                    0.533      0.537      0.707      0.773
Observations                 250        250        250        250

Table 16. Determinants of crowdsourcing micro-task performance.
Source: Dataset with results drawn from Microworkers.com. Author’s calculations.
Notes: Dependent variable: Quality of results (Total Correct Answers). OLS estimates. The non-cognitive skill variables have been standardized to a mean of zero and a standard deviation of one. Heteroskedasticity-corrected standard errors in parentheses. All model specifications include country-specific fixed effects. Statistical significance: a 1%, b 5% and c 10%.

High and low performing groups

It is well known that micro-workers from developing countries provide lower quality of results in online crowdsourcing tasks than those coming from western developed countries (Litman et al. 2015). To investigate this argument, we define two groups of micro-workers and estimate the above specifications separately. The first group includes workers from western developed countries (USA, Germany, England, Canada, Portugal, Belgium, Serbia, Croatia, Bulgaria, Romania, FYROM, Hungary, Australia, Czech Republic, Venezuela, Brazil, Argentina, Denmark, Japan, Italy, Spain, France, Saudi Arabia and Greece) and the second workers from less developed countries (India, Sri Lanka, Nepal, Bangladesh, Philippines, Algeria, Tunisia and Malaysia). Estimation results for high and low performers are presented in Tables 17 and 18, respectively.

Regarding workers from high performing countries (Table 17), workers with higher values in extraversion provide more correct answers, while those with higher values in neuroticism provide lower quality of results (Column 1). These findings are not sensitive to the inclusion of demographic variables (Column 2), but their effect on performance diminishes when cognitive skills and effort-specific variables are added to the model (Columns 3-4). Regarding the effect of cognitive skills, we find that workers’ performance is related to computer competence and tertiary education at the 1% level of significance, while the role of English competence is non-existent. Regarding the demographic variables, a weak effect of age appears in all specifications (Columns 2-4), in line with Table 16 (pooled sample). Moreover, for workers coming from western countries, online job completion time has a strongly significant effect on overall job output. Lastly, the final specification (Column 4) confirms the evidence presented in Column 4 of Table 16 (pooled sample), indicating that even within samples of micro-workers from western countries more neurotic individuals provide significantly lower quality results in online crowdsourcing tasks (H2) and extraverted workers significantly higher ones (H1).

Independent variables        [1]        [2]        [3]        [4]
Constant                     37.859a    39.859a    26.279a    17.768a
                             (0.445)    (1.909)    (1.769)    (2.827)
Openness                     -0.575     -0.638     -0.362     -0.123
                             (0.674)    (0.748)    (0.651)    (0.609)
Conscientiousness            -0.265     -0.098     0.182      0.279
                             (0.839)    (0.857)    (0.611)    (0.655)
Extraversion                 2.508b     2.490b     1.036      0.923c
                             (1.041)    (1.025)    (0.643)    (0.543)
Agreeableness                0.686      0.644      0.172      0.361
                             (0.697)    (0.729)    (0.374)    (0.298)
Neuroticism                  -1.946b    -2.128b    -1.585a    -1.679a
                             (0.842)    (0.807)    (0.477)    (0.539)
Age                                     -0.099c    -0.086c    -0.095b
                                        (0.058)    (0.045)    (0.045)
Female                                  1.702      1.324      1.921
                                        (1.033)    (1.051)    (1.213)
Tertiary education                                 5.961a     4.368a
                                                   (0.886)    (0.818)
Computer competence (H)                            7.789a     5.879a
                                                   (1.879)    (1.640)
English competence (H)                             2.007      0.653
                                                   (1.401)    (1.313)
Completion time                                               1.568a
                                                              (0.393)
Repeated action (H)                                           1.615
                                                              (1.141)
Success Rate (H)                                              0.421
                                                              (0.819)
Experience                                                    0.305
                                                              (0.308)
Relative Income Position                                      -2.395
                                                              (4.965)
R-squared                    0.314      0.326      0.617      0.690
Observations                 196        196        196        196

Table 17. Determinants of workers’ performance on the crowdsourcing task: workers from high performing countries.
Source: Dataset with results drawn from Microworkers.com. Author’s calculations.
Notes: Dependent variable: Quality of results (Total Correct Answers). OLS estimates. The non-cognitive skill variables have been standardized to a mean of zero and a standard deviation of one. Heteroskedasticity-corrected standard errors in parentheses. All model specifications include country-specific fixed effects. Statistical significance: a 1%, b 5% and c 10%.

Regarding workers from low performing countries (Table 18), we observe that extraversion and neuroticism exert a statistically significant effect on individual productivity (Column 1). This finding remains robust after the inclusion of additional covariates, i.e., demographics and cognition indicators (Columns 2-3). However, when we control for effort-specific variables (Column 4) we find that micro-workers’ productivity is negatively affected by neuroticism and positively by conscientiousness, confirming hypotheses H2 and H3, while the positive effect of extraversion becomes more robust, supporting hypothesis H1. In addition, only the effect of tertiary education is now significant and positive in the full specification (Column 4), and female workers coming from low performing countries provided worse results than the men (Columns 2-3), a finding not observed in Table 17 (high performing countries). Regarding the effort-specific variables, we find that micro-workers with higher values of completion time provide higher quality of results and that, only in this sub-sample, a worker’s general success rate in previous online tasks is a strong indicator of current online performance at the 1% level of significance (Column 4).

Independent variables        [1]        [2]        [3]        [4]
Constant                     37.623a    39.585a    32.739a    10.071
                             (0.256)    (3.227)    (2.125)    (6.321)
Openness                     -1.262     -1.439     -1.257     -0.451
                             (0.938)    (0.959)    (1.079)    (1.141)
Conscientiousness            -0.016     -0.434     -0.034     0.976c
                             (1.117)    (0.831)    (0.802)    (0.540)
Extraversion                 1.927c     2.267b     1.713c     0.314a
                             (1.012)    (0.993)    (0.839)    (0.702)
Agreeableness                0.498      0.766      0.760      0.439
                             (1.442)    (1.319)    (0.978)    (1.629)
Neuroticism                  -1.152c    -1.649c    -2.114b    -1.680c
                             (0.910)    (0.904)    (0.796)    (0.523)
Age                                     0.047      0.019      0.087
                                        (0.138)    (0.175)    (0.135)
Female                                  -4.123c    -2.822c    -0.229
                                        (2.127)    (1.458)    (1.650)
Tertiary education                                 5.283      3.164c
                                                   (4.295)    (1.463)
Computer competence (H)                            2.016      1.413
                                                   (2.646)    (1.235)
English competence (H)                             -0.640     0.917
                                                   (1.035)    (1.518)
Completion time                                               2.780a
                                                              (0.646)
Repeated action (H)                                           1.087
                                                              (0.767)
Success Rate (H)                                              5.631a
                                                              (1.704)
Experience                                                    -0.277
                                                              (0.645)
Relative Income Position                                      6.433
                                                              (5.321)
R-squared                    0.735      0.754      0.793      0.890
Observations                 54         54         54         54

Table 18. Determinants of crowdsourcing micro-task performance: workers from low performing countries.
Source: Dataset with results drawn from Microworkers.com. Author’s calculations.
Notes: Dependent variable: Quality of results (Total Correct Answers). OLS estimates. The non-cognitive skill variables have been standardized to a mean of zero and a standard deviation of one. Heteroskedasticity-corrected standard errors in parentheses. All model specifications include country-specific fixed effects. Statistical significance: a 1%, b 5% and c 10%.

Heterogeneous effects

We now turn our attention to the possibility that personality traits may correlate with performance differently across various sub-samples of micro-workers. We are interested in analyzing whether the effects of personality traits on overall performance depend on gender and education. It may be the case that differences in the impact of personality traits on the quality of results are driven by the gender composition of micro-workers with non-tertiary and tertiary education. Table 19 presents the estimated results of all model specifications with the interaction terms. For comparison purposes, Column (1) shows the results of our baseline model.

Regarding gender differences (Column 2), we observe that male subjects with higher levels of neuroticism achieved lower quality of results, while we find no significant relationship between performance and neuroticism for females. A rise of one standard deviation in neuroticism decreases performance by approximately 1.4 for men and 1.9 for women, showing that neuroticism has a large impact especially on male performance at the 1% level of significance. Our result is in line with previous findings by Cubel et al. (2016), who reported in a laboratory productivity experiment that neuroticism has a negative impact for both genders, and by Schmitt (2007), who found that females report lower levels of emotional stability, which is related to low levels of self-efficacy and performance.


Column (3) provides the information needed to assess whether the impact of personality traits on performance varies between micro-workers with tertiary and non-tertiary education. We find that the quality of results is significantly higher for micro-workers with non-tertiary education and higher levels of conscientiousness, while it is lower for workers with at least tertiary education (i.e., a rise of one standard deviation in conscientiousness increases performance by approximately 2.3 in the former sub-group and decreases it by 4.1 in the latter). In addition, performance also deteriorates for micro-workers with non-tertiary education and higher levels of neuroticism.

The full specification, presented in Column 4, shows that male micro-workers with higher levels of neuroticism provide lower quality of results (a one-standard-deviation increase in neuroticism is associated with about 1.6 fewer correct answers). Regarding the impact of conscientiousness on performance, we find that males with non-tertiary education perform better in online tasks at higher levels of this personality trait. On the other hand, males with tertiary education and higher levels of conscientiousness seem to provide lower quality of work in online crowdsourcing environments.

Independent Variables                    [1]        [2]        [3]        [4]
Openness                                 -0.778     -0.728     -0.752     -1.055
                                         (0.571)    (0.587)    (0.642)    (0.740)
Openness × Female                                   1.983                 2.162
                                                    (1.973)               (1.790)
Openness × Tertiary Education                                  1.179      0.851
                                                               (1.779)    (1.647)
Conscientiousness                        -0.228     0.321      2.283a     2.164a
                                         (0.626)    (0.524)    (0.517)    (0.542)
Conscientiousness × Female                          1.243                 -2.653
                                                    (2.281)               (2.421)
Conscientiousness × Tertiary Education                         -6.351b    -7.099b
                                                               (2.664)    (2.649)
Extraversion                             2.388a     0.808      0.341      0.212
                                         (0.839)    (0.539)    (0.951)    (0.985)
Extraversion × Female                               0.041                 -0.155
                                                    (2.138)               (2.155)
Extraversion × Tertiary Education                              1.685      2.044
                                                               (2.776)    (2.823)
Agreeableness                            0.646      0.509      -0.004     0.247
                                         (0.630)    (0.389)    (1.031)    (1.045)
Agreeableness × Female                              -1.254                -1.274
                                                    (2.539)               (2.520)
Agreeableness × Tertiary Education                             1.260      -0.669
                                                               (3.288)    (3.305)
Neuroticism                              -1.860b    -1.432a    -1.770a    -1.601b
                                         (0.702)    (0.430)    (0.591)    (0.657)
Neuroticism × Female                                -1.884                -2.180
                                                    (2.176)               (1.875)
Neuroticism × Tertiary Education                               0.555      1.072
                                                               (2.023)    (1.958)
R-squared                                0.533      0.776      0.783      0.788
Observations                             250

Table 19. Big Five personality traits and performance: interaction effects.
Source: Dataset with results drawn from microworkers.com. Author’s calculations.
Notes: Dependent variable: Quality of results (Total Correct Answers). The non-cognitive skill variables have been standardized to a mean of zero and a standard deviation of one. Standard errors in parentheses are corrected with clustering at the country level. All model specifications [1]-[4] control for country fixed effects. Statistical significance: a 1%, b 5% and c 10%.

Robustness

It is well known that the combination of cognitive and non-cognitive traits may play an important role in determining overall performance (Heckman and Rubinstein, 2001) and that cognitive abilities may affect personality traits (Heckman et al. 2006). For example, in our study we found that quality of results depends on education, which is also positively correlated with conscientiousness. To test whether this mechanism may affect our results we perform a two-step estimation.

In the first step, we regress each cognitive skill (i.e., tertiary education, computer skills and English competence) on personality traits, controlling for the subject’s demographic and country characteristics. This gives us a cleaner measure of each cognitive ability, defined in each case as the residual of educational level, computer skills or English language competence that is left unexplained by personality traits. In the second stage, we estimate our model of interest, controlling for the constructed measure of each cognitive skill.
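A minimal sketch of this two-step procedure on synthetic data, using ordinary least squares via NumPy (variable names, dimensions and data are illustrative stand-ins, not the thesis dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250

# Synthetic stand-ins for the thesis variables (illustrative only).
personality = rng.normal(size=(n, 5))                 # Big Five, standardized
tertiary = rng.integers(0, 2, size=n).astype(float)   # cognitive skill proxy
quality = rng.normal(size=n)                          # total correct answers

def ols_residuals(y, X):
    """Residuals of y after an OLS regression on X (with a constant)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return y - Xc @ beta

# First stage: purge the cognitive skill of its personality component.
tertiary_resid = ols_residuals(tertiary, personality)

# Second stage: regress quality on the personality traits plus the
# residualized cognitive skill (a clean measure net of personality).
X2 = np.column_stack([np.ones(n), personality, tertiary_resid])
beta2, *_ = np.linalg.lstsq(X2, quality, rcond=None)
```

By construction the first-stage residuals are orthogonal to the personality traits, which is what allows the second-stage trait coefficients to be read as effects not operating through cognitive skills.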

Table 20 presents the second-stage estimates of the three model specifications. Column 1 refers to the model that focuses on the effect of personality traits on quality of results using the residuals from the first-stage regression for tertiary education. Notably, residualizing an individual’s educational level does not materially affect the magnitude of the extraversion and neuroticism effects on the quality of results (compare Table 16, Column 4).

Similar results are drawn from Columns 2 and 3, where we control for computer competence and English competence net of personality traits, respectively. This exercise supports our interpretation that the effect of personality traits is autonomous and does not operate through the interaction with cognitive skills.

                                    [1]       [2]       [3]
Constant                          18.310a   17.517a   17.828a
                                  (2.705)   (2.629)   (2.784)
Personality Traits
Openness                          -0.474    -0.259    -0.355
                                  (0.482)   (1.483)   (0.481)
Conscientiousness                  0.429     0.291     0.475
                                  (0.460)   (0.471)   (0.487)
Extraversion                       0.994c    1.241b    0.845c
                                  (0.499)   (0.534)   (0.479)
Agreeableness                      0.367     0.526     0.400
                                  (0.342)   (0.345)   (0.348)
Neuroticism                       -1.592a   -1.770a   -1.690a
                                  (0.441)   (0.472)   (0.457)
Cognitive Skills (residuals)
Tertiary education                 4.541a
                                  (0.765)
Computer competence                          4.565a
                                            (1.348)
English competence                                     0.507
                                                      (0.975)
R-squared                          0.771     0.773     0.773
Observations                                  250

Table 20. Big Five Personality traits and Performance: Two-stage Estimates of Total Correct Answers

Source: Dataset with results drawn from microworkers.com. Author’s calculations. Notes: Dependent variable: Quality of results (Total Correct Answers). Our explanatory variables of non-cognitive skills have been standardized to have a mean of zero and a standard deviation of one. Corrected standard errors, clustered at the country level, in parentheses. All specifications control for demographic, cognitive, individual’s effort and country characteristics. Residuals are derived from a first-stage estimation (linear regression) of each cognitive skill conditional on background characteristics and a quadratic term in personality traits. All model specifications from [1] - [3] control for country fixed effects. Statistical significance: a 1%, b 5% and c 10%.

4.5 Conclusions

Skills are emerging as a critical factor in achieving high quality of work in crowdsourcing tasks. In this chapter, we make a first attempt to understand the role of cognitive and non-cognitive skills in micro-workers’ performance on online tasks. For analytical purposes, we conducted an online experiment using the microworkers.com platform, where the performance of micro-workers is based on their correct answers after listening to a music sample with lyrics. We were also able to collect information on cognitive skills, personality traits and several socio-demographic characteristics.

According to our results, extraversion and neuroticism exert a statistically significant and robust impact. Micro-workers in online labor markets with higher levels of neuroticism perform worse, while those with higher levels of extraversion perform better, findings that are in line with relevant studies in traditional labor markets. In addition, we found that micro-workers from less developed countries with higher levels of conscientiousness provide better results, while those with higher levels of neuroticism report lower levels of quality of work. Moreover, the impact of personality traits on performance seems to differ across several sources of heterogeneity (gender and education). The online performance of males is negatively affected by neuroticism. However, at higher levels of conscientiousness, males with non-tertiary education perform better in online tasks, while those with tertiary education perform worse.

Future research could include additional experiments in online labor markets that replicate well-known experiments from offline labor market environments, as well as relevant surveys. Such attempts would help to better understand the mechanisms behind the relationship between personality traits and task performance, and to further explore how non-cognitive skills can explain performance and its distribution among micro-workers in crowdsourcing marketplaces. This effort is expected to contribute towards the standardization of the basic “building block” tasks, which would make crowdsourcing more scalable and make it easier to set prices, spread best practices, build meaningful reputation systems and track quality.


Chapter 5 Discussion and Conclusions

5.1 Limitations

This thesis has covered the crowdsourcing fundamentals, focusing on the characteristics that may affect the performance of this innovative online business process. In this chapter, we present the basic limitations of our analysis, our final conclusions and recommendations for future research questions on crowdsourcing.

Crowdsourcing has in recent years received the interest of researchers in various fields who aim to analyze, comprehend, assess and even improve this new form of labor, and ultimately to find strategies and frameworks that keep the quality of the work being done at high levels (Howe 2008). An overview of the general principles of crowdsourcing aimed towards achieving high quality of work is already given by the existing literature (Yuen et al. 2011).

However, crowdsourcing as a new online form of labor has some general disadvantages that may affect our analysis in this thesis. More specifically:

• Lack of a sound theoretical economic framework for crowdsourcing, which would provide guidelines and tested workflows for crowdsourcing labor experiments.
• Cost of management – crowdsourcing means outsourcers need to deal with workers directly, so the buyer of services may need to spend time and money to manage these resources effectively.
• Creating a fair marketplace – another challenge arising from users dealing directly with one another is that processes and rules need to be constantly re-defined to take into account the unlimited possible user cases, now and in the future.
• Quality control – in an online environment in which people engage with each other directly, trust and safety as well as quality control can be highly challenging; given that this is a base expectation for most users, it needs to be managed well from the outset.

Hence, regarding our macroscopic analysis presented in chapter 3, some important information about the crowdsourcing platforms reviewed online was not readily available, which limited the extent of this research. In particular, existing crowdsourcing platforms did not provide quantitative data related to, for example, the total number of workers registered, the average number and volume of tasks completed per worker per day, the completion rate of tasks, etc. In order to gain

insights into such aspects, a survey was conducted in the form of a questionnaire, which was sent to all websites reviewed. The questionnaire asked providers to answer questions related to various aspects of their website, such as their workforce, tasks completed and revenue earned. Fewer than 10% of the websites surveyed responded to this request and completed the questionnaire. This made it impossible to include such data in the review and the analysis, and thus to understand more deeply which of the online platforms’ functional features and characteristics may have a significant effect on their productivity.

Regarding our microscopic analysis presented in chapter 4, the lack of primary data made it necessary to conduct an online labor experiment in order to construct our database. The overall number of observations was limited by our budget, resulting in 100 workers’ outputs.

Recall that quality of work in crowdsourcing is the extent to which the outcome provided by the worker fulfills the requirements of the requester. Quality of work in such contexts is therefore considered a subjective issue, which is why many researchers propose various models and metrics to assess and ensure high quality of work in such environments. Moreover, workers in crowdsourcing markets usually have different levels of expertise and experience, and often adjust their efforts according to incentives, affecting the quality of the outcome and resulting in high levels of heterogeneity (Li & Reynolds, 1995). Our microscopic analysis addressed this issue by conducting several heterogeneity tests and by examining it in detail through interaction effects of subgroup-specific estimates (Altman & Matthews, 1996), making our regression results more reliable and robust.
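The interaction-effects approach used for these heterogeneity tests can be illustrated with a single trait and a gender dummy (synthetic data; variable names are illustrative, not the thesis dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250

trait = rng.normal(size=n)                            # e.g. standardized conscientiousness
female = rng.integers(0, 2, size=n).astype(float)     # subgroup indicator
quality = rng.normal(size=n)                          # stand-in for total correct answers

# The interaction regressor trait*female lets the trait's slope differ by
# subgroup: slope for males = beta[1], slope for females = beta[1] + beta[3].
X = np.column_stack([np.ones(n), trait, female, trait * female])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
```

A significant interaction coefficient (beta[3]) is what signals that the trait's effect genuinely differs between the subgroups, rather than estimating separate regressions per group.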

5.2 Conclusions and Recommendations

The implications of this research are manifold. First of all, the macroscopic analysis of the crowdsourcing platforms yielded several useful results. In general, several characteristics of crowdsourcing websites (as online labor marketplaces) have important effects on their overall performance. The study in chapter 3 revealed that a website’s type of crowdsourcing service, its quality control mechanisms and its adopted digital marketing strategies have significant effects on its overall performance. Hence, the analysis showed that effective ways for a crowdsourcing website to increase its performance and productivity are: enriching the profiles of its workers with information about their past job performance; giving job requesters the opportunity to conduct skill and practice tests among hired workers; providing effective mechanisms for spam detection (i.e., of malicious workers); and adopting several principles of digital marketing. Last but not least, the research revealed that mobile devices will be a key component of the crowdsourcing process in the near future.


It is known that crowd workers come from different socio-economic backgrounds, with different skills and personality traits, resulting in job outcomes of varying quality. As a result, crowdsourcing can suffer from low-quality and inaccurate outcomes due to sloppy worker behavior. Driven by the aforementioned considerations, we decided to investigate more deeply the key individual factors in relation to overall performance.

Regarding the microscopic analysis conducted in chapter 4, the study showed that the skills of the individuals who participate in crowdsourcing jobs are emerging as a critical factor in achieving high quality of work. Thus, the research comprises a comprehensive and serious attempt to understand the role of workers’ cognitive and non-cognitive skills and their impact on performance in online tasks.

By conducting an online crowdsourcing experiment on the microworkers.com platform, the study collected detailed information on cognitive skills, personality traits and several socio-demographic characteristics, and on their relationship with a subject’s performance. According to the results, neuroticism exerts a statistically significant and robust impact: micro-workers in online labor markets with higher levels of neuroticism perform worse, a finding that is in line with relevant studies in traditional labor markets. In addition, micro-workers from less developed countries with higher levels of conscientiousness provide better results, while those with higher levels of neuroticism report lower levels of quality of work.

The impact of personality traits on performance seems to differ across several sources of heterogeneity (gender and education). Thus, the online performance of males is negatively affected by neuroticism. However, males with non-tertiary education perform better in online tasks at higher levels of conscientiousness. Lastly, females with higher levels of openness provide better quality of work.

Taking the above highlighted results together, it is obvious that the characteristics of an individual participating in crowdsourcing projects have a significant impact on his overall output. For that reason, researchers in the near future must focus more deeply on workers’ characteristics, so that job requesters have the opportunity to observe a worker’s behavioral pattern in crowdsourcing environments quickly and efficiently. Based on these patterns, it will be easier for the requester to define several worker types and potentially identify the best workers for a given online job.

In general, this thesis adds to the body of research that focuses on quality assessment and improvement strategies in crowdsourcing environments, providing information about key factors that have a significant effect on quality, both on the side of the crowdsourcing platforms and on the side of the individuals who participate in such online processes.


5.3 Future Work

Crowdsourcing marketplaces present an opportunity for researchers who require human computation services, especially for tasks that are small, require a variety of different skills or interests, or are intermittent in their availability. They offer a persistent workforce that is available on demand at an affordable price. However, no system is perfect.

Crowdsourcing may blend the best aspects of the open source philosophy with the benefits of global business (including its outsourcing component), but it may also result in low-quality outcomes and unreliable data from workers. The crowd is not only part of the online productive process but also produces tangible goods. In other ways, though, crowdsourcing necessarily involves casualties, as any shift in production does.

For that reason, in order to improve this new and innovative online labor marketplace, future research must focus mainly on two dimensions, more particularly on:

• How to recruit and retain the most suitable workers for a particular task. As in real life, workers are individuals with different characteristics and skills. As mentioned before, online tasks also vary and require particular skills in order to be accomplished, so not every worker is suitable for every online job. For that reason, platform mechanisms and strategies need to be created in order to achieve, each time, an optimal match between the characteristics of an online task and those of a worker. This will probably improve the overall quality of the crowdsourcing process, because a worker will not spend time and effort on jobs that are outside his field of interest and will only participate in online projects which lead to his job satisfaction.

• How to evaluate workers’ outcomes efficiently. Crowdsourcing systems (i.e. online platforms) often must manage malicious users. To do so, they often use a combination of techniques that block, detect and deter. Nevertheless, these mixed strategies are still at an initial stage and have not yet been adopted by a significant share of crowdsourcing platforms. Our study has shown the importance of existing quality control mechanisms and their significant impact on the number of accomplished online tasks. Hence, it is crucial to improve these quality assessment mechanisms and to structure strategies and techniques that allow an online platform to detect, and then treat, the malicious behavior of its workers easily, quickly and efficiently.
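As one concrete illustration of such a quality-control mechanism, tasks can be seeded with gold questions whose answers are known in advance, and workers whose accuracy on them falls below a threshold can be flagged for review. A minimal sketch (function name, data layout and threshold are illustrative, not any platform's actual API):

```python
def flag_suspect_workers(answers, gold, threshold=0.7):
    """Flag workers whose accuracy on gold-standard questions is below threshold.

    answers: {worker_id: {question_id: answer}}
    gold:    {question_id: correct_answer}
    """
    suspects = set()
    for worker, given in answers.items():
        graded = [(q, a) for q, a in given.items() if q in gold]
        if not graded:
            continue  # no overlap with gold questions: cannot judge this worker
        accuracy = sum(gold[q] == a for q, a in graded) / len(graded)
        if accuracy < threshold:
            suspects.add(worker)
    return suspects
```

In practice platforms combine such gold-question checks with redundancy (majority voting across workers) and behavioral signals such as completion time.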

For the aforementioned reasons, we designed a large-scale experiment to analyze in depth the characteristics that affect the quality of work done in Amazon Mechanical Turk, the crowdsourcing online market best known in the experimental research field of behavioral economics.

The literature recalls that Mechanical Turk is a popular crowdsourcing platform that was launched in 2005 by Amazon and is now widely used as a source of subjects for experimental research (e.g. Eriksson & Simpson, 2010; Mason & Suri, 2012; Crump et al., 2013; Buhrmester et al., 2018). AMT offers an online marketplace for work that requires human intelligence, and allows for the easy distribution of small tasks to many anonymous workers at the low wages offered by job requesters. Workers on AMT can be recruited rapidly and inexpensively, so MTurk can be used to obtain high-quality data cheaply and quickly. Most American workers use this crowdsourcing platform as a supplementary source of income, and Mechanical Turk is often used by unemployed and underemployed workers. Participants can work from home and can choose their working hours.

The participants of AMT can complete an unlimited number of online jobs depending on their financial needs. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. Over time, the research body has progressively documented the characteristics of MTurk as a participant pool for economics, psychology and other social sciences, highlighting the traits of MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.

Lastly, findings indicate that: (a) MTurk participants are slightly more representative of the U.S. population than standard Internet samples and significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods (Buhrmester et al., 2011; Ipeirotis, 2010).

Taking all the above into consideration, we used the Big Five personality test. Recall that the Big Five personality test is a 44-item inventory that measures an individual on the Big Five factors (dimensions) of personality (Goldberg, 1993), on a 1-5 scale, where 1 = disagree, 2 = slightly disagree, 3 = neutral, 4 = slightly agree and 5 = agree. As mentioned before, with this validated psychological questionnaire we can easily and efficiently observe the five major dimensions of personality and discover how a worker’s personality measures up in Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. In addition, we used several tests in order to evaluate the workers’ English competence and computer abilities.
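Scoring such a Likert inventory amounts to averaging each trait's items, with reverse-keyed items scored as 6 − x; a minimal sketch (the item numbers in the usage example are hypothetical, not the actual BFI-44 scoring key):

```python
def trait_score(responses, items, reversed_items=()):
    """Average a 1-5 Likert trait score; reverse-keyed items score as 6 - x.

    responses: {item_number: answer in 1..5}
    items / reversed_items: item numbers belonging to one trait
    (hypothetical here, not the published BFI-44 key).
    """
    scores = []
    for i in items:
        x = responses[i]
        scores.append(6 - x if i in reversed_items else x)
    return sum(scores) / len(scores)
```

For example, with hypothetical items 1 and 2 for a trait where item 2 is reverse-keyed, answers {1: 5, 2: 1} yield a trait score of 5.0.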


Moreover, we will set two criteria for the final selection of our pool of workers. First, in order to participate in our study, workers must have an approval rate of at least eighty (80) percent; second, they must have participated in and completed at least fifty (50) previous online tasks.
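In boto3's MTurk API these two criteria can be expressed through Amazon's built-in system qualification types for approval rate and number of approved HITs. The sketch below only builds the requirement list that would be passed to `create_hit` (an assumption of how the setup could be scripted, not the procedure actually used in the thesis):

```python
# Requirement list for boto3's mturk.create_hit(QualificationRequirements=...).
# The two IDs are Amazon's built-in system qualification types
# (PercentAssignmentsApproved and NumberHITsApproved).
qualification_requirements = [
    {
        "QualificationTypeId": "000000000000000000L0",  # approval rate (%)
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [80],
    },
    {
        "QualificationTypeId": "00000000000000000040",  # approved HITs count
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [50],
    },
]
```

Workers who do not meet both requirements would simply not see the HIT, which enforces the selection criteria before any data is collected.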

Then, we will require the participants to complete a questionnaire on their demographics, covering gender, age and marital status. After that, they will complete several cognitive skills tests and, finally, the Big Five personality questionnaire. Last but not least, after all the above-mentioned questionnaires, a worker will automatically be redirected to a link in order to participate in the online task.

We require at least 500 workers originating from the USA for the experiment to be considered successfully completed. The participants will be distributed equally into five groups by a random algorithm, so every group will contain 100 workers and will correspond to a different level of bonus payment. The bonus payment will be used as an indirect treatment tool for the online performance of the participants in the experiment. What we really want to examine is whether or not pay-for-performance improves the quality of the online work (Lazear, 2000).
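The random, equal-sized assignment described above can be sketched as follows (function and group names are illustrative):

```python
import random

def assign_groups(worker_ids, groups, seed=42):
    """Randomly split workers into equally sized treatment groups."""
    ids = list(worker_ids)
    assert len(ids) % len(groups) == 0, "groups must divide the sample evenly"
    random.Random(seed).shuffle(ids)  # seeded for a reproducible assignment
    size = len(ids) // len(groups)
    return {g: ids[i * size:(i + 1) * size] for i, g in enumerate(groups)}

groups = ["Benchmark", "Treat_A", "Treat_B", "Treat_C", "Treat_D"]
assignment = assign_groups(range(500), groups)
```

Because assignment is random, any systematic difference in T2 performance across the five groups can be attributed to the bonus treatment rather than to worker composition.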

The crowdsourcing experiment will consist of five groups, named Benchmark, Treat A, Treat B, Treat C and Treat D, and will take place in two rounds (time periods T1 and T2). Each round will last a hundred and fifty (150) seconds, with a break between the rounds lasting from a hundred and twenty to a hundred and fifty (120-150) seconds. During the break, workers are obliged to watch a YouTube video until its end, while being informed by instructions about the additional bonus payment for their performance in the second round of the experiment.

Recall that, at the beginning of the experiment, the participants must complete a questionnaire regarding their demographic characteristics and cognitive skills. Afterwards, a link will redirect them to a static web page with the instructions for the online task (Figure 20).


Figure 20. Welcome Page with the instructions of the online task

In the first period (T1), workers will be asked to add as many 2-digit random numbers as possible in 150 seconds. Once an answer is submitted, it cannot be changed. The task is customized using zTree (Fischbacher 2007). In the first period T1, each worker gets 1$ independently of his online performance and bonus-payment group. After the completion of the first phase, the abovementioned mandatory two-minute break will take place. During the downtime, the workers will be informed about the changes that will occur in the second phase of the online job. Thus, they will be informed about the opportunity, from then on, to collect additional bonus payments for each correct answer, and about the opportunity for real-time monitoring, through a money counter displayed on the screen, of the total amount of money earned during the T2 period of the online job.

The additional bonus payment of the second phase of the experiment (T2) will depend on the group to which each worker belongs. More specifically, a worker who belongs to the Benchmark group will not have the opportunity of a bonus payment in either phase of the task (Bonus = 0) and will receive a fixed reward of 1$, while workers belonging to the other four treatment groups will have the opportunity to earn extra money for each correct answer through bonus payments (Treat_A, Treat_B, Treat_C and Treat_D) (Table 21). For example, a worker in the Treat_C group will certainly earn 1$ for his performance in the T1 phase of the task, plus an extra bonus of one (1) cent for each correct answer in T2, up to a maximum bonus of ten (10) cents. It is worth mentioning again that participants will be able to monitor in real time how much money they have earned in total. This information may possibly work as an extra incentive, so that workers can easily adapt their performance to their financial needs in real time during the online job on AMT.
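Under this design, a worker's total payment is a flat 1$ for T1 plus a capped piece-rate bonus in T2. A small sketch using the rates from the experimental design (the cap follows the "bonus max" values of the design table):

```python
# Per-correct-answer bonus rate and bonus cap ($) for each treatment group,
# taken from the experimental design; Benchmark pays a flat 1$ only.
BONUS = {
    "Benchmark": (0.000, 0.00),
    "Treat_A":   (0.001, 0.01),   # max bonus 1% of 1$
    "Treat_B":   (0.005, 0.05),   # max bonus 5% of 1$
    "Treat_C":   (0.010, 0.10),   # max bonus 10% of 1$
    "Treat_D":   (0.015, 0.15),   # max bonus 15% of 1$
}

def payout(group, correct_t2):
    """Total payment: 1$ flat for T1 plus a capped piece-rate bonus in T2."""
    rate, cap = BONUS[group]
    return round(1.0 + min(rate * correct_t2, cap), 3)
```

For instance, a Treat_C worker with 5 correct answers in T2 would receive 1.05$, and can never exceed the group's 1.10$ maximum payout.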

Our cost forecasting showed that the total cost of the experiment will be 637.2$. Specifically, the initial flat rate for all workers will be five hundred US dollars (500$), the bonus payments will be approximately thirty-one US dollars (31$) and, based on the above rewards, the fee charged by MTurk will be approximately a hundred and six US dollars (106.2$), since Amazon Mechanical Turk charges the requester a flat fee of 20% of total rewards for its online services.
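The arithmetic behind this forecast, reproduced directly from the figures in the text:

```python
# 500 workers at a 1$ flat rate, ~31$ of expected bonus payments,
# plus AMT's 20% fee charged on total rewards.
flat = 500 * 1.00
bonus = 31.00
amt_fee = 0.20 * (flat + bonus)
total = flat + bonus + amt_fee   # 637.2$
```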

Table 21 shows the overall process and the experimental design of our online task on Amazon Mechanical Turk.

Task’s Experimental Design

Groups                       Obs.  Questionnaire  Round 1 (T1, 150 sec)  Treatment (break video, 120-150 sec)  Round 2 (T2, 150 sec)        Max Payout/Worker
1: Benchmark (tr=0)          100   Yes            1$/worker              Only video, no bonus                  No bonus                     1$
2: Treat_A (tr=low)          100   Yes            1$/worker              MaxBonus = 1% of 1$                   Piece rate 0.001$/correct    1$ + 0.001*correct (bonus max 1 cent) = 1.01$
3: Treat_B (tr=mid)          100   Yes            1$/worker              MaxBonus = 5% of 1$                   Piece rate 0.005$/correct    1$ + 0.005*correct (bonus max 5 cents) = 1.05$
4: Treat_C (tr=high)         100   Yes            1$/worker              MaxBonus = 10% of 1$                  Piece rate 0.010$/correct    1$ + 0.010*correct (bonus max 10 cents) = 1.10$
5: Treat_D (tr=extremehigh)  100   Yes            1$/worker              MaxBonus = 15% of 1$                  Piece rate 0.015$/correct    1$ + 0.015*correct (bonus max 15 cents) = 1.15$
Total observations = 500 workers

Table 21. Experimental design of the task to be conducted on Amazon Mechanical Turk. Notes: We will require that workers have an 80 percent approval rate and at least 50 approved previous tasks. Maximum total cost = 500$ (initial flat rate) + 31$ (bonus piece rate) + 106.2$ (AMT fee, 20%) = 637.2$.

Much discussion has appeared in the crowdsourcing literature regarding its progression to version 2.0. The future of crowdsourcing relies on its ability to integrate the human component (crowds) with advanced technological capabilities, most prominently artificial intelligence (AI) and big data. A promising direction for crowdsourcing will be a better synergy between sophisticated information technology and the human judgment (Figure 21). New platforms may arise, which will allow businesses to perform tasks with algorithms and machine learning techniques, and

then bring in human judgment when they are not quite as confident in their technology, and the human input will make the algorithms smarter.

Figure 21. Crowdsourcing’s future by integrating humans with machines. Source: https://singularityhub.com


References

Adamic L., Zhang J., Bakshy, E. & Ackerman M. “Knowledge sharing and yahoo answers: everyone knows something”, WWW '08 Proceedings of the 17th international conference on World Wide Web, Beijing, China, 2008, pp 665-674.

Aghaei, S., Nematbakhsh, M. A., & Farsani, H. K. (2012). “Evolution of the world wide web: From WEB 1.0 TO WEB 4.0”, International Journal of Web & Semantic Technology, 3(1), 1.

Agrawal, A. K., Catalini, C., & Goldfarb, A. (2011). “The geography of crowdfunding (No. w16820).” National bureau of economic research.

Agrawal, A., Horton, J., Lacetera, N., & Lyons, E. (2015). “Digitization and the contract labor market: A research agenda”, In Economic analysis of the digital economy (pp. 219-250).

Aitamurto, T. (2014). Book Review: Daren C. Brabham, “Crowdsourcing”, New Media & Society, vol. 16, no. 4, pp. 692-693.

Aker, A., El-Haj, M., Albakour, M. D., & Kruschwitz, U. (2012), “Assessing Crowdsourcing Quality through Objective Tasks”, In LREC (pp. 1456-1461).

Alagarai Sampath, H., Rajeshuni, R., & Indurkhya, B. (2014), “Cognitively inspired task design to improve user performance on crowdsourcing platforms”, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 3665-3674), ACM.

Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., & Dustdar, S. (2013), “Quality control in crowdsourcing systems: Issues and directions”, IEEE Internet Computing, vol. 17(2), pp. 76-81.

Alsyouf, I. (2007). “The role of maintenance in improving companies’ productivity and profitability. “ International Journal of production economics, 105(1), 70-78.

Altman, D. G., & Matthews, J. N. (1996), “Statistics Notes: Interaction 1: heterogeneity of effects”, Bmj, 313(7055), 486.

Antin, J., & Shaw, A. (2012). “Social desirability bias and self-reports of motivation: A study of Amazon Mechanical Turk in the US and India. “ In CHI ’12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2925– 2934). New York, NY: ACM.


Aparicio, A. F., Vela, F. L. G., Sánchez, J. L. G., & Montes, J. L. I. (2012), “Analysis and application of gamification”, In Proceedings of the 13th International Conference on Interacción Persona-Ordenador (p. 17), ACM.

Autor, D. (2001), “Wiring the labor market”, Journal of Economic Perspectives, vol. 15.

Autor, D.H. (2008). “The economics of labor market intermediation: An analytic framework”,Working Paper 14348, National Bureau of Economic Research.

Autor, D. H., & Handel, M. J. (2013). “Putting Tasks to the Test: Human Capital, Job Tasks, and Wages”, Journal of Labor Economics, vol. 31, no. 2, pp. 59-96.

Bassett Jr, Gilbert W., Tam M., and Knight K. (2002), “Quantile Models and Estimators for Data Analysis.“, Metrika, vol.55 (1), pp. 17–26.

Bates, J. A., & Lanza, B. A. (2013). “Conducting psychology student research via the Mechanical Turk crowdsourcing service”, North American Journal of Psychology, 15(2), 385.

Beck, H. (1999). “Jobs on the wire: In search of the perfect labor market“. Netnomics, 1(1), 71-88.

Benwell, G. L., Deans, K. R., & Ghandour, A. (2010). “The relationship between website metrics and the financial performance of online businesses“. ICIS.

Berners T. & Hendler J. & Lassila O. (2001), “The Semantic Web”, The Scientific American, vol. 5(1).

Bonabeau, E. (2009). “Decisions 2.0: The power of collective intelligence”, MIT Sloan management review, 50(2), 45.

Brabham, D. (2008). “Crowdsourcing as a Model for Problem Solving”, Convergence: The International Journal of Research into New Media Technologies, vol. 14, no. 1, pp. 75-90.

Brabham, D. C. (2013). “Crowdsourcing”, John Wiley & Sons, Inc.

Borghans, L., Duckworth, A. L., Heckman, J. J., & Ter Weel, B. (2008). “The economics and psychology of personality traits”, Journal of Human Resources, vol. 43, pp. 972-1059.

Buchholz, S., & Latorre, J. (2011), “Crowdsourcing preference tests, and how to detect cheating”, In Twelfth Annual Conference of the International Speech Communication Association.

Buchinsky, M. (1998), “Recent Advances in Quantile Regression Models: A Practical Guideline for Empirical Research.“, Journal of Human Resources, vol. 33 (1), pp. 88–126.


Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). “Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data?”. Perspectives on psychological science, 6(1), 3-5.

Buhrmester, M. D., Talaifar, S., & Gosling, S. D. (2018). “An Evaluation of Amazon’s Mechanical Turk, Its Rapid Rise, and Its Effective Use”. Perspectives on Psychological Science, 13(2), 149-154.

Burtch, G., Ghose, A., & Wattal, S. (2013). “An empirical examination of the antecedents and consequences of contribution patterns in crowd-funded markets.” Information Systems Research, 24(3), 499-519.

Callison-Burch, C., & Dredze, M. (2010). “Creating speech and language data with Amazon's Mechanical Turk”, In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk (pp. 1-12), Association for Computational Linguistics.

Carr, N. (2010), “A typology of crowds”.

Chatzimilioudis, G., Konstantinidis, A., Laoudias, C., & Zeinalipour-Yazti, D. (2012). “Crowdsourcing with smartphones.” IEEE Internet Computing, 16(5), 36-44.

Chandler, D., & Horton, J. J. (2011). “Labor allocation in paid crowdsourcing: Experimental evidence on positioning, nudges and prices”, Human Computation, 11, 11.

Chandler, D., & Kapelner, A. (2013), “Breaking monotony with meaning: Motivation in crowdsourcing markets”, Journal of Economic Behavior & Organization, 90, 123- 133.

Chandler, J., Paolacci, G., & Mueller, P. (2014). “Risks and rewards of crowdsourcing marketplaces”.

Chen, D., & Horton, J. (2016). “Are Online Labor Markets Spot Markets for Tasks? A Field Experiment on the Behavioral Response to Wage Cuts”, Information Systems Research (forthcoming).

Chen, Y., & Konstan, J. (2015). “Online field experiments: A selective survey of methods”, Journal of the Economic Science Association, 1(1), 29-42.

Clark, L. A., & Watson, D. (1991). “General affective dispositions in physical and psychological health”, In C. R. Snyder & D. R. Forsyth (Eds.), Handbook of social and clinical psychology: The health perspective. New York: Pergamon.

Cormode, G., & Krishnamurthy, B. (2008). “Key differences between Web 1.0 and Web 2.0”, First Monday, 13(6).


Costa, J., Silva, C., Antunes, M., & Ribeiro, B. (2011), “On using crowdsourcing and active learning to improve classification performance”, In Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference on (pp. 469-474), IEEE.

Cronin, M. J. (1997). “Global advantage on the Internet: From corporate connectivity to international competitiveness.” John Wiley & Sons, Inc.

Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). “Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research“. PloS one, 8(3).

Cubel, M., Nuevo-Chiquero, A., Sanchez-Pages, S., & Vidal-Fernandez, M. (2016). “Do Personality Traits Affect Productivity? Evidence from the Laboratory”, Economic Journal, 126, 654-681.

Cunha, F., Heckman, J. J., & Schennach, S. M. (2010). “Estimating the technology of cognitive and noncognitive skill formation”, Econometrica, 78(3), 883-931.

Dalton, D. R., Todor, W. D., Spendolini, M. J., Fielding, G. J., & Porter, L. W. (1980). “Organization structure and performance: A critical review”, Academy of Management Review, 5(1), 49-64.

Dergousoff, K., & Mandryk, R. L. (2015). “Mobile gamification for crowdsourcing data collection: Leveraging the freemium model”, In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems(pp. 1065-1074), ACM.

Difallah, D. E., Demartini, G., & Cudré-Mauroux, P. (2012). “Mechanical Cheat: Spamming Schemes and Adversarial Techniques on Crowdsourcing Platforms”, In CrowdSearch (pp. 26-30).

Doan, A., Ramakrishnan, R., and Halevy, A. Y. (2011), “Crowdsourcing systems on the World-Wide Web”, Communications of the ACM, 54, 4, 86.

Donmez, P., Carbonell, J. G., & Schneider, J. (2009). “Efficiently learning the accuracy of labeling sources for selective sampling”, In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM.

Dontcheva, M., Morris, R. R., Brandt, J. R., & Gerber, E. M. (2014), “Combining crowdsourcing and learning to improve engagement and performance”, In Proceedings of the 32nd annual ACM conference on Human factors in computing systems (pp. 3379-3388), ACM.

Downs, J. S., Holbrook, M. B., Sheng, S., & Cranor, L. F. (2010), “Are Your Participants Gaming the System? Screening Mechanical Turk Workers“, CHI'10 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, pp. 2399-2402.


Eagle, N. (2009). “txteagle: Mobile crowdsourcing”, In International Conference on Internationalization, Design and Global Development (pp. 447-456), Springer Berlin Heidelberg.

Eickhoff, C., & de Vries, A. P. (2013). “Increasing cheat robustness of crowdsourcing tasks”, Information Retrieval, 1-17.

Eickhoff, C., Harris, C. G., de Vries, A. P., & Srinivasan, P. (2012). “Quality through flow and immersion: gamifying crowdsourced relevance assessments”, In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval (pp. 871-880), ACM.

Erickson, L., Petrick, I., & Trauth, E. (2012), “Hanging with the right crowd: Matching crowdsourcing need to crowd characteristics”.

Eriksson, K., & Simpson, B. (2010). “Emotional reactions to losing explain gender differences in entering a risky lottery“. Judgment and Decision Making, 5(3), 159.

Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). “Towards an integrated crowdsourcing definition”, Journal of Information science, 38(2), 189-200.

Farrell, A. M., Grenier, J. H., & Leiby, J. (2017). “Scoundrels or Stars? Theory and Evidence on the Quality of Workers in Online Labor Markets”, The Accounting Review, 92(1), 93-114.

Felstiner, A. (2011). “Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry”, Berkeley Journal of Employment and Labor Law, 32(1), 143-203.

Feller, J., Finnegan, P., Hayes, J., & O'Reilly, P. (2010). “Leveraging 'The Crowd': An Exploration of how Solver Brokerages enhance Knowledge Mobility”, In ECIS 2010 Proceedings.

Feyisetan, O., Simperl, E., Van Kleek, M., & Shadbolt, N. (2015), “Improving paid microtasks through gamification and adaptive furtherance incentives”, In Proceedings of the 24th International Conference on World Wide Web (pp. 333- 343),International World Wide Web Conferences Steering Committee.

Finnerty, A., Kucherbaev, P., Tranquillini, S., & Convertino, G. (2013), “Keep it simple: Reward and task design in crowdsourcing”, In Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI (p. 14), ACM.

Fischbacher U. (2007), “z-Tree: Zurich toolbox for ready-made economic experiments”, Experimental Economics, June 2007, Volume 10, Issue 2, pp 171–178.

Folta, T. B., Cooper, A. C., & Baik, Y. S. (2006). “Geographic cluster size and firm performance.” Journal of Business Venturing, 21(2), 217-242.

Frei, B. (2009). “Paid crowdsourcing: Current state & progress toward mainstream business use”, Produced by Smartsheet.com.


Fuchs, C., Hofkirchner, W., Schafranek, M., Raffl, C., Sandoval, M., & Bichler, R. (2010). “Theoretical Foundations of the Web: Cognition, Communication, and Co-Operation. Towards an Understanding of Web 1.0, 2.0, 3.0”, Future Internet.

Fuchs, V. R. (1964). “Quality of Labor”, In Productivity Trends in the Goods and Service Sectors, 1929-61: A Preliminary Survey (pp. 23-33), NBER.

Gadiraju, U., Fetahu, B., & Kawase, R. (2015), “Training workers for improving performance in crowdsourcing microtasks”, In Design for Teaching and Learning in a Networked World (pp. 100-114), Springer, Cham.

Geerts, S. (2009). “Discovering crowdsourcing: theory, classification and directions for use.” Unpublished Master of Science in Innovation Management thesis, Eindhoven University of Technology, at http://alexandria. tue. nl/extra2/afstversl/tm/Geerts, 202009.

Geiger, D., Seedorf, S., Schulze, T., Nickerson, R. C., & Schader, M. (2011). “Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes”, In AMCIS.

Goldin, C., & Katz, L. F. (2008). “The Race between Education and Technology”, Cambridge, MA: Belknap Press of Harvard University Press.

Goncalves, J., Hosio, S., Vukovic, M., & Konomi, S. I. (2017). “Mobile and situated crowdsourcing”, International Journal of Human-Computer Studies, 102, 1-3.

Gupta, A., Thies, W., Cutrell, E., & Balakrishnan, R. (2012).”mClerk: enabling mobile crowdsourcing in developing regions.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1843-1852). ACM.

Hamari, J., Koivisto, J., & Sarsa, H. (2014), “Does gamification work?--a literature review of empirical studies on gamification”, In System Sciences (HICSS), 2014 47th Hawaii International Conference on (pp. 3025-3034), IEEE.

Hassan, U., & Curry, E. (2013). “A capability requirements approach for predicting worker performance in crowdsourcing”, In Collaborative Computing: Networking, Applications and Worksharing (Collaboratecom), 2013 9th International Conference Conference on (pp. 429-437), IEEE.

Heckman, J. J., & Rubinstein, Y. (2001). “The importance of noncognitive skills: Lessons from the GED testing program”, The American Economic Review, 91(2), 145-149.

Heineck, G., & Anger, S. (2010). “The returns to cognitive abilities and personality traits in Germany”, Labour Economics, 17(3), 535-546.

Hirth, M., Hoßfeld, T., & Tran-Gia, P. (2011). “Cost-optimal validation mechanisms and cheat-detection for crowdsourcing platforms”, In Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2011 Fifth International Conference on (pp. 316-321), IEEE.

Hirth, M., Hoßfeld, T., & Tran-Gia, P. (2013). “Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms”, Mathematical and Computer Modelling, 57(11), 2918-2932.

Hoch, Irving, “Estimation of production function parameters combining time series and cross-section data,” Econometrica, 1962, 30 (1), 34–53.

Hoegg, R., Martignoni, R., Meckel, M., & Stanoevska-Slabeva, K. (2006). “Overview of business models for Web 2.0 communities.”

Hogan, J., & Holland, B. (2003). “Using theory to evaluate personality and job-performance relations: A socioanalytic perspective”.

Horton, J. (2010). “Online labor markets”, Internet and network economics, 515-522.

Horton, J. J., & Chilton, L. B. (2010). “The labor economics of paid crowdsourcing”, In Proceedings of the 11th ACM conference on Electronic commerce (pp. 209-218), ACM.

Horton, J. J., Rand, D. G., & Zeckhauser, R. J. (2011). “The online laboratory: Conducting experiments in a real labor market”, Experimental Economics, 14(3), 399-425.

Horton, J. J., & Zeckhauser, R. J. (2016). “The Causes of Peer Effects in Production: Evidence from a Series of Field Experiments”, NBER Working Paper No. 22386.

Hossfeld, T., Keimel, C., & Timmerer, C. (2014). “Crowdsourcing quality-of-experience assessments”, Computer, 47(9), 98-102.

Howe, J. (2006). “The rise of crowdsourcing”, Wired magazine, 14(6), 1-4.

Howe, J. (2008). “Crowdsourcing: How the power of the crowd is driving the future of business”, Random House.

Hsueh P, Melville P, Sindhwani V (2009), “Data quality from crowdsourcing: a study of annotation selection criteria”, In: Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, Association for Computational Linguistics, pp 27–35.

Ipeirotis, P. G. (2010). “Demographics of mechanical turk”. New York University Working Paper.

Ipeirotis, P. G., & Horton, J. J. (2011). “The need for standardization in crowdsourcing”, In Proceedings of the workshop on crowdsourcing and human computation at CHI.


John, O. P., & Srivastava, S. (1999). “The Big-Five trait taxonomy: History, measurement, and theoretical perspectives”, In L. A. Pervin & O. P. John (Eds.), Handbook of Personality: Theory and Research, vol. 2, pp. 102-138, New York: Guilford Press.

Kaufman, G., Flanagan, M., & Punjasthitkul, S. (2016). “Investigating the impact of 'emphasis frames' and social loafing on player motivation and performance in a crowdsourcing game”, In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 4122-4128), ACM.

Kaufmann, N., Schulze, T., & Veit, D. (2011), “More than fun and money. Worker Motivation in Crowdsourcing-A Study on Mechanical Turk”, In AMCIS(Vol. 11, No. 2011, pp. 1-11).

Kautz, T., Heckman, J. J., Diris, R., Weel, B., & Borghans, L. (2014). “Fostering and Measuring Skills: Improving Cognitive and Non-cognitive Skills to Promote Lifetime Success”, OECD Education Working Papers, No. 110, Paris: OECD Publishing.

Kazai G. (2011), “In search of quality in crowdsourcing for search engine evaluation”, In: Proceedings of the 33rd European conference on advances in information retrieval. Lecture Notes in Computer Science 6611. Berlin/Heidelberg: Springer-Verlag, pp. 165–176.

Kazai, G., Kamps, J., & Milic-Frayling, N. (2011). “Worker types and personality traits in crowdsourcing relevance labels”, In Proceedings of the 20th ACM international conference on Information and knowledge management (pp. 1941-1944), ACM.

Kazai, G., Kamps, J., & Milic-Frayling, N. (2012), “The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy”, In Proceedings of the 21st ACM international conference on Information and knowledge management (pp. 2583-2586), ACM.

Kim, S., & Lee, Y. (2006). “Global online marketplace: a cross-cultural comparison of website quality”, International Journal of Consumer Studies, 30(6), 533-543.

Kim, W., Jeong, O. R., & Lee, S. W. (2010). “On social Web sites.” Information systems, 35(2), 215-236.

King, J. (2014). “E-books for leisure and learning: The Brisbane Boys' College experience”, Access (Online), 28(3), 42.

Kittur, A., Chi, E. H., & Suh, B. (2008). “Crowdsourcing user studies with Mechanical Turk”, In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 453-456), ACM.


Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., & Horton, J. (2013). “The future of crowd work”, In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (pp. 1301-1318), ACM.

Kokkodis, M., & Ipeirotis, P. G. (2016). “Career Development Paths in Online Labor Markets”, mimeo.

Kozinets, R., Hemetsberger, A., & Jensen Schau, H. (2008). “The Wisdom of Consumer Crowds: Collective Innovation in the Age of Networked Marketing”, Journal of Macromarketing, 28(4), 339-354.

Krippendorff, K. (2004). “Content analysis: An introduction to its methodology.” Sage.

Lazear, E. P. (2000). “Performance pay and productivity”, American Economic Review, 90(5), 1346-1361.

Lease, M. (2011). “On Quality Control and Machine Learning in Crowdsourcing”, Human Computation, 11(11).

Leimeister, J. M., Huber, M., Bretschneider, U., and Krcmar, H. (2009), “Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition”, Journal of Management Information Systems, 26, 1, 197-224.

Lévy, P. (1997). “Collective intelligence”, New York: Plenum/Harper Collins.

Li, H., & Reynolds, J. F. (1995), “On definition and quantification of heterogeneity”, Oikos, 280-284.

Li, H., Zhao, B., & Fuxman, A. (2014), “The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing”, In Proceedings of the 23rd international conference on World wide web (pp. 165-176), ACM.

Lieberoth, A. (2015). “Shallow Gamification: Testing Psychological Effects of Framing an Activity as a Game”, Games and Culture, 10(3), 229-248.

Martineau, E. (2012). “A typology of crowdsourcing participation styles”, (Doctoral dissertation, Concordia University).

Litman, L., Robinson, J., & Rosenzweig, C. (2015). “The relationship between motivation, monetary compensation, and data quality among US-and India-based workers on Mechanical Turk.” Behavior research methods, 47(2), 519-528.

Lo, B. W., & Sedhain, R. S. (2006). “How reliable are website rankings? Implications for e-business advertising and internet search.” Issues in Information Systems, 7(2), 233-238.

Malone, T. W., Laubacher, R., & Dellarocas, C. N. (2009). “Harnessing Crowds: Mapping the Genome of Collective Intelligence”, MIT Sloan Research Paper No. 4732-09.


Malone, T. W., & Laubacher, R. J. (1998). “The dawn of the e-lance economy”, Harvard Business Review, 76(5), 144-152.

Mason, W., & Watts, D. J. (2009). “Financial incentives and the performance of crowds”, In Proceedings of the ACM SIGKDD Workshop on Human Computation (pp. 77-85), New York: ACM Press.

Mason, W., & Suri, S. (2012). “Conducting behavioral research on Amazon's Mechanical Turk”, Behavior Research Methods, 44(1), 1-23.

McCrae, R. R., & Costa Jr, P. T. (1999). “A five-factor theory of personality”, Handbook of Personality: Theory and Research, 2, 139-153.

McGraw, I., Glass, J., & Seneff, S. (2011). “Growing a spoken language interface on amazon mechanical turk”, In Twelfth Annual Conference of the International Speech Communication Association.

Mekler, E. D., Brühlmann, F., Opwis, K., & Tuch, A. N. (2013), “Disassembling gamification: the effects of points and meaning on user motivation and performance”, In CHI'13 extended abstracts on human factors in computing systems (pp. 1137- 1142), ACM.

Miao, C., Yu, H., Shen, Z., & Leung, C. (2016). “Balancing quality and budget considerations in mobile crowdsourcing.“ Decision Support Systems, 90, 56-64.

Mladenow, A.; Bauer, C. & Strauss, C. (2014). “Social Crowd Integration in New Product Development: Crowdsourcing Communities Nourish the Open Innovation Paradigm”, Global Journal of Flexible Systems Management, vol. 15, no. 1, pp. 77- 86.

Mollick, E. (2014). “The dynamics of crowdfunding: An exploratory study.” Journal of business venturing, 29(1), 1-16.

Moriarty, G. L. (2010). “Psychology 2.0: Harnessing social networking, user‐ generated content, and crowdsourcing”, Journal of Psychological Issues in Organizational Culture, 1(2), 29-39.

Morris, R. R., Dontcheva, M., & Gerber, E. M. (2012). “Priming for better performance in microtask crowdsourcing environments”, IEEE Internet Computing, 16(5), 13-19.

Morris, R. R., Dontcheva, M., Finkelstein, A., & Gerber, E. (2013). “Affect and creative performance on crowdsourcing platforms”, In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on (pp. 67-72), IEEE.

Morschheuser, B., Hamari, J., Koivisto, J., & Maedche, A. (2017), “Gamified crowdsourcing: Conceptualization, literature review, and future agenda”, International Journal of Human-Computer Studies, 106, 26-43.

Mount, M. K., Barrick, M. R., & Stewart, G. L. (1998), “Five-factor model of personality and performance in jobs involving interpersonal interactions”, Human performance, 11(2-3), 145-165.

Mourelatos, E., & Tzagarakis, M. (2016), “Investigating Factors Influencing the Quality of Crowdsourced Work under Different Incentives: Some Empirical Results”, International Journal of Innovation in the Digital Economy (IJIDE), 7(2), 15-31.

Mourelatos, E., & Tzagarakis, M. (2016), “Worker’s Cognitive Abilities and Personality Traits as Predictors of Effective Task Performance in Crowdsourcing Tasks”, In PQS 2016 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (pp. 112-116).

Mourelatos, E., Tzagarakis, M., & Dimara, E. (2016). “A review of online crowdsourcing platforms.” South-Eastern Europe Journal of Economics, 14(1), 59- 73.

Muller, J., & Schwieren, C. (2012). “Can personality explain what is underlying women's unwillingness to compete?”, Journal of Economic Psychology, 33(3), 448-460.

Mundlak, Y. (1961). “Empirical production function free of management bias”, Journal of Farm Economics, 43(1), 44-56.

Murphy, K. R. (2005). “Why don't measures of broad dimensions of personality perform better as predictors of job performance?”, Human Performance, 18(4), 343-357.

Naik, U., & Shivalingaiah, D. (2008). “Comparative Study of Web 1.0, Web 2.0 and Web 3.0”.

Nair, A. (2006). “Meta-analysis of the relationship between quality management practices and firm performance—implications for quality management theory development.” Journal of operations management, 24(6), 948-975.

Narula, P., Gutheim, P., Rolnitzky, D., Kulkarni, A., & Hartmann, B. (2011). “MobileWorks: A Mobile Crowdsourcing Platform for Workers at the Bottom of the Pyramid.” Human Computation, 11, 11.

Ng, T. W., & Feldman, D. C. (2010). “Human capital and objective indicators of career success: The mediating effects of cognitive ability and conscientiousness.“ Journal of Occupational and Organizational Psychology, 83(1), 207-235.


O'Reilly, T. (2005). “What is Web 2.0”.

Ortega, J. L., & Aguillo, I. (2010). “Differences between web sessions according to the origin of their visits.” Journal of Informetrics, 4(3), 331-337.

Pallais, A. (2014). “Inefficient hiring in entry-level labor markets”, American Economic Review, 104(11), 3565-3599.

Pallais, A., & Sands, E. G. (2016). “Why the Preferential Treatment? Evidence from Field Experiments on Referrals”, Journal of Political Economy, 124(6), 1793-1828.

Pant, G., Srinivasan, P., & Menczer, F. (2004). “Crawling the web”, Web Dynamics Journal, vol. 2, pp. 153-177.

Peng, X., Babar, M. A., & Ebert, C. (2014). “Collaborative Software Development Platforms for Crowdsourcing”, IEEE Software, 31(2), 30-36.

Plaza, B. (2011). “Google Analytics for measuring website performance.” Tourism Management, 32(3), 477-481.

Poetz, M. K., & Schreier, M. (2012). “The value of crowdsourcing: can users really compete with professionals in generating new product ideas?”. Journal of Product Innovation Management, 29(2), 245-256.

Porter, M. E. (1979). “The structure within industries and companies' performance”, The Review of Economics and Statistics, 214-227.

Porter, M. E. (1986). “Changing patterns of international competition”, California Management Review, 28(2), 9-40.

Prpic, J., Shukla, P., Kietzmann, J., & McCarthy, I. (2015). “How to Work a Crowd: Developing Crowd Capital through Crowdsourcing”, Business Horizons, 58(1), 77-85.

Rappa, M. (2000). “Business models on the web”, North Carolina State University (ecommerce.ncsu.edu), 13.

Redi, J., & Povoa, I. (2014). “Crowdsourcing for Rating Image Aesthetic Appeal: Better a Paid or a Volunteer Crowd?”, In 3rd International ACM Workshop on Crowdsourcing for Multimedia (CrowdMM), Orlando, FL, USA.

Ribiere, V. M., & Tuggle, F. D. (2010). “Fostering innovation with KM 2.0”, Vine, 40(1), 90-101.

Richard, P. J., Devinney, T. M., Yip, G. S., & Johnson, G. (2009). “Measuring organizational performance: Towards methodological best practice.” Journal of management, 35(3), 718-804.


Robson, K., Plangger, K., Kietzmann, J. H., McCarthy, I., & Pitt, L. (2015). “Is it all a game? Understanding the principles of gamification”, Business Horizons, 58(4), 411-420.

Rogstadius, J., Kostakos, V., Kittur, A., Smus, B., Laredo, J., & Vukovic, M. (2011), “An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets”, ICWSM, 11, 17-21.

Ross, J., et al. (2010). “Who are the crowdworkers? Shifting demographics in Mechanical Turk”, In CHI'10 Extended Abstracts on Human Factors in Computing Systems, ACM.

Rothschild, M., & Stiglitz, J. (1976). “Equilibrium in competitive insurance markets: An essay on the economics of imperfect information”, The Quarterly Journal of Economics, 90(4), 629-649.

Rothmann, S., & Coetzer, E. P. (2003). “The big five personality dimensions and job performance”, SA Journal of Industrial Psychology, 29(1), 68-74.

Salgado, J. F. (1997). “The Five Factor Model of personality and job performance in the European Community”, Journal of Applied Psychology, 82(1), 30.

Saxton, G. D., Oh, O., & Kishore, R. (2013). “Rules of crowdsourcing: Models, issues, and systems of control”, Information Systems Management, 30(1), 2-20.

Schenk, E., & Guittard, C. (2011). “Crowdsourcing: what can be outsourced to the crowd, and why?”, Technical Report, available at: http://halshs.archives-ouvertes.fr/halshs-00439256/.

Schenk, E., & Guittard, C. (2011). “Towards a characterization of crowdsourcing practices”, Journal of Innovation Economics & Management, 7(1), 93-107.

Schmidt, F. L. (2002). “The role of general cognitive ability and job performance: Why there cannot be a debate”, Human Performance, 15(1-2), 187-210.

Schmitt, N. (2007). “The interaction of neuroticism and gender and its impact on self-efficacy and performance”, Human Performance, 21(1), 49-61.

Schulze, T., Seedorf, S., Geiger, D., Kaufmann, N., & Schader, M. (2011). “Exploring task properties in crowdsourcing - an empirical study on Mechanical Turk”, In ECIS (Vol. 11, pp. 1-1).

Seltzer, E. & Mahmoudi, D. (2012). “Citizen Participation, Open Innovation, and Crowdsourcing: Challenges and Opportunities for Planning”, Journal of Planning Literature, vol. 28, no.1, pp.1-16.


Shuen, A. (2008). “Web 2.0: A Strategy Guide: Business thinking and strategies behind successful Web 2.0 implementations”, O'Reilly Media, Inc.

Sigala, M. (2015). “Gamification for crowdsourcing marketing practices: Applications and benefits in tourism”, In Advances in Crowdsourcing (pp. 129-145), Springer International Publishing.

Singla, A., & Krause, A. (2013), “Truthful incentives in crowdsourcing tasks using regret minimization mechanisms”, In Proceedings of the 22nd international conference on World Wide Web (pp. 1167-1178), ACM.

Sloane, P. (2011). “A Guide to Open Innovation and Crowdsourcing: Advice from Leading Experts”, UK: Kogan Page Publishers.

Stevanovic, D., Vlajic, N., & An, A. (2011). “Unsupervised clustering of Web sessions to detect malicious and non-malicious website users”, Proceedings on Computer Science, 5, 123-131.

Straub, D., Rai, A., & Klein, R. (2004). “Measuring firm performance at the network level: A nomology of the business impact of digital supply networks.” Journal of Management Information Systems, 21(1), 83-114.

Surowiecki, J., & Silverman, M. P. (2007). “The wisdom of crowds”, American Journal of Physics, 75(2), 190-192.

Teo, T. S. H., Lim, V. K. G., & Lai, R. Y. C. (1999). “Intrinsic and extrinsic motivation in Internet usage”, Omega, 27(1), 25-37.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). “Personality measures as predictors of job performance: a meta-analytic review”, Personnel Psychology, 44(4), 703-742.

Thackeray, R., Neiger, B. L., Hanson, C. L., & McKenzie, J. F. (2008). “Enhancing promotional strategies within social marketing programs: use of Web 2.0 social media”, Health Promotion Practice, 9(4), 338-343.

Tierney, H. L., & Pan, B. (2012). “A poisson regression examination of the relationship between website traffic and search engine queries.” NETNOMICS: Economic Research and Electronic Networking, 13(3), 155-189.

Turel, O., & Serenko, A. (2006). “Satisfaction with mobile services in Canada: An empirical investigation.” Telecommunications policy, 30(5), 314-331.

Van Pelt, C. R., Cox, R., Sorokin, A., & Juster, M. (2014), “Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated crowdsourcing workers”, U.S. Patent No. 8,626,545. 7 Jan. 2014.


Vashistha, A., Vaish, R., Cutrell, E., & Thies, W. (2015). “The whodunit challenge: Mobilizing the crowd in india.” In Human-Computer Interaction (pp. 505-521). Springer International Publishing.

Vaughan, L and Yang, R. (2013). “Web Traffic and Organization Performance Measures: Relationships and Data Sources Examined.” Journal of Informetrics 7(3):699–711.

Venetis, P., & Garcia-Molina, H. (2012). “Quality control for comparison microtasks”, In Proceedings of the first international workshop on crowdsourcing and data mining (pp. 15-21), ACM.

Wang, J., Ipeirotis, P. G., & Provost, F. (2011). “Managing crowdsourcing workers”, In The 2011 Winter Conference on Business Intelligence.

Weinstein, R. S. (2013). “Crowdfunding in the US and Abroad: What to Expect When You're Expecting.” Cornell Int'l LJ, 46, 427.

Whitla P. (2009), “Crowdsourcing and its application in marketing”, Contemporary Management Research , 5(1): 15–28.

Williamson, O.E. (1979). “Transaction-cost economics: The governance of contractual relations”, Journal of Law and Economics 22(2), 233–261.

Witt, L. A., Burke, L. A., Barrick, M. R., & Mount, M. K. (2002). “The interactive effects of conscientiousness and agreeableness on job performance”, Journal of Applied Psychology, 87(1), 164.

Yuen, M. C., King, I., & Leung, K. S. (2011). “A Survey of Crowdsourcing Systems”, In Proceedings of the International Conference on Social Computing (SocialCom), 9-11 Oct.

Zahran, D. I., Al-Nuaim, H. A., Rutter, M. J., & Benyon, D. (2014). “A comparative approach to web evaluation and website evaluation methods.” International Journal of Public Information Systems, 10(1).

Zhu, Y., Zhang, Q., Zhu, H., Yu, J., Cao, J., & Ni, L. M. (2014, June). “Towards truthful mechanisms for mobile crowdsourcing with dynamic smartphones.” In Distributed Computing Systems (ICDCS), 2014 IEEE 34th International Conference on (pp. 11-20). IEEE.

Zimmer, M. (2008).“Preface: critical perspectives on Web 2.0”, First Monday, 13(3).


Appendix

A/A Online Platforms A/A Online Platforms
1 https://www.mturk.com/mturk/ 88 http://www.causetofund.com/
2 http://www.clickworker.com/ 89 http://projectgeld.nl/
3 https://microworkers.com 90 http://crowdlever.com/
4 http://www.mykindacrowd.com/ 91 http://www.crowdfundme.it/
5 http://crowdflower.com/ 92 https://www.crowdbnk.com/
6 https://www.crowdmed.com/ 93 http://www.crowdbaron.com/
7 http://www.c-crowd.com/en/home-englisch/ 94 http://www.cloudcrowd.com/
8 https://www.odesk.com/ 95 https://www.crowdaboutnow.nl
9 http://www.benevolent.net/index.html 96 http://www.crowdsource.com
10 http://www.shorttask.com/ 97 http://www.appbackr.com/
11 http://www.microtask.com/ 98 http://www.microplace.com/
12 http://samasource.org/ 99 http://www.craigslist.org/
13 https://www.mobileworks.com/ 100 https://www.surveymonkey.com/
14 https://www.crowdguru.de/ 101 https://www.upwork.com/
15 http://www.optask.com/ 102 https://www.researchgate.net/
16 https://www.smartsheet.com/ 103 https://www.behance.net/
17 http://cloudfactory.com/ 104 https://basecamp.com/
18 http://www.liveops.com/ 105 https://www.chegg.com/study
19 http://www.textbroker.com/ 106 https://studio.envato.com/
20 http://www.content.de/ 107 http://www.peopleperhour.com/
21 http://taskarmy.com/ 108 http://www.guru.com/
22 http://www.asksunday.com/ 109 http://www.mylot.com/
23 http://www.10eqs.com/ 110 http://www.proz.com/
24 https://www.starmind.com/ 111 https://www.toptal.com/
25 http://g2link.com/ 112 https://www.kaggle.com/
26 http://www.trada.com/ 113 https://www.seoclerks.com/
27 http://www.taskus.com/ 114 https://www.flexjobs.com/
28 http://rapidworkers.com/ 115 http://www.coroflot.com/
29 https://www.hiretheworld.com/ 116 https://www.topcoder.com/
30 http://www.agentanything.com/ 117 http://home.crowdtap.com/
31 https://mycrowd.com/ 118 https://www.geniuzz.com/
32 https://www.taskrunner.co.uk/ 119 https://www.designcontest.com/
33 https://www.taskrabbit.com/ 120 https://weworkremotely.com/
34 http://99designs.com/ 121 https://www.mediabistro.com/
35 http://crowdlab.anythingabout.de/index.php 122 http://www.tutor.com/
36 http://www.iron.io/ 123 https://www.workmarket.com/
37 http://gengo.com/ 124 http://www.zilliondesigns.com/
38 https://www.tolingo.com/en 125 http://gigbucks.com/ (Blocked)
39 http://www.lingotek.com/ 126 http://aquent.com/
40 http://www.onehourtranslation.com/ 127 https://www.scripted.com/
41 http://gigwalk.com/ 128 http://www.freelance.com/en/
42 https://www.elance.com/ 129 https://www.fieldnation.com/
43 http://www.script-lance.com/ 130 https://www.textmaster.com/
44 http://www.freelancer.com (vworkers.com) 131 http://www.designhill.com/
45 https://www.atizo.com/ 132 https://authenticjobs.com/
46 http://www.projektwerk.com/de/ 133 http://www.freelancewriting.com/
47 http://projects.csail.mit.edu/soylent/ 134 http://www.48hourslogo.com/
48 http://www.journalismjobs.com/index.php 135 http://www.microtoilers.com/
49 http://minijobz.com/ 136 https://www.scribendi.com/
50 http://www.crowdspring.com/ 137 https://www.fourerr.com/
51 http://www.freelancewritinggigs.com/ 138 https://crowdin.net/
52 http://www.babyloan.org/en/ 139 http://www.freelance-info.fr/
53 http://www.crowdint.com/ 140 https://www.xplace.com/en/
54 http://www.jobboy.com/ 141 https://www.cloudpeeps.com/
55 http://www.avvo.com/ 142 https://www.gofundme.com/
56 https://www.deltasight.com/ 143 https://www.patreon.com/
57 https://www.crowdcontent.com/ 144 https://teespring.com/
58 http://minifreelance.com/ 145 https://www.lendingclub.com/
59 https://www.quirky.com/ 146 https://home.justgiving.com/
60 http://www.designcrowd.com/ 147 https://www.youcaring.com/
61 http://domystuff.com/ 148 https://www.crowdrise.com/
62 http://www.utest.com/ 149 https://www.kiva.org/
63 http://www.epinions.com/ 150 http://www.donorschoose.org/
64 http://expertbids.com/ 151 http://www.ulule.com/
65 http://whinot.com/ 152 http://www.pledgemusic.com/
66 https://www.crowdworx.com/ 153 https://www.kabbage.com/
67 https://www.fundingcircle.com/uk/ 154 http://www.fiverr.com/
68 https://www.polleverywhere.com 155 https://www.ketto.org/
69 http://www.kickstarter.com/ 156 http://www.firstgiving.com/
70 http://www.indiegogo.com/ 157 https://www.tilt.com/
71 https://www.crowdfunder.com/ 158 https://www.razoo.com/us/home/
72 https://www.crowdtilt.com/ 159 http://www.giveforward.com/
73 http://www.rockethub.com/ 160 http://www.crowdfunder.co.uk/
74 http://www.zequs.com/ 161 http://www.kickante.com.br/
75 http://invested.in/ 162 https://fundrazr.com/
76 https://www.oocto.com/ 163 https://fundly.com/
77 https://www.somolend.com/ 164 https://fundrise.com/
78 https://angel.co/ 165 http://www.fundable.com/
79 http://babyloan.org/fr/ 166 https://www.upstart.com/
80 https://www.crowdonomic.com/ 167 https://www.seedinvest.com/
81 http://www.trucrowd.com/ 168 https://www.startnext.com/
82 http://akickincrowd.com/ 169 http://www.headstart.co.il/
83 http://www.openideo.com/ 170 http://gogetfunding.com/
84 http://ioby.org/ 171 https://www.zeczec.com/
85 http://startsomegood.com/ 172 https://circleup.com/
86 http://www.crowdcube.com/ 173 https://pozible.com/
87 http://www.ideasplatform.in/ 174 https://www.seedandspark.com/

Table 22. Crowdsourcing platforms under investigation in Chapter 3.
