Studying the ``Wisdom of Crowds'' at Scale


Studying the "Wisdom of Crowds" at Scale

Camelia Simoiu,1 Chiraag Sumanth,1 Alok Mysore,2 Sharad Goel1
1 Stanford University, 2 University of California San Diego

Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In a variety of problem domains, it has been observed that the aggregate opinions of groups are often more accurate than those of the constituent individuals, a phenomenon that has been dubbed the "wisdom of the crowd". However, due to the varying contexts, sample sizes, methodologies, and scope of previous studies, it has been difficult to gauge the extent to which their conclusions generalize. To investigate this question, we carried out a large online experiment to systematically evaluate crowd performance on 1,000 questions across 50 topical domains. We further tested the effect of different types of social influence on crowd performance; for example, in one condition, participants could see the cumulative crowd answer before providing their own. In total, we collected more than 500,000 responses from nearly 2,000 participants. We have three main results. First, averaged across all questions, we find that the crowd indeed performs better than the average individual in the crowd—but we also find substantial heterogeneity in performance across questions. Second, we find that crowd performance is generally more consistent than that of individuals; as a result, the crowd does considerably better than individuals when performance is computed on a full set of questions within a domain. Finally, we find that social influence can, in some instances, lead to herding, decreasing crowd performance. Our findings illustrate some of the subtleties of the wisdom-of-crowds phenomenon, and provide insights for the design of social recommendation platforms.

Introduction

Are crowds mad or wise? In his 1841 book, "Memoirs of Extraordinary Popular Delusions and the Madness of Crowds," Charles Mackay documents a series of remarkable tales of human folly, ranging from the hysteria of the South Sea Bubble that ruined many British investors in the 1720s, to Holland's seventeenth-century "tulipomania", when individuals went into debt collecting tulip bulbs until a sudden depreciation in the bulbs' value rendered them worthless (Mackay 1841). Decades later, in yet another classic example, the statistician Francis Galton watched as eight hundred people competed to guess the weight of an ox at a county fair. He famously observed that the median of the guesses—1,207 pounds—was, remarkably, within 1% of the true weight (Galton 1907).

Over the past century, there have been dozens of studies that document this "wisdom of crowds" effect (Surowiecki 2005). Simple aggregation—as in the case of Galton's ox competition—has been successfully applied to aid prediction, inference, and decision making in a diverse range of contexts. For example, crowd judgments have been used to successfully answer general knowledge questions (Surowiecki 2005), identify phishing websites and web spam (Moore and Clayton 2008; Liu et al. 2012), forecast current political and economic events (Budescu and Chen 2014; Griffiths and Tenenbaum 2006; Hill and Ready-Campbell 2011), predict sports outcomes (Herzog and Hertwig 2011; Goel et al. 2010), and predict climate-related, social, and technological events (Hueffer et al. 2013; Kaplan, Skogstad, and Girshick 1950). However, given the diversity of experimental designs, subject pools, and analytic methods employed, it has been difficult to know whether these documented examples are a representative collection of a much larger space of tasks that exhibit a wisdom-of-crowds phenomenon, or, conversely, whether they are highly specific instances of an interesting though ultimately limited occurrence.

Moreover, it is unclear whether these findings generalize to the many real-world settings in which individuals make decisions under the influence of others' judgments. This question is especially relevant today, as peer influence is often explicitly built into online platforms. One might choose a restaurant, watch a movie, read a news story, or purchase a book because of the aggregated opinions of the "crowd." Recommender systems may, by default, display first the products estimated to be the most popular or most highly voted. In recent years, researchers have debated whether social influence undermines or enhances the wisdom of crowds. On the one hand, some have conjectured that if participants receive information about the answers of others, that information can help ground responses, leading to greater accuracy (Faria et al. 2010; King et al. 2012; Madirolas and de Polavieja 2015). On the other hand, there is also worry that such social influence could result in herding, which in turn could decrease collective performance (Lorenz et al. 2011; Muchnik, Aral, and Taylor 2013; Salganik, Dodds, and Watts 2006).

To systematically explore the wisdom-of-crowds phenomenon—including the effects of social influence—we carried out a large-scale online experiment. In one of the most comprehensive studies of the wisdom-of-crowds effect to date, we collected a total of more than 500,000 responses to 1,000 questions across 50 topical areas. For each question, we computed the "crowd" answer by taking either the median response of participants (in the case of open-ended, numerical questions) or the most popular choice (in the case of categorical questions).

Averaged across our full set of questions, we found that the crowd answer was approximately in the 65th percentile of individual responses, ranked by accuracy. Our results thus lend support to the idea that the wisdom-of-crowds effect indeed holds on a corpus chosen to reflect a wide variety of topical areas. Further, we found that crowd performance was typically more consistent than the performance of individuals. That is, whereas the crowd performed at least modestly better than average on all of the questions, even the best individuals occasionally performed poorly. As a result, when we looked at performance at the level of topical domains, rather than individual questions, the crowd performed considerably better than individual respondents, with average performance in approximately the 85th percentile.

Finally, we examined the effect of social influence, randomly assigning participants to one of three social conditions: (1) "consensus", in which participants saw the cumulative crowd response before providing their own answer; (2) "most recent", in which participants saw the three most recent answers; and (3) "most confident", in which participants saw three answers from the most confident individuals, based on self-reported assessments. For the latter two conditions—"most recent" and "most confident"—we found that crowd performance was qualitatively similar to the non-social, control condition. However, for the "consensus" condition, the crowd performed worse than when respondents did not receive any social signals. Notably, this consensus condition mirrors the design of many online rating sites, in which users can see the aggregate rating of others before providing their own rating. While such a design has value (e.g., it facilitates use by those who simply want to see the information rather than provide a review themselves), our results suggest that it can also hurt the quality of results.

Related Work

…statistics (Lorenz et al. 2011), rank ordering problems (e.g., ranking U.S. presidents in chronological order) (Lee, Steyvers, and Miller 2014; Miller and Steyvers 2011), recollecting information from memory (Steyvers et al. 2009), and spatial reasoning tasks (Surowiecki 2005). But not all studies have been able to replicate this success. For example, Burnap et al. consider crowd evaluation of engineering design attributes and find that clusters of consistently wrong evaluators exist along with the cluster of experts. The authors conclude that both averaging evaluations and a crowd consensus model may not be adequate for engineering design tasks (Burnap et al. 2015).

This lack of consensus is also evident among the set of studies that consider prediction domains. In the context of predicting outcomes for competitive sporting tournaments, collective forecasts were found to consistently perform above chance and to be as accurate as predictions based on official rankings (Herzog and Hertwig 2011). In another study, involving a competitive bidding task, Lee et al. considered eleven different methods to aggregate answers, and found that aggregation improves performance (Lee, Zhang, and Shi 2011). In contrast, in the betting context considered by Simmons et al., the authors found no evidence of a wisdom-of-crowds phenomenon. The authors attribute the failure to the fact that "most bettors have high intuitive confidence and are therefore quite reluctant to abandon it". Similarly, crowd predictions made by thousands of people competing in a fantasy football league were found to predict favorites in over 90% of the games, even though favorites and underdogs were equally likely to win against the spread (Simmons et al. 2010). These studies suggest that crowd wisdom may not prevail in contexts in which emotional, intuitive responses conflict with more rational, deliberative responses (Tversky and Kahneman 2000; Simmons et al. 2010).

Several studies focus on the question of how best to extract collective wisdom. Numerous studies have shown that simple aggregation techniques (e.g., using the mean or median for open-ended questions, or the majority vote for categorical questions) often perform just as well as more complex methods, including confidence-weighted aggregation, Bayesian methods, and the Thurstonian latent variable model (Miller and Steyvers 2011; Griffiths and Tenenbaum 2006; Prelec, Seung, and McCoy 2017; Budescu and Chen 2014; Hemmer, Steyvers, and Miller 2010). Simple aggregation, however, has often been found to perform reasonably well, if not on par with more complex models (Steyvers et al.
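The aggregation rule the study describes (the median for open-ended numerical questions, the plurality choice for categorical ones) can be sketched in a few lines. This is a minimal illustration; the function names are ours, not the paper's:

```python
from collections import Counter
from statistics import median

def crowd_answer_numeric(responses):
    """Crowd answer for an open-ended numerical question: the median response."""
    return median(responses)

def crowd_answer_categorical(responses):
    """Crowd answer for a categorical question: the most popular choice."""
    return Counter(responses).most_common(1)[0][0]

# Galton-style example with hypothetical guesses of an ox's weight.
guesses = [1100, 1180, 1207, 1250, 1320]
print(crowd_answer_numeric(guesses))                        # 1207
print(crowd_answer_categorical(["A", "B", "A", "C", "A"]))  # A
```

The median and the plurality vote are robust to a few wild individual answers, which is one reason simple aggregation holds up so well in the studies cited above.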
Recommended publications
  • A Hierarchical Bayesian Model for Improving Wisdom of the Crowd
    A hierarchical Bayesian model for improving wisdom of the crowd aggregation of quantities with large between-informant variability. Saiwing Yeung ([email protected]), Institute of Education, Beijing Institute of Technology, China.

    Abstract: The wisdom of the crowd technique has been shown to be very effective in producing judgments more accurate than those of individuals. However, its performance in situations in which the intended estimates would involve responses of greatly differing magnitudes is less well understood. We first carried out an experiment to elicit people's estimates in one such domain, populations of U.S. metropolitan areas. Results indicated that there were indeed vast between-subjects differences in magnitudes of responses. We then proposed a hierarchical Bayesian model that incorporates different respondents' biases in terms of the overall magnitudes of their answers and the amount of individual uncertainties. We implemented three variations of this model with different ways of instantiating the individual differences in overall magnitude. Estimates produced by the variation that accounts for the stochasticities in response magnitude outperformed those based on standard wisdom of the crowd aggregation methods and other variations.

    …ties can be described by distributions without a natural maximum and are severely right skewed. For example, Gibrat's law suggested that the distribution of populations of cities follows a log-normal distribution (Eeckhout, 2004). Other distributions with similar characteristics include power-law, Pareto, and exponential distributions. They naturally occur in many different contexts, including income and wealth, number of friends, waiting time, time till failure, etc. (Barabási, 2005). How to best aggregate these quantities in a WoC context is not very well understood. In the present research we demonstrate a hierarchical Bayesian approach to the problem. Hierarchical Bayesian models formally express the relationships between psychological constructs, stimuli, and ob…
  • A Statistical Model for Aggregating Judgments by Incorporating Peer Predictions
    A statistical model for aggregating judgments by incorporating peer predictions. John McCoy and Drazen Prelec, Sloan School of Management and Departments of Economics and Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139. arXiv:1703.04778v1 [stat.ML], March 16, 2017.

    Abstract: We propose a probabilistic model to aggregate the answers of respondents answering multiple-choice questions. The model does not assume that everyone has access to the same information, and so does not assume that the consensus answer is correct. Instead, it infers the most probable world state, even if only a minority vote for it. Each respondent is modeled as receiving a signal contingent on the actual world state, and as using this signal both to determine their own answer and to predict the answers given by others. By incorporating respondents' predictions of others' answers, the model infers latent parameters corresponding to the prior over world states and the probability of different signals being received in all possible world states, including counterfactual ones. Unlike other probabilistic models for aggregation, our model applies to both single and multiple questions, in which case it estimates each respondent's expertise. The model shows good performance, compared to a number of other probabilistic models, on data from seven studies covering different types of expertise.

    Introduction: It is a truism that the knowledge of groups of people, particularly experts, outperforms that of individuals [43], and there is increasing call to use the dispersed judgments of the crowd in policy making [42].
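The full model sketched in this abstract is Bayesian, but its core idea, that an answer can be correct even when only a minority votes for it, is captured by the same authors' simpler "surprisingly popular" rule (Prelec, Seung, and McCoy 2017): select the option whose actual vote share most exceeds the share respondents predicted it would receive. A rough sketch of that simplified rule, not of the full probabilistic model:

```python
def surprisingly_popular(votes, predicted_shares):
    """Pick the option whose actual vote share most exceeds the share
    respondents predicted it would receive. This illustrates the simpler
    'surprisingly popular' rule, not the full model described above."""
    total = sum(votes.values())
    actual = {k: v / total for k, v in votes.items()}
    return max(votes, key=lambda k: actual[k] - predicted_shares[k])

# Hypothetical question where the majority is wrong: most respondents
# vote "yes", but "no" gets far more votes than people predicted.
votes = {"yes": 60, "no": 40}
predicted = {"yes": 0.75, "no": 0.25}
print(surprisingly_popular(votes, predicted))  # no
```

The intuition is that respondents who know the true answer also know it is unpopular, so their meta-predictions reveal information that a simple majority vote discards.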
  • Social Learning, Strategic Incentives and Collective Wisdom: an Analysis of the Blue Chip Forecasting Group
    Social Learning, Strategic Incentives and Collective Wisdom: An Analysis of the Blue Chip Forecasting Group. J. Peter Ferderer, Department of Economics, Macalester College, St. Paul, MN 55105 ([email protected]); Adam Freedman, Chicago, IL 60601 ([email protected]). July 22, 2015.

    Abstract: Using GDP growth forecasts from the Blue Chip survey between 1977 and 2011, we measure absolute consensus errors and forecast dispersion for each of the 24 monthly forecast horizons and provide three main findings. First, the size of consensus errors and forecast dispersion are negatively correlated over longer-term forecast horizons (from 24 to 13 months). This finding is consistent with the hypothesis that the Lawrence Klein forecasting award, which is based on the performance of 12-month-ahead forecasts, increased the group's collective wisdom by raising the incentive to anti-herd. Second, absolute consensus errors and forecast dispersion display significant negative temporal variation for the longest horizons (24 to 20 months). Third, after the early 1990s (i) there was a dramatic decline in forecast dispersion associated with a significant increase in the size of longer-term consensus errors, and (ii) forecasts bracket realized GDP growth much less frequently. The final two results suggest that increased herding or reduced model diversity caused collective wisdom to diminish in the second part of the sample.

    JEL Classifications: E37 (Forecasting and Simulation: Models and Applications); D70 (Analysis of Collective Decision-Making, General); D83 (Search; Learning; Information and Knowledge; Communication; Belief). Keywords: social learning, herding, strategic incentives, reputation, consensus forecasts, collective wisdom, forecast errors, forecast dispersion, Great Moderation.

    "The key lesson I would draw from our experience is the danger of relying on a single tool, methodology or paradigm."
  • Harnessing the Wisdom of Crowds*
    Harnessing the Wisdom of Crowds.* Zhi Da and Xing Huang. This draft: September 2016.

    Abstract: We examine the negative information externality associated with herding on a crowd-based earnings forecast platform (Estimize.com). By tracking user viewing activities, we monitor the amount of information a user views before she makes an earnings forecast. We find that the more public information a user views, the less weight she will put on her private information. While this improves the accuracy of each individual forecast, it reduces the accuracy of the consensus forecast, since useful private information is prevented from entering the consensus. Predictable errors made by "influential users" early on persist in the consensus forecast and result in return predictability at earnings announcements. To address endogeneity concerns related to information acquisition choices, we collaborate with Estimize.com to run experiments where we restrict the information set for randomly selected stocks and users. The experiments confirm that "independent" forecasts lead to a more accurate consensus and convince Estimize.com to switch to a "blind" platform from November 2015. Overall, our findings suggest that the wisdom of crowds can be better harnessed by encouraging independent voices from the participants.

    * We thank Renee Adams, Kenneth Ahern, Qi Chen, Erik Eyster (discussant), Cary Frydman, Stefano DellaVigna, Umit Gurun, David Hirshleifer, Harrison Hong, Byoung-Hyoun Hwang, Russell James (discussant), Petri Jylha (discussant), Peter Kelly, Tse-Chun
  • The Wisdom of Crowds (00248391)
    "Nobody goes there anymore. It's too crowded." - Yogi Berra

    As the summer draws to a close, we Floridians are in transition. We will soon stop complaining about the heat and start complaining about the crowds. While it may be impossible to be sanguine about the oppressive summer heat, we should keep in mind that the fall and winter crowds actually provide us with some useful benefits. The crowds of tourists contribute a large share of revenue to the Florida economy, and they help to pay for a significant portion of our state infrastructure. It might be difficult to keep this in mind when your favorite restaurant is booked and the highway looks like a parking lot, but the crowds do take a financial burden off of the locals.

    …estimate of the fair price. In other words, it is very difficult for any single investor, even an expert, to be wiser than the wisdom of the crowd. This phenomenon may seem odd or difficult to believe at first, but consider this: For every buyer of a stock, there must be a seller. So, by definition, half of the market participants think the stock is a good buy and half think it should be sold. With billions of shares being bought and sold each day, the diverse opinions of thousands of investors are fairly reflected in the resulting stock price (this is what your high school chemistry teacher would cite as an example of "equilibrium"). This equilibrium price reflects all of the information available to the crowd, so, in order to beat the market, it is not enough to simply outguess…
  • Debiasing the Crowd: How to Select Social
    Debiasing the crowd: how to select social information to improve judgment accuracy? (Short title: How to select social information to improve judgment accuracy?) Bertrand Jayles,1,* Clément Sire,2 Ralf H.J.M. Kurvers1. 1 Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany; 2 Laboratoire de Physique Théorique, Centre National de la Recherche Scientifique (CNRS), Université de Toulouse - Paul Sabatier (UPS), Toulouse, France.

    Abstract: Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision making is the human tendency to underestimate large quantities. Former research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study if it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates.
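Work in this vein commonly models a participant's revised estimate as a weighted compromise between their first estimate and the social information received, with a weight s measuring sensitivity to social influence. A minimal sketch of that standard model (our notation, not code from the paper; estimation studies often apply it on a log scale, shown here on a linear scale for simplicity):

```python
def revised_estimate(own, social_estimates, s):
    """Second estimate as a compromise between one's own first estimate and
    the mean of the social estimates received. The weight s lies in [0, 1]:
    s = 0 ignores social information entirely, s = 1 adopts it fully."""
    social_mean = sum(social_estimates) / len(social_estimates)
    return (1 - s) * own + s * social_mean

# A participant who first guessed 100, shown two peer estimates,
# and who weights social information at s = 0.5:
print(revised_estimate(100.0, [200.0, 300.0], 0.5))  # 175.0
```

Framing revision this way lets experimenters estimate s per participant and per number of shared estimates, which is the kind of mechanism the abstract's aim (iii) refers to.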
  • Crowd-Sourced Data Coding for the Social Sciences: Massive Non-Expert Coding of Political Texts*
    Crowd-sourced data coding for the social sciences: massive non-expert coding of political texts* Kenneth Benoit Drew Conway London School of Economics New York University and Trinity College, Dublin Michael Laver Slava Mikhaylov New York University University College London Abstract A large part of empirical social science relies heavily on data that are not observed in the field, but are generated by researchers sitting at their desks, raising obvious issues of both reliability and validity. This paper addresses these issues for a widely used type of coded data, derived from the content analysis of political text. Comparing estimates derived from multiple “expert” and crowd-sourced codings of the same texts, as well as other independent estimates of the same latent quantities, we investigate whether we can analyze political text in a reliable and valid way using the cheap and scalable method of crowd sourcing. Our results show that, contrary to naive preconceptions and reflecting concerns often swept under the carpet, a set of expert coders is also a crowd. We find that deploying a crowd of non-expert coders on the same texts, with careful specification and design to address issues of coder quality, offers the prospect of cheap, scalable and replicable human text coding. Even as computational text analysis becomes more effective, human coding will always be needed, both to validate and interpret computational results and to calibrate supervised methods. While our specific findings here concern text coding, they have implications for all expert coded data in the social sciences. KEYWORDS: text coding; crowd sourcing; expert-coded data; reliability; validity * Prepared for the third annual New Directions in Analyzing Text as Data conference at Harvard University, 5-6 October 2012.
  • Counteracting Estimation Bias and Social Influence To
    bioRxiv preprint doi: https://doi.org/10.1101/288191; this version posted March 24, 2018. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission.

    Counteracting estimation bias and social influence to improve the wisdom of crowds. Albert B. Kao,a,* Andrew M. Berdahl,b Andrew T. Hartnett,c Matthew J. Lutz,d Joseph B. Bak-Coleman,e Christos C. Ioannou,f Xingli Giam,g Iain D. Couzind,h. a Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA, USA; b Santa Fe Institute, Santa Fe, NM, USA; c Argo AI, Pittsburgh, PA, USA; d Department of Collective Behaviour, Max Planck Institute for Ornithology, Konstanz, Germany; e Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ, USA; f School of Biological Sciences, University of Bristol, Bristol, UK; g Department of Ecology and Evolutionary Biology, University of Tennessee, Knoxville, TN, USA; h Chair of Biodiversity and Collective Behaviour, Department of Biology, University of Konstanz, Konstanz, Germany.

    Abstract: Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error.
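The abstract's observation that the arithmetic mean and the median can err in opposite directions is easy to reproduce on a small right-skewed sample of guesses, the shape numerosity estimates often take (illustrative numbers, not the study's data):

```python
from statistics import mean, median

true_value = 100
# Right-skewed guesses: most people undershoot, a few overshoot wildly.
estimates = [40, 60, 80, 90, 100, 110, 420]

print(mean(estimates))    # pulled above the true value by the long right tail
print(median(estimates))  # sits with the bulk of the (under-)estimates
```

Here the mean overshoots the true value while the median undershoots it, which is why the paper develops aggregation measures that explicitly correct for the shape of the estimate distribution rather than relying on either statistic alone.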
  • New Technology Assessment in Entrepreneurial Financing-Can
    New Technology Assessment in Entrepreneurial Financing - Can Crowdfunding Predict Venture Capital Investments? Jermain Kaminski, Christian Hopp, Tereza Tykvova. Draft, August 26, 2018.

    Abstract: Recent years have seen an upsurge of novel sources of new venture financing through crowdfunding (CF). We draw on 54,943 successfully crowdfunded projects and 3,313 venture capital (VC) investments throughout the period 04/2012-06/2015 to investigate, on the aggregate level, how crowdfunding is related to a more traditional source of entrepreneurial finance, venture capital. Granger causality tests support the view that VC investments follow crowdfunding investments. Cointegration tests also suggest a long-run relationship between crowdfunding and VC investments, while impulse response functions (IRF) indicate a positive effect running from CF to VC within two to six months. Crowdfunding seems to help VC investors in assessing future trends rather than crowding them out of the market. Keywords: Crowdfunding, venture capital, Granger causality, crowding out.

    1 Introduction

    "[T]his year, we've been slower to invest partially because in our analysis, there are years where there are lots of new ideas and big swings that are going for new industries. I feel like last year and maybe the year before were better years for big new ideas. This year, we haven't seen as many." - Aileen Lee, Cowboy Ventures

    Entrepreneurship is "always a voyage of exploration into the unknown, an attempt to discover new ways of doing things better than they have been done before" (Hayek 1948, p. 101). The need for experimentation is deeply rooted in entrepreneurship. An environment which facilitates experimentation and tolerates failures is the mainstay of a prospering economy.
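A Granger causality test asks whether the past of one series improves forecasts of another beyond what the other's own past provides. A self-contained sketch of the idea using ordinary least squares (illustrative only: the variable names, coefficients, and synthetic data are ours, and a real analysis would use a formal F-test such as statsmodels' grangercausalitytests):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series in which cf leads vc by one period.
n = 200
cf = rng.normal(size=n)
vc = np.empty(n)
vc[0] = rng.normal()
for t in range(1, n):
    vc[t] = 0.3 * vc[t - 1] + 0.8 * cf[t - 1] + 0.1 * rng.normal()

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

y = vc[1:]
ones = np.ones(n - 1)
restricted = rss(y, np.column_stack([ones, vc[:-1]]))             # vc's own past only
unrestricted = rss(y, np.column_stack([ones, vc[:-1], cf[:-1]]))  # plus cf's past

# Adding cf's past sharply reduces the unexplained variance in vc,
# which is the Granger-style evidence that cf helps predict vc.
print(unrestricted < restricted)  # True
```

The paper's direction of causality corresponds to exactly this comparison: lagged crowdfunding activity improving forecasts of VC investment beyond VC's own history.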
  • The Wisdom of Crowds? Groupthink and Nonprofit Governance
    Melanie B. Leslie

    Abstract: Scandals involving nonprofit boards and conflicts of interest continue to receive considerable public attention. Earlier this year, for example, musician Wyclef Jean's Yele Haiti charity became the target of intense criticism after the charity disclosed that it had regularly transacted business with Jean and entities controlled by Jean and other directors. Although scandals caused by self-dealing undermine public confidence in the charitable sector, they continue to erupt. Why do charitable boards sanction transactions with insiders? This Article argues that much of the blame lies with the law itself. Because fiduciary duty law is currently structured as a set of fuzzy standards that focus on outcome rather than procedure, it facilitates groupthink. Groupthink occurs when directors place allegiance to fellow board members ahead of the nonprofit's best interests, and it can undermine social norms that facilitate sound governance procedures. Groupthink blinds directors to conflicts of interest and may also induce directors to refrain from adequately monitoring ongoing business relationships with board members. When groupthink occurs, boards can convince themselves that their conduct falls within the law's murky limits. As a result, charitable assets are diverted from the charities' intended beneficiaries and into directors' pockets. Social norms against self-dealing are the primary tool for combating harmful groupthink. The law should be reformulated to support and reinforce fiduciary duties as social norms. Restructuring laws against self-dealing as a set of clear rules would give needed direction to confused boards and would entrench social norms against self-dealing.
  • An Experiment About the Impact of Social Influence on the Wisdom of the Crowds' Effect
    School of Technologies and Architecture, ISCTE Faculty of Sciences, University of Lisbon An experiment about the impact of social influence on the wisdom of the crowds' effect Sofia Silva ([email protected]) Dissertation submitted in partial requirement for the conferral of Master of Sciences degree in Complexity Sciences Supervisor: Professor Luís Correia Faculty of Sciences, University of Lisbon June 2016 Acknowledgments I would like to express my profound gratitude to my dearest friend István Mandak, without whom this dissertation would have not been possible. He contributed with his knowledge and ideas to implement the code for the experiment and committed his time to improve the way data was collected. Preventing duplicates in the framework of MTurk has turned out to be exceptionally challenging and his long term support and patience was invaluable to achieve a reliable setup. I equally wish to thank his wife, Adrienn and his two kids, Maja and Bohus, who inadvertently accompanied this project and with whom I ended sharing dinner, games and a great amount of jellybeans. I would like to sincerely thank my supervisor Luís Correia who patiently guided me throughout this bumpy and rather long journey. I believe that without his continuous support, patience, and readiness to communicate (even if three thousand kilometres away) it would have been extremely difficult to overcome the many challenges of this project. His continuous enthusiasm and good humour were truly motivating. I also want to thank Jorge Louçã for his initial encouragement and support when I first started the program. Furthermore, I would like to thank my dearest friends Sabrina Amendoeira, Diana Ferreira, and Juergen Krasser who have always been encouraging and supportive.
  • Groupthink Versus the Wisdom of Crowds: the Social Epistemology of Deliberation and Dissent Miriam Solomon Temple University
    Groupthink versus The Wisdom of Crowds: The Social Epistemology of Deliberation and Dissent. Miriam Solomon, Temple University. The Southern Journal of Philosophy (2006), Vol. XLIV.

    Abstract: Trust in the practice of rational deliberation is widespread and largely unquestioned. This paper uses recent work from business contexts to challenge the view that rational deliberation in a group improves decisions. Pressure to reach consensus can, in fact, lead to phenomena such as groupthink and to suppression of relevant data. Aggregation of individual decisions, rather than deliberation to a consensus, surprisingly, can produce better decisions than those of either group deliberation or individual expert judgment. I argue that dissent is epistemically valuable, not because of the discussion it can provoke (Mill's and Longino's view about the benefit of dissent), but because dissenting positions often are associated with particular data or insights that would be lost in consensus formation. Social epistemologists can usefully pay attention to various methods of aggregation of individual opinion for their effectiveness at realizing epistemic goals.

    1. Introduction: Philosophers from Plato to Mill to Popper to Rawls and Longino have made critical discussion and deliberation central to their social epistemologies. The claim is that rational dialogue between two or more individuals improves reasoning over what can be accomplished by individuals working alone. Longino (1990, 2002) even goes so far as to claim that objectivity is constituted by such critical discourse, provided that the discourse satisfies constraints such as tempered equality of intellectual authority, public forums for criticism, responsiveness to criticism, and some shared standards of evaluation.