Summary


On the following two pages, the 100 errors previously presented in detail are grouped and classified once more in a compact overview. This overview is intended to help you prevent these systematic errors through suitable measures, or at least reduce them to the point where they have no seriously negative effects on your operating business. More generally, a deliberate and structured engagement with the errors and error categories can help you set up your company to be more "resilient" and "antifragile", so that negative developments cannot knock it over at the first blow. A structured engagement also helps you arm yourself against further, similar errors that are not explicitly covered here.

© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2019
C. Glaser, Risiko im Management, https://doi.org/10.1007/978-3-658-25835-1

Too much information
We therefore typically notice only...
- changes
- the extraordinary
- repetitions
- the familiar and anecdotes
- confirmations

Examples: too much historical data; too many averages; confirmation bias; asymmetric attention; framing effect; salience bias; availability heuristic; observer-expectancy effect; loss aversion; choice overload; curse of knowledge; prevalence error; heuristics in a complex world; illusion of control; learning from success and failure stories.

Too little information content
We therefore fill the gaps with...
- patterns and familiar stories
- generalities and stereotypes
- simplified probabilities and numbers
- simpler problems
- our current way of thinking

Examples: halo effect; bandwagon effect and groupthink; gambler's fallacy; Miller's number; Murphy's law; survivorship bias; stereotyping; anecdotal fallacy; mental accounting; conjunction fallacy; argumentum ad logicam; clustering illusion; tautologies and repetition of content; certainty and possibility effects; blind validations; influence of reciprocity.

Not enough time available
We therefore frequently assume that...
- we are right
- we can manage it
- the most obvious option is also the best
- we should finish what we have started
- we should keep our options open
- simpler is also better

Examples: overconfidence; sharpshooter fallacy; Barnum effect; zero-risk bias; backfire effect; over-insuring risks; sunk-cost fallacy; status-quo bias; disposition effect; moving the goalposts; egocentric bias; confusing cause and effect; being swayed by leading and loaded questions.

Poor memory
We memorize things in an energy-saving way through...
- trimming down memory sequences
- generalizations
- a single example or anecdote that we remember
- falling back on external experience
- perceiving only part of reality

Examples: narrative fallacy; inflation of "black swans"; selective perception; self-fulfilling prophecy; cognitive dissonance; negativity bias; argumentum ad hominem; straw-man argument; appeal to ignorance; false dilemma; slippery-slope argument; tu quoque; the human factor; comparing apples and oranges; too little experience; underestimating adverse selection.

Craft errors
Errors of craft and implementation in...
- risk-relevant and strategic models
- interpreting and determining key figures
- risk-relevant processes
- business and risk strategies
- the resources required for (risk) management

Examples: problem of induction; the future as a "black box"; mathematization of the future; the world as a casino; blind faith in models; alpha and beta errors; diversification myth; fat tails and the normal-distribution illusion; mismanagement of identifiable risks; slow, uncoordinated reaction; causal fallacy; no link to planning; deficient risk communication; overestimating the portfolio view; disregarding knock-on risks; detail orientation instead of big-picture orientation; no safeguard against domino effects; middle course versus rotten compromise; the lean risk management illusion; belief in quantification.

Cultural shortcomings
Fundamental deficiencies in...
- an open and constructive handling of mistakes
- corporate culture and the tone from the top
- decisiveness in management and at the operating levels
- focus on strategic goals
- integrating all employees

Examples: fear of the unknown and uncontrollable; rationality illusion; blind faith in hierarchy; sunflower effect; knowledge giants as implementation dwarfs; isolated parallel universe; decision congestion in management; lack of transparency; argument from authority; trapped in the hamster wheel; short-term orientation; outdated, lifeless risk strategy; missing incentive steering; learned carelessness and helplessness; no overarching view of risk; mishandling innovations; shifting responsibility; too many yes-men; fear and greed; dealing with uncomfortable truths.

Closing Remarks

Congratulations! You have worked your way through 100 selected "errors of (risk) management", and I would now like to share a few concluding thoughts with you. Risks are something completely normal, and so is their management. It is no accident that the Chinese word for crisis, "weiji 危机", combines "wei 危", meaning danger, with "ji 机", meaning opportunity. In Greek, too, the word "krisis" by no means describes only a hopeless situation; rather, it denotes the climax or turning point of a dangerous state of affairs.
The Bielefeld sociologist Niklas Luhmann, who died in 1998, put it this way: "The door to paradise remains sealed – by the word risk" (cf. Luhmann 2003, p. 26). To preserve one's chances in this world, one has always had to take risks. In the end, it is how well or how badly someone deals with risks that separates the good from the very good, and the bad from the very bad. Model your risks with a sense of proportion, and balance empirical facts, intuition, and state-of-the-art models carefully. Then risk management is not "rocket science" but solid craftsmanship.

This became evident in 1998 and again in 2000, when the fund "Long-Term Capital Management (LTCM)", co-founded by the Nobel laureate in economics Robert C. Merton, first had to be rescued and then finally wound up. At its peak, Merton was able to enter into contracts worth one trillion US dollars, yet at the same time his business ran into severe turbulence through deteriorating macroeconomic conditions in the emerging markets, the Russian crisis, and failed speculation in Italian government bonds. Merton underestimated factors such as flexibility, intuition, and antifragility. He simply did not find the right "balance" between hard and soft facts.

You will surely have noticed that a large proportion of the risks and examples presented are closely tied to the field of psychology. This is hardly surprising. After all, "risk" cannot be grasped, seen, touched, or weighed. It is rather a subjective perception in which our mental, or cognitive, faculties play a central role.
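The LTCM episode ties in with the "fat tails / normal-distribution illusion" listed in the overview: a model built on the normal distribution assigns practically zero probability to extreme moves that heavy-tailed distributions produce routinely. A minimal simulation sketch of that gap; the distributions and parameters here are purely illustrative, not calibrated to LTCM or to any market data:

```python
# Compare tail probabilities: how often does a "6-sigma" event occur
# under a normal model versus a heavier-tailed Student-t model?
import math
import random

random.seed(42)
N = 200_000

def student_t(df):
    # Student-t variate: standard normal divided by sqrt(chi-square / df)
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

normal_hits = sum(abs(random.gauss(0.0, 1.0)) > 6 for _ in range(N))
t_hits = sum(abs(student_t(3)) > 6 for _ in range(N))

print(f"|x| > 6 under the normal model:     {normal_hits} of {N:,} draws")
print(f"|x| > 6 under a Student-t(3) model: {t_hits} of {N:,} draws")
```

Under the normal model a 6-sigma event is expected roughly twice in a billion draws, so the first count is almost always zero, while the heavy-tailed model produces such events regularly. That difference is exactly what a purely Gaussian risk model hides.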
Over the centuries and millennia, our brain has developed certain strategies for dealing with risk and uncertainty quickly and simply. Many of these techniques and strategies are highly successful, but at the same time they carry blind spots and susceptibilities to biases and fallacious conclusions. Just think of how we process an excess of information, for instance with stereotypes, anecdotes, the confirmation bias, or the observer-expectancy effect. Or of the errors we make when we have to act particularly fast, such as the status-quo bias, the sunk-cost fallacy, overconfidence, or the Barnum effect. The same applies to the question of what we should remember and for how long, or to situations in which only incomplete information is available and we match it against similarities with our past experience. Mind you: all of this happens in fractions of a second and largely subconsciously!

Reasoning errors lead to distortions and wrong turns especially in dealing with statistics. There are ways out, however, as the numerous tips and practical tricks for the individual errors of risk management have shown. This matters all the more when you consider that the importance of interpreting and perceiving data and information will grow considerably with big data.

Now that you have gained a good overview of the different areas and potential problem fields in operational and strategic (risk) management, nothing stands in the way of your long-term success. At least not from the perspective of risk management!
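How statistics mislead intuition can be made concrete with base rates, one of the classic stumbling blocks behind several of the errors above. A hedged sketch with hypothetical numbers (a warning signal that flags 99 % of real problem cases but also 5 % of harmless ones, while real problems occur in only 1 % of cases; none of these figures come from the book):

```python
# Bayes' rule: probability that a flagged case is a real problem.
# All numbers are hypothetical and chosen only for illustration.

def posterior(prior, hit_rate, false_alarm_rate):
    """P(problem | flag) = P(flag | problem) * P(problem) / P(flag)."""
    true_alarms = prior * hit_rate
    false_alarms = (1.0 - prior) * false_alarm_rate
    return true_alarms / (true_alarms + false_alarms)

p = posterior(prior=0.01, hit_rate=0.99, false_alarm_rate=0.05)
print(f"P(problem | flag) = {p:.1%}")  # ~16.7%: most flags are false alarms
```

Intuition often answers "about 99 %"; the 1 % base rate drags the true value down to roughly one in six. This is the kind of distortion the tips and tricks for the individual errors are meant to counter.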