Very informal fallacies
elivian.nl (high quality - poorly written)
06-06-2018

1. Introduction

Formal fallacies, or logical fallacies, are patterns of reasoning which can be shown to be invalid without any further context. For example: "Humans are mammals, Peter isn't human, therefore Peter is not a mammal" (denying the antecedent). Or one of my favorites:

    Proof: booze brings top grades
    HARRY SHUKMAN
    A clear correlation has been found between the amount of money colleges spend on alcohol and the percentage of firsts they receive. A genius Cambridge grad has found a link between the money colleges spend on booze and the number of firsts their students achieve. Churchill grad Grayden Reece-Smith has made a chart that appears to show a relationship between the amount of wine supplied by colleges and academic performance. Students have widely accepted that this chart is the best excuse for bad behaviour since telling your mum you only read Playboy for the articles.

(Correlation does not imply causation, cum hoc ergo propter hoc, or more specifically: ignoring a common cause, in this case the wealth of the college.)
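To make the common-cause structure concrete, here is a minimal simulation sketch. Only the structure comes from the example above; the variable names, coefficients and noise levels are made up for illustration. Wealth drives both the wine budget and the share of firsts; neither influences the other, yet the two end up strongly correlated.

```python
import random

random.seed(0)

# Hypothetical colleges: wealth is the common cause.
# Both the wine budget and the share of firsts depend on wealth;
# neither depends on the other.
colleges = []
for _ in range(200):
    wealth = random.gauss(100, 30)                    # endowment, arbitrary units
    wine_budget = 0.5 * wealth + random.gauss(0, 5)   # wealthier -> more wine
    firsts = 0.2 * wealth + random.gauss(0, 3)        # wealthier -> more firsts
    colleges.append((wine_budget, firsts))

def correlation(pairs):
    """Pearson correlation coefficient, computed by hand."""
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, even though wine does not cause firsts.
print(correlation(colleges))
```

With these made-up coefficients the correlation comes out around 0.85. Condition on wealth (e.g. compare only colleges with similar endowments) and it vanishes, which is exactly what "ignoring a common cause" hides.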
Formal fallacies, due to their objective nature, are usually easy enough to spot, and I find that after attending university, for example, people are quite adept at spotting and avoiding them. But humans being humans, we still find plenty of ways to fuck things up without being formally wrong. Plenty of ways to be correct, but hurting the cause. These are called informal fallacies. This article is about those.

The idea of informal fallacies is far from new. For example, a list of 129 informal fallacies can be found here [1]. I find the list a boring read, and I guess you would too. So in this article I will focus on fallacies selected by three criteria (in order of importance):

i. Are they new? (e.g. not in the list of 129)
ii. Do you actually encounter them in practice?
iii. Are they hard to spot in practice?

The third criterion explains the title: very informal fallacies, fallacies which require a lot of context to recognize and which therefore sit somewhere between fallacies and errors of thought (/biases). To make them easier to spot in practice I have tried to use as many examples as possible. I always sort the examples from what I consider high importance to low importance, so if you are bored you can easily skip the rest of the examples. The examples are mostly from my own experience, as this might give the best representation of what you will encounter in practice. Also, these happen to be the ones I know. The examples will most often feature me as the wise one. This might give the impression that I consider myself an (above average) wise person. This, however, is not the case. I do a lot of stupid things quite frequently; I just don't always notice them, so they might not have made it into an example.

Informal fallacies which will not be covered in this article, because I didn't have anything to add but which I do consider important, are: appeal to authority, ad hominem (personal attacks), circular reasoning, sunk costs, false dilemma, fallacy of the single cause, Occam's razor.

The goal of this article isn't to help you win arguments with informally irrational people. I've never found a good strategy for dealing with irrational people (I still consider running away the best option in these cases). Neither is the goal to make you a more rational person. The goal is to make you adept at spotting mistakes of reasoning and at judging the quality of reasoning in others, especially when conducting science, being in a meeting and/or being in a discussion. A rule of thumb you might like: as long as you don't understand the opposing opinion, as long as you cannot pinpoint where and why they went wrong, you might want to look harder at what they are saying.

Disclaimers:
* I try to point out similarities to well-known fallacies. If I don't, the name (or the fallacy) is one I just made up; I think this makes things easier to remember. If you have better names (or notice that something I mention is already described somewhere), please let me know!
* This is a first version. There will be many mistakes, inaccuracies, boring parts and parts missing sufficient references. Let me know if you find any: [email protected].

The very informal fallacies are sorted into four categories: relevance, abstraction, generalization and social.

2. Relevance (necessity)

Alice: "I think we should invest more in nuclear energy because it doesn't contribute to global warming."
Bob: "Have you seen the new Star Wars movie? I think it's epic!"

Although Bob is correct, his remark doesn't contribute to the discussion. Fallacies of relevance are relatively easy to spot and understand, but somehow this is still one of the most common types of fallacy I encounter. If you have ever taken minutes at a meeting, you might have found that reality isn't far from the Alice and Bob example above. "Fallacies of relevance" is already an existing term, which includes the red herring and the irrelevant conclusion (ignoratio elenchi). I find the fallacies listed there slightly on the formal side (i.e. focused on incorrect conclusions rather than on merely hindering progress), so here are a couple of additions.

2.1 Complete irrelevance

Irrelevant details. People tend to be really fond of details. But sometimes you don't need all the details in order to see the bigger picture. And sometimes the bigger picture is all you need to make a decision.

Incorrect assumptions. Something I notice in the applied sciences is that people are usually really good at taking assumptions and turning them into the corresponding correct conclusions. A lot of effort is put into this, and there seems to be an unspoken agreement among peers to focus critique solely on the path from someone's assumptions to their conclusions, not on the assumptions themselves. For theoretical research this seems good to me, but in the applied sciences the assumptions matter a lot. Incorrect important assumptions lead to research which has no practical implications (it is completely irrelevant). One might think that such research will still contribute to other research. In theory this sounds great, but I haven't seen it happen in practice, probably because the work is too applied (not general enough) to be used by other research. To sum it up: I see a lot of research which is too practical for theoretical use and too theoretical for practical use.

(Warning: speculation paragraph!) I think the cause of this is that following rules and combining assumptions is relatively easy, while finding the right assumptions is hard. Perhaps it is also because we are never taught to find the correct assumptions in school: we are usually given the assumptions and asked for the conclusion. Perhaps it is because finding the right conclusion is actually much easier than finding the right assumptions.

This comic exactly describes my experience during my master's thesis [2].
I was studying the economics of law, a field in which the goal is to find optimal laws with respect to happiness (or utility). I found a field with many different (strong) assumptions, and correspondingly very different conclusions. To firm believers in a specific theory the field might feel relevant. But looking at it from a distance, the field feels completely irrelevant, as there is no research in which the assumptions come close to reality. In my thesis I go to great lengths to discuss the literature and to find the most realistic approach.

As an illustration of this perceived culture of disregarding the impact of assumptions: in a book comparing many different strategies for CO2 reduction [3], the authors state:

    "Assumption 1: Future infrastructure required to sufficiently manufacture and scale each solution globally is in place in the year of adoption, and is included in the cost to the agent (the individual or household, company, community, city, utility, etc.). Because we have made this assumption, we have eliminated the need for analysis of capital spending to enable or augment manufacturing."

This is fundamentally wrong: just because you assume something doesn't exist doesn't eliminate the need to look at it. The need is still there; you just fail to address it. Although this is only a small and in this case rather insignificant example, I see it as a perfect illustration of the culture in the applied sciences. If something is too difficult to incorporate, you just assume it away, hide the assumption somewhere (in this case in the methodology section at the back of the book, or by referring people to the internet) and pretend it isn't that relevant. In your abstract and conclusions you can still expect people to believe your numbers, and there is no need to discuss the important assumptions. If you really cannot hide it, you can always suggest it for further research.

Next, an example of research where one (seemingly innocuous) assumption makes the research irrelevant. Some background is needed (warning: this is a boring example!). Traffic jams are self-sustaining. A normal highway can handle up to 1500 vehicles per hour per lane; any more and a traffic jam results. In a traffic jam the speeds are lower, so the highway can then handle only 800 vehicles per hour per lane; any number above that and people will just join the queue.
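The self-sustaining part follows directly from those two capacities. Here is a minimal bookkeeping sketch: only the 1500 and 800 vehicles per hour per lane come from the text above; the demand profile and the hourly time step are made-up illustration.

```python
# Minimal queueing sketch of a self-sustaining traffic jam.
FREE_FLOW_CAPACITY = 1500   # vehicles per hour per lane, no jam
JAM_CAPACITY = 800          # vehicles per hour per lane, once jammed

def simulate(demand_per_hour):
    """Track the queue hour by hour for a given demand profile."""
    queue = 0.0
    for hour, demand in enumerate(demand_per_hour):
        # Capacity drops as soon as a queue exists.
        capacity = JAM_CAPACITY if queue > 0 else FREE_FLOW_CAPACITY
        # Vehicles that cannot pass this hour join (or stay in) the queue.
        queue = max(0.0, queue + demand - capacity)
        print(f"hour {hour}: demand {demand:4}, capacity {capacity}, queue {queue:5.0f}")

# One rush-hour spike above 1500, then demand drops back to 1200.
simulate([1400, 1600, 1200, 1200, 1200])
```

Once the spike has created a queue, a demand of 1200 vehicles per hour, which the free-flowing highway would handle with room to spare, keeps the jam growing, because the jammed capacity is only 800. With these numbers the jam only dissolves once demand drops below 800.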