Very informal fallacies
elivian.nl (high quality - poorly written) 06-06-2018

Introduction

Formal fallacies, or logical fallacies, are patterns of reasoning which can be concluded to be false without any need for further context. For example: "Humans are mammals, Peter isn't human, therefore Peter is not a mammal". Or one of my favorites:

Proof: booze brings top grades HARRY SHUKMAN

A clear correlation has been found between the amount of money colleges spend on alcohol and the percentage of firsts they receive.

A genius Cambridge grad has found a link between the money colleges spend on booze and the number of firsts their students achieve.

Churchill grad Grayden Reece-Smith has made a chart that appears to show a relationship between the amount of wine supplied by colleges and academic performance.

Students have widely accepted that this chart is the best excuse for bad behaviour since telling your mum you only read Playboy for the articles.


(Correlation does not imply causation: cum hoc ergo propter hoc, or more specifically, ignoring a common cause. In this case: the wealth of a college.)

Formal fallacies, due to their objective nature, are usually easy enough to spot, and I find that after attending, for example, university, people are quite adept at spotting and avoiding these. But humans being humans, we still find plenty of ways to fuck things up without being formally wrong. Plenty of ways to be correct, but hurting the cause. These are called informal fallacies. This article will be about those.

The idea of informal fallacies existing is far from new. For example, a list of 129 informal fallacies can be found here [1]. I find the list a boring read, and I guess you would too. So in this article I will focus on fallacies based on three criteria (in order of importance): i. are they new? (e.g. not in the list of 129) ii. do you actually encounter them in practice? iii. are they hard to spot in practice? The third criterion explains the title: very informal fallacies, fallacies which require a lot of context to recognize and therefore might actually sit somewhere between fallacies and errors of thought (/biases).

In order to make it easier to spot them in practice I’ve tried to use as many examples as possible. I’ll always sort the examples from what I consider high-importance to low-importance, so if you’re bored you can easily skip the rest of the examples. Examples will be mostly from my own experience as this might give the best representation of what you might encounter in practice. Also, these happen to be the ones I know. Examples will most often feature me as the wise one. This might give the impression that I consider myself to be an (above average) wise person. This however is not the case. I do a lot of stupid things quite frequently. I just don’t always notice and therefore they might not have made it into an example.

Informal fallacies which will not be covered in this article because I didn't have anything to add, but which I do consider important, are: appeal to authority, ad hominem (personal attacks), sunk costs, the fallacy of the single cause, and Occam's razor.

The goal of this article isn't to help you win an argument with informally irrational people. I've never found a good strategy for dealing with irrational people (I still consider running away the best option in these cases). Neither is the goal to make you a more rational person. The goal is to make you adept at spotting mistakes of reasoning and more adept at judging the quality of reasoning in others. Especially when conducting science, being in a meeting and/or being in a discussion. A rule of thumb you might like: as long as you don't understand the opposing opinion, as long as you cannot pinpoint where/why they went wrong, you might want to look harder at what they are saying.

Disclaimers:
* I try to point out similarities to well-known fallacies. If I don't, the name (or the fallacy) is one I just made up. I think it makes things easier to remember. If you've got better names (or notice that something I mention is already described somewhere) please let me know!
* This is a first version. There will be many mistakes/inaccuracies/boring parts/parts missing sufficient references. Let me know if you find any: [email protected].

The very informal fallacies are sorted into 4 categories: relevance, abstraction, generalization and social.


1. Relevance (necessity)

Alice: “I think we should invest more in nuclear energy because it doesn’t contribute to global warming.” Bob: “Have you seen the new Star Wars movie? I think it’s epic!”

Although Bob is correct, it doesn’t contribute to the discussion. Fallacies of relevance are relatively easy to spot and understand. But somehow this is still one of the most common types of fallacies I encounter. If you’ve ever been taking minutes at a meeting you might have found that reality actually isn’t far from the Alice and Bob example above.

Fallacies of relevance is already an existing term, which includes for example ignoratio elenchi (missing the point). I find the fallacies listed there to be slightly on the formal side (i.e. focused on incorrect conclusions rather than on just hindering progress), so here are a couple of additions.

1.1 Complete irrelevance

 Irrelevant details. People tend to be really fond of details. But sometimes you don't need all the details in order to see the bigger picture. And sometimes the bigger picture is all you need to make a decision.

 Incorrect assumptions. Something I notice in the applied sciences is that people are usually really good at taking assumptions and turning them into the corresponding correct conclusions. A lot of effort is put into this, and there seems to be an unspoken agreement among peers to focus critique solely on someone's work from assumptions to conclusions, not on the assumptions themselves. For theoretical research this seems good to me, but in the applied sciences the assumptions matter a lot. Incorrect important assumptions lead to research which has no practical implications (completely irrelevant). One might think that such research will still contribute to other research. In theory this sounds great, but I haven't seen it happen in practice, probably because it is too applied (not general enough) to be used by other research. So to sum it up, I see a lot of research which is too practical for theoretical use and too theoretical for practical use.

(Warning: speculation paragraph!) I think the cause of this is that following rules and combining assumptions is relatively easy, but finding the right assumptions is hard. Perhaps it is also because we're never taught to find the correct assumptions in school; we are usually given the assumptions and asked for the conclusion. Perhaps because finding the right conclusion is actually much easier than finding the right assumptions.

 This comic exactly describes my experience during my master thesis [2]. I was studying the field of the economics of law, in which the goal is to find optimal laws with respect to

happiness (or utility). I found a field with so many different (strong) assumptions, and correspondingly very different conclusions. To some firm believers in a specific theory the field might feel relevant. But when looking from a distance the field feels completely irrelevant, as there is no research in which the assumptions come close to reality. In my thesis I go to great lengths through the literature and try to find the most realistic approach.

 As an illustration of my perceived culture of disregarding the impact of assumptions: in a book comparing many different strategies for CO2 reduction [3] they state:

“Assumption 1: Future infrastructure required to sufficiently manufacture and scale each solution globally is in place in the year of adoption, and is included in the cost to the agent (the individual or household, company, community, city, utility, etc.). Because we have made this assumption, we have eliminated the need for analysis of capital spending to enable or augment manufacturing.”

This is fundamentally wrong: just because you assume something doesn't exist doesn't eliminate the need to look at it. The need is still there, you just fail to address it. Although this is only a small and in this case rather insignificant example, I see it as a perfect example of the culture in the applied sciences. If something is too difficult to incorporate you just assume something, hide the assumption somewhere (like in this case, in the back of the book in the methodology section, or by referring people to the internet) and pretend it isn't that relevant. In your abstract and conclusions you can still expect people to believe your numbers and there is no need to discuss the important assumptions. If you really cannot hide it you can always suggest it for further research.

 An example of research where one (seemingly innocuous) assumption makes the research irrelevant. Some background is needed (warning: this is a boring example!).

Traffic jams are self-sustaining. A normal highway can handle up to 1500 vehicles per hour per lane; any more and it will result in a traffic jam. In a traffic jam the speeds are lower, so the highway can now handle only 800 vehicles per hour per lane; any number above that and people will just join the queue. So once you've got a traffic jam it is hard to get rid of it. Ramp metering solves this by installing a traffic light which only allows vehicles to enter the highway if there is no danger of a traffic jam occurring, thereby optimizing the capacity of the highway.
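For the numerically inclined, here is a toy illustration of that mechanism. The only figures taken from the text are the 1500 and 800 vehicles per hour per lane; the demand profile, function and variable names are made up for this sketch.

```python
# Toy sketch of self-sustaining traffic jams and ramp metering.
# Only the 1500 and 800 veh/h/lane figures come from the text; everything else is made up.

def simulate(demand_per_hour, ramp_metering=False):
    FREE_FLOW = 1500   # veh/h/lane when traffic flows freely
    JAMMED = 800       # veh/h/lane once a jam has formed
    highway_queue = 0  # vehicles stuck in the jam on the highway
    ramp_queue = 0     # vehicles held back at the metering light
    served = 0         # vehicles that made it through
    for demand in demand_per_hour:
        if ramp_metering:
            # only let cars in up to what the un-jammed highway can handle
            inflow = min(demand + ramp_queue, FREE_FLOW)
            ramp_queue = demand + ramp_queue - inflow
        else:
            inflow = demand
        capacity = FREE_FLOW if highway_queue == 0 else JAMMED
        served += min(inflow + highway_queue, capacity)
        highway_queue = max(0, highway_queue + inflow - capacity)
    return served, highway_queue, ramp_queue

demand = [1700, 1200, 1200, 1200, 1200, 1200]  # one busy hour, then moderate demand

print(simulate(demand, ramp_metering=False))  # (5500, 2200, 0): the jam never clears
print(simulate(demand, ramp_metering=True))   # (7700, 0, 0): everyone gets through
```

The point is only that once demand sits between the two capacities (here 1200, between 800 and 1500), a jam sustains itself indefinitely, while capping the inflow at the free-flow capacity prevents it from forming at all.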

This system works great, but there is a little cause for concern. Assume a highway from rural town A to rural town B (takes 10 minutes) to city C (takes another 10 minutes) which during the morning rush hour is usually congested. Putting up ramp metering lights at entry B will solve this issue. Unfortunately it means that a lot of people will be waiting at entry B. In most research this is considered unfair, especially because the people at entry B live closer to city C but now might have a longer travelling time to city C compared to people from A.

There are many papers on solving this by putting a traffic light at entry A to leave space for people entering at B (i.e. "proportionally fair ramp metering"). This sort of works, but is suboptimal because it is impossible to compute how many cars to allow onto the highway at A, since you don't know how many cars will want to enter at B in 10 minutes.

Sometimes the light at entry A will be red because it expects people to want to enter at B, but these people might never show up, leaving the highway underutilized. Fair ramp metering strategies therefore introduce a tradeoff between fairness and efficiency, where sometimes big sacrifices in efficiency are made to achieve a little more fairness.

But in reality there is no such tradeoff (or at least a much smaller one). The problem is in the research's assumption of what constitutes 'fair'. The common definition is too strict: the research assumes a situation to be fair when every car on every trip incurs the same amount of delay as it causes others. What it doesn't take into account is that not every trip has to be fair, but that every driver has to be treated fairly. So ordinary research aims to make every trip fair, while in reality I think it would be completely fair if a single driver on one day causes a little more delay than they receive and the next day receives a little more than they cause. This might seem like a petty distinction, but it can actually lead to great improvements in efficiency without any loss of fairness (people fairness, not trip fairness). Therefore I consider this body of research to be interesting, but irrelevant. More on this can be found in [4], chapter four.

1.2 Pareto irrelevance

The Pareto principle roughly states that 80% of the effect comes from 20% of the causes (it is therefore also known as the 80/20 rule). This applies to a variety of subjects: 80% of US tax revenue comes from the top 20% of tax payers, 80% of the points on your examination come from only 20% of the course material, 80% of your electricity bill comes from 20% of your appliances, etc. Of course it need not be exactly 80/20, but the point is that life ain't fair and that often with a little well-placed effort you can get a lot of the results.
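As a small aside (my own toy example, not something from the text): an 80/20-like split emerges from any sufficiently skewed distribution. A Pareto distribution with shape parameter around 1.16 is known to give roughly the classic 80/20 split.

```python
# Toy check of the 80/20 rule: sample "causes" from a Pareto distribution and see
# what share of the total effect the biggest 20% of causes account for.
import random

random.seed(1)
causes = sorted((random.paretovariate(1.16) for _ in range(100_000)), reverse=True)
top_20_percent = causes[: len(causes) // 5]
share = sum(top_20_percent) / sum(causes)
print(f"top 20% of causes produce {share:.0%} of the effect")  # roughly 80%
```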

So in most cases (meetings, science, daily life choices) you first want to focus on the most important 20%. So even though the other 80% is still relevant, it is much less so.

When people focus on things which are relevant (though barely) at the expense of focusing on the most important 20%, I call this a pareto irrelevance fallacy. A lack of clear emphasis can also be considered part of pareto irrelevance, as well as an inability to neglect negligible things.

 A friend of mine used to work at the Dutch Embassy in Rome and told me the following story. Every year a report would be published which compared embassies across the world. For years in a row it concluded that the Dutch embassy in Rome was one of the least sustainable because of terrible energy use. Of course this was cause for concern and they decided to meet in order to make the embassy more green. My friend said the embassy had a glass dome which, combined with the Italian sun, caused the air-conditioning to roar to keep it at 20 °C. This seemed like an obvious candidate to consider. The conclusion of the meeting, however, was that they would no longer serve water, because plastic cups were occasionally being used by visitors.

 The list of 129 informal fallacies linked to in the introduction of this article [1] is a great list and really complete. However, it isn't really useful in practice because it lacks emphasis. I'm pretty sure that 20% of that list actually accounts for 80% of the informal fallacies used in daily life, and I think the list could be improved by making this more clear (that is part of the reason I wrote this article).

1.3 Complaining

Some people seem to see complaining about the state of the world as a substitute for action. As if somehow by complaining they are making things better. Or perhaps they are just used to some parent, teacher or friend coming to solve their problems if they but complain loud enough. It sometimes seems that people complaining loudly about the things they cannot change actually miss the situations close at hand which they can change.


I don't really have a lot against complaining, I just dislike irrelevant complaining, which happens to be most complaining. How to detect whether complaining is actually irrelevant? i. Does the complainer take any action to solve things (other than complaining)? ii. Does the complainer actually propose a solution?

1.4 Obfuscation (jargon-judo, window dressing)

Heavy use of jargon, difficult words and "you cannot have an opinion until you've read more about…" often sabotages a meaningful discussion. It puts up a really high barrier for the uninitiated to disagree with a statement. Using jargon is great (if you're insecure) because generally only the people who are already willing to agree with you will take the time to find out what you mean. So heavy use of jargon will actually allow you to seem right (and smart!) without actually having to take the effort of being right. A final advantage is that you can even save your own self-esteem, because you can tell yourself "they just don't understand", making further self-analysis unnecessary.

Obfuscation is often combined with an appeal to authority. I consider this to be a ‘relevance fallacy’ because you are adding an irrelevant layer of complexity. I haven’t found an effective way of dealing with obfuscation.

How to detect whether obfuscation is actually irrelevant? i. Is there an easier way to say the same thing? ii. Can it reasonably be assumed that most of the audience will understand? iii. Is the statement easy to verify (not so vague that it can later be turned to mean anything)? iv. If more background knowledge is required: are concrete pointers given to a small number of articles?

 When reading scientific literature on any subject I frequently encounter a certain claim or number for which a reference is supplied. When looking up the reference it turns out to be a complete book! And very often…

 During a discussion on feminism, she replied: "you cannot have an opinion on feminism until you've read this book!". I didn't feel like reading that book, but didn't want to be bullied (and was actually curious to see if it would change my opinion). Things got a bit out of hand and I spent nearly 50 hours reading books, reading articles and watching documentaries to understand feminism. This resulted in the article you can find on my website. It didn't change my opinion. Even when discussing feminism nowadays I'm often told some statistic and a conclusion based on it. When I doubt the existence of that statistic I'm told to go and look it up! They are firm in their belief that I just don't know enough. I always try to find the statistic but never find it (sometimes things close to it, but very different). In the end I didn't really learn about feminism, but I did learn about obfuscation.

 When I used to work at the Dutch National Institute for Public Health and the Environment I noted a special case of 'appearing smart'. When I would give a presentation I'd always get really smart questions like: "how does this relate to the research by X and Y?", "why did you choose model XYZ?", "can you elaborate on your implementation of X?". This had me really impressed! Until one day I decided to make my presentation more interactive and ask questions to the audience. It turned out they didn't understand even the basic parts of my presentation (they gave no answers at first and wrong answers when pressed). I realized that people asking "why did you choose model XYZ?" don't ask the question because they want to know, but just because they want to appear smart.

 The sad part of obfuscation… it actually works! Last week I was at the Dutch Impact Conference. After 5 minutes it became clear that it was really jargon-heavy, and since I was writing this I decided to find out how much was actually being said. The most stunning part: when really paying attention to the talks I noticed that about 20% of them didn't have ANY content but were all pretty words. The audience didn't seem to mind. For me this shows how little people on average react to the contents. If you want an idea of what jargon-heavy looks like see appendix 1 (it's in Dutch, the conference was in Dutch).

 Another example where obfuscation works. Some MIT grad students wrote a "random scientific article generator" which generates unique nonsensical papers in the blink of an eye [5]. They managed to get these papers published in peer-reviewed journals, for example from IEEE, Elsevier and Springer (this is most easily verified by looking at a press release by Springer on the website of Springer itself [6]; the 5th paragraph is probably what you're looking for).

[5] allows you to generate your own computer science papers. [7] allows you to generate your own essay. [8] is a funny article which gives you more background information on this.

For me the ease with which people get away with obfuscation (and the fact that people actually react positively to it) shows that obfuscation is very possibly a widespread phenomenon.

 When looking at the Wikipedia article on Pareto efficiency for the section on pareto irrelevance I came across these 2 sentences:

“In the systems science discipline, Epstein and Axtell created an agent-based simulation model called Sugarscape, from a decentralized modeling approach, based on individual behavior rules defined for each agent in the economy. Wealth distribution and Pareto's 80/20 principle became emergent in their results, which suggests the principle is a collective consequence of these individual rules.”

What a lovely example of obfuscation! (Actually it makes me a bit angry.) At first glance this seems to be written by someone knowledgeable, and we might consider ourselves just not knowledgeable enough to understand. The encyclopedic tone and many difficult words might (falsely) make us assume this is a wonderful contribution to Wikipedia, just too complicated for us, and make us skip this part. But…

Who the fuck cares that the model is called Sugarscape? Who cares about the names of the researchers? Both are irrelevant; a reference would have sufficed for those wishing the details. Why say "a collective consequence" when you mean "a consequence"? Why say "became emergent" when you mean "emerged" (or "became visible")? So even without knowing any details, the (unnecessarily) presumptuous tone is clear.

Still, even noting this you might think that it is correct/useful, but in order to really determine the value of this piece you'd need to know what agent-based modelling is.

It happens that I spent a year full-time making an agent-based model (yeah, we usually leave the word 'simulation' out… unless you want to sound cool I guess). Back then I already noted that only the people who don't know how to program properly keep stressing that it is an agent-based model. So I feel informed enough to say that the adjective 'agent-based' is unnecessary; simply "model" would have sufficed. And if that isn't enough, the sentence "based on individual behavior rules defined for each agent" is a really accurate explanation of what an agent-based model does, so these are even more useless (duplicate) difficult words! Even if you wanted to be cool and explain the concept of an agent-based model (in an article on the Pareto principle!!?!), "individual behavior rules" is no more informative than "behavior rules" (I cannot imagine what a model defining collective behavior rules for each agent would look like).

So, complete rubbish. But since I'm doing this anyway, let us stray a little and look at the formal correctness of the second sentence. Succinctly put it says: "Our model leads to behavior X; this suggests that X is caused by the assumptions in our model". Actually, I'm not even going to start here…

So… for me… I never need to meet the writer of this piece of obfuscated irrelevance. Seriously, for which percentage of people visiting the article on ‘pareto principle’ will this be a useful contribution? It really makes me angry that people mask incompetence this way (and often get away with it). Well, it does make for an excellent example.


2. Abstraction

Math might be the most disliked high school subject of all. "Something dies in me when I enter math class." This might lead us to conclude that humans aren't good at abstract thought. I think it is more accurate to say that abstract thought simply is hard, and that humans are actually really good at it. Still, there are many fallacies which seem to spring from our difficulties with abstraction.

2.1 Theoreticalisation (managers fallacy)

Having little contact with reality can result in people believing in a model of reality which actually has very little to do with reality. Especially in groups which reinforce each other there can be high levels of overconfidence in the theory. This confidence is best characterized by not even realizing there might be other options (unknown unknowns).

Theoreticalisation shares similarities with the McNamara fallacy, echo chamber, circle jerking and confirmation bias. Managers (due to their usual distance to reality) are at extra risk for this.

 There are many different combat sports, and many practitioners believe their sport offers the best technique. How do they really size up? To answer this question a mixed martial arts tournament was created in 1993, in which everything was legal as long as it was legal in any martial art (which roughly makes everything legal). The tournament was great because it allowed a comparison between, say, a sumo-wrestling champion and a small Brazilian Jiu Jitsu practitioner. Nearly 25 years and over 200 tournaments later it is pretty clear what works and what doesn't.


Basically the top sports for mixed martial arts (MMA) are those which: i. have competitions, ii. have fairly liberal rules and iii. have grueling training. These include, but aren't limited to, wrestling (not sumo), catch wrestling, boxing, kickboxing, sambo and Brazilian Jiu Jitsu. The worst performers turn out to be those sports which never have the reality check of competition. This lack allows great theories to pop up: pressure points, inner energy, using the force of the opponent against them. These somehow never work as advertised in competition. Examples are Wing Chun, Aikido and Tai Chi.

If you want to experience the results of theoreticalisation, a YouTube search will turn up plenty of clips showing Wing Chun masters getting beaten in a fight (practice). Ironically enough you can also find plenty of people explaining why Wing Chun really is good (theory). For an example of a fight see [9], or find a copy of the early UFCs; if you just want to laugh at the effects of theoreticalisation, or see extreme examples, I can recommend [10].

There exist many more graphic images, but I think these pictures exemplify the consequences of theoreticalisation, where being pragmatic is replaced by "the crouching tiger technique", where experience is replaced by flashy techniques in training and where training hard is replaced by "mastering your mind". Not good.

 At the examination training where I work, we work in teams of 5 teachers and giving feedback is a big thing. One time I was assigned a partner who (according to students the previous time) didn't explain things very clearly. So we frequently listened when the other was explaining something to a student and gave each other feedback afterwards. After a couple of times I noticed she didn't slowly build things up for the student; she would immediately launch into a pretty complicated explanation without giving the student time to get settled. Her explanation was good, but missed a couple of sentences as introduction. This diagnosis would explain the evaluations she had previously received from the students.

At the end of the weekend we had a general feedback round where everyone joined in. The boss of the team hadn't seen her explain anything to students, but had heard that her explanations were sometimes unclear. He recommended she more often point students to their lecture notes instead of explaining things anew. In general this is often a good solution when someone is unclear, but in this case it was absolutely not the most efficient way forward. What struck me most was not the incorrectness but the confidence with which the boss gave his solution. It resembles the confidence of some fighters about 10 seconds before reality hits.

 In the book Sapiens [11] there is a part on the scientific revolution, with an aptly named first chapter: "The Discovery of Ignorance". It compares two maps made only 66 years apart.


A European world map from 1459 (Europe is in the top left corner). The map is filled with details, even when depicting areas that were completely unfamiliar to Europeans, such as southern Africa.

The Salviati World Map, 1525. While the 1459 world map is full of continents, islands and detailed explanations, the Salviati map is mostly empty. The eye wanders south along the American coastline, until it peters into emptiness. Anyone looking at the map and possessing even minimal curiosity is tempted to ask, ‘What’s beyond this point?’ The map gives no answers. It invites the observer to set sail and find out.

The first map is a good example of theoreticalisation. Somehow we don't know what we don't know, so we just make up a theory and present it as "truth". Again the effects of theoreticalisation are clearly visible in hindsight.

 I used to train unihockey. I specifically chose my club because we mostly played games and didn't do many exercises. I hated repetitive exercises and just liked to play. When I became a coach I started thinking and reasoning about how to make a training as efficient and useful as possible. I invented many great exercises for stamina, accuracy and team-play. When I stopped being coach I immediately started hating all these exercises again. So who was right? The coach-me or the player-me? I think I was wrong as a coach. As soon as I became a coach I started reasoning from incorrect assumptions: I thought about winning matches, but the goal should have been to have fun. I think this is an excellent example of where I fell for the managers fallacy / theoreticalisation.

 Currently in the Netherlands there is a bit of an anti-muslim sentiment among approx. 20% of the population. I live in a very muslim-rich neighbourhood (I can actually see the mosque right now) and have only found very friendly and welcoming people for the past 6 years. Although I have no evidence, I wouldn't be surprised if the people with the most anti-muslim sentiment hardly ever encounter muslims, and hardly ever look up the facts, but mostly just watch the news and hear stories of friends who heard stories of friends.

 2 years ago I was surprised to learn of the existence of the term "evidence based medicine", which describes the practice of using evidence (and scientific reasoning) and applying it to the area of medicine. I was surprised because I was wondering how else you would be doing medicine. Apparently it used to be more the experienced doctor making a judgement, and research was mainly case studies. I still find it hard to believe that this shift to evidence is so recent (1980-ish!). On the other hand, evidence-based politics would make sense, and perhaps in 30 years the new generation will scratch their heads and wonder at the minimal role of evidence in politics nowadays. It might feel like politics uses evidence… but do you know? Is the length of our prison sentences based on research? Is the budget we are allocating to different NGOs based on proper research? I'm sure the answer to the second question is a definite no. If, like me, you are also surprised at the lack of evidence in medicine, the best introduction I found is in the book Superforecasting, with an aptly named second chapter: "Illusions of Knowledge" [12]. Or, more succinctly, the Wikipedia page will do [13].

 If you have a college lecturer who explains things in a very abstract theoretical way (or starts his first lesson with a history of his field) you've experienced the effects of theoreticalisation first hand. (The teacher probably explains things in the way which makes most sense to him/her without actually looking at the best way for the students.)

 When developing the course material for the examination training we suddenly realized that the way we structured things was really illogical. After explaining it to the other teachers they quickly agreed that it was indeed pretty illogical. We changed everything the next year to the logical option... Nothing happened, no effect (except a lot of hassle). This is where I learned that if you have to explain to teachers that something is illogical, students won't even notice. So the structure was illogical in theory, but that didn't matter at all in practice.

2.2 Unlimited resources fallacy

For some reason humans aren't good at realizing the finiteness of the intangible resources they have. Many of us are in a near-perpetual state of believing we will have plenty of time in a couple of weeks. Many of us spend money with little regard for what it might cost us in the future (this is related to opportunity costs [google it]). Many of us plan to do our homework tomorrow, not realizing that our willpower will still be limited tomorrow.

This often leads us to poor decision making, often resulting in us overextending, being unfocused or postponing.

 Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. –Douglas Hofstadter

 Ninety-ninety rule of programming: The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time. –Tom Cargill

 I once overheard the following conversation:

Alice: "I think putting solar panels on your rooftop is much less efficient than using the money to invest in wind energy instead." Bob: "I think climate change is a really big problem and I think we should put all hands on deck. I think we should definitely invest both in wind energy and in residential solar panels."

Although Bob's conclusion might be correct, his reasoning is not. He disregards the fact that money spent on wind energy cannot be spent on solar panels and vice versa.

 Politicians seem to gratefully use the unlimited resources fallacy during election time. They only say what they are going to spend money on; they usually don't say where they are going to get the money from. Although understandable from the perspective of the politician, you'd expect the people to ask where the money is coming from. It has to come from somewhere. I think this is actually not a great state of things, because it encourages politicians to overpromise and underdeliver, allowing extremist parties to flourish and fostering a climate of distrust in politicians.

 In most team settings, like companies, there are differing opinions on the best course of action. A tempting solution to these differences is to find some form of compromise where both courses of action are actually taken. Since all parties are allowed to do what they think is best with little need for compromise, this initially seems a good solution to everybody. Furthermore, because time spent by the others doesn't really feel like a 'cost', this might even feel pretty efficient to all parties. In general it is very likely that one of the options was superior and that, by avoiding really weighing the options, a lot of time is spent on one of the less efficient options. Also, doing many things might lead to overextension. So I feel this inability to make a choice stems from not realizing the limits of the resources. [14] contains more elaborate examples relating to not being able to make choices in determining a strategy and the cost of this (chapter 4 is the most relevant, I recommend reading chapter 3 first for fun).

2.3 Abstraction denial

Although there are many fallacies related to abstraction, this does not mean that abstraction should generally be avoided. Abstraction can be a really useful tool. I think some people have been confronted with the effects of poor abstraction (especially theoreticalisation I think) and might have come to dislike abstraction in general. Abstraction denial refers to denying abstraction without good reason. Below are 6 specific cases.

 Metaphor denial. Sometimes when someone gives a metaphor, people will attack the metaphor by pointing out a difference between the metaphor and reality. Ironically, the existence of differences between the metaphor and reality is essential; otherwise the metaphor would be reality and it would have been pointless. So when stating that a metaphor doesn't work, one should not only point out differences but also state why these are relevant.

An example of this is [15], a YouTube clip in which Peter Singer eloquently describes what is less eloquently summarized below.

You're out and about when you hear loud screaming as you pass a pond. You see a wildly flailing boy who is only just staying afloat. Do you save him? Even if it would ruin the expensive clothes you are wearing? And if you do, would that not also mean that you should donate to save starving children in Africa?

Some of the ‘top’ reactions in the comments to this.

Acodswallop321: It's not a good analogy. Of course you would help a child in the circumstances he describes. Solving global poverty is a different problem entirely.

Sol: There is a major difference between lending aid that costs you relatively nothing and is within your immediacy, and lending aid that is far removed from your life and, collectively speaking, costly (for a


permanent solution). This is a bad analogy.

They say it is a bad analogy, point out differences, but fail to say why these differences are relevant. A better example from the same comment section.

El Cid: Very poor analogy. You're not talking about normally well fed children who just happened to run out of food for a week and next week will be eating fine again. You're dealing with endemic poverty and the solution is not hand-outs, however emotionally satisfying that thought may seem. The correct analogy would be something more like a child who keeps walking back into the same pond no matter how many times you save them, and you saving the child potentially encourages more children to walk into ponds because now they expect someone to save them when they do so. Not a perfect analogy there either, but it's closer to reality.

Although El Cid might still have an incorrect conclusion, this is a good example of a proper way of 'denying' a metaphor. He gives a difference and (implicitly) says why he thinks this is a relevant difference.

 Hypothetical counter-example denial. Sometimes someone employs a faulty way of reasoning. A faulty line of reasoning can be shown to be faulty by giving a hypothetical counter-example. There is no reason for this example to have anything to do with the discussion (because it is only targeted at the faulty reasoning).

I'm sorry, I haven't encountered a good example of this recently, so here is a more abstract one:

Alice: "If A is true then B is true, because of C!" Bob: "But that would mean that if D is true then E would be true! Obviously D is true and E is not." Alice: "But D and E have nothing to do with what we are discussing."

 Unmeasurable, therefore no use in thinking about it.

Me: “Who do you think liked this weekend best?” Her: “How to know for sure? No use in thinking about it.”

Although happiness is hard/impossible to measure objectively, I think there is plenty of use in thinking about it. In this example we had organized a weekend for friends. By trying to answer the question of who might have liked it best we might first get an idea of who liked it and who liked it less (even though this isn't certain); we might then proceed to think about why they might have liked it more or less, which might lead us to very real conclusions about what to improve next time. So just because something cannot be measured does not mean it is useless to think about.

 High uncertainty, therefore no use in thinking about it. Sometimes a choice has to be made between two options which are both really unpredictable. For some people this is reason to say that there is no use in thinking about it. (When choosing between normally distributed random variables N(μ=0, σ=2131) and N(μ=1, σ=341241) you still have an expected benefit of 1 if you choose the second.)

 Not letting a hypothetical example be hypothetical. This article frequently features examples where I disagree with certain claims made by feminists or feminism. This doesn't mean I'm against feminism. They are just handy examples. I find that some people have a hard time talking about things hypothetically, especially when it is easy to take it personally.

 Inability for abstract thought. Sometimes using an x instead of a number can cause trouble for 14-year-olds. Funnily enough I had a similar experience at the Dutch National Institute for Public Health and the Environment.

I was making a simulation model (agent based!) which modelled the spread of the Sexually Transmitted Infections (STI) Chlamydia and Gonorrhea concurrently. In the literature they

were almost exclusively modelled separately. All models used very similar assumptions for the STIs (i.e. duration, percentage of infections which have no symptoms, transmission probability). So instead of programming the same thing twice I made one module which had all the assumptions as parameters and called this module "generalized STI". This module could then be given the right values for the parameters to make it simulate Chlamydia or Gonorrhea, and could do everything the models in the literature could. This practice is really common in programming: instead of writing really specific code, you write more general code which can then be tailored to fit the problem at hand. This avoids code duplication and makes code much more flexible.

My supervisors were far from happy: “there ain’t no such thing as a generalized STI”. Even serious attempts on my part to explain that the module could be tailored to the desired STI didn’t make them change their mind. In the end I just changed the name of the module to Gonorrhea, keeping everything else the same and my supervisors were happy. Weird.
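For programmers, here is a minimal sketch of the kind of parameterised module described above. The class, parameter names and values are my own illustration, not the actual model from that project.

```python
# Sketch of a "generalized STI" module: one piece of code, two diseases.
# Parameter names and values are illustrative placeholders, not the real model's numbers.
from dataclasses import dataclass
import random

@dataclass
class STIParameters:
    name: str
    mean_duration_days: float        # average time until natural clearance
    asymptomatic_fraction: float     # share of infections without symptoms
    transmission_probability: float  # chance of transmission per sexual contact

class GeneralizedSTI:
    """The disease this module represents is determined purely by its parameters."""

    def __init__(self, params: STIParameters):
        self.params = params

    def transmits(self) -> bool:
        return random.random() < self.params.transmission_probability

    def is_asymptomatic(self) -> bool:
        return random.random() < self.params.asymptomatic_fraction

# The same code simulates either infection; only the parameter values differ.
chlamydia = GeneralizedSTI(STIParameters("Chlamydia", 370, 0.7, 0.10))
gonorrhea = GeneralizedSTI(STIParameters("Gonorrhea", 180, 0.5, 0.20))
```

Renaming GeneralizedSTI to Gonorrhea, as the supervisors preferred, would change nothing about what the code can do.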

2.4 Causal marginal contribution fallacy

In dealing with abstraction we often employ heuristics: shortcuts in thinking which make everything much simpler. The problem arises when we don't realize that they are simplifications. An easy way to verify that no (incorrect) simplification has been used is to define at least 2 possible choices to compare, for each choice neatly list the effects (costs and benefits), and ensure that the effects really follow from the choices. Then compare the effects of the choices to each other to decide what to choose. For some reason I'm most familiar with examples from the field of altruism (possibly because that is what I've been doing in my time when I wasn't writing this); I hope to have a more diverse set of examples in the future.

 A friend of mine showed me her bamboo toothbrush. She chose the toothbrush because there are huge plastic islands floating in the oceans which kill or hurt wildlife. When I pointed out that her plastic toothbrush wouldn't end up in the ocean anyway she wasn't so sure. Although buying a wooden toothbrush might be a good idea, it doesn't help in avoiding those plastic islands. A plastic toothbrush used by my friend would end up in the garbage bin and would in no way end up in the ocean. There is no direct causal link between her plastic toothbrush and the plastic islands in the ocean.

 A question I frequently ask during my workshops: What do you think contributes more to society in the long run: becoming a biology teacher or becoming a math teacher? When thinking about the answer to this question we are unconsciously tempted to substitute the original question by: What is the contribution of the average math teacher compared to the contribution of the average biology teacher. This however introduces a crucial error. Let me explain why.

I have a couple of friends who are teachers. One of them is a biology teacher. It took him 3 years to find a job. This isn’t incidental, in the Netherlands there are slightly too many biology teachers. On the other hand I know friends who didn’t even finish their teaching degree and were already teaching because of a big shortage of math teachers. This seems to be the same in many regions in the world (US[16], NL[17]).

So when really comparing the two options: when you become a biology teacher, some other biology teacher will likely be sitting at home, and your impact is only the extent to which you will be a better teacher than the one who will be sitting at home. Basically, 200 students will have a different teacher. On the other hand, if you become a math teacher, 200 students will actually have a teacher. When framed this way the answer becomes a no-brainer.

This shows the importance of looking at marginal effects. The word marginal means "of one extra". For example, when walking 40 kilometers the average effort might be "not so

much", but the marginal effort of the 40th kilometer might be really high (which would refer to how hard it is to walk the last kilometer of the journey). So when you say marginal, you refer to the impact of "one additional".

 Some people are really zealous in not throwing away any food, because there are so many people in the world who go hungry. Pointing out that not throwing away food isn't going to feed the hungry people in the world will sometimes earn you the remark "I do it on principle" or "it is about the signal". This is fine, but it starts to be wrong when these people start to judge people who do throw away food. In my experience, sending the signal of not throwing away food is mainly going to convince others to also start sending this signal. I always doubt whether all these people actually take any action other than sending this signal (i.e. donating a significant amount to a charity combating this problem). I really wonder if people actually care about the problem or are just putting up a show of caring.

 We all need to make choices in what to do to help other people (we can't do it all). It makes sense to choose the actions which have the highest effect for the least effort: the highest marginal effect divided by marginal effort. Instead, the heuristic many people employ is choosing the actions with the highest scale of the problem divided by the effort it takes them (e.g. [3]: they sort only on impact, without even taking the required effort into account). So people gladly cut open their toothpaste tube to get the remainder out (low effort, and it helps a tiny bit with the waste problem), or gladly spend time on properly sorting out their waste. Although it might feel good, and although it might actually produce the right actions on occasion, comparing personal effort to the worldwide problem is very common and not proper reasoning.
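A toy illustration of the difference between these two heuristics (all numbers and action names are made up; the units don't matter, only the rankings do):

```python
# Two ways to rank the same actions (made-up numbers): by marginal effect per unit of
# marginal effort, versus by the scale of the underlying problem per unit of effort.
actions = [
    # (name, marginal effect, marginal effort, scale of the underlying problem)
    ("cut open the toothpaste tube", 0.000001, 1, 1_000_000),
    ("donate a day's wages to an effective charity", 500, 8, 1_000_000),
]

by_marginal_effect = max(actions, key=lambda a: a[1] / a[2])
by_problem_scale = max(actions, key=lambda a: a[3] / a[2])
print("marginal-effect heuristic picks:", by_marginal_effect[0])  # the donation
print("problem-scale heuristic picks:  ", by_problem_scale[0])    # the toothpaste tube
```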

Two related fallacies which might be adding to the problem are: i. the unlimited resources fallacy, when someone states that action X takes NO effort/time etc. (which it actually does, just very little); ii. people are generally not very good at dealing with orders of magnitude: solving 0.001% or 0.00001% of the world's waste problem feels the same but is really far from the same.


3. Generalization

Faulty generalizations are well known. There are two types I would like to expand upon. The first because it is really common, the second because I feel it isn't well known (new?) but is prevalent.

3.1 Looking at the most extreme of other groups

Very similar to the fallacy. When people don't like another group there is a tendency to misrepresent the other group by only looking at the more extreme opinions within that group.

 When studying feminism and MRA (the male equivalent of feminism) I found that a really common type of argument used by both sides is of the form "Feminists say X, that's crazy! It clearly shows that feminism has gone too far." I think when disagreeing with another group it is always good to understand the moderate members of that group.

 The overall anti-muslim sentiment relies for a large part on extreme muslims dominating the news and people basing their opinion on that.

3.2 Out of context (philosophers fallacy, incorrect copy-paste)

The out-of-context fallacy is taking lessons learned in one context and incorrectly applying them to another context. It is easy to make this mistake because it is very hard to be aware of the context you're in, and sometimes even small changes in context can make a big difference. It is similar to theoreticalisation [see section 2.1] and the historians fallacy [google is your friend].

 During my internship in the UK there was a free-of-charge aerobics class at work every Wednesday afternoon for those interested. When I joined I found that most people interested were 50+ year old ladies. The aerobics lesson was great fun and I started full of enthusiasm. I thought I did a lot of sports back home and was a bit afraid it wasn't going to be tiring enough (if the ladies were supposed to manage it). Halfway through I realized things were going to be a little tougher than I expected, but I didn't feel like quitting because the ladies were still going strong. Oh boy, did my muscles hurt for the coming days (definitely top-5 ever). I really learned a lesson here. This taught me that a little difference in your exercise pattern can actually make a really big difference in your ability.

 Former British diplomat and governor of a province in Iraq, Rory Stewart reflects on his time in Iraq in his book Occupational Hazards [18] (free article by the author: [19]). From his writings it became clear to me that one of the problems of the invasion of Iraq is that the Western world had this great ideal of democracy and knew from its own experience that it worked really well… but didn't realize that Iraq was so different that any direct route to democracy was doomed to fail. Many prerequisites for democracy weren't there, but since we in the Western world are so used to these prerequisites being there for us, we didn't even bother to take a hard look at what it would take to rebuild the country. A really failed attempt: approx. $1,700,000,000,000 and 4,800 soldier casualties, all spent on a rather pointless endeavor.

 Correlation does not imply causation. We've all heard it; you definitely have if you've read the introduction of this article. I find it to be way overused in daily life. Why?

In science we are used to dealing in certainty. For that it is important to realize that correlation does not imply causation.

In daily life we often deal in likelihood (instead of certainty). In daily life we don’t need to know for sure if the rollercoaster is causing us to feel sick. If we got sick on two separate occasions in a rollercoaster we know enough to avoid the rollercoaster the third time.

So while correlation does not imply causation, correlation definitely makes causation more likely. And likelihood is often all that is needed in daily life.

In general I find the correct application of knowledge to be a much bigger problem than the lack of knowledge.


 Some people lean too much on what they've learned in books/class instead of common sense, and incorrectly apply their knowledge (such as in the example above). People who study psychology sometimes somehow believe that this gives them a significant edge at understanding humans in daily life. People who study formal logic/rationality sometimes suddenly believe this gives them a significant edge in making correct decisions. In general I find the average person studying logic, biases and rationality to be no better at making correct decisions in practice than other people. This is because they start trying to link reality with the textbook problems… and fail at doing this properly. The true problem arises when they become really confident of their conclusions, becoming convinced that the average Joe is wrong.

If you want to find out if (in my opinion) you’ve gone too far and have replaced common sense by incorrect application of textbook theory (what I call rationality zealotry) you can answer the following question.

Consider the following two binary strings: string A: 00001111 and string B: 10111001. Which of these has a higher likelihood of having been randomly generated? If your answer is 'equally likely' you've spent too much time in books and/or lack common sense (in my opinion), because that's not the correct answer. If furthermore you are certain I'm wrong… even more fool you. (One way of making this precise is sketched after this list.)

 Some people are really entrenched in their way of thinking. When these people are giving feedback they often comment on everything which deviates from normal and say/imply that it is therefore not correct. For me this is always a clear indicator that I shouldn't take their feedback too seriously, because this person clearly doesn't really understand but only has experience. Not normal does not imply it is wrong.

This is related to the natural fallacy, and I think this caused people to caution against trains, caused people to caution against microwaved food, and is currently inspiring people to caution against certain additives in food.

 At the examination training people always start out by teaching 17-year-old students. After some time they sometimes also start teaching the teachers; we'll call them tutors. So Tutors teach Teachers, who teach Students.

Sometimes teachers who become tutors incorrectly copy-paste their teaching skills into tutoring skills: they treat the teachers the same way they treated their students. They ask a lot of questions to which they themselves already know the answer, they ask questions with the sole intention of maneuvering the teachers to a point where they convince themselves of something, and finally they give a lot of compliments. These strategies work wonders with students, but since teachers are well aware of these strategies, they sometimes feel belittled and can lose trust in the tutor's honesty. Not always of course; I'm still trying to figure out which teachers are more susceptible to this and why.

 I believe many philosophical arguments rely on the following structure: Step 1. specify a specific context, Step 2. establish that a certain principle must be true or cannot be true by using that context, Step 3. mix in some obfuscation, Step 4. apply the principle in a more generalized context and marvel at the surprising results, Step 5. try and convince others. The best example I know of is John Rawls's Veil of Ignorance (other examples I know of include some parts of population ethics, and many arguments against utilitarianism).
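On the binary-string question a few bullets back, here is one way to make that intuition precise. This is my own formalisation, not the author's: it assumes that strings you meet in practice come either from a fair coin or from some "patterned" process that prefers simple strings, and asks which source is the better bet for each string.

```python
# Hedged sketch: compare P(fair coin | string) for the two strings, assuming strings come
# either from a fair coin or from a toy "patterned" source that prefers few alternations.
# The patterned source is an assumption made up for this sketch.

def alternations(s: str) -> int:
    return sum(a != b for a, b in zip(s, s[1:]))

ALL_STRINGS = [format(i, "08b") for i in range(256)]
PATTERN_WEIGHTS = {t: 2.0 ** -alternations(t) for t in ALL_STRINGS}
PATTERN_TOTAL = sum(PATTERN_WEIGHTS.values())

def p_fair_coin_given(s: str, prior_fair: float = 0.5) -> float:
    p_fair = (0.5 ** 8) * prior_fair
    p_patterned = (PATTERN_WEIGHTS[s] / PATTERN_TOTAL) * (1 - prior_fair)
    return p_fair / (p_fair + p_patterned)

print(p_fair_coin_given("00001111"))  # about 0.21: probably not the fair coin
print(p_fair_coin_given("10111001"))  # about 0.68: the fair coin is the better bet
```

Under a fair coin both strings are of course exactly equally likely; the asymmetry only appears once you accept that not everything you encounter was produced by a fair coin.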


4. Social

Social interactions (perhaps combined with abstraction) might be the most difficult things we encounter as humans. Truly understanding someone else, or truly understanding group dynamics, is really hard, and it is therefore not surprising that we encounter soft logical fallacies in this area.

4.1 Assuming thinking

We often incorrectly assume other people are thinking, and that if they are thinking they have a rational reason for their behavior. This neglects the fact that behavior is often caused by emotions, without too much interference from rational thought. This is important because assuming someone is thinking leads us to attempt to convince them by way of (rational) discourse, while other approaches might be much more effective.

[Figure: three models of what drives behavior. Model 1: Ideas → Behavior. Model 2: Rational thought → Ideas → Behavior. Model 3: Circumstances → Emotion → Ideas → Behavior.]

 Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity."

 A friend of mine told me about a clip in which Trump claims: "My administration has put an end on the war on clean coal. It has just been announced that a new coal mine, a clean coal mine, meaning that they are going to take out the coal and clean it, will open" [20]. My friend said: "So Trump really thinks that…". I think my friend already committed a fallacy in that last part. He assumes that Trump thinks rationally. He assumes that Trump thinks that cleaning coal really is scrubbing the coal or something. To me it seems much more likely that Trump wasn't really thinking when issuing this statement.

 In unihockey I once encountered someone who was really harsh on himself and our team. Mistakes made by anyone in our team were met with chagrin, which made people more uncertain, resulting in more mistakes, resulting in unpleasant matches (for everyone). We tried many things: explaining why it didn't help performance, behaving negatively in return, compensating by being really positive, explaining why having fun is much more important than winning. Nothing really worked. He did agree that it was important not to be negative and agreed to be constructive in his feedback. It didn't really work, because he often didn't manage to do so in the heat of the moment, or because he kind of managed but it was just too easy to see the disappointed look on his face. The thing which finally seemed to work was not arguing with him, but just being really kind. Seeing his negativity as a sign of distress, giving him a hug and teasing him a bit (and then leaving him alone) worked wonders. First we made the mistake of being all rational; it was solved when we realized that it wasn't a rational problem. (Note: I couldn't prove causality… obviously.)

 A similar thing which took me way too long to realize: when your boss is angry at you for not following the rules, the anger isn't caused by not following the rules (model 2). You probably break the rules frequently without there being a problem. The truth probably is: she is angry, and the rules are just an easy way of expressing that anger (model 3). The rules often do relate to the cause of the anger, but not necessarily. Just abiding by the rules and not changing anything else only makes things worse (trust me, I know… she will only become more angry because it isn't easy for her to vent her anger). Find out the true reason

she is angry and solve that. Keep breaking the rules for all you care. The rules are not the issue.  The prison system in place in most countries is actually really based on model 1 or model 2. I doubt that locking people up in poor circumstances with a lot of other people who don’t have their shit together either is a good way to reduce crime. Although having people off the street might help, I find it hard to imagine someone considering a murder and then not going through with it because the prison sentence has just been increased from 20 to 30 years.

There is actually an experiment with a really well equipped prison in Norway for the inmates convicted of rape, murder, etc.: the Halden Prison [21]. They have a lot of opportunities to learn, opportunity to do sports, a music studio, they do their own cooking, guards are unarmed. So far the inmates are really well behaved, but this might also be caused by the punishment for not behaving: i. someone talking to you like therapy, ii. if that doesn’t help you’ll be send to another prison.  Charlottesville, Virginia, on August 12th 2017 there was a unite the right rally. The stated intention was to unite the white nationalists. Some of the marchers chanted nationalist, racist and/or antisemitic slogans (“You will not replace us”, “white lives matter”, “blood and soil”), carried semi-automatic rifles, Confederate battle flags, and anti-Muslim and antisemitic banners. There was also a large group of counter-protesters present chanting “I’m gay, I’m here, I hate the KKK” and “black lives matter”. Things turned into small pockets of violence between these groups and at around 1:45 pm a person deliberately drove a car into a group of counter-protesters killing 1 and injuring 19 people.


For more information on this event I highly recommend watching the YouTube clip by VICE News and HBO, as it seems really unbiased: it lets the images and the people involved do the talking [22] (22 minutes, contains graphic footage).

Reactions to this event often included statements of disappointment at the deplorable state of racism in the US. "Racism was never gone, it was just hidden", "it was an awful feeling that we haven't progressed as far as I thought, that there are so many people who have hidden racism feelings, it might even be my neighbour." [23]

I believe these reactions contain an important soft logical fallacy. Stating this might sound heartless, as if I don't care. I'm not saying this because I don't care, but because I do: I think that understanding the problem (and possible misconceptions about it) is an important step towards solving it.

I believe that these reactions are wrong in that they assume that wide-spread hidden racism is at the core of this mayhem (model 1 or model 2). But I don't think these white nationalists are healthy individuals who did a lot of rational thinking and arrived at the conclusion that they don't like black people. I think this isn't mainly a racism problem; it is mainly a crazy-people problem. These are crazy people who happen to be racist (not the other way around). Considering this a racism problem first will lead to broad-spectrum solutions targeting racism at large; because I think there are many, many normal (i.e. non-racist) people in the US, I expect little progress from that. On the other hand, realizing this 'movement' is caused by weird individuals or subgroups may give rise to much more targeted solutions.

In support of this view, let us compare the Unite the Right rally to something which happened in the Netherlands. In 2012 a girl in a small town (Haren) in the Netherlands organized a birthday party. She made a Facebook event but didn't make it a private event. A friend of hers invited 500 people and from there it spread like wildfire throughout the Netherlands. The girl cancelled the event but other people took over, opened another event and dubbed it Project X. After a couple of days (and some attention from the media) about 30,000 people had signed up. After a call from the local authorities not to come, still about 5,000 people turned up. Things turned violent in the evening, resulting in over 30 injured.

For me, Project X shows that people don't need a reason to congregate and cause mayhem. (Note: Project X was 10 times larger, had a similar number of injuries, didn't have a death and was held in a similarly populated area.) Just like a control group uses a placebo to test whether it really is the ingredient in a medicine that works, I think this shows that racism isn't necessarily the active ingredient in this rally. One doesn't conclude from Project X that the Netherlands is still a very birthday-party-loving country; similarly, one shouldn't use Charlottesville to conclude that the US is still a very racist country. Even without racism you'd still expect 500 people to come to a rally like the one in Charlottesville. These are crazy people who happen to have become racists (just like sometimes you have crazy people who happen to be Muslim).

Looking at things in more detail, the differences between the rally and Project X seem to me to suggest that the people attending the Charlottesville rally are a smaller percentage of the population, but crazier. For example, one of the approx. 500 people at the Charlottesville rally came all the way from Canada (over 800 km), which shows a lot of dedication (/severe craziness). On the other hand, this also means that people might have been coming from all over the north-east of America, which suggests they are a really small proportion of the population.

Looking at the problem from the perspective of model 3 leads you to conclude: there is social unrest; there are people who do not feel recognized and are looking for some social connection and a place to belong. There is actually an informative TED talk by a reformed neo-nazi who explains that this was exactly how he felt, and was the reason he got into the movement [24]. He suggests that the way to solve this problem is to look at the problems these people have and help them solve those. Another example can be seen in the documentary [25], where you see the recordings of a US citizen slowly being sucked into joining some rebels in some war.

Looking from this perspective one sees completely different strategies and solutions for dealing with the people at the rally. One would expect that being very kind and respectful and letting them tell their story would be an effective tactic. Very different from what is currently happening. People screaming "black lives matter", permits being revoked and physical intimidation seem about as effective as the US sending nuclear submarines to patrol Russian shores in the hope of convincing the Russians of the American point of view. Really, what were the counter-protesters thinking?

4.2 People care more about being right than about being different

Assume you are the captain in charge of organizing an exploration of South America in the year 1525. You have three different plans, but you're unsure which to choose. Assume the three plans differ in the sea journey, the population you'll meet and the landscape. You do have 10 advisors: 8 of them are specialized in choosing the best sea route, 1 is specialized in the native population, and the last one is specialized in surviving the different landscapes. Because you're about to depart, you only have time to meet with 3 advisors. Whom do you choose to meet?

This shouldn't be too hard. You want the most complete information on all aspects of the journey, so you meet with 1 sailor, the anthropologist and the survival person. This leads us to conclude that a diversity of perspectives improves the overall quality of the decision. As easy as this might seem in theory, it is actually really hard in practice.

Firstly, for the captain, specialties usually aren't this well-defined. The problem feels more like "you have 10 advisors, who do you ask?", without any further information on the specialties of the advisors. So it is hard to tell how different a perspective a person has. Secondly, for the captain, it is often hard to tell who is right most often. Being different is great, but if that comes at the cost of being wrong most of the time, it's not that great. If it is hard to tell how likely someone is to be right (based on evidence), a captain might be tempted to judge someone's likelihood of being right by how often that person's opinion matches the general consensus. This results in captains actually NOT valuing different perspectives.

For the advisors it is hard because, for the reasons above, the captain often doesn't really see the value of a different opinion. So to be valued by the captain you want to be right as often as possible. Or if you cannot be right, you at least want to be with the majority, because then you will not stand out if you're wrong. So you speak to other people, choose the most promising approach to the problem (like everyone else) and thereby optimize the likelihood of recommending a good decision. This strategy is actually incorrect copy-pasting from a different context. The strategy of maximizing the likelihood of being correct (by, for example, talking to others) is great if you're the one making the decision, or if you're alone. It doesn't extend well to being in groups. Group decisions thrive on individuals exploring separate aspects of a problem, even if that is at the expense of the likelihood of any individual being correct.

So to sum up, as a captain: try to identify the people who have a different opinion but still manage to be correct a decent amount of the time. As an advisor: choose a different approach, one that will likely add something to the collective knowledge.
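To make this a bit more concrete, here is a tiny toy simulation (my own illustration with made-up numbers, not something from the sources in this article). Three advisors estimate some true quantity, say the length of the sea journey in days, and the captain simply averages their estimates. Each advisor is exactly as accurate on their own in both scenarios; the only thing that changes is how much of their error is shared because they all talked to each other and converged on the same approach.

# Toy simulation (hypothetical numbers): a captain averages the estimates of
# three equally accurate advisors; we compare independent errors against
# largely shared ("herd") errors.
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0   # the quantity being estimated (e.g. journey length in days)
TRIALS = 10_000
N_ADVISORS = 3
NOISE = 10.0         # each advisor's individual standard deviation

def group_error(shared_fraction):
    """Mean absolute error of the averaged estimate.

    shared_fraction says how much of each advisor's error is one common,
    shared mistake (0.0 = fully independent, 0.9 = mostly the same mistake).
    The mix is chosen so that each advisor's own accuracy stays the same.
    """
    errors = []
    for _ in range(TRIALS):
        common = random.gauss(0, NOISE)
        estimates = [
            TRUE_VALUE
            + shared_fraction * common
            + (1 - shared_fraction ** 2) ** 0.5 * random.gauss(0, NOISE)
            for _ in range(N_ADVISORS)
        ]
        errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    return statistics.mean(errors)

print("independent advisors:", round(group_error(0.0), 2))
print("herd-like advisors:  ", round(group_error(0.9), 2))
# In this setup the independent group typically ends up roughly 1.6 times
# closer to the truth, even though every individual advisor is exactly as
# accurate in both runs.

Nothing deep is going on here: averaging only cancels the part of the errors that is not shared, which is exactly why an advisor who explores a different aspect can be worth more to the group than an advisor who is individually a bit more accurate but thinks like everyone else.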

• Market theory (CAPM) is actually a lovely analogy to this. Stocks aren't valued solely on their return, but on a combination of their return and their independence from the world market.
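For the curious, the textbook CAPM relation (standard finance material, not something from this article's sources) says the same thing as a formula: the return investors demand from stock $i$ depends only on how the stock co-moves with the market, because the independent part of its risk can be diversified away.

\[
\mathbb{E}[R_i] = R_f + \beta_i \left( \mathbb{E}[R_m] - R_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
\]

Here $R_i$ is the stock's return, $R_m$ the market return and $R_f$ the risk-free rate. The stock's stand-alone variance does not appear anywhere, just like an advisor's stand-alone accuracy isn't the whole story.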

4.3 People care more about accuracy than about clarity

When explaining new concepts to people it is tempting to explain everything in great detail, with all the disclaimers and full nuance. This can result in a bored audience who miss the overall picture or simply don't understand. I think it is hardly ever useful to explain something in a way the audience doesn't understand: either explain it so they can understand or don't explain it at all.

Simplifying a message in such a way that the relevant part of the message is still accurate can be hard. It requires you to really understand what you are talking about, which part is essential and which is not. When you're not really intimate with the subject matter, everything might feel important. Furthermore, you need to understand your audience well. Some audiences understand very well that you're simplifying things, and repeatedly stating that you're simplifying is then unnecessary.

• A lesson I was taught when studying to become a math teacher is to teach a student one lesson at a time. Meaning: if you are teaching algebra and a student fails to properly use the calculator, you operate the calculator for the student. You're teaching algebra, and although the student also needs to learn to use the calculator, that is not the point of this lesson. By taking that problem away the student can really focus on (and learn) the algebra. The next lesson can be about the calculator. Focusing on both will often result in a confused student. In practice (with maths) you end up supporting the student with a lot of small things (using the calculator, not making small random errors, writing things down neatly, helping the student remember the thing he/she thought of a moment ago but forgot while using the calculator). So to sum up: it might be tempting to teach everything at once and let the student struggle with all these things, but by taking away all the struggles but one, a student will learn your lesson much better. Of course, this depends on the circumstances.
• In one of his books [26] Feynman explains quantum electrodynamics. Normally the prerequisite background knowledge includes field theory, electromagnetism, the weak force, the strong force, quantum mechanics, special relativity, general relativity, gauge theory, calculus of residues, Lagrangian and Hamiltonian mechanics, etc. Feynman, however, writes his book for an intelligent audience with none of the usual prior knowledge whatsoever. Contrary to many popular-science books he doesn't oversimplify any concepts and everything he says is accurate (he would know, he won a Nobel Prize in this field). So nothing needs to be 'unlearned' when you move on to more in-depth quantum electrodynamics.

He does this by introducing watches and mirrors and everyday objects to replace the theoretical objects in the relevant parts. He does this by removing nearly all equations and formulas, which he considers the irrelevant part. He does this by replacing integrals with words. In the end the book is indeed very readable; not easy, but doable.

This great feat of simplification shows me that any subject can be simplified for a reasonably smart audience, if only one chooses what not to explain.

4.4 Word creep


Sometimes, in the process of using words, words (or sentences) gradually shift in meaning. Words can have different meanings, interpretations and connotations depending on the context. So while each sentence by itself might sound very plausible (or even undeniable), putting them together might not make sense.

This play with words can sometimes be accidental, but in my experience it is most frequently (subconsciously) used by people who would rather win the argument than find the truth.

Word creep is related to a large number of existing fallacies: fallacies of definition, weasel words, loaded labels, the referential fallacy.

Here are 7 of the most common types of word creep (in my experience), with examples.
• One form of word creep: shifting between the implied meaning and the literal interpretation of a word; shifting between common usage and official definition.

For example, feminists frequently do this when they say that feminism is "equal rights for men and women". This is the official, literal definition and therefore correct. For some reason the conversation then always moves to women's rights and problems. So the common usage of the word feminism is: "let us spend time on improving rights for women, because they need it more than men do". If you disagree that this is the common usage: next time you meet a self-identified feminist, ask her to name 3 areas in which men are at an unfair disadvantage (a fair question if you really care about both sides equally); 99% of them will not have an answer. A full list is easy to find, for example [27].

Of course this is not a problem in itself; it only becomes a problem if it is used for an argument of the form "You are in favor of equal rights, therefore you are a feminist, and as a feminist you should […something in the common usage of feminism…]".
• Another example of word creep: slowly changing the meaning of a word.

Alice: "I think we really need feminism: there is a big wage gap, females are being objectified on billboards in the streets and we are living in a patriarchy."
Bob: "I'm not sure things are as bad as you think, I haven't seen any data supporting the wage gap."
Alice: "You don't think women are treated unfairly? Well, just look at what happens in Africa. You don't think they are treated unfairly there?"

The word feminism starts out as being about "women in the western world" and moves to "women in poor countries" when desired. Taken together, Alice's sentences become: "Women are treated really unfairly in Africa, therefore I think we should do something about the wage gap in our country."
• Another way of word creep: words which have many components.

Alice: "I'm the best at X! The other people don't even come close!"
Bob: "That is a really arrogant thing to say!"

Looking down upon other people can be called arrogant. But arrogant is also often associated with being overconfident. So because arrogant carries both meanings, by using the word arrogant Bob actually attaches the second meaning (overconfident) to Alice's statement. So, to some extent, Bob implies that Alice is overconfident. But is that really what Bob intends to say? If so, Bob should have said so. If he means something else (e.g. "wow, pay attention to what you're saying, people may start to dislike you for your tone") he should have said that instead.
• Grey-thinking in combination with fundamental uncertainty.


Alice: […]
Bob: "But are you really sure?"
Alice: "Yeah."
Bob: "But can we ever be absolutely sure? Are you 100% sure?"
Alice: "Well no, but…"
Bob: "So you're not sure!"
Bob: "So it is fair to say we do not know. I think there are definitely two sides to this story. I therefore think we should approach this cautiously."
Alice: "Well, no, let's…"
Bob: "You're so reckless. Anybody else who thinks approaching this cautiously is a bad idea?"

"Yeah, but I think it really depends on the circumstances", "Yeah, but I think there are many exceptions" or "I think something good can be said for both sides" are (when not followed by why this is relevant) forms of obfuscation which I call grey-thinking, because people overusing these sentences don't deal in black and white. This is related to the fallacy called […]. Although this is annoying in itself, it becomes even more annoying when combined with a worldview in which a statement has only 3 options: true, false, uncertain. Since these people are almost never certain about something being true or false, the only state that remains is uncertain.

This worldview leads some people to a permanent state of greyness: everything is uncertain. These people don’t do 50 shades of grey, everything is just grey (uncertain). This is related to the fallacy of the lost contrast. The problem with this grey worldview is that it leads groups to avoid making choices, to delay stuff because ‘extra research is needed’ and to unnecessarily complicate discussion.

In reality there are many degrees of certainty, and making these explicit is the best thing to do. And I think it is usually completely fair to say something is true if you are over 99% sure (no need for nuance usually).
• Bullying example 1: speaking for the group. In my time as a high-school teacher I was interested in understanding bullying behavior and bullying tactics. One thing I noticed was that there are usually a very small number of people doing the bullying and a large number of bystanders. The people doing the bullying would always use sentences like "we don't like you" or "nobody thinks your jokes are funny". Since the bystanders didn't say anything, everybody seemed to assume that this was indeed the opinion of the group, even though I as a teacher knew it was not. This is related to argumentum ad populum (where a commonly held opinion is used as an argument in favor of that commonly held opinion); it differs in that here the supposedly common opinion isn't even actually true (or at most self-fulfilling). Other examples might be: "I hear what you are saying, but I also frequently hear…".
• Bullying example 2: it is not a joke. I always wondered why I saw so little bullying occur. After a while I figured it out: I did see it, but I didn't recognize it as bullying. In instructive movies about bullying (or if you are the one being bullied) it always looks like a never-ending stream of stupid remarks. As a bystander this is not what bullying looks like. As a bystander you actually catch only a small part of the movie (e.g. one math lesson, as a teacher) and the people doing it will say: "it was just a joke". This makes it really easy to underestimate the seriousness. Really easy to forget about it after the lesson because you have many things to do. Really easy for these strings of jokes to remain under the radar so that you never connect all the dots. So in general, a good rule of thumb: no, it's not a joke.


5. Conclusion

I hope you enjoyed reading this. I hope you'll have fun searching for these fallacies when you are in meetings, reading articles or talking to others. If in this process you manage to find very informal fallacies I missed, or if you think one on this list is rather uncommon, let me know! ([email protected])

Further reading

If you liked the contents of this article and you're still hungry for more, I've got a couple of suggestions. But first: if there is a particular part of this document you liked most, you might be most interested in the references given in that chapter.

A couple of books/articles didn't make it into any example but are well worth it.

• An insight into the investigation of the causes of the Challenger Space Shuttle crash by Richard Feynman. Especially the appendix (the Challenger report) is interesting, but the whole book is pretty good. Feynman, R. P., & Leighton, R. (1988). "What do you care what other people think?": Further adventures of a curious character. W. W. Norton & Company.
• A friend of mine wrote a marvelous article on errors in statistics. It is in Dutch and can be found here: www.vdwaals.nl/pdf/n/n-17 page 32.
• If you aren't only interested in how NOT to do things, medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d is a list containing a lot of mental models on how to think (or not to think) in certain circumstances. Seems like a wonderful list.
• I once followed an online course by Dan Ariely (A Beginner's Guide to Irrational Behavior) which was really interesting as it dived deeply into irrational behavior. This course is no longer available, but he did write a book (I didn't read it): Ariely, D. (2010). Predictably Irrational: The Hidden Forces That Shape Our Decisions. ISBN 9780061353239.
• Harry Potter and the Methods of Rationality. A Harry Potter fan-fiction book creating an alternative world in which Harry Potter is a scientist studying wizardry. It is meant to include many fallacies. I didn't read it (I'm afraid of ruining my Harry Potter experience) but it seems really nice and can be found at http://www.hpmor.com/
• The Catalogue of Anti-Male Shaming Tactics, www.mgtow.com/shaming-tactics/, is a long list of informal fallacies allegedly often employed by feminists, and the best responses to them.
• I find Wikipedia has a lot of experience in writing articles in an unbiased and clear way. If you are interested in this, they have a couple of pretty interesting guidelines for writing an article: en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Words_to_watch


References

[1] en.wikipedia.org/wiki/List_of_fallacies
[2] van den Bosch, A., Boucherie, R., Driessen, T., Scheinhardt, W., & Vink-Timmer, J. (2011). A Mathematician's view on normative liability studies. www.utwente.nl/en/eemcs/sor/graduates/MSc/reports/Bosch.pdf
[3] Hawken, P. (2017). Drawdown: The Most Comprehensive Plan Ever Proposed to Roll Back Global Warming. Penguin.
[4] van den Bosch, A., Gibbens, R., & Scheinhardt, W. (2010). Fair Ramp Metering. elivian.nl/ref/fair ramp metering.pdf Chapter 4 is probably what you are looking for.
[5] "SCIgen - An Automatic CS Paper Generator" pdos.csail.mit.edu/archive/scigen/
[6] "Springer and Université Joseph Fourier release SciDetect to discover fake scientific papers", 23 March 2015, www.springer.com/gp/about-springer/media/press-releases/corporate/scidetect/54166
[7] www.elsewhere.org/pomo/ Refresh the page for a new essay.
[8] "Science's big scandal – fake peer review: scientific journals publish fraudulent, plagiarized or nonsense", Charles Seife, April 1, 2015. http://www.slate.com/articles/health_and_science/science/2015/04/fake_peer_review_scientific_journals_publish_fraudulent_plagiarized_or_nonsense.html
[9] "Wing Chun Kung Fu vs MMA - Trending Videos In China Commentary (Xu Xiaodong is back)" www.youtube.com/watch?v=Y9YdSFS8Ejc
[10] "Psychic Cringe Fails 3 - Touchless KNOCKOUT (Chi) Fails" www.youtube.com/watch?v=5Etblbd3r5I
[11] Harari, Y. N., & Perkins, D. (2017). Sapiens: A Brief History of Humankind. HarperCollins.
[12] Tetlock, P. E., & Gardner, D. (2016). Superforecasting: The Art and Science of Prediction. Random House.
[13] Wikipedia: Evidence-based medicine. en.wikipedia.org/wiki/Evidence-based_medicine
[14] Rumelt, R. P. (2012). Good Strategy/Bad Strategy: The Difference and Why It Matters.
[15] "Singer: the drowning child" www.youtube.com/watch?v=rBMZiaD-OYo
[16] De arbeidsmarkt voor leraren vo 2018-2023, Regio Utrecht, 4 januari 2018, dr. Hendri Adriaens, dr.ir. Peter Fontein. www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2015/06/17/regionale-arbeidsmarktramingen-voor-leraren/De+arbeidsmarkt+voor_leraren+vo+2018-2023+-+regio+Utrecht.PDF
[17] edition.cnn.com/2017/08/21/health/teacher-shortage-data-trnd/index.html
[18] Stewart, R. (2009). Occupational Hazards: My Time Governing in Iraq. Pan Macmillan.
[19] www.rorystewart.co.uk/looking-back-on-iraq/
[20] "Donald Trump doesn't understand how "Clean Coal" works (from Last Week Tonight)" www.youtube.com/watch?v=nz0-DS0r30w Note that this is a funny clip, but it contains the original statement.
[21] "Sing Sing: Norway jail with music studio, cooking classes" www.youtube.com/watch?v=Dg8D8Vh0cd8 Also fun: "The Norden - Nordic prisons (excerpt)" www.youtube.com/watch?v=2g56susrNQY
[22] "Charlottesville: Race and Terror – VICE News Tonight on HBO" www.youtube.com/watch?v=RIrcB1sAN8I Warning: graphic. The first 15 minutes are the most interesting.
[23] "COLLEGE KIDS REACT TO RACISM IN AMERICA" www.youtube.com/watch?v=uVvbh2_Oh94
[24] "My descent into America's neo-Nazi movement & how I got out | Christian Picciolini | TEDxMileHigh" www.youtube.com/watch?v=SSH5EY-W5oM
[25] Point and Shoot (2014). Director: Marshall Curry. Stars: Matt Sager, Matthew Vandyke.
[26] Feynman, R. P. (2006). QED: The Strange Theory of Light and Matter. Princeton University Press.
[27] en.wikipedia.org/wiki/Men's_rights_movement#Issues


Appendix - four hours at the Dutch impact conference

Yesterday I attended the Dutch Impact Conference. It quickly struck me that: i. it was pretty boring and ii. people used a lot of difficult words. Since I was writing this piece anyway, I thought it was a nice opportunity to find out how much was actually being said, or whether the difficult words were really just covering up incompetence.

My conclusion was that the average level was not much higher or lower than you would expect. There were, however, something like 3 talks which literally said nothing but difficult words. (Literally translated, one presentation said: "You have to approach things integrally. You can't just add things up. We have a picture to go with this (a visual representation of what had already been said). So you can't just add things up.")

Lingo Bingo -> the most common words, which are easy to combine and which make you look cool.
1. Integrated (e.g. integrated reporting)
2. Impact (e.g. impact investing)
3. Societal
4. Circular (hot! Fairly new, but use it especially if you're really up-to-date)
5. … capital (e.g. social capital, intellectual capital, etc.)
6. Coalition/conglomerate/partnership/community/enterprise (e.g. multi-stakeholder-partnership)
7. Scope (/framework)
8. Stakeholders / actors
9. Monetization (/quantification)
10. Multi-… (e.g. multi-actor playing field)
11. Protocol/process/tool/methodology
12. Life-cycle/long-view/long term
13. Steering
14. Consumer / customer
15. Social enterprises
16. Value
17. Dimension / element (/ perspective <- less cool, but also works)
18. Sustainable
19. Data
20. De facto
21. Making (things) insightful
22. Creating
23. Certified (/audited)
24. Transition

I'm pretty sure that if I were well dressed I could sell the following sentences just fine, and would even get a pat on the back afterwards: "At [your company] we consider steering on processes very important. Multi-stakeholder-partnerships with a broad and integrated scope are therefore central to ultimately delivering as much societal value as possible over the whole life-cycle, not only for ourselves but also for the customer. Developing a methodology for the integrated monetization of the non-financial dimension is therefore key, and of course that is only possible if we also activate the already existing social enterprises for this."

Words which were often used unnecessarily, or for which perfectly normal alternatives exist: monopolist, short recap (summary), expedition, quantify, monetize, conglomerate, perspective, institutional, commit, brainstorm, interplay, nowadays ("hedentendage"), "echter" (instead of the plain word for "but"), exercise, profit maximization.

The most difficult words I came across: human capital, GRI indicators, scope-of-work, SDG charter, IP&L work session, multi-stakeholder initiatives, neo-classical economic theory, societal performance, positive-wellbeing surplus, slice and dice, Reporting 3.0, raw-materials passport, state-of-the-art investing, multicapital accounting, materiality matrix, value proposition, "in this broad playing field", circular activities, counting the bees, natural capital protocol, net present value, stress models, financial return, non-monetary dimension, Anglo-Saxon model, theory of change, social beliefs, executive education, inclusive society, changes in the customer base, financial "opstalmeester", status holders, planning your boundaries, job-ready, making the mortgage landscape more sustainable, the current classical economy, societal mandate, irresistible proposition, hard currency, budget coach, data-driven, boardrooms, utility companies, social enterprises, bankable, the new reality, culture impact society, seat-funding, value extraction, RVO, diversification, one-party politics.

We see that turning everything into nouns does well here, as does the Englishification of words. To put it all in context, I also wrote down a number of gems of complete sentences:

"So that you can also place societal value creation in context."
"Just to make the link to the sustainable development goals…"
"Is in the enlightened self-interest of shareholders."
"We are steering even more on awareness in order to reach the critical mass of front-runners, and we want to do this through a light coalition arrangement."
"Important components are the academic council and the public consultation."
"In my view this also applies to creating value."
"With the learnings from the mortgages in hand…"
"Being able to steer those networks in the energy transition at all."
"Making sure those burdens don't come down disproportionately on a small group."
"The impact element is emphasized much more in the stakeholder conversation."
"We made another 2 deep dives for that."
"Then we thought: for impact investing we need separate guidance."
"… and as an add-on we then do…"
"Besides our intensive support in humanitarian crisis areas…"
"They received a low fee, which made them the ones who paid the price."
"We do this by means of a robust process which is also audited every year."
"We are going from measuring to steering."
"We have the wind with us and the current against us."
"Aren't you getting too big for your boots then? Aren't you too exposed?"
"If something is important, then you also have to make it important."
"So we put a dedicated controller on the impact measurement."
"International trade chains are hourglasses in which only a few companies control all the pricing."
"With which you can substantiate this data…"
"And with that seat-funding they could also start functioning as a cooperative."
"You can only do something about this instrumentally."
"You can see the impact starting to roll." (how does impact roll???)
"Then I arrive here at a final point."
"Then we can support the real economy."
"Will you mail that to Adriaan so that we do capture them?"
"Before we go to the closing."

Important with these sentences, and really with everything listed here, is that you act as if the words are completely common and usual and as if you use them daily. Above all, do not explain what you mean! Extra bonus points if you can squeeze a few more words into (unnecessary) subordinate clauses, extra-curricular subclauses which don't really need to make much sense anyway.
