Forum for Multidisciplinary Thinking
Read Volume 73: 19th April 2015

Contents

Existential Risk: A Conversation With Jaan Tallinn

From “Economic Man” to Behavioral Economics By Justin Fox

Why C.E.O. Pay Reform Failed By James Surowiecki

The anatomy of discovery: a case study By David Field

Chetan Parikh, Prashant Patel, Arpit Ranka

Compiled & Edited by Prashant Patel

“The best thing a human being can do is to help another human being know more.” — Charlie Munger


Existential Risk: A Conversation With Jaan Tallinn

JAAN TALLINN is a co-founder of The Centre for the Study of Existential Risk at the University of Cambridge, UK, as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype.

INTRODUCTION by Max Tegmark

I find Jaan Tallinn remarkable in more ways than one. His rags-to-riches entrepreneur story is inspiring in its own right, starting behind the Iron Curtain and ending up connecting the world with Skype. How many times have you skyped? How many people do you know who created a new verb?

Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.

Already after a few short years, Jaan’s impact is remarkable. He is a key supporter of a global network of non-profit existential risk organizations including The Future of Humanity Institute, The Machine Intelligence Research Institute, The Institute, The Centre for the Study of Existential Risk at University of Cambridge, and The Future of Life Institute, the last two of which he co-founded.

I’ve had the pleasure to work with him on The Future of Life Institute from day one, and if you’ve heard of our recent conference, open letter and well-funded research program on keeping artificial intelligence beneficial, then I’d like to make clear that none of this would have happened if it weren’t for Jaan’s support. If you’re an intelligent life form reading this text millions of years from now and marvel at how life is flourishing, you may owe your existence to Jaan.

MAX TEGMARK is a Physicist, MIT; Researcher, Precision Cosmology; Founder, Future of Life Institute; Author, Our Mathematical Universe.

EXISTENTIAL RISK


I split my activity between various organizations. I don't have one big umbrella organization that I represent. I use various commercial organizations and investment companies such as Metaplanet Holdings, which is my primary investment vehicle, to invest in various startups, including artificial intelligence companies. Then I have one nonprofit foundation called Solenum Foundation that I use to support various so-called existential risk organizations around the world.

I was born behind the Iron Curtain, in Soviet-occupied Estonia, and looked forward to a pretty bleak life in some scientific institute trying to figure out how to kill more Americans. Luckily though, the Soviet Union collapsed shortly before I was ready for independent life. The year 1990, when I went to university, was also in the middle of big turmoil, when the Soviet Union collapsed and various countries, including Estonia, became independent.

When I went to university, I studied physics there. The reason I studied physics was that I was into computer programming already, since high school or even a little bit earlier, so I thought I should expand my horizons a little bit. And I do think it has helped me quite a lot. If you look around in the so-called existential risk ecosystem that I support, there's, I would say, an over-representation of physicists, because physics helps you to see the world in a neutral manner. You have a curiosity that helps you build a model of the world as it is, rather than modeling the world in a way that suits your predispositions.

After having studied physics, I worked with computers throughout my entire time at university. We jokingly called ourselves the "computer games industry of Estonia," because we were really the only commercial computer games development studio in Estonia. After spending a decade developing computer games, I gave one talk where I described my life as surfing Moore's Law. Interesting turning points in my life have coincided with things that Moore's Law has made possible or has made no longer feasible.

For example, we exited the computer games industry when graphics cards came along, thus enabling much more powerful storytelling capabilities and therefore reducing the importance that programming played in computer games. Because we, being mostly good at programming, didn't have a good comparative advantage in this new world, we ended up exiting the computer games business and going into Internet programming. At this point we met Niklas Zennström and Janus Friis, who eventually became the main founders of Skype. Together with them, we first did the Kazaa file sharing application, which got us into a bunch of legal trouble. After Kazaa, we did a few smaller projects, and eventually ended up doing Skype.

The way we got into the games industry was almost by accident. The nice thing about starting your computer career with computers that are really slow is that you have to do work in order to make them do something interesting. These days I see that my children, for example, have a tough time starting programming because YouTube is just one click away. Just trying to figure out how to make interesting games was a natural step of evolution for a programmer.

It was in 1989 when I teamed up with a couple of my classmates to develop a simple graphical action-based computer game, and we managed to sell that game to Sweden. We earned a little hard currency—Swedish kronor—as a result, which, at the time of the collapse of the Soviet Union when the Russian ruble was in freefall, was a fortune. I think we made $5,000 based on that, which was an incredibly big sum back then. Hence, we were hooked and thought, okay, we can do something that other people are willing to pay money for, and ended up developing bigger and bigger games and spent about one decade in games.

People do ask me quite a lot, what is the secret of Estonia when it comes to advancing digital ideas, and programming and technology in general. It's hard to track this down to one cause, but there are a few contributing factors. One thing was that even during Soviet times there was this big scientific center called the Institute of Cybernetics, in Tallinn, that hosts many scientists who developed things like expert systems and early precursors of AI.

I'm proud to say that Skype has played quite a big role in the Estonian startup ecosphere, for various reasons. One is that Estonia is a small place, so people know each other. I half-jokingly say that quite a lot of people just knew "the Skype boys," as we were called in Estonia, and they thought: if they can do it, well, so can I. The other nice side effect of Skype is that it's a fairly big company in the Estonian context, so it works as a training ground. A lot of people meet there and get their experience working there, working in an international context. Skype is no longer a startup, but we used to have a strong startup culture there. Even now, I have invested in three or four companies founded by Skype alumni, so there's still a strong startup culture there.

Finally, the Estonian government has gotten into a nice positive feedback loop, where they have done a few digital innovations in the domain of e-governance. They had gotten very good positive feedback based on their achievements in things like digital voting and paperless government office. Whenever humans get into a positive feedback loop, they want to do more of the things that they get praise for. Indeed, the latest project was called Estonian digital E-residency, so you can go to an Estonian consulate—as far as I understand—and get a chip card that will give you the ability to give digital signatures that have the power of law in Estonia, and hence in the EU.

Skype started as a project within another company. The other company was called Joltid, founded by Niklas Zennström and Janus Friis. Within that company, we first did various projects, including the back end for Kazaa file sharing network. Skype was started in late 2002 as a project within that company, but just a few months later it was spun off into a separate company, and seven people got founding shares in this new company called Skyper Limited. The name Skyper came from "sky peer", because the original idea wasn't actually to do a Voice over IP client. The original idea was to do Wi-Fi sharing, but Skyper.net ... or was it Skyper.com? ... was taken, so we ended up chopping the 'R' off from the end.

Skype was not the first VoIP client. In fact, when we started with this idea of developing a Wi-Fi sharing network—Skyper—our thinking was that clearly there is VoIP software out there that we should import and implement on top of our Wi-Fi sharing network to give people an incentive to join our network. However, after having evaluated the existing offerings that were out there, we determined that none of them worked properly behind firewalls. The state of Voice over IP back then was that the latest thing, which was called SIP—Session Initiation Protocol—was a new standard that was roughly modeled after email. The problem was that, just like with email, you need an ISP or some third party to set you up with this. You can't just start an email program and be immediately connected; you need to connect that email program to some server, which creates a chicken-and-egg situation.

In the VoIP world, where there were no Web servers at that point, we figured out that we needed a peer-to-peer solution where people could bootstrap the network without being reliant on their ISPs and installing some gateways or things. After having empirically determined that the existing Voice over IP, although sufficient for our purposes technically, wouldn't work because of the architectural requirements—the chicken and egg situation—we decided, okay, let's do our own Voice over IP program. Eventually we ended up dropping this Wi-Fi sharing network idea altogether and just focused on the Voice over IP.

Skype has been sold three or four times, depending on how you count. The other founders and I sold our shares during the first sale, which was to eBay in September 2005—two years after we launched Skype. After that, eBay sold the majority of the shares to a private equity company and a consortium of VCs. It was 2010 or 2011 when Microsoft bought the whole thing.

From Skype, I eased out gradually. There was no sharp point where I left Skype. One moment where I significantly reduced my involvement in Skype was 2009, when there was this big lawsuit between the founders of Skype and the private equity companies that bought shares of Skype from eBay. There was some technology licensing issue. Because I ended up on the other side of that lawsuit than Skype, my day-to-day activities in Skype were hindered. When I came back to Skype half a year later, the company had moved along quite a lot, so it was hard for me to fit right back in. I ended up just gradually easing out from the day-to-day activities.

Already, during the lawsuit that Skype had, I was looking out for what other important things there might be to do in the world, and I ended up reading the writings of Eliezer Yudkowsky, an AI researcher in California. I found him very interesting, but he was also making a very important argument: that the default outcome from AI is not necessarily good. We have to put in effort in order to make the outcomes from AI, once it gets powerful enough, good. Once I got interested in these topics, I was more and more willing to contribute my time and money to advancing what's called the existential risk ecosystem—people thinking about the risks not just from AI technology but also from other technologies such as synthetic biology and nanotechnology.

When it comes to existential risks, there are two big categories. One category is natural existential risks such as super volcanoes, for example, or asteroid impacts. Every 10 to 100 million years, an asteroid big enough to potentially destroy the planet comes along. Now, the nice thing about natural existential risks is that these are risks that we have lived with for our entire history, so they're not necessarily getting bigger over time.

The other category is technological existential risks—risks from either deliberate or accidental misapplication of technology of ever-increasing power. As we know, there's an exponential increase in the power of computers and other technology. We are entering uncharted territory in this century, and therefore, foresight is important to have. We need to figure out how to steer the technological progress to ensure safe outcomes.

I started engaging with the existential risk ecosystem already in 2009. I remember meeting up with Yudkowsky, then starting to engage with other people in the existential risk community and seeing how I could help. First, I started by donating money, but eventually ended up supporting more and more organizations and doing a "cross-pollination" between those organizations by introducing people and making sure that their activities are more coordinated than they otherwise would be. Also, I finally ended up co-founding two new organizations. At Cambridge University, we have an organization called the Centre for the Study of Existential Risk, co-founded with Huw Price, who is a professor of philosophy at Cambridge University, and Martin Rees, who back then used to be the Master of Trinity College, and is a very well-known scientist who has written a book about existential risks himself. The other organization that I helped to co-found is at MIT, here in the US. It's called the Future of Life Institute, and it's led by Max Tegmark, who is a well-known physicist at MIT.

There's an interesting point to be made about the role of computer science. Obviously I'm biased because I'm a computer person, but I have found that there is a very fertile intersection of computer science and philosophy. The reason is that throughout history philosophy has leaned on human intuitions. Analytic philosophy tries to make concepts precise, but in doing so philosophers come up with examples and counterexamples to delineate the concepts, and these still lean on human intuitions.

We know from psychological research that human intuitions aren't fundamental entities in the world. If you do different experiments in different cultures, for example, people have completely different intuitions. They even see different visual illusions. Daniel Dennett has said that computers keep philosophy honest. When you make a philosophical argument and you don't lean on intuitions but on programs—you basically point to a program and say, "This is what I mean"—you're on much, much more solid ground, because you're no longer influenced by what intuition tells humans and how it differs from culture to culture.

Human philosophy has had thousands of years to come up with interesting passages of thought and explore the thought space, but now we need answers. And these answers have to be there in a decade or two. These answers have to be in the form of computer code.

Elon Musk said in his interview at the TED conference a couple of years ago that there are two kinds of thinking. All of humanity, most of the time, engages in what you might call metaphorical thinking, or analogy-based thinking. They bring in metaphors from different domains and then apply them to a domain that they want to analyze, which is something they do intuitively. It's quick, cheap, but it's imprecise. The other kind of thinking is reasoning from first principles. It's slow, painful, and most people don't do it, but reasoning from first principles is really the only way we can deal with unforeseen things in a sufficiently rigorous manner. For example, sending a man to the moon, or creating a rocket. If it hasn't been done before, we can't just use our knowledge. We can't just think about "how would I behave if I were a rocket" and then go from there. You have to do the calculations. The thing with existential risks is that it's the same. It's hard to reason about them, these things that have never happened. But they're incredibly important, and you have to engage in this slow and laborious process of listening to the arguments and not pattern-matching them to things that you think might be relevant.

The reason why I'm engaged in trying to lower existential risks has to do with the fact that I'm a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about—in the palette of actions that you have—what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn't make a significant difference in these areas.

When I found that there is this massively underappreciated topic of existential risks, I saw immediately that I could make a significant difference there, first by lending my reputation to these arguments and bringing more credibility to them. I basically started with taking those arguments, internalizing them, then repackaging them in my own words, and then using my street credibility to give talks, have discussions, and meet with people to talk about these issues. As a result of that activity, we're now in a much better position, where we do have reputationally very strong organizations at Cambridge University and MIT that are associated with advancing those topics.

Over the last six years or so there has been an interesting evolution of the existential risk arguments and the perception of those arguments. While it is true, especially in the beginning, that these kinds of arguments tend to attract cranks, there is an important scientific argument there, which is basically saying that technology is getting more and more powerful. Technology is neutral. The only reason why we see technology being good is that there is a feedback mechanism between technology and the market. If you develop technology that's aligned with human values, the market rewards you. However, once technology gets more and more powerful, or if it's developed outside of a market context, for example in the military, then you cannot automatically rely on this market mechanism to steer the course of technology. You have to think ahead. This is a general argument that can apply to synthetic biology, artificial intelligence, nanotechnology, and so on.

One good example is the report LA-602, which was developed within the Manhattan Project, six months before the first nuclear test. They did a scientific analysis of the chances of creating a runaway process in the atmosphere that would burn up the atmosphere and thus destroy the earth. It's the first solid example of existential risk research that humanity has done.

Really, what I am trying to advance is more reports like that. Nuclear technology is not the last potentially disastrous technology that humans are going to invent. In my view, it's very, very dangerous when people say, "Oh, these people are cranks." You’re basically lumping together those Manhattan Project scientists who developed solid scientific analysis that's clearly beneficial for humanity, and some people who are just clearly crazy and are predicting the end of the world for no reason at all.

It's too early to tell right now what kind of societal structures we need to contain these technologies once the market mechanism is no longer powerful enough to contain them. At this stage, we need more research. There's a research agenda coming out pretty soon that represents a consensus between the AI safety community and the AI research community on research that is not necessarily commercially motivated but that needs to be done if you want to steer the course—if you want to make sure that the technology is beneficial in the sense that it's aligned with human values, and thus gives us a better future, the way we think the future should be. The AI should also be robust, in the sense that it wouldn't accidentally create situations where, even though we developed it with the best intentions, it would still veer off course and give us a disaster.

There are several technological existential risks. An example was the nuclear weapons before the first nuclear test was done. It wasn't clear whether this was something safe to do on this planet or not. Similarly, as we get more and more powerful technology, we want to think about the potentially catastrophic side effects. It's fairly easy for everyone to imagine that once we get synthetic biology, it becomes much easier to construct organisms or viruses that might be much more robust against human defenses.

I was just talking about technological existential risks in general. One of those technological existential risks could potentially be artificial intelligence. When I talk about AI risks to people I sometimes ask them two questions. First, do you have children? Second, can you program computers? To people who have children, I can make the point that their children are part of humanity; hence, they can't treat humanity as an abstract object and say things like, "Perhaps humanity doesn't deserve to survive," because their children are part of it, and they're saying that their children don't deserve to survive, which is hardly what they mean.

But the reason why I ask them whether they can program computers is to find out whether I can talk to them about AI in the language of what it really is—a computer program. To people who are not computer programmers, I can't talk in that exact language. I have to use metaphors, which are necessarily imprecise. People who don't program don't know what computer programs, and hence AI, really are.

One of the easiest arguments to make is: look around. What you see in the world is a world of human designs and human dominance. The reason why humans dominate this planet has nothing to do with our speed or our manipulators. It has to do with intelligence, however we define it. The thing about AI is, if you're creating machines that are more and more intelligent, you don't want to inadvertently end up in the situation that gorillas are in these days—where you have an agent smarter than you dominating the environment.

As Stuart Russell points out in his commentary on edge.org, the worry with AIs isn't necessarily that they would be malevolent or angry at humans. The worry that we need to think through and do research about is that they will get more and more competent. If we have a system that's very competent at steering the world toward something that we don't exactly want, how do we prevent the world from ending up in a place that we don't exactly want? We need to solve two challenges. One is to ensure that AI is beneficial, in the sense that increasing its competence contributes to the best outcomes as we humans see them. Second, we have to ensure that AI is robust, meaning that once it starts developing its own technologies, once it starts developing further generations of AIs, it wouldn't drift from the course that we want it to stick to.

When I say we, a lot of times I really mean humanity. It's not chimpanzees who are developing these technologies. It's humans who are developing the technology. If I want to zoom in and narrow it down, then I would say technology developers, people who are funding technologies, and people who are regulating technologies. More generally, everyone who is on the causal path of new technologies being developed is in some way responsible for ensuring that the technologies brought into existence as a result of their efforts are beneficial in the long term for humanity.

I would say that I don't have any favorites, or any particular techniques within the domain of AI that I'm particularly worried about. First of all, I'm much calmer about these things, perhaps by virtue of just having had longer exposure to AI companies and the people who develop AI. I know that they are well-meaning people with good integrity.

Personally, I think the most important research we need to advance is how to analyze the consequences of bringing about very competent decision-making systems, to always ensure that we have some degree of control over them and that we won't just end up in a situation where this thing is loose and there's nothing we can do about it.

There is some research that can be done and has been proposed. The technical term for this is corrigibility. Most of the technology these days is developed in an iterative manner: you create the first version of technology, you see what's wrong with it, you create the next version, and the next version, next version. Each next version tends to be better, in some dimension at least. But the thing is that once you create autonomous systems, and once those autonomous systems get powerful enough to model the activities of their creators, to put it simply, once they figure out that there's an off switch, they have instrumental reasons to disable that off switch. We need to think through how we construct ever more competent systems to ensure that the outcomes are beneficial.

When it comes to control of the future, it eventually ends up in philosophy and moral philosophy, and thinking about topics like how should conflicting interests be reconciled when there are 7 billion, perhaps 10 billion people on this planet. And how should we take into account the interests of animals, for example, and the ecosystem in general. Humanity does not know the answer to the question: what do we really want from the long-term future? And again, in a situation where we might hand off the control to machines, it's something that we need to get right.

Source: edge.org


From “Economic Man” to Behavioral Economics By Justin Fox

When we make decisions, we make mistakes. We all know this from personal experience, of course. But just in case we didn’t, a seemingly unending stream of experimental evidence in recent years has documented the human penchant for error. This line of research—dubbed heuristics and biases, although you may be more familiar with its offshoot, behavioral economics—has become the dominant academic approach to understanding decisions. Its practitioners have had a major influence on business, government, and financial markets. Their books—Predictably Irrational; Thinking, Fast and Slow; and Nudge, to name three of the most important—have suffused popular culture.

So far, so good. This research has been enormously informative and valuable. Our world, and our understanding of decision making, would be much poorer without it.

It is not, however, the only useful way to think about making decisions. Even if you restrict your view to the academic discussion, there are three distinct schools of thought. Although heuristics and biases is currently dominant, for the past half century it has interacted with and sometimes battled with the other two, one of which has a formal name—decision analysis—and the other of which can perhaps best be characterized as demonstrating that we humans aren’t as dumb as we look.

Adherents of the three schools have engaged in fierce debates, and although things have settled down lately, major differences persist. This isn’t like David Lodge’s aphorism about academic politics being so vicious because the stakes are so small. Decision making is important, and decision scholars have had real influence.

This article briefly tells the story of where the different streams arose and how they have interacted, beginning with the explosion of interest in the field during and after World War II (for a longer view, see “A Brief History of Decision Making,” by Leigh Buchanan and Andrew O’Connell, HBR, January 2006). The goal is to make you a more informed consumer of decision advice—which just might make you a better decision maker.

The Rational Revolution

During World War II statisticians and others who knew their way around probabilities (mathematicians, physicists, economists) played an unprecedented and crucial role in the Allied effort. They used analytical means—known as operational research in the UK and operations research on this side of the Atlantic—to improve quality control in manufacturing, route ships more safely across the ocean, figure out how many pieces antiaircraft shells should break into when they exploded, and crack the Germans’ codes.

After the war hopes were high that this logical, statistical approach would transform other fields. One famous product of this ambition was the nuclear doctrine of mutual assured destruction. Another was decision analysis, which in its simplest form amounts to (1) formulating a problem, (2) listing the possible courses of action, and (3) systematically assessing each option. Historical precedents existed—Benjamin Franklin had written in the 1770s of using a “Moral or Prudential Algebra” to compare options and make choices. But by the 1950s there was tremendous interest in developing a standard approach to weighing options in an uncertain future.

The mathematician John von Neumann, who coined the term mutual assured destruction, helped jump-start research into decision making with his notion of “expected utility.” As outlined in the first chapter of his landmark 1944 book Theory of Games and Economic Behavior, written with the economist Oskar Morgenstern, expected utility is what results from combining imagined events with probabilities. Multiply the likelihood of a result against the gains that would accrue, and you get a number, expected utility, to guide your decisions.
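
To make the arithmetic concrete, here is a hypothetical example (not from the article, and assuming for simplicity that utility is just dollars): a gamble that pays $100 with probability 0.3 and costs $20 otherwise has expected utility 0.3 × 100 + 0.7 × (−20) = $16, so it is worth taking; with only a 0.1 chance of winning, the same calculation gives 0.1 × 100 + 0.9 × (−20) = −$8, and you should decline it.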

It’s seldom that simple, of course. Von Neumann built his analysis around the game of poker, in which potential gains are easily quantifiable. In lots of life decisions, it’s much harder. And then there are the probabilities: If you’re uncertain, how are you supposed to know what those are?

The winning answer was that there is no one right answer—everybody has to wager a guess—but there is one correct way to revise probabilities as new information comes in. That is what has become known as Bayesian statistics, a revival and advancement of long-dormant ideas (most of them the work not of the English reverend Thomas Bayes but of the French mathematical genius Pierre-Simon Laplace) by a succession of scholars starting in the 1930s. For the purposes of storytelling simplicity I’ll mention just one: Leonard Jimmie Savage, a statistics professor whose 1954 book The Foundations of Statistics laid out the rules for changing one’s probability beliefs in the face of new information.

One early and still-influential product of this way of thinking is the theory of portfolio selection, outlined in 1952 by Savage’s University of Chicago student Harry Markowitz, which advised stock pickers to estimate both the expected return on a stock and the likelihood that their estimate was wrong. Markowitz won a Nobel prize for this in 1990.

The broader field of decision analysis began to come together in 1957, when the mathematician Howard Raiffa arrived at Harvard with a joint appointment in the Business School and the department of statistics. He soon found himself teaching a statistics course for business students with Robert Schlaifer, a classics scholar and fast learner who in the postwar years taught pretty much whatever needed teaching at HBS. The two concluded that the standard statistics fare of regressions and P values wasn’t all that useful to future business leaders, so they adopted a Bayesian approach. Before long what they were teaching was more decision making than statistics. Raiffa’s decision trees, with which students calculated the expected value of the different paths available to them, became a staple at HBS and the other business schools that emulated this approach.
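
To make the fold-back mechanics of such a decision tree concrete, here is a minimal sketch in Python; the scenario, probabilities, and payoffs are invented purely for illustration and are not from the article. Each chance node is replaced by its probability-weighted value, and the decision node then takes the best branch.

# A minimal fold-back of a two-option decision tree (all figures hypothetical).

def expected_value(outcomes):
    """Value of a chance node: sum of probability * payoff over its branches."""
    return sum(p * v for p, v in outcomes)

# Option A: enter a big market -- 50% chance of a $400k profit, 50% chance of a $200k loss.
big_market = expected_value([(0.5, 400_000), (0.5, -200_000)])

# Option B: enter a small market -- 75% chance of a $180k profit, 25% chance of a $60k loss.
small_market = expected_value([(0.75, 180_000), (0.25, -60_000)])

# The decision node picks the branch with the higher expected value.
best = max([("big market", big_market), ("small market", small_market)],
           key=lambda branch: branch[1])
print(big_market, small_market, best)  # 100000.0 120000.0 ('small market', 120000.0)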

The actual term “decision analysis,” though, was coined by Ronald Howard, an MIT electrical engineer and an expert in statistical processes who had studied with some of the leading figures in wartime operations research at MIT and crossed paths with Raiffa in Cambridge. While visiting Stanford for the 1964–1965 academic year, Howard was asked to apply the new decision-making theories to a nuclear power plant being contemplated at General Electric’s nuclear headquarters, then located in San Jose. He combined expected utility and Bayesian statistics with computer modeling and engineering techniques into what he dubbed decision analysis and some of his followers call West Coast decision analysis, to distinguish it from Raiffa’s approach. Howard and Raiffa were honored as the two founding fathers of the field at its 50th-anniversary celebration last year.

Irrationality’s Revenge

Almost as soon as von Neumann and Morgenstern outlined their theory of expected utility, economists began adopting it not just as a model of rational behavior but as a description of how people actually make decisions. “Economic man” was supposed to be a rational creature; since rationality now included assessing probabilities in a consistent way, economic man could be expected to do that, too. For those who found this a bit unrealistic, Savage and the economist Milton Friedman wrote in 1948, the proper analogy was to an expert billiards player who didn’t know the mathematical formulas governing how one ball would carom off another but “made his shots as if he knew the formulas.”

Somewhat amazingly, that’s where economists left things for more than 30 years. It wasn’t that they thought everybody made perfect probability calculations; they simply believed that in free markets, rational behavior would usually prevail.

The question of whether people actually make decisions in the ways outlined by von Neumann and Savage was thus left to the psychologists. Ward Edwards was the pioneer, learning about expected utility and Bayesian methods from his Harvard statistics professor and writing a seminal 1954 article titled “The Theory of Decision Making” for a psychology journal. This interest was not immediately embraced by his colleagues—Edwards was dismissed from his first job, at Johns Hopkins, for focusing too much on decision research. But after a stint at an Air Force personnel research center, he landed at the University of Michigan, a burgeoning center of mathematical psychology. Before long he lured Jimmie Savage to Ann Arbor and began designing experiments to measure how well people’s probability judgments followed Savage’s axioms.

A typical Edwards experiment went like this: Subjects were shown two bags of poker chips—one containing 700 red chips and 300 blue chips, and the other the opposite. Subjects took a few chips out of a random bag and then estimated the likelihood that they had the mostly blue bag or the mostly red one.

Say you got eight red chips and four blue ones. What’s the likelihood that you had the predominantly red bag? Most people gave an answer between 70% and 80%. According to Bayes’ Theorem, the likelihood is actually 97%. Still, the changes in subjects’ probability assessments were “orderly” and in the correct direction, so Edwards concluded in 1968 that people were “conservative information processors”—not perfectly rational according to the rules of decision analysis, but close enough for most purposes.
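
For readers who want to check that figure, here is a minimal sketch of the Bayesian update in Python; the function name is illustrative, and the draws are treated as approximately independent (sampling with replacement), which is close enough with only a dozen chips drawn from a bag of a thousand.

# Posterior probability that the sampled bag is the mostly-red (700 red / 300 blue) one,
# given a 50/50 prior over the two bags and the observed chip counts.

def posterior_mostly_red(n_red, n_blue, p_red=0.7, prior=0.5):
    like_red_bag = (p_red ** n_red) * ((1 - p_red) ** n_blue)    # P(data | mostly-red bag)
    like_blue_bag = ((1 - p_red) ** n_red) * (p_red ** n_blue)   # P(data | mostly-blue bag)
    evidence = prior * like_red_bag + (1 - prior) * like_blue_bag
    return prior * like_red_bag / evidence

print(round(posterior_mostly_red(8, 4), 3))  # 0.967 -- roughly the 97% quoted above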

In 1969 Daniel Kahneman, of the Hebrew University of Jerusalem, invited a colleague who had studied with Edwards at the University of Michigan, Amos Tversky, to address his graduate seminar on the practical applications of psychological research. Tversky told the class about Edwards’s experiments and conclusions. Kahneman, who had not previously focused on decision research, thought Edwards was far too generous in his assessment of people’s information-processing skills, and before long he persuaded Tversky to undertake a joint research project. Starting with a quiz administered to their fellow mathematical psychologists at a conference, the pair conducted experiment after experiment showing that people assessed probabilities and made decisions in ways systematically different from what the decision analysts advised.

“In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction,” they wrote in 1973. “They rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.”

Heuristics are rules of thumb—decision-making shortcuts. Kahneman and Tversky didn’t think relying on them was always a bad idea, but they focused their work on heuristics that led people astray. Over the years they and their adherents assembled a long list of these decision-making flaws—the availability heuristic, the endowment effect, and so on.

As an academic movement, this was brilliantly successful. Kahneman and Tversky not only attracted a legion of followers in psychology but also inspired a young economist, Richard Thaler, and with help from him and others came to have a bigger impact on the field than any outsider since von Neumann. Kahneman won an economics Nobel in 2002—Tversky had died in 1996 and thus couldn’t share the prize—and the heuristics-and-biases insights relating to money became known as behavioral economics. The search for ways in which humans violate the rules of rationality remains a rich vein of research for scholars in multiple fields.

The implications for how to make better decisions, though, are less clear. First-generation decision analysts such as Howard Raiffa and Ward Edwards recognized the flaws described by Kahneman and Tversky as real but thought the focus on them was misplaced and led to a fatalistic view of man as a “cognitive cripple.” Even some heuristics-and-biases researchers agreed. “The bias story is so captivating that it overwhelmed the heuristics story,” says Baruch Fischhoff, a former research assistant of Kahneman and Tversky who has long taught at Carnegie Mellon University. “I often cringe when my work with Amos is credited with demonstrating that human choices are irrational,” Kahneman himself wrote in Thinking, Fast and Slow. “In fact our research only showed that humans are not well described by the rational-agent model.” And so a new set of decision scholars began to examine whether those shortcuts our brains take are actually all that irrational.

When Heuristics Work

That notion wasn’t entirely new. Herbert Simon, originally a political scientist but later a sort of social scientist of all trades (the economists gave him a Nobel in 1978), had begun using the term “heuristic” in a positive sense in the 1950s. Decision makers seldom had the time or mental processing power to follow the optimization process outlined by the decision analysts, he argued, so they “satisficed” by taking shortcuts and going with the first satisfactory course of action rather than continuing to search for the best.

Simon’s “bounded rationality,” as he called it, is often depicted as a precursor to the work of Kahneman and Tversky, but it was different in intent. Whereas they showed how people departed from the rational model for making decisions, Simon disputed that the “rational” model was actually best. In the 1980s others began to join in the argument.

The most argumentative among them was and still is Gerd Gigerenzer, a German psychology professor who also did doctoral studies in statistics. In the early 1980s he spent a life-changing year at the Center for Interdisciplinary Research in the German city of Bielefeld, studying the rise of probability theory in the 17th through 19th centuries with a group of philosophers and historians. One result was a well-regarded history, The Empire of Chance, by Gigerenzer and five others (Gigerenzer’s name was listed first because in keeping with the book’s theme, the authors drew lots). Another was a growing conviction in Gigerenzer’s mind that the Bayesian approach to probability favored by the decision analysts was, although not incorrect, just one of several options.

When Gigerenzer began reading Kahneman and Tversky, he says now, he did so “with a different eye than most readers.” He was, first, dubious of some of the results. By tweaking the framing of a question, it is sometimes possible to make apparent cognitive illusions go away. Gigerenzer and several coauthors found, for example, that doctors and patients are far more likely to assess disease risks correctly when statistics are presented as natural frequencies (10 out of every 1,000) rather than as percentages.

But Gigerenzer wasn’t content to leave it at that. During an academic year at Stanford’s Center for Advanced Study in the Behavioral Sciences, in 1989–1990, he gave talks at Stanford (which had become Tversky’s academic home) and UC Berkeley (where Kahneman then taught) fiercely criticizing the heuristics-and-biases research program. His complaint was that the work of Kahneman, Tversky, and their followers documented violations of a model, Bayesian decision analysis, that was itself flawed or at best incomplete. Kahneman encouraged the debate at first, Gigerenzer says, but eventually tired of his challenger’s combative approach. The discussion was later committed to print in a series of journal articles, and after reading through the whole exchange, it’s hard not to share Kahneman’s fatigue.


Gigerenzer is not alone, though, in arguing that we shouldn’t be too quick to dismiss the heuristics, gut feelings, snap judgments, and other methods humans use to make decisions as necessarily inferior to the probability-based verdicts of the decision analysts. Even Kahneman shares this belief to some extent. He sought out a more congenial discussion partner in the psychologist and decision consultant Gary Klein. One of the stars of Malcolm Gladwell’s book Blink, Klein studies how people—firefighters, soldiers, pilots—develop expertise, and he generally sees the process as being a lot more naturalistic and impressionistic than the models of the decision analysts. He and Kahneman have together studied when going with the gut works and concluded that, in Klein’s words, “reliable intuitions need predictable situations with opportunities for learning.”

Are those really the only situations in which heuristics trump decision analysis? Gigerenzer says no, and the experience of the past few years (the global financial crisis, mainly) seems to back him up. When there’s lots of uncertainty, he argues, “you have to simplify in order to be robust. You can’t optimize any more.” In other words, when the probabilities you feed into a decision-making model are unreliable, you might be better off following a rule of thumb. One of Gigerenzer’s favorite examples of this comes from Harry Markowitz, the creator of the decision analysis cousin known as modern portfolio theory, who once let slip that in choosing the funds for his retirement account, he had simply split the money evenly among the options on offer (his allocation for each was 1/N). Subsequent research has shown that this so-called 1/N heuristic isn’t a bad approach at all.

The State of the Art

The Kahneman-Tversky heuristics-and-biases approach has the upper hand right now, both in academia and in the public mind. Aside from its many real virtues, it is the approach best suited to obtaining interesting new experimental results, which are extremely helpful to young professors trying to get tenure. Plus, journalists love writing about it.

Decision analysis hasn’t gone away, however. HBS dropped it as a required course in 1997, but that was in part because many students were already familiar with such core techniques as the decision tree. As a subject of advanced academic research, though, it is confined to a few universities—USC, Duke, Texas A&M, and Stanford, where Ron Howard teaches. It is concentrated in industries, such as oil and gas and pharmaceuticals, in which managers have to make big decisions with long investment horizons and somewhat reliable data. Chevron is almost certainly the most enthusiastic adherent, with 250 decision analysts on staff. Aspects of the field have also enjoyed an informal renaissance among computer scientists and others of a quantitative bent. The presidential election forecasts that made Nate Silver famous were a straightforward application of Bayesian methods.

Those who argue that rational, optimizing decision making shouldn’t be the ideal are a more scattered lot. Gigerenzer has a big group of researchers at the Max Planck Institute for Human Development, in Berlin. Klein and his allies, chiefly in industry and government rather than academia, gather regularly for Naturalistic Decision Making conferences. Academic decision scholars who aren’t decision analysts mostly belong to the interdisciplinary Society for Judgment and Decision Making, which is dominated by heuristics-and-biases researchers. “It’s still very much us and them, where us is Kahneman-and-Tversky disciples and the rest is Gerd and people who have worked with him,” says Dan Goldstein, a former Gigerenzer student now at Microsoft Research. “It’s still 90 to 10 Kahneman and Tversky.” Then again, Goldstein—a far more diplomatic sort than his mentor—is slated to be the next president of the society.

There seems to be more overlap in practical decision advice than in decision research. The leading business school textbook, Judgment in Managerial Decision Making, by Harvard’s Max Bazerman (and, in later editions, UC Berkeley’s Don Moore), devotes most of its pages to heuristics and biases but is dedicated to the decision analyst Howard Raiffa and concludes with a list of recommendations that begins, “1. Use decision analysis tools.” There’s nothing inconsistent there—the starting point of the whole Kahneman-and-Tversky research project was that decision analysis was the best approach. But other researchers in this tradition, when they try to correct the decision-making errors people make, also find themselves turning to heuristics.

One of the best-known products of heuristics-and-biases research, Richard Thaler and Shlomo Benartzi’s Save More Tomorrow program, replaces the difficult choices workers face when asked how much they want to put aside for retirement with a heuristic—a commitment to automatically bump up one’s contribution with every pay raise—that has led to dramatic increases in saving. A recent field experiment with small-business owners in the Dominican Republic found that teaching them the simple heuristic of keeping separate purses for business and personal life, and moving money from one to the other only once a month, had a much greater impact than conventional financial education. “The big challenge is to know the realm of applications where these heuristics are useful, and where they are useless or even harm people,” says the MIT economist Antoinette Schoar, one of the researchers. “At least from what I’ve seen, we don’t know very well what the boundaries are of where heuristics work.”

This has recently been a major research project for Gigerenzer and his allies— he calls it the study of “ecological rationality.” In environments where uncertainty is high, the number of potential alternatives many, or the sample size small, the group argues, heuristics are likely to outperform more-analytic decision-making approaches. This taxonomy may not catch on—but the sense that smart decision making consists of a mix of rational models, error avoidance, and heuristics seems to be growing.

Other important developments are emerging. Advances in neuroscience could change the decision equation as scientists get a better sense of how the brain makes choices, although that research is in early days. Decisions are increasingly shunted from people to computers, which aren’t subject to the same information-processing limits or biases humans face. But the pioneers of artificial intelligence included both John von Neumann and Herbert Simon, and the field still mixes the former’s decision-analysis tools with the latter’s heuristics. It offers no definitive verdict—yet—on which approach is best.

Making Better Decisions

So, what is the right way to think about making decisions? There are a few easy answers. For big, expensive projects for which reasonably reliable data is available—deciding whether to build an oil refinery, or whether to go to an expensive graduate school, or whether to undergo a medical procedure—the techniques of decision analysis are invaluable. They are also useful in negotiations and group decisions. Those who have used decision analysis for years say they find themselves putting it to work even for fast judgments. The Harvard economist Richard Zeckhauser runs a quick decision tree in his head before deciding how much money to put in a parking meter in Harvard Square. “It sometimes annoys people,” he admits, “but you get good at doing this.”


A firefighter running into a burning building doesn’t have time for even a quick decision tree, yet if he is experienced enough his intuition will often lead him to excellent decisions. Many other fields are similarly conducive to intuition built through years of practice—a minimum of 10,000 hours of deliberate practice to develop true expertise, the psychologist K. Anders Ericsson famously estimated. The fields where this rule best applies tend to be stable. The behavior of tennis balls or violins or even fire won’t suddenly change and render experience invalid.

Management isn’t really one of those fields. It’s a mix of situations that repeat themselves, in which experience-based intuitions are invaluable, and new situations, in which such intuitions are worthless. It involves projects whose risks and potential returns lend themselves to calculations but also includes groundbreaking endeavors for which calculations are likely to mislead. It is perhaps the profession most in need of multiple decision strategies.

Part of the appeal of heuristics-and-biases research is that even if it doesn’t tell you what decision to make, it at least warns you away from ways of thought that are obviously wrong. If being aware of the endowment effect makes you less likely to defend a declining business line rather than invest in a new one, you’ll probably be better off.

Yet overconfidence in one’s judgment or odds of success—near the top of most lists of decision-making flaws—is a trait of many successful leaders. At the very cutting edge of business, it may be that good decision making looks a little like the dynamic between Star Trek’s Captain Kirk and Mr. Spock, with Spock reciting the preposterously long odds of success and Kirk confidently barging ahead, Spock still at his side.

A version of this article appeared in the May 2015 issue (pp. 78-85) of Harvard Business Review.

Justin Fox, a former editorial director of Harvard Business Review, is a columnist for Bloomberg View. He is the author of The Myth of the Rational Market. Follow him on Twitter @foxjust.

Source: hbr.org


Why C.E.O. Pay Reform Failed By James Surowiecki

Over the next few weeks, American companies will engage in a quaint ritual: the shareholder meeting. Investors will have a chance to vent about performance and to offer resolutions on corporate policy. Many will also get to do something relatively novel: cast an advisory vote on the pay packages of C.E.O.s and other top executives. This power, known as “say-on-pay,” became law in 2010, as part of the Dodd-Frank bill. In the wake of the financial crisis, which amplified anger about exorbitant C.E.O. salaries, reformers looking for ways to rein in the practice seized on say-on-pay, which the United Kingdom adopted in 2002. The hope was that the practice would, as Barack Obama once put it, help in “restoring common sense to executive pay.”


Say-on-pay is the latest in a series of reforms that, in the past couple of decades, have tried to change the mores of the executive suite. For most of the twentieth century, directors were paid largely in cash. Now, so that their interests will be aligned with those of shareholders, much of their pay is in stock. Boards of directors were once populated by corporate insiders, family members, and cronies of the C.E.O. Today, boards have many more independent directors, and C.E.O.s typically have less influence over how boards run. And S.E.C. reforms since the early nineteen-nineties have forced companies to be transparent about executive compensation.

These reforms were all well-intentioned. But their effect on the general level of C.E.O. salaries has been approximately zero. Executive compensation dipped during the financial crisis, but it has risen briskly since, and is now higher than it’s ever been. Median C.E.O. pay among companies in the S. & P. 500 was $10.5 million in 2013; total compensation is up more than seven hundred per cent since the late seventies. There’s little doubt that the data for 2014, once compiled, will show that C.E.O. compensation has risen yet again. And shareholders, it turns out, rather than balking at big pay packages, approve most of them by margins that would satisfy your average tinpot dictator. Last year, all but two per cent of compensation packages got majority approval, and seventy-four per cent of them received more than ninety per cent approval.

Why have the reforms been so ineffective? Simply put, they targeted the wrong things. People are justifiably indignant about cronyism and corruption in the executive suite, but these aren’t the main reasons that C.E.O. pay has soared. If they were, leaving salary decisions up to independent directors or shareholders would have made a greater difference. As it is, studies find that when companies hire outside C.E.O.s—people who have no relationship with the board—they get paid more than inside hires and more than their predecessors, too. Four years of say-on-pay have shown us that ordinary shareholders are pretty much as generous as boards are. And even companies with a single controlling shareholder, who ought to be able to dictate terms, don’t seem to pay their C.E.O.s any less than other companies.

At root, the unstoppable rise of C.E.O. pay involves an ideological shift. Just about everyone involved now assumes that talent is rarer than ever, and that only outsize rewards can lure suitable candidates and insure stellar performance. Yet the evidence for these propositions is sketchy at best, as Michael Dorff, a professor of corporate law at Southwestern Law School, shows in his new book, “Indispensable and Other Myths.” Dorff told me that, with large, established companies, “it’s very hard to show that picking one well-qualified C.E.O. over another has a major impact on corporate performance.” Indeed, a major study by the economists Xavier Gabaix and Augustin Landier, who happen to believe that current compensation levels are economically efficient, found that if the company with the two-hundred-and-fiftieth-most-talented C.E.O. suddenly managed to hire the most talented C.E.O. its value would increase by a mere 0.016 per cent.

Dorff also makes a persuasive case that performance pay is overrated. For a start, it’s often tied to things that C.E.O.s have very limited control over, like stock price. Furthermore, as he put it, “performance pay works great for mechanical tasks like soldering a circuit but works poorly for tasks that are deeply analytic or creative.” After all, paying someone ten million dollars isn’t going to make that person more creative or smarter. One recent study, by Philippe Jacquart and J. Scott Armstrong, puts it bluntly: “Higher pay fails to promote better performance.”

So the situation is a strange one. The evidence suggests that paying a C.E.O. less won’t dent the bottom line, and can even boost it. Yet the failure of say-on-pay suggests that shareholders and boards genuinely believe that outsized C.E.O. remuneration holds the key to corporate success. Some of this can be put down to the powerful mystique of a few truly transformative C.E.O.s (like Steve Jobs, at Apple). But, more fundamentally, there’s little economic pressure to change: big as the amounts involved are, they tend to be dwarfed by today’s corporate profits. Big companies now have such gargantuan market caps that a small increase in performance is worth billions. So whether or not the people who sit on compensation committees can accurately predict C.E.O. performance—Dorff argues that they can’t—they’re happy to spend an extra five or ten million dollars in order to get the person they want. That means C.E.O. pay is likely to keep going in only one direction: up.

James Surowiecki is the author of “The Wisdom of Crowds” and writes about economics, business, and finance for the magazine.

Source: newyorker.com



The anatomy of discovery: a case study By David Field

How do scientists discover new phenomena, and, just as important, how do they persuade other scientists that they have discovered something new? First, they must persuade themselves; this can be a long and tortuous process. During its course, they do their very best to prove that their discovery is wrong, perhaps because it contradicts some well-established law. They set out to show that their new phenomenon may, in the polite phraseology of science, be an artifact, or in the more colloquial form, a complete cock-up. They think up every reasonable test, using as many different techniques as possible, throwing at their new phenomenon every tool that they can lay hands on — to make sure that they really have gotten something new. Woe betide them if they do not follow this course of action!

Let us restrict ourselves here to serendipitous experimental discoveries, those that take place quite unexpectedly. A few examples will clarify what I mean. First, take Fleming’s chance observation of the destruction of bacteria by penicillin, from a mould which apparently must have flown in through a nearby open window in his untidy laboratory. Or consider Becquerel’s discovery of radioactivity, in which the chance juxtaposition, over a weekend, of a photographic plate, a key and pitchblende (uranium ore) created an image of the key when the photographic plate was developed. Even better, perhaps, is the “experiment” that many people would have wished to have been present to see, when Roentgen put his hand between his X-ray tube and the detection screen and saw, to his astonishment, the image of the bones in his hand. It is revealing of the character of discovery that Roentgen subsequently conducted further experiments in private, rather than expose himself to the ridicule of the scientific community if he turned out somehow to be imagining, rather than imaging, things. Fear of failure and doubt of their results are not confined to experimentalists. Schrödinger and Dirac have both recorded the same sensations on their respective paths to the discovery of quantum mechanics and anti-matter.

We have obviously no choice but to admit that chance plays an important role in scientific discovery. But there is much more to it than that. How many researchers, prior to Fleming, had glanced at the destruction of bacteria and washed the stuff down the sink, without giving it another thought? Again, the fogging of photographic plates by pitchblende had been seen before. The conclusion that had been drawn was that one should not leave pitchblende near photographic plates; it ruins them. Or take the annoying radio-astronomy hiss that seemed to come from every direction in the universe and proved impossible to suppress. This was certainly observed before Penzias and Wilson took it seriously — having first cleaned the pigeon mess off their radio dish, a good example of removing a possible artifact. So it was that observational cosmology was born, through recording of the cosmic microwave background.

The case study which I seek to illustrate here cannot be classed with these great discoveries of the past, but it is “a small thing but mine own” and, I believe, illustrates features of the nature of discovery. If you take a gas, such as laughing gas, that is, nitrous oxide, and expose it to a cold surface at (say) -40 °C, it then condenses to form a solid film. What my colleagues and I found was that the surface of this film apparently had a positive voltage on it, just as if the positive end of a battery was connected to it. This was indeed serendipitous. We did not set out to find this effect: after all, we did not know that it existed. Rather we were studying the interaction of surfaces with low energy electrons. So our discovery was by chance: but more than that, we thought that it was probably wrong, or so we had to assume. Without going into detail, we observed the passage of a current through a system, as if the surface of the nitrous oxide had a positive voltage on it. We did not expect to see this current and it should not have been present. That is, the machine was electronically incontinent and, unless we could put a nappy on the damn thing, it was useless for any meaningful experiments. This, if you like, was the “eureka” moment, but at the time, despite an inkling of something else, we schooled ourselves to believe that the machine had broken down.

Discovery — or not: rip it apart or press on?

Now we had a choice. We could succumb to our damning observations, rip apart the machine, clean it and pamper it and hope that it would work “correctly” when it was back together again. One might term this the “pitchblende fogs photographic plates: keep them away from each other” or “don’t leave the window open near bacterial cultures, it destroys them” approach. Or we could press on, making the provisional assumption that, just maybe, there was something in our rather crazy observation that the surface of a thin film of nitrous oxide was spontaneously at a positive voltage of several volts. Remember that, since the film was very thin, just a few per cent of one millionth of a meter, several volts suggested enormous electric fields in the film, expressed in volts per meter. Our observations implied that there were electric fields in the film of one hundred million volts per meter. This was about as big as it could be, since it was close to the field that would cause electrical breakdown, like a spark in damp air.
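A quick numerical check of that estimate, as a minimal sketch in Python: the 3 volts and 30 nanometres below are illustrative stand-ins for “several volts” and “a few per cent of one millionth of a meter”, not measured values from the experiment.

    # Back-of-the-envelope estimate of the field implied by a few volts
    # across a film a few tens of nanometres thick (illustrative values only).
    surface_voltage = 3.0        # volts, "several volts"
    film_thickness = 30e-9       # metres, about 3% of a millionth of a metre

    field = surface_voltage / film_thickness
    print(f"Implied field: {field:.1e} V/m")   # prints ~1.0e+08 V/m

That is the one hundred million volts per meter quoted above, which is indeed of the order of the dielectric breakdown strength of many materials.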

So it was that there was a knock on my door in the Department of Physics and Astronomy at the University of Aarhus, where this saga took place, and Richard Balog and Peter Cicman, my two postdocs working on the project, entered to tell me that the apparatus was misbehaving. I should mention that the apparatus itself was attached to the ASTRID storage ring synchrotron source and I must acknowledge here the technically outstanding people who built this source and maintain it. Without this resource and these scientists, none of what I describe would have been possible. These same people have recently built another source in the basement of our department. Previously Aarhus Physics and Astronomy was probably the only department in the world to have a synchrotron storage ring in its basement: now we are certainly the only department to have two such sources.

At any rate, Richard and Peter took the brave step of following their instincts and decided to make the assumption that, just maybe, we were on to something. First though, as I said, we needed to persuade each other that we had really discovered a new phenomenon and were not just fooling ourselves. I want to take you through this process — it shows how scientists work, setting ourselves up, perhaps immodestly, as model scientists! Richard and Peter started by showing that the measured voltage on the surface increased exactly in proportion to the thickness of the film. This agreed with the two-hundred-year-old Poisson equation, which is fundamental to the subject known as electrostatics. We had not violated anything basic at this point. So we were on our way.
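For readers who want the electrostatics spelled out, here is a minimal sketch of the standard argument being appealed to, in generic notation rather than the authors' own equations. With no free charge in the film, Poisson's equation reduces to

    \nabla^2 V = -\frac{\rho}{\varepsilon_0} = 0
    \;\Rightarrow\; E = -\frac{dV}{dz} = \text{constant}
    \;\Rightarrow\; V(d) = E\,d,

taking the substrate at zero volts and E as the field magnitude. For a fixed field E the voltage measured at the surface therefore grows in direct proportion to the film thickness d, which is exactly the linear behavior Richard and Peter found.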

We then varied the temperature at which the film was condensed. We found that the higher the temperature of condensation of nitrous oxide, the lower the potential measured on the surface, for the same film thickness. For example, for condensation at -213 °C compared to -233 °C, the voltage on the surface was more than two and a half times smaller. This seemed sensible. Higher temperature means that the molecules of nitrous oxide push and shove each other more. You would expect this to create a more disorderly system. A more disorderly system would somehow seem likely to produce a lower voltage on the surface. But now we were confronted with this “somehow.” It is not enough in science just to proclaim the facts of observation, you need also to offer some sort of rationale.

Some dipolar moments

We needed first to address the question: how could there be a spontaneous voltage on the top of films of nitrous oxide? Age-old electrostatics tells you that, since you have measured a constant field, there are no free charges, for example electrons, on or in the film. This is not obvious, but I beg you to accept it. Now, while voltages are generally due to the presence of free charges, which we had excluded, the charge does not in fact have to be free to create a voltage, it can be contained within, that is, be intrinsically part of a molecule at the surface. Molecules are overall neutral but can have one end positive and one end negative. Such molecules, and they are the great majority, are said to possess “dipole moments.”

If the positive end of the molecule sticks out of the top of the film, the surface will appear positively charged. Could this be what is happening here, we thought: the positive nitrogen end of nitrous oxide sticks out of the surface whilst the negative oxygen end remains buried? This would also explain why higher temperature leads to lower voltages. The amount of voltage on the surface depends on how parallel the molecules are to one another, that is, their degree of orientation. At higher temperatures, they push and shove more and therefore they are less well oriented. Hence, there would be less tendency for positive ends to stick out of the surface.
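To make the dipole picture slightly more quantitative, a common textbook-style estimate, offered here only as a sketch (it neglects screening by the film's own polarizability, and the symbols are generic rather than those of the published papers), writes the surface potential of a film of partially oriented dipoles as

    \Delta V \;\approx\; \frac{n\,\mu\,\langle\cos\theta\rangle\,d}{\varepsilon_0},

where n is the number of molecules per unit volume, \mu the molecular dipole moment, \langle\cos\theta\rangle the average degree of orientation along the surface normal, and d the film thickness. This reproduces both trends described above: at a fixed degree of orientation the voltage scales with thickness, and a smaller \langle\cos\theta\rangle at higher deposition temperature gives a smaller voltage.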

[Figure: A schematic illustration of nitrous oxide (light blue) condensed on gold. Observe how there are more positive nitrogen atoms (blue) sticking out of the surface than negative oxygen atoms (red). Courtesy Andrew Cassidy.]

There was something rather strange here, however. This model may have explained our observations, but it carried with it some rather unfortunate baggage. The model required that the plus end of one molecule tends to associate with the plus end of another, and the same with the minus ends. But plus repels plus and minus, minus, so why should the system configure itself in this way spontaneously? It should find its most favorable state, with plus to minus, just as you sink most comfortably into an armchair. But let us sweep this under the carpet for the present.

With our experimental evidence and despite our reservations about our understanding of the cause of what we had observed, we felt prepared to publish our findings [1]. Yet, there were a lot more experimental questions to answer, quite apart from the theoretical one which we have just swept out of sight. Does it make any difference on what surface you condense the nitrous oxide? Is nitrous oxide the only molecule to show this effect? Have we observed a truly general effect or is it just special to one system? If it were special, it would still be interesting, but it would be much more interesting if it were a general phenomenon. And if you heat a film, would the effect go away?

Physics or stamp collecting?

Let us answer these questions one by one, without going into too much detail. The nature of the surface, upon which you condense the nitrous oxide, makes no difference to our observations. For example, you can condense nitrous oxide on films of condensed atoms of xenon and you see no change in the surface voltage. In answer to the second question, nitrous oxide is by no means the only molecule to show this effect. Taking note of Rutherford’s famous injunction against stamp collecting, we tried nine chemically diverse materials, but all with dipole moments, of which eight showed the same effect as nitrous oxide. Some, however, had a negative voltage on their surface: presumably they had the negative end of the molecule sticking out of the surface. The effect is general. If you heat a film, then, yes, the effect does disappear and it does so rather abruptly over a small range of temperature. For example, a film of isoprene composed of 300 layers of molecules and condensed at -233 °C has a surface potential of about nine volts. Warming this layer to -201 °C causes the potential to disappear, reaching zero at -197 °C. This was one more clear step towards showing that we were beginning to understand the physical basis for the phenomena that we were observing. Also, since we now felt that the phenomenon was general, we gave it a name: the “spontelectric” effect.
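Those isoprene numbers translate into a small, fixed contribution per layer. A minimal sketch in Python; the monolayer thickness below is an assumed round figure for illustration, not a measured value from this work:

    # Per-monolayer potential implied by the isoprene figures quoted above,
    # and the rough field it corresponds to under an ASSUMED layer thickness.
    layers = 300
    surface_potential = 9.0                  # volts, deposition at -233 °C
    volts_per_layer = surface_potential / layers
    print(f"~{volts_per_layer * 1e3:.0f} mV per monolayer")   # ~30 mV

    monolayer_thickness = 0.5e-9             # metres; assumed, for illustration
    field = volts_per_layer / monolayer_thickness
    print(f"Implied field ~{field:.0e} V/m")                  # ~6e+07 V/m

On warming past about -197 °C that per-layer contribution collapses to zero, which is the abrupt disappearance of the effect described above.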

However, no physicist can sleep at night unless he or she has some mathematical model to describe quantitatively what they observe. So armed with all these experiments, I began, with help from Hans Fogeby and Axel Svane in our Department, to write down a couple of equations based upon the model of oriented dipoles mentioned above. This was found to fit the observations of electric field versus deposition temperature for nitrous oxide very well. The theoretical model provided one more piece of evidence that we were on the right track. At this stage, chance intervened once again. I had an astronomer working with me, Cécile Favre, interested in the radio-astronomy of methyl formate in Orion. She wanted to know the temperature at which methyl formate sublimes and we decided to measure this on our machine. This was nothing to do with spontelectrics, at first. But since we had methyl formate in the system, we decided to look and see if this was spontelectric too.

The International Brigade: Fate intervenes

At this point, enter Oksana, Ukrainian by birth, with a Russian father and a Ukrainian mother, educated in Armenia and working previously at the Synchrotron Laboratory in Trieste in Italy. Enter also Andrew Cassidy, from Ireland, who speaks some Irish Gaelic if pushed but, probably more importantly, is a chemist with a PhD from Cambridge (England). By the way, Peter Cicman, with two PhDs, one from Japan and one from Austria, and Richard Balog, with a PhD from Berlin, were both Slovak by birth. I mention the lineage of my co-workers to emphasize how international European science has become in the last decade or so. Scientific research makes for good international relations.

As well as performing more experiments on nitrous oxide, Oksana showed that methyl formate was indeed spontelectric. This became quite an epic adventure, as Oksana tried higher and higher temperatures of deposition, sixteen in all. She found that above -193 °C the electric field in films of methyl formate started sharply to increase, instead of decreasing. Dismay! Everything that we thought that we had understood was wrong — or was it? Oksana found that at -188 °C, the field was the same as it had been at -233 °C and then went more than 50% higher again before collapsing at -183 °C. In fact the spontelectric field at -233 °C deposition temperature was twenty-eight million volts per meter, whereas at -185 °C, it was forty million volts per meter. This was an odd intervention of fate, more than odd, in fact. It appeared to cut through all our understanding of spontelectrics which we had so carefully built up around data for nitrous oxide and other molecules. Surely more pushing and shoving causes less dipole orientation and less field, not more!

And what about the lovely theory which worked so well for nitrous oxide? Was it flawed? I had surely put nothing in the equations that could show the behavior of methyl formate. What was missing? Nothing, as it turned out, to my continuing surprise. Using my two equations, the rate of variation of electric field with temperature of film condensation can be written as a fraction, that is, something divided by something. If the second something is zero, then this expression becomes infinity. On one side of the infinity, that is, at lower temperature, the rate of variation of electric field with temperature is negative, as we expected and observed in nitrous oxide and in methyl formate, below -193 °C. On the other side, it is positive. This latter is the anomalous behavior which we observe in methyl formate above -193 °C, and indeed what one predicts, if, in my equations, one uses the parameters for methyl formate derived from fitting lower temperature spontelectric data for this molecule. Spontelectrics just became curiouser and curiouser. We could reproduce our observations, but we could not be said to understand them physically.
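Schematically, and only schematically, the structure being described looks like this (the author's two equations are not reproduced in this account, so f and g below are simply placeholders for whatever combinations of parameters appear in them):

    \frac{dE}{dT} \;=\; \frac{f(T)}{g(T)}, \qquad g(T_0) = 0,

so as the deposition temperature T passes through T_0 the rate of change of the field diverges and flips sign: negative below T_0, as in nitrous oxide and in methyl formate below -193 °C, and positive above it, which is the anomalous rise in methyl formate. The same fitted parameters generate both branches, which is why the observations could be reproduced even before they were physically understood.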

Nature’s Switchback

At this stage we needed to sit back and take a deep breath. Spontelectrics had turned out to be something of a rollercoaster. What did we therefore do? We wrote a review of the whole topic as we understood it in early 2013 [2]. But on a rollercoaster, one moment you think that you are safe and the next you are hurtling down some endless slope, having left your stomach somewhere behind you. For there is still more curious behavior to consider — which we knew about already but had quietly ignored in order to preserve our sanity.

If you lay down a film of toluene at -198 °C, it is not spontelectric for the first one hundred monolayers, where a monolayer is a single layer of molecules. Put down a bit more, and the surface potential takes off and the film becomes spontelectric. The same thing happens with isoprene at -203 °C, except that the spontelectric effect comes in after 50 to 75 monolayers have been laid down. Apparently, the molecules like to get head-to-head and tail-to-tail, plus to plus, minus to minus, only when the film has achieved a certain thickness. Somehow, toluene layer number 101 knows that it is number 101 and decides to “go spontelectric.” In other words, the molecules know about each other’s presence: they communicate with one another. Apparently there has to be a certain number of molecules of toluene or isoprene before the film switches into a spontelectric state.

What we observed showed that every part of the system depends on every other part. Communication extends right across the film, over macroscopic distances for which pair-wise interaction between molecules is completely negligible. This leads one to appreciate that spontelectrics have properties which are much more than just the sum of pairwise interactions between molecules, or even three-, four- or five-wise, but many thousand-wise in the case of toluene laid down at -198 °C. Such systems are called “non-local.”

So spontelectrics have two crazy properties: the effect can get greater at higher temperature of deposition, as in methyl formate, and the effect seems to require that all the molecules in the film talk, or feed back, to one another. If we are going to claim to understand spontelectrics, we are going to need to understand these two fundamental aspects. This is where I make my escape: no nice explanation is forthcoming. Simple reasoning based on cause and effect, in a system dominated by feedback, is difficult because the first system, one molecule of toluene, say, influences the second, and the third, the fourth, the fifth etc. and in turn each of these influences each of the others. If you could model a film as a repeating unit of a cube of ten by ten by ten molecules, you would have to consider 499,500 pairwise interactions, some very weak and every one dependent on every other. This might allow you to show that a film would settle into some stable spontelectric state, through mutual agreement between the molecules.
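The 499,500 figure is simply the number of distinct pairs among the 10 × 10 × 10 = 1,000 molecules in such a cube; a one-line check in Python:

    import math

    molecules = 10 ** 3                  # a ten-by-ten-by-ten cube of molecules
    pairs = math.comb(molecules, 2)      # distinct pairwise interactions
    print(pairs)                         # 499500

And that count covers only pairs; the point of the passage is that the real film behaves as if far larger groups of molecules act together.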

Not so grand Finale

From that last example of 499,500 interactions, you can see why I need to quit the footlights and exit by the stage door. There are other aspects of spontelectrics too: for example, how you can build different spontelectrics on top of each other and make any geometrical form of electric field you wish, work which we did in collaboration with Jack Dunger, a very talented young post-graduate from Cambridge (England) [3]. There are also some lovely experiments carried out by Andrew Cassidy which show what happens when nitrous oxide is diluted in xenon [4], giving us additional insight into the non-local, non-linear nature of spontelectrics. Furthermore, in the spirit of throwing as many techniques as possible at the subject, Jerome Lasne, Alexander Rosu-Finsen and Martin McCoustra at Heriot-Watt University, Edinburgh, have recently been performing some experiments on how spontelectrics absorb light. Their independent data also reveal the presence of the spontelectric field. This helped to allay some of the last traces of skepticism which remained in a corner of my mind [5].

To return to a possible explanation of how spontelectrics may form: nature provides you with an accomplished fact, the spontelectric film is there in front of you. I am hesitating by the stage door and I am about to make a run for it because I have no explanation, save conjecture, of how a film gets itself into the spontelectric structure. How do the molecules move about as they make the film, condensing from the gas phase, and why do they spontaneously choose an apparently unfavorable structure, with plus to plus and minus to minus? This is the problem which we swept under the carpet above, but that we need to face. Unfortunately, we do not know how to do this. However, I will allow myself the luxury of speculation, without thinking about 499,500 interactions, before I finally do make my exit.

Our current speculation goes something like this [4]. Fluctuating movements of the molecules at the surface locally create, by chance, some fleeting orientation of the molecules, with plus to plus and minus to minus. This in turn creates an electric field opposing this orientation. The electric field will also be found in a region outside the fluctuation that caused it. There, the field creates a dipole orientation in the opposite sense to that of the fluctuation and this propagates throughout the film, locking the dipole orientation into position and creating the spontelectric state. We cannot say how much truth there is in this hand-waving — maybe not too much. There is something called chemical dynamics which may, using mighty computers, give us the answer. The doors of the Department of Physics and Astronomy at Aarhus University are open for anyone who would like to join us in this search for greater understanding.

At all events, at the very start I asked how scientists discover things, and how they convince first themselves and then other scientists that they have discovered something new. In this case study, I have described how a quite unexpected discovery was made in solid state physics, where counter-intuitive stacking of molecules leads to properties never before observed in solids. I have tried to sketch the thought processes which accompanied the verification of this discovery: first denial and doubt, then testing to destruction, and ultimately some degree of confidence. My hope is that this short account has provided some insight into how discoveries in general are made and eventually validated.

Source: https://scientiasalon.wordpress.com
