Asymptotically Unambitious Artificial General Intelligence

Michael K. Cohen
Research School of Engineering Science
Oxford University
Oxford, UK OX1 3PJ
michael-k-cohen.com

Badri Vellambi
Department of Computer Science
University of Cincinnati
Cincinnati, OH, USA 45219
[email protected]

Marcus Hutter
Department of Computer Science
Australian National University
Canberra, ACT, Australia 2601
hutter1.net
Abstract

General intelligence, the ability to solve arbitrary solvable problems, is supposed by many to be artificially constructible. Narrow intelligence, the ability to solve a given particularly difficult problem, has seen impressive recent development. Notable examples include self-driving cars, Go engines, image classifiers, and translators. Artificial General Intelligence (AGI) presents dangers that narrow intelligence does not: if something smarter than us across every domain were indifferent to our concerns, it would be an existential threat to humanity, just as we threaten many species despite no ill will. Even the theory of how to maintain the alignment of an AGI’s

it (read: people), it could intervene in the provision of its own reward to achieve maximal reward for the rest of its lifetime (Bostrom 2014; Taylor et al. 2016). “Reward hijacking” is just the correct way for a reward maximizer to behave (Amodei et al. 2016). Insofar as the RL agent is able to maximize expected reward, (3) fails. One broader principle at work is Goodhart’s Law: “Any observed statistical regularity [like the correlation between reward and task-completion] will tend to collapse once pressure is placed upon it for control purposes.” (Goodhart 1984). Krakovna (2018) has compiled an annotated bibliography of examples of artificial optimizers “hacking” their objective.
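To make the passage above concrete, the following is a minimal illustrative sketch (ours, not the authors'): it compares the expected discounted value of doing the intended task with that of seizing the reward channel. The discount factor, per-step rewards, and success probability are hypothetical numbers chosen only for illustration.

# Illustrative toy calculation (not from the paper): an agent that maximizes
# expected discounted reward compares the value of doing its task honestly
# with the value of seizing ("hijacking") its own reward channel.
# All parameter values below are hypothetical.

GAMMA = 0.99       # discount factor
R_TASK = 0.7       # assumed per-step reward from doing the intended task
R_MAX = 1.0        # maximal per-step reward once the reward channel is seized
P_SUCCESS = 0.8    # assumed probability that the hijack attempt succeeds
# If the attempt fails, suppose the agent is shut down and receives 0 forever.

def value_forever(per_step_reward: float, gamma: float = GAMMA) -> float:
    """Discounted value of receiving per_step_reward at every future step."""
    return per_step_reward / (1.0 - gamma)

v_task = value_forever(R_TASK)
v_hijack = P_SUCCESS * value_forever(R_MAX)   # a failed attempt contributes 0

print(f"value of doing the task:     {v_task:6.1f}")    # 70.0
print(f"expected value of hijacking: {v_hijack:6.1f}")  # 80.0
# Under these made-up numbers the reward maximizer prefers hijacking,
# even though the attempt fails 20% of the time.

The only point of the sketch is that whenever the expected payoff of seizing the reward channel exceeds that of the intended task, hijacking is what an expected-reward maximizer will choose, which is the failure mode the passage above describes.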