On Intelligence Jeff Hawkins with Sandra Blakeslee


Contents

Prologue
1. Artificial Intelligence
2. Neural Networks
3. The Human Brain
4. Memory
5. A New Framework of Intelligence
6. How the Cortex Works
7. Consciousness and Creativity
8. The Future of Intelligence
Epilogue
Appendix: Testable Predictions
Bibliography
Acknowledgments

Prologue

This book and my life are animated by two passions. For twenty-five years I have been passionate about mobile computing. In the high-tech world of Silicon Valley, I am known for starting two companies, Palm Computing and Handspring, and as the architect of many handheld computers and cell phones such as the PalmPilot and the Treo.

But I have a second passion that predates my interest in computers—one I view as more important. I am crazy about brains. I want to understand how the brain works, not just from a philosophical perspective, not just in a general way, but in a detailed, nuts-and-bolts engineering way. My desire is not only to understand what intelligence is and how the brain works, but how to build machines that work the same way. I want to build truly intelligent machines.

The question of intelligence is the last great terrestrial frontier of science. Most big scientific questions involve the very small, the very large, or events that occurred billions of years ago. But everyone has a brain. You are your brain. If you want to understand why you feel the way you do, how you perceive the world, why you make mistakes, how you are able to be creative, why music and art are inspiring, indeed what it is to be human, then you need to understand the brain.

In addition, a successful theory of intelligence and brain function will have large societal benefits, and not just in helping us cure brain-related diseases. We will be able to build genuinely intelligent machines, although they won't be anything like the robots of popular fiction and computer science fantasy. Rather, intelligent machines will arise from a new set of principles about the nature of intelligence. As such, they will help us accelerate our knowledge of the world, help us explore the universe, and make the world safer. And along the way, a large industry will be created.

Fortunately, we live at a time when the problem of understanding intelligence can be solved. Our generation has access to a mountain of data about the brain, collected over hundreds of years, and the rate at which we are gathering more data is accelerating. The United States alone has thousands of neuroscientists. Yet we have no productive theories about what intelligence is or how the brain works as a whole. Most neurobiologists don't think much about overall theories of the brain because they're engrossed in doing experiments to collect more data about the brain's many subsystems. And although legions of computer programmers have tried to make computers intelligent, they have failed. I believe they will continue to fail as long as they keep ignoring the differences between computers and brains.

What then is intelligence such that brains have it but computers don't? Why can a six-year-old hop gracefully from rock to rock in a streambed while the most advanced robots of our time are lumbering zombies? Why are three-year-olds already well on their way to mastering language while computers can't, despite half a century of programmers' best efforts? Why can you tell a cat from a dog in a fraction of a second while a supercomputer cannot make the distinction at all? These are great mysteries waiting for an answer. We have plenty of clues; what we need now are a few critical insights.

You may be wondering why a computer designer is writing a book about brains. Or put another way, if I love brains why didn't I make a career in brain science or in artificial intelligence? The answer is I tried to, several times, but I refused to study the problem of intelligence as others have before me. I believe the best way to solve this problem is to use the detailed biology of the brain as a constraint and as a guide, yet think about intelligence as a computational problem—a position somewhere between biology and computer science. Many biologists tend to reject or ignore the idea of thinking of the brain in computational terms, and computer scientists often don't believe they have anything to learn from biology.

Also, the world of science is less accepting of risk than the world of business. In technology businesses, a person who pursues a new idea with a reasoned approach can enhance his or her career regardless of whether the particular idea turns out to be successful. Many successful entrepreneurs achieved success only after earlier failures. But in academia, a couple of years spent pursuing a new idea that does not work out can permanently ruin a young career. So I pursued the two passions in my life simultaneously, believing that success in industry would help me achieve success in understanding the brain. I needed the financial resources to pursue the science I wanted, and I needed to learn how to effect change in the world, how to sell new ideas, all of which I hoped to get from working in Silicon Valley.

In August 2002 I started a research center, the Redwood Neuroscience Institute (RNI), dedicated to brain theory. There are many neuroscience centers in the world, but no others are dedicated to finding an overall theoretical understanding of the neocortex—the part of the human brain responsible for intelligence. That is all we study at RNI. In many ways, RNI is like a start-up company. We are pursuing a dream that some people think is unattainable, but we are lucky to have a great group of people, and our efforts are starting to bear fruit.

* * *

The agenda for this book is ambitious. It describes a comprehensive theory of how the brain works. It describes what intelligence is and how your brain creates it. The theory I present is not a completely new one. Many of the individual ideas you are about to read have existed in some form or another before, but not together in a coherent fashion. This should be expected. It is said that "new ideas" are often old ideas repackaged and reinterpreted. That certainly applies to the theory proposed here, but packaging and interpretation can make a world of difference, the difference between a mass of details and a satisfying theory. I hope it strikes you the way it does many people. A typical reaction I hear is, "It makes sense. I wouldn't have thought of intelligence this way, but now that you describe it to me I can see how it all fits together." With this knowledge most people start to see themselves a little differently. You start to observe your own behavior, saying, "I understand what just happened in my head." Hopefully when you have finished this book, you will have new insight into why you think what you think and why you behave the way you behave.

I also hope that some readers will be inspired to focus their careers on building intelligent machines based on the principles outlined in these pages. I often refer to this theory and my approach to studying intelligence as "real intelligence" to distinguish it from "artificial intelligence." AI scientists tried to program computers to act like humans without first answering what intelligence is and what it means to understand. They left out the most important part of building intelligent machines, the intelligence! "Real intelligence" makes the point that before we attempt to build intelligent machines, we have to first understand how the brain thinks, and there is nothing artificial about that. Only then can we ask how we can build intelligent machines.

The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. I then introduce and develop the core idea of the theory, what I call the memory-prediction framework. In chapter 6 I detail how the physical brain implements the memory-prediction model—in other words, how the brain actually works. I then discuss social and other implications of the theory, which for many readers might be the most thought-provoking section. The book ends with a discussion of intelligent machines—how we can build them and what the future will be like. I hope you find it fascinating. Here are some of the questions we will cover along the way:

Can computers be intelligent? For decades, scientists in the field of artificial intelligence have claimed that computers will be intelligent when they are powerful enough. I don't think so, and I will explain why. Brains and computers do fundamentally different things.

Weren't neural networks supposed to lead to intelligent machines? Of course the brain is made from a network of neurons, but without first understanding what the brain does, simple neural networks will be no more successful at creating intelligent machines than computer programs have been.

Why has it been so hard to figure out how the brain works? Most scientists say that because the brain is so complicated, it will take a very long time for us to understand it. I disagree. Complexity is a symptom of confusion, not a cause. Instead, I argue we have a few intuitive but incorrect assumptions that mislead us. The biggest mistake is the belief that intelligence is defined by intelligent behavior.
Recommended publications
  • Memory-Prediction Framework for Pattern Recognition: Performance and Suitability of the Bayesian Model of Visual Cortex
    Memory–Prediction Framework for Pattern Recognition: Performance and Suitability of the Bayesian Model of Visual Cortex. Saulius J. Garalevicius, Department of Computer and Information Sciences, Temple University, Room 303, Wachman Hall, 1805 N. Broad St., Philadelphia, PA 19122, USA. [email protected]

    Abstract: This paper explores an inferential system for recognizing visual patterns. The system is inspired by a recent memory-prediction theory and models the high-level architecture of the human neocortex. The paper describes the hierarchical architecture and recognition performance of this Bayesian model. A number of possibilities are analyzed for bringing the model closer to the theory, making it uniform, scalable, less biased and able to learn a larger variety of images and their transformations. The effect of these modifications on recognition accuracy is explored. We identify and discuss a number of both conceptual and practical challenges to the Bayesian approach as well as missing details in the theory that are needed to design a scalable and universal model.

    The neocortex learns sequences of patterns by storing them in an invariant form in a hierarchical neural network. It recalls the patterns auto-associatively when given only partial or distorted inputs. The structure of stored invariant representations captures the important relationships in the world, independent of the details. The primary function of the neocortex is to make predictions by comparing the knowledge of the invariant structure with the most recent observed details. The regions in the hierarchy are connected by multiple feedforward and feedback connections. Prediction requires a comparison between what is happening (feedforward) and what you expect to happen (feedback).
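    The feedforward/feedback comparison described above can be made concrete with a small sketch. The following Python toy stores sequences of patterns and flags an input as surprising when it falls outside the set predicted from memory; the class name, its methods, and the toy sequences are illustrative assumptions, not code from the paper.

        from collections import defaultdict

        class SequenceMemoryNode:
            """Toy memory-prediction node: learns transitions, predicts successors."""

            def __init__(self):
                self.transitions = defaultdict(set)  # pattern -> patterns seen next
                self.expected = set()                # feedback: predicted next patterns

            def learn(self, sequence):
                # Store a sequence by remembering which pattern follows which.
                for prev, nxt in zip(sequence, sequence[1:]):
                    self.transitions[prev].add(nxt)

            def observe(self, pattern):
                # Compare the feedforward input with the feedback expectation,
                # then update the expectation for the next time step.
                surprising = bool(self.expected) and pattern not in self.expected
                self.expected = self.transitions.get(pattern, set())
                return surprising

        node = SequenceMemoryNode()
        node.learn("ABCABC")
        print([node.observe(p) for p in "ABCX"])  # -> [False, False, False, True]

    A model in the spirit of the paper arranges many such nodes in a hierarchy, each level predicting the sequence of patterns arriving from the level below.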
  • In the United States District Court for the Eastern District of Texas Marshall Division
    Case 2:05-cv-00199-TJW, Document 3, Filed 10/31/05

    IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF TEXAS, MARSHALL DIVISION. QINETIQ LIMITED, Plaintiff, v. PICVUE ELECTRONICS, LTD., Defendant. Civil Action No. 2:05-CV-00199. Jury trial demanded.

    FIRST AMENDED COMPLAINT

    Plaintiff, QinetiQ Limited (hereinafter "QinetiQ"), by and through its undersigned attorneys, files this First Amended Complaint against Picvue Electronics, Ltd. (hereinafter "Defendant" or "Picvue") and alleges as follows:

    NATURE OF THIS ACTION. 1. This is an action for patent infringement arising under the Patent Laws of the United States, 35 U.S.C. § 101 et seq.

    THE PARTIES. 2. QinetiQ is a company registered under the laws of the United Kingdom with its principal place of business at 85 Buckingham Gate, London SW1E 6PD, United Kingdom. QinetiQ is engaged in the research and development of various technologies, including liquid crystal display (LCD) technologies. 3. Defendant Picvue Electronics, Ltd. is a company organized under the laws of Taiwan with its principal place of business at 526, Sec. 2, Chien-Hsing Rd., Hsin-Fung, Hsin Chu, Taiwan. Defendant may be served by means of Letters Rogatory. Defendant develops, designs, manufactures, and provides after-sales service for LCD products, including super-twisted nematic ("STN") liquid crystal modules and panels that infringe the patent-in-suit, U.S. Patent No. 4,596,446 (the "'446 patent").

    JURISDICTION AND VENUE. 4.
  • Brain Science
    BRAIN SCIENCE with Ginger Campbell, MD. Episode #139: Interview with Jeff Hawkins, Author of On Intelligence. Aired 11/28/17.

    [music]

    INTRODUCTION

    Welcome to Brain Science, the show for everyone who has a brain. I'm your host, Dr. Ginger Campbell, and this is Episode 139. Ever since I launched Brain Science back in 2006, my goal has been to explore how recent discoveries in neuroscience are helping unravel the mystery of how our brains make us human. I'm really excited about today's interview because, in some ways, it takes us back to the beginning. My guest today is Jeff Hawkins, author of On Intelligence, and founder of Numenta, a company that is dedicated to discovering how the human cortex works. Jeff's book actually inspired the first Brain Science podcast, and I interviewed him way back in Episode 38. Today he gives us an update on the last 15 years of his research.

    As always, episode show notes and transcripts are available at brainsciencepodcast.com. You can send me feedback at [email protected] or audio feedback via SpeakPipe at speakpipe.com/docartemis. I will be back after the interview to review the key ideas and to share a few brief announcements, including a look forward to next month's episode.

    [music]

    INTERVIEW

    Dr. Campbell: Jeff, it is great to have you back on Brain Science.

    Mr. Hawkins: It's great to be back, Ginger. I always enjoy talking to you.

    Dr. Campbell: It's actually been over nine years since we last talked, so I thought we would start by asking you to just give my audience a little bit of background, and I'd like you to start by telling us just a little about your career before Numenta.
  • Unsupervised Anomaly Detection in Time Series with Recurrent Neural Networks
    DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS. STOCKHOLM, SWEDEN, 2019.

    Unsupervised anomaly detection in time series with recurrent neural networks. JOSEF HADDAD, CARL PIEHL. Bachelor in Computer Science. Date: June 7, 2019. Supervisor: Pawel Herman. Examiner: Örjan Ekeberg. School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology. Swedish title: Oövervakad avvikelsedetektion i tidsserier med neurala nätverk.

    Abstract: Artificial neural networks (ANN) have been successfully applied to a wide range of problems. However, most ANN-based models do not attempt to model the brain in detail, but there are still some models that do. An example of a biologically constrained ANN is Hierarchical Temporal Memory (HTM). This study applies HTM and Long Short-Term Memory (LSTM) to anomaly detection problems in time series in order to compare their performance for this task. The anomalies are restricted to point anomalies and the time series are univariate. Pre-existing implementations that utilise these networks for unsupervised anomaly detection in time series are used in this study. We primarily use our own synthetic data sets in order to discover the networks' robustness to noise and how they compare to each other regarding different characteristics in the time series. Our results show that both networks can handle noisy time series, and the difference in performance regarding noise robustness is not significant for the time series used in the study. LSTM outperforms HTM in detecting point anomalies on our synthetic time series with a sine curve trend, but a conclusion about the overall best performing network among these two remains inconclusive.
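    The point-anomaly task the thesis describes can be illustrated with a generic prediction-error detector. In the sketch below, a simple moving-average predictor and a 3-sigma threshold stand in for the HTM and LSTM implementations actually used in the study; the function and parameter names are my own, not the thesis's code.

        import numpy as np

        def detect_point_anomalies(series, window=50, k=3.0):
            # Moving-average one-step-ahead predictions and their absolute errors.
            preds = np.convolve(series, np.ones(window) / window, mode="valid")
            errors = np.abs(series[window:] - preds[:-1])
            anomalies = []
            for t in range(window, len(errors)):
                recent = errors[t - window:t]
                # Flag errors far outside the recent error distribution.
                if errors[t] > recent.mean() + k * recent.std():
                    anomalies.append(t + window)  # index into the original series
            return anomalies

        # Synthetic sine series with one injected point anomaly, in the spirit
        # of the thesis's synthetic data sets.
        x = np.sin(np.linspace(0, 20 * np.pi, 2000))
        x[1500] += 5.0
        print(detect_point_anomalies(x))  # includes an index at or near 1500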
  • Tucker14.Pdf
    CONTENTS

    Title Page; Copyright; Dedication; Introduction. CHAPTER 1: Namazu the Earth Shaker. CHAPTER 2: The Signal from Within. CHAPTER 3: #sick. CHAPTER 4: Fixing the Weather. CHAPTER 5: Unities of Time and Space. CHAPTER 6: The Spirit of the New. CHAPTER 7: Relearning How to Learn. CHAPTER 8: When Your Phone Says You're in Love. CHAPTER 9: Crime Prediction: The Where and the When. CHAPTER 10: Crime: Predicting the Who. CHAPTER 11: The World That Anticipates Your Every Move. Acknowledgments; Notes; Index.

    INTRODUCTION

    IMAGINE waking up tomorrow to discover your new top-of-the-line smartphone, the device you use to coordinate all your calls and appointments, has sent you a text. It reads: "Today is Monday and you are probably going to work. So have a great day at work today! —Sincerely, Phone." Would you be alarmed? Perhaps at first. But there would be no mystery where the data came from. It's mostly information that you know you've given to your phone.

    Now consider how you would feel if you woke up tomorrow and your new phone predicted a much more seemingly random occurrence: "Good morning! Today, as you leave work, you will run into your old girlfriend Vanessa (you dated her eleven years ago), and she is going to tell you that she is getting married. Do try to act surprised!" What conclusion could you draw from this but that someone has been stalking your Facebook profile and knows you have an old girlfriend named Vanessa? And that this someone has probably been stalking her profile as well and spotted her engagement announcement.
  • Neuromorphic Architecture for the Hierarchical Temporal Memory
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE. Neuromorphic Architecture for the Hierarchical Temporal Memory. Abdullah M. Zyarah, Student Member, IEEE; Dhireesha Kudithipudi, Senior Member, IEEE. Neuromorphic AI Laboratory, Rochester Institute of Technology.

    Abstract: A biomimetic machine intelligence algorithm that holds promise in creating invariant representations of spatiotemporal input streams is the hierarchical temporal memory (HTM). This unsupervised online algorithm has been demonstrated on several machine-learning tasks, including anomaly detection. Significant effort has been made in formalizing and applying the HTM algorithm to different classes of problems. There are few early explorations of the HTM hardware architecture, especially for the earlier version of the spatial pooler of the HTM algorithm. In this article, we present a full-scale HTM architecture for both spatial pooler and temporal memory. Synthetic synapse design is proposed to address the potential and dynamic interconnections occurring during learning. The architecture is interweaved with parallel cells and columns that enable high processing speed for the HTM.

    ...recognition and classification [3]–[5], prediction [6], natural language processing, and anomaly detection [7], [8]. At a higher abstraction, HTM is basically a memory-based system that can be trained on a sequence of events that vary over time. In the algorithmic model, this is achieved using two core units, the spatial pooler (SP) and temporal memory (TM), called the cortical learning algorithm (CLA). The SP is responsible for transforming the input data into a sparse distributed representation (SDR) with fixed sparsity, whereas the TM learns sequences and makes predictions [9]. A few research groups have implemented the first generation Bayesian HTM. Kenneth et al., in 2007, implemented the
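    The spatial pooler's job described above, transforming input into an SDR with fixed sparsity, reduces in its simplest form to a k-winners-take-all competition over per-column overlap scores. The Python sketch below illustrates only that core idea; the sizes, connection probability, and names are assumptions, and a real HTM spatial pooler additionally performs permanence learning, boosting, and local inhibition.

        import numpy as np

        rng = np.random.default_rng(0)
        N_COLUMNS, INPUT_BITS, SPARSITY = 128, 64, 0.02

        # Each column connects to a fixed random subset of the input bits.
        connected = rng.random((N_COLUMNS, INPUT_BITS)) < 0.3

        def spatial_pooler(input_bits):
            # Overlap score: how many active input bits each column sees.
            overlaps = connected @ input_bits
            k = max(1, int(SPARSITY * N_COLUMNS))   # fixed number of winners
            return np.sort(np.argsort(overlaps)[-k:])  # active columns = the SDR

        x = (rng.random(INPUT_BITS) < 0.1).astype(int)
        print(spatial_pooler(x))  # always exactly k active columns

    Because the winner count k is fixed, every input maps to an SDR with the same sparsity, which is the property the abstract emphasizes.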
  • Participating Companies
    PARTICIPATING COMPANIES

    COMDEX.com, Las Vegas Convention Center, November 16–20, 2003.

    Keynotes Oracle Corporation IDG Ergo 2000 AT&T Wireless O'Reilly Publishing InfoWorld Media Group Expertcity, Inc. Microsoft Corporation PC Magazine Network World Garner Products PalmSource Salesforce.com Computer World Inc. Magazine Siebel Systems, Inc. SAP PC World Infineon Technologies Sun Microsystems Sun Microsystems IEEE Media Kelly IT Resources Symantec Corporation The Economist IEEE Spectrum Lexmark International, Inc. Unisys IEEE Computer Society Logicube, Inc. Innovation Centers Verisign IEEE Software LRP ApacheCon Yankee Group Security & Privacy Luxor Casino/Blue Man Group Aruba ZDNet International Online Computer Society MA Labs, Inc. ASCII Media Partners Linux Certified Maxell Corporation of America Avaya Mobile Media Group MediaLive Intl. France/UBI France Animation Magazine Cerberian Handheld Computing Magazine Min Maw International ApacheCon Imlogic Mobility Magazine Multimedia Development Corp. Bedford Communications: Lexmark National Cristina Foundation MySQL LAPTOP LinuxWorld Our PC Magazine National Semiconductor Corp. PC Upgrade McAfee Pen Computing Magazine Nexsan Technologies, Inc. Tech Edge Mitel Networks Pocket PC Magazine Qualstar Corporation Blue Knot Mozilla Foundation QuarterPower Media Rackframe—A Division of Starcase CMP Media LLC MySQL Linux Magazine Ryan EMO Advertising CRN Nortel Networks ClusterWorld Magazine Saflink Corporation VARBusiness NVIDIA RCR Wireless News Server Technology, Inc. InformationWeek Openoffice.org
  • Fair Information Practices in the Electronic Marketplace
    PRIVACY ONLINE: FAIR INFORMATION PRACTICES IN THE ELECTRONIC MARKETPLACE. A REPORT TO CONGRESS. FEDERAL TRADE COMMISSION, MAY 2000.

    Federal Trade Commission*: Robert Pitofsky, Chairman; Sheila F. Anthony, Commissioner; Mozelle W. Thompson, Commissioner; Orson Swindle, Commissioner; Thomas B. Leary, Commissioner. This report was prepared by staff of the Division of Financial Practices, Bureau of Consumer Protection. Advice on survey methodology was provided by staff of the Bureau of Economics.

    * The Commission vote to issue this Report was 3-2, with Commissioner Swindle dissenting and Commissioner Leary concurring in part and dissenting in part. Each Commissioner's separate statement is attached to the Report.

    TABLE OF CONTENTS

    Executive Summary
    I. Introduction and Background
      A. The Growth of Internet Commerce
      B. Consumer Concerns About Online Privacy
      C. The Commission's Approach to Online Privacy - Initiatives Since 1995
        1. The Fair Information Practice Principles and Prior Commission Reports
        2. Commission Initiatives Since the 1999 Report
      D. Self-Regulation
  • The Neuroscience of Human Intelligence Differences
    Edinburgh Research Explorer. The neuroscience of human intelligence differences.

    Citation for published version: Deary, I. J., Penke, L. & Johnson, W. 2010, 'The neuroscience of human intelligence differences', Nature Reviews Neuroscience, vol. 11, pp. 201-211. https://doi.org/10.1038/nrn2793. Digital Object Identifier (DOI): 10.1038/nrn2793. Document Version: Peer reviewed version. Published in: Nature Reviews Neuroscience.

    Publisher Rights Statement: This is an author's accepted manuscript of the following article: Deary, I. J., Penke, L. & Johnson, W. (2010), "The neuroscience of human intelligence differences", in Nature Reviews Neuroscience 11, p. 201-211. The final publication is available at http://dx.doi.org/10.1038/nrn2793

    General rights: Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s) and/or other copyright owners, and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights.

    Take down policy: The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer content complies with UK legislation. If you believe that the public display of this file breaches copyright, please contact [email protected] providing details, and we will remove access to the work immediately and investigate your claim.

    Nature Reviews Neuroscience, in press. The neuroscience of human intelligence differences. Ian J. Deary*, Lars Penke* and Wendy Johnson*. *Centre for Cognitive Ageing and Cognitive Epidemiology, Department of Psychology, University of Edinburgh, Edinburgh EH4 2EE, Scotland, UK. All authors contributed equally to the work.
  • Spinoff: Handspring
    Stanford eCorner. Spinoff: Handspring. Jeff Hawkins, Numenta. October 23, 2002. Video URL: http://ecorner.stanford.edu/videos/43/Spinoff-Handspring

    Hawkins shares the reasons why he and his team finally spun off from 3Com to start Handspring. Although they were reluctant to leave and start a company from scratch, they felt that Palm did not belong in 3Com, a networking company. Palm was the only healthy division in 3Com, and they could not continue growing and competing with a financial hand tied behind their backs.

    Transcript: We were then a division of 3Com at Palm. And we were doing our thing. We were having a fair amount of success. We introduced a series of products, including the Palm III and the Palm V. But actually, we left. Now again, I was reluctant this time. This is when we started Handspring. I was reluctant to do this. We didn't want to leave; starting a company is a lot of work. Just who wants to do that again? But it turns out that we felt at the time, and I still believe it was the right thing, that Palm really didn't belong as part of 3Com. 3Com was a networking company and it was sick. It was ailing. They were not very profitable. Their margins were falling. We were the only healthy division in the entire company, and they were not reporting our earnings but were using them to prop up the rest of the business. So we were growing, and that made it look like 3Com was growing, but really it was only Palm that was growing.
  • On Intelligence As Memory
    Artificial Intelligence 169 (2005) 181–183. www.elsevier.com/locate/artint

    Book review: Jeff Hawkins and Sandra Blakeslee, On Intelligence, Times Books, 2004. "On intelligence as memory." Jerome A. Feldman, International Computer Science Institute, Berkeley, CA 94704-1198, USA. Available online 3 November 2005.

    On Intelligence by Jeff Hawkins with Sandra Blakeslee has been inspirational for non-scientists as well as some of our most distinguished biologists, as can be seen from the web site (http://www.onintelligence.org). The book is engagingly written as a first person memoir of one computer engineer's search for enlightenment on how human intelligence is computed by our brains. The central insight is important—much of our intelligence comes from the ability to recognize complex situations and to predict their possible outcomes. There is something fundamental about the brain and neural computation that makes us intelligent and AI should be studying it.

    Hawkins actually understates the power of human associative memory. Because of the massive parallelism and connectivity, the brain essentially reconfigures itself to be constantly sensitive to the current context and goals [1]. For example, when you are planning to buy some kind of car, you start noticing them. The book is surely right that better AI systems would follow if we could develop programs that were more like human memory. For whatever reason, memory as such is no longer studied much in AI—the Russell and Norvig [3] text has one index item for memory, and that refers to semantics. From a scientific AI/Cognitive Science perspective, the book fails to tackle most of the questions of interest.
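    The associative recall Feldman highlights, recovering a whole memory from a partial or corrupted cue, has a classic textbook illustration in the Hopfield network. The sketch below is that standard construction, not code from the book or the review.

        import numpy as np

        def train_hopfield(patterns):
            # Hebbian outer-product weights for +/-1 patterns, zero diagonal.
            W = sum(np.outer(p, p) for p in patterns).astype(float)
            np.fill_diagonal(W, 0)
            return W / patterns.shape[1]

        def recall(W, cue, steps=10):
            # Let the network settle from a noisy cue toward a stored pattern.
            s = cue.astype(float).copy()
            for _ in range(steps):
                s = np.sign(W @ s)
                s[s == 0] = 1.0
            return s

        stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                           [1, 1, 1, 1, -1, -1, -1, -1]])
        W = train_hopfield(stored)
        cue = stored[0].copy()
        cue[:2] *= -1             # corrupt two bits of the first pattern
        print(recall(W, cue))     # settles back to the first stored pattern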