
CS671 Artificial General Intelligence

History and Survey of AI

August 31, 2012

Where did it go wrong, and where did it go right?

1 Observation:

AI has made steady progress since its inception about 60 years ago.

This is contrary to some accounts in the popular press.

2 The “Gullibility Wave” Phenomenon

• First reaction to AI: Skepticism

• Second reaction (perhaps having seen a robot demo or learned to program a computer): Wow! Computers can do a lot. Doing intelligent task X isn’t so hard; when I think of doing it myself, it seems straightforward! Gullibility/Enthusiasm

• Third reaction: Computers are impossibly literal-minded. They can never break out of this mold. Disenchantment

• Fourth reaction: The brain doesn’t do computation anyway. It does holograms, or dynamical systems, or quantum gravity, or neural nets, or soulstuff. Denial

• Fifth reaction: There’s really no alternative on the horizon to computation (which is what neural nets actually do). Let’s get to work! Maturity

3 The Beginning of Computing Was the Beginning of AI

Alan Turing (1912–1954) invented both. Perhaps it was because he was weird, but even before the end of World War II he discussed machine intelligence with colleagues at the British Code and Cipher School (“Bletchley Park”).

4 1940s: A Gleam In Turing’s Eye

In 1950 Alan Turing published “Computing Machinery and Intelligence” (Mind 59).

He predicted that a machine would pass the Turing Test by 2000 (i.e., after a short interview a (naive?) judge would have no more than a 70% chance of correctly identifying it as a machine).

Other important British early AI researchers: Christopher Strachey (1916–1975), I.J. Good (1916–2009)

5 1950s: Americans Get Involved, Big-Time As Usual

Key researchers: Claude Shannon (1916–2001) (MIT), Donald Michie (1923–2007) (Edinburgh), Allen Newell (1927–1992), Herbert Simon (1916–2001) (CMU), John McCarthy (1927–2011) (Stanford), Oliver Selfridge (1926–2008), Marvin Minsky (1927–)

Results: Search algorithms*. List processing*. Minimax with depth cutoff. Alpha-beta pruning.
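As a concrete illustration of those last two results, here is a minimal Python sketch of minimax search with a depth cutoff and alpha-beta pruning. The hooks moves, apply_move, and evaluate are hypothetical placeholders that a particular game would supply; this is a sketch of the technique, not any historical program.

# Alpha-beta minimax with a depth cutoff (illustrative sketch).
# `moves`, `apply_move`, and `evaluate` are hypothetical game-specific hooks.
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax value of `state`, searching `depth` plies and pruning
    branches that cannot change the final decision."""
    legal = moves(state)
    if depth == 0 or not legal:              # depth cutoff or terminal position
        return evaluate(state)               # static evaluation stands in for deeper search
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                # opponent would never allow this line
                break                        # beta cutoff
        return value
    value = math.inf
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, value)
        if beta <= alpha:                    # alpha cutoff
            break
    return value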

Simon predicts machine will prove new theorem by 1967.

*Asterisk means technique once considered outré, perhaps specifically for duplicating intelligence, now assimilated into routine CS practice.

6 1960s: ARPA Golden Years

The huge expansion of science after World War II got huger after Sputnik (1957). Lots of money was showered on AI labs, to do whatever they wanted.

Researchers: Alan Robinson (1928–), Terry Winograd (1946–), Nils Nilsson (1933–) (Stanford)

Results: Recursive symbolic programming*. Garbage collection*. Natural-language processing (parsing, some domain-specific “understanding”). Unification* and resolution algorithms for theorem proving.
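To give a flavor of unification (the matching step at the heart of resolution theorem proving), here is a small Python sketch. Terms are written as nested tuples and variables as strings beginning with '?'; that representation and the helper names are assumptions made for this example only (the occurs check is omitted for brevity).

# A sketch of syntactic unification, the core operation of resolution.
# Terms are nested tuples like ('f', '?x', ('g', 'a')); variables start with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until reaching a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None if none exists."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}                # bind variable a to term b
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):                # unify argument lists element-wise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                               # clash: different functors or arities

# Example: unify f(?x, g(a)) with f(b, g(?y))  ->  {'?x': 'b', '?y': 'a'}
print(unify(('f', '?x', ('g', 'a')), ('f', 'b', ('g', '?y'))))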

7 1970s: Things Begin to Settle Down

Lots of research money around. Labs churn out programs, often hard to evaluate.

Researchers: Erik Sandewall (1945–), Robert Kowalski (1941–), Alain Colmerauer (1941–), Tomás Lozano-Pérez

Results: Logic programming*. Logic-based knowledge representation. Robot motion planning*. Constraint propagation.
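Constraint propagation can be illustrated with an AC-3-style arc-consistency loop, sketched below in Python over binary constraints; the variables, domains, and the x < y example are invented for illustration.

# AC-3-style arc consistency: repeatedly prune domain values that have no
# support under some binary constraint. The problem setup below is invented.
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of values}; constraints: {(x, y): predicate(vx, vy)}.
    Prunes domains in place; returns False if some domain becomes empty."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no supporting value of y.
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False                  # wipeout: no consistent assignment
            # Revisit arcs pointing at x, since its domain shrank.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Example: enforce x < y over small integer domains.
doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
cons = {('x', 'y'): lambda a, b: a < b,
        ('y', 'x'): lambda a, b: b < a}
ac3(doms, cons)
print(doms)   # x loses 3, y loses 1: {'x': {1, 2}, 'y': {2, 3}}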

8 1980s: Commercialization Fails

Several AI-based companies founded; none successful. Symantec is still around, but selling software tools.

Researchers: David Rumelhart (1942–2011) (UCSD), Geoffrey Hinton (1947–), Ray Reiter (1939–2002) (Toronto), Eric Horvitz (Microsoft), Frederick Jelinek (1932–2010) (IBM), Lawrence Rabiner (1943–) (AT&T)

Results: Neural networks. Hidden Markov models. Statistical methods for medical diagnosis, speech recognition, translation, . . . . Non-monotonic reasoning.
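For concreteness, here is a sketch of the forward algorithm for a hidden Markov model, the kind of computation underlying the statistical speech-recognition work mentioned above; the toy weather/activity model and all its numbers are invented for the example.

# Forward algorithm for a hidden Markov model: sums over all hidden state
# sequences to compute P(observations). The toy model below is invented.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs) under the HMM defined by (start_p, trans_p, emit_p)."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Toy model: hidden weather, observed activity (all numbers invented).
states = ['rainy', 'sunny']
start_p = {'rainy': 0.6, 'sunny': 0.4}
trans_p = {'rainy': {'rainy': 0.7, 'sunny': 0.3},
           'sunny': {'rainy': 0.4, 'sunny': 0.6}}
emit_p = {'rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
print(forward(['walk', 'shop', 'clean'], states, start_p, trans_p, emit_p))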

Major textbook: Charniak & McDermott

9 1990s: Moore’s Law Kicks In

Exponential growth in processor power begins to have a qualitative effect on what’s considered a reasonable amount of computation.

Researchers: Feng-hsiung Hsu (CMU), Bart Selman (Cornell), Eugene Charniak (Brown), Ernst Dickmanns (Bundeswehr University Munich), Sebastian Thrun (Stanford)

Results: Champion chess-playing programs. SAT solvers. Local search techniques*. Markov-chain Monte Carlo search. Probabilistic phrase-structure grammars. Robot cars.
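The local-search idea behind many SAT solvers can be sketched in a few lines of Python, in the spirit of GSAT/WalkSAT rather than any particular system; the clause encoding (signed integers) and the parameters max_flips and noise are assumptions made for this example.

# WalkSAT-style local search for SAT (a sketch, not a production solver).
# A clause is a list of nonzero ints: positive literal v, negative literal -v.
import random

def satisfied(clause, assign):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def walksat(clauses, n_vars, max_flips=10000, noise=0.5):
    """Return a satisfying {var: bool} assignment, or None if none was found."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign                         # all clauses satisfied
        clause = random.choice(unsat)             # pick a broken clause
        if random.random() < noise:
            var = abs(random.choice(clause))      # random-walk step
        else:
            # Greedy step: flip the variable that leaves the fewest broken clauses.
            def broken_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not satisfied(c, assign) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(lit) for lit in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None

# Example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], 3))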

10 2000s and 2010s: AI Is Everywhere

AI is being commercialized again, but industry is still reluctant to use the name even 30 years after the debacle of the 1980s.

And the same stupid overhyping is occurring anyway (cf. iPhone commercials with celebrities pretending to hold actual conversations with Siri).

11 Taking Stock

Milestones:

• Deep Blue defeats world chess champion* (1997)

• Proverb solves New York Times (non-Sunday) crossword puzzles (1999)

• DARPA Urban Challenge: robots from CMU and Stanford maneuver cars through an “urban” environment (2007)

• Spin-off from the DARPA CALO (“Cognitive Assistant that Learns and Organizes”) project becomes Apple’s Siri voice interface (2010)

• Watson defeats Jeopardy champions (2011)

• Robots are replacing many skilled assembly-line workers [New York Times] (2012)

*Steroid note: programmers modified Deep Blue between games.

12 . . . And last but not least —

Korean scientists develop a way to charge lithium-ion batteries 100 times faster than before: http://onlinelibrary.wiley.com/doi/10.1002/anie.201203581/abstract

— just increase the power capacity by a factor of 10 and world domination is within the robots’ grasp!

13 So Why Can’t A Program Understand Puzzle Rules?

14 So Why No “Human-Level” Intelligence?

AI Assets:

• Logic programming
• Constraint satisfaction
• SAT solvers
• Specialized vision algorithms (e.g., road following, 3D reconstruction)
• Part-of-speech tagging
• Probabilistic context-free parsers
• Model checking
• Robot control (cf. RoboCup robot-soccer competition)
• Humanoid robots
• Logistic regression
• Neural nets; support-vector machines

15 Possible Ways To Proceed

1. Continue as before; perhaps we just need more task-specific modules

2. Build a cognitive architecture and plug in a lot of things (but what, if not task-specific modules?)

3. Hit the weak spots: analogy, “usually shallow” KR, NLP-KR interface

4. Rely on super-duper learning algorithm

16 Which Are Being Worked On?

1. Continue as before . . .

Lots of people . . .

2. Build a cognitive architecture and plug in a lot of things: SOAR (Rosenbloom, Newell, and Laird), OpenCog (Goertzel et al.)

17 Which Are Being Worked On? (cont.)

3. Hit the weak spots: analogy, “usually shallow” KR, NLP-KR interface, KR-neural-net interface

Structure-mapping engine (Gentner and Forbus), almost no one else

4. Rely on super-duper learning algorithm

Marcus Hutter (AIXI), Bengio and LeCun, Hinton?
