
What is AI?

ARTIFICIAL INTELLIGENCE & AGENTS

• AI is both science and engineering:
– the science of understanding intelligent entities — of developing theories which attempt to explain and predict the nature of such entities;
– the engineering of intelligent entities.


• Four views of AI:
1. AI as acting humanly — as typified by the Turing test.
2. AI as thinking humanly — the cognitive modelling approach.
3. AI as thinking rationally — as typified by logical approaches.
4. AI as acting rationally — the intelligent agent approach.

Acting Humanly

• A problem that has greatly troubled AI researchers: when can we count a machine as being intelligent?
• The most famous response is due to Alan Turing, British mathematician and computing pioneer: the Turing test.

• The Turing test: a human interrogates an entity via teletype for 5 minutes. If, after 5 minutes, the human cannot tell whether the entity is human or machine, then the entity must be counted as intelligent.
• Note there is no visual or aural component to the basic Turing test — an augmented test involves a video & audio feed to the entity.
• No program has yet passed the Turing test! (Annual Loebner competition & prize.)
• A program that succeeded would need to be capable of:
– natural language understanding & generation;
– knowledge representation;
– learning;
– automated reasoning.


Thinking Humanly

• Try to understand how the mind works — how do we think?
• Two possible routes to find answers:
– by introspection — we figure it out ourselves!
– by experiment — draw upon techniques of psychology to conduct controlled experiments. ("Rat in a box"!)
• The discipline of cognitive science: particularly influential in vision, natural language processing, and learning.

Thinking Rationally

• Trying to understand how we actually think is one route to AI — but how about how we should think?
• Use logic to capture the laws of rational thought as symbols.
• Reasoning involves shifting symbols according to well-defined rules (like algebra).
• Result is idealised reasoning.
• The logicist approach is theoretically attractive (a small sketch of the symbol-shifting idea follows).
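To make the symbol-shifting idea concrete, here is a minimal sketch in Python (mine, not from the lecture): forward chaining with modus ponens over propositional facts. The facts and rules ("rains", "streets_wet", and so on) are purely illustrative.

```python
# Forward chaining with modus ponens: from p and "p -> q", conclude q.
# Facts and rule names are illustrative, not from the lecture.
facts = {"rains"}
rules = [
    ("rains", "streets_wet"),         # rains -> streets_wet
    ("streets_wet", "driving_slow"),  # streets_wet -> driving_slow
]

def forward_chain(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Repeatedly apply modus ponens until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # {'rains', 'streets_wet', 'driving_slow'}
```

Even this toy version hints at the problems listed next: the rules must already be symbols (transduction and representation are solved by hand here), and naive chaining scales badly as the rule set grows (reasoning tractability).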

• Lots of problems:
– transduction — how to map the environment to a symbolic representation;
– representation — how to represent real world phenomena (time, space, . . . ) symbolically;
– reasoning — how to do symbolic manipulation tractably, so it can be done by real computers!
• We are still a long way from solving these problems.
• In general, logic-based approaches are unpopular in AI at the moment.

Acting Rationally

• Acting rationally = acting to achieve one's goals, given one's beliefs.
• An agent is a system that perceives and acts; an intelligent agent is one that acts rationally w.r.t. the goals we delegate to it.
• Emphasis shifts from designing the theoretically best decision making procedure to the best decision making procedure possible in the circumstances.
• Logic may be used in the service of finding the best action — not an end in itself.


• Achieving perfect rationality — making the best decision theoretically possible — is not usually possible, due to limited resources:
– limited time;
– limited computational power;
– limited memory;
– limited or uncertain information about the environment.
• The trick is to do the best with what you've got!
• This is easier than doing perfectly, but still tough.

Brief History of AI

1943–56:

• McCulloch & Pitts (1943): artificial neural net — proved equivalent to a Turing machine.
• Shannon, Turing (1950): chess playing programs.
• Marvin Minsky (1951): first neural net computer — SNARC.
• Dartmouth College (1956): term "AI" coined by John McCarthy; Newell & Simon presented the LOGIC THEORIST program.

1956–70:

• Programs written that could: plan, learn, play games, prove theorems, solve problems.
• Major centres established:
– Minsky — MIT;
– McCarthy — Stanford;
– Newell & Simon — CMU.
• The major feature of the period was microworlds — toy problem domains. Example: the blocks world. ("It'll scale, honest. . . ")

1970s:

• The 1970s were a period of recession for AI (Lighthill report in the UK).
• Techniques developed on microworlds would not scale.
• Implications of theory developed in the late 1960s and early 1970s began to be appreciated:
– brute force techniques will not work;
– works in principle does not mean works in practice.


1980s:

• General purpose, brute force techniques don't work, so use knowledge rich solutions.
• The early 1980s saw the emergence of expert systems: systems capable of exploiting knowledge about tightly focussed domains to solve problems normally considered the domain of experts.
• Ed Feigenbaum's knowledge principle:

[Knowledge] is power, and computers that amplify that knowledge will amplify every dimension of power.

• Expert systems success stories:
– MYCIN — diagnosing blood diseases in humans;
– DENDRAL — interpreting mass spectrometry data;
– R1/XCON — configuring DEC VAX hardware;
– PROSPECTOR — finding promising sites for mineral deposits.
• Expert systems emphasised knowledge representation: rules, frames, semantic nets.
• Problems:
– the knowledge elicitation bottleneck;
– marrying expert systems with traditional software;
– breaking into the mainstream.

Early 1990s:

• Most companies set up to commercialise expert systems technology went bust.
• "AI as homeopathic medicine" viewpoint common.
• Increased interest in agents:
– a reaction against expert systems?
• AI as a component, rather than as an end in itself:
– "Useful first" paradigm — Etzioni (NETBOT, US$35m);
– "Raisin bread" model — Winston.

Late 1990s onwards:

• Emphasis on understanding the interaction between agents and environments.
• Rise of statistical methods.
• The agent paradigm becomes commonplace.
• Robotics for the masses.


Intelligent Agents

• An agent is a system that is situated in an environment, and which is capable of perceiving its environment and acting in it to satisfy its design objectives.
• Pictorially: [diagram: the agent takes input from the environment through sensors, decides what to do, and acts back on the environment through effectors]
• Human "agent":
– environment: physical world;
– sensors: eyes, ears, . . . ;
– effectors: hands, legs, . . .
• Software agent:
– environment: (e.g.) UNIX operating system;
– sensors: ls, ps, . . . ;
– effectors: rm, chmod, . . .
• Robot:
– environment: physical world;
– sensors: sonar, camera;
– effectors: wheels.

What to do?

Those who do not reason
Perish in the act.
Those who do not act
perish for that reason.
(W H Auden)

• The key problem we have is knowing the right thing to do.
• Knowing what to do can in principle be easy: consider all the alternatives, and choose the "best".
• But in any time-constrained domain, we have to make a decision in time for that decision to be useful!
• A tradeoff.
• Ideal rational agent: for each percept sequence, an ideal rational agent will act to maximise its expected performance measure, on the basis of the information provided by the percept sequence plus any information built in to the agent.
• Note that this does not preclude performing actions to find things out.
• More precisely, we can view an agent as a function

f : P* → A

from sequences of percepts P to actions A.
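To make the f : P* → A view concrete, here is a minimal sketch in Python (mine, not from the lecture). The percept and action types, the mail_sorter agent, and its percept names are all illustrative; the agent deliberately looks only at the latest percept, which is enough in an episodic environment like the mail sorting example discussed later.

```python
# An agent as a function from percept *sequences* to actions: f : P* -> A.
# All names here are illustrative, not from the lecture.
from typing import Callable, Sequence

Percept = str  # placeholder percept type
Action = str   # placeholder action type

AgentFunction = Callable[[Sequence[Percept]], Action]

def run(agent: AgentFunction, percepts: list[Percept]) -> list[Action]:
    """Feed the agent a growing percept sequence, one percept at a time,
    recording the action it chooses after each new percept arrives."""
    return [agent(percepts[:t]) for t in range(1, len(percepts) + 1)]

def mail_sorter(percept_sequence: Sequence[Percept]) -> Action:
    """A trivial agent: route mail based only on the latest percept."""
    current = percept_sequence[-1]
    return "route-local" if current == "local-address" else "route-remote"

print(run(mail_sorter, ["local-address", "foreign-address", "local-address"]))
# ['route-local', 'route-remote', 'route-local']
```

Note that the agent function receives the whole percept sequence, not just the current percept: in a non-episodic environment a rational agent may need that history to choose well.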


Accessible vs inaccessible

An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state. Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible. The more accessible an environment is, the simpler it is to build agents to operate in it.

Deterministic vs non-deterministic

A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action. The physical world can to all intents and purposes be regarded as non-deterministic. Non-deterministic environments present greater problems for the agent designer.

Episodic vs non-episodic

In an episodic environment, the performance of an agent is dependent on a number of discrete episodes, with no link between the performance of the agent in different scenarios. An example of an episodic environment would be a mail sorting system. Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode — it need not reason about the interactions between this and future episodes.

Static vs dynamic

A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control. The physical world is a highly dynamic environment.


Discrete vs continuous

An environment is discrete if there are a fixed, finite number of actions and percepts in it. Russell and Norvig give a chess game as an example of a discrete environment, and taxi driving as an example of a continuous one.
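The environment properties above can be collected into a single structure. Here is a minimal sketch in Python (mine, not from the lecture); the classification of the two Russell and Norvig examples is one plausible reading, not taken from the slides.

```python
# The five environment properties from this lecture as a simple record.
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    accessible: bool     # complete, accurate, up-to-date state available?
    deterministic: bool  # every action has a single guaranteed effect?
    episodic: bool       # performance judged on independent episodes?
    static: bool         # changes only through the agent's own actions?
    discrete: bool       # fixed, finite number of actions and percepts?

# One plausible classification of Russell and Norvig's two examples:
chess = EnvironmentProperties(
    accessible=True, deterministic=True, episodic=False,
    static=True, discrete=True)
taxi_driving = EnvironmentProperties(
    accessible=False, deterministic=False, episodic=False,
    static=False, discrete=False)
```

Roughly, the fewer True flags an environment has, the harder the agent design problem becomes, a point the summary below returns to.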

Summary

• This lecture has looked at:
– the history of AI;
– the notion of intelligent agents;
– a classification of agent environments.
• Broadly speaking, the rest of the course will cover the major techniques of AI, with special reference to agents.
• The techniques we'll look at will start with those applicable to simple environments and move towards those suitable for more complex environments.
