
AI Magazine Volume 16, Number 2 (1995) (© AAAI)

Eye on the Prize

Nils J. Nilsson

■ In its early stages, the field of AI had as its main goal the invention of computer programs having the general problem-solving abilities of humans. Along the way, a major shift of emphasis developed from general-purpose programs toward performance programs, ones whose competence was highly specialized and limited to particular areas of expertise. In this article, I claim that AI is now at the beginning of another transition, one that will reinvigorate efforts to build programs of general, humanlike competence. These programs will use specialized performance programs as tools, much like humans do.

Over 40 years ago, soon after the birth of electronic computers, people began to think that human levels of intelligence might someday be realized in computer programs. Alan Turing (1950) was among the first to speculate that "machines will eventually compete with men in all purely intellectual fields." Allen Newell and Herb Simon (1976) made this speculation more crisp in their hypothesis: "A physical symbol system [such as a digital computer] has the necessary and sufficient means for general intelligent action" (emphasis mine). In its early stages, the field of AI had as its main goal the invention of computer programs having the general problem-solving abilities of humans. One such program was the GENERAL PROBLEM SOLVER (GPS) (Newell, Shaw, and Simon 1960), which used what have come to be called weak methods to search for solutions to simple problems.

Diversions from the Main Goal

Many of the early AI programs dealt with toy problems, puzzles and games that humans sometimes find challenging but that they can usually solve without special training. When these early AI techniques were tried on much more difficult problems, it was found that the methods did not scale well. They were not sufficiently powerful to solve large problems of real-world consequence. In their efforts to get past the barrier separating toy problems from real ones, AI researchers became absorbed in two important diversions from their original goal of developing general, intelligent systems. One diversion was toward developing performance programs, ones whose competence was highly specialized and limited to particular areas of expertise. Another diversion was toward refining specialized techniques beyond those required for general-purpose intelligence. In this article, I speculate about the reasons for these diversions and then describe growing forces that are pushing AI to resume work on its original goal of building programs of general, humanlike competence.

The Shift to Performance Programs

Sometime during the 1970s, AI changed its focus from developing general problem-solving systems to developing expert programs whose performance was superior to that of any human not having specialized training, experience, and tools. A representative performance program was DENDRAL (Feigenbaum et al. 1971). Edward Feigenbaum and colleagues (1971, p. 187), who are credited with having led the way toward the development of expert systems, put it this way:

General problem-solvers are too weak to be used as the basis for building high performance systems. The behavior of the best general problem-solvers we know, human problem solvers, is observed to be weak and shallow, except in the areas in which the human problem-solver is a specialist.

Observations such as these resulted in a shift toward programs containing large bodies of specialized knowledge and the techniques required to deploy this knowledge.


The shift was very fruitful. It is estimated that several thousand knowledge-based expert systems are used in industry today. The American Association for Artificial Intelligence (AAAI) sponsors an annual conference entitled Innovative Applications of Artificial Intelligence, and the proceedings of these conferences give ample evidence of AI's successes.1 I won't try to summarize the application work here, but the following list taken from a recent article in Business Week (1992) is representative of the kinds of programs in operation:

Shearson Lehman uses neural networks to predict the performance of stocks and bonds.

Merced County in California has an expert system that decides if applicants should receive welfare benefits.

NYNEX has a system that helps unskilled workers diagnose customer phone problems.

Arco and Texaco use neural networks to help pinpoint oil and gas deposits deep below the earth's surface.

The Internal Revenue Service is testing software designed to read tax returns and spot fraud.

Spiegel uses neural networks to determine who on a vast mailing list are the most likely buyers of its products.

American Airlines has an expert system that schedules the routine maintenance of its airplanes.

High-performance programs such as these are all very useful; they are important and worthy projects for AI, and undoubtedly, they have been excellent investments. Do they move AI closer to its original, main goal of developing a general, intelligent system? I think not. The components and knowledge needed for extreme specialization are not necessarily those that will be needed for general intelligence. Some medical diagnosis programs, for example, have expert medical knowledge comparable to that of human physicians who have had years of training and practice (Miller et al. 1982). However, these doctors were already far more intelligent—generally, before attending medical school—than the best of our AI systems. They had the ability then to acquire the knowledge that they would need in their specialty—an ability AI programs do not yet have.

Ever-More–Refined Techniques

In parallel with the move toward performance programs, AI researchers working on techniques (rather than on specific applications) began to sharpen these techniques much beyond what I think is required by general, intelligent systems. I'll give some examples.

Let's look first at automatic planning. It is clear that a general, intelligent system will need to be able to plan its actions. An extensive spectrum of work on automatic planning has been done by AI researchers. Early work was done by Newell, Shaw, and Simon (1960); McCarthy and Hayes (1969); Green (1969); and Fikes and Nilsson (1971). These early programs and ideas were clearly deficient in many respects. While working on one part of a problem, they sometimes undid an already solved part; they had to do too much work to verify that their actions left most of their surroundings unchanged; and they made the unrealistic assumption that their worlds remained frozen while they made their plans. Some of the deficiencies were ameliorated by subsequent research (Sacerdoti 1977; Tate 1977; Waldinger 1977; Sussman 1975). Recent work by Wilkins (1988), Currie and Tate (1991), and Chapman (1987) led to quite complex and useful planning and scheduling systems. Somewhere along this spectrum, however, we began to develop specialized planning capabilities that I do not think are required of a general, intelligent system. After all, even the smartest human cannot (without the aid of special tools) plan missions for the National Aeronautics and Space Administration or lay out a factory schedule, but automatic planning programs can now do these things (Deale et al. 1994; Fox 1984).

Other examples of refinement occur in the research area dealing with reasoning under uncertainty. Elaborate probabilistic reasoning schemes have been developed, and perhaps some of these computational processes are needed by intelligent systems. What I think is not needed (to give just one example) is a dynamic programming system for calculating paths of minimal expected costs between states in a Markov decision problem, yet some high-quality AI research is devoted to this and similar problems (which do arise in special settings). More examples exist in several other branches of AI, including automated theorem proving, intelligent database retrieval, design automation, intelligent control, and program verification and synthesis.
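To make the example concrete, here is a minimal sketch, in Python, of the kind of dynamic programming computation I have in mind: value iteration over expected costs in a made-up Markov decision problem. The states, actions, costs, and transition probabilities are invented for illustration and are not drawn from any particular system.

```python
# Minimal value-iteration sketch for a made-up Markov decision problem.
# States: "start", "mid", "goal" (absorbing). Each action has an immediate
# cost and a probability distribution over successor states.
transitions = {
    "start": {
        "safe":  (2.0, {"mid": 1.0}),                # cost 2, surely reach "mid"
        "risky": (1.0, {"mid": 0.6, "start": 0.4}),  # cheaper, but may fail
    },
    "mid": {
        "go":    (1.0, {"goal": 0.8, "start": 0.2}),
    },
    "goal": {},  # absorbing state: no actions, zero further cost
}

def value_iteration(transitions, sweeps=100, tol=1e-9):
    """Compute minimal expected cost-to-go for each state by dynamic programming."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:          # absorbing state keeps value 0
                continue
            best = min(
                cost + sum(p * V[s2] for s2, p in dist.items())
                for cost, dist in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

print(value_iteration(transitions))   # e.g. {'start': ..., 'mid': ..., 'goal': 0.0}
```

The point is not the code but its character: a specialized numerical procedure, useful in particular settings, that a generally intelligent agent could invoke as a tool rather than carry as part of its own core machinery.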

The development of performance programs and refined techniques has focused AI research on systems that solve problems beyond what humans can ordinarily do. Of course, a program must be equipped with the skills and knowledge that it truly needs in its area of application. What I am arguing for here is that these skills and knowledge bases be regarded as tools—separate from the intelligent programs that use them. It is time to begin to distinguish between general, intelligent programs and the special performance systems, that is, tools, that they use. AI has for many years now been working mainly on the tools—expert systems and highly refined techniques. Building the tools is important—no question. Working on the tools alone does not move us closer to AI's original goal—producing intelligent programs that are able to use tools. Such general programs do not need to have the skills and knowledge within them as refined and detailed as that in the tools they use. Instead, they need to be able to find out about what knowledge and tools are available to match the problems they face and to learn how to use them. Curiously, this view that general intelligence needs to be regarded as something separate from specialist intelligence was mentioned in the same paper that helped to move the field toward concentrating on special intelligence. Feigenbaum and his colleagues (1971, p. 187) said:

The "big switch" hypothesis holds that generality in problem solving is achieved by arraying specialists at the terminals of a big switch. The big switch is moved from specialist to specialist as the problem solver switches its attention from one problem area to another. […The kinds of problem-solving processes, if any, which are involved in "setting the switch" (selecting a specialist) is a topic that obviously deserves detailed examination in another paper.]

Unfortunately, work on setting the switch (if, indeed, that's what is involved in general intelligence) has been delayed somewhat.
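To make the metaphor concrete, the following toy sketch (in Python; the problem categories, the specialists, and the keyword-based switch setter are all invented for this illustration) shows generality obtained by routing each problem to a specialist sitting at one terminal of a big switch.

```python
# Toy "big switch": generality by dispatching each problem to a specialist.
# The specialists and the keyword-based switch setter are purely illustrative.

def algebra_specialist(problem: str) -> str:
    return f"[algebra tool] solving: {problem}"

def route_planning_specialist(problem: str) -> str:
    return f"[route planner] planning: {problem}"

def diagnosis_specialist(problem: str) -> str:
    return f"[diagnosis tool] diagnosing: {problem}"

SPECIALISTS = {
    "algebra":   algebra_specialist,
    "route":     route_planning_specialist,
    "diagnosis": diagnosis_specialist,
}

def set_the_switch(problem: str) -> str:
    """The hard, unexplored part: deciding which specialist a problem needs.
    Here it is a crude keyword match; the real question was left 'to another paper'."""
    text = problem.lower()
    if "solve" in text or "=" in text:
        return "algebra"
    if "route" in text or "deliver" in text:
        return "route"
    return "diagnosis"

def big_switch(problem: str) -> str:
    return SPECIALISTS[set_the_switch(problem)](problem)

print(big_switch("solve 3x + 4 = 19"))
print(big_switch("deliver the package downtown"))
```

The dispatcher itself is trivial; all the interesting questions about generality live inside set_the_switch, which is exactly the part Feigenbaum and his colleagues set aside.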
The same authors, however, did go on to give some recommendations, which seem to me to be still quite valid (Feigenbaum, Buchanan, and Lederberg 1971, p. 189):

The appropriate place for an attack on the problem of generality may be at the meta-levels of learning, knowledge transformation, and representation, not at the level of performance programs. Perhaps for the designer of intelligent systems what is most significant about human general problem-solving behavior is the ability to learn specialties as needed—to learn expertness in problem areas by learning problem-specific heuristics, by acquiring problem-specific information, and by transforming general knowledge and general processes into specialized forms.

Some Reasons for the Diversions

There are several reasons why AI has concentrated on tool building. First, the problem of building general, intelligent systems is very hard. Some have argued that we haven't made much progress on this problem in the last 40 years. Perhaps we have another 40 years ahead of us before significant results will be achieved. It is natural for researchers to want to achieve specific results during their research lifetimes and to become frustrated when progress is slow and uneven. Second, sponsors of AI research have encouraged (and have often insisted on) specialized systems. After years of supporting general AI, they understandably want a return on their investment. The problem is that the people who have the dollars usually have specific problems they want solved. The dollars exist in niches, and these niches call forth programs to fill them.

Third, many of the systems and tools that AI has been working on have their own intrinsic, captivating interest. A community of researchers develops, and momentum carries the pursuit of techniques into areas perhaps not relevant to a general intelligent agent. Exciting whirlpools always divert some people from the mainstream. Some of the work in theoretical AI (for example, some nonmonotonic reasoning research) might be of this character. Fourth, some AI leaders have argued quite persuasively that the best route toward AI's main goal lies through the development of performance systems. Edward Feigenbaum, for example, has often said that he learns the most when he throws AI techniques against the wall of hard problems to see where they break. It is true that many of the early AI methods did not scale up well and that confronting hard problems in science, engineering, and medicine made our methods more robust. I believe that, but I think the hard-problem approach has now reached the point of diminishing returns. Throwing our techniques against yet more (special) hard walls is now not as likely to improve these techniques further or lead to new and generally useful ones. (It will, of course, result in solving additional specific problems.) Fifth, university computer science departments have increasingly shifted from understanding-driven to need-driven research. This shift has been encouraged by a number of factors, not the least of which is the alleged new compact between society and science in which science is supposed to be directed more toward national needs.


Also, most university computer science departments are in engineering colleges, which often have a very practical outlook. Computer science itself now seems to be more concerned with faster algorithms, better graphics, bigger databases, wider networks, and speedier chips than it is with the basic problems of AI (or even with the basic problems of computer science). AI faculty, competing in these departments for recognition and tenure, want to be perceived as working on real problems—not chasing ill-defined and far-off will-o'-the-wisps. The importance that is attached to being able to evaluate research results leads inevitably to working on projects with clear evaluation criteria, and typically, it's easier to evaluate systems that do specific things than it is to evaluate systems whose tasks are more general.

Finally, the arguments of those who say it can't be done might have had some effect. People who know insufficient computer science but consider themselves qualified to pronounce on what is possible and what is not have been free with their opinions (Penrose 1994, 1989; Dreyfus and Dreyfus 1985; Searle 1980). From these pronouncements has come the distinction between strong AI and weak AI. In the words of Searle (1980, p. 417):

According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind.

These critics acknowledge the successes of expert systems and other AI applications, claiming them to be examples of weak AI. Strong AI is declared to be impossible (with the overtone that we shouldn't want to achieve it anyway), and weak AI is embraced as appropriate, doable, and socially acceptable. Many AI researchers are willing to settle for the goals of weak AI. The weak AI agenda is also consistent with much of the rest of present-day computer science, which increasingly sees its mission as providing computational tools. Paradoxically, because strong AI implies the ability to function effectively in a variety of environments, it will most probably depend on AI's so-called weak methods, namely, ones that are generally useful and unspecialized. The strong and specialized methods, however, are used by the niche systems associated with weak AI.

Habile Systems

Perhaps a good adjective to describe the general, intelligent systems I have in mind is habile, which means having general skill. What are some of the properties of a habile system? Here is my list:

Commonsense knowledge and commonsense reasoning abilities: Wide-ranging knowledge and inference capabilities are necessary for a system to be generally intelligent. Unlike expert systems, we would expect habile systems (using appropriate tools) to perform reasonably, if not expertly, in a variety of situations. Of course, what we gain in breadth, we will probably have to give up in depth. This trade-off (applied to programming languages) was nicely expressed by Stroustrup (1994, p. 201):2

For every single specific question, you can construct a language or system that is a better answer than C++. C++'s strength comes from being a good answer to many questions rather than being the best answer to one specific question.… Thus, the most a general-purpose language can hope for is to be "everybody's second choice."

The fact that a habile system will be a jack of all trades and a master of none does not diminish the value of such a system. It does make it more difficult to find funding sources for research on habile systems, however.

Access abilities: These abilities include whatever is needed for an agent to get information about the environment in which it operates and to affect the environment in appropriate ways. For robots, the access abilities might include perceptual processing of visual images and a suite of effectors. For software agents, the access abilities might include the ability to read e-mail messages and access databases and computer networks.

The access abilities of habile systems that must deal with other agents will include facilities for receiving, understanding, and generating communications. Interaction with humans will require natural language–understanding and natural language–generation programs.

Autonomy and continuous existence: Habile systems will be agents that have built-in high-level goals (much like the drives of animals). They will have an architecture that mediates between reasoning (using their commonsense knowledge) and reflexive reactions to urgent situations.

Ability to learn: Agents having a continuous existence can learn from experience.

New demands will create new applications, and agents must be able to learn how to solve new problems. All the learning methods of AI will be needed here. Habile agents must be "informable" (Genesereth 1989). Humans will want to give advice to them that varies in precision from detailed instructions to vague hints. Because so much human knowledge exists in written form, we will want our agents to be able to get appropriate information from documents. These abilities also presuppose natural language skills.

There is reason now to think that AI will soon be placing much more emphasis on the development of habile systems. I explain why in the next section.

Some Forces Pushing Us toward Habile Systems

Not all the forces affecting AI are in the direction of niche systems. There have always been good reasons to build habile systems, but now I think there are some new needs—just now becoming more pressing. These new forces arise from the rapid development of the information superhighway; multimedia for entertainment, education, and simulation; and the growing demand for more flexible robots. I'll make a few comments about each of these influences.

The Information Superhighway

The exploding access to databases, programs, media, and other information provided by computer networks will create a huge demand for programs that can aid the consumers and producers of this information. In the words of a Wall Street Journal article about electronic agents (Hill 1994), "The bigger the network and the more services on it, the greater the potential power of agents." All kinds of special softbot agents (sometimes called spiders when they inhabit the World Wide Web) have been proposed—personal assistants, database browsers, e-mail handlers, purchasing agents, and so forth. Several people are working on prototypes that aim toward such agents (Etzioni and Weld 1994; Maes 1994; Ball and Ling 1993). Even though a variety of very specialized niche agents will be built to service these demands, the casual user will want a general-purpose personal assistant to act as an intermediary between him or her and all the specialized agents and the rest of the World Wide Web. Such a personal assistant should have many of the features of habile agents: general commonsense knowledge, wide-ranging natural language ability, and continuous existence. As a step in this direction, the architecture being explored for CommerceNet uses an agent called a facilitator that has quite general capabilities (Genesereth 1994). Demand for habile personal assistants will be unceasing and growing as services available on the Internet continue to expand.

Entertainment, Education, and Simulation

Interactive, multimedia video art and entertainment require characters that are believable in their emotions and actions (Bates 1994). The human participants in these interactions want characters that act and think much like humans do. As long as such characters are perceived to be simply mechanical and easily predictable, there will be competitive pressure to do better. Similar needs exist as we develop more sophisticated educational computer systems. On-the-job training in an environment with customers, co-workers, and even adversaries is an important style of education for many occupations. To provide real environments and their inhabitants for purposes of training is expensive and perhaps dangerous, and therefore, simulations and simulated inhabitants are being used increasingly. This need for realistic simulated agents exerts continuing pressure to develop ones with wide-ranging, humanlike capabilities.

The Requirement for More Flexible Robots

A recent article in The New York Times (Holusha 1994) said that "sales are booming for robots, which are cheaper, stronger, faster, and smarter than their predecessors." One reason for the sales increase is that robots are gradually becoming more flexible—in action and in perception. I expect that there will be increasing demand for flexible mobile robots in manufacturing and construction and in service industries. Some possible applications include delivery vehicles, carpenters' assistants, in-orbit space station constructors, robots that work in hazardous environments, household robots, sentry robots, and underwater robots. Although there will be many niche systems (just as there are in the biological world), cost considerations will favor habile robot architectures that can be applied to a variety of different tasks. I think the main challenge in developing flexible robots (in addition to providing those features of habile systems already mentioned) is to integrate perception, reasoning, and action in an architecture designed especially with such integration in mind. Several such general-purpose robot architectures are being explored, including one I am currently working on (Benson and Nilsson 1995).
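To suggest, in the barest outline, what such integration might look like, here is a generic sense-act loop; it is a sketch invented for this article, not a description of the architecture Benson and I are building. Reflexes are checked on every cycle, and anything that is not urgent falls through to a slower, deliberative planner (here only a placeholder).

```python
import random

# A bare-bones sense-act loop: reflexes handle urgent conditions immediately;
# everything else goes to a (placeholder) deliberative planner. The percepts,
# reflexes, and "plan_next_action" stand in for real perception, reaction
# rules, and planning components.

REFLEXES = [
    (lambda percept: percept.get("obstacle_distance", 99) < 0.5, "brake"),
    (lambda percept: percept.get("battery", 1.0) < 0.1,          "seek_charger"),
]

def plan_next_action(percept, goal):
    """Placeholder deliberation: a real system would plan with its knowledge base."""
    return f"step_toward({goal})"

def agent_step(percept, goal):
    for condition, action in REFLEXES:        # reflexive layer: checked first
        if condition(percept):
            return action
    return plan_next_action(percept, goal)    # deliberative layer: slower fallback

# Simulated run
for _ in range(3):
    percept = {"obstacle_distance": random.uniform(0.2, 3.0), "battery": 0.8}
    print(round(percept["obstacle_distance"], 2), "->", agent_step(percept, "loading_dock"))
```

Real architectures must, of course, run perception, reaction, and planning concurrently and let learning revise all three; the sketch only marks where those pieces would attach.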


These factors will combine with those that have existed for quite some time. To name just a few of these longer-standing factors, there is still a need for more versatile natural language–processing systems, more robust expert systems, and computational models of human and animal intelligence.

Natural Language Processing

Several important applications require more general and competent natural language abilities. These applications include systems for dictation; automated voice services using the telephone system; translation between different natural languages; interfaces to certain application programs for casual users; agents for filtering voice mail, electronic mail, and other messages; automatic abstracting; optical character recognition; and information-retrieval programs. Both natural language understanding and generation are required. The demand for these abilities will exert an unceasing and growing pressure to create the knowledge bases and programs required for general, wide-domain (we might say habile) natural language systems. The desire for better natural language–processing systems will not disappear, even though the technical problems involved are difficult and progress on solving them is slow.

The Brittleness of Expert Systems

AI application specialists acknowledge that the main defect of most expert systems is that they are very brittle. Within their specialized areas, these systems contain much more expertise than is needed by a general, intelligent system, but once off the high mesa of their specialized knowledge, they fall to the flat plain of complete ignorance. Worse, they don't even know when they are off their mesa. These expert systems need what John McCarthy (1990) calls commonsense—without it they are idiot savants. There is growing insistence that these programs be less brittle. Making their knowledge cliff less steep means extending their competence at least to semihability in the areas surrounding their field of expertise. The goal of several projects is making expert systems more flexible. One that is attempting to do so by giving such systems more general knowledge surrounding their specialized area is the How Things Work Project at Stanford University (Iwasaki and Low 1993), which is producing a knowledge base of general physical and electromechanical laws that would be useful to a wide variety of different expert systems.

The Scientific Interest to Understand How the Brain Works

One of the motivations for AI research all along has been to gain insight into mental processes. Neuroscientists, psychologists, ethologists, cognitive scientists, and AI researchers are all contributing their own results and points of view to the integrated, multilevel picture appropriate for this most difficult scientific quest. Just as knowledge of transistor physics alone is not adequate for an understanding of the computer, so also neuroscience must be combined with higher-level concepts, such as those being investigated by AI researchers, to fill out our picture of mental functioning. The steadily accumulating body of knowledge about neural processes will add to the urgency of understanding how the higher-level processes combine with the others to form a mind.

Even within AI, several approaches are being followed by people whose main interest is the scientific study of mental functioning. There is what might be called the animat approach (Wilson 1991), which holds that AI should concern itself first with building simple, insectlike artifacts and gradually work its way up the evolutionary scale (Brooks 1991). Whatever one might believe about the long-range potential for this work, it is contributing significantly to our understanding of building autonomous systems that must function in a variety of complex, real environments and, thus, reinforces the trend toward habile systems. Such work also provides a base that arguably might be necessary to support higher cognitive functions.

At a distinctly higher level is the work on SOAR (Laird et al. 1987), an architecture for general intelligence that is aimed at modeling various cognitive and learning abilities of humans. It is interesting to note that even with these general goals, the SOAR architecture can be specialized to function as an expert system for the configuration of computer systems as well as for a number of other specialized tasks (Pearson et al. 1993; Rosenbloom et al. 1985). At a similarly high level is an attempt to duplicate in computer agents some of the stages of Piagetian learning (Drescher 1991).

All these efforts are directed at understanding the common mechanisms in naturally occurring, biological individuals. The scientific quest to understand them will never cease and, thus, will always exert a pull on the development of habile systems.

Some Important Research Projects

In addition to the research efforts already mentioned, several others are quite relevant to habile systems. I'll remark on just three of the ones I know the most about.

One is the CYC Project led by Douglas Lenat (Guha and Lenat 1990). It has as its goal the building of a commonsense knowledge base containing millions of facts and their interrelationships. It is striving to encompass the knowledge that is seldom written down—knowledge, for example, that the reader of an encyclopedia is assumed to possess before reading the encyclopedia and that, indeed, is required to understand what he/she reads. It seems clear to many of us that this kind of knowledge, in some form, will be required by habile systems, in particular by any systems that are expected to use more or less unconstrained natural language. I think projects of this sort are very important to AI's long-range goals, and I agree with Marvin Minsky who said, "I find it heartbreaking [that] there still are not a dozen other such projects [like CYC] in the world" (Riecken and Minsky 1994).

Another project of general importance is the attempt to build an interlingua for knowledge representation such as the knowledge interchange format (KIF) (Genesereth and Fikes et al. 1992). For efficiency, niche applications will want their specialized knowledge in customized formats, but some of this knowledge, at least, will be the same as the knowledge needed by other niche systems. To permit knowledge sharing among different systems, knowledge must be translatable from one system's format into another's, and a common interlingua, such as KIF, greatly facilitates the translation process. Although, as some argue, it might be too early to codify standards for such an interlingua, it is not too early to begin to consider the research issues involved.
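To illustrate the idea, here is a small sketch of two niche systems sharing a fact by translating through a common neutral form. The two internal formats and the Lisp-like interchange expression are invented for this example; they are meant only to suggest the flavor of a KIF-style interlingua, not its actual version 3 syntax.

```python
# Sketch of knowledge sharing through a neutral interlingua. The two niche
# formats and the Lisp-like interchange form are invented for illustration;
# they only gesture at the flavor of KIF, not its actual syntax.

def planner_to_interlingua(fact: dict) -> str:
    # A niche planner stores facts as {"pred": ..., "args": [...]}
    return "(" + " ".join([fact["pred"], *fact["args"]]) + ")"

def interlingua_to_scheduler(expr: str) -> tuple:
    # A niche scheduler wants (predicate, arg-tuple)
    pred, *args = expr.strip("()").split()
    return pred, tuple(args)

planner_fact = {"pred": "located-at", "args": ["crate17", "dock3"]}
neutral = planner_to_interlingua(planner_fact)   # "(located-at crate17 dock3)"
print(neutral)
print(interlingua_to_scheduler(neutral))         # ('located-at', ('crate17', 'dock3'))
```

The practical argument for an interlingua is visible even in this toy: with a shared neutral form, each system needs only one translator in and one out, rather than a separate translator for every other system it might meet.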
Agents that are part of communities of agents will need knowledge of each other's cognitive structure and the way to affect the beliefs and goals in such structures through communication. Yoav Shoham's (1993) agent-oriented programming formalism is one attempt to facilitate the construction of communicating agents.

In summary, I think all these factors, old and new, suggest the strong possibility that AI will once again direct a substantial portion of its research energies toward the development of general intelligent systems.

Summary and Conclusions

AI's founding fathers, Marvin Minsky, John McCarthy, and Allen Newell, always kept their eyes on the prize—even though they pursued different paths toward it. McCarthy's (1986, 1958) work on commonsense reasoning has been aimed directly at general, intelligent systems. The same can be said for Minsky's (1975) work on structuring knowledge in frames and on his society of mind (Minsky 1986). Newell's (1990) work on production systems and SOAR focused on the same prize. Now it appears that there are strong and insistent reasons for many others also to resume work on AI's original goal of building systems with humanlike capabilities. Even though this prize might still be distant, the ultimate benefits of practical, retargetable, tool-using systems will more than repay the long-term investments.

I think there is no reason to be discouraged by the current pressures to concentrate on mission-specific research. There are now people whose very missions require the development of habile systems, and much basic research needs to be done before their needs can be satisfied. Several different architectures need to be explored. There are still many unresolved questions: Is general intelligence dependent on just a few weak methods (some still to be discovered) plus lots and lots of commonsense knowledge? Does it depend on perhaps hundreds or thousands of specialized minicompetences in a heterarchical society of mind? No one knows the answers to questions such as these, and only experiments and trials will provide these answers. We need, as Minsky recommends, 10 more CYC projects. We also need support for young investigators and postdoctorates, graduate fellowships, individual investigator-initiated grant programs, and research equipment and facilities.

With the right sort of research support, AI will now proceed along two parallel paths: (1) specialized systems and (2) habile systems. Niche systems will continue to be developed because there are so many niches where computation is cost effective. Newell (1992, p. 47) foresaw this path when he charmingly predicted that there would someday be

brakes that know how to stop on wet pavement, instruments that can converse with their users, bridges that watch out for the safety of those who cross them, streetlights that care about those who stand under them who know the way, so no one need get lost, [and] little boxes that make out your income tax for you.


He might also have mentioned vacuum cleaners that know how to vacuum rooms, garden hoses that know how to unroll themselves when needed and roll themselves back up for storage, automobiles that know where you want to go and drive you there, and thousands of other fanciful and economically important agents. Society's real world and its invented virtual worlds together will have even more niches for computational systems than the physical world does for biological ones. AI and computer science have already set about trying to fill some of these niches, a worthy, if never-ending, pursuit. But the biggest prize, I think, is for the creation of an artificial intelligence as flexible as the biological ones that will win it. Ignore the naysayers; go for it!

Acknowledgments

This article is based on invited talks given at the Iberamia '94 Conference in Caracas, Venezuela (25 October 1994), and at the Department of Computer Science, University of Washington (4 May 1995). I thank my hosts at these venues, Professors Jose Ramirez and Hector Geffner and Professor Steve Hanks, respectively. I received many valuable comments (if not complete agreement) from Peter Hart, Barbara Hayes-Roth, Andrew Kosoresow, and Ron Kohavi.

Notes

1. Vic Reis, a former director of the Advanced Research Projects Agency (ARPA), was quoted as saying that the DART system, used in deployment planning of Operation Desert Shield, justified ARPA's entire investment in AI technology (Grosz and Davis 1994).

2. I thank Ron Kohavi for bringing this citation to my attention.
References

Ball, J. E., and Ling, D. 1993. Natural Language Processing for a Conversational Assistant, Technical Report MSR-TR-93-13, Microsoft Research, Redmond, Washington.
Bates, J. 1994. The Role of Emotion in Believable Agents. Communications of the ACM 37(7): 122–125.
Benson, S., and Nilsson, N. 1995. Reacting, Planning, and Learning in an Autonomous Agent. In Machine Intelligence 14, eds. K. Furukawa, D. Michie, and S. Muggleton. Oxford, U.K.: Clarendon. In press.
Brooks, R. A. 1991. Intelligence without Representation. Artificial Intelligence 47(1–3): 139–159.
Business Week. 1992. Smart Programs Go to Work. Business Week, March 2, pp. 96–101.
Chapman, D. 1987. Planning for Conjunctive Goals. Artificial Intelligence 32:333–377.
Currie, K. W., and Tate, A. 1991. O-PLAN: The Open Planning Architecture. Artificial Intelligence 52(1): 49–86.
Deale, M.; Yvanovich, M.; Schnitzius, D.; Kautz, D.; Carpenter, M.; Zweben, M.; Davis, G.; and Daun, B. 1994. The Space Shuttle Ground Processing Scheduling System. In Intelligent Scheduling, eds. M. Zweben and M. Fox, 423–449. San Francisco: Morgan Kaufmann.
Drescher, G. 1991. Made-Up Minds: A Constructivist Approach to Artificial Intelligence. Cambridge, Mass.: MIT Press.
Dreyfus, H., and Dreyfus, S. 1985. Mind over Machine. New York: Macmillan.
Etzioni, O., and Weld, D. 1994. A Softbot-Based Interface to the Internet. Communications of the ACM 37(7): 72–76.
Feigenbaum, E.; Buchanan, B.; and Lederberg, J. 1971. On Generality and Problem Solving: A Case Study Using the DENDRAL Program. In Machine Intelligence 6, eds. B. Meltzer and D. Michie, 165–190. Edinburgh: Edinburgh University Press.
Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence 2(3–4): 189–208.
Fox, M., and Smith, S. 1984. ISIS—A Knowledge-Based System for Factory Scheduling. Expert Systems 1(1): 25–49.
Genesereth, M. 1994. Software Agents. Communications of the ACM 37(7): 48–53.
Genesereth, M. 1989. A Proposal for Research on Informable Agents, Logic-89-9, Computer Science Logic Group Report, Department of Computer Science, Stanford University.
Genesereth, M.; Fikes, R.; Bobrow, D.; Brachman, R.; Gruber, T.; Hayes, P.; Letsinger, R.; Lifschitz, V.; MacGregor, R.; McCarthy, J.; Norvig, P.; Patil, R.; and Schubert, L. 1992. Knowledge Interchange Format Version 3 Reference Manual, Logic-92-1, Computer Science Logic Group Report, Department of Computer Science, Stanford University.
Green, C. 1969. Application of Theorem Proving to Problem Solving. In Proceedings of the First International Joint Conference on Artificial Intelligence, 219–239. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.
Grosz, B., and Davis, R., eds. 1994. A Report to ARPA on Twenty-First–Century Intelligent Systems. AI Magazine 15(3): 10–20.
Guha, R., and Lenat, D. 1990. CYC: A Mid-Term Report. AI Magazine 11(3): 32–59.
Hill, G. 1994. Cyber Servants. The Wall Street Journal, 27 September.
Holusha, J. 1994. Industrial Robots Make the Grade. The New York Times, 7 September.
Iwasaki, Y., and Low, C. M. 1993. Model Generation and Simulation of Device Behavior with Continuous and Discrete Change. Intelligent Systems Engineering 1(2): 115–145.


Laird, J.; Newell, A.; and Rosenbloom, P. 1987. SOAR: An Architecture for General Intelligence. Artificial Intelligence 33:1–64.
McCarthy, J. 1990. Some Expert Systems Need Commonsense. In Formalizing Common Sense: Papers by John McCarthy, ed. V. Lifschitz, 189–197. Norwood, N.J.: Ablex.
McCarthy, J. 1986. Applications of Circumscription to Formalizing Commonsense Knowledge. Artificial Intelligence 28(1): 89–116.
McCarthy, J. 1958. Programs with Common Sense. In Mechanization of Thought Processes, Proceedings of the Symposium of the National Physics Laboratory, volume 1, 77–84. London, U.K.: Her Majesty's Stationery Office.
McCarthy, J., and Hayes, P. J. 1969. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, eds. B. Meltzer and D. Michie, 463–502. Edinburgh, U.K.: Edinburgh University Press.
Maes, P. 1994. Agents That Reduce Work and Information Overload. Communications of the ACM 37(7): 31–40.
Miller, R.; Pople, H.; and Myers, J. 1982. INTERNIST-1: An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine. New England Journal of Medicine 307:468–476.
Minsky, M. 1986. The Society of Mind. New York: Simon and Schuster.
Minsky, M. 1975. A Framework for Representing Knowledge. In The Psychology of Computer Vision, ed. P. Winston, 211–277. New York: McGraw-Hill.
Newell, A. 1992. Fairy Tales. AI Magazine 13(4): 46–48.
Newell, A. 1990. Unified Theories of Cognition. Cambridge, Mass.: Harvard University Press.
Newell, A., and Simon, H. A. 1976. Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM 19(3): 113–126.
Newell, A.; Shaw, J. C.; and Simon, H. A. 1960. Report on a General Problem-Solving Program for a Computer. In Information Processing: Proceedings of the International Conference on Information Processing, 256–264. Paris: UNESCO.
Pearson, D.; Huffman, S.; Willis, M.; Laird, J.; and Jones, R. 1993. Intelligent Multi-Level Control in a Highly Reactive Domain. In Intelligent Autonomous Systems, IAS-3, eds. F. Groen, S. Hirose, and C. Thorpe, 449–458. Washington, D.C.: IOS.
Penrose, R. 1994. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford, U.K.: Oxford University Press.
Penrose, R. 1989. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford, U.K.: Oxford University Press.
Riecken, D., and Minsky, M. 1994. A Conversation with Marvin Minsky about Agents. Communications of the ACM 37(7): 23–29.
Rosenbloom, P. 1985. R1-SOAR: An Experiment in Knowledge-Intensive Programming in a Problem-Solving Architecture. IEEE Transactions on Pattern Analysis and Machine Intelligence 7:561–569.
Sacerdoti, E. D. 1977. A Structure for Plans and Behavior. New York: Elsevier.
Searle, J. 1980. Minds, Brains, and Programs. Behavioral and Brain Sciences 3:417–457.
Shoham, Y. 1993. Agent-Oriented Programming. Artificial Intelligence 60(1): 51–92.
Stroustrup, B. 1994. The Design and Evolution of C++. Reading, Mass.: Addison-Wesley.
Sussman, G. J. 1975. A Computer Model of Skill Acquisition. New York: American Elsevier.
Tate, A. 1977. Generating Project Networks. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 888–893. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.
Turing, A. 1950. Computing Machinery and Intelligence. Mind 59:433–460.
Waldinger, R. J. 1977. Achieving Several Goals Simultaneously. In Machine Intelligence 8: Machine Representations of Knowledge, eds. E. Elcock and D. Michie, 94–136. Chichester, U.K.: Ellis Horwood.
Wilkins, D. E. 1988. Practical Planning: Extending the Classical AI Planning Paradigm. San Francisco: Morgan Kaufmann.
Wilson, S. 1991. The Animat Path to AI. In From Animals to Animats: Proceedings of the First International Conference on the Simulation of Adaptive Behavior, eds. J. A. Meyer and S. Wilson. Cambridge, Mass.: MIT Press.

Nils J. Nilsson, Kumagai Professor of Engineering in the Department of Computer Science at Stanford University, received his Ph.D. in electrical engineering from Stanford in 1958. He spent 23 years at the Artificial Intelligence Center at SRI International working on statistical and neural network approaches to pattern recognition, coinventing the A* heuristic search algorithm and the STRIPS automatic-planning system, directing work on the integrated mobile robot SHAKEY, and collaborating on the development of the PROSPECTOR expert system. He has published four textbooks on AI. Nilsson returned to Stanford in 1985 as the chairman of the Department of Computer Science, a position he held until August 1990. Besides teaching courses on AI and machine learning, he is conducting research on flexible robots that are able to react to dynamic worlds, plan courses of action, and learn from experience. Nilsson served on the editorial board of the journal Artificial Intelligence, was an area editor for the Journal of the Association for Computing Machinery, and is currently on the editorial board of the Journal of Artificial Intelligence Research. He is a past president and fellow of the American Association for Artificial Intelligence and is also a fellow of the American Association for the Advancement of Science. He is a founding director of Morgan Kaufmann Publishers, Inc. In 1993, he was elected a foreign member of the Royal Swedish Academy of Engineering Sciences.
