
AI Magazine Volume 18 Number 3 (1997) (© AAAI) Workshop Report

Intelligent Adaptive Agents: A Highlight of the Field and the AAAI-96 Workshop

Ibrahim F. Imam and Yves Kodratoff

■ There is a great dispute among researchers about the roles, characteristics, and specifications of what are called agents, intelligent agents, and adaptive agents. Most research in the field focuses on methodologies for solving specific problems (for example, communications, cooperation, architectures), and little work has been accomplished to highlight and distinguish the field of intelligent agents. As a result, more and more research is cataloged as research on intelligent agents. Therefore, it was necessary to bring together researchers working in the field to define initial boundaries, criteria, and acceptable characteristics of the field. The Workshop on Intelligent Adaptive Agents, presented as part of the Thirteenth National Conference on Artificial Intelligence, addressed these issues as well as many others that are presented in this article.

If we were to ask 10 researchers from different organizations or institutions what their personal definition of an intelligent agent is, we would most likely get 8 to 10 different answers. Moreover, if a researcher or any curious person wanted to learn about intelligent agents, he or she might get confused after reading even a few papers of the hundreds that were recently published on agency-related subjects. Reasons for this confusion include the following: First, there is no standard definition of what an intelligent agent is. (Today, almost anything can be called an agent and, typically, an intelligent agent.) Second, there are no clear goals or objectives for the agent (for example, the functions of the agent vary from implicit to explicit, systematic-mechanic to environmental, system requirement to user requirement, and simple to complex). Third, the agent-user relationship is either missing or vague.

The Workshop on Intelligent Adaptive Agents, part of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), presented state-of-the-art approaches, ideas, and methodologies for research and development of intelligent adaptive agents. The workshop consisted of two invited talks, presented by Brian Gaines and Barbara Hayes-Roth; four discussion sessions, organized and chaired by John Laird, Sandip Sen, Costas Tsatsoulis, and Kerstin Voigt; two commentary evaluations, presented by Yves Kodratoff and Brad Whitehall; and 10 papers, presented by Keith Decker, Karen Haigh, Ibrahim Imam, John Laird, Ramiro Liscano, Daniela Rus, Sandip Sen, Rahul Sukthankar, Kerstin Voigt, and Grace Yee.

Intelligence, adaptation, and agency are three terms with no standard definitions accepted by researchers in the AI community. Defining the scope of the AAAI-96 workshop and understanding these terms are two associated issues. In general, the workshop focused on research involving the three issues together or in different combinations. For example, the scope of the workshop covered research and development on intelligent adaptive methodologies for agents and intelligent agents that behave adaptively. The definitions of these terms were discussed in some presentations as well as in some discussion sessions. A summary of these discussions is presented in the next section. The research presented at the workshop can be classified according to varying criteria; these criteria and a classification of the papers are presented in the following section. A brief description of the talks presented at the workshop is given in a later section. The last section classifies the research presented at the workshop according to conceptual and systematic criteria.

What Is an Agent?

Because the workshop presented diverse definitions of what an adaptive agent is, a discussion on the definition of an agent, a society of agents, and an intelligent agent was also an important part of the workshop. Some issues that are typically discussed in defining any agent were found less important than previously believed, especially when clearly differentiating between an agent and a procedure.

Some of the main reasons for the multiplicity of definitions of an agent include the focus on the source of input to the agent (or the way the agent interacts with the outside world), the focus on the functions of the agent (which in many cases are not dynamic), and the role of the agent as a part of a multiagent society (no clear boundaries or characteristics distinguish a complex agent from a society of agents). We present here abstract definitions of the terms agent, intelligent agent, and society of agents.

Based on the assumption that any agent is a black box, a multiagent society is a group of agents that operate independently in a cooperative or a competitive environment. The term independently is used here to stress that although agents in the society might serve each other, their objective should not be limited to the service of another agent. For example, a structure of processes where each process depends on other processes to accomplish its task should be considered a single complex agent with subprocesses rather than a society of agents.

A single agent is a system or a machine (entity description) that provides assistance to the user (objective description). Describing the entity and the objective constitutes a sufficiently abstract definition of an agent. An autonomous vehicle can be considered an agent because it can provide transportation to the user. An information navigator system can be considered an agent because it searches and retrieves information for the user. A robot agent can observe the external world and inform the user about its observations. Such agents can retrieve their information from a variety of sources, but their common objective is to serve the user.

Defining an intelligent agent requires defining the methodology used by the agent. An intelligent agent is a system or a machine that utilizes inferential or complex computational methodologies to perform the set of tasks of interest to the user. These methodologies should enhance the agent's capabilities to learn. This definition distinguishes between intelligent agents in general and nonintelligent adaptive agents. An example of an adaptive agent that is not intelligent is the irrigation agent. The irrigation agent is a robot that irrigates a greenhouse based on a simple equation involving the humidity and temperature inside the greenhouse. Such an agent can irrigate the greenhouse a different number of times at different hours during each day. The external actions of the agent surely reflect adaptive behavior; however, the agent cannot be considered intelligent (some researchers, however, consider it to be intelligent).

Another way to describe the difference between intelligent and nonintelligent agents is by the degree of freedom the agent has when accomplishing the given task. In other words, the agent should have more flexibility in controlling the way it accomplishes a task, independent of any changes in the environment. This characteristic can be observed in agents using different problem-solving methodologies, control parameters, knowledge bases, different objects, and so on. The ability to infer or select among different alternatives is one of the main characteristics of an intelligent agent. Intelligent agents act as superior or control systems. Nonintelligent agents act as a function or a procedure: these functions receive a set of parameter values (representing the environment) and apply the same set of steps or operators to accomplish the task. It is true that most research on agent modeling does not distinguish between intelligent and nonintelligent agents. Moreover, distinguishing between a procedure that uses a set of parameters and an algorithm that selects the best problem-solving scenario is considered of no importance when modeling an intelligent agent. Also, it is difficult to judge whether an agent is intelligent by observing its external actions (behavior). An interesting approach for distinguishing intelligent agents can be based on the relationship between the external actions (behavior) and the internal adaptation recognized in some workshop presentations.

The workshop audience also raised an interesting set of questions about the definition of an agent: What isn't an agent? Do we consider the search for a word or the word-count routines in a word processor program as agents? Why don't we consider animals that assist handicapped or other people as agents? Following Sherlock Holmes's strategy, we started eliminating the improbable to reach the accurate. The first issue is that as long as we are limited by the boundaries of computer science, agents are either computer systems or machines using computerized systems. The second and most confusing issue is associated with the term assistance in most agent definitions. We can always claim that any computer program, system, or machine provides assistance to the user in one way or another. At the workshop, we agreed that a word-count routine is an agent (but not an intelligent agent). Also, to call a system or a machine an agent, it should be designed to serve the user.

To clarify further, we associate what is referred to as agent assistance with the user task, then categorize the different user tasks and use the categories to clearly identify the difference between agents and nonagent entities. We categorize the user tasks (jobs that can be requested by the user from the agent) into three groups:

First is the master task, which specifies a final requirement; either it has no outcome, or its outcome is an atomic object that cannot independently be processed inferentially or computationally. Examples of master tasks include counting the words in a document, purchasing a plane ticket, transportation from point A to point B, checking if the conference room is empty, picking up an object, and identifying a face.

Second is the generic task, which specifies a final requirement but whose outcome can be processed or used to accomplish another task. Examples of such tasks include retrieving information or knowledge about a subject of interest to the user and allocating an appropriate web page relevant to a given query.

Third is the intermediate task, which specifies a general requirement needed to accomplish other tasks. Examples of such tasks include learning or discovering knowledge from data, planning a course of action, and determining a list of web pages to be searched for relevant information.

Our view of what isn't an agent also includes systems or machines that perform intermediate tasks. In real life, agents that perform generic tasks are less popular than agents that perform master tasks. Now, to specify our abstract definition of what an agent is, we propose a more concrete definition: A single agent is a system or a machine that can accomplish master or generic tasks that are of interest to the user.
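The task categories and the concrete agent definition above lend themselves to a small illustration. The following Python sketch is not from any workshop paper; the class names, the enumeration, and the acceptance rule are hypothetical and only restate the distinction between master, generic, and intermediate tasks.

```python
from dataclasses import dataclass
from enum import Enum, auto


class TaskCategory(Enum):
    MASTER = auto()        # final requirement; outcome is atomic (e.g., count words)
    GENERIC = auto()       # final requirement; outcome can feed another task
    INTERMEDIATE = auto()  # general requirement needed by other tasks (e.g., learning)


@dataclass
class Task:
    name: str
    category: TaskCategory


class SingleAgent:
    """Hypothetical agent following the workshop's concrete definition:
    a system that accomplishes master or generic tasks for the user."""

    def accepts(self, task: Task) -> bool:
        # Intermediate tasks (planning, learning, ...) are viewed as internal
        # machinery rather than agent-level services to the user.
        return task.category in (TaskCategory.MASTER, TaskCategory.GENERIC)


if __name__ == "__main__":
    agent = SingleAgent()
    print(agent.accepts(Task("count words in a document", TaskCategory.MASTER)))          # True
    print(agent.accepts(Task("retrieve pages relevant to a query", TaskCategory.GENERIC)))  # True
    print(agent.accepts(Task("learn rules from data", TaskCategory.INTERMEDIATE)))          # False
```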

A Summary of Issues Discussed at the Workshop

Considering a clear understanding of what an agent is, the workshop focused on the adaptive process and its characteristics in intelligent adaptive agents. Intelligent adaptation in agents was characterized by many themes, including the goal, the cause, the systematic architecture, and the methodologies of adaptation.

The goal of adaptation: Optimization (in different forms) was the main goal in most papers. Papers that dealt with optimization using a single agent (they mainly learn to adapt) differed widely from those that dealt with multiple agents (they mainly react among themselves to adapt). In the multiagent society, all agents are based on architectures that ensure accurate and optimized interactions among agents. In single-agent applications, agents optimize either their knowledge or their problem-solving methodology.

The cause of intelligent adaptation: Changes in the environment and inconsistency are the two main causes for starting any adaptation process with intelligent agents. The workshop speakers argued about the relationship between both causes. We attempt to present initial definitions to illustrate the difference between the two causes. Environmental changes can be viewed in many ways: new objects or active agents in the environment, unusual readings from a sensor, actions taken by other agents, unexpected or new requests from a user, availability of some resources, or other observations. Self-improvement is the main motive for an agent to detect and correct inconsistencies internally and externally. Self-improvement is a somewhat more complex problem, and it might not reflect any changes in the behavior (external actions) of the agent. Agents can optimize the way they function or adapt their knowledge. Different views of the problem were introduced by many papers at the workshop.

The systematic architecture of adaptation: Two architectures for performing adaptation were presented in most of the papers. The first is a fixed architecture that is finely tuned to the task that the global system must perform. In the other category, adaptation is performed inside the system's architecture itself (that is, the agent's architecture), which evolves over time. Another remarkable issue in multiagent societies is that they offer different types of "rewards" to the individual agents to make them evolve and improve.

It is to be noted that the difference between the two approaches (the fixed system architecture and the evolving system architecture) was also illustrated by the two invited speakers. It is also worth mentioning that the two approaches were opposed in the kind of reward their agents receive. In Barbara Hayes-Roth's case, it is a hedonist "mutual satisfaction," but in Brian Gaines's case, agents compete for Spartan fulfillment of the most difficult task. One should not, however, generalize such a relationship among architectures and inducement techniques. An interesting issue for future research is to show that the inducement technique (for the agents to adapt) is good enough to lead to an efficient architecture found by the system itself.

The methodology used for adaptation: Most of the presented papers demonstrated agent systems that use inferential or complex computational algorithms. Because machine learning offers the most successful inferential algorithms, two approaches for using learning systems were presented for building intelligent adaptive agents. The first approach is to build a system (agent) that acts as an intermediary between the learning system and the environment; the agent serves as an intelligent user interface. The second approach is to modify the learning system to include timer changes, sensory input, constraint evaluation, and so on, in the learning function.

Other issues discussed at the workshop included (1) competitive and cooperative adaptive agents in a multiagent society, (2) common experiences or policies among agents in a multiagent society, (3) adaptation at different levels of knowledge, (4) adaptive control versus learning control (resolving the problem for different constraints versus solving different or similar problems), and (5) user expectation of adaptive agents.

A Summary of the Presented Talks

Barbara Hayes-Roth (Stanford University) presented the first invited talk at the workshop, entitled "Directing Improvisational Actors." The talk introduced agents as improvisational actors that can freely or jointly interact to cope with the changing scenario of a story. The agents are highly interactive, and they can answer to unusual constraints such as the "mood" (chosen by the operator) of the other agents. Such agents follow a structure of functions that separates the story line they are engaging in from the role they are ordered to perform and the performance they are expected to achieve. The talk introduced a set of interesting issues, including the handling of the generality of task requirements by agents, the interfacing between behavioral models and a real personality, and the viewing of agents differently (for example, as actor or slave).

The second invited talk, "Adaptive Interactions in Societies of Agents," was presented by Brian Gaines (University of Calgary). The continuous homogeneity of societies of agents highly depends on the adaptive interactions among the agent members. Modeling adaptive interactions among agents allows agents to account for their capabilities and task allocation. Knowledge is viewed as the state variable in these models. This talk presented a simple training strategy of allocating tasks with increasing difficulties (as the agent adapts to optimize the rate of learning or attempts to linearize the sigmoidal learning curve) that keeps the agent's performance constant.

The talk also addressed some issues that were shared by other presentations, including (1) how a program is manipulated to filter or restructure the information sources of another agent, (2) how agents communicate with each other to change the state of knowledge for one of them, (3) how to treat agents as systems at the knowledge level by assigning knowledge and goals to them, (4) how to decide for each agent what task to carry, and (5) what to learn about the relationships between tasks (an agent accomplishing certain tasks might be able to accomplish other tasks). In such models, adaptive agents can be characterized by the tasks they perform. Also, failing to accomplish a task can be cause to assign a simpler task.
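Gaines's training strategy, assigning tasks of increasing difficulty so that measured performance stays roughly constant, can be pictured as a simple control loop. The sketch below is our own illustration under assumed interfaces; the toy agent, the evaluation function, and the target band are invented and are not code from the talk.

```python
class ToyAgent:
    """Stand-in learner whose skill improves with practice (illustration only)."""

    def __init__(self):
        self.skill = 0.0

    def train_on(self, difficulty: int) -> None:
        self.skill += 0.1  # pretend every training episode adds a little skill


def evaluate(agent: ToyAgent, difficulty: int) -> float:
    """Toy performance measure in [0, 1]: success drops as difficulty outpaces skill."""
    return max(0.0, min(1.0, 1.0 - 0.2 * (difficulty - agent.skill)))


def constant_performance_curriculum(agent, target=0.75, band=0.05, steps=50):
    """Assign harder tasks when the agent is coasting and simpler tasks when it
    is failing, so measured performance hovers around a constant target level."""
    difficulty = 1
    for _ in range(steps):
        agent.train_on(difficulty)
        score = evaluate(agent, difficulty)
        if score > target + band:
            difficulty += 1          # agent is coasting: allocate a harder task
        elif score < target - band and difficulty > 1:
            difficulty -= 1          # agent is failing: fall back to a simpler task
    return difficulty


if __name__ == "__main__":
    print(constant_performance_curriculum(ToyAgent()))
```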

The first presented paper, "Adaptive Intelligent Vehicle Modules for Tactical Driving" by Rahul Sukthankar, Shumeet Baluja, John Hancock, Dean Pomerleau, and Charles Thorpe (all of Carnegie Mellon University), presented one of the difficult projects for autonomous agents, where adaptation is crucial and necessary. The project is concerned with building intelligent vehicles that can drive on real highways in mixed traffic environments. One can view the vehicle as a global intelligent agent that controls different groups of agents. Each group of agents is responsible for performing different functions, including driving tasks (for example, recognizing cars ahead or upcoming exits) and self-state tasks (for example, managing car velocity). Each agent utilizes low-level sensors and a large number of internal and external parameters to accomplish its task. Agent parameters are automatically selected by a novel population-based incremental learning algorithm. Agents work independently but cooperatively. Adaptation occurs both internally and externally. The vehicle is evaluated by an evaluation metric that considers the number of collisions, lane-keeping consistency, speed versus desired speed, and so on.

The second presented paper, "Adaptation Using Cases in Cooperative Groups" by Thomas Haynes and Sandip Sen (both of the University of Tulsa), introduced adaptation as a key component of any group of intelligent agents. Each agent learns from problem-solving cases how to adapt its model of other agents of the same group. Adapting the agent's model of other agents usually changes the course of actions that the agent can follow in different situations. The paper demonstrated a distributed AI problem called predator-prey. The problem concerns four agents (predators) attempting to capture another agent (prey). Each predator adapts its moves based on the potential moves of other predators to avoid conflicts. The paper also proposed a solution to avoid deadlock situations that result from overlearning.

The third presented paper was "Knowledge-Directed Adaptation in Multilevel Agents" by John Laird and Douglas Pearson (both of the University of Michigan) and Scott Huffman (Price Waterhouse Technology Center). Because adaptation is desired whenever errors or unexpected changes in the environment occur, it is important to detect this error or change, determine the cause (if possible), determine the correct course of modification, and adapt the agent functions to resolve such situations. The paper introduced an approach to model adaptation in agents that characterizes the adaptation process as three levels of knowledge and control: (1) a reflex level for reactive response, (2) a deliberate level for goal-driven behavior, and (3) a reflective level for plan deliberation and problem decomposition. The paper demonstrated adaptation at both the reflex and the deliberate levels. At the reflex level, the domain theory is modified and extended to determine needed actions for similar situations. At the deliberate level, the agent uses the reflective knowledge to update its course of action. The paper included an example for explaining the approach: an agent (robot) attempts to perform a task with (and without) external instructions.

The fourth presented paper was "Adaptive Methodologies for Intelligent Agents" by Ibrahim Imam (George Mason University). In this paper, intelligent adaptive agents are considered systems or machines that utilize inferential or complex computational algorithms to modify or change control parameters, knowledge bases, problem-solving methodologies, a course of action, or other objects to accomplish a set of tasks required by the user. Considering that environmental changes are the main cause of adaptation, adaptation for intelligent agents is classified into three categories based on the relationship between the internal actions and the external actions (behavior) of the agent. The first category is internal adaptation, where changes in the external environment are matched by internal changes to provide the same solution to the given task. The second category is external adaptation, where changes in the external environment directly reflect changes in the external actions of the agent. The third category is complete adaptation, where changes occur in both the internal and external actions of the agent, and adaptation is not necessarily caused by changes in the environment. The paper illustrated these categories with the applications of an intelligent travel agent and an identification agent.

The fifth presentation was "Autonomous and Adaptive Agents That Gather Information" by Daniela Rus, Robert Gray, and David Kotz (all of Dartmouth College). In a virtual reality simulation, agents can independently transfer through a network of computers to accomplish a task. This paper introduced adaptive agents as systems that can completely terminate their existence at a given location, transfer to a new location, and resume the task they are accomplishing. Such agents have the ability to (1) sense the state of the network (for example, check if the local host is connected, find out if a site is reachable, or estimate the load of the network), (2) monitor conditions of software resources (for example, monitor activities of a site or of another agent that is expected to receive or obtain information relevant to the current task), and (3) interact with other agents (for example, an agent might need to locate other agents in the network, gather information about the tasks they can perform, or request a task from another agent).

Two types of adaptation can be categorized in this work: (1) external adaptation, observed by transferring the agent from one location to a better one over the network to achieve the given task, and (2) internal adaptation, viewed as modifying the problem-solving strategy based on the information obtained from monitoring and sensing different resources and agents in the network.
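The sensing and relocation behavior described by Rus, Gray, and Kotz can be illustrated with a hedged sketch of the migrate-or-stay decision. The network interface, host names, and latency threshold below are invented for illustration; this is not the authors' system.

```python
class FakeNetwork:
    """Toy stand-in for the sensing interface a mobile agent might use."""

    def __init__(self, latencies):
        # latencies: host -> round-trip time in ms (None means unreachable)
        self.latencies = latencies

    def reachable(self, host):
        return self.latencies.get(host) is not None

    def latency_ms(self, host):
        return self.latencies[host]


def plan_step(network, data_host, threshold_ms=200):
    """Decide how a mobile information agent should adapt on this step:
    wait, migrate toward the data, or keep querying it remotely."""
    if not network.reachable(data_host):
        return "wait"             # site is down or unreachable; retry later
    if network.latency_ms(data_host) > threshold_ms:
        return "migrate"          # external adaptation: move the agent to the data
    return "query_remotely"       # internal adaptation: adjust strategy, stay put


if __name__ == "__main__":
    net = FakeNetwork({"archive.example.edu": 450,
                       "mirror.example.edu": 40,
                       "down.example.edu": None})
    for host in net.latencies:
        print(host, "->", plan_step(net, host))
```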

The sixth presentation was "Intelligent Adaptive Information Agents" by Keith Decker, Katia Sycara, and Mike Williamson (all of Carnegie Mellon University). In a multiagent society, predicting environmental changes is another approach to planning for intelligent adaptation. To predict and adapt to environmental changes in an information-based multiagent society, the paper presented an approach where a matchmaker information agent gathers organizational information about all agents' functions, each agent plans the control flow of its actions or decisions using information about the relationships between all current tasks, all agents use a flexible scheduling mechanism, and each agent can control its active execution load.

Adaptation occurs at all levels. At the organizational level, a brokering matchmaker agent can optimize the distribution of tasks among different agents to balance the load of each agent. At the planning level, adaptation is needed in certain situations, for example, when an agent becomes unavailable or goes offline, when the number of actions needed to accomplish a task is reduced, or when the reduction of some tasks depends on the completion of other tasks by other agents. At the scheduling level, agents can adjust the scheduling of new tasks whenever related tasks are about to miss their deadlines. At the execution level, agents can control their availability whenever they are overloaded with tasks. When an agent is overloaded, it can inactivate itself with respect to new tasks in the current agent group and create a clone of itself to accomplish these tasks.

The seventh presentation was "Sacrificing versus Salvaging Coherence: An Issue for Adaptive Agents in Information Navigation" by Kerstin Voigt (California State University at San Bernardino). Navigation for highly relevant information at a lower cost is the goal of any information-navigation system. The paper introduced an approach for reducing the access cost of retrieving information relevant to the given query. The paper presented a utility function for measuring the access time and the coherence of information. Agents can use the access scores from measuring the cost of retrieving specific information and restructure the information to minimize this score. Information coherence is measured from knowledge about the constraints among different information items. These constraints can include the degree of generality, where general information is presented first. The paper also introduced a penalty function for any violation of precedence constraints among information items and a multiobjective optimization of hierarchical information structures to recover coherence.

The eighth presentation was "Learning Reliability Models of Other Agents in a Multiagent Society" by Costas Tsatsoulis (University of Kansas) and Grace Yee (Lockheed Martin Missiles and Space). In a multiagent society, learning to anticipate actions of other agents improves the quality and accuracy of agent performance. The paper presented an approach for learning reliability models of other agents in a multiagent society. Each agent learns a belief model to avoid erroneous data and optimizes the global problem-solving process. A system called the error adaptation communication control system (EACCS) was introduced for solving such problems. EACCS consists of three components: (1) dependency trace, for keeping track of the path of a received message; (2) blame assignment, for assigning blame to the paths supplying inconsistent data and allocating reliability values to the agent communication data; and (3) contradiction resolution, for resolving contradictory information. Adaptation occurs whenever the agent encounters a new contradiction or inconsistency. The agent traces the source of inconsistency from all agents involved in providing the information. The agent models of the error-generating agent are then updated, and the system isolates the agent. Adaptation can also occur during the conflict-resolution phase when, for example, the sensory data are accurate, and the agent knowledge base is not.
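One minimal way to picture the reliability models described above: keep a score for every peer, lower the scores of all agents on the dependency path that supplied contradicted data, and isolate peers whose score falls below a threshold. The update rule, penalty values, and threshold in this sketch are assumptions of ours, not EACCS itself.

```python
class ReliabilityModel:
    """Hedged sketch of learning reliability scores for other agents; the
    blame-assignment rule and isolation threshold are illustrative only."""

    def __init__(self, peers, isolate_below=0.2):
        self.score = {p: 1.0 for p in peers}   # start by fully trusting every peer
        self.isolate_below = isolate_below

    def report_contradiction(self, blamed_path, penalty=0.25):
        # The dependency trace gives the path of agents that supplied the bad
        # data; every agent on that path shares the blame.
        for peer in blamed_path:
            self.score[peer] = max(0.0, self.score[peer] - penalty)

    def report_confirmation(self, peer, reward=0.05):
        self.score[peer] = min(1.0, self.score[peer] + reward)

    def trusted_peers(self):
        # Agents whose reliability has dropped below the threshold are isolated.
        return [p for p, s in self.score.items() if s >= self.isolate_below]


if __name__ == "__main__":
    model = ReliabilityModel(["sensor_agent", "map_agent", "relay_agent"])
    model.report_contradiction(["relay_agent", "map_agent"])
    for _ in range(3):
        model.report_contradiction(["relay_agent"])
    print(model.trusted_peers())   # relay_agent is isolated after repeated blame
```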
The ninth presentation was "Using Perception Information for Robot Planning and Execution" by Karen Haigh and Manuela Veloso (both of Carnegie Mellon University). If the mind controls the body, the body is the main information tributary to the mind. This paper presented an approach for adapting the domain models of a planner based on a robot's direct observations of the environment. The approach introduced the agent ROGUE, which uses the planning and learning system PRODIGY to support the robot XAVIER in performing physical tasks. After performing each task, the robot XAVIER provides the agent ROGUE with its observations about the environment. ROGUE responds to the observations by dynamically updating the domain model. This update can affect the set of tasks that the robot needs to perform. The robot then starts performing tasks of the modified plan.

The paper illustrated the approach with an example where information is transferred between the two systems. Goals are classified according to importance, and the system attempts to opportunistically achieve less important tasks while it accomplishes the important tasks. The approach demonstrates the system's ability to respond to the normal dynamics of a real-world office environment while it performs common office tasks. Adaptation occurred on the planning level and the plan-execution level (by the robot). On the planning level, plans are updated online and in accordance with new observations. On the execution level, the robot gets new feedback that guides its future actions.

The tenth presentation was "Cooperative Agents That Adapt for Seamless Messaging in Heterogeneous Communication Networks" by Suhayya Abu-Hakima, Ramiro Liscano, and Roger Impey (all of the National Research Council of Canada). When information is mixed with voices, multimedia, and faxes in one carrier, real-time actions are crucial to the success of communication systems. This paper presented an approach for utilizing groups of agents to solve the problem of seamless messaging in heterogeneous communications networks. The paper introduced a framework for solving the problem using different groups of agents. Two types of agents were used for solving different aspects of the problem: (1) fine grained and (2) coarse grained. The fine-grained agents are responsible for simplifying the communication load between the user (or the device agents) and the agent responsible for message distribution. Coarse-grained (user) agents manage the user environment by observing actions and learning models of behaviors. Surrogate agents transfer across the network as messengers among user agents. Message-transfer agents mediate in delivering messages. Other coarse-grained agents with different functions are also introduced in this paper.
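A rough sketch of the fine-grained versus coarse-grained division just described: a device-level agent normalizes whatever a carrier delivers, and a user-level agent picks a delivery channel from a model of the user's observed preferences. The classes, the preference table, and the routing rule below are invented for illustration; the paper's surrogate and message-transfer agents are omitted.

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    medium: str      # "fax", "voice", "email", ...
    body: str


class DeviceAgent:
    """Fine-grained agent: wraps one device and normalizes whatever it receives."""

    def __init__(self, medium):
        self.medium = medium

    def receive(self, sender, body):
        return Message(sender, self.medium, body)


class UserAgent:
    """Coarse-grained agent: manages the user environment and picks a delivery
    channel from a (here hard-coded) model of the user's observed behavior."""

    def __init__(self, preferences):
        self.preferences = preferences        # medium -> preferred delivery channel

    def route(self, message):
        channel = self.preferences.get(message.medium, "inbox")
        return f"deliver {message.medium} from {message.sender} via {channel}"


if __name__ == "__main__":
    fax_agent = DeviceAgent("fax")
    user_agent = UserAgent({"fax": "email attachment", "voice": "text transcript"})
    msg = fax_agent.receive("travel office", "itinerary page 1")
    print(user_agent.route(msg))
```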


A Synthetic Summary of the Presentations

As described in the opening, the research presented at the workshop can be categorized according to different criteria. Other views of the presented papers are possible and might well bring some light on the topic.

Single Agents

In most papers concerned with intelligent adaptive single agents, it seems that the presented research can be described as choosing a preferred machine-learning system and modifying it to include either time changes or adaptation under constraints, yielding an (often highly) intelligent agent. Under this categorization fall the following papers:

Haigh and Veloso: Their machine-learning system is PRODIGY, which modifies its model of the world under failure. It is worth noting that their application field is robotics, advanced enough to compete at the American Association for Artificial Intelligence competition.

Laird et al.: Their machine-learning system SOAR represents three kinds of knowledge. Adaptation is done by modifying the deliberate knowledge, which is a kind of model of the world.

Imam: His machine-learning system is AQDT, which adapts by modifying its knowledge base, again a sort of model of the world.

Voigt: An apparent exception to this classification is the paper by Voigt because it focuses on a problem instead of a system. The paper dealt with the so-called unsupervised learning paradigm, that is, learning to change the structure of knowledge under comprehensibility constraints. The proposed solution to the problem is far less sophisticated than machine-learning systems such as COBWEB; hence, the learning is quite primitive. Inversely, the approach contains a sophisticated system to handle the efficiency of changes.

Multiagent Societies

In most papers concerned with intelligent adaptive multiagent societies, two approaches can be recognized: (1) an approach with a fixed system architecture and (2) an approach with an evolving system architecture. With regard to the multiagent society with a fixed architecture, the following papers were presented:

Hayes-Roth presented highly interactive agents because they can answer to nonclassical constraints such as the mood (chosen by the operator) of the other agents.

Abu-Hakima et al. showed an architecture for messaging in heterogeneous networks and defined the different types of agents needed for this task.

Decker et al. described a system with a large number of different agents, each of them enabled with some simple adaptability, such that the whole of them is highly adaptable.

Rus et al. addressed a problem quite similar to Abu-Hakima et al. Their solution relies on a definition of sensing the network.

With regard to a multiagent society with an evolving architecture, the following papers were presented:

Gaines's system architecture will be less precisely defined, and the game is now to prove that the inducement technique (for the agents to adapt) is good enough to lead to an efficient architecture found by the system itself.

Tsatsoulis and Yee defined agents that learn and evolve following the reliability of their competitors.

Haynes and Sen used a case-based reasoning technique to learn and induce changes following the agent's capacity to evolve.

Sukthankar et al. is yet another apparent exception to this classification. It considers the system of agents as black boxes, of which the system sees only a set of parameters. Evolving here involves optimizing a set of parameters. The optimization technique is hill climbing, which allows faster convergence than most sophisticated approaches. Genetic algorithms or neural networks might yield more precise results, but they require more computation time.
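The parameter-optimization view taken in this classification (agents as black boxes behind a parameter vector, tuned by hill climbing) can be illustrated as follows. The evaluation function, step size, and parameter ranges are invented stand-ins for a driving metric; this is a generic hill climber for illustration, not the system's actual learning algorithm.

```python
import random


def evaluate(params):
    """Stand-in black-box metric: higher is better (penalize distance from some
    unknown 'good' setting). A real system would score collisions, lane keeping, etc."""
    target = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))


def hill_climb(n_params=3, steps=500, step_size=0.1, seed=0):
    """Simple hill climbing over a black-box parameter vector: keep a random
    perturbation only if it improves the evaluation score."""
    rng = random.Random(seed)
    params = [rng.uniform(-2, 2) for _ in range(n_params)]
    best = evaluate(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, step_size) for p in params]
        score = evaluate(candidate)
        if score > best:               # greedy: accept only improvements
            params, best = candidate, score
    return params, best


if __name__ == "__main__":
    print(hill_climb())
```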
Conclusion

The workshop was successful in addressing interesting and difficult problems associated with research and development of intelligent adaptive agents. One of the most recognizable achievements is the general framework drawn by all the attendees to define the field of intelligent adaptive agents. Thoughts about this framework include that the general goal of intelligent adaptive agents should be oriented toward serving the user and that intelligent adaptation should generally be motivated to improve either the agent's services or its performance. In multiagent societies, more problems and research issues should be considered; however, the whole society, as well as any single agent, should be evaluated by its objectives and services.

Acknowledgments

The authors would like to thank Keith Decker, Karen Haigh, Ken Kaufman, Jim Mitchell, and Katia Sycara for their comments, suggestions, and a careful review of an earlier draft of this article.

Ibrahim F. Imam is a research associate at George Mason University. He received his B.Sc. in mathematical statistics from Cairo University in 1986 and his M.S. and Ph.D. in information technology and engineering from George Mason University in 1992 and 1995, respectively. Imam's research interests are in intelligent agents, machine learning, adaptive systems, knowledge discovery in databases, and knowledge-based systems. Since 1993, he has authored or coauthored over 30 publications in refereed journals, books, and conference and workshop proceedings. His e-mail address is [email protected].

Yves Kodratoff is a director of research at the French Centre National de la Recherche Scientifique at the University of Paris-Sud. His interests cover all kinds of inductive inference systems, including those that merge symbolic and numeric learning techniques, especially with respect to applications in data mining and knowledge discovery in databases. He is also working on C3I to gather the field knowledge that allows an intelligence officer to guess what the enemy is planning. Other interests include inductive logic programming, the application of machine learning to vision, the use of explanation patterns in plan recognition, the inclusion of knowledge in neural networks, and epistemological aspects of machine learning. He helped to create three different spin-offs: (1) Intellisoft in 1988, (2) ISoft in 1990, and (3) EPCAD in 1995. He has published over 100 books, papers, and other publications on machine learning. His e-mail address is [email protected].
