AI Magazine Volume 29 Number 2 (2008) (© AAAI)


Electric Elves: What Went Wrong and Why

Milind Tambe, Emma Bowring, Jonathan P. Pearce, Pradeep Varakantham, Paul Scerri, and David V. Pynadath

Software personal assistants continue to be a topic of significant research interest. This article outlines some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment. In the Electric Elves project, a team of almost a dozen personal assistant agents was continually active for seven months. Each elf (agent) represented one person and assisted in daily activities in an actual office environment. This project led to several important observations about privacy, adjustable autonomy, and social norms in office environments. In addition to outlining some of the key lessons learned, we outline our continued research to address some of the concerns raised.

The topic of software personal assistants, particularly for office environments, is of continued and growing research interest (Scerri, Pynadath, and Tambe 2002; Maheswaran et al. 2004; Modi and Veloso 2005; Pynadath and Tambe 2003).¹ The goal is to provide software agent assistants for individuals in an office as well as software agents that represent shared office resources. The resulting set of agents coordinates as a team to facilitate routine office activities. This article outlines some key lessons learned during the successful deployment of a team of a dozen agents, called Electric Elves (E-Elves), which ran continually from June 2000 to December 2000 at the Information Sciences Institute (ISI) at the University of Southern California (USC) (Scerri, Pynadath, and Tambe 2002; Chalupsky et al. 2002; Pynadath and Tambe 2003, 2001; Pynadath et al. 2000). Each elf (agent) acted as an assistant to one person and aided in the daily activities of an actual office environment. Originally, the E-Elves project was designed to focus on team coordination among software agents. While team coordination remained an interesting challenge, several other unanticipated research issues came to the fore. Among these new issues were adjustable autonomy (agents dynamically adjusting their own level of autonomy) as well as privacy and social norms in office environments.

Several earlier publications outline the primary technical contributions of E-Elves and the research inspired by E-Elves in detail. The goal of this article, however, is to highlight some of what went wrong in the E-Elves project and to provide a broad overview of technical advances in the areas of concern without providing specific technical details.

Description of Electric Elves

The Electric Elves project deployed an agent organization at USC/ISI to support daily activities in a human organization (Pynadath and Tambe 2003; Chalupsky et al. 2002). Dozens of routine tasks are required to ensure coherence in a human organization's activities, for example, monitoring the status of activities, gathering information relevant to the organization, and keeping everyone in the organization informed. Teams of software agents can aid humans in accomplishing these tasks, facilitating the organization's coherent functioning while reducing the burden on humans.

The overall design of the E-Elves is shown in figure 1. Each proxy is called Friday (after Robinson Crusoe's manservant Friday) and acts on behalf of its user in the agent team. The basic design of the Friday proxies is discussed in detail in Pynadath and Tambe (2003) and Tambe, Pynadath, and Chauvat (2000), where they are referred to as TEAMCORE proxies. Friday can perform a variety of tasks for its user. If a user is delayed to a meeting, Friday can reschedule the meeting, informing other Fridays, who in turn inform their users. If there is an open research presentation slot, Friday may respond to the invitation to present on behalf of its user.

Copyright © 2008, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

Figure 1. Overall E-Elves Architecture, Showing Friday Agents Interacting with Users. (The original figure depicts Friday proxies connected by a communication broadcast net.)

Friday can also order its user's meals (see figure 2) and facilitate informal meetings by posting the user's location on a web page. Friday communicates with users through user workstations and through wireless devices such as personal digital assistants (Palm VIIs) and WAP-enabled mobile phones, which, when connected to a global positioning system (GPS) device, track users' locations and enable wireless communication between Friday and a user.


Figure 2. Friday Asking the User for Input.

Each Friday's team behavior is based on a teamwork model called STEAM (Tambe 1997). STEAM encodes and enforces the constraints among roles that are required for the success of the joint activity. For example, meeting attendees should arrive at a meeting simultaneously. When an important role within the team (such as the role of presenter for a research meeting) opens up, the team needs to find the best person to fill that role. To achieve this, the team auctions off the role, taking into consideration complex combinations of factors and assigning the best-suited agent or user. Friday can bid on behalf of its user, indicating whether its user is capable and/or willing to fill a particular role.
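To make the role-auction idea concrete, here is a minimal sketch of allocating a presenter role to the best-suited bidder. This is an illustration only, not the actual STEAM machinery: the bid attributes, scoring weights, and names are invented.

```python
# Illustrative sketch of an E-Elves-style role auction; the attributes
# and weights are invented, not the actual STEAM implementation.
from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    capable: bool            # can the user perform the role?
    willing: bool            # is the user willing to perform it?
    busyness: float          # 0.0 (free) .. 1.0 (fully booked)
    recently_presented: bool

def score(bid: Bid) -> float:
    """Higher is better; incapable or unwilling users score zero."""
    if not (bid.capable and bid.willing):
        return 0.0
    s = 1.0 - bid.busyness
    if bid.recently_presented:   # spread the load across the team
        s *= 0.1
    return s

def allocate_role(bids: list[Bid]) -> str | None:
    """Assign the role to the highest-scoring bidder, if any qualify."""
    best = max(bids, key=score, default=None)
    return best.user if best and score(best) > 0 else None

if __name__ == "__main__":
    bids = [
        Bid("tambe", True, True, 0.9, recently_presented=True),
        Bid("scerri", True, True, 0.4, recently_presented=False),
        Bid("pynadath", True, False, 0.2, recently_presented=False),
    ]
    print(allocate_role(bids))  # -> "scerri"
```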
Adjustable Autonomy

Adjustable autonomy (AA) is clearly important to the E-Elves because, despite the range of sensing devices, Friday has considerable uncertainty about the user's intentions and even location; hence Friday will not always be capable of making good decisions. On the other hand, while the user can make good decisions, Friday cannot continually ask the user for input, because doing so wastes the user's valuable time. We illustrate the AA problem by focusing on the key example of meeting rescheduling in E-Elves. A central task for the E-Elves is ensuring the simultaneous arrival of attendees at a meeting. If any attendee arrives late, or not at all, the time of all the attendees is wasted. On the other hand, delaying a meeting is disruptive to users' schedules. Friday acts as proxy for its user, so its responsibility is to ensure that its user arrives at the meeting at the same time as other users. Clearly, the user will often be better able to determine whether he or she needs the meeting to be delayed. However, if the agent transfers control to the user for the decision, it must guard against miscoordination while waiting for the user's response, especially if the response is not forthcoming, such as when the user is in another meeting. Some decisions are potentially costly (for example, rescheduling a meeting to the following day), so an agent should avoid taking them autonomously. To buy more time for the user to make a decision, an agent has the option of delaying the meeting (that is, changing coordination constraints). Overall, the agent has three options: make an autonomous decision, transfer control, or change coordination constraints. The autonomy reasoning must select from these actions while balancing the various competing influences.

Lessons from Electric Elves

Our first attempt to address AA in E-Elves was to learn from user input, in particular by using decision tree learning based on C4.5. In training mode, Friday recorded the values of a dozen carefully selected attributes and the user's preferred action (identified by asking the user) whenever it had to make a decision. Friday used the data to learn a decision tree that encoded various rules. For example, it learned the rule:

IF two-person meeting with important person AND user not at department at meeting time THEN delay the meeting 15 minutes.

During training, Friday also asked whether the user wanted such decisions taken autonomously in the future. From these responses, Friday used C4.5 to learn a second decision tree, which encoded its AA reasoning. Initial tests with the C4.5 approach were promising (Pynadath and Tambe 2003), but a key problem soon became apparent. When Friday encountered a decision for which it had learned to transfer control to the user, it would wait indefinitely for the user to make the decision, even though this inaction could lead to miscoordination with teammates if the user did not respond or attend the meeting. To address this problem, a fixed time limit (five minutes) was added, and if the user did not respond within the time limit, Friday took an autonomous action (the one it had learned to be the user's preferred action). This led to improved performance, and the problem uncovered in initial tests appeared to have been addressed.
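As an illustration of this training loop, the sketch below learns a rescheduling policy from logged decisions. The attributes and training data are invented, and scikit-learn's CART-style DecisionTreeClassifier stands in for C4.5, which it resembles but does not reproduce exactly.

```python
# Minimal sketch of learning Friday-style rescheduling rules from logged
# user decisions. The attributes and data are invented, and scikit-learn's
# CART trees stand in for C4.5.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [minutes_until_meeting, user_at_department (0/1),
#            attendee_importance (0-2), is_two_person_meeting (0/1)]
X = [
    [30, 1, 2, 1], [10, 0, 2, 1], [5, 0, 1, 0],
    [45, 1, 0, 0], [15, 0, 2, 1], [60, 1, 1, 0],
]
# The user's preferred action, recorded whenever Friday asked.
y = ["attend", "delay_15", "delay_15", "attend", "delay_15", "attend"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[
    "mins_until_meeting", "user_at_dept", "importance", "two_person"]))

# Predict the action for a new situation: two-person meeting with an
# important person, user not at the department ten minutes beforehand.
print(tree.predict([[10, 0, 2, 1]])[0])  # e.g. "delay_15"
```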

Unfortunately, when the E-Elves were first deployed 24/7, there were some dramatic failures. In example 1, Friday autonomously cancelled a meeting with the division director because it had overgeneralized from training examples. In example 2, Friday incorrectly cancelled the group's weekly research meeting when a timeout forced the choice of an autonomous action after Pynadath did not respond. In example 3, Friday delayed a meeting almost 50 times, each time by 5 minutes; it was correctly applying a learned rule but ignoring the nuisance to the rest of the meeting participants. Finally, in example 4, Friday automatically volunteered Tambe for a presentation, but he was actually unwilling to participate. Again, Friday had overgeneralized from a few examples and, when a timeout occurred, had taken an undesirable autonomous action.

From the growing list of failures, it became clear that the C4.5 approach faced some significant problems. Indeed, AA in a team context requires more careful reasoning about the costs and benefits of acting autonomously versus transferring control, and it needs to better deal with contingencies. In particular, an agent needs to avoid risky decisions (like example 1) by taking a lower-risk delaying action to buy the user more time to respond; deal with failures of the user to respond quickly (examples 2 and 4); and plan ahead to avoid taking costly sequences of actions that could be replaced by a single less costly action (example 3). In theory, using C4.5, Friday might eventually have been able to learn rules that would successfully balance costs, deal with uncertainty, and handle all the special cases, but a very large amount of training data would be required, even for this relatively simple decision. Given our experience, we decided to pursue an alternative approach that explicitly considered costs and uncertainties.

Ongoing Research

To address the early failures in AA, we wanted a mechanism that met three important requirements. First, it should allow us to explicitly represent and reason about different types of costs as well as uncertainty, such as the cost of miscoordination versus the cost of taking an erroneous action. Second, it should allow lookahead, to plan a systematic transfer of decision-making control and provide a response that is better in the longer term (for situations such as a nonresponsive user). Finally, it should allow us to encode significant quantities of initial domain knowledge, particularly costs and uncertainty, so that the agent does not have to learn everything from scratch (as was required with C4.5). Markov decision processes (MDPs) fit these requirements, and so, in a second incarnation of E-Elves, MDPs were invoked for each decision that Friday made: rescheduling meetings, delaying meetings, volunteering a user for a presentation, or ordering meals (Scerri, Pynadath, and Tambe 2002). Although MDPs were able to support sequential decision making in the presence of transitional uncertainty (uncertainty in the outcomes of actions), they were hampered by not being able to handle observational uncertainty (uncertainty in sensing).
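As a rough illustration of the MDP-based reasoning in this second incarnation, the sketch below solves a tiny transfer-of-control MDP by value iteration. The states, transition probabilities, and rewards are invented; the deployed E-Elves models were much richer. Even this toy model prefers the low-risk delaying action when the user is unresponsive.

```python
# Toy transfer-of-control MDP in the spirit of the E-Elves AA reasoning,
# solved by value iteration. States, probabilities, and rewards are
# invented for illustration.
GAMMA = 0.95
STATES = ["user_responsive", "user_silent", "decided"]
ACTIONS = ["ask_user", "delay_meeting", "act_autonomously"]

# T[s][a] = list of (next_state, probability); R[s][a] = expected reward.
T = {
    "user_responsive": {
        "ask_user":         [("decided", 0.9), ("user_silent", 0.1)],
        "delay_meeting":    [("user_responsive", 1.0)],
        "act_autonomously": [("decided", 1.0)],
    },
    "user_silent": {
        "ask_user":         [("user_silent", 0.8), ("decided", 0.2)],
        "delay_meeting":    [("user_responsive", 0.5), ("user_silent", 0.5)],
        "act_autonomously": [("decided", 1.0)],
    },
    "decided": {a: [("decided", 1.0)] for a in ACTIONS},
}
R = {
    "user_responsive": {"ask_user": 10, "delay_meeting": -1, "act_autonomously": 2},
    "user_silent":     {"ask_user": -5, "delay_meeting": -1, "act_autonomously": 2},
    "decided":         {a: 0 for a in ACTIONS},
}

V = {s: 0.0 for s in STATES}
for _ in range(200):  # value iteration to (near) convergence
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in T[s][a])
                for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: R[s][a] + GAMMA *
                 sum(p * V[s2] for s2, p in T[s][a])) for s in STATES}
print(policy)  # asks a responsive user; delays to buy time when silent
```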
Specifically, Friday's "sensing" was very coarse, and while Friday might follow an appropriate course of action when its observations were correct, when they were incorrect its actions were very poor. In a project inspired by E-Elves, we took the natural next step to address this issue by using partially observable MDPs (POMDPs) to model observational uncertainty and find appropriate courses of action with respect to this observational uncertainty. However, existing techniques for solving POMDPs either provide loose quality guarantees on solutions (approximate algorithms) or are computationally very expensive (exact algorithms). Our recent research has developed efficient exact algorithms for POMDPs, deployed in service of adjustable autonomy, by exploiting the notions of progress or physical limitations in the environment. The key insight was that, given an initial (possibly uncertain) set of starting states, the agent needs to be prepared to act only in a limited range of belief states; most other belief states are simply unreachable given the dynamics of the monitored process, so no action needs to be generated for such belief states. These bounds on the belief probabilities are obtained using Lagrangian techniques in polynomial time (Varakantham et al. 2007; Varakantham, Maheswaran, and Tambe 2005).

We tested this enhanced algorithm against two of the fastest exact algorithms: generalized incremental pruning (GIP) and region-based incremental pruning (RBIP). Our enhancements in fact provide orders of magnitude speedup over RBIP and GIP in problems taken from the meeting rescheduling of Electric Elves, as illustrated in figure 3. In the figure, the x-axis shows four separate problem instances, and the y-axis shows the run time in seconds (the problem runs were cut off at 20,000 seconds). DSGIP is our enhanced algorithm, and it is seen to be at least an order of magnitude faster than the other algorithms.
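The belief-space idea behind this speedup can be illustrated in a few lines. The following sketch performs standard Bayesian belief updates and enumerates the beliefs reachable from an initial belief by brute force; the model numbers are invented, and the actual DSGIP bounds are computed with Lagrangian techniques rather than enumeration.

```python
# Sketch of POMDP belief updating and reachable-belief enumeration.
# Model numbers are invented; the real bound computation uses Lagrangian
# techniques, not this brute-force enumeration.
HIDDEN = ["user_free", "user_busy"]
OBS = ["at_desk", "away"]

# P(s' | s): the monitored process drifts slowly between states.
TRANS = {"user_free": {"user_free": 0.9, "user_busy": 0.1},
         "user_busy": {"user_free": 0.2, "user_busy": 0.8}}
# P(o | s'): coarse, noisy sensing of the user's state.
SENSE = {"user_free": {"at_desk": 0.8, "away": 0.2},
         "user_busy": {"at_desk": 0.3, "away": 0.7}}

def update(belief: dict, obs: str) -> dict:
    """Bayesian belief update: predict via TRANS, then weight by SENSE."""
    predicted = {s2: sum(belief[s] * TRANS[s][s2] for s in HIDDEN)
                 for s2 in HIDDEN}
    unnorm = {s2: SENSE[s2][obs] * predicted[s2] for s2 in HIDDEN}
    z = sum(unnorm.values())
    return {s2: p / z for s2, p in unnorm.items()}

def reachable(belief: dict, horizon: int) -> set:
    """Enumerate beliefs reachable within `horizon` observations."""
    frontier, seen = [belief], set()
    for _ in range(horizon):
        frontier = [update(b, o) for b in frontier for o in OBS]
        seen |= {tuple(round(b[s], 3) for s in HIDDEN) for b in frontier}
    return seen

b0 = {"user_free": 0.5, "user_busy": 0.5}
print(update(b0, "away"))
# Only a narrow band of beliefs is reachable from b0, so a policy need
# not be computed for the rest of the belief simplex.
print(sorted(reachable(b0, horizon=3)))
```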



Figure 3. Speedup over RBIP and GIP.

Another issue that arose during the MDP implementation of E-Elves is possibly of special relevance to personalization in software assistants, above and beyond E-Elves. In particular, both MDPs and POMDPs rely on knowing the probability of events occurring in the environment. Clearly, these probabilities varied from user to user, and hence it was natural to apply learning to adjust these parameters. While the learning itself was effective, the fact that Friday did not necessarily behave the same way each day could be disconcerting to users, even if the new behavior might actually be "better." The problem was that Friday would change its behavior without warning, after users had adjusted to its (imperfect) behavior. Later work (Pynadath and Tambe 2001) addressed this by allowing users to add hand-constructed inviolable constraints.

Privacy

Just as with adjustable autonomy, privacy was another area of research that was not initially considered important in Electric Elves. Unfortunately, while several privacy-related problems became apparent, no systematic solutions were developed during the course of the project. We will describe some of the problematic instances of privacy loss and then some recent steps, inspired by the E-Elves insights, to quantitatively measure privacy loss.

Lessons from Electric Elves

We begin with a few arenas where privacy issues were immediately brought to the forefront. First, a key part of E-Elves was to assist users in locating other users to facilitate collaborative activities; knowing that a user is in his or her office, for example, helps determine whether it is worth walking down to that person's office to engage in discussions. This was especially relevant in our domain because ISI and the main USC campus are across town from each other. Unfortunately, making a user's GPS location available to other project members at all times (even though GPS capabilities were switched off outside work hours) was a very significant invasion of privacy, and it led to overly transparent tracking of people's locations. For instance, when user Tambe was delayed to an early morning 8:30 a.m. meeting, he blamed the delay on Los Angeles freeway traffic. However, one of the meeting attendees had access to Tambe's GPS data, so he could remark that the delay was not, as suggested, because of traffic, but rather because Tambe was eating breakfast at a small cafe next to ISI.

Second, even when such obviously intrusive location monitoring was switched off and the E-Elves only indicated whether or not a user was in his or her office, privacy loss still occurred. For instance, one standard technique for Tambe to avoid getting interrupted was to hide in the office and simulate being away.



While finishing a particularly important proposal, he simulated being away by switching off his office lights, locking the door, and not responding to knocks on the door or telephone calls from colleagues. However, to his surprise, a colleague sent him an e-mail saying that the colleague knew Tambe was in the office, because his elf was still transmitting that fact to others.

Third, E-Elves monitored users' patterns of daily activities, including statistics on users' actions related to various meetings, such as whether a user was delayed to a meeting, whether he or she attended a meeting, and whether the user cancelled meetings. These detailed statistics were another source of privacy loss when they were made available to other users, in this case to a student who was interested in running machine learning on the data. The student noticed, and pointed out to a senior researcher, that when the researcher's meetings were with students, he was always late by five minutes, while he was punctual for his meetings with other senior researchers.

Fourth, one of the parameters used in determining meeting importance was the importance attached to each of the people in the meeting. An agent used this information to determine the actions to take with respect to a meeting; for example, canceling a meeting with someone very important in the organization was to be avoided. Such information about user importance was supposed to be private but was not considered particularly controversial. Unfortunately, when information about meeting importance from someone else's elf was accidentally leaked to Ph.D. students, a minor controversy ensued: were Ph.D. students less important than other researchers at ISI? Part of the complicated explanation provided was that meetings with Ph.D. students were not any less important, but rather that meeting importance rolled multiple factors into one: not only the importance of the meeting but also the difficulty of scheduling it and its urgency.



Figure 4. Privacy Loss for the SynchBB Algorithm Using Six Different VPS Metrics.

Ongoing Research

Our subsequent research on privacy has focused primarily on the last issue, that of private information being leaked during negotiations between team members. These negotiations often took the form of distributed constraint optimization problems (DCOP) (Modi et al. 2005; Mailler and Lesser 2004), in which cooperative agents exchange messages in order to optimize a global objective function to which each agent contributes. For example, agents may try to optimize a global schedule of meetings by setting their individual schedules.

Many algorithms exist for solving such problems. However, it was not clear which algorithms preserved more privacy than others or, more fundamentally, what metrics should be used for measuring the privacy loss of each algorithm. While researchers had begun to propose metrics for the analysis of privacy loss in multiagent algorithms for distributed optimization problems, a general quantitative framework to compare these existing metrics for privacy loss, or to identify dimensions along which to construct new metrics, was lacking.
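For concreteness, here is a toy DCOP-style formulation of the meeting-scheduling objective, with invented preferences. A real DCOP algorithm (SynchBB, Adopt, OptAPO) would optimize this objective by message passing among agents; the centralized enumeration below is only for illustration.

```python
# Toy meeting-scheduling DCOP: each agent picks a time slot for a shared
# meeting; the global objective sums per-agent preferences minus a large
# penalty for disagreement. Data are invented; real DCOP algorithms
# solve this via distributed message passing, not central enumeration.
from itertools import product

SLOTS = [9, 10, 11]
# Private per-agent utility for holding the meeting at each slot.
PREFS = {
    "alice": {9: 2, 10: 5, 11: 1},
    "bob":   {9: 4, 10: 3, 11: 0},
    "carol": {9: 1, 10: 4, 11: 5},
}

def global_utility(assignment: dict) -> int:
    """Sum of private preferences, minus a penalty per disagreeing pair."""
    util = sum(PREFS[a][slot] for a, slot in assignment.items())
    agents = list(assignment)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if assignment[a] != assignment[b]:
                util -= 10  # coordination constraint on simultaneity
    return util

best = max(
    (dict(zip(PREFS, combo)) for combo in product(SLOTS, repeat=len(PREFS))),
    key=global_utility,
)
print(best, global_utility(best))  # all agents converge on the 10 a.m. slot
```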


To address this question, we introduced valuations of possible states (VPS) (Maheswaran et al. 2006), a general quantitative framework to express, analyze, and compare existing metrics of privacy loss. With VPS, we quantify an agent's privacy loss to others in a multiagent system using as a basis the possible states the agent can be in. In particular, this quantification is based on an agent's valuation of the other agents' estimates about (a probability distribution over) its own possible states. For example, the agent may value poorly the fact that other agents are almost certain about the agent's possible state.
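As a toy illustration of one entropy-style metric in this spirit (with invented numbers; see Maheswaran et al. [2006] for the precise metric definitions): privacy loss can be measured by how much another agent's estimate of one's possible states sharpens.

```python
# Toy entropy-style VPS-flavored metric with invented numbers. An agent
# has a set of possible states (e.g., free/busy in each time slot);
# privacy loss grows as another agent's estimate of those states sharpens.
import math

def entropy(dist: list[float]) -> float:
    return -sum(p * math.log2(p) for p in dist if p > 0)

def privacy_loss(prior: list[float], posterior: list[float]) -> float:
    """1 = estimate fully sharpened (all privacy lost), 0 = no change."""
    h0 = entropy(prior)
    return (h0 - entropy(posterior)) / h0 if h0 else 0.0

# Before negotiation, a teammate considers four states equally likely.
prior = [0.25, 0.25, 0.25, 0.25]
# Exchanged DCOP messages let the teammate rule out two states.
posterior = [0.5, 0.5, 0.0, 0.0]
print(privacy_loss(prior, posterior))  # 0.5: half the uncertainty is gone

# A systemwide figure could then average such losses over all ordered
# agent pairs, as in the 0-to-1 scale reported in figure 4.
```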
VPS was shown to capture various existing measures of privacy created for specific domains of distributed constraint satisfaction and optimization problems. Using VPS, we were able to analyze the privacy loss of several algorithms in a simulated meeting-scheduling domain according to many different metrics. Figure 4, from Maheswaran et al. (2006), shows an analysis of privacy loss for the SynchBB algorithm across six different VPS metrics (ProportionalS, ProportionalTS, GuessS, GuessTS, EntropyS, and EntropyTS) for a particular meeting-scheduling scenario of three agents, averaged over 25 experimental runs in which agents' personal time-slot preferences were randomly generated. Also shown on the graph is the privacy loss for the OptAPO algorithm (Mailler and Lesser 2004) and for a centralized solver; both of these were shown to have the same privacy loss regardless of the VPS metric used. The x-axis shows the number of time slots in which meetings could be scheduled in the overall problem, and the y-axis shows the systemwide privacy loss, expressed as the mean of the privacy losses of each agent in the system, where 0 means an agent has lost no privacy to any other agent and 1 means an agent has lost all privacy to all other agents. The graph shows that, according to four of the six metrics, SynchBB's privacy loss lies in between that of centralized and OptAPO; interestingly, increasing the number of time slots in the system causes privacy loss to increase according to one metric but decrease according to another.

The key result illustrated in figure 4 is that distribution in DCOPs does not automatically guarantee improved privacy when compared to a centralized approach, at least for the algorithms tested here, an important result given that privacy is a key motivation for deploying DCOP algorithms in software personal assistants. Later work (Greenstadt, Pearce, and Tambe 2006) showed that several other DCOP algorithms (for example, Adopt [Modi et al. 2005]) did perform better than the centralized approach with respect to privacy. However, it is clear that algorithms for DCOP must address privacy concerns carefully and cannot assume that distribution alone provides privacy.

Social Norms

Another area that provided unexpected research issues was social norms. Day-to-day operation with E-Elves exposed several important research issues that we have not yet specifically pursued. In particular, agents in office environments must follow the social norms of the human society within which the agents function. For example, agents may need to politely lie on behalf of their users in order to protect their privacy. If the user is available but does not wish to meet with a colleague, the agent should not transmit the user's location and thus indirectly indicate that the user is unwilling to meet. Even more crucially, the agent should not indicate to the colleague that meeting with that colleague is considered unimportant; indicating that the user is unavailable for other reasons is preferable.

Another interesting phenomenon was that users would manipulate the E-Elves to allow themselves to violate social norms without risking being seen to violate them. The most illustrative example was the auction for presenter at regular group meetings. This was a role that users typically did not want to perform, because it required preparing a presentation, but they also did not want to appear to refuse. Several users manipulated the E-Elves role-allocation auction to meet both of these conflicting goals. One method that was actually used was to let Friday respond to the auction autonomously, knowing that the controlling MDP was conservative and assigned a very high cost to incorrectly accepting the role on the user's behalf. A more subtle technique was to fill up one's calendar with many meetings, because Friday would take into account how busy the person was; unfortunately, Friday was not sophisticated enough to distinguish between a "project meeting" and "lunch" or "basketball." In all these cases, the refusal would be attributed to the agent rather than directly to the user. Another source of manipulation came in when a user had recently presented, since the auction would not assign the user the role again immediately. Thus, shortly after presenting, users could manually submit affirmative bids, safe in the knowledge that their bids would not be accepted while they would still get credit from the rest of the team for their enthusiasm. (These techniques came to light only after the project ended!)

One important lesson here is that personal assistants must not violate norms. Another, however, is that the personal assistants in this case were designed for group efficiency and, as such, faced an incentive compatibility problem: users may have their own personal aims, and as a result they work around the personal assistants. Rather than insisting on group efficiency, it may be more useful to allow users to manage this incompatibility in a more socially acceptable manner.

Summary

This article outlined some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment. The Electric Elves project led to several important observations about privacy, adjustable autonomy, and social norms for agents deployed in office environments. This article outlined some of the key lessons learned and, more importantly, our continued research to address some of the concerns raised. These lessons have important implications for similar ongoing research projects.

Note

1. Another example is CALO: Cognitive Agent that Learns and Organizes (calo.sri.com).

Acknowledgments

This material is based upon work supported by DARPA, through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010.

References

Chalupsky, H.; Gil, Y.; Knoblock, C.; Lerman, K.; Oh, J.; Pynadath, D.; Russ, T.; and Tambe, M. 2002. Electric Elves: Agent Technology for Supporting Human Organizations. AI Magazine 23(2): 11–24.

Greenstadt, R.; Pearce, J. P.; and Tambe, M. 2006. Analysis of Privacy Loss in DCOP Algorithms. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), 647–653. Menlo Park, CA: AAAI Press.

Maheswaran, R. T.; Pearce, J. P.; Bowring, E.; Varakantham, P.; and Tambe, M. 2006. Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and Its Applications. Journal of Autonomous Agents and Multi-Agent Systems 13(1): 27–60.

Maheswaran, R. T.; Tambe, M.; Bowring, E.; Pearce, J. P.; and Varakantham, P. 2004. Taking DCOP to the Real World: Efficient Complete Solutions for Distributed Multi-Event Scheduling. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). Los Alamitos, CA: IEEE Computer Society.

Mailler, R., and Lesser, V. 2004. Solving Distributed Constraint Optimization Problems Using Cooperative Mediation. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). Los Alamitos, CA: IEEE Computer Society.

Modi, P. J., and Veloso, M. 2005. Bumping Strategies for the Multiagent Agreement Problem. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). New York: Association for Computing Machinery.

Modi, P. J.; Shen, W.; Tambe, M.; and Yokoo, M. 2005. Adopt: Asynchronous Distributed Constraint Optimization with Quality Guarantees. Artificial Intelligence 161(1–2): 149–180.

Pynadath, D. V., and Tambe, M. 2001. Revisiting Asimov's First Law: A Response to the Call to Arms. In Intelligent Agents VIII: The Eighth Workshop on Agent Theories, Architectures and Languages (ATAL-01), Lecture Notes in Computer Science 2333, 307–320. Berlin: Springer.

Pynadath, D. V., and Tambe, M. 2003. Automated Teamwork among Heterogeneous Software Agents and Humans. Journal of Autonomous Agents and Multi-Agent Systems 7(1): 71–100.

Pynadath, D.; Tambe, M.; Arens, Y.; Chalupsky, H.; Gil, Y.; Knoblock, C.; Lee, H.; Lerman, K.; Oh, J.; Ramachandran, S.; Rosenbloom, P.; and Russ, T. 2000. Electric Elves: Immersing an Agent Organization in a Human Organization. In Socially Intelligent Agents: The Human in the Loop. Papers from the 2000 Fall Symposium, 150–154. AAAI Technical Report FS-00-04. Menlo Park, CA: AAAI Press.

Scerri, P.; Pynadath, D.; and Tambe, M. 2002. Towards Adjustable Autonomy for the Real World. Journal of Artificial Intelligence Research 17: 171–228.

Tambe, M. 1997. Towards Flexible Teamwork. Journal of Artificial Intelligence Research 7: 83–124.

Tambe, M.; Pynadath, D.; and Chauvat, N. 2000. Building Dynamic Agent Organizations in Cyberspace. IEEE Internet Computing 4(2): 65–73.

Varakantham, P.; Maheswaran, R.; Gupta, T.; and Tambe, M. 2007. Towards Efficient Computation of Error Bounded Solutions in POMDPs: Expected Value Approximations and Dynamic Disjunctive Beliefs. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07). Menlo Park, CA: AAAI Press.

Varakantham, P.; Maheswaran, R.; and Tambe, M. 2005. Exploiting Belief Bounds: Practical POMDPs for Personal Assistant Agents. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). New York: Association for Computing Machinery.

Milind Tambe is a professor of computer science at the University of Southern California.

Emma Bowring is a faculty member in the School of Engineering and Computer Science at the University of the Pacific, Stockton, California.

Jonathan P. Pearce is working in quantitative finance in New York City.

Pradeep Varakantham is a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University.

Paul Scerri is a systems scientist in the Robotics Institute at Carnegie Mellon University.

David V. Pynadath is a research scientist in the Intelligent Systems Division of the USC Information Sciences Institute.
