How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games

Jonathan Dodge, Sean Penney, Claudia Hilderbrand, Andrew Anderson, Margaret Burnett
Oregon State University
Corvallis, OR, USA
{dodgej, penneys, minic, anderan2, burnett}@oregonstate.edu
ABSTRACT
How should an AI-based explanation system explain an agent's complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively explain intelligent agents' behavior, it could enable the end users to understand, assess, and appropriately trust (or distrust) the agents attempting to help them. To provide insights into this question, we turned to human expert explainers in the real-time strategy domain — "shoutcasters" — to understand (1) how they foraged in an evolving strategy game in real

planning and execution, such as an AI system trained to control a fleet of drones for missions in simulated environments [36]. People without AI training will need to understand and ultimately assess the decisions of such a system, based on what such intelligent systems recommend or decide to do on their own. For example, imagine "Jake," a domain expert trying to make an educated decision about whether or not to use an intelligent agent. Ideally, an interactive explanation system could help Jake assess whether and when the AI is making its decisions "for the right reasons," so as to ward off "lucky guesses" and legal/ethical concerns (see [15]).