Grounding as a Collaborative Process
Luciana Benotti
Universidad Nacional de Córdoba
CONICET, Argentina
[email protected]

Patrick Blackburn
Philosophy and Science Studies, IKH
Roskilde University, Denmark
[email protected]

Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 515-531, April 19-23, 2021. ©2021 Association for Computational Linguistics

Abstract

Collaborative grounding is a fundamental aspect of human-human dialog which allows people to negotiate meaning. In this paper we argue that it is missing from current deep learning approaches to dialog and interactive systems. Our central point is that making mistakes and being able to recover from them collaboratively is a key ingredient in grounding meaning. We illustrate the pitfalls of being unable to ground collaboratively, discuss what can be learned from the language acquisition and dialog systems literature, and reflect on how to move forward.

1 Introduction

Collaborative grounding is shaped by constraints that are not explicit in successful dialog turns. These constraints combine information from world causal relations, the task under discussion, the communicative intents of the dialog partners, and much else besides. They are used to negotiate meaning, to decide which beliefs to add to the shared common ground, which is constructed using the joint attention of the dialog partners to things either real or imagined. Once beliefs are grounded, they cannot be magically ungrounded without further negotiation, for the dialog partners are committed to them. But it's tricky:

    Human-analogous natural language understanding (NLU) is a grand challenge of artificial intelligence, which involves mastery of the structure and use of language and the ability to ground it in the world. (Bender and Koller, 2020)

What does "the ability to ground it in the world" involve? Implicit constraints that shape grounding become explicit when communication starts to break. We claim that making mistakes and being able to recover from them collaboratively is a key ingredient of the ability to ground.

We proceed as follows. We first discuss obstacles to current research on dialog and interactive systems (Section 2) and then present working definitions of collaborative grounding and related concepts (Section 3). Section 4 plays with scenarios illustrating the pitfalls of being unable to ground collaboratively; we then turn to the language acquisition literature for insight into how humans do it (Section 5) and to the dialog systems literature to discuss what is known (Section 6). In Section 7, we reflect on the progress made and make recommendations, while in Section 8 we note possible objections to our account and conclude.

2 Motivation

Dialog and interactive systems is one of the most popular research areas in computational linguistics nowadays. But — unlike machine translation and information retrieval — deep learning approaches to it have had little impact on products that people use daily.¹ In 1991, the cognitive scientist Brennan asked: "Why is it that natural language has yet to become a widely used modality of human/computer interaction?" (Brennan, 1991), and in 2020 the AI researchers de Vries, Bahdanau and Manning (de Vries et al., 2020) asked the same question yet again. In 1990, research on dialog systems used symbolic approaches; today neural generative models are favoured. Methods have changed, but the question remains the same.

Neural generative models offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. But though they generate fluent responses (Serban et al., 2016), the result is often boring and repetitive ("I don't know"), or they contradict themselves, or wander away from the topic of conversation (Li et al., 2016). De Vries et al. (2020) note that neural generative models assume that the required data is available and appropriate. They say: "Ideally, the data used for training and evaluating should reflect the intents and linguistic phenomena found in real-world applications and be of reasonable size to accommodate modern data-intensive methods." They remark that data quality and quantity are hard to reconcile, and that the community has prioritized quantity over quality; thus dialog systems are inadequate because datasets are poor. But ineffective use of dialog history seems to play a role too. Sankar et al. (2019) study model sensitivity to artificially introduced context perturbations at test time. Working with multi-turn dialog datasets, they found that commonly used neural dialog architectures, like recurrent and transformer-based seq2seq models, are rarely sensitive to perturbations such as missing or reordered utterances, and word shuffling.

The ineffective use of dialog history often goes unnoticed because of the evaluation practices that are common nowadays. Automatic metrics such as BLEU, ROUGE, and so on do not correlate well with human judgement, either for semantics-preserving natural language generation or for dialog (Mathur et al., 2020). For human evaluation, it is not enough to show a few turns to the annotators (Liu et al., 2016): this does not measure how well the system is able to recover from its own mistakes. A human-in-the-loop evaluation that judges the overall interaction is needed for that (Walker et al., 1997).

Graphical user interfaces (GUIs), on the other hand, tend to get things right. We agree with Brennan (1998) that GUIs are more successful in everyday use than dialog systems (DSs) because GUIs enable collaborative grounding effectively while deep learning approaches to DSs do not.

3 What is collaborative grounding?

In this section we provide working definitions of collaborative grounding and other key concepts. To get the ball rolling, we take the common ground to be the commitments that the dialog partners have (explicitly or implicitly) agreed upon.

Collaborative grounding not symbol grounding

Collaborative grounding is the process of seeking and providing incremental evidence of mutual understanding through dialog; we view the ongoing exchange of speaker and hearer roles as fundamental to conversation (Benotti, 2010; Benotti and Blackburn, 2014). When the speaker believes that the dialog is on track, positive evidence of understanding is provided in different forms (depending on the communication channel), such as explicit acknowledgements and eye contact. Negative evidence of understanding signals that something needs to be negotiated before the dialog partners can commit — and negative evidence is ubiquitous:

    Conversations with other people are rarely fluent and without mishap, and people do not expect them to be. (Brennan, 1991)

Mishaps lead to the need for repair. Repair is fundamental to conversational analysis (Schegloff et al., 1977; Schegloff, 2007), the linguistic study of language-driven social interactions. Together with the more psychologically oriented work of Clark and his colleagues (Clark and Wilkes-Gibbs, 1986; Clark, 1996), conversational analysis is a key inspiration for the lines of research that we review in this paper.

We consider collaborative grounding to be distinct from symbol grounding (Harnad, 1990), though they interact in interesting ways (Larsson, 2018). Symbol grounding (or perceptual grounding, or language grounded in vision) is the set of capabilities that link symbols with perceptions; it is an important research area (Roy, 2005) in which accurate connections between systems' linguistic representations and sensor data are often viewed as proof that a system means what it says. These connections are important for meaning, but we agree with DeVault et al. (2006) that perceptual grounding is neither necessary nor sufficient to justify the attribution of linguistic meaning. Human perception and memory are neither accurate nor stable, and different people have different abilities and limitations. Human meaning attributions do not rely on the accurate perceptions and perfect memory sought by symbol grounding, but on a collaborative negotiation process in which language speakers coordinate their perceptual memories and linguistic usage with other members of their communities. If a dialog system commits itself to negotiate its meanings collaboratively when perception and memory falter, then we claim that this gives grounds for assigning linguistic meaning to it. See Sections 4 and A for examples.

Collaborative grounding: basic mechanisms

When people talk to each other, they tailor their utterances to their partners. People can talk with friends, strangers, disembodied voices on the telephone, readers who will come along after they are gone, foreigners, children, and even dogs. Flexibility in tailoring utterances for a particular addressee has been documented even among the very young; five year olds have been observed to use simpler language and a different pitch range [...]

[...] domain vocabulary, or a lack of common sense; by common sense we here mean the basic knowledge and competencies required for successful navigation through a world full of objects, time, money, politeness, animals, people, and so on. One much-studied component of commonsense involves causality (Pearl, 2009) and the frame problem (Shanahan,

¹ Almost all commercial dialog systems currently available seem to be based on pre-deep-learning pipeline architectures (Takanobu et al., 2020), in spite of efforts such as the Alexa Prize (Ram et al., 2018; Gabriel et al., 2020).