Learning Dependency-Based Compositional Semantics
Percy Liang∗ University of California, Berkeley
Michael I. Jordan∗∗ University of California, Berkeley
Dan Klein† University of California, Berkeley
Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to instead learn a semantic parser from question–answer pairs, where the logical form is modeled as a latent variable. We develop a new semantic formalism, dependency-based compositional semantics (DCS), and define a log-linear distribution over DCS logical forms. The model parameters are estimated using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, we show that our system obtains comparable accuracies to even state-of-the-art systems that do require annotated logical forms.
1. Introduction
One of the major challenges in natural language processing (NLP) is building systems that both handle complex linguistic phenomena and require minimal human effort. The difficulty of achieving both criteria is particularly evident in training semantic parsers, where annotating linguistic expressions with their associated logical forms is expensive but until recently, seemingly unavoidable. Advances in learning latent-variable models, however, have made it possible to progressively reduce the amount of supervision
∗ Computer Science Division, University of California, Berkeley, CA 94720, USA. E-mail: [email protected]. ∗∗ Computer Science Division and Department of Statistics, University of California, Berkeley, CA 94720, USA. E-mail: [email protected]. † Computer Science Division, University of California, Berkeley, CA 94720, USA. E-mail: [email protected].
Submission received: 12 September 2011; revised submission received: 19 February 2012; accepted for publication: 18 April 2012. doi:10.1162/COLI_a_00127. Computational Linguistics, Volume 39, Number 2.
No rights reserved. This work was authored as part of the Contributor’s official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. law.

required for various semantics-related tasks (Zettlemoyer and Collins 2005; Branavan et al. 2009; Liang, Jordan, and Klein 2009; Clarke et al. 2010; Artzi and Zettlemoyer 2011; Goldwasser et al. 2011). In this article, we develop new techniques to learn accurate semantic parsers from even weaker supervision. We demonstrate our techniques on the concrete task of building a system to answer questions given a structured database of facts; see Figure 1 for an example in the domain of U.S. geography. This problem of building natural language interfaces to databases (NLIDBs) has a long history in NLP, starting from the early days of artificial intelligence with systems such as LUNAR (Woods, Kaplan, and Webber 1972), CHAT-80 (Warren and Pereira 1982), and many others (see Androutsopoulos, Ritchie, and Thanisch [1995] for an overview). We believe NLIDBs provide an appropriate starting point for semantic parsing because they lead directly to practical systems, and they allow us to temporarily sidestep intractable philosophical questions on how to represent meaning in general. Early NLIDBs were quite successful in their respective limited domains, but because these systems were constructed from manually built rules, they became difficult to scale up, both to other domains and to more complex utterances. In response, against the backdrop of a statistical revolution in NLP during the 1990s, researchers began to build systems that could learn from examples, with the hope of overcoming the limitations of rule-based methods.
One of the earliest statistical efforts was the CHILL system (Zelle and Mooney 1996), which learned a shift-reduce semantic parser. Since then, there has been a healthy line of work yielding increasingly accurate semantic parsers by using new semantic representations and machine learning techniques (Miller et al. 1996; Zelle and Mooney 1996; Tang and Mooney 2001; Ge and Mooney 2005; Kate, Wong, and Mooney 2005; Zettlemoyer and Collins 2005; Kate and Mooney 2006; Wong and Mooney 2006; Kate and Mooney 2007; Wong and Mooney 2007; Zettlemoyer and Collins 2007; Kwiatkowski et al. 2010, 2011). Although statistical methods provided advantages such as robustness and portability, their application to semantic parsing achieved only limited success. One of the main obstacles was that these methods depended crucially on having examples of utterances paired with logical forms, which require substantial human effort to obtain. Furthermore, the annotators must be proficient in some formal language, which drastically reduces the size of the annotator pool, dampening any hope of acquiring enough data to fulfill the vision of learning highly accurate systems. In response to these concerns, researchers have recently begun to explore the possibility of learning a semantic parser without any annotated logical forms (Clarke et al.
Figure 1 The concrete objective: A system that answers natural language questions given a structured database of facts. An example is shown in the domain of U.S. geography.
Figure 2 Our statistical methodology consists of two steps: (i) semantic parsing (p(z | x; θ)): an utterance x is mapped to a logical form z by drawing from a log-linear distribution parametrized by a vector θ; and (ii) evaluation ([[z]]w): the logical form z is evaluated with respect to the world w (database of facts) to deterministically produce an answer y. The figure also shows an example configuration of the variables around the graphical model. Logical forms z are represented as labeled trees. During learning, we are given w and (x, y) pairs (shaded nodes) and try to infer the latent logical forms z and parameters θ.
2010; Artzi and Zettlemoyer 2011; Goldwasser et al. 2011; Liang, Jordan, and Klein 2011). It is in this vein that we develop our present work. Specifically, given a set of (x, y) example pairs, where x is an utterance (e.g., a question) and y is the corresponding answer, we wish to learn a mapping from x to y. What makes this mapping particularly interesting is that it passes through a latent logical form z, which is necessary to capture the semantic complexities of natural language. Also note that whereas the logical form z was the end goal in much of earlier work on semantic parsing, for us it is just an intermediate variable—a means towards an end. Figure 2 shows the graphical model which captures the learning setting we just described: The question x, answer y, and world/database w are all observed. We want to infer the logical forms z and the parameters θ of the semantic parser, which are unknown quantities. Although liberating ourselves from annotated logical forms reduces cost, it does increase the difficulty of the learning problem. The core challenge here is program induction: On each example (x, y), we need to efficiently search over the exponential space of possible logical forms (programs) z and find ones that produce the target answer y, a computationally daunting task. There is also a statistical challenge: How do we parametrize the mapping from utterance x to logical form z so that it can be learned from only the indirect signal y? To address these two challenges, we must first discuss the issue of semantic representation. There are two basic questions here: (i) what
should the formal language for the logical forms z be, and (ii) what are the compositional mechanisms for constructing those logical forms? The semantic parsing literature has considered many different formal languages for representing logical forms, including SQL (Giordani and Moschitti 2009), Prolog (Zelle and Mooney 1996; Tang and Mooney 2001), a simple functional query language called FunQL (Kate, Wong, and Mooney 2005), and lambda calculus (Zettlemoyer and Collins 2005), just to name a few. The construction mechanisms are equally diverse, including synchronous grammars (Wong and Mooney 2007), hybrid trees (Lu et al. 2008), Combinatory Categorial Grammars (CCG) (Zettlemoyer and Collins 2005), and shift-reduce derivations (Zelle and Mooney 1996). It is worth pointing out that the choice of formal language and the construction mechanism are decisions which are really more orthogonal than is often assumed—the former is concerned with what the logical forms look like; the latter, with how to generate a set of possible logical forms compositionally given an utterance. (How to score these logical forms is yet another dimension.) Existing systems are rarely based on the joint design of the formal language and the construction mechanism; one or the other is often chosen for convenience from existing implementations. For example, Prolog and SQL have often been chosen as formal languages for convenience in end applications, but they were not designed for representing the semantics of natural language, and, as a result, the construction mechanism that bridges the gap between natural language and formal language is generally complex and difficult to learn. CCG (Steedman 2000) is quite popular in computational linguistics (for example, see Bos et al. [2004] and Zettlemoyer and Collins [2005]).
In CCG, logical forms are constructed compositionally using a small handful of combinators (function application, function composition, and type raising). For a wide range of canonical examples, CCG produces elegant, streamlined analyses, but its success really depends on having a good, clean lexicon. During learning, there is often a great amount of uncertainty over the lexical entries, which makes CCG more cumbersome. Furthermore, in real-world applications, we would like to handle disfluent utterances, and this further strains CCG by demanding either extra type-raising rules and disharmonic combinators (Zettlemoyer and Collins 2007) or a proliferation of redundant lexical entries for each word (Kwiatkowski et al. 2010). To cope with the challenging demands of program induction, we break away from tradition in favor of a new formal language and construction mechanism, which we call dependency-based compositional semantics (DCS). The guiding principle behind DCS is to provide a simple and intuitive framework for constructing and representing logical forms. Logical forms in DCS are tree structures called DCS trees. The motivation is twofold: (i) DCS trees are meant to parallel syntactic dependency trees, which facilitates parsing; and (ii) a DCS tree essentially encodes a constraint satisfaction problem, which can be solved efficiently using dynamic programming to obtain the denotation of a DCS tree. In addition, DCS provides a mark–execute construct, which provides a uniform way of dealing with scope variation, a major source of trouble in any semantic formalism. The construction mechanism in DCS is a generalization of labeled dependency parsing, which leads to simple and natural algorithms. To a linguist, DCS might appear unorthodox, but it is important to keep in mind that our primary goal is effective program induction, not necessarily to model new linguistic phenomena in the tradition of formal semantics.
Armed with our new semantic formalism, DCS, we then define a discriminative probabilistic model, which is depicted in Figure 2. The semantic parser is a log-linear distribution over DCS trees z given an utterance x. Notably, z is unobserved, and we instead observe only the answer y, which is obtained by evaluating z on a world/database
w. There are an exponential number of possible trees z, and usually dynamic programming can be used to efficiently search over trees. However, in our learning setting (independent of the semantic formalism), we must enforce the global constraint that z produces y. This makes dynamic programming infeasible, so we use beam search (though dynamic programming is still used to compute the denotation of a fixed DCS tree). We estimate the model parameters with a simple procedure that alternates between beam search and optimizing a likelihood objective restricted to those beams. This yields a natural bootstrapping procedure in which learning and search are integrated. We evaluated our DCS-based approach on two standard benchmarks, GEO, a U.S. geography domain (Zelle and Mooney 1996), and JOBS, a job queries domain (Tang and Mooney 2001). On GEO, we found that our system significantly outperforms previous work that also learns from answers instead of logical forms (Clarke et al. 2010). What is perhaps a more significant result is that our system obtains comparable accuracies to state-of-the-art systems that do rely on annotated logical forms. This demonstrates the viability of training accurate systems with much less supervision than before. The rest of this article is organized as follows: Section 2 introduces DCS, our new semantic formalism. Section 3 presents our probabilistic model and learning algorithm. Section 4 provides an empirical evaluation of our methods. Section 5 situates this work in a broader context, and Section 6 concludes.
2. Representation
In this section, we present the main conceptual contribution of this work, dependency-based compositional semantics (DCS), using the U.S. geography domain (Zelle and Mooney 1996) as a running example. To do this, we need to define the syntax and semantics of the formal language. The syntax is defined in Section 2.2 and is quite straightforward: The logical forms in the formal language are simply trees, which we call DCS trees. In Section 2.3, we give a type-theoretic definition of worlds (also known as databases or models) with respect to which we can define the semantics of DCS trees. The semantics, which is the heart of this article, contains two main ideas: (i) using trees to represent logical forms as constraint satisfaction problems or extensions thereof, and (ii) dealing with cases when syntactic and semantic scope diverge (e.g., for generalized quantification and superlative constructions) using a new construct which we call mark–execute. We start in Section 2.4 by introducing the semantics of a basic version of DCS which focuses only on (i) and then extend it to the full version (Section 2.5) to account for (ii). Finally, having fully specified the formal language, we describe a construction mechanism for mapping a natural language utterance to a set of candidate DCS trees (Section 2.6).
2.1 Notation
Operations on tuples will play a prominent role in this article. For a sequence¹ v = (v1, ..., vk), we use |v| = k to denote the length of the sequence. For two sequences u and v, we use u + v = (u1, ..., u|u|, v1, ..., v|v|) to denote their concatenation.
¹ We use the term sequence to refer to both tuples (v1, ..., vk) and arrays [v1, ..., vk]. For our purposes, there is no functional difference between tuples and arrays; the distinction is convenient when we start to talk about arrays of tuples.
For a sequence of positive indices i = (i1, ..., im), let vi = (vi1, ..., vim) consist of the components of v specified by i; we call vi the projection of v onto i. We use negative indices to exclude components: v−i = v(1,...,|v|)\i. We can also combine sequences of indices by concatenation: vi,j = vi + vj. Some examples: if v = (a, b, c, d), then v2 = b, v3,1 = (c, a), v−3 = (a, b, d), v3,−3 = (c, a, b, d).
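To make the indexing conventions concrete, here is a small Python sketch. The helper name `project` and the restriction that an index sequence be all-positive or all-negative are our own; the text defines mixed sequences like v3,−3 via concatenation, which the last assertion mirrors.

```python
# A small sketch of the projection notation (1-based indices, as in the text).
# The helper name `project` and the one-sign restriction are our own.
def project(v, indices):
    """Project tuple v onto a sequence of 1-based indices.

    Positive indices select components; negative indices exclude them.
    """
    if any(i < 0 for i in indices):
        assert all(i < 0 for i in indices), "mixed signs handled via concatenation"
        excluded = {-i for i in indices}
        return tuple(x for j, x in enumerate(v, start=1) if j not in excluded)
    return tuple(v[i - 1] for i in indices)

v = ("a", "b", "c", "d")
assert project(v, (2,)) == ("b",)                # v_2 = b
assert project(v, (3, 1)) == ("c", "a")          # v_{3,1} = (c, a)
assert project(v, (-3,)) == ("a", "b", "d")      # v_{-3} = (a, b, d)
assert project(v, (3,)) + project(v, (-3,)) == ("c", "a", "b", "d")  # v_{3,-3}
```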
2.2 Syntax of DCS Trees
The syntax of the DCS formal language is built from two ingredients, predicates and relations:

- Let P be a set of predicates. We assume that P contains a special null predicate ø, domain-independent predicates (e.g., count, <, >, and =), and domain-specific predicates (for the U.S. geography domain, state, river, border, etc.). Right now, think of predicates as just labels, which have yet to receive formal semantics.

- Let R be the set of relations. Note that unlike the predicates P, which can vary across domains, the relations R are fixed. The full set of relations is shown in Table 1. For now, just think of relations as labels—their semantics will be defined in Section 2.4.
The logical forms in DCS are called DCS trees. A DCS tree is a directed rooted tree in which nodes are labeled with predicates and edges are labeled with relations; each node also maintains an ordering over its children. Formally:
Definition 1 (DCS trees) Let Z be the set of DCS trees, where each z ∈ Z consists of (i) a predicate z.p ∈ P and (ii) a sequence of edges z.e = (z.e1, ..., z.em). Each edge e consists of a relation e.r ∈ R (see Table 1) and a child tree e.c ∈ Z.
We will either draw a DCS tree graphically or write it compactly as ⟨p; r1:c1; ...; rm:cm⟩, where p is the predicate at the root node and c1, ..., cm are its m children connected via edges labeled with relations r1, ..., rm, respectively. Figure 3(a) shows an example of a DCS tree expressed using both graphical and compact formats.
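As an illustration, here is a minimal Python encoding of Definition 1. The class name `DCSTree`, the string rendering, and the use of `<...>` for the compact format are illustrative choices, not part of the formalism; the example tree is in the spirit of Figure 3(a).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DCSTree:
    """A DCS tree per Definition 1: a predicate plus ordered (relation, child) edges."""
    p: str                                                        # predicate z.p
    e: List[Tuple[str, "DCSTree"]] = field(default_factory=list)  # edges z.e

    def compact(self) -> str:
        """Render in the compact <p; r1:c1; ...; rm:cm> format."""
        if not self.e:
            return f"<{self.p}>"
        edges = "; ".join(f"{r}:{c.compact()}" for r, c in self.e)
        return f"<{self.p}; {edges}>"

# A tree roughly in the spirit of Figure 3(a): a city located in California.
z = DCSTree("city", [("1/1", DCSTree("loc", [("2/1", DCSTree("CA"))]))])
assert z.compact() == "<city; 1/1:<loc; 2/1:<CA>>>"
```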
Table 1
Possible relations that appear on edges of DCS trees. Basic DCS uses only the join and aggregate relations; the full version of DCS uses all of them.

Name       Relation                        Description of semantic function
join       j/j′ for j, j′ ∈ {1, 2, ...}    j-th component of parent = j′-th component of child
aggregate  Σ                               parent = set of feasible values of child
extract    E                               mark node for extraction
quantify   Q                               mark node for quantification, negation
compare    C                               mark node for superlatives, comparatives
execute    Xi for i ∈ {1, 2, ...}∗         process marked nodes specified by i
Figure 3 (a) An example of a DCS tree (written in both the mathematical and graphical notations). Each node is labeled with a predicate, and each edge is labeled with a relation. (b) A DCS tree z with only join relations encodes a constraint satisfaction problem, represented here as a lambda calculus formula. For example, the root node label city corresponds to a unary predicate city(c), the right child node label loc corresponds to a binary predicate loc(ℓ) (where ℓ is a pair), and the edge between them denotes the constraint c1 = ℓ1 (where the indices correspond to the two labels on the edge). (c) The denotation of z is the set of feasible values for the root node.
A DCS tree is a logical form, but it is designed to look like a syntactic dependency tree, only with predicates in place of words. As we’ll see over the course of this section, it is this transparency between syntax and semantics provided by DCS which leads to a simple and streamlined compositional semantics suitable for program induction.
2.3 Worlds
In the context of question answering, the DCS tree is a formal specification of the question. To obtain an answer, we still need to evaluate the DCS tree with respect to a database of facts (see Figure 4 for an example). We will use the term world to refer
Figure 4 We use the domain of U.S. geography as a running example. The figure presents an example of a world w (database) in this domain. A world maps each predicate to a set of tuples. For example, the depicted world w maps the predicate loc to the set of pairs of places and their containers. Note that functions (e.g., population) are also represented as predicates for uniformity. Some predicates (e.g., count) map to an infinite number of tuples and would be represented implicitly.
to this database (it is sometimes also called a model, but we avoid this term to avoid confusion with the probabilistic model for learning that we will present in Section 3.1). Throughout this work, we assume the world is fully observed and fixed, which is a realistic assumption for building natural language interfaces to existing databases, but questionable for modeling the semantics of language in general.
2.3.1 Types and Values. To define a world, we start by constructing a set of values V. The exact set of values depends on the domain (we will continue to use U.S. geography as a running example). Briefly, V contains numbers (e.g., 3 ∈ V), strings (e.g., Washington ∈ V), tuples (e.g., (3, Washington) ∈ V), sets (e.g., {3, Washington} ∈ V), and other higher-order entities. To be more precise, we construct V recursively. First, define a set of primitive values V⋆, which includes the following:

- Numeric values. Each value has the form x:t ∈ V⋆, where x ∈ ℝ is a real number and t ∈ {number, ordinal, percent, length, ...} is a tag. The tag allows us to differentiate 3, 3rd, 3%, and 3 miles—this will be important in Section 2.6.3. We simply write x for the value x:number.

- Symbolic values. Each value has the form x:t ∈ V⋆, where x is a string (e.g., Washington) and t ∈ {string, city, state, river, ...} is a tag. Again, the tag allows us to differentiate, for example, the entities Washington:city and Washington:state.
Now we build the full set of values V from the primitive values V⋆. To define V, we need a bit more machinery: To avoid logical paradoxes, we construct V in increasing order of complexity using types (see Carpenter [1998] for a similar construction). The casual reader can skip this construction without losing any intuition. Define the set of types T to be the smallest set that satisfies the following properties:
1. The primitive type ⋆ ∈ T ;
2. The tuple type (t1, ..., tk) ∈ T for each k ≥ 0 and each non-tuple type ti ∈ T for i = 1, ..., k; and

3. The set type {t} ∈ T for each tuple type t ∈ T .
Note that {⋆}, {{⋆}}, and ((⋆)) are not valid types. For each type t ∈ T , we construct a corresponding set of values Vt:
1. For the primitive type t = ⋆, the primitive values V⋆ have already been specified. Note that these types are rather coarse: Primitive values with different tags are considered to have the same type ⋆.
2. For a tuple type t = (t1, ..., tk), Vt is the cross product of the values of its component types:

   Vt = {(v1, ..., vk) : ∀i, vi ∈ Vti}   (1)
3. For a set type t = {t′}, Vt contains all subsets of its element type t′:

   Vt = {s : s ⊆ Vt′}   (2)
With this last condition, we ensure that all elements of a set must have the same type. Note that a set is still allowed to have values with different tags (e.g., {(Washington:city), (Washington:state)} is a valid set, which might denote the semantics of the utterance things named Washington). Another distinction is that types are domain-independent whereas tags tend to be more domain-specific.
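The three type-formation rules can be sketched as a small validity checker. The encoding of types as tagged Python tuples (`"*"` for the primitive type, `("tuple", ...)`, `("set", t)`) and the function names are our own; the assertions reproduce the invalid examples noted above.

```python
# Types encoded as tagged Python tuples: "*" is the primitive type,
# ("tuple", t1, ..., tk) a tuple type, ("set", t) a set type.
STAR = "*"

def is_tuple_type(t):
    return isinstance(t, tuple) and len(t) >= 1 and t[0] == "tuple"

def is_set_type(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == "set"

def is_valid_type(t):
    if t == STAR:                       # rule 1: the primitive type
        return True
    if is_tuple_type(t):                # rule 2: components must be non-tuple types
        return all(is_valid_type(c) and not is_tuple_type(c) for c in t[1:])
    if is_set_type(t):                  # rule 3: the element must be a tuple type
        return is_tuple_type(t[1]) and is_valid_type(t[1])
    return False

assert is_valid_type(("set", ("tuple", STAR)))        # {(*)} is valid
assert not is_valid_type(("set", STAR))               # {*} is not
assert not is_valid_type(("tuple", ("tuple", STAR)))  # ((*)) is not
```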
Let V = ∪t∈T Vt be the set of all possible values.

A world maps each predicate to its semantics, which is a set of tuples (see Figure 4 for an example). First, let T_TUPLE ⊂ T be the tuple types, which are the ones of the form (t1, ..., tk) for some k. Let V{TUPLE} denote all the sets of tuples (with the same type):

   V{TUPLE} =def ∪t∈T_TUPLE V{t}   (3)
Now we define a world formally.
Definition 2 (World)
A world w : P → V{TUPLE} ∪ {V} is a function that maps each non-null predicate p ∈ P\{ø} to a set of tuples w(p) ∈ V{TUPLE} and maps the null predicate ø to the set of all values (w(ø) = V).
For a set of tuples A with the same arity, let ARITY(A) = |x|, where x ∈ A is arbitrary; if A is empty, then ARITY(A) is undefined. Now for a predicate p ∈ P and world w, define ARITYw(p), the arity of predicate p with respect to w, as follows:

   ARITYw(p) = 1 if p = ø;  ARITYw(p) = ARITY(w(p)) if p ≠ ø   (4)
The null predicate has arity 1 by fiat; the arity of a non-null predicate p is inherited from the tuples in w(p).
Remarks. In higher-order logic and lambda calculus, we construct function types and values, whereas in DCS, we construct tuple types and values. The two are equivalent in representational power, but this discrepancy does point out the fact that lambda calculus is based on function application, whereas DCS, as we will see, is based on declarative constraints. The set type {(⋆, ⋆)} in DCS corresponds to the function type ⋆ → (⋆ → bool). In DCS, there is no explicit bool type—it is implicitly represented by using sets.
2.3.2 Examples. The world w maps each domain-specific predicate to a set of tuples (usually a finite set backed by a database). For the U.S. geography domain, w has a
predicate that maps to the set of U.S. states (state), another predicate that maps to the set of pairs of entities and where they are located (loc), and so on:
   w(state) = {(California:state), (Oregon:state), ...}   (5)
   w(loc) = {(San Francisco:city, California:state), ...}   (6)
   ...   (7)
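A world of this kind is naturally represented as a mapping from predicate names to sets of tuples. The following Python sketch (the Oregon/Portland entries and the function name `arity` are illustrative) also shows the arity computation of Equation (4):

```python
# A toy world in the spirit of Equations (5)-(6): each predicate maps to a set
# of tuples; entities carry (name, tag) pairs.
w = {
    "state": {(("California", "state"),), (("Oregon", "state"),)},
    "loc":   {(("San Francisco", "city"), ("California", "state")),
              (("Portland", "city"), ("Oregon", "state"))},
}

def arity(world, p):
    """ARITY_w(p) as in Equation (4): 1 for the null predicate, otherwise
    the common length of the tuples in w(p)."""
    if p == "ø":
        return 1
    return len(next(iter(world[p])))

assert arity(w, "state") == 1   # state is a unary predicate
assert arity(w, "loc") == 2     # loc is a binary predicate
```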
To shorten notation, we use state abbreviations (e.g., CA = California:state). The world w also specifies the semantics of several domain-independent predicates (think of these as helper functions), which usually correspond to an infinite set of tuples. Functions are represented in DCS by a set of input–output pairs. For example, the semantics of the count_t predicate (for each type t ∈ T ) contains pairs of sets S and their cardinalities |S|:
   w(count_t) = {(S, |S|) : S ∈ V{(t)}} ∈ V{({(t)},⋆)}   (8)
As another example, consider the predicate average_t (for each t ∈ T ), which takes a set of key–value pairs (with keys of type t) and returns the average value. For notational convenience, we treat an arbitrary set of pairs S as a set-valued function: We let S1 = {x : (x, y) ∈ S} denote the domain of the function, and, abusing notation slightly, we define the function S(x) = {y : (x, y) ∈ S} to be the set of values y that co-occur with the given x. The semantics of average_t contains pairs of sets and their averages:

   w(average_t) = {(S, z) : S ∈ V{(t,⋆)}, z = |S1|⁻¹ Σ_{x∈S1} |S(x)|⁻¹ Σ_{y∈S(x)} y} ∈ V{({(t,⋆)},⋆)}   (9)
Similarly, we can define the semantics of argmin_t and argmax_t, which each take a set of key–value pairs and return the keys that attain the smallest (largest) value:

   w(argmin_t) = {(S, z) : S ∈ V{(t,⋆)}, z ∈ argmin_{x∈S1} min S(x)} ∈ V{({(t,⋆)},t)}   (10)

   w(argmax_t) = {(S, z) : S ∈ V{(t,⋆)}, z ∈ argmax_{x∈S1} max S(x)} ∈ V{({(t,⋆)},t)}   (11)
The extra min and max are needed because S(x) could contain more than one value. We also impose that w(argmin_t) contains only (S, z) such that y is numeric for all (x, y) ∈ S; thus argmin_t denotes a partial function (the same holds for argmax_t). These helper functions are monomorphic: For example, count_t only computes cardinalities of sets of type {(t)}. In practice, we mostly operate on sets of primitives (t = ⋆). To reduce notation, we omit t to refer to this version: count = count_⋆, average = average_⋆, and so forth.
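To make the helper predicates concrete, here is a sketch that implements average (Equation (9)) and argmax (Equation (11)) as ordinary Python functions over a finite set S of (key, value) pairs, rather than as the infinite tuple sets used in the formalism. The function names mirror the predicates but are otherwise our own.

```python
# Finite sketches of the helper predicates over a set S of (key, value) pairs.
def keys(S):                          # S_1: the domain of the set-valued function
    return {x for x, _ in S}

def values_at(S, x):                  # S(x): values co-occurring with key x
    return {y for k, y in S if k == x}

def average(S):                       # Equation (9): mean over keys of mean values
    ks = keys(S)
    return sum(sum(values_at(S, x)) / len(values_at(S, x)) for x in ks) / len(ks)

def argmax(S):                        # Equation (11): keys attaining the largest value
    best = max(max(values_at(S, x)) for x in keys(S))
    return {x for x in keys(S) if max(values_at(S, x)) == best}

S = {("CA", 2), ("CA", 4), ("OR", 3)}
assert average(S) == 3.0              # mean of {mean(2, 4), mean(3)} = (3 + 3) / 2
assert argmax(S) == {"CA"}            # 4 is the largest value, attained by CA
```

The inner `max` in `argmax` corresponds to the "extra max" discussed above: S(x) may contain more than one value for a key.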
2.4 Semantics of DCS Trees without Mark–Execute (Basic Version)
The semantics or denotation of a DCS tree z with respect to a world w is denoted [[z]]w. First, we define the semantics of DCS trees with only join relations (Section 2.4.1). In this case, a DCS tree encodes a constraint satisfaction problem (CSP); this is important because it highlights the constraint-based nature of DCS and also naturally leads to a computationally efficient way of computing denotations (Section 2.4.2). We then allow DCS trees to have aggregate relations (Section 2.4.3). The fragment of DCS which has only join and aggregate relations is called basic DCS.
2.4.1 Basic DCS Trees as Constraint Satisfaction Problems. Let z be a DCS tree with only join relations on its edges. In this case, z encodes a CSP as follows: For each node x in z, the CSP has a variable with value a(x); the collection of these values is referred to as an assignment a. The predicates and relations of z introduce constraints:
1. a(x) ∈ w(p) for each node x labeled with predicate p ∈ P; and

2. a(x)_j = a(y)_{j′} for each edge (x, y) labeled with j/j′ ∈ R, which says that the j-th component of a(x) must equal the j′-th component of a(y).
We say that an assignment a is feasible if it satisfies these two constraints. Next, for a node x, define V(x) = {a(x) : assignment a is feasible} as the set of feasible values for x—these are the ones that are consistent with at least one feasible assignment. Finally, we define the denotation of the DCS tree z with respect to the world w to be [[z]]w = V(x0), where x0 is the root node of z. Figure 3(a) shows an example of a DCS tree. The corresponding CSP has four variables c, m, ℓ, s.² In Figure 3(b), we have written the equivalent lambda calculus formula. The non-root nodes are existentially quantified, the root node c is λ-abstracted, and all constraints introduced by predicates and relations are conjoined. The λ-abstraction of c represents the fact that the denotation is the set of feasible values for c (note the equivalence between the Boolean function λc.p(c) and the set {c : p(c)}).
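The CSP reading can be made concrete with a brute-force Python sketch. The tree encoding and the toy world are illustrative (the constant CA is treated as a predicate denoting a singleton set); this enumerates all assignments, unlike the efficient computation described in Section 2.4.2, so it only serves to spell out the declarative definition.

```python
from itertools import product

# Brute-force reading of the CSP semantics (illustrative, exponential in the
# number of nodes). A tree is (predicate, [(j, jp, child), ...]) where the
# edge j/jp equates the j-th parent component with the jp-th child component.
def feasible_root_values(tree, w):
    nodes, constraints = [], []

    def collect(t):                    # flatten the tree into nodes + constraints
        pred, edges = t
        idx = len(nodes)
        nodes.append(pred)
        for j, jp, child in edges:
            cidx = collect(child)
            constraints.append((idx, j, cidx, jp))
        return idx

    root = collect(tree)
    feasible = set()
    for a in product(*[sorted(w[p]) for p in nodes]):   # all assignments
        if all(a[x][j - 1] == a[y][jp - 1] for x, j, y, jp in constraints):
            feasible.add(a[root])                       # value of a feasible root
    return feasible

# Toy world and the tree <city; 1/1:<loc; 2/1:<CA>>> ("city in California").
w = {"city": {("SF",), ("Portland",)},
     "loc":  {("SF", "CA"), ("Portland", "OR")},
     "CA":   {("CA",)}}
z = ("city", [(1, 1, ("loc", [(2, 1, ("CA", []))]))])
assert feasible_root_values(z, w) == {("SF",)}
```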
Remarks. Note that CSPs only allow existential quantification and conjunction. Why did we choose this particular logical subset as a starting point, rather than allowing universal quantification, negation, or disjunction? There seems to be something fundamental about this subset, which also appears in Discourse Representation Theory (DRT) (Kamp and Reyle 1993; Kamp, van Genabith, and Reyle 2005). Briefly, logical forms in DRT are called Discourse Representation Structures (DRSs), each of which contains (i) a set of existentially quantified discourse referents (variables), (ii) a set of conjoined discourse conditions (constraints), and (iii) nested DRSs. If we exclude nested DRSs, a DRS is exactly a CSP.³ The default existential quantification and conjunction are quite natural for modeling cross-sentential anaphora: New variables can be added to
² Technically, the node is c and the variable is a(c), but we use c to denote the variable to simplify notation. ³ Unlike the CSPs corresponding to DCS trees, the CSPs corresponding to DRSs need not be tree-structured, though economical DRT (Bos 2009) imposes a tree-like restriction on DRSs for computational reasons.
a DRS and connected to other variables. Indeed, DRT was originally motivated by these phenomena (see Kamp and Reyle [1993] for more details).⁴ Tree-structured CSPs can capture unboundedly complex recursive structures—such as cities in states that border states that have rivers that... Trees are limited, however, in that they are unable to capture long-distance dependencies such as those arising from anaphora. For example, in the phrase a state with a river that traverses its capital, its binds to state, but this dependence cannot be captured in a tree structure. A solution is to simply add an edge between the its node and the state node that forces the two nodes to have the same value. The result is still a well-defined CSP, though not a tree-structured one. The situation would become trickier if we were to integrate the other relations (aggregate, mark, and execute). We might be able to incorporate some ideas from Hybrid Logic Dependency Semantics (Baldridge and Kruijff 2002; White 2006), given that hybrid logic extends the tree structures of modal logic with nominals, thereby allowing a node to freely reference other nodes. In this article, however, we will stick to trees and leave the full exploration of non-trees for future work.
2.4.2 Computation of Join Relations. So far, we have given a declarative definition of the denotation [[z]]w of a DCS tree z with only join relations. Now we will show how to compute [[z]]w efficiently. Recall that the denotation is the set of feasible values for the root node. In general, finding the solution to a CSP is NP-hard, but for trees, we can exploit dynamic programming (Dechter 2003). The key is that the denotation of a tree depends on its subtrees only through their denotations:
⟦⟨p; j_1/j_1′:c_1; ⋯; j_m/j_m′:c_m⟩⟧_w = w(p) ∩ ⋂_{i=1}^{m} {v : v_{j_i} = t_{j_i′}, t ∈ ⟦c_i⟧_w}    (12)
On the right-hand side of Equation (12), the first term w(p) is the set of values that satisfy the node constraint, and the second term consists of an intersection across all m edges of {v : v_{j_i} = t_{j_i′}, t ∈ ⟦c_i⟧_w}, which is the set of values v that satisfy the edge constraint with respect to some value t for the child c_i. To further flesh out this computation, we express Equation (12) in terms of two operations: join and project. Join takes a cross product of two sets of tuples and retains the resulting tuples that match the join constraint:
A ⋈_{j,j′} B = {u + v : u ∈ A, v ∈ B, u_j = v_{j′}}    (13)
Project takes a set of tuples and retains a fixed subset of the components:
A[i] = {v_i : v ∈ A}    (14)
The denotation in Equation (12) can now be expressed in terms of these join and project operations:

⟦⟨p; j_1/j_1′:c_1; ⋯; j_m/j_m′:c_m⟩⟧_w = ((w(p) ⋈_{j_1,j_1′} ⟦c_1⟧_w)[i] ⋯ ⋈_{j_m,j_m′} ⟦c_m⟧_w)[i]    (15)
4 DRT started the dynamic semantics tradition where meanings are context-change potentials, a natural way to capture anaphora. The DCS formalism presented here does not deal with anaphora, so we give it a purely static semantics.
where i = (1, ..., ARITY(w(p))). Projecting onto i retains only the components corresponding to p. The time complexity for computing the denotation ⟦z⟧_w of a DCS tree z scales linearly with the number of nodes, but there is also a dependence on the cost of performing the join and project operations. For details on how we optimize these operations and handle infinite sets of tuples (for predicates such as count), see Liang (2011). The denotation of DCS trees is defined in terms of the feasible values of a CSP, and the recurrence in Equation (15) is only one way of computing this denotation. In light of the extensions to come, however, we now consider Equation (15) as the actual definition rather than just a computational mechanism. It will still be useful to refer to the CSP in order to access the intuition of using declarative constraints.
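To make the recurrence concrete, here is a minimal Python sketch of Equations (13)–(15) computing the denotation of a join-only DCS tree bottom-up. The toy world w, the predicate names, and the tree encoding are all illustrative assumptions, not the paper's implementation; indices are 0-based.

```python
# Hypothetical sketch of Equations (13)-(15) over a toy world.

def join(A, B, j, jp):
    """A join_{j,j'} B: concatenate tuples whose j-th / j'-th components agree."""
    return {u + v for u in A for v in B if u[j] == v[jp]}

def project(A, idx):
    """A[i]: retain only the components listed in idx."""
    return {tuple(v[i] for i in idx) for v in A}

# Toy world w: each predicate denotes a set of tuples.
w = {
    "city":  {("SF", "CA"), ("LA", "CA"), ("Portland", "OR")},
    "major": {("SF",), ("LA",)},
}

def denote(tree):
    """tree = (predicate, [(j, jp, child), ...]); implements Equation (15)."""
    pred, edges = tree
    result = w[pred]
    idx = tuple(range(len(next(iter(w[pred])))))  # 0-indexed i = (1, ..., ARITY(w(p)))
    for j, jp, child in edges:
        result = project(join(result, denote(child), j, jp), idx)
    return result

# "major city": component 1 of city joined with component 1 of major.
major_cities = denote(("city", [(0, 0, ("major", []))]))
print(sorted(major_cities))  # [('LA', 'CA'), ('SF', 'CA')]
```

Because each child is fully reduced to its denotation before the parent joins with it, the computation visits each node once, matching the linear-time claim above.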
2.4.3 Aggregate Relation. Thus far, we have focused on DCS trees that use only join relations, which are insufficient for capturing higher-order phenomena in language. For example, consider the phrase number of major cities. Suppose that number corresponds to the count predicate, and that major cities maps to the DCS tree ⟨city; 1/1:major⟩. We cannot simply join count with the root of this DCS tree because count needs to be joined with the set of major cities (the denotation of ⟨city; 1/1:major⟩), not just a single city. We therefore introduce the aggregate relation (Σ), which takes a DCS subtree and reifies its denotation so that it can be accessed by other nodes in its entirety. Consider a tree ⟨ø; Σ:c⟩, where the root is connected to a child c via Σ. The denotation of the root is simply the singleton set containing the denotation of c:
⟦⟨ø; Σ:c⟩⟧_w = {(⟦c⟧_w)}    (16)
Figure 5(a) shows the DCS tree for our running example. The denotation of the middle node is {(s)}, where s is the set of all major cities. Everything above this node is an ordinary CSP: s constrains the count node, which in turn constrains the root node to |s|. Figure 5(b) shows another example of using the aggregate relation Σ. Here, the node right above Σ is constrained to be a set of pairs of major cities and their populations. The average predicate then computes the desired answer. To represent logical disjunction in natural language, we use the aggregate relation and two predicates, union and contains, which are defined in the expected way:
w(union) = {(A, B, C) : C = A ∪ B, A ∈ V{}, B ∈ V{}}    (17)
w(contains) = {(A, x) : x ∈ A, A ∈ V{}}    (18)

where A, B, C ∈ V{} are sets of primitive values (see Section 2.3.1). Figure 5(c) shows an example of a disjunctive construction: We use the aggregate relations to construct two sets, one containing Oregon, and the other containing states bordering Oregon. We take the union of these two sets; contains takes the set and reads out an element, which then constrains the city node.
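The aggregate relation and the union/contains predicates can be sketched directly on sets of tuples; a minimal, hypothetical transcription of Equations (16)–(18) with toy data (frozenset lets a reified set appear as a value inside a tuple):

```python
# Hypothetical sketch of the aggregate relation (Equation (16)) and the
# union/contains predicates (Equations (17)-(18)); all data is invented.

def aggregate(denotation):
    """The denotation of <ø; Σ:c>: a singleton set holding ⟦c⟧_w as one value."""
    return {(frozenset(denotation),)}

# "number of major cities": join count with the reified set of major cities.
major_cities = {("SF",), ("LA",)}
reified = aggregate(major_cities)
counts = {(len(s),) for (s,) in reified}
print(counts)  # {(2,)}

# Disjunction (as in Figure 5(c)): contains reads elements out of a union.
def union(a, b):
    return a | b

def contains(s):
    """w(contains) relates a set A to each of its elements x."""
    return set(s)

oregon = {("OR",)}
borders_oregon = {("CA",), ("WA",)}
print(len(contains(union(oregon, borders_oregon))))  # 3
```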
Remarks. A DCS tree that contains only join and aggregate relations can be viewed as a collection of tree-structured CSPs connected via aggregate relations. The tree structure still enables us to compute denotations efficiently based on the recurrences in Equations (15) and (16). Recall that a DCS tree with only join relations is a DRS without nested DRSs. The aggregate relation corresponds to the abstraction operator in DRT and is one way of making nested DRSs. It turns out that the abstraction operator is sufficient to obtain the full representational power of DRT, and subsumes the generalized quantification and disjunction constructs in DRT. By analogy, we use the aggregate relation to handle disjunction (Figure 5(c)) and generalized quantification (Section 2.5.6). DCS restricted to join relations is less expressive than first-order logic because it does not have universal quantification, negation, and disjunction. The aggregate relation is analogous to lambda abstraction, and in basic DCS we use the aggregate relation to implement those basic constructs using higher-order predicates such as not, every, and union. We can also express logical statements such as generalized quantification, which go beyond first-order logic.

Figure 5
Examples of DCS trees that use the aggregate relation (Σ) to (a) compute the cardinality of a set, (b) take the average over a set, and (c) represent a disjunction over two conditions. The aggregate relation sets the parent node deterministically to the denotation of the child node. Nodes with the special null predicate ø are represented as empty nodes.
2.5 Semantics of DCS Trees with Mark–Execute (Full Version)
Basic DCS includes two types of relations, join and aggregate, and it is already quite expressive. In general, however, it is not enough just to be able to express the meaning of a sentence using some logical form; we must be able to derive the logical form compositionally and simply from the sentence. Consider the superlative construction most populous city, which has the basic syntactic dependency structure shown in Figure 6(a). Figure 6(b) shows that we can in principle already use a DCS tree with only join and aggregate relations to express the correct semantics of the superlative construction. Note, however, that the two structures are quite divergent—the syntactic head is city and the semantic head is argmax. This divergence runs counter to a principal desideratum of DCS, which is to create a transparent interface between coarse syntax and semantics. In this section, we introduce mark and execute relations, which will allow us to use the DCS tree in Figure 6(c) to represent the semantics associated with Figure 6(a); these two are more similar than (a) and (b). The focus of this section is on this mark–execute construct—using mark and execute relations to give proper semantically scoped denotations to syntactically scoped tree structures. The basic intuition of the mark–execute construct is as follows: We mark a node low in the tree with a mark relation; then, higher up in the tree, we invoke it with a corresponding execute relation (Figure 7). For our example in Figure 6(c), we mark the population node, which puts the child argmax in a temporary store; when we execute the city node, we fetch the superlative predicate argmax from the store and invoke it. This divergence between syntactic and semantic scope arises in other linguistic contexts besides superlatives, such as quantification and negation. In each of these cases, the general template is the same: A syntactic modifier low in the tree needs to have semantic force higher in the tree.

Figure 6
Two semantically equivalent DCS trees are shown in (b) and (c). The DCS tree in (b), which uses the join and aggregate relations of basic DCS, does not align well with the syntactic structure of most populous city (a), and thus is undesirable. The DCS tree in (c), by using the mark–execute construct, aligns much better, with city rightfully dominating its modifiers. The full version of DCS allows us to construct (c), which is preferable to (b).
A particularly compelling case of this divergence happens with quantifier scope ambiguity (e.g., Some river traverses every city5), where the quantifiers appear in fixed syntactic positions, but the surface and inverse scope readings correspond to different semantically scoped denotations. Analogously, a single syntactic structure involving superlatives can also yield two different semantically scoped denotations—the absolute and relative readings (e.g., state bordering the largest state6). The mark–execute construct provides a unified framework for dealing with all these forms of divergence between syntactic and semantic scope. See Figures 8 and 9 for concrete examples of this construct.

5 The two meanings are: (i) there is a river x such that x traverses every city; and (ii) for every city x, some river traverses x.

Figure 7
The template for the mark–execute construct. A mark relation (one of E, Q, C) "stores" the modifier. Then an execute relation (of the form Xi for indices i) higher up "recalls" the modifier and applies it at the desired semantic point.
2.5.1 Denotations. We now formalize the mark–execute construct. We saw that the mark–execute construct appears to act non-locally, putting things in a store and retrieving them later. This means that if we want the denotation of a DCS tree to depend only on the denotations of its subtrees, the denotations need to contain more than the set of feasible values for the root node, as was the case for basic DCS. We need to augment denotations to include information about all marked nodes, because these can be accessed by an execute relation higher up in the tree. More specifically, let z be a DCS tree and d = ⟦z⟧_w be its denotation. The denotation d consists of n columns. The first column always corresponds to the root node of z, and the rest of the columns correspond to non-root marked nodes in z. In the example in Figure 10, there are two columns, one for the root state node and the other for the size node, which is marked by C. The columns are ordered according to a pre-order traversal of z, so column 1 always corresponds to the root node. The denotation d contains a set of arrays d.A, where each array represents a feasible assignment of values to the columns of d; note that we quantify over non-marked nodes, so they do not correspond to any column in the denotation. For example, in Figure 10, the first array in d.A corresponds to assigning (OK) to the state node (column 1) and (TX, 2.7e5) to the size node (column 2). If there are no marked nodes, d.A is basically a set of tuples, which corresponds to a denotation in basic DCS. For each marked node, the denotation d also maintains a store with information to be retrieved when that marked node is executed. A store σ for a marked node contains the following: (i) the mark relation σ.r (C in the example), (ii) the base denotation σ.b, which essentially corresponds to the denotation of the subtree rooted at the marked node excluding the mark relation and its subtree (⟦size⟧_w in the example), and (iii) the denotation of the child of the mark relation (⟦argmax⟧_w in the example). The store of any unmarked node is always empty (σ = ø).

6 The two meanings are: (i) a state that borders Alaska (which is the largest state); and (ii) a state with the highest score, where the score of a state x is the maximum size of any state that x borders (Alaska is irrelevant here because no states border it).

Figure 8
Examples of DCS trees that use the mark–execute construct with the E and Q mark relations. (a) The head verb borders, which needs to be returned, has a direct object states modified by which. (b) The quantifier no is syntactically dominated by state but needs to take wider scope. (c) Two quantifiers yield two possible readings; we build the same basic structure, marking both quantifiers; the choice of execute relation (X12 versus X21) determines the reading. (d) We use two mark relations, Q on river for the negation, and E on city to force the quantifier to be computed for each value of city.
Definition 3 (Denotations)
Let D be the set of denotations, where each denotation d ∈ D consists of
• a set of arrays d.A, where each array a = [a_1, ..., a_n] ∈ d.A is a sequence of n tuples for some n ≥ 0; and
Figure 9 Examples of DCS trees that use the mark–execute construct with the E and C relation. (a,b,c) Comparatives and superlatives are handled as follows: For each value of the node marked by E, we compute a number based on the node marked by C; based on this information, a subset of the values is selected as the possible values of the root node. (d) Analog of quantifier scope ambiguity for superlatives: The placement of the execute relation determines an absolute versus relative reading. (e) Interaction between a quantifier and a superlative: The lower execute relation computes the largest city for each state; the second execute relation invokes most and enforces that the major constraint holds for the majority of states.
Figure 10 Example of the denotation for a DCS tree (with the compare relation C). This denotation has two columns, one for each active node—the root node state and the marked node size.
• a sequence of n stores d.σ = (d.σ_1, ..., d.σ_n), where each store σ contains a mark relation σ.r ∈ {E, Q, C, ø}, a base denotation σ.b ∈ D ∪ {ø}, and a child denotation σ.c ∈ D ∪ {ø}.
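Definition 3 can be transcribed as a small data structure; a hypothetical Python sketch (class and field names are invented for illustration) with the project operation d[i] and the two-column example of Figure 10:

```python
# Hypothetical transcription of Definition 3: a denotation is a set of arrays
# (each array a tuple of tuples, one per column) plus one store per column.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Store:
    r: str = "ø"                      # mark relation: E, Q, C, or ø
    b: Optional["Denotation"] = None  # base denotation
    c: Optional["Denotation"] = None  # child denotation

@dataclass
class Denotation:
    A: set         # set of arrays; each array is a tuple of tuples
    stores: tuple  # one Store per column

    def project(self, idx):
        """d[i]: keep only the columns listed in idx (0-indexed here)."""
        return Denotation(
            A={tuple(a[i] for i in idx) for a in self.A},
            stores=tuple(self.stores[i] for i in idx),
        )

# Two-column denotation as in Figure 10: column 1 = state, column 2 = size.
d = Denotation(
    A={(("OK",), ("TX", 267000)), (("NM",), ("TX", 267000))},
    stores=(Store(), Store(r="C")),
)
print(sorted(d.project([0]).A))  # [((('NM',),)), ((('OK',),))] up to tuple nesting
```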
Note that denotations are formally defined without reference to DCS trees (just as sets of tuples were in basic DCS), but it is sometimes useful to refer to the DCS tree that generates that denotation. For notational convenience, we write d as ⟨A; (r_1, b_1, c_1); ...; (r_n, b_n, c_n)⟩. Also let d.r_i = d.σ_i.r, d.b_i = d.σ_i.b, and d.c_i = d.σ_i.c. Let d{σ_i = x} be the denotation that is identical to d, except with d.σ_i = x; d{r_i = x}, d{b_i = x}, and d{c_i = x} are defined analogously. We also define a project operation for denotations: ⟨A; σ⟩[i] ≝ ⟨{a_i : a ∈ A}; σ_i⟩. Extending this notation further, we use ø to denote the indices of the non-initial columns with empty stores (i > 1 such that d.σ_i = ø). We can then use d[−ø] to represent projecting away the non-initial columns with empty stores. For the denotation d in Figure 10, d[1] keeps column 1, d[−ø] keeps both columns, and d[2, −2] swaps the two columns. In basic DCS, denotations are sets of tuples, which works quite well for representing the semantics of wh-questions such as What states border Texas? But what about polar questions such as Does Louisiana border Texas? The denotation should be a simple Boolean value, which basic DCS does not represent explicitly. Using our new denotations, we can represent Boolean values explicitly using zero-column structures: true corresponds to a singleton set containing just the empty array (d_T = {[]}), and false is the empty set (d_F = ∅). Having described denotations as n-column structures, we now give the formal mapping from DCS trees to these structures. As in basic DCS, this mapping is defined recursively over the structure of the tree. We have a recurrence for each case (the first line is the base case, and each of the others handles a different edge relation):
⟦⟨p⟩⟧_w = ⟨{[v] : v ∈ w(p)}; ø⟩    [base case]  (19)
⟦⟨p; e; j/j′:c⟩⟧_w = ⟦⟨p; e⟩⟧_w ⋈^{−ø}_{j,j′} ⟦c⟧_w    [join]  (20)
⟦⟨p; e; Σ:c⟩⟧_w = ⟦⟨p; e⟩⟧_w ⋈^{−ø}_{∗,∗} Σ(⟦c⟧_w)    [aggregate]  (21)
⟦⟨p; e; X_i:c⟩⟧_w = ⟦⟨p; e⟩⟧_w ⋈^{−ø}_{∗,∗} X_i(⟦c⟧_w)    [execute]  (22)
⟦⟨p; e; E:c⟩⟧_w = M(⟦⟨p; e⟩⟧_w, E, ⟦c⟧_w)    [extract]  (23)
⟦⟨p; e; C:c⟩⟧_w = M(⟦⟨p; e⟩⟧_w, C, ⟦c⟧_w)    [compare]  (24)
⟦⟨p; Q:c; e⟩⟧_w = M(⟦⟨p; e⟩⟧_w, Q, ⟦c⟧_w)    [quantify]  (25)
We define the operations ⋈^{−ø}_{j,j′}, Σ, X_i, and M in the remainder of this section.
2.5.2 Base Case. Equation (19) defines the denotation for a DCS tree z with a single node with predicate p. The denotation of z has one column whose arrays correspond to the tuples w(p); the store for that column is empty.
2.5.3 Join Relations. Equation (20) defines the recurrence for join relations. On the left-hand side, ⟨p; e; j/j′:c⟩ is a DCS tree with p at the root, a sequence of edges e, followed by a final edge with relation j/j′ connected to a child DCS tree c. On the right-hand side, we take the recursively computed denotation of ⟨p; e⟩, the DCS tree without the final edge, and perform a join-project-inactive operation (notated ⋈^{−ø}_{j,j′}) with the denotation of the child DCS tree c. The join-project-inactive operation joins the arrays of the two denotations (this is the core of the join operation in basic DCS—see Equation (13)), and then projects away the non-initial empty columns:7
⟨A; σ⟩ ⋈^{−ø}_{j,j′} ⟨A′; σ′⟩ = ⟨A′′; σ + σ′⟩[−ø], where    (26)
A′′ = {a + a′ : a ∈ A, a′ ∈ A′, a_{1j} = a′_{1j′}}
We concatenate all arrays a ∈ A with all arrays a′ ∈ A′ that satisfy the join condition a_{1j} = a′_{1j′}. The sequences of stores are simply concatenated (σ + σ′). Finally, any non-initial columns with empty stores are projected away by applying ·[−ø]. Note that the join works on column 1; the other columns are carried along for the ride. As another piece of convenient notation, we use ∗ to represent all components, so ⋈^{−ø}_{∗,∗} imposes the join condition that the entire tuple has to agree (a_1 = a′_1).
2.5.4 Aggregate Relations. Equation (21) defines the recurrence for aggregate relations. Recall that in basic DCS, aggregate (16) simply takes the denotation (a set of tuples) and puts it into a set. Now, the denotation is not just a set, so we need to generalize this operation. Specifically, the aggregate operation applied to a denotation forms a set out of the tuples in the first column for each setting of the rest of the columns:
Σ(⟨A; σ⟩) = ⟨A′ ∪ A′′; σ⟩    (27)
A′ = {[S(a), a_2, ..., a_n] : a ∈ A}
S(a) = {a_1 : [a_1, a_2, ..., a_n] ∈ A}
A′′ = {[∅, a_2, ..., a_n] : ∀i ∈ {2, ..., n}, [a_i] ∈ σ_i.b.A[1], ¬∃a_1 : [a_1, a_2, ..., a_n] ∈ A}
7 The join and project operations are taken from relational algebra.
The aggregate operation takes the set of arrays A and produces two sets of arrays, A′ and A′′, which are unioned (note that the stores do not change). The set A′ is the one that first comes to mind: For every setting of a_2, ..., a_n, we construct S(a), the set of tuples a_1 in the first column that co-occur with a_2, ..., a_n in A. There is another case, however: What happens to settings of a_2, ..., a_n that do not co-occur with any value of a_1 in A? Then S(a) = ∅, but note that A′ by construction will not contain the desired array [∅, a_2, ..., a_n]. As a concrete example, suppose A = ∅ and we have one column (n = 1). Then A′ = ∅, rather than the desired {[∅]}. Fixing this problem is slightly tricky. There are an infinite number of a_2, ..., a_n that do not co-occur with any a_1 in A, so for which ones do we actually include [∅, a_2, ..., a_n]? Certainly, the answer to this question cannot come from A, so it must come from the stores. In particular, for each column i ∈ {2, ..., n}, we have conveniently stored a base denotation σ_i.b. We consider any a_i that occurs in column 1 of the arrays of this base denotation ([a_i] ∈ σ_i.b.A[1]). For this a_2, ..., a_n, we include [∅, a_2, ..., a_n] in A′′ as long as a_2, ..., a_n does not co-occur with any a_1. An example is given in Figure 11. The reason for storing base denotations is thus partially revealed: The arrays represent feasible values of a CSP and can only contain positive information. When we aggregate, we need to access possibly empty sets of feasible values—a kind of negative information, which can only be recovered from the base denotations.
Figure 11
An example of applying the aggregate operation, which takes a denotation and aggregates the values in column 1 for every setting of the other columns. The base denotations (b) are used to put in {} for values that do not appear in A (in this example, AK, corresponding to the fact that Alaska does not border any states).
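The two-case aggregate operation (Equation (27)) can be sketched for a single extra column; the grouping step builds A′, and the base-denotation fallback builds A′′ (the empty-set arrays). Names and the toy geography are illustrative assumptions:

```python
# Hypothetical sketch of Equation (27) for arrays with two columns:
# group column-1 values by the column-2 setting (A'), and use the base
# denotation to emit empty sets for column-2 settings that co-occur with
# no column-1 value (A''), e.g., Alaska borders no state.

def aggregate_op(A, base_col2):
    """A: set of 2-column arrays (a1, a2); base_col2: feasible a2 values."""
    grouped = {}
    for a1, a2 in A:
        grouped.setdefault(a2, set()).add(a1)
    A1 = {(frozenset(grouped[a2]), a2) for a2 in grouped}                # A'
    A2 = {(frozenset(), a2) for a2 in base_col2 if a2 not in grouped}    # A''
    return A1 | A2

A = {(("WA",), ("OR",)), (("CA",), ("OR",)), (("NV",), ("CA",))}
base = {("OR",), ("CA",), ("AK",)}
result = aggregate_op(A, base)
print(sorted(a2 for (s, a2) in result if not s))  # [('AK',)]
```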
2.5.5 Mark Relations. Equations (23), (24), and (25) each process a different mark relation. We define a general mark operation M(d, r, c), which takes a denotation d, a mark relation r ∈ {E, Q, C}, and a child denotation c, and sets the store of d in column 1 to be (r, d, c):
M(d, r, c) = d{r1 = r, b1 = d, c1 = c} (28)
The base denotation of the first column, b_1, is set to the current denotation d. This, in some sense, creates a snapshot of the current denotation. Figure 12 shows an example of the mark operation.
Figure 12
An example of applying the mark operation, which takes a denotation and modifies the store of column 1. This information is used by other operations such as aggregate and execute.

2.5.6 Execute Relations. Equation (22) defines the denotation of a DCS tree where the last edge of the root is an execute relation. Similar to the aggregate case (21), we recurse on the DCS tree without the last edge (⟨p; e⟩) and then join it to the result of applying the execute operation X_i to the denotation of the child (⟦c⟧_w). The execute operation X_i is the most intricate part of DCS and is what does the heavy lifting. The operation is parametrized by a sequence of distinct indices i that specifies the order in which the columns should be processed. Specifically, i indexes into the subsequence of columns with non-empty stores. We then process this subsequence of columns in reverse order, where processing a column means performing some operations depending on the stored relation in that column. For example, suppose that columns 2 and 3 are the only non-empty columns. Then X12 processes column 3 before column 2. On the other hand, X21 processes column 2 before column 3. We first define the execute operation X_i for a single column i. There are three distinct cases, depending on the relation stored in column i:

Figure 13
An example of applying the execute operation on column 1 with the extract relation E. The denotation prior to execution consists of two columns: column 1 corresponds to the border node; column 2 to the state node. The join relations and predicates CA and state constrain the arrays A in the denotation to include only the states that border California. After execution, the non-marked column 1 is projected away, leaving only the state column with its store emptied.
Extraction. For a denotation d with the extract relation E in column i, executing X_i(d) involves three steps: (i) moving column i to before column 1 (·[i, −i]), (ii) projecting away non-initial empty columns (·[−ø]), and (iii) removing the store (·{σ_1 = ø}):
X_i(d) = d[i, −i][−ø]{σ_1 = ø}    if d.r_i = E    (29)
An example is given in Figure 13. There are two main uses of extraction.
1. By default, the denotation of a DCS tree is the set of feasible values of the root node (which occupies column 1). To return the set of feasible values of another node, we mark that node with E. Upon execution, the feasible values of that node move into column 1. Extraction can be used to handle in situ questions (see Figure 8(a)). 2. Unmarked nodes (those that do not have an edge with a mark relation) are existentially quantified and have narrower scope than all marked nodes. Therefore, we can make a node x have wider scope than another node y by
marking x (with E) and executing y before x (see Figure 8(d,e) for examples). The extract relation E (in fact, any mark relation) signifies that we want to control the scope of a node, and the execute relation allows us to set that scope.
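At the array level, the extract step of Equation (29) is a column reordering followed by a projection; a hypothetical sketch for the two-column case of Figure 13, where the old column 1 has an empty store and is projected away (the data is invented):

```python
# Hypothetical array-level sketch of Equation (29) for a two-column
# denotation: move the marked column i to the front, then drop the old
# column 1 (its store is empty, so ·[-ø] removes it).

def execute_extract(arrays, i):
    """Move column i of each array to position 0; drop the other column."""
    return {(a[i],) for a in arrays}

# Column 1: (border) tuples; column 2: the marked state node.
arrays = {(("CA", "OR"), ("OR",)), (("CA", "NV"), ("NV",))}
print(sorted(execute_extract(arrays, 1)))  # [(('NV',),), (('OR',),)]
```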
Generalized Quantification. Generalized quantifiers are predicates on two sets, a restrictor A and a nuclear scope B. For example,
w(some) = {(A, B) : |A ∩ B| > 0}    (30)
w(every) = {(A, B) : A ⊂ B}    (31)
w(no) = {(A, B) : A ∩ B = ∅}    (32)
w(most) = {(A, B) : |A ∩ B| > (1/2)|A|}    (33)
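The quantifier predicates in Equations (30)–(33) transcribe directly into Boolean tests on a restrictor A and nuclear scope B; a minimal sketch with toy data (the state names are illustrative):

```python
# Direct transcription of Equations (30)-(33) as predicates on two sets.

def some(A, B):  return len(A & B) > 0       # |A ∩ B| > 0
def every(A, B): return A <= B               # A ⊆ B
def no(A, B):    return len(A & B) == 0      # A ∩ B = ∅
def most(A, B):  return len(A & B) > len(A) / 2

states = {"AK", "TX", "CA"}
borders_alaska = set()  # toy fact: no state borders Alaska
print(no(states, borders_alaska))   # True
print(every({"TX"}, states))        # True
print(most(states, {"TX", "CA"}))   # True
```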
We think of the quantifier as a modifier that always appears as the child of a Q relation; the restrictor is the parent. For example, in Figure 8(b), no corresponds to the quantifier and state corresponds to the restrictor. The nuclear scope should be the set of all states that Alaska borders. More generally, the nuclear scope is the set of feasible values of the restrictor node with respect to the CSP that includes all nodes between the mark and execute relations. The restrictor is also the set of feasible values of the restrictor node, but with respect to the CSP corresponding to the subtree rooted at that node.8 We implement generalized quantifiers as follows: Let d be a denotation and suppose we are executing column i. We first construct a denotation for the restrictor d_A and a denotation for the nuclear scope d_B. For the restrictor, we take the base denotation in column i (d.b_i)—remember that the base denotation represents a snapshot of the restrictor node before the nuclear scope constraints are added. For the nuclear scope, we take the complete denotation d (which includes the nuclear scope constraints) and extract column i (d[i, −i][−ø]{σ_1 = ø}—see (29)). We then construct d_A and d_B by applying the aggregate operation to each. Finally, we join these sets with the quantifier denotation, stored in d.c_i:

X_i(d) = (d.c_i ⋈^{−ø}_{1,1} d_A ⋈^{−ø}_{2,1} d_B)[−1]    if d.r_i = Q, where    (34)
d_A = Σ(d.b_i)    (35)
d_B = Σ(d[i, −i][−ø]{σ_1 = ø})    (36)
When there is one quantifier, think of the execute relation as performing a syntactic rewriting operation, as shown in Figure 14(b). For more complex cases, we must defer to (34). Figure 8(c) shows an example with two interacting quantifiers. The denotation of the DCS tree before execution is the same in both readings, as shown in Figure 15. The quantifier scope ambiguity is resolved by the choice of execute relation: X12 gives the surface scope reading, X21 gives the inverse scope reading. Figure 8(d) shows how extraction and quantification work together. First, the no quantifier is processed for each city, which is an unprocessed marked node. Here, the extract relation is a technical trick to give city wider scope.

8 Defined this way, we can only handle conservative quantifiers, because the nuclear scope will always be a subset of the restrictor. This design decision is inspired by DRT, where it provides a way of modeling donkey anaphora. We are not treating anaphora in this work, but we can handle it by allowing pronouns in the nuclear scope to create anaphoric edges into nodes in the restrictor. These constraints naturally propagate through the nuclear scope's CSP without affecting the restrictor.

Figure 14
(a) An example of applying the execute operation on column i with the quantify relation Q. Before executing, note that A = {} (because Alaska does not border any states). The restrictor (A) is the set of all states, and the nuclear scope (B) is empty. Because the pair (A, B) does exist in w(no), the final denotation is {[]} (which represents true). (b) Although the execute operation actually works on the denotation, think of it in terms of expanding the DCS tree. We introduce an extra projection relation [−1], which projects away the first column of the child subtree's denotation.
Figure 15
Denotation of Figure 8(c) before the execute relation is applied.

Comparatives and Superlatives. Comparative and superlative constructions involve comparing entities, and for this we rely on a set S of entity–degree pairs (x, y), where x is an entity and y is a numeric degree. Recall that we can treat S as a function, which maps an entity x to the set of degrees S(x) associated with x. Note that this set can contain multiple degrees. For example, in the relative reading of state bordering the largest state, we would have a degree for the size of each neighboring state. Superlatives use the argmax and argmin predicates, which are defined in Section 2.3. Comparatives use the more and less predicates: w(more) contains triples (S, x, y), where x is "more than" y as measured by S; w(less) is defined analogously:
w(more) = {(S, x, y) : max S(x) > max S(y)}    (37)
w(less) = {(S, x, y) : min S(x) < min S(y)}    (38)
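Treating S as a function from an entity to its set of degrees, Equations (37)–(38) can be sketched directly; the mapping below and its numbers are toy data for illustration:

```python
# Sketch of Equations (37)-(38): "more" compares maxima of degree sets,
# "less" compares minima. S maps an entity to its set of degrees.

def more(S, x, y):
    return max(S[x]) > max(S[y])

def less(S, x, y):
    return min(S[x]) < min(S[y])

# Toy degree sets (e.g., sizes of neighboring states; numbers invented).
S = {"TX": {267000}, "RI": {1500}, "CA": {164000}}
print(more(S, "TX", "CA"))  # True
print(less(S, "RI", "CA"))  # True
```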
We use the same mark relation C for both comparative and superlative constructions. In terms of the DCS tree, there are three key parts: (i) the root x, which corresponds to the entity to be compared, (ii) the child c of a C relation, which corresponds to the comparative or superlative predicate, and (iii) c's parent p, which contains the "degree information" (described later) used for comparison. We assume that the root is marked (usually with a relation E). This forces us to compute a comparison degree for each value of the root node. In terms of the denotation d corresponding to the DCS tree prior to execution, the entity to be compared occurs in column 1 of the arrays d.A, the degree information occurs in column i of the arrays d.A, and the denotation of the comparative or superlative predicate itself is the child denotation at column i (d.c_i). First, we define a concatenating function +_i(d), which combines the columns i of d by concatenating the corresponding tuples of each array in d.A:
+_i(⟨A; σ⟩) = ⟨A′; σ′⟩, where    (39)
A′ = {a_{(1…i_1)\i} + [a_{i_1} + ··· + a_{i_{|i|}}] + a_{(i_1…n)\i} : a ∈ A}
σ′ = σ_{(1…i_1)\i} + [σ_{i_1}] + σ_{(i_1…n)\i}
Note that the store of column i1 is kept and the others are discarded. As an example:
+_{2,1}(⟨{[(1), (2), (3)], [(4), (5), (6)]}; (σ_1, σ_2, σ_3)⟩) = ⟨{[(2, 1), (3)], [(5, 4), (6)]}; (σ_2, σ_3)⟩    (40)
We first create a denotation d′ in which column i, which contains the degree information, is extracted to column 1 (so that column 2 corresponds to the entity to be compared). Next, we create a denotation d_S whose column 1 contains a set of entity–degree pairs. There are two types of degree information:
1. Suppose the degree information has arity 2 (ARITY(d.A[i]) = 2). This occurs, for example, in most populous city (see Figure 9(b)), where column i is the population node. In this case, we simply set the degree to the second component of population by projection (⟦ø⟧_w ⋈^{−ø}_{1,2} d′). Now columns 1 and 2 contain the degrees and entities, respectively. We concatenate columns 2 and 1 (+_{2,1}(·)) and aggregate to produce a denotation d_S which contains the set of entity–degree pairs in column 1.
2. Suppose the degree information has arity 1 (ARITY(d.A[i]) = 1). This occurs, for example, in state bordering the most states (see Figure 9(a)), where
column i is the lower marked state node. In this case, the degree of an entity from column 2 is the number of different values that column 1 can take. To compute this, we aggregate the set of values (Σ(d′)) and apply the count predicate. Now with the degrees and entities in columns 1 and 2, respectively, we concatenate the columns and aggregate again to obtain d_S.
Having constructed d_S, we simply apply the comparative/superlative predicate, which has been patiently waiting in d.c_i. Finally, the store of d's column 1 was destroyed by the concatenation operation +_{2,1}(·), so we must restore it with ·{σ_1 = d.σ_1}. The complete operation is as follows:

X_i(d) = (⟦ø⟧_w ⋈^{−ø}_{1,2} (d.c_i ⋈^{−ø}_{1,1} d_S)){σ_1 = d.σ_1}    if d.r_i = C, d.σ_1 ≠ ø, where    (41)

d_S = Σ(+_{2,1}(⟦ø⟧_w ⋈^{−ø}_{1,2} d′))    if ARITY(d.A[i]) = 2
d_S = Σ(+_{2,1}(⟦ø⟧_w ⋈^{−ø}_{1,2} (⟦count⟧_w ⋈^{−ø}_{1,1} Σ(d′))))    if ARITY(d.A[i]) = 1    (42)

d′ = d[i, −i][−ø]{σ_1 = ø}    (43)
An example of executing the C relation is shown in Figure 16(a). As with executing a Q relation, for simple cases we can think of executing a C relation as expanding a DCS tree, as shown in Figure 16(b). Figure 9(a) and Figure 9(b) show examples of superlative constructions with the arity 1 and arity 2 types of degree information, respectively. Figure 9(c) shows an example of a comparative construction. Comparatives and superlatives use the same machinery, differing only in the predicate: argmax versus ⟨more; 3/1:TX⟩ (more than Texas). But both predicates have the same template behavior: Each takes a set of entity–degree pairs and returns any entity satisfying some property. For argmax, the property is obtaining the highest degree; for more, it is having a degree higher than a threshold. We can handle generalized superlatives (the five largest or the fifth largest or the 5% largest) as well by swapping in a different predicate; the execution mechanisms defined in Equation (41) remain the same. We saw that the mark–execute machinery allows decisions regarding quantifier scope to be made in a clean and modular fashion. Superlatives also have scope ambiguities in the form of absolute versus relative readings. Consider the example in Figure 9(d). In the absolute reading, we first compute the superlative in a narrow scope (the largest state is Alaska), and then connect it with the rest of the phrase, resulting in the empty set (because no states border Alaska). In the relative reading, we consider the first state as the entity we want to compare, and its degree is the size of a neighboring state. In this case, the lower state node cannot be set to Alaska because there are no states bordering it. The result is therefore any state that borders Texas (the largest state that does have neighbors).
The two DCS trees in Figure 9(d) show that we can naturally account for this form of superlative ambiguity based on where the scope-determining execute relation is placed without drastically changing the underlying tree structure.
Remarks. These scope divergence issues are not specific to DCS: every serious semantic formalism must address them. Generative grammar uses quantifier raising to move the quantifier from its original syntactic position up to the desired semantic position before semantic interpretation even occurs (Heim and Kratzer 1998). Other mechanisms, such as Montague's (1973) quantifying in, Cooper storage (Cooper 1975), and Carpenter's (1998) scoping constructor, handle scope divergence during semantic interpretation. Roughly speaking, these mechanisms delay application of a quantifier, "marking" its spot with a dummy pronoun (as in Montague's quantifying in) or putting it in a store (as in Cooper storage), and then "executing" the quantifier at a later point in the derivation, either by performing a variable substitution or by retrieving it from the store. Continuations, from programming languages, are another solution (Barker 2002; Shan 2004): the semantics of a quantifier is a function from its continuation (which captures all the semantic content of the clause minus the quantifier) to the final denotation of the clause.

415 Computational Linguistics Volume 39, Number 2

Figure 16
(a) Executing the compare relation C for an example superlative construction (the relative reading of state bordering the largest state from Figure 9(d)). Before executing, column 1 contains the entity to compare, and column 2 contains the degree information, of which only the second component is relevant. After executing, the resulting denotation contains a single column with only the entities that obtain the highest degree (in this case, the states that border Texas). (b) For this example, think of the execute operation as expanding the original DCS tree, although the execute operation actually works on the denotation, not the DCS tree. The expanded DCS tree has the same denotation as the original DCS tree, and syntactically captures the essence of the execute–compare operation. Going through the relations of the expanded DCS tree from bottom to top: the X2 relation swaps columns 1 and 2; the join relation keeps only the second component ((TX, 267K) becomes (267K)); +_{2,1} concatenates columns 2 and 1 ([(267K), (AR)] becomes [(AR, 267K)]); Σ aggregates these tuples into a set; argmax operates on this set and returns the elements.
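The bottom-to-top walk through the expanded tree in the Figure 16 caption can be mimicked with ordinary operations on rows. This is a toy sketch with made-up sizes and border facts, not the paper's denotation data structures: each row pairs a candidate state with its degree information (neighboring state, size).

```python
# Toy rows for the relative reading of "state bordering the largest state":
# column 1 = candidate entity, column 2 = (neighbor, size) degree information.
rows = [("AR", ("TX", 267)), ("AR", ("OK", 69)),
        ("LA", ("TX", 267)), ("ND", ("MN", 87))]

swapped = [(deg, ent) for ent, deg in rows]            # X2: swap columns 1 and 2
degrees = [(size, ent) for (_, size), ent in swapped]  # join: keep 2nd component only
pairs   = {(ent, size) for size, ent in degrees}       # +_{2,1}: concatenate 2 and 1
best    = max(size for _, size in pairs)               # Σ then argmax over the set
answer  = {ent for ent, size in pairs if size == best}
# answer == {"AR", "LA"}: the states bordering the largest state (TX)
```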
Intuitively, continuations reverse the normal evaluation order, allowing a quantifier to remain in situ but still outscope the rest of the clause. In fact, the mark and execute relations of DCS are analogous to the shift and reset operators used in continuations. One of the challenges with allowing flexible scope is that free variables can yield invalid scopings, a well-known issue with Cooper storage that the continuation-based approach solves. In DCS, invalid scopings are filtered out by the construction mechanism (Section 2.6). One difference between mark–execute in DCS and many other mechanisms is that DCS trees (which contain mark and execute relations) are the final logical forms; the handling of scope divergence occurs in the computation of their denotations. In the other mechanisms, the analog resides in the construction mechanism, and the actual final logical form is quite simple.9 Therefore, we have essentially pushed the inevitable complexity from the construction mechanism into the semantics of the logical form. This is a conscious design decision: We want our construction mechanism, which maps natural language to logical form, to be simple and not burdened with complex linguistic issues, for our focus is on learning this mapping. Unfortunately, the denotations of our logical forms (Section 2.5.1) do become more complex than those of lambda calculus expressions, but we believe this is a reasonable tradeoff to make for our particular application.
2.6 Construction Mechanism
We have thus far defined the syntax (Section 2.2) and semantics (Section 2.5) of DCS trees, but we have only vaguely hinted at how these DCS trees might be connected to natural language utterances by appealing to idealized examples. In this section, we formally define the construction mechanism for DCS, which takes an utterance x and produces a set of DCS trees ZL(x). Because we motivated DCS trees based on dependency syntax, it might be tempting to take a dependency parse tree of the utterance, replace the words with predicates, and attach some relations on the edges to produce a DCS tree. To a first approximation, this is what we will do, but we need to be a bit more flexible for several reasons: (i) some nodes in the DCS tree do not have predicates (e.g., children of an E relation or the parent of an Xi relation); (ii) some nodes have predicates that do not correspond to words (e.g., in California cities, there is an implicit loc predicate that bridges CA and city); (iii) some words might not correspond to any predicates in our world (e.g., please); and (iv) the DCS tree might not always be aligned with the syntactic structure, depending on which syntactic formalism one subscribes to. Although syntax was the inspiration for the DCS formalism, we will not actually use it in construction.

It is also worth stressing the purpose of the construction mechanism. In linguistics, the purpose of a construction mechanism is to try to generate the exact set of valid logical forms for a sentence. We view the construction mechanism instead as simply a way of creating a set of candidate logical forms. A separate step defines a distribution over this set to favor certain logical forms over others. The construction mechanism should therefore simply overapproximate the set of logical forms.
Linguistic constraints that are normally encoded in the construction mechanism (for example, in CCG, that the disharmonic pair S/NP and S\NP cannot be coordinated, or that non-indefinite quantifiers cannot extend their scope beyond clause boundaries) would instead be
9 In the continuation-based approach, this difference corresponds to the difference between assigning a denotational versus an operational semantics.
encoded as features (Section 3.1.1). Because feature weights are estimated from data, one can view our approach as automatically learning the linguistic constraints relevant to our end task.
2.6.1 Lexical Triggers. The construction mechanism assumes a fixed set of lexical triggers L. Each trigger is a pair (s, p), where s is a sequence of words (usually one) and p is a predicate (e.g., s = California and p = CA). We use L(s) to denote the set of predicates p triggered by s ((s, p) ∈ L). We should think of the lexical triggers L not as pinning down the precise predicate for each word, but rather as producing an overapproximation. For example, L might contain {(city, city), (city, state), (city, river), ...}, reflecting our initial ignorance prior to learning. We also define a set of trace predicates L(ε), which can be introduced without an overt lexical element. Their name is inspired by trace/null elements in syntax, but they serve a practical rather than a theoretical role here. As we shall see in Section 2.6.2, trace predicates provide more flexibility in the construction of logical forms, allowing us to insert a predicate based on the partial logical form constructed thus far and assess its compatibility with the words afterwards (based on features), rather than insisting on a purely lexically driven formalism. Section 4.1.3 describes the lexical triggers and trace predicates that we use in our experiments.
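A lexicon of this kind is easy to picture as a map from word sequences to candidate predicate sets, plus a separate set of trace predicates. The sketch below is hypothetical (the class and predicate names are illustrative, not from the paper); the key point is that lookup returns an overapproximating set, not a single predicate.

```python
# Hypothetical sketch of lexical triggers L and trace predicates.
from collections import defaultdict

class Lexicon:
    def __init__(self):
        self.triggers = defaultdict(set)  # word sequence -> candidate predicates
        self.trace = set()                # predicates with no overt trigger

    def add(self, words, predicate):
        self.triggers[words].add(predicate)

    def L(self, words):
        """Return the (overapproximated) predicate set triggered by words."""
        return self.triggers.get(words, set())

lex = Lexicon()
# before learning, "city" ambiguously triggers many predicates
for p in ["city", "state", "river"]:
    lex.add(("city",), p)
lex.add(("California",), "CA")
lex.trace |= {"loc", "border"}  # trace predicates, insertable without a word
```

Here `lex.L(("city",))` returns all three candidates, and a word like please, with no entry, simply triggers the empty set.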
2.6.2 Recursive Construction of DCS Trees. Given a set of lexical triggers L, we will now describe a recursive mechanism for mapping an utterance x = (x1, ..., xn) to ZL(x), a set of candidate DCS trees for x. The basic approach is reminiscent of projective labeled dependency parsing: For each span i..j of the utterance, we build a set of trees Ci,j(x). The set of trees for the span 0..n is the final result:
ZL(x) = C0,n(x) (44)
Each set of DCS trees Ci,j(x) is constructed recursively by combining the trees of its subspans Ci,k(x) and Ck′,j(x) for each pair of split points k, k′ (words between k and k′ are ignored). These combinations are then augmented via a function A and filtered via a function F; these functions will be specified later. Formally, Ci,j(x) is defined recursively as follows:

Ci,j(x) = F(A({⟨p⟩ : p ∈ L(xi+1..j)} ∪ ⋃_{i≤k≤k′<j} ⋃_{a∈Ci,k(x), b∈Ck′,j(x)} T1(a, b)))   (45)
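The recursion above has the shape of a memoized chart construction. A minimal Python sketch, under stated assumptions: the lexicon, the tree representation, and the combination function T1 here are toy stand-ins (the real T1, A, and F are specified later in the paper), and A and F default to the identity.

```python
# Hypothetical sketch of the recursive chart construction of Eqs. (44)-(45).
from functools import lru_cache

def construct(x, L, T1, A=lambda s: s, F=lambda s: s):
    """Return C_{0,n}(x), a set of candidate trees for the full utterance."""
    n = len(x)

    @lru_cache(maxsize=None)
    def C(i, j):
        # lexical triggers for the whole span x_{i+1..j}
        trees = {("leaf", p) for p in L(tuple(x[i:j]))}
        # combine subspans C(i,k) and C(k2,j); words between k and k2 are skipped
        for k in range(i + 1, j):
            for k2 in range(k, j):
                for a in C(i, k):
                    for b in C(k2, j):
                        trees |= T1(a, b)
        return frozenset(F(A(trees)))

    return C(0, n)  # Equation (44): Z_L(x) = C_{0,n}(x)

# toy lexicon and combination function (illustrative only)
lexicon = {("major",): {"major"}, ("city",): {"city"}}
combine = lambda a, b: {("join", a, b)}
trees = construct(["major", "city"], lambda s: lexicon.get(s, set()), combine)
```

On the toy utterance major city, the chart yields the single combined tree joining the two lexical leaves; with a richer lexicon and T1, each cell would hold many candidates, to be scored by the log-linear model.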