
Learning Dependency-Based Compositional Semantics

Percy Liang∗ University of California, Berkeley

Michael I. Jordan∗∗ University of California, Berkeley

Dan Klein† University of California, Berkeley

Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to instead learn a semantic parser from question–answer pairs, where the logical form is modeled as a latent variable. We develop a new semantic formalism, dependency-based compositional semantics (DCS), and define a log-linear distribution over DCS logical forms. The model parameters are estimated using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, we show that our system obtains comparable accuracies to even state-of-the-art systems that do require annotated logical forms.

1. Introduction

One of the major challenges in natural language processing (NLP) is building systems that both handle complex linguistic phenomena and require minimal human effort. The difficulty of achieving both criteria is particularly evident in training semantic parsers, where annotating linguistic expressions with their associated logical forms is expensive but until recently, seemingly unavoidable. Advances in learning latent-variable models, however, have made it possible to progressively reduce the amount of supervision

∗ Computer Science Division, University of California, Berkeley, CA 94720, USA. E-mail: [email protected]. ∗∗ Computer Science Division and Department of Statistics, University of California, Berkeley, CA 94720, USA. E-mail: [email protected]. † Computer Science Division, University of California, Berkeley, CA 94720, USA. E-mail: [email protected].

Submission received: 12 September 2011; revised submission received: 19 February 2012; accepted for publication: 18 April 2012. doi:10.1162/COLI_a_00127

No rights reserved. This work was authored as part of the Contributor’s official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. law.

required for various semantics-related tasks (Zettlemoyer and Collins 2005; Branavan et al. 2009; Liang, Jordan, and Klein 2009; Clarke et al. 2010; Artzi and Zettlemoyer 2011; Goldwasser et al. 2011). In this article, we develop new techniques to learn accurate semantic parsers from even weaker supervision. We demonstrate our techniques on the concrete task of building a system to answer questions given a structured database of facts; see Figure 1 for an example in the domain of U.S. geography. This problem of building natural language interfaces to databases (NLIDBs) has a long history in NLP, starting from the early days of artificial intelligence with systems such as LUNAR (Woods, Kaplan, and Webber 1972), CHAT-80 (Warren and Pereira 1982), and many others (see Androutsopoulos, Ritchie, and Thanisch [1995] for an overview). We believe NLIDBs provide an appropriate starting point for semantic parsing because they lead directly to practical systems, and they allow us to temporarily sidestep intractable philosophical questions on how to represent meaning in general. Early NLIDBs were quite successful in their respective limited domains, but because these systems were constructed from manually built rules, they became difficult to scale up, both to other domains and to more complex utterances. In response, against the backdrop of a statistical revolution in NLP during the 1990s, researchers began to build systems that could learn from examples, with the hope of overcoming the limitations of rule-based methods. One of the earliest statistical efforts was the CHILL system (Zelle and Mooney 1996), which learned a shift-reduce semantic parser. Since then, there has been a healthy line of work yielding increasingly more accurate semantic parsers by using new semantic representations and machine learning techniques (Miller et al. 1996; Zelle and Mooney 1996; Tang and Mooney 2001; Ge and Mooney 2005; Kate, Wong, and Mooney 2005; Zettlemoyer and Collins 2005; Kate and Mooney 2006; Wong and Mooney 2006; Kate and Mooney 2007; Wong and Mooney 2007; Zettlemoyer and Collins 2007; Kwiatkowski et al. 2010, 2011). Although statistical methods provided advantages such as robustness and portability, their application in semantic parsing achieved only limited success. One of the main obstacles was that these methods depended crucially on having examples of utterances paired with logical forms, and this requires substantial human effort to obtain. Furthermore, the annotators must be proficient in some formal language, which drastically reduces the size of the annotator pool, dampening any hope of acquiring enough data to fulfill the vision of learning highly accurate systems. In response to these concerns, researchers have recently begun to explore the possibility of learning a semantic parser without any annotated logical forms (Clarke et al.

Figure 1
The concrete objective: A system that answers natural language questions given a structured database of facts. An example is shown in the domain of U.S. geography.


Figure 2
Our statistical methodology consists of two steps: (i) semantic parsing (p(z | x; θ)): an utterance x is mapped to a logical form z by drawing from a log-linear distribution parametrized by a vector θ; and (ii) evaluation (⟦z⟧w): the logical form z is evaluated with respect to the world w (database of facts) to deterministically produce an answer y. The figure also shows an example configuration of the variables around the graphical model. Logical forms z are represented as labeled trees. During learning, we are given w and (x, y) pairs (shaded nodes) and try to infer the latent logical forms z and parameters θ.

2010; Artzi and Zettlemoyer 2011; Goldwasser et al. 2011; Liang, Jordan, and Klein 2011). It is in this vein that we develop our present work. Specifically, given a set of (x, y) example pairs, where x is an utterance (e.g., a question) and y is the corresponding answer, we wish to learn a mapping from x to y. What makes this mapping particularly interesting is that it passes through a latent logical form z, which is necessary to capture the semantic complexities of natural language. Also note that whereas the logical form z was the end goal in much of earlier work on semantic parsing, for us it is just an intermediate variable—a means towards an end. Figure 2 shows the graphical model which captures the learning setting we just described: The question x, answer y, and world/database w are all observed. We want to infer the logical forms z and the parameters θ of the semantic parser, which are unknown quantities. Although liberating ourselves from annotated logical forms reduces cost, it does increase the difficulty of the learning problem. The core challenge here is program induction: On each example (x, y), we need to efficiently search over the exponential space of possible logical forms (programs) z and find ones that produce the target answer y, a computationally daunting task. There is also a statistical challenge: How do we parametrize the mapping from utterance x to logical form z so that it can be learned from only the indirect signal y? To address these two challenges, we must first discuss the issue of semantic representation. There are two basic questions here: (i) what

should the formal language for the logical forms z be, and (ii) what are the compositional mechanisms for constructing those logical forms? The semantic parsing literature has considered many different formal languages for representing logical forms, including SQL (Giordani and Moschitti 2009), Prolog (Zelle and Mooney 1996; Tang and Mooney 2001), a simple functional query language called FunQL (Kate, Wong, and Mooney 2005), and lambda calculus (Zettlemoyer and Collins 2005), just to name a few. The construction mechanisms are equally diverse, including synchronous grammars (Wong and Mooney 2007), hybrid trees (Lu et al. 2008), Combinatory Categorial Grammars (CCG) (Zettlemoyer and Collins 2005), and shift-reduce derivations (Zelle and Mooney 1996). It is worth pointing out that the choice of formal language and the construction mechanism are decisions which are really more orthogonal than is often assumed—the former is concerned with what the logical forms look like; the latter, with how to generate a set of possible logical forms compositionally given an utterance. (How to score these logical forms is yet another dimension.) Existing systems are rarely based on the joint design of the formal language and the construction mechanism; one or the other is often chosen for convenience from existing implementations. For example, Prolog and SQL have often been chosen as formal languages for convenience in end applications, but they were not designed for representing the semantics of natural language, and, as a result, the construction mechanism that bridges the gap between natural language and formal language is generally complex and difficult to learn. CCG (Steedman 2000) is quite popular in computational linguistics (for example, see Bos et al. [2004] and Zettlemoyer and Collins [2005]). In CCG, logical forms are constructed compositionally using a small handful of combinators (function application, function composition, and type raising). For a wide range of canonical examples, CCG produces elegant, streamlined analyses, but its success really depends on having a good, clean lexicon. During learning, there is often a great amount of uncertainty over the lexical entries, which makes CCG more cumbersome. Furthermore, in real-world applications, we would like to handle disfluent utterances, and this further strains CCG by demanding either extra type-raising rules and disharmonic combinators (Zettlemoyer and Collins 2007) or a proliferation of redundant lexical entries for each word (Kwiatkowski et al. 2010). To cope with the challenging demands of program induction, we break away from tradition in favor of a new formal language and construction mechanism, which we call dependency-based compositional semantics (DCS). The guiding principle behind DCS is to provide a simple and intuitive framework for constructing and representing logical forms. Logical forms in DCS are tree structures called DCS trees. The motivation is two-fold: (i) DCS trees are meant to parallel syntactic dependency trees, which facilitates parsing; and (ii) a DCS tree essentially encodes a constraint satisfaction problem, which can be solved efficiently using dynamic programming to obtain the denotation of a DCS tree. In addition, DCS provides a mark–execute construct, which provides a uniform way of dealing with scope variation, a major source of trouble in any semantic formalism. The construction mechanism in DCS is a generalization of labeled dependency parsing, which leads to simple and natural algorithms.
To a linguist, DCS might appear unorthodox, but it is important to keep in mind that our primary goal is effective program induction, not necessarily to model new linguistic phenomena in the tradition of formal semantics. Armed with our new semantic formalism, DCS, we then define a discriminative probabilistic model, which is depicted in Figure 2. The semantic parser is a log-linear distribution over DCS trees z given an utterance x. Notably, z is unobserved, and we instead observe only the answer y, which is obtained by evaluating z on a world/database

w. There are an exponential number of possible trees z, and usually dynamic programming can be used to efficiently search over trees. However, in our learning setting (independent of the semantic formalism), we must enforce the global constraint that z produces y. This makes dynamic programming infeasible, so we use beam search (though dynamic programming is still used to compute the denotation of a fixed DCS tree). We estimate the model parameters with a simple procedure that alternates between beam search and optimizing a likelihood objective restricted to those beams. This yields a natural bootstrapping procedure in which learning and search are integrated. We evaluated our DCS-based approach on two standard benchmarks, GEO, a U.S. geography domain (Zelle and Mooney 1996), and JOBS, a job queries domain (Tang and Mooney 2001). On GEO, we found that our system significantly outperforms previous work that also learns from answers instead of logical forms (Clarke et al. 2010). What is perhaps a more significant result is that our system obtains comparable accuracies to state-of-the-art systems that do rely on annotated logical forms. This demonstrates the viability of training accurate systems with much less supervision than before. The rest of this article is organized as follows: Section 2 introduces DCS, our new semantic formalism. Section 3 presents our probabilistic model and learning algorithm. Section 4 provides an empirical evaluation of our methods. Section 5 situates this work in a broader context, and Section 6 concludes.

2. Representation

In this section, we present the main conceptual contribution of this work, dependency-based compositional semantics (DCS), using the U.S. geography domain (Zelle and Mooney 1996) as a running example. To do this, we need to define the syntax and semantics of the formal language. The syntax is defined in Section 2.2 and is quite straightforward: The logical forms in the formal language are simply trees, which we call DCS trees. In Section 2.3, we give a type-theoretic definition of worlds (also known as databases or models) with respect to which we can define the semantics of DCS trees. The semantics, which is the heart of this article, contains two main ideas: (i) using trees to represent logical forms as constraint satisfaction problems or extensions thereof, and (ii) dealing with cases when syntactic and semantic scope diverge (e.g., for generalized quantification and superlative constructions) using a new construct which we call mark–execute. We start in Section 2.4 by introducing the semantics of a basic version of DCS which focuses only on (i) and then extend it to the full version (Section 2.5) to account for (ii). Finally, having fully specified the formal language, we describe a construction mechanism for mapping a natural language utterance to a set of candidate DCS trees (Section 2.6).

2.1 Notation

Operations on tuples will play a prominent role in this article. For a sequence¹ v = (v1, ..., vk), we use |v| = k to denote the length of the sequence. For two sequences u and v, we use u + v = (u1, ..., u|u|, v1, ..., v|v|) to denote their concatenation.

1 We use the term sequence to refer to both tuples (v1, ..., vk) and arrays [v1, ..., vk]. For our purposes, there is no functional difference between tuples and arrays; the distinction is convenient when we start to talk about arrays of tuples.


For a sequence of positive indices i = (i1, ..., im), let vi = (vi1, ..., vim) consist of the components of v specified by i; we call vi the projection of v onto i. We use negative indices to exclude components: v−i = (v(1,...,|v|)\i). We can also combine sequences of indices by concatenation: vi,j = vi + vj. Some examples: if v = (a, b, c, d), then v2 = b, v3,1 = (c, a), v−3 = (a, b, d), v3,−3 = (c, a, b, d).
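To make the indexing conventions concrete, here is a small Python sketch (our illustration, not part of the article; proj is a hypothetical helper):

```python
# A small illustration of the sequence notation: 1-based projection v_i,
# negative indices v_{-i} for exclusion, and combination by concatenation.

def proj(v, idx):
    """Project tuple v onto indices idx (all positive or all negative)."""
    if all(i > 0 for i in idx):
        return tuple(v[i - 1] for i in idx)
    excluded = {-i for i in idx}
    return tuple(x for j, x in enumerate(v, start=1) if j not in excluded)

v = ('a', 'b', 'c', 'd')
assert proj(v, (2,)) == ('b',)
assert proj(v, (3, 1)) == ('c', 'a')
assert proj(v, (-3,)) == ('a', 'b', 'd')
# v_{3,-3} = v_3 + v_{-3} = (c, a, b, d): mixed index sequences concatenate
assert proj(v, (3,)) + proj(v, (-3,)) == ('c', 'a', 'b', 'd')
```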

2.2 Syntax of DCS Trees

The syntax of the DCS formal language is built from two ingredients, predicates and relations:

- Let P be a set of predicates. We assume that P contains a special null predicate ø, domain-independent predicates (e.g., count, <, >, and =), and domain-specific predicates (for the U.S. geography domain, state, river, border, etc.). Right now, think of predicates as just labels, which have yet to receive formal semantics.

- Let R be the set of relations. Note that unlike the predicates P, which can vary across domains, the relations R are fixed. The full set of relations is shown in Table 1. For now, just think of relations as labels—their semantics will be defined in Section 2.4.

The logical forms in DCS are called DCS trees. A DCS tree is a directed rooted tree in which nodes are labeled with predicates and edges are labeled with relations; each node also maintains an ordering over its children. Formally:

Definition 1 (DCS trees)
Let Z be the set of DCS trees, where each z ∈ Z consists of (i) a predicate z.p ∈ P and (ii) a sequence of edges z.e = (z.e1, ..., z.em). Each edge e consists of a relation e.r ∈ R (see Table 1) and a child tree e.c ∈ Z.

We will either draw a DCS tree graphically or write it compactly as ⟨p; r1:c1; ...; rm:cm⟩, where p is the predicate at the root node and c1, ..., cm are its m children connected via edges labeled with relations r1, ..., rm, respectively. Figure 3(a) shows an example of a DCS tree expressed using both graphical and compact formats.
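As a purely illustrative rendering of Definition 1, one might encode DCS trees in Python as follows; DCSTree is our own hypothetical name, and the example encodes the tree of Figure 3(a):

```python
# A minimal Python rendering (our own sketch) of Definition 1: a DCS tree is
# a predicate plus an ordered sequence of (relation, child) edges.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DCSTree:
    p: str                                   # predicate z.p at this node
    edges: List[Tuple[str, 'DCSTree']] = field(default_factory=list)  # z.e

# Figure 3(a), "major city in California":
# <city; 1/1 : <major>; 1/1 : <loc; 2/1 : <CA>>>
tree = DCSTree('city', [
    ('1/1', DCSTree('major')),
    ('1/1', DCSTree('loc', [('2/1', DCSTree('CA'))])),
])
```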

Table 1
Possible relations that appear on edges of DCS trees. Basic DCS uses only the join and aggregate relations; the full version of DCS uses all of them.

Relations R

Name       Relation                        Description of semantic function
join       j/j′ for j, j′ ∈ {1, 2, ...}    j-th component of parent = j′-th component of child
aggregate  Σ                               parent = set of feasible values of child
extract    E                               mark node for extraction
quantify   Q                               mark node for quantification, negation
compare    C                               mark node for superlatives, comparatives
execute    Xi for i ∈ {1, 2, ...}∗         process marked nodes specified by i


Figure 3 (a) An example of a DCS tree (written in both the mathematical and graphical notations). Each node is labeled with a predicate, and each edge is labeled with a relation. (b) A DCS tree z with only join relations encodes a constraint satisfaction problem, represented here as a lambda calculus formula. For example, the root node label city corresponds to a unary predicate city(c), the right child node label loc corresponds to a binary predicate loc()(where is a pair), and the edge between them denotes the constraint c1 = 1 (where the indices correspond to the two labels on the edge). (c) The denotation of z is the set of feasible values for the root node.

A DCS tree is a logical form, but it is designed to look like a syntactic dependency tree, only with predicates in place of words. As we’ll see over the course of this section, it is this transparency between syntax and semantics provided by DCS which leads to a simple and streamlined compositional semantics suitable for program induction.

2.3 Worlds

In the context of question answering, the DCS tree is a formal specification of the question. To obtain an answer, we still need to evaluate the DCS tree with respect to a database of facts (see Figure 4 for an example). We will use the term world to refer

Figure 4
We use the domain of U.S. geography as a running example. The figure presents an example of a world w (database) in this domain. A world maps each predicate to a set of tuples. For example, the depicted world w maps the predicate loc to the set of pairs of places and their containers. Note that functions (e.g., population) are also represented as predicates for uniformity. Some predicates (e.g., count) map to an infinite number of tuples and would be represented implicitly.

to this database (it is sometimes also called a model, but we avoid this term to avoid confusion with the probabilistic model for learning that we will present in Section 3.1). Throughout this work, we assume the world is fully observed and fixed, which is a realistic assumption for building natural language interfaces to existing databases, but questionable for modeling the semantics of language in general.

2.3.1 Types and Values. To define a world, we start by constructing a set of values V. The exact set of values depends on the domain (we will continue to use U.S. geography as a running example). Briefly, V contains numbers (e.g., 3 ∈ V), strings (e.g., Washington ∈ V), tuples (e.g., (3, Washington) ∈ V), sets (e.g., {3, Washington} ∈ V), and other higher-order entities. To be more precise, we construct V recursively. First, define a set of primitive values V⋆, which includes the following:

- Numeric values. Each value has the form x:t ∈ V⋆, where x ∈ R is a real number and t ∈ {number, ordinal, percent, length, ...} is a tag. The tag allows us to differentiate 3, 3rd, 3%, and 3 miles—this will be important in Section 2.6.3. We simply write x for the value x:number.

- Symbolic values. Each value has the form x:t ∈ V⋆, where x is a string (e.g., Washington) and t ∈ {string, city, state, river, ...} is a tag. Again, the tag allows us to differentiate, for example, the entities Washington:city and Washington:state.

Now we build the full set of values V from the primitive values V⋆. To define V, we need a bit more machinery: To avoid logical paradoxes, we construct V in increasing order of complexity using types (see Carpenter [1998] for a similar construction). The casual reader can skip this construction without losing any intuition. Define the set of types T to be the smallest set that satisfies the following properties:

1. The primitive type ⋆ ∈ T;

2. The tuple type (t1, ..., tk) ∈ T for each k ≥ 0 and each non-tuple type ti ∈ T for i = 1, ..., k; and

3. The set type {t} ∈ T for each tuple type t ∈ T.

Note that {⋆}, {{(⋆)}}, and ((⋆)) are not valid types (sets must contain tuple types, sets of sets are disallowed, and tuples may not contain tuples). For each type t ∈ T, we construct a corresponding set of values Vt:

1. For the primitive type t = ⋆, the primitive values V⋆ have already been specified. Note that these types are rather coarse: Primitive values with different tags are considered to have the same type ⋆.

2. For a tuple type t = (t1, ..., tk), Vt is the cross product of the values of its component types:

Vt = {(v1, ..., vk) : ∀i, vi ∈ Vti}    (1)


3. For a set type t = {t′}, Vt contains all subsets of values of type t′:

Vt = {s : s ⊂ Vt′}    (2)

With this last condition, we ensure that all elements of a set must have the same type. Note that a set is still allowed to have values with different tags (e.g., {(Washington:city), (Washington:state)} is a valid set, which might denote the semantics of the utterance things named Washington). Another distinction is that types are domain-independent whereas tags tend to be more domain-specific.
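The three type-formation rules can be checked mechanically. The following Python sketch (ours; the encoding of types as '*', tuples, and frozensets is an assumption for illustration) validates membership in T:

```python
# A sketch of the recursive type construction: '*' is the primitive type,
# Python tuples encode tuple types, and frozensets encode set types.

def is_valid_type(t) -> bool:
    if t == '*':                          # 1. the primitive type
        return True
    if isinstance(t, tuple):              # 2. tuple of non-tuple types
        return all(not isinstance(c, tuple) and is_valid_type(c) for c in t)
    if isinstance(t, frozenset):          # 3. set of a single tuple type
        return len(t) == 1 and all(isinstance(c, tuple) and is_valid_type(c)
                                   for c in t)
    return False

assert is_valid_type(('*', '*'))             # (*, *): a pair of primitives
assert is_valid_type(frozenset({('*',)}))    # {(*)}: a set of 1-tuples
assert not is_valid_type(frozenset({'*'}))   # {*}: sets must contain tuple types
assert not is_valid_type((('*',),))          # ((*)): tuples of tuples invalid
```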

Let V = ∪t∈T Vt be the set of all possible values.
A world maps each predicate to its semantics, which is a set of tuples (see Figure 4 for an example). First, let Ttuple ⊂ T be the tuple types, which are the ones of the form (t1, ..., tk) for some k. Let Vtuple denote all the sets of tuples (with the same type):

Vtuple def= ∪t∈Ttuple V{t}    (3)

Now we define a world formally.

Definition 2 (World)
A world w : P → Vtuple ∪ {V} is a function that maps each non-null predicate p ∈ P\{ø} to a set of tuples w(p) ∈ Vtuple and maps the null predicate ø to the set of all values (w(ø) = V).

For a set of tuples A with the same arity, let ARITY(A) = |x|, where x ∈ A is arbitrary; if A is empty, then ARITY(A) is undefined. Now for a predicate p ∈ P and world w, define ARITYw(p), the arity of predicate p with respect to w, as follows:

ARITYw(p) = 1 if p = ø, and ARITYw(p) = ARITY(w(p)) if p ≠ ø.    (4)

The null predicate has arity 1 by fiat; the arity of a non-null predicate p is inherited from the tuples in w(p).
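A minimal sketch of Definition 2 and Equation (4), assuming a world is encoded as a Python dictionary from predicate names to sets of tuples (our own encoding, not the article's):

```python
# A hedged sketch of a world: predicate names map to sets of tuples, and 'ø'
# plays the null predicate.

world = {
    'state': {('California:state',), ('Oregon:state',)},
    'loc':   {('San Francisco:city', 'California:state')},
}

def arity(w: dict, p: str) -> int:
    """ARITY_w(p): 1 for the null predicate, else the arity of w(p)'s tuples."""
    if p == 'ø':
        return 1
    A = w[p]
    return len(next(iter(A)))   # undefined (StopIteration) when w(p) is empty

assert arity(world, 'state') == 1
assert arity(world, 'loc') == 2
```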

Remarks. In higher-order logic and lambda calculus, we construct function types and values, whereas in DCS, we construct tuple types and values. The two are equivalent in representational power, but this discrepancy does point out the fact that lambda calculus is based on function application, whereas DCS, as we will see, is based on declarative constraints. The set type {(⋆, ⋆)} in DCS corresponds to the function type ⋆ → (⋆ → bool). In DCS, there is no explicit bool type—it is implicitly represented by using sets.

2.3.2 Examples. The world w maps each domain-specific predicate to a set of tuples (usually a finite set backed by a database). For the U.S. geography domain, w has a

predicate that maps to the set of U.S. states (state), another predicate that maps to the set of pairs of entities and where they are located (loc), and so on:

w(state) = {(California:state), (Oregon:state), ...} (5) w(loc) = {(San Francisco:city, California:state), ...} (6) ... (7)

To shorten notation, we use state abbreviations (e.g., CA = California:state). The world w also specifies the semantics of several domain-independent predicates (think of these as helper functions), which usually correspond to an infinite set of tuples. Functions are represented in DCS by a set of input–output pairs. For example, the semantics of the countt predicate (for each type t ∈ T) contains pairs of sets S and their cardinalities |S|:

w(countt) = {(S, |S|) : S ∈ V{(t)}} ∈ V{({(t)},⋆)}    (8)

As another example, consider the predicate averaget (for each t ∈ T), which takes a set of key–value pairs (with keys of type t) and returns the average value. For notational convenience, we treat an arbitrary set of pairs S as a set-valued function: We let S1 = {x : (x, y) ∈ S} denote the domain of the function, and abusing notation slightly, we define the function S(x) = {y : (x, y) ∈ S} to be the set of values y that co-occur with the given x. The semantics of averaget contains pairs of sets and their averages:

w(averaget) = {(S, z) : S ∈ V{(t,⋆)}, z = |S1|⁻¹ Σ_{x∈S1} |S(x)|⁻¹ Σ_{y∈S(x)} y} ∈ V{({(t,⋆)},⋆)}    (9)

Similarly, we can define the semantics of argmint and argmaxt, which each takes a set of key–value pairs and returns the keys that attain the smallest (largest) value:

w(argmint) = {(S, z) : S ∈ V{(t,⋆)}, z ∈ argmin_{x∈S1} min S(x)} ∈ V{({(t,⋆)},t)}    (10)
w(argmaxt) = {(S, z) : S ∈ V{(t,⋆)}, z ∈ argmax_{x∈S1} max S(x)} ∈ V{({(t,⋆)},t)}    (11)

The extra min and max is needed because S(x) could contain more than one value. We also impose that w(argmint) contains only (S, z) such that y is numeric for all (x, y) ∈ S; thus argmint denotes a partial function (same for argmaxt). These helper functions are monomorphic: For example, countt only computes cardinalities of sets of type {(t)}. In practice, we mostly operate on sets of primitives (t = ⋆). To reduce notation, we omit t to refer to this version: count = count⋆, average = average⋆, and so forth.
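For illustration, these helper predicates can be realized as ordinary functions over finite sets of key–value pairs; the following sketch (ours, with hypothetical names such as values_at) mirrors Equations (9) and (11):

```python
# Sketches of the helper predicates, treating a set S of key-value pairs as a
# set-valued function, as in the surrounding text.

def keys(S):                     # S_1: the domain of the function
    return {x for (x, _) in S}

def values_at(S, x):             # S(x): values co-occurring with key x
    return {y for (k, y) in S if k == x}

def average(S):                  # Equation (9)
    ks = keys(S)
    return sum(sum(values_at(S, x)) / len(values_at(S, x)) for x in ks) / len(ks)

def argmax_keys(S):              # Equation (11): keys attaining the largest value
    best = max(max(values_at(S, x)) for x in keys(S))
    return {x for x in keys(S) if max(values_at(S, x)) == best}

# Approximate state areas in square miles
S = {('TX', 268_581), ('CA', 163_695), ('OR', 98_379)}
assert argmax_keys(S) == {'TX'}
assert round(average(S)) == 176_885
```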


2.4 Semantics of DCS Trees without Mark–Execute (Basic Version)

The semantics or denotation of a DCS tree z with respect to a world w is denoted ⟦z⟧w. First, we define the semantics of DCS trees with only join relations (Section 2.4.1). In this case, a DCS tree encodes a constraint satisfaction problem (CSP); this is important because it highlights the constraint-based nature of DCS and also naturally leads to a computationally efficient way of computing denotations (Section 2.4.2). We then allow DCS trees to have aggregate relations (Section 2.4.3). The fragment of DCS which has only join and aggregate relations is called basic DCS.

2.4.1 Basic DCS Trees as Constraint Satisfaction Problems. Let z be a DCS tree with only join relations on its edges. In this case, z encodes a CSP as follows: For each node x in z, the CSP has a variable with value a(x); the collection of these values is referred to as an assignment a. The predicates and relations of z introduce constraints:

1. a(x) ∈ w(p) for each node x labeled with predicate p ∈ P; and

2. a(x)j = a(y)j′ for each edge (x, y) labeled with j/j′ ∈ R, which says that the j-th component of a(x) must equal the j′-th component of a(y).

We say that an assignment a is feasible if it satisfies these two constraints. Next, for a node x, define V(x) = {a(x) : assignment a is feasible} as the set of feasible values for x—these are the ones that are consistent with at least one feasible assignment. Finally, we define the denotation of the DCS tree z with respect to the world w to be ⟦z⟧w = V(x0), where x0 is the root node of z. Figure 3(a) shows an example of a DCS tree. The corresponding CSP has four variables c, m, ℓ, s.² In Figure 3(b), we have written the equivalent lambda calculus formula. The non-root nodes are existentially quantified, the root node c is λ-abstracted, and all constraints introduced by predicates and relations are conjoined. The λ-abstraction of c represents the fact that the denotation is the set of feasible values for c (note the equivalence between the λ-abstraction λc.p(c) and the set {c : p(c)}).

Remarks. Note that CSPs only allow existential quantification and conjunction. Why did we choose this particular logical subset as a starting point, rather than allowing universal quantification, negation, or disjunction? There seems to be something fundamental about this subset, which also appears in Discourse Representation Theory (DRT) (Kamp and Reyle 1993; Kamp, van Genabith, and Reyle 2005). Briefly, logical forms in DRT are called Discourse Representation Structures (DRSs), each of which contains (i) a set of existentially quantified discourse referents (variables), (ii) a set of conjoined discourse conditions (constraints), and (iii) nested DRSs. If we exclude nested DRSs, a DRS is exactly a CSP.³ The default existential quantification and conjunction are quite natural for modeling cross-sentential anaphora: New variables can be added to

2 Technically, the node is c and the variable is a(c), but we use c to denote the variable to simplify notation. 3 Unlike the CSPs corresponding to DCS trees, the CSPs corresponding to DRSs need not be tree-structured, though economical DRT (Bos 2009) imposes a tree-like restriction on DRSs for computational reasons.

a DRS and connected to other variables. Indeed, DRT was originally motivated by these phenomena (see Kamp and Reyle [1993] for more details).⁴ Tree-structured CSPs can capture unboundedly complex recursive structures—such as cities in states that border states that have rivers that.... Trees are limited, however, in that they are unable to capture long-distance dependencies such as those arising from anaphora. For example, in the phrase a state with a river that traverses its capital, its binds to state, but this dependence cannot be captured in a tree structure. A solution is to simply add an edge between the its node and the state node that forces the two nodes to have the same value. The result is still a well-defined CSP, though not a tree-structured one. The situation would become trickier if we were to integrate the other relations (aggregate, mark, and execute). We might be able to incorporate some ideas from Hybrid Logic Dependency Semantics (Baldridge and Kruijff 2002; White 2006), given that hybrid logic extends the tree structures of modal logic with nominals, thereby allowing a node to freely reference other nodes. In this article, however, we will stick to trees and leave the full exploration of non-trees for future work.

2.4.2 Computation of Join Relations. So far, we have given a declarative definition of the denotation ⟦z⟧w of a DCS tree z with only join relations. Now we will show how to compute ⟦z⟧w efficiently. Recall that the denotation is the set of feasible values for the root node. In general, finding the solution to a CSP is NP-hard, but for trees, we can exploit dynamic programming (Dechter 2003). The key is that the denotation of a tree depends on its subtrees only through their denotations:

⟦⟨p; j1/j1′:c1; ⋯; jm/jm′:cm⟩⟧w = w(p) ∩ ⋂_{i=1}^{m} {v : v_{ji} = t_{ji′}, t ∈ ⟦ci⟧w}    (12)

On the right-hand side of Equation (12), the first term w(p) is the set of values that satisfy the node constraint, and the second term consists of an intersection across all m edges of {v : v_{ji} = t_{ji′}, t ∈ ⟦ci⟧w}, which is the set of values v that satisfy the edge constraint with respect to some value t for the child ci. To further flesh out this computation, we express Equation (12) in terms of two operations: join and project. Join takes a cross product of two sets of tuples and retains the resulting tuples that match the join constraint:

A ⋈_{j,j′} B = {u + v : u ∈ A, v ∈ B, uj = vj′}    (13)

Project takes a set of tuples and retains a fixed subset of the components:

A[i] = {vi : v ∈ A}    (14)
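The join and project operations translate directly into code; this Python sketch (ours, with 1-based indices as in the article) is one possible transcription of Equations (13) and (14):

```python
# The join (13) and project (14) operations on sets of tuples.

def join(A, B, j, jp):
    """A join_{j,j'} B: keep concatenated tuples whose components agree."""
    return {u + v for u in A for v in B if u[j - 1] == v[jp - 1]}

def project(A, idx):
    """A[i]: retain the (1-based) components listed in idx."""
    return {tuple(v[i - 1] for i in idx) for v in A}

city = {('San Francisco:city',), ('Chicago:city',)}
loc  = {('San Francisco:city', 'California:state')}
# cities located somewhere: join on component 1 of both, keep component 1
assert project(join(city, loc, 1, 1), (1,)) == {('San Francisco:city',)}
```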

The denotation in Equation (12) can now be expressed in terms of these join and project operations:

⟦⟨p; j1/j1′:c1; ⋯; jm/jm′:cm⟩⟧w = ((w(p) ⋈_{j1,j1′} ⟦c1⟧w)[i] ⋯ ⋈_{jm,jm′} ⟦cm⟧w)[i]    (15)

4 DRT started the tradition where meanings are context-change potentials, a natural way to capture anaphora. The DCS formalism presented here does not deal with anaphora, so we give it a purely static semantics.


where i = (1, ..., ARITYw(p)). Projecting onto i retains only components corresponding to p. The time complexity for computing the denotation ⟦z⟧w of a DCS tree z scales linearly with the number of nodes, but there is also a dependence on the cost of performing the join and project operations. For details on how we optimize these operations and handle infinite sets of tuples (for predicates such as count), see Liang (2011). The denotation of DCS trees is defined in terms of the feasible values of a CSP, and the recurrence in Equation (15) is only one way of computing this denotation. In light of the extensions to come, however, we now consider Equation (15) as the actual definition rather than just a computational mechanism. It will still be useful to refer to the CSP in order to access the intuition of using declarative constraints.
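Putting the pieces together, the recurrence in Equation (15) yields a short recursive evaluator for the join-only fragment. This sketch (ours) reuses the join, project, and arity helpers from the sketches above; the tree encoding is hypothetical:

```python
# A minimal evaluator for the join-only fragment, directly following
# Equation (15). A tree is encoded as (predicate, [((j, j'), child), ...]).

def denotation(z, w):
    p, edges = z
    i = tuple(range(1, arity(w, p) + 1))      # i = (1, ..., ARITY_w(p))
    result = w[p]
    for (j, jp), child in edges:
        result = project(join(result, denotation(child, w), j, jp), i)
    return result

# "city in California" (hypothetical encoding): <city; 1/1 : <loc; 2/1 : <CA>>>
w = {
    'city': {('San Francisco:city',), ('Chicago:city',)},
    'loc':  {('San Francisco:city', 'California:state')},
    'CA':   {('California:state',)},
}
tree = ('city', [((1, 1), ('loc', [((2, 1), ('CA', []))]))])
assert denotation(tree, w) == {('San Francisco:city',)}
```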

2.4.3 Aggregate Relation. Thus far, we have focused on DCS trees that only use join relations, which are insufficient for capturing higher-order phenomena in language. For example, consider the phrase number of major cities. Suppose that number corresponds to the count predicate, and that major cities maps to the DCS tree ⟨city; 1/1:⟨major⟩⟩. We cannot simply join count with the root of this DCS tree because count needs to be joined with the set of major cities (the denotation of ⟨city; 1/1:⟨major⟩⟩), not just a single city. We therefore introduce the aggregate relation (Σ) that takes a DCS subtree and reifies its denotation so that it can be accessed by other nodes in its entirety. Consider a tree ⟨ø; Σ:c⟩, where the root is connected to a child c via Σ. The denotation of the root is simply the singleton set containing the denotation of c:

⟦⟨ø; Σ:c⟩⟧w = {(⟦c⟧w)}    (16)

Figure 5(a) shows the DCS tree for our running example. The denotation of the middle node is {(s)}, where s is all major cities. Everything above this node is an ordinary CSP: s constrains the count node, which in turn constrains the root node to |s|. Figure 5(b) shows another example of using the aggregate relation Σ. Here, the node right above Σ is constrained to be a set of pairs of major cities and their populations. The average predicate then computes the desired answer. To represent disjunction in natural language, we use the aggregate relation and two predicates, union and contains, which are defined in the expected way:

w(union) = {(A, B, C) : C = A ∪ B, A ∈ V{(⋆)}, B ∈ V{(⋆)}}    (17)

w(contains) = {(A, x) : x ∈ A, A ∈ V{(⋆)}}    (18)

where A, B, C ∈ V{(⋆)} are sets of primitive values (see Section 2.3.1). Figure 5(c) shows an example of a disjunctive construction: We use the aggregate relations to construct two sets, one containing Oregon, and the other containing states bordering Oregon. We take the union of these two sets; contains takes the set and reads out an element, which then constrains the city node.

Remarks. A DCS tree that contains only join and aggregate relations can be viewed as a collection of tree-structured CSPs connected via aggregate relations. The tree structure still enables us to compute denotations efficiently based on the recurrences in Equations (15) and (16). Recall that a DCS tree with only join relations is a DRS without nested DRSs. The aggregate relation corresponds to the abstraction operator in DRT and is one way of


Figure 5
Examples of DCS trees that use the aggregate relation (Σ) to (a) compute the cardinality of a set, (b) take the average over a set, and (c) represent a disjunction over two conditions. The aggregate relation sets the parent node deterministically to the denotation of the child node. Nodes with the special null predicate ø are represented as empty nodes.

making nested DRSs. It turns out that the abstraction operator is sufficient to obtain the full representational power of DRT, and subsumes the generalized quantification and disjunction constructs in DRT. By analogy, we use the aggregate relation to handle disjunction (Figure 5(c)) and generalized quantification (Section 2.5.6). DCS restricted to join relations is less expressive than first-order logic because it does not have universal quantification, negation, and disjunction. The aggregate relation is analogous to lambda abstraction, and in basic DCS we use the aggregate relation to implement those basic constructs using higher-order predicates such as not, every, and union. We can also express logical statements such as generalized quantification, which go beyond first-order logic.

2.5 Semantics of DCS Trees with Mark–Execute (Full Version)

Basic DCS includes two types of relations, join and aggregate, but it is already quite expressive. In general, however, it is not enough just to be able to express the meaning of a sentence using some logical form; we must be able to derive the logical form compositionally and simply from the sentence. Consider the superlative construction most populous city, which has a basic syntactic dependency structure shown in Figure 6(a). Figure 6(b) shows that we can in principle


Figure 6
Two semantically equivalent DCS trees are shown in (b) and (c). The DCS tree in (b), which uses the join and aggregate relations of basic DCS, does not align well with the syntactic structure of most populous city (a), and thus is undesirable. The DCS tree in (c), by using the mark–execute construct, aligns much better, with city rightfully dominating its modifiers. The full version of DCS allows us to construct (c), which is preferable to (b).

already use a DCS tree with only join and aggregate relations to express the correct semantics of the superlative construction. Note, however, that the two structures are quite divergent—the syntactic head is city and the semantic head is argmax. This divergence runs counter to a principal desideratum of DCS, which is to create a transparent interface between coarse syntax and semantics. In this section, we introduce mark and execute relations, which will allow us to use the DCS tree in Figure 6(c) to represent the semantics associated with Figure 6(a); these two are more similar than (a) and (b). The focus of this section is on this mark–execute construct—using mark and execute relations to give proper semantically scoped denotations to syntactically scoped tree structures. The basic intuition of the mark–execute construct is as follows: We mark a node low in the tree with a mark relation; then, higher up in the tree, we invoke it with a corresponding execute relation (Figure 7). For our example in Figure 6(c), we mark the population node, which puts the child argmax in a temporary store; when we execute the city node, we fetch the superlative predicate argmax from the store and invoke it. This divergence between syntactic and semantic scope arises in other linguistic contexts besides superlatives, such as quantification and negation. In each of these cases, the general template is the same: A syntactic modifier low in the tree needs to have semantic force higher in the tree. A particularly compelling case of this divergence happens with quantifier scope (e.g., Some river traverses every city⁵), where the

5 The two meanings are: (i) there is a river x such that x traverses every city; and (ii) for every city x,some river traverses x.


Figure 7
The template for the mark–execute construct. A mark relation (one of E, Q, C) “stores” the modifier. Then an execute relation (of the form Xi for indices i) higher up “recalls” the modifier and applies it at the desired semantic point.

quantifiers appear in fixed syntactic positions, but the surface and inverse scope readings correspond to different semantically scoped denotations. Analogously, a single syntactic structure involving superlatives can also yield two different semantically scoped denotations—the absolute and relative readings (e.g., state bordering the largest state⁶). The mark–execute construct provides a unified framework for dealing with all these forms of divergence between syntactic and semantic scope. See Figures 8 and 9 for concrete examples of this construct.

2.5.1 Denotations. We now formalize the mark–execute construct. We saw that the mark–execute construct appears to act non-locally, putting things in a store and retrieving them later. This means that if we want the denotation of a DCS tree to only depend on the denotations of its subtrees, the denotations need to contain more than the set of feasible values for the root node, as was the case for basic DCS. We need to augment denotations to include information about all marked nodes, because these can be accessed by an execute relation higher up in the tree. More specifically, let z be a DCS tree and d = ⟦z⟧w be its denotation. The denotation d consists of n columns. The first column always corresponds to the root node of z, and the rest of the columns correspond to non-root marked nodes in z. In the example in Figure 10, there are two columns, one for the root state node and the other for the size node, which is marked by C. The columns are ordered according to a pre-order traversal of z, so column 1 always corresponds to the root node. The denotation d contains a set of arrays d.A, where each array represents a feasible assignment of values to the columns of d; note that we quantify over non-marked nodes, so they do not correspond to any column in the denotation. For example, in Figure 10, the first array in d.A corresponds to assigning (OK) to the state node (column 1) and (TX, 2.7e5) to the size node (column 2). If there are no marked nodes, d.A is basically a set of tuples, which corresponds to a denotation in basic DCS. For each marked node, the denotation d also maintains a store

6 The two meanings are: (i) a state that borders Alaska (which is the largest state); and (ii) a state with the highest score, where the score of a state x is the maximum size of any state that x borders (Alaska is irrelevant here because no states border it).


Figure 8
Examples of DCS trees that use the mark–execute construct with the E and Q mark relations. (a) The head verb borders, which needs to be returned, has a direct object states modified by which. (b) The quantifier no is syntactically dominated by state but needs to take wider scope. (c) Two quantifiers yield two possible readings; we build the same basic structure, marking both quantifiers; the choice of execute relation (X12 versus X21) determines the reading. (d) We use two mark relations, Q on river for the negation, and E on city to force the quantifier to be computed for each value of city.

with information to be retrieved when that marked node is executed. A store σ for a marked node contains the following: (i) the mark relation σ.r (C in the example), (ii) the base denotation σ.b, which essentially corresponds to the denotation of the subtree rooted at the marked node excluding the mark relation and its subtree (⟦size⟧w in the example), and (iii) the denotation of the child of the mark relation (⟦argmax⟧w in the example). The store of any unmarked node is always empty (σ = ø).

Definition 3 (Denotations)
Let D be the set of denotations, where each denotation d ∈ D consists of

- a set of arrays d.A, where each array a = [a1, ..., an] ∈ d.A is a sequence of n tuples for some n ≥ 0; and


Figure 9
Examples of DCS trees that use the mark–execute construct with the E and C relations. (a,b,c) Comparatives and superlatives are handled as follows: For each value of the node marked by E, we compute a number based on the node marked by C; based on this information, a subset of the values is selected as the possible values of the root node. (d) Analog of quantifier scope ambiguity for superlatives: The placement of the execute relation determines an absolute versus relative reading. (e) Interaction between a quantifier and a superlative: The lower execute relation computes the largest city for each state; the second execute relation invokes most and enforces that the major constraint holds for the majority of states.


Figure 10
Example of the denotation for a DCS tree (with the compare relation C). This denotation has two columns, one for each active node—the root node state and the marked node size.

- a sequence of n stores d.σ = (d.σ1, ..., d.σn), where each store σ contains a mark relation σ.r ∈ {E, Q, C, ø}, a base denotation σ.b ∈ D ∪ {ø}, and a child denotation σ.c ∈ D ∪ {ø}.
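For concreteness, Definition 3 can be rendered as a pair of Python classes (our own sketch; the names Store and Denotation are hypothetical), including the column projection d[i] used below:

```python
# A denotation is a set of arrays (each an n-tuple of tuples) plus one store
# per column.

from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class Store:
    r: str = 'ø'                        # mark relation: 'E', 'Q', 'C', or 'ø'
    b: Optional['Denotation'] = None    # base denotation σ.b
    c: Optional['Denotation'] = None    # child denotation σ.c

@dataclass
class Denotation:
    A: Set[Tuple[tuple, ...]]           # arrays: n-tuples of tuples
    stores: List[Store]

    def project(self, idx: List[int]) -> 'Denotation':
        """d[i]: keep the 1-based columns in idx together with their stores."""
        return Denotation({tuple(a[i - 1] for i in idx) for a in self.A},
                          [self.stores[i - 1] for i in idx])

# Zero-column Booleans: true is {[]}, false is the empty set of arrays.
d_true, d_false = Denotation({()}, []), Denotation(set(), [])
```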

Note that denotations are formally defined without reference to DCS trees (just as sets of tuples were in basic DCS), but it is sometimes useful to refer to the DCS tree that generates that denotation. For notational convenience, we write d as ⟪A; (r1, b1, c1); ...; (rn, bn, cn)⟫. Also let d.ri = d.σi.r, d.bi = d.σi.b, and d.ci = d.σi.c. Let d{σi = x} be the denotation which is identical to d, except with d.σi = x; d{ri = x}, d{bi = x}, and d{ci = x} are defined analogously. We also define a projection operation for denotations: ⟪A; σ⟫[i] def= ⟪{ai : a ∈ A}; σi⟫. Extending this notation further, we use ø to denote the indices of the non-initial columns with empty stores (i > 1 such that d.σi = ø). We can then use d[−ø] to represent projecting away the non-initial columns with empty stores. For the denotation d in Figure 10, d[1] keeps column 1, d[−ø] keeps both columns, and d[2, −2] swaps the two columns. In basic DCS, denotations are sets of tuples, which works quite well for representing the semantics of wh-questions such as What states border Texas? But what about polar questions such as Does Louisiana border Texas? The denotation should be a simple Boolean value, which basic DCS does not represent explicitly. Using our new denotations, we can represent Boolean values explicitly using zero-column structures: true corresponds to a singleton set containing just the empty array (dT = ⟪{[]}⟫) and false is the empty set (dF = ⟪∅⟫). Having described denotations as n-column structures, we now give the formal mapping from DCS trees to these structures. As in basic DCS, this mapping is defined recursively over the structure of the tree. We have a recurrence for each case (the first line is the base case, and each of the others handles a different edge relation):

⟦⟨p⟩⟧w = ⟪{[v] : v ∈ w(p)}; ø⟫    [base case] (19)

⟦⟨p; e; j/j′:c⟩⟧w = ⟦⟨p; e⟩⟧w ⋈^{−ø}_{j,j′} ⟦c⟧w    [join] (20)

⟦⟨p; e; Σ:c⟩⟧w = ⟦⟨p; e⟩⟧w ⋈^{−ø}_{∗,∗} Σ(⟦c⟧w)    [aggregate] (21)


⟦⟨p; e; Xi:c⟩⟧w = ⟦⟨p; e⟩⟧w ⋈^{−ø}_{∗,∗} Xi(⟦c⟧w)    [execute] (22)

⟦⟨p; e; E:c⟩⟧w = M(⟦⟨p; e⟩⟧w, E, ⟦c⟧w)    [extract] (23)

⟦⟨p; e; C:c⟩⟧w = M(⟦⟨p; e⟩⟧w, C, ⟦c⟧w)    [compare] (24)

⟦⟨p; Q:c; e⟩⟧w = M(⟦⟨p; e⟩⟧w, Q, ⟦c⟧w)    [quantify] (25)

We define the operations ⋈^{−ø}_{j,j′}, Σ, Xi, and M in the remainder of this section.

2.5.2 Base Case. Equation (19) defines the denotation for a DCS tree z with a single node with predicate p. The denotation of z has one column whose arrays correspond to the tuples w(p); the store for that column is empty.

2.5.3 Join Relations. Equation (20) defines the recurrence for join relations. On the left-hand side, ⟨p; e; j/j′:c⟩ is a DCS tree with p at the root, a sequence of edges e followed by a final edge with relation j/j′ connected to a child DCS tree c. On the right-hand side, we take the recursively computed denotation of ⟨p; e⟩, the DCS tree without the final edge, and perform a join-project-inactive operation (notated ⋈^{−ø}_{j,j′}) with the denotation of the child DCS tree c. The join-project-inactive operation joins the arrays of the two denotations (this is the core of the join operation in basic DCS—see Equation (13)), and then projects away the non-initial empty columns:⁷

⟪A; σ⟫ ⋈^{−ø}_{j,j′} ⟪A′; σ′⟫ = ⟪A′′; σ + σ′⟫[−ø], where    (26)

A′′ = {a + a′ : a ∈ A, a′ ∈ A′, a_{1j} = a′_{1j′}}

We concatenate all arrays a ∈ A with all arrays a′ ∈ A′ that satisfy the join condition a_{1j} = a′_{1j′}. The sequences of stores are simply concatenated (σ + σ′). Finally, any non-initial columns with empty stores are projected away by applying ·[−ø]. Note that the join works on column 1; the other columns are carried along for the ride. As another piece of convenient notation, we use ∗ to represent all components, so ⋈^{−ø}_{∗,∗} imposes the join condition that the entire tuple has to agree (a1 = a′1).

2.5.4 Aggregate Relations. Equation (21) defines the recurrence for aggregate relations. Recall that in basic DCS, aggregate (16) simply takes the denotation (a set of tuples) and puts it into a set. Now, the denotation is not just a set, so we need to generalize this operation. Specifically, the aggregate operation applied to a denotation forms a set out of the tuples in the first column for each setting of the rest of the columns:

Σ(⟪A; σ⟫) = ⟪A′ ∪ A′′; σ⟫    (27)

A′ = {[S(a), a2, ..., an] : a ∈ A}
S(a) = {a′1 : [a′1, a2, ..., an] ∈ A}
A′′ = {[∅, a2, ..., an] : ∀i ∈ {2, ..., n}, [ai] ∈ σi.b.A[1], ¬∃a1, [a1, a2, ..., an] ∈ A}

7 The join and project operations are taken from relational algebra.


The aggregate operation takes the set of arrays A and produces two sets of arrays, A′ and A′′, which are unioned (note that the stores do not change). The set A′ is the one that first comes to mind: For every setting of a2, ..., an, we construct S(a), the set of tuples a′1 in the first column which co-occur with a2, ..., an in A. There is another case, however: What happens to settings of a2, ..., an that do not co-occur with any value of a1 in A? Then S(a) = ∅, but note that A′ by construction will not have the desired array [∅, a2, ..., an]. As a concrete example, suppose A = ∅ and we have one column (n = 1). Then A′ = ∅, rather than the desired {[∅]}. Fixing this problem is slightly tricky. There are an infinite number of a2, ..., an which do not co-occur with any a1 in A, so for which ones do we actually include [∅, a2, ..., an]? Certainly, the answer to this question cannot come from A, so it must come from the stores. In particular, for each column i ∈ {2, ..., n}, we have conveniently stored a base denotation σi.b. We consider any ai that occurs in column 1 of the arrays of this base denotation ([ai] ∈ σi.b.A[1]). For this a2, ..., an, we include [∅, a2, ..., an] in A′′ as long as a2, ..., an does not co-occur with any a1. An example is given in Figure 11. The reason for storing base denotations is thus partially revealed: The arrays represent feasible values of a CSP and can only contain positive information. When we aggregate, we need to access possibly empty sets of feasible values—a kind of negative information, which can only be recovered from the base denotations.

Figure 11
An example of applying the aggregate operation, which takes a denotation and aggregates the values in column 1 for every setting of the other columns. The base denotations (b) are used to put in {} for values that do not appear in A (in this example, AK, corresponding to the fact that Alaska does not border any states).
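The aggregate operation of Equation (27) can be sketched on top of the Denotation class above (our illustration; we assume every non-initial column is marked, so its store carries a base denotation):

```python
# Equation (27): collapse column 1 into a set for each setting of the other
# columns; base denotations supply settings whose feasible set is empty.

from itertools import product

def aggregate(d: Denotation) -> Denotation:
    groups = {}                                   # rest of columns -> S(a)
    for a in d.A:
        groups.setdefault(a[1:], set()).add(a[0])
    # A': the aggregated sets for settings that do occur in A
    A1 = {(frozenset(S),) + rest for rest, S in groups.items()}
    # A'': settings drawn from the base denotations that never co-occur
    # with any a1 get the empty set in column 1
    candidates = product(*({a[0] for a in st.b.A} for st in d.stores[1:]))
    A2 = {(frozenset(),) + rest for rest in candidates if rest not in groups}
    return Denotation(A1 | A2, d.stores)
```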


2.5.5 Mark Relations. Equations (23), (24), and (25) each process a different mark relation. We define a general mark operation M(d, r, c), which takes a denotation d, a mark relation r ∈ {E, Q, C}, and a child denotation c, and sets the store of d in column 1 to be (r, d, c):

M(d, r, c) = d{r1 = r, b1 = d, c1 = c} (28)

The base denotation of the first column b1 is set to the current denotation d. This, in some sense, creates a snapshot of the current denotation. Figure 12 shows an example of the mark operation.

2.5.6 Execute Relations. Equation (22) defines the denotation of a DCS tree where the last edge of the root is an execute relation. Similar to the aggregate case (21), we recurse on the DCS tree without the last edge (⟨p; e⟩) and then join it to the result of applying the execute operation Xi to the denotation of the child (⟦c⟧w). The execute operation Xi is the most intricate part of DCS and is what does the heavy lifting. The operation is parametrized by a sequence of distinct indices i that specifies the order in which the columns should be processed. Specifically, i indexes into the subsequence of columns with non-empty stores. We then process this subsequence of columns in reverse order, where processing a column means performing some operations depending on the stored relation in that column. For example, suppose that columns 2 and 3 are the only non-empty columns. Then X12 processes column 3 before column 2. On the other hand, X21 processes column 2 before column 3. We first define

Figure 12
An example of applying the mark operation, which takes a denotation and modifies the store of column 1. This information is used by other operations such as aggregate and execute.


Figure 13
An example of applying the execute operation on column 1 with the extract relation E. The denotation prior to execution consists of two columns: column 1 corresponds to the border node; column 2 to the state node. The join relations and predicates CA and state constrain the arrays A in the denotation to include only the states that border California. After execution, the non-marked column 1 is projected away, leaving only the state column with its store emptied.

the execute operation Xi for a single column i. There are three distinct cases, depending on the relation stored in column i:

Extraction. For a denotation d with the extract relation E in column i, executing Xi(d) involves three steps: (i) moving column i to before column 1 (·[i, −i]), (ii) projecting away non-initial empty columns (·[−ø]), and (iii) removing the store (·{σ1 = ø}):

Xi(d) = d[i, −i][−ø]{σ1 = ø} if d.ri = E (29)

An example is given in Figure 13. There are two main uses of extraction.

1. By default, the denotation of a DCS tree is the set of feasible values of the root node (which occupies column 1). To return the set of feasible values of another node, we mark that node with E. Upon execution, the feasible values of that node move into column 1. Extraction can be used to handle in situ questions (see Figure 8(a)). 2. Unmarked nodes (those that do not have an edge with a mark relation) are existentially quantified and have narrower scope than all marked nodes. Therefore, we can make a node x have wider scope than another node y by


marking x (with E) and executing y before x (see Figure 8(d,e) for examples). The extract relation E (in fact, any mark relation) signifies that we want to control the scope of a node, and the execute relation allows us to set that scope.
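For concreteness, executing an extract-marked column (Equation (29)) can be sketched on top of the Denotation class above (our illustration):

```python
# Equation (29): move column i to the front, drop non-initial empty-store
# columns, and clear the store of the new column 1.

def execute_extract(d: Denotation, i: int) -> Denotation:
    n = len(d.stores)
    d = d.project([i] + [j for j in range(1, n + 1) if j != i])   # d[i, -i]
    keep = [1] + [j for j in range(2, n + 1) if d.stores[j - 1].r != 'ø']
    d = d.project(keep)                                           # d[-ø]
    d.stores[0] = Store()                                         # {σ1 = ø}
    return d
```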

Generalized Quantification. Generalized quantifiers are predicates on two sets, a restrictor A and a nuclear scope B. For example,

w(some) = {(A, B) : |A ∩ B| > 0}    (30)

w(every) = {(A, B) : A ⊂ B}    (31)

w(no) = {(A, B) : A ∩ B = ∅}    (32)

w(most) = {(A, B) : |A ∩ B| > (1/2)|A|}    (33)
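These quantifier denotations are just predicates on finite sets, as in the following sketch (ours):

```python
# The generalized quantifiers of Equations (30)-(33) as plain predicates on a
# restrictor A and nuclear scope B.

QUANTIFIERS = {
    'some':  lambda A, B: len(A & B) > 0,
    'every': lambda A, B: A <= B,
    'no':    lambda A, B: len(A & B) == 0,
    'most':  lambda A, B: len(A & B) > len(A) / 2,
}

# Figure 8(b): "Alaska borders no states" -- the nuclear scope is empty.
states, states_bordering_AK = {'CA', 'OR', 'WA'}, set()
assert QUANTIFIERS['no'](states, states_bordering_AK)
```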

We think of the quantifier as a modifier which always appears as the child of a Q relation; the restrictor is the parent. For example, in Figure 8(b), no corresponds to the quantifier and state corresponds to the restrictor. The nuclear scope should be the set of all states that Alaska borders. More generally, the nuclear scope is the set of feasible values of the restrictor node with respect to the CSP that includes all nodes between the mark and execute relations. The restrictor is also the set of feasible values of the restrictor node, but with respect to the CSP corresponding to the subtree rooted at that node.⁸ We implement generalized quantifiers as follows: Let d be a denotation and suppose we are executing column i. We first construct a denotation for the restrictor dA and a denotation for the nuclear scope dB. For the restrictor, we take the base denotation in column i (d.bi)—remember that the base denotation represents a snapshot of the restrictor node before the nuclear scope constraints are added. For the nuclear scope, we take the complete denotation d (which includes the nuclear scope constraints) and extract column i (d[i, −i][−ø]{σ1 = ø}—see Equation (29)). We then construct dA and dB by applying the aggregate operation to each. Finally, we join these sets with the quantifier denotation, stored in d.ci:

Xi(d) = ((d.ci ⋈^{−ø}_{1,1} dA) ⋈^{−ø}_{2,1} dB)[−1] if d.ri = Q, where    (34)

d_A = Σ(d.b_i) (35)

d_B = Σ(d[i, −i][−ø]{σ_1 = ø}) (36)

When there is one quantifier, we can think of the execute relation as performing a syntactic rewriting operation, as shown in Figure 14(b). For more complex cases, we must defer to (34). Figure 8(c) shows an example with two interacting quantifiers. The denotation of the DCS tree before execution is the same in both readings, as shown in Figure 15.

8 Defined this way, we can only handle conservative quantifiers, because the nuclear scope will always be a subset of the restrictor. This design decision is inspired by DRT, where it provides a way of modeling donkey anaphora. We are not treating anaphora in this work, but we can handle it by allowing pronouns in the nuclear scope to create anaphoric edges into nodes in the restrictor. These constraints naturally propagate through the nuclear scope’s CSP without affecting the restrictor.


Figure 14
(a) An example of applying the execute operation on column i with the quantify relation Q. Before executing, note that A = {} (because Alaska does not border any states). The restrictor (A) is the set of all states, and the nuclear scope (B) is empty. Because the pair (A, B) does exist in w(no), the final denotation is ⟨{[ ]}⟩ (which represents true). (b) Although the execute operation actually works on the denotation, think of it in terms of expanding the DCS tree. We introduce an extra projection relation [−1], which projects away the first column of the child subtree's denotation.

The quantifier scope ambiguity is resolved by the choice of execute relation: X_{12} gives the surface scope reading, X_{21} gives the inverse scope reading. Figure 8(d) shows how extraction and quantification work together. First, the no quantifier is processed for each city, which is an unprocessed marked node. Here, the extract relation is a technical trick to give city wider scope.

Comparatives and Superlatives. Comparative and superlative constructions involve comparing entities, and for this we rely on a set S of entity–degree pairs (x, y), where x is an entity and y is a numeric degree.

Figure 15 Denotation of Figure 8(c) before the execute relation is applied.

Recall that we can treat S as a function, which maps an entity x to the set of degrees S(x) associated with x. Note that this set can contain multiple degrees. For example, in the relative reading of state bordering the largest state, we would have a degree for the size of each neighboring state. Superlatives use the argmax and argmin predicates, which are defined in Section 2.3. Comparatives use the more and less predicates: w(more) contains triples (S, x, y), where x is "more than" y as measured by S; w(less) is defined analogously:

w(more) = {(S, x, y) : max S(x) > max S(y)} (37)
w(less) = {(S, x, y) : min S(x) < min S(y)} (38)
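As a toy illustration (ours, with hypothetical degree sets), Equations (37)–(38) can be transcribed with S represented as a map from entities to sets of degrees; an argmax over the same entity–degree interface is included for comparison:

```python
# 'more'/'less' (Eqs. 37-38) with S as a dict from entity to a set of degrees.
def more(S, x, y): return max(S[x]) > max(S[y])
def less(S, x, y): return min(S[x]) < min(S[y])

def argmax(S):
    """Entities achieving the highest degree; same entity-degree interface."""
    best = max(max(ds) for ds in S.values())
    return {x for x, ds in S.items() if max(ds) == best}

# Hypothetical data: sizes of each state's neighbors (relative reading).
S = {"AR": {267_000, 69_000}, "LA": {267_000}, "OK": {69_000}}
print(more(S, "AR", "OK"), argmax(S))   # True {'AR', 'LA'}
```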

We use the same mark relation C for both comparative and superlative constructions. In terms of the DCS tree, there are three key parts: (i) the root x, which corresponds to the entity to be compared, (ii) the child c of a C relation, which corresponds to the comparative or superlative predicate, and (iii) c's parent p, which contains the "degree information" (which will be described later) used for comparison. We assume that the root is marked (usually with a relation E). This forces us to compute a comparison degree for each value of the root node. In terms of the denotation d corresponding to the DCS tree prior to execution, the entity to be compared occurs in column 1 of the arrays d.A, the degree information occurs in column i of the arrays d.A, and the denotation of the comparative or superlative predicate itself is the child denotation at column i (d.c_i). First, we define a concatenating function +_i(d), which combines the columns i of d by concatenating the corresponding tuples of each array in d.A:

+_i(⟨A; σ⟩) = ⟨A′; σ′⟩, where (39)
A′ = {a_{(1...i_1)∖i} + [a_{i_1} + ··· + a_{i_{|i|}}] + a_{(i_1...n)∖i} : a ∈ A}
σ′ = σ_{(1...i_1)∖i} + [σ_{i_1}] + σ_{(i_1...n)∖i}

Note that the store of column i_1 is kept and the others are discarded. As an example:

+_{2,1}(⟨{[(1), (2), (3)], [(4), (5), (6)]}; σ_1, σ_2, σ_3⟩) = ⟨{[(2, 1), (3)], [(5, 4), (6)]}; σ_2, σ_3⟩ (40)
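The following sketch (our simplification; arrays are lists of per-column tuples) implements +_i and reproduces the example in Equation (40):

```python
# Column concatenation +_i (Eq. 39): splice the named columns into one column
# at the position of the first index i_1, keeping only column i_1's store.
def concat(arrays, stores, idxs):
    i = [j - 1 for j in idxs]               # 1-indexed columns -> 0-indexed
    i1 = i[0]
    before = [j for j in range(i1 + 1) if j not in i]
    after = [j for j in range(i1 + 1, len(stores)) if j not in i]
    def splice(row):
        merged = tuple(v for j in i for v in row[j])
        return [row[j] for j in before] + [merged] + [row[j] for j in after]
    return ([splice(row) for row in arrays],
            [stores[j] for j in before] + [stores[i1]] + [stores[j] for j in after])

arrays = [[(1,), (2,), (3,)], [(4,), (5,), (6,)]]
print(concat(arrays, ["s1", "s2", "s3"], (2, 1)))
# -> ([[(2, 1), (3,)], [(5, 4), (6,)]], ['s2', 's3'])
```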

We first create a denotation d′ where column i, which contains the degree information, is extracted to column 1 (and thus column 2 corresponds to the entity to be compared). Next, we create a denotation d_S whose column 1 contains a set of entity–degree pairs. There are two types of degree information:

1. Suppose the degree information has arity 2 (ARITY(d.A[i]) = 2). This occurs, for example, in most populous city (see Figure 9(b)), where column i is the population node. In this case, we simply set the degree to the second component of population by projection (⟦ø⟧_w ⋈_{1,2}^{−ø} d′). Now columns 1 and 2 contain the degrees and entities, respectively. We concatenate columns 2 and 1 (+_{2,1}(·)) and aggregate to produce a denotation d_S which contains the set of entity–degree pairs in column 1.

2. Suppose the degree information has arity 1 (ARITY(d.A[i]) = 1). This occurs, for example, in state bordering the most states (see Figure 9(a)), where column i is the lower marked state node. In this case, the degree of an entity from column 2 is the number of different values that column 1 can take. To compute this, we aggregate the set of values (Σ(d′)) and apply the count predicate. Now with the degrees and entities in columns 1 and 2, respectively, we concatenate the columns and aggregate again to obtain d_S.

Having constructed d_S, we simply apply the comparative/superlative predicate, which has been patiently waiting in d.c_i. Finally, the store of d's column 1 was destroyed by the concatenation operation +_{2,1}(·), so we must restore it with ·{σ_1 = d.σ_1}. The complete operation is as follows:

X_i(d) = (⟦ø⟧_w ⋈_{1,2}^{−ø} (d.c_i ⋈_{1,1}^{−ø} d_S)){σ_1 = d.σ_1}   if d.r_i = C, d.σ_1 ≠ ø, where (41)

d_S = Σ(+_{2,1}(⟦ø⟧_w ⋈_{1,2}^{−ø} d′))   if ARITY(d.A[i]) = 2
d_S = Σ(+_{2,1}(⟦ø⟧_w ⋈_{1,2}^{−ø} (⟦count⟧_w ⋈_{1,1}^{−ø} Σ(d′))))   if ARITY(d.A[i]) = 1 (42)

d′ = d[i, −i][−ø]{σ_1 = ø} (43)

An example of executing the C relation is shown in Figure 16(a). As with executing a Q relation, for simple cases we can think of executing a C relation as expanding a DCS tree, as shown in Figure 16(b). Figure 9(a) and Figure 9(b) show examples of superlative constructions with the arity 1 and arity 2 types of degree information, respectively. Figure 9(c) shows an example of a comparative construction. Comparatives and superlatives use the same machinery, differing only in the predicate: argmax versus ⟨more; 3/1:TX⟩ (more than Texas). But both predicates have the same template behavior: Each takes a set of entity–degree pairs and returns any entity satisfying some property. For argmax, the property is obtaining the highest degree; for more, it is having a degree higher than a threshold. We can handle generalized superlatives (the five largest or the fifth largest or the 5% largest) as well by swapping in a different predicate; the execution mechanisms defined in Equation (41) remain the same.

We saw that the mark–execute machinery allows decisions regarding quantifier scope to be made in a clean and modular fashion. Superlatives also have scope ambiguities in the form of absolute versus relative readings. Consider the example in Figure 9(d). In the absolute reading, we first compute the superlative in a narrow scope (the largest state is Alaska), and then connect it with the rest of the phrase, resulting in the empty set (because no states border Alaska). In the relative reading, we consider the first state as the entity we want to compare, and its degree is the size of a neighboring state. In this case, the lower state node cannot be set to Alaska because there are no states bordering it. The result is therefore any state that borders Texas (the largest state that does have neighbors). The two DCS trees in Figure 9(d) show that we can naturally account for this form of superlative ambiguity based on where the scope-determining execute relation is placed, without drastically changing the underlying tree structure.

Remarks. These scope divergence issues are not specific to DCS—every serious semantic formalism must address them. One classic approach uses quantifier raising to move the quantifier from its original syntactic position up to the desired semantic position before semantic interpretation even occurs (Heim and Kratzer 1998). Other mechanisms, such as Montague's (1973) quantifying in, Cooper storage (Cooper 1975), and Carpenter's (1998) scoping constructor, handle scope divergence during semantic interpretation. Roughly speaking, these mechanisms delay application of a quantifier, "marking" its spot with a dummy pronoun (as in Montague's quantifying in) or putting it in a store (as in Cooper storage), and then "executing" the quantifier at a later point in the derivation, either by performing a variable substitution or by retrieving it from the store. Continuations, from programming languages, are another solution (Barker 2002; Shan 2004); this approach sets the semantics of a quantifier to be a function from its continuation (which captures all the semantic content of the clause minus the quantifier) to the final denotation of the clause.


Figure 16
(a) Executing the compare relation C for an example superlative construction (relative reading of state bordering the largest state from Figure 9(d)). Before executing, column 1 contains the entity to compare, and column 2 contains the degree information, of which only the second component is relevant. After executing, the resulting denotation contains a single column with only the entities that obtain the highest degree (in this case, the states that border Texas). (b) For this example, think of the execute operation as expanding the original DCS tree, although the execute operation actually works on the denotation, not the DCS tree. The expanded DCS tree has the same denotation as the original DCS tree, and syntactically captures the essence of the execute–compare operation. Going through the relations of the expanded DCS tree from bottom to top: The X_2 relation swaps columns 1 and 2; the join relation keeps only the second component ((TX, 267K) becomes (267K)); +_{2,1} concatenates columns 2 and 1 ([(267K), (AR)] becomes [(AR, 267K)]); Σ aggregates these tuples into a set; argmax operates on this set and returns the elements.


Intuitively, continuations reverse the normal evaluation order, allowing a quantifier to remain in situ but still outscope the rest of the clause. In fact, the mark and execute relations of DCS are analogous to the shift and reset operators used in continuations. One of the challenges with allowing flexible scope is that free variables can yield invalid scopings, a well-known issue with Cooper storage that the continuation-based approach solves. Invalid scopings are filtered out by the construction mechanism (Section 2.6).

One difference between mark–execute in DCS and many other mechanisms is that DCS trees (which contain mark and execute relations) are the final logical forms—the handling of scope divergence occurs in computing their denotations. The analog in the other mechanisms resides in the construction mechanism—the actual final logical form is quite simple.9 Therefore, we have essentially pushed the inevitable complexity from the construction mechanism into the semantics of the logical form. This is a conscious design decision: We want our construction mechanism, which maps natural language to logical form, to be simple and not burdened with complex linguistic issues, for our focus is on learning this mapping. Unfortunately, the denotations of our logical forms (Section 2.5.1) do become more complex than those of lambda calculus expressions, but we believe this is a reasonable tradeoff to make for our particular application.

2.6 Construction Mechanism

We have thus far defined the syntax (Section 2.2) and semantics (Section 2.5) of DCS trees, but we have only vaguely hinted at how these DCS trees might be connected to natural language utterances by appealing to idealized examples. In this section, we formally define the construction mechanism for DCS, which takes an utterance x and produces a set of DCS trees Z_L(x).

Because we motivated DCS trees based on dependency syntax, it might be tempting to take a dependency parse tree of the utterance, replace the words with predicates, and attach some relations on the edges to produce a DCS tree. To a first approximation, this is what we will do, but we need to be a bit more flexible for several reasons: (i) some nodes in the DCS tree do not have predicates (e.g., children of an E relation or parent of an X_i relation); (ii) nodes have predicates that do not correspond to words (e.g., in California cities, there is an implicit loc predicate that bridges CA and city); (iii) some words might not correspond to any predicates in our world (e.g., please); and (iv) the DCS tree might not always be aligned with the syntactic structure, depending on which syntactic formalism one ascribes to. Although syntax was the inspiration for the DCS formalism, we will not actually use it in construction.

It is also worth stressing the purpose of the construction mechanism. In linguistics, the purpose of the construction mechanism is to try to generate the exact set of valid logical forms for a sentence. We view the construction mechanism instead as simply a way of creating a set of candidate logical forms. A separate step defines a distribution over this set to favor certain logical forms over others. The construction mechanism should therefore simply overapproximate the set of logical forms. Linguistic constraints that are normally encoded in the construction mechanism (for example, in CCG, that the disharmonic pair S/NP and S\NP cannot be coordinated, or that non-indefinite quantifiers cannot extend their scope beyond clause boundaries) would instead be

9 In the continuation-based approach, this difference corresponds to the difference between assigning a denotational versus an operational semantics.

encoded as features (Section 3.1.1). Because feature weights are estimated from data, one can view our approach as automatically learning the linguistic constraints relevant to our end task.

2.6.1 Lexical Triggers. The construction mechanism assumes a fixed set of lexical triggers L. Each trigger is a pair (s, p), where s is a sequence of words (usually one) and p is a predicate (e.g., s = California and p = CA). We use L(s) to denote the set of predicates p triggered by s ((s, p) ∈ L). We should think of the lexical triggers L not as pinning down the precise predicate for each word, but rather as producing an overapproximation. For example, L might contain {(city, city), (city, state), (city, river), ...}, reflecting our initial ignorance prior to learning.

We also define a set of trace predicates L(ε), which can be introduced without an overt lexical element. Their name is inspired by trace/null elements in syntax, but they serve a more practical rather than a theoretical role here. As we shall see in Section 2.6.2, trace predicates provide more flexibility in the construction of logical forms, allowing us to insert a predicate based on the partial logical form constructed thus far and assess its compatibility with the words afterwards (based on features), rather than insisting on a purely lexically driven formalism. Section 4.1.3 describes the lexical triggers and trace predicates that we use in our experiments.

2.6.2 Recursive Construction of DCS Trees. Given a set of lexical triggers L, we will now describe a recursive mechanism for mapping an utterance x = (x_1, ..., x_n) to Z_L(x), a set of candidate DCS trees for x. The basic approach is reminiscent of projective labeled dependency parsing: For each span i..j of the utterance, we build a set of trees C_{i,j}(x). The set of trees for the span 0..n is the final result:

Z_L(x) = C_{0,n}(x) (44)

Each set of DCS trees C_{i,j}(x) is constructed recursively by combining the trees of its subspans C_{i,k}(x) and C_{k′,j}(x) for each pair of split points k, k′ (words between k and k′ are ignored). These combinations are then augmented via a function A and filtered via a function F; these functions will be specified later. Formally, C_{i,j}(x) is defined recursively as follows:

C_{i,j}(x) = F(A({⟨p⟩_{i..j} : p ∈ L(x_{i+1..j})} ∪ ∪_{i≤k≤k′≤j} ∪_{a∈C_{i,k}(x), b∈C_{k′,j}(x)} T_1(a, b))) (45)

This recurrence has two parts:

• The base case: We take the phrase (sequence of words) over span i..j and look up the set of predicates p in the set of lexical triggers. For each predicate, we construct a one-node DCS tree. We also extend the definition of DCS trees in Section 2.2 to allow each node to store the indices of the span i..j that triggered the predicate at that node; this is denoted by ⟨p⟩_{i..j}. This span information will be useful in Section 3.1.1, where we will need to talk about how an utterance x is aligned with a DCS tree z.


• The recursive case: T_1(a, b), which we will define shortly, takes two DCS trees, a and b, and returns a set of new DCS trees formed by combining a and b. Figure 17 shows this recurrence graphically; a code skeleton of the recurrence follows below.
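Here is a short CKY-style skeleton of the recurrence in Equations (44)–(45), with lexicon, combine (standing in for T_1), augment (A), and filt (F) left as stubs; this is our sketch of the control flow, not the actual system:

```python
# Skeleton of the recursive construction (Eqs. 44-45).
from itertools import product

def construct(words, lexicon, combine, augment, filt):
    n = len(words)
    C = {}  # C[(i, j)]: candidate trees over span i..j
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            # Base case: one-node trees from lexical triggers over the span.
            trees = {("leaf", p, (i, j)) for p in lexicon(words[i:j])}
            # Recursive case: split points k <= k' (words in between skipped).
            for k, k2 in product(range(i + 1, j), repeat=2):
                if k <= k2:
                    for a, b in product(C.get((i, k), ()), C.get((k2, j), ())):
                        trees |= combine(a, b)      # T_1(a, b)
            C[(i, j)] = filt(augment(trees))        # F(A(...))
    return C[(0, n)]

# Trivial stand-ins, just to exercise the recurrence:
lex = lambda ws: {ws[0]} if len(ws) == 1 else set()
print(construct(["states", "CA"], lex,
                lambda a, b: {("join", a, b)}, lambda Z: Z, lambda Z: Z))
```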

We now focus on how to combine two DCS trees. Define T_d(a, b) as the set of DCS trees that result by making either a or b the root and connecting the other via a chain of relations and at most d trace predicates (d is a small integer that keeps the set of DCS trees manageable):

T_d(a, b) = T_d^⇀(a, b) ∪ T_d^↼(a, b) (46)

Here, T_d^⇀(a, b) is the set of DCS trees where a is the root; for T_d^↼(a, b), b is the root. The former is defined recursively as follows:

T_0^⇀(a, b) = ∅ (47)
T_d^⇀(a, b) = ∪_{r∈R} ({⟨a.p; a.e; r:b⟩, ⟨a.p; a.e; r:⟨Σ:b⟩⟩} ∪ ∪_{p∈L(ε)} T_{d−1}^⇀(a, ⟨p; r:b⟩))

First, we consider all possible relations r ∈ R and try appending an edge to a with relation r and child b (⟨a.p; a.e; r:b⟩); an aggregate relation Σ can be inserted in addition (⟨a.p; a.e; r:⟨Σ:b⟩⟩). Of course, R contains an infinite number of join and execute relations, but only a small finite number of them make sense: We consider join relations j/j′ only for j ∈ {1, ..., ARITY(a.p)} and j′ ∈ {1, ..., ARITY(b.p)}, and execute relations X_i for which i does not contain indices larger than the number of columns of ⟦b⟧_w. Next, we further consider all possible trace predicates p ∈ L(ε), and recursively try to connect a with the intermediate ⟨p; r:b⟩, now allowing d − 1 additional trace predicates (see Figure 18 for an example).

Figure 17 An example of the recursive construction of Ci,j(x), a set of DCS trees for span i..j.


Figure 18
Given two DCS trees, a and b, T_1^⇀(a, b) and T_1^↼(a, b) are the two sets of DCS trees formed by combining a and b with a at the root and b at the root, respectively; one trace predicate can be inserted in between. In this example, the DCS trees which survive filtering (Section 2.6.3) are shown.

In the other direction, T_d^↼ is defined similarly:

T_0^↼(a, b) = ∅ (48)
T_d^↼(a, b) = ∪_{r∈R} ({⟨b.p; r:a; b.e⟩, ⟨b.p; r:⟨Σ:a⟩; b.e⟩} ∪ ∪_{p∈L(ε)} T_{d−1}^↼(⟨p; r:a⟩, b))

Inserting trace predicates allows us to build logical forms with more predicates than are explicitly triggered by the words. This ability is useful for several reasons. Sometimes, there is a predicate not overtly expressed, especially in noun compounds (e.g., California cities). For semantically light words such as prepositions (e.g., for) it is difficult to enumerate all the possible predicates that they might trigger; it is simpler computationally to try to insert trace predicates. We can even omit lexical triggers for transitive verbs such as border because the corresponding predicate border can be inserted as a trace predicate.

The function T_1(a, b) connects two DCS trees via a path of relations and trace predicates. The augmentation function A adds additional relations (specifically, E and/or X_i) on a single DCS tree:

A(Z) = ∪_{z∈Z} ∪_{X_i∈R} {z, ⟨z; E:ø⟩, ⟨X_i:z⟩, ⟨X_i:⟨z; E:ø⟩⟩} (49)

2.6.3 Filtering using Abstract Interpretation. The construction procedure as described thus far is extremely permissive, generating many DCS trees which are obviously wrong—for example, ⟨state; 1/1:⟨>; 2/1:3⟩⟩, which tries to compare a state with the number 3. There is nothing wrong with this expression syntactically: Its denotation will simply be empty (with respect to the world). But semantically, this DCS tree is anomalous.

We cannot simply discard DCS trees with empty denotations, because we would incorrectly rule out ⟨state; 1/1:⟨border; 2/1:AK⟩⟩. The difference here is that even though the denotation is empty in this world, it is possible that it might not be empty

in a different world where history and geology took another turn, whereas it is simply impossible to compare cities and numbers.

Now let us quickly flesh out this intuition before falling into a philosophical discussion about possible worlds. Given a world w, we define an abstract world α(w), to be described shortly. We compute the denotation of a DCS tree z with respect to this abstract world. If at any point in the computation we create an empty denotation, we judge z to be impossible and throw it away. The filtering function F is defined as follows:10

F(Z) = {z ∈ Z : ∀z′ subtree of z, ⟦z′⟧_{α(w)}.A ≠ ∅} (50)

Now we need to define the abstract world α(w). The intuition is to map concrete values to abstract values: 3:length becomes ∗:length, Oregon:state becomes ∗:state, and in general, primitive value x:t becomes ∗:t. We perform abstraction on tuples componentwise, so that (Oregon:state,3:length) becomes (∗:state, ∗:length). Our abstraction of sets is slightly more complex: The empty set maps to the empty set, a set containing values all with the same abstract value a maps to {a}, and a set containing values with more than one abstract value maps to {MIXED}. Finally, a world maps each predicate onto a set of (concrete) tuples; the corresponding abstract world maps each predicate onto the set of abstract tuples. Formally, the abstraction function is defined as follows:

α(x:t) = ∗:t [primitive value] (51)
α((v_1, ..., v_n)) = (α(v_1), ..., α(v_n)) [tuple] (52)
α(A) = ∅ if A = ∅; {α(x) : x ∈ A} if |{α(x) : x ∈ A}| = 1; {MIXED} otherwise [set] (53)
α(w) = λp.{α(x) : x ∈ w(p)} [world] (54)

As an example, the abstract world might look like this:

α(w)(>) = {(∗:number, ∗:number, ∗:number), (∗:length, ∗:length, ∗:length), ...} (55)
α(w)(state) = {(∗:state)} (56)
α(w)(AK) = {(∗:state)} (57)
α(w)(border) = {(∗:state, ∗:state)} (58)

Now returning to our motivating example at the beginning of this section, we see that the bad DCS tree has an empty abstract denotation: ⟦⟨state; 1/1:⟨>; 2/1:3⟩⟩⟧_{α(w)} = ⟨∅; ø⟩. The good DCS tree has a non-empty abstract denotation: ⟦⟨state; 1/1:⟨border; 2/1:AK⟩⟩⟧_{α(w)} = ⟨{(∗:state)}; ø⟩, as desired.
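The abstraction function is mechanical enough to transcribe directly; in this sketch (our toy encoding, not the system's), a primitive value is a (value, type) pair, a world maps predicate names to sets of tuples of such pairs, and Equations (51)–(54) become:

```python
MIXED = "MIXED"

def alpha_value(v):                      # Eq. (51): x:t -> *:t
    _, t = v
    return ("*", t)

def alpha_tuple(tup):                    # Eq. (52)
    return tuple(alpha_value(v) for v in tup)

def alpha_set(A):                        # Eq. (53)
    imgs = {alpha_tuple(a) for a in A}
    return imgs if len(imgs) <= 1 else {MIXED}

def alpha_world(w):                      # Eq. (54)
    return {p: {alpha_tuple(x) for x in tuples} for p, tuples in w.items()}

w = {"border": {(("OR", "state"), ("CA", "state")),
                (("NV", "state"), ("CA", "state"))},
     "AK": {(("AK", "state"),)}}
print(alpha_world(w))
# {'border': {(('*','state'), ('*','state'))}, 'AK': {(('*','state'),)}}
```

Filtering then just computes denotations bottom-up in alpha_world(w) and rejects a tree as soon as any subtree's abstract denotation is empty, per Equation (50).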

10 To further reduce the search space, F imposes a few additional constraints: for example, limiting the number of columns to 2, and only allowing trace predicates between arity 1 predicates.


Remarks. Computing denotations on an abstract world is called abstract interpretation (Cousot and Cousot 1977) and is a very powerful framework commonly used in the programming languages community. The idea is to obtain information about a program (in our case, a DCS tree) without running it concretely, but rather just by running it abstractly. It is closely related to type systems, but the abstractions one uses are often much richer than standard type systems.

2.6.4 Comparison with CCG. We now compare our construction mechanism with CCG (see Figure 19 for an example). The main difference is that our lexical triggers contain less information than a lexical entry in a CCG. In CCG, the lexicon would have an entry such as

major ⊢ N/N : λf.λx.major(x) ∧ f(x) (59)

which gives detailed information about how this word should interact with its context. In DCS construction, however, each lexical trigger only has the minimal amount of information:

major ⊢ major (60)

A lexical trigger specifies a pre-theoretic "meaning" of a word which does not commit to any formalism. One advantage of this minimality is that lexical triggers could be easily obtained from non-expert supervision: One would only have to associate words with database table names (predicates).

In some sense, the DCS construction mechanism pushes the complexity out of the lexicon. In linguistics, this complexity usually would end up in the grammar, which would be undesirable. We do not have to respect this tradeoff, however, because the

Figure 19
Comparison between the construction mechanisms of CCG and DCS. There are three principal differences: First, in CCG, words are mapped onto lambda calculus expressions; in DCS, words are just mapped onto predicates. Second, in CCG, lambda calculus expressions are built by combining (e.g., via function application) two smaller expressions; in DCS, trees are combined by inserting relations (and possibly other predicates) between them. Third, in CCG, all words map to logical expressions; in DCS, only a small subset of words (e.g., state and Texas) map to predicates; the rest participate in features for scoring DCS trees.

construction mechanism only produces an overapproximation, which means it is possible to have both a simple "lexicon" and a simple "grammar."

There is an important practical rationale for this design decision. During learning, we never just have one clean lexical entry per word. Rather, there are often many possible lexical entries (and to handle disfluent utterances or utterances in free word-order languages, we might actually need many of them [Kwiatkowski et al. 2010]):

major ⊢ N : λx.major(x) (61)
major ⊢ N/N : λf.λx.major(x) ∧ f(x) (62)
major ⊢ N\N : λf.λx.major(x) ∧ f(x) (63)
... (64)

Now think of a DCS lexical trigger major ⊢ major as simply a compact representation for a set of CCG lexical entries. Furthermore, the choice of the lexical entry is made not at the initial lexical base case, but rather during the recursive construction by inserting relations between DCS subtrees. It is exactly at this point that the choice can be made, because after all, the choice is one that depends on context. The general principle is to compactly represent the indeterminacy until one can resolve it. Compactly representing a set of CCG lexical entries can also be done within the CCG framework by factoring lexical entries into a lexeme and a lexical template (Kwiatkowski et al. 2011).

Type raising is a combinator in CCG that traditionally converts x to λf.f(x). In recent work, Zettlemoyer and Collins (2007) introduced more general type-changing combinators to allow conversion from one entity into a related entity in general (a kind of generalized metonymy). For example, in order to parse Boston flights, Boston is transformed to λx.to(x, Boston). This type changing is analogous to inserting trace predicates in DCS, but there is an important distinction: Type changing is a unary operation and is unconstrained in that it changes logical forms into new ones without regard for how they will be used downstream. Inserting trace predicates is a binary operation that is constrained by the two predicates that it is mediating. In the example, to would only be inserted to combine Boston with flight. This is another instance of the general principle of delaying uncertain decisions until there is more information.

3. Learning

In Section 2, we defined DCS trees and a construction mechanism for producing a set of candidate DCS trees given an utterance. We now define a probability distribution over that set (Section 3.1) and an algorithm for estimating the parameters (Section 3.2). The number of candidate DCS trees grows exponentially, so we use beam search to control this growth. The final learning algorithm alternates between beam search and optimization of the parameters, leading to a natural bootstrapping procedure which integrates learning and search.

3.1 Semantic Parsing Model

The semantic parsing model specifies a conditional distribution over a set of candidate DCS trees C(x) given an utterance x. This distribution depends on a function φ(x, z) ∈ R^d, which takes a (x, z) pair and extracts a set of local features (see Section 3.1.1


for a full specification). Associated with this feature vector is a parameter vector θ ∈ R^d. The inner product between the two vectors, φ(x, z)ᵀθ, yields a numerical score, which intuitively measures the compatibility of the utterance x with the DCS tree z. We exponentiate the score and normalize over C(x) to obtain a proper probability distribution:

p(z | x; C, θ) = exp{φ(x, z)ᵀθ − A(θ; x, C)} (65)

A(θ; x, C) = log Σ_{z∈C(x)} exp{φ(x, z)ᵀθ} (66)

where A(θ; x, C) is the log-partition function with respect to the candidate set function C(x).
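In code, this distribution is just a softmax over inner-product scores. The following sketch (ours, with sparse feature dicts and hypothetical feature names) mirrors Equations (65)–(66):

```python
import math

def log_linear(candidates, phi, theta):
    """p(z | x; C, theta) via Eq. (65); A is the log-partition of Eq. (66)."""
    scores = {z: sum(theta.get(f, 0.0) * v for f, v in phi(z).items())
              for z in candidates}
    A = math.log(sum(math.exp(s) for s in scores.values()))
    return {z: math.exp(s - A) for z, s in scores.items()}

# Hypothetical weights and a trivial feature function over predicate tuples.
theta = {"PRED[state]": 1.5, "PREDHIT": -0.1}
phi = lambda z: {"PREDHIT": len(z), **{f"PRED[{p}]": 1 for p in z}}
print(log_linear([("state",), ("city",), ("state", "loc")], phi, theta))
```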

3.1.1 Features. We now define the feature vector φ(x, z) ∈ R^d, the core part of the semantic parsing model. Each component j = 1, ..., d of this vector is a feature, and φ(x, z)_j is the number of times that feature occurs in (x, z). Rather than working with indices, we treat features as symbols (e.g., TRIGGERPRED[states, state]). Each feature captures some property about (x, z) that abstracts away from the details of the specific instance and allows us to generalize to new instances that share common features. The features are organized into feature templates, where each feature template instantiates a set of features. Figure 20 shows all the feature templates for a concrete example. The feature templates are as follows:

• PREDHIT contains the single feature PREDHIT, which fires for each predicate in z.

• PRED contains features {PRED[α(p)] : p ∈ P}, each of which fires on α(p), the abstraction of predicate p, where

α(p) = ∗:t if p = x:t; p otherwise (67)

The purpose of the abstraction is to abstract away the details of concrete values such as TX = Texas:state.

• PREDREL contains features {PREDREL[α(p), q] : p ∈ P, q ∈ ({↙, ↘} × R)∗}. A feature fires when a node x has predicate p and is connected via some path q = (d_1, r_1), ..., (d_m, r_m) to the lowest descendant node y with the property that each node between x and y has a null predicate. Each (d, r) on the path represents an edge labeled with relation r connecting to a left (d = ↙) or right (d = ↘) child. If x has no children, then m = 0. The most common case is when m = 1, but m = 2 also occurs with the aggregate and execute relations (e.g., PREDREL[count, ↘1/1 ↘Σ] fires for Figure 5(a)).

• PREDRELPRED contains features {PREDRELPRED[α(p), q, α(p′)] : p, p′ ∈ P, q ∈ ({↙, ↘} × R)∗}, which are the same as PREDREL, except that we include both the predicate p of x and the predicate p′ of the descendant node y. These features do not fire if m = 0.


Figure 20
For each utterance–DCS tree pair (x, z), we define a feature vector φ(x, z), whose j-th component is the number of times a feature j occurs in (x, z). Each feature has an associated parameter θ_j, which is estimated from data in Section 3.2. The inner product of the feature vector and parameter vector yields a compatibility score.

• TRIGGERPRED contains features {TRIGGERPRED[s, p] : s ∈ W∗, p ∈ P}, where W = {it, Texas, ...} is the set of words. Each of these features fires when a span of the utterance with words s triggers the predicate p—more precisely, when a subtree ⟨p; e⟩_{i..j} exists with s = x_{i+1..j}. Note that these lexicalized features use the predicate p rather than the abstracted version α(p).

• TRACEPRED contains features {TRACEPRED[s, p, d] : s ∈ W∗, p ∈ P, d ∈ {↙, ↘}}, each of which fires when a trace predicate p has been inserted


over a word s. The situation is the following: Suppose we have a subtree a that ends at position k (there is a predicate in a that is triggered by a phrase with right endpoint k) and another subtree b that begins at k′. Recall that in the construction mechanism (46), we can insert a trace predicate p ∈ L(ε) between the roots of a and b. Then, for every word x_j between the spans of the two subtrees (j ∈ {k + 1, ..., k′}), the feature TRACEPRED[x_j, p, d] fires (d = ↙ if b dominates a and d = ↘ if a dominates b).

• TRACEREL contains features {TRACEREL[s, d, r] : s ∈ W∗, d ∈ {↙, ↘}, r ∈ R}, each of which fires when some trace predicate with parent relation r has been inserted over a word s.

• TRACEPREDREL contains features {TRACEPREDREL[s, p, d, r] : s ∈ W∗, p ∈ P, d ∈ {↙, ↘}, r ∈ R}, each of which fires when a predicate p is connected via child relation r to some trace predicate over a word s.

These features are simple generic patterns which can be applied for modeling essentially any distribution over sequences and labeled trees—there is nothing specific to DCS at all. The first half of the feature templates (PREDHIT, PRED, PREDREL, PREDRELPRED) capture properties of the tree independent of the utterance, and are similar to those used for syntactic dependency parsing. The other feature templates (TRIGGERPRED, TRACEPRED, TRACEREL, TRACEPREDREL) connect predicates in the DCS tree with words in the utterance, similar to those in a model of machine translation.
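To illustrate how the templates instantiate, here is a small sketch under our own simplified tree encoding (a node is (predicate, span, children), with children as (relation, node) pairs; it counts only a few of the templates and omits the abstraction and direction arrows):

```python
from collections import Counter

def features(words, node, out=None):
    """Count PREDHIT, PRED, TRIGGERPRED, and PREDRELPRED instances."""
    out = Counter() if out is None else out
    pred, (i, j), children = node
    out["PREDHIT"] += 1
    out[f"PRED[{pred}]"] += 1
    if i < j:   # overt trigger; trace predicates have an empty span
        out[f"TRIGGERPRED[{' '.join(words[i:j])},{pred}]"] += 1
    for rel, child in children:
        out[f"PREDRELPRED[{pred},{rel},{child[0]}]"] += 1
        features(words, child, out)
    return out

words = ["states", "bordering", "CA"]
tree = ("state", (0, 1),
        [("1/1", ("border", (1, 2), [("2/1", ("CA", (2, 3), []))]))])
print(features(words, tree))
```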

3.2 Parameter Estimation

We have now fully specified the details of the graphical model in Figure 2: Section 3.1 described semantic parsing and Section 2 described semantic evaluation. Next, we focus on the inferential problem of estimating the parameters θ of the model from data.

3.2.1 Objective Function. We assume that our learning algorithm is given a training data set D containing question–answer pairs (x, y). Because the logical forms are unobserved, we work with log p(y | x; C, θ), the marginal log-likelihood of obtaining the correct answer y given an utterance x. This marginal log-likelihood sums over all z ∈ C(x) that evaluate to y:

log p(y | x; C, θ) = log p(z ∈ C^y(x) | x; C, θ) (68)
= A(θ; x, C^y) − A(θ; x, C), where (69)

C^y(x) ≝ {z ∈ C(x) : ⟦z⟧_w = y} (70)

Here, C^y(x) is the set of DCS trees z with denotation y. We call an example (x, y) ∈ D feasible if the candidate set of x contains a DCS tree that evaluates to y (C^y(x) ≠ ∅).

Define an objective function O(θ, C) containing two terms. The first term is the sum of the marginal log-likelihood over all feasible

training examples. The second term is a quadratic penalty on the parameters θ with regularization parameter λ. Formally:

O(θ, C) ≝ Σ_{(x,y)∈D : C^y(x)≠∅} log p(y | x; C, θ) − (λ/2)‖θ‖₂² (71)
= Σ_{(x,y)∈D : C^y(x)≠∅} (A(θ; x, C^y) − A(θ; x, C)) − (λ/2)‖θ‖₂²

We would like to maximize O(θ, C). The log-partition function A(θ; ·, ·) is convex, but O(θ, C) is the difference of two log-partition functions and hence is not concave (nor convex). Thus we resort to gradient-based optimization. A standard result is that the derivative of the log-partition function is the expected feature vector (Wainwright and Jordan 2008). Using this, we obtain the gradient of our objective function:11

∂O(θ, C)/∂θ = Σ_{(x,y)∈D : C^y(x)≠∅} (E_{p(z|x;C^y,θ)}[φ(x, z)] − E_{p(z|x;C,θ)}[φ(x, z)]) − λθ (72)

Updating the parameters in the direction of the gradient would move the parameters towards the DCS trees that yield the correct answer (C^y) and away from the overall candidate DCS trees (C). We can use any standard numerical optimization algorithm that requires only black-box access to a gradient. Section 4.3.4 will discuss the empirical ramifications of the choice of optimization algorithm.
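Concretely, one example's gradient contribution is a difference of expected (sparse) feature vectors plus the regularizer term; this is our sketch under the same toy encoding as the earlier log-linear snippet:

```python
import math
from collections import Counter

def log_linear(candidates, phi, theta):           # Eqs. (65)-(66)
    scores = {z: sum(theta.get(f, 0.0) * v for f, v in phi(z).items())
              for z in candidates}
    A = math.log(sum(math.exp(s) for s in scores.values()))
    return {z: math.exp(s - A) for z, s in scores.items()}

def expected_features(dist, phi):
    E = Counter()
    for z, p in dist.items():
        for f, v in phi(z).items():
            E[f] += p * v
    return E

def gradient(candidates, correct, phi, theta, lam):
    """One example's contribution to Eq. (72): expectation over C^y minus
    expectation over C, plus the -lam * theta term from the L2 penalty."""
    g = expected_features(log_linear(correct, phi, theta), phi)
    g.subtract(expected_features(log_linear(candidates, phi, theta), phi))
    for f, w in theta.items():
        g[f] -= lam * w
    return g

phi = lambda z: Counter({f"PRED[{p}]": 1 for p in z})
print(gradient([("state",), ("city",)], [("state",)], phi,
               {"PRED[state]": 0.0}, lam=0.01))
```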

3.2.2 Algorithm. Given a candidate set function C(x), we can optimize Equation (71) to obtain estimates of the parameters θ. Ideally, we would use C(x) = Z_L(x), the candidate sets from our construction mechanism in Section 2.6, but we quickly run into the problem of computing Equation (72) efficiently. Note that Z_L(x) (defined in Equation (44)) grows exponentially with the length of x. This by itself is not a show-stopper. Our features (Section 3.1.1) decompose along the edges of the DCS tree, so it is possible to use dynamic programming to compute the second expectation E_{p(z|x;Z_L,θ)}[φ(x, z)] of Equation (72).12 The problem is computing the first expectation E_{p(z|x;Z_L^y,θ)}[φ(x, z)], which sums over the subset of candidate DCS trees z satisfying the constraint ⟦z⟧_w = y. Though this is a smaller set, there is no efficient dynamic program for this set because the constraint does not decompose along the structure of the DCS tree. Therefore, we need to approximate Z_L^y, and, in fact, we will approximate Z_L as well so that the two expectations in Equation (72) are coherent.

Recall that Z_L(x) was built by recursively constructing a set of DCS trees C_{i,j}(x) for each span i..j. In our approximation, we simply use beam search, which truncates each C_{i,j}(x) to include the (at most) K DCS trees with the highest score φ(x, z)ᵀθ. We

11 Notation: E_{p(x)}[f(x)] = Σ_x p(x) f(x).
12 The state of the dynamic program would be the span i..j and the head predicate over that span.


let C̃_{i,j,θ}(x) denote this approximation and define the set of candidate DCS trees with respect to the beam search:

Z̃_{L,θ}(x) = C̃_{0,n,θ}(x) (73)

We now have a chicken-and-egg problem: If we had good parameters θ, we could generate good candidate sets C(x) using beam search Z̃_{L,θ}(x). If we had good candidate sets C(x), we could generate good parameters by optimizing our objective O(θ, C) in Equation (71). This problem leads to a natural solution: simply alternate between the two steps (Figure 21). This procedure is not guaranteed to converge, due to the heuristic nature of the beam search, but we have found it to be convergent in practice. Finally, we use the trained model with parameters θ to answer new questions x by choosing the most likely answer y, summing out the latent logical form z:

F_θ(x) ≝ argmax_y p(y | x; θ, Z̃_{L,θ}) (74)
= argmax_y Σ_{z∈Z̃_{L,θ}(x) : ⟦z⟧_w = y} p(z | x; θ, Z̃_{L,θ}) (75)
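The alternation of Figure 21 is a short loop; in this sketch (with stand-in beam_search, evaluate, and optimize functions of our own naming), feasibility is re-checked each round because the candidate sets move with θ:

```python
def learn(examples, theta, beam_search, evaluate, optimize, T=5):
    for _ in range(T):
        # (1) Update candidate sets by beam search under the current theta.
        candidates = {x: beam_search(x, theta) for x, _ in examples}
        # (2) Optimize the objective (Eq. 71) on the feasible examples only.
        feasible = [(x, y) for x, y in examples
                    if any(evaluate(z) == y for z in candidates[x])]
        theta = optimize(theta, feasible, candidates)
    return theta

# Trivial stand-ins, to show the shape of the loop:
print(learn([("utterance", 42)], {},
            beam_search=lambda x, th: ["z0"],
            evaluate=lambda z: 42,
            optimize=lambda th, feas, cands: th))
```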

4. Experiments

We have now completed the conceptual part of this article—using DCS trees to represent logical forms (Section 2), and learning a probabilistic model over these trees (Section 3). In this section, we evaluate and study our approach empirically. Our main result is that our system can obtain comparable accuracies to state-of-the-art systems that require annotated logical forms. All the code and data are available at cs.stanford.edu/~pliang/software/.

4.1 Experimental Set-up

We first describe the data sets (Section 4.1.1) that we use to train and evaluate our system. We then mention various choices in the model and learning algorithm (Section 4.1.2). One of these choices is the lexical triggers, which are further discussed in Section 4.1.3.

Figure 21 The learning algorithm alternates between updating the candidate sets based on beam search and updating the parameters using standard numerical optimization.


4.1.1 Data sets. We tested our methods on two standard data sets, referred to in this article as GEO and JOBS. These data sets were created by Ray Mooney’s group during the 1990s and have been used to evaluate semantic parsers for over a decade.

U.S. Geography. The GEO data set, originally created by Zelle and Mooney (1996), contains 880 questions about U.S. geography and a database of facts encoded in Prolog. The questions in GEO ask about general properties (e.g., area, elevation, and population) of geographical entities (e.g., cities, states, rivers, and mountains). Across all the questions, there are 280 word types, and the length of an utterance ranges from 4 to 19 words, with an average of 8.5 words. The questions involve conjunctions, superlatives, and negation, but no generalized quantification. Each question is annotated with a logical form in Prolog, for example:

Utterance: What is the highest point in Florida?
Logical form: answer(A,highest(A,(place(A),loc(A,B),const(B,stateid(florida)))))

Because our approach learns from answers, not logical forms, we evaluated the annotated logical forms on the provided database to obtain the correct answers. Recall that a world/database w maps each predicate p ∈ P to a set of tuples w(p). Some predicates contain the set of tuples explicitly (e.g., mountain); others can be derived (e.g., higher takes two entities x and y and returns true if elevation(x) > elevation(y)). Other predicates are higher-order (e.g., sum, highest) in that they take other predicates as arguments. We do not use the provided domain-specific higher-order predicates (e.g., highest), but rather provide domain-independent higher-order predicates (e.g., argmax) and the ordinary domain-specific predicates (e.g., elevation). This provides more compositionality and therefore better generalization. Similarly, we use more and elevation instead of higher. Altogether, P contains 43 predicates plus one predicate for each value (e.g., CA).

Job Queries. The JOBS data set (Tang and Mooney 2001) contains 640 natural language queries about job postings. Most of the questions ask for jobs matching various criteria: job title, company, recruiter, location, salary, languages and platforms used, areas of expertise, required/desired degrees, and required/desired years of experience. Across all utterances, there are 388 word types, and the length of an utterance ranges from 2 to 23 words, with an average of 9.8 words. The utterances are mostly based on conjunctions of criteria, with a sprinkling of negation and disjunction. Here is an example:

Utterance: Are there any jobs using Java that are not with IBM?
Logical form: answer(A,(job(A),language(A,'java'),¬company(A,'IBM')))

The JOBS data set comes with a database, which we can use as the world w. When the logical forms are evaluated on this database, however, close to half of the answers are empty (no jobs match the requested criteria). Therefore, there is a large discrepancy between obtaining the correct logical form (which has been the focus of most work on semantic parsing) and obtaining the correct answer (our focus).

To bring these two into better alignment, we generated a random database as follows: We created m = 100 jobs. For each job j, we go through each predicate p (e.g., company) that takes two arguments, a job, and a target value. For each of the possible target values v, we add (j, v) to w(p) independently with probability α = 0.8. For example, for p = company and j = job37, we might add (job37, IBM) to w(company). The result is

a database with a total of 23 predicates (which includes the domain-independent ones) in addition to the value predicates (e.g., IBM). The goal of using randomness is to ensure that two different logical forms will most likely yield different answers. For example, consider two logical forms:

z_1 = λj.job(j) ∧ company(j, IBM), (76)

z_2 = λj.job(j) ∧ language(j, Java). (77)

Under the random construction, the denotation of z_1 is S_1, a random subset of the jobs, where each job is included in S_1 independently with probability α, and the denotation of z_2 is S_2, which has the same distribution as S_1 but importantly is independent of S_1. Therefore, the probability that S_1 = S_2 is [α² + (1 − α)²]^m, which is exponentially small in m. This construction yields a world that is not entirely "realistic" (a job might have multiple employers), but it ensures that if we get the correct answer, we probably also obtain the correct logical form.
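The collision probability is easy to check numerically; with the values in the text (our arithmetic, using the stated α and m):

```python
# P(S1 = S2): each of the m jobs lands in each random set independently with
# probability alpha, so one job agrees across both sets w.p. alpha^2 + (1-alpha)^2.
alpha, m = 0.8, 100
print((alpha**2 + (1 - alpha)**2) ** m)   # ~1.8e-17: collisions are negligible
```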

4.1.2 Settings. There are a number of settings that control the tradeoffs between computation, expressiveness, and generalization power of our model, shown here. For now, we will use generic settings chosen rather crudely; Section 4.3.4 will explore the effect of changing these settings.

Lexical Triggers: The lexical triggers L (Section 2.6.1) define the set of candidate DCS trees for each utterance. There is a tradeoff between expressiveness and computational complexity: The more triggers we have, the more DCS trees we can consider for a given utterance, but then either the candidate sets become too large or beam search starts dropping the good DCS trees. Choosing lexical triggers is important and requires additional supervision (Section 4.1.3).

Features: Our probabilistic semantic parsing model is defined in terms of feature templates (Section 3.1.1). Richer features increase expressiveness but also might lead to overfitting. By default, we include all the feature templates.

Number of training examples (n): An important property of any learning algorithm is its sample complexity—how many training examples are required to obtain a certain level of accuracy? By default, all training examples are used.

Number of training iterations (T): Our learning algorithm (Figure 21) alternates between updating candidate sets and updating parameters for T iterations. We use T = 5 as the default value.

Beam size (K): The computation of the candidate sets in Figure 21 is based on beam search where each intermediate state keeps at most K DCS trees. The default value is K = 100.

Optimization algorithm: To optimize the objective function O(θ, C), our default is to use the standard L-BFGS algorithm (Nocedal 1980) with a backtracking line search for choosing the step size.

Regularization (λ): The regularization parameter λ > 0 in the objective function O(θ, C) is another knob for controlling the tradeoff between fitting and overfitting. The default is λ = 0.01.

4.1.3 Lexical Triggers. The lexical trigger set L (Section 2.6.1) is a set of entries (s, p), where s is a sequence of words and p is a predicate. We run experiments on two sets of lexical triggers: base triggers L_B and augmented triggers L_{B+P}.


Base Triggers. The base trigger set L_B includes three types of entries:

• Domain-independent triggers: For each domain-independent predicate (e.g., argmax), we manually specify a few words associated with that predicate (e.g., most). The full list is shown at the top of Figure 22.

• Values: For each value x that appears in the world (specifically, x = v_j for some tuple v ∈ w(p), index j, and predicate p), L_B contains an entry (x, x) (e.g., (Boston, Boston:city)). Note that this rule implicitly specifies an infinite number of triggers. Regarding predicate names, we do not add entries such as (city, city), because we want our system to be language-independent. In Turkish, for instance, we would not have the luxury of lexicographical cues that associate city with şehir. So we should think of the predicates as just symbols predicate1, predicate2, and so on. On the other hand, values in the database are generally proper nouns (e.g., city names) for which there are generally strong cross-linguistic lexicographic similarities.

• Part-of-speech (POS) triggers:13 For each domain-specific predicate p, we specify a set of POS tags T. Implicitly, L_B contains all pairs (x, p) where the word x has a POS tag t ∈ T. For example, for city, we would specify NN and NNS, which means that any word which is a singular or plural common noun triggers the predicate city. Note that city triggers city as desired, but state also triggers city.

The POS triggers for the GEO and JOBS domains are shown on the left side of Figure 22. Note that some predicates such as traverse and loc are not associated with any POS tags. Predicates corresponding to verbs and prepositions are not included as overt lexical triggers, but rather included as trace predicates L(ε). In constructing the logical forms, nouns and adjectives serve as anchor points. Trace predicates can be inserted between these anchors. This strategy is more flexible than requiring each predicate to spring from some word.

Augmented Triggers. We now define the augmented trigger set L_{B+P}, which contains more domain-specific information than L_B. Specifically, for each domain-specific predicate (e.g., city), we manually specify a single prototype word (e.g., city) associated with that predicate. Under L_{B+P}, city would trigger only city because city is a prototype word, but town would trigger all the NN predicates (city, state, country, etc.) because it is not a prototype word.

Prototype triggers require only a modest amount of domain-specific supervision (see the right side of Figure 22 for the entire list for GEO and JOBS). In fact, as we will see in Section 4.2, prototype triggers are not absolutely required to obtain good accuracies, but they give an extra boost and also improve computational efficiency by reducing the set of candidate DCS trees.

13 To perform POS tagging, we used the Berkeley Parser (Petrov et al. 2006), trained on the WSJ Treebank (Marcus, Marcinkiewicz, and Santorini 1993) and the Question Treebank (Judge, Cahill, and van Genabith 2006)—thanks to Slav Petrov for providing the trained parser.


Figure 22 Lexical triggers used in our experiments.

Finally, to determine triggering, we stem all words using the Porter stemmer (Porter 1980), so that mountains triggers the same predicates as mountain. We also decompose superlatives into two words (e.g., largest is mapped to most large), allowing us to construct the logical form more compositionally.

4.2 Comparison with Other Systems

We now compare our approach with existing methods. We used the same training–test splits as Zettlemoyer and Collins (2005) (600 training and 280 test examples for GEO, 500 training and 140 test examples for JOBS). For development, we created five random splits of the training data. For each split, we put 70% of the examples into a development training set and the remaining 30% into a development test set. The actual test set was only used for obtaining final numbers.

4.2.1 Systems that Learn from Question–Answer Pairs. We first compare our system (henceforth, LJK11) with Clarke et al. (2010) (henceforth, CGCR10), which is most similar to our work in that it also learns from question–answer pairs without using annotated logical forms. CGCR10 works with the FunQL language and casts semantic parsing as integer linear programming (ILP). In each iteration, the learning algorithm solves the


Table 2 Results on GEO with 250 training and 250 test examples. Our system (LJK11 with base triggers and no logical forms) obtains higher test accuracy than CGCR10, even when CGCR10 is trained using logical forms.

System Accuracy (%)

CGCR10 w/answers (Clarke et al. 2010) 73.2
CGCR10 w/logical forms (Clarke et al. 2010) 80.4
LJK11 w/base triggers (Liang, Jordan, and Klein 2011) 84.0
LJK11 w/augmented triggers (Liang, Jordan, and Klein 2011) 87.6

ILP to predict the logical form for each training example. The examples with correct predictions are fed to a structural support vector machine (SVM) and the model parameters are updated.

Though similar in spirit, there are some important differences between CGCR10 and our approach. They use ILP instead of beam search and a structural SVM instead of log-linear models, but the main difference is which examples are used for learning. Our approach learns on any feasible example (Section 3.2.1), one where the candidate set contains a logical form that evaluates to the correct answer. CGCR10 uses a much more stringent criterion: The highest scoring logical form must evaluate to the correct answer. Therefore, for their algorithm to progress, the model already must be non-trivially good before learning even starts. This is reflected in the amount of prior knowledge and initialization that CGCR10 uses before learning starts: WordNet features, syntactic parse trees, and a set of lexical triggers with 1.42 words per non-value predicate. Our system with base triggers requires only simple indicator features, POS tags, and 0.5 words per non-value predicate.

CGCR10 created a version of GEO which contains 250 training and 250 test examples. Table 2 compares the empirical results on this split. We see that our system (LJK11) with base triggers significantly outperforms CGCR10 (84.0% vs. 73.2%), and it even outperforms the version of CGCR10 that is trained using logical forms (84.0% vs. 80.4%). If we use augmented triggers, we widen the gap by another 3.6 percentage points.14

4.2.2 State-of-the-Art Systems. We now compare our system (LJK11) with state-of-the-art systems, which all require annotated logical forms (except PRECISE). Here is a brief overview of the systems:

• COCKTAIL (Tang and Mooney 2001) uses inductive logic programming to learn rules for driving the decisions of a shift-reduce semantic parser. It assumes that a lexicon (mapping from words to predicates) is provided.

• PRECISE (Popescu, Etzioni, and Kautz 2003) does not use learning, but instead relies on matching words to strings in the database using various heuristics based on WordNet and the Charniak parser. Like our work, it also uses database type constraints to rule out spurious logical forms. One of the unique features of PRECISE is that it has 100% precision—it refuses to parse an utterance which it deems semantically intractable.

14 Note that the numbers for LJK11 differ from those presented in Liang, Jordan, and Klein (2011), which reports results based on 10 different splits rather than the set-up used by CGCR10.


• SCISSOR (Ge and Mooney 2005) learns a generative probabilistic model that extends the Collins (1999) models with semantic labels, so that syntactic and semantic parsing can be done jointly.

• SILT (Kate, Wong, and Mooney 2005) learns a set of transformation rules for mapping utterances to logical forms.

• KRISP (Kate and Mooney 2006) uses SVMs with string kernels to drive the local decisions of a chart-based semantic parser.

• WASP (Wong and Mooney 2006) uses log-linear synchronous grammars to transform utterances into logical forms, starting with word alignments obtained from the IBM models.

• λ-WASP (Wong and Mooney 2007) extends WASP to work with logical forms that contain bound variables (lambda abstraction).

• LNLZ08 (Lu et al. 2008) learns a generative model over hybrid trees, which are logical forms augmented with natural language words. IBM model 1 is used to initialize the parameters, and a discriminative reranking step works on top of the generative model.

• ZC05 (Zettlemoyer and Collins 2005) learns a discriminative log-linear model over CCG derivations. Starting with a manually constructed domain-independent lexicon, the training procedure grows the lexicon by adding lexical entries derived from associating parts of an utterance with parts of the annotated logical form.

• ZC07 (Zettlemoyer and Collins 2007) extends ZC05 with extra (disharmonic) combinators to increase the expressive power of the model.

• KZGS10 (Kwiatkowski et al. 2010) uses a restricted higher-order unification procedure, which iteratively breaks up a logical form into smaller pieces. This approach gradually adds lexical entries of increasing generality, thus obviating the need for the manually specified templates used by ZC05 and ZC07 for growing the lexicon. IBM model 1 is used to initialize the parameters.

• KZGS11 (Kwiatkowski et al. 2011) extends KZGS10 by factoring lexical entries into a template plus a sequence of predicates that fill the slots of the template. This factorization improves generalization.

With the exception of PRECISE, all other systems require annotated logical forms, whereas our system learns only from annotated answers. On the other hand, our system does rely on a few manually specified lexical triggers, whereas many of the later systems essentially require no manually crafted lexica. For us, the lexical triggers play a crucial role in the initial stages of learning because they constrain the set of candidate DCS trees; otherwise we would face a hopelessly intractable search problem. The other systems induce lexica using unsupervised word alignment (Wong and Mooney 2006, 2007; Kwiatkowski et al. 2010, 2011) and/or on-line lexicon learning (Zettlemoyer and Collins 2005, 2007; Kwiatkowski et al. 2010, 2011). Unfortunately, we cannot use these automatic techniques because they rely on having annotated logical forms.

Table 3 shows the results for GEO. Semantic parsers are typically evaluated on the accuracy of the logical forms: precision (the accuracy on utterances which are successfully parsed) and recall (the accuracy on all utterances).


Table 3
Results on GEO: Logical form accuracy (LF) and answer accuracy (Answer) of the various systems. The first group of systems is evaluated using 10-fold cross-validation on all 880 examples; the second is evaluated on the 680 + 200 split of Zettlemoyer and Collins (2005). Our system (LJK11) with base triggers obtains comparable accuracy to past work, whereas with augmented triggers, our system obtains the highest overall accuracy.

System                                                       LF (%)   Answer (%)

COCKTAIL (Tang and Mooney 2001)                              79.4     –
PRECISE (Popescu, Etzioni, and Kautz 2003)                   77.5     77.5
SCISSOR (Ge and Mooney 2005)                                 72.3     –
SILT (Kate, Wong, and Mooney 2005)                           54.1     –
KRISP (Kate and Mooney 2006)                                 71.7     –
WASP (Wong and Mooney 2006)                                  74.8     –
λ-WASP (Wong and Mooney 2007)                                86.6     –
LNLZ08 (Lu et al. 2008)                                      81.8     –

ZC05 (Zettlemoyer and Collins 2005)                          79.3     –
ZC07 (Zettlemoyer and Collins 2007)                          86.1     –
KZGS10 (Kwiatkowski et al. 2010)                             88.2     88.9
KZGS11 (Kwiatkowski et al. 2011)                             88.6     –
LJK11 w/base triggers (Liang, Jordan, and Klein 2011)        –        87.9
LJK11 w/augmented triggers (Liang, Jordan, and Klein 2011)   –        91.4

Semantic parsers are typically evaluated on the accuracy of the logical forms: precision (the accuracy on utterances which are successfully parsed) and recall (the accuracy on all utterances). We focus only on recall (a lower bound on precision) and simply use the word accuracy to refer to recall.15 Our system is evaluated only on answer accuracy because our model marginalizes out the latent logical form. All other systems are evaluated on the accuracy of logical forms. To calibrate, we also evaluated KZGS10 on answer accuracy and found that it was quite similar to its logical form accuracy (88.9% vs. 88.2%).16 This does not imply that our system would necessarily have high logical form accuracy, because multiple logical forms can produce the same answer, and our system does not receive a training signal to tease them apart. Even with only base triggers, our system (LJK11) outperforms all but two of the systems, falling short of KZGS10 by only one percentage point (87.9% vs. 88.9%).17 With augmented triggers, our system takes the lead (91.4% vs. 88.9%).

Table 4 shows the results for JOBS. The two learning-based systems (COCKTAIL and ZC05) are actually outperformed by PRECISE, which is able to use strong database type constraints. By exploiting this information and doing learning, we obtain the best results.
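To make the precision/recall distinction used in these evaluations concrete, here is a minimal sketch (our illustration, not the paper's evaluation code) of both metrics for a system that may decline to parse some utterances:

```python
def evaluate(predictions, answers):
    """Precision: accuracy over utterances that were parsed (prediction
    is not None). Recall: accuracy over all utterances. A system that
    parses everything, like LJK11, has precision == recall."""
    parsed = [(p, a) for p, a in zip(predictions, answers) if p is not None]
    precision = sum(p == a for p, a in parsed) / len(parsed)
    recall = sum(p == a for p, a in zip(predictions, answers)) / len(answers)
    return precision, recall

# A system that answers 2 of its 3 parsed utterances correctly and
# declines to parse one of the 4 utterances:
print(evaluate(["ok", "wrong", None, "ok"], ["ok", "right", "x", "ok"]))
# (0.666..., 0.5): precision over the 3 parsed, recall over all 4
```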

4.3 Empirical Properties

In this section, we try to gain intuition into properties of our approach. All experiments in this section were performed on random development splits. Throughout this section, “accuracy” means development test accuracy.

15 Our system produces a logical form for every utterance, and thus our precision is the same as our recall.
16 The 88.2% corresponds to 87.9% in Kwiatkowski et al. (2010). The difference is due to using a slightly newer version of the code.
17 The 87.9% and 91.4% correspond to 88.6% and 91.1% in Liang, Jordan, and Klein (2011). These differences are due to minor differences in the code.


Table 4
Results on JOBS: Both PRECISE and our system use database type constraints, which results in a decisive advantage over the other systems. In addition, LJK11 incorporates learning and therefore obtains the highest accuracies.

System                                                       LF (%)   Answer (%)

COCKTAIL (Tang and Mooney 2001)                              79.4     –
PRECISE (Popescu, Etzioni, and Kautz 2003)                   88.0     88.0

ZC05 (Zettlemoyer and Collins 2005)                          79.3     –
LJK11 w/base triggers (Liang, Jordan, and Klein 2011)        –        90.7
LJK11 w/augmented triggers (Liang, Jordan, and Klein 2011)   –        95.0

4.3.1 Error Analysis. To understand the type of errors our system makes, we examined one of the development runs, which had 34 errors on the test set. We classified these errors into the following categories (the number of errors in each category is shown in parentheses):

- Incorrect POS tags (8): GEO is out-of-domain for our POS tagger, so the tagger makes some basic errors that adversely affect the predicates that can be lexically triggered. For example, the question What states border states . . . is tagged as WP VBZ NN NNS . . . , which means that the first states cannot trigger state. In another example, major river is tagged as NNP NNP, so these cannot trigger the appropriate predicates either, and thus the desired DCS tree cannot even be constructed.

- Non-projectivity (3): The candidate DCS trees are defined by a projective construction mechanism (Section 2.6) that prohibits edges in the DCS tree from crossing. This means we cannot handle utterances such as largest city by area, because the desired DCS tree would have city dominating area dominating argmax. To construct this DCS tree, we could allow local reordering of the words.

- Unseen words (2): We never saw at least or sea level at training time. The former has the correct lexical trigger, but not a sufficiently large feature weight (0) to encourage its use. For the latter, the problem is more structural: We have no lexical triggers for 0:length, and only adding more lexical triggers can solve this problem.

- Wrong lexical triggers (7): Sometimes the error is localized to a single lexical trigger. For example, the model incorrectly thinks Mississippi is the state rather than the river, and that Rochester is the city in New York rather than the name, even though there are contextual cues to disambiguate in these cases.

- Extra words (5): Sometimes, words trigger predicates that should be ignored. For example, for population density, the first word triggers population, which is used rather than density.

- Over-smoothing of DCS tree (9): The first half of our features (Figure 20) are defined on the DCS tree alone; these produce a form of smoothing that encourages DCS trees to look alike regardless of the words. We found several instances where this essential tool for generalization went too far. For example, in state of Nevada, the trace predicate border is inserted between the two nouns, because it creates a structure more similar to that of the common question what states border Nevada?

4.3.2 Visualization of Features. Having analyzed the behavior of our system for individual utterances, let us move from the token level to the type level and analyze the learned parameters of our model. We do not look at raw feature weights, because there are complex interactions between them that are not revealed by examining individual weights. Instead, we look at expected feature counts, which we think are more interpretable. Consider a group of "competing" features J, for example J = {TRIGGERPRED[city, p] : p ∈ P}. We define a distribution q(·) over J as follows:

q(j) = \frac{N_j}{\sum_{j' \in J} N_{j'}}, \quad \text{where} \quad N_j = \sum_{(x,y) \in \mathcal{D}} \mathbb{E}_{p(z \mid x, \tilde{\mathcal{Z}}_{L,\theta}, \theta)}\left[\phi(x, z)_j\right] \tag{78}

Think of q(j) as a marginal distribution (because all our features are positive) that represents the relative frequencies with which the features j ∈ J fire with respect to our training data set D and trained model p(z | x, \tilde{\mathcal{Z}}_{L,\theta}, θ). To appreciate the difference between what this distribution and raw feature weights capture, suppose we had two features, j_1 and j_2, which are identical (φ(x, z)_{j_1} ≡ φ(x, z)_{j_2}). The weights would be split across the two features, but the features would have the same marginal distribution (q(j_1) = q(j_2)). Figure 23 shows some of the feature distributions learned.
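As a concrete illustration of Equation (78), the following sketch normalizes expected feature counts into q(·) for one feature group; the feature names and counts are hypothetical, and in practice the expected counts would come from the trained model's posterior over DCS trees:

```python
import numpy as np

def feature_distribution(expected_counts):
    """Normalize expected feature counts N_j into the marginal
    distribution q(j) of Equation (78) for one group of competing
    features."""
    features = list(expected_counts)
    counts = np.array([expected_counts[j] for j in features], dtype=float)
    return dict(zip(features, counts / counts.sum()))

# Hypothetical expected counts for the group TRIGGERPRED[city, .]:
# how often "city" triggers each predicate, summed over the training set.
N = {
    "TRIGGERPRED[city, city]": 95.2,
    "TRIGGERPRED[city, state]": 3.1,
    "TRIGGERPRED[city, river]": 1.7,
}
print(feature_distribution(N))  # q puts ~0.95 mass on the city predicate
```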

4.3.3 Learning, Search, Bootstrapping. Recall from Section 3.2.1 that a training example is feasible (with respect to our beam search) if the resulting candidate set contains a DCS tree with the correct answer. Infeasible examples are skipped, but an example may become feasible in a later iteration. A natural question is how many training examples are feasible in each iteration. Figure 24 shows the answer: Initially, only around 30% of the training examples are feasible; this is not surprising given that all the parameters are zero, so our beam search is essentially unguided. Training on just these examples improves the parameters, however, and over the next few iterations, the number of feasible examples steadily increases to around 97%. In our algorithm, learning and search are deeply intertwined. Search is of course needed to learn, but learning also improves search. The general approach is similar in spirit to Searn (Daume, Langford, and Marcu 2009), although we do not have any formal guarantees at this point. Our algorithm also has a bootstrapping flavor. The "easy" examples are processed first, where easy is defined by the ability of beam search to generate the correct answer. This bootstrapping occurs quite naturally: Unlike most bootstrapping algorithms, we do not have to set a confidence threshold for accepting new training examples, something that can be quite tricky to do. Instead, our threshold falls out of the discrete nature of the beam search.
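The interleaving of search and learning described above can be sketched as follows (a minimal rendering in our own notation; `propose`, `execute`, and `update` are stand-ins for the paper's beam search, DCS tree evaluation, and parameter optimization):

```python
def train(examples, propose, execute, update, num_iters=5):
    """Alternate between beam search and parameter estimation.

    propose(x, theta): beam search returning candidate DCS trees for x.
    execute(z): evaluate a candidate tree against the database.
    update(feasible, theta): fit theta on the feasible candidate sets.
    """
    theta = {}  # all parameters start at zero, so search is unguided
    for _ in range(num_iters):
        feasible = {}
        for x, y in examples:
            candidates = propose(x, theta)
            # Feasible iff some candidate produces the correct answer y.
            if any(execute(z) == y for z in candidates):
                feasible[x] = (candidates, y)
            # Infeasible examples are skipped for now; a better theta
            # may make them feasible on a later pass.
        theta = update(feasible, theta)
    return theta
```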

Figure 23
Learned feature distributions. In a feature group (e.g., TRIGGERPRED[city, ·]), each feature is associated with the marginal probability that the feature fires according to Equation (78). Note that we have successfully learned that city means city, but incorrectly learned that sparse means elevation (due to the confounding fact that Alaska is the most sparse state and has the highest elevation).

4.3.4 Effect of Various Settings. So far, we have used our approach with default settings (Section 4.1.2). How sensitive is the approach to these choices? Table 5 shows the impact of the feature templates. Figure 25 shows the effect of the number of training examples, number of training iterations, beam size, and regularization parameter. The overall conclusion is that there are no big surprises: Our default settings could be improved on slightly, but these differences are often smaller than the variation across different development splits.

Figure 24
The fraction of feasible training examples increases steadily as the parameters, and thus the beam search, improve. Each curve corresponds to a run on a different development split.


Table 5
There are two classes of feature templates: lexical features (TRIGGERPRED, TRACE*) and non-lexical features (PREDREL, PREDRELPRED). The lexical features are relatively much more important for obtaining good accuracy (76.4% vs. 23.1%), but adding the non-lexical features makes a significant contribution as well (84.7% vs. 76.4%).

Features                                              Accuracy (%)

PRED                                                  13.4 ± 1.6
PRED + PREDREL                                        18.4 ± 3.5
PRED + PREDREL + PREDRELPRED                          23.1 ± 5.0
PRED + TRIGGERPRED                                    61.3 ± 1.1
PRED + TRIGGERPRED + TRACE*                           76.4 ± 2.3
PRED + PREDREL + PREDRELPRED + TRIGGERPRED + TRACE*   84.7 ± 3.5

We now consider the choice of optimization algorithm to update the parameters given candidate sets (see Figure 21). Thus far, we have been using L-BFGS (Nocedal 1980), which is a batch algorithm: Each iteration, we construct the candidate sets C^{(t)}(x) for all the training examples before solving the optimization problem \operatorname{argmax}_{\theta} O(\theta, C^{(t)}). We now consider an on-line algorithm, stochastic gradient descent (SGD) (Robbins and Monro 1951), which updates the parameters after computing the candidate set for each example. In particular, we iteratively scan through the training examples in a random order. For each example (x, y), we compute the candidate set using beam search. We then update the parameters in the direction of the gradient of the marginal log-likelihood for that example (see Equation (72)) with step size t^{-\alpha}:

\theta^{(t+1)} \leftarrow \theta^{(t)} + t^{-\alpha} \left. \frac{\partial \log p(y \mid x; \tilde{\mathcal{Z}}_{L,\theta^{(t)}}, \theta)}{\partial \theta} \right|_{\theta = \theta^{(t)}} \tag{79}

The trickiest aspect of using SGD is selecting the correct step size: A small α leads to quick progress but also instability; a large α leads to the opposite. We let L-BFGS and SGD both take the same number of iterations (passes over the training set). Figure 26 shows that a very small value of α (less than 0.2) is best for our task, even though only values between 0.5 and 1 guarantee convergence. Our setting is slightly different because we are interleaving the SGD updates with beam search, which might also lead to unpredictable consequences. Furthermore, the non-convexity of the objective function exacerbates the unpredictability (Liang and Klein 2009). Nonetheless, with a proper α, SGD converges much faster than L-BFGS and even to a slightly better solution.
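For concreteness, here is a minimal runnable sketch of the SGD update of Equation (79) with the t^{-α} step-size schedule, applied to a simple logistic log-likelihood as a stand-in objective (the paper's actual objective is the marginal log-likelihood of Equation (72), which requires the full parser):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data and objective: logistic log-likelihood in place of the
# marginal log-likelihood over candidate DCS trees.
X = rng.normal(size=(500, 5))
theta_true = rng.normal(size=5)
y = (X @ theta_true > 0).astype(float)

def gradient(theta, x, label):
    """Gradient of log p(label | x; theta) for the logistic stand-in."""
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return (label - p) * x

theta = np.zeros(5)
alpha = 0.2   # aggressive decay; small alpha worked best in the paper
t = 1
for _ in range(3):                     # passes over the training set
    for i in rng.permutation(len(X)):  # random order each pass
        theta += t ** (-alpha) * gradient(theta, X[i], y[i])
        t += 1
```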

5. Discussion

The work we have presented in this article addresses three important themes. The first theme is semantic representation (Section 5.1): How do we parametrize the mapping from utterances to their meanings? The second theme is program induction (Section 5.2): How do we efficiently search through the space of logical structures given a weak feedback signal? Finally, the last theme is grounded language (Section 5.3): How do we use constraints from the world to guide learning of language and, conversely, use language to interact with the world?


Figure 25
(a) The learning curve shows test accuracy as the number of training examples increases; about 300 examples suffice to get around 80% accuracy. (b) Although our algorithm is not guaranteed to converge, the test accuracy is fairly stable (with one exception) with more training iterations—hardly any overfitting occurs. (c) As the beam size increases, the accuracy increases monotonically, although the computational burden also increases. There is a small gain from our default setting of K = 100 to the more expensive K = 300. (d) The accuracy is relatively insensitive to the choice of the regularization parameter for a wide range of values. In fact, no regularization is also acceptable. This is probably because the features are simple, and the lexical triggers and beam search already provide some helpful biases.

5.1 Semantic Representation

Since the late nineteenth century, philosophers and linguists have worked on elucidating the relationship between an utterance and its meaning. One of the pillars of formal semantics is Frege's principle of compositionality: that the meaning of an utterance is built by composing the meanings of its parts. What these parts are and how they are composed is the main question. The dominant paradigm, which stems from the seminal work of Richard Montague (1973) in the early 1970s, states that the parts are lambda calculus expressions that correspond to syntactic constituents, and composition is function application.
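As a toy illustration of composition as function application (our example in the Montagovian spirit, not code from the article): the meaning of the verb phrase borders California is obtained by applying the function denoted by borders to the entity denoted by California:

```python
# A tiny model: the denotation of "borders" is a curried predicate over
# entities; composing it with "California" is function application.
BORDERS = {("oregon", "california"), ("nevada", "california"),
           ("arizona", "california")}

borders = lambda y: (lambda x: (x, y) in BORDERS)
california = "california"

borders_california = borders(california)   # denotation of the VP
print(borders_california("oregon"))        # True
print(borders_california("texas"))         # False
```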


Figure 26
(a) Given the same number of iterations, compared to the default batch algorithm (L-BFGS), the on-line algorithm (stochastic gradient descent) is slightly better for aggressive step sizes (small α) and worse for conservative step sizes (large α). (b) The on-line algorithm (with an appropriate choice of α) obtains a reasonable accuracy much faster than L-BFGS.

Consider the compositionality principle from a statistical point of view, where we construe compositionality as factorization. Factorization, the way a statistical model breaks into features, is necessary for generalization: It enables us to learn from previously seen examples and interpret new utterances. Projecting back to Frege's original principle, the parts are the features (Section 3.1.1), and composition is the DCS construction mechanism (Section 2.6) driven by parameters learned from training examples.

Taking the statistical view of compositionality, finding a good semantic representation becomes designing a good statistical model. But statistical modeling must also deal with the additional issue of language acquisition or learning, which presents complications: In absorbing training examples, our learning algorithm must inevitably traverse through intermediate models that are wrong or incomplete. The algorithms must therefore tolerate this degradation, and do so in a computationally efficient way. For example, in the line of work on learning probabilistic CCGs (Zettlemoyer and Collins 2005, 2007; Kwiatkowski et al. 2010), many candidate lexical entries must be entertained for each word even when polysemy does not actually exist (Section 2.6.4). To improve generalization, the lexicon can be further factorized (Kwiatkowski et al. 2011), but this is all done within the constraints of CCG. DCS represents a departure from this tradition, replacing a heavily lexicalized constituency-based formalism with a lightly lexicalized dependency-based formalism. We can think of DCS as a shift in linguistic coordinate systems, which makes certain factorizations or features more accessible. For example, we can define features on paths between predicates in a DCS tree which capture certain lexical patterns much more easily than in a lambda calculus expression or a CCG derivation.

DCS has a family resemblance to a semantic representation called natural logic form (Alshawi, Chang, and Ringgaard 2011), which is also motivated by the benefits of working with dependency-based logical forms. The goals and the detailed structure of the two semantic formalisms are different, however. Alshawi, Chang, and Ringgaard (2011) focus on parsing complex sentences in an open domain where a structured database or world does not exist. Although they do equip their logical forms with a full model-theoretic semantics, the logical forms are actually closer to dependency trees: Quantifier scope is left unspecified, and the predicates are simply the words.


Perhaps not immediately apparent is the fact that DCS draws an important idea from Discourse Representation Theory (DRT) (Kamp and Reyle 1993)—not from the treatment of anaphora and presupposition for which it is known, but something closer to its core. This is the idea of having a logical form where all variables are existentially quantified and constraints are combined via conjunction—a Discourse Representation Structure (DRS) in DRT, or a basic DCS tree with only join relations. Computationally, these logical structures conveniently encode CSPs. Linguistically, it appears that existential quantifiers play an important role and should be treated specially (Kamp and Reyle 1993). DCS takes this core and focuses on semantic compositionality and computation, whereas DRT focuses more on discourse and pragmatics.

In addition to the statistical view of DCS as a semantic representation, it is useful to think about DCS from the perspective of design. Two programming languages can be equally expressive, but what matters is how simple it is to express a desired type of computation in a given language. In some sense, we designed the DCS formal language to make it easy to represent computations expressed by natural language. An important part of DCS is the mark–execute construct, a uniform framework for dealing with the divergence between syntactic and semantic scope. This construct allows us to build simple DCS tree structures and still handle the complexities of phenomena such as quantifier scope variation. Compared to lambda calculus, think of DCS as a higher-level programming language tailored to natural language, which results in simpler programs (DCS trees). Simpler programs are easier for us to work with and easier for an algorithm to learn.
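To illustrate the CSP reading (a toy world of our own, not the GEO database): a basic DCS tree with only join relations corresponds to existentially quantified variables constrained by a conjunction of predicates, which can be solved by simple enumeration:

```python
from itertools import product

# Toy world: unary predicates are sets of 1-tuples, binary predicates
# are sets of 2-tuples.
city  = {("sacramento",), ("portland",)}
state = {("california",), ("oregon",)}
loc   = {("sacramento", "california"), ("portland", "oregon")}

# "city in California": exists c, s such that
#   city(c) AND loc(c, s) AND state(s) AND s = "california"
solutions = [(c, s)
             for (c,), (s,) in product(city, state)
             if (c, s) in loc and s == "california"]
print(solutions)  # [('sacramento', 'california')]
```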

5.2 Program Induction

Searching over the space of programs is challenging. This is the central computational challenge of program induction: inferring programs (logical forms) from their behavior (denotations). This problem has been tackled by different communities in various forms: program induction in AI, programming by demonstration in Human–Computer Interaction, and program synthesis in programming languages. The core computational difficulty is that the supervision signal—the behavior—is a complex function of the program that cannot be easily inverted. What program generated the output Arizona, Nevada, and Oregon?

Perhaps somewhat counterintuitively, program induction is easier if we infer programs not for a single task but for multiple tasks. The intuition is that when the tasks are related, the solution to one task can help another task, both computationally in navigating the program space and statistically in choosing the appropriate program if there are multiple feasible possibilities (Liang, Jordan, and Klein 2010). In our semantic parsing work, we want to infer a logical form for each utterance (task). Clearly the tasks are related because they use the same vocabulary to talk about the same domain.

Natural language also makes program induction easier by providing side information (words) which can be used to guide the search. There have been several papers that induce programs in this setting: Eisenstein et al. (2009) induce conjunctive formulae from natural language instructions, Piantadosi et al. (2008) induce first-order logic formulae using CCG in a small domain assuming observed lexical semantics, and Clarke et al. (2010) induce logical forms in semantic parsing. In the ideal case, the words would determine the program predicates, and the utterance would determine the entire program compositionally. But of course, this mapping is not given and must be learned.


5.3 Grounded Language

In recent years, there has been an increased interest in connecting language with the world.18 One of the primary issues in grounded language is alignment—figuring out what fragments of utterances refer to what aspects of the world. In fact, semantic parsers trained on examples of utterances and annotated logical forms (those discussed in Section 4.2.2) need to solve the task of aligning words to predicates. Some can learn from utterances paired with a set of logical forms, one of which is correct (Kate and Mooney 2007; Chen and Mooney 2008). Liang, Jordan, and Klein (2009) tackle the even more difficult alignment problem of segmenting and aligning a discourse to a database of facts, where many parts on either side are irrelevant.

If we know how the world relates to language, we can leverage structure in the world to guide the learning and interpretation of language. We saw that type constraints from the database/world reduce the set of candidate logical forms and lead to more accurate systems (Popescu, Etzioni, and Kautz 2003; Liang, Jordan, and Klein 2011). Even for syntactic parsing, information from the denotation of an utterance can be helpful (Schuler 2003).

One of the exciting aspects of using the world for learning language is that it opens the door to many new types of supervision. We can obtain answers given a world, which are cheaper to obtain than logical forms (Clarke et al. 2010; Liang, Jordan, and Klein 2011). Other researchers have also pushed in this direction in various ways: learning a semantic parser based on bootstrapping and estimating the confidence of its own predictions (Goldwasser et al. 2011), learning a semantic parser from user interactions with a dialog system (Artzi and Zettlemoyer 2011), and learning to execute natural language instructions from just a reward signal using reinforcement learning (Branavan et al. 2009; Branavan, Zettlemoyer, and Barzilay 2010; Branavan, Silver, and Barzilay 2011). In general, supervision from the world is indirectly related to the learning task, but it is often much more plentiful and natural to obtain.

The benefits can also flow from language to the world. For example, previous work learned to interpret language to troubleshoot a Windows machine (Branavan et al. 2009; Branavan, Zettlemoyer, and Barzilay 2010), win a game of Civilization (Branavan, Silver, and Barzilay 2011), play a legal game of solitaire (Eisenstein et al. 2009; Goldwasser and Roth 2011), and navigate a map by following directions (Vogel and Jurafsky 2010; Chen and Mooney 2011). Even when the objective in the world is defined independently of language (e.g., in Civilization), language can provide a useful bias towards the non-linguistic end goal.

6. Conclusions

The main conceptual contribution of this article is a new semantic formalism, dependency-based compositional semantics (DCS), along with techniques to learn a semantic parser from question–answer pairs, where the intermediate logical form (a DCS tree) is induced in an unsupervised manner. Our final question–answering system was able to match the accuracies of state-of-the-art systems that learn from annotated logical forms.

There is currently a significant conceptual gap between our question–answering system (which can be construed as a natural language interface to a database) and

18 Here, world need not refer to the physical world, but could be any virtual world. The point is that the world has non-trivial structure and exists extra-linguistically.

open-domain question–answering systems. The former focuses on understanding a question compositionally and computing the answer compositionally, whereas the latter focuses on retrieving and ranking answers from a large unstructured textual corpus. The former has depth; the latter has breadth. Developing methods that can both model the semantic richness of language and scale up to an open-domain setting remains an open challenge.

We believe that it is possible to push our approach in the open-domain direction. Neither DCS nor the learning algorithm is tied to having a clean, rigid database, which could instead be a database generated from a noisy information extraction process. The key is to drive the learning with the desired behavior, the question–answer pairs. The latent variable is the logical form or program, which just tries to compute the desired answer by piecing together whatever information is available. Of course, there are many open challenges ahead, but with the proper combination of linguistic, statistical, and computational insight, we hope to eventually build systems with both breadth and depth.

Acknowledgments
We thank Luke Zettlemoyer and Tom Kwiatkowski for providing us with data and answering questions, as well as the anonymous reviewers for their detailed feedback. P. L. was supported by an NSF Graduate Research Fellowship.

References
Alshawi, H., P. Chang, and M. Ringgaard. 2011. Deterministic statistical mapping of sentences to underspecified semantics. In International Conference on Compositional Semantics (IWCS), pages 15–24, Oxford.
Androutsopoulos, I., G. D. Ritchie, and P. Thanisch. 1995. Natural language interfaces to databases—an introduction. Journal of Natural Language Engineering, 1:29–81.
Artzi, Y. and L. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Empirical Methods in Natural Language Processing (EMNLP), pages 421–432, Edinburgh.
Baldridge, J. and G. M. Kruijff. 2002. Coupling CCG with hybrid logic dependency semantics. In Association for Computational Linguistics (ACL), pages 319–326, Philadelphia, PA.
Barker, C. 2002. Continuations and the nature of quantification. Natural Language Semantics, 10:211–242.
Bos, J. 2009. A controlled fragment of DRT. In Workshop on Controlled Natural Language, pages 1–5.
Bos, J., S. Clark, M. Steedman, J. R. Curran, and J. Hockenmaier. 2004. Wide-coverage semantic representations from a CCG parser. In International Conference on Computational Linguistics (COLING), pages 1240–1246, Geneva.
Branavan, S., H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 82–90, Singapore.
Branavan, S., D. Silver, and R. Barzilay. 2011. Learning to win by reading manuals in a Monte-Carlo framework. In Association for Computational Linguistics (ACL), pages 268–277.
Branavan, S., L. Zettlemoyer, and R. Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Association for Computational Linguistics (ACL), pages 1268–1277, Portland, OR.
Carpenter, B. 1998. Type-Logical Semantics. MIT Press, Cambridge, MA.
Chen, D. L. and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML), pages 128–135, Helsinki.
Chen, D. L. and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI), pages 128–135, Cambridge, MA.
Clarke, J., D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response. In Computational Natural Language Learning (CoNLL), pages 18–27, Uppsala.
Collins, M. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.
Cooper, R. 1975. Montague's semantic theory and transformational syntax. Ph.D. thesis, University of Massachusetts at Amherst.
Cousot, P. and R. Cousot. 1977. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Principles of Programming Languages (POPL), pages 238–252, Los Angeles, CA.
Daume, H., J. Langford, and D. Marcu. 2009. Search-based structured prediction. Machine Learning Journal (MLJ), 75:297–325.
Dechter, R. 2003. Constraint Processing. Morgan Kaufmann.
Eisenstein, J., J. Clarke, D. Goldwasser, and D. Roth. 2009. Reading to learn: Constructing features from semantic abstracts. In Empirical Methods in Natural Language Processing (EMNLP), pages 958–967, Singapore.
Ge, R. and R. J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Computational Natural Language Learning (CoNLL), pages 9–16, Ann Arbor, MI.
Giordani, A. and A. Moschitti. 2009. Semantic mapping between natural language questions and SQL queries via syntactic pairing. In International Conference on Applications of Natural Language to Information Systems, pages 207–221, Saarbrücken.
Goldwasser, D., R. Reichart, J. Clarke, and D. Roth. 2011. Confidence driven unsupervised semantic parsing. In Association for Computational Linguistics (ACL), pages 1486–1495, Barcelona.
Goldwasser, D. and D. Roth. 2011. Learning from natural instructions. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1794–1800, Portland, OR.
Heim, I. and A. Kratzer. 1998. Semantics in Generative Grammar. Wiley-Blackwell, Oxford.
Judge, J., A. Cahill, and J. v. Genabith. 2006. Question-bank: Creating a corpus of parse-annotated questions. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), pages 497–504, Sydney.
Kamp, H. and U. Reyle. 1993. From Discourse to Logic: An Introduction to the Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Kluwer, Dordrecht.
Kamp, H., J. van Genabith, and U. Reyle. 2005. Discourse representation theory. In Handbook of Philosophical Logic. Kluwer, Dordrecht.
Kate, R. J. and R. J. Mooney. 2006. Using string-kernels for learning semantic parsers. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), pages 913–920, Sydney.
Kate, R. J. and R. J. Mooney. 2007. Learning language semantics from ambiguous supervision. In Association for the Advancement of Artificial Intelligence (AAAI), pages 895–900, Cambridge, MA.
Kate, R. J., Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062–1068.
Kwiatkowski, T., L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233, Cambridge, MA.
Kwiatkowski, T., L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512–1523, Cambridge, MA.
Liang, P. 2011. Learning Dependency-Based Compositional Semantics. Ph.D. thesis, University of California at Berkeley.
Liang, P., M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 91–99, Singapore.
Liang, P., M. I. Jordan, and D. Klein. 2010. Learning programs: A hierarchical Bayesian approach. In International Conference on Machine Learning (ICML), pages 639–646, Haifa.
Liang, P., M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599, Portland, OR.
Liang, P. and D. Klein. 2009. Online EM for unsupervised models. In North American Association for Computational Linguistics (NAACL), pages 611–619, Boulder, CO.
Lu, W., H. T. Ng, W. S. Lee, and L. S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Empirical Methods in Natural Language Processing (EMNLP), pages 783–792, Honolulu, HI.
Marcus, M. P., M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330.
Miller, S., D. Stallard, R. Bobrow, and R. Schwartz. 1996. A fully statistical approach to natural language interfaces. In Association for Computational Linguistics (ACL), pages 55–61, Santa Cruz, CA.
Montague, R. 1973. The proper treatment of quantification in ordinary English. In J. Hintikka, J. Moravcsik, and P. Suppes, editors, Approaches to Natural Language, pages 221–242, Dordrecht, The Netherlands.
Nocedal, J. 1980. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35:773–782.
Petrov, S., L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), pages 433–440, Sydney.
Piantadosi, S. T., N. D. Goodman, B. A. Ellis, and J. B. Tenenbaum. 2008. A Bayesian model of the acquisition of compositional semantics. In Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society, pages 1620–1625, Washington, DC.
Popescu, A., O. Etzioni, and H. Kautz. 2003. Towards a theory of natural language interfaces to databases. In International Conference on Intelligent User Interfaces (IUI), pages 149–157, Miami, FL.
Porter, M. F. 1980. An algorithm for suffix stripping. Program, 14:130–137.
Robbins, H. and S. Monro. 1951. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407.
Schuler, W. 2003. Using model-theoretic semantic interpretation to guide statistical parsing and word recognition in a spoken language interface. In Association for Computational Linguistics (ACL), pages 529–536, Sapporo.
Shan, C. 2004. Delimited continuations in natural language. Technical report, arXiv. Available at http://arxiv.org/abs/cs.CL/0404006.
Steedman, M. 2000. The Syntactic Process. MIT Press, Cambridge, MA.
Tang, L. R. and R. J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In European Conference on Machine Learning, pages 466–477, Freiburg.
Vogel, A. and D. Jurafsky. 2010. Learning to follow navigational directions. In Association for Computational Linguistics (ACL), pages 806–814, Uppsala.
Wainwright, M. and M. I. Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–307.
Warren, D. and F. Pereira. 1982. An efficient easily adaptable system for interpreting natural language queries. Computational Linguistics, 8:110–122.
White, M. 2006. Efficient realization of coordinate structures in combinatory categorial grammar. Research on Language and Computation, 4:39–75.
Wong, Y. W. and R. J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In North American Association for Computational Linguistics (NAACL), pages 439–446, New York, NY.
Wong, Y. W. and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967, Prague.
Woods, W. A., R. M. Kaplan, and B. N. Webber. 1972. The lunar sciences natural language information system: Final report. Technical Report 2378, Bolt Beranek and Newman Inc., Cambridge, MA.
Zelle, M. and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055, Cambridge, MA.
Zettlemoyer, L. S. and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666.
Zettlemoyer, L. S. and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687, Prague.