
Advanced Topics in HPSG∗

Andreas Kathol Adam Przepiórkowski Jesse Tseng

1 Introduction

This chapter presents a survey of some of the major topics that have received attention from an HPSG perspective since the publication of Pollard and Sag (1994). In terms of empirical coverage (of English and other languages) and analytical and formal depth, the analyses summarized here go well beyond the original theory as defined in Pollard and Sag (1987) and (1994), although these naturally remain an indispensable point of reference.1 We will have to make a biased choice among the possible topics to cover here, and the presentation will of course be colored by our own point of view, but we hope that this chapter will give the reader a reasonable idea of current research efforts in HPSG, and directions for further exploration of the literature.

In keeping with HPSG's emphasis on rich lexical descriptions, the first section (§2) concentrates on the conceptual separation between argument structure and valence in current HPSG work. We examine how the traditional distinction between arguments and adjuncts fits into this model, and then we turn to the highly influential idea of argument composition as a mechanism for dynamically determining argument structure.

In §3, we concentrate on issues of linear order, beginning with lexicalist equivalents of configurational analyses and then considering more radical departures from the notion of phrase structure. The topics covered in §4 all have to do with 'syntactic abstractness'. On the one hand, most work in HPSG avoids the use of empty categories in syntactic structure, preferring concrete, surface-based analyses. On the other hand, there is a current trend towards construction-based approaches, in which analyses are no longer driven only by detailed lexical information, but rely crucially on the definition of phrasal types, or constructions.

One of the distinctive design features of HPSG is its integrated view of grammar.
Information about syntax, semantics, morphology/phonology, and (potentially) all other components of the grammar is represented in a single structure, with the possibility of complex interactions. In §5 we discuss a number of recent developments in the analysis of the syntax-semantics-pragmatics interface, in particular the treatment of scope and illocutionary force, as well as information structure and the representation of speakers' beliefs and intentions. The discussion of grammatical interfaces continues in §6, devoted to interactions between syntax and morphology. We conclude the chapter with a summary of recent developments in the formal logical foundations of HPSG (§7).

∗We would like to thank Bob Borsley, Miriam Butt, Ivan Sag, and especially Georgia Green for extensive comments on an earlier draft of this article. All remaining errors are ours.

2 Argument Structure

One of the most significant conceptual changes distinguishing HPSG from Generalized Phrase Structure Grammar is the treatment of combinatorial properties. In GPSG, lexical items carry a numerical index that identifies the subcategorization frame in which they can occur, and there is a distinct immediate dominance rule for each subcategorization type, resulting in a large number of such rules for head-complement structures. In contrast, lexical descriptions in HPSG include a detailed characterization of their combinatorial potential encoded in a valence feature, and thus a much smaller set of highly general immediate dominance schemata is sufficient. In this way, HPSG has an affinity with Categorial Grammar, where the categories themselves are complex and encode combinatorial properties, allowing the assumption of a small number of general combination mechanisms. A number of linguistic problems have since been explored in HPSG and solutions have been developed that have significantly refined the original ideas and provided new insights into the nature of valence.

2.1 Valence and Argument Structure

One significant development since the original presentation of the theory is the separation of the notions of valence and argument structure. In HPSG1 and HPSG2, valence was encoded in a single attribute, SUBCAT, containing a list of all syntactically selected dependents. Borsley (1987) pointed out, however, that this approach did not allow syntactic functions to be reliably distinguished. For example, the subject was originally defined as "the single remaining element on SUBCAT", but this incorrectly identifies some prepositional complements and nominal specifiers as subjects. Borsley's proposals for treating syntactic functions as primitive notions, and splitting the SUBCAT list into three valence lists, SUBJ(ECT), SPECIFIER (SPR), and COMP(LEMENT)S, were adopted in HPSG3, and since then most authors assume these three lists as part of a complex VALENCE attribute.2 The technical consequence of this move is that the head-complement, head-subject, and head-specifier schemata refer to the appropriate valence lists, rather than particular configurations of SUBCAT, and the SUBCAT Principle is replaced by the correspondingly more complex Valence Principle. An alternative default formulation of this principle is proposed by Sag (1997),3 later incorporated into the default Generalized Head Feature Principle (Ginzburg and Sag, 2000). This approach offers a more economical notational representation (at the price of additional formal machinery for allowing default unification), but it can be argued that the essential content of the original Valence Principle—that synsem objects are removed from the valence lists when they are syntactically realized—is then encoded in a piecemeal fashion in the definitions of the individual ID schemata.
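The cancellation behavior at the heart of the Valence Principle can be pictured with a small sketch. The Python below is purely illustrative: plain dictionaries stand in for typed feature structures, and the names (`combine`, `subj`, `comps`) are our own, not part of any actual HPSG implementation.

```python
# Toy sketch of the Valence Principle: when a head combines with a
# dependent, the matching requirement is removed ("cancelled") from the
# appropriate valence list of the mother node. Illustrative names only.

def combine(head, valence_list, dependent):
    """Saturate one requirement from the named valence list (e.g. 'comps')."""
    reqs = head[valence_list]
    if not reqs or reqs[0] != dependent:
        raise ValueError("dependent does not match the next requirement")
    mother = dict(head)              # other features percolate unchanged
    mother[valence_list] = reqs[1:]  # the satisfied requirement is cancelled
    return mother

# a transitive verb selecting a subject and one complement:
sees = {"head": "verb", "subj": ["NP[nom]"], "comps": ["NP[acc]"]}

vp = combine(sees, "comps", "NP[acc]")   # head-complement schema
s = combine(vp, "subj", "NP[nom]")       # head-subject schema

print(vp["comps"], s["subj"])   # [] [] -- both lists fully saturated
```

The head value survives unchanged on each projection, while each schema empties exactly one valence requirement, mirroring how a fully saturated clause ends up with empty SUBJ and COMPS lists.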

The decision to split syntactic valence into three lists makes it possible to express mismatches between the syntactic function of a constituent and the way that it is realized in the syntactic structure. This possibility has been exploited mainly in analyses where the synsem of the grammatical subject is encoded in the COMPS list. As a result, the subject is realized not by the head-subject schema, but by the head-complement schema. This has been proposed for verb-initial languages like Welsh (Borsley, 1989), and for finite clauses in German, where the subject appears in the Mittelfeld, just like the complements and adjuncts of the verb (Kiss, 1995). Another example of the same valence/function mismatch is the analysis of subject-auxiliary inversion in Sag and Wasow (1999), where a lexical rule empties the auxiliary's subject valence, with the result that the element corresponding to the subject appears as the first element of the COMPS list instead. This ensures that the subject will not be realized preverbally, but as the first "complement" following the auxiliary verb, which is the desired structure. It should be said that many analyses of this type are motivated primarily by word order considerations, and so a possible alternative approach would be to use surface linearization constraints, without actually modifying the basic syntactic structure via valence manipulation.

After replacing SUBCAT by SUBJ, SPR, and COMPS, researchers soon realized that for the treatment of some phenomena (most notably Binding Theory), they still needed a single list encoding all of the arguments of a head. So the SUBCAT list was revived in the form of the ARG(UMENT)-ST(RUCTURE) list, with one crucial difference: while SUBCAT as a valence feature recorded the level of syntactic saturation for each higher phrase in the tree, ARG-ST was introduced as a static representation of the dependents of the lexical head.
In its original conception, this information is only found in the representation of the lexical head (an object of type word). But a variety of recent work (for instance Przepiórkowski 2001) has argued that certain phenomena require that ARG-ST information also be visible on phrasal constituents projected from the head. In simple cases, the ARG-ST list is identified with the concatenation of SUBJ, SPR, and COMPS at the lexical level, i.e., before any valence requirements have been saturated. However, the lists in question do not always line up in this fashion and the possibility of mismatches gives rise to a number of analyses of otherwise puzzling phenomena. We will briefly discuss two of these here, pro-drop and argument realignments in Austronesian languages.

The standard transformational approach to missing subjects in finite environments has been to posit a null pronominal (pro) that instantiates the syntactic subject position. In keeping with HPSG's general avoidance of unpronounced syntactic material, we can instead analyze the unexpressed subject as an ARG-ST element that does not have a corresponding valence expression. The following example from Italian (1a) and the corresponding lexical description of the verb mangia illustrate this idea:

(1) a. Mangia un gelato.
       eat.3SG a ice.cream
       'S/he is eating an ice cream.'

    b. mangia: [ARG-ST ⟨NP[3sg], NP⟩, SUBJ ⟨ ⟩, COMPS ⟨NP⟩]

Dependencies in which the subject participates, such as binding, can be accommodated straightforwardly if they are described as referring to the least oblique ARG-ST element, rather than the value of SUBJ. A more radical mismatch between valence and argument structure has been proposed by Manning and Sag (1998) and Manning and Sag (1999) for the realization of arguments in Western Austronesian languages such as Toba Batak. In this language clause-initial verbs form a VP with the immediately following argument NP. In the case of active voice (AV) morphology, this NP has the status of non-subject, as evidenced by the fact that a reflexive in that position has to be bound by a later ("higher") NP. The example in (2) can be analyzed exactly like the corresponding English sentence (apart from the position of the subject NP). In particular, ARG-ST is the concatenation of SUBJ and COMPS:

(2) a. [Mang-ida diri-na] si John.
       AV-saw self-his PM John
       'John saw himself.'

    b. *[Mang-ida si John] diri-na.
       AV-saw PM John self-his

    c. [S [VP[SUBJ ⟨1⟩] [V[SUBJ ⟨1⟩, COMPS ⟨2⟩, ARG-ST ⟨1, 2⟩] mang-ida] [NP:2 diri-na]] [NP:1 si John]]

Compare this now with objective voice (OV) verbs. Again, using the distribution of reflexives as a diagnostic, we now have to assume that the VP-internal NP has the status of a subject. But this means that in the OV case, valence and argument structure are aligned in a way that is precisely opposite from the AV cases.

(3) a. *[Di-ida diri-na] si John.
       OV-saw self-his PM John

    b. [Di-ida si John] diri-na.
       OV-saw PM John self-his
       'John saw himself.'

    c. [S [VP[SUBJ ⟨2⟩] [V[SUBJ ⟨2⟩, COMPS ⟨1⟩, ARG-ST ⟨1, 2⟩] di-ida] [NP:1 si John]] [NP:2 diri-na]]

By separating information about valence (i.e., syntactic combinatorial potential) from argument structure (the lexically determined list of syntactic and semantic arguments), it becomes possible to provide a lexical treatment of a number of phenomena that would otherwise have to be handled in syntactic terms. In turn this keeps structural complexity (in terms of the inventory of genuine syntactic elements) to a minimum. The issue of structural complexity will also be of concern in the next subsection, and in §4.

2.2 Dependents and Lexical Amalgamation

The following subsections deal with two issues in the area of argument structure that appear at first to be independent of each other but turn out to be closely linked in recent HPSG work. First, is there a fundamental distinction between complements and adjuncts and, second, what is the role of the syntactic head in licensing information about missing dependents?

2.2.1 Complements and Adjuncts

It is a common and generally unquestioned assumption in much of contemporary linguistics that there is a syntactic distinction between complements and adjuncts, and that these two classes of dependents occupy different tree-configurational positions (for example, sister of X0 for complements vs. sister of X′ for adjuncts). This was also the position of early HPSG work. However, the evidence for this syntactically encoded complement/adjunct dichotomy has recently been re-examined within HPSG. For example, Hukari and Levine (1994, 1995) show that there are no clear differences between complement extraction and adjunct extraction, and Bouma et al. (2001) build on these observations and propose a unified theory of extraction based on the assumption that there is no structural distinction between complements and (at least a class of) adjuncts. Earlier, eliminating the configurational distinction was proposed in Miller (1992) (on the basis of French agreement facts, inter alia), van Noord and Bouma (1994) (on the basis of semantic ambiguities in Dutch verb clusters), and Manning et al. (1999) (on the basis of the behavior of Japanese causative constructions). This 'adjuncts-as-complements' approach is further defended on the basis of case assignment facts in Finnish and other languages (Przepiórkowski 1999c, 1999a), and on the basis of diachronic considerations (Bender and Flickinger, 1999).

The central idea of all these analyses is that (at least a class of) adjuncts must be added to the verb's subcategorization frame at the lexical level and are thus indistinguishable from complements in syntax. For example, in the analysis of Bouma et al. (2001), words are specified for the attribute DEP(ENDENT)S, in addition to the attributes ARG-ST and VALENCE discussed in the previous section. ARG-ST encodes the 'core' argument structure, i.e., information about dependents that are more or less idiosyncratically required by the word. This information is eventually mapped into the word's VALENCE attributes, responsible for the syntactic realization of these dependents. However, in Bouma et al.'s 2001 account there is an intermediate level between ARG-ST and VALENCE, namely DEPS, which encodes all dependents of the verb, both subcategorized (elements of ARG-ST) and non-subcategorized (adjuncts). In other words, DEPS extends ARG-ST to adjuncts, as schematically illustrated in (4).

(4) Argument Structure Extension

    word → [ ...|CAT [category, ...|HEAD verb, ARG-ST 1, DEPS 1 ⊕ list(adjunct)] ]

The DEPS list is, in turn, mapped into the VALENCE attributes, according to the following schematic constraint.

(5) Argument Realization

    word → [ ...|CAT [category, VALENCE [SUBJ 1, COMPS 2 ⊖ list(gap)], DEPS 1 ⊕ 2] ]

According to this principle, all elements of DEPS, except gaps, must be present on the VALENCE attributes. There are two things to note about (5). First, gaps (encoding information associated with extracted elements) are present on the DEPS list, but they are not mapped to VALENCE. This means that, according to this approach, there are no wh-traces (and, more generally, no empty elements) anywhere in the constituent tree. Second, the configurational distinction between complements and adjuncts is lost here: all elements of the extended argument structure DEPS are uniformly mapped to the VALENCE attributes, regardless of their complement/adjunct status. As we will see in the next section, various grammatical processes are assumed to operate at the level of such an extended argument structure.
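The mappings in (4) and (5) can be mimicked procedurally. The following Python is a rough illustration under our own encoding (dictionaries for dependents, a boolean `gap` flag standing in for the type gap); it is a sketch of the idea, not Bouma et al.'s formalization:

```python
# Sketch of Argument Structure Extension (4) and Argument Realization (5):
# DEPS extends ARG-ST with adjuncts; every DEPS element except gaps is
# mapped to the valence attributes. All names here are our own.

def extend_arg_st(arg_st, adjuncts):
    """(4): DEPS = ARG-ST + list of adjuncts."""
    return arg_st + adjuncts

def realize(deps):
    """(5): SUBJ is the first dependent; COMPS is the rest, minus gaps."""
    subj, rest = deps[:1], deps[1:]
    comps = [d for d in rest if not d.get("gap", False)]
    return {"subj": subj, "comps": comps}

arg_st = [{"cat": "NP"}, {"cat": "NP", "gap": True}]  # object extracted
deps = extend_arg_st(arg_st, [{"cat": "Adv"}])        # one adjunct added

val = realize(deps)
# the gap remains on DEPS but is absent from valence: no trace in the tree
print(len(deps), [d["cat"] for d in val["comps"]])    # 3 ['Adv']
```

The extracted object is visible at the DEPS level (where SLASH amalgamation and case assignment operate) but never appears among the realized valence requirements, so no empty category is needed in the syntax.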

2.2.2 Extended Argument Structure

Extraction  Bouma et al. (2001) propose a theory of extraction that makes crucial use of the extended argument structure encoded in DEPS. They argue that extraction does not distinguish between various kinds of dependents and propose the following principle of SLASH amalgamation to account for this observation.4

(6) SLASH Amalgamation:

    word → [ SYNSEM [ LOC|CAT [ DEPS ⟨[SLASH 1], ..., [SLASH n]⟩, BIND 0 ], SLASH (1 ∪ ... ∪ n) − 0 ] ]

This principle is responsible for collecting SLASH values from all dependents of a word, perhaps lexically binding some of them (this happens in the case of words such as tough or easy, which are lexical SLASH-binders), and collecting all other elements of these SLASH sets into the word's own SLASH value. This SLASH value is then shared along the head projection of the word, in accordance with the principle of SLASH inheritance:5

(7) SLASH Inheritance (schematic):

    hd-val-ph → [ SLASH 1, HD-DTR|SLASH 1 ]

This approach differs from earlier HPSG approaches to extraction not only in that it treats adjunct extraction and argument extraction uniformly, but also in that it establishes a different division of labor between parts of the grammar. In the analysis sketched above, the amalgamation of SLASH values takes place at the level of words, never at the level of phrases — phrases only pass SLASH values up to the head-filler phrase, where extracted elements are overtly realized. See Bouma et al. 2001 for further details and examples. Similar lexical amalgamation is also assumed for the purposes of the lexical analysis of quantifier scoping in Manning et al. (1999) and Przepiórkowski (1997, 1998), and for the flow of pragmatic information in Wilcock (1999). One important aspect of the SLASH Amalgamation Principle (6) is that it does not distinguish between slashed arguments and slashed adjuncts: since, in principle, any DEPS element can be a gap, any DEPS element, whether an argument or an adjunct, may be extracted by the same mechanism, in accordance with the observations in Hukari and Levine (1994, 1995).
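The set-manipulating content of (6) can be sketched as follows. The encoding (Python sets for SLASH values, a `bind` parameter for BIND) is our own simplification of what is really a constraint stated over typed feature structures:

```python
# Rough model of SLASH Amalgamation (6): a word's SLASH value is the
# union of its dependents' SLASH sets, minus whatever the word itself
# lexically binds (BIND). Hypothetical names; a real HPSG system would
# use unification over typed feature structures, not Python sets.

def amalgamate_slash(deps_slash_sets, bind=frozenset()):
    amalgam = set()
    for s in deps_slash_sets:
        amalgam |= s          # union of all dependents' SLASH values
    return amalgam - set(bind)  # minus the lexically bound elements

# 'easy' in "easy to please _": the VP dependent carries a SLASH element
# that the adjective lexically binds, so nothing propagates further:
print(amalgamate_slash([{"NP[acc]"}], bind={"NP[acc]"}))   # set()

# an ordinary verb simply passes its dependents' SLASH values upward:
print(amalgamate_slash([{"NP[acc]"}, set()]))              # {'NP[acc]'}
```

Because the amalgamation ranges over all of DEPS, a slashed adjunct contributes to the word's SLASH value in exactly the same way as a slashed argument.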

Case Assignment  Apart from extraction, another phenomenon that, contrary to common assumptions, does not seem to distinguish between complements and adjuncts is syntactic case assignment. For example, Maling (1993) argues at length that some adjuncts (adverbials of measure, duration and frequency) behave just like objects with respect to case assignment and, in particular, notes the following generalization about syntactic case assignment in Finnish: only one NP dependent of the verb receives the nominative, namely the one with the highest grammatical function; other dependents take the accusative. Thus, if no argument bears inherent case, the subject is in the nominative and other dependents are in the accusative (8), but if the subject bears an idiosyncratic case, it is the object that gets the nominative (9). Furthermore, if all arguments (if any) bear inherent case, and the 'next available' grammatical function is that of an adjunct, then this adjunct takes the nominative (10)–(11).

(8) Liisa muisti matkan vuoden.
    Liisa.NOM remembered trip.ACC year.ACC
    'Liisa remembered the trip for a year.'

(9) Lapsen täytyy lukea kirja kolmannen kerran.
    child.GEN must read book.NOM [third time].ACC
    'The child must read the book for a third time.'

(10) Kekkoseen luotettiin yksi kerta.
     Kekkonen.ILL trust.PASSP [one time].NOM
     'Kekkonen was trusted once.'

(11) Kekkoseen luotettiin yhden kerran yksi vuosi.
     Kekkonen.ILL trust.PASSP [one time].ACC [one year].NOM
     'Kekkonen was trusted for one year once.'

Maling (1993) concludes that syntactic case is assigned according to the grammatical function hierarchy and that (at least some) adjuncts belong in this hierarchy. On the basis of these facts, as well as other case assignment facts in Korean, Russian, and especially Polish, Przepiórkowski (1999a) provides an HPSG account of syntactic case assignment taking extended argument structure (i.e., DEPS, assuming Bouma et al.'s 2001 feature architecture) as the locus of syntactic case assignment. (See §6.3 below, and Przepiórkowski 1999a, ch. 10 for details.)
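Maling's generalization, as summarized above, lends itself to a simple procedural sketch. This is our own illustrative encoding (list order stands in for the grammatical function hierarchy; an `inherent` entry marks idiosyncratic case), not Przepiórkowski's HPSG formalization:

```python
# Sketch of the Finnish pattern: among the verb's dependents that bear
# no inherent case (arguments and case-bearing adjuncts alike, ordered
# by grammatical function), the highest receives nominative and all the
# others accusative. Inherently case-marked dependents are untouched.

def assign_case(deps):
    """deps: dicts ordered by the grammatical function hierarchy."""
    out = []
    nom_assigned = False
    for d in deps:
        if d.get("inherent"):           # idiosyncratic case survives
            out.append(d["inherent"])
        elif not nom_assigned:          # highest structural slot: NOM
            out.append("NOM")
            nom_assigned = True
        else:                           # everything lower: ACC
            out.append("ACC")
    return out

# (8): plain subject, object, duration adjunct
print(assign_case([{}, {}, {}]))                    # ['NOM', 'ACC', 'ACC']
# (9): genitive subject, so the object gets the nominative
print(assign_case([{"inherent": "GEN"}, {}, {}]))   # ['GEN', 'NOM', 'ACC']
```

Because the loop ranges over the full dependent list, an adjunct receives the nominative whenever every higher dependent bears inherent case, reproducing the pattern in (10)–(11).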

2.3 Argument Composition

Moving subcategorization information into lexical descriptions is at first blush a simple redistribution of labor between the syntax and the lexicon. But it turns out that this move affords a much wider perspective on the kinds of relationships that are lexically encoded. In particular, the lexicalization of valence makes it possible to express second-order dependencies—i.e., for a word to refer to the valence of its valence elements. The HPSG analysis of controlled complements can be seen as an application of this basic idea, in that the subject requirement of the selected VP is identified with the subject requirement of the verb selecting that VP:6

(12) [ SUBCAT ⟨ 1, V[SUBCAT ⟨ 1 synsem ⟩] ⟩ ]

More generally, since structure-sharing tags in HPSG can be variables over any kind of structure, they can range over the entire list of valence elements of the selected predicator. The valence list of the higher predicator then consists of the list of dependents of the verbal complement, followed by (using the list-append notation "⊕") that complement itself. This is illustrated in (13), where "1" is used as a variable over lists.

(13) [ SUBCAT 1 ⊕ ⟨ V[SUBCAT 1 list(synsem)] ⟩ ]

As a result, the arguments of the higher predicator are composed from those of the selected (typically verbal) complement. Another way of thinking about such cases is in terms of the higher predicator 'attracting' the valence requirements of the lower one. Many phenomena for which separate operations of "clause union" have been assumed in other syntactic frameworks can thus be treated in terms of a rather straightforward head-driven extension of HPSG's original valence mechanism.7

Among the original applications of argument composition is Hinrichs and Nakazawa's analysis of the German verb cluster, the clause-final sequence of verbal forms (Hinrichs and Nakazawa 1989, 1994). Starting with Bech (1955), two modes of verbal complementation have been assumed for German. The first (known as the "incoherent" construction) is very similar to English VP-complement constructions, as for instance in (14):

(14) a. Sandy tries [VP to read the book].

b. daß Otto versucht [VP das Buch zu lesen].
   that Otto tries        the book to read

A plausible analysis of (14) is that lesen combines with its NP complement (das Buch) and the resulting phrase serves as the VP complement to versucht. However, it is highly debatable whether the same should be assumed for the relation between gelesen and its notional object das Buch in constructions such as (15), where the main verb co-occurs with the tense auxiliaries haben and werden.

(15) daß Peter das Buch gelesen haben wird.
     that Peter the book read-PSP have-INF will-FIN
     'that Peter will have read the book.'

Hinrichs and Nakazawa propose that in "coherent" constructions of this kind, the valence requirements of the main verb (here, lesen) are inherited by the governing tense auxiliaries (haben and wird), so that the satisfaction of the main verb's valence requirements is now mediated by the highest governing head element (here, wird). Suggestive evidence for such an analysis comes from the fact that the object of the main verb is subject to the same range of order variation as if the main verb itself had been the sole predicator in the clause. Thus, in (16a) the pronominal object es occurs before the subject Peter, which is precisely parallel to the simple case in (16b):

(16) a. daß es Peter gelesen haben wird.
        that it Peter read-PSP have-INF will-FIN
        'that Peter will have read it.'
     b. daß es Peter las.
        that it Peter read
        'that Peter read it.'

Transformational analyses have usually assumed that such cases are the result of a scrambling transformation that dislocates the object (es) from the phrase that it forms with the main verb.8 Dislocation constructions are generally treated as filler-gap dependencies in HPSG, because they can typically hold across finite clause boundaries. Since cases like (16) are restricted to a single clause, an analysis in terms of dislocation is inappropriate. Instead, order variation of this kind has been analyzed in terms of permissive linear precedence conditions within a local syntactic domain (typically, a local phrase structure tree). If both subject and object end up as arguments of the highest predicator wird via argument composition, the “scrambled” order in (16a) can be explained in terms of order variations among daughters within the same local tree, just as in (16b).

Further evidence against the main verb and its notional object forming a constituent comes from the fact that in relative clauses, the two do not form a frontable relative phrase ("VP pied piping"), as seen in (17a). This is in contrast to cases such as (17b), where the governing verb versuchen does combine with a frontable VP dependent:

(17) a. *ein Buch [[das gelesen] Peter haben wird]
        a book    that read      Peter have will
     b. ein Buch [[das zu lesen] Peter versuchte]
        a book    that to read   Peter tried
        'a book which Peter tried to read'

As has been pointed out by Kathol (2000, 180–183), linking the valence requirements of verbal material by means of argument composition does not, in fact, determine the phrase structural relations among the participating verbs. Thus, for typical head-final cases as in (15), there have been proposals that assume no subconstituents among the verbal elements at all (Baker 1994, 1999; Bouma and van Noord 1998b; Bouma and van Noord 1998a), a constituent with right-branching structure (Kiss 1994; 1995), or a constituent with left-branching structure (Hinrichs and Nakazawa 1989; Kathol 2000), illustrated in (18):9

(18) [V[fin, ...|SBCT 1]
       [3 V[inf, ...|SBCT 1]
          [2 V[inf, ...|SBCT 1 ⟨NP[nom], NP[acc]⟩] lesen]
          [V[inf, ...|SBCT 1 ⊕ ⟨2⟩] können]]
       [V[fin, ...|SBCT 1 ⊕ ⟨3⟩] wird]]

Empirical evidence in favor of such structures is presented by Hinrichs and Nakazawa (1989), who point out, among other things, that the order variation known as Oberfeldumstellung (or "aux-flip") receives an elegant account in terms of reordering of constituents under their left-branching analysis:

(19) [V[fin, ...|SBCT 1]
       [V[fin, ...|SBCT 1 ⊕ ⟨3⟩] wird]
       [3 V[inf, ...|SBCT 1]
          [2 V[inf, ...|SBCT 1 ⟨NP[nom], NP[acc]⟩] lesen]
          [V[inf, ...|SBCT 1 ⊕ ⟨2⟩] können]]]

In §3, we will return to evidence presented by Kathol (2000) that a purely phrase structure-based view fails to cover the full range of order variation among verb cluster elements seen in German and Dutch.

Argument composition analyses in effect establish 'extended' valence relations between a governing verb and the phrasal argument of a more deeply embedded verb. This property has been the basis for novel proposals for the treatment of passives suggested by Kathol (1994) and Pollard (1994). Instead of simply copying the valence requirements of the embedded verb, a passive auxiliary can be thought of as actively manipulating the set of valence elements that it inherits from the governed dependents. As a result, passives on the clausal level can be analyzed as a form of object-to-subject raising. Compared to more standard manipulation of the verb's valence in terms of lexical rules, such an approach has the advantage of only assuming one participle; no distinction between morphologically identical passive and past participles is needed.10

Other areas of German grammar for which argument composition analyses have been proposed include derivational morphology (Gerdemann 1994) and the problem of preposed complements of nouns (De Kuthy and Meurers 1998). In addition, Abeillé and Godard (1994) have argued that tense auxiliaries in French should be analyzed as inheriting the arguments of their main verbs via argument composition, albeit with a flat constituent structure. Abeillé et al. (1998) show how this idea can be extended to certain causative constructions with faire. Another language for which argument composition has yielded insightful analyses is Korean, both for auxiliaries (Chung, 1993) and verb constructions (Chung 1998a, cf. also Bratt 1996).
Finally, Grover (1995) proposes an analysis of English tough-constructions by means of argument composition, as an alternative to the more standard approach that treats missing objects inside the VP complements of tough-adjectives as the result of an extraction.
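At its core, the composition schema in (13) that underlies all of these analyses amounts to list concatenation. A minimal sketch, with dictionaries standing in for signs and all names our own:

```python
# The argument composition schema in (13) as list append: an
# argument-attracting verb's SUBCAT consists of the SUBCAT list of its
# verbal complement, followed by that complement itself. The governed
# sign may be a word or a (partially saturated) phrase.

def compose(phon, governed):
    """SUBCAT of a verb that composes with `governed`: 1 + <V[SUBCAT 1]>."""
    inherited = governed["subcat"]        # the shared tag 1 in (13)
    return {"phon": phon, "subcat": inherited + [governed]}

lesen = {"phon": "lesen", "subcat": ["NP[nom]", "NP[acc]"]}
wird = compose("wird", lesen)

# wird has attracted both NPs of lesen, in addition to lesen itself:
print([x for x in wird["subcat"] if isinstance(x, str)])
# -> ['NP[nom]', 'NP[acc]']
```

In the German cluster of (18), the same operation applies successively: the auxiliary composes with the (possibly phrasal) verbal complement, so the NP arguments of the most deeply embedded verb end up on the SUBCAT list of the highest finite verb, where they are realized as clausemates.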

3 Phrase Structure and Linear Order

3.1 Configurationality

A theme running through much of the HPSG literature is the lexicalization of relationships that have been treated in tree-configurational terms in other theories. HPSG's binding theory is a prime example of how certain asymmetries among co-arguments can be reinterpreted in terms of obliqueness on valence/argument structure. As a result, there is no longer a need for expressing such asymmetries using structural notions such as c-command. Similarly, variation in phrase order of the kind seen in Japanese or German has typically been seen in terms of liberal linear precedence constraints over flat clausal tree structures rather than as the result of manipulating highly articulated phrase structures via scrambling movements (see for instance Uszkoreit 1987 and Pollard 1996 for German and Chung 1998a for Korean).11 HPSG analyses of this kind are thus similar to recent LFG proposals for describing nonconfigurational languages in terms of flat clause structures (cf. Austin and Bresnan 1996). For instance, free order among nominative and accusative dependents in a verb-final language can be described in terms of the linear precedence constraint in (20a), which requires NPs to precede verbal elements,

without specifying any order among the NPs. As a result, both constituent orders in (20b) and (20c) are licensed.

(20) a. NP ≺ V
     b. [S NP[nom] NP[acc] V]
     c. [S NP[acc] NP[nom] V]

An issue closely related to order variation among phrasal dependents is that of the placement of verbal heads in the Germanic languages (and elsewhere). Given a flat structure analysis for the phrasal constituents of the clause, the different positions of the finite verb in verb-initial and verb-final clauses then reduce to clause-initial vs. clause-final placement of that verb (typically mediated by a binary-valued feature such as INV, familiar from GPSG/HPSG analyses of English subject-auxiliary inversion), cf. Pollard (1996):

(21) a. [S [V[+INV] liest] [NP Otto] [NP das Buch]]
     b. [S (daß) [S [NP Otto] [NP das Buch] [V[−INV] liest]]]
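The flat-structure view in (20)–(21) replaces movement with declarative order constraints. A toy checker, under our own encoding of the two LP statements (NPs precede a clause-final verb; an inverted verb comes first):

```python
# Illustrative LP-constraint checker over a flat clause: daughters are
# category labels in surface order; no movement, just licensing. The
# encoding of the two constraints is our own simplification.

def lp_ok(daughters):
    """NP ≺ V[-INV]; V[+INV] precedes everything else in its clause."""
    for i, a in enumerate(daughters):
        for b in daughters[i + 1:]:
            if a == "V[-INV]" and b.startswith("NP"):
                return False   # an NP may not follow a clause-final verb
            if b == "V[+INV]":
                return False   # an inverted verb may not follow anything
    return True

print(lp_ok(["NP[nom]", "NP[acc]", "V[-INV]"]))   # True  (cf. (20b))
print(lp_ok(["NP[acc]", "NP[nom]", "V[-INV]"]))   # True  (cf. (20c))
print(lp_ok(["V[+INV]", "NP[nom]", "NP[acc]"]))   # True  (cf. (21a))
print(lp_ok(["NP[nom]", "V[-INV]", "NP[acc]"]))   # False
```

Because neither constraint orders the NPs with respect to each other, both "scrambled" orders are licensed by the same flat structure, while verb placement follows from the INV value alone.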

Analyses of this kind diverge starkly from the standard transformational approach in terms of movement of the finite verb from its clause-final base position to a clause-initial position (Comp) via head movement. The underlying intuition that verb placement is dependent on constituent structure is in fact also shared by various HPSG-based proposals that offer a number of different ways in which verb movement may be implemented in HPSG, cf. Kiss and Wesche (1991), Kiss (1995), Netter (1992), Frank (1994). The representation given in (22) illustrates how to capture the dependency between the finite verb (liest) and its putative base position (occupied by an empty element) in terms of the additional nonlocal feature DSL (for "double slash"):12

(22) [V[SUBCAT ⟨⟩, DSL {}]
       [V[SUBCAT ⟨ V[SUBCAT ⟨⟩, DSL {V}] ⟩] liest]
       [V[SUBCAT ⟨⟩, DSL {V}]
         [NP Otto]
         [V[SUBCAT ⟨NP⟩, DSL {V}]
           [NP das Buch]
           [V[SUBCAT ⟨NP, NP⟩, DSL {V}] t]]]]

Thus, much like SLASH is used to thread information about phrasal constituents from the gap site to the filler, DSL does the same for finite verbs occurring in verb-first or verb-second constructions in German. Accounts of verb placement in terms of nonlocal dependencies of this kind are discussed by Kathol (1998) and Kathol (2000), who points out that none of the putative evidence for a dislocation-based analysis in fact holds up under closer scrutiny.13 In addition, Kathol notes a number of technical and conceptual problems involving the nature of the dependency and the existence of dislocated heads. One area in which verb dislocation approaches appear to provide better analyses than those based on ordering variation within local trees is the interaction between finite verbs and complementizers. German and most other Germanic languages exhibit a characteristic complementarity of distribution between initial finite verbs and complementizers in root and subordinate clauses, respectively.14 If verbs and complementizers are not subconstituents of the same local tree, it is not clear how they can be made to interact positionally. In contrast, verb movement analyses are able to express a direct functional analogy between those two categories, which can account for the distributional facts. However, like their transformational counterparts, such analyses fail to generalize to phrasal clause-initial categories—that is, wh-phrases in subordinate interrogative and relative clauses—which share the basic distributional and functional properties of complementizers (cf. Kathol and Pollard 1995 and Kathol 2000, for extensive discussion of this point). In fact, one of the major motivating factors behind the linearization-based approach to Germanic clausal syntax pursued in Kathol (2000) is precisely to express this basic parallelism in a comprehensive account of the linear underpinnings of Germanic clause structure.
As we will see in the next section, the required extensions of HPSG’s phrase structure substrate afford a fairly flexible and elegant approach to problems of discontinuous constituency.

3.2 Nonconcatenative Approaches to Linear Order

In much of contemporary syntactic theory, the correlation between hierarchical organization and linear order in terms of a left-to-right concatenation of the leaves of the syntactic tree (“terminal yield”) is taken for granted. However, an interesting consequence of the sign-based approach is that the very ingredient that gave HPSG its name (“phrase structure grammar”) turns out to be a nonessential part of the formalism. While simple concatenation is one mode of computing the phonology of a sign from the phonology of its constituent parts, other relations are perfectly compatible with the sign-based approach. There is now a significant literature that explores such alternatives. Concatenative approaches lend themselves rather straightforwardly to the description of the relation between constituency and order in a language like English. However, it is far less clear whether this also holds of languages such as German. For instance, Reape (1993, 1994, 1996) observes that in German nonfinite complementation constructions of the kind illustrated in (23a), the verb zu lesen occurs separated from its notional object dieses Buch—unlike in the English counterpart (23b).

(23) a. daß dieses Buch niemand zu lesen versuchte.
        that this book.ACC no one.NOM to read tried
        ‘that no one tried to read this book.’

     b. that no one tried to read this book.

The argument composition approach sketched above in §2 attributes this discontinuity to the formation of a complex predicate (zu lesen versuchte). Reape instead proposes analyzing the German and English constructions in terms of the same basic constituent types (in particular VPs), yet realized in a discontinuous fashion in German. This is illustrated in (24), where each sign is now augmented with a list-valued feature representing that sign’s (WORD) ORDER DOMAIN. Linear order is determined by mapping the phonology of the domain elements onto the phonology of the phrase, rather than as the terminal yield of the constituent structure. This is indicated below in (24) by arrows linking the phonology of individual domain elements to the phonology of the entire constituent. While in standard phrase structure grammar, the region of (potential) order variation is the local tree, order domains expand that region to include elements that are not immediate constituents of the sign in question. For instance, in (24), the NP dieses Buch, as a complement of zu lesen, is not an immediate constituent of the clause; nevertheless it occurs together with the verbal head versuchte within the clause’s order domain. As a result, both the clausal and the higher VP node have order domains that contain more elements than immediate syntactic daughters.

(24) [Tree diagram: the clause S, with PHON ⟨dieses Buch niemand zu lesen versuchte⟩, has an order domain DOM containing four elements: NP ⟨dieses Buch⟩, NP ⟨niemand⟩, V ⟨zu lesen⟩, and V ⟨versuchte⟩. Its VP daughter has DOM ⟨NP ⟨dieses Buch⟩, V ⟨zu lesen⟩, V ⟨versuchte⟩⟩ and in turn contains a lower VP combining NP ⟨dieses Buch⟩ with V ⟨zu lesen⟩; arrows link the phonology of each domain element to the phonology of the dominating constituent.]

Reape’s proposal bears a strong resemblance to previous approaches to discontinuous constituents, in particular Pullum and Zwicky’s notion of “liberation” (Pullum 1982, Zwicky 1986; for related ideas in Categorial Grammar, see Bach 1981 and especially Dowty 1996). Thus, the VP in (24) can be thought of as being liberated in the sense that its immediate constituents may intermingle with elements from outside the VP. Unlike Pullum and Zwicky’s proposals, HPSG order domains provide a level of syntactic representation at which the range of possible intermingling effects can be represented directly. Thus, while the VP dieses Buch zu lesen gives rise to two list elements in the clausal domain, the NP dieses Buch contributes only one element. Since domain elements cannot themselves be broken apart, it is predicted that discontinuities are allowed in the former case but not in the latter. Finally, if order domains take the place of local trees as the range of potential order flexibility, it is natural to interpret linear precedence constraints as well-formedness conditions over order domains rather than as order constraints on daughter nodes in trees.
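The core formal device here, sequence union, can be illustrated with a small self-contained sketch. This is our own toy encoding, not part of any published HPSG fragment: the function names and the pair-based representation of domain elements are invented for illustration. The sketch computes all interleavings of two order domains while keeping each domain element intact:

```python
def shuffles(xs, ys):
    """All interleavings of xs and ys that preserve the internal
    order of each list (Reape-style sequence union)."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    return ([[xs[0]] + rest for rest in shuffles(xs[1:], ys)] +
            [[ys[0]] + rest for rest in shuffles(xs, ys[1:])])

def phon(dom):
    """Map the phonology of the domain elements onto the phrase."""
    return " ".join(p for _cat, p in dom)

# Domain elements are atomic (category, phonology) pairs: the NP
# "dieses Buch" is a single element and can never be broken apart.
vp_dom = [("NP", "dieses Buch"), ("V", "zu lesen"), ("V", "versuchte")]
subj = [("NP", "niemand")]

orders = [phon(d) for d in shuffles(subj, vp_dom)]
```

Because ⟨dieses Buch⟩ enters the computation as a single domain element, no interleaving can separate dieses from Buch, which is exactly the prediction described above; niemand, by contrast, may interleave freely with the liberated VP’s elements.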

3.2.1 Linearization-Based vs. Valence-Based Approaches

The initial appeal of structures such as (24) is that they allow an analysis of German that, despite differences in linear order, is remarkably similar to the constituency commonly proposed for the equivalent English sentences in nontransformational approaches. Therefore, it appears that argument composition and order domains constitute two alternative ways of allowing embedded verbs and their objects to occur discontinuously in a “middle distance dependency” construction. There are, however, empirical reasons for preferring one approach over the other. As discussed in detail in Kathol (1998), Reape’s DOMAIN analysis is

ultimately unsatisfactory in that it fails to link the argument structure of more deeply embedded predicates to that of the governing verb—which is precisely the main intuition behind the argument composition approach. Evidence that such linkage is in fact necessary comes from a phenomenon known as “remote (or long) passive” (cf. Höhle 1978). In (25), the NP der Wagen is the direct object of the embedded verb zu reparieren, yet its nominative case marks it as the subject of the passivized predicate wurde versucht.

(25) ?Der Wagen wurde zu reparieren versucht.
      the car.NOM was to repair tried
      ‘Someone tried to repair the car.’

If all predicates of the versuchen-class invariably embed VPs, as suggested by Reape, the direct object of the embedded verb (der Wagen in (25)) would never be ‘visible’ to the valence change that accompanies the passivization of versuchen.15 Thus, Reape’s approach wrongly predicts that such constructions should not exist. In contrast, the argument composition approach can easily accommodate such cases because the syntactic arguments of the most embedded verbal predicate become the syntactic dependents of the governing predicates. Even though facts such as these cast doubt on the appropriateness of order domains in the description of the particular phenomena that they were originally developed for, there nevertheless appear to be other discontinuous constituency phenomena for which order domains represent an elegant descriptive tool. For instance, Kathol (1998) points out that argument composition of the kind proposed by Hinrichs and Nakazawa fails to correctly account for certain orderings within Dutch verb clusters. In Dutch we typically find head-first ordering between the governing verb and the governed subcomplex. For example, in (26a), moet as the highest governing verb precedes hebben gelezen. Combinations of tense auxiliaries and their dependent verbs can generally occur in either order; when they occur in head-final order, as in (26b), the preferred position of the governing verb moet (in standard Dutch) turns out to be between gelezen and hebben. This kind of ordering cannot be described assuming only argument composition and binary branching verbal complexes of the kind initially proposed by Hinrichs and Nakazawa.

(26) a. dat Jan dit boek moet1 hebben2 gelezen3.
        that Jan this book must-FIN have-INF read-INF
        ‘that Jan must have read the book.’

     b. dat Jan dit boek gelezen3 moet1 hebben2.
        that Jan this book read-PSP must-FIN have-INF

Kathol (2000) shows how facts such as these can be accounted for if argument composition is combined with order domains that permit the discontinuous linearization of governed subcomplexes such as gelezen hebben in (26). Such an analysis goes a long way toward a uniform account of the ordering possibilities in a number of varieties of German and Dutch by factoring out dialect-independent constituency and dialect-dependent linearization constraints.

3.2.2 Further Applications

Another area in which the adoption of order domains has arguably led to significant progress is the syntax of left-peripheral elements in German. As was pointed out above, the striking interplay between finite verbs and complementizers (and wh-phrases, for that matter) which forms the basis of transformational verb movement accounts has been captured only insufficiently in purely phrase structure-based approaches. However, if order domains are combined with the concept of “topological fields” from traditional German grammar, these facts can be described straightforwardly in purely nonderivational terms (cf. Kathol 2000). The basic idea is to allow elements with different grammatical roles within the clause—verbal head, phrasal complements, filler phrase, complementizer, etc.—all to occur within the clause’s order domain and to assign each of them to a topological field such as Vorfeld (vf.) (roughly equivalent to [Spec,CP]), linke Satzklammer (l. S.) (roughly equivalent to Comp), or Mittelfeld (mf.), determined either lexically or by the combination schema. With the further constraint that the leftmost topological fields (Vorfeld, linke Satzklammer) can be instantiated by at most one element, the distributional complementarity of complementizers and finite verbs follows as a natural consequence. Thus, in (27) the finite verb cannot be associated with the same field as the complementizer and must instead occur clause-finally, in the rechte Satzklammer (r. S.).

(27) DOM ⟨ [l. S.: PHON ⟨daß⟩, COMPL], [mf.: PHON ⟨Lisa⟩, NP[NOM]], [mf.: PHON ⟨die Blume⟩, NP[ACC]], [r. S.: PHON ⟨sieht⟩, V[FIN]] ⟩

In verb-first constructions such as (28), by contrast, there is no complementizer blocking the l. S. position; hence the finite verb can (and in fact, must) occur there.

(28) DOM ⟨ [l. S.: PHON ⟨sieht⟩, V[FIN]], [mf.: PHON ⟨Lisa⟩, NP[NOM]], [mf.: PHON ⟨die Blume⟩, NP[ACC]] ⟩

Typical verb-second declarative clauses involve the instantiation of the Vorfeld by a non-wh-phrase and the linke Satzklammer by a finite verb, as shown in (29):

(29) DOM ⟨ [vf.: PHON ⟨die Blume⟩, NP[ACC]], [l. S.: PHON ⟨sieht⟩, V[FIN]], [mf.: PHON ⟨Lisa⟩, NP[NOM]] ⟩

Kathol (1999, 2000) further describes how clausal domains of this kind can be utilized in a constructional approach (see §4.2 below) to German sentence types with various kinds of illocutionary force potential. While much of the work employing order domains has concentrated on German (see also Richter 1997 and Müller 1999, ch. 11, Müller 2000), there have been numerous adaptations of linearization-based ideas for a variety of other languages, including Breton (Borsley and Kathol 2000), Danish (Hentze 1996 and Jensen and Skadhauge 2001), Dutch (Campbell-Kibler

2002), English (Kathol and Levine 1992), Fox (Crysmann 1999b), French (Bonami et al. 1999), Japanese (Calcagno 1993 and Yatabe 1996, 2001), Ojibwe (Kathol and Rhodes 1999), European Portuguese (Crysmann 2000b), Serbo-Croatian (Penn 1999a, 1999b), and Warlpiri (Donohue and Sag 1999). One of the ongoing issues in the literature on nonconcatenative approaches to syntax is the precise informational content of the elements of order domains. In Reape’s original formulation, order domains contain HPSG signs. But this allows for the formulation of many linear precedence constraints for which there is little or no empirical evidence. As a result, there have been proposals (cf. Kathol 1995) to limit the informational content of domain elements, i.e. the features appropriate for order domain elements. This can be seen as closely related to other proposals that utilize the architecture of features to express linguistically contentful constraints (“geometric prediction”). For instance, the idea that dependents are represented on valence lists as objects of type synsem rather than sign makes predictions about which properties of dependents can be selected by heads (e.g. syntactic category and semantic type, but not phonology). In the case of linearization, the equivalent issue is which aspects of linguistic information appear never to be relevant for linear precedence relations; these features should be rendered inaccessible by means of the feature geometry. For instance, it appears that linear precedence constraints are not sensitive to internal phrase structure, i.e. the number and kind of immediate constituents, as encoded in the DAUGHTERS value. The DOMAIN model should therefore be restricted in certain ways, but it can also be extended in other ways.
For the analysis of phenomena involving ‘floating’ affixes, it has been proposed that domain elements can represent objects smaller than words.16 This makes it possible to use linearization constraints to handle discontinuous realization of words in the same way as discontinuous phrases.
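The restriction on domain elements just described can be pictured with a deliberately impoverished data type. This is a sketch under our own naming assumptions, not an implementation of any published proposal: domain elements carry only the information that linear precedence constraints may legitimately see, and a DAUGHTERS attribute is simply absent.

```python
from dataclasses import dataclass

@dataclass
class DomainElement:
    # Only phonology, category, and (optionally) a topological field
    # are visible to linear precedence constraints; there is no
    # DAUGHTERS attribute, so LP constraints cannot refer to internal
    # phrase structure even in principle.
    phon: tuple
    cat: str
    topo_field: str = ""

vp = DomainElement(phon=("dieses", "Buch", "zu", "lesen"), cat="VP")
assert not hasattr(vp, "daughters")  # internal structure is inaccessible
```

The design choice mirrors the “geometric prediction” idea: instead of stipulating which constraints are disallowed, the representation itself makes them unstatable.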

4 Syntactic Abstractness and Reductionism

In this section we survey some developments in HPSG that seem to be primarily methodological, but on closer inspection also have empirical ramifications. These have to do with the reality of phonologically empty syntactic constituents and the division of labor between the lexicon and the combinatorial apparatus in expressing syntactic generalizations. The overriding concern in both is the question of how abstract we should assume syntactic representations to be.

4.1 The (Non-)Reality of Syntactic Traces

With the introduction of the structure preserving constraint on transformations in the seventies, the notion of a “trace” as the residue of movement operations became a core ingredient of transformational theories of grammar. The presence of inaudible copies of dislocated elements within the syntactic representation has been of crucial importance for the formulation of many principles in transformational theories, including binding, scope of quantificational expressions, distribution of case-marked elements, and constraints on extraction. The definition of a trace in HPSG is quite straightforward. One can simply see it as a phrasal element of some category (usually nominal or prepositional) that is phonologically empty and

contributes its own local information to the set of nonlocal SLASH information:

(30) [PHON ⟨⟩, SYNSEM [LOCAL 1, NONLOCAL|SLASH {1}]]

However, the reliance on such phonologically empty syntactic elements is generally considered to go against the spirit of HPSG as a surface-oriented theory. This holds for all kinds of empty categories, not only traces (e.g. wh-trace and NP-trace), but also pro, PRO, and the many empty operators and empty functional heads that are assumed in other frameworks. The discussion in this section focuses on wh-trace, because most of the other empty categories have never been proposed in standard HPSG analyses. For example, PRO is not needed in infinitival constructions, because the unrealized subject is identifiable as an unsaturated valence element, and NP-trace is not needed in the HPSG treatment of the passive alternation, which involves related but distinct verbal lexical entries. It should be said that some authors do in fact take advantage of the fact that HPSG can technically accommodate empty categories. As discussed in §3.1 above, a number of proposals for German clause structure assume a ‘head movement’ analysis with clause-final verbal traces. And the account of relative clauses in Pollard and Sag 1994 relies crucially on syntactically complex but phonologically empty relativizing operators. For both of these cases, however, subsequent research has shown that alternative analyses are available that do not involve empty categories (recall §3.2 and see the next section). The main issue that remains to be considered is therefore the elimination of wh-trace. And in fact, the treatment of extraction in terms of traces in the syntactic structure proposed in HPSG2 was supplanted right away in HPSG3 by a traceless approach involving several lexical rules, and later by the unified head-driven constraint-based analysis sketched in §2.2.2.
Extraction is encoded as a mismatch between the list of potential syntactic dependents DEPS and the elements on the valence lists, which correspond to canonically realized dependents. An extracted element is instead identified as a gap, a non-canonical subtype of synsem, and its LOCAL value is added to the SLASH set.17 SLASH information propagates by head-driven inheritance and eventually licenses the appearance of a filler that discharges the long-distance dependency. The syntactic evidence typically offered in support of wh-traces can be equally well accounted for by referring to the ARG-ST list, whose membership remains unchanged even if arguments are extracted. For instance, fillers in English topicalization constructions can be reflexives with an antecedent in the following clause (31a)—notwithstanding the fact that the reflexive is presumably in a configurationally higher position than its antecedent (Pollard and Sag 1994:265). Similarly, an extracted subject as in (31b) can still serve as antecedent for a reflexive object of its original verb.

(31) a. (John and Mary are stingy with their children.) But themselvesi, theyi pamper.

b. Which mani do you think perjured himselfi?

In transformational analyses, these in situ effects are analyzed by assuming the presence of a trace at the extraction site, but this is unnecessary in HPSG, because the relevant reflexive binding constraints apply to the ARG-ST list of the verb. Many aspects of extraction phenomena are open to both trace-based and traceless analyses in HPSG, but there are empirical motivations for preferring one technical approach to the other. As has been argued by Sag and Fodor (1994), the evidence for the existence of traces proposed in the literature is often extremely weak. At the same time there are phenomena that can be explained more straightforwardly if no traces are assumed in the syntactic structure. As an example of arguments of the first kind, consider wanna-contraction, one of the most celebrated pieces of evidence in favor of traces. The basic idea is that wh-traces disallow the phonological contraction of want and to. The relative clause in (32a) is ambiguous between a subject or object control reading for the understood subject of succeed. In contrast, the variant in (32b) is only said to permit the subject control reading, supposedly because of the impossibility of contraction across a wh-trace.

(32) a. This is the man I want to succeed.

     b. This is the man I wanna succeed.

However, as has been pointed out by Pullum (1997), there are numerous technical and conceptual problems with this explanation. For instance, whether contraction is possible appears to be highly lexically specific: gonna, hafta, but *intenna (intend to), *lufta (love to), *meanna (meant to). This suggests that contraction cannot be a general process. Instead, a fully lexical, traceless analysis of the above contrast is available if wanna is thought of as a syntactically underived subject-control verb that does not license an object. Pullum is able to explain all of the phenomena previously discussed in the literature, in addition to data distinguishing his proposal from others that have been advanced. Turning to positive evidence against traces, a strong argument in favor of their abolition comes from data involving extractions from coordination, first discussed by Sag (2000) (see also Bouma et al. 2001). The well-known Coordinate Structure Constraint requires that each conjunct be affected equally in extractions from conjoined phrases; in particular, extraction must apply in an “across-the-board” fashion. This straightforwardly explains the ungrammaticality of (33):

(33) *Whoi did you see [ i and Kim]?

However, as Sag points out, examples such as the following are also ungrammatical, even though here, the extraction affects each conjunct in a parallel fashion:

(34) a. *Whoi did you see [ i and a picture of i]?

     b. *Which studenti did you find a picture of [a teacher of i and i]?

     c. *Whoi did you compare [ i and i]?

The pertinent generalization is that no conjunct can consist of an extraction site with no other material. This “Conjunct Constraint” has to be stipulated in addition to the across-the-board

condition of the Coordinate Structure Constraint. In an analysis without traces, however, this additional stipulation is unnecessary. In a coordinated structure, the conjuncts must be syntactic constituents, and all syntactic constituents (in such an approach) must have phonological content. Together with the elimination of such inaudibilia as pro (or empty relativizers, as discussed in the next subsection), the abolition of traces from syntactic representations is a further step toward reducing syntactic abstractness and the complexity of syntactic representations, i.e. the number of nodes, phonologically empty elements, and derivational relationships within syntactic trees. This has been made possible by the fact that a “word” in HPSG is not an isolated bundle of information, but instead is part of a highly articulated network of lexical generalizations. Thus, the same lexical element (“lexeme”) is typically linked to a set of different ways of realizing its argument structure, which obviates the need for traces or other empty elements. A further, and to some extent complementary, way of reducing the complexity of syntactic structures themselves is to adopt a more articulated inventory of ways in which elements can be put together syntactically, by adopting a “constructional” approach to syntactic licensing. This issue is the topic of the next subsection.
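How the Conjunct Constraint falls out under these assumptions can be sketched in a toy model (the class and function names are ours, not a published fragment): a gap is a synsem object, not a sign, and only signs, which necessarily carry phonology, can serve as conjuncts.

```python
from dataclasses import dataclass

@dataclass
class Synsem:
    cat: str

@dataclass
class GapSynsem(Synsem):
    """A non-canonical synsem: an extracted dependent. Crucially, it has
    no phonology attribute at all -- it is not a constituent."""

@dataclass
class Sign:
    """An overt constituent: a synsem plus phonology."""
    phon: list
    synsem: Synsem

def coordinate(conjuncts):
    # Conjuncts must be Signs; since every Sign has phonology, a bare
    # extraction site can never be a conjunct -- no extra stipulation.
    if not all(isinstance(c, Sign) for c in conjuncts):
        raise TypeError("conjuncts must be overt constituents")
    return Sign(phon=sum((c.phon for c in conjuncts), []),
                synsem=conjuncts[0].synsem)

kim = Sign(["Kim"], Synsem("NP"))
pat = Sign(["Pat"], Synsem("NP"))
gap = GapSynsem("NP")            # a gap is a synsem, not a Sign

coordinate([kim, pat])           # fine (coordinating word omitted here)
# coordinate([gap, kim])         # TypeError: *[who did you see [_ and Kim]]
```

The ill-formedness of a gap-only conjunct is thus a type error in the model, not a separately stipulated constraint.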

4.2 The Constructional Turn

One of the aspects of GPSG that HPSG sought to improve upon was the large number of ID rules that had to be posited in GPSG. A consequence of the GPSG approach was that it was difficult to express a natural correspondence between the semantic valence of a verb and its syntactic frame of occurrence. Such linking relations between syntactic valence and semantic argument structure are stated much more transparently if syntactic valence is represented directly as a property of the lexical element (cf. for instance Davis 2001). Also, as discussed at the beginning of §2, the lexicalization of combinatorial properties reduces the number of ID schemas needed to execute the instructions encoded in lexical descriptions. Even though this view is initially attractive, its shortcomings become apparent when one considers constructions whose combinatorial potential is not obviously reducible to the properties of particular lexical elements. A case in point is relative clauses. Since they are modifiers, their combination with a modified noun is licensed by means of the head feature MOD. Yet in constructions like that-less relative clauses in English, no lexical element signals this modifier status.18

(35) This is the woman I love.

The solution proposed in Chapter 5 of Pollard and Sag 1994 preserves the idea that combinatorial properties are lexically derived, but at the price of introducing phonologically empty syntactic elements. The result is a functional head analysis in which an empty relativizer (“Rel”) takes a clausal complement with a gap. As the head of the whole relative clause, the relativizer contributes its MOD specification to its projection (“Rel′”), which in turn licenses the combination with the modified noun woman.

(36)

[Tree diagram: an N′ dominating the noun woman (N′, tagged 1) and a Rel′ with MOD N′ 1; the Rel′ dominates the phonologically empty relativizer Rel (∅), bearing MOD N′ 1, and an S with SLASH {NP 1}, I love.]

A rather different approach to relative clauses is pursued by Sag (1997). Instead of associating the internal and external properties of relative clauses with particular lexical elements, Sag treats relative clauses as grammatical entities in their own right. Thus, relative clauses are considered to be CONSTRUCTIONS in the sense of Construction Grammar (Fillmore et al. forthcoming; Zwicky 1994; Goldberg 1995), that is, pairings of meaning and formal syntactic properties that cannot be expressed at the level of smaller components. From such a construction-based perspective, the example in (35) receives an analysis of the kind sketched in (37):

(37) [Tree diagram: an N′ dominating the noun woman (N′, tagged 1) and an S of type bare-rel-cl, with MOD N′ 1 and SLASH {NP 1}, dominating I love.]

Here bare-rel-cl is a particular kind of relative clause, a subtype of phrase, with properties that set it apart from other kinds of relative clauses, in particular the absence of any initial wh-filler. The constructional perspective has led to a reevaluation of the division of labor between the lexicon and the supra-lexical units recognized by the grammar. HPSG analyses have habitually focused on lexical description and the hierarchical organization of words, and there has been a tendency to provide lexical treatments of the grammatical aspects of phrases and sentences whenever possible. In contrast, given a fuller model of the hierarchy of phrases, a simpler conception of the lexicon, free of aspects better treated at the constructional level, becomes possible. In addition to relative clauses, two areas in which a construction-based approach has led to significant advances are English interrogative constructions (Ginzburg and Sag 2000, see also §5.1.2) and German clause types (Kathol 1997, 2000). The latter combines the construction-based perspective with the linearization framework outlined in §3.2. As a result, German clausal constructions can be defined entirely by referring to their topological structure, abstracting away from combinatorial licensing. In the case of root declarative clauses this makes it possible to

have a uniform description of the construction, whether the initial element is a filler (as in (29) above) or some other element, such as the positional expletive es, illustrated in (38):

(38) root-decl = (v2 ∧ declarative)
     DOM ⟨ [vf.: PHON ⟨es⟩, EXPL], [l. s.: PHON ⟨sah⟩, V[FIN]], [mf.: PHON ⟨niemand⟩, NP[NOM]], [mf.: PHON ⟨d. Blume⟩, NP[ACC]] ⟩

Whether the first element is a filler or an expletive, both (29) and (38) satisfy the constraints on root declarative clauses, which are defined as the conjunction of constraints on v2 clauses and declarative clauses, cf. (39) (Kathol 2000:147–148):

(39) a. v2 → [ …|HEAD 1, DOM ⟨ vf, [l. s.: …|HEAD 1], … ⟩ ]

     b. declarative → [ …|HEAD 1, …|MODE proposition, DOM ⟨ [ …|HEAD 2 ], … ⟩, 1 ≠ 2 ]

The first constraint requires that the verbal head occur in second position in the clausal domain, while the second states that the verbal head must not occur clause-initially and imposes propositional semantics on the entire clause.19 Next we turn to the question of how the various constructional constraints can be related to each other. The type-based formalism of HPSG allows the information in constructional definitions to be organized in terms of hierarchical inheritance.
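A toy rendering of how constraints like (39) can be checked against clausal order domains may make the mechanics concrete. The list-of-pairs encoding and all function names below are our own simplification, not an implementation of Kathol's formalization:

```python
FIELDS = ["vf", "ls", "mf", "rs"]   # Vorfeld, linke Satzklammer, ...

def well_formed(dom):
    """dom is a list of (field, phon) pairs; fields must follow the
    topological order, and vf and ls may each hold at most one element."""
    idx = [FIELDS.index(f) for f, _ in dom]
    one_each = all(sum(1 for f, _ in dom if f == x) <= 1
                   for x in ("vf", "ls"))
    return idx == sorted(idx) and one_each

def v2(dom, head):
    """(39a): the clausal head sits in the linke Satzklammer,
    preceded by exactly one Vorfeld element."""
    return len(dom) >= 2 and dom[0][0] == "vf" and dom[1] == ("ls", head)

def declarative(dom, head):
    """(39b): the clausal head is not the domain-initial element."""
    return bool(dom) and dom[0][1] != head

# A verb-second declarative like (29): satisfies both constraints,
# hence qualifies as root-decl = (v2 AND declarative).
v29 = [("vf", "die Blume"), ("ls", "sieht"), ("mf", "Lisa")]
assert well_formed(v29) and v2(v29, "sieht") and declarative(v29, "sieht")

# A dass-clause like (27): the complementizer fills ls, so adding the
# finite verb to ls as well yields an ill-formed domain.
dass = [("ls", "daß"), ("mf", "Lisa"), ("mf", "die Blume"),
        ("rs", "sieht")]
assert not well_formed(dass + [("ls", "sieht")])
```

The complementarity of complementizers and initial finite verbs thus follows from the one-element cap on the linke Satzklammer, with no constraint mentioning the two categories together.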

4.3 Construction Hierarchies

Construction-based approaches are sometimes criticized for being merely descriptive rather than explanatory, and hence for failing to reveal the underlying factors responsible for the patterns observed in the data. On the other hand, it must be recognized that reductionist approaches often only succeed by arbitrarily selecting a subset of the data to be explained (“core” vs. “periphery”). Moreover, in many purportedly reductionist analyses, constructional complexity is often simply hidden in the use of phonologically null elements, for which little or no empirical motivation is provided. Finally, critics of constructional approaches fail to realize that as a result of describing what is special and irreducible about a given entity in the grammar, we end up with an account of what properties a given construction SHARES with other elements of the grammar. Of central importance in this respect are multiple inheritance hierarchies. Such hierarchies are well known from the way that lexical information is organized in HPSG. For instance, a verbal form such as walks can intuitively be characterized in terms of at least two parameters of variation. The first is the lexical class or part of speech, which groups walks together with such forms as is, proved, and singing. The second is valence, which puts walks in the same class as sleeping, house, or abroad, cf. (40).

(40) [Hierarchy diagram: two cross-cutting partitions, valence (with subtypes transitive, intransitive, …) and part-of-speech (with subtypes verb, noun, preposition, …); the type intransitive-verb inherits from both intransitive and verb.]

Thus, once membership in these and other classes has been established for a given lexeme, only the lexically idiosyncratic properties need to be listed specifically. The use of multiple inheritance hierarchies extends naturally to objects of type phrase as well. For instance, the phrasal type bare-rel-cl in (37) above is simultaneously an instance of the phrase types non-wh-rel-cl and fin-hd-subj-ph, as shown in (41) (Sag 1997:443, 473):

(41) [Hierarchy diagram: phrase is cross-classified along the dimensions CLAUSALITY and HEADEDNESS. On the CLAUSALITY side, clause has the subtypes decl-cl, inter-cl, and rel-cl, with non-wh-rel-cl a subtype of rel-cl; on the HEADEDNESS side, hd-ph has the subtype hd-subj-ph, which in turn has the subtype fin-hd-subj-ph. The type bare-rel-cl inherits from both non-wh-rel-cl and fin-hd-subj-ph.]

The type non-wh-rel-cl classifies the construction as a particular subinstance of non-wh relative clauses (rel-cl), which is one way in which the ‘clausality’ of a phrase (its combinatorial and semantic properties) can be specified.20 The type fin-hd-subj-ph accounts for its internal composition as a finite subject-predicate construction (and, for instance, not as a filler-head construction, as in the case of wh-relative clauses). This in turn is an instance of a subject-predicate phrase (hd-subj-ph), which is ultimately related to the general type of headed phrases (hd-ph). While there are often residual properties that cannot be accounted for by stating what larger constructional classes a given entity inherits from, both constructions under consideration here—English bare relative clauses and German verb-second declaratives—are defined entirely by their supertypes. Thus, the constraints on phrases of type bare-rel-cl are simply the logical conjunction of the constraints on non-wh-rel-cl and fin-hd-subj-ph. The constructional hierarchy in (42) illustrates the same for the example in (38) above. Unlike HEADEDNESS in (41), which makes reference to the schema responsible for the combination, the subtypes of INT(ERNAL)-SYNTAX in (42) are defined in terms of topological structure (adapted from Kathol 2000:175).

(42) [Hierarchy diagram: finite-clause is cross-classified along the dimensions INT-SYNTAX and CLAUSALITY, with subtypes including root (with subtypes v2 and v1) and subord on the INT-SYNTAX side, and imp, wh, polar, decl, rel, and inter on the CLAUSALITY side; root-decl is a joint subtype of v2 and decl.]

Far from simply providing a list of grammatical constructions, the constructional approach strives to capture generalizations whenever possible. As a result, as Kathol (2000:176–177) points out, the various inheritance relationships that organize the constructions of a language into a complex web of dependencies of different kinds take the place of the representational complexity inherent in the abstract structures posited in transformational analyses. This organized repository is sometimes referred to as the “constructicon”, the counterpart of the lexicon for constructions.
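The idea that a fully supertype-defined construction like bare-rel-cl simply conjoins its supertypes' constraints can be mimicked with ordinary multiple inheritance. This is an illustrative sketch only; the constraint labels are informal stand-ins, not actual HPSG descriptions:

```python
class Phrase:
    @classmethod
    def constraints(cls):
        # A type's constraints are the conjunction (here: set union) of
        # the constraints declared anywhere in its supertype hierarchy.
        out = set()
        for c in cls.__mro__:
            out |= getattr(c, "local_constraints", set())
        return out

class RelCl(Phrase):          local_constraints = {"MOD N-bar"}
class NonWhRelCl(RelCl):      local_constraints = {"no wh-filler"}
class HdPh(Phrase):           local_constraints = {"Head Feature Principle"}
class HdSubjPh(HdPh):         local_constraints = {"subject-predicate"}
class FinHdSubjPh(HdSubjPh):  local_constraints = {"finite head"}

class BareRelCl(NonWhRelCl, FinHdSubjPh):
    local_constraints = set()   # fully defined by its supertypes

# bare-rel-cl = non-wh-rel-cl AND fin-hd-subj-ph, nothing more:
assert BareRelCl.constraints() == (NonWhRelCl.constraints() |
                                   FinHdSubjPh.constraints())
```

As in the constructicon, what is stated on BareRelCl itself is empty; everything it must satisfy is inherited from the two classificatory dimensions.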

5 Meaning in HPSG

As Halvorsen (1995) and Nerbonne (1992, 1995) argue at length, LFG- or HPSG-style constraint-based semantics has several advantages over approaches such as Montague grammar that assume a homomorphism between syntax and semantics. Constraint-based semantics allows much greater freedom in stating analyses at the syntax/semantics interface. The notion of the linguistic sign in HPSG makes it easy to formulate phonological and pragmatic constraints on meaning (cf. §5.2). In addition, constraint-based semantics seems to be particularly well suited for expressing semantic underspecification (cf. §5.1), and it allows the formulation of theories of combinatorial semantics that go beyond compositionality, a notion recently argued to be unnecessary, if not completely vacuous (Zadrożny 1994; Lappin and Zadrożny 2000). In the two subsections below, we will look at recent HPSG approaches to semantics and pragmatics, respectively.

5.1 Advances in Semantics

Recent years have witnessed increased interest in foundational semantic issues within HPSG. A number of studies have proposed various novel approaches to HPSG semantics, either extending the account of Pollard and Sag (1994) (cf. §5.1.1 and §5.1.2), or replacing it (cf. §5.1.3).

5.1.1 Scope and Recursive Modification

The standard HPSG approach to semantics presents two major flaws: the incorrect interaction of raising and scope, and a failure to account for recursive modification.

Raising and Scope The first problem is recognized in Pollard and Sag (1994, p. 328). The ambiguous sentences in (43) receive only a single reading under their analysis, namely, the wide scope readings in which the quantifiers a unicorn and each painting outscope the raising verbs.

(43) a. A unicorn appears to be approaching. (ambiguous)
     b. Sandy believes each painting to be fraudulent. (ambiguous)

The problem stems from the fact that, in Pollard and Sag (1994), a quantifier starts its life only at the surface position of the phrase to which it corresponds and from there it can only percolate upwards. Thus, in (43a), the quantifier cannot be in the scope of appears, even though it corresponds to the raised subject of approaching, which is in the scope of appears. The solution Pollard and Yoo (1998) propose is to make the quantifier corresponding to a raised constituent available at the ‘initial’ position, e.g. at the level of the embedded verb approaching in (43a). The quantifier can then percolate up and be retrieved either inside or outside the scope of appears. Below, we present an analysis proposed in Przepiórkowski (1997, 1998), which simplifies the analysis of Pollard and Yoo, while at the same time solving a number of problems, such as spurious ambiguity. Przepiórkowski’s analysis rests on the following assumptions: First, in order to treat raising examples such as (43), QSTORE must be present not at the level of sign, as in Pollard and Sag (1994), but at least at the level of synsem. If QSTORE appears on synsem objects, the quantifier corresponding to a unicorn in (43a) will be present in the QSTORE in the synsem of a unicorn (i.e. the subject of appears). Since appears is a raising verb, this QSTORE value is also present on the subject of approaching and it can therefore be retrieved within the scope of appears. In fact, on the basis of extraction examples such as (44), Pollard and Yoo (1998) argue that QSTORE should actually be appropriate for local objects.

(44) Five books, I believe John read.

Przepiórkowski (1997, 1998) goes further and argues that QSTORE should be part of a sign’s content:

(45)  content [ QSTORE set(quant) ]
      with subtypes:  psoa [ QUANTS list(quant) ],  nom-obj,  quant

Second, there is a new set-valued attribute appropriate for word only, namely NEW-QS. If a word introduces a quantifier, the NEW-QS set contains this quantifier; otherwise it is empty. For example, a partial specification of the indefinite a, assumed to be a quantifier, is:

(46)  word
      [ PHON ⟨a⟩
        SYNSEM|LOC [ CAT|HEAD det [ SPEC N′:[1] ]
                     CONT [2] quant [ DETERMINER exists
                                      RESTIND [1] ] ]
        ARG-ST ⟨⟩
        NEW-QS { [2] } ]

Third, quantifier retrieval is only allowed at the lexical level, as proposed in Manning et al. 1999. Przepiórkowski provides a constraint that ensures that words with nom-obj or quant content simply amalgamate the QSTORE values of their arguments and their own NEW-QS value, and the resulting QSTORE set propagates further up the tree in accordance with the standard Semantics Principle. A word with psoa content, on the other hand, can retrieve quantifiers from this set and move them to QUANTS (which contains the list of retrieved quantifiers in the order of their scope). Other quantifiers remain in QSTORE for retrieval by a higher lexical head. Przepiórkowski’s QSTORE amalgamation mechanism relies crucially on a distinction between selected and non-selected arguments. For example, a unicorn in (43a) is a selected argument of the lower verb (approaching), but not of the higher verb (appears). In this way, the quantifier is only introduced once, by the synsem of a unicorn at the level of the verb approaching. As desired, however, (43a) has exactly two possible readings: a unicorn is either retrieved by the word approaching, or it remains in QSTORE and is retrieved in the upper clause by the word appears.
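The two readings predicted by lexical retrieval can be reproduced with a small sketch (our own simplification; the verb and quantifier names are just labels). Each verbal head, from the most deeply embedded one upwards, may retrieve any subset of the current QSTORE into its QUANTS, and every quantifier must have been retrieved by the top of the clause:

```python
# Sketch of lexical quantifier retrieval: a quantifier enters QSTORE once,
# at the verb that selects its phrase; each head may retrieve any subset
# of the store; the store must be empty at the top of the clause.
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def readings(verbs_bottom_up, qstore):
    results = []
    def go(i, store, retrievals):
        if i == len(verbs_bottom_up):
            if not store:                     # all quantifiers retrieved
                results.append(tuple(retrievals))
            return
        for retrieved in subsets(store):      # lexical retrieval at this head
            go(i + 1, store - retrieved,
               retrievals + [(verbs_bottom_up[i], retrieved)])
    go(0, frozenset(qstore), [])
    return results

# (43a) "A unicorn appears to be approaching": the quantifier is introduced
# once, at the level of the lower verb, and either head may retrieve it.
rs = readings(["approaching", "appears"], {"a-unicorn"})
print(len(rs))   # two readings: narrow and wide scope of the quantifier
```

Retrieval at `approaching` gives the narrow-scope reading; leaving the quantifier in the store for `appears` gives the wide-scope reading, and no spurious third option arises.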

Recursive Modification Kasper (1997) notes that the original semantic theory of Pollard and Sag (1994) does not account for modifying phrases that contain modifiers of their own as, e.g. in (47).

(47) a. Bob showed us an [[apparently] simple] example.
     b. Congress reconsidered the [[[[very] obviously] unintentionally] controversial] plan.

According to the Semantics Principle of Pollard and Sag (1994), the adjunct daughter is the semantic head in a head-adjunct phrase, and it provides all of the semantic content of the resulting phrase. The CONTENT value of the modified daughter (the syntactic head)—in particular, the semantic relations encoded in the RESTR(ICTIONS) set—must therefore be incorporated into the CONTENT of the adjunct. This is taken care of in the lexical description of the modifier, which has access to the synsem of the modified element through its MOD value. This approach does not produce the correct analysis in cases of recursive modification. In (47), for example, apparently modifies simple, which means that the RESTR set of simple is added to that of apparently. However, the RESTR set of simple in turn includes the RESTR set of example. The entire NP ends up therefore with an incorrect interpretation in which the simple example is apparent, and not the actual reading in which the example is apparently simple. The problem is that there is no way for the embedded modifier apparently to pick out just the inherent semantics of simple while excluding the semantics of example: the value of RESTR

is an unstructured set. Kasper addresses this problem by encoding the “internal” content of the adjunct daughter in one part of its representation (in its MOD|ICONT value), and the overall content, incorporating the semantics of the modified element, in another part (MOD|ECONT). The Semantics Principle is revised to specify that the CONTENT of a head-adjunct phrase is shared with the ECONT value of the adjunct daughter. In the original Pollard and Sag analysis, modifiers must have different CONTENT values depending on the identity of the modified element or the syntactic context. For example, an attributive adjective like simple above has a CONTENT of type nom-obj, but a predicative adjective (as in This problem is apparently simple) has a CONTENT of type psoa. The adverb apparently exhibits exactly the same alternation in these examples, although there is nothing in its local context to motivate this. Kasper’s revised approach allows a more uniform representation of modifier meaning: the CONTENT of a modifier always encodes its inherent semantics (an object of type psoa). Assuming lexical specifications of potentially and controversial as in (48) and (49), respectively, the structure of the phrase potentially controversial plan according to Kasper’s analysis is schematically described in (50).

(48)  word
      [ PHON ⟨potentially⟩
        SS|LOC [ CAT|HEAD adv
                   [ MOD mod [ ARG [ HEAD adj
                                     CONT [5] psoa ]
                               ICONT [3] psoa
                               ECONT [3] ] ]
                 CONT [ RELN potential
                        ARG [5] ] ] ]

(49)  word
      [ PHON ⟨controversial⟩
        SS|LOC [ CAT|HEAD adj
                   [ PRD −
                     MOD mod [ ARG|CONT nom-obj [ INDEX [1]
                                                  RESTR [2] ]
                               ICONT [3]
                               ECONT nom-obj [ INDEX [1]
                                               RESTR [2] & [3] ] ] ]
                 CONT [ RELN controversial
                        INST [1] ] ] ]

(50)  [ PHON ⟨potentially controversial plan⟩
        CONT [7] ]

        adjunct daughter:
        [ PHON ⟨potentially controversial⟩
          MOD [ ARG [9]
                ICONT [3]
                ECONT [7] [ INDEX [1]
                            RESTR [2] & [3] ] ]
          CONT [3] ]

            adjunct daughter:
            [ PHON ⟨potentially⟩
              MOD [ ARG [8]
                    ICONT [3]
                    ECONT [3] ]
              CONT [3] [ RELN potential
                         ARG [5] ] ]

            head daughter:
            [8] [ PHON ⟨controversial⟩
                  HEAD [4]
                  CONT [5] [ RELN controversial
                             INST [1] ] ]

        head daughter:
        [9] [ PHON ⟨plan⟩
              HEAD [4]
              CONT [ INDEX [1]
                     RESTR [2] { [ RELN plan, INST [1] ] } ] ]

The analysis assigns the correct meaning to the NP: ‘x : plan′(x) & potential′(controversial′(x))’. The adverb potentially only modifies the inherent semantic content of controversial, and the semantics of the entire AdjP potentially controversial is combined with the semantics of the modified noun plan. Another solution to the problem of recursive modification is to abandon the idea of a semantic head that is solely responsible for the propagation of semantic content in head-adjunct phrases. In Minimal Recursion Semantics (discussed in §5.1.3 below), all daughters contribute their content directly to the higher phrase. The embedding of the modified element’s CONTENT in the modifier’s CONTENT, which was the source of the original problem, is thus avoided.
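The effect of separating a modifier's inherent content from its combined content can be sketched as follows (a deliberately crude simulation with strings and sets, not feature structures; the ICONT and ECONT sides are modeled as two separate functions of our own devising):

```python
# Crude simulation of Kasper's ICONT/ECONT split: the embedded adverb
# applies only to the adjective's inherent content, while the head-adjunct
# phrase unions the adjunct's contribution with the head noun's RESTR set.

def apply_modifier(modifier_reln, inner_content):
    # ICONT side: target just the modifiee's inherent semantics.
    return f"{modifier_reln}({inner_content})"

def head_adjunct_restr(adjunct_contribution, head_restr):
    # ECONT side: combine with the modified element's restrictions.
    return head_restr | {adjunct_contribution}

controversial = "controversial(x)"                 # inherent content only
potentially_controversial = apply_modifier("potential", controversial)
np_restr = head_adjunct_restr(potentially_controversial, {"plan(x)"})
print(sorted(np_restr))
```

Because the adverb never sees the noun's restriction, the result is the intended ‘plan(x) & potential(controversial(x))’ rather than the incorrect ‘potential(plan(x) & controversial(x))’.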

5.1.2 Propositions, Facts, Outcomes and Questions

Ginzburg and Sag (2000) extend previous HPSG approaches to semantics by considering the content of illocutionary acts other than assertions. They propose a type message for the semantic content of clauses, with two immediate subtypes propositional and question. Ginzburg and Sag distinguish three types of propositional objects: proposition, fact, and outcome. Proposition is the semantic type of the complement of predicates such as believe, assert, assume, deny or prove, so called true/false predicates (“TF predicates”). Such predicates, unlike factive predicates, e.g. know or discover, can only occur with nominal complements of which truth can be predicated:

(51) a. #Jackie believed/doubted/assumed... Bo’s weight / my phone number.
     b. Jackie knows/discovered Bo’s weight / my phone number.

Moreover, TF predicates treat proposition-denoting complements purely referentially, in the sense of Quine (1968):

(52) a. Substitutivity:
        The Fed’s forecast was that gold reserves will be depleted by the year 2000.
        Brendan believes/denies... the Fed’s forecast.
        Hence, Brendan believes/denies... that gold reserves will be depleted by 2000.
     b. Existential Generalization:
        Brendan believes/denies... that gold reserves will be depleted by the year 2000.
        Hence, there is a claim/hypothesis/prediction that Brendan believes/denies...

On the other hand, factive predicates do not seem to treat proposition-denoting complements purely referentially, e.g.:

(53) Substitutivity:
     The Fed’s forecast was that gold reserves will be depleted by the year 2000. (The Fed’s forecast is true.)
     Brendan discovered/was aware of the Fed’s forecast.
     IT DOES NOT FOLLOW THAT Brendan discovered/was aware that gold reserves will be depleted by 2000.

This suggests that the denotation of the complement of a TF predicate (i.e. a proposition) is different from that of the complement of a factive predicate (i.e. a fact or, more generally, a possibility). Another difference between facts and propositions is that only the former can enter into causal relations:

(54) a. The fact that Tony was ruthless made the fight against her difficult.
     b. The possibility that Glyn might get elected made Tony’s hair turn white.
     c. #The claim/hypothesis/proposition that Tony was ruthless made the fight against her difficult.

On the other hand, truth can only be predicated of propositions, not of facts (or possibilities):

(55) a. #The fact that Tony was ruthless is true.
     b. The claim/hypothesis/proposition that Tony was ruthless is true/false.

Apart from complements of factive verbs, facts are the content of the illocutionary acts of reminding and exclaiming, e.g.:

(56) a. Laurie: Why don’t the vendors here speak Straits Salish?
        Bo: We’re in New York City for Pete’s sake.

     b. Bo reminded Laurie (of the fact) that they were in New York City.

It is considerations like these that lead Ginzburg and Sag (2000) to introduce proposition and fact as distinct subtypes of propositional. Another subtype of propositional that they distinguish is outcome, the type of imperative clauses, inter alia. What all these propositional contents have in common is their internal structure, which involves a situation (the value of the attribute SIT) and a state of affairs (the value of the attribute SOA), the latter corresponding to Pollard and Sag’s 1994 parametrized state of affairs (psoa). On the other hand, questions, which correspond to the content of clausal complements of predicates such as wonder and ask, are represented as propositional abstracts, with the relevant notion of abstraction being Aczel and Lunnon’s 1991 “simultaneous abstraction”. In terms of HPSG feature geometry, questions are messages with two new attributes: PARAMS, whose value is the set of (abstracted) parameters, and PROP, with values of type proposition. This is summarized below.

(57)                  message

        propositional                     question
        [ SIT  situation                  [ PARAMS  set(parameter)
          SOA  soa ]                        PROP    proposition ]

        proposition     fact     outcome

The following examples illustrate the semantic representation of a simple declarative clause and a simple yes/no interrogative clause within Ginzburg and Sag’s 2000 approach. Note that the value of PARAMS in (59) is the empty set, corresponding to the simultaneous abstraction of zero parameters in the case of yes/no questions, and that background assumptions are represented as facts.

(58) Brendan left.

SS|LOC [ CONT proposition
           [ SIT s0
             SOA soa [ QUANTS ⟨⟩
                       NUCL leave-rel [ LEAVER [1] ] ] ]
         CONX|BKGRND { fact
           [ SIT s1
             SOA soa [ QUANTS ⟨⟩
                       NUCL name-rel [ NAMED [1]
                                       NAME Brendan ] ] ] } ]

(59) Did Brendan leave?

SS|LOC [ CONT question
           [ PARAMS { }
             PROP proposition
               [ SIT s0
                 SOA soa [ QUANTS ⟨⟩
                           NUCL leave-rel [ LEAVER [1] ] ] ] ]
         CONX|BKGRND { fact
           [ SIT s1
             SOA soa [ QUANTS ⟨⟩
                       NUCL name-rel [ NAMED [1]
                                       NAME Brendan ] ] ] } ]

See Ginzburg and Sag 2000 for further details and explication in terms of Situation Theory, as well as for extensive application of this approach to an analysis of English interrogatives.21

5.1.3 Beyond Situation Semantics

The extensions of HPSG semantics presented in the two preceding subsections are exactly that: extensions of the standard Pollard and Sag (1994) HPSG semantic theory, which was inspired by Situation Semantics. The last decade has also witnessed numerous proposals for integrating other approaches to semantics into HPSG. These proposals can be classified into two overlapping categories: the first category comprises analyses that replace standard HPSG semantics with a version of predicate logic; the second contains HPSG analyses of underspecified meaning. We will present these two classes of approaches in turn below.

Predicate Logics To the best of our knowledge, the first proposal to use a predicate logic as the semantic language of HPSG is that of Nerbonne (1992, 1993): he shows how to encode the language of generalized quantifiers in typed feature structures and provides a treatment of scope ambiguities within this encoding. A more recent proposal of this type is made by Richter and Sailer (1999a, 1999b), who show in technical detail how the higher order type theoretic language Ty2 can be used as the semantic object language of HPSG. In this model, the value of CONTENT is of type me (meaningful expression), which introduces the attribute TYPE for identifying either atomic-types (entity, truth, w-index) or complex-types (with two type-valued attributes, i.e. IN and OUT). The subtypes of me include variable, constant, application, abstraction, equation, negation, logical-constant and quantifier, which can have further subtypes and introduce their own attributes. The following example shows how lambda abstraction and function application are encoded in Richter & Sailer’s system.

(60) a. λx_e.professor′⟨e,t⟩(x_e)

     b. abstraction
        [ TYPE c-type [ IN  [1] entity
                        OUT [2] truth ]
          VAR [3] var [ TYPE [1] ]
          ARG application
                [ FUNC professor′ [ TYPE c-type [ IN  [1]
                                                  OUT [2] ] ]
                  ARG [3] ] ]

In this approach, combinatorial semantics is particularly simple: the CONTENT value of a phrase is always the result of functional application of the CONTENT value of one daughter to the CONTENT value of the other daughter (with additional applications of β-reduction, as needed). Richter and Sailer also introduce a lexical rule for type shifting.22

(61) [ word
       SYNSEM|LOC|CONTENT λx1 ... λxi ... λxn.φ ]
     =⇒
     [ word
       SYNSEM|LOC|CONTENT λx1 ... λXi ... λxn.Xi(λxi.φ) ]

With this type shifting lexical rule in hand, one of the two meanings of, say, Someone loves everyone can be derived by type shifting the basic meaning of love (given in (62a)) to (62b) and, subsequently, to (62c), followed by ordinary functional application, as indicated in (63).

(62) a. λyλx.love′(x, y)
     =⇒ (61)
     b. λyλX.X(λx.love′(x, y))
     =⇒ (61)
     c. λY λX.Y(λy.X(λx.love′(x, y)))

(63) ∀y∃x.love′(x, y)
       someone:         λQ∃x.Q(x)
       loves everyone:  λX∀y.X(λx.love′(x, y))
         loves:         λY λX.Y(λy.X(λx.love′(x, y)))
         everyone:      λP∀y.P(y)

Note that in this approach, such scope ambiguities result from different placements of the new variable introduced by the Quantifier Raising rule (61): if λY were placed after λX in (62c), the opposite scoping would result (i.e. ∃x∀y.love′(x, y)). Richter and Sailer (1999a, 1999b) provide extensive discussion of their proposals for HPSG semantics and show how it can be used to analyze Negative Concord in French and Polish.
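The type-shifted derivation in (62)–(63) can be replayed directly with closures (a sketch under the simplifying assumption that formulas are built as strings, with ASCII `forall`/`exists` standing in for the quantifier symbols):

```python
# Replaying (62)-(63): the doubly type-shifted transitive verb combines
# first with the object quantifier, then with the subject quantifier.

love = lambda x, y: f"love({x},{y})"

# (62c): lY lX . Y(ly . X(lx . love(x, y)))
loves_shifted = lambda Y: lambda X: Y(lambda y: X(lambda x: love(x, y)))

someone  = lambda Q: f"exists x.{Q('x')}"   # lQ Ex.Q(x)
everyone = lambda P: f"forall y.{P('y')}"   # lP Ay.P(y)

# Functional application as in the derivation tree for (63):
reading = loves_shifted(everyone)(someone)
print(reading)   # forall y.exists x.love(x,y)
```

The variable introduced last (here λX, filled by someone) ends up innermost, which is exactly why reordering the abstractions in (62c) would yield the opposite scoping.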

Underspecification One of the first discussions of the proper representation of semantic underspecification in HPSG is Nerbonne (1992, 1993), which shows how underspecified descriptions of meanings can correctly denote semantic objects corresponding to fully specified meanings. For example, although the grammar leaves open which quantifier outscopes the other in Everybody loves somebody, each object corresponding to the content of this sentence is disambiguated one way or the other. This is, in fact, also the approach adopted in Pollard and Sag (1994), Pollard and Yoo (1998), and other works mentioned above. According to all these works, although descriptions of (semantic) objects are underspecified with respect to meaning, the objects themselves correspond to fully resolved semantic representations. Recent years have witnessed a number of proposals for truly underspecified semantics for HPSG, i.e. a semantics in which both descriptions and objects correspond to underspecified meanings. According to such approaches, the semantic object described (generated) by an HPSG grammar and corresponding to, say, Everybody loves somebody does not resolve the relative scope of the two quantifiers. An extra-grammatical resolution mechanism steps in to provide disambiguated readings, when necessary. Some logics, such as Underspecified Discourse Representation Theory (UDRT; Reyle 1993), define truth conditions for underspecified semantic representations and provide a proof theory which makes it possible to draw inferences from such partial semantic structures. In fact, the first (to the best of our knowledge) proposal for underspecified semantics for HPSG simply embeds UDRT into HPSG. Frank and Reyle (1992, 1995, 1996) use this formalism in their analysis of interactions between word order and quantifier scope, and of the collective/distributive readings of NPs.
A related approach, which has gained greater attention from HPSG practitioners, is Minimal Recursion Semantics (MRS; Copestake et al. 2006), an underspecified version of predicate calculus. Although originally devised as a computationally-oriented semantic formalism, it is also increasingly adopted in theoretical linguistic work. One of the principal characteristics of MRS is that the grammar does not determine scope relations; they are resolved at a post-grammatical resolution stage (if at all). The grammar itself generates semantic representations such as (64), corresponding to Every dog chased some cat:

(64)  mrs
      [ HOOK [ LTOP [0]
               INDEX e ]
        RELS ⟨ every [ LBL [1], ARG0 x, RSTR l, BODY m ],
               dog   [ LBL [2], ARG0 x ],
               chase [ LBL [3], ARG0 e, ARG1 x, ARG2 y ],
               some  [ LBL [4], ARG0 y, RSTR n, BODY p ],
               cat   [ LBL [5], ARG0 y ] ⟩
        HCONS ⟨ qeq [ HARG [0], LARG [3] ],
                qeq [ HARG l, LARG [2] ],
                qeq [ HARG n, LARG [5] ] ⟩ ]

The main attribute of an mrs structure is RELS, whose value is a bag of elementary predications (EPs), consisting of a semantic predicate with an identifying label (an object of type handle) and associated arguments. These arguments take either semantic variables or handles

as values. In (64), the variable-valued arguments (ARG0, ARG1, ARG2) are properly identified with the entity and event variables x, y, and e. The appropriate coindexations are determined by lexical and syntactic constraints. The scopal EPs for every and some, on the other hand, are more complicated: the two handle-valued arguments specify the restriction and body (or scope) of the quantifier. The values of these arguments must eventually be equated with the LBL value of some EP, but this linking is not fully determined by syntactic and semantic composition rules. We can see this in (64), where the values l, m, n, and p are not identified with any EP labels, although some handle constraints are introduced in HCONS. The authors assume that the resolution of handle-argument values is done by an extra-grammatical processing module.23 The two ways of disambiguating the example in (64) are shown in (65)–(66) below; they can be represented in the traditional notation as in (65′)–(66′).

(65)  as (64), but with the handle identifications l = [2], m = [4], n = [5], p = [3]
      (every outscopes some)

(66)  as (64), but with the handle identifications l = [2], m = [3], n = [5], p = [1]
      (some outscopes every)

(66′) ∃y(cat′(y) ∧ ∀x(dog′(x) → chase′(x, y)))

Not all identifications of RSTR and BODY with EP labels lead to well-formed formulas. In a fully resolved MRS representation, all of the handles must form a tree (where the label of an EP immediately dominates the handles that appear as arguments in that EP) rooted at the LTOP (local top) handle, which corresponds to the EP (or conjunction of EPs) with the widest scope in the phrase. This tree condition is satisfied in the two scope-resolved MRSs above. On the other hand, for example, the handle m could not be equated with [1], and m and p could not both be equated with [3], because the resulting structures are not trees. The resolution of an underspecified MRS structure must also respect the constraints in HCONS. These are formulated as “equality modulo quantifiers” or “=q” constraints, which state that either (i) the HARG and LARG handles are identified, or (ii) if there are intervening quantifiers, the HARG handle must outscope the LARG handle. In both possible resolutions of the

underspecified MRS in (64), the constraints l =q [2] and n =q [5] are satisfied by handle identification, while [0] =q [3] is satisfied by handle outscoping. As a more interesting example, in a sentence like Kim thinks that Sue did not make it, not cannot outscope thinks, even though such a reading would be allowed by the well-formedness conditions alone. The unavailability of this interpretation is ensured by adding a constraint to HCONS requiring the ltop of the complement clause to outscope the handle of the negation. See Copestake et al. (2006) for further formal discussion and examples of HCONS constraints. An interesting variant of MRS is presented in Egg 1998 and used as a basis for a (syntactico-)semantic HPSG account of wh-questions. Richter and Sailer (1999c), following Bos (1995), show how, for any logical object language, a semantically underspecified version of that language can be defined as the semantic representation for HPSG, generalizing over previous HPSG approaches to underspecified semantics.
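The tree condition and the =q constraints can be made concrete with a small scope-resolution sketch for (64) (our own simplification: each handle is plugged by exactly one label, and the variable-binding requirement of full MRS is approximated by demanding that the chase EP, which uses both x and y, fall within both quantifiers' scopes):

```python
# Scope resolution for the MRS in (64).
# Labels: 1 every, 2 dog, 3 chase, 4 some, 5 cat.
# Holes: the top handle plus each quantifier's RSTR and BODY.
from itertools import permutations

HOLES   = ["top", "l", "m", "n", "p"]
LABELS  = [1, 2, 3, 4, 5]
GOVERNS = {1: ["l", "m"], 4: ["n", "p"], 2: [], 3: [], 5: []}
BODY    = {1: "m", 4: "p"}                   # each quantifier's BODY hole
QEQ     = [("top", 3), ("l", 2), ("n", 5)]   # the HCONS of (64)

def reachable(plug, start):
    seen, stack = [], [start]
    while stack:
        lbl = stack.pop()
        if lbl in seen:
            return None                      # revisited label: not a tree
        seen.append(lbl)
        stack += [plug[h] for h in GOVERNS[lbl]]
    return set(seen)

def qeq_ok(plug, hole, target):
    lbl = plug[hole]
    for _ in range(len(LABELS) + 1):         # "equal modulo quantifiers"
        if lbl == target:
            return True
        if lbl not in BODY:
            return False
        lbl = plug[BODY[lbl]]                # descend through a quantifier's BODY
    return False

def well_formed(plug):
    if reachable(plug, plug["top"]) != set(LABELS):   # tree rooted at LTOP
        return False
    if not all(qeq_ok(plug, h, t) for h, t in QEQ):
        return False
    # approximate variable binding: chase must be in both quantifiers' scopes
    return all(3 in (reachable(plug, plug[BODY[q]]) or set()) for q in BODY)

solutions = [dict(zip(HOLES, perm)) for perm in permutations(LABELS)
             if well_formed(dict(zip(HOLES, perm)))]
print(len(solutions))   # 2
```

Enumerating all pluggings and filtering by the tree condition, the =q constraints, and the binding approximation leaves exactly the two resolutions corresponding to (65′) and (66′).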

5.2 Forays into Pragmatics

5.2.1 Information Structure

The HPSG framework is ideally suited for studying interactions between various grammatical levels. Although the syntax-semantics interface has received the most attention, there has also been work on information structure (also called topic–comment, theme–rheme, new–given, or focus–ground structure), which is known to interact with syntax and prosody in interesting ways. The most influential approach is based on Vallduví’s 1992 account of information structure, further developed in Engdahl and Vallduví (1994), Engdahl and Vallduví (1996), Vallduví and Engdahl (1996), and Engdahl (1999). We will illustrate this approach with a simple example from Engdahl and Vallduví (1996). Consider the mini-dialogue in (67), where bold face corresponds to “B-accent” (L+H*), while SMALL CAPITALS correspond to “A-accent” (H*).

(67) A: In the Netherlands I got a big Delft china tray that matches the set in the living room. Was that a good idea?
     B: (Maybe.) The president [F HATES] the Delft china set. (But the first lady LIKES it.)

Vallduví (1992) assumes a 3-way partition of the information structure of sentences. First, the information conveyed by a sentence is split into new information (focus) and information already present in the discourse (ground). Second, ground is further subdivided into link (what the sentence is about, sometimes called topic) and tail. Under the assumption that every utterance contains new information, this leads to a four-way classification of utterances: all-focus (no ground), link-focus (no tail), focus-tail (no link) and link-focus-tail. The sentence in (67B) represents the link-focus-tail type. Engdahl and Vallduví (1994, 1996) and Engdahl (1999) propose that information structure be represented within the CONTEXT value of signs in the following way:

(68)  context
      [ INFO-STR info-struc
          [ FOCUS set(content)
            GROUND ground [ LINK set(content)
                            TAIL set(content) ] ] ]

(69)  [ word, PHON|ACCENT A ]  ↔  [ word, CONT [1], CONX|INFO-STR|FOCUS { [1] } ]

(70)  [ word, PHON|ACCENT B ]  ↔  [ word, CONT [1], CONX|INFO-STR|GROUND|LINK { [1] } ]

There are additional principles specifying how a phrase’s information structure is constrained by the information structure of its daughters. This leads to the following (much simplified) structure of (67B):

(71)  S
      [ INFO-STR info-struc [ FOCUS { [1] }
                              GROUND ground [ LINK { [4] }
                                              TAIL { [2] } ] ] ]

        NP  the president
        [ PHON|ACCENT B
          CONT [4]
          INFO-STR info-struc [ GROUND|LINK { [4] } ] ]

        VP
        [ INFO-STR info-struc [ FOCUS { [1] }
                                GROUND|TAIL { [2] } ] ]

            V  HATES
            [ PHON|ACCENT A
              CONT [1]
              INFO-STR info-struc [ FOCUS { [1] } ] ]

            NP  the Delft china set
            [ CONT [2] ]

Note that this analysis simultaneously accesses and constrains various grammatical levels: prosody (PHON values) and pragmatics (CONX|INFO-STR values), but also semantics (CONT values) and constituent structure (DTRS values, represented here using tree notation). This account clearly illustrates the advantages of constraint-based theories, such as HPSG, over derivational theories, like Minimalism, where it is not clear how such an analysis, making simultaneous use of various levels of grammatical knowledge, could be stated.24 Recent HPSG analyses concerned with information structure in various languages include: Avgustinova (1997), Kolliakou (1998), Kolliakou (1999), Alexopoulou (1998) and Przepiórkowski (1999b).
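The accent-driven principles in (69)–(70) amount to a simple mapping from accented words to information-structure slots, which can be sketched as follows (a simplification; routing unaccented ground material to TAIL is our own assumption, standing in for the phrasal projection principles):

```python
# Sketch of (69)-(70): a word's semantic contribution goes to FOCUS iff it
# bears the A-accent, and to LINK iff it bears the B-accent.

def info_structure(words):
    info = {"FOCUS": set(), "LINK": set(), "TAIL": set()}
    for content, accent in words:
        if accent == "A":
            info["FOCUS"].add(content)       # A-accent -> focus (69)
        elif accent == "B":
            info["LINK"].add(content)        # B-accent -> link (70)
        else:
            info["TAIL"].add(content)        # unaccented ground -> tail
    return info

# (67B): "The president HATES the Delft china set."
sentence = [("the president", "B"), ("hates", "A"),
            ("the Delft china set", None)]
print(info_structure(sentence))
```

The result reproduces the link-focus-tail articulation of (71): the president as link, HATES as focus, the Delft china set as tail.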

5.2.2 Beliefs and Intentions

Green (1994) and Green (2000) seek to spell out in further detail the kind of information normally represented by CONTEXT values and argue that this information does not correspond to the real world directly (as it apparently does, e.g. in Pollard and Sag (1994) and in (58)–(59) in §5.1.2 above), but rather describes speakers’ beliefs about the world and their intentions. Thus, Green (1994) reiterates arguments that “the relevant background propositions are not about objective aspects of any world, but rather are propositions about beliefs which the speaker supposes to be mutual” (p. 5); for example, sentence (72) is felicitous provided that both the speaker and the hearer believe the presupposition (that french fries are bad for Clinton), even if that presupposition is in fact false.

(72) Clinton realizes french fries are bad for him.

Moreover, Green (1994) argues that restrictions on INDEX values, normally taken to be a part of CONTENT (i.e. values of the attribute RESTRICTION) should rather be treated as beliefs about how referential expressions can be used, i.e. as parts of CONTEXT. This is because “as language users, we are free to use any word to refer to anything at all, subject only to the purely pragmatic constraint that [...] our intended audience will be able to correctly identify our intended referent from our use of the expression we choose” (p. 7). If expressions such as dog nevertheless have certain ‘standard’ or ‘normal’ meanings, this is due to the mutual belief of the speaker and the hearer as to what a normal belief about the referential use of dog is within a given language community. A partial simplified (‘naive’) version of the lexical entry for dog in this kind of framework would be (73).

(73)  [ PHON ⟨dog⟩
        SS|LOC [ CONT|INDEX [1]
                 CONX [ C-INDS [ SPEAKER [2]
                                 ADDRESSEE [3] ]
                        BKGRND { mutually-believe
                                   [ EXPERIENCER [2]
                                     STANDARD [3]
                                     SOA normally-believe
                                           [ EXPERIENCER English speakers
                                             SOA canis [ INST [1] ] ] ] } ] ] ]

(74) The milkshake claims you kicked her purse.

Here, milkshake can refer to whoever purchased the milkshake by virtue of the mutual belief of the speaker and the addressee that, within sales agents’ parlance, the thing purchased can designate the purchaser. This means that the BACKGROUND value of the sign corresponding to (74) contains the following mutual belief (in addition to the belief that milkshake normally refers to a milkshake and other beliefs):

(75)
      [ mutually-believe
        EXPERIENCER [2] (speaker)
        STANDARD    [3] (addressee)
        SOA [ normally-believe
              EXPERIENCER sales agents
              SOA [ rfunction
                    DEMONSTRATUM purchase
                    DESIGNATUM   purchaser ] ] ]

Green (2000) seeks to represent the illocutionary force of utterances within the value of CONTEXT|BACKGROUND25 as speakers’ intentions. For example, the illocutionary force of promising is analyzed as a speaker’s intention that the addressee recognize the speaker’s intention that the addressee believe that the speaker commits himself or herself to be responsible for the content of the promise:

(76) Illocutionary force of promising:

      [ C-INDS [ SPEAKER   [1]
                 ADDRESSEE [2] ]
        CONX|BKGRND { [ intend
                        EXPERIENCER [1]
                        SOA [ recognize
                              EXPERIENCER [2]
                              SOA [ intend
                                    EXPERIENCER [1]
                                    SOA [ believe
                                          EXPERIENCER [2]
                                          SOA [ commit
                                                EXPERIENCER [1]
                                                SOA [ responsible
                                                      THEME [1]
                                                      SOA psoa ] ] ] ] ] ] } ]

The grammatical principles giving rise to structures like (76) are not formulated explicitly. Green suggests that “it is not the sign which is the indicator of illocutionary intentions, but the act of uttering it”. A fuller model of speech acts is thus required in order to incorporate these proposals into HPSG grammars.

6 Issues in Morphosyntax

The interface between syntax and morphology has also received considerable attention in recent HPSG research. The original presentations of the framework in Pollard and Sag (1987) and (1994) did not address these kinds of issues in detail, but they did establish the Strong Lexicalist foundations of HPSG. Under this hypothesis, elements smaller than words (i.e. bound morphemes) are not manipulated in the syntax. There are many linguistic phenomena, however, that result from the interaction of syntax and morphology, and this section surveys a number of proposals for handling such phenomena in a way that is consistent with the lexicalist claim about the modularity of grammar.

6.1 Clitics

The elements described as ‘clitics’ in various languages are notoriously difficult to analyze precisely because they straddle the boundary between morphology and syntax. They can be characterized broadly as once fully independent words that have lost their autonomy in various ways; as this process continues, these elements may lose their syntactic word status, or disappear altogether. In this section we will present the analysis of French pronominal clitics proposed by Miller and Sag (1997). As they discuss in detail, the empirical facts confirm that French clitics are actually lexical affixes, rather than syntactic words. They provide a lexicalist account of clitic realization (as bound morphological elements), disproving earlier claims that such a treatment of Romance cliticization cannot be applied uniformly (Sportiche, 1996). In this analysis, French clitics are represented by non-canonical affix-synsem objects. The (partial) type hierarchy under synsem is as follows:

(77)         synsem
            /      \
        canon     noncan
                  /     \
               gap     affix

Non-canonical synsem elements on ARG-ST are not realized syntactically as valence elements.26 Instead, in the analysis of Miller and Sag, the presence of an object of type affix on ARG-ST is reflected in the morphological realization of the verb. Specifically, words are assumed to have a feature MORPH, whose values introduce three further features, STEM, I-FORM and FORM:

(78) word  FORM ...  MORPH I-FORM ...     STEM ...        The value of STEM corresponds to the morphological stem of the verb, I-FORM represents the inflected form of the verb before clitics are taken into account, while FORM values represent full inflected forms including any clitics affixed to the verb. For example, the 3rd person singular present tense indicative form of the lexeme LAVER ‘wash’ with its object realized as a 3rd person plural affix has the following MORPH value:

(79)
      [ FORM   les-lave
        I-FORM lave
        STEM   lav- ]

FORM values are derived from I-FORM values, taking into account HEAD and ARG-ST information, via the following constraint:

(80)
      word →  [ MORPH [ FORM   FPRAF([0], [1], [2])
                        I-FORM [0] ]
                SYNSEM|LOC|CAT [ HEAD   [1]
                                 ARG-ST [2] ] ]

If ARG-ST contains no clitics, the function FPRAF behaves like the identity function on its first argument, i.e. the value of FORM is identical to the value of I-FORM. But if there are clitics on ARG-ST, the FPRAF function encodes a complex constraint that produces the appropriate clitics in the correct positions with respect to each other and with respect to the verb. For example, in the case of an indicative verb with only one pronominal 3rd person plural accusative clitic on its ARG-ST, the FPRAF function adds the affix les in front of the value of I-FORM:

(81)
      [ MORPH [ FORM   FPRAF([0], [1], [2]) = les-[0]
                I-FORM [0] ]
        SYNSEM|LOC|CAT [ HEAD [1] [ verb
                                    VFORM indic ]
                         ARG-ST [2] ⟨ NP, [ affix
                                            CASE  acc
                                            INDEX 3pl ] ⟩ ] ]

The affix element on ARG-ST is not mapped to the verb’s COMPS list, so the resulting form (e.g., les-lave ‘washes them’) can function as a complete, COMPS-saturated VP. The real challenge for a lexicalist approach to Romance cliticization is the phenomenon known as ‘clitic climbing’, where clitics originating on one verb, such as laver in (82), are realized on a higher verb, such as the tense auxiliary avoir in (82).

(82)  Pierre les a   lavés.
      Pierre 3PL has washed
      ‘Pierre washed them.’

In order to deal with such cases, Miller and Sag (1997) assume an argument composition approach (cf. §2.3), where the higher verb does not subcategorize for a VP, but rather combines with the lexical verb and copies all of the arguments of this verb to its own ARG-ST list. For example, a schematic lexical entry for the auxiliary avoir is given in (83):

(83) AVOIR (tense auxiliary):

word  HEAD verb  synsem     verb   SS|LOC|CAT         ARG-ST h 1 ,  HEAD  VFORM past-p  i⊕ 2     LOC|CAT      V-AUX avoir              ARG-ST h 1 i⊕ 2              One consequence of this is that any clitics selected by the past participle will also be present on the ARG-ST of avoir. The constraint (80) ensures that these clitics are morphologically re-

alized on avoir. This constraint also applies to the part participle itself, but the function FPRAF (which has access to the HEAD | VFORM value) is defined so that clitics are never overtly real- ized on past participles. Clitic phenomena in Romance have inspired a great deal of research in HPSG. In addition to Miller and Sag (1997) and the references cited therein for French, see Monachesi (1993, 1999) for Italian, and Crysmann (1999a, 2000a) for European Portuguese. The Slavic languages exhibit a wider range of cliticization phenomena, including not only pronominal clitics that serve as arguments, but also verbal clitics that express tense and mood. Polish has received the most attention in HPSG work: see Kups´c´ (1999, 2000) for pronominal clitics, and a series of papers on auxiliary clitics (Borsley, 1999; Kups´c´ and Tseng, 2005; Crys- mann, 2006). See Avgustinova (1997) for clitics in Bulgarian, and Penn (1999b, 1999a) for an extensive treatment of second position clitics in Serbo-Croatian.
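The division of labor between the constraint in (80) and the function FPRAF can be sketched schematically in Python. The clitic spell-out table, the dictionary encoding of signs, and the feature names below are our own simplifications for illustration, not part of Miller and Sag’s actual formulation:

```python
# Toy sketch of the FPRAF constraint (a simplification of Miller & Sag 1997).
# Real HPSG signs are typed feature structures; dicts stand in for them here.

# Hypothetical spell-out table for a few 3rd-person accusative clitics.
CLITIC_FORMS = {("acc", "3sg.masc"): "le",
                ("acc", "3sg.fem"): "la",
                ("acc", "3pl"): "les"}

def fpraf(i_form, head, arg_st):
    """Map an inflected form (I-FORM) to a full form (FORM).

    If ARG-ST contains no affix-type arguments, act as the identity
    function; otherwise prefix the corresponding clitics.  Past
    participles never realize clitics overtly (they climb to the
    auxiliary via argument composition).
    """
    if head.get("VFORM") == "past-p":
        return i_form
    clitics = [CLITIC_FORMS[(a["CASE"], a["INDEX"])]
               for a in arg_st if a.get("TYPE") == "affix"]
    return "-".join(clitics + [i_form])

# les-lave 'washes them': the object is realized as a 3pl accusative affix.
arg_st = [{"TYPE": "canon"},  # subject, realized syntactically
          {"TYPE": "affix", "CASE": "acc", "INDEX": "3pl"}]
print(fpraf("lave", {"VFORM": "indic"}, arg_st))   # les-lave
```

With no affixes on ARG-ST, the function returns its first argument unchanged, mirroring the identity behavior described for (80).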

6.2 Mixed Categories

With its type hierarchies and the possibility of multiple inheritance, HPSG is particularly well-suited for analyzing mixed categories, i.e. categories that simultaneously share various properties of different major categories, such as verb and noun. Malouf (1998, 2000b) takes advantage of these mechanisms to provide an HPSG account of verbal gerunds in English (like (84a)–(84b), but not (84c)), well known for exhibiting mixed verbal and nominal properties.

(84)  a. Everyone was impressed by [Pat’s artfully folding the napkins]. (verbal POSS-ing gerund)
      b. Everyone was impressed by [Pat artfully folding the napkins]. (verbal ACC-ing gerund)
      c. Everyone was impressed by [Pat’s artful folding of the napkins]. (nominal gerund)

On the nominal side, verbal gerunds have a distribution similar to that of NPs, but not that of VPs or sentences; in particular, they can occur as complements of prepositions and as clause-internal subjects. On the verbal side, they project a VP-like structure. Thus, they take the same complements as the corresponding verbs would, including accusative NPs, and they are modified by adverbs, not by adjectives. Malouf (2000b) accounts for this behavior by postulating the (partial) type hierarchy for head in (85a) and the lexical rule (85b).

(85)  a.              head
                    /      \
                noun        verbal
               /    \      /  |   \
      common-noun   gerund  verb  adjective

      (gerund inherits from both noun and verbal)

      b. [ HEAD [ verb
                  VFORM prp ]
           VALENCE [ SUBJ  ⟨ [1] ⟩
                     COMPS [2]
                     SPR   ⟨ ⟩ ] ]
         ⇒
         [ HEAD gerund
           VALENCE [ SUBJ  ⟨ [1] NP ⟩
                     COMPS [2]
                     SPR   ⟨ [1] ⟩ ] ]

Since gerund is a subtype of noun, a gerund projection can occur anywhere an NP is selected for (just like the projection of a common noun). To account for the modification facts, we can assume that adverbs modify any verbal category, including gerunds, but adjectives only modify common nouns. Since the external argument of a gerund is, as indicated in (85b), both its subject and its specifier at the same time, gerund phrases can be either specifier-head constructions or subject-head constructions. More specifically, according to the type hierarchy of phrase assumed in Malouf (2000b), gerund phrases can be either of type nonfin-head-subj-cx, in which case the external argument receives accusative case (cf. (84b)), or of type noun-poss-cx, in which case it takes the genitive (cf. (84a)). Malouf also shows how this analysis accounts for the difference between POSS-ing and ACC-ing verbal gerunds with respect to the possibility of pied-piping with the external argument, cf. (86a).

(86)  a.  I couldn’t figure out [whose being late every day] Pat didn’t like __. (verbal POSS-ing gerund)
      b. *I couldn’t figure out [who(m) being late every day] Pat didn’t like __. (verbal ACC-ing gerund)

Languages with more morphology than English provide additional evidence for this approach to mixed categories. For instance, verbal nouns in Polish are verbal in that their argument structure is systematically related to that of the corresponding verb and, more importantly, in that they show both aspect and negation morphologically, just like ordinary verbs in Polish. On the other hand, verbal nouns are nominal in that they occur in positions reserved for NPs, they have (neuter) gender, decline for case and number, and can be modified by adjectives, just like ordinary nouns. Another mixed category in Polish is that of adjectival participles, which inflect for case and number, and modify nouns, just like other adjectives, but can inflect for negation and (to some extent) aspect, like verbs. They also pattern with verbs in the way they assign case to their arguments (e.g., genitive of negation, cf. Przepiórkowski 1999a). These mixed categories in Polish are more complex than English verbal gerunds in that they combine properties of different major categories at the same level; for example, verbal nouns display the internal structure and morphology of both nominal and verbal elements. This makes them ineligible for accounts, often proposed for English verbal gerunds, which posit a purely verbal internal structure, but a nominal outer layer to explain their external distribution (cf. Malouf 1998, 2000b for a review of such approaches). On the other hand, the multiple inheritance approach can be applied straightforwardly (Przepiórkowski, 1999a).
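The logic of the multiple-inheritance treatment can be mimicked with Python’s class system. This is purely illustrative (HPSG type hierarchies are declarative constraints, not object-oriented code), and the predicate names are our own:

```python
# Illustrative model of Malouf's head hierarchy (85a): multiple
# inheritance lets gerund count as both nominal and verbal.

class Head: pass
class Noun(Head): pass
class Verbal(Head): pass
class CommonNoun(Noun): pass
class Verb(Verbal): pass
class Adjective(Verbal): pass
class Gerund(Noun, Verbal): pass   # the mixed category

def can_fill_np_slot(h):
    # Selection: a gerund projection can occur anywhere an NP is selected.
    return isinstance(h, Noun)

def adverb_can_modify(h):
    # Adverbs modify any verbal category, including gerunds.
    return isinstance(h, Verbal)

def adjective_can_modify(h):
    # Adjectives modify common nouns only.
    return isinstance(h, CommonNoun)

g = Gerund()
print(can_fill_np_slot(g), adverb_can_modify(g), adjective_can_modify(g))
# True True False
```

The three boolean checks reproduce the distribution and modification facts discussed above: a gerund is selectable as an NP and modifiable by adverbs, but not by adjectives.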

6.3 Case and Case Assignment

In the original presentation of HPSG, case assignment was simply dealt with as part of lexical subcategorization requirements, with “no separate theory of case (or Case)” (Pollard and Sag, 1994, 30). It has since become clear that a (partially) syntactic theory of case assignment is needed after all. See Przepiórkowski (1999a, ch. 3) for a brief history of approaches to case assignment in HPSG and other frameworks. The most explicit proposal for HPSG is that of Heinz and Matiasek (1994). This approach consists of three parts. First, the type hierarchy does not simply enumerate the possible morphological cases (nom, acc, etc.) as subtypes of case; intermediate types are introduced to distinguish between lexical/inherent case, assigned directly in lexical entries, and structural case, assigned in the syntax.27 Heinz and Matiasek (1994) propose the following type hierarchy for case in German, which says that nominative is always structural, genitive and accusative are either structural or lexical, and dative is always lexical.

(87)                     case
                       /      \
            morph-case          syn-case
           /  |    |  \          /     \
        nom  gen  dat  acc  structural  lexical

        snom   sgen   sacc   lgen   ldat   lacc

Second, the lexical entries of predicates (verbs, nouns, etc.) are assumed to distinguish between structural and lexical arguments: only the latter have lexically specified case. For example, the German verbs unterstützen ‘support’ and helfen ‘help’ have the following ARG-ST (originally, SUBCAT) specifications:

(88)  a. unterstützen: [ARG-ST ⟨NP[str], NP[str]⟩]
      b. helfen:       [ARG-ST ⟨NP[str], NP[ldat]⟩]

The criterion for deciding whether an argument has structural or lexical case is the stability of the morphological case across syntactic configurations. For instance, the case of the second argument of unterstützen (i.e. its object) is unstable because it is accusative in the active voice but nominative in the passive, cf. (89), whereas the second argument of helfen is consistently dative, cf. (90).

(89)  a. Der     Mann unterstützt den     Installateur.
         the.NOM man  supports    the.ACC plumber
         ‘The man is supporting the plumber.’
      b. Der     Installateur wird unterstützt.
         the.NOM plumber      AUX  supported
         ‘The plumber is supported.’

(90)  a. Der     Mann hilft dem     Installateur.
         the.NOM man  helps the.DAT plumber
         ‘The man is helping the plumber.’
      b. Dem     Installateur wird geholfen.
         the.DAT plumber      AUX  helped
         ‘The plumber is helped.’

Similarly, the subject of most verbs, including unterstützen and helfen, has an unstable (i.e. structural) case, realized as nominative in ordinary subject-verb constructions, but as accusative in subject-to-object raising structures. Third, the resolution of structural case is determined by configurational constraints. For example, if the first argument of a finite verb has structural case and is realized locally (not inherited by another predicate), then it is morphologically nominative (snom). Similarly, if the second element of ARG-ST is structural and realized locally (via the COMPS list), its case is sacc.

This approach accounts nicely for data like (89)–(90), as well as more complex data involving so-called remote passivization in German (cf. Pollard 1994, Heinz and Matiasek 1994, as well as (25) above). An updated version of Heinz and Matiasek’s 1994 analysis is developed in Przepiórkowski (1999a), in order to overcome various technical and conceptual shortcomings. In particular, the configurational case-resolution constraints are replaced by strictly local non-configurational principles, so that the resulting analysis is compatible with current HPSG approaches to extraction and cliticization. At first sight, phenomena like the remote passive in German, where correct resolution of the structural case of an argument seems to crucially depend on its tree-configurational realization, present an obstacle to a non-configurational approach to case assignment. Przepiórkowski (1999a) shows, though, that it is only necessary to know whether a given argument is realized locally or inherited by a higher predicate. If this information is encoded for each element on ARG-ST (by means of a binary feature), the case assignment principles can be formulated strictly locally and non-configurationally. See Przepiórkowski (1999a) for a complete presentation of this approach, with an extensive examination of case assignment in Polish and other languages. See also Calcagno and Pollard (1997), Chung (1998b), Kupść (1999), Calcagno (1999), Meurers (1999a, 1999b) and Malouf (2000a) for other applications.
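The non-configurational reformulation can be pictured procedurally: each ARG-ST element carries its case specification plus a binary flag recording whether it is realized locally or raised, and the resolution principle then applies list-internally. The encoding below is a hypothetical sketch in the spirit of Heinz and Matiasek (1994) and Przepiórkowski (1999a), not their actual constraints:

```python
# Sketch of structural case resolution on a German finite verb's ARG-ST.
# Each argument records its case type ('str' for structural, or a lexical
# case like 'ldat') and whether it is raised to a higher predicate.

def resolve_case(arg_st):
    """Resolve 'str' (structural) case on a finite verb's ARG-ST."""
    resolved = []
    for i, arg in enumerate(arg_st):
        case, raised = arg["CASE"], arg["RAISED"]
        if case != "str" or raised:
            resolved.append(case)    # lexical case, or resolved higher up
        elif i == 0:
            resolved.append("snom")  # locally realized structural subject
        else:
            resolved.append("sacc")  # locally realized structural object
    return resolved

# unterstützen 'support': both arguments structural, both local.
print(resolve_case([{"CASE": "str", "RAISED": False},
                    {"CASE": "str", "RAISED": False}]))   # ['snom', 'sacc']

# helfen 'help': the lexical dative object keeps its case.
print(resolve_case([{"CASE": "str", "RAISED": False},
                    {"CASE": "ldat", "RAISED": False}]))  # ['snom', 'ldat']
```

A raised structural argument is simply passed through unresolved, which is how the sketch leaves remote-passive cases to the higher predicate.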

6.4 Agreement

Agreement phenomena involve morphosyntax, semantics, and pragmatics, and so it is not surprising that this is another domain in which the sign-based formalism of HPSG has yielded significant results. The central concept of the HPSG theory of agreement is the INDEX, which unifies some of the properties of constants and variables from logical formalisms. In the simplest cases an index is an abstract linguistic entity that is referentially linked to some object in the interpretation domain. Indices are also used with quantification, in which case they behave much like variables. Unlike constants and variables in logic, however, indices have an internal organization that reflects properties of the associated linguistic entities or referential objects. In English, this information includes number, gender, and person. This makes it possible to straightforwardly account for a number of agreement phenomena. For instance, if we assume that person/number/gender information is encoded on indices and that the relation between reflexives and their antecedents involves INDEX identity, then the distribution of forms in (91) follows immediately.

(91)  a. I saw {myself/*himself/*herself} in the mirror.
      b. He saw {*myself/himself/*herself} in the mirror.
      c. She saw {*myself/*himself/herself} in the mirror.

The need for indices to mediate between form and meaning has been challenged, for instance by Dowty and Jacobson (1988), who propose a strictly semantic approach to agreement relations of the kind illustrated in (92). However, as Pollard and Sag (1994) point out, a purely semantic approach runs into difficulties when several linguistic forms exist for referring to some entity. For instance, English allows pets to be referred to either with the neuter pronoun or by their natural gender. A strictly semantic approach predicts that both alternatives should always be available. While this is the case across sentences, cf. (92), within certain grammatical domains, the same pronoun must be chosen consistently, and switching gives rise to ill-formedness, cf. (93).

(92) That dog is so stupid, every time I see it I want to kick it. He’s a damn good hunter, though.

(93)  a. That dog is so ferocious, it even tried to bite itself/*himself.
      b. That dog is so ferocious, he even tried to bite himself/*itself.

Indices can be used to record aspects of the linguistic form used to introduce an entity into the discourse. In this case, such domain effects can be readily explained by simple structure sharing among indices. There are, however, cases where properties of the linguistic form need to be distinguished from properties of the referent itself. An illustration of such a situation in French is given in (94):

(94)  Vous êtes   belle.
      you  are-PL beautiful-SG.FEM
      ‘You are beautiful.’

Here, the subject vous with a single female as the intended referent is involved in two agreement relations. It triggers plural morphology on the verb êtes. At the same time, the predicative adjective exhibits feminine singular morphology. Whereas the first can straightforwardly be attributed to the number properties of the index of vous, Pollard and Sag (1994) argue that the singular marking of the adjective is a reflection of inherent semantic properties of the subject’s referent. Thus, we need to distinguish between the index per se and the conditions under which an index is referentially anchored to an entity of the world. The singular morphology on belle can be explained pragmatically as the result of using morphologically plural vous with a nonaggregate referent. The split between syntactic/semantic index-agreement and pragmatic agreement of the latter kind is illustrated in (95):

(95)
      ⟨vous⟩ [ INDEX      [PER 2nd, NUM pl, GEND fem]
               ANCH.COND. nonaggregate ]          êtes     belle

      vous / êtes  : index agreement (PER, NUM)
      vous / belle : index agreement (GEND)
      vous / belle : pragmatic agreement (NUM)

Finally, there exist cases of agreement that do not plausibly involve indices at all. Indices only carry information about number, person, and gender (as reflected by the slots in the pronominal paradigm). They do not encode case information. This is well-motivated because pronoun-antecedent relations typically allow case discrepancies (cf. (91)). But in many languages there is covariation of case within NPs, cf. the following data from German:

(96)  NOMINATIVE  ein   lieber Verwandter
      ACCUSATIVE  einen lieben Verwandten
                  ‘a dear relative’

Such cases of what Pollard and Sag (1994) call “case concord” are dealt with by assuming that adjectives have their own CASE feature whose value is constrained to be identical to that of the noun. Thus the potential of morphological variation of the head noun and the dependent adjective is directly reflected in their own independent CASE features. This contrasts with Pollard and Sag’s 1994 view of other agreement relations. In particular, subject-verb agreement is taken to be a reflection of the subcategorization requirements of the verb. For instance, English -s tells us that the verb constrains its subject to be 3rd person singular—there is no independent reflection of 3rd singular properties in the lexical description of walks. As Kathol (1999) argues, this position is somewhat unsatisfactory for constructions with no subject. Consider for instance the case of impersonal passives in German. Here, the passive auxiliary shows 3rd singular morphology, but it cannot be said to select a 3rd singular subject:

(97)  An jenem Abend   {wurde/*wurden}     viel gelacht.
      on that  evening was.3.SG/were.3.PL  much laughed
      ‘There was much laughter that evening.’

Such examples indicate that agreement as a relation between syntactic forms needs to be distinguished from cases where morphological form indicates some syntactic dependency. For this reason Kathol (1999) proposes that all inflecting lexical categories have a head feature MORSYN encoding aspects of their morphological form. The MORSYN value includes the attribute AGR, which groups together all morphosyntactic features that are, in principle, subject to covariation.28 For example, nouns, adjectives, and determiners are typically assumed to have case, gender, and number information in AGR, while verbs have person, number, and in some languages, gender (but never case). As a consequence, NP-internal agreement (between the noun and its adjectival modifiers) can be treated as sharing of all AGR features, not just CASE, as previously assumed by Pollard and Sag (1994). This is illustrated with the following example from Polish where there is NP-internal agreement between demonstratives, adjectives, and nouns involving case, number, and gender:

(98)  ten               duży             chłopiec
      this.NOM.SG.MASC  big.NOM.SG.MASC  boy.NOM.SG.MASC
      [AGR [1]]         [AGR [1]]        [ AGR [1] [ CASE   nom
                                                     GENDER masc
                                                     NUMBER sg ] ]

The assumption of AGR as a bundle of features participating in agreement allows for a greater differentiation in terms of which pieces of lexical information bear a systematic relation with morphology and semantics/pragmatics—and what kinds of mismatches are possible. A detailed study of these correlations and possible exceptions in Serbo-Croatian is undertaken by Wechsler and Zlatić (2001), who also highlight the role that declension classes play in determining agreement behavior. Wechsler and Zlatić distinguish four levels at which agreement-related information is pertinent. In addition to semantics (i.e. anchoring conditions), index, and concord, they propose that declension class should also be represented lexically. While declension class is not a direct parameter of covariation, the morphological shape determined by a declension class can nevertheless give rise to certain concord facts that are in apparent conflict with the semantically conditioned feature assignment (in terms of index features). Consider the following data from Serbo-Croatian:

(99)  a. det-e (‘child’)      declension class I (typically for masc. and neut. nouns)
                              concord: neuter sing;   index: neuter sing
      b. dec-a (‘children’)   declension class II (typically for fem. nouns)
                              concord: feminine sing; index: neuter plural

The plural deca (‘children’) is in declension class II, which is normally associated with feminine nouns. For the purposes of adjectival concord, this form behaves as if it were a feminine singular noun (100b):

(100) a. ovo          lepo              dete
         that.NEUT.SG beautiful.NEUT.SG child.NEUT.SG
         ‘that beautiful child’
      b. ova          lepa              deca
         that.FEM.SG  beautiful.FEM.SG  child.FEM.SG
         ‘those beautiful children’

Outside of the NP, more semantically-based principles take over, as shown by plural marking on the verb, cf. (101):

(101) Ta          dobra       deca     dolaze.
      that.FEM.SG good.FEM.SG children come.PAST.3.PL
      ‘Those good children came.’

In other cases, declension class mismatch has no bearing on the covariation of dependent elements. For instance, Steva (‘Steve’) is also inflected according to declension class II, but here the determiner and the adjective exhibit masculine agreement:

(102) a. Stev-a (‘Steve’)   declension class II (typically for fem. nouns)
                            concord: masculine sing; index: masculine sing
      b. Vratio   mi je  ovaj           ludi           Steva     violinu koju  sam
         returned me AUX this.NOM.M.SG  crazy.NOM.M.SG Steve.NOM violin  which AUX
         mu  pozajmio.
         him loaned
         ‘This crazy Steve returned to me the violin which I loaned him.’

The diversity of the data motivates the idea of declension class, concord, index, and semantics as four distinct parameters. The simplest cases of totally transparent covariation can be represented as in (103a). Where there is a split between morphology, NP-internal concord and NP-external covariation behavior—as with deca in (100, 101)—there is a misalignment between concord and index-based properties, as shown in (103b). Finally, if only the morphology is exceptional, as in (102), we have the situation represented in (103c).

(103) a. declension ⇔ concord ⇔ index ⇔ semantics
      b. declension ⇔ concord || index ⇔ semantics
      c. declension || concord ⇔ index ⇔ semantics

Wechsler and Zlatić’s 2001 conception of agreement as a multi-layer phenomenon based on default alignments between ‘modules’ successfully models the range of covariation phenomena, from the most familiar to the most exceptional.
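The layered view can be sketched as a record per lexeme with separate declension, concord, and index specifications: NP-internal targets copy concord features, while verbs copy index features. The data encoding below is our own illustration of Wechsler and Zlatić’s idea, not their formalization:

```python
# Illustrative sketch of Wechsler & Zlatić's (2001) agreement layers.
# NP-internal concord targets (determiners, adjectives) copy CONCORD
# features; finite verbs copy INDEX features.  Values follow the
# Serbo-Croatian data in (99)-(102).

DECA = {   # deca 'children': concord/index mismatch, pattern (103b)
    "declension": "II",
    "concord": {"gend": "fem",  "num": "sg"},
    "index":   {"gend": "neut", "num": "pl"},
}
STEVA = {  # Steva 'Steve': only the declension is exceptional, pattern (103c)
    "declension": "II",
    "concord": {"gend": "masc", "num": "sg"},
    "index":   {"gend": "masc", "num": "sg"},
}

def adjective_agr(noun):   # adjectives and determiners show concord features
    return noun["concord"]

def verb_agr(noun):        # finite verbs show index features
    return noun["index"]

print(adjective_agr(DECA))  # feminine singular -> ova lepa deca (100b)
print(verb_agr(DECA))       # neuter plural     -> dolaze 'come.PL' (101)
```

For deca the two functions return different feature bundles, reproducing the NP-internal versus NP-external split; for Steva they coincide, since only the declension class is exceptional there.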

7 Advances in Logical Foundations (RSRL)

Within the last 15 years or so, a number of different formalisms have been proposed for formalizing HPSG-style analyses, e.g., Kasper and Rounds (1986), King (1989) and Carpenter (1992). These formalisms often reflect the state of the art in HPSG theorizing at the time when they were developed and allow large parts of HPSG grammars to be encoded more or less straightforwardly. However, they lack mechanisms necessary to encode other parts of HPSG analyses, mainly those involving so-called relational constraints and quantification. As Richter et al. (1999) and Richter (2000) point out, analyses making implicit or explicit reference to such mechanisms abound in the HPSG literature. One famous case in point is Pollard and Sag’s 1994 Binding Theory, cited in (104) below.

(104) The Binding Theory (Pollard and Sag, 1994, 401):
      Principle A: A locally o-commanded anaphor must be locally o-bound.
      Principle B: A personal pronoun must be locally o-free.
      Principle C: A nonpronoun must be o-free.

This principle relies on notions such as (local) o-command, (local) o-binding and (local) o-freeness, which in turn rely on the notion of obliqueness. Relevant definitions are cited below (Pollard and Sag, 1994, 401):

(105) a. One synsem object is more oblique than another provided it appears to the right of the other on the SUBCAT list of some word.
      b. One referential synsem object locally o-commands another provided they have distinct LOCAL values and either (1) the second is more oblique than the first, or (2) the second is a member of the SUBCAT list of a synsem object that is more oblique than the first.
      c. One referential synsem object o-commands another provided they have distinct LOCAL values and either (1) the second is more oblique than the first, or (2) the second is a member of the SUBCAT list of a synsem object that is o-commanded by the first, or (3) the second has the same LOCAL|CATEGORY|HEAD value as a synsem object that is o-commanded by the first.
      d. One referential synsem object (locally) o-binds another provided it (locally) o-commands and is coindexed with the other. A referential synsem object is (locally) o-free provided it is not (locally) o-bound. Two synsem entities are coindexed provided their LOCAL|CONTENT|INDEX values are token-identical.

It is clear that the definitions in (105) are really definitions of relations. For example, according to (105a), two synsem objects x and y stand in the more oblique relation provided there exists a word w such that x is to the right of y on w’s SUBCAT list. Similarly, according to (105b), two objects x and y stand in the local o-command relation if and only if both have LOCAL|CONTENT|INDEX values of type ref, their LOCAL values are not token-identical and, moreover, either y and x stand in the more oblique relation, or there exists a synsem object s such that y is a member of s’s LOCAL|CATEGORY|SUBCAT, and s and x stand in the more oblique relation. Similar paraphrases can be given for the notions introduced in (105c–d). These paraphrases already show that there is a great deal of existential quantification hidden in Pollard and Sag’s 1994 Binding Theory. The definition of the more oblique relation makes reference to some word, the definition of local o-command refers to a synsem object, etc. Any direct formalization of this Binding Theory must also make use of universal quantification. This is because the logical structure of Principles A–C is actually as follows: For each x such that x is a locally o-commanded anaphor / a personal pronoun / a nonpronoun, x is locally o-bound / locally o-free / o-free, respectively. Note that, apart from universal quantification, these principles also make direct use of existential quantification. For example, the more careful paraphrase of Principle A would be:

(106) Principle A of Pollard and Sag’s 1994 Binding Theory (paraphrased): For each x such that x is an anaphor and there exists y such that y and x stand in the local o-command relation, there exists z such that z and x stand in the local o-binding relation.

Finally, the definition of (local) o-freeness in (105d) calls for the presence of logical negation in the underlying formalism: two objects x and y stand in the (local) o-freeness relation if and only if they do not stand in the (local) o-binding relation. Although relations, quantification and general logical negation are commonly (albeit often implicitly) used in HPSG, for a long time there existed no HPSG formalism providing the logical foundations for these notions. A formalism meeting these desiderata has been proposed in Richter et al. (1999) and, more comprehensively, in Richter (2000) under the name “RSRL” (Relational Speciate Re-entrant Language). It is based on SRL (Speciate Re-entrant Logic; cf. King 1989, 1994, 1999 and Pollard 1999) but extends SRL by introducing relations and restricted quantification. A formal presentation of RSRL is well beyond the scope of this survey, so we will only illustrate this formalism here. Let us look again at Principle A, as paraphrased in (106). Assuming the existence of relation symbols loc-o-command and loc-o-bind which correspond to local o-command and local o-binding, respectively, this principle can be rendered in RSRL as follows (Richter, 2000, §4.2):

(107) Principle A of Pollard and Sag’s 1994 Binding Theory (in RSRL):
      ∀x ((x[LOC CONT ana] ∧ ∃y loc-o-command(y, x)) → ∃z loc-o-bind(z, x))

According to this principle, for each object x, if x's LOC|CONT is of type ana, and if there exists some y which locally o-commands it, then there must exist some object z which actually locally o-binds x. Similarly, taking into consideration the fact that being (locally) o-free is tantamount to not being (locally) o-bound (cf. (105d)), Principles B and C can be formalized in RSRL as follows:

(108) Principles B and C of Pollard and Sag's 1994 Binding Theory (in RSRL):
      a. ∀x (x[LOC CONT ppro] → ¬∃y loc-o-bind(y, x))
      b. ∀x (x[LOC CONT npro] → ¬∃y o-bind(y, x))

For these formalizations of Principles A–C of Pollard and Sag's 1994 Binding Theory to have the intended effect, it is necessary to define the meaning of the relation symbols loc-o-command, loc-o-bind and o-bind. We will first define the simpler relation more-oblique:
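The quantificational structure of Principles A–C can be made concrete with a small executable sketch. The following Python fragment is purely illustrative and not part of RSRL: objects are modeled as names, their sorts (ana, ppro, npro) as a mapping, and the binding-theoretic relations as sets of pairs that are assumed to be supplied from outside.

```python
# Illustrative sketch only (not RSRL itself): objects are names, sorts maps
# each name to "ana" / "ppro" / "npro", and the binding relations are given
# as sets of (binder, bindee) pairs.

def principle_a(domain, sorts, loc_o_command, loc_o_bind):
    """Every locally o-commanded anaphor must be locally o-bound."""
    for x in domain:                                    # universal quantifier
        if sorts[x] == "ana" and any((y, x) in loc_o_command for y in domain):
            if not any((z, x) in loc_o_bind for z in domain):  # existential
                return False
    return True

def principle_b(domain, sorts, loc_o_bind):
    """Every personal pronoun must be locally o-free (not locally o-bound)."""
    return all(not any((y, x) in loc_o_bind for y in domain)
               for x in domain if sorts[x] == "ppro")

def principle_c(domain, sorts, o_bind):
    """Every nonpronoun must be o-free (not o-bound)."""
    return all(not any((y, x) in o_bind for y in domain)
               for x in domain if sorts[x] == "npro")
```

Note how the universal quantifier of (107)–(108) surfaces as the loop over the domain, while the embedded existentials (and their negations in Principles B and C) surface as the any(...) tests.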

(109) more-oblique(x, y) ⇐∀=
      w[word, SS LOC CAT SUBCAT 1] ∧ to-the-right(x, y, 1)

According to this definition, x and y stand in the more-oblique relation if and only if there are w and 1 such that w is a word whose SYNSEM | LOCAL | CATEGORY | SUBCAT is 1 and y is to-the-right of x on 1.29 Note that this definition relies on the convention according to which (i) variables present on the left-hand side of '⇐∀=' are quantified universally, while (ii) variables present only on the right-hand side of '⇐∀=' are quantified existentially. The definition of loc-o-command is more complex, but its overall logical structure corresponds to the prose in (105b).

(110) loc-o-command(x, y) ⇐∀=
      (x[LOC 1 [CONT INDEX ref]] ∧ y[LOC 2 [CONT INDEX ref]] ∧ ¬ 1 = 2) ∧
      (more-oblique(y, x) ∨
       (s[synsem, LOC CAT SUBCAT 3] ∧ more-oblique(s, x) ∧ member(y, 3)))

Similar definitions can be given for the relations loc-o-bind and o-bind.

One important aspect of RSRL that should not be overlooked is the restricted character of its quantification mechanism: the range of quantifiers used in an RSRL description is restricted to components of the described object. Let us illustrate this aspect with a generalization regarding Serbo-Croatian case assignment discussed in Wechsler and Zlatić (1999, 2001) and cited in (111) below.

(111) Serbo-Croatian Dative/Instrumental Case Realization Condition:
      If a verb or noun assigns dative or instrumental case to an NP, then that case must be morphologically realized by some element within the NP.

The element that realizes the dative/instrumental case on an NP does not have to be the head of this NP: in Serbo-Croatian, there is a class of uninflected female names which do not decline for case at all and, hence, are grammatical in dative and instrumental positions only when accompanied by a determiner or an adjective which is overtly inflected for case, cf. (112).

(112) Divim se *(mojoj) Miki.
      admire.1ST.SG REFL my.DAT.SG Miki
      'I admire (my) Miki.'

The generalization in (111) is difficult to state without quantification (cf. Wechsler and Zlatić 1999 for an attempt), but straightforward when quantification is available; (113) is a slightly modified version of the constraint given in Wechsler and Zlatić (2001).

(113) [phrase, SYNSEM|...|CASE 1 (dative ∨ instrumental)] →
      ∃x (x = [inflected-word, SYNSEM|...|CASE 1])

This constraint is already almost an RSRL principle. The RSRL version is given below.

(113′) ∀ 1 (([phrase, SYNSEM|...|CASE 1] ∧ (1 dative ∨ 1 instrumental)) →
       ∃x x[inflected-word, SYNSEM|...|CASE 1])

Constraint (113) illustrates an important aspect of RSRL: this constraint has the effect intended in the informal description (111) only because RSRL quantification is restricted to components of the described object, i.e., to values of paths within the described object. Since (113) is a constraint on phrases with dative or instrumental SYNSEM|...|CASE values, the existential quantifier ∃x ranges over objects within such a phrase, so the word overtly inflected for case must be somewhere within this phrase. Without this restriction on quantification, the existential quantifier could pick up a dative or instrumental element somewhere else within the sentence (or, more generally, anywhere within the model), and the constraint (113) would in effect state that, whenever there is a dative or instrumental phrase, there must be an inflected-word with the same case value anywhere in the sentence (or in the model).

Despite its relative novelty, RSRL has already been employed in HPSG accounts of a variety of phenomena, including German clause structure (Richter, 1997; Richter and Sailer, 2000), semantic scope (Przepiórkowski, 1997), underspecified semantics (Richter and Sailer, 1999c), linearization and cliticization (Penn 1999a, 1999b; Kupść 1999, 2000), negative concord (Richter and Sailer 1999a, 1999b), Montagovian semantics (Sailer, 2000), case assignment (Przepiórkowski, 1999a; Meurers, 1999a; Wechsler and Zlatić, 2001) and morphology (Reinhard, 2000).
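The role of restricted quantification can likewise be illustrated with a small sketch (again hypothetical Python, not RSRL): signs are modeled as nested dictionaries, and a helper components enumerates exactly the sub-objects a restricted quantifier may range over, so the existential of (113) is evaluated within the phrase rather than over the whole model.

```python
def components(obj):
    """Yield obj together with every object embedded inside it,
    i.e. the components that a restricted quantifier ranges over."""
    yield obj
    if isinstance(obj, dict):
        for value in obj.values():
            yield from components(value)
    elif isinstance(obj, list):
        for item in obj:
            yield from components(item)

def case_realized(phrase, case):
    """Restricted existential in the spirit of (113): some inflected
    word *within this phrase* must carry the given case."""
    return any(isinstance(c, dict)
               and c.get("sort") == "inflected-word"
               and c.get("case") == case
               for c in components(phrase))

# Toy version of (112): dative NP 'mojoj Miki' vs. bare uninflected 'Miki'
good_np = {"sort": "phrase", "case": "dative",
           "dtrs": [{"sort": "inflected-word", "case": "dative", "phon": "mojoj"},
                    {"sort": "word", "phon": "Miki"}]}
bad_np = {"sort": "phrase", "case": "dative",
          "dtrs": [{"sort": "word", "phon": "Miki"}]}
```

Here case_realized(good_np, "dative") succeeds while case_realized(bad_np, "dative") fails, mirroring the contrast in (112); an unrestricted quantifier, by contrast, would be free to find a suitably inflected word anywhere in the model.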

8 Conclusion

HPSG is probably best seen as a collection of analyses developed by a community of researchers linked by a common commitment to nonderivational, psychologically plausible, lexicalist, formally precise, and computationally tractable descriptions of natural language phenomena. It is one of the most popular formal grammar paradigms outside of the transformational mainstream, and the use of HPSG in linguistic research, language engineering applications, and teaching is steadily increasing. Since 1993 the annual conference devoted to HPSG-based work has attracted a truly international audience and done much to foster a sense of community among HPSG researchers of all stripes. Given the attention to descriptive precision and sound formal foundations, it should not come as a surprise that there are now numerous implementations of the framework.30 While the history of linguistics has seen its share of movements that fizzled out after only a few productive years, we hope to have conveyed to the reader our confidence that there is still a tremendous amount of unrealized potential in HPSG.

Notes

1We will sometimes use the following identifiers for the successive versions of the ‘standard theory’ of HPSG: “HPSG1” (Pollard and Sag, 1987), “HPSG2” (the first eight chapters of Pollard and Sag 1994), “HPSG3” (chapter 9 of Pollard and Sag 1994, “Reflections and Revisions”).

2Sag and Wasow (1999) assume a single list SPR for both specifiers (of nouns) and subjects (of verbs), but the formalism presented in this textbook is not meant to be a presentation of the full theory of HPSG.

3See Green (this volume), example (*8*).

4This formulation supersedes the ARG-ST version in Green (this volume), example (*51*).

5The head-driven propagation of SLASH information is incorporated into the Generalized Head Feature Principle of Ginzburg and Sag (2000), which relies on default unification. See also the default formulation in Green (this volume), example (*52*).

6Throughout this section we follow the authors cited in using the SUBCAT list. See Green (this volume), for lexical descriptions of raising and control verbs using SUBJ and COMPS valence.

7A notable exception is Ackerman and Webelhuth’s 1998 HPSG/LFG theory of predicates, in which the valence of complex predicates is presumed to be determined entirely at the level of morphology, rather than in syntax as with argument composition.

8Notable exceptions are the analyses of Haider (1993) and Bierwisch (1990), who assume base-generated verbal complexes similar to the ones proposed by Hinrichs and Nakazawa.

9Chung (1993) argues that similar constructions in Korean should be handled by means of a valence feature distinct from SUBCAT that is exclusively responsible for combining verbal material. Rentier (1994) makes a closely related proposal for Dutch verb clusters, which is extended and further motivated empirically by Kathol (2000). See also Gunji (1999) and Gunji and Hasida (1998) for similar ideas in the closely related framework of Japanese Phrase Structure Grammar.

10As Kathol (1994) shows, passive constructions without auxiliaries, such as adjectival passives, are not necessarily a problem, since they need to have a distinct lexical category from the participles occurring in clausal passive cases. However, as Müller (2000) points out, there are still problems with this approach in light of partial VP fronting constructions, and Müller (2001) argues for a return to a lexical rule-based analysis for German passives.

11Note however that this correlation does not follow by necessity. For instance, Kiss (1995) and related work assume a strictly binary branching clause structure for German of the kind familiar from transformational analyses.

12A feature with this name and function was first suggested by Jacobson (1987).

13Cf. also Wechsler (1986) for an earlier critique of movement-type analyses of verb placement in Swedish.

14One notable exception is Yiddish. See Kathol (2000, ch. 9) for some discussion of Yiddish and cases of non-complementarity in other Germanic languages.

15This argument of course presupposes a lexicalist approach to passives in terms of variation in argument structure or valence rather than manipulation of the tree structure.

16See for example Crysmann (2006).

17Another possible option for non-canonical argument realization is pronominalization in some languages (see §6.1 below).

18One possibility would be to attribute the relative clause behavior entirely to the verbal head. In other words, finite verbs would be treated as ambiguous between a "regular" version and a relative clause version licensing a gap and turning the finite clause into a noun-modifier. However, long-distance dependencies of the kind shown in (i) provide evidence against such an approach.

(i) This is the woman they say I love.

Here the verb licensing the gap is love, yet the verb responsible for the modification of the noun woman is say.

19The first constraint also characterizes other verb-second structures, such as matrix constituent questions. Similarly, the second constraint also applies to subordinate declarative clauses.

20As is also shown in (113), other instances of clausality include declarative (decl-cl) and interrogative (inter-cl) clauses.

21See also Przepiórkowski (1999b) and Przepiórkowski and Kupść (1999) for a related approach to Negative Concord in Italian and in Polish.

22They call it the Quantifier Raising Derivational Rule to indicate that it is a description-level (derivational) lexical rule in the sense of Meurers (1999a).

23Resolution within the grammar using recursive constraints is formally possible, but computationally impractical.

24See Engdahl (1999) for some discussion on how information packaging could be represented in Minimalism and in HPSG.

25Contrast this with the representation of basic illocutionary types within CONTENT in Ginzburg and Sag (2000); cf. §5.1.2.

26This idea is also crucial for the analysis of extracted arguments as gaps (recall §4.1).

27This distinction is analogous to the dichotomy assumed in Chomskyan syntax.

28Along similar lines, Wechsler and Zlatić (2001, 2003) group locally shared information in CONCORD.

29See Richter (2000, §4.2) for the straightforward definition of to-the-right as used in (109) and member as used in (110). Also, note that both letters and tags are used as variables in RSRL descriptions.

30The activities of the members of the international, multilingual Delph-In consortium (http://www.delph-in.net/) are particularly notable in this regard.

References

Abeillé, A. and Godard, D. (1994). The complementation of French auxiliaries. In Aranovich, R., Byrne, W., Preuss, S., and Senturia, M., editors, Proceedings of the Thirteenth West Coast Conference on Formal Linguistics, volume 13, Stanford University. CSLI Publications/SLA.

Abeillé, A., Godard, D., and Sag, I. A. (1998). Two kinds of composition in French complex predicates. In (Hinrichs et al., 1998), pages 1–41.

Ackerman, F. and Webelhuth, G. (1998). A Theory of Predicates. CSLI Publications, Stanford.

Aczel, P. and Lunnon, R. (1991). Universes and parameters. In Barwise, J., Gawron, J. M., Plotkin, G., and Tutiya, S., editors, Situation Theory and Its Applications, II, number 26 in CSLI Lecture Notes. CSLI Publications, Stanford.

Alexopoulou, T. (1998). Unbounded dependencies and the syntactic realisation of information packaging. In (Bouma et al., 1998).

Austin, P. and Bresnan, J. (1996). Non-configurationality in Australian Aboriginal languages. Natural Language and Linguistic Theory, 14(2):215–268.

Avgustinova, T. (1997). Word Order and Clitics in Bulgarian. PhD thesis, Universität des Saarlandes, Saarbrücken.

Bach, E. (1981). Discontinuous constituents in generalized categorial grammars. In Proceedings of the 11th Annual Meeting of the Northeast Linguistic Society, pages 515–531, Amherst. Graduate Linguistics Student Association.

Baker, K. L. (1994). An integrated account of "modal flip" and partial verb phrase fronting in German. In Papers from the 30th Regional Meeting of the Chicago Linguistic Society, volume 30, Chicago, Illinois. CLS.

Baker, K. L. (1999). “Modal flip” and partial verb phrase fronting in German. In (Levine and Green, 1999), pages 161–198.

Bech, G. (1955). Studien über das deutsche Verbum infinitum. Danske Historisk-filologiske Meddelelser, 35: 2.

Bender, E. and Flickinger, D. (1999). Diachronic evidence for extended argument structure. In (Bouma et al., 1999), pages 3–19.

Bierwisch, M. (1990). Verb cluster formation as a morphological process. In Booij, G. and van Marle, J., editors, Yearbook of Morphology, pages 173–199. Foris, Dordrecht.

Bonami, O., Godard, D., and Marandin, J.-M. (1999). Constituency and word order in French subject inversion. In (Bouma et al., 1999), pages 21–40.

Borsley, R. and Kathol, A. (2000). Breton as a V2 language. Linguistics, 38:665–710.

Borsley, R. D. (1987). Subjects and complements in HPSG. CSLI Report 87-107, Stanford University, Stanford.

Borsley, R. D. (1989). An HPSG approach to Welsh. Journal of Linguistics, 25:333–354.

Borsley, R. D. (1999). Weak auxiliaries, complex verbs and inflected complementizers. In (Borsley and Przepiórkowski, 1999), pages 29–59.

Borsley, R. D. and Przepiórkowski, A., editors (1999). Slavic in Head-Driven Phrase Structure Grammar. CSLI Publications, Stanford.

Bos, J. (1995). Predicate logic unplugged. In Proceedings of the 10th Amsterdam Colloquium.

Bouma, G., Hinrichs, E., Kruijff, G.-J. M., and Oehrle, R. T., editors (1999). Constraints and Resources in Natural Language Syntax and Semantics. CSLI Publications, Stanford.

Bouma, G., Kruijff, G.-J. M., and Oehrle, R. T., editors (1998). Proceedings of the Joint Conference on Formal Grammar, Head-Driven Phrase Structure Grammar, and Categorial Grammar, 14–16 August 1998, Saarbrücken.

Bouma, G., Malouf, R., and Sag, I. A. (2001). Satisfying constraints on extraction and adjunction. Natural Language and Linguistic Theory, 19:1–65.

Bouma, G. and van Noord, G. (1998a). Word order constraints on Germanic verb clusters. In Hinrichs, E., Kathol, A., and Nakazawa, T., editors, Complex Predicates in Nonderivational Syntax, volume 30 of Syntax and Semantics, pages 43–72. Academic Press, San Diego.

Bouma, G. and van Noord, G. (1998b). Word order constraints on verb clusters in German and Dutch. In (Hinrichs et al., 1998), pages 43–72.

Bratt, E. O. (1996). Argument Composition and the Lexicon: Lexical and Periphrastic Causatives in Korean. PhD thesis, Stanford University.

Bunt, H. and van Horck, A., editors (1996). Discontinuous Constituency. Number 6 in Natural Language Processing. Mouton de Gruyter, Berlin.

Calcagno, M. (1993). Toward a linearization-based approach to word order variation in Japanese. In Kathol, A. and Pollard, C., editors, Papers in Syntax, volume 42 of OSU Working Papers in Linguistics, pages 26–45. Department of Linguistics, Ohio State University, Columbus, OH.

Calcagno, M. (1999). Some thoughts on tough movement. In (Kordoni, 1999), pages 198–230.

Calcagno, M. and Pollard, C. (1997). Argument structure, structural case, and French causatives. Paper delivered at the 4th International Conference on HPSG, 18–20 July 1997, Ithaca, New York.

Campbell-Kibler, K. (2002). Bech's problem, again: Linearization and Dutch r-pronouns. In Eynde, F. V., Hellan, L., and Beermann, D., editors, Proceedings of the 8th International Conference on Head-Driven Phrase Structure Grammar, pages 87–102. CSLI Publications, Stanford.

Cann, R., Grover, C., and Miller, P., editors (2000). Grammatical Interfaces in HPSG. CSLI Publications, Stanford.

Carpenter, B. (1992). The Logic of Typed Feature Structures. Cambridge University Press, Cambridge.

Chung, C. (1993). Korean auxiliary verb constructions without VP-nodes. In Kuno, S., Whitman, J., Kang, Y.-S., Lee, I.-H., Maling, J., and Kim, Y.-j., editors, Harvard Studies in Korean Linguistics V, pages 274–286. Hanshin, Seoul. Proceedings of the 1993 Workshop on Korean Linguistics.

Chung, C. (1998a). Argument composition and long-distance scrambling in Korean: An extension of the complex predicate analysis. In (Hinrichs et al., 1998), pages 159–220.

Chung, C. (1998b). Case, obliqueness, and linearization in Korean. In (Bouma et al., 1998), pages 164–174.

Copestake, A., Flickinger, D., Pollard, C., and Sag, I. A. (2006). Minimal Recursion Semantics: An introduction. Research on Language and Computation, 3:281–332.

Crysmann, B. (1999a). Licensing proclisis in European Portuguese. In Corblin, F., Dobrovie-Sorin, C., and Marandin, J.-M., editors, Empirical Issues in Formal Syntax and Semantics. Selected papers from the Colloque de Syntaxe et de Sémantique de Paris (CSSP 1997), pages 255–276, The Hague. Thesus.

Crysmann, B. (1999b). Morphosyntactic paradoxa in Fox. In (Bouma et al., 1999), pages 41–61.

Crysmann, B. (2000a). Clitics and coordination in linear structure. In Gerlach, B. and Grijzenhout, J., editors, Clitics in Phonology, Morphology, and Syntax. Benjamins. To appear.

Crysmann, B. (2000b). Syntactic transparency of pronominal affixes. In (Cann et al., 2000), pages 77–96.

Crysmann, B. (2006). Floating affixes in Polish. In Müller, S., editor, Proceedings of the 13th International Conference on Head-Driven Phrase Structure Grammar, pages 123–139, Stanford. CSLI Publications.

Davis, A. (2001). Linking by Types in the Hierarchical Lexicon. CSLI Publications, Stanford.

De Kuthy, K. and Meurers, W. D. (1998). Towards a general theory of partial constituent fronting in German. In (Bouma et al., 1998), pages 113–124.

Donohue, C. and Sag, I. (1999). Domains in Warlpiri. Paper presented at the 6th International Conference on HPSG, Edinburgh.

Dowty, D. (1996). Towards a minimalist theory of syntactic structure. In Bunt, H. and van Horck, A., editors, Discontinuous Constituency, pages 11–62. Mouton de Gruyter, Berlin/New York.

Dowty, D. and Jacobson, P. (1988). Agreement as a semantic phenomenon. In Proceedings of the 5th Eastern States Conference on Linguistics, pages 1–17.

Egg, M. (1998). Wh-questions in Underspecified Minimal Recursion Semantics. Journal of Semantics, 15(1):37–82.

Engdahl, E. (1999). Integrating pragmatics into the grammar. In Mereu, L., editor, Boundaries of Morphology and Syntax, volume 180 of Current Issues in Linguistic Theory. Benjamins, Amsterdam.

Engdahl, E. and Vallduví, E. (1994). Information packaging and grammar architecture: A constraint-based approach. In Engdahl, E., editor, Integrating Information Structure into Constraint-based and Categorial Approaches, volume R1.3.B of DYANA, pages 41–79, Edinburgh.

Engdahl, E. and Vallduví, E. (1996). Information packaging in HPSG. In Grover, C. and Vallduví, E., editors, Studies in HPSG, volume 12 of Edinburgh Working Papers in Cognitive Science, pages 1–31. Centre for Cognitive Science, University of Edinburgh.

Fillmore, C. J., Kay, P., Michaelis, L., and Sag, I. A. (Forthcoming). Construction Grammar. CSLI Publications, Stanford.

Flickinger, D. and Kathol, A., editors (2001). Proceedings of the 7th International Conference on Head-Driven Phrase Structure Grammar. CSLI Publications, Stanford.

Frank, A. (1994). Verb second by lexical rule or by underspecification. Arbeitsberichte des Sonderforschungsbereichs 340 43, IMS, Stuttgart.

Frank, A. and Reyle, U. (1992). How to cope with scrambling and scope. In (Görz, 1992), pages 178–187.

Frank, A. and Reyle, U. (1995). Principle based semantics for HPSG. In 7th Conference of the European Chapter of the Association for Computational Linguistics, pages 9–16, University College Dublin, Belfield, Dublin, Ireland. Association for Computational Linguistics.

Frank, A. and Reyle, U. (1996). Principle based semantics for HPSG. Unpublished manuscript, Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.

Gerdemann, D. (1994). Complement inheritance as subcategorization inheritance. In (Nerbonne et al., 1994), pages 341–363.

Ginzburg, J. and Sag, I. A. (2000). Interrogative Investigations: The Form, Meaning, and Use of English Interrogatives. CSLI Publications, Stanford.

Goldberg, A. (1995). Constructions: A Construction Grammar Approach to Argument Structure. Cognitive Theory of Language and Culture. University of Chicago Press, Chicago.

Görz, G., editor (1992). KONVENS'92, Berlin. Springer-Verlag.

Green, G. M. (1994). The structure of CONTEXT: The representation of pragmatic restrictions in HPSG. In Yoon, J., editor, Proceedings of the 5th Annual Conference of the Formal Linguistics Society of the Mid-America (Studies in the Linguistic Sciences 24), pages 215–232, Urbana, IL. Department of Linguistics, University of Illinois. Version dated January 31, 1997, available from http://www.cogsci.uiuc.edu/˜green/.

Green, G. M. (2000). The nature of pragmatic information. Unpublished manuscript, version 2.3 dated January 31, 2000, available from: http://www.cogsci.uiuc.edu/˜green/.

Green, G. M. (Forthcoming). Elementary principles of HPSG. In Borsley, R. and Börjars, K., editors, Nonderivational Syntax. Blackwell, Oxford.

Grover, C. (1995). Rethinking Some Empty Categories: Missing Objects and Parasitic Gaps in HPSG. PhD thesis, University of Essex.

Gunji, T. (1999). On lexicalist treatments of Japanese causatives. In (Levine and Green, 1999), pages 119–160.

Gunji, T. and Hasida, K. (1998). Introduction. In Gunji, T. and Hasida, K., editors, Topics in Constraint-Based Grammar of Japanese, volume 68 of Studies in Linguistics and Philosophy, pages 1–14. Kluwer, Dordrecht.

Haider, H. (1993). Deutsche Syntax — generativ. Vorstudien zur Theorie einer projektiven Grammatik. Gunter Narr, Tübingen.

Halvorsen, P.-K. (1995). Situation Semantics and semantic interpretation in constraint-based grammars. In Dalrymple, M., Kaplan, R. M., Maxwell III, J. T., and Zaenen, A., editors, Formal Issues in Lexical-Functional Grammar, number 47 in CSLI Lecture Notes, pages 293–309. CSLI Publications, Stanford.

Heinz, W. and Matiasek, J. (1994). Argument structure and case assignment in German. In (Nerbonne et al., 1994), pages 199–236.

Hentze, R. (1996). Unit accentuation in a topological grammar of Danish. Master’s thesis, University of Copenhagen.

Hinrichs, E., Kathol, A., and Nakazawa, T., editors (1998). Complex Predicates in Nonderivational Syntax, volume 30 of Syntax and Semantics. Academic Press, San Diego.

Hinrichs, E. and Nakazawa, T. (1989). Flipped out: Aux in German. In Papers from the 25th Meeting, pages 193–202, Chicago. Chicago Linguistic Society.

Hinrichs, E. and Nakazawa, T. (1994). Partial VP and split NP topicalization in German: An HPSG analysis. In Hinrichs, E., Meurers, D., and Nakazawa, T., editors, Partial-VP and Split-NP Topicalization in German—An HPSG Analysis and its Implementation, volume 58 of Arbeitsberichte des SFB 340. SFB 340, Tübingen/Stuttgart.

Höhle, T. (1978). Lexikalistische Syntax: die Aktiv-Passiv-Relation und andere Infinitkonstruktionen im Deutschen. Niemeyer, Tübingen.

Hukari, T. E. and Levine, R. D. (1994). Adjunct extraction. In Proceedings of the Twelfth Annual West Coast Conference on Formal Linguistics, pages 283–298.

Hukari, T. E. and Levine, R. D. (1995). Adjunct extraction. Journal of Linguistics, 31:195–226.

Jacobson, P. (1987). Phrase structure, grammatical relations, and discontinuous constituents. In Discontinuous Constituency, volume 20 of Syntax and Semantics, pages 27–69. Academic Press, San Diego.

Jensen, P. A. and Skadhauge, P. (2001). Linearization and diathetic alternations in Danish. In Meurers, W. D. and Kiss, T., editors, Constraint-Based Approaches to Germanic Syntax, pages 111–140. CSLI Publications, Stanford.

Kasper, R. (1997). Semantics of recursive modification.

Kasper, R. and Rounds, W. (1986). A logical semantics for feature structures. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, Morristown, N.J. Association for Computational Linguistics.

Kathol, A. (1994). Passives without lexical rules. In (Nerbonne et al., 1994), pages 237–272.

Kathol, A. (1995). Linearization-Based German Syntax. PhD thesis, Ohio State University.

Kathol, A. (1997). Concrete minimalism of German. In d'Avis, F.-J. and Lutz, U., editors, Zur Satzstruktur des Deutschen, number 90 in Arbeitsberichte des SFB 340, pages 81–106. SFB 340, Tübingen/Stuttgart.

Kathol, A. (1998). Constituency and linearization of verbal complexes. In (Hinrichs et al., 1998), pages 221–270.

Kathol, A. (1999). Agreement and the syntax-morphology interface in HPSG. In (Levine and Green, 1999), pages 223–274.

Kathol, A. (2000). Linear Syntax. Oxford University Press, Oxford, UK.

Kathol, A. and Levine, R. D. (1992). Inversion as a linearization effect. In Schafer, A., editor, Proceedings of the North East Linguistic Society 23, pages 207–221, Amherst. Graduate Linguistics Student Association.

Kathol, A. and Pollard, C. (1995). On the left periphery of German subordinate clauses. In Proceedings of the Fourteenth West Coast Conference on Formal Linguistics, volume 14, Stanford University. CSLI Publications/SLA.

Kathol, A. and Rhodes, R. A. (1999). Constituency and linearization of Ojibwe nominals. In Caldecott, M., Gessner, S., and Kim, E.-S., editors, Proceedings of WSCLA 4, pages 75–91. University of BC Department of Linguistics, Vancouver.

King, P. J. (1989). A Logical Formalism for Head-driven Phrase Structure Grammar. PhD thesis, University of Manchester.

King, P. J. (1994). An expanded logical formalism for Head-driven Phrase Structure Grammar. Arbeitspapiere des Sonderforschungsbereichs 340 Bericht Nr. 59, Seminar für Sprachwissenschaft, Universität Tübingen.

King, P. J. (1999). Towards truth in HPSG. In (Kordoni, 1999), pages 301–352.

Kiss, T. (1994). Obligatory coherence: The structure of German modal verb constructions. In (Nerbonne et al., 1994), pages 71–107.

Kiss, T. (1995). Infinite Komplementation: neue Studien zum deutschen Verbum infinitum, volume 333 of Linguistische Arbeiten. Niemeyer, Tübingen.

Kiss, T. and Wesche, B. (1991). Verb order and head movement. In Herzog, O. and Rollinger, C.-R., editors, Text Understanding in LILOG, number 546 in Lecture Notes in Artificial Intelligence, pages 216–240. Springer-Verlag, Berlin.

Kolliakou, D. (1998). Linkhood and multiple definite marking. In (Bouma et al., 1998).

Kolliakou, D. (1999). Linkhood and polydefinites. In Wyner, A. Z., editor, IATL 6: The Proceedings of the Fourteenth Annual Conference, Ben Gurion University of the Negev, 1998, pages 49–67.

Kordoni, V., editor (1999). Tübingen Studies in Head-Driven Phrase Structure Grammar, Arbeitspapiere des Sonderforschungsbereichs 340 Bericht Nr. 132, Seminar für Sprachwissenschaft, Universität Tübingen.

Kupść, A. (1999). Haplology of the Polish reflexive marker. In (Borsley and Przepiórkowski, 1999), pages 91–124.

Kupść, A. (2000). An HPSG Grammar of Polish Clitics. PhD thesis, Polska Akademia Nauk and Université Paris 7. In progress, preliminary title.

Kupść, A. and Tseng, J. (2005). A new HPSG approach to Polish auxiliary constructions. In Müller, S., editor, Proceedings of the 12th International Conference on HPSG, pages 253–273. CSLI Publications, Stanford, CA.

Lappin, S. and Zadrożny, W. (2000). Compositionality, synonymy, and the systematic representation of meaning. Unpublished manuscript, King's College London and IBM T.J. Watson Research Center. Available from: http://arXiv.org/abs/cs.CL/0001006.

Levine, R. D. and Green, G., editors (1999). Studies in Contemporary Phrase Structure Grammar. Cambridge University Press, Cambridge.

Maling, J. (1993). Of nominative and accusative: The hierarchical assignment of grammatical case in Finnish. In Holmberg, A. and Nikanne, U., editors, Case and Other Functional Categories in Finnish Syntax, pages 51–76. Mouton, Dordrecht.

Malouf, R. (1998). Mixed Categories in the Hierarchical Lexicon. PhD thesis, Stanford University.

Malouf, R. (2000a). A head-driven account of long-distance case assignment. In (Cann et al., 2000), pages 201–214.

Malouf, R. (2000b). Verbal gerunds as mixed categories in Head-driven Phrase Structure Grammar. In Borsley, R. D., editor, The Nature and Function of Syntactic Categories, volume 32 of Syntax and Semantics, pages 133–166. Academic Press, San Diego.

Manning, C. D. and Sag, I. A. (1998). Argument structure, valence, and binding. Nordic Journal of Linguistics, 21:107–144.

Manning, C. D. and Sag, I. A. (1999). Dissociations between argument structure and grammatical relations. In (Webelhuth et al., 1999), pages 63–78.

Manning, C. D., Sag, I. A., and Iida, M. (1999). The lexical integrity of Japanese causatives. In (Levine and Green, 1999), pages 39–79.

Meurers, W. D. (1999a). Lexical Generalizations in the Syntax of German Non-Finite Constructions. PhD thesis, Universität Tübingen.

Meurers, W. D. (1999b). Raising spirits (and assigning them case). Groninger Arbeiten zur Germanistischen Linguistik (GAGL), 43:173–226.

Miller, P. H. (1992). Clitics and Constituents in Phrase Structure Grammar. Garland, New York.

Miller, P. H. and Sag, I. A. (1997). French clitic movement without clitics or movement. Natural Language and Linguistic Theory, 15:573–639.

Monachesi, P. (1993). Restructuring verbs in Italian HPSG grammar. In Beals, K., Cooke, G., Kathman, D., Kita, S., McCullough, K., and Testen, D., editors, Proceedings of the 29th Regional Meeting of the Chicago Linguistic Society, pages 281–295, Chicago.

Monachesi, P. (1999). A Lexical Approach to Italian Cliticization. Number 84 in CSLI Lecture Notes. CSLI Publications, Stanford.

Müller, S. (1999). Deutsche Syntax deklarativ. Head-Driven Phrase Structure Grammar für das Deutsche. Number 394 in Linguistische Arbeiten. Niemeyer, Tübingen.

Müller, S. (2000). German particle verbs and the predicate complex. In (Cann et al., 2000), pages 215–229.

Müller, S. (2001). The passive as a lexical rule. In Flickinger, D. and Kathol, A., editors, Proceedings of the 7th International Conference on HPSG, pages 247–266, Stanford, CA. CSLI Publications.

Nerbonne, J. (1992). Constraint-based semantics. In Dekker, P. and Stokhof, M., editors, Proceedings of the Eighth Amsterdam Colloquium, pages 425–444, Amsterdam. Institute for Logic, Language and Information. Reprinted in Keh-jiann Chen and Chu-Ren Huang (eds.), Proceedings of Republic of China Computational Linguistics Conference VI , Taipei, 1993. pp. 35–56.

Nerbonne, J. (1993). A feature-based syntax/semantics interface. Annals of Mathematics and Artificial Intelligence, 8:107–132. Special issue on Mathematics of Language edited by Alexis Manaster-Ramer and Włodek Zadrożny. Also published as DFKI Research Report RR-92-42.

Nerbonne, J. (1995). Computational semantics—linguistics and processing. In Lappin, S., editor, Handbook of Contemporary Semantic Theory, pages 461–484. Blackwell, London.

Nerbonne, J., Netter, K., and Pollard, C., editors (1994). German in Head-Driven Phrase Struc- ture Grammar. Number 46 in CSLI Lecture Notes. CSLI Publications, Stanford.

Netter, K. (1992). On non-head non-movement. In (Görz, 1992), pages 218–227.

Nunberg, G. (1978). The Pragmatics of Reference. PhD thesis, City University of New York.

Penn, G. (1999a). A generalized-domain-based approach to Serbo-Croatian second-position clitic placement. In (Bouma et al., 1999), pages 119–136.

Penn, G. (1999b). Linearization and WH-extraction in HPSG: Evidence from Serbo-Croatian. In (Borsley and Przepiórkowski, 1999), pages 149–182.

Pollard, C. (1994). Toward a unified account of passive in German. In (Nerbonne et al., 1994), pages 273–296.

Pollard, C. (1996). On head non-movement. In (Bunt and van Horck, 1996), pages 279–305.

Pollard, C. (1999). Strong generative capacity in HPSG. In (Webelhuth et al., 1999), pages 281–297.

Pollard, C. and Sag, I. A. (1987). Information-Based Syntax and Semantics, Volume 1: Fundamentals. Number 13 in CSLI Lecture Notes. CSLI Publications, Stanford.

Pollard, C. and Sag, I. A. (1994). Head-Driven Phrase Structure Grammar. University of Chicago Press / CSLI Publications, Chicago.

Pollard, C. and Yoo, E. J. (1998). A unified theory of scope for quantifiers and wh-phrases. Journal of Linguistics, 34:415–445.

Przepiórkowski, A. (1997). Quantifiers, adjuncts as complements, and scope ambiguities. To appear in Journal of Linguistics. Draft of December 2, 1997. Available from: http://www.ling.ohio-state.edu/~adamp/Drafts/.

Przepiórkowski, A. (1998). ‘A Unified Theory of Scope’ revisited: Quantifier retrieval without spurious ambiguities. In (Bouma et al., 1998), pages 185–195.

Przepiórkowski, A. (1999a). Case Assignment and the Complement-Adjunct Dichotomy: A Non-Configurational Constraint-Based Approach. PhD thesis, Universität Tübingen, Germany. http://www.ling.ohio-state.edu/~adamp/Dissertation/.

Przepiórkowski, A. (1999b). Negative polarity questions and Italian negative concord. In (Kordoni, 1999), pages 353–400.

Przepiórkowski, A. (1999c). On case assignment and “adjuncts as complements”. In (Webelhuth et al., 1999), pages 231–245.

Przepiórkowski, A. (2001). ARG-ST on phrases: Evidence from Polish. In (Flickinger and Kathol, 2001), pages 267–284.

Przepiórkowski, A. and Kupść, A. (1999). Eventuality negation and negative concord in Polish and Italian. In (Borsley and Przepiórkowski, 1999), pages 211–246.

Pullum, G. (1982). Free word order and phrase structure rules. In Pustejovsky, J. and Sells, P., editors, Proceedings of the 12th Annual Meeting of the Northeast Linguistic Society, pages 209–220. Graduate Linguistics Student Association, Amherst.

Pullum, G. (1997). The morpholexical nature of to-contraction. Language, 73:79–102.

Quine, W. V. O. (1968). From a Logical Point of View. Harper and Row, New York.

Reape, M. (1993). A Formal Theory of Word Order: A Case Study in West Germanic. PhD thesis, University of Edinburgh.

Reape, M. (1994). Domain union and word order variation in German. In (Nerbonne et al., 1994), pages 151–197.

Reape, M. (1996). Getting things in order. In (Bunt and van Horck, 1996), pages 209–254.

Reinhard, S. (2000). Deverbale Komposita an der Morphologie-Syntax-Semantik-Schnittstelle: Ein HPSG-Ansatz. PhD thesis, Universität Tübingen. In progress.

Rentier, G. M. (1994). A lexicalist approach to Dutch cross dependencies. In Beals, K., Denton, J., Knippen, E., Melnar, L., Suzuki, H., and Zeinfeld, E., editors, Papers from the 30th Regional Meeting of the Chicago Linguistic Society, pages 376–390, Chicago. Chicago Linguistic Society.

Reyle, U. (1993). Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10(2):123–179.

Richter, F. (1997). Die Satzstruktur des Deutschen und die Behandlung langer Abhängigkeiten in einer Linearisierungsgrammatik. Formale Grundlagen und Implementierung in einem HPSG-Fragment. In Hinrichs, E., Meurers, D., Richter, F., Sailer, M., and Winhart, H., editors, Ein HPSG-Fragment des Deutschen, Teil 1: Theorie, Arbeitspapiere des Sonderforschungsbereichs 340 Bericht Nr. 95, Seminar für Sprachwissenschaft, Universität Tübingen, pages 13–187.

Richter, F. (2000). A Mathematical Formalism for Linguistic Theories with an Application in Head-Driven Phrase Structure Grammar. PhD thesis, Universität Tübingen. April 28, 2000.

Richter, F. and Sailer, M. (1999a). A lexicalist collocation analysis of sentential negation and negative concord in French. In (Kordoni, 1999), pages 231–300.

Richter, F. and Sailer, M. (1999b). LF conditions on expressions of Ty2: An HPSG analysis of Negative Concord in Polish. In (Borsley and Przepiórkowski, 1999), pages 247–282.

Richter, F. and Sailer, M. (1999c). Underspecified semantics in HPSG. In Bunt, H. C. and Muskens, R., editors, Computing Meaning, pages 95–112. Kluwer, Dordrecht.

Richter, F. and Sailer, M. (2000). On the left periphery of German finite sentences. To appear in T. Kiss and D. Meurers (eds.), Topics in Constraint-Based Germanic Syntax, CSLI Publications.

Richter, F., Sailer, M., and Penn, G. (1999). A formal interpretation of relations and quantification in HPSG. In (Bouma et al., 1999), pages 281–298.

Sag, I. A. (1997). English relative clause constructions. Journal of Linguistics, 33(2):431–484.

Sag, I. A. (2000). Another argument against wh-trace. In Chung, S., McCloskey, J., and Sanders, N., editors, Jorge Hankamer WebFest. http://ling.ucsc.edu/Jorge/, UC Santa Cruz.

Sag, I. A. and Fodor, J. D. (1994). Extraction without traces. In Proceedings of the Thirteenth West Coast Conference on Formal Linguistics, pages 365–384, Stanford. CSLI Publications/SLA.

Sag, I. A. and Wasow, T. (1999). Syntactic Theory: A Formal Introduction. CSLI Publications, Stanford, CA.

Sailer, M. (2000). The Content of CONTENT: Semantic Construction and Idiomatic Expressions in HPSG. PhD thesis, Universität Tübingen. In progress.

Sportiche, D. (1996). Clitic constructions. In Rooryck, J. and Zaring, L., editors, Phrase Structure and the Lexicon, pages 213–276. IULC Press, Bloomington, Indiana.

Uszkoreit, H. (1987). Word Order and Constituent Structure in German. Number 8 in CSLI Lecture Notes. CSLI Publications, Stanford.

Vallduví, E. (1992). The Informational Component. Garland, New York.

Vallduví, E. and Engdahl, E. (1996). The linguistic realization of information packaging. Linguistics, 34:459–519.

van Noord, G. and Bouma, G. (1994). Adjuncts and the processing of lexical rules. In Fifteenth International Conference on Computational Linguistics (COLING ’94), pages 250–256, Kyoto, Japan.

Webelhuth, G., Koenig, J.-P., and Kathol, A., editors (1999). Lexical and Constructional Aspects of Linguistic Explanation. CSLI Publications, Stanford.

Wechsler, S. (1986). Against verb movement: Evidence from Swedish. In Papers from the 23rd Regional Meeting of the Chicago Linguistic Society, volume 23, Chicago. Chicago Linguistic Society.

Wechsler, S. and Zlatić, L. (1999). Syntax and morphological realization in Serbo-Croatian. In (Borsley and Przepiórkowski, 1999), pages 283–309.

Wechsler, S. and Zlatić, L. (2001). Case realization and identity. Lingua, 111:539–560.

Wechsler, S. and Zlatić, L. (2000). A theory of agreement and its application to Serbo-Croatian. Language, 76(4):799–832.

Wechsler, S. and Zlatić, L. (2003). The Many Faces of Agreement. CSLI Publications, Stanford.

Wilcock, G. (1999). Lexicalization of context. In (Webelhuth et al., 1999), pages 373–387.

Yatabe, S. (1996). Long-distance scrambling via partial compaction. In Formal Approaches to Japanese Linguistics 2, pages 303–317. MIT Working Papers in Linguistics.

Yatabe, S. (2001). The syntax and semantics of left-node raising in Japanese. In (Flickinger and Kathol, 2001), pages 325–344.

Zadrożny, W. (1994). From compositional to systematic semantics. Linguistics and Philosophy, 17:329–342.

Zwicky, A. M. (1986). Concatenation and liberation. In Papers from the 22nd Regional Meeting of the Chicago Linguistic Society, pages 65–74, Chicago. Chicago Linguistic Society.

Zwicky, A. M. (1994). Dealing out meaning: Fundamentals of syntactic constructions. In Gahl, S., Dolbey, A., and Johnson, C., editors, Proceedings of the Twentieth Annual Meeting of the Berkeley Linguistics Society, pages 611–625.