Movement in Minimalism, Class 2: Linearization and the Problem of Copies
July 11, LSA Summer Institute 2017
This class:
• We discuss Nunes's (1995, 2004) proposal that copy deletion derives from the demands of the linearization algorithm at PF
• We look at cases for which a copy theory has an advantage over a trace theory, involving multiple spell-out in predicate clefting/V-fronting constructions as well as wh-copying
• We examine exceptional cases of copy spell-out, including deletion of the highest copy and scattered deletion

1 Two approaches to copy deletion

As we discussed last week, Merge provides us with a single structure-building mechanism. Merge can create long-distance dependencies as long as we adopt the Copy Theory of Movement:

(1) Which book did you read which book?
    [CP [DP which book] [C' did [TP you [T' did [VP read [DP which book]]]]]]

However, getting rid of traces in this way means that we need to posit a rule of copy deletion. But where does copy deletion come from?

1.1 Three sources of wellformedness constraints

In the Minimalist Program, constraints on wellformedness have only three possible sources:

1. Constraints on the PF interface
2. Constraints on the LF interface
3. Natural principles of computation, often referred to as "economy constraints"

⇒ Copy deletion too should derive from one of these sources.

Two of these options have been explored in previous work, and we will discuss them this week:

1. Copy deletion as a result of linearization at PF (Nunes 1995, 2004; cf. Fox and Pesetsky 2005; Johnson 2012):
   One of the things that must happen when interpreting a syntactic tree at PF is converting the syntactic structure into a linear string, as in (3) below: a process of linearization. One prominent approach to copy deletion proposes that the existence of copies creates a problem for linearization, because you don't know which copy to use to determine the position of a moved phrase.

2. Copy deletion as the result of economy (Fanselow and Ćavar 2001; Landau 2006; Van Urk, to appear):
   Another feature of dependencies with copies is that they involve repetitive material. We might imagine that a natural principle of computation is not to pronounce material that you don't need to. Landau (2006) develops a model in which an economy constraint that maximizes deletion is the source of copy deletion.

1.2 Could copy deletion derive from constraints on LFs?

It is worth briefly entertaining what the third option might look like. One reason why an LF-based approach could be promising is that copies are difficult to interpret at LF. A big advantage of traces is that they can straightforwardly be interpreted as variables:

(2) [which book] λx: did you see x

Interpreting copies is much trickier. If you just interpret each copy in place, you can't actually ensure that they refer to the same thing, and they'll be interpreted as independent semantic objects. (This is why an operation like Fox's (1999) Trace Conversion is necessary!) In principle, we could imagine that deletion of a lower copy could be forced by this semantic problem, as long as deleting a copy replaces it with a variable, giving you the representation in (2).
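To make the issue concrete, here is a rough sketch (simplified, not Fox's actual definition) contrasting in-place interpretation of both copies with a Trace-Conversion-style rewriting of the lower copy as a definite description containing a bound variable:

```latex
% Both copies interpreted in place: two independent tokens of the wh-phrase,
% with nothing forcing them to pick out the same book.
\[
  \text{which book} \;\ldots\; \text{you read which book}
\]
% Trace-Conversion-style repair (sketch): the lower copy is rewritten as a
% definite description containing a variable bound from the landing site,
% giving a representation parallel to (2).
\[
  [\,\text{which book}\,]\ \lambda x.\ \text{did you read}\ [\,\text{the book identical to}\ x\,]
\]
```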
What problem does this idea of variable replacement run into?

2 Copy deletion and linearization

• In today's class, we will start by looking at the idea that copies create problems of linearization, an approach developed by Nunes (1995, 2004).
• The basic idea is that a structure containing copies will always lead to linearization conflicts, because you are forced to order a copy relative to itself.

2.1 Kayne (1994) and the Linear Correspondence Axiom

The starting point for Nunes's approach is Kayne's (1994) proposal that syntactic structures undergo a process of linearization at PF:

(3) [TP [DP Lee] [T' [T will] [VP [V read] [DP a book]]]]  ⇒ linearization algorithm ⇒  Lee > will > read > a > book

Kayne (1994) proposes that linearization is achieved through the Linear Correspondence Axiom:

(4) Linear Correspondence Axiom:
    Precedence relations are determined by asymmetric c-command (if α asymmetrically c-commands β, α precedes β).

The LCA incorporates a principle of Totality (orderings must be complete) and a principle of Antisymmetry (orderings cannot be symmetrical).

(5) For every distinct syntactic item x and y in a tree:
    either x < y or y < x          (Totality)
    and not (x < y and y < x)      (Antisymmetry)

2.2 Adapting the LCA for copies

Nunes (1995, 2004): The LCA, as originally stated, ignores traces. Suppose, however, that copies count for calculating ordering statements and, furthermore, that copies are non-distinct.

(6) Jess was hugged Jess.
    [TP [DP Jess] [T' [T was] [VP [V hugged] [DP Jess]]]]

    Ordering statements for (6):
    Jess > was, was > hugged, Jess > hugged, was > Jess, Jess > Jess, hugged > Jess

This set of ordering statements contains two types of linearization conflicts:

• Jess is ordered both before and after was and hugged
• Jess is ordered both before and after itself

⇒ Any set of ordering statements containing copies will contain contradictions. This means that copy deletion must apply to allow a syntactic structure to be interpretable at PF. Deleting the lower copy, as in (7) (deleted copies are marked here with angle brackets), gives us the ordering statements below, which are consistent.

(7) Jess was hugged <Jess>.
    [TP [DP Jess] [T' [T was] [VP [V hugged] [DP <Jess>]]]]

    Ordering statements for (7):
    Jess > was, was > hugged, Jess > hugged

To derive structures like (7), we adopt an operation of Chain Reduction, which eliminates copies:

(8) Chain Reduction (soon to be revised):
    Delete the constituents of a nontrivial chain CH that suffice for CH to be mapped into a linear order in accordance with the LCA. (adapted from Nunes 2004:27)

2.3 Restricting Chain Reduction

An issue arises with copies that consist of multiple words. Consider an example like (9).

(9) [which tiny spider] did you see [which tiny spider]?

There are lots of ways of applying Chain Reduction to create a structure that is linearizable. In addition to (10a), the scattered deletion options in (10b–d), which delete different subparts of each copy, should also be fine:

(10) a. [which tiny spider] did you see <which tiny spider>?
     b. [which <tiny spider>] did you see [<which> tiny spider]?
     c. [which tiny <spider>] did you see [<which tiny> spider]?
     d. [<which> tiny spider] did you see [which <tiny spider>]?

Nunes: We can rule out the options in (10b–d) by appealing to economy. All scattered deletion options must involve more than one deletion operation, while (10a) achieves a linearizable structure with just one application of deletion. We'll revise the definition of Chain Reduction to build in this intuition:

(11) Chain Reduction:
     Delete the minimal number of constituents of a nontrivial chain CH that suffices for CH to be mapped into a linear order in accordance with the LCA. (Nunes 2004:27)
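The linearization logic above can be illustrated with a short sketch (a toy illustration, not Nunes's own formulation: it assumes a strictly right-branching tree and approximates asymmetric c-command by letting a left sister's terminals precede its right sister's terminals; the names Node, ordering_statements, and antisymmetry_violations are invented here). It reproduces the ordering statements in (6) and (7) and shows that deleting the lower copy removes the conflicts:

```python
from itertools import product

# Toy illustration of Nunes-style linearization with non-distinct copies.
# Each Node is either a terminal (with a `token`) or a binary-branching
# nonterminal. The two DPs "Jess" are separate Node objects (copies), but
# their tokens are identical, so ordering statements treat them as
# non-distinct, as in (6).

class Node:
    def __init__(self, label, children=(), token=None):
        self.label = label
        self.children = list(children)
        self.token = token  # terminals only: the pronounced word

    def terminals(self):
        if not self.children:
            return [self]
        return [t for child in self.children for t in child.terminals()]

def build_example_6():
    """[TP [DP Jess] [T' [T was] [VP [V hugged] [DP Jess]]]]"""
    jess_high = Node("DP", token="Jess")
    jess_low = Node("DP", token="Jess")
    vp = Node("VP", [Node("V", token="hugged"), jess_low])
    t_bar = Node("T'", [Node("T", token="was"), vp])
    return Node("TP", [jess_high, t_bar]), jess_low

def ordering_statements(root, deleted=frozenset()):
    """Precedence statements over tokens, skipping terminals in `deleted`.
    Simplification of the LCA: in this right-branching toy tree, a left
    sister asymmetrically c-commands everything inside its right sister,
    so its terminals precede the right sister's terminals."""
    statements = set()
    def walk(node):
        if len(node.children) == 2:
            left, right = node.children
            for x, y in product(left.terminals(), right.terminals()):
                if x not in deleted and y not in deleted:
                    statements.add((x.token, y.token))  # copies are non-distinct
            walk(left)
            walk(right)
    walk(root)
    return statements

def antisymmetry_violations(statements):
    """Pairs ordered both ways, including a token ordered before itself."""
    return {(x, y) for (x, y) in statements if (y, x) in statements or x == y}

tp, jess_low = build_example_6()

full = ordering_statements(tp)
print(sorted(full))                           # includes ('Jess', 'was'), ('was', 'Jess'), ('Jess', 'Jess'), ...
print(sorted(antisymmetry_violations(full)))  # the conflicts discussed under (6)

reduced = ordering_statements(tp, deleted={jess_low})
print(sorted(reduced))                        # [('Jess', 'hugged'), ('Jess', 'was'), ('was', 'hugged')]
print(antisymmetry_violations(reduced))       # set(): consistent, as in (7)
```

On this toy setup, Chain Reduction as in (11) corresponds to finding the smallest set of copy deletions that makes antisymmetry_violations return the empty set.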
2.4 Choosing a copy

Problem: How do we decide which copy to delete? In principle, all of the options in (12a–d) solve the problem of linearization equally well, and all involve only one application of deletion.

(12) a. [which tiny spider] did you see <which tiny spider>?
     b. <which tiny spider> did you see [which tiny spider]?
     c. [Jess] was hugged <Jess>.
     d. <Jess> was hugged [Jess].

Nunes:
• Suppose movement is always driven by the need to check a feature (Chomsky 1995), like a wh-feature for the example in (12a–b) or a Case feature for (12c–d).
• The presence of such a feature means the two copies in examples like (12a–d) are not equal. The higher copy will contain a checked feature and the lower copy an unchecked one (✓ marks a checked feature):

(13) a. [which tiny spider]-uWH✓ did you see [which tiny spider]-uWH
     b. [Jess]-uCase✓ was hugged [Jess]-uCase

⇒ Deleting the lower copy will at the same time get rid of an unchecked feature. In contrast, deleting the higher copy means that the unchecked feature on the lower copy must still somehow be eliminated.

2.5 Copy formation vs. multidominance

Nunes's approach makes use of a mechanism of copy formation. Movement of X means that a copy X' is made of X which, although non-distinct for the purposes of linearization, can be manipulated independently (e.g. undergo deletion or feature checking).

Gärtner (2002), Johnson (2012), and others: Internal Merge can be modeled without copy formation, by assuming a multidominant representation (15), in which one item is linked to different positions:

(14) Copy formation: [F B [C A B]], with two independent tokens of B
(15) Multidominance: [F [C A B]], with the single node B dominated by both C and F

The main difference here is that B cannot be manipulated independently in different positions (e.g. once you value a feature of B in one position, it is valued in the other). What are the consequences of adopting a multidominant view for the Nunes approach to linearizing movement chains?

3 Multiple spell-out

We can derive copy deletion from the demands of linearization at PF. We will now turn to an advantage of the copy theory over trace theory, which is that it can handle multiple spell-out constructions.

3.1 Predicate clefting/V-fronting

Much work since Koopman (1984) on Vata has pointed out that many languages possess predicate clefting or V-fronting constructions, in which a verb Ā-moves into the left periphery but is pronounced in its base position also:

(16) Verb copying in Hebrew, Russian, and Nupe:
     a. lirkod,   Gil lo  yirkod     ba-xayim.
        dance.inf Gil not will-dance in-the-life
        'As for dancing, Gil will never dance.' (Hebrew; Landau 2006:32)
     b. Čitat'   Ivan eë      čitaet, no  ničego  ne  ponimaet.
        read.inf Ivan 3fs.acc reads   but nothing not understands
        'Ivan DOES read it, but he doesn't understand a thing.' (Russian; Abels 2001:1)
     c.