BMN ANGD A2 Linguistic Theory
Lecture 10: Absolute and Relative Grammaticality
1 Introduction
The notion of grammaticality is obviously a central one for linguistics, but so far we haven't said much about it. The grammar of a language determines the set of grammatical expressions of that language, but there are several ways in which it might be seen to do this. For example, the first transformational grammars had rules which defined grammatical expressions in a straightforward way: the rules described grammatical configurations, and anything that conformed to these descriptions was deemed grammatical by the grammar. Anything that did not conform to them was not.
This is very close to the traditional view of grammaticality: the rules of traditional grammars simply set out the conditions of grammaticality by describing the properties of grammatical expressions, and anything that did not conform to these descriptions was considered ungrammatical. Within generative grammar, this position could not be maintained, due to the problems it raises for generality and the pressure to account for the fact of language acquisition: if rules describe permitted expressions, those rules must necessarily be specific to the expressions they define, and making general statements which are applicable universally becomes impossible.
For this reason generative grammar moved towards a different method of determining grammaticality. Instead of grammars describing what grammatical expressions should be like, the focus came to rest on determining what they shouldn’t be like. In other words, grammars became constraint based. From this perspective, general rules set out the boundaries of grammaticality in such a way that all possible expressions (and probably some impossible ones too) are included. Constraints then tell us which of this set are not part of the language, the ones that remain being the grammatical expressions.
Both of these frameworks, however, define grammaticality in what can be referred to as an absolute way: any given sentence is grammatical if it conforms to the rules of the grammar, either by having the properties prescribed by the rules or by not having the properties prohibited by the constraints. In this way one only needs to consider an expression in isolation from any other expression to know whether it is grammatical or not. From the end of the 1980s, however, a view developed which was inconsistent with the absolute approach to grammaticality, and it was discovered that there are more ways of determining grammaticality than had previously been acknowledged.
2 Economy
A well-known observation that Government and Binding theory had been particularly successful in accounting for is that the subject of a non-finite clause is required to undergo a movement that the subject of an equivalent finite clause is not:
(1)  a    it seems [John is rich]
     b  * it seems [John to be rich]
     c    John1 seems [ t1 to be rich]
The account of the ungrammaticality of (1b) was simply that the subject position of a non-finite clause is a Caseless position and hence, if the NP sitting in that position remains there, it will violate the Case Filter.
However, note that in the finite case, not only can the subject remain in its clause, but it must:
(2) * John1 seems [ t1 is rich]
Accounting for this fact is not so simple. In GB the ungrammaticality of (2) was expressed in terms of the properties of the trace: the trace of an NP is an anaphor which must be bound in its governing category, which in this example is the finite clause. Hence no NP can move out of a finite clause, as its trace will violate the binding principles. The ungrammaticality of (2) is therefore equated with the ungrammaticality of (3):
(3) * John1 thinks [himself1 is smart]
Thus, while (1b) is seen as ungrammatical because it violates the principles of Case theory, (2) is ungrammatical because it violates the principles of the binding theory. This introduces a bifurcation in the explanation of these ungrammaticalities, whereas intuitively there seems to be something more uniform going on: with a non-finite complement the subject must move and with a finite complement it cannot.
The explanation of (2) is also made more complex by the observation that the binding conditions facing traces are not identical to those facing overt anaphors. An overt anaphor can refer out of a finite clause if it is contained within the subject of that clause, though not if it is the subject itself. Compare (3) to (4):
(4) John1 thinks [ [this picture of himself1] is smart]
Data such as that in (4) led to a refinement of the binding theory, particularly of the definition of the governing category, the details of which we do not need to bother with here. The complication arises because we apparently do not need to make any such refinements to the theory as it applies to traces, which are ungrammatical in this position:
(5) * John1 seems [ [ this picture of t1 ] is smart]
From this observation one of several possible conclusions might be reached. Either we can maintain that traces and anaphors are essentially the same, but that the finer definitions of the principles that govern their grammaticality are different, or we might take (5) to show that the assumption that traces and anaphors are subject to the same principles is inaccurate. Either way, the situation is more complex than we might like it to be.
Let us reconsider the data:
(6)  a  * it seems [John to be rich]
     b    John1 seems [ t1 to be rich]
(7)  a    it seems [John is rich]
     b  * John1 seems [ t1 is rich]
If we view the situation in (6b) as involving a movement that is made necessary to avoid the ungrammaticality of (6a), then a simple account of (7) also suggests itself: (7b) is not grammatical because (7a) is. In other words, as the movement is not necessary to avoid an ungrammaticality, the movement itself is ungrammatical. But note that grammaticality in these terms is relative to a choice of possibilities. The choice is whether or not to move the subject of the embedded clause. In one case, if the subject does not move the result is ungrammatical, and therefore the movement is grammatical. In the other case, if the subject does not move the result is grammatical, and therefore the movement is ungrammatical.
Chomsky (1991) suggested that we see this situation in terms of ‘economy’: if we consider that every time a movement is made, a price must be paid, and grammatical structures are those that are least costly (most economical), then a movement will only be made if there is no alternative – i.e. if the situation in which the movement does not happen leads to ungrammaticality anyway. In an economical system, we pay the cost of a movement only when it is necessary to do so and hence unnecessary movements lead to ungrammaticality.
But what is economy and how is it measured? One way to look at this is that the cost of a movement is the violation of a constraint. Thus (7b) is ungrammatical because it violates a constraint against movement. (6b), on the other hand, is grammatical despite the fact that it violates this constraint: the price is paid because no (cheaper) alternative exists. Thus, from this point of view, constraints can be violated and yet the expression still be grammatical.
We can see that this is the real distinction between absolute and relative grammaticality. In both cases there is overgeneration by one part of the grammar which is then cut back by another. In this sense there is a wider set of possibly grammatical expressions than are actually grammatical and hence some element of ‘choice’ in both systems. But with absolute grammaticality constraints are never violated by grammatical expressions. With relative grammaticality however it may be that all possible expressions violate some constraint or another, but still some expressions are better than others and hence are relatively grammatical.
We should not, at this point, confuse the notions of relative grammaticality and gradient grammaticality. In a relative system, an expression is either grammatical or ungrammatical regardless of how many constraints it violates. It all depends on what the alternatives are. If no other expression exists that is better, then the one that is best is perfectly grammatical. All the others are perfectly ungrammatical. This is very different from a system which is essentially an absolute one but which ascribes different degrees of ungrammaticality to the violation of different types of constraint. In such a system all constraint violation leads to ungrammaticality, but some ungrammatical expressions are worse than others. How to model gradient grammaticality is actually a tricky issue in a relative system.
There is still an issue to be clarified. We have said that (6b) is grammatical despite the fact that it violates the constraint against movement because no better alternative exists. The reason the alternative does not exist in this case is that it, (6a), violates the Case Filter. But now there is a question: why is it worse to violate the Case Filter than it is to violate the constraint against movement? There are a number of possible answers to this question. The one Chomsky originally envisaged was that the Case Filter and the constraint against movement are of different natures. The former is a rigid constraint which is never violated under any circumstances. The latter, however, can be violated, but only if necessary. Such a constraint is called a soft constraint.
Another view is that of Optimality Theory (Prince and Smolensky 1993) which assumes no such difference between constraints. The claim is that all constraints are soft and hence violable under the right circumstances. However, given a conflict between two constraints there is a way to determine which constraint will be violated and which will be upheld. Essentially we rank the constraints with respect to each other and then use the ranking to determine which constraint will be violated: the higher ranked constraint winning out over the lower ranked one. This is very different from a system which sees constraints as being soft or rigid and this can be shown in a number of ways. First, consider a case in which there are three ranked constraints:
(8) C1 > C2 > C3
If there is a conflict between C2 and C3, C2 will be upheld and C3 violated because C2 is higher ranked than C3. However, if there is a conflict between C1 and C2, C2 will be violated as C1 is more highly ranked. This situation cannot be captured in a system which assumes just two types of constraint. In such a system, if C2 is upheld over C3, then C2 must be rigid and should not be violable in any circumstances. But if C2 is violated in favour of C1, then C2 is soft and should not cause the violation of C3. Regardless of whether this ever happens in natural languages, and there are indeed cases where it does, we can see that there are some things that we can model with a set of ranked soft constraints that we cannot model with a division into soft and rigid constraints.
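To make the effect of ranking concrete, the following is a minimal sketch in Python (an illustration only, not part of any of the theories discussed; the candidate names and violation counts are invented). Each candidate is reduced to its violation counts listed in ranking order, and comparing these tuples position by position implements the idea that the higher ranked constraint always wins:

```python
# Minimal sketch of ranked, violable constraints C1 > C2 > C3.
# Each candidate is represented only by its violation counts,
# listed in ranking order (C1, C2, C3); names and counts are invented.
candidates = {
    "cand_A": (1, 0, 0),  # violates C1 once
    "cand_B": (0, 1, 0),  # violates C2 once
    "cand_C": (0, 0, 3),  # violates C3 three times
}

# Tuples compare position by position (lexicographically), so min()
# picks the candidate that does best on the highest ranked constraint
# on which the candidates differ: one violation of a higher constraint
# outweighs any number of violations of lower ones.
winner = min(candidates, key=candidates.get)
print(winner)  # cand_C: violating C3 three times still beats violating C2 or C1 once
```

Note that C2 here behaves as if it were 'rigid' with respect to C3 but 'soft' with respect to C1, which is precisely the pattern a two-way division into soft and rigid constraints cannot express.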
Another aspect of the ranked system is suggested by the following observations:
(9)   a   who left              he left              (English)
      b   who is he             he is John

(10)  a   shei zuo le           ta zuo le            (Chinese)
          who left perf         he left perf
      b   ta shi shei           ta shi Zhang
          he is who             he is Zhang
In English the wh-element is always at the front of the clause, no matter which grammatical function it is associated with. Of course, we account for this in terms of wh-movement. In Chinese, however, the interrogative element appears in exactly the same place as the equivalent non-wh-element does, and this depends on the grammatical function (Chinese is an SVO language). Thus for Chinese we have no evidence that there is wh-movement at all. Looking at these facts in terms of economy of movement, we can see that in English the constraint against movement is violated because whatever requires wh-elements to be at the front of the sentence is more important. However, in Chinese it seems that the constraint against movement is the more important requirement. This suggests that what is important differs across languages and hence that it is not a matter of some constraints being rigid and others soft universally. In terms of ranking, however, all we need to assume is that different languages rank the constraints differently. The constraints are the same cross-linguistically but the ranking differs.
3 Grimshaw 1997
As an example of how an Optimality system works, I will spend the rest of this lecture overviewing an analysis of auxiliary inversion phenomena proposed by Grimshaw in 1997. The data are well enough known, but let us start by detailing them. First, inversion involves the movement of an auxiliary verb across the subject accompanying certain wh-movements:
(11)     he will meet John
         who will he – meet –
Grimshaw adopts the standard assumption that this movement takes an element from the tense position of the clause (termed I for ‘inflection’) and moves it to the complementiser position:
(12)  [CP who2 [C' will1 [IP he [I' t1 [VP meet t2 ]]]]]
However, when the wh-element is the subject, the auxiliary does not invert with it. Grimshaw interprets this as due to there not being a complementiser system in this case. She proposes that as the wh-subject already stands at the front of the sentence, it does not need to move anywhere and hence the complementiser system is unnecessary. Given that there is no complementiser position, the auxiliary will not move either:
(13)  [IP who [I' will [VP meet him]]]
Grimshaw’s idea is that clauses can either extend to the inflectional system (=IP) or to the complementiser system (=CP), but in accordance with an economy principle, they will only be as big as they have to be.
To capture these ideas in an Optimality theoretic way, Grimshaw introduces three basic constraints:
(14) OpSpec: all operators (e.g. wh-phrases) must be in specifier positions
(15) ObHd: all head positions are filled
(16) STAY: don't move
The last constraint is obviously the constraint against movement we have mentioned previously. OpSpec is the constraint responsible for wh-movement. As an object is generated in a complement position, if a wh-object does not undergo a movement it will violate OpSpec. Subjects, however, are already in specifier positions and hence they can satisfy OpSpec without movement. ObHd is the constraint which accounts for inversion phenomena. As clauses can be either IPs or CPs, when a CP is present its head position will generally be empty, especially if it is a main clause, as complementisers themselves only introduce embedded clauses. If nothing else were to happen, this would violate the ObHd constraint. However, if something were to move to the complementiser position, this would enable the ObHd constraint to be satisfied, though at the cost of violating STAY.
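As a rough illustration of how such constraints could be stated, the sketch below (my own simplification, not Grimshaw's formal definitions) treats a candidate as a small record of just the properties the three constraints care about; the field names are invented for the sketch:

```python
# A simplified, illustrative encoding of the three constraints.
# A candidate is described by three properties (field names are my own):
#   operators_in_spec  - are all operators (wh-phrases) in specifier positions?
#   empty_heads        - number of head positions (e.g. C) left unfilled
#   movements          - number of movements the candidate contains

def opspec(cand):
    """OpSpec: one violation if an operator is not in a specifier position."""
    return 0 if cand["operators_in_spec"] else 1

def obhd(cand):
    """ObHd: one violation per unfilled head position."""
    return cand["empty_heads"]

def stay(cand):
    """STAY: one violation per movement."""
    return cand["movements"]

# For example, [CP who2 will1 [IP he t1 [VP meet t2]]]: the wh-object is in
# spec-CP, the auxiliary fills C, and two movements have taken place.
cand = {"operators_in_spec": True, "empty_heads": 0, "movements": 2}
print(opspec(cand), obhd(cand), stay(cand))  # 0 0 2
```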
To get the facts for English, Grimshaw proposes the following ranking of the constraints:
(17) OpSpec > ObHd > STAY
The analysis is demonstrated in the following tables:
(18)                                            | OpSpec | ObHd | STAY
  ☞ [IP who will [VP meet him]]                 |        |      |
    [CP e [IP who will [VP meet him]]]          |        |  *!  |
    [CP will1 [IP who t1 [VP meet him]]]        |        |      |  *!
    [CP who2 will1 [IP t2 t1 [VP meet him]]]    |        |      |  **!
    [CP who1 e [IP t1 will [VP meet him]]]      |        |  *!  |  *
(19)                                            | OpSpec | ObHd | STAY
    [IP he will [VP meet who]]                  |   *!   |      |
    [CP e [IP he will [VP meet who]]]           |   *!   |  *   |
    [CP will1 [IP he t1 [VP meet who]]]         |   *!   |      |  *
  ☞ [CP who2 will1 [IP he t1 [VP meet t2]]]     |        |      |  **
    [CP who1 e [IP he will [VP meet t1]]]       |        |  *!  |  *
In the tables, the choice of expressions is given in the first column. These represent the candidate set: the set of competing expressions from which the 'optimal' (= grammatical) one is to be selected. Note that they differ in whether they are CPs or IPs and in whether the wh-phrase and auxiliary have moved to the complementiser system. In the following columns the evaluation of the candidate set with respect to the constraints is shown. Each column represents one constraint and the constraints are included in rank order, highest first. The cells under the constraint names show how each candidate does with respect to that constraint. A star indicates a violation of the constraint. A star with an exclamation mark after it represents a 'fatal violation' of a constraint. This is a constraint violation which deems the candidate to be ungrammatical. A candidate incurs a fatal violation of a constraint if it has not already incurred a fatal violation of a more highly ranked constraint, and hence is a survivor in the competition to that point, and at least one other candidate violates the constraint to a lesser degree. The last surviving candidate is optimal and hence grammatical, and this is indicated by the pointy finger (☞) to the left of the table.
Let us go through the first table in detail. First all candidates are evaluated by OpSpec, the highest ranked constraint. As in all cases the wh-subject is in a specifier position, no candidate incurs a fatal violation and all survive to be evaluated by the next constraint. Two of the candidates have unfilled complementiser positions, marked by 'e' standing for 'empty head'. These two therefore violate ObHd. The remaining candidates have no empty heads and hence they do better on this constraint than the two that violate it. The violations are therefore fatal and these two candidates are eliminated from the competition. The remaining three candidates are then evaluated by the last constraint, STAY. The first candidate satisfies the constraint as it involves no movement. The other two remaining candidates violate STAY to different degrees and hence are ruled out. The first candidate is therefore optimal and grammatical.
Table (19) demonstrates the same competition, but this time involving an object wh-element. As we can see, this time a CP candidate involving two movements is the winner. The movements are sanctioned as they enable the two higher ranked constraints to be satisfied: if the wh-phrase does not move OpSpec is violated and if there is no inversion ObHd is violated. Note that the CP is made necessary in order to provide a specifier for the wh-object to move to. This in turn introduces the complementiser head which then forces the inversion to satisfy ObHd.
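The selection procedure itself can be sketched in a few lines of Python (again purely illustrative; the violation counts are simply read off table (19) rather than computed from the structures):

```python
# Sketch of the evaluation in table (19): object wh-question under the
# ranking OpSpec > ObHd > STAY. Violation counts per candidate are taken
# from the tableau, in the order (OpSpec, ObHd, STAY).
candidates = {
    "[IP he will [VP meet who]]":              (1, 0, 0),
    "[CP e [IP he will [VP meet who]]]":       (1, 1, 0),
    "[CP will1 [IP he t1 [VP meet who]]]":     (1, 0, 1),
    "[CP who2 will1 [IP he t1 [VP meet t2]]]": (0, 0, 2),
    "[CP who1 e [IP he will [VP meet t1]]]":   (0, 1, 1),
}

def optimal(candidates):
    """Return the candidate with the best violation profile. Lexicographic
    comparison of the tuples mirrors the 'fatal violation' reasoning:
    a profile loses as soon as it does worse on the highest ranked
    constraint on which the surviving candidates differ."""
    return min(candidates, key=candidates.get)

print(optimal(candidates))
# -> [CP who2 will1 [IP he t1 [VP meet t2]]]
# Both movements are sanctioned because they avoid violations of the
# higher ranked OpSpec and ObHd.
```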
Let us now consider what re-ranking the constraints will do. Obviously as both OpSpec and ObHd are ranked higher than STAY for English this means that both wh-movement and inversion will be licensed in this language. If we rank both below STAY we should get a language in which no movement takes place, like Chinese. That this is so is demonstrated by the following tables:
(20)  shei = who, zuo = leave, le = perf.   “who has left”
                                                | STAY | OpSpec | ObHd
  ☞ [IP shei [VP zuo le]]                       |      |        |
    [CP e [IP shei [VP zuo le]]]                |      |        |  *!
    [CP zuo1 [IP shei [VP t1 le]]]              |  *!  |        |
    [CP shei1 zuo2 [IP t1 [VP t2 le]]]          |  **! |        |
    [CP shei1 e [IP t1 [VP zuo le]]]            |  *!  |        |  *
(21)  ta = he, shi = be, shei = who   “who is he”
                                                | STAY | OpSpec | ObHd
  ☞ [IP ta [VP shi shei]]                       |      |   *    |
    [CP e [IP ta [VP shi shei]]]                |      |   *    |  *!
    [CP shi1 [IP ta [VP t1 shei]]]              |  *!  |   *    |
    [CP shei2 shi1 [IP ta [VP t1 t2]]]          |  **! |        |
    [CP shei1 e [IP ta [VP shi t1]]]            |  *!  |        |  *
As predicted, the example in which nothing moves is grammatical regardless of whether the interrogative is subject or object. Note that in neither of the cases does the OpSpec constraint play a role in the analysis. In (20) both surviving candidates satisfy the constraint and in (21) both violate it. In neither case is OpSpec able to distinguish between the two and so the decision always falls to ObHd. This shows us that if we were to rank OpSpec and ObHd differently, as long as they are both ranked below STAY, the result will be the same.
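This observation can be checked directly with a small sketch (again illustrative, with the violation counts read off table (21)): any ranking that puts STAY on top selects the candidate in which nothing moves, whichever way OpSpec and ObHd are ordered below it.

```python
from itertools import permutations

# Violation counts for the candidates of table (21), the Chinese object
# question, stored per constraint name so that different rankings can be
# tried out.
violations = {
    "[IP ta [VP shi shei]]":              {"STAY": 0, "OpSpec": 1, "ObHd": 0},
    "[CP e [IP ta [VP shi shei]]]":       {"STAY": 0, "OpSpec": 1, "ObHd": 1},
    "[CP shi1 [IP ta [VP t1 shei]]]":     {"STAY": 1, "OpSpec": 1, "ObHd": 0},
    "[CP shei2 shi1 [IP ta [VP t1 t2]]]": {"STAY": 2, "OpSpec": 0, "ObHd": 0},
    "[CP shei1 e [IP ta [VP shi t1]]]":   {"STAY": 1, "OpSpec": 0, "ObHd": 1},
}

def optimal(violations, ranking):
    # Build each candidate's violation profile in ranking order and
    # compare lexicographically, as in the tableaux.
    def profile(cand):
        return tuple(violations[cand][con] for con in ranking)
    return min(violations, key=profile)

# Every ranking with STAY at the top picks the in-situ candidate:
for ranking in permutations(["STAY", "OpSpec", "ObHd"]):
    if ranking[0] == "STAY":
        print(ranking, "->", optimal(violations, ranking))
# Both orders of OpSpec and ObHd below STAY select [IP ta [VP shi shei]].
```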
Now let's consider what happens when STAY is ranked between the other constraints. If OpSpec is ranked higher than STAY and ObHd is ranked lower, then we expect wh-movement to take place but not to be accompanied by inversion. Again the prediction is borne out, as the following tables show:
(22)                                            | OpSpec | STAY | ObHd
  ☞ [IP who will [VP meet him]]                 |        |      |
    [CP e [IP who will [VP meet him]]]          |        |      |  *!
    [CP will1 [IP who t1 [VP meet him]]]        |        |  *!  |
    [CP who2 will1 [IP t2 t1 [VP meet him]]]    |        |  **! |
    [CP who1 e [IP t1 will [VP meet him]]]      |        |  *!  |  *
(23)                                            | OpSpec | STAY | ObHd
    [IP he will [VP meet who]]                  |   *!   |      |
    [CP e [IP he will [VP meet who]]]           |   *!   |      |  *
    [CP will1 [IP he t1 [VP meet who]]]         |   *!   |  *   |
    [CP who2 will1 [IP he t1 [VP meet t2]]]     |        |  **! |
  ☞ [CP who1 e [IP he will [VP meet t1]]]       |        |  *   |  *
I am using English sentences to demonstrate the result, though obviously the ranking is not the one relevant for English. There are languages which have wh-movement without inversion and hence the ranking does correspond to actual languages. Of course there may be other differences, such as word order, but these are irrelevant to the point being made and can be ignored. The main difference between such languages and English can be seen in table (23), where the grammatical candidate involves a wh-object moving to the complementiser system but the complementiser head remains empty rather than being filled by inversion.
Finally consider what happens if we rank ObHd above STAY and OpSpec below. The logic of the argument may lead you to think that such a ranking will result in a language which has inversion but not wh-movement. However, this is not the case:
(24)                                            | ObHd | STAY | OpSpec
  ☞ [IP who will [VP meet him]]                 |      |      |
    [CP e [IP who will [VP meet him]]]          |  *!  |      |
    [CP will1 [IP who t1 [VP meet him]]]        |      |  *!  |
    [CP who2 will1 [IP t2 t1 [VP meet him]]]    |      |  **! |
    [CP who1 e [IP t1 will [VP meet him]]]      |  *!  |  *   |
(25)                                            | ObHd | STAY | OpSpec
  ☞ [IP he will [VP meet who]]                  |      |      |   *
    [CP e [IP he will [VP meet who]]]           |  *!  |      |   *
    [CP will1 [IP he t1 [VP meet who]]]         |      |  *!  |   *
    [CP who2 will1 [IP he t1 [VP meet t2]]]     |      |  **! |
    [CP who1 e [IP he will [VP meet t1]]]       |  *!  |  *   |
In fact, what we get is the Chinese pattern in which nothing moves. The reason for this is the fact that inversion is dependent on wh-movement. It is the wh-movement which requires the presence of the complementiser system and without this there is no empty head to be filled.
Thus, in any language without wh-movement there will be no inversion. This is indeed a true statement and the fact that Grimshaw’s analysis predicts it is one point in its favour.
4 Problems for Relative Grammaticality
Although it is possible that grammaticality in human languages is completely relative, as Optimality Theory assumes, there are some aspects of this claim that are problematic and are perhaps more easily handled with a notion of absolute grammaticality. The most obvious of these is absolute ungrammaticality itself. There are just some things which are ineffable and hence have no grammatical realisation. For example, while it is possible in English to have two wh-phrases in a single clause, only one of these can be fronted and the other must stay in situ:
(26) who said what
There are limitations however on which can be fronted and which must remain unmoved:
(27) * what did who say
Generally speaking, subjects tend to be able to move more freely than objects and certain adverbials can move more freely than objects too:
(28)  a    how did he fix what
      b  * what did he fix how
Thus subjects and certain adverbials like to move whereas objects may stay put. However, this means that when there is a subject and an adverbial wh-phrase, neither wants to stay put, but only one is able to move. The result is that there is no grammatical way to arrange these elements:
(29)  a  * how did who fix the car
      b  * who fixed the car how
If grammaticality is relative, there should be some optimal arrangement of these elements and hence one grammatical expression containing both a subject and an adverbial wh-element, but this seems not to be the case. If this is true, this is one case where absolute grammaticality is necessary.
One possible solution to this problem would be to claim that we need to look further afield to find the grammatical arrangement. For example, the following is perfectly grammatical:
(30) who fixed the car and how
If this expression is part of the competition then it is the one that is selected as optimal and hence there is no real ineffability here.
Yet this does raise issues concerning what is allowed to compete with what. Chomsky (1996) has claimed that if there is no limit to what can compete with what, whole languages should reduce to a single grammatical form: perhaps ba, he suggests. This problem is solved in Optimality Theory by the assumption that for every competition there is something that stipulates what all of the candidates have in common, typically a foundation of lexical items or a particular semantic interpretation. This is called the input and it is on the basis of this that the general linguistic principles generate the candidate set:
(31) Input → GEN → Candidate Set → Constraint Evaluation → Optimal Candidate
Given that the expressions in the Candidate Set are all related to the same Input, this means that not all possible expressions compete against each other and the ‘ba’-problem is avoided.
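Schematically, the architecture in (31) can be thought of as a pipeline. The sketch below is purely illustrative: GEN is reduced to a stub that returns a hand-listed candidate set (with violation profiles in the order OpSpec, ObHd, STAY) for a single, invented input representation.

```python
# Illustrative pipeline for (31):
# Input -> GEN -> Candidate Set -> Constraint Evaluation -> Optimal Candidate
# GEN is a stub returning hand-listed candidates with violation profiles
# (OpSpec, ObHd, STAY); the input representation is invented for the sketch.

def gen(input_spec):
    """Stub GEN: return the candidate set sharing the given input."""
    if input_spec == "WH-QUESTION(meet(he, who))":
        return {
            "[IP he will [VP meet who]]":              (1, 0, 0),
            "[CP who2 will1 [IP he t1 [VP meet t2]]]": (0, 0, 2),
            "[CP who1 e [IP he will [VP meet t1]]]":   (0, 1, 1),
        }
    raise ValueError("no candidates listed for this input in the sketch")

def evaluate(candidate_set):
    """Constraint evaluation: pick the best violation profile under the
    ranking encoded by the order of the counts."""
    return min(candidate_set, key=candidate_set.get)

optimal = evaluate(gen("WH-QUESTION(meet(he, who))"))
print(optimal)  # the inverted wh-question wins under the English ranking
```

Because only candidates returned by GEN for the same input compete, an arbitrary expression from elsewhere in the language never enters the comparison.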
Another problem concerning relative grammaticality is the possibility of optional grammatical structures. In a relative system, optionality would have to mean that neither of the options is better than the other and hence both are optimal. This is very difficult to arrange in a constraint-based system, as for two expressions to be seen as options they must first differ from each other (otherwise they are not options for the same thing, but ARE the same thing) and second this difference must be irrelevant to any of the constraints which select the grammatical expression. But invariably any linguistically relevant difference in a pair of expressions will mean a difference in the violation of some constraint, as it is the constraints that define what is linguistically relevant. One expression uttered on a Wednesday will not differ in a linguistically relevant way from an utterance with the same syntactic organisation uttered on a Thursday, and will not violate or satisfy any constraint that the other does not. From a linguistic point of view, these two expressions are the same thing. But if one expression differs from another in terms of its syntactic organisation, they will differ in their violations of any constraint that concerns this organisation and hence they will not be the same expression, but they cannot be optionally grammatical either.
Some have proposed to capture optionality by allowing constraints to have equal rankings, so that the violation of one constraint is equivalent to the violation of another constraint that it is equally ranked with. Two candidates could then differ in linguistically relevant ways but end up equally grammatical because their differences violate constraints of the same rank (a small sketch of this idea is given after (32) below). Others, however, have argued that equally ranked constraints cause problems for other aspects of the system, its learnability for example (Tesar and Smolensky 2000). An alternative would be to deny the existence of options and claim that any two grammatical expressions in a language always express something different and hence are related to different Inputs. For example, it is often claimed that the rightward movement of PPs and relative clauses in English, a process commonly known as extraposition, is optional, giving the following optional ways of expressing the same thing:
(32)  a  a boy who had been missing for 2 weeks was found by the police
      b  a boy was found by the police who had been missing for 2 weeks
However, it could be argued that there is a difference between these expressions in that the final position in an English sentence tends to be focussed and hence the relative clause is associated with a focus interpretation in (32b) which it is not in (32a). It remains, however, to be demonstrated that all options can be differentiated so easily.
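To see how equal ranking would deliver optionality, here is a small sketch (the constraint names and violation counts are invented for the illustration): if two constraints occupy the same stratum of the ranking, their violations can be pooled, and two candidates that trade a violation of one for a violation of the other come out equally optimal.

```python
# Illustration of optionality via equally ranked constraints.
# C1 outranks the tied pair {C2, C3}; violations of tied constraints are
# pooled into a single count. Constraint names and counts are invented.

def profile(viols, ranking):
    """ranking is a list of strata; each stratum is a set of tied
    constraints whose violation counts are summed."""
    return tuple(sum(viols[c] for c in stratum) for stratum in ranking)

ranking = [{"C1"}, {"C2", "C3"}]          # C1 >> {C2, C3}
candidates = {
    "variant_one":   {"C1": 0, "C2": 1, "C3": 0},
    "variant_two":   {"C1": 0, "C2": 0, "C3": 1},
    "variant_three": {"C1": 1, "C2": 0, "C3": 0},
}

profiles = {c: profile(v, ranking) for c, v in candidates.items()}
best = min(profiles.values())
winners = [c for c, p in profiles.items() if p == best]
print(winners)  # ['variant_one', 'variant_two']: both come out optimal
```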
5 Conclusion
In this lecture we have been looking at the fundamental question of the determination of grammaticality. This is a relatively new issue, as up until the beginning of the 1990s grammaticality was assumed to be an absolute thing. It is still under debate whether grammaticality is wholly or even partly a relative thing, and not everyone agrees that Optimality Theory takes the correct view. However, it does seem that some aspects of grammaticality are better described in a relative way, and the fact that Optimality Theory opens up a completely new way of looking at grammatical phenomena makes it interesting from a theoretical point of view even if it turns out to be fundamentally wrong. The debate is set to continue for some time yet.
References
Chomsky, Noam 1991 ‘Some notes on economy of derivation and representation’, in Freidin, Robert (ed.) Principles and Parameters in Comparative Grammar, MIT Press, Cambridge, Massachusetts, 417-545. First published in 1989 in MIT Working Papers in Linguistics 10, 43-74.
Chomsky, Noam 1996 The Minimalist Program, MIT Press, Cambridge, Mass.
Grimshaw, Jane 1997 ‘Projection, Heads and Optimality’, Linguistic Inquiry 28, 373-422. http://roa.rutgers.edu/files/68-0000/roa-68-grimshaw-3.pdf
Prince, Alan and Paul Smolensky 1993 ‘Optimality Theory: Constraint Interaction in Generative Grammar’, Technical Report TR-2, Center for Cognitive Science, Rutgers University, New Brunswick, N.J. and Technical Report CU-CS-697-93, Department of Computer Science, University of Colorado, Boulder. Available at http://ruccs.rutgers.edu/ruccs/publications.php
Tesar, Bruce and Paul Smolensky 2000 Learnability in Optimality Theory, MIT Press, Cambridge, Mass.