Deep Structure and Transformations




The purpose of this paper is to provide a clear account of the concept of deep structure throughout the evolution of the transformational generative grammar proposed by Chomsky. The paper is divided into four parts. In the first part, the concept of deep structure is considered from the perspective of the grammatical theory Chomsky published in Syntactic Structures. The second part considers deep structure and its impact after the publication of Chomsky's Aspects of the Theory of Syntax. The third part is concerned with deep structure within the Government and Binding framework. The last part considers the status of deep structure within the minimalist approach.

I. Deep Structure after the Publication of Syntactic Structures

After the publication of Chomsky's Syntactic Structures in 1957, transformational generative grammar formally began. In Syntactic Structures, Chomsky did not elaborate on his belief in the existence of a universal grammar. However, the concept was certainly there, since Chomsky argued for the need to develop grammars of specific languages in order to be certain that these particular grammars fall within the framework of a general linguistic theory. In addition, Chomsky was interested in determining the basic underlying principles of a successful grammar. He also presented a new formulation: phrase-structure rules plus transformational rules. Chomsky's generative grammar was based on rationalism, and it was developed as the alternative to a structuralism based on behaviourism. According to Chomsky, an adequate grammar should be capable of producing grammatical sentences and rejecting ungrammatical ones. It must be able to mirror the speaker's ability to both produce and understand an infinite number of grammatical utterances which he has never spoken or heard before. To illustrate this point, consider this classical pair of sentences:

1) Colorless green ideas sleep furiously.

2) Furiously sleep ideas green colorless.

Although neither sentence is meaningful, every native speaker of English can recognize that the first is grammatical while the second is not. An adequate grammar of English should be able to account for this ability. Moreover, such examples point to another important linguistic property: the native speaker can distinguish grammatical from ungrammatical sentences even when the utterances themselves are devoid of meaning. Accordingly, Chomsky concluded that a grammar model should be based on syntax rather than on semantics: syntax is an independent component of the grammar system, and it is the primary one.

In Syntactic Structures, Chomsky examined phrase-structure grammar, also called IC (immediate constituent) grammar. He showed its limitations and, for the first time, proposed a formal, mathematical phrase-structure grammar model. Chomsky demonstrated that language, like mathematics, is a system in which a finite number of rules can generate an infinite number of correct "results", that is, sentences. Chomsky's contributions in Syntactic Structures can be summarized as follows: 1) he introduced a precise, mathematical method of writing grammar rules; 2) he added a third level, the transformational level, to grammar theory. The model of generative grammar proposed in Syntactic Structures was a three-level, rule-based system:

1) The level of phrase structure

2) The level of transformational rules

3) The level of morphophonemics

The rules of the phrase-structure level were supposed to generate a finite set of underlying terminal strings, that is, strings with phrase-structure interpretations.

Transformational rules then mapped these underlying terminal strings into other strings; each derived string was assigned a new constituent structure. At that time, Chomsky had no term for the output of the transformational rules more technical than a string of words.
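The three-level architecture just described can be made concrete with a small sketch. The grammar, the rule format, and the toy passive mapping below are illustrative inventions, not Chomsky's actual 1957 rules: phrase-structure rules rewrite category symbols into a terminal string, and a transformation then maps that string onto another.

```python
# Toy phrase-structure grammar (illustrative, not Chomsky's actual rules).
# Each category maps to a list of alternative expansions.
PS_RULES = {
    "S":   [["NP", "Aux", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["boy"], ["book"]],
    "Aux": [["will"]],
    "V":   [["read"]],
}

def derive(start, choices):
    """Leftmost derivation: rewrite the first category symbol repeatedly,
    consuming an index from `choices` whenever a symbol has alternatives."""
    picks = iter(choices)
    string = [start]
    while any(s in PS_RULES for s in string):
        i = next(j for j, s in enumerate(string) if s in PS_RULES)
        alts = PS_RULES[string[i]]
        string[i:i + 1] = alts[next(picks)] if len(alts) > 1 else alts[0]
    return string

# A toy "passive" transformation over the terminal string NP1 Aux V NP2.
PARTICIPLES = {"read": "read"}  # irregular past participles we need

def passive(words):
    """Map NP1 - Aux - V - NP2 onto NP2 - Aux - be - V-en - by - NP1."""
    np1, aux, verb, np2 = words[:2], words[2], words[3], words[4:6]
    return np2 + [aux, "be", PARTICIPLES.get(verb, verb + "ed"), "by"] + np1

active = derive("S", [0, 1])  # picks: boy for the first N, book for the second
# active          == ["the", "boy", "will", "read", "the", "book"]
# passive(active) == ["the", "book", "will", "be", "read", "by", "the", "boy"]
```

As in the 1957 model sketched here, the transformation operates on strings of words; the Aspects model discussed in the next section lets transformations operate on trees instead.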

The introduction of the terms deep and surface structure had to await a clear statement of the competence-performance dichotomy. Meaning, in this view, was part of performance and use; it was not part of any grammatical model of the language.

Consequently, the undeniable semantic correspondences between declaratives and interrogatives, affirmatives and negatives, actives and passives, etc. were not admitted as direct evidence for setting up semantically generative transformations relating them.

Rather, their relationship was explained by syntactically motivated analysis, that is, by the syntactic transformational rules in question. A weak point of this transformational grammar model is that transformations sometimes change meaning. For example, the following active-passive pair differs in meaning, with the first quantifier taking wider scope in each case:

1) Everyone in this room speaks two languages.

2) Two languages are spoken by everyone in this room.

To summarize, at this stage of the development of transformational generative grammar, semantics and meaning were neglected, and the notion of deep structure had not yet been introduced. However, it was admitted that the underlying string of a sentence might reveal semantically relevant structure that is obscured in the terminal string of words. Consider Chomsky's classic pair of sentences:

1) John is easy to please, and

2) John is eager to please.

The underlying terminal string of the first sentence predicts that it should be interpreted as for someone to please John is easy, while the second has to be interpreted as John is eager (for John) to please (someone).

II. Deep Structure after the Publication of Aspects of the Theory of Syntax

In the second section of this paper, we will trace the development of deep structure from Syntactic Structures to Aspects. At this stage, the notion of deep structure started to emerge within the generative grammar model of transformations. Chomsky's aims in Aspects of the Theory of Syntax (1965) became more ambitious: to explain all of the linguistic relationships between the sound system and the meaning system of the language. The transformational grammar presented in Aspects had three components: syntactic, semantic, and phonological; the semantic and phonological components have access to the output of the syntactic component, but not vice versa. The syntactic component falls into two major sub-parts: a base, containing phrase-structure rules and a lexicon, and the transformational rules. Phrase-structure rules state what basic combinations of syntactic categories are permissible within the sentence, using labels like S, NP, V, and so on. Into the trees so created, words from the appropriate categories are inserted from the lexicon. The resulting trees are the deep structures of English. The second major part of the syntactic component is the transformations, which convert the trees, or deep structures, produced by the phrase-structure rules into other trees; in this sense, all transformations are structure-dependent. The tree that results from the application of a transformation is a derived structure, and the tree resulting from the application of all transformations is the surface structure. Thus, the syntactic component produces two outputs for each sentence: a deep structure and a surface structure. According to Chomsky, the syntactic surface structure is relevant to the phonological rules, while the syntactic deep structure is relevant to the semantic interpretation of sentences. At the time of the publication of Aspects of the Theory of Syntax, it seemed that all of the semantically relevant parts of a sentence, all the things that determine its meaning, were contained in its deep structure. For example, the sentence "I like her cooking" has different meanings because it has different deep structures, though only one surface structure. In contrast, the pair "the boy will read the book" and "the book will be read by the boy" has different surface structures but one and the same deep structure.
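The two directions of the deep/surface mismatch can be pictured with toy data structures. The trees below are deliberately simplified stand-ins, not the analyses actually proposed in Aspects: two distinct deep structures for "I like her cooking" flatten to the same surface word string.

```python
# Simplified constituent trees as nested tuples: (label, child, child, ...).
# These are illustrative stand-ins, not the analyses given in Aspects.

def leaves(tree):
    """Flatten a tree into its terminal word string (its surface order)."""
    if isinstance(tree, str):
        return [tree]
    _label, *children = tree
    words = []
    for child in children:
        words.extend(leaves(child))
    return words

# Reading 1: "her cooking" as a possessed noun (the food she cooks).
ds1 = ("S", ("NP", "I"),
            ("VP", ("V", "like"),
                   ("NP", ("Det", "her"), ("N", "cooking"))))

# Reading 2: "her cooking" as a clause (the fact that she cooks).
ds2 = ("S", ("NP", "I"),
            ("VP", ("V", "like"),
                   ("S", ("NP", "her"), ("VP", "cooking"))))

# Distinct deep structures, one surface string:
# leaves(ds1) == leaves(ds2) == ["I", "like", "her", "cooking"]
```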

This theory of the relation of syntax to semantics and phonology can be shown graphically as follows:

Base Component (PS rules + lexicon)
        ↓
Deep Structures → Semantic Component → Semantic representation of sentences
        ↓
Transformational Component
        ↓
Surface Structures → Phonological Component → Phonological representation of sentences

From the diagram above, we can see that the syntactic component is assigned a prominent role in this grammar model, which became known as the Standard Theory. As mentioned before, this model consists of a base that generates deep structures and a transformational part that maps them into surface structures. The deep structure of a sentence is submitted to the semantic component for semantic interpretation, and its surface structure enters the phonological component and undergoes phonetic interpretation. The final effect of a grammar, then, is to relate a semantic interpretation to a phonetic representation, that is, to state how a sentence should be interpreted. This relation is mediated by the syntactic component of the grammar, which constitutes its sole creative part, while the semantic and phonological components are assigned interpretive roles. This model differs from the previous one in its treatment of the semantic component.

Chomsky included in Aspects of the Theory of Syntax a detailed account of the semantic component (projection rules, global insertion rules, selection restrictions, rules for the interpretation of subcategorization, and semantic distinguishers). Chomsky explained that "the projection rules of the semantic component operate on the deep structure generated by the base, assigning a semantic interpretation (a "reading") to each constituent, on the basis of the readings assigned to its parts (ultimately, the intrinsic semantic properties of the formatives) and the categories and grammatical relations represented in the deep structure" (1965: 144). Formatives are items such as boy, the, seem, and so on; they can be subdivided into lexical items (boy, girl, freedom) and grammatical items (perfect, possessive, pronoun, etc.). In Chomsky's Aspects, it was argued that dictionary entries should contain subcategorization and selectional information. Radford defined subcategorization restrictions as "restrictions on the range of categories which a given item permits or requires as its Complements"; for example, devour takes an obligatory NP Complement, rely takes an obligatory PP Complement headed by on, and so on. Selection restrictions, by contrast, are idiosyncratic properties of lexical items; according to Radford, they are "semantic/pragmatic restrictions on the choice of expressions within a given category which can occupy a given sentence-position" (1988: 370). Thus, it follows that the selectional properties of items should be specified in their lexical entries. For example, murder is a verb that requires an NP Complement; moreover, its subject (first argument) and its object (second argument) must be human. These pieces of information in the lexical entry can be shown as follows:

murder: CATEGORIAL FEATURES: [+V, -N]

SUBCATEGORIZATION FRAME: [NP]

SELECTION RESTRICTIONS: subject <+human>, object <+human>

The third line of the entry tells us that both the NP preceding murder and the NP following it must denote a human being.
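A hypothetical encoding of such an entry can make the division of labour visible. The field names (`category`, `subcat`, `selection`) and the tiny feature list below are invented for illustration; only the content, that murder is [+V, -N], takes an NP complement, and selects human arguments, comes from the text.

```python
# Hypothetical lexical-entry encoding; the field names are invented here.
LEXICON = {
    "murder": {
        "category": ["+V", "-N"],          # categorial features
        "subcat": ["NP"],                  # requires an NP complement
        "selection": {"subject": "+human", "object": "+human"},
    },
}

# A tiny feature list standing in for the rest of the lexicon.
FEATURES = {"detective": "+human", "boy": "+human", "sincerity": "-human"}

def violations(subject, verb, obj):
    """List the restrictions a simple Subject-Verb-Object clause violates."""
    entry = LEXICON[verb]
    errors = []
    if obj is None and "NP" in entry["subcat"]:
        errors.append("missing NP complement")
    for role, word in (("subject", subject), ("object", obj)):
        if word is not None and FEATURES.get(word) != entry["selection"][role]:
            errors.append(f"{role} {word!r} violates {entry['selection'][role]}")
    return errors

# violations("detective", "murder", "boy")  -> []  (well-formed)
# violations("sincerity", "murder", "boy")  -> one selection violation
# violations("detective", "murder", None)   -> subcategorization violation
```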

The above elegant picture of transformational generative grammar impressed many linguists and scholars for several reasons. First, transformational rules promote economy; for example, we do not have to say "I like that she cooks in a certain way" when we can simply say "I like her cooking". Although we pay a small price for such economies in the form of ambiguities, these do not hamper communication, because the context of talk usually resolves them. Second, transformations facilitate communication by enabling us to emphasize certain things at the expense of others. For instance, we can say not only "the exam was difficult for Sara to pass" but also "it was difficult for Sara to pass the exam" and "Sara had difficulty passing the exam". In general, an understanding of transformational rules requires an understanding of their function in communication, since communication is what language is all about.

As mentioned earlier, Chomsky's transformational generative approach to grammar as presented in Aspects ran into serious problems. The very appealing view of transformational generative grammar fell apart when the behaviour of quantifiers and reflexives was noticed. For example, the sentence John is eager to win is derived from the deep structure John is eager for John to win by means of Chomsky's EQUI-NP transformation:

S.D.: NP – be – ADJ (eager) – for – NP – to VP

1  2  3  4  5  6  (obligatory)

S.C.: 1  2  3  Ø  Ø  6

Condition: 1 = 5

This transformation seems valid for the example above, but it is problematic for the deep structure Everyone is eager for everyone to win, which is supposed to derive the surface structure Everyone is eager to win. This deep structure would be better represented semantically as (Everyone x) (x is eager for x to win); however, the classical deep structure developed by Chomsky cannot handle such meanings. Concerning reflexives, some problems also appeared. For example, the sentence John voted for himself is assumed to be derived from the deep structure John voted for John by means of reflexivization, which can be stated formally as:

S.D.: X – NP – Y – NP – Z

1  2  3  4  5  (obligatory)

S.C.: 1  2  3  [4 + Refl]  5

Condition: 2 = 4, and both 2 and 4 are in the same sentence

This transformation works for the sentence John voted for himself; however, it is problematic for the sentence Every candidate voted for himself. This surface structure cannot be derived from the deep structure Every candidate voted for every candidate; the meaning of that structure is not the same, and it is ambiguous and illogical.
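The mechanical character of such a rule, and the reason it fails with quantified NPs, can be sketched as follows. The flat word-list representation and the pronoun table are simplifications assumed for the sketch; real formulations operate on phrase markers.

```python
# Toy reflexivization over flat word lists (real rules operate on trees).
REFLEXIVES = {"John": "himself", "Mary": "herself"}

def reflexivize(words):
    """S.C. analogue: if an NP recurs later in the same simple sentence,
    replace the second occurrence with the matching reflexive pronoun."""
    for i, word in enumerate(words):
        if word in REFLEXIVES and word in words[i + 1:]:
            j = words.index(word, i + 1)  # the second identical NP
            return words[:j] + [REFLEXIVES[word]] + words[j + 1:]
    return words

# reflexivize(["John", "voted", "for", "John"])
#   -> ["John", "voted", "for", "himself"]
```

The rule keys on literal identity of the two NPs, which is exactly the objection in the text: string identity is not coreference, so Every candidate voted for every candidate would be the wrong source for Every candidate voted for himself.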

Such problems with Chomsky's classical deep structure led to the development of two different approaches to semantics within transformational generative syntax. Both of these approaches make no appeal to the level of deep structure as Chomsky defined it.

The first semantic theories designed to be compatible with transformational syntax were interpretive. Interpretive semanticists argued that syntactic rules enumerate a set of well-formed sentences, each of which is assigned an interpretation by the rules of a separate semantic theory. This left syntax relatively autonomous with respect to semantics, and it was the approach preferred by Chomsky. According to Smith and Wilson, "interpretive semanticists have successively abandoned the defining assumptions which underlay the level of deep structure, transferring more and more of the work of the syntactic rules into the semantic component, so that deep structure has gradually moved nearer and nearer to the surface syntax of the sentence" (1979: 115). The second approach, developed against the existence of classical deep structure, became known as generative semantics. Generative semanticists argue that there is no clear-cut distinction between syntactic and semantic rules; therefore, the level of syntactic deep structure, defined as the input to semantics, cannot hold. Generative semanticists took Chomsky's concept of deep structure and ran with it, assuming that deep structures were the sole input to semantic representation. This assumption led to the development of more abstract and complex theories of deep structure than those advocated by Chomsky, and hence to abandoning altogether the notion of deep structure as a locus of lexical insertion.

Generative semanticists suggested as an alternative a new component which contains both semantic and syntactic rules to replace the transformational component. These rules take semantic representations as their input and produce surface structures as their output, with no intervening level of deep structure between semantic representation and surface structure. The following diagram can illustrate this approach:

Semantic representation → Semantic & syntactic rules → Surface structure

On the basis of these two types of argument, we will discuss the alternative approaches in detail. First, interpretive semanticists presume that classical deep structures are too deep for all practical purposes. They claim that deep structures should be seen as lying much nearer the surface than was thought in Chomsky's Aspects in 1965. As we have seen, one of the basic motives behind transformations is to facilitate economy, and hence the 1965 model was designed to assign the same deep structure to different surface structures generated by transformations. For example, the active and passive sentences 1) John wrote the book and 2) The book was written by John are derived from the same deep structure: the passive sentence is derived from the structure underlying the active one by means of the passive transformational rule. However, one argument against this model is that certain sentences with closely related meanings cannot be given identical deep structures. Kebbe stated that "deep structure seems to be unable to account for the idiosyncratic behaviour of particular words in English" (1995: 175). Consider the following examples given by Wilson and Smith:

1) “John caused Bill to die.

2) John killed Bill.

3) Mary caused the door to open.

4) Mary opened the door.

5) It is probable that Manchuria will annex Wales.

6) Manchuria will probably annex Wales” (1979: 119).

From the sentences given above, it seems that 2) is derived from 1), 4) from 3), and 6) from 5). However, classical deep structure, as defined, forbids positing a transformational relationship between these pairs. In Chomsky's standard theory model, such relationships would be stated not at the level of deep structure but in the lexicon, where information about the idiosyncratic behaviour of lexical items is provided. Thus, it would be mentioned in the lexicon that for an intransitive verb like open there is a transitive counterpart, carrying the meaning to make or cause to open.

Similarly, for adjectives such as probable, which can occur with a sentential complement, as in 5), it could be stated in the lexicon that there is a related adverb with the same meaning that can occur inside the sentence acting as complement to the adjective, as in 6). There are good reasons for attributing these relationships to the lexicon rather than to syntactic transformational rules. First, it would be difficult for classical deep structure to handle such cases without further complicating the syntactic transformational rules. Second, and more importantly, not every verb or adjective is able to enter into such relationships. For instance, it has to be stated for each verb whether it has a transitive causative form or not. As we have just seen, the verb open has a causative counterpart, while the verb creak does not. Consider the following pair:

1) Mary caused the door to creak.

2) * Mary creaked the door.

From this pair of sentences, we can see that the second sentence is deviant whereas its counterpart is well-formed. Similarly, it has to be stated for each adjective whether or not it has a related adverb. The following sentences illustrate the contrast between the probable/probably pattern noted earlier and the impossible/impossibly pattern:

1) It is impossible that Manchuria will annex Wales.

2) * Manchuria will impossibly annex Wales.

From the sentences above, we can see that the first sentence is well-formed, while the second is deviant.

In addition, it is true that classical deep structure permits a transformational treatment of passivisation, because almost every transitive verb has a related passive; however, this does not hold for all verbs. It is worth noticing that certain English verbs enter into structures of the type NP V NP but do not have related passives. The following sentences illustrate this point:

1) The meal costs ten pounds.

2) * Ten pounds are cost by the meal.

3) Sara resembles your sister.

4) * Your sister is resembled by Sara.

5) Fatima became the head of the department.

6) * The head of the department was become by Fatima.

7) The athlete ran two miles.

8) * Two miles were run by the athlete.

The deviant cases in the above list indicate that it is impossible for classical deep structure to handle such cases without further complicating the syntactic component and the lexicon. One way to solve this problem is to state in the lexicon, for each passive-forming verb, that if a structure of the form NP1 – V – NP2 is well-formed, so is the corresponding NP2 – be – V-ed – by – NP1; when a verb has no passive form, such a statement is simply omitted. Some linguists objected that such a treatment would be better incorporated into the grammar rather than the lexicon: any verb which can occur in the active form will also be able to occur in the passive form, unless it is explicitly marked as being unable to. This would have the effect of leaving the general case, e.g. write, unmarked, while exceptions such as run are marked in the lexicon. Consequently, duplicating this information in the grammar would be not only misleading but also redundant, a property contrary to the ultimate goal of transformations, which is economy and simplicity. The same information about lexical relationships can be captured in the lexicon without any mention of a level of deep structure. Wilson and Smith argued that "deep structure can be abandoned, not because it is not deep enough to handle all the facts, but because it is deeper than is necessary" (1979: 122).
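The lexicon-based treatment described above, passive formation as the unmarked default with exceptions explicitly flagged, can be sketched as follows. The verb set comes from the examples in the text; the function itself is an illustrative assumption.

```python
# Passive as the unmarked case: a verb passivizes unless its lexical entry
# carries an explicit [-passive] mark. Verbs taken from the examples above;
# the function itself is an illustrative assumption.
NO_PASSIVE = {"cost", "resemble", "become", "run"}  # marked exceptions

def passivize(np1, verb, past_participle, np2):
    """NP1 - V - NP2  ->  "NP2 was V-en by NP1", or None if marked [-passive]."""
    if verb in NO_PASSIVE:
        return None  # no well-formed passive counterpart
    return f"{np2} was {past_participle} by {np1}"

# passivize("John", "write", "written", "the book")
#   -> "the book was written by John"
# passivize("Sara", "resemble", "resembled", "your sister")
#   -> None
```

Marking run wholesale is of course too crude (the passive is fine for run the factory); a full entry would mark the measure-phrase frame specifically, which is itself part of the point about lexical idiosyncrasy.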

Moreover, the principle that transformations are meaning-preserving was soon discovered to be one of the weak points of the Standard Theory model, from two different points of view. First, passivization, as an example of a transformational rule, should not affect the semantic interpretation of the derived passive construction.

However, this principle does not hold for all sentences. Kebbe (1995) provided two pairs of examples to illustrate this point: the sentences Someone has brought the flowers and The flowers have been brought are synonymous, while Beavers build dams and Dams are built by beavers are not. The latter sentence can be assigned two interpretations: one is identical to its active counterpart, whereas the other differs from it, implying that all dams are built by beavers, a meaning not suggested by the active form. Cases like this proved the inadequacy of the claim that transformations do not change meaning, and they raised doubts about the validity of deep structure.

The second argument against classical deep structure was based on the belief that deep structure is not deep enough; this argument is known as generative semantics as opposed to the argument explained earlier known as interpretive semantics. Generative semanticists take classical deep structures and carry them a stage further.

It should be recalled that in Chomsky’s standard theory deep structures contain actual words inserted from the lexicon, and that once a word has been inserted into a tree, it cannot be turned into another word. Thus, no deep structure containing the verb give could be transformed into a surface structure containing the verb receive, for example.

Consider the relationship between the following active and passive pair:

1) Sara gave a book to Lama.

2) Lama was given a book by Sara.

These sentences are synonymous because they express the same subject-verb-object relationships: in both, Sara is the giver and Lama the receiver. These facts follow because both sentences are derived from the same deep structure, the second by the passive transformational rule. Generative semanticists argue that if transformations are allowed to derive one syntactic structure from another, there is no reason why we cannot carry the principle a step further and derive one lexical item from another. To illustrate this point, consider the relationships between the following sentence pairs:

1) John gave a book to Bill.

2) Bill received a book from John.

3) I consider your suggestion insulting.

4) Your suggestion strikes me as insulting.

5) Maria sold Harry an icebox.

6) Harry bought an icebox from Maria.

7) I liked the play.

8) The play pleased me.

9) John struck Bill.

10) Bill received a blow at the hands of John.

Clearly, there is a meaning relation between each pair to the extent that each of these pairs may be regarded as synonymous. The first pair of sentences, for example, expresses the fact that John is the giver and Bill the receiver of the book. Accordingly, these pairs should be given common deep structures, and their relationships should be expressible in transformational terms, as is possible, for example, in these cases:

1) i) John is easy for us to please.

ii) It is easy for us to please John.

2) i) It was yesterday that he came.

ii) He came yesterday.

In the above examples, the deep structures of the paired sentences are identical in all respects relevant to semantic interpretation, and transformations account for their synonymy. Given the way deep structure was defined, only pairs like these can be given a common deep structure, because they contain the same words. The sentences in (1)-(10) cannot be dealt with in this way, since they contain different words. The criticism made by generative semanticists is that Chomsky's classical deep structure misses the obvious relationships between the pairs in (1)-(10).

They claim that deep structure should be more abstract to allow us to derive all synonymous pairs from a common source, even if this means adopting the claim that deep structures do not contain actual words but a representation of the meanings of words.

Furthermore, another argument for a deeper and more abstract deep structure is furnished by an implicit assumption of classical deep structure itself: that every sentence with two or more distinct meanings corresponds to two or more distinct deep structures. Given this assumption, we are immediately committed to finding two deep structures for the sentence John wants to fly an aeroplane over the North Pole. This sentence may be assigned two readings: one means that there is a particular aeroplane which John wants to fly over the North Pole, and no other will satisfy him; the other means that he wants to fly an aeroplane over the North Pole, but any one will do. If these two interpretations reflect differences in meaning, then we need two different deep structures to account for this fact. However, classical deep structure analysis gives only one deep structure for the sentence, and both interpretations seem to be syntactically identical. To provide different deep structures, we must go deeper than classical deep structure and provide paraphrases such as the following to capture the difference in meaning:

1) There is an aeroplane which John wants to fly over the North Pole.

2) John wants to fly some aeroplane or other over the North Pole.

Again, the conclusion is that deep structures are much deeper than Chomsky's classical deep structure and that linguistic theory must be revised accordingly.

As a result, the arguments against classical deep structure led most linguists to abandon it, opting for the generative or the interpretive approach. However, it would be a mistake to think that the notion of transformational grammar is a thing of the past and should be completely abandoned. Classical deep structure revealed a great wealth of facts and assumptions for the new theories to come.

By 1972, further revisions of Chomsky's Aspects model took place so as to overcome the problems mentioned above, and this led to the development of the extended standard theory. The extended standard theory is the model of grammar without autonomous syntax. There were several reasons for the modifications. First, what was referred to as the semantic representation of a sentence was no longer seen as a single, uniform structure. This theory assumes that the syntactic component interacts with the semantic component many times during the processing of syntactic structures. Before the application of transformations, the deep structure enters the semantic component, where it is interpreted in terms of its functional structures. At this stage, the semantic component provides information on the interpretation of the various semantic roles used in the sentence, such as agent, patient, goal, experiencer, etc. This deep structure then continues to be processed syntactically as it undergoes various cycles of transformations.

This information is again directed to the semantic component for further interpretation.

This time the semantic component provides information on modal structures including the scope of negation, quantifiers, and a table of coreferences. To explain scope relations, consider the following VP: not wholly understand it. Here, we can say that not has scope over the VP: wholly understand it. To shed light on the notion of coreference, the following sentences can be given a table of coreferences in the following way:

1) Mary saw herself in the mirror. (the pronoun here is co-referential)

2) Mary saw her in the mirror. (the pronoun here is not co-referential)

3) Mary thinks that she is attractive. (the sentence here is ambiguous because there is no specification of referentiality)

These rules could not be processed at the deep structure level; the process has to be delayed until certain modifications and rule applications take place within the transformational component of the language. Finally, at the surface structure level, the output of the final transformational cycle is sent to the semantic component for further processing. At this stage, the semantic representation is analyzed for focus, presupposition, and topicalization. The presupposition of a declarative sentence, for example, is concerned with what the speaker assumes the hearer knows. Focus, on the other hand, has to do with what the speaker assumes that the hearer does not know. Words assumed to be focused carry extra heavy stress. Topicalization differentiates between old and new information. The following example illustrates this point:

1) Mary drank the MILK.

The speaker presupposes that Mary exists, Mary did something, and Mary drank something. The focus here is on the MILK in that Mary drank milk and not water.

Concerning topicalization, we can say that it is old information that Mary did something, while the new information is that Mary drank MILK. The model of the extended standard theory, described as the model of grammar without autonomous syntax, can be given the following diagram:

Base Component (PS Rules, Lexical Insertion)
        ↓
Deep Structure → Semantic Representation: Functional Structures
        ↓
Transformational Component (Cyclic Transformations) → Modal Structures: Scope of Negation, Scope of Quantifiers, Table of Coreferences
        ↓  (Final Cycle Transformation)
Surface Structure → Focus, Presupposition, Topicalization

After the development of the above-mentioned grammatical theory, several attempts were made to render syntax autonomous. Some linguists proposed that full lexical forms exist at the deep structure level. Reflexivization, for example, requires semantic information on coreferentiality to transform John saw John into John saw himself. In the case of Equi-NP Deletion, coreferentiality is also needed to transform

Mary expects Mary to win into Mary expects to win. However, it is possible to avoid these references to the semantic component if pronouns and dummy subjects (gaps) already exist at the level of deep structure. This introduction of abstract elements and empty categories into the deep structure of sentences marked an important turning point

in linguistic theory. In fact, it led to the emergence of Government and Binding (GB) theory.

III. Deep Structure within GB Theories

The development of GB theory as a grammar model began in 1977, when Chomsky and his colleagues proposed some major revisions of the extended standard theory. Since the concepts of deep structure and surface structure were substantially revised by the inclusion of abstract elements, trace elements, and empty categories, they were given new terms, namely D-structure and S-structure respectively. These are related to each other by means of movement rules. S-structures are further developed into phonetic forms (PFs) and

logical forms (LFs). Thus GB theory identifies four levels of grammatical representation:

D-structure (DS), S-structure (SS), Logical Form (LF), and Phonetic Form (PF). D-structure can be described as the phrase marker at which pure grammatical functions and thematic relations may be represented. According to Hornstein, Nunes, and Grohmann,

“DS is where an expression’s logical/thematic role θ perfectly coincides with its grammatical function GF: logical subjects are DS (grammatical) subjects, and logical objects are DS (grammatical) objects, etc.” (2005: 20). At DS, positions that are thematically active must be filled and positions with no thematic role must be left empty.

To illustrate this point, consider the verbs in the following sentences:

1) John persuaded Sara to interview Mary.

2) John seems to like Mary.

3) It seems that John likes Mary.

In the first sentence and in terms of thematic relations, persuade requires a persuader

(agent), a persuadee (theme or patient), and a propositional complement, while interview requires an interviewer and an interviewee. Given that 1) is an acceptable sentence, each of these θ-roles must correspond to a filled position in its DS representation, illustrated as follows:

[ John_persuader persuaded Sara_persuadee [ ec_interviewer to interview Mary_interviewee ]_proposition ]

We can notice here that once we assume the notion of DS, its representation must have a filler in the position associated with the interviewer θ-role, although it is not phonetically realized. In GB, the empty category realized as ec is an obligatorily controlled PRO, whose antecedent is Sara. In sentences 2) and 3), the verb like has two θ-roles to assign: the liker (agent) and the likee (theme or patient), whereas seem has only one θ-role to assign, to its propositional complement. Specifically, seem does not assign a θ-role to the position occupied by John in 2), as can be seen from the fact that this position may be filled by an expletive, as in 3). This means that John was not base-generated there but rather must have gotten there transformationally. Thus, the matrix subject position of the DS representation of 2) is filled by nothing at all, not even a null expression. This can be shown as follows, where __ represents an empty position:

[ __ seems [ John_liker to like Mary_likee ]_proposition ]

As regards the functional characterization, DS is defined as the starting point for a derivation. It is the phrase marker that is the output of phrase-structure operations plus lexical insertion and the input to transformational operations. Thus, it is the locus of a grammar’s recursivity. S-structure is the point at which the derivation splits, sending off one copy to PF for phonetic interpretation and one copy to LF for semantic interpretation.

It is regarded as the queen of GB-levels. PF and LF are interface levels within GB. They provide the grammatical information required to assign a phonetic and semantic interpretation to a sentence. This model of grammar can be given the following diagram:

D-Structure
      ↓  (Movement Rules)
S-Structure
    ↙        ↘
Phonetic Form          Logical Form
(Phonological Rules)   (Semantic Rules)

GB embodies a very simple transformational component. It includes two rules: Bind and

Move. Bind allows free indexing of DPs (determiner phrases), and Move allows anything to move anywhere at any time. The sentence is now represented as an inflectional phrase (IP).

The two very general rules of the transformational component over-generate unacceptable structures. Therefore, GB grammars deploy a group of information-specific modules that interact to exclude unwanted structures and prepare a phrase marker for interpretation at LF and PF. θ-roles (Theta theory), phrase structure (X´-theory), and Case theory are examples of the most essential modules within GB theory. However, as a deeper understanding of the structure of specific constituents was achieved, many of these modules lost empirical motivation. For example, the X´-theory at D-structure had to be revised from a minimalist perspective. As was mentioned, DS within GB is described as the level where lexical properties meet the grammar. Logical subjects are syntactic subjects, logical objects are syntactic objects, etc. Two grammatical modules known as

Theta theory and X´-theory govern lexical properties and phrasal structures at DS. As was shown in the examples above, Theta theory ensures that only thematic positions are filled. X´-theory ensures that the phrasal organization of all syntactic objects has the same general format. XPs consist of a head element within a phrasal construction. The head is a lexical category and the phrasal component is called a projection. For example, N is the head in an N-bar, and N-bar is the intermediate projection of NP or N-double-bar. The diagram below illustrates this point:

N-Double Bar (Maximal Projection)
 ├── Spec: the
 └── N-Bar (Intermediate Projection)
       └── N (Base Category): president

Many arguments against DS within GB have been advanced, and this led to the revision of this grammatical level. First, DS within GB is the place where grammatical recursion occurs. A language universal is that sentences can be of arbitrary length. Within GB, this potential is captured at DS by allowing a category A to be embedded within another category of type A. Consider the following examples:

i. [a tall man]

ii. [a tall bearded man]

iii. [a tall bearded man with a red shirt]
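The arbitrary-length property these bracketings exemplify can be made concrete with a toy recursive rewriting procedure. The rule set, lexicon, and function below are illustrative inventions, not part of any of the theories discussed: an NP may contain a PP, and a PP in turn contains an NP, so category A is embedded within category A.

```python
import random

# Toy recursive phrase-structure rules (illustrative only): an NP may
# contain a PP, and a PP contains an NP, so NPs have no upper bound
# on length -- a category of type A embedded within another A.
RULES = {
    "NP": [["Det", "Adjs", "N"], ["Det", "Adjs", "N", "PP"]],
    "PP": [["P", "NP"]],
}
LEXICON = {
    "Det": ["a"],
    "Adjs": ["tall", "tall bearded"],
    "N": ["man", "shirt"],
    "P": ["with"],
}

def generate(category, depth=0):
    """Expand a category top-down; cap the depth so recursion terminates."""
    if category in LEXICON:
        return random.choice(LEXICON[category])
    expansions = RULES[category]
    # Past a depth cap, always pick the first (non-recursive) expansion.
    rhs = expansions[0] if depth > 4 else random.choice(expansions)
    return " ".join(generate(c, depth + 1) for c in rhs)

print(generate("NP"))  # e.g. "a tall man" or a longer, PP-embedded NP
```

Each run may embed another NP inside the PP, so no fixed bound on sentence length need be stated anywhere in the rules themselves.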

In terms of function, DS within GB is the output of phrase-structure operations and lexical insertion, and the input to movement operations. However, some arguments against GB's DS showed that recursion can be captured without any reliance on DS, as we will see later.

Another weak point about DS relates to the fact that DS within GB is the pure representation of thematic properties, in that all lexical/thematic properties must be satisfied there. For instance, take a control structure, whose subject is understood as playing a semantic role with respect to both the control predicate and the embedded predicate, such as Sara hoped to interview Chomsky. Here, the verb hope requires a proposition for a complement and a hoper for its external argument. By the same token, the embedded verb interview must discharge its interviewer and interviewee θ-roles. The subject position associated with interview must be filled at DS although there is no phonetically realized element to occupy this position. In GB, this position is filled by the empty category PRO, which is coindexed with the matrix subject so that Sara here plays two different semantic roles. The DS can be stated formally as follows:

[ Sara_hoper hoped [ PRO_interviewer to interview Chomsky_interviewee ]_proposition ]

As regards raising structures, whose subjects are interpreted as playing only a role associated with the embedded predicate, the sentence Sara seemed to interview Chomsky may illustrate the role of DS within GB. In this example, the verb seem takes a proposition for a complement, but its subject position is non-thematic. Thus, Sara cannot occupy this position at DS. However, the embedded verb interview assigns two θ-roles.

The DS representation of this sentence can be stated formally as follows:

[ __ seemed [ Sara_interviewer to interview Chomsky_interviewee ]_proposition ]

These two simple examples argue for the existence of DS to account for the establishment of thematic relations; however, arguments against DS rely on the fact that

the Theta-Criterion is satisfied at DS and, by virtue of the Projection Principle, also at SS and LF.

According to Radford, Projection Principle can be defined, as “syntactic representations

[i.e. syntactic structures] must be projected from the lexicon, in that they observe the subcategorization properties of lexical items" (1988: 391). Assuming that LF is the input to the rules mapping to the semantic interface, notions like agent, patient, etc. are encoded at this level. Moreover, the Projection Principle ensures that some kinds of

information are preserved during the derivation by inspecting them at subsequent levels of representation. Thus, the thematic relations encoded at DS are a subset of the ones encoded at LF, and the Projection Principle makes the system redundant. Minimalists argue against this redundancy by eliminating DS from the grammar, as we will see later.

Third, a serious empirical problem for DS as conceived by GB is posed by tough-constructions, as in the sentence Moby Dick is hard for Bill to read. If we replace

Moby Dick with these books, as in the sentence These books are hard for Bill to read, the agreement features of the copula change, indicating that these elements occupy the matrix subject position of their sentences. The synonymous sentence It is hard for Bill to read

Moby Dick indicates that Moby Dick is related to the embedded object position; that is, it is understood as the thing being read. The problem here is that it is quite unclear what sort of movement this could be. Some claim that it involves A-movement leaving a trace, which can be illustrated as follows:

1) [ Moby Dicki is hard [ for Bill to read ti ] ]

ti is an anaphor and should be bound within the embedded clause in order to comply with

Principle A of Binding Theory. Principle A states that an anaphor (e.g. a reflexive or reciprocal) must be bound in its domain. Since ti is unbound in this domain, this cannot be A-movement. Chomsky suggests that it is an instance of A´-movement, with a null operator OP moving close to the tough-predicate and forming a complex predicate with it. Its structure can be shown as follows:

2) [ Moby Dick is [ hard [ OPi [ for Bill to read ti ] ] ] ]

Movement of the null operator allows the formation of the complex predicate [ hard [ OPi

[ for Bill to read ti ] ] ], which is predicated of the subject Moby Dick. Therefore, the matrix subject position in the sentence is a θ-position because Moby Dick receives a θ-role under predication. Both analyses prove problematic for DS concerning the thematic status of the matrix subject in 2). The sentence It is hard for Bill to read Moby Dick revealed that the matrix subject of a tough-predicate is not inherently a θ-position, since it can be occupied by an expletive. This implies that the matrix subject position in 2) is only a θ-position after A´-movement has applied and the complex predicate has been formed. If Moby Dick is inserted at DS, then it is not inserted at the point when the matrix subject is a θ-position. If it is inserted after the null operator has moved, the conclusion is that we can have insertion into a θ-position after DS. Both ways are problematic and require the revision of the status of DS as a grammatical level, as we will see within the minimalist approach.

IV. Deep Structure from the Perspective of Minimalism

As was explained, DS is the place where grammatical recursion occurs. In terms of function, DS within GB is the output of phrase-structure operations and lexical insertion, and the input to movement operations. Although the notion of DS was very important within GB, it became questionable after the development of minimalism. In the 1990s,

Chomsky introduced a new grammar model known as minimalism, in which DS and SS no longer featured and PF and LF remained as the only levels of grammatical representation, for several reasons.

First, as we have seen, DS is the level where recursion occurs. However, minimalists argue that recursion would not be lost after the elimination of DS. Their argument is based upon the observation that earlier approaches to UG adequately captured recursion without positing DS. Within the minimalist approach, the operation merge can ensure recursion in a system that lacks DS because it takes lexical items and combines them into phrasal structures that comply with X´-theory. For example, merge takes the two lexical items saw and Mary to form a VP; this VP is then merged with Infl to form the I´. Further applications of merge apply to arrive at the IP John said that Bill saw

Mary, for example. This sentence is a standard example of grammatical recursion because it involves a VP embedded within another VP, an I´ embedded within another I´, and an IP embedded within another IP. Therefore, we can say that recursion can be captured without any mention of DS.
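As a minimal sketch (the representation of syntactic objects as nested pairs is my own simplification, not Chomsky's formalism), the point that merge alone supplies recursion can be made concrete: because merge's output can itself serve as one of its inputs, the system is recursive with no need for a distinct DS level.

```python
# Represent a syntactic object as a pair, and merge as pair formation.
def merge(a, b):
    return (a, b)

vp = merge("saw", "Mary")                 # VP: saw Mary
i_bar = merge("Infl", vp)                 # I': Infl + VP
ip = merge("Bill", i_bar)                 # IP: Bill saw Mary
cp = merge("that", ip)                    # CP: that Bill saw Mary
vp2 = merge("said", cp)                   # a VP embedded within another VP
ip2 = merge("John", merge("Infl", vp2))   # an IP embedded within another IP

print(ip2)
```

Nothing in this procedure refers to a level of representation at which the whole structure must first be assembled; recursion falls out of repeated applications of merge alone.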

Second, we have seen that within GB, the Projection Principle ends up rendering the system intrinsically redundant. From the perspective of minimalism, this redundancy can be eliminated by assuming that the Theta-Criterion holds at the level of LF. To explain this point, we need to illustrate that the differences between raising and control structures can be accounted for, even though their thematic properties are assumed at LF.

For example, the fact that Sara is understood as both hoper and interviewer in the sentence Sara hoped to interview Chomsky can be captured by the structure in 1), but not by the one in 2).

1) [ Sarai hoped [ PROi to interview Chomsky ] ]

2) *[ Sarai hoped [ ti to interview Chomsky ]]

If we argue for the existence of DS and assume that the Theta-Criterion holds at this level, we are obliged to choose the representation in 1), because in 2) Sara is not in the matrix subject position at DS and the Theta-Criterion is violated at this level, as shown below:

DS: *[ __ hoped [ Sara_interviewer to interview Chomsky_interviewee ]_proposition ]

However, according to minimalists the elimination of DS does not interfere with the fact that 1) still illustrates the adequate representation of the sentence Sara hoped to interview

Chomsky by exploring the different empty categories that each structure employs. The difference between PRO in 1) and the trace in 2) indicates that θ-relations must be

established by lexical insertion and cannot be established by movement. This assumption does not necessarily presuppose DS. As was seen earlier, recursion/generativity within the minimalist approach is captured by the operation merge. Moreover, the principle known as the Theta-Role Assignment Principle (TRAP) states that θ-roles can only be assigned under a merge operation (Hornstein, Nunes, and Grohmann, 2005). TRAP is not specific to any level of representation. According to this principle, the structure in 1) is well formed because the interviewer θ-role was assigned to PRO when it merged with the embedded I´ and the hoper θ-role was assigned to Sara when it merged with the matrix I´. Thus, when the Theta-Criterion applies at LF, the derivation will be judged convergent. By contrast, although Sara can receive the interviewer θ-role in 2) above when it merges with the embedded I´, it cannot receive the hoper θ-role because it is connected to the matrix clause by move and not by merge. According to Chomsky

(1995), the movement operation means that a category can be moved to a target position.

He goes on to state that "movement of an element α always leaves a trace and, in the simplest case, forms a chain (α, t), where α, the head of the chain, is the moved element and t is its trace" (1995: 43). Therefore, since the hoper θ-role is not discharged,

2) violates the Theta-Criterion at LF and the derivation fails.

Concerning the raising construction, the same reasoning applies. For example,

Sara seemed to interview Chomsky can be given the LF representation in 3), but not the one in 4). 3) and 4) can be stated formally as follows:

3) LF: [ Sarai seemed [ti to interview Chomsky]]

4) LF: *[ Sarai seemed [ PROi to interview Chomsky]]

3) is well formed because Sara receives its θ-role when it merges with the embedded I´ and then moves to a non-thematic position. In contrast, in 4) Sara receives no θ-role when it merges with the matrix I´, violating the Theta-Criterion and causing the derivation to fail at LF. From the above examples, we can see that TRAP accounts for the well-formedness of raising and control structures without assuming that we need a level like

DS.
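The contrast can be mimicked in a toy checker. The data structures and the derive function below are illustrative assumptions, not an implementation of any published formalism: θ-roles are recorded only when an element enters the structure by merge, as TRAP requires, and the Theta-Criterion is inspected only at the end of the derivation, mirroring its application at LF.

```python
# Toy model of TRAP: theta-roles are assigned under merge, never under move.
def derive(steps, theta_roles_needed):
    """steps: list of (operation, element, theta-role-or-None) triples.
    Returns True iff every required theta-role is discharged
    (a stand-in for the Theta-Criterion applying at LF)."""
    assigned = set()
    for op, element, role in steps:
        if op == "merge" and role is not None:
            assigned.add(role)        # TRAP: assignment only under merge
        # a "move" step re-uses an element but assigns no theta-role
    return assigned == set(theta_roles_needed)

# Control: PRO merges into the embedded clause, Sara merges into the matrix.
control = [("merge", "PRO", "interviewer"), ("merge", "Sara", "hoper")]
# Deviant alternative 2): Sara merges low, then merely moves to the matrix.
raising_like = [("merge", "Sara", "interviewer"), ("move", "Sara", "hoper")]

print(derive(control, {"interviewer", "hoper"}))       # converges
print(derive(raising_like, {"interviewer", "hoper"}))  # hoper undischarged
```

The first derivation converges and the second fails, with no reference to any DS-like level at which thematic positions must be filled.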

Third, as was noted earlier, DS within GB is problematic in tough-constructions; however, these become non-problematic if we dispense with DS. For instance, Hornstein,

Nunes, and Grohmann explained this point by showing how successive applications of the

Move and Merge operations can account for the sentence Moby Dick is hard for Bill to read. They traced the derivation of this sentence as follows: "a. Applications of Merge →

[C´ for Bill to read OP]

b. Move OP →

[CP OPi [for Bill to read ti ] ]

c. CP + Merge hard →

[AP hard [CP OPi [for Bill to read ti ] ] ]

d. AP + Merge is →

[I´ is [AP hard [CP OPi [for Bill to read ti ] ] ] ]

e. I´ + Merge Moby Dick →

[IP Moby Dick is [AP hard [CP OPi [ for Bill to read ti ] ] ] ]” (2005: 67).

In this sentence, read merges with the null operator, and after further applications of Merge we obtain the C´ in a. In b., the null operator moves to derive the complementizer phrase CP. Then, CP merges with hard, as shown in c. After that, the adjective phrase AP merges with is to form I´, creating a complex predicate that can assign a θ-role to the

external argument (Moby Dick). Finally, when Moby Dick merges with I´, it becomes the matrix subject and is θ-marked according to the Theta-Role Assignment Principle

(TRAP) stated earlier. Therefore, it seems that we can provide an adequate account of the simple and complex predicates of tough-constructions if we eliminate DS, and this is the strongest argument against DS.
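Steps a.–e. of this derivation can be re-traced mechanically. Below is a sketch in which syntactic objects are simplified to nested pairs and operator movement to a substitution; these representational choices are mine, not Hornstein, Nunes, and Grohmann's.

```python
# Syntactic objects as nested pairs; Merge as pair formation.
def merge(a, b):
    return (a, b)

def move_op(struct):
    """Move the null operator OP to the edge, leaving a trace 't'
    (movement simplified here to a substitution)."""
    def substitute(s):
        if s == "OP":
            return "t"
        if isinstance(s, tuple):
            return tuple(substitute(x) for x in s)
        return s
    return merge("OP", substitute(struct))

# a. applications of Merge -> C'
c_bar = merge("for", merge("Bill", merge("to", merge("read", "OP"))))
cp = move_op(c_bar)             # b. Move OP -> CP
ap = merge("hard", cp)          # c. CP + Merge hard -> AP
i_bar = merge("is", ap)         # d. AP + Merge is -> I'
ip = merge("Moby Dick", i_bar)  # e. I' + Merge Moby Dick -> IP

print(ip)
```

The subject enters the structure only at step e., after the complex predicate has been formed, so no DS-like level at which it would have to be inserted earlier is ever invoked.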

In conclusion, in the above sections we have examined the evolution of the notion of deep structure throughout the development of transformational generative grammar.

DS was an essential feature until the development of the minimalist approach. Within this approach, it has been argued that we can dispense with DS without any loss of information, for the sake of economy and simplicity, which constitute the ultimate goals of minimalism as an adequate grammar model.

References:

Chomsky, Noam. (1995). The Minimalist Program. Cambridge, Massachusetts: The MIT Press.

______. (1993). Lectures on Government and Binding: The Pisa Lectures. Berlin, New York: Mouton de Gruyter.

______. (1965). Aspects of the Theory of Syntax. Cambridge, Massachusetts: The MIT Press.

Hornstein, N., Nunes, J. and Grohmann, K. (2005). Understanding Minimalism. Cambridge: Cambridge University Press.

Kebbe, M. Z. (1995). Lectures in General Linguistics: An Introductory Course.

Radford, Andrew. (1997). Syntax: A Minimalist Introduction. Cambridge: Cambridge University Press.

______. (1988). Transformational Grammar: A First Course. Cambridge: Cambridge University Press.

Smith, N., and Wilson, D. (1979). Modern Linguistics: The Results of Chomsky's Revolution. England: Penguin Books.

