

TESIS DOCTORAL

Verbs of pure change of state in English: projection, constructions and event structure

Beatriz Martínez Fernández


Universidad de La Rioja, Servicio de Publicaciones, 2004

This doctoral thesis, supervised by Dr Javier Martín Arista, was defended on 25 July 2007 and awarded the grade of Sobresaliente Cum Laude (unanimously).

© Beatriz Martínez Fernández

Published by: Universidad de La Rioja, Servicio de Publicaciones

ISBN 978-84-691-1311-0

Verbs of pure change of state in English:

projection, constructions and event structure

Beatriz Martínez Fernández

Doctoral thesis supervised by Dr Javier Martín Arista

Department of Modern Languages

University of La Rioja

CONTENTS

Chapter 1. Introduction
1.1 Research aims
1.2 Rationale
1.3 Working hypotheses
1.4 Chapter outline

Chapter 2. Literature Review
2.0 Introduction
2.1 Different approaches to lexical representation
2.1.1 Levin and Rappaport Hovav (2005)
2.1.2 Alsina (1999)
2.1.3 Lieber (2004)
2.1.4 Faber and Mairal (1999), Van Valin and Mairal (2001), Mairal and Ruiz de Mendoza (forthcoming 2007), Ruiz de Mendoza and Mairal (forthcoming 2007)
2.1.5 Pustejovsky's generative lexicon
2.1.5.a Introduction: aims of the GL
2.1.5.b The generative lexicon: structure
2.1.5.c Event structure
2.1.6 Role and reference grammar
2.1.6.a RRG's system of lexical representation
2.1.6.b RRG's event structure in comparison to GL's event structure
2.2 Conclusions

Chapter 3. Methodology
3.0 Introduction
3.1 Structure of the analysis
3.1.1 Paradigmatic and syntagmatic analysis
3.1.2 Type and token analysis
3.2 The corpus: main characteristics and compilation process
3.3 Verbs under study
3.4 WordNet
3.5 Sections of the analysis: word structure, argument structure and event structure
3.5.1 Word structure
3.5.1.a Lexical derivation
3.5.1.b Semantic relationships
3.5.2 Argument structure (qualitative and quantitative analysis)
3.5.2.a Syntactic and semantic valence
3.5.2.b Argument-adjuncts
3.5.2.c Periphery
3.5.2.d Entity type
3.5.2.e Nominal aspect
3.5.2.f Reference
3.5.3 Event structure (syntactic analysis)
3.5.3.a Subeventual structure
3.5.3.b Causal sequence
3.5.3.c Temporal relation
3.5.3.d Type coercion
3.6 Conclusions

Chapter 4. Description and Results of the Analysis
4.0 Introduction
4.1 Word structure
4.1.1 Lexical derivation
4.1.1.a Distribution of prefixes and suffixes
4.1.1.b Changes in grammatical category
4.1.1.c The origin of verbs: Romance vs. Germanic verbs
4.1.2 Semantic relations
4.1.2.a Antonymy
4.1.2.b Synonymy
4.1.2.c Hyponymy and hyperonymy
4.1.2.d Polysemy
4.1.3 Summary
4.2 The corpus: prototypical, less-prototypical and non-prototypical examples
4.2.1 Prototypical examples
4.2.1.1 Argument structure
4.2.1.1.a Syntactic valence
4.2.1.1.b Semantic valence
4.2.1.1.c Argument-adjuncts
4.2.1.1.d Periphery
4.2.1.1.e Entity type
4.2.1.1.f Nominal aspect
4.2.1.1.g Reference
4.2.1.2 Event structure
4.2.1.2.a Subeventual structure
4.2.1.2.b Headedness
4.2.1.2.c Causal sequence
4.2.1.2.d Temporal relation
4.2.1.2.e Type coercion
4.2.1.2.f Merge: one special case
4.2.1.3 Summary
4.2.2 Less-prototypical and non-prototypical examples
4.2.2.1 Definition
4.2.2.2 Differences between the groups of less-prototypical and non-prototypical examples
4.2.2.2.a Less-prototypical examples
4.2.2.2.b Non-prototypical examples
4.3 Conclusions

Chapter 5. Conclusions

Chapter 6. Lines of Future Research
6.0 Introduction
6.1 Broad lines of future research
6.2 More specific lines of future research

Bibliography
Appendix: Sample of the Corpus – Prototypical Examples
Contents, Summaries and Conclusions in Spanish

CHAPTER 1: INTRODUCTION

1.1 Research aims

Role and Reference Grammar (RRG) is a theory of clausal relations whose main strength is its typological validity. It aims to capture the relationship that exists between syntax and semantics – and pragmatics – from a communicative perspective. Part of its attractiveness is due to its eclecticism: RRG successfully integrates ideas from a number of theories, such as Rijkhoff's theory of phrase structure from Functional Grammar, the notion of constructional template adapted from Construction Grammar, Lambrecht's theory of information structure, Pustejovsky's theory of nominal qualia, Kuno's pragmatic analysis of pronominalization, and Jackendoff's ideas about reflexivization, to mention just a few (Van Valin and LaPolla 1997: 640).1

However, there are some weaknesses in its structure. To be precise, although the system of lexical representation of RRG succeeds in accounting for the semantics of verbs as displayed in the lexicon, it fails to satisfactorily explain the semantics of those structures that extend beyond the basic meaning of the predicate, like resultative constructions or other expressions of event composition. Thus, this dissertation can be considered a contribution to the development of the system of lexical representation of RRG.

Following Mairal and Van Valin (2001), who already foresee the need to progress in that direction, I turn to Pustejovsky's (1995a) Generative Lexicon (GL), a theory that is less ambitious in its general goals but more fine-grained in its achievements. If this is true, one could imagine that the inclusion of some of GL's finer-grained distinctions into RRG might lead to a higher refinement of the latter. The question is how. In RRG, lexical information is split into the different components of logical structures. GL, by contrast, captures lexical meaning by splitting lexical information into the four different levels of representation of a highly comprehensive lexicon, namely argument structure, qualia structure, event structure and lexical inheritance structure. My view is that its strength lies here.

1 For further information, see Butler et al. (1999).

Therefore, I expect that the close analysis of those levels of representation will provide the necessary input to strengthen RRG's system of lexical representation. The 1997 RRG model already incorporates Pustejovsky's qualia structure for the characterization of the nominal phrase; consequently, in this work I focus on the other three: word structure, argument structure, and event structure. Furthermore, I will also draw from other lexical-semantic theories like Levin and Rappaport Hovav's (1995, 2005) which, being less projectionist than RRG, expand their scope beyond the predicate.

The analysis of the three levels of representation will not be carried out in a broad, abstract fashion. Rather, I have selected a number of concrete – more tangible – variables that define each level. Furthermore, I take an empirical approach to the analysis; in other words, far from dealing with purely theoretical notions, I will carry out the analysis of word, argument and event structures using a corpus of actual language use. Given the size of the English lexicon, it is not possible to study all of its verbs. Hence I have selected a verbal class big enough to offer significant data but small enough to be easily handled, and whose syntactic and semantic characteristics are sufficiently complex to provide a good starting point for the research: verbs of pure change of state.

The fact that the verbs under analysis are verbs of pure change of state has some implications for the analysis. There is general agreement on the idea that time is understood in terms of the simpler notion of space. Here, however, I maintain that time is understood in terms of changes throughout a universal flow: timely things are perceived because the change undergone by beings can be perceived. Thus, my analysis assumes that changes have duration and, consequently, that all processes have duration.

In short, this dissertation attempts to offer a close semantic analysis of the system of lexical representation of RRG that leads to a more comprehensive picture of the lexicon.

This general aim involves the following more specific aims:

1. to follow the line of work undertaken by Mairal and Van Valin (2001), which points at Pustejovsky's (1995a) work to extend logical structures beyond the limits imposed by the meaning of predicates;

2. to compile a corpus that is quantitatively representative, qualitatively reliable, and based on real uses of language; and

3. to carry out a systematic and thorough analysis of verbs of change of state that examines all their senses and combinations with particles and prepositions, thus accounting for the polysemic character of this verbal class.

1.2 Rationale

The need for a study of lexical representation along the lines that I have just mentioned is justified for the following four reasons:

1. as a contribution to RRG, especially because this theory, although established on semantic grounds, is clearly syntactically oriented. Given the lexical-semantic nature of the approach I follow in this work, it can be considered a contribution to the further development of the semantic branch of RRG;

2. as a contribution to developing a global, more comprehensive picture of the lexicon than that offered by RRG;

3. as a contribution to developing an accurate meaning definition by studying the connection existing between three basic components of meaning, which are represented in this work by the three levels of analysis: word structure, argument structure and event structure; and

4. as a contribution from RRG to lexical semantics. Lexical semantics constitutes an innovative research programme that has produced ingenious and elegant publications accounting for the semantic classes that underlie syntactic behaviours. Lexical semantics, however, does not put forward a full grammatical theory, as RRG does; neither does it look for convergences between different lexical strata, as Pustejovsky does.

1.3 Working hypotheses

The hypotheses that constitute the starting point of this study are of three different kinds: meta-theoretical, theoretical and descriptive.

1. Meta-theoretical hypothesis: RRG's logical structures are flat and do not offer a global picture of the lexicon. To be precise, they only account for those structures that derive from the basic meaning of predicates as defined in their lexical entries. Thus, more complex structures, like resultatives or the instrument subject alternation, do not find an appropriate explanation in this theory. Consequently, RRG's system of lexical representation needs to incorporate notions from other theoretical models, such as Pustejovsky's GL, or Levin and Rappaport Hovav's (1995; 2005) and Mairal and Van Valin's (2001) lexical grammars.

2. Theoretical hypothesis: all changes of state are durative. As I have said above, time is understood in terms of changes throughout the universal flow; in other words, the universal flow of time and all timely things could not be perceived were it not for the changes undergone by beings. Thus, I suggest that whenever there is a change, there is time passing by, which in turn implies that events have duration, even if this is minimal.

3. Descriptive hypothesis: the word structure, argument structure and event structure of prototypical verbs of change of state not only provide a motivated explanation of their grammatical behaviour, but also contribute to a motivated meaning definition in functional and cognitive terms.

The variables under analysis follow from these general assumptions.

1.4 Chapter outline

I have already explained the goals of this work. I have briefly outlined the theoretical framework within which this research will be carried out, as well as the kind of approach I shall take in the analysis. I have put forward the reasons that justify the need for research like that proposed here. And, finally, I have presented the working hypotheses that I take as a starting point for my research. Next, I outline the content of each of the chapters that make up this dissertation.

CHAPTER 2

In chapter 2, I describe the theoretical framework underlying this dissertation. I start by introducing Levin and Rappaport Hovav's (2005) theory of argument realisation. Levin and Rappaport Hovav discuss three approaches to lexical representation, each of which focuses on a different cognitive aspect of events: the localist approach, the aspectual approach, and the causal approach. After discussing them, I examine Levin and Rappaport Hovav's own approach to argument and event structure. Levin and Rappaport Hovav adopt RRG's system of lexical decomposition and semantic roles to account for event structure, but they enrich this system by incorporating into the event structure those aspects of argument realisation that cannot be derived from the event structure alone, and by accounting, by means of their system of alternations, for the corresponding variations between arguments and adjuncts.

Then I review other representatives of lexical semantics, in this order: Alsina (1999), Lieber (2004), and Faber and Mairal (1999), Mairal and Van Valin (2001), and Mairal and Ruiz de Mendoza (forthcoming 2007). Alsina's proposal shares the essentials of Pustejovsky's (1991) work, but, as Alsina explains, his is an attempt at simplification. Lieber's model of lexical semantic representation aims to explain the semantics of word-formation. For that purpose, she utilizes the system of lexical decomposition, which involves a number of primitives for the description of the lexical semantic properties of words, and to which Lieber and Baayen (1997, 1999) add a number of features that make it possible to distinguish the different verbal classes. Faber and Mairal (1999), Mairal and Van Valin (2001), and Mairal and Ruiz de Mendoza (forthcoming 2007) are dealt with together. Faber and Mairal's (1999) model is grounded in Martín Mingorance's Functional-Lexematic Model, which integrates Coseriu's Lexematics, Dik's Functional Grammar, and some basic tenets of Cognitive Grammar. The main advantage of Faber and Mairal's model is that it provides a global vision of the lexicon, comprising three axes: a cognitive axis, a paradigmatic axis, and a syntagmatic axis. Mairal and Ruiz de Mendoza (forthcoming 2007) enhance this model by including a constructional component and by strengthening the cognitive axis.

In the last sections of chapter 2, I deal in more depth with the two main theories that make up the theoretical framework of this research: GL and RRG. In dealing with these two, I first explain the basic tenets that define each theory; I then introduce their systems of lexical representation; and finally I proceed to explain the differences that exist between the two.

The resulting overview covers a great deal of ground, because it moves from the more projectionist theories, such as RRG, to the non-projectionist ones, such as Levin and Rappaport Hovav's, while also covering the less projectionist approaches discussed above.

CHAPTER 3

In chapter 3, I describe the theoretical and methodological basis on which I have carried out the analysis of the word, argument and event structures of verbs of pure change of state. This analysis is an attempt to address a perceived failure of RRG's system of lexical representation to capture the meaning of structures that go beyond its logical structures. Pustejovsky's GL seems a plausible model to follow, because it is comprehensive in its structure and fine-grained in its details. Van Valin and LaPolla (1997) already incorporate Pustejovsky's theory of nominal qualia to characterize the semantics of nominals, which is just one of the hints that lead one to think that the two models are compatible. Thus, in this chapter I identify those components of Pustejovsky's lexicon which are either missing or simply not fully developed in RRG, and which could enhance the latter. In addition, I select those variables of RRG that I consider relevant for lexical representation; the variables of the two theories will be analysed together in order to check the real compatibility of the two models. The variables belong to the three levels of representation that are going to be analysed, namely word structure, argument structure and event structure. To mention just a few, the analysis will focus on questions such as syntactic and semantic valence, entity type, nominal aspect, and subeventual structure.

In this chapter I also describe the sources from which I have obtained the data that are going to be analysed. For the analysis of word structure I use dictionaries – namely, the Longman Dictionary of the English Language (1984), The New Oxford Dictionary of English (1998), the Collins Cobuild English Dictionary for Advanced Learners (2001), the online Cambridge Advanced Learner's Dictionary and the online Merriam-Webster Unabridged Dictionary – and thesauri – the Longman Lexicon of Contemporary English (1981) and the New Oxford Thesaurus of English (2000) – as well as the online database WordNet. For the analysis of argument and event structure I use a corpus comprising approximately 1,200 examples containing break verbs. Subsequently, I describe the corpus compilation process, and I provide a brief summary of the two main databases, namely the British National Corpus and WordNet, and of the verbs which are the object of study: verbs of pure change of state.

At the end of the chapter, I explain the structure of the analysis, and I offer a brief theoretical and methodological introduction to each of the variables that will be analysed in the following chapter.

CHAPTER 4

In this chapter, I elaborate on the data obtained from the analysis of the corresponding variables for the word, argument and event structures of break verbs. I start by discussing the analysis of word structure according to the variables introduced in chapter 3. The analysis of word structure has two parts. The first part describes the derivational behaviour of break verbs and covers three main questions: (1) the distribution of prefixes and suffixes, (2) changes in grammatical category, and (3) the origin of verbs: Romance vs. Germanic. In the second part, I discuss the semantic relations that these verbs take part in, namely antonymy, synonymy, hyperonymy and polysemy, as well as the relationship that exists between these relations and the morphology of the verbs.

Next I move on to the analysis of argument and event structure. In order to deal with those two levels of representation, I first explain the restructuring process that the corpus has undergone to meet the requirements of the dissertation. Very briefly, this dissertation is based on the analysis of verbs of pure change of state. Thus, I have restructured the corpus into three groups according to degree of prototypicality. Those examples that encode a pure change of state have been gathered together under the label prototypical examples. Those that carry a completely different meaning have been labelled non-prototypical examples. And those that are halfway between the two classes have been termed less-prototypical examples.

Having restructured the corpus, I discuss the data obtained from the analysis of the argument and event structures of the three classes, focusing mainly on the prototypical class, since this is the object of my study. The other two classes are dealt with only superficially, paying special attention to those points that contrast with the prototypical class. The variables under analysis for argument structure are: syntactic valence, semantic valence, argument-adjuncts, periphery, entity type, nominal aspect, and reference. The variables under analysis for event structure are: subeventual structure, headedness, causal sequence, temporal relation, and type coercion.

CHAPTER 5

In chapter 5, I draw conclusions and discuss the achievements of the four preceding chapters. Chapter 4 provides an enormous amount of information, and chapter 5 re-elaborates that information in order to answer the questions put forward in chapter 1. First I summarise the information obtained in the previous chapter and explain the relationship that exists among the three levels of representation that I have analysed. Then I go over the goals set in chapter 1 to see whether they have been reached. Finally, I examine the hypotheses formulated in chapter 1 to see whether they have been validated by the analysis.

CHAPTER 6

In the last chapter of the dissertation, I put forward ten ideas for future research. I start by presenting my two main ideas for future research, both of which derive from the aim of the dissertation itself, and which represent the direction that this line of research should take beyond the limits of this dissertation. Having done so, I then move on to briefly consider some more specific points for further research, points that have arisen during the analysis.

CHAPTER 2: LITERATURE REVIEW

2.0 Introduction

Role and Reference Grammar and the Generative Lexicon are two different theories of language, with different aims and different solutions to the problems they address. Nonetheless, the two are highly compatible. Role and Reference Grammar (hereafter RRG) is, a priori, a more complete model than the Generative Lexicon (henceforth GL) in the sense that it integrates semantics, syntax and pragmatics in a theory that attempts to be cross-linguistically valid. GL is less ambitious but more fine-grained, and precisely for that reason every single concept is handled in much more detail. This theory approaches the question of lexical semantics from a generative perspective, and aims to account for the generation of new word meanings in novel contexts in a compositional fashion, avoiding the uneconomical proliferation of lexical entries in dictionaries. Its lexicon is highly comprehensive, and it becomes an active and key component of the framework. Thus, in this dissertation I evaluate those components of GL that can be incorporated into RRG to enhance the functioning of the latter.

RRG and GL stem from two different conceptions of language structure. Event structure lies at the heart of Pustejovsky's generative lexicon, and the other levels of representation of the lexicon – semantic relations, argument structure, and qualia structure – are developed from this notion. By contrast, RRG relies on a firm theory of argument structure, based on semantic roles and macroroles, upon which its theory of event structure is built. The resulting event structure is rather limited, though, and basically reduced to the logical structure. Thus, I take event structure as the starting point for the refinement of RRG's lexical representation, which I subsequently extend to embrace argument structure and word structure.

Apart from GL, I have also made use of other lexical-semantic theories that may provide insights for the enhancement of RRG. Among these, Faber and Mairal's (1999) and Levin and Rappaport Hovav's (2005) lexical semantic models stand out. These theories also underline the importance of event structure although, as I explain below, they approach lexical representation differently.

In this chapter, I describe the theoretical framework that underlies this work. I start by briefly introducing Levin and Rappaport Hovav. I then turn to other representatives of lexical semantics, in the following order: Alsina (1999), Lieber (2004), and Faber and Mairal (1999). In the last sections of this chapter, I deal in more depth with the two main theories that make up the theoretical framework of this research: GL and RRG. In dealing with these two, I first explain the basic tenets that define each theory; then I introduce their systems of lexical representation; and finally I proceed to explain the differences that exist between the two, and my reasons for opting for one or the other. The resulting overview covers a great deal of ground, because it moves from the more projectionist theories, like RRG, to the less projectionist ones, like Levin and Rappaport Hovav's. In an intermediate position lies Faber and Mairal's proposal which, while being partially projectionist, takes logical structures one step further than RRG.

Given the wide range of theoretical issues this dissertation touches upon, in this chapter I only deal with those which are directly relevant to the description of event structure. The rest of the theoretical issues are briefly described in chapter 3, one by one, when introducing each of the variables under analysis.

2.1 Different approaches to lexical representation

Lexical representation and, more precisely, event structure have been approached from many different perspectives. Here I summarize the main ideas put forward by Levin and Rappaport Hovav (2005), Alsina (1999), Lieber (2004), Faber and Mairal (1999), Mairal and Van Valin (2001), Mairal and Ruiz de Mendoza (forthcoming 2007), Ruiz de Mendoza and Mairal (forthcoming 2007), Pustejovsky (1995a), and Van Valin and LaPolla (1997) and Van Valin (2005), in this order, and discuss some of their advantages and disadvantages.

2.1.1 Levin and Rappaport Hovav (2005)

In this section I review several proposals which share a common interest in identifying those aspects of the meaning of a word that are relevant for the mapping from lexical semantics to syntax. Levin and Rappaport Hovav's (2005) impressive work on argument structure has provided me with an insightful and critical overview of this issue, as well as a solid basis on which to begin elaborating on the notion of event structure.

The term event structure is nowadays generally used to refer to 'the lexical semantic representation which determines argument realization' (Levin and Rappaport Hovav 2005: 78). Still, there are obvious differences in its interpretation depending on the perspective adopted. Levin and Rappaport Hovav discuss three approaches, each of which focuses on a different cognitive aspect of events: (i) the localist approach – developed by Gruber (1965) and Jackendoff (1972, 1976, 1983, 1987, 1990; cf. Levin and Rappaport Hovav 2005: 79), among others –, (ii) the aspectual approach and (iii) the causal approach.

(i) The localist approach: in the localist approach, the notions of motion and location lie at the heart of event structure representation. Each of these two types of events has its own set of participants. Thus, motion events relate to a theme and a path, and location events to a theme and a location. The main idea of the localist approach is captured in the Localist Hypothesis: 'all verbs are construable as verbs of motion or location. Specifically, Jackendoff proposes that even verbs that are not obviously verbs of motion or location can be construed as describing motion or location of an abstract type' (Levin and Rappaport Hovav 2005: 80). Jackendoff develops this theory further and puts forward the Thematic Relations Hypothesis in an attempt to account for polysemy rather than for argument realization.1 The localist approach satisfactorily accounts for change of state verbs like break: when something breaks, it moves from an initial state to a resulting state (cf. DeLancey 1991). However, as Levin and Rappaport Hovav point out, it fails to do the same for some other verbs – such as chew, cry or play – which, despite being nonstative, cannot be considered verbs of motion along a path. Furthermore, the localist approach to event structure seems to lack an adequate semantic apparatus for subject and object selection.

(ii) The aspectual approach: in this theory, events are classified according to their internal temporal properties – that is to say, according to lexical aspect or Aktionsart. The most basic of these classifications comprises two main classes, stative versus nonstative events, the latter involving activities and transitions – that is, accomplishments and achievements.2 As Levin and Rappaport Hovav (2005: 87) point out, some of the most relevant linguistic studies of lexical aspect are those by Bach (1981, 1986), Binnick (1991), Brinton (1988), Dowty (1979), Filip (1999), Freed (1979), Krifka (1989a, 1989b, 1992, 1998), Moens and Steedman (1988), Mourelatos (1978), C. S. Smith (1991), Tenny (1994), Vendler (1957) and Verkuyl (1972, 1993, 1999). Many of these aim at providing aspectually motivated evidence for argument realization; however, my focus is on those theories that deal with the representation of event structure, such as the theory of event complexity. The idea behind this label is that those lexical theories that approach event structure in terms of predicate decomposition usually incorporate a subeventual analysis. According to Levin and Rappaport Hovav (2005: 112), a subeventual analysis indicates the following: '(i) the number and type of constituent subevents; (ii) the number and identity of the arguments participating in the particular subevent; and (iii) the nature of the temporal relations between the subevents'. This is the kind of analysis employed, for example, in Pustejovsky's (1991) theory of the generative lexicon. (See below for a more detailed explanation of Pustejovsky's representation of event structure.)

1 For a more detailed explanation, see Levin and Rappaport Hovav (2005: 80) and Jackendoff (1983).
2 The stative vs. nonstative distinction is similar to Comrie's (1976) dynamic vs. nondynamic predicates (Levin and Rappaport Hovav 2005: 88).

(iii) The causal approach: this model develops the notion of event structure around the concept of causation. It is instantiated in Croft (1990a, 1991, 1994, 1998), DeLancey (1984, 1985, 1990, 1991), Jackendoff (1990), Langacker (1987, 1990, 1991, 1993) and van Voorst (1988, 1993, 1995), as reported by Levin and Rappaport Hovav (2005: 117). It is rather obvious, therefore, that most studies of the notion of cause follow cognitive and functional models, something which is also noticeable in the argumentation style of Levin and Rappaport Hovav (2005). This model depicts events as causal chains that consist of a number of segments, each of which relates two participants, as initiator and endpoint (or cause and effect). As an illustration of the prototypical event type of this model, I make use of Levin and Rappaport Hovav's causal chain in (1), borrowed, in turn, from Croft (1994: 38).

(1) Harry broke the vase. (Example taken from Levin and Rappaport Hovav 2005: 118)

As Levin and Rappaport Hovav (2005: 118) explain, the example involves 'a three-part causal chain: (i) Harry acts on the vase, (ii) the vase changes state, and (iii) the vase is in a result state (i.e., broken)'.
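As a purely illustrative aside – the segment notation below is my own shorthand, not Croft's or Levin and Rappaport Hovav's formalism – the three-part chain just described can be sketched as a small data structure:

```python
# A causal chain rendered as an ordered list of (initiator, relation, endpoint)
# segments, mirroring the three parts identified for 'Harry broke the vase'.
# The tuple format is illustrative only.

causal_chain = [
    ("Harry", "acts on", "the vase"),              # (i) cause segment
    ("the vase", "changes state", None),           # (ii) change segment
    ("the vase", "is in result state", "broken"),  # (iii) result segment
]

# Adjacent segments share a participant (the vase), which is what links
# the three segments into a single event.
for n, (initiator, relation, endpoint) in enumerate(causal_chain, 1):
    tail = f" {endpoint}" if endpoint else ""
    print(f"({n}) {initiator} {relation}{tail}")
```

The point of the sketch is simply that each segment relates an initiator to an endpoint, and that chaining is achieved through shared participants.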

Levin and Rappaport Hovav (2005) dissociate themselves from any existing theory of grammar to present their own: a grammar based on the distribution of verbs across classes on the basis of alternations. These authors borrow RRG's representational system and integrate it into their theory, thus giving it a new use. Levin and Rappaport Hovav suggest that the mapping from lexical semantics to syntax is governed by event structure. More precisely, the classification of verbs into one group or another is governed by the semantic properties of the events the verbs depict. Furthermore, a theory of the semantics of argument structure, like the one they propose, can only be grounded in a theory of event conceptualization (Levin and Rappaport Hovav 2005: 75). Thus, they deem it necessary to develop an adequate system of lexical semantic representation that captures the pertinent properties of events, such as subeventual structure or causation. Such representations are generally labelled event structure, and they are based on predicate decomposition; that is, the meaning of a verb is defined by means of those aspects of its meaning that are grammatically relevant.

Event structures embrace two different types of meaning: structural and idiosyncratic. Verbs are gathered together in the same class by virtue of the structural facets of meaning they share, and they are distinguished from one another within the same class by virtue of their idiosyncratic features. Both types of information are relevant for argument realisation and also for event structure. Earlier approaches to event structure focused mainly on structural information, but event structure representations also need the idiosyncratic element of the meaning of a verb. Its function, as Levin and Rappaport Hovav (2005: 71) explain, is best illustrated with an example: the decomposition of the sense of verbs of change of state revolves around two notions, cause and change; however, the verbs differ with respect to the specified state – that is, the idiosyncratic element – as illustrated in (2).

(2) Examples taken from Levin and Rappaport Hovav (2005: 71)
a. dry: [[x ACT] CAUSE [y BECOME <DRY>]]
b. open: [[x ACT] CAUSE [y BECOME <OPEN>]]
c. shorten: [[x ACT] CAUSE [y BECOME <SHORT>]]

From this it follows that the semantics of the verb involves a primitive predicate together with the idiosyncratic element – or constant – that distinguishes it from the other verbs in the class. The constant, as Levin and Rappaport Hovav (2005: 71) note, is now generally referred to as ‘root’, following Pesetsky (1995). The root’s ontological type – that is, state, stuff, thing, place, manner, or instrument – is one of the main determinants of its association with a certain event structure. Roots are incorporated into event structures either as fillers of argument positions or as modifiers of a predicate. I illustrate the two options in (3.a) and (3.b), respectively.

(3) Examples taken from Levin and Rappaport Hovav (2005: 72)
a. dry: [x ACT] CAUSE [y BECOME <DRY>]
b. jog: [x ACT <JOG>]

As can be seen in the event structure templates, the ontological type of the root is indicated in angle brackets.
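The filler/modifier distinction can be rendered schematically. The following Python sketch (my own illustration, not part of Levin and Rappaport Hovav's formalism; all names are mine) generates the two template types of (3), with the root either filling the argument position of BECOME or modifying ACT:

```python
from dataclasses import dataclass

# A schematic generator (editorial illustration only) for the two template
# types in (3): the root fills the argument position of BECOME in
# change-of-state verbs, and modifies ACT in manner verbs.

@dataclass
class Template:
    verb: str
    structure: str  # the rendered event structure template

def causative_change_of_state(verb, root):
    # [[x ACT] CAUSE [y BECOME <ROOT>]] -- root as argument filler
    return Template(verb, f"[[x ACT] CAUSE [y BECOME <{root.upper()}>]]")

def manner_activity(verb, root):
    # [x ACT<ROOT>] -- root as modifier of the ACT predicate
    return Template(verb, f"[x ACT<{root.upper()}>]")
```

On this rendering, causative_change_of_state("dry", "dry") reproduces (3.a) and manner_activity("jog", "jog") reproduces (3.b).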

Going deeper into verbs of change of state, Levin and Rappaport Hovav (2005: 73) claim that the verbs in this class normally participate in the causative alternation. However, not all verbs alternate. Most likely, the reason for this rests in the root, since the event structure is presumably the same as for break. Therefore, there seems to be a constraint on the pairing of roots with event structures ‘that the minimal elements of meaning encoded in the roots be given structural expression in the event structures’ (Levin and Rappaport Hovav 2005: 73).

In short, Levin and Rappaport Hovav (2005) adopt RRG’s system of lexical decomposition and semantic roles to account for event structure, but they enrich this system by strengthening the theory of event structure, by incorporating those aspects of argument realisation that cannot be derived from the event structure alone, and by accounting, by means of their system of alternations, for the corresponding variations between arguments and adjuncts, as in ‘Devon smeared butter on the toast / Devon smeared the toast with butter’ (Levin and Rappaport Hovav 2005: 209).

The result is satisfactory, but the lack of a comprehensive theoretical framework that brings together all relevant components of the lexicon leads me to look at Pustejovsky’s GL for support. In combination, the two approaches will provide a firm base for the enrichment of RRG’s lexical representations.

2.1.2 Alsina (1999)

Alsina’s (1999) approach to event structure is based on the premise that only the semantic information that needs to be represented at the level of semantic structure, and no more, must be so specified. The question, then, is what information needs to be represented in event structure, which is an essential component of semantic structure.

His proposal, Alsina explains, shares the essentials of Pustejovsky’s (1991) work, although his is an attempt at simplification. His theory of event structure takes up Vendler’s (1976) event classification into states, activities, accomplishments and achievements, but Alsina tries to reduce the semantic load of event structure by relegating the predictable semantic information rendered by tree structure representations of events to the level of conceptual structure (Fong 1999: 123).3

Alsina (1999: 117) summarizes the main ideas of his proposal in six points:

1. of the four event types that I assume a theory of grammatical semantics needs to distinguish, activities and states have (or may have) representations as simple structures, whereas accomplishments and achievements have complex representations that include a state and possibly an activity;
2. both accomplishments and achievements are represented as events consisting of two subevents, the second one of which is a state;
3. accomplishments and achievements differ in that the former have an activity as their first subevent, whereas the latter have an unspecified event as their first subevent;
4. the conceptual substance associated with the different event types and with the predicates found in these event types is not part of semantic structure, but only part of conceptual structure;
5. semantic and syntactic constituency are independent and one cannot be derived from the other; and
6. semantic and conceptual structures are also partially independent so that one cannot be derived from the other.

3 Following Jackendoff’s (1990) notion of lexical conceptual structure, Alsina (1999: 77) defines ‘conceptual structure as a full representation of the meaning of linguistic expressions, including those aspects of meaning that do not interact with syntactic or phonological information, but are inherent to the linguistic expressions’.

Alsina provides evidence to support these points in terms of scope phenomena and alternations. Nonetheless, although these two notions suffice to account for the ability or inability of certain alternations to occur – for instance, between achievement and accomplishment or between activity and accomplishment – Alsina does not go into Pustejovsky’s (1991) idea of headedness in any depth, a notion that not only helps disambiguate those alternations but also provides a solid basis for the representation of all event types. Furthermore, Fong (1999: 133) argues that Alsina’s tree structure representations succeed in representing the difference between the four standard event types, but they are not sufficient to deal with causatives. Causatives are restricted for Alsina to the pattern ‘action-cause-resulting state’. However, there are other causative constructions that do not involve any accompanying action or that, instead of a resulting state, bring about a process or even an activity (Fong 1999: 133).

2.1.3 Lieber (2004)

Lieber (2004) follows Jackendoff to a great extent but, as she points out, there are some clear divergences, produced above all by the different goals they pursue.

Lieber’s model of lexical semantic representation is aimed at explaining the semantics of word-formation. Hence, while Jackendoff applies the notion of telicity to characterize aspectual distinctions at the level of events (that is, predicates or even sentences, in syntactic terms), Lieber concentrates on the level of the lexical item.

In order to deal with the semantics of word formation, Lieber utilizes the system of lexical decomposition, which involves a number of primitives for the description of the lexical semantic properties of words.4 Lieber (2004: 6) argues that such a descriptive

4 Fodor (1998) criticises the use of primitives, in the first place, because there is not a fixed number of them, which makes this concept problematic. This implies that Fodor has no faith in decomposition, an assumption that the author confirms in his Informational Atomism, where he offers a non-compositional

framework must be cross-categorial, another feature that distinguishes her approach from Jackendoff’s, which she describes as ‘heavily weighted towards the description of verbal meanings, and as yet is insufficiently cross-categorial to allow for a full discussion of the semantics of nouns and adjectives’.

To the system of lexical decomposition, Lieber and Baayen (1999: 30; see also 1997) incorporate a number of features that make it possible to distinguish the different verbal classes. More precisely, they elaborate on the features [IEPS] (Inferable Eventual Position or State) and [dynamic] in order to classify verbs as in Figure 2.1 (taken from Lieber 2004: 30):

SITUATIONS
    STATES [-dynamic]: be, remain, own, hear, cost, know
    EVENTS [+dynamic]
        SIMPLE ACTIVITY [+dynamic]: eat, kiss, listen, hold, yawn, blink
        CHANGE [+dynamic, +/-IEPS]
            UNACCUSATIVE/INCHOATIVE [+dynamic, +IEPS]: descend, fall, go, evaporate, forget, grow
            MANNER OF CHANGE [+dynamic, -IEPS]: walk, run, amble, vary, waver, fluctuate

Figure 2.1 Situations based on features (taken from Lieber 2004: 30)
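Lieber's cross-classification can be read as a small decision procedure over feature bundles. The sketch below is my own schematic rendering in Python, not Lieber's notation; it maps the features [dynamic] and [IEPS] to the situation types of Figure 2.1:

```python
# A schematic rendering (editorial illustration only) of the feature
# system in Figure 2.1: situations are cross-classified by [dynamic]
# and, within changes, by [IEPS] (Inferable Eventual Position or State).

def classify(dynamic, ieps=None):
    """Map a Lieber-style feature bundle to a situation type."""
    if not dynamic:
        return "STATE"                    # [-dynamic]: be, remain, own ...
    if ieps is None:
        return "SIMPLE ACTIVITY"          # [+dynamic]: eat, kiss, listen ...
    if ieps:
        return "UNACCUSATIVE/INCHOATIVE"  # [+dynamic, +IEPS]: descend, fall ...
    return "MANNER OF CHANGE"             # [+dynamic, -IEPS]: walk, run ...
```

Thus classify(True, False) returns the manner-of-change class of walk and run, while classify(False) returns the stative class of be and remain.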

In order to deal with causatives, Lieber includes the well-known notion of event complexity in her methodological apparatus. Causatives have long been characterized as

view of conceptual representation. Rather, in the same line as Pustejovsky, he believes in compositionality.

complex situations consisting of two subevents, an activity and a result, and Lieber follows that line of research, in combination with the system of features mentioned above.5 Thus, Lieber (2004: 33) suggests the following structure for causatives:

(4) grow (causative)

[+dynamic ([i ], [j ])]; [+dynamic ([i ], [+dynamic, +IEPS ([j ], [Path ])])]

As in most theories of lexical representation, a causative verb like grow in (4) consists of an activity event and a causative event proper.

Lieber’s system of features, she claims, is similar to Jackendoff’s to a certain extent, but hers are applied at word level and Jackendoff’s at the level of events (that is, predicates, clauses or sentences). Here lies precisely the main weakness of her proposal. Even though the mechanism she develops for lexical semantic representation proves highly effective in explaining derivational morphology, it overlooks the level of event structure to a certain extent. This can be seen in her final attempt to extrapolate her system of features to the realm of event structure. Quoting Lieber (2004: 143), ‘it would be lovely, of course, if my interpretation of the features [B] and [CI] finally solved the mystery of exactly what the lexical contribution of the verb to telicity really is’.6 However, her attempt does not reach a satisfactory solution, as Lieber herself acknowledges, and she leaves the issue unresolved.

5 The system of features elaborated by Lieber comprises more features than those displayed here. Nevertheless, here I will only mention those that are relevant to the point being made. For more details on this issue, see Lieber (2004). 6 The feature [B] stands for bounded, and [CI] for Composed of Individuals (Lieber 2004: 136).

2.1.4 Faber and Mairal (1999), Mairal and Van Valin (2001), Mairal and Ruiz de Mendoza (forthcoming 2007), Ruiz de Mendoza and Mairal (forthcoming 2007)

Faber and Mairal’s theory of lexical semantics stems from Martín Mingorance’s (1984, 1990, 1995) Functional-Lexematic Model (FLM). The FLM integrates Coseriu’s Lexematics (1977), Dik’s Functional Grammar (1978) and some basic tenets of Cognitive Grammar (Faber and Mairal 1999: 3). One of the main advantages of this model is that it puts forward an all-inclusive lexicon, divided into lexical domains and hierarchically organized in terms of semantic and syntactic prototypes for each of the lexical domains. The lexicon revolves around two main axes: one paradigmatic and one syntagmatic. The syntagmatic axis deals with the combinatorial properties of lexical units (Faber and Mairal 1999: 58). On the paradigmatic axis, lexical units are arranged into semantic fields or, as Faber and Mairal (1999) prefer to call them, lexical domains.

Those units on the paradigmatic axis that are lower in the hierarchy are defined in relation to the most generic term or superordinate, which, in turn, is defined through Stepwise Lexical Decomposition. By way of illustration, I offer a fragment of the lexical domain of POSSESSION in (5):

(5) Lexical domain of POSSESSION (taken from Faber and Mairal 1999: 123)

POSSESSION: to come to have
    get: to come to have as a result of receiving, earning, buying, etc.
        obtain: to get something [fairly formal].
            procure: to obtain something with effort, adding it to previous possessions [formal].
            acquire: to obtain something with effort, adding it to previous possessions.
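The hierarchical organization of (5) can be modelled as a chain of genus-plus-differentiae definitions. The following Python sketch is my own illustration; the dictionary entries merely paraphrase the fragment in (5), and the function follows each lexeme's superordinate link up to the most generic term of the domain:

```python
# A toy model (editorial illustration only) of the FLM idea that lexemes
# lower in a lexical domain are defined through a superordinate (genus)
# plus differentiae, as in the POSSESSION fragment in (5).

DOMAIN = {
    "get":     {"genus": "come to have", "diff": "as a result of receiving, earning, buying, etc."},
    "obtain":  {"genus": "get",          "diff": "[fairly formal]"},
    "procure": {"genus": "obtain",       "diff": "with effort, adding it to previous possessions [formal]"},
    "acquire": {"genus": "obtain",       "diff": "with effort, adding it to previous possessions"},
}

def superordinate_chain(lexeme):
    """Follow genus links up to the most generic term of the domain."""
    chain = [lexeme]
    while DOMAIN.get(chain[-1], {}).get("genus") in DOMAIN:
        chain.append(DOMAIN[chain[-1]]["genus"])
    return chain
```

For instance, superordinate_chain("procure") traces procure back through obtain to the domain-internal superordinate get, mirroring the indentation of (5).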

Apart from these two axes, Faber and Mairal (1999) identify a third axis, the cognitive axis, which accounts for conceptual knowledge.

On this basis, Faber and Mairal (1999) attempt to incorporate the phraseological component that is missing in the syntagmatic axis of FLM or, in other words, to develop the relational structure of the lexicon.

Faber and Mairal’s (1999: 3) model is characterised, among other things, by a strong system of lexical representation which, as Martín Mingorance claims, is crucial for syntactic representation. They are aware that RRG’s logical structures do not sufficiently account for the semantics of a verb, and they suggest that further semantic decomposition is needed. By way of illustration, consider the following example from Mairal (2001-2002: 12-13):

(6) Mairal’s (2001-2002: 12-13) lexical template for contact-by-impact verbs

[[do’ (w, [use.tool(α).in.(β).manner.for.(δ)’ (w, x)])] CAUSE [do’ (x, [move.toward’ (x, y) & INGR be.in.contact.with’ (y, x)])]], α = x

This template encodes the syntactico-semantic scenario for contact-by-impact verbs. By contrast to its RRG counterpart, which would be something like (7), the structure in (6) contains the arguments of the verb – an effector (w) and a patient (x) – but it also specifies the tool (α) that the effector uses and the manner in which it is used. In other words, FLM’s lexical templates are further specified than those of RRG.

(7) My suggestion for an RRG lexical template for contact-by-impact verbs

[do’ (x, Ø)] CAUSE [do’ (tool, Ø)] CAUSE [do’ (y, [move.toward’ (y, z) & INGR be.in.contact.with’ (z, y)])]

It must be noted that, as Mairal and Cortés (2000-2001: 275) explain, lexical templates are maximal projections of conceptual structure:

An important consideration about our conception of LTs [lexical templates] is that they are not a receptacle of the particular idiosyncratic linguistic features of a predicate; but conversely a LT is a maximal projection of the conceptual substance found within a lexical class, each lexical class being represented by a canonical LT which accommodates into one compact representation the regularities found within the given lexical class.

On the basis of these premises, Mairal and Van Valin (2001) suggest improvements to RRG in the area of lexical semantics; specifically, these authors see the need to (1) go deeper into the relationship between syntax and semantics, and (2) account for both internal and external arguments of the verb, as well as adjuncts.

Mairal and Ruiz de Mendoza (forthcoming 2007) and Ruiz de Mendoza and Mairal (forthcoming 2007) have taken this proposal one step further to include a cognitive and constructional component which, as I explain below, is the path to follow in the future. Mairal and Ruiz de Mendoza’s model is called the Lexical Constructional Model (LCM), and it endeavours to bring together some of the most relevant tenets of functional theories – mainly projectionist ones, like RRG – and constructional theories like Goldberg’s (1995, 2002, 2005) Construction Grammar. LCM makes two main claims (Ruiz de Mendoza and Mairal forthcoming 2007: 3): (1) LCM represents an alternative approach to the relationship between lexicon and grammar; and (2) LCM provides the appropriate framework to bridge the gap ‘between projectionist and construction-based approaches by developing an inventory of constraints that simulate the role of interface (or linking rules) on the one hand, and by vindicating the role of constructions as a crucial part in the semantic representation of the theory.’ LCM is summarised in Figure 2.2.

[Figure 2.2 shows the general architecture of the LCM: lexical templates and constructional templates undergo a unification process, regulated by external and internal constraints, which yields the semantic interpretation.]

Figure 2.2 The Lexical Constructional Model (taken from Ruiz de Mendoza and Mairal forthcoming 2007: 4)

Very briefly, semantic interpretation is obtained through a unification process of a lexical template and a constructional template. This unification process is regulated by certain internal – metalinguistic – and external – conceptual – constraints. The basic idea around which the whole mechanism revolves is that constructional templates coerce lexical templates because, as Ruiz de Mendoza and Díez (2002) explain, ‘higher-level structures invariably take in lower-level structures’ (Ruiz de Mendoza and Mairal forthcoming 2007: 4).

2.1.5 Pustejovsky’s generative lexicon

2.1.5.a Introduction: aims of the GL

Pustejovsky’s model is similar to more traditional theories of lexical semantics that are built around the notions of argument and event structure, but it is also set apart from them because it is purposely designed to deal with creative uses of language and, more precisely, with polysemy.

Pustejovsky regards the lexicon as an active and integral component in the composition of sentence meaning. Hence, a good lexicon must aim at:

1. accounting for word meaning and for the meaning of words in combination (that is, compositionality); and
2. accounting for the creative use of words in novel contexts. (It must be noted that Pustejovsky focuses on complementary polysemy, since contrastive polysemy implies the use of different lexical entries.)

Therefore, a generative lexical model must be able to explain, for example, the polysemy in (8):

(8) Examples taken from Pustejovsky (1995a: 47)

a. John baked the potatoes

b. John baked a cake

What allows this alternation (change-of-state vs. creation) is the interaction of the semantics of the verb with the semantics of the complement itself, both of which are available in the lexical representations of the generative lexicon. The combination of lexical representations with derivational processes expresses polysemy.

2.1.5.b The generative lexicon: structure

Contrary to the widespread trend of decomposition, Pustejovsky outlines a theory based on the notions of cocompositionality and type coercion. Instead of building on the traditional set of primitives, he puts forward a number of generative devices that generate semantic expressions. More precisely, Pustejovsky develops a theory of Qualia Structure, a formal representation for lexical items which helps explain polysemy while to a great extent eliminating lexical ambiguity from the lexicon. He distinguishes four levels of representation to capture lexical meaning: Argument Structure, Qualia Structure, Event Structure and Lexical Inheritance Structure.

Argument Structure refers to the semantic arguments that a word takes. Qualia Structure deals with the more defining features of a word and includes formal, constitutive, telic and agentive roles. Event Structure refers to the classification of the different types of eventualities in the world into semantic verbal classes. In the line of Vendler (1967) and Dowty (1979), among others, Pustejovsky acknowledges the following: states, activities and transitions – the latter sometimes subdivided into accomplishments and achievements – as well as some subevents. Lexical Inheritance Structure refers to the relation between an individual lexical structure and other lexical items in the lexicon, thus providing the necessary principles of global organization for the lexicon.

These four levels of representation are connected by means of three generative devices (or operations that generate polysemy): type coercion, selective binding and co-composition. Type coercion is a semantic operation consisting in coercing or shifting an argument to the type required by another word in the phrase. Selective binding is a mechanism of semantic composition whereby a lexical item selects some feature from the qualia structure of another lexical item in order to correctly interpret the former. It does not involve a change in the semantic type of the lexical entities. Co-composition involves the derivation of new senses or readings of a word through composition with its arguments. In the case of verbs, this implies that the verb is not polysemous in itself; rather, the complements add to its basic meaning by co-composition. For co-composition to occur, both the verb and the complement must share some qualia value.
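The division of labour between the verb and the qualia of its complement can be illustrated with the bake alternation in (8) above. The sketch below is my own toy rendering: the qualia entries are hypothetical, and the shared-value check drastically simplifies Pustejovsky's mechanism.

```python
# A toy rendering (editorial illustration only) of how co-composition
# could derive the two readings of 'bake' in (8): the verb contributes a
# change-of-state sense, and a complement whose agentive quale is itself
# 'bake' shifts the reading to creation.

QUALIA = {
    # agentive quale: how the entity comes into being (hypothetical entries)
    "cake":   {"agentive": "bake"},   # an artifact brought about by baking
    "potato": {"agentive": "grow"},   # a natural kind, not created by baking
}

def bake_reading(complement):
    agentive = QUALIA.get(complement, {}).get("agentive")
    # A qualia value shared with the verb licenses the creation reading
    return "creation" if agentive == "bake" else "change-of-state"
```

On this simplification, bake a cake yields the creation reading and bake the potatoes the change-of-state reading, as in (8).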

2.1.5.c Event structure

Some authors postulate a distinct and separate level of representation for event structure (see, for example, Williams 1981, Grimshaw 1990, Pustejovsky 1991), adopting the view that event structure information concerning time, space, and causation has a different status from other kinds of thematic, conceptual, or lexical information (Pustejovsky 1992: 47).

Event structure is the definition of the event type of a lexical item and a phrase (Pustejovsky 1992: 61). Events can be simple or complex independently of their syntactic representation as either a simple or a complex clause; hence, when they are complex, Pustejovsky deals with them in terms of subevents. This view contrasts with more traditional approaches, which treated events as atomic units with no internal structure (or, at least, not an accessible one). Consequently, he puts forward a more detailed description of the structure of events that depicts their internal configuration, and which places special emphasis on aspectual factors such as telicity and causation.

Levin and Rappaport Hovav (2005: 117) offer an interesting counter-proposal which, in their own words, ‘appears to work better than more often cited notions, such as telicity’. Their view is that event complexity is better dealt with in terms of the temporal relation existing between subevents. Thus, in the case of causative events, for example, the cause and the result are different subevents because they are not temporally aligned.

By contrast, Pustejovsky develops the notion of event complexity in relation to that of telicity. This means that, for example, telic events are systematically regarded as complex, because they involve a transition to a new result state. Although Levin and Rappaport Hovav’s view is highly explanatory for argument structure realization, Pustejovsky’s notion of telicity suffices to explain event complexity effectively. In his (1995b) work, Pustejovsky outlines what he calls ‘Orthogonal Parameter Binding’. This mechanism allows the structure of subevents to be related to the arguments of the verb. The semantics of the verb is defined by combining information from the qualia structure, the event structure and the argument structure.

Going back to the subeventual structuring of events, the usefulness of this distribution has been highlighted by several linguists, Pustejovsky and Levin and Rappaport Hovav among them. However, as Pustejovsky himself acknowledges, it does not prove so helpful in the case of causative constructions, given that the varied nature of these constructions prevents an accurate account of the projection of subevents to syntax.

Subevents are ordered according to temporal criteria; thus, in a complex event e, formed by e1 and e2, e1 precedes e2. This structure is referred to as exhaustive ordered part of and is represented as <α. Obviously, not all subevents follow this pattern; some can occur simultaneously, for example. Pustejovsky (1995a: 69) refers to this relationship as exhaustive overlap part of, and he represents it as ○α. A third option is given in which two subevents are basically simultaneous but one starts before the other.

Pustejovsky (1995a: 70) refers to these as exhaustive ordered overlap, which is annotated as <○α. The representation of the event structure for a verb like break would be something like (9):

(9) Representation taken from Pustejovsky (1995a: 80)7

break
EVENTSTR = [E1 = e1: process, E2 = e2: state, RESTR = <α]
QUALIA = [FORMAL = broken (e2, y), AGENTIVE = break_act (e1, x, y), …]

Nonetheless, this sequencing of events does not suffice to give an appropriate account of all the relevant – grammatical, mainly – information contained in events. For this reason, Pustejovsky develops what he calls event headedness. The idea behind this label is that subevents are ordered not only temporally, but also according to their prominence. This is represented in the event structure by means of the marker e*:

(10) [eσ e1* <α e2] → build (taken from Pustejovsky 1995a: 73)

And if we add this feature of headedness to the event structure, the following representation is obtained:

7 ‘E’ stands for ‘event’, ‘e’ stands for ‘(sub)event’, and ‘RESTR’ stands for ‘ordering restrictions over events’.

(11) Representation taken from Pustejovsky (1995a: 72)8

α
EVENTSTR = [E1 = …, E2 = …, RESTR = …, HEAD = Ei]

By doing so, one of the subevents is given prominence over the other, thus facilitating, among other things, the correct interpretation of the whole event. Therefore, in (10) above, the event expressed by the predicate build is formed by two subevents, e1 and e2. e1 represents an activity which brings about a new state (namely, e2). <α indicates that e1 precedes e2; that is, it is an example of the exhaustive ordered part of structural pattern.

And the asterisk next to e1 indicates that this subevent is given prominence over e2. This way, the action – or activity – subevent is focused, intuitively leading to an accomplishment interpretation (I say ‘intuitively’ because Pustejovsky himself specifies it in this way). If e2 had been the headed event instead of e1, the event would have been interpreted as an achievement, since the focus – that is, prominence – would have been put on the final state of the event. With the notions of temporal ordering and headedness in mind, Pustejovsky (1995a: 73) puts forward the following set of possible event structures:

8 ‘α’ represents any lexical item.

(12)
a. [eσ e1* <α e2] → build
b. [eσ e1 <α e2*] → arrive
c. [eσ e1* <α e2*] → give
d. [eσ e1 <α e2] → UNDERSPECIFIED
e. [eσ e1* ○α e2] → buy
f. [eσ e1 ○α e2*] → sell
g. [eσ e1* ○α e2*] → marry
h. [eσ e1 ○α e2] → UNDERSPECIFIED
i. [eσ e1* <○α e2] → walk
j. [eσ e1 <○α e2*] → walk home
k. [eσ e1* <○α e2*] → ??
l. [eσ e1 <○α e2] → UNDERSPECIFIED

As can be observed, the twelve possibilities for event structure displayed in (12) include unheaded and double-headed constructions. The difference between (12.a) and (12.b) has already been explained above in (10). Briefly, structure (12.a) represents an accomplishment because it is the action event that is headed, whereas structure (12.b) illustrates achievement verbs, for the focus of interpretation is placed on the final state instead. Structure (12.c) is used to represent, Pustejovsky explains, ditransitive transfer verbs such as give and take – thus the two subevents express transitions. Structures (12.d), (12.h) and (12.l) are underspecified. Such underspecification serves a purpose within this framework: it contributes to the explanation of polysemy in verbal alternations. As Pustejovsky (1995a: 73-74) explains, when a lexical pattern like those in (12) is left unspecified with respect to headedness, it indicates polysemy, for it is open to two interpretations: one with e1 as head, the other focusing e2. Therefore, in the case of polysemous verbs like break, which participate in alternations like the causative/unaccusative (or inchoative) one, the underspecified structure [eσ e1 <α e2] captures the two logical senses of these verbs. Structures (12.e) and (12.f) illustrate the two simultaneous events that occur during a transaction, each of them focusing on one of the events only. (12.i) represents two subevents which are basically simultaneous: just after the first one begins, the other follows, and it continues as long as the first event keeps going. Finally, Pustejovsky does not provide any explanation for either structure (12.g) or (12.j), but I assume that the former illustrates, like (12.c), two unilateral transitions, thus being double-headed, whereas the latter represents a telic event formed by two almost simultaneous subevents, where one starts before the other. The headed subevent is e2; therefore, by contrast to (12.i), which illustrates an action, (12.j) must be interpreted as an accomplishment.
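The interaction of temporal ordering and headedness in (12) can be made explicit with a small decision function. The following Python sketch is my own illustration; it covers only the ordered patterns of the type of (12.a), (12.b) and (12.d), not the full inventory:

```python
# A toy encoding (editorial illustration only) of temporal ordering plus
# headedness in (12). restr is the ordering restriction ('<' stands for
# exhaustive ordered part of); head1/head2 mark whether e1 and/or e2 is
# the head of the complex event.

def interpret(restr, head1, head2):
    if not head1 and not head2:
        return "underspecified"      # open to both readings, e.g. break
    if restr == "<" and head1 and not head2:
        return "accomplishment"      # process subevent headed, as in build
    if restr == "<" and head2 and not head1:
        return "achievement"         # result state headed, as in arrive
    return "other"                   # double-headed and overlap patterns
```

Thus heading e1 yields the accomplishment reading of build, heading e2 the achievement reading of arrive, and leaving both heads unspecified models the polysemy of break.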

2.1.6 Role and reference grammar

In this section, I first introduce the basic tenets of RRG’s system of lexical representation, and then I narrow the focus to concentrate on event structure. In the review of event structure I refer back to some of the features of GL’s event structure presented above, for comparison.

2.1.6.a The RRG’s system of lexical representation

By contrast to Chomsky’s generative theory, RRG posits one single level of syntactic representation, consequently avoiding any kind of transformation.9 Instead, the link between the semantic and syntactic representations is established through direct mappings.10

The lexical meaning of verbs is captured by means of lexical decomposition, a process whereby verbs are paraphrased in terms of primitive elements in a proper, universally valid metalanguage. This idea is also present in Dik (1989); nonetheless, the

latter advocates a verbal paraphrase in terms of natural language. The system of lexical decomposition of this theory stems from Dowty’s (1979) theory of verb semantics and lexical representation, which is based, in turn, on Vendler’s (1957) classification of verbs. Distinctions among the classes, Van Valin and LaPolla explain, are established in terms of their inherent lexical properties, that is, in terms of Aktionsart types. These classes, and the corresponding states-of-affairs they denote, are exemplified in (13); the features that define them are also included.

9 Functional Grammar also avoids transformations by restricting structure-changing operations (Dik 1989: 17). 10 On configurationality and levels of representation, see Martín Arista (2005).

(13) State-of-affairs type – Aktionsart type – Features

a. Situation – State (be sick/tall/dead): [+static], [-telic], [-punctual]
b. Event – Achievement (pop, explode, shatter): [-static], [+telic], [+punctual]
c. Process – Accomplishment (melt, freeze, dry): [-static], [+telic], [-punctual]
d. Action – Activity (swim, think, rain): [-static], [-telic], [-punctual]

Membership in one class or another is decided on the basis of a number of tests set forth by Dowty. These tests can be found in Van Valin (1993: 34-35) and refer to all basic Aktionsart types, which are non-causative; nonetheless, each has a causative counterpart, which can be determined by the tests introduced in Van Valin and LaPolla (1997: 101). The distinctions among the four basic Aktionsart types are formally represented in logical structures (LS),11 as shown in Table 2.1 (this is the revised version found in Van Valin and LaPolla 1997: 109):

11 Dik (1989) treats causative aspects syntactically. That is to say, whereas RRG places the CAUSE in the lexicon – i.e., in the logical structure of the verb – Functional Grammar codifies it in the syntax by means of expression rules.

Verb class               Logical structure
State                    predicate’ (x) or (x, y)
Activity                 do’ (x, [predicate’ (x) or (x, y)])
Achievement              INGR predicate’ (x) or (x, y), or
                         INGR do’ (x, [predicate’ (x) or (x, y)])
Accomplishment           BECOME predicate’ (x) or (x, y), or
                         BECOME do’ (x, [predicate’ (x) or (x, y)])
Active accomplishment    do’ (x, [predicate1’ (x, (y))]) & BECOME predicate2’ (z, x) or (y)
Causative                α CAUSE β, where α, β are LSs of any type

Table 2.1 Lexical representation for the basic Aktionsart classes
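The compositional character of Table 2.1 can be sketched in code: derived classes wrap a state or activity core with an operator. This is a minimal, hypothetical encoding of my own as nested tuples, not RRG notation:

```python
# Hypothetical encoding of the logical structures in Table 2.1 as nested
# tuples; the helper names are mine, chosen to mirror the verb classes.
def state(pred, *args):        # predicate' (x) or (x, y)
    return (pred + "'",) + args

def activity(x, core):         # do' (x, [ ... ])
    return ("do'", x, core)

def achievement(core):         # INGR ...
    return ("INGR", core)

def accomplishment(core):      # BECOME ...
    return ("BECOME", core)

def causative(alpha, beta):    # α CAUSE β
    return (alpha, "CAUSE", beta)

# 'The window shattered' ~ INGR shattered' (window):
print(achievement(state("shattered", "window")))
# 'kill' ~ [do' (x, Ø)] CAUSE [BECOME dead' (y)]:
print(causative(activity("x", "Ø"), accomplishment(state("dead", "y"))))
```

The nesting makes visible the derivational relationship discussed below: achievements and accomplishments contain a state (or activity) core, and causatives contain two logical structures of any type.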

Van Valin (1990: 223) explains the derivational relationship existing among states, achievements and accomplishments by arguing that achievement verbs are derived from state verbs by means of the operator BECOME; the argument structure of the predicate is unchanged. Achievement logical structures are a component of accomplishment verb logical structures. However, despite acknowledging the existence of a derivational relationship, Table 2.1 shows that some changes have been introduced in the 1997 version, for the operator used in achievements is INGR, not BECOME, and achievement logical structures can no longer be regarded as a component of accomplishments.

In other words, the logical structure assigned to achievements in Van Valin (1990) is the structure for accomplishment verbs in Van Valin and LaPolla (1997), and the two logical structures differ, mainly, as to their punctuality: INGR marks punctual changes, whereas BECOME marks non-punctual ones. As regards activity verbs, they are primitive predicates in their own right.

These representations, Van Valin (1990: 225) argues, are the basis for the RRG theory of semantic roles. RRG puts forth a theory of semantic roles at two different levels: one corresponds to thematic relations, which describe the semantic function of an argument with respect to a predicate and which, following Jackendoff (1976), are defined in terms of argument positions in logical structures, whereas the other corresponds to macroroles, which have no equivalent in other theories.12 Thus, for a two-argument verb of cognition such as know, Van Valin and LaPolla define the corresponding thematic relations as follows: know’ (x, y), where x = COGNIZER and y = CONTENT. The fact that thematic relations are derived from argument positions in logical structures has, according to Van Valin (1990: 226), a highly significant consequence: since there are syntactic and semantic criteria for determining the class of a verb, and since a verb’s thematic relations are to a large extent attributable to its class and hence to its logical structure, the assignment of thematic relations to verbs in RRG is independently motivated.

Semantic macroroles are ‘generalizations across the argument-types found with particular verbs which have significant grammatical consequences’ (Van Valin and LaPolla 1997: 139); they are called macroroles because each subsumes a number of specific argument types (thematic relations), and they function as the linking device between syntax and semantics. There are two types of macroroles: the generalised agent-type role is called actor, whereas the generalised patient-type role is termed undergoer. The assignment of macroroles to the arguments of a certain verb is carried out following the Actor-Undergoer Hierarchy, henceforth A/U hierarchy.

ACTOR                                                                          UNDERGOER
Arg. of DO → 1st arg. of do’ (x...) → 1st arg. of pred’ (x, y) → 2nd arg. of pred’ (x, y) → Arg. of state pred’ (x)
[‘→’ = increasing markedness of realization of argument as macrorole]

Figure 2.3 The Actor-Undergoer Hierarchy (taken from Van Valin and LaPolla 1997: 146)

12 Although Dik (1989) only discusses semantic functions, his notions of first and second argument, if further developed, would probably give rise to RRG’s macroroles. Along the same lines, Dowty’s (1991) proto-roles are considered to be similar to macroroles, although not identical: ‘Protoroles are constellations of features which define a prototype; they are not generalizations of specific thematic relations, unlike macroroles’ (Van Valin 1994).

According to this, for a verb like kill, whose logical structure is [do’ (x, Ø)] CAUSE [BECOME dead’ (y)], the first argument of do’, x, is the actor, whereas the argument of the state dead’, y, is the undergoer. There are also some general principles for macrorole assignment which specify their number and nature. They are summarised in (14):

(14) General macrorole assignment principles (Van Valin and LaPolla 1997: 152-153):13
a. Number: the number of macroroles a verb takes is less than or equal to the number of arguments in its logical structure.
   1. If a verb has two or more arguments in its logical structure, it will take two macroroles.
   2. If a verb has one argument in its logical structure, it will take one macrorole.
b. Nature: for verbs that take one macrorole:
   1. If the verb has an activity predicate in its logical structure, the macrorole is actor.
   2. If the verb has no activity predicate in its logical structure, the macrorole is undergoer.

Traditionally, transitivity has been defined according to the number of syntactic arguments accompanying a verb. Here, by contrast, transitivity is semantically determined; that is, the notion of transitivity depends on the number of macroroles a certain verb is assigned. To this it must be added that the number of direct core arguments need not coincide with that of macroroles. There are sentences like John lent Peter the car with three direct core arguments, but there can never be more than two macroroles. I deal with syntactic and semantic valence in more depth in chapter 3.

13 Exceptions to these principles are dealt with in section 4.2 of Van Valin and LaPolla (1997).

Given the principles in (14), the A/U hierarchy and the linking algorithm between syntax and semantics – which is specified in Van Valin and LaPolla (1997), chapters 6 and 7 – very little information needs to be specified in the lexical entries of verbs.14 Van Valin (1993: 500) writes in this regard:

the types and form of the complements a predicate takes can be deduced directly from the semantic representation in its lexical entry: the lexical entry for a complement-taking predicate will contain only its semantic representation; the syntactic properties of its complements will be derived from that representation together with a set of independently-motivated, language-specific and universal semantic, lexical and morphosyntactic principles.

As mentioned above, actor and undergoer act as the link between syntax and semantics or, put differently, between thematic and grammatical relations: in grammatical constructions, groups of thematic relations are treated alike; however, these groupings are not equivalent to the grammatical relations subject and direct object, for they hold in passive constructions as well. In this respect, the analysis of, for instance, a passive sentence in terms of microroles (thematic relations) provides nothing but a list of semantic roles of very limited significance. An analysis in terms of macroroles, however, captures the relationship existing between syntax and semantics: in active clauses the actor is the subject and the undergoer the direct object, whereas in passive clauses the actor is the object of by and the undergoer becomes the subject. Similar generalisations can be derived for intransitives.

14 Consequently, Dik’s (1989) redundancy rules are avoided.

One last point worth mentioning regarding macroroles relates to the fact that while the mapping between these and thematic relations is governed by the universal A/U hierarchy, the relationship existing between macroroles and grammatical relations is language-specific.

Finally, the linking algorithm connects arguments that have particular thematic relations in a logical structure with the grammatical functions in a clause. It is governed by the completeness constraint, which states that

[a]ll of the arguments explicitly specified in the semantic representation of a sentence must be realized syntactically in the sentence, and all of the referring expressions in the syntactic representation of a sentence must be linked to an argument position in a logical structure in the semantic representations of the sentence. (Van Valin and LaPolla 1997: 325)

As this is not directly related to this study, I do not deem it necessary to go deeper into this point.

2.1.6.b RRG’s event structure in comparison to GL’s event structure

The RRG theory and Pustejovsky’s Generative Lexicon converge in many respects: both set out from Vendler’s (1957) classification of verbs to develop their system of lexical (de)composition and subsequent event structure; both highlight the importance of a sound argument structure that contributes to the adequate development of event description (although they take different approaches: Van Valin and LaPolla make use of semantic roles whereas Pustejovsky develops qualia); and both assume the notions of causativity and telicity as central to event structure. However, there are substantial differences in the approach adopted in the two theoretical frameworks. Among them, the following three are of most relevance here:

1. Pustejovsky’s generative lexicon places event structure at the core of the system of language representation, and the other levels of analysis (lexical relations, argument structure, etc.) are developed from it. By contrast, the RRG theory relies on a firm theory of argument structure, based on semantic roles and macroroles, on which the theory of event structure is built.
2. While Pustejovsky puts forward a number of principles that define event structure in terms of subevents, Van Valin and LaPolla examine event structure in terms of notions such as nexus and juncture.
3. RRG’s lexical representations are based on decomposition, whereas Pustejovsky bases his on co-composition.

With respect to the argument structure of the RRG theory, Van Valin and LaPolla (1997) put forth a theory of semantic roles at two different levels. One level corresponds to thematic relations, which describe the semantic function of an argument with respect to a predicate and which, following Jackendoff (1976), are defined in terms of argument positions in logical structures; the other level corresponds to macroroles, which have no equivalent in other theories. Semantic macroroles are ‘generalizations across the argument-types found with particular verbs which have significant grammatical consequences’ (Van Valin and LaPolla 1997: 139). They are called macroroles because each ‘subsumes a number of specific argument types’ (or thematic relations) and, what is more important, they function as the linking device between syntax and semantics. Furthermore, notions such as transitivity, which have traditionally been defined according to the number of syntactic arguments accompanying a verb, are semantically determined here; that is, the notion of transitivity depends on the number of macroroles a certain verb is assigned.

As regards Pustejovsky’s analysis of event structure in terms of events and subevents, it must be noted that the RRG theory also distinguishes among events in those terms. Nonetheless, as Levin and Rappaport Hovav (2005: 112) point out, they do not include any (explicit) principles that make reference to such subeventual analysis – not even in Van Valin (2005). Instead, Van Valin and LaPolla (1997) display a complex system of nexus-juncture to characterize subclausal units in complex constructions.

Nexus relations are ‘the syntactic relations between the units in a complex construction’ (Van Valin 2005: 188), and juncture ‘concerns the nature of the units being linked’ (Van Valin 2005: 188). Van Valin and LaPolla (1997) distinguish three levels of juncture – nuclear, core and clausal – and three types of nexus: coordination, subordination and co-subordination. The three types of nexus can combine with the three levels of juncture, thus rendering nine juncture-nexus types. This system proves very useful to account for those constructions which do not lend themselves to the traditional coordination/subordination division.
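The nine juncture-nexus types follow mechanically from crossing the two three-way distinctions; a one-line Cartesian product (my own sketch, purely illustrative) makes the combinatorics explicit:

```python
# The nine juncture-nexus types as the Cartesian product of the three
# levels of juncture and the three nexus relations.
from itertools import product

junctures = ["nuclear", "core", "clausal"]
nexus = ["coordination", "subordination", "co-subordination"]

types = [f"{j} {n}" for j, n in product(junctures, nexus)]
print(len(types))   # 3 x 3 combinations
print(types[0])
```

Whether each of the nine combinations is actually attested in a given language is, of course, an empirical question the product itself does not answer.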

Van Valin (2005) introduces several changes at the level of logical structure – for instance, the addition of one more event type, semelfactive, and, consequently, the addition of one more test to those distinguished in Van Valin and LaPolla (1997), to distinguish achievements from semelfactives – which brings his model closer to Pustejovsky’s, but these changes do not amount to any significant improvement with regard to the whole system of lexical representation.

2.2 Conclusions

Having considered Levin and Rappaport Hovav (2005), Alsina (1999), Lieber (2004), Faber and Mairal (1999), Mairal and Van Valin (2001), Mairal and Ruiz de Mendoza (forthcoming 2007), Ruiz de Mendoza and Mairal (forthcoming 2007), Pustejovsky (1995a), Van Valin and LaPolla (1997) and Van Valin (2005), I go along with Pustejovsky’s theory of lexical representation for the following five reasons:

1. The division of events into subevents facilitates the integration of event structure – which is postulated as a separate level of representation – with the other levels of representation of the lexicon, namely the argument structure and the qualia. Other lexical semantic representations like RRG’s also make use of subeventual analyses; however, they do not include any principles that make reference to such information (Levin and Rappaport Hovav 2005: 112).
2. Although it is not the first one to do so, Pustejovsky’s theory of lexical representation incorporates the notion of causativity, thus allowing the expression of a temporal logic between events. As we have seen, neither Lieber nor Levin and Rappaport Hovav offer a completely satisfactory account of these constructions (for details, see Alsina 1999).
3. The theory includes the notion of telicity to mediate between event structure, argument structure, and the syntax of the VP. Moreover, (the result state definition of) telicity helps define event complexity.
4. Pustejovsky’s theory is broadly cross-categorial, a feature that seems to be lacking in a great number of other approaches.
5. The notion of event headedness, which is also a consequence of the structuring of events into subevents, helps distinguish the transitions of an event by pointing out which subevent is focused by the corresponding lexical item. In other words, headedness indicates the foregrounding and backgrounding of event arguments (see chapter 4). By adding this property to the event structure, ambiguities such as whether an underspecified event is causative or inchoative are overcome.
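The disambiguating role of headedness in reason 5 can be sketched schematically. The encoding below is entirely my own invention for exposition: a two-subevent transition read as causative when the first subevent is headed and as inchoative when the second is:

```python
# Hypothetical sketch of event headedness: which subevent of a
# transition (e1 < e2) is headed determines the reading.
def reading(subevents, head: int) -> str:
    """Return the interpretation of a (possibly headed) event."""
    if len(subevents) != 2:
        return "simple event"
    return "causative" if head == 0 else "inchoative"

transition = ["act(x, y)", "broken(y)"]   # e1 temporally precedes e2
print(reading(transition, head=0))        # 'x broke y'
print(reading(transition, head=1))        # 'y broke'
```

Under this toy picture, an underspecified event is simply one whose head is left unassigned, which is exactly the ambiguity that headedness resolves.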

CHAPTER 3: METHODOLOGY

3.0 Introduction

This chapter is devoted to the description of the theoretical and methodological basis on which I will carry out the analysis of verbs of pure change of state or break verbs.

As I explained in the introduction, RRG’s system of lexical representation needs to be enriched. Pustejovsky’s GL seems a plausible model to follow because it is comprehensive in its structure and fine-grained in the details of its concepts, and also because, in principle, it seems compatible with RRG. It seems compatible because, apart from the fact that Van Valin and LaPolla (1997) incorporate the theory of nominal qualia to characterize the semantics of nominals, the two models converge in many other respects, such as the absence of derivative rules and of meaning definitions, and the perception of the lexicon as a major component. Thus, I am going to analyse those components of Pustejovsky’s lexicon which are either missing or simply not fully developed in RRG, and which could enhance the latter. Furthermore, I will analyse these components in conjunction with those that I consider relevant for lexical representation in RRG, in order to check the real compatibility of the two models. If they are compatible, RRG will be able to benefit from the alliance. Therefore, my hypothesis is that RRG’s lexical representations can be enhanced by incorporating some of the components of the GL’s system of lexical representation.

Having clarified this, the question now boils down to what components must be analysed. Pustejovsky’s lexicon distinguishes four levels of representation to account for lexical meaning. As mentioned already, qualia structure is already part of RRG; hence, here I focus on the other three: word structure, argument structure and event structure. More precisely, I have selected those variables that, in my view, are most representative of each level of representation; they are thus the concrete object of my study. Accordingly, in this chapter I start by explaining what I mean by each of those variables and how I am going to analyse them; in chapter 4 I present the data obtained from the analysis; and in chapter 5 I explain how the information presented in chapter 4 interacts. If the analysis of all variables in conjunction is coherent, the two theories are compatible and hence RRG will be able to benefit from the incorporation of those components of the GL into its lexical representations.

3.1 Structure of the analysis

This dissertation is based on the analysis of the word, argument and event structures of verbs of pure change of state or break verbs. In doing so, I in fact carry out two overlapping analyses: on the one hand, a paradigmatic and syntagmatic analysis of break verbs; on the other, a type and token analysis. I explain each in turn.

3.1.1 Paradigmatic and syntagmatic analysis

The paradigmatic analysis consists of two parts. On the one hand, I analyse the semantic relationships that break verbs enter into. By semantic relationships I mean antonymy, synonymy, hyponymy and hyperonymy, and polysemy – the most important paradigmatic relationships according to Lehrer (2000: 145). These relationships are not always formally motivated. For example, the antonym of do is undo, but the antonym of high is not *unhigh, but low. Lexical derivation, on the other hand, is formally motivated. Thus, a word like less derives into lesser or lessen through affixation.1

1 On lexical and morphological relatedness, see Martín Arista (2006).

The syntagmatic analysis is more complex and is based on a corpus. It has four main characteristics: (1) it studies verbs in context – in terms of argument and event structure; (2) it is based on real uses of language; (3) it is based on a quantitatively representative corpus; and (4) it is based on a corpus of qualitative prestige.

3.1.2 Type and token analysis

In addition to the paradigmatic and syntagmatic analysis, I also carry out an analysis of break verbs in terms of types and tokens and, as can be seen in Figure 3.1 below, the two analyses overlap.

On the one hand, I study break verbs as they appear in the dictionary, that is, as abstract lexemes – as types. On the other hand, I study the different realizations or instances of that abstraction – as tokens.2 Therefore, the analysis of types relates to the paradigmatic analysis above and focuses on the study of the semantic relationships and derivational properties of break verbs, whereas the analysis of tokens relates to the syntagmatic analysis mentioned above and studies the different realizations of break verbs, according to five different patterns (see Figure 3.1 below) and in terms of argument and event structure.

2 For an explanation of the type-token distinction, see Bauer (2004).
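The type/token distinction can be made concrete with a few lines of standard-library Python. The toy data below are invented, not drawn from the thesis corpus; only the five realizations of BREAK come from the text:

```python
# Minimal type/token sketch: the five inflectional realizations of BREAK
# are tokens of a single lexeme (type). Toy data for illustration.
from collections import Counter

LEMMA = {"break": "break", "breaks": "break", "breaking": "break",
         "broke": "break", "broken": "break"}

tokens = ["broke", "breaks", "broken", "broke", "breaking"]
token_counts = Counter(tokens)              # frequency of each word form
types = {LEMMA[t] for t in tokens}          # the lexemes instantiated

print(len(tokens), len(token_counts), types)
```

Five tokens, four distinct word forms, one lexeme: the token analysis operates on the first two levels, the type analysis on the last.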

Analysis
(1) Paradigmatic analysis (word structure):
    – Semantic relationships (not always formally motivated)
    – Lexical derivation (formally motivated)
    Syntagmatic analysis (argument and event structures):
    – Verbs in context
    – Real uses of language
    – Quantitatively representative corpus
    – Corpus of qualitative prestige
(2) Analysis of types (word structure):
    – Break as a lexeme in the dictionary
    Analysis of tokens (argument and event structures):
    – Break in its different realizations: break, breaks, breaking, broke and broken

Figure 3.1 Macrostructure of the analysis

As mentioned, the two analyses converge in the analysis of the word, argument and event structures of break verbs – along the lines of Pustejovsky (1995a) – and, consequently, in the analysis of a number of variables, which are the focus of my analysis and which I explain in detail in section 3.5 below. I undertake the analysis of word structure from two different perspectives: word structure in terms of lexical derivation, and word structure in terms of semantic relationships. With respect to derivation, I aim to answer three main questions: (1) how do prefixes and suffixes attach to break verbs? In other words, what is their distribution? (2) does the affixation process bring about a change in the grammatical category of the word? and (3) does the origin of the verbs have some grammatical impact? With regard to semantic relationships, I study the behaviour of break verbs in terms of antonymy, synonymy, hyponymy and hyperonymy, and polysemy. For the analysis of argument and event structure I have devised a table that comprises three different parts (Table 3.1 below): quantitative information, qualitative information and syntactic information. The variables in it have been borrowed mainly from RRG and the GL, although there is also a small contribution from Dik’s (1997) Functional Grammar.

QUANTITATIVE ANALYSIS (or Constituent Projection)
  Syntactic valence           0, 1, 2, 3
  Semantic valence            0, 1, 2, 3
  Argument-adjunct            Yes/No
  Periphery                   Yes/No

QUALITATIVE ANALYSIS
  Entity type                 Arg 1: 0, 1, 2, 3, 4   Arg 2: 0, 1, 2, 3, 4
  Nominal aspect              Arg 1: Count/Mass, [+telic]/[-telic]   Arg 2: Count/Mass, [+telic]/[-telic]
  Reference                   Arg 1: Definite/Indefinite   Arg 2: Definite/Indefinite
  Semantic role for Arg 3     Origin/Path/Destination

SYNTAGMATIC ANALYSIS (or Event Structure)
  Analysable into subevents   Yes/No
  Causal sequence             Cause and consequence: Yes/No
  Temporal sequence           Simultaneous/Non-simultaneous
                              “<α” (exhaustive ordered part of): Yes/No
                              “○α” (exhaustive overlap part of): Yes/No
                              “<○α” (exhaustive ordered overlap): Yes/No
  Headedness                  1st or 2nd subevent
  Type coercion               Yes/No

Table 3.1 Table of analysis
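One row of the table of analysis can be represented as a plain record. The values below are an invented, illustrative annotation for a sentence like ‘Tony broke the window’, not data from the thesis corpus; the field names simply mirror the table:

```python
# One hypothetical row of Table 3.1 as a dictionary; values invented
# for a sentence like 'Tony broke the window'.
example = {
    # quantitative (constituent projection)
    "syntactic_valence": 2, "semantic_valence": 2,
    "argument_adjunct": False, "periphery": False,
    # qualitative
    "arg1": {"entity_type": 1, "nominal_aspect": "count", "reference": "definite"},
    "arg2": {"entity_type": 1, "nominal_aspect": "count", "reference": "definite"},
    # event structure
    "subevents": True, "causal_sequence": True,
    "temporal_sequence": "non-simultaneous", "head": 1, "type_coercion": False,
}
print(example["syntactic_valence"], example["subevents"])
```

A corpus of annotated examples is then just a list of such records, over which the quantitative generalisations of chapter 4 can be computed.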

As can be seen from the table, the first two parts describe argument structure from a quantitative and a qualitative perspective, respectively. For that purpose, I have borrowed some of the concepts put forward by the RRG theory, such as syntactic and semantic valence, argument-adjunct and periphery. In the first part, I indicate the number of syntactic arguments of the predicate, the number of semantic arguments, whether there is any argument-adjunct, and whether there is a periphery. In the second part, I carry out a more qualitative analysis of the same arguments, indicating in each case whether the (referent of the) entity denoted by the argument is of type 0, 1, 2, 3 or 4 (following Dik 1997), whether it is telic or atelic, and whether it is definite or indefinite. In other words, I specify their entity type, nominal aspect and reference. Finally, in the third part I focus on the structure of the event, for which I draw from Pustejovsky. I start by pointing out whether the event is analysable into subevents or not. If it is, then I study the relationship existing between them; more precisely, I am concerned with two questions: (1) the existence or non-existence of a cause-consequence relation, and (2) the temporal relation existing between them. Next I study which of the two subevents is headed or foregrounded, and I finish with the question of whether there is type coercion or not. For greater clarity, I rephrase all this information in the shape of questions, as follows:

(I) Quantitative analysis:
a. Syntactic valence: how many syntactic arguments does the verb have?
b. Semantic valence: how many semantic arguments does the verb have?
c. How many macroroles are there?
d. What is the M-transitivity of the verb under analysis in that context?
e. Is there any argument-adjunct?
f. Does the example under analysis have a periphery?

(II) Qualitative analysis:
a. What type of entity is denoted by the arguments of the example under consideration?
b. What are the noun and nominal phrase aspects of the arguments identified above?
c. What is their reference in terms of definiteness?

(III) Syntagmatic analysis (event structure):
a. Is the event analysable into subevents?
b. Is there a causal relationship between those subevents?
c. What is the temporal relationship existing between the subevents?
d. Which subevent is headed?
e. Is there any case of type coercion?

Each of these questions is described in more detail in sections 3.5.2 and 3.5.3 below.

As regards the source of the data to be analysed, I will do the analysis of word structure using dictionaries – namely, the Longman Dictionary of the English Language (1984), The New Oxford Dictionary of English (1998), the Collins Cobuild English Dictionary for Advanced Learners (2001), the online Cambridge Advanced Learner’s Dictionary and the online Merriam-Webster Unabridged Dictionary –, thesauri – the Longman Lexicon of Contemporary English (1981) and the New Oxford Thesaurus of English (2000) – and the online database WordNet.3 The analysis of argument and event structure will be carried out on a corpus that comprises approximately 1,200 examples containing break verbs. In what follows I describe the corpus (section 3.2), the verbs under analysis (section 3.3) and WordNet (section 3.4).

3 Hereafter I will refer to the dictionaries and thesauri as LDEL, NODE, CCED, CALD, M-WD, LLCE and NOTE respectively.

3.2 The corpus: main characteristics and compilation process

Ever since linguistics engaged in the task of achieving the scientific status that was beyond question in other sciences, one of the central issues of the linguistic debate has been what kind – and what amount – of data is reliable. In other words, the question of what is empirically accurate has been a constant throughout the history of twentieth-century linguistics. This work relies on a corpus which draws from a reliable database, the British National Corpus (henceforth BNC), as the primary source of data. The procedure followed during the compilation task and analysis reveals the twofold character of this study. First, I have compiled the corpus in a deductive fashion: having identified the main shortcomings of RRG’s system of lexical representation, I have selected a number of variables whose analysis may throw some light on how to overcome those shortcomings, and I have set out to compile an appropriate corpus upon which to study those variables. The accuracy and appropriateness of the corpus obtained through this procedure will determine the validity of the hypotheses. Second, I shall carry out the analysis in an inductive fashion; that is to say, once the corpus has been compiled and analysed, I shall draw conclusions from the data. Thus, the two traditionally opposed approaches in the linguistic tradition are reconciled in this work, attesting to its empirical soundness.

The BNC is a 100 million-word corpus, a figure which considerably exceeds the 20 million-word minimum suggested by Halliday (1966). Were this not enough, the BNC allows the user to display bibliographic details of the source of a selected match and browse the text from which it has been extracted, apart from providing enough context to understand the sense in which the query focus is used and the nature of the source text; as Aston and Burnard (1998: 38) put it, ‘the BNC is probably more richly encoded and annotated than any other corpus of comparable size’. In short, the BNC is a very complete database, well suited to my aims, and it reflects real uses of language.

The BNC includes a wide range of genres from written and spoken English. Ninety per cent of the 4,124 texts that constitute it come from written sources and ten per cent come from spoken sources. The texts from written sources are split between ‘informative’ prose (approximately seventy-five per cent) and ‘imaginative’ or literary works (approximately twenty-five per cent). Kennedy (1998: 51) represents the categories included in the informative prose section as follows: natural and pure science, applied science, social and community, world affairs, commerce and finance, arts (rock & pop, theatre, dance, and so on), belief and thought (religion, philosophy, and so on), and leisure (sports, gardening, and so on).

The procedure followed in the compilation process is straightforward: I have gathered a total of 100 examples for each of the 12 verbs selected, and these examples have been distributed in five equal parts according to five verbal patterns; more precisely, I have picked out 20 matches for each of the five POS (part-of-speech) patterns that I illustrate below:

(1) POS patterns
a. VVB: finite base forms (e.g. “tear”, “crash”)
b. VVD: past tense forms (e.g. “tore”, “crashed”)
c. VVG: -ing forms (e.g. “tearing”, “crashing”)
d. VVN: past participle forms (e.g. “torn”, “crashed”)
e. VVZ: -s forms (e.g. “tears”, “crashes”)
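Selecting matches by POS pattern amounts to filtering tagged text on these five tag values. The sketch below is my own, over an invented toy sentence of (word, tag) pairs in the style of the BNC’s CLAWS tags; only the five verbal tags come from the text:

```python
# Sketch: selecting the five verbal POS patterns from BNC-style
# (word, tag) pairs. The tagged sentence is a toy example.
PATTERNS = {"VVB", "VVD", "VVG", "VVN", "VVZ"}

tagged = [("the", "AT0"), ("vase", "NN1"), ("shattered", "VVD"),
          ("into", "PRP"), ("pieces", "NN2")]

hits = [(w, t) for w, t in tagged if t in PATTERNS]
print(hits)
```

In the actual compilation the filtering is done by the BNC’s own POS Query rather than by hand, as the next paragraph explains.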

The POS Query is just one of the seven query modalities offered by the BNC. Very briefly, the seven queries can be defined as follows (Aston and Burnard 1998: 200):

1. Word Query searches the SARA word index and, as an option, then also searches the BNC for a word or words selected from among those found;4
2. Phrase Query searches the BNC for a phrase;
3. POS Query searches the BNC for a word with a specific (POS) code or codes;
4. Pattern Query searches the BNC for words matching a pattern or regular expression;
5. SGML Query searches the BNC for tags in SGML format;5
6. Query Builder Query combines queries of different or the same kinds into a single complex query using a visual interface; and
7. CQL Query searches the BNC using a query defined in the Corpus Query Language (CQL), SARA’s own internal command language.

The main advantage of the POS Query over the others is that it permits the extraction of different patterns, which thus guarantees a certain degree of variety and, therefore, exhaustiveness. The patterns selected are the ones exemplified in (1): VVB, VVD, VVG, VVN, VVZ. In this way, all basic realisations of these verbs are guaranteed to appear. The reason for choosing the VVB pattern instead of the VVI (the infinitive form of lexical verbs) can be found in Barber (1993: 208-209), where the basic – unmarked – status of this form of the verb in Modern English prevails against that sustained by the infinitive in Old English.6

The only condition imposed on the queries during the compilation process is that, for reasons of transparency, the examples be selected at random.
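The random-selection step can be sketched with the standard library. The list of matches below is a stand-in for the concordance lines a query returns, and the fixed seed (my addition) keeps the draw reproducible:

```python
# Sketch: drawing 20 matches at random per POS pattern from a
# (hypothetical) list of concordance lines.
import random

matches = [f"match_{i}" for i in range(100)]   # stand-in for BNC hits
rng = random.Random(42)                        # seed for reproducibility
sample = rng.sample(matches, 20)               # 20 per POS pattern

print(len(sample), len(set(sample)))           # sample has no duplicates
```

`random.sample` draws without replacement, so no concordance line is picked twice for the same pattern.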

4 SARA is the search program of the BNC.
5 SGML stands for ‘Standard Generalized Markup Language’.
6 The notion of markedness has been revised by typologists like Greenberg (1966), Givón (1984/1990, 1995) and Croft (1990b, 1991). Croft distinguishes between structural markedness, textual markedness and distributive markedness. A priori, the three criteria are convergent; however, Martín Arista and Ortigosa (2000: 259) claim that the relationship among them is not always convergent.

The final corpus comprises a total of 1,053 examples, whose original – and random – arrangement will vary as the work advances until it reaches its final state, in which the examples are arranged semantically according to their degree of prototypicality.7 I explain this question in detail in chapter 4.

The examples used for illustration in the argumentation process are displayed as they appear in the BNC. Thus, any misspelling or grammatical error is to be attributed to the corpus and to the fact that it reflects real uses of language.

3.3 Verbs under study

Van Valin (1990) suggests that semantic notions such as activity or change of state are aspects of meaning that are relevant to the classification of verbs, because two apparently similar verbs may not be so with respect to all components of meaning. Such is the case, for instance, with verbs like break and cut. At first sight, both verbs seem to show a similar behaviour; they participate in the middle alternation – ‘Carol cut the whole wheat bread/Whole wheat bread cuts easily’; ‘Tony broke the crystal vase/Crystal vases break easily’ –, in the instrument subject alternation – ‘Carol cut the bread with a knife/The knife cut the bread’; ‘Tony broke the window with a ball/The ball broke the window’ – and allow for the resultative phrase – ‘Carol cut the bread to pieces; Tony broke the piggy bank open’. In addition, both verbs imply a change in the material integrity of the affected entity. However, while break verbs allow for the causative/inchoative alternation, cut verbs do not – for instance, ‘Tony broke the window/The window broke’ (Levin 1993: 241) and ‘Carol cut the bread/*The bread cut’ (Levin 1993: 156). Thus, the former are categorised as change of state verbs, whereas the latter are labelled as verbs of cutting.

7 The BNC does not provide the same number of examples for all verbs: while some yield thousands of examples, others, like splinter and fracture, do not reach one hundred. Hence the total is not 1,200.

Levin (1993) follows suit and lays out an outstanding classification of English verbs on the assumption that the syntactic behaviour of a verb is determined, to a great extent, by its meaning.8 Thus, verbs that share a certain number of meaning components and observe a certain number of common alternations and properties are gathered together in the same class.

For my dissertation, I have selected a verbal class big enough to offer significant data but small enough to be easily handled, and whose syntactic and semantic characteristics are sufficiently complex to set up a good starting point for the research: verbs of change of state. Among these, Levin distinguishes between verbs of pure change of state and those which are not; both are found in the middle and resultative alternations and both refer to actions that bring about a change in material integrity, but the latter do not turn up in the causative/inchoative alternation.9 I illustrate the causative/inchoative alternation in (2).

(2) a. I broke the twig off (of) the branch. / The twig broke off (of) the branch. (Levin 1993: 166)
b. Carla slid the books across the table. / The books slid across the table. (Levin 1993: 133)

The contrast between pure and non-pure verbs of change of state is exemplified by verbs like break and cut, respectively. For my dissertation, I have selected only those belonging to the class of pure change of state or, as they are also labelled, break verbs.

8 Fillmore (1970) carried out a study on verbs of change of state that set the trend for subsequent studies based on the idea that syntax is semantically motivated, such as Levin’s (1993).
9 Hale and Keyser (1987) suggest the following rule to account for the causative/inchoative alternation: [x cause [y undergo change], (by...)].

In principle, this group comprises 13 verbs; however, I agree with Lemmens (1998: 37) that chip, which Levin categorises as a break verb, actually belongs to the class of cut verbs. As a matter of fact, Levin herself includes it – accidentally, most likely – in the latter class. Therefore, chip is disregarded for analysis. The final set of break verbs consists of 12 verbs, namely break, crack, crash, crush, fracture, rip, shatter, smash, snap, splinter, split and tear, and all of them are characterised by the following: (1) they participate in the causative/inchoative alternation (‘I crashed the car against the fence’ / ‘The car crashed against the fence’), the middle alternation (‘Mary cut the rope’ / ‘The rope cuts easily’) and the instrument subject alternation (‘Natalia shattered the window with a rock’ / ‘The rock shattered the window’), whereas they are excluded from the with/against alternation (‘Mila furiously hit the score with the baton’ / ‘Mila furiously hit the baton against the score’), the conative alternation (‘Ainhoa hit the table’ / ‘Ainhoa hit at the table’) and the body-part possessor ascension alternation (‘Sara cut the chicken’s wing’ / ‘Sara cut the chicken on the wing’); (2) the unintentional interpretation is available, as in ‘Henry broke his elbow’; (3) they allow for the resultative phrase, of the kind ‘As kids, my mum used to sing us asleep’; and (4) they allow for the zero-related nominal (‘a break in the window’).

3.4 WordNet

WordNet is an online lexical database inspired by modern psycholinguistic theories of language processing.10 Originally developed as a conceptual dictionary, it organises English verbs, nouns, adjectives and adverbs into synonym sets or synsets – ‘a set of words that are interchangeable in some context’ (http://wordnet.princeton.edu/gloss/#Synset) – each expressing a lexical concept. The synonym sets are linked by different relations such as hyponymy/hyperonymy or meronymy. WordNet works like a dictionary and a thesaurus at the same time. When a search is introduced, it first displays a list of senses arranged by lexical category (that is, nouns, verbs, adjectives and adverbs). For example, for ‘break’ it puts forward two lists of senses, one for nouns (15 senses) and one for verbs (59 senses). I display the list of verbal senses below for illustration, and because I will refer to them in subsequent sections.

10 The information relating to WordNet has been obtained from: http://wordnet.princeton.edu/.

(3) The verb ‘break’ has 59 senses in WordNet:

1. interrupt, break -- (terminate; “She interrupted her pregnancy”; “break a lucky streak”; “break the cycle of poverty”)
2. break, separate, split up, fall apart, come apart -- (become separated into pieces or fragments; “The figurine broke”; “The freshly baked loaf fell apart”)
3. break -- (destroy the integrity of; usually by force; cause to separate into pieces or fragments; “He broke the glass plate”; “She broke the match”)
4. break -- (render inoperable or ineffective; “You broke the alarm clock when you took it apart!”)
5. break, bust -- (ruin completely; “He busted my radio!”)
6. transgress, offend, infract, violate, go against, breach, break -- (act in disregard of laws and rules; “offend all laws of humanity”; “violate the basic laws or human civilization”; “break a law”)
7. break, break out, break away -- (move away or escape suddenly; “The horses broke from the stable”; “Three inmates broke jail”; “Nobody can break out—this prison is high security”)
8. break -- (scatter or part; “The clouds broke after the heavy downpour”)
9. break, burst, erupt -- (force out or release suddenly and often violently something pent up; “break into tears”; “erupt in anger”)
10. break, break off, discontinue, stop -- (prevent completion; “stop the project”; “break off the negotiations”)
11. break in, break -- (enter someone’s property in an unauthorized manner, usually with the intent to steal or commit a violent act; “Someone broke in while I was on vacation”; “They broke into my car and stole my radio!”)
12. break in, break -- (make submissive, obedient, or useful; “The horse was tough to break”; “I broke in the new intern”)
13. violate, go against, break -- (fail to agree with; be in violation of; as of rules or patterns; “This sentence violates the rules of syntax”)
14. better, break -- (surpass in excellence; “She bettered her own record”; “break a record”)
15. disclose, let on, bring out, reveal, discover, expose, divulge, impart, break, give away, let out -- (make known to the public information that was previously known only to a few people or that was meant to be kept a secret; “The auction house would not disclose the price at which the van Gogh had sold”; “The actress won’t reveal how old she is”; “bring out the truth”; “he broke the news to her”)
16. break -- (come into being; “light broke over the horizon”; “Voices broke in the air”)
17. fail, go bad, give way, die, give out, conk out, go, break, break down -- (stop operating or functioning; “The engine finally went”; “The car died on the road”; “The bus we travelled in broke down on the way to town”; “The coffee maker broke”; “The engine failed on the way to town”; “her eyesight went after the accident”)
18. break, break away -- (interrupt a continued activity; “She had broken with the traditional patterns”)
19. break -- (make a rupture in the ranks of the enemy or one’s own by quitting or fleeing; “The ranks broke”)
20. break -- (curl over and fall apart in surf or foam, of waves; “The surf broke”)
21. dampen, damp, soften, weaken, break -- (lessen in force or effect; “soften a shock”; “break a fall”)
22. break -- (be broken in; “If the new teacher won’t break, we’ll add some stress”)
23. break -- (come to an end; “The heat wave finally broke yesterday”)
24. break -- (vary or interrupt a uniformity or continuity; “The flat plain was broken by tall mesas”)
25. break -- (cause to give up a habit; “She finally broke herself of smoking cigarettes”)
26. break -- (give up; “break cigarette smoking”)
27. break -- (come forth or begin from a state of latency; “The first winter storm broke over New York”)
28. break -- (happen or take place; “Things have been breaking pretty well for us in the past few months”)
29. break -- (cause the failure or ruin of; “His peccadilloes finally broke his marriage”; “This play will either make or break the playwright”)
30. break -- (invalidate by judicial action; “The will was broken”)
31. separate, part, split up, split, break, break up -- (discontinue an association or relation; go different ways; “The business partners broke over a tax question”; “The couple separated after 25 years of marriage”; “My friend and I split up”)
32. demote, bump, relegate, break, kick downstairs -- (assign to a lower position; reduce in rank; “She was demoted because she always speaks up”; “He was broken down to Sargeant”)
33. bankrupt, ruin, break, smash -- (reduce to bankruptcy; “My daughter’s fancy wedding is going to break me!”; “The slump in the financial markets smashed him”)
34. break -- (change directions suddenly)
35. break -- (emerge from the surface of a body of water; “The whales broke”)
36. collapse, fall in, cave in, give, give way, break, founder -- (break down, literally or metaphorically; “The wall collapsed”; “The business collapsed”; “The dam broke”; “The roof collapsed”; “The wall gave in”; “The roof finally gave under the weight of the ice”)
37. break dance, break-dance, break -- (do a break dance; “Kids were break-dancing at the street corner”)
38. break -- (exchange for smaller units of money; “I had to break a $100 bill just to buy the candy”)
39. break, break up -- (destroy the completeness of a set of related items; “The book dealer would not break the set”)
40. break -- (make the opening shot that scatters the balls)
41. break -- (separate from a clinch, in boxing; “The referee broke the boxers”)
42. break, wear, wear out, bust, fall apart -- (go to pieces; “The lawn mower finally broke”; “The gears wore out”; “The old chair finally fell apart completely”)
43. break, break off, snap off -- (break a piece from a whole; “break a branch from a tree”)
44. break -- (become punctured or penetrated; “The skin broke”)
45. break -- (pierce or penetrate; “The blade broke her skin”)
46. break, get out, get around -- (be released or become known; of news; “News of her death broke in the morning”)
47. pause, intermit, break -- (cease an action temporarily; “We pause for station identification”; “let’s break for lunch”)
48. break -- (interrupt the flow of current in; “break a circuit”)
49. break -- (undergo breaking; “The simple vowels broke in many Germanic languages”)
50. break -- (find a flaw in; “break an alibi”; “break down a proof”)
51. break -- (find the solution or key to; “break the code”)
52. break -- (change suddenly from one tone quality or register to another; “Her voice broke to a whisper when she started to talk about her children”)
53. break, recrudesce, develop -- (happen; “Report the news as it develops”; “These political movements recrudesce from time to time”)
54. crack, check, break -- (become fractured; break or crack on the surface only; “The glass cracked when it was heated”)
55. break -- (of the male voice in puberty; “his voice is breaking—he should no longer sing in the choir”)
56. break -- (fall sharply; “stock prices broke”)
57. fracture, break -- (fracture a bone of; “I broke my foot while playing hockey”)
58. break -- (diminish or discontinue abruptly; “The patient’s fever broke last night”)
59. break -- (weaken or destroy in spirit or body; “His resistance was broken”; “a man broken by the terrible experience of near-death”)

The main advantage that WordNet presents over other dictionaries or thesauri is that it allows one to search for the different lexical relations of each sense of a word – Synonyms, Coordinate terms, Hypernyms, Hyponyms, Holonyms, Meronyms, Derivationally related forms, and so on – separately, as shown below:11

[Figure 3.2 here: WordNet’s web search form, showing options such as ‘Synonyms, ordered by estimated frequency’, ‘Show glosses’ and ‘Show contextual help’, together with a ‘Search’ button]

Figure 3.2 Illustration of WordNet’s search mechanism

Thus, each of the 59 senses displayed above is provided with a set of lexical relations of its own. Although many dictionaries and thesauri also organise lexical relations around the notion of ‘sense’, none of those consulted distinguishes senses with as much precision and detail and, as a result, many of their senses overlap.

The range of lexical relations available for each word and sense varies according to their characteristics. Thus, whereas ‘bend’ has antonyms for two senses, ‘crease’, another verb of the same family, does not have any; hence, that lexical relation does not appear as an option in the box below:

11 What WordNet calls lexical relations I here call semantic relations.

2 of 6 senses of bend

Sense 1
bend, flex -- (form a curve; "The stick does not bend")
Antonym of straighten (Sense 1)
=> straighten, unbend -- (straighten up or out; make straight)

Sense 3
flex, bend, deform, twist, turn -- (cause (a plastic object) to assume a crooked or angular form; "bend the rod"; "twist the dough into a braid"; "the strong man could turn an iron bar")
Antonym of unbend (Sense 3)
=> unbend -- (free from flexure; "unbend a bow")

Figure 3.3 Results for ‘Antonyms’ search of verb ‘bend’

Despite its limitations, WordNet provides the kind of information that I need for my research: highly structured sets of synonyms which are connected to other sets of synonyms by way of a number of lexical-semantic relations such as hyponymy/hyperonymy or antonymy. Through hyponymy and hyperonymy relationships, verbs are organised into hierarchies, which will facilitate the identification of real synonyms in chapter 4. The enumeration of antonyms per sense rather than per word will also contribute to the precision of the results displayed in chapter 4 for the analysis of antonymy. And the great number of senses listed for each verb helps avoid circularity and overlap among the synonyms provided and, consequently, also among the antonyms.

3.5 Sections of the analysis: word structure, argument structure and event structure

Before going into the analysis of all the questions and variables mentioned above, it is necessary to explain what I understand by each of them. Thus, in what follows I offer a brief theoretical and methodological introduction to each of these questions that will pave the way for the analysis and subsequent discussion in chapters 4 and 5.

3.5.1 Word structure

As stated above, the first part of the analysis is concerned with word structure. I use the label ‘word structure’ to refer to the lexical inheritance structure proposed by Pustejovsky (1995a), which identifies the way in which lexical structures relate to each other in the lexicon, but with the difference that I place special emphasis on lexical derivation too. The reason for incorporating this extra ingredient is that derivation also plays a role in the connection and interaction between the levels of representation. Furthermore, verbal classes and, by extension, lexical classes (cf. Levin 1993, Levin and Rappaport Hovav 1995, and Levin 2005) are established around certain grammatical behaviours (Cortés 2006, Martín Arista forthcoming-b).

Thus, I undertake the analysis of word structure attending to two different parameters: lexical derivation and lexical relations.

3.5.1.a Lexical derivation

Words have a rather complex internal structure which resembles syntactic structure to a certain extent. Very briefly, words are formed by a series of elements that build on each other in order to create a more complex structure. The new structure is thus endowed with syntactic and semantic information. These elements are roots, affixes, stems and bases.

Morphologically speaking, affixes can be of three types: prefixes, infixes and suffixes. Infixes are affixes which are inserted ‘inside another morpheme’ (Spencer 1991: 12). They seem to be very common in Semitic languages but are rarely used in English (Baker 2003: 44). Thus, in this work I focus exclusively on the use of prefixes and suffixes in the derivational process.

The definition of the word prefix is problematic for several reasons. Firstly, grammarians do not seem to agree on what a prefix is; secondly, many of them, especially the older ones, consider prefixes as being part of compounding (Marchand 1969: 129). The latter might be due to the fact that, originally, prefixes – and also suffixes – were words themselves. Although, as Marchand (1969: 129) explains, the term is widely used in linguistics, it is attributed different meanings. To begin with, he says, the Oxford English Dictionary (1933) ‘uses the word for almost any prepositive particle, but employs the term “combining form” for various pre-particles without giving a definition of either’. Other grammarians such as Jespersen or Koziol either do not provide a definition at all or provide one that turns out to be inconsistent (Marchand 1969: 129). Kruisinga misuses the term to refer to the first of the parts of a compound (Marchand 1969: 129), and so do others such as Kastovsky (1992: 362), who refers to morphemes such as under-, ofer- or forþ- as prefixes but treats them as adverbs and prepositions, thus considering the resulting word a compound instead of a derivative. Furthermore, Kastovsky (1992: 363) says, ‘compounds must also be kept apart from prefixations and suffixations, but the delimitation is not absolute, there being a number of borderline cases’. Nowadays, however, there seems to be agreement on the definition of prefixes as particles bound to free morphemes in initial position. This means that the resulting word is a derivative, not a compound. And from this it follows that suffixes are bound morphemes which differ from prefixes in being postponed to the free morpheme.

With these ideas in mind, I have undertaken the analysis of the derivational properties of break verbs, which will cover three main questions:

1. Distribution of prefixes and suffixes

2. Changes in grammatical category

3. The origin of verbs: Romance vs. Germanic verbs

For the analysis, I will make use of Marchand’s (1969) work as the primary source. This book provides an extensive list of prefixes and suffixes which guarantees the accuracy of the analysis. Furthermore, it covers all the lists included in other works consulted such as Katamba (1993), Stockwell and Minkova (2001) or Baker (2003).

3.5.1.b Semantic relationships

In this section, I explain how I will tackle word structure attending to a different type of relationship, namely lexical relationships. As Cruse et al. (2002) explain, there are various approaches to the semantic structure of words, among which they distinguish three crucial oppositions: semasiology vs. onomasiology; sense vs. reference; and denotational vs. non-denotational. In the present section I approach the issue of word structure from the first one, that is, the distinction between semasiology and onomasiology. Very briefly, semasiology analyses word meaning from the signifier to the signified. Thus, from a semasiological point of view I will be concerned with the study of polysemy, where one word (signifier) may have more than one meaning (signified). Onomasiology, by contrast, analyses word meaning from the signified to the signifier. Thus, from an onomasiological point of view I will be concerned with the study of lexical relations such as synonymy or hyponymy, where different words or expressions relate to one common meaning in different ways.

The section is arranged as follows:

1. Antonymy

2. Synonymy

3. Hyponymy and hyperonymy

4. Polysemy

I revise these questions in the context of this research below.

Antonymy

Antonyms are regarded by the ordinary speaker as words with opposite meanings. Even linguists like Bauer (1998: 31) follow this line of thought and state that antonyms are ‘words that are opposite in meaning’. Nonetheless, here I take this definition to be inconsistent, as it is the ‘senses’ of two or more words that are antonyms. This question will be discussed in more detail when dealing with synonymy in the next section.

Antonyms can be of different types. On the one hand, they can be gradable; on the other, non-gradable. Gradable or polar antonyms are said to be the most typical type, and they refer to ‘those opposites which are labelled by adjectives as being at the opposite ends of some scale’ (Bauer 1998: 31).12 They are illustrated in (6):

(6) ‘Good versus bad, deep versus shallow, pleased versus displeased, desirable versus undesirable’ (Bauer 1998: 31)

12 The labels ‘polar’ and ‘non-polar’ are used by Cruse et al. (2002: 470).


The property described by gradable antonyms is not absolute but is attributed to an entity to a certain degree. Thus, someone may be either good or bad, but also better or worse than someone else. Non-gradable antonyms are those which divide the world (‘the universe of discourse’) into mutually exclusive units. This means that the resulting subsets are complementary (Cruse et al. 2002: 470), and the complementary pairs ‘male and female’ and ‘day and night’ serve as examples. Bauer (1998: 31) describes pairs of this kind as incompatible, and therefore studies them within the field of hyponymy.

While the notion of antonymy seems rather transparent, its study appears to have focused mainly on adjectives and nouns, neglecting the question of verbal antonymy to a certain extent.13 Even in thesauri, where the number of adjectival antonyms listed is large, the number of verbal antonyms is considerably reduced. Whether this is due to a lack of antonyms for this lexical class or to a lack of study is unclear; what is clear is that none of the thesauri or dictionaries consulted devotes much space to them. The most complete database found in this respect is WordNet. Hence, in the analysis of antonymy, I make use of WordNet to determine all the possible antonyms of break verbs and, in doing so, I will try to answer the question that I have just posed, namely, why English has so few verbal antonyms.

Synonymy

Tetescu (1975, in Cruse et al. 2002: 496) defines synonymy as a ‘relation que le sens commun estime claire, mais que les logiciens ne cessent de proclamer crucifiante’ [‘a relation that common sense deems clear, but that logicians never cease to proclaim excruciating’]. Synonymy is hard to define. There are those who think that it simply does not exist as such, because no two words have exactly the same meaning. By contrast,

13 There are also studies of antonymy in English prefixes. See Lehrer (2000: 143-54).

others – non-specialists – take synonymy in a rather general way as the relationship existing between two or more words that mean the same. This approach is too vague and implies that if two words are synonyms they must be so in all contexts. This is what Lyons calls full synonymy:14 the relation ‘which holds between lexemes when all their component senses are identical in meaning’ (Lyons 1981, cited in Cruse et al. 2002: 486). However, the notion of full synonymy as such is usually rejected, as it is rather useless. What would be the purpose of having two words that mean exactly the same in all contexts? There is no real semantic motivation for such a thing (Cruse et al. 2002: 488); the only role full synonyms may perform is purely aesthetic. Of course, there are also those cases in which a certain word enters a language and becomes part of its lexicon even if the target language already has a word for the concept designated by that word. In some cases, the native word of the target language disappears, the new word occupying its place, but in others the native word and the new word coexist, giving as a result a potential case of full synonymy.15 Bearing in mind that language, in its evolution, pursues economy, the coexistence of two words that mean exactly the same in all contexts seems rather futile, and the scarcity of such examples in English corroborates it.

Let us take, by way of illustration, the synonyms put forward by dictionaries and thesauri. I begin by displaying the list of synonyms provided by the NOTE for the appropriate senses of break verbs. The list of words displayed for each of the verbs in this thesaurus is supposed to be a set of synonyms of the head word (see Table 3.2).

14 Full synonyms are also known as absolute synonyms (Cruse et al. 2002: 488).
15 Borrowings of this type tend to concentrate on learned and cultural fields, and they tend to be used for reasons of prestige.


BREAK: shatter, smash, smash to smithereens, crack, snap, fracture, fragment, splinter; disintegrate, fall to bits, fall to pieces; split, burst, blow out; tear, rend, sever, rupture, separate, divide; informal bust; rare shiver
CRACK: split, fracture, fissure, rupture, break, snap, cleave; rare craze
CRASH: smash, wreck, bump; Brit. write off; Brit. informal prang; N. Amer. informal total
CRUSH: squash, squeeze, press, compress; pulp, mash, macerate, mangle; flatten, trample on, tread on; informal squidge, splat; N. Amer. informal smush
FRACTURE: break, snap, crack, cleave, rupture, shatter, smash, smash to smithereens, fragment, splinter, split, separate, burst, blow out; sever, divide, tear, rend; disintegrate, fall to bits, fall to pieces; informal bust; rare shiver
RIP: tear, snatch, jerk, tug, wrench, wrest, prise, force, heave, haul, drag, pull, twist, peel, pluck, grab, seize; informal yank
SHATTER: smash, smash to smithereens, break, break into pieces, burst, blow out; explode, implode; splinter, crack, fracture, split, disintegrate, crumble; technical spall; informal bust; rare shiver
SMASH: break, break to pieces, smash to smithereens, shatter; splinter, crack, split; informal bust; rare shiver
SNAP: break, break in/into two, fracture, splinter, separate, come apart, part, split, crack; informal bust
SPLINTER: shatter, break into tiny pieces, break into fragments, smash, smash into smithereens, fracture, crack, fragment, disintegrate; rare shiver
SPLIT: break, chop, cut, hew, lop, cleave; snap, crack; informal bust
TEAR: rip up, rip in two, pull apart, pull to pieces, shred

Table 3.2 Synonyms for the core sense of break verbs according to the NOTE


If the synonyms listed by the NOTE are full synonyms of their corresponding head words, they must be able to replace the head word in all contexts. In order to test this, I have searched the BNC for one of the synonyms of break, namely divide. The number of contexts in which it appears is varied. I illustrate this in (7):

(7) Contexts in which divide may appear according to the BNC

i. With prepositional phrase -

The number of UK groups reached 300 in the early 1980s and has remained around that level ever since — groups of very high quality and certainly not based on ‘ephemeral enthusiasms’! These early groups were called ‘Threes Groups’ because each adopted three prisoners, one from the East, one from the West and one from Afro-Asian countries. In many cases a group would divide into sub-groups and work on more than three at a time.

ii. With inanimate object -

If he wins tomorrow’s Grand Prix, he will be rewarded with a Jaguar car. Only nine horses contested last night’s Puissance but, though the big red wall at 6ft 11in was short of record height, it reached a sporting climax. Both Tim Stockdale, on Supermarket, and Michael Whitaker, on Next Didi, cleared the wall at that height at the second attempt and chose to divide the honours.

iii. With animate object -

It is interesting to speculate what the consequences would have been for the curriculum if his view had prevailed. But the practical difficulty of moving towards such a solution, given existing comparatively small school buildings, was conclusive; and the then Permanent Secretary, Maurice Holmes, advised the Minister to divide secondary school children up into relatively small units, each with its own type of curriculum, according to ability.

iv. With object and prepositional phrase -

Huffman coding: this is approximately the opposite of LZW, and generates a set of variable length codes for fixed length data items, using the shortest codes for most common data items. Digital TV video compression. The same techniques can be used for bit image files. The most common method is to divide each picture into square blocks of pixels.

v. Middle construction -

But other readers will, I imagine, accept the general drift of what I have said, whilst disagreeing on many points of detail. Not wanting to seem too negative, I have outlined ways in which the discipline of English as currently constituted might divide, to the benefit of both parties.

vi. With preposition -

Not only can they cover a much wider area than they could on their own, but their collective experience assists in finding food. While searching for prey, many dolphin species move in tight schools distributed over a large area, using sound to stay in contact and communicate with each other over significant distances. A dolphin school will often divide up into subgroups while searching for food, which separate and spread out over a wide area of sea, still diving synchronously — evidence that they are probably in some form of acoustic contact.

vii. With preposition, object and prepositional phrase -

You have to distinguish between the Norway species, grown as a street tree, and the common wild sycamore, both of which have large leaves, and the garden kinds originating in China and the eastern regions of North America, which have much smaller leaves and an elegant bearing. Then you have to divide them up into those which make slender trees and others which remain as bushes with a fountain-like outline.

If the synonyms provided by the NOTE are full synonyms with the corresponding head words, they must be interchangeable. That is to say, if divide is a synonym of break, the former must be interchangeable with the latter and also with its other potential synonyms. The test, however, refutes this idea:

(8) Synonyms in different contexts: DIVIDE replaced by FRACTURE and BREAK

i. With prepositional phrase -

The number of UK groups reached 300 in the early 1980s and has remained around that level ever since — groups of very high quality and certainly not based on ‘ephemeral enthusiasms’! These early groups were called ‘Threes Groups’ because each adopted three prisoners, one from the East, one from the West and one from Afro-Asian countries. In many cases a group would ?FRACTURE/BREAK into sub-groups and work on more than three at a time.


ii. With inanimate object -

If he wins tomorrow's Grand Prix, he will be rewarded with a Jaguar car. Only nine horses contested last night's Puissance but, though the big red wall at 6ft 11in was short of record height, it reached a sporting climax. Both Tim Stockdale, on Supermarket, and Michael Whitaker, on Next Didi, cleared the wall at that height at the second attempt and chose to *FRACTURE/*BREAK the honours.

iii. With animate object -

It is interesting to speculate what the consequences would have been for the curriculum if his view had prevailed. But the practical difficulty of moving towards such a solution, given existing comparatively small school buildings, was conclusive; and the then Permanent Secretary, Maurice Holmes, advised the Minister to *FRACTURE/BREAK secondary school children up into relatively small units, each with its own type of curriculum, according to ability.

iv. With object and prepositional phrase -

Huffman coding: this is approximately the opposite of LZW, and generates a set of variable length codes for fixed length data items, using the shortest codes for most common data items. Digital TV video compression. The same techniques can be used for bit image files. The most common method is to *FRACTURE/BREAK each picture into square blocks of pixels.

v. Middle construction -

But other readers will, I imagine, accept the general drift of what I have said, whilst disagreeing on many points of detail. Not wanting to seem too negative, I have outlined ways in which the discipline of English as currently constituted might ?FRACTURE/?BREAK, to the benefit of both parties.

vi. With preposition -

Not only can they cover a much wider area than they could on their own, but their collective experience assists in finding food. While searching for prey, many dolphin species move in tight schools distributed over a large area, using sound to stay in contact and communicate with each other over significant distances. A dolphin school will often *FRACTURE/BREAK up into subgroups while searching for food, which separate and spread out over a wide area of sea, still diving synchronously — evidence that they are probably in some form of acoustic contact.

vii. With preposition, object and prepositional phrase -

You have to distinguish between the Norway species, grown as a street tree, and the common wild sycamore, both of which have large leaves, and the garden kinds originating in China and the eastern regions of North America, which have much smaller leaves and an elegant bearing. Then you have to *FRACTURE/BREAK them up into those which make slender trees and others which remain as bushes with a fountain-like outline.

In short, the test reveals that the so-called synonyms in dictionaries and thesauri cannot be regarded as full synonyms, as they cannot replace each other in any context.
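The substitution test can also be recorded systematically. The sketch below is a minimal, illustrative tabulation of the kind of acceptability grid the test produces (the context labels and judgments are simplified stand-ins for those in (8), not a new instrument of the analysis): a candidate counts as a full synonym only if it is acceptable in every tested context.

```python
# Toy record of the substitution test in (8): '*' = unacceptable,
# '?' = marginal, 'ok' = acceptable. Judgments are illustrative.
JUDGMENTS = {
    "with prepositional phrase": {"fracture": "?", "break": "ok"},
    "with inanimate object":     {"fracture": "*", "break": "*"},
    "with animate object":       {"fracture": "*", "break": "ok"},
    "middle construction":       {"fracture": "?", "break": "?"},
    "with preposition (up into)": {"fracture": "*", "break": "ok"},
}

def full_synonyms(judgments):
    """Return the candidates judged acceptable in every tested context."""
    verbs = set.intersection(*(set(ctx) for ctx in judgments.values()))
    return {v for v in verbs
            if all(ctx[v] == "ok" for ctx in judgments.values())}

# Neither candidate substitutes for 'divide' in every context,
# so neither qualifies as a full synonym of it.
print(full_synonyms(JUDGMENTS))  # -> set()
```

A single failing context is enough to disqualify a candidate, which is precisely what the grid makes visible at a glance.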

This idea is in accord with the Principle of No Synonymy of Grammatical Forms, developed by Goldberg (1995: 68) and attributed to Humboldt, Vendryes, Ogden, and Richards:

It is important to bear in mind that both semantic and pragmatic aspects of grammatical form are relevant for determining synonymy. Only if two forms have both the same semantics and the same pragmatics will they be disallowed by the Principle of No Synonymy of Grammatical Forms. This principle is impossible to prove conclusively, since one would have to examine all forms in all languages to do so. (Goldberg 1995: 229; emphasis in original)

Thus, in this work I start from the idea that full synonymy does not exist and undertake the study of synonymy from a broader perspective. More precisely, I approach this phenomenon along the same lines as I did antonymy: not as a matter of similarity of meaning between words, but as a matter of similarity of meaning between the ‘senses’ of a word.

Hyponymy and hyperonymy

Hyponymy is a lexical phenomenon, also referred to as ‘inclusion’ (Cruse et al. 2002: 471), which is based on sets of hierarchical relationships at the top of which there stands a ‘superordinate’ term or ‘hyperonym’.16 The superordinate term, also known as the ‘generic’ or ‘genus’, is the term around which all the members of a certain domain are defined and is, therefore, ‘the most general or prototypical term’ (Faber and Mairal 1999: 187). Bauer (1998: 31) illustrates these concepts by means of the following example:

16 Although ‘hyperonym’ and ‘superordinate’ are not synonymous terms, here I use them interchangeably.

Dogs and cats are both kinds of animal. If it is true that we can say that every dog is an animal, a dog is a kind of animal, I saw some dogs and other animals, then we can say that dog is a HYPONYM of animal and that animal is a SUPERORDINATE of dog. If we try the same exercise with cat, we find that cat is also a hyponym of animal, and that animal is a superordinate term for cat. Dog and cat are CO-HYPONYMS of the superordinate term animal.

Furthermore, these hyponyms have some hyponyms of their own: ‘bitch’, ‘puppy’, ‘Rottweiler’, ‘tom’, ‘kitten’, etc. are, in turn, hyponyms of ‘dog’ and ‘cat’ respectively (Bauer 1998: 31). Thus, we can speak of hierarchies of hyponymy.
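Since hyponymy is transitive, a hierarchy of this kind can be represented as a simple sketch in which each word points to its immediate superordinate. The fragment below is merely an illustration of Bauer's example (the dictionary and function names are mine, not part of any lexical resource):

```python
# Toy hyponymy hierarchy after Bauer (1998: 31).
# Each key is a hyponym of its value, its immediate superordinate.
HYPERONYM = {
    "dog": "animal", "cat": "animal",
    "bitch": "dog", "puppy": "dog", "Rottweiler": "dog",
    "tom": "cat", "kitten": "cat",
}

def is_hyponym(word, superordinate):
    """True if `word` lies below `superordinate` in the hierarchy.
    Hyponymy is transitive: puppy < dog < animal."""
    while word in HYPERONYM:
        word = HYPERONYM[word]
        if word == superordinate:
            return True
    return False

def co_hyponyms(a, b):
    """True if a and b share the same immediate superordinate."""
    return (HYPERONYM.get(a) is not None
            and HYPERONYM.get(a) == HYPERONYM.get(b))

print(is_hyponym("puppy", "animal"))  # True, via transitivity
print(co_hyponyms("dog", "cat"))      # True: both under 'animal'
```

The transitive lookup makes explicit why ‘puppy’ is a hyponym of ‘animal’ even though ‘animal’ is not its immediate superordinate.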

The superordinate term of a hierarchy can be defined according to different criteria.17 In this study I will focus on two criteria: (1) frequency and (2) the number of complementation patterns in which the term occurs. I display the results in chapter 4.

Polysemy

The study of polysemy has led to many different, and sometimes conflicting, conclusions. Murphy (2003: 19) summarizes some of the main approaches and concludes that there are three main stances: (a) ‘multiple senses are illusory (each word has only one sense)’; (b) ‘additional senses are derived from a basic sense representation’ – this is Jackendoff’s (1976) and Pustejovsky’s (1995a) approach; and (c) ‘no senses are basic, but instead meanings are generated through pragmatic knowledge’ – this tendency is illustrated by Nunberg (1978), among others. In this dissertation, I pay special attention to Pustejovsky’s view that polysemous words have a basic sense from which all the others are derived by means of lexical rules. The number of senses that a word has is not something fixed, and a different approach would be uneconomical.

17 See Prasada (1999: 135-140) for a different approach.

However, the corpus under analysis contains examples which cannot be grouped together in the same lexical entry, because finding a basic meaning shared by all their predicates is impossible. Thus, as we will see in chapter 4, the corpus will be organized accordingly, distinguishing between those examples whose meaning can be derived from the basic sense of pure change of state and those which cannot. In order to do that, I start by briefly describing Pustejovsky’s approach to polysemy in this section.

The phenomenon of polysemy lies at the heart of Pustejovsky’s work, which aims to account for the multiplicity of word meaning. Pustejovsky (1995a) is aware that most previous work on polysemy revolves around those lexical items that carry unrelated meanings (contrastive ambiguity), which are represented in sense enumeration lexicons.18

(9) Contrastive ambiguity (examples taken from the corpus of analysis)

a. These are the children who sometimes crack at university level because they are facing challenge for the first time in their lives.

b. This man would forget the purpose of his visit, at least for a brief spell, and the fact that he was on official business, and drink tea and eat a good meal in the prosecutor’s house, crack jokes and make amiable conversation, and sleep through the heat of the day.

c. ‘Crack that, son, and they’ll make you chief constable.’

In (9.a) ‘crack’ means ‘to fail’ and, more precisely, ‘to lose control or effectiveness under pressure’ (M-WD); in (9.b), it means ‘to tell especially suddenly or strikingly’ (M-WD); and in (9.c), it means ‘to puzzle out and expose, solve, or reveal the mystery of’ (M-WD).

18 The labels complementary polysemy and contrastive ambiguity are used by Pustejovsky following Weinreich (1964).

Sense Enumeration Lexicons account for polysemy by listing each sense of a word in a different lexical entry. This system may be successful in accounting for polysemy understood as homonymy – ‘in homonymy, the linguistic form is said to constitute the formal side of different words (or morphemes or lexemes), each one connected with meanings which have nothing in common but share the same form accidentally’ (Cruse et al. 2002: 321). However, it does not offer a satisfactory account of complementary polysemy, because it fails to capture the relationships that exist among the (sometimes) numerous senses of a word.19 Hence, Pustejovsky brings forward an active lexicon capable of generating extended and new senses in novel contexts and, therefore, capable of capturing the semantic relationships that exist among the senses of a word. I illustrate two different types of complementary polysemy in (10) and (11) below.

(10) Examples taken from Pustejovsky (1995a: 28)

a. John crawled through the window.

b. The window is closed.

(11) Examples taken from Pustejovsky (1995a: 28)

a. If the store is open, check the price of coffee.

b. Zac tried to open his mouth for the dentist.

19 Complementary polysemy deals with those lexical senses which share the basic meaning of the word even if they occur in different contexts. According to Cruse et al. (2002: 322), these senses never occur in the same environment, by contrast to cases of contrastive polysemy, where the senses are ‘mutually contrastive senses which may occur in the same environment and cause sentence ambiguity’.

In the examples in (10) there is no change in lexical category; in (11.a), by contrast, open is an adjective, whereas the same word is a verb in (11.b). Both (10) and (11) are cases of complementary polysemy, but the former, which does not imply category change, is referred to as logical polysemy. Logical polysemy also accounts for the multiple complement types that verbs select for, as exemplified in (12), as well as for those polysemies implied in verbal alternations such as the causative/inchoative, central to the verbal class under study here. See (13) for illustration.

(12) Examples taken from Pustejovsky (1995a: 32)

a. Mary began to read the novel.

b. Mary began reading the novel.

c. Mary began the novel.

(13) Examples taken from Pustejovsky (1995a: 33)

a. The bottle broke.

b. John broke the bottle.

In (12), the polysemy of the verb begin is reflected in the fact that it can appear in different semantic and syntactic contexts – in this case, verbal phrase, gerundive phrase, and nominal phrase – and its meaning remains basically the same, except for some slight variations depending on the complement it takes (Pustejovsky 1995a: 32-33). The polysemy in (13) also departs from contrastive ambiguity in several respects. In the first place, the senses are related in an obvious way; in the second, the sense in (13.a) is entailed by the sense in (13.b).

One of the central mechanisms of the generative lexicon for explaining polysemy is co-composition. Instead of listing multiple senses for a word, Pustejovsky proposes that the complements carry crucial information; thus, it is possible to list only one of the senses of the lexical item in the lexicon and derive the other readings through generative mechanisms in composition with its arguments. For example, as illustrated in (14), a verb like bake can have two different meanings: change of state in (14.a) and creation in (14.b).

(14) Examples taken from Pustejovsky (1995a: 122)

a. John baked the potato.

b. John baked the cake.

Pustejovsky claims that there is only one sense for bake (the sense of change of state), and it is its complements that provide the information necessary to determine one or the other reading. That is to say, he puts some of the semantic weight on the nominal phrase (NP). In (14.b), it is the fact that a cake is an artifact that provides bake with the creation sense. For Pustejovsky, this suggests that the verb is not polysemous in itself; rather, the complements add to its basic meaning by co-composition. From this, a type of feature unification occurs, thanks to the fact that both the verb and the complement in (14.b) share the same qualia value for AGENTIVE (i.e., the manner in which something comes into being), and the FORMAL value of the complement, which makes reference to the coming into existence of the artifact, becomes the FORMAL role for the whole phrase bake a cake. Pustejovsky (1995a: 124) refers to the resulting derived sense of the phrase as qualia unification.20

20 See Pustejovsky (1995a: chapter 7) for a detailed explanation of qualia unification.
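The mechanism can be sketched informally. In the toy fragment below, the qualia structures are reduced to flat placeholders (they are far richer in Pustejovsky's actual formalism, and the names are mine): the verb contributes its change-of-state reading by default, and a complement whose AGENTIVE quale names the same event as the verb's shifts the phrase to a creation reading by unification.

```python
# Toy sketch of co-composition (after Pustejovsky 1995a: ch. 7).
# Qualia are reduced to flat dicts; real GL structures are richer.
BAKE = {"default_sense": "change_of_state", "agentive": "bake_act"}

NOUNS = {
    # natural kind: not brought into being by baking
    "potato": {"formal": "natural_kind", "agentive": None},
    # artifact whose AGENTIVE quale is the baking event itself
    "cake":   {"formal": "artifact", "agentive": "bake_act"},
}

def compose(verb, noun):
    """If verb and complement share the AGENTIVE value, qualia
    unification yields a creation reading; otherwise the verb's
    default change-of-state reading stands."""
    if NOUNS[noun]["agentive"] == verb["agentive"]:
        return "creation"
    return verb["default_sense"]

print(compose(BAKE, "potato"))  # change_of_state, as in (14.a)
print(compose(BAKE, "cake"))    # creation, as in (14.b)
```

The point of the sketch is only that the shift in reading is driven by the complement's qualia, not by a second lexical entry for the verb.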

In this section I attempt to offer a descriptive approach to the phenomenon of polysemy within Pustejovsky’s framework that will contribute to the appropriate arrangement of the corpus, as I explain in chapter 4. For that purpose, I will consult several dictionaries and the online lexical reference system WordNet in search of all the senses displayed for break verbs, a highly polysemous verbal class. I discuss the results in chapter 4.

3.5.2 Argument structure (qualitative and quantitative analysis)

In this section, I deal with the following concepts: syntactic and semantic valence, argument-adjuncts, periphery, entity type, nominal aspect, and reference.

3.5.2.a Syntactic and semantic valence

The term valency comes from chemistry, where it denotes the capacity of a chemical element to combine with a specific number of atoms of another element; it was first used in linguistics by Tesnière (1959) to denote the ability of words to attach to other words (Benešova 2005: 67). RRG distinguishes between syntactic and semantic valence. ‘The syntactic valence of a verb is the number of overt morpho-syntactically coded arguments it takes. One can talk about the semantic valence of the verb as well, where valence here refers to the number of semantic arguments that a particular verb can take’ (Van Valin and LaPolla 1997: 147). In dealing with these two notions, Van Valin and LaPolla (1997) use the term argument to designate semantic arguments, as is common in many European linguistic theories, while they reserve the label core argument for syntactic arguments. For the time being, I will specify which type of argument I am referring to by means of the labels semantic and syntactic when necessary. As Van Valin and LaPolla (1997: 147) explain, ‘these two notions need not coincide’. Thus, in a sentence like It rains, the semantic valence is 0, as the verb to rain has no semantic arguments; nonetheless, all English clauses must have a subject; hence, its syntactic valence is 1 (Van Valin and LaPolla 1997: 147).21 Another matter to take into account when dealing with valence is that it may change due to grammatical processes such as passivisation. In an example like (15) the syntactic valence of the verb is reduced from two to one:

(15) Example taken from Van Valin and LaPolla (1997: 147)

a. The man killed John (syntactic valence: 2)

b. John was killed (syntactic valence: 1)

The semantic valence, these authors note, does not necessarily change, as one can still say ‘John was killed by the man’. By-phrases do not count towards the syntactic valence because they are peripheral adjuncts; however, the NP they contain is a semantic argument of the verb. This raises an interesting issue: whether the actor is syntactically expressed or not, the syntactic valence of a passive construction is one. This means that there is a mismatch between the number of arguments overtly expressed in the syntax and the syntactic valence of passive constructions with overt actors. Chapter 4 will go deeper into this question.22

21 See chapter 4 for a brief explanation on imperatives and other structures whose subject is (only) apparently non-existent. 22 Dik (1997) distinguishes the quantitative and the qualitative valence in his predicate frames. Since the very notion of predicate frame has been challenged in the RRG school (Mairal and Van Valin 2001), I turn to more explanatory principles like reference, type of entity, telicity, and so on.
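The valence figures discussed above can be summarised in a small sketch. The entries below simply encode the two examples from the text (the dictionary and function names are mine, for illustration only); the point is that semantic and syntactic valence are independent counts, and that passivisation affects only the latter.

```python
# Toy illustration of the syntactic/semantic valence mismatch
# (after Van Valin and LaPolla 1997: 147). Figures encode the
# examples discussed in the text, not output of any parser.
VERBS = {
    "rain": {"semantic": 0, "syntactic_active": 1},  # 'It rains'
    "kill": {"semantic": 2, "syntactic_active": 2},  # 'The man killed John'
}

def semantic_valence(verb):
    return VERBS[verb]["semantic"]

def syntactic_valence(verb, voice="active"):
    """Passivisation removes one core argument; the optional
    by-phrase is a peripheral adjunct and does not count."""
    v = VERBS[verb]["syntactic_active"]
    return max(v - 1, 0) if voice == "passive" else v

print(syntactic_valence("kill"))             # 2: The man killed John
print(syntactic_valence("kill", "passive"))  # 1: John was killed (by the man)
print(semantic_valence("rain"))              # 0, yet syntactic valence is 1
```

Note that the passive figure stays at one whether or not the by-phrase is expressed, which is exactly the mismatch the text raises for chapter 4.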

3.5.2.b Argument-adjuncts

Van Valin and LaPolla (1997: 159) distinguish two types of adjunct: peripheral PPs and adverbs. For the time being I will leave adverbs aside and focus on PPs. Jolly (1991, 1993, cited in Van Valin and LaPolla 1997: 159) distinguishes three types of prepositions: (1) argument-marking prepositions, as in (16.a); (2) adjunct prepositions, as in (16.b); and (3) argument-adjunct prepositions, which introduce an (oblique) argument of the verb into the clause, as shown in (16.c).

(16)

a. Mary gave a ring to John. (to John = argument)

b. Sam baked a cake in the kitchen. (in the kitchen = adjunct)

c. She walked to the park./It broke into pieces. (to the park/into pieces = argument-adjunct)

Argument-adjuncts are part of the logical structure of the verb; however, they do not qualify as macroroles because they are oblique. Thus, a case like It broke into pieces, whose logical structure would be [BECOME broken’ (it) ∧ BECOME be-into’ (pieces, it)], would have two semantic arguments but a single macrorole. In chapter 4 I will discuss the difficulties involved in identifying argument-adjuncts and will offer some criteria to determine at least some of those that combine with break verbs.

3.5.2.c Periphery

The periphery is one of the three layers that form the clause in RRG, and it contains all those elements which are not arguments of the predicate. Thus, in a clause like Lasse is cooking dinner in the kitchen, the periphery is in the kitchen. Non-arguments are normally labelled adjuncts, and they are usually – but not exclusively23 – adverbials such as in five minutes, in the park, for ten days, with a knife, etc.

It may also be the case that some argument occurs outside the core, as shown in (17):

(17) What have you done?

WH-words like what above are usually arguments of the verb; however, they appear in a special position called the ‘precore slot’ which, as its name indicates, is outside the core.

One of the strong points of lexical semantics is precisely the treatment of argument-adjuncts and peripheral arguments or adjuncts. For example, Levin (1993) thoroughly revises the peripheral participants that can be promoted to subject position. I illustrate some of them below with Levin’s (1993: 79-83) own examples:

Time Subject Alternation
a. The world saw the beginning of a new era in 1492
b. 1492 saw the beginning of a new era

Natural Force Subject Alternation
a. I dried the clothes in the sun
b. The sun dried the clothes

Instrument Subject Alternation
a. David broke the window with the hammer
b. The hammer broke the window

Abstract Cause Subject Alternation
a. He established his innocence with this letter
b. The letter established his innocence

Locatum Subject Alternation
a. I filled the pail with water
b. Water filled the pail

Location Subject Alternation
a. We sleep five people in each room
b. Each room sleeps five

Container Subject Alternation
a. I incorporated the new results into the paper
b. The paper incorporates new results

Raw Material Subject Alternation
a. She baked wonderful bread from that whole wheat flour
b. That whole wheat flour bakes wonderful bread

Sum of Money Subject Alternation
a. I bought you a ticket for $5
b. $5 will buy you a ticket

Source Subject Alternation
a. The middle class will benefit from the new tax laws
b. The new tax laws will benefit the middle class

23 Relative clauses modifying nouns such as ‘I bought the house which you liked best’ or the ‘by-phrases’ of passive clauses are also in the periphery.

In chapter 4 I will discuss the problems that arose during the identification of the elements of the periphery, and the subsequent division of the periphery into layers that parallel those of the structure of the clause, in order to account for the shortcomings of the 1997 model.

3.5.2.d Entity type

The argument positions of predicate frames (Van Valin and LaPolla’s logical structures) are filled with different terms which refer to certain entities in some mental world (Dik 1997: 55). These entities are, prototypically, of the type my father, the park, Mary, etc. That is, they usually have existence in space. Dik (1997) displays a complete typology of entities which corresponds to a great extent to the different layers of underlying clause structure (i.e., Van Valin and LaPolla’s logical structure). This typology is based on Lyons’s (1977) work, which distinguishes among first-order entities, second-order entities (Dik’s States of Affairs) and third-order entities (Dik’s Possible Facts). Rijkhoff (2002: 19) describes second-order entities as abstract and/or non-spatial entities such as sorrow and wedding, respectively; third-order entities are those which have propositional content (e.g. the opinion), liable to be true or false. Starting off from this typology, Dik (1997: 137) puts forward the following:

ORDER   TYPE                  VARIABLE
0       Property / Relation   f
1       Spatial entity        x
2       State of Affairs      e
3       Possible Fact         X
4       Speech Act            E

Table 3.3 Types of entities as referred to by terms

These term types are illustrated in (18):

(18) Examples taken from Dik (1997: 137)

a. John admired Peter’s intelligence.

(reference made to a zero-order Property “f” (intelligence) of Peter’s)

b. John admired Peter.

(reference made to a first-order Spatial Entity “x” (Peter))

c. John saw Peter open the safe.

(reference made to a second-order State of Affairs “e” of Peter opening the safe)

d. John believed that Peter had opened the safe.

(reference made to the third-order Possible Fact “X” that Peter had opened the safe)

e. John tried to answer Peter’s question why he had not called earlier.

(reference made to a fourth-order Speech Act “E” of Peter’s)

The distinction between zero-order, first-order and second-order entities is sometimes problematic. Compare the examples in (19):

(19) Examples taken from Dik (1997: 215-216)

a. The table was in room 14.

b. The Round Table is at five o’clock in room 14.

The problem here lies in the fact that the same noun is used to indicate both a first-order and a second-order entity: in (19.a), the noun table indicates a first-order entity, whereas the same noun denotes a second-order entity in (19.b). In order to differentiate between these entity types, the following information provided by Dik (1997: 215) must be borne in mind:

1. Locational predicates can only be applied to first and second-order entities, as illustrated in (20):

(20) Examples taken from Dik (1997: 215)

a. *The colour was in room 14. (order 0)

b. The table was in room 14. (order 1)

c. The meeting was in room 14. (order 2)

2. Only second-order entities can be located in time, as illustrated in (21):

(21) Examples taken from Dik (1997: 215)

a. *The colour was at five o’clock. (order 0)

b. *The table was at five o’clock. (order 1)

c. The meeting was at five o’clock. (order 2)

In other words, ‘as space is to first-order (extensional) entities, so time is to second-order (extensional) entities, situations’ (Lyons 1995: 327).


3.5.2.e Nominal aspect

Dik (1997: 163) and Van Valin and LaPolla (1997: 56) define nominal aspect in terms of the mass/count distinction and other notions such as ensemble, mass, set, proper, count and collective nouns.24 Count nouns are those that refer to things, people or places that can be counted, such as one car/two cars, one woman/three women, one garage/two garages, etc. As these examples show, these nouns can be made plural. Mass nouns refer to substances, things, or abstract entities that cannot be counted, such as oxygen, powder, milk, tea, information, and so on.25 Therefore, in principle, they cannot be made plural. However, some mass nouns can be made countable when inserted in count structures of the type a carton of milk, a tea bag, one kilo of meat, etc. And, in turn, some count nouns may be used to refer to a mass, as shown in (22). Bunt (1985, cited in Dik 1997: 141) refers to the latter as a ‘massified’ individual.

(22) Examples taken from Dik (1997: 141)

a. I saw a chicken in the garden. (count)

b. We had chicken for dinner. (mass)

For this reason, here I will distinguish between noun aspect – an intrinsic, paradigmatic property of the noun – and nominal phrase aspect – an extrinsic, syntagmatic property of the noun. I discuss the results in chapter 4.

24 In this dissertation, I only focus on the mass/count distinction, a distinction commonly attributed to Jespersen. For further information on the classification of nominal subcategories, see Rijkhoff (2002). 25 I am aware that, as Rijkhoff (2002: 28), among others, explains, the referents of nouns and nominal phrases are mental constructs of entities in the real world. However, in order to avoid overburdened formulations, I will use statements such as ‘this noun/nominal phrase refers to a certain entity’ in the following.

3.5.2.f Reference

Dik (1997: 127) and Van Valin and LaPolla (1997: 204) deal with the notion of reference within the frame of pragmatics and verbal interaction and relate it to the concepts of topic and focus. Reference can be of two types: constructive and identifying.

Constructive reference relates to those cases in which a term is used to help the addressee construct a referent for that term, whereas identifying reference is used in those cases in which the addressee already ‘knows’ the referent and, therefore, the speaker is simply using that term to help the addressee identify it.26 Constructive reference applies when the topic is first introduced in the conversation – that is, it is a new topic – and is normally accomplished through indefinite terms (Dik 1997: 130).27 Identifying reference, by contrast, applies when there is some topic continuity, and is accomplished through definite terms. Dik (1997: 130) illustrates this distinction as shown below:28

(23)

Yesterday in the park I saw a black cat [construct a referent with the properties ‘cat’ and ‘black’]. Today I saw it / the cat / that cat again [retrieve the referent which you have just constructed].

Given that reference is generally achieved through the use of definite and indefinite terms, in analysing reference I will pay attention to the definite/indefinite character of the core arguments. In English, definiteness is usually expressed by means of the definite article the, personal pronouns and proper names. Generic terms, such as animals in Animals are your friends, are treated as accessible identifiable referents in most languages (Van Valin and LaPolla 1997: 200); hence, the tendency will be to treat them as definite too. The prototypical indefinite article is a/an, and words like some or any indicate indefinite quantity. Although, as Dik (1997: 16) points out, such criteria are not typologically valid, because there are languages like Danish which express definiteness by means of suffixes, they will suffice for the purposes of this dissertation, whose corpus is strictly English.

26 ‘By a term we understand any expression which can be used to refer to an entity or entities in some world’ (Dik 1997: 127).
27 For a detailed explanation of topic continuity and topic chain, see Givón (1983a; 1983b; 1995) and Dixon (1972) respectively.
28 The term ‘continuity’ is taken from Givón (1983a and 1983b).

3.5.3 Event structure (syntactic analysis)

For the analysis of event structure I will draw on Pustejovsky’s GL. As already explained in chapter 2, Pustejovsky does not consider events as atomic units; rather, he claims that events have internal structure and are therefore sometimes divided into two or more subevents, as is the case of the causative/inchoative alternation (see section 2.1.5.c for further detail).

The subeventual analysis of events contributes to the explanation of some grammatical phenomena. Pustejovsky (1992: 48) mentions the following, which I quote verbatim:

(I) A subeventual structure for predicates provides a template for verbal decomposition and lexical semantics. Following Vendler (1967), Dowty (1979), and others, we distinguish three basic event types: states, processes, and transitions, where a predicate in the language by default denotes one particular event type. Unlike previous analyses, however, we assume a more complex subeventual structure of event types, where event types make reference to other embedded types.

(II) By examining the substructure of events, one can describe much of the behavior of adverbial modification in terms of scope assignment within an event structure.

(III) The semantic arguments within an event structure expression can be mapped onto argument structure in systematic and predictable ways. The event structure proposed here should be seen as a further refinement of the semantic responsibilities within an LCS (Jackendoff 1983; Rappaport and Levin 1988).29

This explains the relevant role played by event structure within this theory: event structure is the core and all the other levels of analysis are defined or explicated around it.

Subevents are ordered according to temporal criteria. Recall that, according to Pustejovsky (1995a), there are three main types of temporal relationships between subevents. I repeat them below for convenience:

(24) Temporal relationships between subevents (taken from Pustejovsky 1995a: 69-70)

a. exhaustive ordered part of (<α): the first subevent precedes the second subevent.

b. exhaustive overlap part of (○α): the two subevents occur simultaneously.

c. exhaustive ordered overlap (< ○α): the two subevents are basically simultaneous but one starts before the other.

29 LCS stands for lexical conceptual structure.

As Pustejovsky himself acknowledges, this sequencing of events does not suffice to give an appropriate account of all the relevant information contained in events. For this reason, I have also introduced other variables into the analysis, such as ‘causal sequence’, ‘event headedness’ and ‘type coercion’. By ‘causal sequence’ I mean whether or not the event under analysis has a cause subevent in its logical structure. The idea behind the label ‘event headedness’ is that subevents are ordered not only temporally, but also according to their prominence, which facilitates the correct interpretation of the whole event. Thus, the question is which subevent is the most prominent one. Putting together the information about temporal ordering and headedness, Pustejovsky (1995a: 73) lays out the set of possible event structures that I displayed in chapter 2, and which I repeat below:

(25)

a. [eσ e1* <α e2] --- build

b. [eσ e1 <α e2*] --- arrive

c. [eσ e1* <α e2*] --- give

d. [eσ e1 <α e2] --- UNDERSPECIFIED

e. [eσ e1* ○α e2] --- buy

f. [eσ e1 ○α e2*] --- sell

g. [eσ e1* ○α e2*] --- marry

h. [eσ e1 ○α e2] --- UNDERSPECIFIED

i. [eσ e1* < ○α e2] --- walk

j. [eσ e1* < ○α e2*] --- walk home

k. [eσ e1* < ○α e2*] --- ??

l. [eσ e1 < ○α e2] --- UNDERSPECIFIED

Notice that (25.d), (25.h) and (25.l) appear as underspecified. Pustejovsky (1998: 3) suggests that syntactic alternations should be approached as a kind of ‘polymorphism over several logical parameters’ instead of as movement transformations. Thus, the verb in a diathesis like that of break verbs behaves polymorphically; in other words, it appears in different syntactic contexts or verbal alternations. In order to capture the shared semantics of these pairs, the verbs are treated as underspecified. Such underspecification of the verb’s type is not random but systematic, which allows the association of each verb with the corresponding syntactic projections, that is, the array of choices displayed by the syntactic alternations. Thus, Pustejovsky puts forward the machinery needed to allow the semantics of such underspecified forms to be contextually determined in the compositional process.

Finally, in the study of type coercion I will concentrate on whether any of the arguments of the predicate has been coerced to another type or not.
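The three variables just introduced – temporal ordering, headedness and underspecification – can be bundled into one record per event structure. The sketch below is merely an illustrative encoding of part of the inventory in (25) (the field names and labels are mine, not Pustejovsky's notation): the head marking * is recorded as a tuple of subevent indices, and an empty tuple yields the underspecified case characteristic of the break-verb diathesis.

```python
# Toy encoding of headed event structures (after Pustejovsky 1995a: 73).
# relation: '<' ordered, 'o' overlap, '<o' ordered overlap;
# heads: 1-indexed subevents carrying the * head marking.
EVENTS = {
    "build":  {"relation": "<",  "heads": (1,)},
    "arrive": {"relation": "<",  "heads": (2,)},
    "give":   {"relation": "<",  "heads": (1, 2)},
    "buy":    {"relation": "o",  "heads": (1,)},
    "sell":   {"relation": "o",  "heads": (2,)},
    # no head assigned: the underspecified configuration (25.d)
    "break_diathesis": {"relation": "<", "heads": ()},
}

def underspecified(name):
    """An event structure with no head assigned is underspecified,
    and its interpretation must be fixed contextually."""
    return len(EVENTS[name]["heads"]) == 0

print(underspecified("build"))            # False: head on the first subevent
print(underspecified("break_diathesis"))  # True: head fixed in composition
```

Encoding the head as data rather than as part of the label makes the systematic character of the underspecification explicit: the same temporal skeleton recurs with different head assignments.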

3.5.3.a Subeventual structure

Although Pustejovsky describes some events as punctual, this does not imply that they lack a complex structure. Achievements, for example, are described by Pustejovsky as occurring instantaneously; however, he puts forward a complex subeventual structure for them. In this work, I approach events from this perspective, since, as I see it, all changes of state involve a process, no matter how brief, extending from an initial state (Si) to a final state (Sf), and this implies complexity. I illustrate this in Figure 3.4. I also claim that this implies that all events have duration, because no change can occur outside time.

(Unbroken) (Broken)

Si PROCESS Sf

Figure 3.4 Graphic representation of a change of state

Let us assume that the process illustrated in Figure 3.4 represents the time that goes by from the moment a person puts some dirty clothes in hot water until these are taken out of the water in a new, clean state. The process is obviously durative. Let us also assume that the same graphic represents the time that goes by from the moment a person suffers a devastating heart attack until that person dies. Perhaps the whole process takes just one second, which might be regarded by some linguists as punctual and, consequently, monoeventual. However, if a doctor were asked to explain what has happened in that second, he would probably say something like: ‘the coronary artery is blocked by a blood clot. This deprives the heart muscle of blood and oxygen, causing serious injury to the heart muscle and, in the case of devastating heart attacks, its death.’ All this may happen in one or a few seconds. Nonetheless, it is clear that this process is durative: following the pre-Aristotelian tradition of Heraclitus, the universal flow of time and all temporal things are perceived through the changes undergone by beings or entities. Thus, if there is change, there is passing of time.
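The Si → PROCESS → Sf schema of Figure 3.4 can be given a minimal computational sketch. This is an illustrative Python fragment of my own; the class and attribute names (ChangeOfState, initial, final, duration) are not part of GL or RRG, but they encode the claim made above: every change of state, however brief, has a positive duration.

```python
from dataclasses import dataclass

@dataclass
class ChangeOfState:
    """A change of state: an initial state Si, an intervening process,
    and a final state Sf. Even 'punctual' events receive a positive
    duration here, reflecting the claim that no change occurs outside time."""
    initial: str      # Si, e.g. "unbroken"
    final: str        # Sf, e.g. "broken"
    duration: float   # length of the intervening process, in seconds

    def __post_init__(self):
        if self.duration <= 0:
            raise ValueError("every change of state takes some time")

# A slow change (washing clothes) and a near-instantaneous one (a fatal
# heart attack) share the same Si -> PROCESS -> Sf structure:
washing = ChangeOfState("dirty", "clean", duration=3600.0)
heart_attack = ChangeOfState("alive", "dead", duration=1.0)
```

On this sketch, the difference between the two events is a matter of degree (the value of duration), not of structure, which is precisely the point argued in the text.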

The same applies to another typical example of a punctual event: suppose that you bounce a ball once. Even if this event is usually described as punctual, its movement could be described as consisting of different stages: first your hand comes into contact with the ball, then you push it downwards, and the ball descends a few centimetres until it hits the surface. Therefore, this event also has duration.

One more argument in support of this idea is that, as Pustejovsky (1995a: 191) argues, although an overt causative construction is not possible with intransitive verbs such as morire (die) or arrivare (arrive), it is possible to refer to their initial subevent by means of certain adjunct phrases, as shown in (26):

(26) Examples taken from Pustejovsky (1995a: 191)
a. Gianni è morto per una polmonite.
   “John died from pneumonia.”
b. Il tetto è crollato per il peso della neve.
   “The roof collapsed from the weight of the snow.”
c. Maria è arrossita per l’imbarazzo.
   “Mary blushed out of embarrassment.”

Lexical intransitives such as arrivare are right-headed underlyingly (Pustejovsky 1995a: 190). Nonetheless, even if only underlyingly, this suffices to claim that there is an initial subevent, and I argue that the same criterion can be applied to lexical inchoative verbs such as break.

If the reasoning so far is correct, all changes of state consist of an initial state, a (more or less durative) process, and a final state. If the process is foregrounded, the event is causative. If, by contrast, it is backgrounded and the result state is foregrounded, the event is inchoative. This is the premise from which I start my discussion in chapter 4.

3.5.3.b Causal sequence

Pustejovsky (1995a) accounts for alternations like the causative/inchoative by means of the notions of underspecification and headedness: when a lexical pattern like [eσ e1 <α e2] is left unspecified with respect to headedness, it is open to two interpretations: one with e1 as head (i.e., causative), and another that focuses on e2 (i.e., inchoative).

The fact that the alternation under study is precisely the causative/inchoative alternation does not leave much room for doubt about which examples from those under analysis will exhibit a causal sequence. However, I will analyse all the examples in terms of causation in order to see if the relationship that Pustejovsky puts forward between underspecification, headedness and causation is consistent.
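The relation between headedness and the two readings can be sketched as follows. This is an illustrative Python fragment of my own, not part of GL; the string labels simply encode the mapping described above (head on e1 gives the causative reading, head on e2 the inchoative one, no head leaves the form underspecified).

```python
def interpretation(head):
    """For an event pair [e  e1 <a e2] (e1 strictly precedes e2),
    fixing the head fixes the reading: a head on e1 foregrounds the
    causing process, a head on e2 foregrounds the result state."""
    if head == "e1":
        return "causative"       # e.g. 'John broke the vase'
    if head == "e2":
        return "inchoative"      # e.g. 'The vase broke'
    if head is None:
        return "underspecified"  # open to both construals in context
    raise ValueError("head must be 'e1', 'e2' or None")
```

The underspecified case is the interesting one for break verbs: the lexical entry fixes the subevents and their ordering, while context supplies the head.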


3.5.3.c Temporal relation

Pustejovsky (1995a: 80) defines the temporal relation existing between the subevents of transitions and, in particular, of the verb break, as one of exhaustive ordered part of (<α), which means that an initial act or process is followed by a resulting state.

However, as I see it, the breakage of something involves a process whose end overlaps with the beginning of the result state, given that, in principle, something that is breaking is – at least partially – broken, even if not completely. Suppose that a branch of a tree has been lashed by the wind. Even if it has not been torn off the tree completely and could therefore be broken to a greater extent, the branch can be said to be broken. Thus, the temporal relation existing between the subevents of break verbs might also be claimed to be close to that of ‘exhaustive ordered overlap’ (< ○α): two subevents are basically simultaneous but one starts before the other. Nonetheless, ordered overlap also seems to indicate that the second event continues ‘only while the first event continues to hold’ (Pustejovsky 1995a: 73). Thus, in principle neither ‘exhaustive ordered part of’ nor ‘exhaustive ordered overlap’ appears to fully describe the temporal relation existing between the two subevents of break verbs. The analysis will clarify, at least partly, this problem.
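The two candidate relations can be made explicit by treating subevents as time intervals. This is an illustrative Python sketch of my own; the interval encoding (start, end) is not Pustejovsky's notation, but the two predicates mirror the definitions of ‘exhaustive ordered part of’ and ‘exhaustive ordered overlap’ discussed above.

```python
def ordered_part_of(e1, e2):
    """Exhaustive ordered part of (<a): e1 ends no later than e2 starts,
    i.e. an initial act or process followed by a resulting state."""
    return e1[1] <= e2[0]

def ordered_overlap(e1, e2):
    """Exhaustive ordered overlap (<oa): e1 starts first, but e2 begins
    while e1 is still going on, so the two subevents partly coincide."""
    return e1[0] < e2[0] < e1[1]

# The branch example: the breaking process runs from 0 to 5, but the
# branch already counts as (partly) broken from 3 onwards, so the
# result state overlaps the tail of the process.
process = (0, 5)
result = (3, 10)
# ordered_part_of(process, result) -> False
# ordered_overlap(process, result) -> True
```

On this encoding the branch scenario satisfies overlap but not strict succession, which illustrates why, as argued above, neither relation alone fully describes break verbs.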

3.5.3.d Type coercion

Following Cardelli and Wegner (1985), Pustejovsky (1995a: 111) defines coercion as ‘a semantic operation that converts an argument to the type which is expected by a function, where it would otherwise result in a type error.’ Rather than type shifting the verb for every new syntactic environment, Pustejovsky assumes that the type of the verb remains the same and what changes is the syntactic type of the complement to the verb by virtue of lexical governance from the verb. Thus, as Pustejovsky (1995a: 113) himself sums up, ‘rather than assigning a new lexical entry to a verb every time a different syntactic environment for it is discovered or every time a new interpretation is needed for a new context, we will “spread the semantic load” more evenly throughout the elements in composition.’ This is accomplished by means of generative operations in the qualia.

There are two main types of coercion: subtype coercion and true complement coercion.

Subtype coercion relates to those cases in which the object or the subject is realized by subtypes of the argument of the verb, as illustrated by Honda in (27) below:

(27) Mary drives a Honda to work. (Pustejovsky 1995a: 113)

As Pustejovsky explains, in order to interpret this example correctly one must be able to relate the type denoted by the nominal phrase in each of the argument positions in (27) to the type selected for by the verb – that is, drive. In this case, the relationship is one of subtyping: Honda is a subtype of car; hence, Honda ≤ car ≤ vehicle (Pustejovsky 1995a: 113). If we have a look at the verb’s qualia, the type selected for argument 2 by drive is vehicle, as shown in (28).

(28) Representation taken from Pustejovsky (1995a: 114)

drive
  EVENTSTR = [ E1 = e1:process
               E2 = e2:process
               RESTR = <○α ]
  ARGSTR   = [ ARG1 = x:human
               ARG2 = y:vehicle ]
  QUALIA   = [ FORMAL   = move(e2, y)
               AGENTIVE = drive_act(e1, x, y) ]

Thus, the selectional requirements for drive can only be satisfied if the subtyping relation mentioned above exists. In other words, the subtype is coerced to the right type; hence the label subtyping coercion. The coercion relation can be summarized as follows:

(29) Pustejovsky (1995a: 115)
a. Θ[Honda ≤ car] : Honda → car
b. Θ[car ≤ vehicle] : car → vehicle

(Θ = ‘subtyping coercion relation’; ≤ = ‘is a subtype of’)
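Subtyping coercion has a natural analogue in an ordinary class hierarchy, where Honda ≤ car ≤ vehicle becomes subclassing. The following is an illustrative Python sketch of my own (the class names mirror Pustejovsky's example; the selection check itself is my simplification, not a GL formalism):

```python
class Vehicle: pass        # the type drive selects for its second argument
class Car(Vehicle): pass   # car <= vehicle
class Honda(Car): pass     # Honda <= car <= vehicle

def drive(arg2):
    """drive selects a vehicle for ARG2; a subtype such as Honda
    satisfies the selection through the subtyping chain, while an
    unrelated type results in a type error."""
    if not isinstance(arg2, Vehicle):
        raise TypeError("type error: drive selects arg2 of type vehicle")
    return "ok"

drive(Honda())        # satisfied: Honda <= car <= vehicle
# drive("a sonata")   # would raise TypeError: no subtyping chain to vehicle
```

The isinstance check plays the role of the chain in (29): the argument is accepted exactly when a path of subtyping relations connects its type to the selected type.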

True complement coercion involves type shifting licensed by lexical governance.

Consider the following examples:

(30) Examples taken from Pustejovsky (1995a: 115)
a. John began a book.
b. John began reading a book.
c. John began to read a book.

In order to provide the correct interpretation for these sentences, a coercion rule is needed that ensures that the semantic type of the verb is satisfied. In this case, the rule of function application with coercion (FAC) is the most appropriate one to explain how the semantic transformation is produced. This rule states that the complement of begin is actually an event of some sort, as shown in (31).

(31) Representation taken from Pustejovsky (1995a: 116)

begin
  EVENTSTR = [ E1 = transition
               E2 = transition
               RESTR = <○α ]
  ARGSTR   = [ ARG1 = x:human
               ARG2 = e2 ]
  QUALIA   = [ FORMAL   = P(e2, x)
               AGENTIVE = begin_act(e1, x, e2) ]

Thus, regardless of the surface syntactic form of the complement, the semantic type is an event. Where this requirement is not met, a coercion applies to reconstruct the semantics of the complement. As Pustejovsky (1995a: 116) explains, this process will only succeed if the NP has an alias of the appropriate type. By alias Pustejovsky (1995a: 116) means ‘an alternative type that is available to an element, be it lexical or phrasal’. In (30.a) the argument book does not satisfy the type required by the predicate begin and must therefore be coerced into an event denotation, ‘one which is available from the NP’s qualia structure through qualia projection’ (Pustejovsky 1995a: 116):

(32) Representation taken from Pustejovsky (1995a: 116)

book
  ARGSTR = [ ARG1 = x:info
             ARG2 = y:physobj ]
  QUALIA = [ info·physobj_lcp
             FORMAL   = hold(y, x)
             TELIC    = read(e, w, x, y)
             AGENTIVE = write(e’, v, x, y) ]

There are two possible interpretations associated with book: the values of the AGENTIVE and TELIC roles. ‘How an event is actually reconstructed to satisfy the typing environment is a result of the coercion operation itself’ (Pustejovsky 1995a: 116-117).30
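True complement coercion can be sketched as a lookup over the NP's event-denoting qualia. This is an illustrative Python fragment of my own (the quale values follow (32); the coercion function is my simplification of the operation, not Pustejovsky's formal rule):

```python
# Qualia for 'book', following (32): the TELIC and AGENTIVE roles supply
# the event readings available through qualia projection.
BOOK_QUALIA = {
    "TELIC": "read(e, w, x, y)",      # reading the book
    "AGENTIVE": "write(e', v, x, y)", # writing the book
}

def coerce_to_event(np_qualia):
    """begin selects an event complement; an NP is coerced by projecting
    an event-denoting quale (an 'alias'), if it has one."""
    aliases = [np_qualia[r] for r in ("TELIC", "AGENTIVE") if r in np_qualia]
    if not aliases:
        raise TypeError("type error: no event alias available")
    return aliases

coerce_to_event(BOOK_QUALIA)
# -> ["read(e, w, x, y)", "write(e', v, x, y)"]
```

The two returned aliases correspond to the two readings of (30.a): John began reading the book, or John began writing it; which one is actually selected is decided by the coercion operation in context.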

In the analysis of the corpus I start off from the idea that the basic argument type for break verbs is not an event but an NP. Thus, in those cases in which this type is not satisfied a coercion process might be required. I discuss this issue in chapter 4.

3.6 Conclusions

In this chapter, I have described the methodology that underlies my research. To be precise, I have explained how I am going to analyse a number of variables of the word structure, the argument structure and the event structure of break verbs in order to fulfil the goals put forward in chapter 1. I have briefly described those components of Pustejovsky’s lexicon that are either missing or not fully developed in RRG, and which could enhance the latter. Furthermore, I have also described those components of RRG that I consider relevant for lexical representation and which will be analysed in conjunction with those of GL in order to check the real compatibility of the two models. If they are compatible with one another, RRG will be able to benefit from the alliance.

As part of the methodology, I have also reported on the corpus used in this piece of research, and on the nature of the verbs under analysis: verbs of pure change of state. This corpus will be used for the analysis of argument and event structure. It comprises approximately 1,200 examples that contain different realizations of break verbs, and which have been downloaded from the BNC. For the analysis of word structure, I will make use of dictionaries – namely, LDEL, NODE, CCED, CALD and the M-WD –, thesauri – LLCE and NOTE – and the online database WordNet.

30 For further detail on coercion see Pustejovsky (1995a: chapter 9).

CHAPTER 4: DESCRIPTION AND RESULTS OF THE ANALYSIS

4.0 Introduction

In chapter 3, I introduced the variables of the analysis. Having carried out the analysis of the corresponding variables for the word, argument and event structures of break verbs, I now proceed to the elaboration of the data obtained. In the Appendix, I provide a list of examples that illustrates the analysis process.

In principle, one would expect the structure of this chapter to parallel that of chapter 3. However, the analysis has brought out unexpected issues that have forced me to introduce some structural changes. Very briefly, in contrast to the analysis of word structure, which is based mainly on the information gathered from dictionaries and thesauri, the analysis of the other two levels is based upon the corpus that I introduced in chapter 3. As soon as I started to analyse the argument and event structures of the examples from the corpus, I realized that not all the examples denote a pure change of state, which is the object of my study. Thus, I have reorganized the corpus, which had a random structure, into three groups, according to degree of prototypicality.1 For that purpose, I have used part of the information obtained in the analysis of word structure because, as we will see, morphology is intimately related to syntax and semantics. Those examples that encode a pure change of state have been gathered together under the label prototypical examples. Those that carry a completely different meaning have been labelled non-prototypical examples. And those that are halfway between the two classes have been termed less-prototypical examples.

Therefore, in this chapter I start by discussing the analysis of word structure, following the order stipulated in chapter 3. Then I explain the division of the corpus into three classes according to degree of prototypicality. And finally, I discuss the data obtained from the analysis of the argument and event structures of the three classes, focusing mainly on the prototypical class, since this is the focus of my study. The other two classes are dealt with by paying special attention to those points that contrast with the prototypical class. Although this chapter is the keystone upon which the rest of the conclusions are built (see chapter 5), I believe that the issues raised here are of interest in themselves and also, in some cases, the source of new research questions.

1 On prototypicality, see Givón (1985), Taylor (1989), and Croft (1993).

4.1 Word structure

This section is concerned with the structure of word meaning, and it consists of two main sections. In the first section I describe the derivational behaviour of break verbs; in the second I discuss the semantic relations that these verbs take part in. Each section is in turn subdivided into different subsections, as shown above.

4.1.1 Lexical derivation

As anticipated in chapter 3, the analysis has covered three main questions: (1) distribution of prefixes and suffixes, (2) changes in grammatical category, and (3) the origin of verbs: Romance vs. Germanic verbs. The results of the derivational process have brought forward several interesting questions which I discuss in the following subsections.

4.1.1.a Distribution of prefixes and suffixes

Affixes cannot be added to bases at random. As Bauer (1998: 26) explains, a maximum of three affixes can be added before the base and five after it. Quoting Bauer (1998: 26), ‘even very long and unusual words like praetertranssubstantiationalistically only get above the limit of five if you count both -istic and -ical- as two affixes’ – and there is not even agreement as to the validity of these two. Here, the distribution of affixes has been analysed taking into account five different patterns – prefix + prefix; prefix + suffix; prefix; suffix; suffix + suffix – as any further combination is rare in English. The results are shown in Table 4.1 below.

-ed: splintered, cracked, crashed, crushed, fractured, ripped, shattered, smashed, snapped
-ing: break, crack, crash, crush, fracture, rip, shatter, smash, snap, splinter, split, tear

break     pref + suf: unbreakable, unbreaking   pref: outbreak2, multibreak, prebreak   suf: breakable, breakage, breaker
crack     pref: anticrack   suf: cracker, cracket, crackle, crackless, cracky   suf + suf: crackling, crackly
crash     pref: postcrash, precrash   suf + suf: crashingly
crush     suf: crushable, crusher   suf + suf: crushingly
fracture  pref: postfracture   suf: fracturable, fractural
rip       pref: unrip   suf: ripper, ripple   suf + suf: rippler, ripply
shatter   suf: shatterable, shatterer, shattery   suf + suf: shatteringly
smash     suf: smasher, smashable, smashery   suf + suf: smashingly
snap      pref: unsnap   suf: snapper, snappy, snappish, snappable, snapless   suf + suf: snappishly, snappily, snappiness
splinter  suf: splintery, splinterize, splinterless
split     pref: presplit, unsplit   suf: splitter
tear      suf: tearful, tearless, tearable, tearer, teary, tearage   suf + suf: tearfully, tearfulness

Table 4.1 Derivational patterns for break verbs3

The number of verbs that partake of the combinations mentioned above is displayed below:

2 I write in italics those words whose affixes do not appear in Marchand’s list or whose status as derivatives of the verbs under analysis is not clear, like tearless, which derives from the homonym noun tear.
3 See note 2.

108

(1) Affixation patterns for break verbs:4
a. Prefix + prefix: 0/12
b. Prefix + suffix: 1/12
c. Prefix: 0/12
d. Suffix: 7/12
e. Suffix + suffix: 2/12

The results displayed in (1) indicate that break verbs are not very productive in derivational terms: none of the verbs under analysis is modified by two prefixes, and only one out of twelve admits a prefix and a suffix simultaneously. The combination suffix + suffix is also rather rare, applying to only two out of twelve verbs.

As regards the addition of a single suffix or a single prefix, suffixes seem to combine more frequently with these verbs than prefixes, the latter being completely absent. The table also includes the inflectional suffixes ‘-ing’ and ‘-ed’. Although the scope of this analysis is restricted to derivational affixes, both ‘-ing’ and ‘-ed’ are boundary cases which can behave both as derivational and as inflectional affixes:

(2) a. The crashed vehicle was abandoned there. (crashed = adjective)
b. I could hear crashing in the next room. (crashing = loud noises; noun) (Example taken from the CALD)

4 The results displayed here include only those derivatives whose meaning is related to the verbal one of pure change of state.

As the table shows, all verbs are compatible with the suffix ‘-ing’, and all regular verbs can be followed by the suffix ‘-ed’, although not all of these forms count as derivatives.
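The five affixation patterns in (1) can be made concrete with a small classifier that peels known affixes off a derivative. This is an illustrative Python sketch of my own: the affix inventories are a small subset chosen for the examples, not Marchand's list, and real morphology (gemination, stem changes) defeats simple string stripping, as the splitter case shows.

```python
PREFIXES = ["un", "pre", "post", "anti", "multi"]
SUFFIXES = ["able", "age", "er", "ing", "ly", "ness"]

def classify(base, word):
    """Return the affixation pattern of a derivative of `base`, or None
    when naive stripping fails (e.g. gemination, as in split -> splitter)."""
    w, n_pre, n_suf = word, 0, 0
    progress = True
    while w != base and progress:
        progress = False
        for p in PREFIXES:
            if w.startswith(p) and base in w[len(p):]:
                w, n_pre, progress = w[len(p):], n_pre + 1, True
                break
        else:
            for s in SUFFIXES:
                if w.endswith(s) and base in w[:-len(s)]:
                    w, n_suf, progress = w[:-len(s)], n_suf + 1, True
                    break
    if w != base:
        return None
    if n_pre >= 2:
        return "prefix + prefix"
    if n_pre and n_suf:
        return "prefix + suffix"
    if n_pre:
        return "prefix"
    if n_suf >= 2:
        return "suffix + suffix"
    if n_suf == 1:
        return "suffix"
    return "no affixation"

# classify("break", "unbreakable") -> "prefix + suffix"
# classify("crush", "crushingly")  -> "suffix + suffix"
# classify("split", "splitter")    -> None (gemination: split-t-er)
```

The None case is a useful reminder that the classification reported in (1) ultimately rests on dictionary evidence and manual inspection rather than on mechanical segmentation.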

4.1.1.b Changes in grammatical category

The grammatical category of the verbs changes in most cases. Some become nouns, some adjectives, some adverbs, and some – but only a few – remain the same. The results are shown in (3):

(3) Grammatical category of the new words resulting from the derivational process:

a. Prefix:
INPUT  OUTPUT     DERIVED RESULTS
Verb   Noun       Ø
Verb   Adjective  Ø
Verb   Adverb     Ø
Verb   Verb       Ø

b. Suffix:
INPUT  OUTPUT     DERIVED RESULTS
Verb   Noun       breakage, breaker, crusher, ripper, smasher, splitter, tearer
Verb   Adjective  breakable, crushable, fracturable, tearable
Verb   Adverb     Ø
Verb   Verb       Ø

c. Prefix + Prefix:
INPUT  OUTPUT     DERIVED RESULTS
Verb   Noun       Ø
Verb   Adjective  Ø
Verb   Adverb     Ø
Verb   Verb       Ø

d. Prefix + Suffix:
INPUT  OUTPUT     DERIVED RESULTS
Verb   Noun       Ø
Verb   Adjective  unbreakable
Verb   Adverb     Ø
Verb   Verb       Ø

e. Suffix + Suffix:
INPUT  OUTPUT     DERIVED RESULTS
Verb   Noun       Ø
Verb   Adjective  Ø
Verb   Adverb     crushingly, shatteringly
Verb   Verb       Ø

Bauer (1983: 218-224) argues that different prefixes and suffixes tend to be attached to different bases. More precisely, he proposes the following patterns for verbs:

(6) Bauer’s (1983: 218-224) derivational patterns:
Prefixes used exclusively with a verb base: de-
Prefixes added to nouns and verbs: fore-, re-
Prefixes added to verbs and adjectives: circum-
Prefixes added to nouns, verbs and adjectives: counter-, dis-, co-, inter-, sub-
Suffixes used to derive nouns from verbs: -ation, -ee, -ure, -al, -ary, -er, -ment
Suffixes used to derive adjectives from verbs: -able, -less, -ant/-ent, -atory, -ful, -ive

The table above shows that only two of the affixes listed by Bauer attach to break verbs – the suffixes -able and -er – and not in all cases.

In order to interpret all these findings correctly, it is necessary to make a few remarks:

1. The derivatives listed above have been contrasted against the NODE, the CALD, and the M-WD. The latter makes use of the internet as a source of information, thus avoiding the problems pointed out by Bauer (1992: 62-63) about the inaccuracy of most dictionaries, which have entries for lexemes that are rarely used but which are validated by their appearance in, for instance, Chaucer’s texts, while leaving out others which are very frequently used and should, therefore, be taken into account. Thus, entries for words such as ‘tearage’ or ‘prebreak’ are available at Merriam-WebsterUnabridged.com. Nonetheless, it must be noted that this dictionary does not offer an accurate definition for these entries; it merely acknowledges their existence.

For this reason, in order to identify the category of these new words I have resorted to the internet. More precisely, I have searched for each word in Google and studied it in context in order to see the function it performs in the sentence and whether it illustrates a case of conversion of the verb into some other lexical category, as shown in (4):

(4) She had to stand on her cracket to climb into the tub as it was as tall as she was. (http://www.marsdenwriters.plus.com/frames/thewriters/margaret_renwick/washday.htm)

Even if the meaning of cracket is not defined in the dictionary, the context helps identify its lexical category as a noun. Thus, this would be a case of conversion. However, the context also makes it clear that its meaning is not related to that of change of state. Hence, it has not been considered in the analysis. Each of these new terms has been carefully studied before including it in or excluding it from the list of derivatives of break verbs.

2. In the derivational process, the original meaning of the verbs is sometimes altered. This is illustrated in (5).

(5) ‘unsnap’ (= to loosen or free by or as if by undoing a snap. M-WD)

Consequently, those new words whose meaning differs from that of change of state have also been excluded from the final estimate.

3. Some of the words listed above are the result of the combination of lexemes which mirror the verbs under analysis but whose meaning is completely different (that is, homonyms), and the list of Marchand’s affixes. That is the case with, for example, tearful, tearfully or tearfulness. These words, together with others such as teary or tearless, are actually derived from the noun tear (= a drop of salty liquid which flows from the eye, as a result of strong emotion, especially unhappiness, or pain. CALD).

Although these words appear automatically as the result of my query (that is, the combination of the verb with the corresponding affix), this is due to the fact that the dictionary does not have an option to discriminate grammatical or semantic categories in the process. Therefore, I have not taken them into consideration in the final estimate of the results offered above.

4. The dictionaries have provided one interesting example: outbreak. At first sight, it resembles the other derivatives. Nonetheless, this word is in fact the result of the combination of a verb and an adverb. Particles like out are not listed in Marchand (1969) because they are not real affixes; rather, although they are preposed to free morphemes, they are also free morphemes in themselves. Therefore, the resulting word is not a derivative but a compound, which is why I have not considered it in the analysis.

5. There are some affixes which, when combined with the verbs, result in words which are also listed in the lexicon as a different lexeme. That is the case with, for example, crackle. The question, therefore, is whether crackle is a derivative of the lexeme crack plus the suffix -le, listed in Marchand’s work, or a lexeme (root) by itself. Furthermore, these words may go one step further in the derivational process and take other suffixes, as illustrated by crackling. The question then arises whether crackling is the result of the combination lexeme + suffix + suffix or a derivative of crackle (that is, crackle + suffix). There is evidence that supports both views: on the one hand, the meaning of crackle (= to make a lot of short, dry, sharp sounds) could be said to derive metaphorically and metonymically from the meaning of crack (= to break with a sudden sharp sound. M-WD); on the other, the CALD lists crackling and crackly under the entry for crackle, which is not included within the entry for crack. Snappily, snappy or snappiness illustrate a similar case, because they are listed in the CALD under the entry for snappy, but this time snappy is included under the entry for snap. Nonetheless, the explanation would be similar to the previous case, because snap means ‘to cause something which is thin to break suddenly and quickly with a cracking sound’ (CALD).5 In any case, these derivatives have lost the essential meaning component of change of state. Thus, they have not been considered in the final estimate of the results offered above. Despite this, it is interesting to note that many of the break verbs show a tendency to denote both the breakage on the one hand, and a noise (or an action or object characterized by that noise) similar to that produced when the breakage takes place, on the other.

5 I return to this issue in section 4.2.2.

6. There are many instances of words which have exactly the same form but a different grammatical category, as illustrated by split (verb) and split (noun). Some linguists would claim that there is a derivational relationship between the two words; however, most changes in part of speech are marked by an affix (Bauer 1992: 30). As a solution to this problem, some linguists propose the use of a zero morph to account for the difference existing between two homophonous forms that perform different functions (Bauer 1992: 31). The use of zero morphs is, however, controversial. The contrast between the zero morph and the other forms within a certain paradigm is clear in inflectional cases; thus, the demand for an affix (zero morph) is substantiated. In derivation, by contrast, whenever we find one form with different functions, one of them will be the basic form, which has no zero morph, whereas all the others must have a zero morph. Thus, the zero morph contrasts with nothing at all on the one hand, and with all the other zeros on the other, which does not sound like a very plausible account of derivational affixation (Bauer 1992: 31-32). For this reason, some scholars prefer to use other terms such as conversion or functional shift (Bauer 1992: 32), two labels which are sometimes used differently but which, in most cases, are synonymous. One way or another, examples of conversion or zero morphs have not been included in the table above, because they only denote a change in the lexical category of the verb which cannot be appreciated in their morphology.

7. The -ing and -ed suffixes have been analysed separately for two reasons: (a) all verbs have an -ing form, and all verbs have an -ed form too, except for those with an irregular past form – and even some of those do – hence, the information provided is not highly significant; and (b) as I have explained above, ‘-ing’ and ‘-ed’ are borderline cases between inflectional and derivational forms. Thus, although in the combinatorial process the dictionaries have immediately identified the combinations ‘verb + -ing’ and ‘verb + -ed’, not all of them can be considered derivational. Consequently, such forms need further consideration before being regarded as derivational.

Having clarified these points, I can now go back to table 4.1 and select only those words that are real derivatives for break verbs. In order to offer more accurate results, I have checked the results against the following dictionaries: NODE, M-WD and CALD. Note that not all the dictionaries offer the same results; thus, below I specify which dictionary lists each derivative in each case. Furthermore, I have left out those words that are listed in the M-WD but only as found on the internet. In other words, here I stick to the dictionaries for reasons of reliability.


Affixes checked for the twelve verbs (break, crack, crash, crush, fracture, rip, shatter, smash, snap, splinter, split, tear):

Prefixes: a-, be-, de-, dis-, non-, un-, fore-, re-, mis-, circum-, counter-, co-, inter-, sub-, out-, over-, under-, up-, down-, en-
Suffixes: -ation, -ee, -ure, -al, -ary, -er, -ment, -able, -less, -ant, -ent, -atory, -ful, -ive, -ance, -ory

Combinations attested in the dictionaries:
out-: NODE (outbreak)
-er: attested for seven of the verbs (NODE, NODE, M-WD, M-WD, NODE, NODE/M-WD, M-WD)
-able: attested (NODE/CALD; M-WD)
All other combinations: unattested.

Table 4.2 Derivatives of break verbs

117

As we can see, break verbs combine with only a couple of the affixes listed in Marchand: there are few affixes that attach to verbs. In general, non-basic verbs are compounds, like break away. However, other verb types combine with a great number of affixes. Consider, for example, interpret or read:

(6) a. Interpret: reinterpret, misinterpret, interpretation, interpreter, interpretable, interpretive.
b. Read: readable, reread, unread, misread, reader.

The question, then, boils down to why break verbs do not behave like read or interpret and admit affixation. I can think of five reasons:

1. Break verbs are Germanic, and the vast majority of the affixes above are Latin; hence, they are incompatible (see section 4.1.1.c).

2. Break verbs do not take iterative affixes such as re- because the event is not repeatable. According to the definition, and in neutral circumstances, once something breaks, it cannot break again, because it is already broken.

3. Break verbs do not take reversative affixes such as un- because the event is not reversible.

4. Break verbs do not take transitivising affixes such as -ize/-ise because the valence of the verb is already two in its causative version.

5. Break verbs do not take quantifying affixes such as over- or under- because the event is not repeatable.

The only affixes that these verbs are compatible with are the suffixes -able and -er (and, exceptionally, -age, which does not appear in Marchand’s list, for break only), as in crushable and splitter. Interestingly, these affixes code precisely the relations of the logical structure of break verbs: [do’ (x, Ø)] CAUSE [BECOME broken’ (y)]. More precisely, the logical structure for break verbs involves an agent (x), the breaker, who breaks something (y), the patient – which in order to be a patient has to be breakable – bringing about, as a result, a breakage (y):

Break (x) (y)
Break + -er
Break + -able
Break + -age

Figure 4.1 Representation of the logical structure of break in terms of its derivatives
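The correspondence depicted in Figure 4.1 can be sketched as a simple mapping from derivatives to logical-structure arguments. This is an illustrative Python fragment of my own; the mapping reproduces the figure and the surrounding discussion, not any formalism of RRG.

```python
# [do'(x, 0)] CAUSE [BECOME broken'(y)]: each productive affix picks out
# an argument of the logical structure of break.
DERIVATIVES = {
    "breaker": "x",    # -er: the agent who breaks
    "breakable": "y",  # -able: the patient's property of being breakable
    "breakage": "y",   # -age: the result undergone by the patient
}

def argument_weight(arg):
    """Count how often an argument surfaces among the derivatives."""
    return sum(1 for a in DERIVATIVES.values() if a == arg)

# argument_weight("y") -> 2: the patient is doubly represented,
# argument_weight("x") -> 1: the agent only once.
```

The asymmetry in the counts restates the point made below: the second argument is the more prominent one, being present in both variants of the causative/inchoative alternation.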

Notice that the role of the second argument – the patient – is somehow highlighted by this, as it is represented twice in the derivation: (1) in the result, since the patient is the argument that undergoes the breakage, not the agent; and (2) in the property of being breakable. This is in keeping with this type of verb, whose second (semantic) argument is more prominent than the first, because it is always present – in both the inchoative and causative variants of the alternation – and also because these verbs focus on the final state of the second argument. Thus, it can be concluded that derivation is intimately connected to semantics and syntax.

Mairal and Cortés (2000-2001: 271) deal with the semantic and syntactic motivation of morphology from a different perspective: they claim that ‘affixal units should be regarded as lexical units in their own right.’ Their model, which they call the Affixal Lexical Model, will not be discussed here, but it opens up a new way to enrich lexical representation, so it should be taken into consideration in future research.

4.1.1.c The origin of verbs: Romance vs. Germanic verbs

Finally, as I have just mentioned, there is one additional factor that may determine the derivational characteristics of words: the origin of the verb. More precisely, and focusing on the verbs under analysis in this work, whether the base of break verbs is

Romance or Germanic may have a morphological impact because, as Bauer (1992: 69-

70) explains, the base may impose restrictions on the affixation process. Unfortunately, there is only one verb among break verbs whose base is foreign, namely fracture, which makes it impossible to establish generalisations regarding the high or low productivity of such bases. Nonetheless, just as an anecdote, the table reveals that fracture does not admit any affix of those included in Marchand’s (1969) list. The only exceptions are the inflectional suffixes -ed and -ing which, as Bauer (1992: 70) says, ‘can be added to virtually all verbs except modal verbs’.6

4.1.2 Semantic relations

In this section I engage in the discussion of the data obtained from the analysis of antonymy, synonymy, hyperonymy/hyponymy and polysemy.

4.1.2.a Antonymy

In chapter 3, I claimed that antonyms are not words with opposite meanings; rather, it is the senses of two or more words that are antonyms. I also explained that the study of

6 Bauer’s words actually refer to the inflectional suffixes -s and -ing, but I have used them to refer to the inflectional suffixes -ed and -ing, because I consider this to be equally valid.

this lexical phenomenon appears to have focused mainly on adjectives and nouns, neglecting the question of verbal antonymy to a certain extent. Even in thesauri, where the number of adjectival antonyms listed is large, the number of verbal antonyms is considerably reduced. Whether this is due to a lack of antonyms for this lexical class or to a lack of study is unclear; what is clear is that none of the thesauri and dictionaries consulted devotes much space to them. The most complete database found in this respect is, as mentioned, WordNet. Furthermore, it organizes lexical relations around the notion of sense, distinguishing senses with such precision and detail that it largely avoids overlap. Break, in Figure 4.2 below, may serve as illustration. As shown, the search offers fifty-nine senses for break, but only two of them have antonyms:

2 of 59 senses of break

Sense 5
break, bust -- (ruin completely; "He busted my radio!")
Antonym of repair (Sense 1)
=> repair, mend, fix, bushel, doctor, furbish up, restore, touch on -- (restore by replacing a part or putting together what is torn or broken; "She repaired her TV set"; "Repair my shoes please")

Sense 29
break -- (cause the failure or ruin of; "His peccadilloes finally broke his marriage"; "This play will either make or break the playwright")
Antonym of make (Sense 41)
=> make -- (assure the success of; "A good review by this critic will make your play!")

Figure 4.2 Results for ‘Antonyms’ search of verb break

Returning to the question of whether the low number of verbal antonyms is due to a lack of antonyms for this lexical class or to a lack of study, both seem plausible; but granted that WordNet is an accurate source of information, and attending to the results offered by this database, the balance seems tipped in favour of the former.7 If this is so, the conclusion is that the number of antonyms of this lexical category is scarce, as WordNet only puts forward the following antonyms for the verbs under study:

Verb      Antonyms
break     (for sense 5) repair, mend, fix, bushel, doctor, furbish up, restore, touch on; (for sense 29) make
crack     Ø
crash     Ø
crush     Ø
fracture  Ø
rip       Ø
shatter   Ø
smash     Ø
snap      hold
splinter  Ø
split     Ø
tear      Ø
Table 4.3 Verbal antonyms for break verbs as found in WordNet
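WordNet's sense-level treatment of antonymy, reflected in the table above, can be modelled with a toy lexicon in which antonym links hang off individual senses rather than off whole words. This is a simplification, not the real WordNet API; the antonym data follow Table 4.3, and the sense key used for snap is illustrative only:

```python
# Toy sense-based lexicon: antonymy holds between senses, not whole words.
# Antonym data follow Table 4.3; the sense key for snap is illustrative.
LEXICON = {
    "break": {5: ["repair", "mend", "fix", "bushel", "doctor",
                  "furbish up", "restore", "touch on"],
              29: ["make"]},
    "crack": {}, "crash": {}, "crush": {}, "fracture": {}, "rip": {},
    "shatter": {}, "smash": {}, "snap": {1: ["hold"]},
    "splinter": {}, "split": {}, "tear": {},
}

def verbs_with_antonyms(lexicon):
    """Return the verbs for which at least one sense has an antonym."""
    return sorted(verb for verb, senses in lexicon.items() if senses)

assert verbs_with_antonyms(LEXICON) == ["break", "snap"]
```

Organizing the links per sense, rather than per word, is what lets a query distinguish the change-of-state antonyms from those attached to unrelated senses.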

As the table shows, only break and snap have antonyms according to WordNet, and not all of those displayed for the former are appropriate, because some do not correspond to the senses of break that indicate change of state (see Table 4.4 below).

These results are further supported by the NOTE, which offers the following data:

Verb      Antonyms
break     repair, mend
crack     Ø
crash     Ø
crush     Ø
fracture  Ø
rip       Ø
shatter   Ø
smash     Ø
snap      hold
splinter  Ø
split     Ø
tear      Ø
Table 4.4 Verbal antonyms for break verbs as found in the NOTE

7 But see Faber and Mairal (1999).

As can be seen in the table, only two verbs out of twelve have an antonymic form, and both of them correspond to those in WordNet. This is consistent with the fact that break verbs do not have a reversative semantics; that is, when an object breaks, it cannot be unbroken or returned to its original state. In other words, the process cannot be reversed.

It must also be noted that break, the verb that names the verbal class, is one of the only two verbs associated with an antonym. This may be due to the fact that, as I show in section 4.1.2.c, break is the hyperonym of this verbal class.

4.1.2.b Synonymy

In chapter 3, I concluded that full synonymy does not exist, and I decided to approach the study of synonymy from a broader perspective. More precisely, I have approached this phenomenon along the same lines as antonymy: not as a matter of similarity of meaning between words, but as a matter of similarity of meaning between the senses of words. This can be explained in very simple terms: a word may have more than one sense, and it may be the case that only one of them is related to only one sense of another word. (7) illustrates this idea:

(7) Nice (adj)

1. pleasant, enjoyable or satisfactory:

Did you have a nice holiday?

Have a nice day/time!

2. kind, friendly or polite:

Jane’s new boyfriend is a really nice guy.

I wish you’d be nice to your brother.

3. [before noun] FORMAL based on very slight differences and needing a lot of careful and detailed thought:

I wasn’t convinced by the minister’s nice distinction between a lie and an untruth.

Nice and pleasant are regarded as synonyms in the CALD. However, pleasant does not share the third meaning associated with nice, namely ‘based on very slight differences and needing a lot of careful and detailed thought’. Therefore, only one or two senses of nice are synonymous with that of pleasant (‘enjoyable, attractive, friendly, or easy to like’ – CALD). Consequently, in this dissertation synonymy will be understood as similarity between senses.

This approach finds support in the fact that the sets of synonyms that appear in dictionaries and thesauri, which generally do not follow a strict organization in senses, show a high degree of overlap and circularity. Obviously, the degree of overlap varies from one dictionary or thesaurus to another, but even in those whose organization is more careful, like the NOTE (see table 3.2 in chapter 3 for illustration), overlap is still a fact. The NOTE explains in the introduction that two words rarely mean exactly the same and adopts a broad definition of the term synonymy for practical purposes. This way, users will find a wide range of synonyms which are interchangeable in a great number of contexts, but not in all of them. This implies that, in spite of arranging each lexical entry into different senses, each marked by a different number, such classification is not sufficiently fine-grained. As can be seen in table 3.2 in chapter 3, for my purposes I have picked only the sense that comes closest to the break meaning of the verbs. The NOTE groups not only each lexical entry but also each sense into one or several synonym sets. Each set of synonyms has a core synonym, which I have marked here by using capital letters. This term is the one that comes closest in meaning to the headword in that particular sense. Semicolons are used to separate different sets of synonyms which are not synonymous with one another, but only with the headword. Arranging information in this fashion is definitely useful to the general user, but it turns out to be too imprecise for accurate research, for two reasons. The first reason is that, as already mentioned, the NOTE – like most thesauri, in fact – is organized around a broad definition of the term ‘synonymy’, leaving aside subtle distinctions or nuances. Consequently, each sense allows for a larger number of synonyms, thus producing an overlap of synonyms among different senses of the same word. The second reason is that, apart from the overlap, another problem derived from this lack of accuracy is circularity: if we look at table 3.2 in chapter 3, it is clear that most verbs are defined in terms of the others; more precisely, the core synonym of most verbs is another verb in the same verbal class (shatter for break, split for crack, smash for crash, break for fracture, smash, snap and split, and shatter for splinter).

Some of these shortcomings are overcome in WordNet, since the distribution of word meaning into senses is very fine-grained. This allows me to present a more precise set of synonyms for each of these senses, thus avoiding overlap to a greater extent.

As explained in chapter 3, WordNet allows one to carry out searches for different lexical relations – Synonyms, Coordinate terms, Hyperonyms, Hyponyms, Holonyms, Meronyms, Derivationally related forms, and so on – and, in the case of synonymy, it also permits one to organize the search around the notions of similarity and frequency. In order to see what advantages they offer, I have carried out searches in both fashions. The search for ‘Synonyms, grouped by similarity’ gathers together different sets of synonyms whose meaning is similar. For example, if we go back to the list of senses put forward for break in (3) in section 3.4 in chapter 3, senses 2 and 3 are grouped together, and so are 4 and 17, 5 and 42, and 7 and 19. This offers the advantage of knowing which sets of synonyms come closest to the core meaning of break verbs in case there is more than one. The search for ‘Synonyms, ordered by estimated frequency’ arranges the sets of synonyms by frequency of occurrence. This arrangement coincides with that offered by WordNet when displaying the senses of a word in an ordinary search. Thus, sense 1 in (3) in section 3.4 is the most frequent set of synonyms for break, followed by sense 2, sense 3, and so on. One of the advantages of this arrangement is that it makes clear which set of synonyms is closest to the core meaning of each verb, since I assume that frequency is a key to determining meaning. The combination of both searches should make it easier to identify the core synonyms for break verbs, because one indicates which sense is the most frequent – and, consequently, closest to the core meaning – and the other indicates which sets of synonyms (if any) attach to that sense. Furthermore, in order to avoid problems of circularity, I have looked for a prototypical definition of these verbs which should help me discriminate only those synonyms which are prototypical. The organization of WordNet into similar and frequent sets of synonyms, together with the prototypical definition of the verbs – which leaves out, among others, figurative meanings – and the extra information provided by WordNet every time it displays results, such as the definition(s) and hyperonym(s) for each sense, should suffice to identify or, at least, come close to the prototypical synonym(s) of each verb.
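The two search modes just described can be mimicked over a toy sense inventory. In the sketch below, senses are identified by their frequency rank (so the frequency-ordered search is simply numeric order), and the similarity groups are the ones cited in the text for break (senses 2/3, 4/17, 5/42 and 7/19); everything else is an illustrative assumption:

```python
# Toy model of the two WordNet synonym searches for break. Senses are
# identified by frequency rank (1 = most frequent), so 'ordered by
# estimated frequency' is simply numeric order; the similarity groups
# are the ones cited in the text (2/3, 4/17, 5/42, 7/19).
SIMILARITY_GROUPS = [{2, 3}, {4, 17}, {5, 42}, {7, 19}]

def by_frequency(senses):
    """'Synonyms, ordered by estimated frequency': lower rank first."""
    return sorted(senses)

def similar_to(sense, groups=SIMILARITY_GROUPS):
    """'Synonyms, grouped by similarity': other senses in the same group."""
    for group in groups:
        if sense in group:
            return sorted(group - {sense})
    return []

senses = [3, 1, 42, 2, 5]
assert by_frequency(senses)[0] == 1      # sense 1 is the most frequent
assert similar_to(2) == [3]              # senses 2 and 3 are grouped
assert similar_to(5) == [42]
```

Combining the two views – take the head of the frequency ordering, then pull in whatever its similarity group attaches to it – is exactly the move the selection procedure below systematizes.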

Furthermore, I have developed a meaning definition for prototypical verbs of pure change of state, so that synonyms can be checked against it in case of doubt. In dealing with prototypicality, Givón (1993: 52) explains that not all members of a certain class show the same features or follow membership criteria to the same degree; however, all of them must share a ‘cluster of criteria’ which is central to the prototypical definition of the verbal class. Some of the members are closer to the prototype, whereas others resemble it to a lesser extent. With regard to verbs, Givón (1993: 55, emphasis in original) defines the prototypical verb as being ‘of intermediate complexity, involving concrete (perceptually accessible) events, either of physical motion or physical action, and above all fast changing events’. And, with respect to transitive clauses, they have a verb that codes a ‘bounded, fast-changing action’; the subject of a prototypical transitive verb is a ‘volitional, acting agent’, and the object a ‘concrete patient’ that undergoes the results of the agent’s action (Givón 1993: 106). If the subjects of these verbs are nouns, they will prototypically tend to be concrete, animate, human, countable and general (Givón 1993: 55-57). Givón (1993: 107-111) considers break a prototypical transitive verb, as it indicates ‘change in the object’s physical state’.8 Thus, taking all this into account, as well as the semantics of these verbs, their definition could be summarized as follows:9

(8) Prototypical definition of transitive break verbs:

Patient: concrete, inanimate, non-human, countable, undergoer, non-volitional

Agent: concrete, animate, human, countable, instigator, volitional

Process: change of state

Break verbs also have a less prototypical, intransitive (or inchoative) version, for which I suggest the following definition:

8 See also Martín Arista (2005). 9 See Croft (1991), Baker (2003) and Martín Arista and Ibáñez Moreno (2004) for further details on the functional definition of categories.

(9) Prototypical definition for the less-prototypical intransitive variant of break verbs:

Undergoer: concrete, inanimate, non-human, countable, non-volitional

Process: change of state
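The definitions in (8) and (9) amount to feature bundles that a candidate sense can be checked against; deciding whether a sense fits reduces to set containment. The sketch below is mine, not part of the thesis' formal apparatus, and the feature names and the failing candidate are illustrative:

```python
# The prototypical definitions in (8)/(9) as feature bundles; checking a
# candidate sense reduces to set containment. Feature names are mine.
PROTO_TRANSITIVE = {
    "patient": {"concrete", "inanimate", "non-human", "countable",
                "undergoer", "non-volitional"},
    "agent":   {"concrete", "animate", "human", "countable",
                "instigator", "volitional"},
    "process": {"change of state"},
}

def fits_prototype(candidate, prototype=PROTO_TRANSITIVE):
    """True if the candidate supplies every feature the prototype demands."""
    return all(prototype[role] <= candidate.get(role, set())
               for role in prototype)

# A hypothetical sense whose object is abstract fails on the patient
# features; the prototype itself trivially passes.
abstract_object = {
    "patient": {"abstract", "countable", "undergoer", "non-volitional"},
    "agent": PROTO_TRANSITIVE["agent"],
    "process": {"change of state"},
}
assert fits_prototype(PROTO_TRANSITIVE)
assert not fits_prototype(abstract_object)
```

This is the check that STEP 3 of the selection procedure below applies to each candidate synonym set.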

Putting together the data drawn from WordNet and the meaning definition provided above, I have all the information necessary to determine the synonyms of break verbs. The selection process has been systematic; I explain it in (10), and I illustrate it in (11) by means of break:

(10) Synonym selection process: steps.

STEP 1: I take the first three sets of synonyms ordered by frequency in WordNet, since they are supposed to be the closest to the core meaning of the verb.10

STEP 2: I check the list of synonyms grouped by similarity in WordNet and add those sets of synonyms which are associated with the sets picked in STEP 1, as they are supposed to be similar in meaning – hence, synonyms of one another.

STEP 3: having accomplished steps 1 and 2, I check the selected senses against the prototypical definition for break verbs to make sure that I have included all – and no more than – the synonym sets that comply with it.

STEP 4: WordNet also provides hyperonyms for each sense. Thus, if doubt persists regarding some synonym or synset, I check it against the hyperonyms provided. Hyperonyms are a very useful tool to clarify problematic senses because they condense the basic meaning of the verb in one single word while giving a wider perspective. In most cases, senses with different hyperonyms will not be synonyms.

10 The number of synonym sets picked varies according to the number of senses distinguished, given that those verbs with a greater number of senses will have a wider range of synonyms, and there are more chances that some of those synonyms fit the prototypical definition.

(11)

a. BREAK

STEP 1: sense 1, sense 2 and sense 3 – logically, I leave out those senses that conform to the definition but which offer no set of synonyms for the head verb.

STEP 2: senses 2 and 3 are grouped together according to the criterion of similarity, which highlights their status as synonyms of break.

STEP 3: the synonyms provided in sense 1 do not fit the prototypical definition of break because the object is not concrete. Hence, they are rejected.

STEP 4: senses 5 and 42, which are grouped together by similarity, might be regarded as synonyms of break, because their definitions make reference to the complete destruction or breaking of an object. However, the hyperonym for sense 5 and the examples provided make it clear that there is no change in the material integrity of the object; thus, it is not a prototypical synonym set. The same applies to senses 4 and 17, which might at first seem to make reference to a change in the material integrity of the object but in fact mean a change in the mechanics or functioning of the object. The set of synonyms displayed in 42, by contrast, does imply a change in the material integrity of the object – at least as far as the definition and the examples are concerned. Thus, it is included in the set of synonyms of break. Sense 39 refers to a change or transformation, but not necessarily in the material integrity of the object; therefore, it is disregarded. Sense 44 could also be considered prototypical, although its meaning is more restricted than the general one for break; however, it offers no synonyms at all, so it is also left out. Sense 50 is similar to sense 5; therefore, it is also rejected. And, finally, senses 43, 54 and 57 were not contemplated in STEPS 1 and 2; however, their definitions as well as their hyperonyms and synonyms fit the prototypical definition for break. Hence, they are also included in the final set of synonyms of break.
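The procedure in (10) can be sketched as a small filter pipeline. The sense records below are schematic stand-ins, not WordNet's actual entries; the usage example mimics the situation in (11a), where a frequent but non-prototypical sense 1 is rejected in STEP 3:

```python
# Sketch of the synonym selection steps in (10) over schematic sense data.
# The Sense records below are illustrative stand-ins, not WordNet entries.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sense:
    rank: int           # frequency rank in WordNet (1 = most frequent)
    group: int          # similarity-group identifier
    prototypical: bool  # fits the definition in (8)/(9)?
    has_synonyms: bool  # does WordNet list synonyms for this sense?

def select(senses, top_n=3):
    # STEP 1: the top_n most frequent senses among those offering synonyms.
    with_syns = sorted((s for s in senses if s.has_synonyms),
                       key=lambda s: s.rank)
    picked = set(with_syns[:top_n])
    # STEP 2: add senses similarity-grouped with the ones picked in STEP 1.
    groups = {s.group for s in picked}
    picked |= {s for s in with_syns if s.group in groups}
    # STEP 3: keep only the senses that fit the prototypical definition.
    return {s for s in picked if s.prototypical}

# Mimicking (11a): sense 1 is frequent but not prototypical; senses 2 and
# 3 are grouped by similarity and prototypical, so only they survive.
s1 = Sense(rank=1, group=1, prototypical=False, has_synonyms=True)
s2 = Sense(rank=2, group=2, prototypical=True, has_synonyms=True)
s3 = Sense(rank=3, group=2, prototypical=True, has_synonyms=True)
assert select({s1, s2, s3}) == {s2, s3}
```

STEP 4, the hyperonym check, is a manual disambiguation step in the text and is therefore left out of the sketch.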

After analysing all break verbs along these lines, the results are the following:

VERB      SYNONYMS
Break     Sense 2, Sense 3, Sense 42, Sense 43, Sense 54, Sense 57
Crack     Sense 1, Sense 6, Sense 7, Sense 10
Crash     Sense 5, Sense 8
Crush     Sense 2, Sense 4, Sense 8
Fracture  Sense 3, Sense 4, Sense 5, Sense 6
Rip       Sense 1
Shatter   Ø
Smash     Sense 2, Sense 10
Snap      Sense 2, Sense 3
Splinter  Sense 3
Split     Sense 1, Sense 2, Sense 5
Tear      Sense 1, Sense 2
Table 4.5 Final set of synonyms for break verbs

This proposal avoids circularity to a great extent and leaves out those synonyms of break verbs that depart from the prototypical definition.


4.1.2.c Hyponymy and Hyperonymy

In chapter 3, I claimed that the hyperonym of a hierarchy can be defined according to different criteria. In this dissertation I focus on two: (1) frequency; and (2) the number of complementation patterns in which the term occurs. I start with the frequency criterion.

(1) Hyperonymy in terms of frequency

In order to determine the hyperonym of break verbs I have searched for each verb in the whole BNC. The verb with the highest number of occurrences will be the superordinate term. Given the diversity of verbal morphology, I have carried out six searches for each verb in an attempt to cover the widest range of realisations. The searches correspond to the patterns shown in (12):

(12) BNC searches:

VVB: base form of lexical verb, except the infinitive (e.g. TAKE, LIVE)

VVI: infinitive of lexical verb

VVZ: -s form of lexical verb (e.g. TAKES, LIVES)

VVG: -ing form of lexical verb (e.g. TAKING, LIVING)

VVD: past tense form of lexical verb (e.g. TOOK, LIVED)

VVN: past participle form of lexical verb (e.g. TAKEN, LIVED)

I display the results of the searches in Table 4.6, descending from the verb with the highest number of occurrences to that with the lowest:

VERB      Occurrences (VVB / VVI / VVZ / VVG / VVD / VVN; total)
Break     855 / 4391 / 868 / 2719 / 4871 / 3110 (16,814)
Split     315 / 550 / 100 / 485 / 313 / 884 (2,647)
Tear      46 / 423 / 34 / 506 / 655 / 879 (2,543)
Snap      241 / 235 / 81 / 192 / 1507 / 253 (2,509)
Crash     60 / 162 / 44 / 276 / 1095 / 123 (1,760)
Smash     53 / 219 / 42 / 175 / 323 / 242 (1,054)
Crack     74 / 400 / 1 / 170 / 206 / 189 (1,040)
Rip       49 / 210 / 41 / 174 / 228 / 226 (928)
Crush     49 / 186 / 21 / 131 / 159 / 341 (887)
Shatter   41 / 68 / 39 / 44 / 127 / 302 (621)
Fracture  0 / 0 / 0 / 31 / 42 / 21 (94)
Splinter  3 / 18 / 1 / 17 / 14 / 10 (63)
Table 4.6 Number of occurrences per verb according to the BNC
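The frequency criterion reduces to summing the six form counts per verb and ranking the totals; using the figures from Table 4.6 for the two most frequent verbs as a sketch:

```python
# The frequency criterion: sum the six form counts per verb and rank.
# Figures are taken from Table 4.6 for the two most frequent verbs.
counts = {  # VVB, VVI, VVZ, VVG, VVD, VVN
    "break": [855, 4391, 868, 2719, 4871, 3110],
    "split": [315, 550, 100, 485, 313, 884],
}

totals = {verb: sum(forms) for verb, forms in counts.items()}
hyperonym = max(totals, key=totals.get)

assert totals["break"] == 16814 and totals["split"] == 2647
assert hyperonym == "break"   # break is the superordinate by frequency
```

Extending the dictionary with the remaining ten rows of Table 4.6 leaves the ranking, and therefore the choice of hyperonym, unchanged.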

The table shows that the hyperonym for break verbs is break, which corresponds with the label used by Levin to designate the verbal class. Although, as Faber and Mairal (1999: 86) note, there may be more than one superordinate for each domain, the difference between the number of occurrences of this verb and that of the rest of the members of its domain leaves no room for doubt.

(2) Hyperonymy in terms of patterns of complementation

Verbal hyperonyms are also characterized by showing a higher number of complementation patterns than the rest of the verbs in the same domain. Quoting Faber and Mairal (1999: 104), ‘the superordinate term of each subdomain tends to take a greater number of complementation patterns than its more specific troponyms’. The inventory of complementation patterns for break verbs according to the CCED is the following:

VERB      Complementation patterns11
Break     V n, V, V n into pl-n, V into pl-n, V adj, V with n, V from n, V n with n, V n of n, V n to n
Chip      V, V n
Crack     V, V n, V P on n, V P
Crash     V, V into n, V n, V prep/adv
Crush     V n, V
Fracture  V, V n
Rip       V, V n, V n with adv, V n prep
Shatter   V, V into n, V n
Smash     V, V n, V into n, V through n, V way prep/adv, V prep/adv, V n prep
Snap      V, V adv/prep, V n adv/prep, V adv, V n, V with quote, V at n
Splinter  V, V prep/adv, V n
Split     V, V in/into n, V n in/into n, V n, V n between
Tear      V n, V n prep, V n with adv, V, V prep/adv, V at n
Table 4.7 Inventory of complementation patterns for break verbs according to the CCED
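The second criterion can likewise be operationalized by counting the comma-separated patterns in each CCED entry; a sketch over three entries copied from Table 4.7:

```python
# The second criterion: count the comma-separated CCED complementation
# patterns per verb (phrasal verbs excluded). Three entries from Table 4.7.
patterns = {
    "break": "V n, V, V n into pl-n, V into pl-n, V adj, V with n, "
             "V from n, V n with n, V n of n, V n to n",
    "crush": "V n, V",
    "chip":  "V, V n",
}

n_patterns = {verb: len([p for p in entry.split(",") if p.strip()])
              for verb, entry in patterns.items()}

assert n_patterns["break"] == 10
assert max(n_patterns, key=n_patterns.get) == "break"
```

With all twelve rows of the table included, break still takes the largest inventory, which is what makes it the superordinate by this criterion as well.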

The hyperonym for the domain of break is, once more, break. This result is also in consonance with the information gathered from the Lexicon of contemporary English (1985), which organizes information in terms of families of hyponyms and their superordinates.

4.1.2.d Polysemy

In chapter 3, I introduced the notion of polysemy as central to Pustejovsky’s work, which aims to account for the multiplicity of word meaning. In this section, I attempt to offer a descriptive approach to the phenomenon of polysemy within Pustejovsky’s framework that will contribute to the new arrangement of the corpus, distinguishing between those examples whose meaning can be derived from the basic meaning of change of state and those whose meaning cannot. For that purpose, I have consulted several dictionaries – NODE, LDEL, and CCED – and the online lexical reference system WordNet in search of all the senses displayed for break verbs, a highly polysemous verbal class. As I show below, break verbs combine with a great number of prepositions and adverbs which, on many occasions, confer a completely different meaning on the verb. In other words, they generate contrastive ambiguity.

11 Phrasal verbs are not included.

For reasons of space, I cannot display the lists of senses found in WordNet and the dictionaries here, but in Table 4.8 I exhibit the number of senses distinguished in each dictionary and the database for comparison. Save perhaps a few exceptions, all the senses listed in the dictionaries are included in the database.

VERB      WordNet    NODE                          LDEL                           CCED
Break     59 senses  5 senses (plus 36 subsenses)  37 senses (plus 20 subsenses)  24 senses
Chip      5 senses   2 senses (plus 3 subsenses)   4 senses (plus 1 subsense)     1 sense
Crack     12 senses  4 senses (plus 10 subsenses)  12 senses (plus 9 subsenses)   10 senses
Crash     12 senses  3 senses (plus 9 subsenses)   11 senses (plus 7 subsenses)   4 senses
Crush     8 senses   1 sense (plus 4 subsenses)    7 senses (plus 2 subsenses)    4 senses
Fracture  6 senses   1 sense (plus 1 subsense)     3 senses (plus 2 subsenses)    2 senses
Rip       4 senses   2 senses (plus 3 subsenses)   6 senses (plus 1 subsense)     3 senses
Shatter   2 senses   1 sense (plus 3 subsenses)    6 senses (plus 1 subsense)     3 senses
Smash     10 senses  2 senses (plus 9 subsenses)   7 senses (plus 2 subsenses)    4 senses
Snap      12 senses  3 senses (plus 6 subsenses)   10 senses (plus 5 subsenses)   7 senses
Splinter  3 senses   1 sense                       3 senses                       1 sense
Split     5 senses   5 senses (plus 6 subsenses)   9 senses (plus 10 subsenses)   4 senses
Tear      5 senses   2 senses (plus 6 subsenses)   6 senses (plus 1 subsense)     6 senses
Table 4.8 Number of senses displayed by WordNet, NODE, LDEL, and CCED for each of the break verbs

It must be noted that information is displayed differently in each source: WordNet is similar to sense enumeration lexicons in that it lists all senses separately, as if in different lexical entries and without establishing any logical relationship among them. The CCED resembles WordNet in its structure, although the number of senses distinguished for each verb is considerably lower in most cases. The NODE and the LDEL differ from the other two in that their meanings are organized into a certain number of senses which correspond to the core meanings of the word – in most cases, homonyms – to which a number of related subsenses are attached. There is a logical relationship between each subsense and the core sense under which it appears. This means that it is possible to see how senses in the language relate to one another, which is central to Pustejovsky’s lexicon. If this is true, and assuming that the NODE is a trustworthy and accurate source of information, it should be possible to derive all the senses displayed in WordNet from those listed in the former.12 Thus, on this premise, I have taken the NODE – always in conjunction with WordNet – as the reference point to study polysemy in this dissertation. Consequently, the core senses of break verbs, from which all the other subsenses are generated, are the following (whenever possible, I illustrate each sense with examples from the corpus):

Break

1. separate or cause to separate into pieces as a result of a blow, shock, or strain: I fall flat on my bonce and break a chair.
2. [with obj.] interrupt (a continuity, sequence or course): Now, suddenly, a jay looks down from a hiding place in a lichened oak and slips away without breaking the silence: stealthy, cunning, typical of the inveterate egg thief.
3. [with obj.] fail to observe (a law, regulation, or agreement): But the Jesuits broke the agreement; you can see the consoles on the first and second storeys which once held statues.
4. [with obj.] crush the emotional strength, spirit, or resistance of (a person’s emotional strength or control, a movement or organization, etc.): It broke my heart to see that little face and big eyes.
5. [no obj.] undergo a change or enter a new state (in particular of the weather, of a storm, of dawn or a day, of a person’s voice, of news or a scandal, etc.): ‘My father,’ her voice nearly broke, ‘now lies sheeted in a coffin in the Chapel of St Peter Ad Vincula.’

12 At this stage of my work, and for explanatory purposes only, I assume that this hypothesis is valid. However, the real test would consist in developing the qualia for each break verb and checking them against all the senses displayed in WordNet for each verb.

Crack

1. break or cause to break without a complete separation of the parts: However, wrote Goldberg, turning the page and tearing it slightly in his eagerness to go on, my son is rather short-sighted, something he has inherited from his mother, and recently, during a school trip to Dieppe, he dropped his glasses and cracked one of the lenses.
2. make or cause to make a sudden sharp or explosive sound: As she did so, a shot cracked across the ridge, from one of the police marksmen.
3. [with obj.] informal find a solution to; decipher or interpret: Turning up its coat collar in a hard-boiled way, the voice is briefed on its mission – ‘leather couches, portraits of the Queen… this must be MI6’ – passes a dingy border checkpoint getting its forged passport stamped, studies a spy plane view of Cheltenham, cracks a cipher or two, and spots a glamorous female spy from the window of the Orient Express to Istanbul.
4. [with obj.] decompose (hydrocarbons) by heat and pressure with or without a catalyst to produce lighter hydrocarbons, especially in oil refining: Compose along, cut diamonds along, organise a strike along, crack atoms or safes along.

Crash

1. [no obj.] (of a vehicle) collide violently with an obstacle or another vehicle: A carbon-fibre brake disc shattered as he slowed from high speed, and his McLaren crashed into a guard rail.
2. [no obj., with adverbial of direction] move with force, speed, and sudden loud noise: It is an interesting place to stand and watch rocks falling; all the time I am there I hear them crashing down.
3. [with obj.] informal enter (a party) without an invitation or permission; gatecrash: He did everything at breakneck speed and was insatiably sociable, always turning up when Jane was particularly busy, crashing into the room and asking jauntily: ‘What’s everyone doing?’

Crush

1. [with obj.] press or squeeze (someone or something) with force or violence, typically causing serious damage or injury: A farmer is using a shire horse to pull the millstone which crushes the fruit.

Fracture

1. break or cause to break: Brace the valve body against the wall as you do this, to prevent it turning and fracturing the supply pipe below.

Rip

1. [with obj. and adverbial of direction] tear or pull (something) quickly or forcibly away from something or someone: On his back like a beetle, three sat on him while two ripped his daysack from his chest.
2. [no obj., with adverbial of direction] move forcefully and rapidly: But that night, when he is asleep, the creature enters his chamber and rips back his curtain – so! – so that he wakes up with a start to find its dreadful gaze upon him, and its hand out-stretched for his throat!

Shatter

1. break or cause to break suddenly and violently into pieces: He says that when he twists ice out of its plastic container, the ice being extremely cold and dry, he sees a flash of blue-white light as the cubes shatter into pieces.

Smash

1. [with obj.] violently break (something) into pieces: One of the window’s been smashed and someone’s stuck newspaper over the hole.
2. [no obj., with adverbial of direction] move so as to hit or collide with something with great force and impact: Well information is being whizzed around your computer at, lets say, 33MHz until – crunch! – it smashes into the board running at 8MHz, causing a massive pile-up of data.

Snap

1. break or cause to break suddenly and completely, typically with a sharp cracking sound: The trick is easy if there are birch trees about; then there is always an abundance of dead twigs which can be snapped easily off the trunks.
2. [with obj.] take a snapshot of: He took two pals along with him for company and planned to spend the time snapping rare African wildlife.
3. [with obj.] American Football put (the ball) into play by a quick backward movement: there are no examples available in the corpus for this sense.

Splinter

1. break or cause to break into small sharp fragments: The rule cracked, splintering into pieces.

Split

1. break or cause to break forcibly into parts, especially into halves or along the grain: Split the baked potatoes and pour the sauce over.
2. (with reference to a group of people) divide into two or more groups: In spring 1987 he suffered a further blow when the constituent members of the PLO – split since 1983 – were reconciled at the 18th PNC in Algiers, and formally renounced the 1985 accord with Jordan.
3. [no obj.] informal (of one’s head) suffer great pain from a headache: there are no examples available in the corpus for this sense.
4. [no obj.] Brit. informal betray the secrets of or inform on someone: A groan here, a creak there, the crying of tortured metal, the cracking of the internal wood fascias as they buckled and split, and always the slow, deadly sound of gurgling water.
5. [no obj.] informal leave a place, especially suddenly: But she won at their game, splitting the party in 1969 to assert her supremacy over the bosses – and she was vindicated when voters gave her a huge majority.

Tear

1. [with obj. and adverbial] pull or rip (something) apart or to pieces with force: Tear the paper into bits, and flush it down the toilet.
2. [no obj., with adverbial of direction] informal move very quickly, typically in a reckless or excited manner: She ‘just misses’ trains, turns up hours late for dates, makes sure we’re the last to arrive at parties, dawdles getting ready to go out, then tears around hysterically, taking her irritation with herself out on me.
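The contrast drawn earlier between WordNet's flat sense enumeration and the NODE's core-sense organization can be modelled as two data structures: a plain list versus a mapping from core senses to attached subsenses. This is a schematic sketch of the shapes only, and the sense and subsense labels are invented for illustration:

```python
# Two ways of organizing the same senses: a flat, WordNet-style list and
# a NODE-style hierarchy of core senses with attached subsenses.
# All sense and subsense labels below are invented for illustration.
flat = ["sense 1", "sense 2", "sense 3", "sense 4"]  # no relations recorded

node_style = {
    "separate into pieces": ["break a chair", "break a seal"],
    "interrupt a continuity": ["break the silence", "break a journey"],
}

# In the hierarchical model the derivation of every subsense from a core
# sense is explicit; the flat model records no such relationship.
core_senses = list(node_style)
subsense_count = sum(len(subs) for subs in node_style.values())
assert len(core_senses) == 2 and subsense_count == 4
```

It is the explicit core-to-subsense link in the second structure that makes the NODE usable as a reference point for deriving WordNet's flat senses.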

The display above shows that break verbs are mostly polysemous. In the following I proceed to characterise the relationship existing among the different senses of each verb in terms of contrastive or complementary polysemy. In doing so I pursue three goals: (1) to check whether the list of core senses distinguished by the NODE fully covers the range of meanings of each verb without incurring the proliferation of unnecessary distinctions or whether, on the contrary, some of them overlap and should therefore be reduced to a lower number of senses from which the rest are generated; (2) to find out how polysemous break verbs are; and (3) to establish a first criterion that allows me to arrange the corpus in a way that discriminates those examples whose meaning departs from that of pure change of state from the other non-related senses.13

Let us start by defining the type of polysemy that exists among the senses of each word:14

Break: all senses are homonyms. That is, they stand in a relation of contrastive polysemy. The fourth one – ‘crush the emotional strength, spirit, or resistance of (a person’s emotional strength or control, a movement or organization, etc.)’ – might be said to be a metaphorical and metonymical extension of the first one – ‘separate or cause to separate into pieces as a result of a blow, shock, or strain’; hence a potential case of complementary polysemy.

Crack: all senses are homonyms, although the fourth sense – ‘decompose (hydrocarbons) by heat and pressure with or without a catalyst to produce lighter hydrocarbons, especially in oil refining’ – might be interpreted as a sense derived from the first one – ‘break or cause to break without a complete separation of the parts’ – in which case they would stand in a relationship of complementary polysemy.15

Crash: all senses are homonyms, although the second one – ‘move with force, speed, and sudden loud noise’ – might be an onomatopoeic extension of the first one – ‘(of a vehicle) collide violently with an obstacle or another vehicle.’ That was also the case with the second sense of crack.

Rip: all senses are homonyms.

13 Additional criteria for the corpus selection process will be provided in section 4.2.
14 Since I am dealing with polysemy, I leave out those verbs for which only one sense is displayed.
15 I come back to the issues of metaphor and metonymy in section 4.2.2.

Smash: all senses are homonyms.

Snap: all senses are homonyms.

Split: all senses are homonyms, although the second sense – ‘(with reference to a group of people) divide into two or more groups’ – might be said to be a metaphoric extension of the first one – ‘break or cause to break forcibly into parts, especially into halves or along the grain.’

Tear: all senses are homonyms.

The vast majority of the lexical entries distinguished by the NODE for (polysemic) break verbs display only senses which stand in a contrastive relationship. There are only a few borderline exceptions – namely break, crack, crash and split – some of whose senses might, to a certain extent, be regarded as metaphorical and/or metonymical extensions of another core sense. Still, the organization of the lexical entries of the NODE can be said to fit the requirements of Pustejovsky’s lexicon adequately.

Thus, going back to the three goals put forward above, the following can be concluded:

(1) There is no need to reduce the number of senses per verb in order to have an accurate account of the meaning of these verbs.16 The senses displayed for each verb do not overlap, which means that they do not stand in a relationship of complementary polysemy. In turn, this implies that there is no unnecessary proliferation of extended senses.

(2) Break verbs are highly polysemous; 8 out of 12 show contrastive ambiguity or homonymy – the other 4 only have one sense according to the NODE, so they are irrelevant here – and 4 out of 12 might exhibit complementary polysemy. If the assumption above that the core meanings displayed by the NODE cover all the senses listed for each word by WordNet is true, then the verb break is an outstanding example of complementary polysemy, since it has 59 senses which would stand in a relationship of complementary polysemy with five core senses.

16 Other dictionaries like the CALD list more core meanings than the NODE. Nonetheless, I cannot guarantee that those core meanings are real homonyms in all cases. Thus, I stick to the NODE here because, although it may lack some meaning in some cases, the ones it displays are real homonyms and it is, consequently, a highly reliable source of information.

(3) Turning to the third goal, a quick look at the corpus has revealed that many of its examples depart from the meaning of pure change of state and denote some of the homonym senses of break displayed above (it should be remembered here that the examples have been drawn at random, and many of them depart considerably from the prototypical meaning of break verbs). Given that this is a study of verbs of pure change of state, the first step in the new arrangement of the corpus will consist in separating those examples that denote change of state from those that do not. The first criterion upon which such a (re)arrangement might be carried out is the distinction between contrastive and complementary polysemy: those senses that stand in a relationship of contrastive polysemy with the sense of pure change of state will be gathered together in one class (non-prototypical); those that stand in a relationship of complementary polysemy will be put together in another class (less-prototypical); and those senses that are synonymous with the sense of pure change of state will remain together in another class (prototypical). As can be seen, the notions of polysemy and synonymy are closely related, and both will be indirectly but intrinsically present in the process of rearrangement of the corpus in the next section.
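The arrangement criterion just described amounts to a mapping from the polysemy relation that a sense bears to the pure change-of-state sense onto one of the three corpus classes. The following is a minimal sketch of that mapping; the relation labels are illustrative stand-ins, not the annotation scheme actually used in this thesis.

```python
# Sketch of the arrangement criterion: the relation that a sense bears to the
# pure change-of-state sense determines its corpus class. The relation labels
# are invented for illustration.

def corpus_class(relation: str) -> str:
    return {
        "synonymous": "prototypical",          # same pure change-of-state meaning
        "complementary": "less-prototypical",  # metaphorical/metonymical extension
        "contrastive": "non-prototypical",     # unrelated homonym sense
    }[relation]

# e.g. the 'inform on someone' sense of split is a homonym of the
# change-of-state sense, so its examples are filed as non-prototypical:
print(corpus_class("contrastive"))  # -> non-prototypical
```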

Polysemy can also contribute to the arrangement of the corpus in another way: so far the criteria discussed in relation to this process have been mainly semantic.

However, there are formal criteria that must also be taken into consideration. During the corpus compilation process, I have noticed three main types of examples:

(I) Examples with a predicate only: in each case, the meaning of the verb will be the only criterion to decide whether the example meets the definition of change-of-state verbs. Example (12) illustrates a case that meets the definition, whereas (13) illustrates the opposite.

(12) The egg industry, in the form of United Kingdom Egg Producers’ Association (Ukepra), has submitted a definitive proof to the Richmond Committee, under the umbrella of the Secretary of State for Health, Kenneth Clarke, saying intact eggs, consumed immediately after breaking, cannot cause food poisoning.

(13) Then an animal snorted quietly and broke the momentary stillness.

(II) Examples with a predicate plus a telicity operator, such as break up or break down: in this case, the operator does not necessarily affect – or affects only slightly – the meaning of the core sense; it is just an intensifier.17 Therefore, as in the previous case, the meaning of the verb will be the only criterion to decide whether the example meets the definition of change-of-state verbs. See (14) and (15) for illustration.

(14) I still think of this when I rip up an old vest for a piece of silver-polishing rag.

(15) ‘Last week,’ she said, ‘my nail snapped off.’

17 For further details on telicity operators, see Lieber (2004) and Martín Arista (forthcoming-a).

(III) Examples with a predicate plus a preposition, such as break through, whose meaning differs completely from that of break verbs. In other words, phrasal verbs which stand in a paradigmatic relationship with break verbs. These examples are cases of contrastive polysemy and will therefore be gathered in a separate class within the corpus. They are illustrated in (16) and (17) below.

(16) Furthermore, true creativity is relatively uncommon even in those individuals who never overtly break down but whose mode of thinking resembles that of the diagnosably psychotic: it is rarer still in those who do pass beyond the threshold into clinical disorder. (break down = lose control of one’s emotions when in a state of distress. CALD).

(17) But what has become internalized can always become externalized again; and so we have seen that, increasingly in modern societies, anti-social instinctual drives of the id — specifically, the instinctual drives of the gelada-like sons of the primal fathers — break out in social conflicts and acts of delinquency and provocation which the modern incarnation of the primal father — the police, the Establishment, law, order and standards of all kinds — have to meet. (break out = (of war, fighting, or similarly undesirable things) start suddenly. CALD).

Apart from these three patterns, the corpus has also revealed that break verbs appear in a great number of collocations such as break the ice, break someone’s heart, break the mould or break surface, among others:

(18) Examples taken from the corpus

a. This range, although covered by many hundreds of metres of water for most of its length, breaks surface in a few places to form the chain of apparently unconnected volcanic islands.

b. ‘You wait till I tear a very ugly strip off the old girl.’

Furthermore, break verbs and, especially, break, combine with a wide range of adjectives, resembling copular verbs such as be, become or get to a certain extent. See (19) for illustration.

(19) Examples taken from the corpus

a. but when you break loose.

b. Break free from chains.

The meaning of most of these examples differs considerably from that of change of state. Thus, they will also be gathered in a different class within the corpus, as I explain in section 4.2.

From all this, it follows that the polysemous character of break verbs is also related to the high frequency with which these verbs combine with prepositions and adverbs, which, especially in the case of phrasal verbs, often results in cases of contrastive ambiguity. I display the range of prepositions and adverbs that each verb selects for in Table 4.9.

WordNet / Oxford / Longman / Collins
Break: Away, down, in, Away, down, sth Down, in, into, off, Away, down, in, into, oneself of sth, down, even, forth, out, up into, off, out, off, out, over, up, free, in, s.o. in, sth through, up with in, in on, into, off, sth off, sth open, out, out in, out of, sth out, through, up, s.o. up, sth up, with
Chip: Off Away, in, sth in Ø Away at, in
Crack: Up Down on, on, up, Down, up Down, up off
Crash: Down, into Ø Ø Out
Crush: Ø Ø Ø Ø
Fracture: Ø Ø Ø Ø
Rip: Into Into, s.o. off, sth Into, off Apart, into, off, up, off, sth up away
Shatter: Ø Ø Ø Ø
Smash: Up, into Down, up, into Up Down, up
Snap: Forward Off, at, away at Ø out of, up
Splinter: Ø Ø Ø Ø
Split: Up Off, away, into, Up, off, into, on Off, up over, up, on
Tear: Ø Up, sth/s.o. apart, s.o. off Apart, away, down, oneself away, into, off, up sth/s.o. down, into

Table 4.9 Prepositions and adverbs selected for by each of the break verbs

The table shows the large number of prepositions and adverbs that break verbs pattern with. As I have explained above, many of them confer a new meaning on the predicate, which has forced me to rearrange the corpus for the analysis of the argument and event structures of these verbs.

Finally, in section 4.1.2.c I have explained two techniques to determine the hyperonym of a verbal domain. The first one consists in searching for each verb in the whole BNC: the verb with the highest number of occurrences is the hyperonym. The second technique is based on the idea that verbal superordinates are characterized by a higher number of complementation patterns than the rest of the verbs in that same domain. In both cases, the hyperonym for break verbs has turned out to be break, which corresponds to the label used by Levin (1993) to designate the verbal class. As can be seen in Table 4.9, the superordinate term of this verbal class also combines with more prepositions and adverbs than its hyponyms. Therefore, this might be considered a third criterion to identify the hyperonym of a verbal class.
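Both identification techniques reduce to taking the maximum of a per-verb score, whether that score is raw BNC frequency or the number of complementation patterns. The following sketch uses invented counts; the real figures would come from querying the BNC and the dictionaries.

```python
# Sketch of the two hyperonym-identification criteria; all figures are
# invented placeholders, not actual BNC or dictionary counts.

def hyperonym(scores: dict) -> str:
    """The verb with the highest score is taken to be the superordinate."""
    return max(scores, key=scores.get)

bnc_frequency = {"break": 19500, "crack": 2100, "snap": 2400, "split": 5300}
complementation_patterns = {"break": 11, "crack": 5, "snap": 4, "split": 6}

print(hyperonym(bnc_frequency))             # -> break
print(hyperonym(complementation_patterns))  # -> break
```

Both criteria converge on break, mirroring the convergence reported above for the real data.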

4.1.3 Summary

To sum up, the analysis of word structure has revealed the close relationship that exists between morphology, semantics and syntax. For this analysis, I have examined four main semantic relationships: antonymy, synonymy, hyponymy and hyperonymy, and polysemy. The analysis of antonymy has revealed that break verbs hardly have any antonyms, which is in keeping with the fact that no reversative affixes can be attached to these verbs, as I have shown in the analysis of lexical derivation. The outcome of the analysis of hyperonymy is also in accordance with the derivational features of break verbs, since the hyperonym – which the frequency analysis has revealed to be break – is the verb that can take the most affixes. Furthermore, the hyperonym break is also the verb that appears in the highest number of complementation patterns. Hence, the analyses of the morphology, semantic relationships and argument structure of break verbs converge.

Furthermore, the analysis of synonymy has revealed that there is no such thing as full synonymy. However, from a more limited perspective, I have been able to identify those senses of the verbs under analysis that denote pure change of state and, on that basis, also their synonyms. In other words, verbal semantics is intrinsically linked to the phenomenon of synonymy. Moreover, the analysis has revealed that synonymy, polysemy and hyperonymy are intrinsically linked: polysemy is based on the relationship existing among senses, and synonymy plays a key role in distinguishing between those senses that share a similar meaning or stand in some kind of logical relationship – that is, cases of complementary polysemy – and those which, far from being synonyms, have a completely different meaning – that is, they stand in a relationship of contrastive ambiguity. Besides, the hyperonym of a synset conveys information that can help to clear up uncertain cases of synonymy. In brief, synonymy, hyperonymy and polysemy overlap and interact to a great extent and, in turn, these semantic relations interact with morphology and syntax.

4.2 The corpus: prototypical, less-prototypical and non-prototypical examples

Having analysed the data obtained from the analysis of word structure, I now proceed to explain the data obtained from the analysis of the other two levels of representation: argument structure and event structure. As I explained in the introduction to this chapter, as soon as I started to analyse the argument and event structure of the examples from the corpus, one problem came up: not all of them denote change of state. Given that this is a study of verbs of pure change of state, I have distributed the examples of the corpus into three different groups, according to their meaning. Those that encode a pure change of state have been gathered together under the label prototypical examples; those that carry a completely different meaning – that is, they stand in a relationship of contrastive polysemy with that of pure change of state – have been labelled non-prototypical examples; and those which are halfway between the two classes have been termed less-prototypical examples. I illustrate the three classes in (20):

(20)

a. Prototypical example: The England bowler fractures that knee again.

b. Non-prototypical example: The nurse now cracks with a deadpan expression: ‘You die, we take you home, Muhammad.’

c. Less-prototypical example: To be alone is one thing, but to be alone and to have no means at all of shattering the shell of silence around you to summon the comforting sound of a human voice is quite another.

Classifying the examples has not always been easy, because the line separating one type from the other is not clear-cut in all cases. For this reason, I have started by distinguishing those examples whose meaning fits the prototypical definition of break from those which are either less-prototypical or clearly non-prototypical, and I have done so in a rather systematic way: first, I have identified all phrasal verbs and placed them in the class of non-prototypical examples; and second, I have taken the lists of senses provided by WordNet for each verb and followed the process of elimination devised in section 4.1.2.b to leave out those synsets that did not fit the definition of break verbs and which, consequently, could not be regarded as prototypical. The only difference is that, in this case, I have also included those senses that were excluded in section 4.1.2.b because they do not provide synonym lists, but which nonetheless supply some relevant meaning connotation or nuance. This process, although based on synonym sets or synsets, is suitable for the definition of

meaning because the synsets are based, precisely, on meaning definitions. Once more, the relationship between morphology/lexicology, syntax and semantics is manifest. For convenience, I explain the selection process again in (21):

(21) Selection process of prototypical examples: Steps

For each verb:

STEP 1: I take the first three sets of synonyms ordered by frequency in WordNet, since they are supposed to be the closest to the core meaning of the verb.

STEP 2: I check the list of synonyms grouped by similarity in WordNet and add those sets of synonyms that are associated with the sets picked in STEP 1, as they are supposed to be similar in meaning – hence, synonyms among them.

STEP 3: Having accomplished steps 1 and 2, I check the selected senses against the prototypical definition for break verbs to make sure that I have included all – and no more than – the synonym sets that comply with it.

STEP 4: WordNet also provides hyperonyms for each sense. Thus, if doubt persists regarding some synonym or synset, I check it against the hyperonym provided. Hyperonyms are a very useful tool to clarify problematic senses because they condense the basic meaning of the verb in one single word while giving a wider perspective. In most cases, senses with different hyperonyms will not be synonyms.
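The four steps can be sketched as a simple filtering procedure. The sense records below are invented stand-ins for WordNet synsets (frequency-ordered, with a similarity group and a hyperonym); they are not WordNet's actual data or API.

```python
# Hypothetical sketch of the four-step synset selection procedure.
# Each sense record is an invented stand-in for a WordNet synset.

def select_prototypical(senses, fits_definition):
    """senses: frequency-ordered list of dicts; fits_definition: predicate."""
    # STEP 1: take the first three senses, assumed closest to the core meaning.
    selected = senses[:3]
    # STEP 2: add senses grouped by similarity with any sense picked in STEP 1.
    groups = {s["group"] for s in selected}
    selected += [s for s in senses[3:] if s["group"] in groups]
    # STEP 3: keep only senses complying with the prototypical definition.
    selected = [s for s in selected if fits_definition(s)]
    # STEP 4: fall back on the hyperonym for doubtful senses: senses whose
    # hyperonym departs from the change-of-state core are discarded.
    return [s for s in selected if s["hyperonym"] in {"separate", "change"}]

# Invented senses of 'snap', ordered by frequency ('cos' marks change of state):
snap = [
    {"id": 1, "group": "A", "hyperonym": "make-sound", "cos": False},
    {"id": 2, "group": "B", "hyperonym": "separate", "cos": True},
    {"id": 3, "group": "B", "hyperonym": "separate", "cos": True},
    {"id": 4, "group": "C", "hyperonym": "speak", "cos": False},
    {"id": 5, "group": "B", "hyperonym": "separate", "cos": True},
]
kept = select_prototypical(snap, fits_definition=lambda s: s["cos"])
print([s["id"] for s in kept])  # -> [2, 3, 5]
```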

The resulting list of senses that fit the prototypical definition of break verbs for each verb is displayed in Table 4.10 below:

VERB      SYNONYMS
Break     Sense 2, Sense 3, Sense 36, Sense 42, Sense 43, Sense 54, Sense 57
Crack     Sense 1, Sense 6, Sense 7, Sense 10
Crash     Sense 5, Sense 8
Crush     Sense 2, Sense 4, Sense 8
Fracture  Sense 3, Sense 4, Sense 5, Sense 6
Rip       Sense 1
Shatter   Ø
Smash     Sense 2, Sense 4, Sense 6, Sense 8, Sense 10
Snap      Sense 2, Sense 3
Splinter  Sense 2, Sense 3
Split     Sense 1, Sense 2, Sense 5
Tear      Sense 1, Sense 2

Table 4.10 List of the senses of WordNet that fit the prototypical definition of break verbs

Once the prototypical senses for each verb have been identified, I have checked all the examples of the corpus against them. Thus, all the examples of snap whose meaning complies with either sense 2 or sense 3 have been classified as prototypical, whereas those that differ from them are either non-prototypical or less-prototypical.

Having classified all of the examples of the corpus, I have proceeded to the analysis of the argument and event structures of the three classes. In the following sections, I describe the results although, given that this is a study of verbs of pure change of state, I focus mainly on the argument and event structures of the prototypical examples. The other two classes are dealt with very briefly, paying special attention to points of convergence and divergence with respect to the prototypical examples.

4.2.1 Prototypical examples

In this section, I examine the results obtained from the analysis of the argument and event structures of those examples that relate to a pure change of state or, in other words, to a change in the material integrity of some entity. Following the pattern displayed in table 3.1 in chapter 3, I will start by commenting on the data obtained from the quantitative analysis of the arguments of the verbs. I then move on to the qualitative analysis, and I finish with the data drawn from the syntagmatic analysis.

4.2.1.1 Argument structure

4.2.1.1.a Syntactic valence

As explained in chapter 3, the syntactic valence of a verb is the number of arguments it takes that are overtly expressed in the syntax. However, the corpus of examples under analysis has brought to my attention a number of special cases in which the syntactic valence of the verb may change due to certain grammatical processes. The passive is one of them, but it differs from the others – imperatives and clauses with implicit subject (or gapping) – in a significant way: whereas the passive may have an overt argument which does not count for the syntactic valence of the verb because of its adjunct condition, the

imperatives and clauses with omitted or coordinated subject have an argument which is not overtly expressed in the syntax. I illustrate imperatives and clauses with omitted or coordinated subject in (23).18

(23) Imperatives and clauses with omitted or coordinated subject

a. ‘Break it up,’ said the club wit. (Imperative clause)

b. So then break that down, you are talking at least ten pounds a call aren't we? (Imperative clause)

c. The plane touched down, bounced up again, slewed sideways and skidded along the runway, breaking up as it did so; the port wing broke off and the rest of the plane turned over on top of it. (clause with omitted subject)

d. Wishart goes on to emphasize the importance of the electric media in breaking the hegemony of notation, for they enable us to capture the actual sounds, in all their inflectional complexity — freed from the ‘filtering’ effects of notation — and in experiential rather than spatialized time. (clause with omitted subject)

e. She took two pills from a bottle by the bed on which she had thrown herself and crushed them between the pages of Mansfield Park. (clause with coordinated subject)

f. Over the past two years, Adams has suffered compressed vertebrae, a broken right leg, fractured two ribs and punctured a lung, received two black eyes, a dislocated shoulder, and more recently he smashed his left knee. (clause with coordinated subject)

18 Van Valin and LaPolla (1997: 291) treat the by-phrase of passive sentences, like adjuncts, as occurring in the periphery. Thus, for a sentence such as ‘Sally was surprised by Mary’, they put forward the following clause structure: [CL [C Sally [N was surprised]] [P by Mary]].


In all cases, the subject can be derived from the context. Thus, in the imperatives (23.a) and (23.b), the subject is you, whereas in the clauses with omitted subject (23.c) and

(23.d) the subjects are the plane and the electric media, respectively, and in the clauses with coordinated subject (23.e) and (23.f) the subjects are she and Adams, respectively.

This phenomenon of omission can only be explained by appeal to the notion of pivot:

Van Valin and LaPolla (1997) reject the universality of the grammatical notions subject and object and suggest that grammatical (syntactic) relations must be defined in terms of restricted neutralizations of semantic roles for syntactic purposes.19 The argument that bears the privileged grammatical function is referred to as pivot, and it is construction-specific. In English, the omitted argument in imperatives and participial clauses is the pivot of those constructions, since there is a restricted neutralization with respect to the omitted NP. In other words, both the actor and undergoer arguments can be omitted or matrix-coded. This is illustrated in (24):

(24) Examples taken from Van Valin and LaPolla (1997: 264)

a. The student watched TV while eating pizza.

b. The student looked out the window while being questioned by the police.

As can be seen in the examples, the missing NP in participial clauses is always interpreted as subject, no matter whether it is actor or undergoer, although in English it is necessary to use a passive construction for the latter to be the pivot of the construction.

19 See chapter 6 in Van Valin and LaPolla (1997) for a full account of grammatical relations in the RRG theory.

Going back to the question of the syntactic valence, and in line with the imperatives, the two subordinate clauses in (24.a) and (24.b) will have a syntactic valence of 0, for although the missing element is the pivot and can therefore be easily identified, it is not overtly expressed in the syntax. Thus, the syntactic valence of the examples in (25) is as follows:

(25)

a. ‘Break it up,’ … (syntactic valence: 1)

b. So then break that down, … (syntactic valence: 1)

c. …, breaking up as it did so; … (syntactic valence: 0)

d. … in breaking the hegemony of notation, … (syntactic valence: 1)

e. … and crushed them between … (syntactic valence: 1)

f. … a broken right leg, fractured two ribs and … (syntactic valence: 1)

Summing up, the syntactic valence of imperative clauses, clauses with omitted subject and clauses with coordinated subject will be 1 for causative constructions and 0 for the inchoative version of the alternation.

As regards passive constructions, no matter whether the agent is overtly expressed in the syntax or not, the syntactic valence is reduced by 1 with regard to their active counterpart. Thus, in both (26.a) and (26.b) the syntactic valence is 1, even if the effector is present in (26.a) – erosion – and omitted in (26.b):

(26)

a. The whole place is a shambles of falling rock, soft shaly rock, slaty rock; everything has been split apart by erosion.

b. The cage was now a few feet off the ground and the noise, which sounded like metal being torn apart, was almost deafening.

Besides the three phenomena that I have just discussed, there are other factors that play a part in the definition of syntactic valence. One of them is the use of a preposition to introduce the argument(s) of the predicate. Van Valin and LaPolla (1997) and Van Valin (2005) distinguish three types of arguments: direct, oblique, and argument-adjunct. They are illustrated in (27).

(27) Argument types according to Van Valin and LaPolla (1997) and Van Valin (2005)

a. Direct arguments: Javier has already read that book.

b. Oblique core arguments: Ana sent the package to Eli by snail mail. (Ana sent Eli the package by snail mail.)

c. Argument-adjunct: Laura ran to the bus stop.

Direct arguments are those which are morphologically unmarked or not introduced by a preposition (see (27.a) for illustration); in principle, they count for both the semantic and the syntactic valence. Oblique core arguments, by contrast, are marked by a non-predicative preposition in English, and they can occur in the core without a preposition.

Hence they also count for both the semantic and syntactic valence. They are illustrated in (27.b). Argument-adjuncts are marked by a predicative preposition and cannot occur without it in the core. In this respect, they differ from oblique core arguments, which

have the possibility of appearing in the core as direct arguments (see (27.b) above).

Although at first sight they might resemble prepositional adjuncts, they differ from the latter in that they introduce an argument into the core and they either share an argument with the main predicate or they play a part in the logical structure of the verb, as shown in (28). Argument-adjuncts also differ from oblique core arguments in that their preposition is predicative by contrast to that of oblique arguments such as to Mary. This implies that in the case of argument-adjunct prepositions the meaning of the argument is not derived from the logical structure of the verb.

(28) Logical structure with argument-adjunct:

Ana put the book on the shelf: [do’ (Ana, Ø)] CAUSE [BECOME be-on’(shelf, book)]

The occurrence of an argument-adjunct will change both the semantic and syntactic valence by [+1], although the latter will be marked separately because, as Van Valin and

LaPolla (1997) themselves acknowledge, the status of argument-adjuncts as arguments is not one hundred percent clear. In fact, one of the main problems with argument- adjuncts is that they are difficult to identify. I discuss this issue in section 4.2.1.1.c.

Another interesting question brought up by some examples of the corpus which bears on syntactic valence is the position in which certain arguments appear. RRG analyses the clause as having a layered structure consisting of the nucleus, the core and the clause, plus the periphery.

Additionally, syntactic positions like the ‘precore slot’ and the ‘postcore slot’ are distinguished. The ‘precore slot’ [PrCS] occurs clause-internally but core-externally, and it is the position in which question words appear. This has certain implications for syntactic valence. Van Valin and LaPolla (1997: 146) define the syntactic valence of a verb as ‘the number of overt morphosyntactically coded arguments it takes’; however,

they do not specify where. What is unclear here is whether they mean the number of overt arguments in the core or also in the pre- and postcore positions, that is, whether question words are syntactic arguments of the verb. That leads us to the fundamental question of what criteria define the syntactic valence of a verb. Van Valin and LaPolla

(1997) and Van Valin (2005) do not provide a fixed set of criteria for that. Nonetheless, putting together the information spread throughout their work, the following can be concluded:

1. Syntactic arguments can be either direct or oblique.

2. Syntactic arguments other than the subject must be able to undergo passivization.

3. Syntactic arguments must be overtly expressed in the syntax. This leaves out of the analysis the pivots of imperative clauses and of those with omitted subjects or objects, even if they are recoverable from the context.

4. Syntactic arguments do not occur in the periphery.

The question now boils down to whether those arguments which are expressed in the precore (or postcore) position count for the syntactic valence of the verb or not. RRG explicitly leaves out only those arguments which appear in the periphery, like the by-phrase effector of passive clauses. Given that question words can be semantic arguments of the verb and that they are non-peripheral, here they will also be regarded as part of the syntactic valence of the verb. Thus, the syntactic valence of example (29) is 1 [+ 1 argument-adjunct]:

(29) ‘What withstands the blow is good; what smashes to smithereens is rubbish.’
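The valence criteria discussed in this section can be sketched as a small counting procedure over annotated clauses. The argument records and field names below are invented for illustration; they are not RRG notation.

```python
# Hypothetical sketch of the syntactic-valence criteria discussed above:
# only overt, non-peripheral arguments count; argument-adjuncts are
# tallied separately; question words in the precore slot count.

def syntactic_valence(arguments):
    """Return (core valence, argument-adjunct count) for an annotated clause."""
    core = sum(1 for a in arguments
               if a["overt"]                              # omitted pivots excluded
               and a["slot"] in ("core", "prcs", "pocs")  # periphery excluded
               and not a["arg_adjunct"])
    aaj = sum(1 for a in arguments if a["overt"] and a["arg_adjunct"])
    return core, aaj

# 'Break it up' (imperative: the pivot 'you' is not overtly expressed)
break_it_up = [
    {"overt": False, "slot": "core", "arg_adjunct": False},  # omitted 'you'
    {"overt": True,  "slot": "core", "arg_adjunct": False},  # 'it'
]
# 'everything has been split apart by erosion' (passive: by-phrase in periphery)
split_apart = [
    {"overt": True, "slot": "core",      "arg_adjunct": False},  # 'everything'
    {"overt": True, "slot": "periphery", "arg_adjunct": False},  # 'by erosion'
]
# 'what smashes to smithereens' (question word in the precore slot,
# plus the argument-adjunct 'to smithereens')
smashes = [
    {"overt": True, "slot": "prcs", "arg_adjunct": False},  # 'what'
    {"overt": True, "slot": "core", "arg_adjunct": True},   # 'to smithereens'
]
print(syntactic_valence(break_it_up))  # -> (1, 0)
print(syntactic_valence(split_apart))  # -> (1, 0)
print(syntactic_valence(smashes))      # -> (1, 1)
```

The last result mirrors the value given for (29): a syntactic valence of 1 plus 1 argument-adjunct.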


Having sorted out these questions, the analysis has brought forward some interesting data that deserve some comment. In the first place, Van Valin and LaPolla

(1997) and Van Valin (2005) agree that all simple English clauses must have subjects.

Thus, a verb like rain with no semantic arguments has nonetheless a syntactic valence of 1 (Van Valin and LaPolla 1997: 147). I emphasize the word simple because the analysis has exposed many cases in which the syntactic valence is 0 (see (30) below).

What might at first seem incorrect or ungrammatical finds an explanation in the fact that most of them are clauses which are part of complex sentences whose subject overtly appears in another clause, as shown in (30.a) and (30.b). In these examples, the subjects of cracking and crash are not overtly expressed in the corresponding clause; however, they can be easily identified in one of the previous clauses in the same sentences. In the case of (30.a), the subject is bars, and in (30.b), him. The only exception to Van Valin and LaPolla’s statement would be intransitive imperative clauses such as Run!, but no similar case has been found in the corpus under analysis.

(30) Examples taken from the corpus with syntactic valence 0

a. And Jay was pacing her attic, bars tightening and cracking around a heart that would not stop hurting; she could not lay her body down though it screamed for rest and knots of fury made her neck and shoulders a steely hunch like a vulture.

b. In alarm she glanced back at Samson and saw him hit out wildly, lose his balance and crash back, knocking down the men behind like a row of ninepins.

c. They are generally thicker and harder-fired than wall tiles, to enable them to stand up to heavy wear without cracking.

d. As he delivered the first ball of his third over on that fateful sunny afternoon, his left knee split apart, fracturing in two.

e. For years whenever I've got out of bed in the middle of the night — about whatever pursuit you get out of bed in the night for — my right ankle has made cracking noises, like kindling being snapped.

f. BROKEN mast, broken rudder and broken boom — everything seems to have snapped in Japan's maiden America's Cup challenge during the past 11 months except the team's spirit.

Notice that, apart from the clauses with omitted and coordinated subjects discussed at the beginning of this section, here we find examples of clauses with raised subject, as in (30.f), and passive constructions with omitted subject, as in (30.e).

Given that in this dissertation I am only analysing the clauses – not sentences – whose predicates contain a break verb, and taking into account that only those arguments which are overtly expressed in the syntax are regarded as part of the syntactic valence of the verb, it is not illogical to define the syntactic valence of verbs such as those in (30) as 0. A syntactic valence of 0 has effects on the analysis, for only syntactic arguments can be analysed in terms of entity type, reference or nominal aspect. In other words, only those arguments which are regarded as part of the syntactic valence of the verb have been examined in the qualitative part of the analysis.

Another interesting issue relates to the lack of correlation between the semantic and the syntactic valence of the verb. As Van Valin (2005: 8) explains, core arguments are related to the arguments in the semantic representation of the verb. However, there are many cases, not only in English but also in other languages, in which such a correlation does not hold. That is the case of the dummy it in it is snowing, which occurs in the core but is not a semantic argument of snow (Van Valin 2005: 8). Thus, although the syntactic valence of a verb is semantically motivated, there is no absolute correlation between the two. Van Valin and LaPolla (1997: 147) illustrate the non-identity of semantic and syntactic valence in the following table:

Verb | Semantic valence | Syntactic valence
rain | 0 | 1
die | 1 | 1
eat | 2 | 1 or 2
put | 3 | 3 or 2
Table 4.11 Non-identity of semantic and syntactic valence (taken from Van Valin and LaPolla 1997: 147)

Except for rain, all the verbs in the table have a syntactic valence equal to or lower than their semantic valence. However, the analysis has provided one example which points in the opposite direction:

(31) It broke my heart to see that little face and big eyes.’20

While the semantic valence of break in this example is 2, the syntactic valence is 3: it, my heart, and to see that little face and big eyes. Hence we have here one more example of the non-identity of semantic and syntactic valence, one which had not been pointed out before.

20 I use this example to illustrate this point; nonetheless, it must be noted that it does not belong in the group of prototypical examples.

I would like to conclude this section by remarking that the analysis has revealed that the majority of prototypical examples under analysis have a syntactic valence of 1 in the case of inchoative constructions and 2 in their causative versions. These results only undergo variation when the example is a passive construction, in which case the syntactic valence is reduced by 1, and when there is some argument-adjunct although, as I have said above, the latter are counted separately because their status as arguments is not completely clear. The conclusions leave out those cases in which these arguments are not overtly expressed. I summarise all the possibilities below:

 | Basic syntactic valence | +Argument-adjunct | Passive | Non-explicit subject
Inchoative clause | 1 | (+1)21 | -1 | ∄
Causative clause | 2 | (+1) | -1 | -1
Table 4.12 Syntactic valence of prototypical examples

4.2.1.1.b. Semantic valence

The definition of the semantic valence of the verb has also raised several questions, namely whether there is a basic semantic valence for verbs that does not vary, how it varies if it does vary, and what factors trigger this variation. Van Valin and LaPolla (1997: 112) discuss some of these questions when dealing with verb alternations. There are verbs like eat which behave both as an activity and as an active accomplishment.

The question is which of the two is to appear in the lexical entry for that verb. Van Valin and LaPolla (1997: 179) contemplate the idea of positing two different lexical entries for verbs that participate in Aktionsart alternations like, for example, break, which alternates between an accomplishment and a causative accomplishment.

21 I use brackets because this argument is added to the already existing syntactic valence of the verb but counted separately. For example, when an argument-adjunct is added to a predicate of basic syntactic valence 1, the resulting syntactic valence is 1 (+1).

However, they discard this possibility for two main reasons: (1) it is uneconomical, since it would greatly increase the number of lexical entries in the lexicon; and (2) it would sacrifice the predictiveness which is expected of any lexicon.

Therefore, Van Valin and LaPolla point at the identification of the verb’s basic lexical meaning as a solution. For example, verbs like run or write are activity verbs which can be used as accomplishments (Van Valin and LaPolla 1997: 112). Accordingly, they would be represented as activities in their lexical entries in the lexicon, and each of these lexical entries would have (at least) two logical structures: one for the activity interpretation of the verb, and another for the accomplishment interpretation. This complies with Koontz-Garboden’s (2006: 4) Principle of Monotonic Composition (PMC), according to which ‘Word meaning is constructed monotonically on the basis of event structure constants and operators.’ This principle entails that derivation can only add meaning; the reverse – meaning-reducing derivation – is not possible. In other words, causative changes of state can be derived from non-causative ones, but not vice versa.
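The monotonic character of this derivation can be sketched programmatically. The following is an illustrative sketch only: the function names (inchoative_ls, causativize) are my own, the logical structures are plain strings, and nothing here is part of RRG formalism itself.

```python
# Illustrative sketch of the Principle of Monotonic Composition: the causative
# logical structure is built by ADDING a causal subevent to the inchoative LS,
# never by removing material. Function names are hypothetical, not RRG notation.

def inchoative_ls(pred, patient):
    """Basic (inchoative) logical structure of a change-of-state verb."""
    return f"BECOME {pred}' ({patient})"

def causativize(ls, effector="x"):
    """Derive the causative LS by embedding the inchoative LS under CAUSE."""
    return f"[do' ({effector}, Ø)] CAUSE [{ls}]"

base = inchoative_ls("broken", "y")
caus = causativize(base)
print(base)   # BECOME broken' (y)
print(caus)   # [do' (x, Ø)] CAUSE [BECOME broken' (y)]

# The derivation is monotonic: the inchoative LS survives intact inside
# the causative LS, so nothing has been removed.
assert base in caus
```

The reverse operation (deleting the CAUSE subevent to derive the inchoative from the causative) is precisely what the PMC rules out.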

In the case of break, Van Valin and LaPolla (1997: 184) consider the accomplishment reading as the basic meaning, the causative accomplishment being derived from the former by means of a lexical rule. Hence, according to Van Valin and LaPolla (1997), the examples from the corpus have a basic semantic valence of 1 which will be augmented or reduced as follows:

 | Basic semantic valence | Causative counterpart | +Argument-adjunct | Passive | Non-explicit subject (gapping)
Inchoative clause | 1 | +1 | +1 | =22 | ∄
Table 4.13 Semantic valence of prototype examples

Break verbs have a basic semantic valence of 1 in their inchoative version which becomes 2 when the causal subevent is added. If there is an argument-adjunct expressing the result of the event or some other type of information, the semantic valence will be augmented by 1 for each argument-adjunct – there may be more than one.

In principle, the semantic valence does not vary if one of its arguments is not overtly expressed in the syntax or if the clause is expressed in its passive form, although this last point is not completely clear. When dealing with the syntactic and semantic valence of passive constructions, Van Valin and LaPolla (1997: 147) remark: ‘It is not necessary, however, for the semantic valence to change, as one can also say John was killed by the man and The sandwich was eaten by the boy. The by-phrases are not regarded for the syntactic valence because they are peripheral adjuncts; however, the NP is a semantic argument of the verb.’ This statement is slightly ambiguous: does it is not necessary mean that it may change? In other words, Van Valin and LaPolla only mention those cases in which the actor NP is overtly expressed in the syntax. In that case, it does not count as part of the syntactic valence but it does count for the semantic valence. However, they do not specify whether the agent counts as part of the semantic valence of the verb when it is not overtly expressed in the syntax. In my view, it does,

22 Here, ‘=’ means ‘it remains the same’.

given that, whether it is overtly expressed or not, it is part of the logical structure of the verb.

Putting all this information together, the syntactic and semantic valence of break verbs can be defined as follows:

 |  | Basic valence | +Argument-adjunct | Passive | Non-explicit subject (gapping)
Inchoative clause | Syntactic valence | 1 | (+1) | -1 | ∄
 | Semantic valence | 1 | +1 | = | ∄
Causative clause | Syntactic valence | 2 | (+1) | -123 | -1
 | Semantic valence | 2 | +1 | = | =
Table 4.14 Syntactic and semantic valence of prototypical examples following Van Valin and LaPolla (1997)

I illustrate all the options in the table in (32):

(32)
a. Inchoative clause with basic valence: Trying to establish why a component snapped. (Syntactic valence: 1; semantic valence: 1)
b. Inchoative clause with argument-adjunct: …, the soft stems crushing to pulp under their feet. (Syntactic valence: 1+1; semantic valence: 2)
c. Inchoative clause with non-explicit subject with argument-adjunct: …his left knee split apart, fracturing in two. (Syntactic valence: 0+1; semantic valence: 2)
d. Causative clause with basic valence: … I'd have probably broken my neck, … (Syntactic valence: 2; semantic valence: 2)
e. Causative clause with argument-adjunct: I watched it float in slow motion towards the ceramic shower base, which it cracked neatly in two. (Syntactic valence: 2+1; semantic valence: 3)
f. Passive causative clause with basic valence: …, but his spirit had not been broken. (Syntactic valence: 1; semantic valence: 2)
g. Causative clause with non-explicit subject: … on the older woman, crushing her skull. (Syntactic valence: 1; semantic valence: 2)

23 Passives always reduce their valence by 1, whether with or without an explicit (adjunct) subject.

In short: the semantic valence of the inchoative reading is 1; the semantic valence of the causative reading is 2; and if there is some argument adjunct, each of those valences is augmented by 1 (for each argument-adjunct).
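This valence bookkeeping can be sketched as a small computation. The following is an illustrative sketch only: the function valences and its parameter names are my own, not part of RRG; since passive inchoatives do not occur (∄ in the table), the passive flag is only meaningful for causative readings.

```python
# Sketch of the valence bookkeeping for break verbs (names hypothetical).

def valences(reading, arg_adjuncts=0, passive=False, gapped_subject=False):
    """Return (syntactic, semantic) valence of a break-verb clause.

    reading: 'inchoative' or 'causative'
    arg_adjuncts: number of argument-adjuncts in the clause
    """
    base = 1 if reading == "inchoative" else 2
    semantic = base + arg_adjuncts       # semantic valence never shrinks
    syntactic = base + arg_adjuncts
    if passive:
        syntactic -= 1                   # by-phrase effector is peripheral
    if gapped_subject:
        syntactic -= 1                   # subject recovered from another clause
    return syntactic, semantic

# (32.d) "I'd have probably broken my neck" -> causative, basic valence
print(valences("causative"))                 # (2, 2)
# (32.f) "his spirit had not been broken"   -> passive causative
print(valences("causative", passive=True))   # (1, 2)
# (32.c) "his left knee split apart, fracturing in two"
print(valences("inchoative", arg_adjuncts=1, gapped_subject=True))  # (1, 2)
```

A gapped inchoative with no argument-adjunct comes out with syntactic valence 0, matching the examples in (30).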

4.2.1.1.c. Argument-adjuncts

Argument-adjuncts are difficult to classify. They resemble adjuncts in that they are marked by a predicative preposition, they are not eligible for macro-role assignment, and their semantics is not derived from that of the verb; but on the other hand they are like arguments in that they occur in the core, not in the periphery, and they introduce an argument rather than a modifier. Nonetheless, the argument they introduce is only indirectly at best an argument of the verb in the nucleus (Van Valin and LaPolla 1997: 161); that is to say, the logical structure of the verb is incomplete without it, but there is usually more than one option to fill that gap. In the case of put, for example, there is a wide range of argument-adjunct prepositions that can introduce the argument, such as on (the table), under (the bed), behind (the door), etc. (Van Valin and LaPolla 1997: 160). Furthermore, the preposition can also be replaced by an intransitive preposition in other cases, as in ‘Put it down’. Example (33) from the corpus also illustrates this idea:

(33) GUIL smashes him down.

In any case, the logical structure of the verb does not include the argument-adjunct; rather, the preposition adds its own semantics to that of the verb, resulting in a derived logical structure, which is the sum of the logical structure of the verb plus the logical structure of the preposition.

The difficulty of classifying argument-adjuncts is reflected in the difficulty of identifying some of the argument-adjuncts of break verbs. Moreover, sometimes verbs have more than one argument-adjunct, as in Sam ran from his office to the store. Both from his office and to the store are argument-adjunct prepositional phrases (Van Valin and LaPolla 1997: 161). This adds one more question, namely how freely argument-adjuncts can be added to clauses. Van Valin and LaPolla (1997) argue that there seem to be three basic situations in which the logical structure of the verb can be augmented by means of argument-adjuncts. Van Valin and LaPolla (1997: 162) put it in the following way:

1. specifying the range of motion with a verb of motion (e.g. run, walk) or induced motion (e.g. push, pull, move), which includes specification of a SOURCE, a PATH and/or a GOAL;

2. specifying an IMPLEMENT with certain types of activity verbs, e.g. eat, look at, sew, fight, write;

3. specifying a beneficiary of some kind with for.

To this I can add at least one more situation in which the logical structure of break verbs can be augmented:

4. specifying the RESULT of a verb of change of state.

Example (34) illustrates this idea, both for inchoative and causative expressions, respectively:

(34) Result argument-adjuncts
a. Their feet left clear tracks through the floor of bluebells as if on dark snow, the soft stems crushing to pulp under their feet.
b. Jack plucked the flower out of his morning coat and closed his fist slowly over it, crushing it into a pulp.

Apart from result, the argument-adjuncts of break verbs specify situations of type 1; more precisely, they specify either source, path or goal, although here I have named them origin, path and destination because the latter seem to me more specific to motion events, whereas the former can also be used to refer, for instance, to transfer events and physical sensations in the case of source. Thus, origin refers to the place from which something moves, path refers to the object or place traversed in the motion, and destination refers to the place to which something moves. (35) illustrates origin argument-adjuncts; (36) illustrates path argument-adjuncts; and (37) illustrates destination argument-adjuncts.

(35) Origin argument-adjuncts
a. On his back like a beetle, three sat on him while two ripped his daysack from his chest.
b. The trick is easy if there are birch trees about; then there is always an abundance of dead twigs which can be snapped easily off the trunks.

(36) Path argument-adjuncts
a. LETHAL INCENDIARY grade performance on and off stage (Keith Moon brings chat show host Russell to the edge of a Harty attack, terrorist Townshend smashes guitar through a club roof).
b. On arrival the patrol members find that two men have apparently just crashed a lorry through a fence and swum across the River Spree.

(37) Destination argument-adjuncts
a. Such bombs often smash into other bigger, stationary boulders at the base of the cone, shattering into smithereens, but in doing so they leave their own mark on the boulder.
b. The great metal roof-tree which held the house together had bent inwards as though from some giant's blow, and listening, Rachel could hear tiles still slithering down the slope and crashing into the street forty feet below.

Note that, in order to be part of the group of prototypical examples, all the examples with argument-adjunct must preserve the meaning of change of state. From this it follows that the examples with an origin, path or destination argument-adjunct will only be regarded as part of the group of prototypical examples as long as they preserve the meaning of change of state together with the new one. For example, in (38) below, the argument-adjuncts from shelves and into Guy’s amiable face indicate a change of position but not necessarily a change of state. However, in (39) there is a change of position plus a change of state. Only the latter belongs to the group of prototypical examples.

(38)
a. Saturday afternoons, when the streets were solid with white faces, was a carnival of consumerism as goods were ripped from shelves.
b. He saw his hand come up and smash itself into Guy's amiable face.

(39)
a. When Marevna finally went home at dawn: `Max Jacob stood with a missal in his hand and Modigliani was still on his feet methodically tearing the rest of the wallpaper off the walls and singing an Italian song."
b. On his back like a beetle, three sat on him while two ripped his daysack from his chest.
c. The kite might start to climb, then perform a curving sweep up to about 45 degrees, execute a beautiful turn and point head downwards until it crashes into the ground.
d. LETHAL INCENDIARY grade performance on and off stage (Keith Moon brings chat show host Russell to the edge of a Harty attack, terrorist Townshend smashes guitar through a club roof).

In (38.a) the goods are not necessarily broken when ripped from the shelves. Rip just indicates the violent way in which the change of position has been carried out. In example (38.b), the hand is not broken; that is, it does not suffer a change of state. All it does is land on Guy’s amiable face; that is, it undergoes a change of position. The examples in (39) are different, because they imply both a change of state and a change of position. Thus, in (39.a) the wallpaper undergoes a change of state (it is being ripped; that is, broken) and then a change of position (it is being removed from the wall). In (39.b), the most logical interpretation seems to be that the daysack (or at least its strap) will be broken when ripped. Otherwise, the verb rip, which indicates some violence in the performance of the action, would not be appropriate in this context – although this is open to discussion. (39.c) is more open to interpretation, because the kite may or may not have broken when crashing into the ground.

These examples add new semantics to the original semantics of break verbs. They are a merger of change of position + change of state. I come back to this idea in section 4.2.1.2.f.

4.2.1.1.d Periphery

The identification of the elements of the periphery almost immediately raised problems due to the inconsistent treatment Van Valin and LaPolla (1997) give to this layer. Van Valin and LaPolla (1997: 26) define the periphery, in a rather broad sense, as that layer of the clause that contains all those elements which are not arguments of the predicate. Furthermore, they specify that the periphery ‘contains the elements of the clause which are left out of the core’, and make explicit reference to time adverbials such as in five minutes, for five minutes, yesterday and tomorrow. The question now arises what happens with manner, pace and aspectual adverbs. According to Van Valin and LaPolla (1997: 162), these adverbs are non-peripheral, because they modify other layers of the clause (see Figure 4.3). Yet, according to their own definition, the core is formed by the predicate and its arguments (Van Valin and LaPolla 1997: 26), which means that the adverbs mentioned above cannot form part of it. Therefore, RRG’s representations do not seem to have a slot for those adverbs.

[Tree diagram showing the constituent and operator projections of the layered structure of the clause, not reproduced here]

Figure 4.3 Syntactic representation of ‘Evidently, Leslie has slowly been completely immersing herself in the new language’ (Van Valin and LaPolla 1997: 166)

As Ortigosa (2005) points out, Van Valin (2005: 21) himself has acknowledged the existence of certain inconsistencies in the way adverbs and phrasal adjuncts are represented in the layered structure of the clause. Although he does not go into depth with this idea, he proposes a way of giving adjuncts a consistent treatment that I will call the layered periphery. The layered periphery consists in dividing the periphery into different layers that parallel those of the clause. Thus, each layer of the clause will have its own periphery, including the adverbs and other modifiers at that level. More precisely, nuclear adverbs will be placed in the NUC-PERIPHERY (for instance, partially); core-level adverbs will be contained in the CORE-PERIPHERY.24 Ortigosa notes that this entails the co-existence of two peripheries, one before and one after the CORE node in languages like English; the pre-core periphery contains only adverbs, and the post-core one adverbs and PP adjuncts. Finally, sentence-level adverbs and adverbial clauses such as evidently in Leslie has evidently been slowly immersing herself completely in the new language (Van Valin 2005: 22) will be placed in the CLAUSE-PERIPHERY. Ortigosa presents a tentative representation for the clause structure in RRG (see Table 4.15 below) and illustrates it at the constituent projection (see Figure 4.4 below):

CLAUSE-PERIPHERY. Ortigosa presents a tentative representation for the clause structure in RRG (see Table 4.15 below) and illustrates it at the constituent projection

(see Figure 4.4 below):

Layer | Semantic unit | Operators | Adjuncts
Nuclear predication | Predicate + arguments (also argument-adjuncts) + non-arguments (→ level 1 adjuncts) | imperfective aspect, progressive aspect, negation | aspectual, pace, manner, etc.
Core | Predicate + arguments + non-arguments (→ level 2 adjuncts) | perfective aspect, habitual aspect, tense, objective mood | temporal, spatial, quantificational, etc.
Clause | Predicate + arguments + non-arguments (→ level 3 adjuncts) | subjective modality, evidentials | attitudinal adjuncts
Sentence | Predicate + arguments + non-arguments (→ level 4 adjuncts) + extra-clausal constituents | illocutionary force | illocutionary adjuncts
Table 4.15 Layers, operators and adjuncts in RRG
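The mapping from adjunct type to periphery layer described in Table 4.15 can be sketched as a simple lookup. This is an illustrative simplification only: the names PERIPHERY_OF and periphery are my own, and the classification is not exhaustive.

```python
# Sketch of the layered periphery: each adjunct type attaches to the
# periphery of one layer of the clause (simplified, names hypothetical).

PERIPHERY_OF = {
    "aspectual": "NUC-PERIPHERY", "pace": "NUC-PERIPHERY",
    "manner": "NUC-PERIPHERY",
    "temporal": "CORE-PERIPHERY", "spatial": "CORE-PERIPHERY",
    "quantificational": "CORE-PERIPHERY",
    "attitudinal": "CLAUSE-PERIPHERY",
}

def periphery(adjunct_type):
    """Return the periphery layer an adjunct of the given type modifies."""
    return PERIPHERY_OF[adjunct_type]

print(periphery("manner"))    # NUC-PERIPHERY
print(periphery("temporal"))  # CORE-PERIPHERY
```

The point of the lookup is that the layer an adverb modifies follows from the periphery it is placed in, which is what makes the duplication at the operator projection dispensable.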

24 Ortigosa (2005) refers to these layers as PERIPHERY-NUC, PERIPHERY-CORE and PERIPHERY- CLAUSE, respectively; however, I will refer to them as shown in the text.

[Tree diagram showing the layered periphery at the constituent projection, not reproduced here]

Figure 4.4 Constituent projection in RRG (Ortigosa 2005)

This way, all non-arguments, whatever type they are – that is, adverbs, prepositional phrases (PPs), and adverbial clauses – will be represented in RRG in a systematic fashion. It must also be noted that, as Ortigosa points out, this idea is not new, as it follows Dik’s (1997) model.

The impact of this new arrangement of the periphery goes beyond Ortigosa’s conclusions. Ortigosa points at the consequences the layered periphery may have for the theory of clause linkage, especially for embedding. However, she does not mention one major corollary of this rearrangement: the uneconomical duplication of adverbs at both the constituent and the operator projections is thus avoided. In the new representation, I suggest that adverbs be represented at the constituent projection only, as the information regarding which layer of the clause they modify is evident from the layer of the periphery they belong to. Thus, if we go back to the example presented in Figure 4.3, I suggest a new representation that I display in Figure 4.5. In order to save space, I suggest the use of ‘N-PER’ for ‘NUC-PERIPHERY’, ‘CO-PER’ for ‘CORE-PERIPHERY’, and ‘Cl-PER’ for ‘CLAUSE-PERIPHERY’.

[Tree diagram showing the new representation with N-PER, CO-PER and Cl-PER periphery nodes, not reproduced here]

Figure 4.5 New syntactic representation of ‘Evidently, Leslie has slowly been completely immersing herself in the new language’

As can be observed in the figure, those elements that occupy one of the special syntactic positions - left/right-detached positions or pre/postcore slots - are not regarded as part of the periphery.

Having clarified these theoretical issues, let us now focus on their impact on the corpus itself. Adjuncts can be of three main types: bare adverbials, temporal and locative prepositional phrases which are adverbial in nature, and PPs that introduce a clause which is also adverbial in nature. The corpus has provided several examples of other peripheral types; nonetheless, here I focus on the most recurrent patterns only.

(40) illustrates the three types.

(40)
a. Bare adverbial: Such bombs often smash into other bigger, stationary boulders at the base of the cone, ...
b. PP of adverbial nature: During one fiery bout she ripped the telephone out of the wall and smashed it over his head.
c. PP + adverbial clause: The helicopter is believed to have crashed after the gases caused engine failure.

In (40.c) the adjunct is in fact a subordinating conjunction which is treated as a predicative preposition that introduces a clausal argument. Van Valin (2005: 194) refers to the relationship existing between this adverbial subordinate clause and the core of the matrix clause as ad-core subordination. It must be noted, by contrast, that adverbial clauses marked by because, if or although in English are not in the core periphery. They are cases of peripheral subordination at clause level and, as Bickel (1993, 2003; in Van Valin 2005: 194) points out, these constructions are different from ad-core subordinate clauses in several respects: (1) they do not occur in the core periphery but rather in the clause periphery – hence, Van Valin (2005) refers to them as ad-clausal subordination; and (2) ad-core subordinate clauses express the spatial and temporal relations of the event within the core, whereas ad-clausal subordinates express reason, concession and condition (Van Valin 2005: 196). In other words, these PPs modify the same layer as their equivalent adverbials. Furthermore, there is also a small number of adverbs and PPs that can occur in the periphery modifying the nucleus, like the aspectual adverb completely in (41).

(41) Leslie has evidently been slowly immersing herself completely in the new language. (Example taken from Van Valin 2005: 22)

Temporal and locative modifiers are assigned, by default, to the periphery.

However, they can also appear in other marked positions such as the precore slot or the left-detached position (Van Valin and LaPolla 1997: 426). When such modifiers appear in the left-detached position, they are separated from the rest of the clause by a comma, as in (42).

(42) Last month, Paul packed and left for Montreal.

In (43), by contrast, last month appears in its default position in the periphery:

(43) Paul packed and left for Montreal last month.

In (44) one week later is in the precore slot and is to be interpreted as contrasting with last month in the previous clause:

(44) Paul left last month, and one week later he returned.

All these distinctions have been borne in mind during the analysis and only those adverbial modifiers that appear in the periphery have eventually been considered. The vast majority of adverbial modifiers found are locative and temporal adverbials, but there are also quite a few examples of adverbials of manner, reason, purpose, epistemicity, consequence, aspect, pace and emphasis. They are illustrated below, in italics:25

25 Recall that in this analysis only the modifiers of the clause containing a break verb are considered.

(45) TIME: The material will succumb to whichever mechanism is the weaker; if it yields before it cracks the material is ductile, if it cracks before it yields it is brittle.

(46) LOCATION: It must be horrible for them when cars crash outside their homes.

(47) LOCATION + TIME: INVESTIGATORS have determined that an explosion destroyed the UTA DC-10 which crashed in the Sahara Desert on 19 September, killing all 171 people on board, but there is no evidence that a bomb was responsible.

(48) REASON: He'd write two or three words, rip up the sheet because of an error and do this about five or six times.

(49) TIME + LOCATION + CONSEQUENCE: AT one o'clock in the morning a petrol bomb shattered the window in Kathy Sellers' front room shooting flames across the floor and up the curtains.

(50) PURPOSE: Several insurers split their investments so that buyers collect five separate contracts each for £1,000.

(51) PURPOSE + LOCATION: On fine days start cutting sprigs of herbs for drying, hanging them in a shed to dry out completely before crushing for storage in screw-top jars.

(52) MANNER: This delicate fern is preserved in a very fine-grained sandstone, which fractures rather irregularly.

(53) ASPECT + MANNER: Because the flow stress of glass is very high at room temperature and because glass is very susceptible to fracture by the spread of cracks, glass, and materials like it, nearly always fracture in the familiar brittle manner and we find it difficult to imagine anything different happening.

(54) MANNER + LOCATION + TIME: No, the true hard luck tale can be related by jockey Andrew Adams, 27, who missed the winning ride after smashing his left knee in a fall at Doncaster in February and remains on the sidelines for the rest of the season.

(55) EPISTEMIC: You have to pull very slowly and carefully, try to jerk it and you'll bend or possibly break something.

(56) PACE: 1/My usual method of working is to block in the largest shapes first and then gradually break down into smaller and smaller shapes.

(57) MANNER + MANNER + LOCATION/INSTRUMENT + MANNER: She takes his hand, and gently crushes each of his fingers in turn between her lips like buttered asparagus.

(58) EMPHASIS: We're just going up to Eastgate Bike Shop to try and get a cable that just snapped.

From this small sample it follows that break verbs are modified by all three main kinds of peripheral modifiers. Thus, there are cases of bare (and modified) adverbs, such as rather irregularly, possibly, gradually and gently in (52), (55), (56) and (57), respectively; of PPs such as outside their homes in (46), in the Sahara Desert and on 19 September in (47), in screw-top jars in (51) or in the familiar brittle manner in (53); and of clausal PPs, such as before it yields in (45) or so that buyers collect five separate contracts each for £1,000 in (50).

Besides this set of peripheral adverbials, which are rather prototypical, there are two more types of modifiers that deserve more careful attention. These are instruments and the effectors of passive constructions. The effectors of passive constructions are easily recognised because they are marked by the preposition by and are optional syntactic constituents of the clause. Although Van Valin (2005) does not say so explicitly, they modify the CORE of the clause. (59) illustrates this peripheral modifier:

(59) Effectors of passive constructions
a. LOCATION + EFFECTOR + TIME: Scientists at Boston University, using remote sensing techniques, have found that the desert pavement of Kuwait, southern Iraq and Saudi Arabia has been broken up over large areas by the massed movement of vehicles during the war.
b. EFFECTOR + TIME + MANNER: Sharks stayed outside the reef, while Trent knew Arab net fishermen in the Persian Gulf whose calf muscles had been ripped out by barracudas while they were standing on the coral with the water below their knees.

In (59.a) the effector is by the massed movement of vehicles, whereas in (59.b) the effector is by barracudas.

The case of instruments is slightly more complex. Instruments resemble argument-adjuncts in that they are intermediate cases between arguments and adjuncts. Like arguments, they can occur in the core without a preposition, as shown in (60).

(60) LS for A rock shattered the window. (Example taken from Van Valin 2005: 115):

[do’ (Ø, Ø)] CAUSE [[do’ (rock, Ø)] CAUSE [INGR shattered’ (window)]]

As can be seen, the effector is left unspecified and the instrument has been raised to subject position, thus becoming the effector. If the real effector were overtly expressed, the corresponding logical structure would be as follows:

(61) LS for Leslie shattered the window with a rock. (Example taken from Van Valin 2005: 115):

[do’ (Leslie, Ø)] CAUSE [[do’ (rock, Ø)] CAUSE [INGR shattered’ (window)]]
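The relation between (60) and (61) can be sketched as follows. This is an illustrative sketch under my own assumptions: the functions causal_chain_ls and subject_of are hypothetical helpers, and subject selection is reduced to "effector if specified, otherwise instrument".

```python
# Sketch of the causal chain in (60)/(61): effector -> instrument -> result.
# When the effector slot is unspecified (Ø), the instrument is the
# highest-ranking filled argument and surfaces as subject (names hypothetical).

def causal_chain_ls(effector, instrument, patient):
    """Build the full causal-chain LS; an unspecified effector appears as Ø."""
    eff = effector if effector else "Ø"
    return (f"[do' ({eff}, Ø)] CAUSE "
            f"[[do' ({instrument}, Ø)] CAUSE [INGR shattered' ({patient})]]")

def subject_of(effector, instrument):
    """Privileged syntactic argument: the effector if specified,
    otherwise the instrument raised to subject position."""
    return effector if effector else instrument

print(causal_chain_ls("Leslie", "rock", "window"))  # LS of (61)
print(causal_chain_ls(None, "rock", "window"))      # LS of (60)
print(subject_of(None, "rock"))                      # rock
```

The two print-outs of causal_chain_ls differ only in the first do' slot, which is what licenses the instrument-as-subject reading of (60).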

Instruments also resemble arguments in that, having been raised to subject, they can become the effector of a passive construction like any other subject argument of a transitive verb, as shown in (62):

(62) A great anger had heated up, one of Robertson's new windows had been shattered by a stone, and the womenfolk had made a move to drag the teacher out and throw him in the river.

In other words, instruments have syntactic impact (Mairal 2001-2002: 11).

On the other hand, instruments resemble adjuncts in their optional character.

They are not semantic arguments of the predicate in the nucleus, and they add their own semantics to the verb.

Van Valin (2005: 58) leaves the instrument out of the list of thematic roles because in fact it subsumes two different roles. On the one hand, the term instrument is used to refer to an ‘inanimate effector in a causal chain’, as illustrated in (63.a) and (63.b) below. On the other, it is just an implement, not part of a causal chain; hence, it cannot function as the subject of the clause if the prototypical effector is omitted (2005: 59). This role is exemplified by with a spoon in (63.c).

(63) a. INSTRUMENT: Crush the biscuits with a rolling pin and finely chop the preserved ginger.
b. INSTRUMENT: Franca had indeed had those fantasies, and continued to have them, of how she would kill her husband, smashing his head with a hammer, plunging a carving knife into his side, drugging him with sleeping pills and suffocating him.
c. INSTRUMENT: Chris ate the soup with a spoon / *The spoon ate the soup. (Van Valin 2005: 59)

In the analysis I have consistently classified instruments as peripheral modifiers rather than argument-adjuncts. Hence they do not count towards either semantic or syntactic valence, and they have not been analysed in either qualitative or quantitative terms.

4.2.1.1.e Entity type

Entity type is the first parameter of the qualitative analysis. Those arguments that were not overtly expressed in the syntax have not been analysed with respect to any of the qualitative parameters, namely entity type, nominal aspect and reference.

The vast majority of the arguments analysed have turned out to be of type 1.

This result is not surprising given that the examples under analysis are prototypical and, as I have explained in section 4.1.2.b, the prototypical definition of break verbs involves a concrete, animate, human, countable, instigator, volitional effector and a concrete, inanimate, non-human, countable, receiver, non-volitional patient. Nonetheless, there have been some problematic examples whose arguments denote entities of difficult classification, such as party in (64):

(64) Once again she split the party and proved in subsequent elections that the people regarded her family as the real Congress.

Party can be interpreted either as an ‘organization of people with particular political beliefs’ (CALD) – that is to say, an abstract entity – which, consequently, cannot be literally split into parts, or as a group of individuals which can literally be split into parts, like a group of kids making teams when playing a game. The most logical interpretation seems to be the first one, but still this dichotomy raises doubts as to the suitability of the example for a literal analysis.

The corpus has provided examples like this, which are less prototypical than those with prototypical entities of type 1, and also examples with entities of other types such as type 3 which, in most cases, are arguments of examples with a non-prototypical meaning. At first sight one might think that there is a correlation between different

entity types and the different groups into which the corpus has been divided. More precisely, it would seem logical that arguments of type 1 participate in prototypical examples and arguments of entity types 0, 2 and 3 participate in less- and non-prototypical examples. What is more, entities could be organized into a hierarchy in which they were ranked in terms of prototypicality with respect to the meaning of change of state, as presented in Figure 4.6:

ET1 > ET0 > ET2 > ET3
(prototypical > less prototypical > non-prototypical)

Figure 4.6 Hierarchy of entity type according to prototypicality of meaning.

Contrary to expectations, however, the analysis shows that less-prototypical and non-prototypical examples have arguments of types 0, 2 and 3, but also many of type 1.

Apart from this, the analysis has raised some questions that must be commented upon in order to interpret the results correctly. In the first place, I have found many cases of arguments realized by pronouns and proper names. Given that pronouns only provide information about person and number, it has not been possible to analyse them in terms of entity type, nominal aspect or reference. As for proper names, there is one exceptional case in which I have analysed them in qualitative terms: when they are introduced by a determiner. In that case, the proper name adopts the features of common names, thus being analysed as follows:

(65) The two Bf110s had crashed nearby, and ‘… they were scattered over a wide area — we all went out to view the wrecks’.


In (65) The two Bf110s denotes an entity of type 1 which is count, [+telic] and definite.

Furthermore, those arguments whose referent is not explicitly expressed in the syntax, such as two, the original or the difference in the examples below, have not been analysed in qualitative terms either:

(66) a. On his back like a beetle, three sat on him while two ripped his daysack from his chest.
b. Either they win the faint gratitude of listeners by reminding them of the original with a faithful remake, or they rip the original to shreds and reassemble it in a manner pleasing to themselves.
c. Split the difference on a cuppa, shall we?

The corpus has also provided examples where the same argument position is filled with two or more entities which are sometimes of the same type and at other times of different types:

(67) Argument position filled with two or more entities of the same type
a. He is a famous writer, so I also tear up his manuscripts and books.
b. Robert Brooks, a factory worker, fractured his son's collar bones, legs and an arm, and broke his skull.

In (67.a), his manuscripts and his books are of type 1, like his son's collar bones, legs and an arm in (67.b). In (68.a), on the contrary, the noose and the drop are entities of type 1 and 2, respectively, like the ceilings and plasterwork in (68.b).

(68) a. The pinioned hands of the condemned man went suddenly white as the noose and the drop snapped his spinal column.
b. All the floorboards had been removed, and the ceilings and plasterwork had crashed down into the basement with the weight of water pouring through the roof.

Some examples present a more complex case in which the same argument position is filled by an entity of a certain type and a proper name or a pronoun. In that case, I have only analysed the entity type of the referential argument:

(69) Erasmus Darwin, grandfather of Charles, describes dodders in verse as ‘harlot-nymphs’ which crush their prey in their coils much as the mythological serpents crushed Laocoon and his sons, while Indian myths grant unlimited wealth and the power of invisibility to those who find the roots of dodder.

In (69), the second argument position of crush is filled with Laocoon and his sons.

Thus, in the analysis I indicate that the entity type for his sons is 1, but there is no mention of the entity type of the proper name for the reasons explained above.

4.2.1.1.f Nominal aspect

In chapter 3, I established a distinction between noun aspect – an intrinsic, paradigmatic property of the noun – and nominal phrase aspect – an extrinsic, syntagmatic property of the noun. Noun aspect is dealt with in terms of count/mass, whereas nominal phrase aspect is defined in terms of telicity.

Since telicity is the study of the aspect of the noun in context and, consequently, it is defined syntagmatically, I have studied this phenomenon in a number of contexts wide enough to provide the data necessary to build a satisfactory typology of telicity.

These contexts have been defined around two main variables that may bring about a change in telicity: number and reference. By reference I mean the set of determiners and/or premodifiers of the noun. For years, linguists such as Roeper (1983, cited in

Gillon 1999: 51), Bunt (1985), or Jackendoff (1991), among others, have made use of semantic criteria to distinguish between mass and count nominal phrases.26 However, these have turned out to be inadequate. Among the semantic criteria suggested it is worth noting Quine’s cumulativity of reference and Cheng’s divisivity of reference

(cited in Gillon 1999: 51). Cumulativity of reference stipulates that ‘if a mass term such as water is true of each of two items, then it is true of the two items taken together’

(Gillon 1999: 51). Thus, if a is flour and b is flour, a and b together are flour, whereas if a is a boy and b is a boy, a and b together are not a boy, but boys. Divisivity of reference, on the other hand, states that ‘any part of something denoted by a mass noun is denoted by the same mass noun’ (Gillon 1999: 52). Thus, whereas a part (or piece) of cake is cake, a part of a woman cannot be said to be a woman. In other words, mass nouns (or atelic nominal phrases) can be divided into parts, whereas count nouns (or

26 For the time being, and in order to present their ideas, I stick to the terminology used by Cheng and Quine (cited in Gillon 1999). Nonetheless, it must be borne in mind that, in this work, I refer to their mass and count nouns as atelic and telic NPs when the noun does not appear in isolation but modified by some determiner or premodifier.

telic NPs) cannot. These two criteria, however pertinent they are, do not succeed in distinguishing mass from count NPs (Gillon 1999: 52): cumulativity of reference fails to distinguish mass nouns from plural count nouns. Thus, although kids is count, if a are kids and b are kids, a and b together are kids. In other words, cumulativity of reference treats plural count nouns as mass nouns. Divisivity of reference, on the other hand, falls short in two ways (Gillon 1999: 52): first, some words pattern morphosyntactically with mass nouns, yet some parts of them cannot be denoted by the same mass noun. Thus, there are parts of sugar, for example, which are too small to be regarded as sugar; second, there are words that can be both mass and count, but even when they behave as count, divisivity of reference still applies. That is the case, as Gillon (1999: 52) explains, of rope: ‘Suppose, for example, one has an eight foot rope. Cutting it in half, one can be said to obtain two ropes of four feet each. And cutting each of these ropes in half again, one thereby obtains four ropes of two feet each.’

For these reasons, the study of telicity has been carried out according to the morphosyntactic criteria mentioned above. The first one is number. As a general rule,

English count nouns admit both singular and plural morphology, whereas mass nouns do not have a plural save in exceptional cases. By way of illustration, compare the count noun chair (chairs) and the mass noun peace (*peaces). The second criterion is reference. In principle, mass nouns cannot be preceded by the indefinite article (*a parsley), whereas count nouns can (a glass) (Gillon 1999: 51). Furthermore, cardinal numbers and quasi-cardinal numbers such as several premodify count nouns, but not mass nouns (Gillon 1999: 51). Thus, three boys is satisfactory while three salts is not.27

Going into more detail, some quantifiers can only modify mass nouns, as is the case of little

27 As I explain below, three salts might be acceptable in certain contexts though.

and much, and some others can only introduce count nouns, as is the case of few and many (Gillon 1999: 51).
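The two diagnostics just outlined (number and reference) can be read as a small decision procedure over a noun's morphosyntactic behaviour. As a purely illustrative sketch (the toy lexicon, its entries and the feature names below are my own invention, not part of the analysis), the tests might be encoded as follows:

```python
# Toy encoding of the two count/mass diagnostics discussed in the text.
# Feature values are stipulated for illustration only.
LEXICON = {
    #            plural morphology?   indefinite article / cardinals?
    "chair":   {"plural_ok": True,  "indef_article_ok": True},   # chairs, a chair
    "peace":   {"plural_ok": False, "indef_article_ok": False},  # *peaces
    "glass":   {"plural_ok": True,  "indef_article_ok": True},   # a glass
    "parsley": {"plural_ok": False, "indef_article_ok": False},  # *a parsley
}

def looks_count(noun: str) -> bool:
    """A noun patterns as count only if it passes both diagnostics:
    it admits plural morphology AND it accepts the indefinite article
    (and, by extension, cardinal and quasi-cardinal numerals)."""
    entry = LEXICON[noun]
    return entry["plural_ok"] and entry["indef_article_ok"]

print(looks_count("chair"))    # True  (count)
print(looks_count("parsley"))  # False (mass)
```

A real implementation would of course need corpus frequencies rather than hand-stipulated features, since, as the text notes, nouns like rope or cake pass both tests in some contexts and fail them in others.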

As shown, each determiner or modifier affects telicity in a different way. Thus, in order to offer an accurate typology of telicity, I have made use of Givón’s (1993:

249-256) list of pre-nominal modifiers.28 This way, I delimit the main contexts for the study of this phenomenon. Furthermore, I have taken into account number, and I have introduced examples with both mass and count nouns whenever the two alternatives are possible within the given context. I am aware that the corpus gathered is by no means exhaustive and many more examples of each determiner should be studied before establishing generalizations. For that reason, in the discussion of the examples I talk about tendencies rather than strict rules. Nonetheless, it must be acknowledged that each example has been contrasted with different lists of mass and count nouns in each case, thus conferring more reliability on the suggestions made.

Once the corpus has been gathered, I have studied each example, paying attention first to the definition of the noun as given in the dictionary and then to the aspectual quality (or telicity) of the noun in the specific context.29 If a count noun becomes mass, I refer to this phenomenon as massification. See (70.a) for illustration. I have coined this term after Bunt’s (1985) idea of massified individuals. If a mass noun becomes count, I refer to it, by extension, as countification.30 This is illustrated in

(70.b).

28 In order to offer fragments of real uses of language, most of the examples listed in this work have been taken from the internet and checked with native English speakers.
29 The dictionaries consulted for this purpose are mainly two: the CALD and the Longman Dictionary of Contemporary English Online.
30 Along the same lines, verbs such as to countify and to massify will be used in the discussion of the examples below.

(70) Massification and countification

a. We haven’t seen much of him lately.

b. Give me another whisky.

I come back to this issue below.

According to Van Valin and LaPolla (1997: 58), NP operators realized by, among others, determiners and noun classifiers (e.g. a carton of milk, two litres of milk, etc.), parallel the scope principle of operators in the clause. For example, quantity operators such as many or few are found at core level, whereas locality operators – which are mainly realized by determiners – modify the NP as a whole.31 The layered structure for the NP is illustrated in Figure 4.7 below.

[Two tree diagrams, not reproduced here: the layered structure of the English NP the three big books and of the Mandarin Chinese NP nèi sān běn dà shū, with the operator projection (DEF/DEIC, QNT, NUM, NASP) attached at the appropriate layers of the NP.]

Figure 4.7 LSNP (layered structure for the nominal phrase) with operators in English and Mandarin Chinese (Van Valin and LaPolla 1997: 59)32

31 I use the term locality operator after Van Valin and LaPolla (1997: 58). 32 NP stands for nominal phrase, NUC for nucleus, REF for reference, N for noun, ADJ for adjective, NUM for numeral, QNT for quantifier, DEF for definite, DEIC for deictic, and NASP for nominal aspect.

As can be seen in the figure above, adjectival and nominal modifiers are nuclear operators, but I have left them out of this study because they do not seem to interfere with either reference or telicity. Quantifiers modify the core of the NP and are concerned with quantification (many, few, every, etc.) and negation (i.e., no). Locality operators modify the NP as a whole, and they are usually formally expressed by means of determiners (articles and demonstratives).33 Therefore, the basic NP aspectual operators hierarchy could be illustrated as follows:

[Tree diagram, not reproduced here: the noun nucleus (NUCN) is modified by QNT and NEG at the level of the COREN, while DET operates over the NP as a whole.]

Figure 4.8 Basic NP aspectual operators hierarchy34

Consequently, I have organized the range of basic realizations of the NP according to their scope. At core level, there are partitive definite quantifiers and indefinite quantifiers-determiners; at phrase level, definite and indefinite articles, demonstratives, and possessive modifiers:

33 Van Valin and LaPolla (1997: 62) argue that there exists cross-linguistic evidence that articles differ from demonstratives because the latter are pronominal in nature and can therefore occur as referring expressions on their own. However, this distinction seems to be irrelevant in English, where both share the same lexical category and have a similar representation in the layered structure of the NP. 34 QNT stands for quantifier, NEG for negation, N for noun, DET for determiner, NUC for nucleus, and NP for nominal phrase.

AT CORE LEVEL

Partitive definite quantifiers

Some of

a. Place some of the apple into the crucible. (atelic)

b. Some of the kids are hungry. (telic)

c. I have used some of the flour you gave me for the cake. (atelic)

d. Some of those waters are relatively benign. (telic)

All of

a. He likes all of you. (atelic)

b. I’d like to see all of the kids before leaving. (telic)

c. All of the air is still being used to recharge the empty reservoirs. (atelic)

d. They will cut through all of those nonsenses. (telic)

None of

a. Even his lawyer is having none of that lie. (atelic)

b. None of my friends told me anything. (telic)

c. On the deck where the first class passengers were quartered, known as deck A,

there was none of the confusion that was taking place on the lower decks.

(atelic)

d. None of the truths could be verified by experimentation. (telic)

Any of

a. I haven’t read any of the book yet. (atelic)

b. Any of those kids will do. (telic)

c. I haven’t eaten any of the ice-cream yet. (atelic)

d. Sugar is “purified” fuel burned in the tissues without contributing any of the

salts, vitamins, biochemic reactions, building or repair material indispensable to

health. (telic)

Most of

a. Her feet stick out, but most of her is squeezed into the cubby-hole with Rose.

(atelic)

b. Most of the accusers were women. (telic)

c. Most of this work is based on Peter’s ideas. (atelic)

d. Laura is right most of the times. (telic)

Lots of

a. Ø

b. Lots of their suggestions were pencilled down by the journalists. (telic)

c. Lots of that information has already been published. (atelic)

d. Lots of the loves Ana had in the 50’s were hippies. (telic)

One of

a. Ø

b. One of the men pointed at me. (telic)

c. Ø

d. Many said one of the waters was bad. (telic)

Two of

a. Ø

b. Two of the men had lived in New York for a while. (telic)

c. Ø

d. Two of the loves she had were rather odd. (telic)

A number of

a. Ø

b. I’ve read a number of these books. (telic)

c. Ø

d. There are a number of sands of the bathing beaches with clear water on the

island. (telic)

Indefinite quantifiers-determiners

Some

a. He saw some woman prowling around here. (telic)

b. Some women like football. (telic)

c. I need some money. (atelic)

d. He tried some wines during his stay in California. (telic)

One

a. There’s only one man in the world able to do that. (telic)

b. Ø

c. There’s only one sand finer than this. (telic)

d. Ø

Two

a. Ø

b. Who are those two men in black suits? (telic)

c. Ø

d. I have struggled to live between two waters. (telic)

Another

a. Another day is gone. (telic)

b. Ø

c. Have another whisky with me. (telic)

d. Ø

All

a. In the sight of God all man is one man. (telic)

b. All soldiers feel fear at some point. (telic)

c. About 97 percent of all water is in the oceans. (atelic)

d. The Mother of all waters. (telic)

Many

a. Ø

b. He has sent her many flowers in the last 10 years. (telic)

c. Ø

d. Ø

Every

a. I know every person here. (telic)

b. Ø

c. Every flour is different, so you have to find the one you like best. (telic).

d. Ø

Much

a. Ø

b. Ø

c. She had felt much unhappiness in her life. (atelic)

d. Ø

Little

a. Ø

b. Ø

c. Pete has little luck with women. (atelic)

d. Ø

A little

a. Ø

b. Ø

c. Wish me a little luck. (atelic)

d. Ø

Any

a. I haven’t seen any woman. (telic)

b. Carmen didn’t seem to have any suggestions. (telic)

c. Have you got any money? (atelic)

d. Have you heard any noises? (telic)

No

a. Roberto obtained no response. (telic)

b. Gema got no questions after the presentation. (telic)

c. There’s no water in the building. (atelic)

d. No waters are currently designated as Seasonal Cold. (telic)

AT PHRASE LEVEL

Definite article

The

a. Pass me the book, please. (telic)

b. Pass me the books, please. (telic)

c. Pass me the water, please. (telic) / The water is calm. (atelic)

d. Have you tried the waters he brought from Argentina? (telic) / (Looking at the

sea) The waters are calm. (atelic)

Indefinite article

A

a. The Baker is in his shop peeling an apple. (telic)

b. Ø

c. The challenge is in having a water that is not overly weighted in certain

minerals. (telic)

d. Ø

Demonstratives

This

a. Select any style sheet from the list to load it into this page.

b. These boots are made for walking.

c. There is nothing more frustrating on this earth than constantly being asked to

help other people with their petty computer problems.

d. These sands are critically important in digesting all this stuff.

That

a. That name rings a bell.

b. Stop biting those nails.

c. For example, the most significant cost of a CD today is the marketing and

promotion of that music.

d. The rest is following his destruction of those loves in order to protect the one

that is most meaningful to him

Possessive modifiers

My/your/his/her/its/our/your/their

a. Pass me my book, please. (telic)

b. Pass me my books, please. (telic)

c. Your money is in the box. (atelic)

d. Your waters are in the fridge. (telic)

The information obtained from the analysis of this corpus can be summarised as follows.

AT CORE LEVEL

As the examples above show, quantifiers can be of different types, behaving either as partitive definite quantifiers or as indefinite determiners. When definite, the quantifier is followed by the preposition of and by a definite determiner such as the or that among others. When indefinite, the quantifier is directly followed by the head noun. This formal distribution inversely reflects the scope of the nominal operators: quantifiers operate at core level whereas determiners operate at phrase level. In other words, determiners have scope over quantifiers.

The differences existing between the two groups of quantifiers are not restricted to form. They also differ in the way they modify the noun, massifying it in some occasions, countifying it in others. Let us go back to the examples above to see the

effect they all have on both count and mass nouns. The order of appearance of the determiners/quantifiers has been modified to facilitate their discussion.

Some of: When the singular, count noun apple is inserted in the partitive structure some of + determiner, the resulting NP is atelic, since the noun apple is presented as a substance or a material rather than as an individual. Even if, as

Jackendoff (1991) claims, a part of an apple cannot be described as an apple, in this context the shape of the apple shades off, resembling a body of water or flour – that is, a mere amount of material – that can be measured and weighed. In principle, this massification process does not have any special restrictions, for it can be applied even to animate entities, as in ‘I documented some of that woman in the film.’

When the plural, count noun kids is inserted in the partitive structure some of + determiner, the resulting NP is telic. This seems to apply to all plural, count nouns.

When the singular, mass noun flour is inserted in the partitive structure some of + determiner, the resulting NP is atelic. This seems to apply to all singular, mass nouns.

When the plural, mass noun waters is inserted in the partitive structure some of

+ determiner, the resulting NP is telic. In this case, the mass noun becomes individualized in terms of, for example, different types of water or bottles of water.

The massification of apple and the countification of flour seem to apply for all cases of singular, count nouns and plural, mass nouns, respectively, when inserted in this structure.
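The four tendencies just described for the partitive structure some of + determiner can be stated schematically: singular count nouns are massified (atelic NP), plural count nouns remain telic, singular mass nouns remain atelic, and plural mass nouns are countified (telic NP). As a minimal sketch (my own schematisation, with hypothetical labels, not the author's notation):

```python
# Tendency table for NPs of the form "some of + DET + noun",
# following the four cases discussed in the text.
SOME_OF_TABLE = {
    ("count", "singular"): "atelic",  # massification: some of the apple
    ("count", "plural"):   "telic",   # some of the kids
    ("mass",  "singular"): "atelic",  # some of the flour
    ("mass",  "plural"):   "telic",   # countification: some of those waters
}

def some_of_telicity(noun_class: str, number: str) -> str:
    """Return the telicity tendency of a 'some of + DET + noun' NP."""
    return SOME_OF_TABLE[(noun_class, number)]

print(some_of_telicity("count", "singular"))  # atelic
print(some_of_telicity("mass", "plural"))     # telic
```

The table encodes tendencies, not rules: as the surrounding discussion stresses, individual nouns (e.g. those that can be read as kinds or containers) may shift class in context.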

Some: When the singular, count noun woman is preceded by the indefinite quantifier-determiner some, the resulting NP is telic. By contrast to the partitive structure some of, the determiner some does not turn the individual entity into an

indeterminate (or mass) structure but projects it as an indefinite entity. Thus, some woman refers to some individual whose identity is not known.

When the plural, count noun women is preceded by the indefinite quantifier some, the resulting NP is telic. As was the case with woman, the determiner some does not turn the individual into an amount of material. Each woman preserves her individuality even if it confers on the NP a sense of group, which can give the impression of a human mass.

When the singular, mass noun money is preceded by the indefinite quantifier-determiner some, the resulting NP is atelic. This applies to all singular, mass nouns.

When the plural, mass noun wines is preceded by the indefinite quantifier-determiner some, the resulting NP is telic.35 The explanation is similar to that offered for the plural, mass noun waters: the mass noun undergoes conversion and becomes countified in terms of different types or kinds of wine.

Any of: When the singular, count noun book is inserted in the partitive structure any of + determiner, the resulting NP seems to be atelic, as if it were a mass noun which can be divided into parts (in this case, pages). The main difference existing between ‘Have you read some of the book?’ and ‘Have you read any of the book?’ lies in the fact that the former seems to imply a stronger belief on the part of the speaker that the addressee has actually read part of it, whereas the latter implies a certain disbelief.

When the plural, count noun kids is inserted into the partitive structure any of + determiner, the resulting NP is telic. This seems to apply to all plural, count nouns.

35 In cases where mass nouns can be made plural, they are automatically endowed with a quality proper to count nouns; hence, they behave as such in the new context. It must be noted that not all mass nouns can be made plural; the analysis suggests that, among others, those which can be classified as kinds or carried in a container meet the required characteristics.

When the singular, mass noun ice-cream is inserted in the partitive structure any of + determiner, the resulting NP is atelic. This seems to apply to all singular, mass nouns.

When the plural, mass noun salts is inserted in the partitive structure any of + determiner, the resulting NP is telic. Once more, the mass noun becomes countified in terms of types or units of salt.

Any:36 When the singular, count noun woman is preceded by the indefinite quantifier-determiner any, the resulting NP is telic. Just like some, the determiner any does not turn the individual entity into a mass but presents it as an indefinite entity.

When the plural, count noun suggestions is preceded by the indefinite quantifier any, the resulting NP is telic.

When the singular, mass noun money is preceded by the indefinite quantifier-determiner any, the resulting NP is atelic. This applies to all singular, mass nouns.

When the plural, mass noun noises is preceded by the indefinite quantifier-determiner any, the resulting NP is telic. Given that noise is defined in the dictionary as either mass or count, it is not clear whether this – and its singular counterpart – involves a process of conversion. Let us take another example in which the core of the NP is clearly a mass noun according to the dictionary: ‘Milorganite does not contain any salts so it will not burn plant leaves during dry or hot periods.’ In this case, the mass noun is also countified or converted into count; hence, when a plural, mass noun is preceded by the indefinite quantifier-determiner any, the resulting NP is telic.

As regards its singular counterpart, different contexts will bring about different readings. Consider the following example: ‘I don’t have any salt in my pond currently.’

If used in a negative environment, the NP any plus singular, mass noun remains atelic.

36 Any and some have different connotations depending on whether the clause or phrase in which they appear is negative/affirmative and/or interrogative.

And the same applies to its interrogative counterpart: ‘Have you got any salt and pepper to put on the dining table?’ The NP any salt is atelic. There are exceptions to this rule, however. When the mass noun salt encodes a type of salt, then the mass noun is countified. That is, the NP is telic. This is illustrated by ‘Any salt, compound, derivative, or preparation thereof which is chemically equivalent or identical with any of the substances referred to in paragraph (b) (1) of this section.’

All of: When the singular, count pronoun you is inserted in the partitive structure all of, the resulting NP seems to be atelic, as was the case with some of that woman above.37 In both examples, the animated entity designated by the pronoun is treated as if it were a mass which can be divided into parts. In the case of some of that woman, the speaker/writer depicts only some part of that entity, whereas in all of you, the speaker/writer puts together all the parts of the entity. The expression is equivalent to every part of you. If the pronoun is replaced by a common noun such as article in

‘Clearly you didn’t read all of the article’, the resulting NP is atelic too.

When the plural, count noun kids is inserted into the partitive structure all of + determiner, the resulting NP is telic. By contrast to the previous case, where the entity modified by all was a part of a single entity, the entity modified by all in the present structure is not a part but a whole entity in itself which forms part of a group or set. This seems to apply to all plural, count nouns.

When the singular, mass noun air is inserted in the partitive structure all of + determiner, the resulting NP is atelic. This seems to apply to all singular, mass nouns.

When the plural, mass noun nonsenses is inserted in the partitive structure all of

+ determiner, the resulting NP is telic. It must be noted that nonsense can be both mass and count, according to the dictionary. Nonetheless, this fact does not seem to have any

37 In all of, the determiner is excluded from the partitive structure because pronouns cannot be preceded by any determiner except in very particular cases, such as ‘The she he uttered was heard by most people in the room.’

effect on the structure in question; if we substitute nonsenses for a clearly mass noun like air, in the plural, the resulting NP is also telic. This is illustrated in ‘There is always an action and reaction; an advance and a retreat; a rising and a sinking; manifested in all of the airs and phenomena of the Universe.’ In this case, the mass noun denotes, following

Gillon (1999: 58), instances or kinds of air.

All: When the singular, count noun man is preceded by the indefinite quantifier-determiner all, the resulting NP is telic. The NP is thus endowed with a generic reading.

When the plural, count noun soldiers is preceded by the indefinite quantifier all, the resulting NP is telic. As in the preceding case, the NP is endowed with a generic reading.

When the singular, mass noun water is preceded by the indefinite quantifier-determiner all, as in ‘about 97 percent of all water is in the oceans’, the resulting NP is atelic. Nonetheless, that is not always the case with singular, mass nouns, for all water in ‘I like all water’ can be understood as all kinds of water, in which case it would be telic. Thus, it depends on the context.

When the plural, mass noun waters is preceded by the indefinite quantifier-determiner all, the resulting NP is telic (see the explanation provided for some waters above).

None of: When the singular, count noun lie is inserted in the partitive structure none of + determiner, the resulting NP is atelic, as happened with apple in some of the apple.

When the plural, count noun friends is inserted in the partitive structure none of

+ determiner, the resulting NP is telic.

When the singular, mass noun confusion is inserted in the partitive structure none of + determiner, the resulting NP is atelic.

When the plural, mass noun truths is inserted in the partitive structure none of + determiner, the resulting NP is telic. Once more, the atypical use of the plural form of the mass noun turns it into a count noun which can thus be inserted in a structure of this type.

No: When the singular, count noun response is preceded by the indefinite quantifier-determiner no, the resulting NP is telic.

When the plural, count noun questions is preceded by the indefinite quantifier no, the resulting NP is also telic.

When the singular, mass noun water is preceded by the indefinite quantifier-determiner no, the resulting NP is atelic. If the example were to be rephrased, the most natural option would be something like ‘There isn’t any water in the building’ (atelic) rather than ‘(?)There isn’t one water in the building’ (telic). Nonetheless, some contexts make room for a telic reading too. Imagine, for example, a person opening the fridge and finding out that there is not any bottle of water left. In that case, no water would metonymically stand for not any unit or bottle of water.

When the plural, mass noun waters is preceded by the indefinite quantifier-determiner no, the resulting NP is telic. In this case, water refers to streams, rivers, lakes, and so on. Thus, the mass noun is converted into a count one when inserted in the phrase.

Most of: When the singular, count pronoun her is inserted in the partitive structure most of + determiner, the resulting NP is atelic, as was the case with some of that woman and all of you above.

When the plural, count noun accusers is inserted in the partitive structure most of + determiner, the resulting NP is telic.

When the singular, mass noun work is inserted in the partitive structure most of + determiner, the resulting NP is atelic.

When the plural, mass noun times is inserted in the partitive structure most of + determiner, the resulting NP is telic. Note that the previous example – ‘Most of this work is based on Peter’s ideas’ – can be rewritten as ‘Much (of this) work is based on Peter’s ideas’ (atelic), whereas the present example – ‘You are right most of the times’ – would be rephrased as ‘You are right many (of the) times’ (telic).

Lots of: Singular, count nouns do not fit the pattern lots of + determiner. There are certainly cases such as ‘She was eating lots of cake’ (atelic) where the combination seems possible. However, it must be noticed that this is so only with nouns like cake, which can be both count and mass, and which in this case are used in their mass version. Hence, the example is not a valid illustration of the combination lots of + count noun. Only if the mass version of nouns like cake were proved to be derived from an originally count noun would the example be acceptable.

When the plural, count noun suggestions is inserted in the partitive structure lots of + determiner, the resulting NP is telic.

When the singular, mass noun information is inserted in the partitive structure lots of + determiner, the resulting NP is atelic.

When the plural, mass noun loves is inserted in the partitive structure lots of + determiner, it is countified; hence, the resulting NP is telic.

One of: Singular, count nouns do not seem to fit the pattern one of + determiner.

When the plural, count noun men is inserted in the partitive structure one of + determiner, the resulting NP is telic.

Singular, mass nouns do not seem to fit the pattern one of + determiner.

When the plural, mass noun waters is inserted in the partitive structure one of + determiner, the resulting NP is telic.

One: When the singular, count noun man is preceded by the indefinite quantifier-determiner one, the resulting NP is telic.

Plural, count nouns do not seem to fit the pattern one + noun.

When the singular, mass noun sand is preceded by the indefinite quantifier-determiner one, the resulting NP is telic. In this case, sand is to be understood as types or kinds of sand; that is, it is countified.

Plural, mass nouns do not seem to fit the pattern one + noun.

Two of: Singular, count nouns do not seem to fit the pattern two of + determiner.

Exceptionally, singular, count nouns can be inserted in this pattern in some very specific contexts. Imagine, for example, that someone enters a post office and says: ‘Give me two of the king.’ In this case, two of the king must be interpreted as two (stamps) of the kind that have the king printed on them. The resulting NP would be telic.

When the plural, count noun men is inserted in the partitive structure two of + determiner, the resulting NP is telic.

Singular, mass nouns do not seem to fit the pattern two of + determiner.

Similarly to singular, count nouns, singular, mass nouns can be found in these patterns when they denote a class or type, or units, as in ‘I give you two of camomile and two of red tea’ (‘I give you two packets/bags of camomile and two packets/bags of red tea’) or ‘Give me two of orange juice’ (‘Give me two bottles of orange juice’). In both cases, the resulting NP is telic. Notice, nonetheless, that the definite determiner disappears in these cases.

When the plural, mass noun loves is inserted in the partitive structure two of + determiner, the resulting NP is telic.

Two: Singular, count nouns do not seem to fit the pattern two + noun. The only exceptions are those in which, as was the case with the pattern two of, the noun denotes a kind or unit, as in ‘Give me two mammal and three oviparous’ (i.e., ‘Give me two of the kind mammal and three of the kind oviparous’). In that case, the resulting NP is telic.

When the plural, count noun men is preceded by the indefinite quantifier-determiner two, the resulting NP is telic.

Singular, mass nouns do not seem to fit the pattern two + noun. The only exceptions are those in which, as was the case with the pattern two of, the noun denotes a kind or unit, as in ‘Give me two camomile and two red tea.’ In that case, the resulting NP is telic.

When the plural, mass noun waters is preceded by the indefinite quantifier-determiner two, the resulting NP is telic. Once more, the mass noun has been countified and it stands for units of that entity.

A number of: Singular, count nouns do not seem to fit the pattern a number of + determiner. If a pronoun like me were acceptable – imagine a very specific context such as ‘When I entered the room I discovered, fascinated, that there was a number of me in there for the casting’ – the only possible interpretation would be iterative. That iterative character of the NP would make it compatible with the requirement that it be plural. Thus, me would be interpreted in that context as copies or doubles of me, which leads to a telic interpretation of the NP.

When the plural, count noun books is inserted in the partitive structure a number of + determiner, the resulting NP is telic.

Singular, mass nouns do not seem to fit the pattern a number of + determiner.

When the plural, mass noun sands is inserted in the partitive structure a number of + determiner, the resulting NP is telic.

Another: When the singular, count noun day is preceded by the indefinite quantifier-determiner another, the resulting NP is telic.

Plural, count nouns do not seem to fit the pattern another + noun.

When the singular, mass noun whisky is preceded by the indefinite quantifier-determiner another, the resulting NP is telic. Whisky is to be interpreted as a unit or glass of whisky in this context. In other words, it has been countified.

Plural, mass nouns do not seem to fit the pattern another + noun.

Many & much: Many and much are each restricted to a single context: many can only be accompanied by plural, count nouns, and much can only be accompanied by singular, mass nouns:

When the plural, count noun flowers is preceded by the indefinite quantifier-determiner many, the resulting NP is telic.

When the singular, mass noun unhappiness is preceded by the indefinite quantifier-determiner much, the resulting NP is atelic.

Exceptionally, many can be followed by a singular, count noun to indicate many of the type X, or iterativeness, as in ‘There is many party-goer here’, ‘There is many footballer who will never die forever’ or ‘There are many ‘you’ in this room’, and the same applies to many a, as illustrated by ‘There Is Many a True Code Spoken in Contest.’

Little & a little: Along the same lines as many and much above, the usage of little and a little is restricted to just one context for each determiner, although in this case it is the same for both of them: singular, mass nouns.

When the singular, mass noun luck is preceded by the indefinite quantifier-determiner little, the resulting NP is atelic.

When the singular, mass noun luck is preceded by the indefinite quantifier- determiner a little, the resulting NP is atelic. In this case, a little luck is equivalent to a bit of luck.

It should be noted that, as Jackendoff (1991: 104) points out, negative (few and) little belongs to a different category from nonnegative (a few and) a little. Nonetheless, such irregularities will be disregarded in this work.

Every: The determiner every combines with singular, count and mass nouns only.

When the singular, count noun person is preceded by the indefinite quantifier-determiner every, the resulting NP is telic.

When the singular, mass noun flour is preceded by the indefinite quantifier-determiner every, the resulting NP is telic. Flour must be understood as a kind of flour or a packet of flour.

Givón (1993) also deals with only as an indefinite determiner; nonetheless, I leave it out of the analysis because, as the author himself explains, it belongs to this group only superficially.

AT PHRASE LEVEL:

Definite article: The definite article combines with both count and mass nouns in both their plural and singular forms. As follows from the examples above, when the precedes count nouns (both plural and singular), the resulting combination is telic. When the precedes mass nouns, the resulting combination can be either telic or atelic, depending on whether the referent of the NP can be presented as a kind, example or unit of that entity or not. For instance, the water in ‘Pass me the water’ is telic because it stands for the bottle/glass/jug of water, whereas the water in ‘The water is calm’ is atelic, because it refers to a mass of water. The plural version is, save exceptions, telic. Water has the peculiarity that it can be used in its plural form and still be regarded as mass, as in ‘The waters are calm’, a feature not common to other nouns.

Indefinite article: The indefinite article only combines with singular nouns. Furthermore, these are count nouns most of the time. Nonetheless, it is possible to find examples such as ‘Last week I tried a water which tasted of strawberry’, where water stands for a kind of water. In both cases, the resulting NP is telic.

Demonstratives: Demonstratives behave like the definite article the, except that their morphology changes in the plural (i.e., this and that become these and those).

Possessive modifiers: Possessive modifiers also behave like the definite article the, combining with both count and mass nouns in both their plural and singular forms.

In general terms, the examples above suggest the following four conclusions:

a. When partitive definite quantifiers are followed by a plural, count noun, the resulting NP tends to be telic.

b. When partitive definite quantifiers are followed by a plural, mass noun, the entity referred to by the resulting NP is presented as discrete or bounded. Therefore, the mass noun is countified. Recall that not all mass nouns can be made plural; rather, this phenomenon is restricted to those nouns whose referents can be classified into kinds and/or carried in or packed into containers such as bottles, packets, glasses, and so on. The resulting NP is, consequently, telic. This complies with Gillon’s (1999: 57-58) idea that, when mass nouns give rise to count nouns or telic NPs, this process of countification runs parallel to shifts in denotation. These are some of them: ‘TO BE A KIND OF, TO BE AN INSTANCE OF, TO BE A UNIT OF, and TO BE A SOURCE OF, etc.’ (Gillon 1999: 58). Water is quite an exceptional noun which can be made plural and still be telic, as in ‘I’ve sailed some of these waters,’ referring to the water of just one sea.

c. When partitive definite quantifiers are followed by a singular, count noun, the resulting NP tends to be atelic. This means that the noun undergoes a process of massification whereby it is treated as an unbounded entity which can be divided into parts. Not all count nouns can be used in this context, though. Gillon (1999: 58-59) explains this idea as follows:

count nouns, under conversion, give rise to mass nouns with the following shift of sense: the denotation of the mass noun is the largest aggregate of the parts of the elements comprising the denotation of the count noun, where what constitutes a part is relative to the kind of object denoted by the count noun. Thus, in the case of animals and plants, the parts are the edible portions; in the case of trees, the parts are the portions of wood; and in the case of stones, rocks, ropes, wires, and so forth, the parts are pieces.

d. When partitive definite quantifiers are followed by a singular, mass noun, the resulting NP tends to be atelic. Similarly to the previous case, not all mass nouns can appear in this context.

Summing up, partitive definite quantifiers tend to massify singular, count nouns and to countify plural, mass nouns. It must also be noted that one of, two of and a number of only pattern with plural nouns.

As regards indefinite quantifiers-determiners, generalizations are harder to make, because they vary not only according to whether the noun is mass/count and singular/plural but also from determiner to determiner. Singular, count nouns seem to be the most consistent in their behaviour, tending to be telic when used in combination with indefinite quantifiers-determiners. Notice, however, that not all indefinite quantifiers-determiners can combine with singular, count nouns. This kind of restriction is typical of indefinite determiners like much/many which, save exceptions, combine only with singular, mass nouns and only with plural, count nouns, respectively. Little and a little follow the same pattern as much. One and another only combine with singular nouns, whereas two only patterns with plural nouns.

Despite the restrictions mentioned above, both partitive definite quantifiers and indefinite quantifiers-determiners show a systematic behaviour to a great extent, although the latter are subject to many more exceptions. Thus, if the two groups are compared, the result would be as shown in Table 4.16:

                        Partitive definite quantifiers   Indefinite quantifiers-determiners
Singular, count nouns   Atelic                           Telic
Plural, count nouns     Telic                            Telic
Singular, mass nouns    Atelic                           Atelic
Plural, mass nouns      Telic                            Telic

Table 4.16: Comparison between partitive definite quantifiers and indefinite quantifiers-determiners

It follows that partitive definite quantifiers tend to massify singular, count nouns and to countify plural, mass nouns, whereas indefinite quantifiers-determiners tend to countify plural, mass nouns but not to massify singular, count nouns.
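The default patterns in Table 4.16 can be summarized programmatically. The following sketch is merely illustrative: the category labels and the function are my own, and it deliberately ignores the contextual exceptions discussed above (e.g. kind-readings such as all water understood as ‘all kinds of water’).

```python
# Illustrative encoding of Table 4.16: default telicity of an NP according to
# the quantifier class and the number/countability of the head noun.
# (number, countability) -> (telicity with partitive definite quantifiers,
#                            telicity with indefinite quantifiers-determiners)
TABLE_4_16 = {
    ("singular", "count"): ("atelic", "telic"),
    ("plural", "count"): ("telic", "telic"),
    ("singular", "mass"): ("atelic", "atelic"),
    ("plural", "mass"): ("telic", "telic"),
}

def default_telicity(number, countability, quantifier_class):
    """Return the default telicity of the resulting NP.

    quantifier_class: 'partitive' (e.g. some of, none of, most of)
                      or 'indefinite' (e.g. some, all, no, two, every).
    """
    partitive, indefinite = TABLE_4_16[(number, countability)]
    return partitive if quantifier_class == "partitive" else indefinite

# 'some of the apple' -> atelic (massification of a singular, count noun)
print(default_telicity("singular", "count", "partitive"))   # atelic
# 'two waters' -> telic (countification of a plural, mass noun)
print(default_telicity("plural", "mass", "indefinite"))     # telic
```

The two lookups at the end reproduce the massification and countification tendencies stated in the text.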

Finally, at phrase level, demonstratives and the possessive modifiers seem to behave similarly: when they combine with count nouns, the resulting phrase tends to be telic; when they are followed by a mass noun, the resulting phrase can be either telic or atelic, depending on whether the referent of the NP can be interpreted as a kind or unit of that entity or not. If it can, the resulting NP is telic; if not, atelic. As regards the indefinite article, it combines with singular nouns only, and the resulting NP is telic.

Taking all the examples into consideration, it can be concluded that, in general terms, there are more cases of countification than of massification. This is rather logical taking into account that count nouns have both a singular and a plural morphology – hence, they do not need conversion in order to appear in the plural – whereas mass nouns are, by definition, only singular. Thus, whenever one of these nouns is used in the plural, it means that it has undergone a process of countification. The interesting fact is that this occurs frequently, because the number of determiners that pattern with mass nouns in the plural is large, thus forcing the meaning of these mass nouns to denote kinds of, units of, sources of, and so on (Gillon 1999: 58). Massification, by contrast, only occurs in very specific contexts, mainly referring to people, food or plants. Furthermore, given that, as said, count nouns can be singular and plural, it is mainly in those cases in which a certain determiner does not allow for count nouns that the massification process takes place. This is the case, for instance, of much, which only patterns, by definition, with mass nouns. Thus, when a count noun such as friend is used in combination with that determiner, as in ‘I haven’t seen much of your friend lately’, the massification process takes place. From all this it can be concluded that the process of countification seems to be more productive than that of massification in English. It is interesting to note that dictionaries tend to present those cases that would imply countification as both count and mass, whereas those cases when a count noun becomes mass tend to appear as count only. This might reflect, again, the more common or natural character of the countification process.

Once the guidelines for defining telicity had been set, I turned to my corpus and observed that the second argument of causative structures and the first argument of their inchoative counterparts must be telic, as illustrated in (71):

(71) a. Macho, a 25-year-old male in his prime, cracks coula nuts.
b. Trying to establish why a component snapped.
c. Richard Gough will make his comeback for Rangers tomorrow only six weeks after shattering his jaw in four places.

This is coherent with the semantics of the verb, because a mass continuum cannot be broken. Interestingly, the corpus has not provided any example of countified nouns that qualify as second argument of prototypical verbs of change of state.

4.2.1.1.g Reference

In chapter 3, I related the notion of reference to that of definiteness, and the latter, in turn, to the use of definite and indefinite terms. However, the identification of the definiteness of a word cannot be restricted to that criterion only. The analysis of certain examples from the corpus has revealed that there are some other variables that come into play, such as accessibility from the context, general knowledge, and so forth. In other words, definiteness is not only a formal question but also a pragmatic one. Dik (1997: 130) mentions four different ways in which a referent may be made accessible – or available – to the addressee.38 I list them below:

(72) SOURCES OF AVAILABILITY39

(a) E may be available in A’s long-term pragmatic information.
(b) E may be available in A’s current pragmatic information because it has been introduced in the preceding discourse.
(c) A may construe E on the basis of information which is perceptually available in the situation.
(d) The identity of E may be inferred from information available in any of the sources (a)-(c).

(72) provides the clues for explaining whether the NPs in the examples of the corpus are definite or indefinite, independently of whether they are introduced by a definite article or pronoun, and of whether or not they have been introduced earlier in the discourse.
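The decision procedure implicit in (72) can be sketched as a small function: a referent is treated as definite as soon as at least one availability source applies. The source labels and the function below are my own shorthand for Dik's (1997: 130) categories, for illustration only.

```python
# Illustrative shorthand for Dik's (1997: 130) sources of availability in (72).
# The string labels are my own; they abbreviate categories (a)-(d).
AVAILABILITY_SOURCES = (
    "long-term pragmatic information",   # (a) general knowledge
    "current pragmatic information",     # (b) preceding discourse
    "perceptual availability",           # (c) situational context
    "inference from (a)-(c)",            # (d) inferred identity
)

def is_definite(active_sources):
    """A referent counts as definite if any availability source applies."""
    return any(s in AVAILABILITY_SOURCES for s in active_sources)

# 'the tradition' in (77): accessible via long-term pragmatic information.
print(is_definite(["long-term pragmatic information"]))  # True
# No source applies: the referent cannot be presented as definite.
print(is_definite([]))  # False
```

This mirrors the point made below that even items not previously mentioned in the discourse (anaphora aside) can still be definite, provided some source makes them accessible to the addressee.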

ANAPHORA

Like many other languages, the English language has the property of reference. This means that there are some items which are semantically interpreted in terms of other items in the discourse, either anaphorically or cataphorically. In the present corpus of analysis there are many examples of pronouns which seem to stand in this type of relationship to other items in the discourse:40

38 It must be noted that Dik restricts the function of reference to those terms that act as either arguments or satellites.
39 S stands for Speaker, A stands for Addressee, and E stands for Entity.

(73) Examples of anaphora
a. The language has an insistent rhythm, but the better a rapper is the more they break away from it, tease it, play with it.
b. ‘Break it up,’ said the club wit.
c. The inguinal swelling (or ‘bubo’) continues to enlarge and eventually forms abscesses which break down and discharge on to the shin.

Thus, in (73.a), they and it refer back to rappers and rhythm, respectively, and in (73.c) which refers back to abscesses. However, as can be seen in (73.b), the antecedent does not always appear in the fragment under analysis. This may be a problem for the study of nominal aspect, given that the identity – and, therefore, the main features – of the antecedent are unknown. Nonetheless, for the analysis of definiteness this does not pose any problem, because whatever the identity of the antecedent, the pronoun will be definite:41 the antecedent of a pronoun is, by definition, accessible to the hearer (or addressee); otherwise its use is incorrect. Even in cases where the reference is cataphoric, the pronoun will also be definite, given that the antecedent will also be made accessible from the context. See (74) for illustration.

(74) It broke my heart to see that little face and big eyes.

40 By pronouns I mean personal pronouns and demonstratives only; determiners are disregarded here.
41 This example has another reading: instead of being interpreted anaphorically, it may be interpreted according to the situational information, even if no referent has been mentioned before. Imagine, for example, a situation in which the deictic pronoun is uttered while the speaker points at the entity established by it. See below for further explanation.

The pronoun it is cataphorically used to refer to the proposition which is later established by to see that little face and big eyes.

A slightly different case is that of personal pronouns like I, you, we, etc., as in (75) below, which may not have been mentioned in the discourse before but are still accessible to the addressee. This is due to the immediate, physical context. As Dik (1997: 130) puts it, ‘A may construe E on the basis of information which is perceptually available in the situation.’ Therefore, they are also definite.

(75) but when you break loose

Another interesting case worth mentioning is that in which the anaphoric item does not refer back to the (entity established by the) antecedent, strictly speaking; rather, it makes reference to a sub-topic of that antecedent, as I illustrate in (76) below.

Sub-topics, as Givón (1993) explains, can be understood in terms of contextual information. Therefore, despite being actually new in the discourse, they must be considered as definite, given that they are accessible to the addressee.

(76) Mary got some picnic supplies out of the car. The BEER was WARM.

Beer has not been mentioned previously; nonetheless, it is accessible to the addressee because it is part of his/her general knowledge that at picnics there usually is beer. Hence, the latter can be presented as definite.

GENERAL AND SITUATIONAL INFORMATION

Some examples bring forward cases of items which have not been mentioned before but which are nonetheless presented as definite. This is the case of the tradition in (77):

(77) While many of the bolts were placed prior to the issue of the BMC policy document, they still break the tradition against placing bolts or pegs on grit for any reason.

In this case, it is unknown whether the tradition has been previously mentioned or not in the discourse. Since the noun tradition is followed by a phrase that explains what this tradition is about, it seems quite logical that it has not; otherwise, this piece of extra information would be redundant. However, the noun tradition is introduced by a definite determiner. There are two possible explanations for this: (1) the tradition is presented as definite because traditions are usually part of the general, popular knowledge, hence accessible to the addressee – using Dik’s (1997: 130) words, it ‘may be available in A’s long-term pragmatic information’; and (2) the nominal phrase the tradition stands in cataphoric relationship with the subordinate clause that follows.

Similar although slightly different are those cases in which the item is unique, like the sun, the moon, and so on. Those cases will be definite by default, unless the presence of an indefinite article/determiner introduces a special usage of such a term, as in ‘A magnificent sun woke me up this morning.’

GENERICS

Finally, there is the case of generic terms. Very briefly, generic terms refer to a whole class of referents. This might lead one to think that they are definite by default, since there is no need on the part of the addressee to identify one referent in particular; the whole class or category of referents is usually available in the addressee’s long-term pragmatic information, as part of his/her general knowledge. However, the fact that generic terms may be formally expressed as definite or indefinite, as well as singular or plural, questions this assumption. I illustrate this below in example (78):

(78) Genericity (examples taken from Dik 1997: 176-177)

a. The dog is man’s best friend.

b. A dog is very faithful.

c. Dogs are very faithful.

Whereas (78.a) is formally definite, (78.b) is indefinite. Thus, in those cases in which there is a grammatical marker, the referential condition of generic terms varies accordingly. The absence of grammatical marks, as in (78.c), has been interpreted, by default, as indefinite.

Such a decision is not uncontroversial, since the distinctions in (78) are merely formal and vary from one language to another. (78.c), for example, is expressed by means of a definite plural form in French – Les chiens sont très fidèles – instead of the English indefinite plural (Dik 1997: 178). Nonetheless, and precisely for this reason, here I stick to the formal criteria of English to define the reference of generic terms.

With all this information in mind, I have studied reference in the examples of the corpus. In principle, no particular tendency can be appreciated; definite and indefinite NPs seem to alternate in a random fashion. I would like to mention, nonetheless, one special case that the analysis has brought to my attention: that of NPs without a determiner. In some cases, these NPs appear in newspaper articles or magazines. Hence, their use as such is rather ordinary given the standards of this specific writing style. I illustrate this in (79).

(79) LETHAL INCENDIARY grade performance on and off stage (Keith Moon brings chat show host Russell to the edge of a Harty attack, terrorist Townshend smashes guitar through a club roof).

In ordinary speech or writing, guitar would be preceded by a determiner, but its appearance without one in this context is nothing extraordinary. As regards reference, the context invites one to think that the omitted determiner would be something like the or his if it were explicit. In that case it would be definite. Examples of this type follow no specific rule and, consequently, have been considered individually in the analysis, paying special attention to the context. Besides those in newspaper articles, the corpus has also revealed some other examples of NPs without a determiner. These are parts of literary texts in which the author takes licence to innovate and leave out determiners which would be present in non-literary texts. I illustrate this below:

(80) ‘Australia,’ he sobbed, forehead cracking forward against the pane.

In this case, forehead is interpreted as definite, given that the context makes the identity of the missing determiner accessible: his.

4.2.1.2 Event structure

4.2.1.2.a Subeventual structure

In chapter 3, I suggested that, in principle, both causative and inchoative events are structured into subevents. This idea has been corroborated by the analysis. The only cases in which the event cannot be decomposed into subevents are those which do not denote an event but a state or an activity. By way of illustration, I display an activity in (81):

(81) At that stage, I think David was just crashing around.

In (81) crashing means sleeping. In other words, it encodes an activity, not an event. Therefore, there is an initial event but not a result state, because there is no change. Examples like (81) are not prototypical change of state predicates. Consequently, they have been included in the non-prototypical group of examples from the corpus.

The same applies to states:

(82) It's a long walk to Meall Corranaich and just before the top, the ridge splits away to the west which causes some confusion in poor visibility.

In (82), the verb does not denote a change of state but a state. Hence, it is not a prototypical example.

As regards prototypical examples, the analysis has corroborated what I anticipated in chapter 3: all events have a subeventual structure that involves at least an initial event and a final event that conveys a result state. This schema complies with the theory of force dynamics, which explains change in terms of the PATH schema and the concept of flow. Force dynamics (Talmy 1988) links the two extremes, origin and destination, of the PATH image-schema (Johnson 1987) in such a way that the force exerted by a participant on another participant causes the latter to move and transmit energy towards the destination. Throughout the transmission of force from origin to destination, the participants are either adjacent or non-adjacent, depending on whether they are hitting the other participant or moving towards it. However, what is common to both relative positions of the participants is the flow: energy and participants move towards the same endpoint.

In the pre-Aristotelian tradition of Heraclitus, the flow was thought of in terms of the aphorism Panta Rhei: ‘Upon those who are stepping into the same rivers different and again different waters flow’ (Marcovich 1967: 206); that is, no man can ever step in the same river twice, because it will be neither the same man nor the same river. Here, time is understood in terms of changes throughout the universal flow. In other words, the universal flow of time and all timely things could not be perceived were it not for the changes undergone by beings. The highly abstract notion of flow – time – is understood by means of a more concrete notion, namely the notion of change. Change, being complex itself, is ultimately explained as discontinuity: an entity, through consumption or destruction, loses spatial continuity among its constituent parts. This is spatial discontinuity. It is perceivable and helps in understanding the more complex notion of temporal discontinuity, whereby an event, through termination, ceases to take place on a provisional or definitive basis.

There is probably agreement on the idea that time is understood in terms of the simpler notion of space – Lakoff (1993) ‘assumes that our metaphorical understanding of time in terms of space is biologically determined’ (Radden 2003: 226), because we are biologically determined to detect motion, objects and locations, but we are not endowed with detectors for time – but the study of verbs of pure change of state in general, and of the event structure that these verbs code, suggests that time is also understood by resorting to change. I have just made reference to Heraclitus’s river metaphor. The linguistic evidence that can be gathered in this respect is that pure verbs of change of state like break expand their meaning from change of state (thus spatial discontinuity) to interruption (thus temporal discontinuity).

In short, in its linguistic counterpart, this means that there are no punctual events. Events may be longer or shorter, but no event has zero duration.
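The minimal subeventual structure argued for here – an initial event followed by a final event conveying a result state, each with non-zero duration – can be sketched as a simple data structure. The class and field names below are my own, purely for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subevent:
    label: str        # e.g. 'process' or 'result state'
    duration: float   # strictly positive: no event lasts no time

    def __post_init__(self):
        if self.duration <= 0:
            raise ValueError("no event is truly punctual: duration must be > 0")

@dataclass
class ChangeOfStateEvent:
    subevents: List[Subevent]   # temporally ordered

    def result_state(self) -> Subevent:
        # the final subevent conveys the result state of the change
        return self.subevents[-1]

# 'The vase broke': an initial process plus a resulting broken state,
# however short-lived the transition is.
breaking = ChangeOfStateEvent([
    Subevent("process", duration=0.1),
    Subevent("result state", duration=1.0),
])
print(breaking.result_state().label)  # result state
```

The positive-duration check encodes the claim just made: even apparently instantaneous changes of state decompose into subevents that each take some time.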

4.2.1.2.b Headedness

The attempt to apply the notion of headedness to the examples of the corpus has brought up an interesting question. As mentioned above, the idea behind the label event headedness is that subevents are ordered not only temporally but also according to their prominence, which contributes to the correct interpretation of the whole event and also to the disambiguation of alternations like the causative/inchoative. Pustejovsky (1995a: 72) explains that the conventional role of a head in syntactic and semantic representation is to indicate prominence and distinction. In other words, the head is the most prominent subevent in an event structure; it contributes to the focus of the interpretation by specifying ‘what part of the matrix event is being focused by the lexical item in question’ (Pustejovsky 1995a: 72). The question is in what terms ‘prominence’ is measured or determined. Pustejovsky presents this idea as something intuitive. Thus, when distinguishing between achievements and accomplishments, he says that, intuitively, the initial event is headed in accomplishment verbs, focusing the process that brings about the result state, whereas the latter is focused or headed in the case of achievements. Pustejovsky goes on to apply this distinction to alternations such as the causative/inchoative, the initial event or process being headed in the case of causatives and the result state in the case of inchoatives. From this one might derive that there is a parallelism between causatives and accomplishments and between inchoatives and achievements but, as I show below, this is not necessarily so.42 What is more, the corpus has supplied examples whose structure is causative while the verb depicts an achievement. I illustrate this in (83):

(83) These twin qualities have given plutonium an almost mythic quality, perhaps best captured when the manic American hero of the TV drama 'Edge of Darkness' cracks two blocks together in an instant miniature explosion.

(83) depicts a causative event which occurs instantaneously, as the adjunct 'in an instant miniature explosion' evinces. The problem is that this description is inconsistent with Pustejovsky's idea of headedness, because causatives are headed on the first subevent and achievements on the second. Again, the inconsistency lies in the fact that there is no clear definition of prominence and, consequently, of headedness. The same applies to the examples below:

(84) I step on a few and watch them crack in the middle like a sheet of glass.

42 As a matter of fact, on some occasions, Pustejovsky seems to put the pairs accomplishment/achievement and causative/inchoative on the same level, which leads to confusion. Thus, in Pustejovsky (1992: 59) he claims that achievements and accomplishments can be distinguished solely in terms of an agentive/non-agentive distinction, a criterion that he applies a few pages earlier to the causative/inchoative alternation and which, in my opinion, is more correct. Interestingly, he also claims in the same work that the difference existing between achievements and accomplishments is that in the former the change occurs instantaneously, whereas in the latter it is more gradual (Pustejovsky 1992: 50).

(84) illustrates an inchoative construction – that is, headed on the second subevent – which in turn depicts an accomplishment, that is, headed on the first subevent. Van Valin and LaPolla (1997: 106) explain that the properties of the object broken – for instance, whether it is brittle or hard – may affect the interpretation of the events denoted by break verbs. Thus, if an object is brittle, the breakage is more likely to have a – I would add: more – punctual interpretation than if the object is non-brittle, in which case the event would more likely denote an accomplishment. However, as this example shows, the material of which the object is made is not the only parameter that determines the interpretation of break verbs. Number can also play a part.43 Thus, the accomplishment reading of (84) is mostly motivated by the fact that the object being cracked is plural, which implies the repetition of the same action over some temporal span, because it is not very likely that all its component parts will crack absolutely simultaneously. Hence, it is interpreted as durative.44 Nonetheless, the fact that the object being broken is plural does not mean that the action will necessarily always be durative, as shown in (85):

(85) The speeding Audi and the Volvo slammed into each other with sufficient force to buckle the Audi's grille and shatter both headlights.

43 This idea is shared by Pustejovsky (1992: 50): 'What are apparently lexical properties of the verb can be affected by factors that could not possibly be lexical. […] The presence of a bare plural object shifts the interpretation of a logically culminating event to an unbounded process.'
44 In order to explain Van Valin and LaPolla's argument I stick to their terminology and to the distinction between durative and punctual events. However, it must be borne in mind that, in my view, even those events that they describe as punctual or instantaneous have duration. This applies to all future mentions of the terms punctual or durative in this dissertation. For convenience I use the traditional terminology, but from now on the distinction between durative and punctual/instantaneous must be understood as more durative and less durative, respectively.

(85) is ambiguous as regards its interpretation, but it might well be understood that the two headlights broke instantaneously, at the same time, because of the collision, as is the case with the two blocks in (83) above.

Another example that illustrates the inconsistency of the notion of headedness is the following:

(86) 'Crack' the tile along the cut

If the tile is cracked with one blow, then once more we would be dealing with a causative event – that is, headed on the first subevent – and an achievement – that is, with prominence on the second subevent. However, if an adverb such as ‘slowly’ were added to the example, the interpretation would change to that of an accomplishment.

One last problematic example that I would like to comment on is (87):

(87) Insert sausage-shaped rolls of acid-free tissue paper in folds, so that they do not become permanent — caused, as a microscope would show, by fibres cracking.

This example presents an inchoative event – hence headed on the second subevent – which depicts an accomplishment, that is, headed on the first subevent. The durative effect is achieved through the use of a plural object (syntactically the subject) and the -ing form of the verb, which indicates repetitiveness throughout time. Hence, another inconsistency.

Van Valin and LaPolla (1997) and Van Valin (2005) provide different criteria to distinguish between achievements and accomplishments and between causatives and inchoatives. On the one hand, they base the distinction between achievements and accomplishments on the duration of the event. Thus, achievements encode instantaneous changes whereas accomplishments encode changes over some longer temporal span (Van Valin and LaPolla 1997: 104). On the other hand, the distinction between causatives and inchoatives is based on the fact that in the causative paraphrase the activity that brings about the result state is made more explicit by specifying the actor involved, whereas the causer is not mentioned in the inchoative paraphrase (Pustejovsky 1992: 55). This approach is consistent with the examples displayed.

Nonetheless, the idea of prominence cannot be discarded as such. Although Pustejovsky does not provide a precise explanation of this notion, it is undeniable that intuition works when it comes to deciding which subevent seems to be more prominent in each case. Therefore, I suggest that the notion of headedness must be reconsidered and developed properly. The question is on what basis we can determine prominence and, consequently, headedness. If prominence had grammatical impact it would be easier to explain, but unfortunately there is little evidence for that. The examples presented above seem to point to number – that is, whether the object being broken is singular or plural –, to the kind of adjuncts that co-occur with the verb, and also to verbal tense. Pustejovsky (1995a: 191), on the other hand, indirectly points at the expression of arguments by saying that only those arguments which are associated with the headed event are obligatorily expressed at surface structure. This implies that inchoatives will never be left-headed. Thus, we could claim that the notion of headedness is useful for the disambiguation of alternations such as the causative/inchoative but not so much for pairs such as accomplishment/achievement.
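The disambiguating role that headedness plays in the causative/inchoative alternation can be sketched informally. The following Python fragment is a merely illustrative encoding of my own (the class names and the helpers reading and obligatory_args are expository devices, not part of Pustejovsky's formalism): a left-headed transition is read as causative, a right-headed one as inchoative, and only the arguments of the headed subevent are obligatorily expressed, which is why the inchoative need not express the causer.

```python
# Illustrative sketch (not Pustejovsky's own notation): headedness as a
# disambiguator of the causative/inchoative alternation.
from dataclasses import dataclass

@dataclass
class Subevent:
    name: str    # e.g. "e1", "e2"
    sort: str    # "process" or "state"
    args: tuple  # arguments associated with this subevent

@dataclass
class EventStructure:
    subevents: list
    head: str    # name of the headed (most prominent) subevent

    def reading(self):
        # Left-headed transition -> causative; right-headed -> inchoative
        return "causative" if self.head == self.subevents[0].name else "inchoative"

    def obligatory_args(self):
        # Only arguments of the headed subevent must surface (Pustejovsky 1995a: 191)
        return next(e.args for e in self.subevents if e.name == self.head)

crack = [Subevent("e1", "process", ("x", "y")),
         Subevent("e2", "state", ("y",))]

causative = EventStructure(crack, head="e1")
inchoative = EventStructure(crack, head="e2")

print(causative.reading(), causative.obligatory_args())    # causative ('x', 'y')
print(inchoative.reading(), inchoative.obligatory_args())  # inchoative ('y',)
```

Under this toy encoding, the causer x is obligatory only in the left-headed (causative) structure, mirroring the observation that inchoatives are never left-headed.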

4.2.1.2.c Causal sequence

The analysis of causal sequence has matched the theoretical assumption of chapter 3: inchoative constructions do not depict a causal sequence whereas their causative counterparts do. However, as I have just shown, the relationship between headedness and causation is not so firm.

4.2.1.2.d Temporal relation

In chapter 3, I pointed out that the temporal relation between the subevents of break verbs does not seem to fully accord with any of the options suggested by Pustejovsky. The analysis has corroborated this idea, evidencing that the relation that exists between the two subevents shares features of both exhaustive ordered overlap and exhaustive ordered part of but does not comply with either of them in isolation. On the one hand, event 2 follows event 1, as dictated by exhaustive ordered overlap; on the other, there is some overlap of the final part of the process and the beginning of the result state. The problem is that the second event holds while the first event is still in progress and continues beyond it, which contradicts Pustejovsky's definition. Thus, a new label must be introduced here that covers all the temporal features described, for which I propose partial ordered overlap. Furthermore, I would also like to suggest that this relationship can be subject to further refinement. As I have said above, all changes of state imply duration, even if this is minimal. The analysis has provided examples of two main types: (1) events where the transition process from the initial to the final state is of minimal duration; and (2) events where the process takes longer to develop.

(88) a. 'Last week,' she said, 'my nail snapped off.'
b. Finally a heavy press crushes out the remainder of the must, but this is of an inferior quality.

When duration is at its minimum, as in (88.a), the second subevent – that is, the result state – overlaps with the (whole) process and extends beyond it. The overlap is almost total, but not complete. When the process is more durative, as in (88.b), only the final part of the process and the beginning of the result state overlap, and the latter then extends beyond the former. Therefore, I suggest that there are two different types of temporal relations between the subevents of break verbs. In the case of events with processes of minimal duration the temporal relation will be known as full ordered overlap; when the process is more durative, the temporal relation will be labelled partial ordered overlap. I illustrate the differences that exist among the three types of temporal relationships in the following diagrams:

a. Exhaustive ordered overlap
b. Partial ordered overlap
c. Full ordered overlap

[Diagrams omitted: each schema shows the relative extension in time of the process (P) and the result state (S).]

Figure 4.9 Temporal relationships
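The three temporal relations can also be given an informal interval-based sketch. The following Python fragment is a merely illustrative approximation of my own (the classify function, the numeric interval encoding and the 0.9 cut-off for 'almost total' overlap are expository assumptions, not part of the formal apparatus): it models the process (P) and the result state (S) as intervals and classifies their relation along the lines of Figure 4.9.

```python
# Illustrative sketch: process P and result state S as [start, end) intervals.
# The 0.9 threshold for "almost total" overlap is an arbitrary expository choice.

def classify(p, s):
    p_start, p_end = p
    s_start, s_end = s
    assert s_end > p_end  # the result state always extends beyond the process
    # Portion of the process that is covered by the result state
    overlap = max(0.0, min(p_end, s_end) - max(p_start, s_start))
    frac = overlap / (p_end - p_start)
    if frac == 0:
        return "exhaustive ordered overlap"   # S simply follows P
    if frac >= 0.9:
        return "full ordered overlap"         # S covers almost all of P and beyond
    return "partial ordered overlap"          # only the tail of P overlaps S

print(classify((0, 5), (5, 9)))    # exhaustive ordered overlap
print(classify((0, 5), (4, 9)))    # partial ordered overlap
print(classify((0, 5), (0.2, 9)))  # full ordered overlap
```

The sketch makes explicit that the two proposed relations differ only in how much of the process the result state covers, while both require the state to extend beyond the process.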

The fact that all events are durative also has other implications. To be precise, if events are durative, the feature [± punctual] should be dispensed with for the definition of logical structures. At the same time, this would render the distinction between accomplishments and achievements irrelevant, which would put an end to the difficult task of distinguishing them, as Van Valin and LaPolla (1997) themselves acknowledge. Furthermore, this would also mean that RRG's LSs would resemble GL's lexical representations a bit more, because Pustejovsky only distinguishes three event types: states, processes and transitions.

4.2.1.2.e Type coercion

The analysis of prototypical examples has not provided any case of type coercion, given that the interpretation of the events in this group is literal and direct. Despite this, the question arises whether the causative/inchoative alternation could be considered a case of aspectual coercion in itself, as Koontz-Garboden (2006) points out.

For the time being, I consider the causative/inchoative alternation a case of logical polysemy expressed through headedness underspecification, which makes both grammatical constructions available. Nonetheless, I believe that Koontz-Garboden's (2006: 23) point is enlightening and certainly deserves further research.

4.2.1.2.f Merge: one special case

The analysis has brought to my attention a series of examples which are characterized by combining the basic meaning of change of state verbs with that of motion or change of position. Goldberg (1995) includes examples of this type within the broader group of the so-called English Caused-Motion Construction although, as I explain below, there are certain differences. This construction is semantically characterised by a causer argument which directly causes the theme argument to move to a new location which is designated by the directional phrase, and it covers the following types of expressions:

(89) Expressions covered by the English Caused-Motion Construction (Goldberg 1995: 152):

a. They laughed the poor guy out of the room.

b. Frank sneezed the tissue off the table.

c. Mary urged Bill into the house.

d. Sue let the water out of the bathtub.

e. Sam helped him into the car.

f. They sprayed the paint onto the wall.

Goldberg tries to justify the existence of this construction by showing that its semantics is not compositionally derived from other constructions existing in the grammar, such as the lexical items that instantiate it.45 In other words, the combination of the lexical items that instantiate the construction does not by itself yield the interpretation of the construction (Goldberg 1995: 153). In order to illustrate this, and focusing on the English Caused-Motion Construction in particular, Goldberg (1995: 153) explains that it is not strange to find cases of verbs that, in isolation, do not encode the caused-motion semantics: in some cases they do not encode causation, as illustrated by kick in (90.a); in others they do not encode motion, as shown in (90.b):

(90) Examples taken from Goldberg (1995: 153)

a. Joe kicked the wall.

b. Frank squeezed the ball.

45 Constructions also include fully predictable patterns as long as they occur with sufficient frequency (Goldberg 2006: 5). Nonetheless, such a statement refers mainly to highly common patterns such as interrogatives, relatives, passives, and so forth.

Nonetheless, when used in the construction, these verbs encode both causation and motion:

(91) Examples taken from Goldberg (1995: 153)

a. Joe kicked the dog into the bathroom. (He caused the dog to move into the bathroom.)

b. Frank squeezed the ball through the crack. (The ball necessarily moves.)

Going deeper into this idea, Goldberg (1995: 153-154) claims that many transitive verbs which participate in this construction do not bear the same semantic relation to their direct object as they do in simple transitive sentences. She illustrates this idea by claiming that (92) does not entail (93):

(92) Sam tore a piece off the block.

(93) Sam tore a piece.

In other words, the construction in (92) does not encode a change of state, but only a change of position – plus a specification of the manner in which the change of position took place. Nonetheless, as I show below, this is not the case with all break verbs and/or not in all contexts.

Besides the basic sense posited above, the caused-motion construction is associated with a number of related senses such as X enables Y to move Z, illustrated in (94.a), X prevents Y from moving Comp(Z), illustrated in (94.b), X helps Y to move Z, illustrated in (94.c), and X causes Y to move Z, illustrated in (94.d). In this sense, the construction shows a certain degree of polysemy.


(94) Examples taken from Goldberg (1995: 161-162)

a. Sam allowed Bob out of the room.

b. Sam barricaded him out of the room.

c. Sam helped him into the car.

d. Sam invited him out of her cabin.

The latter sense, illustrated in (94.d), also includes examples like Frank sneezed the tissue off the nightstand, the difference being that (94.d) involves a force-dynamic relation (Talmy 1985) that encodes a communicative act and, consequently, the motion is not strictly entailed (that is, the fact that Sam invites someone out of the house does not necessarily entail that the person actually leaves the house). Frank sneezed the tissue off the nightstand, by contrast, entails motion.

Although the construction is claimed to exist independently of the verbs which instantiate it, there seem to be some idiosyncrasies which can only be accounted for through specific semantic constraints such as direct causation or one event per clause.

Constraints on what kinds of situations can be encoded by the caused-motion construction are further specified and, with regard to change of state verbs in particular, Goldberg claims that there is some incidental motion implied in the action denoted by the verb. Consider the following example:

(95) The butcher sliced the salami onto the wax paper. (Example taken from Goldberg 1995: 171)

Even if the directional phrase were not expressed in the syntax, the action of slicing salami somehow involves some motion, since the salami naturally or incidentally falls away from the slicer. Obviously there may be some scenarios where motion is prevented. Nonetheless, in a conventional scenario, the former would be the most natural interpretation. Apart from this, Goldberg points out another constraint for change of state verbs. Consider the following examples:

(96) Examples taken from Goldberg (1995: 171)

a. *Sam unintentionally broke the eggs onto the floor.

b. Sam carefully broke the eggs into the bowl.

Only if the motion is intended are the examples acceptable.

Summing up these constraints, Goldberg (1995: 174) writes:

if the verb is a change-of-state verb (or a verb of effect), such that the activity causing the change of state (or effect), when performed in a conventional way, effects some incidental motion and, moreover, is performed with the intention of causing the motion, the path of motion may be specified.

Pustejovsky (1991) tries to avoid the proliferation of additional senses and constraints to account for the polysemy of these verbs by arguing that the meaning of the sentence can be derived through co-composition, that is, by composition of the meanings of the constituent parts. Against this view, Goldberg (1995: 155) claims that this approach may predict the meaning of sentences but not produce it. Although both approaches have strengths, they also present weaknesses. In the case of Pustejovsky, as Goldberg points out, his approach is plausible only from the perspective of interpretation, not from that of production. Goldberg's, on the other hand, has the basic problem that there is no precise typology of the possible constructions that exist in language. Furthermore, it requires too many constraints and extended senses to account for the polysemy of these verbs, which is undesirable. On top of that, focusing on verbs of change of state, her approach presents a problem. She argues against the idea that the semantics of the caused-motion construction can be derived from the verb's inherent semantics, the preposition's semantics, or the semantics of the combination of verb plus preposition (Goldberg 1995: 154). This idea may work for examples like Sam tore a piece off the block which, as explained above, does not necessarily entail Sam tore a piece, but the corpus under analysis has provided some other examples where the verb does not change its inherent semantics.46 Consider the following examples:

(97) Examples taken from the BNC

a. Rodney cracked two eggs into the frying pan.

b. All the floorboards had been removed, and the ceilings and plasterwork had crashed down into the basement with the weight of water pouring through the roof.

c. When Marevna finally went home at dawn: 'Max Jacob stood with a missal in his hand and Modigliani was still on his feet methodically tearing the rest of the wallpaper off the walls and singing an Italian song.'

46 I disagree with the treatment given to examples like The butcher sliced the salami onto the wax paper in (95) above, which Goldberg considers as illustrations of the caused-motion construction. In my opinion, the change-of-state meaning remains when the verb slice is used in combination with the prepositional phrase onto the wax paper; hence the meaning of the construction can be derived from the combination of its parts. In other words, if we follow Goldberg’s definition of construction, this example cannot be regarded as such.

Although examples (97.a) and (97.b) are open to more than one interpretation, the most straightforward one entails, from my point of view, that Rodney cracked two eggs and that the plasterwork crashed. (97.c) is more controversial: at first sight it seems to preclude the change of state reading; however, to a certain extent, it may also be interpreted as entailing that Modigliani methodically tore the rest of the wallpaper, because wallpaper is stuck to the wall and, consequently, it tends to break while being removed from it. Therefore, contrary to Goldberg's claim above, the semantics of the sentence can be derived from the combination of the semantics of the verb plus the semantics of the preposition.

These problems are partly solved in Pustejovsky's theory. Pustejovsky (1991) claims that the meaning of the whole sentence can be compositionally derived from the combination of the meanings of the constituent parts, thus avoiding positing additional verb senses. By composition we must understand 'a "bilateral semantic selection" between the verb and its complement, giving rise to a novel sense of the verb in each context of use' (Pustejovsky 2000: 8). Pustejovsky illustrates this process by means of the verb float, which shows a systematic polysemy between a process and a transitional reading:

(98) Examples taken from Pustejovsky (1995a: 125)

a. The bottle is floating in the river.

b. The bottle floated under the bridge.

The lexical representation for this verb is illustrated in (99):

(99) Partial lexical representation for float (taken from Pustejovsky 1995a: 125)

float
  ARGSTR   = [ ARG1 = [1] physobj ]
  EVENTSTR = [ E1 = e1:state ]
  QUALIA   = [ AGENTIVE = move (e1, [1]) ]

If the process verb is combined or composed with a PP such as into the cave, whose lexical representation is given in (100), the result is a transition whose lexical representation is illustrated in (101). As Pustejovsky (1995a: 126) explains, the directional PP carries the motion sense as part of its qualia structure and, when in composition with float, the matrix predicate is temporally subordinated to the insertion of the PP.

(100) Representation for into the cave, taken from Pustejovsky (1995a: 126)

into the cave
  ARGSTR   = [ ARG1 = [1] physobj
               ARG2 = [2] the_cave ]
  EVENTSTR = [ E1 = e1:process
               E2 = e2:state
               RESTR = <∝
               HEAD = e2 ]
  QUALIA   = [ FORMAL = at (e2, [1], [2])
               AGENTIVE = move (e1, [1]) ]


(101) Representation for float into the cave, taken from Pustejovsky (1995a: 126)

float into the cave
  ARGSTR   = [ ARG1 = [1] physobj
               ARG2 = [2] the_cave ]
  EVENTSTR = [ E1 = e1:state
               E2 = e2:process
               E3 = e3:state
               RESTR = <∝ (e2, e3), ο∝ (e1, e2)
               HEAD = e3 ]
  QUALIA   = [ FORMAL = at (e3, [1], [2])
               AGENTIVE = move (e2, [1]), float (e1, [1]) ]

As the representation shows, the composition process results in a new sense which conflates both the process and the motion senses into a transitional reading. Notice that this new sense of float exists only phrasally, not lexically (Pustejovsky 1995a: 126).

Thus, so far this mechanism does not differ much from Goldberg's proposal. The main advantage that Pustejovsky's proposal has over Goldberg's is that, while the latter only explains those cases which fit the definition of construction – and, as I have just shown, not always in a satisfactory way –, the former can also account for those examples that depart from that definition.

More precisely, I suggest that Pustejovsky's representations can also account for those expressions whose meaning can be derived from the addition of the meanings of their parts or, in other words – by contrast to constructions –, for those expressions whose meaning exists lexically. I explain how in what follows. Consider again example (97.a), repeated in (102):

(102) Rodney cracked two eggs into the frying pan.

As said above, in my opinion (102) is compositionally derived from the parts that instantiate it: a change of state and a change of position. This means that the new expression cannot be regarded as a construction in the line of Goldberg. After all, we may ask how she would explain that the same syntactic representation or form may have more than one meaning. Likewise, this new sense also differs from those explained by Pustejovsky through co-composition. However, Pustejovsky's proposal differs from that of Goldberg in that, by using Pustejovsky's compositional representations, examples like (102) can also be explained. Therefore, following Pustejovsky's lead, I have devised the lexical representation for crack and the representation for the PP into X, and I have combined them as follows:

(103) Representation of the transitive form of crack (adapted from Pustejovsky 1995a: 80)

crack
  EVENTSTR = [ E1 = e1:process
               E2 = e2:state
               RESTR = <α ]
  QUALIA   = [ FORMAL = cracked (e2, y)
               AGENTIVE = crack_act (e1, x, y) ]

(104) Representation of into X (adapted from Pustejovsky 1995a: 126)

into X
  ARGSTR   = [ ARG1 = [1] physobj
               ARG2 = [2] X ]
  EVENTSTR = [ E1 = e1:process
               E2 = e2:state
               RESTR = <∝
               HEAD = e2 ]
  QUALIA   = [ FORMAL = at (e2, [1], [2])
               AGENTIVE = move (e1, [1]) ]

(105) My representation of the merge interpretation of x cracks y into z

x cracks y into z
  ARGSTR   = [ ARG1 = x:human
               ARG2 = y:physobj
               ARG3 = z:physobj ]
  EVENTSTR = [ E1 = e1:process
               E2 = e2:state
               E3 = e3:process
               E4 = e4:state
               RESTR = <∝ (e3, e4), <∝ (e1, e2), <∝ (e2, e4)
               HEAD = e2, e4 ]
  QUALIA   = [ FORMAL = at (e4, y, z)
               AGENTIVE = move (e4, y), crack_act (e1, x, y) ]

As the representation shows, the composition process results in a new expression which conflates and preserves both the change of state and the change of position meanings. Obviously, it cannot be regarded as a construction because its meaning is derived from the meanings of its parts. Thus, I will refer to this new type of expression as merge. Its corresponding lexical conceptual structure (LCS) would be as follows:

(106) cause ([act (x,y)], become ([pred1 (y)] & [pred2 (y, z)]))

where x stands for the actor, y for the undergoer, and z for the destination. If applied to example (102) above, the resulting LCS would be the following:

(107) LCS for Rodney cracked two eggs into the frying pan: cause ([act (Rodney, two eggs)], become ([cracked (two eggs)] & [into (two eggs, the frying pan)]))
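As an informal check, the LCS in (107) can be assembled mechanically from its parts. The following Python sketch is a merely expository device of my own (the pred helper and the string encoding are not part of any framework): it builds the LCS of (107) out of the change-of-state and change-of-position predicates.

```python
# Illustrative sketch: assembling the LCS of (107) compositionally.
def pred(name, *args):
    # Render a predicate in the LCS notation used in (106)/(107)
    return f"{name} ({', '.join(args)})"

act = pred("act", "Rodney", "two eggs")
cracked = pred("cracked", "two eggs")              # change of state (pred1)
into = pred("into", "two eggs", "the frying pan")  # change of position (pred2)

lcs = f"cause ([{act}], become ([{cracked}] & [{into}]))"
print(lcs)
# cause ([act (Rodney, two eggs)], become ([cracked (two eggs)] & [into (two eggs, the frying pan)]))
```

The point of the sketch is simply that the LCS is fully determined by its parts, in line with the claim that the meaning of the merge is derived from the meanings of its constituents.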

Thus, I suggest that the corresponding event structure would be as shown in Figure 4.10.

[Diagram omitted: a transition T dominating two subevents, e1 and e2, each consisting of a process (P) followed by a state (S).]

Figure 4.10 Event structure for 'Rodney cracked two eggs into the frying pan'

Notice that the new representation posited in (107) above does not require any new mechanism or data for its depiction; rather, it utilizes Pustejovsky's representations in a different way. More precisely, instead of producing a new construction with only one head, a merge will have two, and this is reflected in the event structure. Furthermore, the AGENTIVE qualia value depicts not only the final motion sense, but also the change of state. The difference existing between co-composition and merge can be more clearly appreciated if we compare the two representations. Let us assume that Goldberg's (1995) interpretation of (102) as a construction is correct and the conflation of crack and the PP into the frying pan results in a caused-motion construction. Following Pustejovsky (1995a), I suggest that the representation for that construction (or the result of the full composition) would be as shown in (108):

(108) My lexical representation of the co-composition interpretation of x cracks y into z

x cracks y into z
  ARGSTR   = [ ARG1 = x:human
               ARG2 = y:physobj
               ARG3 = z:physobj ]
  EVENTSTR = [ E1 = e1:process
               E2 = e2:state
               E3 = e3:process
               E4 = e4:state
               RESTR = <∝ (e3, e4), <∝ (e1, e2)
               HEAD = e4 ]
  QUALIA   = [ FORMAL = at (e4, y, z)
               AGENTIVE = move (e4, y) ]

As can be seen in (108), the new expression conflates both the change-of-state and the motion senses into a new change-of-position reading. In Pustejovsky’s (1995a: 124) terms, the operation of co-composition results in a qualia structure for the VP that reflects aspects of both constituents; these include:

(A) The governing verb crack applies to its complements;

(B) The complements co-specify the verb;

(C) The composition of qualia structures results in a derived sense of the verb.

The comparison of the two representations shows that the main differences between the merge of a change of state with a change of position, on the one hand, and a caused-motion construction, on the other, lie in their AGENTIVE qualia value and their HEAD value.
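The contrast between (105) and (108) can be sketched as a small data comparison. The following Python encoding is my own illustrative device (the dictionary fields and the helper preserves_change_of_state are expository assumptions): the merge keeps two heads and both AGENTIVE predicates, whereas the co-composition reading keeps a single head and only the motion predicate.

```python
# Illustrative sketch: the HEAD and AGENTIVE values of (105) vs (108).
merge = {
    "head": {"e2", "e4"},
    "agentive": {("move", "e4", "y"), ("crack_act", "e1", "x", "y")},
}
co_composition = {
    "head": {"e4"},
    "agentive": {("move", "e4", "y")},
}

def preserves_change_of_state(rep):
    # The change-of-state sense survives iff crack_act remains in AGENTIVE
    return any(p[0] == "crack_act" for p in rep["agentive"])

print(len(merge["head"]), preserves_change_of_state(merge))                    # 2 True
print(len(co_composition["head"]), preserves_change_of_state(co_composition))  # 1 False
```

On this toy encoding, the two readings of the same string differ precisely in the number of heads and in whether the change-of-state predicate survives, which is the substance of the comparison above.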

Nonetheless, the question remains why the same syntactic expression is sometimes interpreted as a merge and other times as a case of co-composition.

Thus, Pustejovsky's compositional system is a powerful mechanism that successfully accounts for constructions like Goldberg's but which, thanks to the qualia structure, also offers an alternative account of those expressions which cannot be explained by the latter.

The differences existing at the representational level are also reflected at the syntactic and cognitive levels. If we go back to example (102), which I repeat in (109) for convenience,

(109) Rodney cracked two eggs into the frying pan.

and we compare it, in both its co-compositional and merge readings, from four different approaches, namely semantic, semiotic, cognitive and (morpho)syntactic, we can observe the following:

(A) If the example is interpreted as a construction/co-compositional case:

a. From a semantic perspective, it conflates two simple events, a change of state and a change of position, into a complex one. Notice that the change of state affects the nature of the state which forms part of the event, but the change of position does not affect the nature of the transferred object.

b. From a semiotic perspective, the conflation of these two simple events into a more complex one involves a higher effort of interpretation and a lower effort of production.47

c. Cognitively speaking, an event which originally encoded a change of state is now conceptualised as a change of position, which is also perceived as less complex. In other words, a less perceptible and/or more abstract event is now presented as a more perceptible and/or less abstract one. Cognitivists such as Lakoff and Turner (1989) explain mappings like this in terms of the metaphor CHANGES OF STATE ARE CHANGES OF POSITION, which is a complex version of the more basic one STATES ARE LOCATIONS. Going into more detail, this conceptual mapping could be explained around the notion of Event Structure Metaphor (Lakoff 1993; Lakoff and Johnson 1999). As Jimenez Martínez-Losa (2005: 5) explains, 'the Event Structure metaphor maps the source domain of motion in space onto the target domain of events', which comprises states, changes, processes, actions, causes, purposes, and means.

d. At a syntactic level, and in line with the semiotic approach, both the merge and the construction/co-composition interpretations reveal some simplification in their syntactic structure. Thus, if we display the full sequence of events and provide a syntactic expression for each of them, the final realization would be as follows:

47 There is a heated debate among contemporary semioticians about the use of terms such as encoding/decoding for the creation and interpretation of texts. For simplification I will refer to the processes of encoding and decoding as production and interpretation respectively, but I refer the reader to Hawkes (1977) and F Smith (1988) for further discussion of this issue.

(110) Full syntactic expression for ‘Rodney cracked two eggs into the frying

pan’: Rodney cracked two eggs + the eggs went into the frying pan.

The new expression, by contrast, consists of one single clause:

(111) Simplified representation for (110): Rodney cracked two eggs into the

frying pan

The examples clearly illustrate the reduction of the expression at the syntactic

level.

This is an interesting case of conflicting principles. The economy of language leads one to use a reduced or simplified syntactic expression that involves a lower cost of production. However, in terms of processing or interpreting the text, the reduced expression implies a higher cost of interpretation.

(B) If the example is interpreted as a merge:

a. From a semantic perspective, it depicts two simple events that combine but still

encode the two events. Hence, the level of complexity remains the same.

b. From a semiotic perspective, the combination of these two simple events into

one only predicate involves a lower effort of production but the same effort of

interpretation.

c. Cognitively speaking, there is no substantial change either. The meanings

encoded by the two events are still present in the merge, and their interpretation

is straightforward.

d. At a syntactic level, the merge reveals the same simplification in its syntactic

structure as that undergone by the construction/co-composition.

As was the case with the construction/co-compositional interpretation, the economy of language leads one to use a reduced or simplified syntactic expression that involves a lower cost of production. However, in this case, the reduced expression implies neither a higher nor a lower cost of interpretation, but the same as the unreduced expression.

Finally, I would like to point out that, as I have mentioned above, the analysis has provided examples with other directional prepositions indicating path and origin that seem to participate in this merge operation too. Recall the examples displayed in

(39), which I reproduce here for convenience:

a. When Marevna finally went home at dawn: ‘Max Jacob stood with a missal in his hand and Modigliani was still on his feet methodically tearing the rest of the wallpaper off the walls and singing an Italian song.’
b. On his back like a beetle, three sat on him while two ripped his daysack from his chest.
c. The kite might start to climb, then perform a curving sweep up to about 45 degrees, execute a beautiful turn and point head downwards until it crashes into the ground.
d. LETHAL INCENDIARY grade performance on and off stage (Keith Moon brings chat show host Russell to the edge of a Harty attack, terrorist Townshend smashes guitar through a club roof).

As the examples illustrate, the change of position is not restricted to movement to a new location; it may also indicate movement from and through some location. Examples of the former are already considered by Goldberg (1995; 2006) in her study of the caused-motion construction, and, at least at first sight, they seem to be compatible with the merge operation explained above. It must be noted, however, that examples with through behave slightly differently. More precisely, by contrast to the object of the directional

PPs that indicate origin and destination, the object of the directional path PP undergoes a change of state, just like the undergoer argument. Thus, if we analyse an example like

The singer smashed the guitar through the window, the full syntactic representation of the sentence would conflate three events instead of two: the singer smashed the guitar + the window broke + the guitar went through the window.48 Whether this is the only difference existing between this and the previous cases is not clear yet, although the examples with directional PPs with through seem to be perfectly compatible with the merge operation. Nonetheless, more evidence is needed before making any firm claim, so I leave this question open for further research.

To close this section, I would like to point out that, if we extrapolate these theoretical issues to my analysis, co-composition only applies to less-prototypical and non-prototypical examples, since they preclude the change-of-state meaning. And the same claim can be made with respect to Goldberg’s constructions: only less- and non-prototypical examples illustrate Goldberg’s caused-motion construction, since the meaning of those examples cannot be derived from the combination of the meanings of their constituent parts. Prototypical examples, by contrast, always preserve the change-of-state meaning which is then combined with that of the motion prepositional phrase, thus leading to a merge instead.

48 Remember that, in this case, ‘The singer smashed the guitar’ implies that the guitar becomes smashed.


4.2.1.3 Summary

The analysis of the argument and event structures of break verbs has revealed that the template for this verbal class extends beyond the basic one propounded by the

RRG. Van Valin and LaPolla’s (1997) basic patterns of complementation for break verbs accommodate a nucleus, an argument, and an argument-adjunct, as in Tom tore the picture into tiny bits:

(112) Basic pattern of complementation for break verbs, following from Van Valin and

LaPolla: V + A + AAJ = Core

In terms of event structure, this template may have two readings: (1) an initial state that changes and ends in a different resultative state; and (2) an initial position that changes and ends in a different resultative position.

(113) Event structure for break verbs
a. STATE > CHANGE OF STATE > RESULTATIVE STATE
b. POSITION > CHANGE OF POSITION > RESULTATIVE POSITION

I illustrate the two readings in (114.a) and (114.b) respectively:

(114) a. Or better still find the source, the cassette player, and rip out the cassette and jump up and down upon it until it is smashed into little pieces.

b. When this book crashed through my letterbox recently and

I read the title my heart sank.

And, sometimes, the two of them converge into one that codes both a change of state and a change of position – what I have called merge – as illustrated in (115):

(115) The first step is to crush the grapes into the primary fomenter until it is only 2/3 full.

However, what Van Valin and LaPolla’s (1997) patterns do not account for is the presence of adjuncts. Thus, the question boils down to how the RRG explains an example such as (116):

(116) I opened the door with the key./The key opened the door.

The fact that the RRG cannot account for examples like (116) means that this theory should progress in a less projectionist direction that includes adjuncts in its templates, the way Faber and Mairal (1999) or Levin and Rappaport Hovav (2005) do. Levin and

Rappaport Hovav (2005) represent a non-projectionist approach, mainly based on alternations of arguments and adjuncts. In an intermediate position between this non-projectionist approach and the projectionist one of RRG lies Faber and Mairal’s (1999) and Mairal and Faber’s (2002) Lexical Functional Model. Faber and Mairal’s model is eclectic in nature: it replaces Dik’s predicative frames with the RRG’s logical structures. It also turns to models of semantic universals, like RRG, in search of a metalanguage of universal validity; and it also incorporates Levin’s verbal classes, with

the corresponding variations between arguments and adjuncts. Their templates, like those of RRG, include all possible arguments of the verb; but, in addition, they go deeper into the semantic decomposition process to make room for other semantic functions, such as instrumental or manner. Thus, for manner-of-cutting verbs they suggest the following scenario:

(117) Mairal and Faber (2002: 55)

[[do’ (w, [use.sharp-edged.tool(α)in(β)manner’ (w, x)] & [BECOME be-at’ (y, x)])] CAUSE [[do’ (x, [make.cut.on’ (x, y)])] CAUSE [BECOME pred’ (y, (z))]]], α = x.

The structure in (117) depicts not only the arguments of the verb but also the manner in which the cutting activity is carried out upon the patient, and the result state.

Therefore, if a dictionary like the CCED displays the patterns in Table 4.17 for break verbs, Faber and Mairal (1999) build their templates according to the maximal projection for the lexical class (let us remember that, as Mairal and Cortés 2000-2001 explain, lexical templates are maximal projections of conceptual substance that pack together the regularities of a lexical class into one shared representation):

VERB — Complementation patterns49
Break: V n, V, V n into pl-n, V into pl-n, V adj, V with n, V from n, V n with n, V n of n, V n to n
Chip: V, V n
Crack: V, V n, V P on n, V P
Crash: V, V into n, V n, V prep/adv
Crush: V n, V
Fracture: V, V n
Rip: V, V n, V n with adv, V n prep
Shatter: V, V into n, V n
Smash: V, V n, V into n, V through n, V way prep/adv, V prep/adv, V n prep
Snap: V, V adv/prep, V n adv/prep, V adv, V n, V with quote, V at n
Splinter: V, V prep/adv, V n
Split: V, V in/into n, V n in/into n, V n, V n between
Tear: V n, V n prep, V n with adv, V, V prep/adv, V at n
Table 4.17 Complementation patterns for break verbs

Therefore, if break is complemented on different occasions by two arguments, one argument and one argument adjunct, two arguments and an adjunct, and so on,

Faber and Mairal’s templates account for all possible semantic functions; the selection restrictions for each individual lexical unit will be expressed in the corresponding lexical entry. Thus, their lexical templates include those adjuncts that impact semantic complementation, like instrument in the case of break:

(118) Lexical template for break in its maximal projection. (Mairal 2001-2002: 14)

[[do’ (x, [use’ (x,y)] CAUSE [do’ (y, Ø)])] CAUSE [BECOME/INGR pred’ (z)]] x = actor, y = instrument, z = patient

49 Phrasal verbs are not included.

253

The question, however, is whether instrument is the only adjunct that must be considered for lexical representation. The patterns displayed by the CCED above are merely descriptive and do not specify which adjuncts can be included in the lexical representation for break verbs. Therefore, the question is where to set the limit in the adjunct hierarchy and, additionally, what the maximal projection for these verbs is. In order to answer these questions, I have started by going through the examples of the corpus to account for the real patterns of complementation, and I have found the following:50

(119) Patterns of complementation for break verbs
a. V
b. V + ARG 1
c. V + ARG 1 + ARG 2
d. V + ARG 1 + ARG 2 + ARG-ADJ
e. V + ARG 1 + ADJ
f. V + ARG 1 + ARG 2 + ADJ
g. V + ARG 1 + ARG 2 + ARG-ADJ + ADJ
h. V + ARG 2 + ARG-ADJ + ADJ
i. V + ARG 2
j. V + ARG 2 + ADJ
k. V + ARG 2 + ARG-ADJ
l. V + ARG 1 + ARG-ADJ

50 Note that ARG 1 and ARG 2 are semantic arguments. Thus, the prototypical causative pattern is represented as V + ARG 1 + ARG 2, and the prototypical inchoative pattern is represented as V + ARG 1. Consequently, a pattern like V + ARG 2 + ARG-ADJ + ADJ represents a causative structure whose first argument is not overtly expressed in the syntax, as in He was crushed to death so a shrine could be raised to his memory.

m. V + ARG 1 + ARG-ADJ + ADJ
n. V + ARG-ADJ
o. V + ADJ

According to the data offered by the corpus, the maximal projection is V+ ARG 1+

ARG 2+ ARG-ADJ+ ADJ, and the most frequent pattern is V + ARG 2, followed by V

+ ARG 2 + ADJ, V + ARG 1 + ARG 2, and V + ARG 1 + ARG 2 + ADJ, in that order.

As regards the adjuncts that appear in those patterns, I have found the following types and frequencies:51

51 Table 4.18 has been elaborated using Dik’s (1997), Van Valin and LaPolla’s (1997) and Van Valin’s (2005) lists of adjuncts. Furthermore, I have arranged the adjuncts following Dik’s hierarchical order, and I have distributed them in different layers according to Van Valin and LaPolla’s layered periphery outlined in 4.2.1.1.d.


LAYER   ADJUNCT      BREAK  CRACK  CRASH  CRUSH  FRACTURE  RIP  SHATTER  SMASH  SNAP  SPLINTER  SPLIT  TEAR  TOTAL
NUC     MANNER         2      8      2      9       7       6      4       4      4      2        5      7     60
NUC     DEGREE         0      0      1      0       0       0      0       0      0      0        0      0      1
NUC     EMPHASIS       2      1      1      1       0       0      1       1      1      1        0      1     10
CORE    BENEFICIARY    1      0      0      0       0       0      0       0      0      0        0      0      1
CORE    RESULT         0      0      0      0       1       0      0       0      1      0        0      1      3
CORE    PURPOSE        1      1      0      3       0       2      0       2      0      0        2      0     11
CORE    CAUSE          0      0      2      1       0       2      1       0      2      0        0      2     10
CORE    LOCATION       3      4      9     11       4       0      6       8      2      0        1      2     50
CORE    TIME           4      5      9      2       6       9      4       8      2      0        6      2     57
CORE    PACE           1      0      0      0       0       0      0       0      0      0        0      0      1
CORE    FREQUENCY      1      0      1      0       1       0      0       4      1      0        1      1     10
CORE    INSTRUMENT     0      0      0      3       0       2      1       3      0      0        0      0      9
CLAUSE  EPISTEMIC      2      0      0      0       0       0      0       0      0      0        0      0      2
CLAUSE  EVALUATIVE     0      0      1      0       0       0      0       0      0      0        1      0      2

Table 4.18 Types and number of adjuncts found in the examples of the corpus.

The information provided in Table 4.18 has at least two readings: one in terms of relative frequency, and another in terms of absolute frequency. More precisely, adjuncts can be arranged according to the layer they are found in – the resulting hierarchy is thus centripetally arranged – and also according to absolute frequency; that is, according to the number of occurrences of the adjuncts. The results are the following:

- Hierarchy for adjuncts according to centripetal organization: the most frequent adjuncts are those inside the core and the nucleus, as follows: core > nucleus > clause.

- Hierarchy for adjuncts according to absolute frequency: manner > time >

location > purpose > emphasis, cause, frequency > instrument > result >

epistemic, evaluative > degree, beneficiary, pace.
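As a sanity check on the absolute-frequency ranking, the totals in Table 4.18 can be re-derived mechanically. The following Python sketch (the data layout and function name are my own, introduced only for illustration) groups the adjunct totals into tiers and prints the resulting hierarchy:

```python
from itertools import groupby

# Adjunct totals copied from the TOTAL column of Table 4.18
# (counts across the twelve break verbs in the corpus).
totals = {
    "manner": 60, "time": 57, "location": 50, "purpose": 11,
    "emphasis": 10, "cause": 10, "frequency": 10, "instrument": 9,
    "result": 3, "epistemic": 2, "evaluative": 2,
    "degree": 1, "beneficiary": 1, "pace": 1,
}

def absolute_frequency_hierarchy(counts):
    """Rank adjuncts by number of occurrences; ties share one slot."""
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])  # stable sort keeps tie order
    tiers = [", ".join(name for name, _ in tier)
             for _, tier in groupby(ranked, key=lambda kv: kv[1])]
    return " > ".join(tiers)

print(absolute_frequency_hierarchy(totals))
# manner > time > location > purpose > emphasis, cause, frequency >
# instrument > result > epistemic, evaluative > degree, beneficiary, pace
```

The output reproduces the hierarchy stated above, which confirms that the ranking follows directly from the corpus counts.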

The adjuncts in the clause layer, namely epistemic and evaluative, are by far the least frequent in both hierarchies, only followed by degree, beneficiary and pace in the hierarchy of absolute frequency. Thus, the cut-off point in the adjunct hierarchy for the complementation pattern of break verbs will lie between the core and the clause. This is in keeping with the fact that adjuncts like epistemic or evidential ones operate at clause level; that is, they modify the clause, not the verb. Hence they are irrelevant to semantic complementation and, consequently, to the lexical representation of the verb. Therefore, the pattern of complementation or template for break verbs, in their maximal projection, will be as follows:

(120) V + ARG 1 + ARG 2 + ARG-ADJ + ADJ (NUC/CORE)

In short, by doing this I provide a less projectionist account of argument and event structure that includes those adjuncts that affect semantic complementation, like instrument or manner, and leaves out those that do not, like evidentials, thus accounting for alternations like the instrument-subject alternation, which cannot be explicated by the RRG theory.

4.2.2 Less-prototypical and non-prototypical examples

In the following sections, I deal with less-prototypical and non-prototypical examples.

First I define these two classes, and then I comment on those variables that contrast with those of the prototypical class.

4.2.2.1 Definition

Up to now I have dealt with the main features of the group of prototypical examples.

However, as mentioned above, not all examples are prototypical because not all of them denote a pure change of state. Depending on the meaning they encode, the corpus has been divided into two more groups: less-prototypical examples and non-prototypical examples. The former encompasses those examples whose meaning differs from that of prototypical examples but can somehow be derived from or related to them via different processes such as metonymy or metaphor; the latter comprises those examples whose meaning is completely unrelated to that of prototypical examples. Sometimes the line that separates the two groups is blurred, and one and the same example may seem to fit both of them. I will now look at these groups in a bit more detail.

The set of less-prototypical examples is rather diverse. Just by looking at the corpus, the following cases can be distinguished:

i. Examples which are slightly different in meaning: in some of the examples of

the corpus the predicate, which belongs to the class of break verbs, does not

really denote a pure change of state. Rather, it describes the process by which

two entities or parts of entities come into contact, or one entity comes into

contact with another, but without producing a pure change of state as understood

in Levin (1993). I illustrate this in (121):

(121)

a. I mean to move on silently escaping, but I crash

straight into a trolley, pushed by a bloke looking like one of the heavyweights in

a James Bond film, so I leap away at speed as he snarls after me and knock over

a pile of bean tins.

b. He saw his hand come up and smash itself into

Guy's amiable face.

In (121.a) and (121.b), argument 1 does not suffer a change of state. All it does

is hit itself against a trolley in (121.a) and into a face in (121.b). The process

undergone by argument 1 is the same as that undergone by another entity when

it crashes and is damaged, except for the fact that in this case the result state is

not achieved. Thus, its meaning is slightly different from that of break verbs.

ii. Examples in which a change of position is not accompanied by a change of state:

in section 4.2.1.2.f, I have explained that there are some examples which code

both a change of state plus a change of position. The examples below are similar

with the difference that in this case the argument does not alter its original state

and, if it does, it does so to such a limited extent that it cannot be considered a

pure change of state. This is illustrated in (122).

(122)

a. Each week, on shopping day, I tear out the first

page and have a ready-made shopping list.

b. Saturday afternoons, when the streets were solid with

white faces, was a carnival of consumerism as goods were ripped from shelves.

In (122.a), argument 2 is removed from its original location; hence it can be said

to undergo a change of position. Nonetheless, it does not necessarily undergo a

change of state: a page can be torn from a book without being damaged or

broken. This may be seen more clearly in (122.b), where goods are carelessly

removed from the shelves but nothing leads us to think that they undergo a change

of state.

iii. Examples that literally express a breakage but which are to be metaphorically

interpreted: these examples do not denote a pure change of state. In spite of this,

the metaphorical meaning of some of these examples can be easily derived from

the literal one, as shown in (123).

(123)

a. He's very naughty and breaks everyone's

heart — don't you darling — but we all love him.’

b. That was the point that she knew that the

barrier had been broken.

Obviously, (123.a) does not denote a process by which a heart is broken; rather,

it expresses the pain one would suffer if one’s heart were being broken. Thus, it

is a metaphor. (123.b.) is similar in that there is no physical barrier that can be

broken but a metaphorical one which symbolizes the impossibility for people to

communicate for different reasons. Therefore, it is another metaphor. Notice that

not all examples are metaphorical in the same way. In (123), there is no real

heart in the situation in question. It is just a metaphorical actor in a metaphorical

situation. However, there are other examples which are metaphorical by virtue

of the features of the arguments of the predicate which, by contrast to heart in

(123.a) and barrier in (123.b), are the real actors of the event. Thus, the literal

interpretation of examples like those in (124) is prevented by the fact that their

arguments denote abstract entities and, consequently, cannot be literally broken

or divided into pieces:

(124)

a. It is imperative to persist in the four cardinal

principles [adherence to socialism, the dictatorship of the proletariat, the

leadership of the Communist Party, and Marxism-Leninism and Mao Zedong

thought], oppose bourgeois liberalisation, smash the ‘peaceful evolution’

schemes of antagonistic international forces and inspire patriotism and socialist

consciousness.

b. Thus the typical people criminally victimizing and

forcing us to fear each other and fracture our sense of ‘community’ are young

uneducated males, who are often unemployed, live in a working-class

impoverished neighbourhood; and frequently belong to an ethnic minority.

Neither the ‘peaceful evolution’ schemes of antagonistic international forces nor our sense of ‘community’ can be broken; they are abstract entities.

On other occasions the undergoer is not abstract, but a mass noun that

cannot be broken, as is the case with the wind in (125.a) or the sea in (125.b).

(125)

a. The needles splinter the wind into dirges and laments

that tell of the long and tragic history of the trees.

b. The silver water shatters under her feet, the child

bounces as he rides on her breast, and she no longer hears Sycorax, only the

pulse of the sea as it breaks in frills on the smooth and shiny sand, the splash of

her stride and the drumming of her heart as she makes for the forest to the north,

her back turned to the bay where the English ship rides at anchor, where the sea

battle will take place.

It is common sense knowledge that needles cannot splinter the wind because the

wind cannot be splintered. Hence the only possible interpretation for such an

example is metaphorical. The same can be said of the sea in (125.b), a mass of

water that cannot be broken in a literal way.

iv. Examples that express a breakage but through metonymic extension: within this

group we also find different cases of metonymy. For example, (126.a) illustrates

a case of target-in-source metonymy, where France stands for the players of its

national team. On the other hand, the metaphor ‘people are objects’ explains the

use of the verb ‘shatter’ in this context (Ruiz de Mendoza, personal

communication):

(126)

a. Tight forwards have been Jacques Fouroux's

obsession since France were shattered by New Zealand in the 1987 World Cup

final.

b. As the leaders rushed towards the final hairpin,

Senna appeared to be caught by surprise as Mansell began to slow, the McLaren

slewing sideways under heavy braking before smashing into the back of the

Williams.

(126.b) calls for a similar explanation: ‘the McLaren’ is the name of a car manufacturer, and it stands for one of its cars. Thus, it is another case of target-in-source metonymy and, being more precise, of CONTROLLER FOR CONTROLLED.

The corpus has also provided examples of metaphtonymy (Goossens

1990):

(127) The nurse now cracks with a deadpan expression: ‘You die, we take you home, Muhammad.’

Goossens (1990: 323) introduces the neologism metaphtonymy to refer to the interaction of metaphor and metonymy, two cognitive processes which, this author claims, are distinct but not mutually exclusive. One possible interpretation for example (127) is that the nurse uttered that with a cracking voice, which in turn implies that she made that utterance in a sudden, loud way.

In other words, the donor domain sound gives rise to a metaphor from a metonymy, which is a common cognitive process in those cases where ‘sound hangs together with a human activity that can naturally co-occur with linguistic action’ (Goossens 1990: 329). Thus, (127) illustrates a case of metaphor from metonymy (Goossens 1990: 328).

In (129), the government cannot smash a trade union, because this is an institution. Rather, the government smashes the people that work for the unions.

According to Ruiz de Mendoza (personal communication), this would be a variant of the COMPANY FOR WORKERS metonymy and, splitting hairs, perhaps a case of INSTITUTION FOR PEOPLE RUNNING THE

INSTITUTION:

(129) On the political left, the government is regularly reviled for running down the welfare state, smashing the trade unions, and waging class war.

(131) below illustrates a preparatory metonymy (Ruiz de Mendoza, personal communication), where the expression crush a library suggests putting an end to the activity carried out in it. Thus, this is a preparatory metonymy of the type INSTITUTION FOR WORKERS, which is then chained to the

metonymy WORKERS FOR THEIR ACTIVITY. According to Ruiz de

Mendoza and Peña (2005: 260), depending on the kind of metonymic

relationship that exists between a matrix domain and its subdomains, two

different types of metonymy can be distinguished: source-in-target and target-in-

source. The two metonymies in question here illustrate target-in-source

metonymies, since the target is a subdomain of the source in both cases. Thus,

(131) is a case of target-in-source + target-in-source metonymy. This kind of

mapping highlights that part of the matrix that is relevant for interpretation, thus

reducing the conceptual domain involved in the metonymy (Ruiz de Mendoza

and Peña 2005: 260). It is for this reason that Ruiz de Mendoza and Peña (2005:

260) refer to target-in-source metonymies as ‘reduction metonymies’. Therefore,

(131) could be defined as a case of double metonymic reduction (Ruiz de

Mendoza, personal communication).

(131) They give us a vivid idea of what the lending

libraries would be like if the popular mass-circulation libraries had not been

crushed by the public libraries in their politically motivated post-war form.

The group of non-prototypical examples is also rather diverse because, as I have shown in section 4.1.2.d, break verbs have a wide range of meanings that differ from that of pure change of state. Some verbs in particular, such as break or crash, have a large number of senses which stand in a relation of contrastive polysemy with one another. The following are just a few of them:52

52 Here I use the CALD, which supplies more homonyms for break verbs than the NODE.

i. Crash = enter without permission

(132) He did everything at breakneck speed and was insatiably sociable, always turning up when Jane was particularly busy, crashing into the room and asking jauntily: ‘What’s everyone doing?’

ii. Crash = sleep

(133) a. At that stage, I think David was just crashing around. b. He was staying up for four or five days at a time and then crashing for four or five days.

iii. Crash = fail or become unsuccessful

(134) When the engagements are abroad, I work feverishly in the last few days and nights to create a reasonably finished piece of work — just in case we crash.

iv. Crack = to make a joke or clever remark

(135) This man would forget the purpose of his visit, at least for a brief spell, and the fact that he was on official business, and drink tea and eat a good meal in the prosecutor's house, crack jokes and make amiable conversation, and sleep through the heat of the day.

v. Snap = to make a brief noise by pushing your second finger hard against your thumb and then releasing it suddenly so that it hits the base of your thumb

(136) When her husband began to sing a little, snapping his fingers in rhythm, she smiled and for a moment looked like a young girl again; but the barman came immediately and reminded them that singing was not permitted.

vi. Snap = to take a lot of photographs quickly

(137) He took two pals along with him for company and planned to spend the time snapping rare African wildlife.

vii. Break = to (make someone) lose self-control

(138) The initial stages of training are not exactly kind but there is no other way of breaking a fully grown elephant.

viii. Break = (disobey) to fail to keep a law, rule or promise

(139) If Koresh broke the law, it was the County sheriff who should have intervened.

Leaving aside the division into less-prototypical and non-prototypical, during the analysis I have also noticed that there are several meanings which recur and which are common to several verbs. I list some of them below:53

i. ‘to do much better than the best or fastest result recorded previously/defeat’

(140)

53 Definitions taken from CALD.

a. First, she lost the opening singles match 6-3, 6-3 to Jana Novotna then she dithered far too often in the deciding doubles after Graf had, as expected, levelled the score by crushing Helena Sukova 6-2, 6-1 in the other singles match.
b. In a timed swim across Honolulu Harbour in 1911 he covered 100 yards in 55.4 seconds, shattering the existing record by 4.6 seconds.
c. I beat the school's fastest runner in the 100 metres sprint, breaking the finishing tape just before the other runners manage to leave their starting blocks; I smash the school long jump record by 15 metres (give or take a metre) and I hurl the discus so far that Miss Harrison, the teacher in charge of the event, has to get her battered Mini from the car park to retrieve the discus for the next competitor (who manages a measly 25cm).
d. But he was still heavily hit by the fine, imposed by a three-man Football Association disciplinary panel, which smashes the record sentence for disrepute charges established by Paul McGrath's £8,500 fine in November 1989.

ii. With ‘voice’: ‘to change from one state to another’

(141)
a. ‘My father,’ her voice nearly broke, ‘now lies sheeted in a coffin in the Chapel of St Peter Ad Vincula.’
b. We could sing nicer, too; we don't have to push our voices and rip them up, but it's just not so much fun.
c. When she was sure she could talk without her voice cracking, she said, ‘What can I do, Keith?’

d. Straining to be heard above the traffic, his voice hoarse with too much campaigning and cracking with too much emotion, the Labour leader conceded defeat in a dignified and moving statement.

iii. As an utterance verb: ‘to say something in a special way, either angrily, suddenly, etc.’

(142)
a. The nurse now cracks with a deadpan expression: ‘You die, we take you home, Muhammad.’
b. Meryl Streep is wonderfully comic as an actress who really wants to be a singer, while Shirley Maclaine shows her versatility and talent as she defiantly crashes through ‘I'm Still Here’.
c. But Stewart snaps: ‘I don't want to be disappointed so I'm not going to ask for their help.’

iv. As a manner verb: ‘to make a sudden, short noise, or to cause something to make this noise’

(143)
a. Like in the cavalry, going round in circles without stirrups, old Biddy cracking her whip against her jackboots like a Nazi prison warder.
b. As she did so, a shot cracked across the ridge, from one of the police marksmen.
c. He thought that if he shot the barracuda behind the head, it would leap clear of the water with its jaws snapping.
d. They snap their fingers as if the world is their wine-waiter.

269 All the examples displayed so far present the verb in isolation. However, as I have anticipated in section 4.1.2.d, the analysis has also brought forward a great number of phrasal verbs (especially with break), prepositional verbs, and some collocations of the type verb + noun and verb + adjective. They are illustrated in (144)-(146) below.

Some of them retain the meaning of pure change of state whereas some others – phrasal verbs mainly – denote something completely different:

(143) Phrasal verbs (definitions drawn from the CALD)
a. Break down = ‘to be unable to control your feelings and to start to cry’: Furthermore, true creativity is relatively uncommon even in those individuals who never overtly break down but whose mode of thinking resembles that of the diagnosably psychotic: it is rarer still in those who do pass beyond the threshold into clinical disorder.
b. Break out = ‘If something dangerous or unpleasant breaks out, it suddenly starts’: But what has become internalized can always become externalized again; and so we have seen that, increasingly in modern societies, anti-social instinctual drives of the id — specifically, the instinctual drives of the gelada-like sons of the primal fathers — break out in social conflicts and acts of delinquency and provocation which the modern incarnation of the primal father — the police, the Establishment, law, order and standards of all kinds — have to meet.
c. Break up = ‘(UK) When schools and colleges, or the teachers and students who go to them break up, their classes stop and the holidays start’: Later in that day he was in the town centre when his school was breaking up.
d. Rip through = ‘to move very powerfully through a place or building, destroying it quickly’: There are two sorts of Mudhoney show: the one where they piss about all night and sceptics just scratch bonces and bugger off, and the one where they rip through their best songs, old and new, with hardly a murmur and plenty of bovver.
e. Tear off = ‘to remove your clothes quickly and carelessly’: Alida flailed upwards, tearing off the bedclothes, and switched on the light.

(144) Prepositional verbs
a. ‘Australia,’ he sobbed, forehead cracking forward against the pane.
b. But that night, when he is asleep, the creature enters his chamber and rips back his curtain — so! — so that he wakes up with a start to find its dreadful gaze upon him, and its hand out-stretched for his throat!
c. The Grand Army would fall upon that weakness, and smash through.
d. She ‘just misses’ trains, turns up hours late for dates, makes sure we're the last to arrive at parties, dawdles getting ready to go out, then tears around hysterically, taking her irritation with herself out on me.

As can be seen from the examples, not all cases of verb plus preposition/adverb lead to a meaning different from that of pure change of state.

(145) Verb + noun
a. Break new ground = ‘to do or discover something new’ (definition drawn from the CALD): On any view that may be taken, this case breaks new ground of great constitutional importance.
b. Break surface: This range, although covered by many hundreds of metres of water for most of its length, breaks surface in a few places to form the chain of apparently unconnected volcanic islands.

(146) Verb + adjective
a. but when you break loose
b. Break free from chains
c. and the line rips clear,
d. She saw his face change, his brows snap together.

Finally, it must be noted that, even in those cases where the meaning of the predicate seems completely unrelated to that of change of state, it is sometimes possible to find a semantic connection between the new meaning and that of change of state. Consider the following example, mentioned above in (140.b) and repeated in (147):

(147) In a timed swim across Honolulu Harbour in 1911 he covered 100 yards in 55.4 seconds, shattering the existing record by 4.6 seconds.

To shatter a record is defined in the CALD as ‘to do much better than the best or fastest result recorded previously’. At first sight, this meaning has nothing to do with that of verbs of change of state. Nonetheless, common sense dictates that once an object is broken it is no longer any good, and the same reasoning can be applied to the example under analysis: when a record is shattered it is no longer valid which, by metaphorical extension, could be equated with the definition provided above. The corpus has provided many examples like this. Nonetheless, this is just a tentative analysis and I leave it for further research. In this dissertation, these examples have been classified as non-prototypical.

4.2.2.2 Differences between the groups of less-prototypical and non-prototypical examples

To determine accurately the differences between less-prototypical and non-prototypical examples is a difficult task, given the heterogeneity of the two classes. Thus, I focus here on those features which, in my opinion, are most significant. I start by explaining the group of less-prototypical examples and then move on to the non-prototypical ones. Finally, I summarise the main differences between the two groups, and between them and the group of prototypical examples.

4.2.2.2.a Less-prototypical examples

The examples in this group depart from the meaning definition presented above for break verbs, which in principle leads one to expect differences in other aspects of the analysis. Here I examine the most relevant ones:

- Entity type: in principle, entity type is expected to be other than 1 for two main reasons: (1) entity type 1 is typical of the class of prototypical examples; and (2) neither properties or relations nor states of affairs, possible facts or speech acts can undergo a pure change of state like that described by break verbs unless under a metaphorical interpretation or some other special condition. Given that the latter is one – but not the only – criterion used to distinguish prototypical examples from those which are not, the expectation was that many, if not most, of the examples in the other two groups would have arguments of a different type. Interestingly, though, most of the (entities denoted by the) arguments have turned out to be of type 1, although many of them denote abstract, non-tangible entities or entities with an abstract interpretation in that context, as illustrated in (148):

(148) Write about a journey to an unfamiliar place, a place that, when you visited it, fractured, ruptured or enlarged your understanding of the world. (a place = 1)

Besides entities of type 1 there are also some cases of type 0, as shown in (149), some of type 2, as shown in (150), and only a few of type 3, as illustrated in (151).

(149)
a. Then an animal snorted quietly and broke the momentary stillness. (the momentary stillness = 0)
b. He kept his mind completely still, it was like something preserved, like something in a jar in a laboratory, but his body came undone and shook, there was a sound inside him like the sound tracks make when a train's coming, that hiss and crack the length of his veins, that shudder in his blood. (the length of his veins = 0)

(150) In Hungary, reform communism of a sort had existed ever since the Russians crushed the October uprising in 1956. (the October uprising = 2)

(151) When you have to take a job, even if it's a job you can mildly stomach… the fact that you have no choice crushes your enthusiasm for doing the job. (the fact that you have no choice = 3; your enthusiasm = 0)

It must be noted that a great number of entities are syntactically realized by personal pronouns but, as explained in section 4.2.1.1.e, these are not analysed in the qualitative analysis.

- Subeventual structure: the event structure denoted by less-prototypical examples is also similar to that of prototypical examples in most cases, being subdivided into two subevents: a process and a result state. If we go back to (some of) the examples displayed in (121)-(124) above, we can observe that their event structure is as follows:

(152)
a. …, but I crash straight into a trolley, …: (e1 <○α e2)
b. …, I tear out the first page …: (e1 <○α e2)
c. He … breaks everyone's heart …: (e1 <○α e2)
d. … and fracture our sense of ‘community’ …: (e1 <○α e2)

Thus, the subeventual structure does not vary in any significant way with regard to that of the prototypical class either.

- Temporal relation: as can be observed in the examples displayed in (152), the temporal relation existing between the subevents of less-prototypical examples is the same as that of the prototypical class: partial ordered overlap. This relation is characterised as follows: (1) event 2 follows event 1; (2) nonetheless, there is some overlap of the final part of the process and the beginning of the result state; and (3) the second event continues while the first event continues to hold and beyond.
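The three conditions just listed can be made concrete with a small interval-based sketch. This is purely illustrative and not part of the dissertation's formal apparatus; the `Event` class and the numeric intervals are hypothetical.

```python
# Illustrative sketch (hypothetical): the three defining conditions of
# partial ordered overlap, modelled on simple time intervals.
from dataclasses import dataclass

@dataclass
class Event:
    start: float
    end: float  # a persisting result state may extend far beyond e1

def partial_ordered_overlap(e1: Event, e2: Event) -> bool:
    """e1 = process, e2 = result state."""
    follows = e2.start > e1.start   # (1) event 2 follows event 1
    overlaps = e2.start < e1.end    # (2) e2 begins before the process ends
    persists = e2.end > e1.end      # (3) e2 holds while e1 holds, and beyond
    return follows and overlaps and persists

# 'I tear out the first page': the tearing process (e1) and the resulting
# torn state (e2), which begins near the end of e1 and then persists.
process = Event(start=0.0, end=2.0)
result_state = Event(start=1.8, end=float("inf"))
print(partial_ordered_overlap(process, result_state))  # True
```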

- Type coercion: by contrast with the examples of the prototypical class, here there are a few cases of true complement coercion. Recall that, in section 4.2.1.2.e, I defined the basic semantic type for the argument of break verbs as an NP. Thus, coercion applies when the entity type of the argument(s) of the verb is other than 1 or 0 – and is syntactically different from an NP – and when such an entity has an alias of the appropriate type in its qualia. Examples (150) and (151) above may serve as illustration. Given the complexity of this process and the fact that it is not central to my research, I will not go into further detail, but I refer the reader to Pustejovsky (1995a: chapters 7 and 9) for a more detailed account of coercion and its applications.
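As a purely illustrative aside, the licensing logic just described can be sketched as follows. The mechanism itself is Pustejovsky's; the mini-lexicon, the type codes and the `combine` function below are hypothetical and serve only to show the shape of the test.

```python
# Toy sketch of complement coercion (hypothetical; see Pustejovsky 1995a,
# chs. 7 and 9, for the actual mechanism): an argument whose entity type is
# not the one the verb selects may still combine with the verb if its qualia
# structure supplies an alias of the required type.
REQUIRED_TYPE = 1  # break verbs prototypically select a type-1 argument

# Hypothetical mini-lexicon: each nominal's own type plus its qualia aliases.
QUALIA = {
    "vase":     {"type": 1, "aliases": set()},
    "uprising": {"type": 2, "aliases": {1}},  # state of affairs, type-1 alias
    "promise":  {"type": 3, "aliases": set()},
}

def combine(noun: str) -> str:
    entry = QUALIA[noun]
    if entry["type"] == REQUIRED_TYPE:
        return "direct composition"
    if REQUIRED_TYPE in entry["aliases"]:
        return "type coercion"
    return "no composition (new lexical entry needed)"

print(combine("vase"))      # direct composition
print(combine("uprising"))  # type coercion
```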

- Argument-adjuncts: in section 4.2.1.1.c above I have shown that there are at least four different types of argument-adjuncts for prototypical examples of break verbs: they specify source, path, goal and result. The group of less-prototypical examples displays a similar pattern, with the difference that, in this case, the addition of the directional or resultative PP brings about a change in the verb’s semantics. Thus, if we go back to the examples displayed in (38) above, neither goods in (38.a) nor his hand in (38.b) is necessarily broken when ripped from the shelves or smashed into Guy’s amiable face, respectively. I repeat the examples in (153) below for convenience.

(153)
a. Saturday afternoons, when the streets were solid with white faces, was a carnival of consumerism as goods were ripped from shelves.
b. He saw his hand come up and smash itself into Guy's amiable face.

Thus, neither rip nor smash denotes a change of state in these expressions; rather, they encode a change of position. These patterns comply with Goldberg’s (1995) caused-motion construction and Pustejovsky’s (1995a) co-composition through directional PPs, which have been discussed in section 4.2.1.2.f above.

4.2.2.2.b Non-prototypical examples

The range of meanings comprised in this group is wide and heterogeneous, not only from verb to verb but also within the paradigm of each verb. Most of these meanings are completely unrelated – that is, they stand in a relationship of contrastive polysemy – as illustrated in (154), although there are some exceptions which seem to be metaphorically or metonymically derived from one another, as we have seen above.

(154) Examples of the different meanings of crack:
a. crack = to make a joke or clever remark: This man would forget the purpose of his visit, at least for a brief spell, and the fact that he was on official business, and drink tea and eat a good meal in the prosecutor's house, crack jokes and make amiable conversation, and sleep through the heat of the day.
b. crack = to find a solution to a problem: Turning up its coat collar in a hard-boiled way, the voice is briefed on its mission — ‘leather couches, portraits of the Queen… this must be MI6’ — passes a dingy border checkpoint getting its forged passport stamped, studies a spy plane view of Cheltenham, cracks a cipher or two, and spots a glamorous female spy from the window of the Orient Express to Istanbul.
c. crack = INFORMAL to become mentally and physically weak: These are the children who sometimes crack at university level because they are facing challenge for the first time in their lives.
d. crack = INFORMAL to fail as a result of problems: Others are bound to arise; not least because, as the Soviet Union cracks apart, documents and material taken there after the war may drift back into the hands of historians.
e. crack = to say with a cracking voice: The nurse now cracks with a deadpan expression: ‘You die, we take you home, Muhammad.’
f. crack = to use your authority to make someone else behave better or work harder: Fortified by the support of gleeful Tories and powerful connections (his brother-in-law is Health Secretary William Waldegrave; his wife is lady-in-waiting to the Queen), he has been cracking the whip.
g. crack = if a voice cracks, its sound changes because the person is upset: Straining to be heard above the traffic, his voice hoarse with too much campaigning and cracking with too much emotion, the Labour leader conceded defeat in a dignified and moving statement.
h. crack = if somebody cracks, they weaken and agree that they have been defeated: Eden cracked under the strains of Suez, and his Foreign Secretary, Selwyn Lloyd, came close to cracking at times.
i. crack = INFORMAL to copy computer programs or recorded material illegally: As the pirates crack the latest ‘protection programming’, or copyguarding, the software writers devise new methods that are harder to crack.

With regard to the differences between these examples and the other two classes, it is evident that a difference in meaning can bring about differences in all other respects. Consequently, I do not intend to give a thorough account of all of them. Furthermore, this class of examples falls outside the scope of my research; therefore, here I just explain, very briefly, some of their main characteristics:

- Entity type: as was the case with less-prototypical examples, most of the non-prototypical examples have arguments of type 1, although many of these denote abstract entities such as joke or record. There are also some examples of type 2, such as rebellion in (155):

(155) Despite a statement by Selwyn Lloyd, the Foreign Secretary, in the House of Commons on 23rd July that there was ‘no question of large-scale operations by British troops on the ground’, Army units had to be flown up to the Oman from Kenya to support the Sultan's armed forces in crushing the rebellion.

The same applies to the relatively high number of personal pronouns such as I or we, which, as explained above, are disregarded for the qualitative analysis. They are illustrated in (156.a) and (156.b) respectively:

(156)
a. I beat the school's fastest runner in the 100 metres sprint, breaking the finishing tape just before the other runners manage to leave their starting blocks; I smash the school long jump record by 15 metres (give or take a metre) and I hurl the discus so far that Miss Harrison, the teacher in charge of the event, has to get her battered Mini from the car park to retrieve the discus for the next competitor (who manages a measly 25cm).
b. When the engagements are abroad, I work feverishly in the last few days and nights to create a reasonably finished piece of work — just in case we crash.

- Subeventual structure: the event structure of non-prototypical examples differs quite frequently from that of prototypical and less-prototypical examples. Going back to the examples displayed in (154) above, the event structure for crack jokes in (154.a) would be the one represented in Figure 4.11; in other words, an activity or process.

P
e1 … en
Figure 4.11 Event structure for processes (Pustejovsky 1992: 56)

This scheme depicts processes as sequences of events that identify the same semantic expression (Pustejovsky 1992: 56). The same applies to (157) below:

(157) They crashed on my floor after the party (crash = to sleep at someone else's house for the night, especially when you have not planned it). (Example taken from the CALD)

The event structure for (157) would also be the one represented in Figure 4.11.

- Temporal relation: the temporal relation existing between the subevents of non-prototypical examples will also vary according to the event type. In some cases, like the ones illustrated in (154.a) and (157), there is no temporal relationship because the new meaning does not involve a transition but a process. The same claim can be made with respect to states. Interestingly, as regards transitions, most of the new senses denoted by break verbs follow the patterns put forward for the prototypical examples; that is to say, full and partial ordered overlap. Thus, the subevents of (the break clause of) an example like (155), which I repeat in (158), stand in a temporal relation of partial ordered overlap.

(158) Despite a statement by Selwyn Lloyd, the Foreign Secretary, in the House of Commons on 23rd July that there was ‘no question of large-scale operations by British troops on the ground’, Army units had to be flown up to the Oman from Kenya to support the Sultan's armed forces in crushing the rebellion.

In other words, in crushing the rebellion depicts two subevents which partially overlap, the second holding while the first event continues to hold and beyond.

- Type coercion: non-prototypical examples also have arguments with an entity type different from 0 or 1 and with a syntactic form different from the prototypical NP. Nonetheless, given the heterogeneous character of this group, the explanation for each case will vary; moreover, in some cases coercion will not apply and a new lexical entry will be required. It would be interesting to investigate whether there is some historical connection among the different senses of each verb that coercion can explain; however, I will not go any deeper into this question because it falls outside the scope of this research.

4.3 Conclusions

In this chapter, I have discussed the data obtained from the analysis of the variables of the word structure, the argument structure and the event structure of verbs of pure change of state. (See the summaries in 4.1.3 and 4.2.1.3 above for further details.)

Furthermore, I have restructured the corpus of analysis into three groups according to the degree of prototypicality of the examples. And finally, I have explained the main differences that exist among the three groups of examples under study regarding argument and event structure. In a nutshell, the most relevant ones are the following:

1. Meaning definition: this is the most evident and the most relevant of all the differences, given that it is the one that determines the classification of the examples into different groups. From this it follows that each group has a different meaning definition.

2. Merge: only prototypical examples participate in the merge process explained above. Less-prototypical and non-prototypical examples are excluded from this process, because the former do not preserve the prototypical meaning of the parts that instantiate them, and the latter encode completely different meanings. Hence, the merge process is prevented in both cases.

3. Type coercion: in this case, it is the group of prototypical examples that does not undergo the process of type coercion, since none of its arguments needs to coerce to the type required by the verb, precisely because of their prototypical status. Thus, type coercion is reserved, in principle, for the groups of less-prototypical and non-prototypical examples, although this issue is rather complex and calls for further research.

In the next chapter I elaborate on the data obtained from the analysis of the prototypical examples in order to examine the interaction existing among the three levels of representation of break verbs.

CHAPTER 5: CONCLUSIONS

The last pages of this dissertation shall be devoted to drawing conclusions and discussing the achievements attained in the preceding four chapters, paying special attention to the interaction existing among the three levels of representation that I have analysed.

In the introduction, I claimed that, despite its many virtues, the system of lexical representation of RRG needs to be strengthened. Mairal and Van Valin (2001) are aware of the limitations of RRG’s logical structures and acknowledge that Pustejovsky offers a more comprehensive view of the lexicon. They suggest enriching a system which, although partially projectionist – because it includes argument-adjuncts in its lexical representations – still disregards those adjuncts that have an impact on semantic complementation. Along these lines, in this dissertation I have carried out a close semantic analysis of the word structure, the argument structure and the event structure of verbs, an analysis that has contributed to the enrichment of RRG’s system of lexical representation.

Having carried out this close semantic analysis, I find that some notions and concepts are in need of further development, including argument-adjuncts and merge, while others, such as telicity, require refinement. I return to the latter below.

There are three reasons for turning to Pustejovsky’s generative lexicon and to Faber and Mairal’s (1999) and Levin and Rappaport’s (2005) theories of lexical semantics.

A first reason for carrying out the fine-grained semantic analysis is that the system of lexical representation of RRG is limited to logical structures and does not sufficiently explain the semantics of words. As a consequence, I have had to turn to Pustejovsky, because he offers a more comprehensive view of the lexicon by spreading the semantic load of the expression across all its constituent parts; to Mairal’s lexical semantics, because he incorporates into the logical structures those adjuncts that are relevant to semantic complementation; and to Levin’s lexical semantics because, by virtue of the organization of the English lexicon into verbal classes, logical structures account for the corresponding variations between arguments and adjuncts. Levin (1993) claims that what determines the syntax of a verb is found in the properties of whole verbal classes rather than in the properties of individual verbs. Hence, I have selected a verbal class that allows me to make syntactic generalisations from the behaviour of verbs as a class. The relationship works in both directions: syntax helps define the verbal class, and vice versa (Levin 1993).

A second reason is that, recently, lexical semantics has moved in the direction of studying not just the semantics of the verb but also its derivational morphology which, as noted in chapter 4, results in an enrichment of lexical representations. This move helps to determine the relationship between derivational morphology, on the one hand, and the argument structure and event structure of the verb, on the other, by means of a linking algorithm that resembles the one designed for primary (non-derived) words (Lieber 2004; Cortés 2006; Martín Arista forthcoming-b).

The third reason for carrying out the close semantic analysis is that the RRG lexicon does not offer a completely comprehensive picture of the lexicon. It is for that reason that I have turned to Faber and Mairal (1999), who provide a global vision of the lexicon and identify a cognitive axis – which is also missing in RRG – together with a paradigmatic and a syntagmatic axis. More precisely, Faber and Mairal (1999) organise semantic fields into lexical domains, at the heart of which lie more or less hyperonymic lexemes that are provided with a formalised meaning definition. Mairal and Van Valin (2001) suggest improvements to RRG in the line of lexical semantics; specifically, they see the need to (1) go deeper into the relationship between syntax and semantics, and (2) account for both internal and external arguments of the verb, as well as adjuncts. Mairal and Ruiz de Mendoza (forthcoming 2007) and Ruiz de Mendoza and Mairal (forthcoming 2007) have taken this proposal one step further to include a cognitive and constructional component, to which I return in chapter 6.

In order to delimit the subject matter of this dissertation I have had to contextualise verbs of change of state, examining all their senses and combinations with adverbs and prepositions. Then I have isolated those that imply physical discontinuity – or change of state – from those that imply temporal discontinuity – that is, those that lack the change-of-state meaning component. In other words, I have rearranged the corpus attending to the concept of force dynamics. At the same time, and always within a functional-cognitive paradigm, I have distinguished between prototypical and non-prototypical examples on the basis of the non-discrete conception of the categories as rendered by Givón (1989) and Taylor (1995).

Once the corpus has been divided and organized into prototypical, less-prototypical and non-prototypical examples, I have proceeded to the analysis of the word structure, the argument structure, and the event structure of break verbs. As I showed in chapter 4, the three of them are interrelated in the following way.

For the analysis of word structure, I have identified the four main semantic relationships according to Lehrer (2000): antonymy, synonymy, hyponymy and hyperonymy, and polysemy. The analysis of antonymy has revealed that break verbs hardly have any antonyms, which is in keeping with the fact that no reversative affixes can be attached to these verbs, as shown in the analysis of lexical derivation. The outcome of the analysis of hyperonymy is also in accordance with the results of the analysis of the derivational features of break verbs, since the hyperonym – which the frequency analysis has revealed to be break – is the verb that can take the most affixes. Furthermore, the hyperonym break is also the verb that appears in the highest number of complementation patterns. Hence, the analyses of morphology, semantic relationships and argument structure of break verbs converge. This contributes to the fulfilment of one of my aims, as it corroborates the transversal character of the analysis across the levels of analysis.

The analysis of synonymy has revealed that, as Goldberg (1995) states, full synonymy does not exist. However, with the use of WordNet, I have identified those senses of the verbs under analysis that denote pure change of state. In this way, synonymy has played a useful role in the organization of the corpus into classes according to degree of prototypicality. The analyses of polysemy and, as just said above, of the hyperonyms – against which all the senses displayed for each verb in WordNet have been contrasted – have also contributed to this task. The analysis of polysemy has shown that the polysemous character of break verbs is related to the high frequency with which these verbs combine with prepositions and adverbs, which, especially in the case of phrasal verbs, often results in cases of contrastive ambiguity – hence, non-prototypical examples.

The analysis of the argument structure of the prototypical examples has highlighted the notion of telicity due to its crosscutting character: telicity impacts on morphology, argument structure and event structure. Pustejovsky, for instance, develops the notion of event complexity in relation to that of telicity: telic events are systematically regarded as complex, since they involve a transition to a new result state.

Being aware that this notion needs further development in relation to the nominal phrase, I have made a proposal that consists in accounting for nominal aspect in terms of [± telic]. More precisely, I have studied telicity according to two main morphosyntactic criteria: (1) number – as a general rule, English count nouns admit both singular and plural morphology, whereas mass nouns do not have a plural save in exceptional cases; and (2) reference – in principle, mass nouns cannot be preceded by the indefinite article, whereas count nouns can. After the analysis, I have observed that the second argument of causative structures and the first argument of their inchoative counterparts must be telic. This is coherent with the definition of the verb and, more precisely, with the selection restrictions on the arguments of break verbs – the prototypical patient of verbs like shatter is concrete and inanimate (Faber and Mairal 1999: 128) – but especially with the notion of flow that I mentioned in the introduction to the dissertation. These verbs of physical discontinuity require a very precise moment at which the change – the discontinuity, the interruption of the duration in time and space – takes place. This precise moment is provided by the telicity of the argument: the interruption of the flow cannot apply to a mass continuum, but only to a concrete point of that mass, that is, a concrete entity. In other words, the second argument cannot be atelic. Interestingly, the corpus has not provided any example of countified nouns that qualify as the second argument of prototypical verbs of change of state.

Other aspects of argument structure that I have emphasised are external arguments and adjuncts. As said, logical structures must include those adjuncts that have semantic impact, and for that purpose I have found inspiration in Faber and Mairal (1999) and Mairal and Faber (2002). Mairal and Faber’s model is eclectic in nature: it replaces Dik’s predicate frames with RRG’s logical structures. It also turns to models of semantic universals, like RRG, in search of a metalanguage of universal validity; and it incorporates Levin’s verbal classes, with the corresponding variations between arguments and adjuncts. Their templates, like those of RRG, include all possible arguments of the verb; but, in addition, they go deeper into the semantic decomposition process to make room for other semantic functions, such as instrument or manner.

Starting from here, I have carried out an analysis of the adjuncts found in the corpus for break verbs. The results have been arranged in two hierarchies, one centripetal and the other according to absolute frequency. Both agree that the adjuncts in the clause layer, namely epistemic and evaluative ones, are by far the least frequent. Thus, I have established a first cut-off point for the complementation pattern of break verbs between the core and the clause; that is, I have determined which adjuncts cannot be incorporated into RRG’s logical structures.

Finally, on the basis that events are durative, I have been able to carry out an analysis of event structure which has proved coherent in itself and compatible with the analysis of argument structure and of the semantic and derivational properties of break verbs. It is coherent in the sense that I have been able to analyse all the variables of event structure – namely subeventual structure, temporal sequence, causal sequence, headedness, and coercion – on the premise that events are durative. And it is compatible because I have been able to establish generalizations among the three levels of analysis; the idea of flow, for instance, pervades all three. RRG does not acknowledge a subeventual structure for the inchoative version of break verbs. However, all events of change of state have an initial state and a final state; hence, they have subeventual structure. Whether the event is realised in the syntax as inchoative or causative depends, as we have seen in the analysis, on which subevent is headed or foregrounded.

This structure of subevents also implies that the event has duration, because any change from one state to another takes place over time, even if this time is very short. The fact that all events are durative also has certain implications. To be precise, if events are durative, the feature [± punctual] should be dispensed with in the definition of logical structures. At the same time, this would render the distinction between accomplishments and achievements irrelevant, which would put an end to the difficult task of distinguishing them, as Van Valin and LaPolla (1997) themselves acknowledge.

Furthermore, this would also mean that RRG’s logical structures would more closely resemble GL’s lexical representations, because Pustejovsky distinguishes only three event types: states, processes and transitions.

One of the main virtues of RRG lies in its decompositional nature. RRG decomposes complex notions into simpler ones, which helps define its own notions more precisely and analyse them more accurately. Consider, for example, notions such as subject or complex syntax, which can be derived from simpler ones like pivot and nexus-juncture, respectively (Butler et al. 1999). Nonetheless, RRG is not consistent with this approach when it comes to the description of event structure, because logical structures subsume both the argument structure and the event structure into one single structure. Thus, following Pustejovsky, I have analysed argument structure and event structure separately. In other words, I have carried out a close semantic analysis.

This is not the only reason why I have moved beyond RRG in the analysis of event structure and drawn upon Pustejovsky. There are four additional reasons for doing so:

1. Logical structures are flat; my analysis, by contrast, allows certain elements of event structure to stand out from the others by means of the notion of headedness. Through headedness, one subevent is foregrounded over the other, thus avoiding ambiguity in the case of alternations. More precisely, in the case of the causative/inchoative alternation, if the event is left-headed, it is causative; if it is right-headed, it is inchoative.

2. Logical structures are basically limited to the expression of temporal sequence.

My analysis goes deeper in this respect, indicating the specific type of temporal

291 sequence that applies in each case. Furthermore, I have incorporated other

notions such as causation, central to event structure as ideated by Pustejovsky.

3. My analysis provides a complete picture of the event, whereas RRG leaves out

the adjuncts in the periphery, with the disadvantages that this implies: for

instance, it does not account for those adjuncts that have semantic impact, such

as instrument or manner.

4. My analysis emphasises the result of the change of state, whereas logical

structures are not designed to account for resultative structures of the type ‘She

was shaken awake by the earthquake.’
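By way of illustration, the effect of headedness on the causative/inchoative alternation can be sketched as a toy data structure. This is a minimal sketch of my own, not part of the RRG or GL formalism; the names Subevent, Event and syntactic_realisation are hypothetical.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Subevent:
    kind: Literal["process", "state"]
    description: str

@dataclass
class Event:
    e1: Subevent                # initial subevent (the causing process)
    e2: Subevent                # final subevent (the result state)
    head: Literal["e1", "e2"]   # which subevent is foregrounded

def syntactic_realisation(ev: Event) -> str:
    # A left-headed event foregrounds the causing process (causative);
    # a right-headed event foregrounds the result state (inchoative).
    return "causative" if ev.head == "e1" else "inchoative"

broke = Event(Subevent("process", "x acts on y"),
              Subevent("state", "y is broken"),
              head="e1")
print(syntactic_realisation(broke))                            # causative
print(syntactic_realisation(Event(broke.e1, broke.e2, "e2")))  # inchoative
```

On this encoding, the same subeventual structure yields ‘John broke the vase’ or ‘The vase broke’ depending solely on which subevent is headed.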

To sum up, I have carried out a cross-cutting and coherent analysis that sets out some guidelines for the enrichment of the system of lexical representation of RRG.

Having summarised the achievements of the analysis, it is time now to return to the goals I set in chapter 1 and see whether they have been reached. I shall also discuss whether the working hypotheses I put forward in chapter 1 have been validated by the analysis of empirical data.

In chapter 1, I stated the need to review the RRG theory in an attempt to identify its main shortcomings and work out a plausible solution to them. To be precise, I have focused on the study of the system of lexical representation of RRG, which lacks the necessary complexity. Following Mairal and Van Valin’s (2001) lead, I have made a contribution to the enrichment of this system by developing Pustejovsky’s work through the analysis of concepts such as subeventual structure, telicity and duration and, most importantly, through the call to incorporate adjuncts into the logical structures. To that purpose, I have also reviewed Pustejovsky’s Generative Lexicon theory and Levin’s lexical semantics in chapter 2.

As regards methodological issues, in chapter 1 I set out to analyse the word structure, the argument structure and the event structure of break verbs in order to carry out a close semantic analysis of the system of lexical representation of RRG. For that purpose, I have compiled a corpus based on real uses of language. The most relevant issues regarding the corpus compilation process as well as the analysis itself are the following:

1. I have taken a twofold approach to the corpus compilation task and analysis in order to ascertain both its empirical validity and its systematicity. Thus, data compilation has been carried out in a deductive fashion, whereas conclusions have been drawn inductively. As a result, the two traditional approaches in opposition in the linguistic tradition in this respect, namely Bloomfield’s (1933) inductive method and Chomsky’s (1965) deductive method, have been satisfactorily reconciled in this work. Not only empirical accuracy but also practical considerations should make us aim at an at least partial combination of both approaches.

2. Likewise, I have chosen a combined theoretical framework primarily based on RRG and complemented by Pustejovsky’s Generative Lexicon and Levin’s lexical semantics. By virtue of this combination, the shortcomings of the former have been overcome. On the one hand, Pustejovsky’s approach provides a more complex system of lexical representation based on qualia for nominal phrases and on subeventual structure, headedness and coercion for event structure; on the other, Levin and Rappaport Hovav’s (2005) lexical-semantic approach represents a plausible non-projectionist approach which integrates adjuncts into the lexical representation.

3. I have undertaken the analysis in a systematic way, commencing with the analysis of word structure and continuing with the analysis of argument structure and event structure, in this order. Although the analysis consists of the three clearly differentiated parts that I have just mentioned, the outcomes have been displayed separately for word structure on the one hand, and for argument and event structure on the other. This corresponds to the fact that the analysis of word structure, despite being illustrated – and, hence, supported – by examples from the corpus, has been carried out on the basis of dictionaries and the lexical database WordNet, whereas the analysis of the argument and event structures of break verbs has been performed on the corpus. This division is coherent with the fact that the analysis of word structure is a paradigmatic and type analysis, whereas the analysis of argument and event structure is a syntagmatic and token analysis.

4. Finally, the main goal of the dissertation, namely to carry out a close semantic analysis of the RRG system of lexical representation, has been fulfilled. In this dissertation, I have shown the relationship that exists among the three levels of representation that I have analysed. The analysis of the variables shows that the three levels are intimately connected, so that the information that is codified at one level is coherent with the information codified at the other levels. Therein lies the success of Pustejovsky’s lexicon: it is a comprehensive and highly cohesive lexicon whose parts are harmonious. By carrying out the analysis of these levels integrating components of both RRG and GL, I have shown that the two theories are compatible and, what is more, that they complement each other, which is why each can benefit from the other.

Going back to the ideas put forward in chapter 1, RRG can benefit from GL by incorporating elements of its system of lexical representation and, more precisely, from the notion that has permeated this work from beginning to end: RRG can be enhanced by incorporating into its logical structure the components of GL’s event structure.

Turning to the working hypotheses I put forward in chapter 1, they have been validated by the analysis of empirical data:

1. The first working hypothesis was meta-theoretical and, as I have just explained above, the analysis has shown that the incorporation into the lexical representations of RRG of new components such as subeventual structure and temporal sequence, among others, as well as their extension to include adjuncts, would mean a step forward in the explanation of structures or constructions that logical structures cannot account for, like resultatives.

2. The second working hypothesis was theoretical, and stated that all changes of state are durative. As I said in chapter 1, time is understood in terms of changes throughout a universal flow; in other words, the universal flow of time and all temporal things could not be perceived were it not for the changes undergone by beings. The analysis has corroborated this hypothesis: on the one hand, all events have an initial state and a final result state, and in between there is always a process, which indicates duration, even when the event is described as punctual; on the other hand, the vast majority – if not all – of the subevents of the prototypical examples analysed have turned out to stand in a relationship of partial ordered overlap (that is, even when the duration of the event is minimal, subevent 2 follows subevent 1; there is some overlap between the final part of the process and the beginning of the result state; and the second subevent continues while the first subevent still holds and beyond). Furthermore, on the basis of these assumptions, I have been able to study all the other concepts relating to event structure in a coherent way.

3. The third working hypothesis was descriptive: the word structure, argument structure and event structure of prototypical verbs of change of state not only provide a motivated explanation of their grammatical behaviour, but also contribute to a motivated meaning definition in functional and cognitive terms. The analysis has shown that the three levels are interrelated, and that all three contribute to the meaning definition of break verbs. Thus, the analysis of semantic relationships such as synonymy, polysemy or hyperonymy has supplied the basic meaning components for the arrangement of the corpus into classes according to degree of prototypicality; the analysis of syntactic and semantic valence, of argument-adjuncts and of adjuncts has contributed to the definition of those meanings that fall outside the scope of the meaning of the predicate; and the analysis of subeventual structure and temporal sequence has endowed the event with duration, another fundamental feature in the definition of changes of state.

In short, based on the guidelines of Pustejovsky’s generative lexicon and Levin and Rappaport Hovav’s lexical semantics, this dissertation has contributed to the development of a more accurate system of lexical representation for RRG, thereby developing the syntagmatic and paradigmatic dimensions of the lexicon.

And, most importantly, this work has been carried out following the canons of methodological rigour described by McDonough and McDonough (1997), that is, systematically and drawing on empirical data.

CHAPTER 6: LINES OF FUTURE RESEARCH

6.0 Introduction

In this last chapter, I outline those questions that have emerged throughout this dissertation and which have been left open for future research. I start by presenting my two main concerns for future research, which derive from the aim of the dissertation itself and represent the natural direction that this research should take from here onwards. Having done so, I then move on to briefly discuss the more particular issues that have arisen during the analysis.

6.1 Broad lines of future research

The two main lines of future research, which have arisen from the aim of the dissertation, and which represent the natural path that this research should follow beyond the limits of this dissertation are as follows.

1. Lexical representation (I): in this dissertation, I have shown that RRG’s system of lexical representation needs to be enriched, and I have pointed out a few ways in which this goal can be reached. However, more emphasis should be placed on Mairal and Ruiz de Mendoza’s new approach to lexical representation, a model that successfully brings together lexical and constructional information, thus extending its scope beyond that of RRG’s logical structures. This new model, called the Lexical Constructional Model (LCM), is functionally and cognitively oriented. According to Ruiz de Mendoza and Mairal (forthcoming 2007), it combines insights from RRG, from Goldberg’s Construction Grammar and from cognitive models like Lakoff’s (1987) or Ruiz de Mendoza’s (1997, 1998, 2000, 2005). Semantic representation is the outcome of a unification process between a lexical template and a constructional template, and it is regulated by a number of internal (that is, metalinguistic) and external (that is, conceptual and cognitive) constraints. The potential of a system of lexical representation that bridges the gap between lexical (projectionist) and construction-based models is enormous: by means of its constructional templates it overcomes the limitations of RRG’s logical structures, and the conceptual and cognitive component that this model incorporates captures the metaphorical and metonymic readings that RRG clearly ignores. This would have been the model to follow from the outset of this dissertation; unfortunately, the LCM only surfaced when this dissertation was coming to an end. Therefore, I leave it for future research to engage in the enrichment of RRG through the incorporation of insights from the LCM.

2. Lexical representation (II): in chapter 4, I have ascertained the intimate connection that exists between affixation and the syntax-semantics interface, which is in keeping with Mairal and Cortés’s (2000-2001: 271) assumption that ‘affixal units should be regarded as lexical units in their own right’. Starting from this idea, Mairal and Cortés (2000-2001) point out the need to develop an algorithm that links morphology to syntax and semantics in the same way as RRG’s linking algorithm does for primary (non-derived) words. Hence, Mairal and Cortés borrow RRG’s linking machinery and develop what they call the Affixal Lexical Model (2000-2001: 272). Along these lines, Martín Arista (forthcoming-a) has put forward a Layered Structure of the Complex Word in which lexical constituents perform semantically-motivated functions and operators have scope over word layers. In this respect, it remains to be determined whether the inventory of lexical operators should follow from Mel’čuk’s (1994, 1996, 2006) lexical functions or, rather, from the universal set of operators proposed by Beard and Volpe (2005).

6.2 More specific lines of future research

Some more particular issues have arisen during the analysis in the preceding chapters, and I mention them here as possible lines of future research.

3. This research has revealed that break verbs are highly polysemous. It is interesting to note, for example, that many of the break verbs show a tendency to denote both a breakage on the one hand, and a noise (or an action or thing characterized by that noise) similar to that produced when the breakage takes place on the other. Crash and snap are particularly productive in this respect. The question thus boils down to whether those verbs that share this feature show some other similar behavioural pattern that distinguishes them from the rest of the verbs in the class and, if so, whether they should be set apart in a class of their own.

4. Along the same lines, I have also noticed that several meanings recur across a number of break verbs, like ‘crashing/shattering/smashing a record’. As in the previous case, it would be interesting to find out whether these verbs share some other features. This raises two additional questions: (1) is there some connection between this sense and other contrastive senses like the one above?; and (2) is there some connection between these senses and the core meaning of break verbs – that is, pure change of state?

5. With regard to the last question, two possible explanations spring to mind. Firstly, there may be some explanation in the historical development of these verbs, in which case we should leave it to specialists in diachrony to deal with this issue. Secondly, there may be some metaphorical or metonymic relationship between these senses and the core sense of break verbs. Consider the examples above: ‘to shatter a record’ is defined in the CALD as ‘to do much better than the best or fastest result recorded previously’. At first sight, this meaning has nothing to do with that of verbs of change of state. Nonetheless, as I explain in chapter 4, it is common sense that when an object is broken it is no longer any good, and the same reasoning could be applied to the example under analysis: when a record is shattered it is no longer any good, which, metaphorically, could be equated with the definition provided above. The corpus has provided many examples like this. Nonetheless, this is just a tentative analysis, and I leave it for further research.

6. Qualia: when dealing with polysemy in chapter 4 and with the idea that WordNet and the NODE are reliable sources of data, I have assumed that the core senses put forward by the NODE cover the whole range of meanings of break verbs. If this is true, it should be possible to derive all the senses displayed in WordNet from those listed in the NODE. For the purposes of this dissertation, I have assumed that this hypothesis is valid. However, the real test would consist in developing the qualia for all break verbs and checking them against all the senses displayed in WordNet for each verb. Only if the senses were grouped around the same core senses that the NODE outlines would the hypothesis prove true. This falls outside the scope of my research; hence I leave it for future research.
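Such a test could start from an explicit encoding of the four GL qualia roles (FORMAL, CONSTITUTIVE, TELIC, AGENTIVE). The sketch below is merely illustrative: the qualia values for snap and the keyword-based grouping function compatible are hypothetical stand-ins of my own, not the actual NODE or WordNet data.

```python
from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str        # what kind of entity or event it is
    constitutive: str  # its internal constitution
    telic: str         # its purpose or typical outcome
    agentive: str      # how it comes about

# Hypothetical qualia for 'snap' (illustrative values only).
snap = Qualia(
    formal="change of state",
    constitutive="rigid, breakable object",
    telic="result state: object in two or more pieces",
    agentive="sudden application of force",
)

def compatible(core_sense: str, wordnet_sense: str) -> bool:
    # Toy grouping criterion: a WordNet sense falls under a NODE core
    # sense if the two glosses share at least one content word
    # (crudely approximated here as any word longer than three letters).
    content = lambda s: {w for w in s.lower().split() if len(w) > 3}
    return bool(content(core_sense) & content(wordnet_sense))

print(compatible("break suddenly with a sharp noise", "make a sharp sound"))  # True
print(compatible("separate into pieces", "emit a loud sound"))                # False
```

A real implementation would of course replace the keyword heuristic with a semantic comparison of glosses, but the overall shape of the test – group every WordNet sense under some NODE core sense, or falsify the hypothesis – would remain the same.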

7. Telicity: chapter 4 has revealed that many of the nouns under analysis are portrayed in the dictionaries as both mass and count. Currency, noise, time, work, shade, territory, truth and nonsense are only a few among many. It would be interesting to find out whether these nouns were originally of one type only and underwent a process of conversion, thus offering both interpretations, or whether they were originally both mass and count. In connection with this question, it would also be interesting to determine the difference between these nouns and those that appear as either mass or count in the dictionary and undergo conversion in context. Consider, for instance, mass nouns such as water or flour, which are defined in the dictionaries as mass but which receive a telic interpretation in some contexts. These nouns may be contrasted with, for example, cake, which is defined as both mass and count in the dictionaries. To what extent can we claim that the former are cases of conversion and not the latter? And, more importantly, does this have any impact on their combination with certain determiners? My point is that it does. The analysis of the corpus has brought to my attention that those nouns that can be both mass and count are sometimes tricky. For example, mass nouns such as money tend to be atelic when used with any. However, there are examples such as ‘If you hear any noise, let me know’ where any noise seems to be telic. This would seem to contradict the idea that mass nouns such as money tend to be atelic, but it does not. The reason why any noise is telic in the example above is that noise can be both mass and count, and in this case it is the count version that is being used. In other words, ‘If you hear any noise, let me know’ illustrates the combination of any plus a count noun, thus resulting in a telic NP, as would be the case with any other count noun such as, for instance, woman. For this reason, in this work I have tried to stick to clear instances of mass and count nouns. Nonetheless, further research is needed on this issue.
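The reasoning above can be made explicit as a small decision rule. This is my own illustrative sketch, not a claim about any existing formalism: the function assumes the mass/count reading has already been disambiguated in context, which is precisely the tricky step for dual nouns such as noise.

```python
# Nouns the dictionaries list as both mass and count; their reading
# must be fixed in context before telicity can be computed.
DUAL_NOUNS = {"noise", "currency", "time", "work", "shade",
              "territory", "truth", "nonsense", "cake"}

def needs_disambiguation(noun: str) -> bool:
    # Dual nouns admit both readings, so context must decide.
    return noun in DUAL_NOUNS

def np_telicity(reading: str) -> str:
    """Telicity of an 'any + noun' NP under the sketch above: the count
    reading yields a telic NP, the mass reading an atelic one."""
    assert reading in ("mass", "count")
    return "telic" if reading == "count" else "atelic"

print(np_telicity("mass"))    # atelic: 'any money'
print(np_telicity("count"))   # telic:  'any noise' in
                              # 'If you hear any noise, let me know'
print(needs_disambiguation("noise"), needs_disambiguation("money"))  # True False
```

On this view, the apparent counterexample ‘any noise’ dissolves: the determiner is constant, and it is the count reading of the dual noun that makes the NP telic.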

8. Temporal sequence: in chapter 4, I defined the prototypical temporal relation that exists between the subevents of the event structure of break verbs as partial ordered overlap. However, I suggest that this relationship might be subject to further refinement. As stated in the introduction and corroborated by the analysis, all changes of state have duration, even if it is minimal. The analysis has provided examples of two different types of events: (1) events where the transition process has a minimal duration, as illustrated by ‘Last week my nail snapped off’; and (2) events where the process takes longer to develop, as in ‘Finally a heavy press crushes out the remainder of the must, but this is of an inferior quality’. When duration is at its minimum, the second subevent – that is, the result state – overlaps with the (whole) process and extends beyond it; the overlap is almost total, but not complete. When the process is more durative, only the final part of the process and the beginning of the result state overlap, and the result state then extends beyond the process. Therefore, I suggest that there are two different types of temporal relations between the subevents of break verbs: for events with processes of minimal duration I propose the term full ordered overlap; for events with more durative processes I propose the term partial ordered overlap. Nonetheless, this is only a suggestion, and further research is needed on this topic.
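The two proposed relations can be visualised by modelling subevents as time intervals. The terms full ordered overlap and partial ordered overlap are those proposed above; the encoding and the 90% cut-off are arbitrary illustrative choices of my own.

```python
from typing import Tuple

Interval = Tuple[float, float]  # (start, end) of a subevent

def ordered_overlap(process: Interval, result: Interval) -> str:
    p_start, p_end = process
    r_start, r_end = result
    # Both relations assume the ordering described above: the result
    # state begins during the process and extends beyond its end.
    assert p_start <= r_start < p_end < r_end
    # Share of the process overlapped by the result state.
    covered = (p_end - r_start) / (p_end - p_start)
    # Arbitrary cut-off: near-total overlap counts as 'full'.
    return "full ordered overlap" if covered > 0.9 else "partial ordered overlap"

# Minimal-duration transition: 'Last week my nail snapped off'
print(ordered_overlap((0.0, 0.1), (0.005, 5.0)))   # full ordered overlap
# Durative process: 'a heavy press crushes out the remainder of the must'
print(ordered_overlap((0.0, 10.0), (9.0, 20.0)))   # partial ordered overlap
```

The sketch makes the refinement concrete: both relation types satisfy the same ordering constraints, and they differ only in how much of the process the result state overlaps.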

9. Syntactic dimension: this research has focused mainly on semantic complementation; nevertheless, its scope could be enlarged so as to embrace a more syntactic dimension too. A more accurate study of adjuncts, for instance, might be an appropriate syntactic complement to this work, especially taking into account the relevant role that adjuncts play in lexical representation. As I have argued in chapter 4, the presence of some adjuncts like instrument and manner is relevant to semantic complementation. Therefore, it would be interesting to find out what reasons underlie this fact. In Van Valin and LaPolla (1997), different adverbs operate at different layers of the layered structure of the clause (nucleus, core and periphery). Hence, the question is: does the layer determine the relevance or non-relevance of the adjuncts instrument and manner with respect to other adjuncts like, for instance, locatives? If so, it would be interesting to make progress along the line I advanced in chapter 4, first delimiting which layer is the cut-off point for the adjuncts of a verbal class, and then going deeper into which precise adjuncts from those layers must be incorporated into the logical structures of those verbs.

10. Merge: in chapter 4, I set the basis for the study of a phenomenon that I have called merge in this dissertation. By contrast with constructions, whose semantics cannot be compositionally derived from the lexical items that instantiate them, merge conflates the meanings of all its component parts. By way of illustration, consider the caused-motion construction discussed in chapter 4. For Goldberg, caused-motion constructions combine the change-of-state meaning and the motion component into a new expression that depicts a change of position. Merge, however, combines the basic meaning of change of state with that of motion or change of position, giving rise to a new expression that conflates both meanings. In other words, the meaning of merge expressions can be derived from the conflation of the meanings of the lexical items that instantiate them. The comparison of the two representations shows that the main differences between the merge of a change of state and a change of position, on the one hand, and a caused-motion construction, on the other, can be explained in terms of their AGENTIVE qualia value and their nucleus. Nonetheless, the question remains why the same syntactic expression is sometimes interpreted as a merge and at other times as a case of co-composition. Thus, I leave this question open for further research.


BIBLIOGRAPHY

Alsina, A 1999, ‘On the Representation of Event Structure’, in T Mohanan and L Wee

(eds), Grammatical semantics: Evidence for structure in meaning, Stanford: CSLI

Publications, pp. 77-122.

Aston, G and Burnard, L 1998, The British National Corpus Handbook. Exploring the

BNC with SARA, Edinburgh: Edinburgh textbooks in Empirical Linguistics.

Bach, E 1981, ‘On time, Tense, and Aspect: An Essay in English Metaphysics’, in P

Cole (ed.), Radical Pragmatics, New York: Academic Press, pp. 63-81.

Bach, E 1986, ‘The Algebra of Events’, Linguistics and Philosophy vol. 9, pp. 5-16.

Baker, M 2003, Lexical Categories. Verbs, Nouns and Adjectives, Cambridge:

Cambridge University Press.

Barber, C 1993, The English Language: A Historical Introduction, Canto edn,

Cambridge: Cambridge University Press.

Bauer, L 1983, English word-formation, Cambridge: Cambridge University Press.

Bauer, L 1992, Introducing linguistic morphology, Corrected edn, Edinburgh:

Edinburgh University Press.


Bauer, L 1998, , London: Routledge.

Bauer, L 2004, A Glossary of Morphology, Edinburgh: Edinburgh University Press.

Beard, R and Volpe, M 2005, ‘Lexeme-Morpheme Base Morphology’, in P Stekauer and R Lieber (eds), Handbook of Word-Formation, Dordrecht: Springer, pp. 189-205.

Benešová, V 2005, ‘Valency and Semantic Features of Verb’, in J Safrankova (ed.),

WDS'05 Proceedings of Contributed Papers, Prague: Charles University, Faculty of

Mathematics and Physics, pp. 66–71, viewed 15 April, 2007,

.

Bickel, B 1993, ‘Belhare subordination and the theory of topic’, in K Ebert (ed.),

Studies in clause linkage – Arbeiten des Seminars für Allgemeine Sprachwissenschaft, no. 12, Zurich: University of Zurich, pp. 23-55.

Bickel, B 2003, ‘Clause linkage typology’, lecture series delivered at the International

Role and Reference Grammar Conference, UNESP, São Jose do Rio Preto, Brazil, 14-

20 July.

Binnick, R 1991, Time and the Verb: A Guide to Tense and Aspect, Oxford: Oxford

University Press.

Brinton, LJ 1988, The Development of English Aspectual Systems: Aspectualizers and

Post-Verbal Particles, Cambridge: Cambridge University Press.

Bunt, C 1985, Mass Terms and Model Theoretic Semantics, Cambridge: Cambridge

University Press.

Butler, C, Mairal Usón, R, Martín Arista, J and Ruiz de Mendoza, JF 1999, Sobre la

Gramática Funcional, Barcelona: Ariel.

Cardelli, L and Wegner, P 1985, ‘On Understanding Types, Data Abstraction, and

Polymorphism’, ACM Computing Surveys, vol. 17, no. 4, pp. 471-522.

Comrie, B 1976, ‘The Syntax of Causative Constructions: Cross-Language Similarities and divergences’, in M Shibatani (ed.), The Grammar of Causative Constructions.

Syntax and Semantics 6, New York: Academic Press, pp. 261-312.

Cortés Rodríguez, FJ 2006, ‘Derivational Morphology in Role and Reference Grammar: a new Proposal’, RESLA, vol. 19, pp. 41-66.

Coseriu, E 1977, Principios de Semántica Estructural, Madrid: Gredos.

Croft, W 1990a, ‘Possible Verbs and the Structure of Events’, in SL Tsohatzidis (ed.),

Meanings and Prototypes: Studies in Linguistic Categorization, London: Routledge, pp.

48-73.

Croft, W 1990b, Typology and Universals, Cambridge: Cambridge University Press.

Croft, W, 1991, Syntactic Categories and Grammatical Relations, Chicago: University of Chicago Press.

Croft, W, 1994, ‘The Semantics of Subjecthood’, in M Yaguello (ed.), Subjecthood and

Subjectivity: The Status of the Subject in Linguistic Theory, Paris: Ophrys, pp. 29-75.

Croft, W 1998, ‘Event Structure in Argument Linking’, in M Butt and W Geuder (eds),

The Projection of Arguments: Lexical and Syntactic Constraints, Stanford: CSLI

Publications, pp. 21-63.

Cruse, DA, Hundsnurscher, F, Job, M and Lutzeier, PR (eds), 2002, Lexicology: An

International Handbook on the Nature and Structure of Words and Vocabularies,

Berlin: Walter de Gruyter.

DeLancey, S 1984, ‘Notes on Agentivity and Causation’, Studies in Language, vol. 8, no. 2, pp. 181-213.

DeLancey, S 1985, ‘Agentivity and Syntax’, Papers from the Parasession on

Causatives and Agentivity, Chicago: Chicago Linguistic Society, pp. 1-12.

DeLancey, S 1990, ‘Ergativity and the Cognitive Model of Event Structure in Lhasa

Tibetan’, Cognitive Linguistics vol. 1, no. 3, pp. 289-321.

DeLancey, S 1991, ‘Event Construal and Case Role Assignment’, in Proceedings of the

Seventeenth Annual Meeting of the Berkeley Linguistics Society, Berkeley: Berkeley

Linguistics Society, vol. 17, pp. 338-353.

Dik, SC 1978, Functional grammar, Amsterdam: North Holland.

Dik, SC 1989, The Theory of Functional Grammar, Part 1. Dordrecht: Foris.

Dik, SC 1997, The Theory of Functional Grammar, Vol. 1: The Structure of the Clause.

Berlin: Mouton de Gruyter.

Dixon, RMW 1972, The Dyirbal language of North Queensland, Cambridge:

Cambridge University Press.

Dowty, DR 1979, Word Meaning and Montague Grammar, Dordrecht: Reidel.

Dowty, DR 1991, ‘Thematic Proto-Roles and Argument Selection’, Language, vol. 67, no. 3, pp. 547-619.

Faber, PB and Mairal Usón, R 1999, Constructing a lexicon of English verbs, Berlin:

Mouton de Gruyter.

Filip, H 1999, Aspect, Eventuality Types and Nominal Reference, New York: Garland.

Fillmore, CJ 1970, ‘The Grammar of Hitting and Breaking’, in R Jacobs and P

Rosenbaum (eds), Readings in English Transformational Grammar, Waltham, MA:

Ginn, pp. 120-33.

Fodor, JA 1998, Concepts: Where cognitive science went wrong, Oxford: Oxford

University Press.

Fong, V 1999, ‘Comments on Alsina’, in T Mohanan and L Wee (eds), Grammatical semantics. Evidence for structure in meaning, Stanford: CSLI Publications, pp. 123-43.

Freed, AF 1979, The Semantics of English Aspectual Complementation, Dordrecht:

Reidel.

Gillon, BS 1999, ‘The Lexical Semantics of English Count and Mass Nouns’, in E Viegas

(ed.), The Breadth and Depth of Semantic Lexicons, Dordrecht: Kluwer, pp. 19-37.

Givón, T 1983a, ‘Topic Continuity in Discourse: An Introduction’, in T Givón (ed.),

Topic Continuity in Discourse, Amsterdam: John Benjamins, pp. 4-41.

Givón, T 1983b, ‘Topic Continuity in Spoken English’, in T Givón (ed.), Topic

Continuity in Discourse, Amsterdam: John Benjamins, pp. 343-63.

Givón, T 1984/1990, Syntax: A Functional-Typological Introduction (2 vols.),

Amsterdam: John Benjamins.

Givón, T 1989, Mind, Code and Context: Essays in Pragmatics, Amsterdam: John

Benjamins.

Givón, T 1993, English Grammar: A Function-Based Introduction (I), Amsterdam:

John Benjamins Publishing Company.

Givón, T 1995, Functionalism and grammar, Amsterdam: John Benjamins.

Goldberg, A 1995, A Construction Grammar Approach to Argument Structure,

Chicago: University of Chicago Press.

Goldberg, A 2002, ‘Surface generalizations: An alternative to alternations’, Cognitive

Linguistics, vol. 13, no. 4, pp. 1-31.

Goldberg, A, 2006, Constructions at work: the nature of generalization in language,

Oxford: Oxford University Press.

Goossens, L 1990, ‘Metaphtonymy: the interaction of metaphor and metonymy in expressions for linguistic action’, Cognitive Linguistics, vol. 1, no. 3, pp. 323-40.

Greenberg, J (ed.) 1966, Universals of grammar, Cambridge, MA: MIT Press.

Grimshaw, J 1990, Argument Structure, Cambridge, MA: MIT Press.

Gruber, JS 1965, ‘Studies in Lexical Relations’, Doctoral dissertation, MIT, Cambridge,

MA.

Hale, KL and Keyser, SJ 1987, ‘A View from the Middle’, Lexicon Project Working

Papers 10, Cambridge, MA: MIT Press.

Halliday, MAK 1966, ‘Lexis as a linguistic level’, in CE Bazell, JC Catford, MAK

Halliday and RH Robins (eds), In Memory of J. R. Firth, London: Longman, pp. 148-

62.

Hawkes, T 1977, Structuralism and Semiotics, London: Routledge.

Jackendoff, RS 1972, Semantic Interpretation in Generative Grammar, Cambridge:

MIT Press.

Jackendoff, RS 1976, ‘Toward an Explanatory Semantic Representation’, Linguistic

Inquiry, vol. 7, no. 1, pp. 89-150.

Jackendoff, RS 1983, Semantics and Cognition. Cambridge, MA: MIT Press.

Jackendoff, RS 1987, ‘The Status of Thematic Relations in Linguistic Theory’,

Linguistic Inquiry, vol. 18, no. 3, pp. 369-411.

Jackendoff, RS 1990, Semantic Structures. Cambridge, MA: MIT Press.

Jackendoff, RS 1991, ‘Parts and Boundaries’, Cognition, vol. 41, no. 1, pp. 9-45.

Jiménez Martínez-Losa, N 2005, A Cognitive Linguistics Approach to the

Conceptualization of Motion in English: New Perspectives, unpublished MA

Dissertation, Universidad de La Rioja, Logroño.

Johnson, M 1987, The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason, Chicago: The University of Chicago Press.

Jolly, J 1991, Prepositional analysis within the framework of Role and Reference

Grammar, New York: Peter Lang.

Jolly, J 1993, ‘Preposition assignment in English’, in RD Van Valin Jr. (ed.), Advances in Role and Reference Grammar, Amsterdam: John Benjamins, pp. 275-310.

Katamba, F 1993, Morphology, London: MacMillan.

Kastovsky, D 1992, ‘Semantics and Vocabulary’, in RM Hogg (ed.), The Cambridge

History of the English Language. Vol. I: The Beginning to 1066, Cambridge: Cambridge

University Press, pp. 290-408.

Kennedy, G 1998, An introduction to corpus linguistics, London: Longman.

Koontz-Garboden, A 2006, ‘Aspectual coercion and the typology of change of state predicates’, Journal of Linguistics, vol. 43, no. 1, pp. 115-52.


Krifka, M 1989a, ‘Nominal reference, Temporal Constitution and Quantification in

Event Semantics’, in R Bartsch, J van Benthem and P van Ende Boas (eds), Semantics and Contextual Expresión, Dordrecht: Foris, pp. 75-115.

Krifka, M 1989b, Nominalreferenz und Zeitkonstitution. Zur Semantik von

Massentermen, Individualtermen, Aspektklassen, Munich: Wilhelm Fink Verlag.

Krifka, M 1992, ‘Thematic Relations as Links between Nominal Reference and

Temporal Constitution’, in IA Sag and A Szabolcsi (eds), Lexical Matters, Stanford:

Stanford University, pp. 29-54.

Krifka, M 1998, ‘The Origins of Telicity’, in S Rothstein (ed.), Events and Grammar,

Dordrecht: Kluwer, pp. 197-235.

Lakoff, G 1987, Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, Chicago: University of Chicago Press.

Lakoff, G 1993, ‘The contemporary theory of metaphor’, in A Ortony (ed.), Metaphor and Thought, 2nd edn, Cambridge: Cambridge University Press, pp. 202-51.

Lakoff, G and Johnson, M 1999, Philosophy in the Flesh, New York: Basic Books.

Lakoff, G and Turner, M 1989, More than Cool Reason: A Field Guide to Poetic

Metaphor, Chicago: University of Chicago Press.


Langacker, RW 1987, Foundations of Cognitive Grammar 1: Theoretical Prerequisites,

Stanford: Stanford University Press.

Langacker, RW 1990, Concept, Image, and Symbol, Berlin: Mouton de Gruyter.

Langacker, RW 1991, Foundations of Cognitive Grammar 2: Descriptive Application,

Stanford: Stanford University Press.

Langacker, RW 1993, ‘A Localist Model for Event Semantics’, Journal of Semantics, vol. 10, pp. 65-111.

Lehrer, A 2000, ‘Are Affixes Signs? The semantic relationships of English derivational

Affixes’, in WU Dressler, OE Pfeiffer, MA Pöchtrager and JR Rennison (eds),

Morphological analysis in comparison, Amsterdam: John Benjamins.

Lemmens, M 1998, Lexical Perspectives on Transitivity and Ergativity, Amsterdam:

John Benjamins.

Levin, B 1993, English Verb Classes and Alternations: A Preliminary Investigation,

Chicago: The University of Chicago Press.

Levin, B and Rappaport Hovav, M 1995, Unaccusativity: At the Syntax-Lexical

Semantics Interface, Cambridge, MA: MIT Press.

Levin, B and Rappaport Hovav, M 2005, Argument Realization, Research Surveys in Linguistics, Cambridge: Cambridge University Press.

Lieber, R 2004, Morphology and lexical semantics, Cambridge: Cambridge University

Press.

Lieber, R and Baayen, H 1997, ‘A semantic principle of auxiliary selection in Dutch’,

Natural Language and Linguistic Theory, vol. 15, no. 4, pp. 789-845.

Lieber, R and Baayen, H 1999, ‘Nominalizations in a calculus of lexical semantic representations’, in G Booij and J van Marle (eds), Yearbook of Morphology 1998,

Dordrecht: Kluwer, pp. 175-98.

Lyons, J 1977, Semantics, Cambridge: Cambridge University Press.

Lyons, J 1981, Language, meaning and context, London: Fontana.

Lyons, J 1995, Linguistic Semantics. An introduction, Cambridge: Cambridge

University Press.

Mairal Usón, R 2001-2002, ‘Why the Notion of Lexical Template?’, Anglogermanica online: Revista electrónica periódica de filología alemana e inglesa 1, viewed 15 April,

2007, .

Mairal Usón, R and Cortés, F 2000-2001, ‘Semantic packaging and syntactic projections in word formation processes: the case of agent nominalizations’, RESLA, vol. 14, pp. 271-94.

Mairal Usón, R and Faber, P 2002, ‘Functional Grammar and lexical templates’, in R

Mairal Usón and MJ Pérez Quintero (eds), New perspectives on argument structure in

Functional Grammar, Berlin: Mouton de Gruyter, pp. 41-98.

Mairal Usón, R and Ruiz de Mendoza, JF Forthcoming 2007, ‘Internal and external constraints in meaning construction: the lexicon-grammar continuum’, in MT Gilabert

Maceda and L Alba Juez (eds), Estudios de Filología Inglesa: Homenaje a la Dra.

Asunción Alba Pelayo, Madrid: UNED.

Mairal Usón, R and Van Valin, RD 2001, ‘What Role and Reference Grammar Can Do for Functional Grammar’, Revista canaria de estudios ingleses, vol. 42, pp. 137-66.

Marchand, H 1969, The Categories and Types of Present-Day English Word-

Formation: a Synchronic-Diachronic Approach, 2nd edn, Munich: C. H. Becksche

Verlagsbuchhandlung.

Marcovich, M 1967, Heraclitus. Greek text with a short commentary by M. Marcovich,

Mérida: Universidad de los Andes Press.

Martín Arista, J 2005, ‘Configurationality’, in P Strazny (ed.), Encyclopedia of

Linguistics, Chicago: Fitzroy Dearborn Publishers, pp. 232-4.


Martín Arista, J 2006, ‘Alternations, relatedness and motivation: Old English A-’, in P

Guerrero Medina and E Martínez Jurado (eds), Where Grammar Meets Discourse:

Functional and Cognitive Perspectives, Córdoba: Servicio de Publicaciones de la

Universidad de Córdoba, pp. 113-32.

Martín Arista, J Forthcoming-a, ‘Unification and separation in a functional theory of morphology’.

Martín Arista, J Forthcoming-b, ‘A Template-and-Construction Framework for

Functional Morphology: the Case of Old English hreow’.

Martín Arista, J and Ortigosa Pastor, A 2000, ‘Marca sustantiva: Topicalidad e iconicidad en la frase nominal en inglés antiguo’, Cuadernos de investigación filológica, vol. 26, pp. 247-262.

Martín Arista, J and Ibáñez Moreno, A 2004, ‘Deixis, reference and the functional definition of lexical categories’, Atlantis, vol. 26, pp. 63-74.

Martín Mingorance, L 1984, ‘Lexical fields and stepwise lexical decomposition in a contrastive English-Spanish verb valency dictionary’, in R Hartmann (ed.), LEX’eter

‘83 Proceedings. Papers from the International Conference on Lexicography at Exeter,

Tübingen: Max Niemeyer, pp. 226-36.

Martín Mingorance, L 1990, ‘Functional Grammar and Lexematics’, in J Tomaszczyk and B Lewandowska-Tomaszczyk (eds), Meaning and Lexicography, Amsterdam: John

Benjamins, pp. 227-53.

Martín Mingorance, L 1995, ‘Lexical logic and structural semantics: methodological underpinnings in the structuring of a lexical database for natural language processing’,

in U Hoinkes (ed.), Panorama der Lexikalischen Semantik, Tübingen: Gunter Narr, pp.

461-74.

McDonough, J and McDonough, S 1997, Research Methods for English Language

Teachers, London: Arnold.

Mel’čuk, I 1994, Cours de Morphologie Générale. Deuxième Partie: Significations

Morphologiques, Montréal: CNRS Éditions.

Mel’čuk, I 1996, ‘Lexical Functions: A Tool for the Description of Lexical Relations in the Lexicon’, in L Wanner (ed.), Lexical Functions in Lexicography and Natural

Language Processing, Amsterdam: John Benjamins, pp. 37-102.

Mel’čuk, I 2006, Aspects of the Theory of Morphology, D Beck (ed.), Berlin: Mouton de

Gruyter.

Moens, M and Steedman, M 1988, ‘Temporal Ontology and Temporal Reference’,

Computational Linguistics, vol. 14, no. 2, pp. 15-28.

Mourelatos, APD 1978, ‘Events, Processes and States’, Linguistics and Philosophy, vol.

2, pp. 415-34.

Murphy, ML 2003, Semantic Relations and the Lexicon. Antonymy, Synonymy, and

Other Paradigms, Cambridge: Cambridge University Press.

Nunberg, GD 1978, The pragmatics of reference, PhD dissertation, University of

California at Berkeley.

Ortigosa Pastor, A 2005, La expresión del tiempo y el aspecto en inglés: adverbios aspectuales y deíctico-temporales en la cláusula (un estudio funcional), PhD dissertation, Universidad de La Rioja, Logroño.

Pesetsky, DM 1995, Zero Syntax, Cambridge, MA: MIT Press.

Prasada, S 1999, ‘Names for things and stuff: An Aristotelian perspective’, in R

Jackendoff, P Bloom and K Wynn (eds), Language, logic, and concepts: Essays in honor of John Macnamara, Cambridge, MA: MIT Press, pp. 119–46.

Pustejovsky, J 1991, ‘The generative lexicon’, Computational Linguistics, vol. 17, no.

4, pp. 409-41.

Pustejovsky, J 1992, ‘The syntax of event structure’, in B Levin and S Pinker (eds),

Lexical and Conceptual Semantics, Oxford: Blackwell, pp. 47-81.

Pustejovsky, J 1995a, The Generative Lexicon, Cambridge, MA: MIT Press.

Pustejovsky, J 1995b, ‘Linguistic Constraints on Type Coercion’, in P Saint-Dizier and

E Viegas (eds), Computational Lexical Semantics, Cambridge: Cambridge University

Press.

Pustejovsky, J 1998, ‘The Semantics of Lexical Underspecification’, Folia Linguistica, vol. 32, no. 3/4, pp. 323-47.

Pustejovsky, J 2000, ‘Syntagmatic Processes’, in Handbook of Lexicology and

Lexicography, Berlin: Mouton de Gruyter, viewed 19 May, 2007,

.

Radden, G 2003, ‘The Metaphor TIME AS SPACE across Languages’, in N

Baumgarten, C Böttger, M Motz, J Probst (eds), Übersetzen, Interkulturelle

Kommunikation, Spracherwerb und Sprachvermittlung - das Leben mit mehreren

Sprachen. Festschrift für Juliane House zum 60. Geburtstag. Zeitschrift für

Interkulturellen Fremdsprachenunterricht, vol. 8, no. 2/3, pp. 226-39.

Rappaport, M and Levin, B 1988, ‘What to do with theta-roles’, in W Wilkins (ed.),

Syntax and Semantics, Vol. 21: Thematic Relations, New York: Academic Press, pp. 7-

36.

Rijkhoff, J 2002, The Noun Phrase, Oxford: Oxford University Press.

Roeper, P 1983, ‘Semantics for Mass Terms with Quantifiers’, Noûs, vol. 17, no. 2, pp.

251-65.

Ruiz de Mendoza Ibáñez, FJ 1997, ‘Metaphor, metonymy and conceptual interaction’,

Atlantis, vol. 19, no. 1, pp. 281-95.

Ruiz de Mendoza Ibáñez, FJ 1998, ‘On the nature of blending as a cognitive phenomenon’, Journal of Pragmatics, vol. 30, no. 3, pp. 259-74.

Ruiz de Mendoza Ibáñez, FJ 2000, ‘The role of mappings and domains in understanding metonymy’, in A Barcelona (ed.), Metaphor and Metonymy at the Crossroads, Berlin:

Mouton de Gruyter, pp. 109-32.

Ruiz de Mendoza Ibáñez, FJ 2005, ‘High level cognitive models: in search of a unified framework for inferential and grammatical behavior’, in K Kosecki (ed.), Perspectives on Metonymy, Frankfurt am Main: Peter Lang.

Ruiz de Mendoza Ibáñez, FJ and Díez, O 2002, ‘Patterns of conceptual interaction’, in

R Dirven and R Pörings (eds), Metaphor and Metonymy in Comparison and Contrast,

Berlin: Mouton de Gruyter, pp. 489-532.

Ruiz de Mendoza Ibáñez, FJ and Mairal Usón, R Forthcoming 2007, ‘Levels of semantic representation: where lexicon and grammar meet’, Interlingüística, vol. 17, viewed 25 May, 2007, .

Ruiz de Mendoza Ibáñez, FJ and Peña, MS 2005, ‘Conceptual interaction, cognitive operations, and projection spaces’, in FJ Ruiz de Mendoza Ibáñez and MS Peña (eds),

Cognitive Linguistics. Internal Dynamics and Interdisciplinary Interaction, Berlin:

Mouton de Gruyter.

Smith, CS 1991, The Parameter of Aspect, Dordrecht: Kluwer.

Smith, F 1988, Understanding Reading, Hillsdale, NJ: Erlbaum.

Spencer, A 1991, Morphological Theory: an Introduction to Word Structure in

Generative Grammar, Oxford: Basil Blackwell.

Stockwell, R and Minkova, D 2001, English Words: History and Structure, Cambridge:

Cambridge University Press.

Talmy, L 1985, ‘Force Dynamics in Language and Thought’, CLS, vol. 21, no. 1, pp.

293-337.

Talmy, L 1988, ‘Force Dynamics in Language and Cognition’, Cognitive Science, vol.

12, pp. 49-100.

Taylor, J 1989, Linguistic Categorization: Prototypes in Linguistic Theory, Oxford:

Oxford University Press.

Tenny, C 1994, Aspectual Roles and the Syntax-Semantics Interface, Dordrecht:

Kluwer.

Tesnière, L 1959, Éléments de syntaxe structurale, Paris: Klincksieck.

Tuţescu, M 1975, Précis de sémantique française, Paris: Klincksieck.

Van Valin, RD Jr. 1990, ‘Semantic Parameters of Split Intransitivity’, Language, vol. 66, no. 2, pp. 221-60.

Van Valin, RD Jr. 1993, ‘A Synopsis of Role and Reference Grammar’, in RD Van Valin Jr. (ed.), Advances in Role and Reference Grammar, Amsterdam: John Benjamins, pp. 1-164.

Van Valin, RD Jr. 1994, ‘Functional Relations’, in RE Asher (ed.), The Encyclopedia of

Language and Linguistics, Oxford: Pergamon Press, pp. 1327-38.

Van Valin, RD Jr. 2005, Exploring the syntax-semantics interface, Cambridge:

Cambridge University Press.

Van Valin, RD Jr. and LaPolla, RJ 1997, Syntax. Structure, meaning and function,

Cambridge: Cambridge University Press.

Vendler, Z 1957, ‘Verbs and Times’, Philosophical Review, vol. 66, pp. 143-60.

Vendler, Z 1976, Linguistics in Philosophy, Ithaca, NY: Cornell University Press.


Verkuyl, HJ 1972, On the Compositional Nature of the Aspects, Dordrecht: Reidel.

Verkuyl, HJ 1993, A Theory of Aspectuality, Cambridge: Cambridge University Press.

Verkuyl, HJ 1999, Aspectual Issues: Studies on Time and Quantity, Stanford: CSLI

Publications.

van Voorst, J 1988, Event Structure, Amsterdam: John Benjamins.

van Voorst, J 1993, ‘Against a Composite Analysis of Certain Causative Constructions’,

Perspectives on Language and Conceptualization, vol. 30, pp. 145-166.

van Voorst, J 1995, ‘The semantic structure of causative constructions’, Studies in

Language, vol. 19, no. 2, pp. 489-524.

Weinreich, U 1964, ‘Webster’s Third: A Critique of its Semantics’, International

Journal of American Linguistics, vol. 30, pp. 405-409.

Williams, E 1981, ‘Argument structure and morphology’, The Linguistic Review, vol. 1, pp. 81-114.

DICTIONARIES

Cambridge Advanced Learner’s Dictionary (Online), Cambridge: Cambridge

University Press.

Collins Cobuild English Dictionary for Advanced Learners (2001), 3rd edn, London:

Collins.

Longman Dictionary of Contemporary English (Online), London: Longman.

Longman Dictionary of the English Language (1984), London: Longman.

Longman Lexicon of Contemporary English. The New Vocabulary Source Book (1981),

London: Longman.

Merriam-Webster Unabridged Dictionary (Online), Springfield, MA: Merriam-

Webster.

The New Oxford Dictionary of English (1998), Oxford: Oxford University Press.

New Oxford Thesaurus of English (2000), Oxford: Oxford University Press.

Oxford English Dictionary (1933), Oxford: Oxford University Press.

WEB-PAGES

http://www.marsdenwriters.plus.com/frames/thewriters/margaret_renwick/washday.htm

http://wordnet.princeton.edu/

http://wordnet.princeton.edu/gloss/#Synset


APPENDIX: SAMPLE OF THE CORPUS – PROTOTYPICAL EXAMPLES

BREAK

14. I fall flat on my bonce and break a chair.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

5. Touches everything and breaks things easily
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: easily.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

10. In 1934 he crashed at Allessandria, breaking his leg.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

10. The wind drove our ship on to a rock, which broke the ship in half.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 3; Argument-adjunct: 1; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite; Semantic role for Arg 3: Result.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

7. Also the larger tablets can be broken up for small fish.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: for small fish.
QUALITATIVE ANALYSIS: Entity type: Arg 1: 1; Nominal Aspect: Arg 1: Count, [+telic]; Reference: Arg 1: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

CRACK

3. (text=ABB, n=671) Crack a cooled bean and check that it is evenly coloured throughout; if so, the beans are ready for immediate use or may be cooled and stored in an airtight jar.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

7. If, as I used to, one cracks walnuts by banging two of the things together, and keeps the uncracked winner to go forward the next round, will one inevitably finish Christmas with the hardest nut to crack in one's hand?
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: by banging two of the things together.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

2. They are generally thicker and harder-fired than wall tiles, to enable them to stand up to heavy wear without cracking.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 0; Semantic valence: 1; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: (no values recorded).
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

2. Rodney cracked two eggs into the frying pan.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 3; Argument-adjunct: 1; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Semantic role for Arg 3: Destination.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

5. Two days earlier his puissance horse, Leonardo, had cracked a bone in his off-fore fetlock which had to be put in plaster.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: two days earlier, in his off-fore fetlock.
QUALITATIVE ANALYSIS: Entity type: Arg 1: 1, Arg 2: 1; Nominal Aspect: Arg 1: Count, [+telic], Arg 2: Count, [+telic]; Reference: Arg 1: Definite, Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

CRASH

6. […] despite the merchants of doom who insisted they should never fly together in the same plane lest it crash and kill both heir and second-in-line to the throne.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: (no values recorded).
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

11. But then, as the new, more streamlined, machine slips into the twentieth century, moving smoothly and with a new confidence — and by now virtually everybody is along for the ride — it crashes catastrophically.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: Yes: catastrophically.
QUALITATIVE ANALYSIS: (no values recorded).
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

11. He said that, besides reducing the tower's elasticity — which may have prevented it crashing before now — the weight of the steel and lead could lead to disaster by causing the soggy ground to give way.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: Yes: before now.
QUALITATIVE ANALYSIS: (no values recorded).
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

9. And when he was nearly killed when he crashed his car on a motorway he had a Brahms symphony on his radio, which he now uses as his theme tune.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: on a motorway.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

9. He had not read the Tower Commission's report, evidently; did not know that McFarlane had pleaded guilty to misdemeanours; did not know the substance of the charges against Poindexter; had not altered in any particular the mythology created when the edifice had first crashed down, in November 1986.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: Yes: first / in November 1986.
QUALITATIVE ANALYSIS: Entity type: Arg 1: 1; Nominal Aspect: Arg 1: Count, [+telic]; Reference: Arg 1: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

CRUSH

1. If you crush an ear of barley, however, you will not get beer.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Mass, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

8. Where the shoe touches the grass it crushes the grass and releases juices and smells from the grass.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Mass, [-telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

3. Raskolnikov uses the blade of his axe on her, whereas he has just used the back of it on the older woman, crushing her skull.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

18. She crushed a vertebra in a seventh fence fall from Merry Master at Chepstow, and is likely to be out of action until February.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: in a seventh fence fall from Merry Master… at Chepstow.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

14. The skull's been crushed beneath the coffin nails and the ribs are jumbled.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: beneath the coffin nails.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

FRACTURE

1. Because the flow stress of glass is very high at room temperature and because glass is very susceptible to fracture by the spread of cracks, glass, and materials like it, nearly always fracture in the familiar brittle manner and we find it difficult to imagine anything different happening.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: Yes: nearly always / in the familiar brittle manner.
QUALITATIVE ANALYSIS: Entity type: Arg 1: 1; Nominal Aspect: Arg 1: Mass, [-telic], Count, [+telic]; Reference: Arg 1: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

2. This delicate fern is preserved in a very fine-grained sandstone, which fractures rather irregularly.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 1; Argument-adjunct: 0; Periphery: Yes: rather irregularly.
QUALITATIVE ANALYSIS: (no values recorded).
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): No; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 2nd subevent; Type shifting? No.

8. (text=CEG, n=505) The stories about singers fracturing panes of glass may well be true.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: No.
QUALITATIVE ANALYSIS: Entity type: Arg 1: 1, Arg 2: 1; Nominal Aspect: Arg 1: Count, [+telic], Arg 2: Count, [+telic]; Reference: Arg 1: Indefinite, Arg 2: Indefinite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

2. The first casualty of the match was, for a change, a West Indian fast bowler, when Marshall fractured his left thumb trying to stop a shot from Broad; yet even this worked against England, for it simply inspired him to his best Test figures when he bowled in the second innings.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 2; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: trying to stop a shot from Broad.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

9. Then the rock between the boreholes is fractured by high pressure water jets and the heat released, gathered by a circulating stream of water.
QUANTITATIVE ANALYSIS (or Constituent Projection): Syntactic valence: 1; Semantic valence: 2; Argument-adjunct: 0; Periphery: Yes: by high pressure water jets.
QUALITATIVE ANALYSIS: Entity type: Arg 2: 1; Nominal Aspect: Arg 2: Count, [+telic]; Reference: Arg 2: Definite.
SYNTAGMATIC ANALYSIS (or Event Structure): Analysable into subevents: Yes; Causal sequence (cause and consequence): Yes; Temporal sequence (Simultaneous/Non-simultaneous): “<α” (exhaustive ordered part of): No, “○α” (exhaustive overlap part of): No, “< ○α” (exhaustive ordered overlap): Yes; Headedness: 1st subevent; Type shifting? No.

RIP 1. Marie would be mad if she saw that, so I rip a page from one of her mags and scrape it up. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 3 Argument-adjunct 1 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Indefinite Semantic role for Arg 3 Origin SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

16. Another shrouded figure comes up to him and rips the clothing off. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Mass, [-telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

3. Suddenly he reached across and snatched the photograph out of the Woman's hand, ripping it in half and dropping the pieces on the floor. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 3 Argument-adjunct 1 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

3. Hidden in a bus and estimated to contain about half a ton of dynamite, the bomb ripped a wall of the nine-storey DAS building and badly damaged nearby buildings. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Definite Arg 2 Indefinite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

4. Everywhere French flags were ripped in half and trampled on and great gangs of young soldiers joined together to shout and curse the very name of France. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 3 Argument-adjunct 1 Periphery Everywhere

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Indefinite Arg 2 Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

SHATTER 4. He says that when he twists ice out of its plastic container, the ice being extremely cold and dry, he sees a flash of blue-white light as the cubes shatter into pieces. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 1 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Definite Arg 2 Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

1. Although glass surrounds us, if a glass window shatters it may cause horrific and perhaps even lethal injuries. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 1 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Indefinite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

6. The ending of old Harry's innings is represented by a ball shattering his stumps and an umpire's finger raised unequivocably upwards to that great pavilion in the sky. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Indefinite Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

4.

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Indefinite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

9. A nurse, who claims her life has been ruined after her knee-cap was shattered in a fall at a superstore, has lost her fight for compensation. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: in a fall at a superstore

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Definite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

SMASH 9. Smash it with a sledge hammer; you don't need to cut it, just break it. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: with a sledge hammer

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

12. In Hard Times (1909) an unemployed father with a dying child smashes a baker's window, is caught by the crowd, and is delivered to the police, who proceed to call in a physician; QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery Yes: In Hard Times (1909)

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Indefinite Arg 2 Indefinite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

3. Window locks are an essential deterrent to the opportunist, while they will be prepared to break a small area of glass and reach in to open the catch, they will think twice about smashing a whole pane — as it's very noisy and dangerous. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Indefinite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

5. Corden's playing career ended abruptly when he smashed his knee on his debut for Darlington. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery Yes: on his debut for Darlington

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

19. Megaliths were smashed to make gate-posts or road-stone, blown up or pushed aside to clear space for the plough. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: to make gate-posts or road-stone

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Indefinite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

SNAP 7. Whatever the reason, it is recognized that stressed trees then become vulnerable to further damage through drought, wind damage (through weakened root systems), snow damage (the brittle tops of trees snap beneath the weight of snow), frost, insects (e.g. the bark beetle is attracted by chemicals such as terpenes given off by stressed trees) and needle fungal and virus attack. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 1 Argument-adjunct 0 Periphery Yes: beneath the weight of snow

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Definite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

7. A frayed lace snaps under too vigorous a tug, and he curses. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 1 Argument-adjunct 0 Periphery Yes: under too vigorous a tug

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Reference Arg 1 Indefinite Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

3. Steel columns and beams, and concrete walls reinforced with steel rods, make the frames flexible — and even ductile, so that they can deform without snapping in the wind. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 0 Semantic valence 1 Argument-adjunct 0 Periphery Yes: in the wind

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

7. The pinioned hands of the condemned man went suddenly white as the noose and the drop snapped his spinal column. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1, 2 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic], Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Definite, Definite Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

8. I've snapped my phone card in half. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 3 Argument-adjunct 1 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

SPLIT 16. Split the baked potatoes and pour the sauce over. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

6. The specimen occurs in a fine-grained limestone, one which fortunately splits easily around the enclosed fossils. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 1 Argument-adjunct 0 Periphery Yes: fortunately / easily / around the enclosed fossils

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

5. Toby was pouring himself a Campari, as carefully as if he were splitting the atom. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

2. A groan here, a creak there, the crying of tortured metal, the cracking of the internal wood fascias as they buckled and split, and always the slow, deadly sound of gurgling water. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 0 Semantic valence 1 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

6. The whole place is a shambles of falling rock, soft shaly rock, slaty rock; everything has been split apart by erosion. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: by erosion

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

SPLINTER 4. Some of these lorikeets have small bills that are capable only of pinching the skin of human handlers, while other species can easily draw blood and even splinter wood. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: even

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Mass, [-telic] Reference Arg 1 Arg 2 Indefinite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

2. This wood splinters easily. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 1 Argument-adjunct 0 Periphery Yes: easily

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Mass, [-telic] Arg 2 Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

1. The rule cracked, splintering into pieces. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 0 Semantic valence 2 Argument-adjunct 1 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

1. Her massively enlarged liver splintered her diaphragm, making her permanently breathless, and pressed on to her abdominal veins so that her legs became horribly swollen. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Definite Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

2. The edges of the plastic cover had cracked and splintered. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 0 Semantic valence 1 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 Nominal Aspect Arg 1 Arg 2 Reference Arg 1 Arg 2 Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: No Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 2nd subevent Type shifting? No

TEAR 5. He is a famous writer, so I also tear up his manuscripts and books. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery Yes: also

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1,1 Nominal Aspect Arg 1 Arg 2 Count, [+telic], Count, [+telic] Reference Arg 1 Arg 2 Definite, Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

1. She tears the letter. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery No

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

3. The powers that be at the Tate tend to be more interested in `the modern" than in the British tradition, so many fine or interesting British paintings are rarely if ever displayed, and to get through to those that are one has to wade one's way through off-putting modernist rubbish, with the risk of tripping over artistic piles of bricks or tearing your clothes on sharp bits of dustbin sculpture. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 1 Semantic valence 2 Argument-adjunct 0 Periphery Yes: on sharp bits of dustbin sculpture

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Plural, [-telic] Reference Arg 1 Arg 2 Definite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

4. On one occasion when Tolkien tore a ligament playing squash, and was told that he would be confined to his bed for ten weeks, Lewis went to see him — but, as Warnie recorded, he `found Madame [i.e. Tolkien's wife] there, so could not have much conversation with him". QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 2 Argument-adjunct 0 Periphery Yes: playing squash

QUALITATIVE ANALYSIS Entity type Arg 1 Arg 2 1 Nominal Aspect Arg 1 Arg 2 Count, [+telic] Reference Arg 1 Arg 2 Indefinite Semantic role for Arg 3 SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No

3. Beside the Cross one of the men had torn a paper into little bits and scattered them, to a groaning catcall from the crowd. QUANTITATIVE ANALYSIS (or Constituent Projection) Syntactic valence 2 Semantic valence 3 Argument-adjunct 1 Periphery Yes: Beside the Cross

QUALITATIVE ANALYSIS Entity type Arg 1 1 Arg 2 1 Nominal Aspect Arg 1 Count, [+telic] Arg 2 Count, [+telic] Reference Arg 1 Indefinite Arg 2 Indefinite Semantic role for Arg 3 Result SYNTAGMATIC ANALYSIS (or Event Structure) Analysable into subevents Yes Causal sequence Cause and consequence: Yes Temporal sequence Simultaneous/Non-simultaneous “<α” (exhaustive ordered part of): No “○α” (exhaustive overlap part of): No “< ○α” (exhaustive ordered overlap): Yes Headedness 1st subevent Type shifting? No


CONTENTS

Chapter 1. Introduction

1.1 Research aims 1

1.2 Rationale 3

1.3 Working hypotheses 4

1.4 Chapter outline 5

Chapter 2. Literature review

2.0 Introduction 11

2.1 Different approaches to lexical representation 12

2.1.1 Levin and Rappaport Hovav (2005) 13

2.1.2 Alsina (1999) 18

2.1.3 Lieber (2004) 20

2.1.4 Faber and Mairal (1999), Van Valin and Mairal (2001),

Mairal and Ruiz de Mendoza (forthcoming 2007),

Ruiz de Mendoza and Mairal (forthcoming 2007) 23

2.1.5 Pustejovsky's generative lexicon 27

2.1.5.a Introduction: aims of the GL 27

2.1.5.b The generative lexicon: structure 28

2.1.5.c Event structure 29

2.1.6 Role and reference grammar 34

2.1.6.a RRG's system of lexical representation 34

2.1.6.b RRG's event structure in comparison to GL's event structure 40

2.2 Conclusions 43

Chapter 3. Methodology

3.0 Introduction 45

3.1 Structure of the analysis 46

3.1.1 Paradigmatic and syntagmatic analysis 46

3.1.2 Type and token analysis 47

3.2 The corpus: main characteristics and compilation process 52

3.3 Verbs under study 55

3.4 WordNet 57

3.5 Sections of the analysis: word structure, argument structure,

and event structure 65

3.5.1 Word structure 65

3.5.1.a Lexical derivation 65

3.5.1.b Semantic relations 67

3.5.2 Argument structure (qualitative and quantitative analysis) 83

3.5.2.a Semantic and syntactic valence 83

3.5.2.b Argument-adjuncts 85

3.5.2.c Periphery 85

3.5.2.d Entity type 88

3.5.2.e Nominal aspect 91

3.5.2.f Reference 92

3.5.3 Event structure (syntagmatic analysis) 93

3.5.3.a Subevent structure 96

3.5.3.b Causal sequence 98

3.5.3.c Temporal relation 99

3.5.3.d Type coercion 99

3.6 Conclusions 103

Chapter 4. Description and results of the analysis

4.0 Introduction 105

4.1 Word structure 106

4.1.1 Lexical derivation 106

4.1.1.a Distribution of prefixes and suffixes 106

4.1.1.b Changes in grammatical category 110

4.1.1.c The origin of the verbs: Romance vs. Germanic verbs 120

4.1.2 Semantic relations 120

4.1.2.a Antonymy 120

4.1.2.b Synonymy 123

4.1.2.c Hyponymy and hypernymy 131

4.1.2.d Polysemy 134

4.1.3 Summary 149

4.2 The corpus: prototypical, less prototypical and non-prototypical examples 150

4.2.1 Prototypical examples 154

4.2.1.1 Argument structure 154

4.2.1.1.a Syntactic valence 154

4.2.1.1.b Semantic valence 164

4.2.1.1.c Argument-adjuncts 168

4.2.1.1.d Periphery 173

4.2.1.1.e Entity type 185

4.2.1.1.f Nominal aspect 189

4.2.1.1.g Reference 216

4.2.1.2 Event structure 223

4.2.1.2.a Subevent structure 225

4.2.1.2.b Headedness 229

4.2.1.2.c Causal sequence 230

4.2.1.2.d Temporal relation 232

4.2.1.2.e Type coercion 232

4.2.1.2.f Fusion: a special case 232

4.2.1.3 Summary 250

4.2.2 Less prototypical and non-prototypical examples 258

4.2.2.1 Definition 258

4.2.2.2 Differences between less prototypical

and non-prototypical examples 273

4.2.2.2.a Less prototypical examples 273

4.2.2.2.b Non-prototypical examples 277

4.3 Conclusions 282

Chapter 5. Conclusions 285

Chapter 6. Lines of future research

6.0 Introduction 297

6.1 General lines of future research 297

6.2 Specific lines of future research 299

Bibliography 305

Appendix: corpus sample – prototypical examples 329

Contents, summaries and conclusions in Spanish 359

SUMMARIES AND CONCLUSIONS IN SPANISH

CHAPTER 1: INTRODUCTION

In this chapter I introduce the central topic of the thesis, the aims I hope to achieve, the rationale on which my research rests, and the hypotheses I start from.

In this thesis I aim to strengthen the system of lexical representation of Role and Reference Grammar. Role and Reference Grammar, henceforth 'RRG', is an eclectic theory that successfully integrates principles from several theories, such as Rijkhoff's theory of the noun phrase within the Functional Grammar paradigm, the constructional templates of Construction Grammar, Lambrecht's theory of information structure, Pustejovsky's theory of nominal qualia, and Kuno's pragmatic analysis of pronominalization, among others (Van Valin and LaPolla 1997: 640).

Despite its many virtues, RRG has some limitations, above all regarding its system of lexical representation – which is basically restricted to logical structures – and its view of the lexicon – which is not global enough. For these reasons, the main aims of this thesis are (1) to carry out a fine-grained semantic analysis of RRG and (2) to expand RRG's logical structures beyond the meaning of their predicates. To that end, I have drawn on Pustejovsky's Generative Lexicon, which offers a more complete and integrated view of the lexicon, and on Lexical Semantics, in the line of Faber and Mairal, and Levin and Rappaport Hovav, which characteristically relates semantics to syntax through verb classes and alternations. Hence my choice of a verb class – the class of verbs of pure change of state – which notably allows me to reach syntactic generalizations from the behaviour of the verbs as a class. Moreover, Lexical Semantics has recently begun to take an interest not only in verb semantics but also in derivational morphology, with the aim of establishing relations with argument and event structure, which is another of the foundations of this thesis.

The initial hypotheses are of different kinds. First, I put forward a metatheoretical hypothesis: it is possible to strengthen RRG's system of lexical representation by incorporating components from other models that allow its logical structures to be expanded beyond the basic meaning of their predicates as it appears in dictionary entries. Second, I put forward a theoretical hypothesis: all changes of state are durative. And third, I put forward a descriptive hypothesis: the word structure, argument structure and event structure of prototypical change-of-state verbs not only provide a motivated explanation of their grammatical behaviour, but also contribute to a definition of meaning in functional and cognitive terms.
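The logical structures at issue here can be illustrated with the standard RRG decomposition for a pure change-of-state verb such as *shatter*, in the notation of Van Valin and LaPolla (1997). The helpers below are a minimal sketch of that notation; the function names are mine, not the thesis's.

```python
# Illustrative sketch: composing RRG-style logical structures for a pure
# change-of-state verb (notation after Van Valin and LaPolla 1997).

def become(pred: str, arg: str) -> str:
    """State predicate under the BECOME operator: a change of state."""
    return f"BECOME {pred}'({arg})"

def cause(activity: str, result: str) -> str:
    """Causative accomplishment: an activity CAUSEs a change of state."""
    return f"[{activity}] CAUSE [{result}]"

# Intransitive use, e.g. "a glass window shatters":
ls_intransitive = become("shattered", "x")

# Causative use, e.g. "a ball shattered his stumps":
ls_causative = cause("do'(x, Ø)", become("shattered", "y"))

print(ls_intransitive)  # BECOME shattered'(x)
print(ls_causative)     # [do'(x, Ø)] CAUSE [BECOME shattered'(y)]
```

The causative structure embeds the intransitive one, which is what links the two uses of each verb in the corpus sample to a single lexical entry.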

CHAPTER 2: LITERATURE REVIEW

In this chapter I describe the theoretical framework of my research. Since this thesis aims to strengthen one theory by drawing on another, the framework rests on two theories: what we might call the 'source theory', RRG, and the 'target theory', the Generative Lexicon. In describing these two theories I follow a similar pattern: I begin with a brief outline of their basic tenets and then focus on their systems of lexical representation, with special emphasis on event structure. Event structure is central to Pustejovsky's lexicon; indeed, Pustejovsky proposes an independent level of event structure on which all the other levels of representation are defined. Moreover, as Levin and Rappaport Hovav (2005) put it, any theory of lexical semantics must rest on a solid event structure. Finally, I explain the differences between the two theories and the reasons that have led me to choose Pustejovsky's Generative Lexicon as the model to follow in this research.

Apart from these two theories, I review other models of lexical semantics, namely those of Levin and Rappaport Hovav, Alsina, Lieber, and Faber and Mairal. Levin and Rappaport Hovav present three proposals for lexical representation, each focusing on a different cognitive aspect of the event: the localist, the aspectual and the causal approach. After describing them briefly, I examine these authors' own proposal. Their model starts from RRG's system of lexical decomposition, but enriches it by incorporating those aspects of argument realization that cannot be derived from event structure itself, and also through the system of alternations, which accounts for the variation between arguments and adjuncts. I then review the proposals of Alsina (an attempt to simplify Pustejovsky's model), Lieber (a model of lexical representation that seeks to explain the semantics of word formation by means of lexical decomposition and the incorporation of features, operators, variables and functions) and Faber and Mairal (a model based on Martín Mingorance's Functional-Lexematic Model, with a lexicon integrated around three fundamental axes: paradigmatic, syntactic and cognitive).

In short, this chapter covers a broad theoretical span, ranging from the most projectionist theories, such as RRG, to non-projectionist ones, such as Levin and Rappaport Hovav's, by way of others that stand halfway between the two, such as Faber and Mairal's.


CHAPTER 3: METHODOLOGY

In this chapter I explain the methodological basis on which I carry out the analysis of the word, argument and event structure of verbs of pure change of state. With this analysis I aim to offer a solution to the limitations identified in RRG's system of lexical representation. Pustejovsky's lexicon appears to be the model to follow, since it is exhaustive and detailed, and its layered system of representation accounts well for lexical meaning. Moreover, RRG and the Generative Lexicon are, at least a priori, compatible and complementary models. Van Valin and LaPolla are aware of this, since they incorporate the theory of nominal qualia into their 1997 model; but, as I show in this thesis, that is only one of the reasons. In this chapter, then, I identify those components of Pustejovsky's lexicon that are absent from RRG, or simply underdeveloped in it, and from which RRG could benefit. I also select those RRG variables that I consider relevant to lexical representation. The variables from the two theories are analysed jointly in order to test the compatibility of the two models. The variables belong to the three levels of representation of the lexicon, namely word structure, argument structure and event structure. To mention a few, the analysis focuses on questions such as syntactic valence, semantic valence, entity type, nominal aspect and subevent structure.
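As a reminder of the kind of Generative Lexicon machinery referred to here, nominal qualia structure distinguishes four roles; Pustejovsky's (1995) much-cited entry for novel, reproduced below as an illustrative sketch, is the standard example:

```latex
% Qualia structure for "novel" (after Pustejovsky 1995):
\mathrm{FORMAL}:\ \mathbf{book}(x)                 % what kind of entity it is
\mathrm{CONSTITUTIVE}:\ \mathbf{narrative}(x)      % what it is made up of
\mathrm{TELIC}:\ \mathbf{read}(e,\,y,\,x)          % its purpose or function
\mathrm{AGENTIVE}:\ \mathbf{write}(e',\,z,\,x)     % how it comes into being
```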

In carrying out this analysis I am in fact conducting two parallel analyses: on the one hand, a paradigmatic and syntagmatic analysis; on the other, a type and token analysis.

In this chapter I also describe the sources from which I have obtained the information to be analysed. For the analysis of word structure I use dictionaries (Longman Dictionary of the English Language (1984), The New Oxford Dictionary of English (1998), Collins Cobuild English Dictionary for Advanced Learners (2001), Cambridge Advanced Learner's Dictionary (online) and Merriam-Webster Unabridged Dictionary (online)), thesauri (Longman Thesaurus and New Oxford Thesaurus of English (2000)) and the online database WordNet. For the analysis of argument and event structure I rely on a corpus of approximately 1,200 examples containing the verbs under analysis, drawn from the British National Corpus.

Next, I describe the compilation process of the corpus and provide a brief overview of the two main databases, the British National Corpus and WordNet, and of the verb class to be analysed: verbs of pure change of state, also known as break verbs.

At the end of the chapter I explain the overall structure of the analysis and offer a brief theoretical and methodological introduction to each of the variables analysed in this research.

CHAPTER 4: RESULTS OF THE ANALYSIS

In this chapter I present and discuss the results of the analysis of the variables corresponding to the word, argument and event structure of break verbs. The chapter is interesting in two respects: on the one hand, it contains the information on which the conclusions of chapter 5 are built; on the other, that information is attractive in itself, since it raises questions relevant both to the proper development of the thesis and to future research.

I begin by examining the data obtained in the analysis of the word-structure variables. This analysis has two parts. In the first, I describe the derivational behaviour of break verbs in terms of three key questions: (1) the distribution of prefixes and suffixes, (2) changes in the grammatical category of the word, and (3) the origin of the verbs: Romance versus Germanic. In the second part, I deal with the semantic relations in which these verbs participate, namely antonymy, synonymy, hyperonymy and polysemy. The analysis makes plain the close relation between the morphology and the semantics of these verbs.

I then examine the results of the analysis of argument structure and event structure. The discussion of these results is conditioned by the corpus, which had to be restructured to fit the requirements of this research. Briefly, this thesis is based on the analysis of verbs of pure change of state, but the analysis revealed that not all the examples carry that meaning. Consequently, I restructured the corpus, dividing it into three groups according to their level of prototypicality. Examples that encode a pure change of state are grouped together as prototypical examples. Those that encode an entirely different meaning are grouped as non-prototypical examples. And those whose meaning lies halfway between the two previous groups are grouped as less prototypical examples.

Once the corpus was organized, I examined the information obtained from the analysis of the argument structure and event structure of the three classes. Since this thesis sets out to study verbs of pure change of state, I place special emphasis on the discussion of the information corresponding to the group of prototypical examples. The other two groups are discussed briefly at the end, focusing above all on the most interesting variables.

Finally, the variables studied in the analysis of argument structure are the following: syntactic valence, semantic valence, argument-adjuncts, periphery, entity type, nominal aspect and reference. The variables analysed for event structure are these: subevent structure, headedness, causal sequence, temporal relation and type coercion.

CHAPTER 5: CONCLUSIONS

In this chapter I rework the information presented in chapter 4 in order to answer the questions raised in the introduction.

As explained in chapter 1, the main aim of this thesis is to carry out a fine-grained semantic analysis of RRG's system of lexical representation. Having done so, I have found that the three levels of representation analysed are intimately connected, so that the information encoded at one level is fully coherent with the information encoded at the others. In my view, therein lies the success of Pustejovsky's lexicon. Pustejovsky presents an integrated, dynamic model of the lexicon, potentially richer than RRG's, whose parts or levels are in harmony. The fact that the analysis of these levels could be carried out by integrating components of both theories, RRG and the Generative Lexicon, shows that the two are compatible and, what is more, complementary, which is why each can benefit from the other. Specifically, and returning to the aims set out in chapter 1, RRG can benefit from the Generative Lexicon by incorporating into its system of lexical representation those components of the Generative Lexicon that can strengthen it, especially, as the analysis shows, the components of event structure.

Apart from Pustejovsky's Generative Lexicon, I have followed other models of lexical semantics to enrich RRG's system of lexical representation. Lexical semantics has recently begun to take an interest not only in verbal semantics but also in derivational morphology, with a view to establishing relations with argument and event structure, which is another of the key ideas of this thesis. In addition, lexical semantics also offers a dynamic, global view of the lexicon.

In order to delimit the object of study it was necessary to contextualize the change-of-state verbs, examining all their senses and their combinations with adverbs and prepositions. I then isolated those that involve physical discontinuity (that is, pure change of state), discarding those that involve temporal discontinuity. This I did by reinterpreting a well-established notion in linguistics, that of force dynamics. At the same time, and always within a functional-cognitive paradigm, I distinguished between prototypical and non-prototypical examples.

Having organized the corpus, I proceeded to the analysis and discussion of the word structure, argument structure and event structure of the change-of-state verbs.

For the analysis of word structure, in chapter 3 I identified four semantic relations following Lehrer (2000): antonymy, synonymy, hyponymy/hyperonymy and polysemy. The analysis of antonymy reveals that break verbs hardly have any antonyms, which is consistent with their lack of reversative affixes, as shown in the analysis of derivation. The results of the analysis of hyperonymy also agree with those of the derivational analysis of break verbs, since the hyperonym of these verbs is the verb that takes the most affixes. Moreover, the hyperonym of this verb class is also the verb that appears in the largest number of complementation patterns. The analyses of the morphology, semantic relations and argument structure of break verbs therefore clearly converge, which partly corroborates the third of the initial hypotheses of my research. Furthermore, as I show in chapter 4, the relations of synonymy, polysemy and hyperonymy are intimately related, and all of them contribute to the definition of the verbs' meaning.

The analysis of the argument structure of the prototypical change-of-state examples has brought out the cross-cutting character of the notion of telicity: telicity affects morphology, argument structure and event structure (through Aktionsart). For instance, Pustejovsky develops the notion of event complexity around the idea of telicity: telic events (change-of-state events among them) are systematically complex, since they involve a transition to a new resultant state. Following on from this idea, I also argue that all events have duration. Telicity being the most cross-cutting of these notions, I have put forward a proposal to develop it with respect to the noun phrase, where it remains understudied. The analysis of nominal telicity has revealed that the second argument of causative structures, and the first argument of their inchoative alternants, must be telic. This idea is consistent with the definition of the verbs and, more specifically, with the selectional restrictions on the arguments of break verbs.
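The notion of event complexity invoked here can be given a schematic shape (a sketch following Pustejovsky's 1995 conventions; the example sentence is mine, not the thesis's): a telic transition decomposes into a process subevent strictly ordered before a resultant state, with the head (marked *) on the result.

```latex
% Transition for "The window broke":
e_0 = [\,e_1 <_{\alpha} e_2^{\ast}\,]
e_1:\ \text{process of breaking}
e_2:\ \mathbf{broken}(\mathit{window})\quad \text{(resultant state, head)}
```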

Another relevant aspect of argument structure is the representation of external arguments and adjuncts. As I indicate in chapters 1, 3 and 4, RRG's logical structures should include those adjuncts that have semantic impact, and here I have followed in the footsteps of Faber and Mairal (1999) and Mairal and Faber (2002). Their templates include all the verb's arguments but, in addition, they take the process of semantic decomposition further, incorporating other semantic functions such as instrument or manner. Taking this model as a starting point, I have analysed the adjuncts in the corpus, organized them hierarchically following Dik (1997) and Van Valin (2005), and established a first cut-off point in that hierarchy in order to determine, at the very least, which adjuncts cannot be incorporated into the logical structures.

Finally, starting from the premise that all telic events are durative, I have been able to carry out an analysis of event structure that is internally coherent and compatible with the analyses of argument and word structure. It is coherent in the sense that all the variables proposed could be analysed within this mixed RRG-Generative Lexicon paradigm; and it is compatible because generalizations could be drawn across the three levels of analysis. The fact that all events are durative, moreover, has certain consequences. In particular, if all events are durative, the feature [± punctual] should disappear from the definition of the logical structures, since the two Aktionsart types are subsumed under a single type, as in Pustejovsky (1995a).
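The consequence for the RRG Aktionsart inventory can be shown schematically (a sketch based on Van Valin's (2005) standard decompositions; the example sentences are illustrative):

```latex
% accomplishment [-punctual]:  BECOME\ \mathbf{pred}'(x)   e.g. "The ice melted"
% achievement    [+punctual]:  INGR\ \mathbf{pred}'(x)     e.g. "The bomb exploded"
% If no event has zero duration, [\pm\,punctual] is redundant and the two
% types collapse into a single durative transition:
\mathrm{BECOME}\ \mathbf{pred}'(x)
```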

In sum, in this thesis I have carried out a fine-grained semantic analysis of RRG's system of lexical representation. I have shown the relation between the different levels of representation of the lexicon. I have delved into the logical structures, proposing the incorporation of components from other theories, above all with respect to event structure and to the incorporation of adjuncts with semantic impact. In short, I have put forward a set of guidelines for enriching RRG's system of representation.

Finally, the initial hypotheses have been verified by the analysis:

1. The first hypothesis was metatheoretical and, as I have just explained, the analysis has shown that incorporating new components into RRG's lexical representations, such as subevent structure and temporal sequence, or including certain adjuncts, would make it possible to account for expressions or constructions that the logical structures cannot explain on their own, as is the case of resultatives.

2. The second hypothesis was theoretical, and stated that all changes of state are durative. The analysis has corroborated it: every event has an initial state and a final state, and between the two there is always a process, however minimal its duration. In other words, there are no punctual events. Events last for a longer or a shorter time, but no event has zero duration.

3. The third hypothesis was descriptive, and stated that the word structure, argument structure and event structure of prototypical change-of-state verbs not only provide a motivated explanation of their grammatical behaviour, but also contribute to a motivated definition of meaning in functional and cognitive terms. The analysis has shown that the three levels are interrelated and that all three contribute to the definition of the meaning of break verbs. Thus, the analysis of semantic relations such as synonymy, polysemy and hyperonymy provided the criteria for structuring the corpus into classes according to their level of prototypicality; the analysis of semantic and syntactic valence, of argument-adjuncts and of adjuncts has contributed to the definition of those meanings that go beyond the meaning of the predicates; and the analysis of subevent structure and temporal sequence has confirmed the duration of events, another fundamental feature of the definition of changes of state.

In sum, this thesis has contributed to enriching the system of lexical representation for RRG thanks to the contribution of other theories, especially Pustejovsky's Generative Lexicon and Levin and Rappaport Hovav's lexical semantics; it has helped to develop the syntagmatic and paradigmatic dimensions of the lexicon; and, more importantly, this work has been carried out according to the canons of methodological rigour of McDonough and McDonough (1997), that is, systematically and on the basis of empirical data.

CHAPTER 6: LINES OF FUTURE RESEARCH

In this final chapter I propose ten lines of future research. The first two derive from the main aim of the thesis and indicate the direction this research should take beyond the limits of the present work. The first suggests that the systems of lexical representation of functional models should evolve in the direction marked by Ruiz de Mendoza and Mairal's Lexical Constructional Model (Lexicom). The second proposes that the relation between morphology and lexical semantics should evolve in the direction marked by Cortés (2006) and Martín Arista.

The other eight lines of research are more specific and have arisen from the analysis.

The first three of these concern the highly polysemous character of change-of-state verbs. During the analysis, interesting cases emerged of verbs that share non-prototypical meanings quite systematically. The questions I raise are (1) whether the verbs that share such a non-prototypical meaning display any other similar behavioural pattern that sets them apart from the rest of the verbs in their class and, if so, whether they should be classified in a separate class of their own; (2) whether there is any connection between that non-prototypical meaning and other non-prototypical meanings detected among verbs of the same verb class; and (3) whether there is any connection between those non-prototypical meanings and the prototypical meaning of change-of-state verbs. In answer to these questions I suggest two possible solutions: (1) there may be an explanation in the historical development of these verbs, in which case the question would be left to specialists in diachrony; and (2) there might be a metaphorical or metonymic relation between the non-prototypical and the prototypical meanings of these verbs. Both solutions are only tentative, so I leave this question open for future research.

The sixth concerns the qualia. In chapter 4 I take the New Oxford Dictionary of English (NODE) as the base dictionary for determining the core meanings of each of the verbs that make up the change-of-state class, and I assume that its criteria are valid. It would be interesting, however, to contrast all the senses that WordNet records for each verb with the core meanings that the NODE proposes for it, and to see whether those core meanings accommodate all the WordNet senses. This proposal would carry even more weight if the qualia of each verb were developed and the NODE's core meanings were contrasted with them.

The seventh concerns telicity, and the characterization of certain nouns as count or mass. Chapter 4 revealed that some of them are recorded in the dictionary as both count and mass, others only as count, and others only as mass. It would be interesting to find out whether the nouns that appear as both count and mass were originally of a single type and underwent a process of conversion in their evolution, thus yielding both interpretations, or whether they were count and mass from the start.

The eighth concerns temporal sequence. In chapter 4 I argue for the need to add a new type of temporal sequence to those proposed by Pustejovsky, and I suggest that this distinction could be refined still further.

The ninth deals with the syntactic dimension of this work. This thesis is mostly concerned with semantic questions, so I propose extending its scope to give it a more syntactic dimension based, for example, on the study of adjuncts. The study of adjuncts could be the ideal syntactic complement to this work, especially considering the relevant role they play in lexical representation.

And finally, the tenth deals with the proposal of construction merging put forward in chapter 4. This theory would account for those expressions that fall outside the scope of Goldberg's constructions because their meaning derives from the sum of the meanings of their components. My idea is that Pustejovsky's model can explain those cases thanks to the qualia, but the representation of those merge structures needs to be explored further. In addition, one more question remains open: why the same syntactic expression can sometimes be interpreted as a merge and at other times as a case of co-composition.
