Semantics


David Beaver and Joey Frazee

1 Introduction

Semantics is concerned with meaning: what meanings are, how meanings are assigned to words, phrases and sentences of natural and formal languages, and how meanings can be combined and used for inference and reasoning. The goal of this chapter is to introduce computational linguists and computer scientists to the tools, methods, and concepts required to work on natural language semantics.

Semantics, while often paired with pragmatics, is nominally distinct. On a traditional view, semantics concerns itself with the compositional buildup of meaning from the lexicon to the sentence level, whereas pragmatics concerns the way in which contextual factors and speaker intentions affect meaning and inference (see, e.g., Potts, to appear in this volume). Although the semantics-pragmatics distinction is historically important, and continues to be widely adopted, in practice it is not clearcut. Work in semantics inevitably involves pragmatics and vice versa. Furthermore, it is not a distinction which is of much relevance for applications in computational linguistics.

This chapter is organized as follows. In sections 2.1 and 2.2 we introduce foundational concepts and discuss ways of representing the meaning of sentences, and of combining the meanings of smaller expressions to produce sentential meanings. In section 3 we discuss the representation of meaning for larger units, especially with respect to anaphora, and introduce two formal theories that go beyond sentence meaning: Discourse Representation Theory and Dynamic Semantics. Then, in section 4 we discuss temporality, introducing event semantics and describing standard approaches to the semantics of time.
Section 5 concerns the tension between the surface-oriented statistical methods characteristic of mainstream computational linguistics and the more abstract methods typical of formal semantics, and includes discussion of phenomena for which it seems particularly important to utilize insights from formal semantics. Throughout the chapter we assume familiarity with basic notions from logic (propositional logic and first-order predicate logic), computer science and, to a lesser extent, computational linguistics (e.g., algorithms, parsing, syntactic categories and tree representations).

2 Sentential semantics

2.1 Representation & logical form

Aristotle and the medieval post-Aristotelian tradition apart, work on formal semantic representation only began in earnest with Boole's (1854) semantics for propositional logic and Frege's (1879) development of first-order predicate logic (FOPL). Frege's work provided a precise and intuitive way of characterizing the meaning of sentences. The influence of this development has been vast, as seen in the fact that for many years, introductory courses on logic and semantics have commonly included exercises in translating sentences of natural language into statements of propositional or first-order logic. Thus, for example, (1a) might be represented as (1b):

(1) a. Fischer played a Russian.
    b. ∃x (russian(x) ∧ played(Fischer, x))

Subtle typographical distinctions we have made between (1a) and (1b) relate to a crucial theme in the development of semantics, made completely explicit in the work of Tarski (1944). This is the distinction between object language and metalanguage. For example, the sentence in (1a) and all the words that make it up are expressions of the object language, the language from which we are translating. On the other hand, the translations are given in a semantic metalanguage. Thus far, the semantic metalanguage has been FOPL.
We use a sans serif font to mark expressions of the metalanguage, so that, for example, played is our way of representing in the semantic metalanguage the meaning of the English word played.

Representations like that in (1b), so-called Logical Forms (LFs), immediately provide a way of approaching a range of computational tasks. For example, consider a database application, say a database of information about chess tournaments. While it is far from obvious how to query a database with an arbitrary sentence of English, the problem of checking whether a sentence of FOPL is satisfied by a record in that database is straightforward: (1) translate the English sentence into a sentence of FOPL, and (2) verify the sentence's translation. Thus we can check whether (1a) holds in the database by breaking it down into these sub-problems.

We can also consider whether one sentence of natural language follows from another using a similar procedure. The computational process for deciding whether (2a) follows from (1a) could involve translating both sentences into FOPL, (1b, 2b), and then verifying the inference with an automated theorem prover.

(2) a. Fischer played someone.
    b. ∃x (played(Fischer, x))

Here let us note immediately that while FOPL offers us a strategy for dealing with such problems, it is far from a complete solution. In the case of the above examples, we have reduced one problem (natural language inference) to two problems (translation into FOPL, and inference), neither of which itself is trivial. The simplicity of the sentences in examples (1) and (2) gives off the appearance that translation into FOPL is easy, or uninteresting, but sentences often have multiple LFs as well as LFs that are not necessarily intuitive. This makes the particulars of the choice of an LF representation critical, as noted by Russell (1905) a century ago.
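The two sub-problems just described can be sketched in a few lines of Python. This is a toy illustration, not part of the chapter: the database contents and entity names are invented, and the brute-force search over small finite models can only look for counterexamples to an entailment, not prove it in general (FOPL entailment is undecidable).

```python
from itertools import chain, combinations

# Toy chess-tournament "database" (entities and facts are invented).
db_domain = {"fischer", "spassky", "petrosian", "tal"}
db_russian = {"spassky", "petrosian", "tal"}
db_played = {("fischer", "spassky"), ("fischer", "petrosian")}

def sentence_1b(domain, russian, played):
    """(1b): ∃x (russian(x) ∧ played(Fischer, x))"""
    return any(x in russian and ("fischer", x) in played for x in domain)

def sentence_2b(domain, russian, played):
    """(2b): ∃x (played(Fischer, x))"""
    return any(("fischer", x) in played for x in domain)

# Sub-problem 2: verify the translation of (1a) against the database.
print(sentence_1b(db_domain, db_russian, db_played))  # True

# Inference: search for a small countermodel where (1b) holds but (2b) fails.
def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

small = ["fischer", "e1"]
pairs = [(a, b) for a in small for b in small]
countermodel_found = any(
    sentence_1b(small, set(r), set(p)) and not sentence_2b(small, set(r), set(p))
    for r in powerset(small)
    for p in powerset(pairs)
)
print(countermodel_found)  # False: consistent with (1a) entailing (2a)
```

A real system would replace the exhaustive model search with a theorem prover, as the text notes; the sketch only shows how verification against a model reduces to evaluating quantifiers as loops over the domain.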
As regards the unintuitiveness of LFs, Russell argued that a definite description like The American in (3a) does not simply denote or refer to an individual (e.g., Bobby Fischer); rather, it is a quantificational element. So for Russell (3a) ought to be represented as in (3b).

(3) a. The American won.
    b. ∃x (american(x) ∧ ∀y (american(y) → x = y) ∧ won(x))

Instead of The American being Fischer (in the right context), it's a quantifier that imposes the existence of an individual that (1) is, uniquely, American and (2) has the property denoted by whatever predicate it combines with (i.e., won in the above example). The question of whether this LF is a good representation for definite descriptions is a subject of debate, with many (from Strawson 1950 on) arguing that the full meaning of sentences like (3a) cannot be captured in classical FOPL at all.

Whatever position is taken about the meaning of definite descriptions, it is hard to escape Russell's conclusion that surface forms bear a complex relationship to their LFs. This is especially evident when we look at sentences with multiple LFs. For example, a sentence like (4a) may exhibit a scope ambiguity whereby it may either have an LF like (4b), where the universal quantifier takes wide scope, or an LF like (4c), where it takes narrow scope.

(4) a. Every Russian played an American.
    b. ∀x (russian(x) → ∃y (american(y) ∧ play(x, y)))
    c. ∃y (american(y) ∧ ∀x (russian(x) → play(x, y)))

The wide scope LF corresponds to the reading where every Russian played at least one American, but possibly different ones. The narrow scope LF corresponds to the reading where every Russian played the same American. The LFs show that this distinction can be represented in FOPL, but how do we get from the sentence (4a) to its LFs (4b) and (4c)?
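That (4b) and (4c) are truth-conditionally distinct can be seen by evaluating both in a small model where they come apart. The following sketch is illustrative only; the players and the play relation are invented:

```python
# Invented extensions under which the two LFs of (4a) diverge.
russians = {"spassky", "petrosian"}
americans = {"fischer", "reshevsky"}
play = {("spassky", "fischer"), ("petrosian", "reshevsky")}
entities = russians | americans

# (4b): ∀x (russian(x) → ∃y (american(y) ∧ play(x, y)))
wide_universal = all(
    any(y in americans and (x, y) in play for y in entities)
    for x in entities if x in russians
)

# (4c): ∃y (american(y) ∧ ∀x (russian(x) → play(x, y)))
wide_existential = any(
    y in americans and all((x, y) in play for x in entities if x in russians)
    for y in entities
)

# Each Russian played some American, but no single American played them all.
print(wide_universal, wide_existential)  # True False
```

In this model the wide-scope-universal reading (4b) is true while the narrow-scope reading (4c) is false, which is exactly the "possibly different Americans" situation described above.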
2.2 Compositional semantics

Above we have sketched a common way of representing the meanings of sentences, via translation to FOPL, but we have provided no general method for deriving these translations. This raises two questions: (1) how should we represent the meanings of smaller expressions (e.g., verbs like played and noun phrases like Every Russian), and (2) how should the meanings of smaller expressions combine to yield sentence representations?

In the examples so far, some parts of the original sentences have direct counterparts in LF, others do not. For example, while Russian in (4a) has russian as its translation, the expression Every Russian has no counterpart in either of the sentence's LFs (4b, 4c). It's tempting to say that the translation of Every Russian is "∀x (russian(x) →", but this is not a well-formed expression of FOPL and so it has no meaning in the usual semantics of FOPL. More troubling, though, is that FOPL on its own does not provide a method for deriving the meaning of an expression like "∀x (russian(x) →" from the meanings of its parts, Every and Russian.

A method for assigning meanings to all syntactic units that make up a sentence and deriving the meanings of that sentence, algorithmically, from those parts, however, is available in Richard Montague's (1973; 1970a; 1970b) seminal papers. Montague contended that there is no substantive difference between natural languages and formal languages (e.g., languages of philosophical logic such as FOPL, and programming languages) and that both can be analyzed in the same way (Montague 1970a,b; see also Halvorsen and Ladusaw 1979). As unlikely as this seems (indeed, Frege, Russell and others were drawn to FOPL thinking that natural language was too messy to be analyzable in precise terms), Montague (1970a,b) showed that significant fragments of English are directly interpretable in a precise and systematic fashion.
Translating an object language into a formal language for interpretation (see, e.g., Montague 1973) is usually more straightforward, however, and we will follow this strategy throughout the paper. Indirect interpretation is primarily achieved using the Lambda Calculus (Church, 1932). Object language terms are mapped to typed lambda terms which are subsequently used to assign meanings. For example, Russian is paired with the function russian, which maps Boris Spassky to 1 or true and Bobby Fischer and Deep Blue to 0 or false. The type of this function is ⟨e, t⟩, the type of properties, which maps entities e to truth-values t. Quantifiers like every are likewise analyzed as functions, but the corresponding function every is a mapping not from entities but from a pair of properties like russian and won to truth-values.
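This functional treatment can be mimicked directly with Python lambdas. This is an illustrative sketch, not the chapter's formalism: the domain and the extensions of russian and won are invented, and Python functions stand in for typed lambda terms.

```python
# Entities and extensions are invented for illustration.
entities = {"spassky", "fischer", "deep_blue"}

russian = lambda x: x == "spassky"   # type ⟨e,t⟩: entities to truth values
won     = lambda x: x == "fischer"   # type ⟨e,t⟩

# every takes a property p and returns a function from properties to truth
# values, i.e. type ⟨⟨e,t⟩, ⟨⟨e,t⟩, t⟩⟩; some is built the same way.
every = lambda p: lambda q: all(q(x) for x in entities if p(x))
some  = lambda p: lambda q: any(p(x) and q(x) for x in entities)

print(every(russian)(won))    # False: Spassky is Russian but did not win
print(some(russian)(russian)) # True: there is a Russian
```

The design point is that every(russian) is itself a meaning, a function awaiting a predicate, which is precisely what the ill-formed fragment "∀x (russian(x) →" failed to be in FOPL.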
Recommended publications
  • Semantics and Pragmatics
    Semantics and Pragmatics
    Christopher Gauker
    Semantics deals with the literal meaning of sentences. Pragmatics deals with what speakers mean by their utterances of sentences over and above what those sentences literally mean. However, it is not always clear where to draw the line. Natural languages contain many expressions that may be thought of both as contributing to literal meaning and as devices by which speakers signal what they mean. After characterizing the aims of semantics and pragmatics, this chapter will set out the issues concerning such devices and will propose a way of dividing the labor between semantics and pragmatics. Disagreements about the purview of semantics and pragmatics often concern expressions of which we may say that their interpretation somehow depends on the context in which they are used. Thus:
    • The interpretation of a sentence containing a demonstrative, as in "This is nice", depends on a contextually-determined reference of the demonstrative.
    • The interpretation of a quantified sentence, such as "Everyone is present", depends on a contextually-determined domain of discourse.
    • The interpretation of a sentence containing a gradable adjective, as in "Dumbo is small", depends on a contextually-determined standard (Kennedy 2007).
    • The interpretation of a sentence containing an incomplete predicate, as in "Tipper is ready", may depend on a contextually-determined completion.
    • The interpretation of a sentence containing a discourse particle such as "too", as in "Dennis is having dinner in London tonight too", may depend on a contextually-determined set of background propositions (Gauker 2008a).
    • The interpretation of a sentence employing metonymy, such as "The ham sandwich wants his check", depends on a contextually-determined relation of reference-shifting.
  • Reference and Sense
    REFERENCE AND SENSE
    • two distinct ways of talking about the meaning of words
    • talking of SENSE = dealing with relationships inside language
    • talking of REFERENCE = dealing with relationships between language and the world
    • by means of reference a speaker indicates which things (including persons) are being talked about, e.g. My son is in the beech tree ("my son" identifies a person, "the beech tree" identifies a thing)
    • REFERENCE is the relationship between the English expression 'this page' and the thing you can hold between your finger and thumb (part of the world)
    • your left ear is the REFERENT of the phrase 'your left ear', while REFERENCE is the relationship between parts of a language and things outside the language
    • the same expression can be used to refer to different things: there are as many potential referents for the phrase 'your left ear' as there are people in the world with left ears; many expressions can have VARIABLE REFERENCE
    • there are cases of expressions which in normal everyday conversation never refer to different things, i.e. which in most everyday situations that one can envisage have CONSTANT REFERENCE; however, there is very little constancy of reference in language, and almost all of the fixing of reference comes from the context in which expressions are used
    • two different expressions can have the same referent; the classical example: 'the Morning Star' and 'the Evening Star' to refer to the planet Venus
    • the SENSE of an expression is its place in a system of semantic relationships with other expressions in the language
  • The Meaning of Language
    01:615:201 Introduction to Linguistic Theory. Adam Szczegielniak. Copyright in part: Cengage Learning.
    The Meaning of Language
    When you know a language you know:
    • when a word is meaningful or meaningless, when a word has two meanings, when two words have the same meaning, and what words refer to (in the real world or imagination)
    • when a sentence is meaningful or meaningless, when a sentence has two meanings, when two sentences have the same meaning, and whether a sentence is true or false (the truth conditions of the sentence)
    • Semantics is the study of the meaning of morphemes, words, phrases, and sentences
      – Lexical semantics: the meaning of words and the relationships among words
      – Phrasal or sentential semantics: the meaning of syntactic units larger than one word
    Truth
    • Compositional semantics: formulating semantic rules that build the meaning of a sentence based on the meaning of the words and how they combine
      – Also known as truth-conditional semantics because the speaker's knowledge of truth conditions is central
    • If you know the meaning of a sentence, you can determine under what conditions it is true or false
      – You don't need to know whether or not a sentence is true or false to understand it, so knowing the meaning of a sentence means knowing under what circumstances it would be true or false
    • Most sentences are true or false depending on the situation
      – But some sentences are always true (tautologies)
      – And some are always false (contradictions)
    Entailment and Related Notions
    • Entailment: one sentence entails another if whenever the first sentence is true the second one must be true also. Jack swims beautifully.
  • Two-Dimensionalism: Semantics and Metasemantics
    Two-Dimensionalism: Semantics and Metasemantics
    YEUNG, Wang-Chun
    A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Master of Philosophy in Philosophy, The Chinese University of Hong Kong, January 2010
    Abstract of thesis entitled: Two-Dimensionalism: Semantics and Metasemantics. Submitted by YEUNG, Wang Chun for the degree of Master of Philosophy at the Chinese University of Hong Kong in July 2009.
    This thesis investigates problems surrounding the lively debate about how Kripke's examples of necessary a posteriori truths and contingent a priori truths should be explained. Two-dimensionalism is a recent development that offers a non-reductive analysis of such truths. The semantic interpretation of two-dimensionalism, proposed by Jackson and Chalmers, has certain 'descriptive' elements, which can be articulated in terms of the following three claims: (a) names and natural kind terms are reference-fixed by some associated properties, (b) these properties are known a priori by every competent speaker, and (c) these properties reflect the cognitive significance of sentences containing such terms. In this thesis, I argue against two arguments directed at such 'descriptive' elements, namely, The Argument from Ignorance and Error ('AIE'), and The Argument from Variability ('AV'). I thereby suggest that reference-fixing properties belong to the semantics of names and natural kind terms, and not to their metasemantics. Chapter 1 is a survey of some central notions related to the debate between descriptivism and direct reference theory, e.g. sense, reference, and rigidity. Chapter 2 outlines the two-dimensional approach and introduces the varieties of interpretations of the two-dimensional framework.
  • Scope Ambiguity in Syntax and Semantics
    Scope Ambiguity in Syntax and Semantics
    Ling324. Reading: Meaning and Grammar, pp. 142-157
    Is Scope Ambiguity Semantically Real?
    (1) Everyone loves someone.
    a. Wide scope reading of universal quantifier: ∀x[person(x) → ∃y[person(y) ∧ love(x,y)]]
    b. Wide scope reading of existential quantifier: ∃y[person(y) ∧ ∀x[person(x) → love(x,y)]]
    Could one semantic representation handle both the readings?
    • ∃y∀x reading entails ∀x∃y reading. ∀x∃y describes a more general situation where everyone has someone who s/he loves, and ∃y∀x describes a more specific situation where everyone loves the same person.
    • Then, couldn't we say that Everyone loves someone is associated with the semantic representation that describes the more general reading, and the more specific reading obtains under an appropriate context? That is, couldn't we say that Everyone loves someone is not semantically ambiguous, and its only semantic representation is the following? ∀x[person(x) → ∃y[person(y) ∧ love(x,y)]]
    • After all, this semantic representation reflects the syntax: In syntax, everyone c-commands someone. In semantics, everyone scopes over someone.
    Arguments for Real Scope Ambiguity
    • The semantic representation with the scope of quantifiers reflecting the order in which quantifiers occur in a sentence does not always represent the most general reading.
    (2) a. There was a name tag near every plate.
    b. A guard is standing in front of every gate.
    c. A student guide took every visitor to two museums.
    • Could we stipulate that when interpreting a sentence, no matter which order the quantifiers occur, always assign wide scope to every and narrow scope to some, two, etc.?
    Arguments for Real Scope Ambiguity (cont.)
    • But in a negative sentence, ¬∀x∃y reading entails ¬∃y∀x reading.
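The entailment claim in the reading notes above (∃y∀x entails ∀x∃y, but not conversely) can be checked mechanically. The following sketch is an invented illustration, not part of the notes: it enumerates every love relation over a two-person domain and tests both directions.

```python
from itertools import product

# Enumerate every love relation over a two-person domain and test both
# directions of the entailment between ∃y∀x love(x,y) and ∀x∃y love(x,y).
people = ["a", "b"]
pairs = [(x, y) for x in people for y in people]

entailment_holds = True   # does ∃y∀x entail ∀x∃y over these models?
converse_fails = False    # is there a witness for ∀x∃y without ∃y∀x?
for bits in product([False, True], repeat=len(pairs)):
    love = {p for p, b in zip(pairs, bits) if b}
    exists_forall = any(all((x, y) in love for x in people) for y in people)
    forall_exists = all(any((x, y) in love for y in people) for x in people)
    if exists_forall and not forall_exists:
        entailment_holds = False
    if forall_exists and not exists_forall:
        converse_fails = True

print(entailment_holds, converse_fails)  # True True
```

A relation where each person loves only themselves gives the converse counterexample: everyone loves someone, yet no single person is loved by all.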
  • The Etienne Gilson Series 21
    The Etienne Gilson Series 21
    Remapping Scholasticism
    by MARCIA L. COLISH
    3 March 2000, Pontifical Institute of Mediaeval Studies
    This lecture and its publication was made possible through the generous bequest of the late Charles J. Sullivan (1914-1999). Note: the author may be contacted at: Department of History, Oberlin College, Oberlin OH USA 44074. ISSN 0-708-319X. ISBN 0-88844-721-3. © 2000 by Pontifical Institute of Mediaeval Studies, 59 Queen's Park Crescent East, Toronto, Ontario, Canada M5S 2C4. Printed in Canada.
    Once upon a time there were two competing story-lines for medieval intellectual history, each writing a major role for scholasticism into its script. Although these story-lines were created independently and reflected different concerns, they sometimes overlapped and gave each other aid and comfort. Both exerted considerable influence on the way historians of medieval speculative thought conceptualized their subject in the first half of the twentieth century. Both versions of the map drawn by these two sets of cartographers illustrated what Wallace K. Ferguson later described as "the revolt of the medievalists." One was confined largely to the academy and appealed to a wide variety of medievalists, while the other had a somewhat narrower draw and reflected political and confessional, as well as academic, concerns. The first was the anti-Burckhardtian effort to push Renaissance humanism, understood as combining a knowledge and love of the classics with "the discovery of the world and of man," back into the Middle Ages. The second was inspired by the neo-Thomist revival launched by Pope Leo XIII, and was inhabited almost exclusively by Roman Catholic scholars.
  • Frege's Theory of Sense
    Frege's theory of sense
    Jeff Speaks, August 25, 2011
    1. Three arguments that there must be more to meaning than reference
       1.1. Frege's puzzle about identity sentences
       1.2. Understanding and knowledge of reference
       1.3. Opaque contexts
    2. The theoretical roles of senses
       2.1. Frege's criterion for distinctness of sense
       2.2. Sense determines reference, but not the reverse
       2.3. Indirect reference
       2.4. Sense, force, and the theory of speech acts
    3. What are senses?
    We have now seen how a theory of reference — a theory that assigns to each expression of the language a reference, which is what it contributes to determining the truth or falsity of sentences in which it occurs — might look for a fragment of English. (The fragment of English includes proper names, n-place predicates, and quantifiers.) We now turn to Frege's reasons for thinking that a theory of reference must be supplemented with a theory of sense.
    1. Three arguments that there must be more to meaning than reference
    1.1. Frege's puzzle about identity sentences
    As Frege says at the outset of "On sense and reference," identity "gives rise to challenging questions which are not altogether easy to answer." The puzzle raised by identity sentences is that, even though "the morning star" and "the evening star" have the same reference — the planet Venus — the sentences
    [1] The morning star is the morning star.
    [2] The morning star is the evening star.
    seem quite different. They seem, as Frege says, to differ in "cognitive value." [1] is trivial and a priori; whereas [2] seems a posteriori, and could express a valuable extension of knowledge — it, unlike [1], seems to express an astronomical discovery which took substantial empirical work to make.
  • Challenging the Principle of Compositionality in Interpreting Natural Language Texts Françoise Gayral, Daniel Kayser, François Lévy
    Challenging the Principle of Compositionality in Interpreting Natural Language Texts
    Françoise Gayral, Daniel Kayser, François Lévy
    To cite this version: Françoise Gayral, Daniel Kayser, François Lévy. Challenging the Principle of Compositionality in Interpreting Natural Language Texts. In E. Machery, M. Werning, G. Schurz (eds.), The Compositionality of Meaning and Content, vol. II, Ontos Verlag, pp. 83-105, 2005. hal-00084948, submitted on 11 Jul 2006.
    HAL is a multi-disciplinary open-access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    François Lévy, University Paris Nord, Av. J. B. Clément, 93430 Villetaneuse, France, fl@lipn.univ-paris13.fr
    1 Introduction
    The main assumption of many contemporary semantic theories, from Montague grammars to the most recent papers in journals of linguistic semantics, is and remains the principle of compositionality. This principle is most commonly stated as: The meaning of a complex expression is determined by its structure and the meanings of its constituents. It is also adopted by more computation-oriented traditions (Artificial Intelligence or Natural Language Processing, henceforth NLP).
  • Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars
    Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars
    Luke S. Zettlemoyer and Michael Collins, MIT CSAIL
    [email protected], [email protected]
    Abstract: This paper addresses the problem of mapping natural language sentences to lambda-calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.
    The training data in our approach consists of a set of sentences paired with logical forms, as in this example. This is a particularly challenging problem because the derivation from each sentence to its logical form is not annotated in the training data. For example, there is no direct evidence that the word states in the sentence corresponds to the predicate state in the logical form; in general there is no direct evidence of the syntactic analysis of the sentence. Annotating entire derivations underlying the mapping from sentences to their semantics is highly labor-intensive. Rather than relying on full syntactic annotations, we have deliberately formulated the problem in a way that requires a relatively minimal level of annotation. Our algorithm automatically induces a grammar that maps
  • Sense and Reference
    Sense and Reference
    The story we've been telling has it that a name denotes an individual and an open sentence represents a property, and the sentence obtained by substituting a name for free occurrences of a variable in an open sentence is true if and only if the individual designated by the name has the property. That can't be all there is to the story, as the following example shows: The Waverley novels were published anonymously, and their authorship was a matter of lively speculation among the British ruling classes. The following sentence was true:
    The king wondered whether Scott was the author of Waverley.
    According to the account we've been developing, the name "Scott" denotes an individual, the open sentence "The king wondered whether x was the author of Waverley" designates a property, and the sentence is true because the individual named has the property. Now the individual named by "Scott" is the individual named by "the author of Waverley." Hence, the individual named by "the author of Waverley" has the property designated by the open sentence "The king wondered whether x was the author of Waverley." Therefore, according to the story we've been developing,
    The king wondered whether the author of Waverley was the author of Waverley
    should be true. But that's absurd. The king never had the slightest doubt that the author of Waverley was the author of Waverley.
    Again, "Phosphorus" is an old name for the Morning Star, and "Hesperus" an old name for the Evening Star. One of the earliest astronomical discoveries was that Phosphorus and Hesperus were, in fact, the same.
  • Passmore, J. (1967). Logical Positivism. in P. Edwards (Ed.). the Encyclopedia of Philosophy (Vol. 5, 52- 57). New York: Macmillan
    Passmore, J. (1967). Logical Positivism. In P. Edwards (Ed.), The Encyclopedia of Philosophy (Vol. 5, pp. 52-57). New York: Macmillan.
    LOGICAL POSITIVISM is the name given in 1931 by A. E. Blumberg and Herbert Feigl to a set of philosophical ideas put forward by the Vienna circle. Synonymous expressions include "consistent empiricism," "logical empiricism," "scientific empiricism," and "logical neo-positivism." The name logical positivism is often, but misleadingly, used more broadly to include the "analytical" or "ordinary language" philosophies developed at Cambridge and Oxford.
    HISTORICAL BACKGROUND
    The logical positivists thought of themselves as continuing a nineteenth-century Viennese empirical tradition, closely linked with British empiricism and culminating in the antimetaphysical, scientifically oriented teaching of Ernst Mach. In 1907 the mathematician Hans Hahn, the economist Otto Neurath, and the physicist Philipp Frank, all of whom were later to be prominent members of the Vienna circle, came together as an informal group to discuss the philosophy of science. They hoped to give an account of science which would do justice (as, they thought, Mach did not) to the central importance of mathematics, logic, and theoretical physics, without abandoning Mach's general doctrine that science is, fundamentally, the description of experience. As a solution to their problems, they looked to the "new positivism" of Poincaré; in attempting to reconcile Mach and Poincaré, they anticipated the main themes of logical positivism. In 1922, at the instigation of members of the "Vienna group," Moritz Schlick was invited to Vienna as professor, like Mach before him (1895-1901), in the philosophy of the inductive sciences. Schlick had been trained as a scientist under Max Planck and had won a name for himself as an interpreter of Einstein's theory of relativity.
  • 11 Semantics
    11 Semantics
    I once referred to a character in one of my cartoons as a "dork" (a popular insult when I was growing up), but my editor called me up and said that "dork" couldn't be used because it meant "penis." I couldn't believe it. I ran to my New Dictionary of American Slang and, sure enough, he was right. All those years of saying or being called a "dork" and I had never really known what it meant. What a nerd. (Gary Larson, 1989)
    Semantics is the study of the meaning of words, phrases and sentences.
    …used to describe a product, with 'good for you', but we would not want to include this association within the basic conceptual meaning of the expression. Poets and advertisers are, of course, very interested in using terms in such a way that their associative meanings are evoked, and some linguists do investigate this aspect of language use. However, in this chapter we shall be more interested in characterizing what constitutes the conceptual meaning of terms.
    Semantic features
    So, how would a semantic approach help us to understand something about the nature of language? One way it might be helpful would be as a means of accounting for the 'oddness' we experience when we read English sentences such as the following:
    The hamburger ate the man
    My cat studied linguistics
    A table was listening to some music
    Notice that the oddness of these sentences does not derive from their syntactic structure. According to some basic syntactic rules for forming English sentences (such as those presented in Chapter 10), we have well-structured sentences.
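The feature-based account of 'oddness' sketched in the excerpt above can be illustrated with a small program. The feature inventory and the verbs' subject requirements below are invented for illustration, not taken from the text:

```python
# Invented semantic-feature inventory for a few nouns.
features = {
    "hamburger": {"animate": False},
    "man":       {"animate": True},
    "cat":       {"animate": True},
    "table":     {"animate": False},
}

# Features each verb requires of its subject (assumed, for illustration).
subject_requirements = {
    "ate":      {"animate": True},
    "studied":  {"animate": True},
    "listened": {"animate": True},
}

def odd(subject, verb):
    """True if the subject violates the verb's selectional restrictions."""
    required = subject_requirements[verb]
    return any(features[subject].get(f) != v for f, v in required.items())

print(odd("hamburger", "ate"))  # True: "The hamburger ate the man" is odd
print(odd("man", "ate"))        # False
```

The sentences are syntactically well-formed either way; the check only flags a mismatch between a feature like [+animate] that the verb demands and the feature the subject noun carries.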