LING83600: Context-Free Grammars


Kyle Gorman

1 Introduction

Context-free grammars (or CFGs) are a formalization of what linguists call "phrase structure grammars". The formalism is originally due to Chomsky (1956), though it has been independently discovered several times. The insight underlying CFGs is the notion of constituency. Most linguists believe that human languages cannot be even weakly described by CFGs (e.g., Shieber 1985). And in formal work, context-free grammars have long been superseded by "transformational" approaches, which introduce movement operations (in some camps), or by "generalized phrase structure" approaches, which add more complex phrase-building operations (in other camps). However, unlike more expressive formalisms, CFGs have relatively efficient cubic-time parsing algorithms. Nearly all programming languages are described by (and parsed using) a CFG,¹ and CFG grammars of human languages are widely used as a model of syntactic structure in natural language processing and understanding tasks.

¹ Python programs are described by a CFG (https://docs.python.org/3/reference/grammar.html). When you execute a Python script, this grammar specification is used to parse your script.

Bibliographic note. This handout covers the formal definition of context-free grammars; the Cocke-Younger-Kasami (CYK) parsing algorithm will be covered in a separate assignment. The definitions here are loosely based on chapter 11 of Jurafsky & Martin, who in turn draw from Hopcroft and Ullman 1979.

2 Definitions

2.1 Context-free grammars

A context-free grammar G is a four-tuple (N, Σ, R, S) such that:

• N is a set of non-terminal symbols, corresponding to phrase markers in a syntactic tree.
• Σ is a set of terminal symbols, corresponding to words (i.e., X⁰s) in a syntactic tree.
• R is a set of production rules. These rules are of the form A → β, where A ∈ N and β ∈ (Σ ∪ N)*. Thus A is a phrase label and β is a sequence of zero or more terminals and/or non-terminals.
• S ∈ N is a designated start symbol (i.e., the highest projection in a sentence).

For simplicity, we assume N and Σ are disjoint. As is standard, we use Roman uppercase characters to represent non-terminals and Greek lowercase characters to represent terminals.

2.2 Derivation

Direct derivation describes the relationship between the input to a single grammar rule in R and the resulting output. If there is a rule A → β ∈ R, and α, γ are strings in (Σ ∪ N)*, then

    αAγ ⇒ αβγ

i.e., αAγ directly derives αβγ. Derivation is a generalization of direct derivation which allows us to iteratively apply rules to strings. Given strings α1, α2, …, αm ∈ (Σ ∪ N)* such that α1 ⇒ α2, α2 ⇒ α3, …, αm−1 ⇒ αm, then

    α1 ⇒* αm

i.e., α1 derives αm (and α1 also derives α2, α3, etc.).

2.3 Context-free language

The language L(G) generated by some grammar G is the (possibly infinite) set of strings of terminal symbols that can be derived by G starting from the start symbol S.

Exercise. Enumerate the language generated by a CFG with the following rules:

    S → NP VP
    VP → V NP
    VP → V
    NP → DT NN
    NP → Kyle
    DT → a | the
    NN → cat | dog
    V → barks | bites

Solution. The language generated is regular, and can be described by the following regular expression:

    [Kyle | (a | the) (dog | cat)] (bites | barks) [Kyle | (a | the) (dog | cat)]?

Note that many strings in the language are ungrammatical in English; e.g., *Kyle barks the dog.
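The exercise can also be checked mechanically. Below is a minimal Python sketch (not part of the original handout) that encodes the toy grammar as a four-tuple and enumerates L(G) by repeatedly expanding the leftmost non-terminal; the encoding and the name enumerate_language are illustrative choices, not a standard API.

```python
# A sketch encoding the exercise grammar G = (N, Σ, R, S) as plain
# Python data: non-terminals and terminals are both strings, and each
# rule pairs a left-hand side with a tuple of right-hand-side symbols.
N = {"S", "NP", "VP", "DT", "NN", "V"}
SIGMA = {"Kyle", "a", "the", "cat", "dog", "barks", "bites"}
R = [
    ("S", ("NP", "VP")),
    ("VP", ("V", "NP")),
    ("VP", ("V",)),
    ("NP", ("DT", "NN")),
    ("NP", ("Kyle",)),
    ("DT", ("a",)), ("DT", ("the",)),
    ("NN", ("cat",)), ("NN", ("dog",)),
    ("V", ("barks",)), ("V", ("bites",)),
]
S = "S"

# Sanity check: every right-hand-side symbol is a terminal or non-terminal.
assert all(x in N or x in SIGMA for _, rhs in R for x in rhs)

def enumerate_language(rules, nonterminals, start):
    """Enumerates L(G) by always expanding the leftmost non-terminal.

    This terminates only because the grammar has no recursion."""
    language = set()
    agenda = [(start,)]
    while agenda:
        string = agenda.pop()
        # Index of the leftmost non-terminal, or None if all-terminal.
        i = next((j for j, x in enumerate(string) if x in nonterminals), None)
        if i is None:
            language.add(" ".join(string))  # a string of L(G)
            continue
        for lhs, rhs in rules:
            if lhs == string[i]:
                agenda.append(string[:i] + rhs + string[i + 1:])
    return language

print(len(enumerate_language(R, N, S)))  # 60
```

Because the grammar is non-recursive, the agenda empties after finitely many expansions, yielding the same 60 strings described by the regular expression above (5 subject NPs × 2 verbs × 6 choices for the optional object NP).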
3 Non-equivalence of context-free and regular languages

You have previously seen regular grammars, which generate a class of languages known as the regular languages. The definition of the regular languages is repeated below:

• The empty language ∅ is a regular language.
• The empty string language {ε} is a regular language.
• For every symbol x ∈ Σ, the singleton language {x} is a regular language.
• If X is a regular language then X* (its closure) is a regular language.
• If X and Y are regular languages then:
  – X ∪ Y (their union) is a regular language, and
  – XY (their concatenation) is a regular language.
• Other languages are not regular languages.

It is well known (e.g., Chomsky 1959) that the regular languages are a proper subset of the context-free languages. One intuitive explanation for this fact is that all "rules" in a regular grammar must be left-linear or right-linear. That is, they are all of the form A → B Σ* (a left-linear rule) or A → Σ* B (a right-linear rule). But CFGs allow a third type of rule, a center-embedding rule of the form A → β A γ. Imagine this rule is part of the following CFG:

    S → A
    A → β A γ
    A → ε

Intuitively, this grammar derives the language βⁿγⁿ (where n is some non-negative integer). However, regular languages can only approximate this language (e.g., with β*γ*).

4 Chomsky normal form

Syntacticians have long had a preference for binary-branching syntactic structures, meaning that each non-terminal node has at most two children. As it happens, this assumption greatly simplifies parsing algorithms as well. One way this is enforced is by converting grammars or treebanks to a format known as Chomsky normal form (CNF; Chomsky 1963). In Chomsky normal form, the elements of R, the set of production rules, are constrained to have one of two forms:

• A → B C, where A, B, C ∈ N.
• A → β, where A ∈ N and β ∈ Σ.

In other words, the right-hand side of every rule consists of either two non-terminals or one terminal. For every CFG there exists a weakly equivalent CNF grammar, meaning a CNF grammar which generates the same language (though it does not necessarily assign exactly the same phrase structure). For instance, given the rule A → B C D, we can convert it to two CNF rules, namely A → B X and X → C D.

Exercise. Given the CFG rule M → X λ ρ Y, where X, Y are non-terminals and λ, ρ are terminals, convert the rule into a series of CNF rules.

Solution. For example, we can introduce LP, RP as non-terminals immediately dominating λ, ρ, and XP, YP as the non-terminals headed by X and Y. We then obtain:

• M → XP YP
• XP → X LP
• YP → RP Y
• LP → λ
• RP → ρ

Note that this is not a complete grammar; we have not introduced a start symbol and there are no expansions for X or Y.
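The A → B C D example generalizes: any rule with more than two right-hand-side symbols can be binarized by peeling off one symbol at a time and introducing fresh non-terminals. Below is a minimal sketch of just this binarization step, assuming the same rule encoding as in the earlier sketch; names like binarize and X0 are illustrative, and a full CNF conversion would also handle terminals in long rules, ε-rules, and unit rules (see the J&M algorithm cited under Further reading).

```python
def binarize(rules):
    """Splits rules with more than two right-hand-side symbols into
    binary rules by introducing fresh non-terminals, so that
    A → B C D becomes A → B X0 and X0 → C D.

    Only the binarization step of CNF conversion is shown; handling
    of terminals in long rules, ε-rules, and unit rules is omitted."""
    result = []
    fresh = 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            new_nt = f"X{fresh}"  # assumes no existing X0, X1, ... names
            fresh += 1
            result.append((lhs, (rhs[0], new_nt)))
            lhs, rhs = new_nt, rhs[1:]
        result.append((lhs, rhs))
    return result

print(binarize([("A", ("B", "C", "D"))]))
# [('A', ('B', 'X0')), ('X0', ('C', 'D'))]
```

The right-branching split shown here matches the handout's example; a left-branching split would be equally valid and yields a weakly equivalent grammar.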
5 Further reading

• J&M (§13.2.1) give a general-purpose algorithm for converting context-free grammars and trees to Chomsky normal form.
• J&M (§14) and Eisenstein (§10) describe probabilistic context-free grammars (PCFGs), in which each rule is associated with a conditional probability (conditioned on the left-hand side).
• Klein and Manning (2003) describe knowledge-based Markovization techniques for unlexicalized PCFG parsing.
• Petrov et al. (2006) develop data-driven Markovization techniques for unlexicalized PCFG parsing.
• Bikel (2004) describes the Collins (1999) parser, which uses a novel form of lexicalized PCFG.

References

Bikel, Daniel M. 2004. Intricacies of Collins' parsing model. Computational Linguistics 30:479–511.
Chomsky, Noam. 1956. Three models for the description of language. IRE Transactions on Information Theory 2(3):113–124.
Chomsky, Noam. 1959. On certain formal properties of grammars. Information and Control 2:137–167.
Chomsky, Noam. 1963. Formal properties of grammars. In Handbook of Mathematical Psychology, ed. R. Duncan Luce, Robert R. Bush, and Eugene Galanter, 323–418. John Wiley & Sons.
Collins, Michael. 1999. Head-driven statistical models for natural language processing. Doctoral dissertation, University of Pennsylvania.
Hopcroft, John E., and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.
Klein, Dan, and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 423–430.
Petrov, Slav, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, 433–440.
Shieber, Stuart M. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy 8:333–343.