Sympathy and Phonological Opacity* John J. McCarthy
Total pages: 16
File type: PDF; size: 1020 KB
Recommended publications
Paul Smolensky
Vita: PAUL SMOLENSKY. May 5, 1955. Citizenship: USA.
Department of Cognitive Science, 239A Krieger Hall, Johns Hopkins University, Baltimore, MD 21218-2685; (410) 516-5331; [email protected]; cogsci.jhu.edu/directory/paul-smolensky/. 11824 Mays Chapel Road, Timonium, MD 21093-1821; (667) 229-9509.
DEGREES: Ph.D. in mathematical physics, Indiana University, 1981. M.S. in physics, Indiana University, 1977. A.B. summa cum laude in physics, Harvard University, 1976.
PROFESSIONAL POSITIONS: Partner Researcher, Microsoft Research Artificial Intelligence, Redmond, WA, Dec. 2016–present. Krieger-Eisenhower Professor of Cognitive Science, Johns Hopkins University, 2006–present. Full Professor, Department of Cognitive Science, Johns Hopkins University, 1994–2006. Chair, Department of Cognitive Science, Johns Hopkins University, Jan. 1997–June 1998 (Acting), July 1998–June 2000. Department of Computer Science, University of Colorado at Boulder: Full Professor, 1994–95 (on leave, 1994–95); Associate Professor, 1990–94; Assistant Professor, 1985–90. Assistant Research Cognitive Scientist (Assistant Professor – Research), Institute for Cognitive Science, University of California at San Diego, 1982–85. Visiting Scholar, Program in Cognitive Science, University of California at San Diego, 1981–82. Adjunct Professor, Department of Linguistics, University of Maryland at College Park, 1994–2010. Assistant Director, Center for Language and Speech Processing, Johns Hopkins University, 1995–2008. Director, NSF IGERT Training Program, Unifying the Science of Language, 2006–2015. Director, NSF IGERT Training Program in the Cognitive Science of Language, 1999–2006. International Chair, Inria Paris (National Institute for Research in Computer Science and Automation), 2017–2021. Visiting Scientist, Inserm-CEA Cognitive Neuroimaging Unit, NeuroSpin Center, Paris, France, 2016.
University of Groningen Finding the Right Words Bíró, Tamás Sándor
University of Groningen. Finding the right words. Bíró, Tamás Sándor.
Document version: publisher's PDF, also known as version of record. Publication date: 2006. Available in the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal.
Citation for published version (APA): Bíró, T. S. (2006). Finding the right words: implementing optimality theory with simulated annealing. s.n.
Bibliography (excerpt): Arto Anttila. Morphologically conditioned phonological alternations. Natural Language and Linguistic Theory, 20:1–42, 2002. Also: ROA-425.
Asymmetries in Long-Distance Dependencies: A View from Gradient Harmonic Grammar
Asymmetries in Long-Distance Dependencies: A View from Gradient Harmonic Grammar. Hyunjung Lee & Gereon Müller (Universität Leipzig). Workshop on Long-Distance Dependencies, HU Berlin, October 4-5, 2018.
1. Introduction. Claim: Gradient Harmonic Grammar (Smolensky & Goldrick (2016)) offers a new perspective on how to derive three different types of asymmetries as they can be observed with long-distance dependencies in the world's languages: asymmetries between movement types; asymmetries between types of moved items; asymmetries between types of local domain. Background assumptions: (i) Harmonic Grammar, (ii) Gradient Representations, (iii) Harmonic Serialism.
1.1. Harmonic Grammar. Harmonic Grammar (Smolensky & Legendre (2006), Pater (2016)): A version of optimality theory ...
... constituent class membership to a degree, and presupposes that instead of standard category symbols like [X], there are weighted category symbols like [αX] (where α ranges over the real numbers in [0,1]). Rules, filters, and other syntactic building blocks are given upper and lower threshold values of α between which they operate. Note: This way, the concept of varying strength of syntactic categories (see Chomsky (2015) for a recent reappraisal) can be formally implemented in the grammar. Observation: So far, most of the work on GHG has been in phonology (e.g., Zimmermann (2017), Faust & Smolensky (2017), Kushnir (2018)); but cf. Smolensky (2017), Müller (2017b), Lee (2018) for syntactic applications.
1.3. Harmonic Serialism. Note: Harmonic serialism is a strictly derivational version of optimality theory. (3) Harmonic serialism (McCarthy (2008), Heck & Müller (2013)): a. Given some input Ii, the candidate set CSi = {Oi1, Oi2, ... Oin} is generated by applying at most one operation to Ii. b. ...
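Since the excerpt cuts off just as Harmonic Grammar is being introduced, here is a minimal sketch of the core idea it names: well-formedness as a weighted sum of constraint violations, with the highest-harmony candidate winning. The constraint names, weights, and violation counts are invented for illustration and are not from this handout.

```python
# Minimal Harmonic Grammar sketch: a candidate's harmony is the negated
# weighted sum of its constraint violations; the candidate with the
# highest harmony wins. All names, weights, and counts are hypothetical.

def harmony(violations, weights):
    """Negated weighted sum of violations (higher is better)."""
    return -sum(weights[c] * v for c, v in violations.items())

def winner(candidates, weights):
    """Return the candidate name with the highest harmony."""
    return max(candidates, key=lambda name: harmony(candidates[name], weights))

weights = {"MAX": 3.0, "DEP": 2.0, "*COMPLEX": 1.5}
candidates = {
    "[plak]":  {"MAX": 0, "DEP": 0, "*COMPLEX": 1},   # keep the cluster
    "[palak]": {"MAX": 0, "DEP": 1, "*COMPLEX": 0},   # epenthesize
    "[pak]":   {"MAX": 1, "DEP": 0, "*COMPLEX": 0},   # delete
}

for name, viols in candidates.items():
    print(name, harmony(viols, weights))
print("winner:", winner(candidates, weights))
```

With these made-up weights, the cluster-preserving candidate incurs the smallest weighted penalty and wins; changing the weights can flip the outcome, which is the sense in which weighting (rather than strict ranking) drives the typology.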
Opacity and Cyclicity
Opacity and Cyclicity. Paul Kiparsky, Stanford University.
Phonological opacity and paradigmatic effects ("synchronic analogy") have long been of interest in relation to change, naturalness, and the phonology/morphology interface. Their investigation has now acquired a new urgency, because they call into question OT's postulate that constraints are evaluated in parallel. Conceptually, parallelism is one of the basic and most interesting tenets of OT, and so there are good methodological reasons to try hard to save it in the face of such recalcitrant data. The price to be paid for it is the introduction of otherwise unneeded powerful new types of Faithfulness constraints, such as Output/Output (O/O) constraints, Paradigm Uniformity constraints, and Sympathy constraints, which have turned out to compromise the OT program very severely.
The alternative to this approach is to abandon full parallelism in favor of stratified constraint systems. This has the compensating advantage of maintaining a restrictive and well-defined constraint inventory, as originally envisaged in OT. More importantly, it achieves some genuine explanations by relating the stratification motivated by opacity and cyclicity to the intrinsic morphological and prosodic constituency of words and phrases, as characterized by the Stem, Word, and Postlexical levels of Lexical Phonology and Morphology (Booij 1996; 1997; Orgun 1996; Bermudez-Otero 1999). I shall refer to this approach as LPM-OT, and outline how it offers a superior account of the benchmark data that Kager 1999 discusses in Ch. 6 of his book. LPM-OT's goal is to reduce cyclicity to I/O faithfulness, and opacity to inter-level constraint masking.
Generalized Alignment John J. McCarthy
University of Massachusetts Amherst, ScholarWorks@UMass Amherst, Linguistics Department Faculty Publication Series, January 1993.
Recommended citation: McCarthy, John J. and Prince, Alan, "Generalized alignment" (1993). Yearbook of Morphology 12. Retrieved from https://scholarworks.umass.edu/linguist_faculty_pubs/12
Generalized Alignment*. John J. McCarthy (University of Massachusetts, Amherst, [email protected]) and Alan S. Prince (Rutgers University).
§1. Introduction. Overt or covert reference to the edges of constituents is a commonplace throughout phonology and morphology. Some examples include:
• In English, Garawa, Indonesian and a number of other languages, the normal right-to-left alternation of stress is interrupted word-initially:
(1) Initial Secondary Stress in English: (Tàta)ma(góuchee), *Ta(tàma)(góuchee); (Lùxi)pa(lílla), *Lu(xìpa)(lílla)
As the foot-brackets ( ) indicate, the favored outcome is one in which ...
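As a toy illustration of the edge-reference idea (my own sketch, not the paper's formalism; the constraint choice is an assumption), the snippet below checks whether a parse has a foot flush with the left edge of the word, which is what separates the attested parses in (1) from the starred ones.

```python
# Toy check of a left-edge alignment requirement, ALIGN(PrWd, L, Ft, L):
# some foot must begin at the left edge of the prosodic word. Parses use
# '(' and ')' as foot brackets, as in example (1). The constraint name and
# its application to these forms are illustrative assumptions.

def aligned_left(parse: str) -> bool:
    """True if the prosodic word begins with a foot."""
    return parse.startswith("(")

for parse in ["(Tàta)ma(góuchee)", "*Ta(tàma)(góuchee)"]:
    form = parse.lstrip("*")
    verdict = "satisfies" if aligned_left(form) else "violates"
    print(f"{parse}: {verdict} ALIGN(PrWd, L, Ft, L)")
```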
Transparency in Language a Typological Study
Transparency in language: A typological study.
Published by LOT, Trans 10, 3512 JK Utrecht, The Netherlands; phone: +31 30 253 6111; e-mail: [email protected]; http://www.lotschool.nl.
Cover illustration © 2011: Sanne Leufkens, image from the performance 'Celebration'. ISBN: 978-94-6093-162-8. NUR 616. Copyright © 2015: Sterre Leufkens. All rights reserved.
Transparency in language: A typological study. Academic dissertation for the degree of doctor at the University of Amsterdam, on the authority of the Rector Magnificus, Prof. Dr. D.C. van den Boom, to be defended in public before a committee appointed by the Doctorate Board, in the Agnietenkapel on Friday 23 January 2015 at 10:00, by Sterre Cécile Leufkens, born in Delft.
Doctoral committee. Supervisor: Prof. dr. P.C. Hengeveld. Co-supervisor: Dr. N.S.H. Smith. Other members: Prof. dr. E.O. Aboh, Dr. J. Audring, Prof. dr. Ö. Dahl, Prof. dr. M.E. Keizer, Prof. dr. F.P. Weerman. Faculty of Humanities (Faculteit der Geesteswetenschappen).
Acknowledgments. When I speak about my PhD project, it appears to cover a time-span of four years, in which I performed a number of actions that resulted in this book. In fact, the limits of the project are not so clear. It started when I first heard about linguistics, and it will end when we all stop thinking about transparency, which hopefully will not be the case any time soon. Moreover, even though I might have spent most time and effort to 'complete' this project, it is definitely not just my work. Many people have contributed directly or indirectly, by thinking about transparency, or thinking about me.
Observations on the Phonetic Realization of Opaque Schwa in Southern French
http://dx.doi.org/10.17959/sppm.2015.21.3.457
Observations on the phonetic realization of opaque schwa in Southern French*
Julien Eychenne (Hankuk University of Foreign Studies)
Eychenne, Julien. 2015. Observations on the phonetic realization of opaque schwa in Southern French. Studies in Phonetics, Phonology and Morphology 21.3. 457-494. This paper discusses a little-known case of opacity found in some southern varieties of French, where the vowel /ə/, which is footed in the dependent syllable of a trochee, is usually realized as [ø], like the full vowel /Œ/. This case of opacity is particularly noteworthy because the opaque generalization is suprasegmental, not segmental. I show that, depending on how the phenomenon is analyzed derivationally, it simultaneously displays symptoms of counterbleeding and counterfeeding opacity. A phonetic analysis of data from one representative speaker is carried out, and it is shown that the neutralization between /ə/ and /Œ/ is complete in this idiolect. Implications for the structure of lexical representations and for models of phonology are discussed in light of these results.
Keywords: schwa, Southern French, opacity
1. Introduction. Opacity is one of the most fundamental issues in generative phonological theory. The traditional view of opacity, due to Kiparsky (1971, 1973) and framed within the derivational framework of The Sound Pattern of English (henceforth SPE, Chomsky and Halle 1968), goes as follows:
(1) Opacity (Kiparsky 1973: 79): A phonological rule P of the form A → B / C__D is opaque if there are ...
(* An early version of this work was presented at the Linguistics Colloquium of the English Linguistics Department, Graduate School at Hankuk University of Foreign Studies.)
Harmonic Grammar with Linear Programming
To appear in Phonology 27(1):1–41. February 18, 2010.
Harmonic grammar with linear programming: From linear systems to linguistic typology*
Christopher Potts (Stanford University), Joe Pater (UMass Amherst), Karen Jesney (UMass Amherst), Rajesh Bhatt (UMass Amherst), Michael Becker (Harvard University)
Abstract: Harmonic Grammar (HG) is a model of linguistic constraint interaction in which well-formedness is calculated in terms of the sum of weighted constraint violations. We show how linear programming algorithms can be used to determine whether there is a weighting for a set of constraints that fits a set of linguistic data. The associated software package OT-Help provides a practical tool for studying large and complex linguistic systems in the HG framework and comparing the results with those of OT. We first describe the translation from harmonic grammars to systems solvable by linear programming algorithms. We then develop an HG analysis of ATR harmony in Lango that is, we argue, superior to the existing OT and rule-based treatments. We further highlight the usefulness of OT-Help, and the analytic power of HG, with a set of studies of the predictions HG makes for phonological typology.
Keywords: Harmonic Grammar, Optimality Theory, linear programming, typology, Lango, ATR harmony, positional markedness, positional faithfulness
1 Introduction. We examine a model of grammar that is identical to the standard version of Optimality Theory (OT: Prince & Smolensky 1993/2004), except that the optimal ...
* Our thanks to Ash Asudeh, Tim Beechey, Maitine Bergonioux, Paul Boersma, John Colby, Kathryn Flack, Edward Flemming, Bob Frank, John Goldsmith, Maria Gouskova, Bruce Hayes, René Kager, Shigeto Kawahara, John Kingston, John McCarthy, Andrew McKenzie, Ramgopal Mettu, Alan Prince, Kathryn Pruitt, Jason Riggle, Jim Smith, and Paul Smolensky, and other participants in conferences and courses where this material was presented.
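As a rough illustration of the translation the abstract describes, here is a sketch of the familiar winner-loser margin formulation solved with an off-the-shelf LP solver: each intended winner must beat its competitor by at least a unit margin under nonnegative weights. The constraint names, violation counts, and margin are invented, and this is a sketch of the general idea, not the OT-Help implementation or the paper's Lango analysis.

```python
# Sketch: find nonnegative constraint weights that make each intended winner
# beat its competitor by at least a unit margin, via linear programming.
from scipy.optimize import linprog

constraints = ["Markedness", "Faith-IO", "Faith-Pos"]   # hypothetical names

# Each comparison: (winner's violations, loser's violations), per constraint.
comparisons = [
    ([0, 1, 0], [1, 0, 0]),
    ([0, 1, 1], [2, 0, 0]),
]

# Requirement per pair: sum_k w_k * (loser_k - winner_k) >= 1.
# linprog expects A_ub @ w <= b_ub, so negate both sides.
A_ub = [[-(l - w) for w, l in zip(win, lose)] for win, lose in comparisons]
b_ub = [-1.0] * len(comparisons)

# Objective: keep total weight small (any feasible weighting would do).
result = linprog(c=[1.0] * len(constraints), A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None)] * len(constraints))

if result.success:
    for name, weight in zip(constraints, result.x):
        print(f"{name}: {weight:.2f}")
else:
    print("No weighting fits these comparisons.")
```

If the solver reports infeasibility, no weighting of these constraints generates the intended winners, which is exactly the diagnostic the abstract attributes to the LP translation.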
Ranking and Necessity
Ranking and Necessity. Part I: The Fusional Reduction Algorithm. 12-16-2005. Adrian Brasoveanu and Alan Prince, Department of Linguistics, Rutgers Center for Cognitive Science, Rutgers University, New Brunswick.
ABSTRACT. Understanding a linguistic theory within OT requires an exact characterization of the ranking conditions necessitated by data. We describe (Part I) and justify (Part II) an algorithm which calculates the necessary and sufficient ranking conditions inherent in any collection of candidates and presents them in a maximally concise and informative way. The algorithm, stemming from the original proposal of Brasoveanu 2003, develops in the setting of the fusional ERC theory of Prince 2002.
ACKNOWLEDGMENTS. The authors, whose names are arranged alphabetically, would like to thank Jane Grimshaw, Naz Merchant, Paul Smolensky, and Bruce Tesar for useful discussion; and, for helpful comments, audiences at the Rutgers Optimality Research Group (April 2004, Nov. 2005), HUMDRUM (May 2004), and the LSA 2005 Summer Institute. For support during aspects of the investigation, we thank the John Simon Guggenheim Memorial Foundation, the National Science Foundation (Grant No. BCS-0083101), and the National Institutes of Health (NRSA Training Grant No. 1-T32-MH-19975-05). The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or of the National Institutes of Health. The authors also claim responsibility for any errors, omissions, misprisions, or infelicities that may have crept into the text.
Contents (Part I): 1. Beyond the Sufficient; 2. The Consequences of Comparison; 2.1 The Elementary Ranking Condition; 2.2 Informativeness; 2.3 ERCs and Consequences; 2.4 Remark on the Logical Background; 3. ...
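For readers unfamiliar with the ERCs the abstract refers to, here is a small sketch (invented constraint names, violation counts, and rankings; not the Fusional Reduction Algorithm itself) of how a winner-loser comparison yields an elementary ranking condition and when a total ranking satisfies it.

```python
# Sketch of an ERC (elementary ranking condition): compare a desired winner
# and a losing competitor constraint by constraint. All data are hypothetical.

def erc(winner, loser):
    """W if the constraint prefers the winner, L if it prefers the loser,
    e if it is indifferent."""
    return ["W" if l > w else "L" if w > l else "e" for w, l in zip(winner, loser)]

def satisfied(vector, ranking):
    """A total ranking (list of constraint indices, highest-ranked first)
    satisfies an ERC if some W-constraint outranks every L-constraint."""
    ws = [i for i, v in enumerate(vector) if v == "W"]
    ls = [i for i, v in enumerate(vector) if v == "L"]
    if not ls:
        return True
    rank = {c: r for r, c in enumerate(ranking)}
    return any(all(rank[w] < rank[l] for l in ls) for w in ws)

vector = erc(winner=[0, 1, 0], loser=[1, 0, 0])
print(vector)                          # ['W', 'L', 'e']: C1 must outrank C2
print(satisfied(vector, [0, 1, 2]))    # True  (C1 >> C2 >> C3)
print(satisfied(vector, [1, 0, 2]))    # False (C2 >> C1 >> C3)
```

The algorithm described in the paper then fuses and reduces whole sets of such ERC vectors to their necessary and sufficient ranking conditions; the sketch above only shows the individual-ERC building block.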
Modeling the Acquisition of Phonological Interactions: Biases and Generalization
Modeling the Acquisition of Phonological Interactions: Biases and Generalization. Brandon Prickett and Gaja Jarosz, University of Massachusetts Amherst.
1 Introduction. Phonological processes in a language have the potential to interact with one another in numerous ways (Kiparsky 1968, 1971). Two relatively common interaction types are bleeding and feeding interactions, with (1) showing hypothetical examples of both (for a similar hypothetical example, see Baković 2011).
(1) Example of Bleeding and Feeding Interactions
                                                    Bleeding    Feeding
    Underlying Representation (UR)                  /esi/       /ise/
    [-Low] → [αHigh] / [αHigh]C_ (Vowel Harmony)    ese         isi
    [s] → [ʃ] / _[+High] (Palatalization)           -           iʃi
    Surface Representation (SR)                     [ese]       [iʃi]
In the bleeding interaction above, the underlying form /esi/ undergoes harmony, since the /e/ and /i/ have different values for the feature [High]. Since the harmony is progressive, the /i/ assimilates to the /e/ and can no longer trigger the palatalization process (which occurs after harmony), resulting in a surface form of [ese]. The feeding interaction has the opposite effect, with an underlying /e/ becoming a high vowel due to harmony and then triggering the palatalization process to create the SR [iʃi]. The crucial difference between the two interaction types in this example is whether the UR undergoes lowering or raising due to the harmony process.
Two other interactions can be produced using these same URs by applying the two rules in the opposite order. These are called Counterbleeding and Counterfeeding interactions and are shown in (2).
(2) Example of Counterbleeding and Counterfeeding Interactions
                       Counterbleeding   Counterfeeding
    UR                 /esi/             /ise/
    Palatalization     eʃi               -
    Vowel Harmony      eʃe               isi
    SR                 [eʃe]             [isi]
In the counterbleeding interaction, both palatalization and harmony are able to change the underlying representation, but the motivation for palatalizing (i.e. ...
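To make the ordering effects concrete, here is a toy sketch (my own illustration, not the learning model the paper describes) that applies the two hypothetical rules in both orders; only the two forms /esi/ and /ise/ from examples (1)-(2) are modeled.

```python
# Toy derivations for the hypothetical rules in (1)-(2): progressive vowel
# height harmony and palatalization of /s/ before a high vowel.

def vowel_harmony(form: str) -> str:
    """[-Low] -> [αHigh] / [αHigh]C_ : the second vowel copies the height of
    the first (only /i/ and /e/ are modeled, so it simply copies the vowel)."""
    trigger = form[0]
    return form[:2] + form[2:].replace("i", trigger).replace("e", trigger)

def palatalization(form: str) -> str:
    """[s] -> [ʃ] / _[+High]: /s/ surfaces as [ʃ] before a high vowel."""
    return form.replace("si", "ʃi")

def derive(ur: str, rules) -> str:
    form = ur
    for rule in rules:
        form = rule(form)
    return form

for ur in ["esi", "ise"]:
    harmony_first = derive(ur, [vowel_harmony, palatalization])   # as in (1)
    palatal_first = derive(ur, [palatalization, vowel_harmony])   # as in (2)
    print(f"/{ur}/ -> [{harmony_first}] with harmony first, "
          f"[{palatal_first}] with palatalization first")
```

Running the sketch reproduces the four cells of the tables: /esi/ yields [ese] (bleeding) versus [eʃe] (counterbleeding), and /ise/ yields [iʃi] (feeding) versus [isi] (counterfeeding), so the ordering of the same two rules is all that distinguishes the interaction types.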
Generative Linguistics and Neural Networks at 60: Foundation, Friction, and Fusion*
Generative linguistics and neural networks at 60: foundation, friction, and fusion*
Joe Pater, University of Massachusetts Amherst. October 3, 2018.
Abstract. The birthdate of both generative linguistics and neural networks can be taken as 1957, the year of the publication of foundational work by both Noam Chomsky and Frank Rosenblatt. This paper traces the development of these two approaches to cognitive science, from their largely autonomous early development in their first thirty years, through their collision in the 1980s around the past tense debate (Rumelhart and McClelland 1986, Pinker and Prince 1988), and their integration in much subsequent work up to the present. Although this integration has produced a considerable body of results, the continued general gulf between these two lines of research is likely impeding progress in both: on learning in generative linguistics, and on the representation of language in neural modeling. The paper concludes with a brief argument that generative linguistics is unlikely to fulfill its promise of accounting for language learning if it continues to maintain its distance from neural and statistical approaches to learning.
1. Introduction. At the beginning of 1957, two men nearing their 29th birthdays published work that laid the foundation for two radically different approaches to cognitive science. One of these men, Noam Chomsky, continues to contribute sixty years later to the field that he founded, generative linguistics. The book he published in 1957, Syntactic Structures, has been ranked as the most influential work in cognitive science from the 20th century. The other one, Frank Rosenblatt, had by the late 1960s largely moved on from his research on perceptrons – now called neural networks – and died tragically young in 1971.
Chain Shifts in First Language Acquisition
CHAIN SHIFTS IN FIRST LANGUAGE ACQUISITION. Master's thesis by Fransien Walton. Master's programme Taal, Mens en Maatschappij, Taalwetenschap, University of Utrecht. Student number 0419273. Supervisor: Prof. dr. René Kager. Second reader: Prof. dr. Wim Zonneveld.
Table of contents: Preface; 1. Introduction; 2. Previous Research; 2.1 Early Generative Grammar; 2.1.1 Smith (1973); 2.1.2 Macken (1980); 2.2 Optimality Theory; 2.2.1 Introduction to Optimality Theory; 2.2.2 Chain Shifts in Optimality Theory; 2.2.3 Local Constraint Conjunction; 2.2.4 Faithfulness to Input Prominence; 2.2.5 Optimality Theory with Candidate Chains; 2.2.6 Underspecified Underlying Representations; 2.3 Conclusion; 3. A New Proposal; 3.1 Independent Processes; 3.2 Articulatory Difficulties; 3.3 Underlying Representations; 3.4 Misperception; 3.5 Predictions Revisited; 3.6 Conclusion; 4. Discussion; 5. Conclusion; References.
Preface. The idea for this thesis originated in the course Phonological Acquisition taught by René Kager. I read the article On the characterization of a chain shift in normal and delayed phonological acquisition by Daniel Dinnsen and Jessica Barlow (1998) and was intrigued by the phenomenon. After I started reading more on the subject, I became increasingly unhappy with the proposed analyses in the literature and decided to dive deeper into the matter. The result is this thesis. I would like to thank René for his useful comments, critical questions and positive feedback.