Encoding of Phonology in an RNN Model of Grounded Speech


Afra Alishahi, Marie Barking, Grzegorz Chrupała

A Realistic Language Learning Scenario
• "Two men are washing an elephant."

Analysis of Language Learning (grounded vs. linguistic knowledge)
• Prior work: Elman (1991); Roy & Pentland (2002); Mohamed et al. (2012); Frank et al. (2013); Yu & Ballard (2014); Harwath et al. (2016); Gelderloos & Chrupała (2016); Kádár et al. (2016); Li et al. (2016); Linzen et al. (2016); Adi et al. (2017); Harwath & Glass (2017); Chrupała et al. (2017)
• We are here!

A Model of Grounded Speech Perception
• An image model and a speech model project their inputs into a joint semantic space.

Joint Semantic Space
• Speech and image are projected into a joint space, e.g. "a bird walks on a beam", "bears play in water".

Image Model
• VGG-16 (Simonyan & Zisserman, 2014); the pre-classification layer is used as the image representation.

Speech Model
• Input: MFCC vectors
• Convolution: subsampling
• Five RHN layers (Recurrent Highway Networks; Zilly et al., 2016)
• Attention: weighted sum of the last RHN layer's units
• Projection to the joint semantic space

Chrupała et al. (ACL 2017)
• Representations of language in a model of visually grounded speech signal
• Hidden layer activations used in a set of auxiliary tasks: predicting utterance length and content, measuring representational similarity, and disambiguating homonyms
• Main finding: encodings of form and meaning emerge and evolve in the hidden layers of stacked RNNs processing grounded speech

Current Study
• Questions: how is phonology encoded in
  • the MFCC features extracted from the speech signal?
  • the activations of the layers of the model?
• Data: Synthetically Spoken COCO dataset
• Experiments:
  • phoneme decoding and clustering
  • phoneme discrimination
  • synonym discrimination

Phoneme Decoding
• Identifying phonemes from the speech signal or from activation patterns: supervised classification of aligned phonemes
• The speech signal was aligned with its phonemic transcription using the Gentle toolkit (based on Kaldi; Povey et al., 2011)

From the paper: the representations (MFCC features and activations) are stored in a t_r × D_r matrix, where t_r and D_r are the number of time steps and the dimensionality, respectively, for each representation r. Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.
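To make the slicing concrete, here is a minimal sketch (not the authors' code) of cutting a representation matrix into per-phoneme chunks and averaging each chunk over time, as the experiments below require. The alignment format with per-phoneme start and end times, and the frame rate of 100 frames per second, are assumptions:

```python
import numpy as np

def phoneme_slices(rep, alignment, frame_rate=100):
    """Slice a (t_r, D_r) representation matrix into per-phoneme vectors.

    rep:        (t_r, D_r) array of MFCC frames or layer activations
    alignment:  list of (phoneme, start_sec, end_sec) tuples, e.g. from
                a forced aligner such as Gentle (format assumed here)
    frame_rate: frames per second of the representation (assumed value)
    """
    slices = []
    for phoneme, start, end in alignment:
        lo = int(start * frame_rate)
        hi = max(lo + 1, int(end * frame_rate))
        chunk = rep[lo:hi]                            # frames for this token
        slices.append((phoneme, chunk.mean(axis=0)))  # average over time
    return slices
```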
5 Experiments

In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model. In Section 5.1 we quantify how easy it is to decode phoneme identity from activations. In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli. Section 5.3 shows how the phoneme inventory is organized in the activation space of the model. Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.

5.1 Phoneme decoding

In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO Speech model. As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes. We take a subset of 5,000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the COCO Speech model. We split this data into 2/3 training and 1/3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations. Each phoneme slice is averaged over time, so that it becomes a D_r-dimensional vector. For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.

Figure 1 shows the results. As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers. Phonemes are easiest to recover from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter. This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information. It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.

[Figure 1: Accuracy of phoneme decoding with input MFCC features and COCO Speech model activations; error rate per representation (MFCC, Conv, Rec1–Rec5). The boxplot shows error rates bootstrapped with 1,000 resamples.]
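The decoding setup maps directly onto a standard classifier. A minimal sketch with scikit-learn, assuming the time-averaged phoneme slices for one representation are already stacked into arrays (the paper does not specify its implementation; note that in scikit-learn the L2 penalty weight corresponds to the inverse of the parameter C):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def phoneme_decoding_error(X, y, seed=0):
    """X: (n_tokens, D_r) time-averaged phoneme slices (np.ndarray)
       y: (n_tokens,) phoneme labels from the forced alignment"""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    split = len(X) * 2 // 3                       # 2/3 train, 1/3 heldout
    train, held = idx[:split], idx[split:]
    # L2-penalized logistic regression; fixed penalty weight 1.0 <=> C=1.0
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(X[train], y[train])
    return 1.0 - clf.score(X[held], y[held])      # classification error rate
```

Running this once per representation (MFCC, convolutional, and each recurrent layer) yields the error rates compared in Figure 1.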
Phoneme Discrimination
• ABX task (Schatz et al., 2013): discriminate minimal pairs; is X closer to A or to B?
  A: be /bi/, B: me /mi/, X: my /maI/
• A, B and X are CV syllables
• (A, B) and (B, X) are minimal pairs, but (A, X) is not (34,288 tuples in total)

5.2 Phoneme discrimination

Schatz et al. (2013) propose a framework for evaluating speech features learned in an unsupervised setup that does not depend on phonetically labeled data. They propose a set of tasks called Minimal-Pair ABX tasks that allow linguistically precise comparisons between syllable pairs that only differ by one phoneme. They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.

Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al. (2013). This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs. For example, in the tuple (be /bi/, me /mi/, my /maI/), human subjects are asked to identify which of the two syllables be or me is closest to my. The goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognises X as the syllable closer to B than to A.

We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013). We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g. /baU/). We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) is not. This resulted in 34,288 tuples in total. For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the Euclidean distance between the vector representations of syllables i and j. These representations are either the audio feature vectors or the layer activation vectors. A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.

Table 3 shows the discrimination accuracy in this task using various representations.

Table 3: Accuracy of choosing the correct target in an ABX task using different representations.
  MFCC           0.71
  Convolutional  0.73
  Recurrent 1    0.82
  Recurrent 2    0.82
  Recurrent 3    0.80
  Recurrent 4    0.76
  Recurrent 5    0.74

[Figure 2: Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class (affricate, approximant, fricative, nasal, plosive, vowel), per representation (mfcc, conv, rec1–rec5).]

[…] as we move towards higher hidden layers. There are also clear differences in the pattern of discriminability for the phoneme classes. The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers. Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.

5.3 Organization of phonemes

In this section we take a closer look at the underlying organization of phonemes in the model.
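The excerpt ends here, but the slides list "phoneme decoding and clustering" among the experiments, and one plausible way to inspect this organization is to average the activation vectors per phoneme type and cluster them hierarchically. A minimal sketch under that assumption, using SciPy's hierarchical clustering (not necessarily the paper's actual method):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def phoneme_dendrogram(slices):
    """Cluster phonemes by their average activation vectors.

    slices: list of (phoneme_label, D_r-dim vector) pairs, e.g. the
    output of phoneme_slices() above, pooled over many utterances.
    Returns the dendrogram data structure (no plotting required).
    """
    labels = sorted({p for p, _ in slices})
    # one mean vector per phoneme type
    means = np.stack([
        np.mean([v for p, v in slices if p == lab], axis=0)
        for lab in labels
    ])
    Z = linkage(means, method="ward")   # agglomerative clustering
    return dendrogram(Z, labels=labels, no_plot=True)
```

If the model's representations encode phonology, the resulting tree should group phonemes by class (e.g. vowels together, plosives together), mirroring the class-wise differences seen in the ABX results.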