Generating Anagrams from Multiple Core Strings Employing User-Defined Vocabularies and Orthographic Parameters


Behavior Research Methods, Instruments, & Computers, 2003, 35 (1), 129-135. Copyright 2003 Psychonomic Society, Inc.

TIMOTHY R. JORDAN and AXEL MONTEIRO
University of Nottingham, Nottingham, England

Author note: This work was supported by BBSRC Grant 42/S12111 to T.R.J. The order of authorship is alphabetical, and both authors contributed equally to this work. Correspondence concerning this article should be addressed to either T. R. Jordan or A. Monteiro, School of Psychology, University of Nottingham, University Park, Nottingham NG7 2RD, England (e-mail: [email protected] or [email protected]).

Anagrams are used widely in psychological research. However, generating a range of strings with the same letter content is an inherently difficult and time-consuming task for humans, and current computer-based anagram generators do not provide the controls necessary for psychological research. In this article, we present a computational algorithm that overcomes these problems. Specifically, the algorithm automatically processes each word in a user-defined source vocabulary and outputs, for each word, all possible anagrams that exist as words (or as nonwords, if required) as defined by the same source vocabulary. Moreover, we show how the output of the algorithm can be filtered to produce anagrams within specific user-defined orthographic parameters. For example, the anagrams produced can be filtered to produce words that share, with each other or with other words in the source vocabulary, letters in only certain positions. Finally, we provide free access to the complete Windows-based program and source code containing these facilities for anagram generation.

Anagrams play an important and pervasive role in psychological research. For example, anagrams provide a measure of problem-solving ability (i.e., where the task is to generate a word composed of the same letters as a presented string) that has been used to study a range of psychological issues, including insight (e.g., Smith & Kounios, 1996), aging (e.g., Witte & Freund, 1995), recognition memory (e.g., Weldon, 1991), semantic memory (e.g., White, 1988), and the topography of evoked brain activity (e.g., Skrandies, Reik, & Kunze, 1999). Anagrams have also been used extensively to study processes involved in word recognition, where a great deal of research involves comparing performances between stimuli from different linguistic categories, including frequency, imageability, concreteness, orthographic structure, and lexicality (words vs. nonwords).1 For example, when performance with words and nonwords was compared, several studies (e.g., Gibson, Pick, Osser, & Hammond, 1962; Jordan, Patching, & Milner, 2000; Mason, 1975; Massaro & Klitzke, 1979; Massaro, Venezky, & Taylor, 1979; Reicher, 1969) have used stimuli matched for their individual letter content (e.g., show vs. ohws). The attraction of this matching is that differences in basic letter content can be removed as a confounding variable between categories and so allow influences of linguistic category (e.g., frequency, imageability, concreteness, orthographic structure, lexicality) on performance to be revealed more clearly.

The appropriate selection of core strings (e.g., slate) and the generation of anagrams (in this case, including the words least, stale, steal, tales, teals, and tesla) for use in any psychological research requires certain controls over the procedure of core string selection and anagram generation that will avoid confounds that may contaminate the data produced by an experiment. First, control over the source vocabulary used to validate (as legal words) core strings and their letter-string permutations ensures that only linguistically appropriate core strings and anagrams are used. Most obviously, defining the language within which core strings exist and from which anagrams are generated ensures that all permutations are relevant for a particular participant population. However, this is important not only for determining core strings and anagrams for languages that are highly individual (e.g., English vs. French), but also for core strings and anagrams specific to variations within a language; for example, despite their overall similarity, American, Australian, British, and Canadian English contain words that vary in their presence, spelling, and frequency of usage across these vocabularies. In addition, control over the source vocabulary allows anagrams to be produced from either an exhaustive search of the entire vocabulary of a language or a subset of the vocabulary (e.g., one including only words above a certain frequency of usage).

Second, the selection of core strings and anagrams for an experiment is facilitated and its validity enhanced when all the anagrams of all the words in the chosen vocabulary are available at the start of the selection process. Without this availability, the appropriateness of core strings and their anagrams for inclusion in an experiment is difficult to determine, and the entire selection process is susceptible to experimenter bias. For example, core words that are subjectively more likely to be generated (e.g., of higher frequencies of occurrence) are more likely to be selected. Indeed, knowing the number of real-word anagrams that can be produced from a particular core string, and how this compares with the number produced by other, potential core strings, is crucial for assessing the suitability of a core string for a particular task. For example, when nonword anagrams are selected for use in problem-solving experiments or as controls in word recognition experiments, nonwords with just one real-word anagram would provide qualitatively different stimuli, as compared with those for which two or more real-word anagrams exist.

Third, control over the nature of the anagrams generated allows a more refined and focused use of anagrams in psychological research. In particular, when anagrams of core strings are generated, user-defined constraints placed on the operation of the generating algorithm allow only the types of anagram relevant to the aims of an experiment to be specified and produced, and so avoid the production of all (including unwanted) combinations that satisfy the general principle of an anagram: for example, anagrams above a certain frequency of written occurrence, anagrams for which letters in only certain positions in the core word are transposed, or anagrams that share letters or groups of letters in only certain positions with other words in the source vocabulary.

However, generating a range of different strings with the same letter content is an inherently difficult and time-consuming task (at least for humans), particularly when each permutation must be verified as a word or a nonword by checking for its existence in the appropriate vocabulary. Indeed, whereas the generation and verification of all possible permutations is feasible for a single core string of three or four letters, the generation and verification of all possible permutations for multiple core strings, especially those of five letters or more, demands computational involvement to achieve acceptable levels of accuracy and efficiency. However, the anagram generators currently available fail to provide the controls outlined earlier (see Table 1); more specifically, with respect to the following.

1. The vast majority of anagram generators currently available provide no facility for determining the system's source vocabulary (used to verify the core letter string and its permutations as legal words), and most provide no indication of the source used. This is unsuitable for the production of experimental stimuli, for which knowledge and control of the nature of the source vocabulary are required to produce a stimulus set of known characteristics and maximal ecological validity. In particular, an inappropriate source vocabulary may not provide anagrams representative of the linguistic environment of participants. Indeed, problems arise not only when the source vocabulary has an unknown content, but also when the content is known but is less than ideal: for example, when it contains too few words or contains spellings that are inappropriate for a particular participant population (e.g., American English for British participants).

2. All anagram generators currently available allow the production of anagrams from core letter strings only by taking as input one core string at a time. This input string must then be processed before the next input string can be entered by the researcher. This piecemeal approach is unsuited to the production of experimental stimuli, where sufficient numbers of appropriately matched stimuli are likely to be achieved only after permutations from several thousand input strings have been calculated.

3. No anagram generator currently available allows any user-defined, research-relevant constraints to be placed on the operation of the generating algorithm. Thus, when anagrams of core strings are derived, all combinations that satisfy the general principle of an anagram are generated, without allowing constraints on such things as the frequency of occurrence or the orthographic structure of the strings produced.

In this article, we present a computational algorithm for producing anagrams from any suitable user-defined source vocabulary. Specifically, the algorithm takes in each word in a chosen vocabulary and outputs, for each word, all possible anagrams that exist in the same vocabulary (and, if required, all possible anagrams that exist as nonwords).