Automatic Speech Recognition in Sanskrit: A New Speech Corpus and Modelling Insights

Devaraja Adiga1∗ ([email protected])  Rishabh Kumar1∗ ([email protected])  Amrith Krishna2 ([email protected])
Preethi Jyothi1  Ganesh Ramakrishnan1  Pawan Goyal3
1IIT Bombay, Mumbai, India; 2University of Cambridge, UK; 3IIT Kharagpur, WB, India
∗ Joint first author

Abstract

Automatic speech recognition (ASR) in Sanskrit is interesting, owing to the various linguistic peculiarities present in the language. The Sanskrit language is lexically productive, undergoes euphonic assimilation of phones at the word boundaries and exhibits variations in spelling conventions and in pronunciations. In this work, we propose the first large scale study of automatic speech recognition (ASR) in Sanskrit, with an emphasis on the impact of unit selection in Sanskrit ASR. We release a 78 hour ASR dataset for Sanskrit, which faithfully captures several of the linguistic characteristics expressed by the language. We investigate the role of different acoustic model and language model units in ASR systems for Sanskrit. We also propose a new modelling unit, inspired by syllable level unit selection, that captures character sequences from one vowel in the word to the next vowel. We also highlight the importance of choosing graphemic representations for Sanskrit and show the impact of this choice on word error rates (WER). Finally, we extend these insights from Sanskrit ASR to building ASR systems in two other Indic languages, Gujarati and Telugu. For both these languages, our experimental results show that the use of phonetic based graphemic representations in ASR results in performance improvements compared to ASR systems that use native scripts.1

1 Dataset and code can be accessed from www.cse.iitb.ac.in/~asr and https://github.com/cyfer0618/Vaksanca.git.

1 Introduction

Sanskrit is a language with fairly advanced disciplines of phonetics (Śikṣā), prosody (Chandas), and grammar (Vyākaraṇa). The language has a rich oral tradition and it tends to follow a phonemic orthography, resulting in systematic grapheme-phoneme correspondences. Connected speech leads to phonemic transformations in utterances, and in Sanskrit these are faithfully preserved in writing as well (Krishna et al., 2018). This is called Sandhi and is defined as the euphonic assimilation of sounds, i.e., modification and fusion of sounds, at or across the boundaries of grammatical units (Matthews, 2007, p. 353). Phonemic orthography is beneficial for a language when it comes to designing automatic speech recognition (ASR) systems, specifically for unit selection at both the Acoustic Model (AM) and Language Model (LM) levels.

Regardless of the aforementioned commonalities preserved in both the speech and the text in Sanskrit, designing a large scale ASR system raises several challenges. The Unicode encoding for the scripts used for Sanskrit, both Roman and Devanāgari, does not preserve the correspondence with the phonemic encoding. Further, mapping the graphemes in Unicode to the corresponding phonemes either leads to ambiguity and redundancy or often requires multi-grapheme combinations.
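As a minimal illustration of this mismatch, the snippet below lists the Unicode code points of the Devanagari word कर्म (/karma/); the word is chosen purely for exposition. A consonant letter such as KA already stands for the two phonemes /k a/, while the bare consonant /r/ requires the two-code-point combination RA + VIRAMA.

```python
# Minimal illustration: Devanagari code points do not map 1:1 to phonemes.
# The word "karma" has five phonemes but four code points, because consonant
# letters carry an inherent /a/ that a following VIRAMA must explicitly cancel.
import unicodedata

word = "कर्म"  # /k a r m a/, SLP1: karma
for ch in word:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0915  DEVANAGARI LETTER KA     -> /k a/
# U+0930  DEVANAGARI LETTER RA     -> /r/ (inherent /a/ cancelled by the virama)
# U+094D  DEVANAGARI SIGN VIRAMA   -> vowel suppressor, no phoneme of its own
# U+092E  DEVANAGARI LETTER MA     -> /m a/
```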
The language is lexically productive, which results in long compound words with multiple components in usage. This results in speakers segmenting the compounds at arbitrary lexeme boundaries of the compound, as it need not always be possible to utter the compound in one breath and also to convey the meaning clearly. Similarly, such arbitrary segmentations at the word boundaries are possible in utterances of long text sequences where multiple lexical items are fused together via Sandhi. These segmentations are accompanied by the corresponding Sandhi-based transformations, resulting in a new phonetic sequence different from the original sequence. Finally, Sanskrit might be one of those rare natural languages where the number of non-native proficient speakers is manifold in comparison to the native speakers (Krishna et al., 2020). This makes the ASR task further challenging, as the speakers are prone to carry influences from their corresponding mother tongues into the Sanskrit utterances as well.

While there exist several computational models for processing Sanskrit texts (Kulkarni, 2013; Kumar et al., 2010; Shukla et al., 2010; Kulkarni et al., 2010a; Goyal et al., 2012; Kulkarni et al., 2010c; Mishra et al., 2013; Saluja et al., 2017; Anoop and Ramakrishnan, 2019; Krishna et al., 2021), large scale systems for processing of speech in Sanskrit are almost non-existent. First, we present a new dataset, with 78 hours of speech covering about 46,000 sentences, for ASR in Sanskrit. Keeping in mind the rich and long cultural heritage the language carries, we prepare our dataset to be diverse both chronologically and in terms of domain coverage. Further, the dataset contains utterances from 27 different speakers, representing 6 different native languages. The dataset splits have disjoint speakers, with 12 in the training and 5 each in the validation, test and out-of-domain test data sets. Further, we explicitly mark the segmentation decisions made by a speaker to segment long compound words and fused phrases and include the corresponding transformations due to Sandhi.

Using this dataset, we propose a new, large-vocabulary Sanskrit ASR system, which, to the best of our knowledge, is the first such system for Sanskrit. The phonemic orthography followed in Sanskrit has influenced our design choices in terms of unit selection at the level of the acoustic and language models. We investigate three different encoding schemes used to model LM tokens, namely, word-based encoding, byte pair encoding (BPE) and a new vowel split encoding inspired by existing linguistic theories of syllabic structure popularly used within text-to-speech systems (Kishore and Black, 2003; Mishra et al., 2013). Further, to address the redundancy issues in Unicode representations, we make use of the Sanskrit Library Phonetic (SLP1) encoding scheme proposed by Scharf and Hyman (2011). SLP1 is designed such that it preserves the phonemic orthography. Building on the study by Scharf and Hyman (2011), we focus on two graphemic representations only, viz., native script (Devanagari) and SLP1.
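A minimal sketch of such a vowel split, assuming SLP1 input and the simple convention that every unit ends at a vowel and that word-final consonants attach to the last unit (the exact rule is specified in Sections 3 and 4, not reproduced here):

```python
# Sketch of a vowel-split segmentation over SLP1 text (one reading of the rule,
# not the paper's reference implementation).
SLP1_VOWELS = set("aAiIuUfFxXeEoO")  # simple and long vowels plus e/E/o/O

def vowel_split(word: str) -> list[str]:
    units, current = [], ""
    for ch in word:
        current += ch
        if ch in SLP1_VOWELS:   # a vowel closes the current unit
            units.append(current)
            current = ""
    if current:                 # trailing consonants, visarga (H), anusvara (M)
        if units:
            units[-1] += current
        else:
            units.append(current)
    return units

print(vowel_split("vAgarTapratipattaye"))  # a 19-letter compound, see Section 2
# ['vA', 'ga', 'rTa', 'pra', 'ti', 'pa', 'tta', 'ye']
```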
Finally, we extend our insights to model ASR systems for two more Indian languages, viz., Telugu and Gujarati. We extend SLP1 to include graphemes relevant for these languages which are missing from Sanskrit. We report the performance of these ASR systems on two publicly available ASR datasets.

Our main contributions in this work are:
1) We present (in Section 2) a new, large vocabulary Sanskrit ASR system and the first ever ASR-based study for Sanskrit using a new, large and diverse, labeled speech corpus वाक् सञ्चयः (/Vāksañcayaḥ/).
2) We investigate (in Sections 3 and 4) different modeling choices for both acoustic models and language models in Sanskrit ASR systems, along with different graphemic representations. We propose a new word segmentation technique based on splitting at vowels that can be used with both the acoustic model and the language model.
3) We also contextualize our findings for Sanskrit by providing comparisons on ASR systems built for two other Indian languages, viz., Gujarati and Telugu.

2 A new Sanskrit Speech Corpus: वाक् सञ्चयः (/Vāksañcayaḥ/)

Our corpus वाक् सञ्चयः (/Vāksañcayaḥ/) has more than 78 hours of data with an overall vocabulary size of 91,000 words and recordings of about 46,000 sentences, each with a sampling rate of 22 kHz. The contents span over 3 time periods, categorised into pre-classical literature (1,500 BCE - 100 BCE), classical literature (300 CE - 800 CE) and modern literature (900 CE to now). The corpus is intended to address the challenges in interfacing the speech and the text covered by the disciplines of phonetics (Śikṣā) and grammar (Vyākaraṇa). Hence, we confine our corpora to those written only in prose (Gadya).2 In the Sanskrit literature, the frequency of commonly used words changes from one topical domain to another, specifically from one Śāstra (branch of learning) to another (Adiga et al., 2018). Our corpus contains samples from diverse domains, including philosophy, literature, commentary on poetry and grammar. It also includes contemporary recordings such as stories, live lectures, spiritual discourse and radio programs/podcasts, thereby collecting a wide range of Sanskrit vocabulary.

2 We do not include verses in our current dataset, as modelling ASR systems for verses would require additional resources on both the acoustic model and the language model fronts.

Dataset               Speakers   Hours   Utterances
Train                 12         56      34,309
Validation            4          7       3,190
Test                  6          11      6,004
Out-of-domain Test    5          5       2,618

Table 1: Overview of Sanskrit speech corpus.

The recordings were primarily collected with the help of volunteers, recording their speech by using the Recorder app on Android phones and the Audacity platform, and from various sources available online.3 oTranscribe was used to transcribe the audio files.

Word Length: The tokens in Sanskrit texts can be very long owing to Sandhi and the lexically productive process of compounding (Samāsa). For instance, consider the compound word वागर्थप्रतिपत्तये (/vāgarthapratipattaye/). It forms a 19 letter word in SLP1 (vAgarTapratipattaye), and is formed by combining the three Sanskrit stems वाक्, अर्थ, प्रतिपत्ति (/vāk, artha, pratipatti/), as per the rules of Sandhi and Samāsa. In Figure 1, we present the distribution of the number of characters (in SLP1 format) per word across the three languages that we experimentally analyse in this work, viz., Sanskrit, Telugu and Gujarati. The plots are normalized with respect to the …
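The SLP1 rendering above can be reproduced with an off-the-shelf transliterator; a minimal sketch, assuming the third-party indic_transliteration package (the paper does not specify its transliteration tooling):

```python
# Sketch: confirm that the compound is a single 19-character token in SLP1.
# Assumes the third-party package: pip install indic_transliteration
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

compound = "वागर्थप्रतिपत्तये"  # vāk + artha + pratipatti, fused by Sandhi and Samāsa
slp1 = transliterate(compound, sanscript.DEVANAGARI, sanscript.SLP1)
print(slp1, len(slp1))  # vAgarTapratipattaye 19
```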