Morphology-rich Alphasyllabary Embeddings

Amanuel Mersha, Stephen Wu
Addis Ababa University, UT Health Science Center at Houston
[email protected], [email protected]

Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 2590–2595, Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.

Abstract
Word embeddings have been successfully trained in many languages. However, both intrinsic and extrinsic metrics are variable across languages, especially for languages that depart significantly from English in morphology and orthography. This study focuses on building a word embedding model suitable for the Semitic language of Amharic (Ethiopia), which is both morphologically rich and written as an alphasyllabary (abugida) rather than an alphabet. We compare embeddings from tailored neural models, simple pre-processing steps, off-the-shelf baselines, and parallel tasks on a better-resourced Semitic language, Arabic. Experiments show our model's performance on word analogy tasks, illustrating the divergent objectives of morphological vs. semantic analogies.

Keywords: word embeddings, word2vec, Amharic, morphologically rich, alphasyllabary

1. Introduction

Word embeddings are ubiquitous in today's language-related applications. A long history of distributional semantics efforts (Deerwester et al., 1990; Blei et al., 2003; Padó and Lapata, 2007) has largely given way to efficient neural network-inspired word embedding methods (Mikolov et al., 2013; Pennington et al., 2014), which have been applied across dozens of tasks with great success. Word embeddings have also been studied in bilingual (Zou et al., 2013) and multilingual (Ammar et al., 2016; Bojanowski et al., 2017) settings, but evaluations in lesser-resourced languages have been cursory and highly variable.

This work focuses on word embeddings in the Amharic language, the second-most-spoken Semitic language¹ (Eberhard et al., 2020). Similar to other Semitic languages like Arabic or Hebrew, it is morphologically rich, with templatic morphology that is fusional rather than agglutinative, so that morphemes are difficult to separate:

    አይመለስም                እመለሳለሁ
    almelesem              imelesalehu
    return.1sg.neg.past    return.1sg.future

Amharic is written with the Fidel alphasyllabary (also known as an abugida). Each character in Amharic Fidel represents both a consonant (35) and a vowel (7 + 5 diphthongs) and is unique, yet there are similarities for each onset (consonant) and for each nucleus (vowel) of syllables. Below, note the similarities in shape corresponding to consonants as well as vowels:

    ሀ   ሁ   ሂ   ሃ   ሄ   ህ   ሆ
    ha  hu  hi  ha  hē  hə  ho
    ለ   ሉ   ሊ   ላ   ሌ   ል   ሎ
    le  lu  li  la  lē  lə  lo

Note that spoken Amharic does not follow the syllabic structure of the Fidel characters strictly; for example, the 6th form may have an [ə] vowel or no vowel at all.

As we will show, word embedding models face unique challenges in this lesser-studied language. An abugida orthographic system as described above (besides Fidel, see also, e.g., Devanagari) is distinct from an abjad (requisite glyphs represent only consonants, e.g., Arabic), a syllabary (glyphs represent syllables with no visual similarity between related sounds, e.g., Japanese hiragana or katakana), or an alphabet (glyphs give vowels equal status to consonants).

Thus, this work introduces both simple (Section 3.1.) and complex (Section 3.2.) embedding models designed for Amharic. In addition, it releases two Amharic language resources²: a cleaned but unlabeled corpus of digital Amharic text used for training word embeddings, and an Amharic evaluation resource for morphological and semantic word analogies (Section 4.).

Our results (Section 5.) show that word embeddings in the Amharic language benefit from linguistically informed treatment of the orthography, and that simple pre-processing architectures outperform complex models. We note that the utility of embeddings in this lesser-resourced language is hampered by the need for large corpora, especially on heavily semantic tasks.

¹ https://www.ethnologue.com/language/amh
² https://github.com/leobitz/amharic_word_embedding

2. Related Work

Mikolov et al. (2013) showed that words can be represented using distributed vectors by implicitly factorizing the co-occurrence of tokens (Levy and Goldberg, 2014), and their word2vec algorithm, which employs asynchronous gradient updates, has been used extensively. Similarly, Pennington et al.'s GloVe representation (2014) performs its matrix factorization on windows of words explicitly. These well-known methods were able to demonstrate effectiveness in inferring morphological and semantic regularities, despite ignoring morphological regularities between words. However, challenges such as out-of-vocabulary words and complex morphological structure led to the exploration of subword and character-level models. Closest to this work are the character-aware neural language modeling approach of Kim et al. (2016) and the fastText sub-word skip-gram (swsg) models of Bojanowski et al. (2017). Kim et al. used a convolutional neural network (CNN) as their word-level representation, directly feeding dense vectors (embeddings) to a long short-term memory (LSTM) language model.

Bojanowski et al.'s fastText (2017) introduced a simple subword model on top of the skip-gram model: subword character n-grams, alongside the original target word, have their own vector representations that are summed to form a word (e.g., for slant with n = 3, the set of vectors would include {<sl, sla, lan, ant, nt>}). This subword model improved embeddings for the morphologically rich languages of Arabic, German, and Russian in a word similarity task. Other work on morphologically rich languages has applied subword models to tasks like machine translation (Vylomova et al., 2017).
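To make the subword decomposition concrete, the following is a minimal sketch of fastText-style character n-gram extraction; it reproduces the paper's slant example, but the function is our own illustration, not the fastText implementation (which additionally hashes n-grams into a fixed-size table).

```python
def char_ngrams(word, n=3):
    """fastText-style character n-grams: pad the word with boundary
    markers '<' and '>' so that prefixes and suffixes get distinct
    n-grams, then slide an n-character window across it."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# The paper's example: n-grams for "slant" with n = 3.
print(char_ngrams("slant"))  # ['<sl', 'sla', 'lan', 'ant', 'nt>']
```

In the swsg model, the vector for slant is then the sum of these n-gram vectors plus a vector for the whole word itself, as the paragraph above notes.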
Training models across more languages leads to further insights on subword modeling; in subsequent work on fastText, Grave et al. (2018) trained models in 157 languages. This massive study showed that a large training corpus for embeddings (the Common Crawl) was extremely important in building more accurate embeddings.

Significant efforts in aligning embeddings across multiple languages (Zou et al., 2013; Ammar et al., 2016) are only tangentially related to ours. Multilingual embeddings, though, have been jointly developed with other tasks such as language identification (Jaech et al., 2016).

Instead of translated evaluation sets (Zahran et al., 2015; Abdou et al., 2018), we are interested in building monolingual word/subword embeddings validated within their own linguistic context. Most non-English models with significant monolingual evaluation efforts (other than those described above) are relatively well-resourced compared to Amharic, e.g., Chinese (Chen et al., 2015) and Arabic (Dahou et al., 2016). We employ Arabic's larger resources for comparison, since it is also in the Semitic language family.

3. Methods

3.1. Alphabetization and abjadization

Our first subword models combine two simple linguistically-motivated pre-processes with Bojanowski et al.'s model (2017). First, we normalized Amharic Fidel consonants and represented vowels separately, approximating an alphabet. Second, we normalized consonants and eliminated the vowels, approximating an abjad. Below, we chose the 6th form of each Fidel to represent the consonant, since that form is also used for a consonant with a null vowel.

    ሰላም    →    ስ e ል a ም    →    ስልም
    selam   →    s e l a m    →    s l m
    Abugida →    Alphabet     →    Abjad

Note that this is computationally different from other purported abugidas. Hindi's Devanagari script has vowel markers that are separate Unicode code points, easily distinguished by computational systems; Amharic's Fidel script has a single Unicode code point for each consonant–vowel character. Indeed, only full syllabaries are as opaque from the standpoint of digitalized orthographies.
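This pre-processing can be implemented with Unicode arithmetic, since the basic Ethiopic block (starting at U+1200) lays out each consonant series as a contiguous run of eight code points whose offset encodes the vowel order. The sketch below is our own illustration under that assumption; it ignores the labiovelar and extended series, the ASCII vowel letters are our stand-ins, and it is not the paper's released pre-processing code.

```python
ETHIOPIC_BASE = 0x1200  # start of the basic Unicode Ethiopic block
ORDERS = 8              # code points per consonant series (7 orders + an 8th form)
# Illustrative vowel letters for orders 1-8; order 6 (index 5) is the
# "null" vowel, matching the paper's choice of the 6th form as the bare
# consonant (traditional orders: ä, u, i, a, ē, ə, o, wa).
VOWELS = ["e", "u", "i", "a", "E", "", "o", "W"]

def split_fidel(ch):
    """Split one Fidel character into (consonant_char, vowel_letter),
    representing the consonant by the 6th form of its series.
    Assumes ch lies within the basic Ethiopic block."""
    series, order = divmod(ord(ch) - ETHIOPIC_BASE, ORDERS)
    consonant = chr(ETHIOPIC_BASE + series * ORDERS + 5)  # 6th form
    return consonant, VOWELS[order]

def alphabetize(word):
    """Abugida -> approximate alphabet: interleave consonants and vowels."""
    return "".join(c + v for c, v in map(split_fidel, word))

def abjadize(word):
    """Abugida -> approximate abjad: keep normalized consonants only."""
    return "".join(split_fidel(ch)[0] for ch in word)

print(alphabetize("ሰላም"))  # -> ስeልaም (the final ም carries the null vowel)
print(abjadize("ሰላም"))     # -> ስልም
```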
3.2. Consonant-Vowel subword model

We introduce a consonant-vowel subword model (cvsw) for alphasyllabary text, shown in Figure 1. Here, an input word is represented in 3 ways: a sequence of one-hot consonants, a sequence of one-hot vowels, and a static matrix where each row is a two-hot vector (a one-hot consonant concatenated with a one-hot vowel) representing an alphasyllabary character in the word.

[Figure 1: Consonant-Vowel subword (cvsw) model, with CNN character embedding and separate RNN-modeled consonant and vowel sequences.]

The model is similar to an autoencoder, which learns an efficient encoding of its input. It contains an "encoder" CNN (followed by max pooling and a dense layer) which embeds the sequence of alphasyllabary characters in a given word. We chose this CNN to encode the input to the latent space after preliminary experiments with alternative architectures (e.g., one-hot inputs, or removing the RNN input sequences), with the intuition that the embedding of a word is formed directly from its consonants and vowels. The dense layer is considered the embedding of a given sequence. Two recurrent neural networks (RNNs) using gated recurrent units (GRUs) "decode" the sequence of the input characters: one RNN predicts the consonant sequence, and the other predicts the vowel sequence.

The probability of predicting the character sequence l is given by:

$$P(l_1 \ldots l_t \mid x_1 \ldots x_t) = \prod_t P(c_t \mid u, c_1 \ldots c_{t-1}) \cdot \prod_t P(v_t \mid u, v_1 \ldots v_{t-1})$$

$$\log P(l_1 \ldots l_t \mid x_1 \ldots x_t) = \sum_t \log P(c_t \mid u, c_1 \ldots c_{t-1}) + \sum_t \log P(v_t \mid u, v_1 \ldots v_{t-1})$$

where l denotes the alphasyllabary character sequence to be predicted, c and v are the consonant and vowel features of each character, and u is the fixed-length embedding (latent feature) of the input sequence. A sketch of this architecture in code is given after the corpus statistics below.

    Pre-processed corpus                        Amharic        Arabic
    Total tokens                                16,295,955     16,284,308
    Unique tokens                               855,109        584,204
    Avg. token frequency                        19.057         27.874
    After min-thresholding at frequency of 5
    Total tokens                                15,129,549     15,523,827
    Unique tokens                               155,427        131,014
    Avg.
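The following is a minimal PyTorch sketch of the cvsw architecture as described above: a 2D-convolutional encoder over the two-hot word matrix, a dense layer whose output is taken as the word embedding u, and two GRU decoders that predict the consonant and vowel sequences. The paper fixes only these ingredients, not the hyperparameters, so the filter count, embedding width, the use of u as the decoders' initial hidden state, and all names here are our assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVSW(nn.Module):
    """Sketch of the consonant-vowel subword (cvsw) model: CNN encoder,
    dense embedding layer, and separate GRU decoders for consonants and
    vowels. Class counts follow the paper's description of Fidel
    (35 consonants; 7 vowels + 5 diphthongs = 12 vowel classes)."""

    def __init__(self, n_cons=35, n_vow=12, max_len=12, emb_dim=128, hidden=128):
        super().__init__()
        feat = n_cons + n_vow  # width of each two-hot character row
        # "Encoder": 2D convolution over the (max_len x feat) word matrix,
        # then max pooling, flattening, and a dense layer -> embedding u.
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * (max_len // 2) * (feat // 2), emb_dim)
        # "Decoders": u initializes the hidden state of two GRUs that are
        # teacher-forced with the previous consonant/vowel one-hots.
        self.h0 = nn.Linear(emb_dim, hidden)
        self.cons_gru = nn.GRU(n_cons, hidden, batch_first=True)
        self.vow_gru = nn.GRU(n_vow, hidden, batch_first=True)
        self.cons_out = nn.Linear(hidden, n_cons)
        self.vow_out = nn.Linear(hidden, n_vow)

    def embed(self, word_matrix):
        """word_matrix: (batch, max_len, n_cons + n_vow) two-hot rows."""
        x = F.relu(self.conv(word_matrix.unsqueeze(1)))  # (B, 32, L, feat)
        x = F.max_pool2d(x, 2).flatten(1)                # pool + flatten
        return self.fc(x)                                # u: (B, emb_dim)

    def forward(self, word_matrix, cons_prev, vow_prev):
        """cons_prev/vow_prev: gold one-hot sequences shifted right by one
        step (teacher forcing). Returns u and per-step class logits."""
        u = self.embed(word_matrix)
        h = torch.tanh(self.h0(u)).unsqueeze(0)          # (1, B, hidden)
        cons_h, _ = self.cons_gru(cons_prev, h)
        vow_h, _ = self.vow_gru(vow_prev, h)
        return u, self.cons_out(cons_h), self.vow_out(vow_h)

def cvsw_loss(cons_logits, vow_logits, cons_tgt, vow_tgt):
    """Negative of the factorized log-likelihood above: the sum of
    cross-entropies over the consonant and vowel streams."""
    return (F.cross_entropy(cons_logits.flatten(0, 1), cons_tgt.flatten())
            + F.cross_entropy(vow_logits.flatten(0, 1), vow_tgt.flatten()))

# Smoke test with random data: 4 words, 12 characters each.
B, L = 4, 12
model = CVSW()
wm = torch.rand(B, L, 35 + 12)
u, cl, vl = model(wm, torch.rand(B, L, 35), torch.rand(B, L, 12))
loss = cvsw_loss(cl, vl, torch.randint(35, (B, L)), torch.randint(12, (B, L)))
print(u.shape, loss.item())  # torch.Size([4, 128]) and a scalar loss
```

Conditioning the decoders only through the initial hidden state forces the dense-layer output u to carry all the information needed to reconstruct both streams, which is what makes it usable as the word embedding.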
