The Relevance of the Source Language in Transfer Learning for ASR

Nils Hjortnæs
Indiana University
[email protected]

Niko Partanen
University of Helsinki
Helsinki, Finland
[email protected]

Michael Rießler
University of Eastern Finland
Joensuu, Finland
[email protected]

Francis M. Tyers
Department of Linguistics, Indiana University
Bloomington, IN
[email protected]
Abstract

This study presents new experiments on Zyrian Komi speech recognition. We use DeepSpeech to train ASR models from a language documentation corpus that contains both contemporary and archival recordings. Earlier studies have shown that transfer learning from English and using a domain-matching Komi language model both improve the CER and WER. In this study we experiment with transfer learning from a more relevant source language, Russian, and with including Russian text in the language model construction. The motivation for this is that Russian and Komi are contemporary contact languages, and Russian is regularly present in the corpus. We found that despite the close contact of Russian and Komi, the larger English speech corpus yielded greater performance when used as the source language. Additionally, we can report that an update in the DeepSpeech version alone improved the CER by 3.9% over the earlier studies, which is an important step in the development of Komi ASR.

1 Introduction

This study describes an Automatic Speech Recognition (ASR) experiment on Zyrian Komi, an endangered, low-resource Uralic language spoken in the Komi Republic in Russia. Komi has approximately 160,000 speakers, and its orthography is well established, based on the Cyrillic script. Although Zyrian Komi is endangered, it is used widely in various media, and also in the education system in the Komi Republic.

This paper continues our experiments on Zyrian Komi ASR using DeepSpeech, which started in Hjortnaes et al. (2020b). The scores reported there were very low, but in a later study we found that a language model built on more data increased the performance dramatically (Hjortnaes et al., 2020a). We are not yet at a level that would be immediately useful for our goals, but we continue to explore different ways to improve our results. This study uses the same dataset, but attempts to take better account of the multilingual processes found in the corpus.

ASR has progressed greatly for high-resource languages, and several advances have been made to extend that progress to low-resource languages. There are still, however, numerous challenges in situations where the available data is limited. For the larger languages, the training data for speech recognition is often collected expressly for the purpose of ASR. These approaches, such as the Common Voice platform, can be extended also to endangered languages (see e.g. Berkson et al., 2019), so there is no clear-cut boundary between the resources available for different languages. While having dedicated, purpose-built data is good for the performance of ASR, it also leaves a large quantity of more challenging but usable data untapped. At the same time, materials not explicitly produced for this purpose may be more realistic for the resources we intend to use the ASR for in the later stages.

The data collected in language documentation work customarily originates from recorded and transcribed conversations and/or elicitations in the target language. While this data does not have the desirable features of a custom speech recognition dataset, such as a large variety of speakers and accents, and includes far fewer recorded hours, for many endangered languages the language documentation corpora are the only available source.

However, attention should also be paid to the differences between endangered language contexts globally. There is enormous variation in how much a language has previously been documented and in whether earlier resources exist. This also connects to the new materials collected, as some languages
Proceedings of the 4th Workshop on the Use of Computational Methods in the Study of Endangered Languages: Vol. 1 Papers, pages 63–69, Online, March 2–3, 2021.

need a full linguistic description, while some already have established orthographies and a variety of descriptions available. For example, in the case of Komi our transcription choice is the existing orthography, which is also used in other resources (Gerstenberger et al., 2016, 32). Our spoken language corpus is connected with the entire NLP ecosystem of the language, which includes a Finite State Transducer (Rueter, 2000), well-developed dictionaries both online1 and in print (Rueter et al., 2020; Beznosikova et al., 2012; Alnajjar et al., 2019), several treebanks (Partanen et al., 2018) and also written language corpora (Fedina, 2019)2. We use this technical stack to annotate our corpus directly into the ELAN files (Gerstenberger et al., 2017), but also to create versions where identifiable information has been restricted (Partanen et al., 2020). Our goal is thereby not to describe the language from scratch, but to create a spoken language corpus that is not separate from the rest of the work and infrastructure on this language. From this point of view we need an ASR system that produces the contemporary orthography, and not purely phonetic or phonemic transcriptions. It can be expected that entirely undocumented languages and languages with a long tradition of documentation need very different ASR approaches, even though both fall under the umbrella of endangered language documentation.

In this work we expand on the use of transfer learning to improve the quality of a speech recognition system for dialectal Zyrian Komi. Our data consists of about 35 hours of transcribed speech that will be available as an independent dataset in the Language Bank of Finland (Blokland et al., forthcoming). While this collection is under preparation, the original raw multimedia is available by request in The Language Archive in Nijmegen (Blokland et al., 2021). This is a relatively large dataset for a low-resource language, but it is still nowhere near high-resource datasets such as Librispeech (Panayotov et al., 2015), which has about 1000 hours of English.

One of the largest challenges in our dataset is that there is significant code switching between Komi and Russian. This is a feature shared with other similar corpora (compare e.g. Shi et al., 2021). All speakers are bi- or multilingual and use several languages regularly, so there are large segments where the language is Russian, although the main language is Komi. There are also very fragmentary occurrences of Tundra Nenets, Kildin Saami and Northern Mansi, but these are so rare at the moment that we have not addressed them separately. In addition, none of the data is annotated for which language is being spoken, and we only have transcriptions in the contemporary Cyrillic orthographies of these languages, as explained above. We propose two possible methods to accommodate these properties of the data. First, we compare whether it is better to transfer from a high-resource language, English, or from the contact language, Russian. Second, we analyze the impact of constructing language models from different combinations of Komi and Russian sources. The goal is to make the language model more representative of the data and thereby improve performance.

2 Prior Work

A majority of the work on speech recognition focuses on improving the performance of models for high-resource languages. Small improvements may be made through advances such as improving the network and the information available to it, as in Han et al. (2020) or Li et al. (2019), though as performance increases the gains of these new methods decrease. Another avenue is to make these systems more robust to noise through data augmentation (Braun et al., 2017; Park et al., 2019). As with improving the networks, however, these improvements become more and more marginal as performance increases.

As more models for ASR become available as open source (Hannun et al., 2014; Pratap et al., 2019), it becomes easier to develop these tools for low-resource languages and to create best practice standards for doing so. This is the fundamental goal of Common Voice (Ardila et al., 2020). Others also work on individual languages, such as Fantaye et al. (2020) and Dalmia et al. (2018).

In the language documentation context we have seen a large number of experiments on endangered languages in the last few years, but often focusing on datasets with a single speaker. Under this constraint, a few hours of transcribed data have already been shown to result in relatively good accuracy, as shown by Adams et al. (2018). Also Partanen et al. (2020) report very good results on the extinct Samoyedic language Kamas, where

1 https://dict.fu-lab.ru
2 http://komicorpora.ru
the model was also trained with one speaker, for whom, however, a relatively large corpus exists. Under many circumstances it is realistic and important to record individual speakers in numerous recording sessions, and such collections appear to be numerous in archives containing past field recordings, so there is no doubt that single-speaker systems can also be useful, although they are not ideal.

Recently, Shi et al. (2021) also report very encouraging results on Yoloxóchitl Mixtec and Puebla Nahuatl, especially as the corpus contains multiple speakers. Our corpus is of a size comparable to that used in their experiments (Shi et al., 2021). Compared to their results, our Komi accuracy, including the latest results reported in this paper, is tens of percentage points worse than what could be expected from the size of our corpus. This calls for wider experimentation on our dataset using different systems, which, we hope, will reveal more about how the particularities of individual corpora impact the results.

Zahrer et al. (2020) also describe a language documentation project design where ASR tools are being integrated into actual workflows during the project. The end goal of our work is in line with this: we want Komi ASR to reach a level where it is useful for work on this language, and we see this happening through gradual steps where the system is improved through different experiments.

When it comes to the usability and accessibility of ASR systems, Adams et al. (2020) describe their work on a user-friendly interface for language workers to train and use ASR tools. Cox (2019) has created an ELAN plugin, and the same approach was recently extended to DeepSpeech by Partanen (2021).

3 Methodology

3.1 Data

The Russian speech corpus we use is from Mozilla's Common Voice project3 and contains about 105 hours of speech data (Ardila et al., 2020). The Komi data consists of about 35 hours of dialectal speech and is described in Hjortnaes et al. (2020b).

3.2 Data Preprocessing

To prepare both our Komi and Russian data, we split it into 8/1/1 training, dev, and testing sets and removed any sections which were too long or too short as defined by DeepSpeech. The alphabet is based on the Komi data and not on the text used to construct the language models, as it is what determines the output of the network. We obtained our English model from DeepSpeech's publicly available release models4.

3.3 DeepSpeech

We trained our models using Mozilla's open source DeepSpeech5 (Hannun et al., 2014), version 0.8.2, the latest release version at the time of these experiments. DeepSpeech is an end-to-end bidirectional LSTM neural network specifically designed for speech recognition. It consists of 5 hidden layers followed by a softmax layer, where the 4th layer is the LSTM layer. The other hidden layers all use the ReLU activation function. The whole structure can be seen in Figure 1, which shows an older version of the architecture with a unidirectional LSTM.

For this experiment we used a dropout of 10% and a learning rate of 0.0001 with batch sizes of 128 for the training, development, and testing sets. DeepSpeech automatically detects plateaus and reduces the learning rate by a factor of 10 when no further improvement is being made on the dev set.

We trained a Russian model using DeepSpeech from the standard random initialization with the hyperparameters defined above. DeepSpeech saves the best performing model, which we then use for transfer learning later.

3.4 Language Models

DeepSpeech outputs its best guess as to the transcription, but that guess is based entirely on the contents of the audio and does not account for spelling or punctuation. To address this, the output is passed through a function which attempts to maximize the weighted probability of the model output and a probabilistic language model with two tunable parameters. The first, α, determines how much the language model is allowed to edit the network output. The second, β, controls the insertion of spaces (Hannun et al., 2014). Our language models were constructed using KenLM (Heafield, 2011) on the 500,000 most common words in the relevant text corpus.

3 https://commonvoice.mozilla.org/
4 https://github.com/mozilla/DeepSpeech/releases
5 https://github.com/mozilla/DeepSpeech/
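The weighted combination that the decoder maximizes can be illustrated schematically. This is a simplification of the actual CTC beam-search scorer: the function, the toy hypotheses, and the probabilities below are invented for the example, and only the shape of the objective (acoustic score plus an α-weighted language model score plus a β-weighted word bonus) follows the description above.

```python
import math

def rescore(acoustic_logprob, lm_logprob, word_count, alpha, beta):
    # Schematic decoder objective: the acoustic score, an alpha-weighted
    # language model score, and a beta-weighted word-count bonus that
    # counters the language model's bias toward short transcripts.
    return acoustic_logprob + alpha * lm_logprob + beta * word_count

# Two invented hypotheses for the same audio: the acoustic model slightly
# prefers the second, but the language model strongly prefers the first.
hypotheses = [
    # (transcript, acoustic log-prob, LM log-prob, word count)
    ("ворсан тор", math.log(0.40), math.log(0.20), 2),
    ("ворса нтор", math.log(0.45), math.log(0.01), 2),
]
best = max(hypotheses,
           key=lambda h: rescore(h[1], h[2], h[3], alpha=1.0, beta=0.5))
print(best[0])
```

With α set to zero the decoder would follow the acoustic model alone; tuning α and β trades acoustic evidence against language model plausibility, which is exactly the search over parameters behind the scores reported later.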

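The 8/1/1 split and length filtering described in Section 3.2 can be sketched as follows. The duration bounds and the clip dictionaries are placeholders of our own for illustration; DeepSpeech's actual limits are not reproduced here.

```python
import random

def split_clips(clips, seed=0, min_s=0.5, max_s=10.0):
    """Drop clips outside the (placeholder) duration bounds, shuffle,
    and split the remainder 8/1/1 into train/dev/test portions."""
    usable = [c for c in clips if min_s <= c["duration"] <= max_s]
    random.Random(seed).shuffle(usable)
    n = len(usable)
    train_end = int(n * 0.8)
    dev_end = int(n * 0.9)
    return usable[:train_end], usable[train_end:dev_end], usable[dev_end:]

# Twelve toy clips; two fall outside the bounds and are filtered out.
clips = [{"path": f"clip{i}.wav", "duration": d}
         for i, d in enumerate(
             [1.2, 3.4, 0.1, 7.7, 2.0, 15.0, 4.4, 5.1, 2.2, 6.3, 3.3, 1.1])]
train, dev, test = split_clips(clips)
print(len(train), len(dev), len(test))
```

Fixing the shuffle seed keeps the split reproducible across runs, which matters when the same partition has to be reused for every source language and language model combination.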
Corpus               Size
Komi Speech          35 hours
Russian Speech       105 hours
Komi Literary        1.39M tokens
Komi Social          1.37M tokens
Komi Text Combined   2.76M tokens
Russian Wiki         786M tokens

Table 1: Token counts for the speech corpora and text corpora used to create the language models.
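Restricting a language model to the most frequent words (Section 3.4 uses the top 500,000) is a simple frequency cutoff over the training text. A minimal sketch with a toy corpus; the helper name and the example sentences are ours and not part of the KenLM toolchain.

```python
from collections import Counter

def top_vocabulary(lines, limit):
    """Count whitespace-separated tokens and keep the `limit` most
    frequent ones, as done when restricting the vocabulary of an
    n-gram language model."""
    counts = Counter(token for line in lines for token in line.split())
    return [word for word, _ in counts.most_common(limit)]

# Toy Komi-like corpus; real input would be millions of tokens.
corpus = [
    "ме муна гортӧ",
    "ме муна школаӧ",
    "сійӧ муна гортӧ",
]
print(top_vocabulary(corpus, 3))
```

With a 786-million-token Russian dump dominating a 2.76-million-token Komi corpus, a cutoff like this would mostly retain Russian words, which is the effect discussed in Section 5.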
Figure 1: Mozilla's DeepSpeech architecture (Meyer, 2019)

We constructed three language models using various quantities of the available Komi and Russian data. The first is exclusively Komi and is constructed using the Komi-Zyrian corpora6, which consist of a main corpus of 1.39 million words in the literary domain and a 1.37 million word corpus from social media (see Arkhangelskiy, 2019). This serves as our baseline language model. The second includes all available text data from both the Komi corpora and the Russian Wikipedia dump from September 1st, 2020, which contains over 786 million tokens. We did not expect this largely Russian model to perform especially well, but include it as an additional point of comparison. The last language model was constructed by cutting the Russian Wikipedia dump down to the same number of tokens as the combined Komi corpora, such that the model is based on an equal amount of Komi and Russian data.

3.5 Transfer Learning

We trained our models using the transfer learning feature built into DeepSpeech, which is based on Meyer (2019). This starts by training the model on a high-resource language, re-initializing the last n layers, and switching to the target language. Both Meyer and Hjortnaes et al. (2020b) found that re-initializing the last 2 layers, the softmax layer and the ReLU layer after the LSTM layer, yielded the best performance. The softmax function outputs a letter of the alphabet for each time step, and re-initializing is necessary to accommodate the target language having a different alphabet than the source language.

To train the Komi model, we used transfer learning from English to Komi with each of the three language models defined above, and transfer learning from the Russian model we trained to Komi, again with each of the three language models. We obtained the English model from DeepSpeech's released models for 0.8.2.

4 Results

The best Word Error Rate and best Character Error Rate were achieved by using English as the source language for the transfer and the language model constructed from equal parts Komi and Russian. This is, however, only a minor improvement compared to using the language model constructed from Komi alone. When using Russian as the source language, performance drops regardless of the language model used. The worst performance was achieved when using Russian as the source language together with the language model leveraging all available text.

6 http://komi-zyrian.web-corpora.net

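The CER and WER reported in Table 2 are normalized edit distances over characters and words respectively. A minimal reference implementation of this metric, ours rather than the scorer DeepSpeech ships:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, computed row by row."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    # Character Error Rate: edit distance over characters,
    # normalized by reference length.
    return edit_distance(ref, hyp) / len(ref)

def wer(ref, hyp):
    # Word Error Rate: the same distance over whitespace tokens.
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

print(round(cer("кывбур", "кывбор"), 3))        # one substituted character
print(round(wer("ме муна гортӧ", "ме муна"), 3))  # one dropped word
```

Because both rates are normalized by the reference, a single substituted character counts for less in a long utterance than in a short one, which is worth keeping in mind when comparing scores across corpora.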
                   Language Model (CER/WER)
Source Language    Komi          All Available Text   Komi & Russian
English            0.415/0.767   0.441/0.824          0.414/0.765
Russian            0.478/0.830   0.499/0.870          0.477/0.830

Table 2: The best Character Error Rate (CER) and Word Error Rate (WER) for each combination of source language and language model (lower is better). Note that the best WER and best CER may have come from different language model α and β parameters.

5 Discussion

The biggest difference in performance was between the English-sourced models and the Russian-sourced models. Though there is a significant amount of Russian data alongside the Komi due to code switching and borrowing, training from a Russian model did not yield any improvement. We interpret this as demonstrating that the amount of data in the source language is more important than the relevance of the source language to the dataset. Common Voice for English has over 1400 hours of validated data, compared to the 105 hours of Russian data.

It is unsurprising that the language model constructed from all available text caused a reduction in performance, as the focus of the dataset is on Komi. The 786 million tokens in the Russian Wikipedia corpus dwarf the 2.76 million tokens available for Komi, and because the language model was constructed using only the 500,000 most common words in the text, there was probably very little Komi accounted for. This language model combined with the English source language still achieves better performance than the Russian source language runs. We conclude from this that while both the source language and the language model are important, the source language is more important.

Additionally, it can be discussed whether an ASR system that relies so strongly on the language model is the best architecture for endangered languages, especially those with very agglutinative morphology. While a language model is capable of handling new words it has not encountered, it will presume them to be less likely regardless of their validity. In the case of Komi, new morphologically complex word forms are continuously encountered for the first time in new recordings, and there is no way that a relatively small corpus would cover them perfectly, not to mention the dialectal forms that are common. Still, the language model has proven to have an important role in our approach, and other systems could also benefit from using one in some form.

Multilinguality is by no means the only challenge the dataset offers. For example, the corpus has a large amount of overlapping speech, which is very frequent in the interviews. Most of the recordings have more than two participants, and participants were not discouraged from using interruptions and small verbal cues, as these are essential for normal communication and the goal was to collect natural speech. Additionally, the corpus has a large number of speakers, and many speakers are present in one recording only, so the untranscribed recordings are prone to contain speakers who are entirely unseen by the ASR model.

Further work is needed to effectively leverage resources in closely related languages and contact languages, as the current choice of English in transfer learning is not motivated by anything other than the amount of available data.

6 Conclusions

We can report continuous improvements over the earlier studies by Hjortnaes et al. (2020b) and Hjortnaes et al. (2020a), and our CER improves by several percentage points over the earlier best score. This appears, however, to be due simply to a different DeepSpeech version, as the test setup was otherwise identical. Our results indicate that when using transfer learning to create ASR tools for minority languages, the size of the source language corpus is more important than similarity or contact. Having a larger quantity of training data in the source language allows the model to learn to interpret speech on a phonetic level. This improvement in phonetic understanding is more valuable and has a greater impact on the performance of the model than a more relevant but lower-resource source language after transferring to the target language. We do note, however, that while this is true for this particular case, it does not necessarily hold for every source/target language pairing.

Acknowledgments

This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute. Niko Partanen and Michael Rießler collaborate within the project Language Documentation meets Language Technology: The Next Step in the Description of Komi, funded by the Kone Foundation, Finland.
References

Oliver Adams, Trevor Cohn, Graham Neubig, Hilaria Cruz, Steven Bird, and Alexis Michaud. 2018. Evaluating phonemic transcription of low-resource tonal languages for language documentation. In Proceedings of LREC 2018.

Oliver Adams, Benjamin Galliot, Guillaume Wisniewski, Nicholas Lambourne, Ben Foley, Rahasya Sanders-Dwyer, Janet Wiles, Alexis Michaud, Séverine Guillaume, Laurent Besacier, Christopher Cox, Katya Aplonova, Guillaume Jacques, and Nathan Hill. 2020. User-friendly automatic transcription of low-resource languages: Plugging ESPnet into Elpis.

Khalid Alnajjar, Mika Hämäläinen, Niko Partanen, Jack Rueter, et al. 2019. The open dictionary infrastructure for . In II Meždunarodnaâ naučnaâ konferenciâ Èlektronnaâ pis'mennost' narodov Rossijskoj Federacii: opyt, problemy i perspektivy, pages 49–51. Baškirskaâ ènciklopediâ.

R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber. 2020. Common Voice. In Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020).

Timofey Arkhangelskiy. 2019. Corpora of social media in minority Uralic languages. In Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages, pages 125–140.

Kelly Berkson, Samson Lotven, Peng Hlei Thang, Thomas Thawngza, Zai Sung, James C. Wamsley, Francis Tyers, Kenneth Van Bik, Sandra Kübler, Donald Williamson, et al. 2019. Building a Common Voice corpus for Laiholh (Hakha Chin). In Proceedings of the Workshop on Computational Methods for Endangered Languages, volume 2.

L. M. Beznosikova, E. A. Ajbabina, and R. I. Kosnyreva. 2012. Slovar' dialektov komi âzyka. Kola.

Rogier Blokland, Vasily Chuprov, Maria Fedina, Marina Fedina, Dmitry Levchenko, Niko Partanen, and Michael Rießler. 2021. Spoken Komi Corpus. The Language Archive version.

Rogier Blokland, Vasily Chuprov, Maria Fedina, Marina Fedina, Dmitry Levchenko, Niko Partanen, and Michael Rießler. Forthcoming. Spoken Komi Corpus. The Language Bank of Finland version.

Stefan Braun, Daniel Neil, and Shih-Chii Liu. 2017. A curriculum learning method for improved noise robustness in automatic speech recognition. In 2017 25th European Signal Processing Conference (EUSIPCO), pages 548–552. IEEE.

Christopher Cox. 2019. Persephone-ELAN: Automatic phoneme recognition for ELAN users. Version 0.1.2.

Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W. Black. 2018. Sequence-based multilingual low resource speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4909–4913. IEEE.

Tessfu Geteye Fantaye, Junqing Yu, and Tulu Tilahun Hailu. 2020. Investigation of automatic speech recognition systems via the multilingual deep neural network modeling methods for a very low-resource language, Chaha. Journal of Signal and Information Processing, 11(1):1–21.

Marina Serafimovna Fedina. 2019. Korpus komi âzyka kak baza dlâ naučnyh issledovanij. In II Meždunarodnaâ naučnaâ konferenciâ Èlektronnaâ pis'mennost' narodov Rossijskoj Federacii: opyt, problemy i perspektivy, pages 45–48. Baškirskaâ ènciklopediâ.

Ciprian Gerstenberger, Niko Partanen, Michael Rießler, and Joshua Wilbur. 2016. Utilizing language technology in the documentation of endangered Uralic languages. Northern European Journal of Language Technology, 4:29–47.

Ciprian Gerstenberger, Niko Partanen, and Michael Rießler. 2017. Instant annotations in ELAN corpora of spoken and written Komi, an endangered language of the Barents Sea region. In Antti Arppe, Jeff Good, Mans Hulden, Jordan Lachler, Alexis Palmer, and Lane Schwartz, editors, Workshop on the Use of Computational Methods in the Study of Endangered Languages (ComputEl-2), pages 57–66. Association for Computational Linguistics.

Wei Han, Zhengdong Zhang, Yu Zhang, Jiahui Yu, Chung-Cheng Chiu, James Qin, Anmol Gulati, Ruoming Pang, and Yonghui Wu. 2020. ContextNet: Improving convolutional neural networks for automatic speech recognition with global context. arXiv preprint arXiv:2005.03191.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Y. Ng. 2014. Deep Speech.

Kenneth Heafield. 2011. KenLM. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197.

Nils Hjortnaes, Timofey Arkhangelskiy, Niko Partanen, Michael Rießler, and Francis M. Tyers. 2020a. Improving the language model for low-resource ASR with online text corpora. In Proceedings of the 1st Joint SLTU and CCURL Workshop (SLTU-CCURL 2020), pages 336–341, Marseille. European Language Resources Association (ELRA).

Nils Hjortnaes, Niko Partanen, Michael Rießler, and Francis M. Tyers. 2020b. Towards a speech recognizer for Komi, an endangered and low-resource Uralic language. In Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 31–37. Association for Computational Linguistics, Vienna.

Jinyu Li, Rui Zhao, Hu Hu, and Yifan Gong. 2019. Improving RNN transducer modeling for end-to-end speech recognition. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 114–121. IEEE.

Josh Meyer. 2019. Multi-task and transfer learning in low-resource speech recognition. Ph.D. thesis, University of Arizona.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE.

Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. SpecAugment: A simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779.

Niko Partanen. 2021. DeepSpeech-ELAN. Version 0.1.1. 10.5281/zenodo.4486474.

Niko Partanen, Rogier Blokland, KyungTae Lim, Thierry Poibeau, and Michael Rießler. 2018. The first Komi-Zyrian Universal Dependencies treebanks. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 126–132. Association for Computational Linguistics.

Niko Partanen, Rogier Blokland, and Michael Rießler. 2020. A pseudonymisation method for language documentation corpora. In Tommi A. Pirinen, Francis M. Tyers, and Michael Rießler, editors, Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 1–8. Association for Computational Linguistics.

Vineel Pratap, Awni Hannun, Qiantong Xu, Jeff Cai, Jacob Kahn, Gabriel Synnaeve, Vitaliy Liptchinsky, and Ronan Collobert. 2019. Wav2Letter++: A fast open-source speech recognition system. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6460–6464. IEEE.

Jack Rueter, Paula Kokkonen, and Marina Fedina. 2020. Komi-Zyrian to X lexica. Version 0.5.1, December 7, 2020. 10.5281/zenodo.4309763.

Jack M. Rueter. 2000. Helsinkisa universitetyn kyv tuâlys Iźkaryn perymsa simpozium vylyn lyddömtor. In Permistika 6 (Proceedings of the Permistika 6 conference), pages 154–158.

Jiatong Shi, Jonathan D. Amith, Rey Castillo García, Esteban Guadalupe Sierra, Kevin Duh, and Shinji Watanabe. 2021. Leveraging end-to-end ASR for endangered language documentation: An empirical study on Yoloxóchitl Mixtec. ArXiv:2101.10877.

Alexander Zahrer, Andrej Zgank, and Barbara Schuppler. 2020. Towards building an automatic transcription system for language documentation: Experiences from Muyu. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2893–2900.