Weighted Automata in Text and Speech Processing

Mehryar Mohri, Fernando Pereira, Michael Riley
AT&T Research, 600 Mountain Avenue, Murray Hill, NJ 07974
{mohri,pereira,riley}@research.att.com

Abstract. Finite-state automata are a very effective tool in natural language processing. However, in a variety of applications, and especially in speech processing, it is necessary to consider more general machines in which arcs are assigned weights or costs. We briefly describe some of the main theoretical and algorithmic aspects of these machines. In particular, we describe an efficient composition algorithm for weighted transducers, and give examples illustrating the value of determinization and minimization algorithms for weighted automata.

1 Introduction

Finite-state acceptors and transducers have been successfully used in many natural-language-processing applications, for instance the compilation of morphological and phonological rules [5, 10] and the compact representation of very large dictionaries [8, 12]. An important reason for those successes is that complex acceptors and transducers can be conveniently built from simpler ones by using a standard set of algebraic operations (the standard rational operations together with transducer composition) that can be efficiently implemented [4, 2]. However, applications such as speech processing require the use of more general devices: weighted acceptors and weighted transducers, that is, automata in which transitions are assigned a weight in addition to the usual transition labels. We briefly sketch here the main theoretical and algorithmic aspects of weighted acceptors and transducers and their application to speech processing. Our novel contributions include a general way of representing recognition tasks with weighted transducers, the use of transducers to represent context dependencies in recognition, efficient algorithms for on-the-fly composition of weighted transducers, and efficient algorithms for determinizing and minimizing weighted automata, including an on-the-fly determinization algorithm to remove redundancies in the interface between a recognizer and subsequent language processing.

2 Speech processing

In our work we use weighted automata as a simple and efficient representation for all the inputs, outputs and domain information in speech recognition above the signal processing level. In particular, we use transducer composition to represent the combination of the various levels of acoustic, phonetic and linguistic information used in a recognizer [11]. For example, we may decompose a recognition task into a weighted acceptor O describing the acoustic observation sequence for the utterance to be recognized, a transducer A mapping acoustic observation sequences to context-dependent phone sequences, a transducer C that converts between sequences of context-dependent and context-independent phonetic units, a transducer D from context-independent unit sequences to word sequences, and a weighted acceptor M that specifies the language model, that is, the likelihoods of different lexical transcriptions (Figure 2).

[Figure 1. Models as Weighted Transducers. (a) The trivial observation acceptor: a chain of states t0, t1, ..., tn whose i-th arc is labeled oi. (b) A common CD-phone model topology: states s0, s1, s2 with self-loops oi:ε/p00(i), oi:ε/p11(i), oi:ε/p22(i), forward arcs oi:ε/p01(i), oi:ε/p12(i), and exit arc ε:π/p2f. (c) A word model for "data": d:ε/1, then ey:ε/.4 or ae:ε/.6, then dx:ε/.8 or t:ε/.2, then ax:"data"/1. (d) The triphonic context-dependency transducer: states such as qlc and qcr, with transitions such as γ:πr from qlc to qcr.]

The trivial acoustic observation acceptor O for the vector-quantized representation of a given utterance is depicted in Figure 1a. Each state represents a point in time ti, and the transition from ti−1 to ti is labeled with the name oi of the quantization cell that contains the acoustic parameter vector for the sample at time ti−1. For continuous-density acoustic representations, there would be a transition from ti−1 to ti labeled with a distribution name and the likelihood¹ of that distribution generating the acoustic-parameter vector, for each acoustic-parameter distribution in the acoustic model.

¹ For computational reasons, sums and products of probabilities are often replaced by minimizations and sums of negative log probabilities. Formally, this corresponds to changing the semiring of the weights. The algorithms we present here work for any semiring.
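To make the representation concrete, the following is a minimal sketch in Python of a weighted automaton as per-state arc lists, together with the construction of the trivial observation acceptor of Figure 1a. This is our illustration rather than the authors' implementation; the names Arc, WeightedFst and observation_acceptor are ours, and the weights are taken in the probability semiring.

    from dataclasses import dataclass, field

    @dataclass
    class Arc:
        """One weighted transition: input label, output label, weight, target state."""
        ilabel: str
        olabel: str
        weight: float
        nextstate: int

    @dataclass
    class WeightedFst:
        """A weighted transducer as per-state arc lists; an acceptor is the
        special case where ilabel == olabel on every arc."""
        start: int = 0
        finals: dict = field(default_factory=dict)   # final state -> final weight
        arcs: dict = field(default_factory=dict)     # state -> list of Arcs

        def add_arc(self, state, arc):
            self.arcs.setdefault(state, []).append(arc)

    def observation_acceptor(obs):
        """The trivial acceptor O of Figure 1a: a chain t0 -> t1 -> ... -> tn
        whose i-th arc carries the quantization-cell name o_i with weight 1."""
        o = WeightedFst(start=0, finals={len(obs): 1.0})
        for i, name in enumerate(obs):
            o.add_arc(i, Arc(name, name, 1.0, i + 1))
        return o

    O = observation_acceptor(["o1", "o2", "o3"])

Under the negative-log semiring of footnote 1, the unit weight 1 becomes 0, products of weights become sums, and sums become minimizations.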
[Figure 2. Recognition Cascade: O → A → C → D → M, taking observations to context-dependent phones, then phones, words, and finally sentences.]

Finite-state models have been used widely in speech recognition for quite a while, in the form of hidden Markov models and related probabilistic automata for acoustic modeling [1] and of various probabilistic language models. However, our approach of expressing the recognition task with transducer composition, and of representing context dependencies with transducers, allows a new flexibility and greater uniformity and modularity in building and optimizing recognizers.

The transducer A from acoustic-observation sequences to phone sequences is built from context-dependent (CD) phone models. A CD-phone model is given as a transducer from a sequence of acoustic observation labels to a specific context-dependent unit, and assigns to each acoustic sequence the likelihood that the specified unit produced it. Thus, different paths through a CD-phone model correspond to different acoustic realizations of a CD-phone. Figure 1b depicts a common topology for such a CD-phone model. The full acoustic-to-phone transducer A is then defined by an appropriate algebraic combination (Kleene closure of sum) of CD-phone models.

The form of C for triphonic CD-models is depicted in Figure 1d. For each context-dependent phone model γ, which corresponds to the (context-independent) phone πc in the context of πl and πr, there is a state qlc in C for the biphone πlπc, a state qcr for πcπr, and a transition from qlc to qcr with input label γ and output label πr. We have used transducers of that and related forms to map context-independent phonetic representations to context-dependent representations for a variety of medium to large vocabulary speech recognition tasks. We are thus able to provide full context dependency, with its well-known accuracy advantages [7], without having to build special-purpose context-dependency machinery into the recognizer.

The transducer D from phone sequences to word sequences is defined similarly to A. We first build word models as transducers from sequences of phone labels to a specific word, which assign to each phone sequence a likelihood that the specified word produced it. Thus, different paths through a word model correspond to different phonetic realizations of the word. Figure 1c shows a typical word model. D is then defined as an appropriate algebraic combination of word models.

Finally, the language model M, which may for example be an n-gram model of word sequence statistics, is easily represented as a weighted acceptor.

The overall recognition task can then be expressed as the search for the highest-likelihood string in the composition O ◦ A ◦ C ◦ D ◦ M of the various transducers just described, which is an acceptor assigning to each word sequence the likelihood that it could have generated the given acoustic observations. For efficiency, we use the standard Viterbi approximation and search for the highest-probability path rather than the highest-probability string.
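With weights mapped to negative log probabilities as in footnote 1, the Viterbi search for the highest-probability path becomes a single-source shortest-path problem over the composed acceptor O ◦ A ◦ C ◦ D ◦ M. The sketch below is our illustration under that assumption (weights non-negative, so Dijkstra's algorithm applies); viterbi_path is our name, and the arc lists are those of the WeightedFst sketch above.

    import heapq, math

    def viterbi_path(fst):
        """Lowest-cost path from the start state to a final state, assuming
        arc and final weights are negative log probabilities."""
        dist = {fst.start: 0.0}
        back = {}                                  # state -> (previous state, arc)
        queue = [(0.0, fst.start)]
        while queue:
            d, q = heapq.heappop(queue)
            if d > dist.get(q, math.inf):          # stale queue entry
                continue
            for arc in fst.arcs.get(q, []):
                nd = d + arc.weight                # "product" in the neg-log semiring
                if nd < dist.get(arc.nextstate, math.inf):
                    dist[arc.nextstate] = nd
                    back[arc.nextstate] = (q, arc)
                    heapq.heappush(queue, (nd, arc.nextstate))
        # Best final state, counting its final weight; then read off the labels.
        best = min(fst.finals, key=lambda f: dist.get(f, math.inf) + fst.finals[f])
        labels, q = [], best
        while q in back:
            q, arc = back[q]
            labels.append(arc.olabel)
        return list(reversed(labels)), dist.get(best, math.inf) + fst.finals[best]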
3 Theoretical definitions

In a recognition cascade such as the one we just discussed (Figure 2), each step implements a mapping from input-output pairs (r, s) to probabilities P(s|r). More formally, each transducer in the cascade implements a weighted transduction. A weighted transduction T is a mapping T : Σ* × Γ* → K, where Σ* and Γ* are the sets of strings over the alphabets Σ and Γ, and K is an appropriate weight structure², for instance the real numbers between 0 and 1 in the case of probabilities.

² Composition can be defined for any semiring (K, +, ·), and the transductions we are considering here can be defined in terms of formal power series [3, 6].

The right-most step of the cascade (Figure 2), the language model acceptor M, represents not a transduction but a weighted language. However, we can identify any weighted language L with the restriction of the identity transduction that assigns to (w, w) the same weight as L assigns to w, and in the rest of the paper we will use this to identify languages with transductions and acceptors with transducers as appropriate.

Given two transductions S : Σ* × Γ* → K and T : Γ* × ∆* → K, we can define their composition S ◦ T by

    (S ◦ T)(r, t) = Σ_{s ∈ Γ*} S(r, s) T(s, t)    (1)

For example, if S represents P(sl|si) and T represents P(sj|sl) in (2), then S ◦ T represents P(sj|si).

It is easy to see that composition ◦ is associative, that is, in any transduction cascade R1 ◦ · · · ◦ Rm the order of association of the ◦ operators does not matter.

Rational weighted transductions and languages are those that can be defined by application of appropriate generalizations of the standard Kleene operations (union-sum, concatenation and closure), and are also exactly those implemented by weighted finite-state automata (transducers and acceptors) [4, 3, 6]. We are thus justified in our abuse of language in using the same terms and symbols for rational transduction and finite-state transducer operations in what follows.

4 Efficient Composition of Weighted Finite-State Transducers

Composition is the key operation on weighted transducers.

[Figure (a): a transducer A, a chain over states 0–4 with arcs a:a, b:ε, c:ε, d:d.]
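As a baseline before the efficient algorithm, the following sketch (ours, reusing the WeightedFst and Arc classes above; compose is our name) implements equation (1) for epsilon-free machines over the probability semiring, pairing states of the two transducers on demand. It does not handle ε-labels such as those in the machines of Figure 1; handling those correctly and efficiently is the problem the composition algorithm of this section addresses.

    def compose(s, t):
        """Epsilon-free weighted composition S ∘ T (equation 1): states of the
        result are pairs (state of S, state of T); an arc is emitted whenever
        an output label of S matches an input label of T, with the weights
        multiplied (probability semiring)."""
        result = WeightedFst(start=0)
        pair_id = {(s.start, t.start): 0}     # pair of component states -> new id
        stack = [(s.start, t.start)]
        while stack:
            qs, qt = stack.pop()
            q = pair_id[(qs, qt)]
            if qs in s.finals and qt in t.finals:
                result.finals[q] = s.finals[qs] * t.finals[qt]
            for a in s.arcs.get(qs, []):
                for b in t.arcs.get(qt, []):
                    if a.olabel != b.ilabel:
                        continue
                    nxt = (a.nextstate, b.nextstate)
                    if nxt not in pair_id:        # create pair states lazily
                        pair_id[nxt] = len(pair_id)
                        stack.append(nxt)
                    result.add_arc(q, Arc(a.ilabel, b.olabel,
                                          a.weight * b.weight, pair_id[nxt]))
        return result

Because states of the result are created only as they are reached, the same scheme supports the on-the-fly composition mentioned in the introduction, where only the part of the composed machine actually explored by the search is ever built.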
