Examples of Lithuanian Voice Dialogue Applications

SPECOM'2006, St. Petersburg, 25-29 June 2006

Examples of Lithuanian voice dialogue applications

Algimantas Rudzionis*, Kastytis Ratkevicius*, Vytautas Rudzionis**, Rytis Maskeliunas*
* Speech Research Group, Kaunas University of Technology
** Kaunas Faculty of Humanities, Vilnius University
[email protected]

Abstract

This paper presents several speech technology demonstrations developed with the aim of showing the potential of speech technologies. All these applications must comply with several emerging voice-technology-oriented standards – SALT and VoiceXML – and use software platforms such as Microsoft Speech Server or IBM WebSphere in order to achieve the necessary level of compatibility with other applications. Since these platforms do not include Lithuanian text-to-speech synthesis and speech recognition engines, proprietary speech processing modules were developed and matched to the chosen standards and platforms. These demos could serve as a tool for evaluating speech technology capabilities by the authorities of telecommunication companies and other potential business customers or representatives of governmental organizations. They could also be used as an educational resource in the learning process.

1. Introduction

Speech processing is an important technology for enhanced computing because it provides a natural and intuitive interface for the user. People communicate with one another through conversation, so it is comfortable and efficient to use the same method for communication with computers. Voice technologies – speech recognition, text-to-speech, and speaker verification – are now mature enough to create a vital mode of customer contact, equally powerful as live agents and the Web. They have the potential to dramatically reduce the number of routine inquiries and transactions handled by agents and to boost customer satisfaction by offering easy-to-use, always-available access from any landline or mobile phone. Speech technology is also the future technology of e-business because it enables more natural, intuitive, and engaging customer service at lower cost. The numerous benefits of speech technology for e-business include:

• interaction with callers is easier and more natural;
• menus can be eliminated or flattened, for more subtle and intuitive navigation;
• call durations can be minimized, meaning less cost per transaction;
• interaction with customers can occur 24 hours a day, 7 days a week;
• customers interact with your business using their telephone or cellular telephone, resulting in continuous access to the customer base regardless of their location;
• individuals with physical or perceptual disabilities might have greater access to e-business services;
• enterprise branding is extended to a new channel – phone-based interaction;
• overall customer service costs are decreased;
• return on investment for speech application development often occurs in as few as three to six months;
• opportunities for integrating and streamlining business processes arise as speech applications are developed.

An effective dialogue is the key component of a successful interaction between a voice-only (telephony) application and a user. A voice-only application interacts with the user entirely without visual cues. The dialogue flow must be intuitive and natural enough to simulate two humans conversing. It must also provide the user with enough contextual and supporting information to understand the next action step at any point in the application. Because multimodal applications feature a graphical user interface (GUI) with which users interact, developers do not design dialogs for them. A hands-free application is an exception to this rule. Hands-free applications contain both a GUI and dialog-flow components, and provide users with both verbal and visual confirmations. A dashboard navigation system in a car is an example of a hands-free application: the user speaks to the application and the application speaks to the user, while a visual cue appears on a map.
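As a small illustration of this kind of voice-only dialogue design (not taken from the paper), the sketch below models the flow as a table of states, each with its own prompt and allowed replies, so the caller is always told what to say next. The state names, prompts and the stand-in recognize/speak functions are hypothetical.

```python
# A minimal sketch of a voice-only dialogue flow: every state names its
# prompt, the replies it accepts and the next state, so the caller always
# receives enough context to know the next action without visual cues.

DIALOGUE = {
    "greeting": {"prompt": "Welcome to the timetable service. Say 'departures' or 'arrivals'.",
                 "next": {"departures": "ask_city", "arrivals": "ask_city"}},
    "ask_city": {"prompt": "Which town are you travelling from?",
                 "next": {"*": "confirm_city"}},
    "confirm_city": {"prompt": "I heard {slot}. Say 'yes' to confirm or 'no' to repeat.",
                     "next": {"yes": "read_routes", "no": "ask_city"}},
    "read_routes": {"prompt": "Here are the routes ...", "next": {}},
}

def run_dialogue(recognize, speak):
    """Drive the flow; `recognize` and `speak` stand in for real engines."""
    state, slot = "greeting", ""
    while state:
        speak(DIALOGUE[state]["prompt"].format(slot=slot))
        if not DIALOGUE[state]["next"]:          # terminal state reached
            break
        slot = recognize()                       # e.g. "departures", "Kaunas", "yes"
        nxt = DIALOGUE[state]["next"]
        state = nxt.get(slot.lower(), nxt.get("*", state))  # re-prompt on misrecognition
```

In a telephony deployment the `speak` callback would be backed by a TTS engine and `recognize` by a grammar-constrained recognizer; the table itself stays independent of either engine.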
There are some alternatives for developing and deploying speech-enabled telephony applications:

• CAPI (Common ISDN (Integrated Services Digital Network) Application Programming Interface);
• Telephony API (TAPI) + Speech API (SAPI);
• Voice Server + Software Development Kit (SDK) + markup language.

The voice-based timetable for long-distance buses was created using CAPI: the user dials the known phone number and listens to spoken directions from the computer, selects the departure town and the arrival town by voice, and the computer presents some routes over the phone by reading prerecorded speech phrases [1]. This approach is efficient for telephony but is not well suited to combined speech and internet applications [2].

The second approach integrates telephony and speech. The SAPI application programming interface (API) dramatically reduces the code overhead required for an application to use speech recognition and text-to-speech, making speech technology more accessible and robust for a wide range of applications. SAPI provides a high-level interface between an application and speech engines. The two basic types of SAPI engines are text-to-speech (TTS) systems and speech recognizers. SAPI includes a speech grammar compiler tool, which makes it possible to design voice dialogues in an XML grammar format without changing the program source code. Microsoft's Telephony API (TAPI) provides developers with a standardized interface to a rich selection of telephony hardware. By utilizing TAPI, developers can write applications that support any device with a TAPI driver, also called a Telephony Service Provider (TSP). TAPI eliminates the need for developers to wrestle with device-specific APIs and enables well-behaved device sharing between TAPI applications. Unfortunately, TAPI is very complex and does not include direct support for useful speech technologies like text-to-speech (TTS) and speech recognition (SR). Many companies now offer TAPI controls (collections of ActiveX and VCL (Visual Component Library) controls) [3], which can be called from a telephony application; these controls release the developer from the drudgery of writing low-level code.
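As a rough illustration of how little application code SAPI requires, the sketch below drives SAPI's COM automation layer from Python via pywin32. It assumes a Windows machine with SAPI 5 installed; the grammar file name and rule name are invented for the example, and result handling through recognition-context events is omitted.

```python
# Sketch only: SAPI 5 automation objects driven from Python (pywin32).
import win32com.client

# Text-to-speech through the shared SAPI voice object.
voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Rate = 0                                    # default speaking rate
voice.Speak("Labas! Pasirinkite išvykimo miestą.")  # prompt read to the caller

# Speech recognition constrained by an external XML grammar file, so the
# dialogue vocabulary is changed by editing the grammar, not the program.
recognizer = win32com.client.Dispatch("SAPI.SpSharedRecognizer")
context = recognizer.CreateRecoContext()
grammar = context.CreateGrammar()
grammar.CmdLoadFromFile("cities.xml", 1)          # 1 = SLODynamic (hypothetical file)
grammar.CmdSetRuleState("city", 1)                # 1 = SGDSActive; "city" is a made-up rule name
# Recognition results would be delivered through events on `context`
# (win32com.client.WithEvents), which is beyond this sketch.
```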
The third approach integrates telephony, speech and the internet. So far there are only two kits for developing such realizations: Microsoft Speech Server (MSS) and IBM WebSphere Voice Server [4]. The IBM WebSphere Voice Server is a VoiceXML 2.0 (Voice eXtensible Markup Language) enabled speech environment. VoiceXML is aimed at developing telephony-based applications and brings the advantages of Web-based application delivery to IVR (Interactive Voice Response) applications. Differently from IBM, Microsoft uses SALT 1.0 (Speech Application Language Tags) within Microsoft Speech Server. SALT targets speech-enabled applications across all devices such as telephones, PDAs, tablet PCs, and desktop PCs [4]. The Microsoft Speech Application SDK (SASDK), version 1.0, enables developers to create two basic types of applications: telephony (voice-only) and multimodal (text, voice, and visual) [5]. Run from within the Visual Studio .NET environment, the SASDK is used to create Web-based applications only. The SASDK makes it easy for developers to utilize speech technology: graphical interfaces and drag-and-drop capabilities mask all the complexities, and all the .NET developer needs to know about speech recognition is how to interpret the resulting confidence score.

2. Lithuanian text-to-speech synthesizer LtMBR

In the duration model, factors fi account for both linguistic (e.g. context) and non-linguistic (e.g. speaking rate) influences; the values of the factors fi are found experimentally. After some simple experiments were carried out, the inherent and minimal durations of the sounds were estimated, the 7 most important factors were found and their values were calculated. The factors are as follows: vowel before vowel, vowel after vowel, vowel after voiced consonant, vowel before a group of consonants, consonant belonging to a group of consonants, consonant at the end of a phrase, vowel at the end of a phrase before a consonant, vowel at the end of a phrase. In addition to the mentioned factors one more factor was used – the speaking rate.

The curves of fundamental frequency F0 were modeled as a superposition of phrase intonation curves and pitch accent curves. The only difference is that another function for modeling the pitch accent curves was chosen [7]. The fundamental frequency contour can be calculated according to the following formulas:

ln F0(t) = ln Fb + Σ(i=1..I) Api Gp(t − T0i) + Σ(j=1..J) Aaj Ga((t − T1j) / dj),

Gp(t) = α² t e^(−αt) for t ≥ 0, and Gp(t) = 0 for t < 0,

Ga(t) = 1 + cos(πt) for −1 ≤ t < 1, and Ga(t) = 0 for t < −1 or t ≥ 1,

where Gp(t) represents the impulse response function of the phrase control mechanism and Ga(t) represents the impulse response function of the accent control mechanism. The symbols in these equations indicate: Fb – baseline value of the fundamental frequency, I – number of phrase commands, J – number of accent commands, Api – magnitude of the ith phrase command, Aaj – magnitude of the jth accent command, T0i – onset of the ith phrase command, T1j – middle of the jth accent command, dj – duration of the jth accent command, α – parameter for F0 shape control (equal to 3).

The pitch of the synthesized speech can be controlled by controlling the baseline of the fundamental frequency. The sample speech synthesizer "SampleTTSVoice" from Microsoft Speech SDK v. 5.1 was used when creating the interface of the synthesizer LtMBR, so the synthesizer LtMBR is compatible
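As a concrete numerical illustration of the F0 contour formulas given above, the following Python sketch evaluates ln F0(t) as the sum of the baseline, phrase commands and accent commands. The baseline value and the command magnitudes, timings and durations are illustrative assumptions, not values taken from the paper.

```python
# Numerical sketch of the F0 model quoted above (parameters are illustrative).
import numpy as np

def Gp(t, alpha=3.0):
    """Phrase control impulse response: alpha^2 * t * exp(-alpha*t) for t >= 0, else 0."""
    return np.where(t >= 0.0, alpha**2 * t * np.exp(-alpha * t), 0.0)

def Ga(t):
    """Accent control impulse response: 1 + cos(pi*t) on [-1, 1), else 0."""
    return np.where((t >= -1.0) & (t < 1.0), 1.0 + np.cos(np.pi * t), 0.0)

def f0_contour(t, Fb, phrase_cmds, accent_cmds, alpha=3.0):
    """ln F0(t) = ln Fb + sum_i Api*Gp(t - T0i) + sum_j Aaj*Ga((t - T1j)/dj)."""
    log_f0 = np.full_like(t, np.log(Fb))
    for Ap, T0 in phrase_cmds:                  # (magnitude, onset of phrase command)
        log_f0 += Ap * Gp(t - T0, alpha)
    for Aa, T1, d in accent_cmds:               # (magnitude, middle, duration of accent command)
        log_f0 += Aa * Ga((t - T1) / d)
    return np.exp(log_f0)

t = np.linspace(0.0, 3.0, 300)                  # a 3-second utterance sampled every 10 ms
f0 = f0_contour(t, Fb=110.0,                    # 110 Hz baseline (illustrative)
                phrase_cmds=[(0.5, 0.0)],
                accent_cmds=[(0.4, 0.8, 0.3), (0.3, 2.0, 0.4)])
```

Raising or lowering Fb shifts the whole contour, which matches the remark above that the pitch of the synthesized speech is controlled through the baseline of the fundamental frequency.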
