D5.1 ANALYSIS: Multi-User, Multimodal & Context Aware Value Added Services

Total pages: 16
File type: PDF, size: 1020 KB

Deliverable no.: 5.1
Deliverable Name: ANALYSIS: Multi-User, Multimodal & Context Aware Value Added Services
Project Title: Next-Generation Hybrid Broadcast Broadband
Project Acronym: HBB-NEXT
Call Identifier: FP7-ICT-2011-7
Starting Date: 01.10.2011
End Date: 31.03.2014
Contract no.: 287848
Work Package: 5
Nature: Report
Dissemination: Public
Authors: Oskar van Deventer (TNO), Mark Gülbahar (IRT), Sebastian Schumann, Radovan Kadlic (ST), Gregor Rozinaj, Ivan Minarik (STUBA), Joost de Wit (TNO), Christian Überall, Christian Köbel (THM)
Contributors: Jennifer Müller (RBB), Jozef Bán, Marián Beniak, Matej Féder, Juraj Kačur, Anna Kondelová, Luboš Omelina, Miloš Oravec, Jarmila Pavlovičová, Ján Tóth, Martin Turi Nagy, Miloslav Valčo, Mário Varga, Matúš Vasek (STUBA)
Due Date: 30.03.2012
Actual Delivery Date: 12.04.2012

HBB-NEXT Consortium 2012

Table of Contents

1. General introduction .......................................... 3
2. Multimodal interface for user/group-aware personalisation in a multi-user environment ... 6
2.1. Outline ..................................................... 6
2.2. Problem statement ........................................... 6
2.3. Gesture recognition ......................................... 7
2.3.1. Gesture taxonomies ........................................ 8
2.3.2. Feature extraction methods ................................ 9
2.3.2.1. Methods employing data gloves ........................... 9
2.3.2.2. Vision based methods .................................... 9
2.3.2.3. Hidden Markov Models .................................... 9
2.3.2.4. Particle Filtering ..................................... 10
2.3.2.5. Condensation algorithm ................................. 11
2.3.2.6. Finite State Machine Approach .......................... 11
2.3.2.7. Soft Computing and Connectionist Approach .............. 12
2.4. Face recognition ........................................... 12
2.4.1. Theoretical background ................................... 13
2.4.2. Team Expertise ........................................... 18
2.4.2.1. Skin Colour Analysis ................................... 18
2.4.2.2. Gabor Wavelet approach ................................. 18
2.4.2.3. ASM and SVM utilization ................................ 19
2.5. Multi-speaker identification ............................... 19
2.5.1. Theoretical background ................................... 20
2.5.2. Team Expertise background ................................ 29
2.6. Speech recognition ......................................... 29
2.6.1. Theoretical background ................................... 30
2.6.2. Speech feature extraction methods for speech recognition ... 31
2.6.2.1. HMM overview ........................................... 33
2.6.2.2. HMM variations and modifications ....................... 35
2.6.3. Team Expertise background ................................ 35
2.7. Speech synthesis ........................................... 36
2.7.1. Available solutions ...................................... 37
2.7.1.1. Formant Speech Synthesis ............................... 38
2.7.1.2. Articulatory Speech Synthesis .......................... 38
2.7.1.3. HMM-based Speech Synthesis ............................. 39
2.7.1.4. Concatenative Speech Synthesis ......................... 39
2.7.1.5. MBROLA ................................................. 40
2.7.2. Speech synthesis consortium background ................... 40
2.8. APIs for multimodal interfaces ............................. 44
2.8.1. Gesture recognition projects, overview ................... 44
2.8.2. Gesture Recognition Projects, Microsoft Kinect in-depth description ... 46
2.8.2.1. Projects ............................................... 49
2.8.2.2. Standards .............................................. 50
2.9. Gaps analysis .............................................. 57
2.9.1. Gesture recognition ...................................... 57
2.9.2. Face detection ........................................... 58
2.9.2.1. ASM, AAM: Gaps Analysis ................................ 59
2.9.3. Multi speaker recognition ................................ 60
2.9.4. Speech recognition ....................................... 61
2.9.5. Speech synthesis ......................................... 63
3. Context-aware and multi-user content recommendation .......... 66
3.1. Outline .................................................... 66
3.2. Problem statement .......................................... 66
3.3. Filtering types ............................................ 70
3.3.1. Content-based filtering .................................. 70
3.3.2. Collaborative filtering .................................. 72
3.4. User profiles .............................................. 75
3.4.1. Implicit and explicit feedback ........................... 76
3.4.2. User profiles in TNO's Personal Recommendation Engine Framework ... 76
3.5. Presentation of the recommendations ........................ 78
3.6. Evaluation/validation of the recommendation ................ 79
3.6.1. Accuracy