Preliminary Project Report

Total Pages: 16

File Type: PDF, Size: 1020 KB

Instituto Superior de Engenharia de Lisboa
Departamento de Engenharia de Electrónica e Telecomunicações e de Computadores

Spoken Language Interface for Mobile Devices
(Interface de Fala para Dispositivos Móveis)

by João Freitas

Submitted to the Departamento de Engenharia de Electrónica e Telecomunicações e de Computadores in partial fulfilment of the requirements for the Licenciatura degree in Computer Science Engineering.

ISEL, 15 September 2007

Author: Student no. 26068, DEETC
Certified by: Maria João Barros, project supervisor
Accepted by: António Luís Freixo Guedes Osório, programme coordinator, ___/___/______

Abstract

Spoken language technologies allow users to interact with all kinds of machines through speech. Human-computer interaction varies with the type of automated system, and mobile devices (Pocket PC and Smartphone) are no exception. The success of a speech application on a mobile device depends on a series of aspects related to the nature of the task, such as usage scenarios, type of user, application performance, required memory, background noise in the environment, and the position of the device relative to the user. To be successful, a speech interface must be superior to the alternative interfaces that perform the same task.

This work presents the problems, challenges, methodologies and optimizations involved in developing spoken language interfaces for mobile applications, and describes a version of Voice Command localized to European Portuguese. Voice Command is a product for Pocket PC and Smartphone devices that allows the user to command and control the device by voice. The localization was preceded by a study of spoken language interfaces for mobile devices and by the development of a compact European Portuguese speech recognition engine, which was then integrated into Voice Command together with an existing European Portuguese text-to-speech engine (a rough illustrative sketch of such a command-and-control loop appears after the contents listing below). The result is a beta version of the product fully localized to European Portuguese, together with a usability evaluation that compares the graphical interface with the spoken language interface on basic tasks on a mobile device and draws conclusions on how to improve the user experience of a spoken language interface.

Keywords: spoken language interface; mobile devices; Voice Command; European Portuguese.

Project supervisor: Maria João Barros, DEETC-ISEL, Instituto Superior de Engenharia de Lisboa

To my father

CONTENTS ..... i
LIST OF FIGURES ..... vi
LIST OF TABLES ..... viii
ACKNOWLEDGEMENTS ..... ix
GLOSSARY OF ABBREVIATIONS AND ACRONYMS ..... x
Chapter 1 ..... 1
Introduction ..... 1
1.1 Motivations ..... 2
1.1.1 Spoken Language Interface ..... 2
1.1.2 Mobility ..... 3
1.1.3 European Portuguese in speech applications ..... 4
1.2 Objectives ..... 5
1.2.1 Study and analysis of spoken language technology ..... 5
1.2.2 Mobile application development ..... 5
1.2.3 Localization of Voice Command to EP ..... 6
1.3 Scope ..... 6
1.4 Audience ..... 7
1.5 Document Structure ..... 7
Chapter 2 ..... 9
Spoken Language Technology ..... 9
2.1 Spoken language systems ..... 9
2.1.1 Automatic speech recognition ..... 9
2.1.1.1 Acoustic-phonetic ..... 10
2.1.1.2 Pattern recognition ..... 10
2.1.1.3 Artificial intelligence ..... 12
2.1.2 Text-to-speech synthesis ..... 12
2.1.2.1 Text analysis module ..... 13
2.1.2.2 Phonetic analysis module ..... 13
2.1.2.3 Prosodic analysis module ..... 14
2.1.2.4 Speech synthesis module ..... 14
2.2 Hidden Markov Models .....
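The report body is not included in this excerpt, but the command-and-control loop the abstract describes (a compact recognizer constrained to a command vocabulary, paired with a text-to-speech engine for spoken feedback) can be sketched roughly as follows. This is a loose illustration only, not the Voice Command implementation: off-the-shelf Python packages (`SpeechRecognition`, `pyttsx3`) stand in for the embedded engines, and the command vocabulary and locale code are invented.

```python
from typing import Optional

import pyttsx3                   # pip install pyttsx3
import speech_recognition as sr  # pip install SpeechRecognition (plus PyAudio for microphone input)

# Invented command vocabulary; the real Voice Command grammar is far richer.
COMMANDS = {"ligar", "calendário", "iniciar", "tocar"}

def listen_for_command(language: str = "pt-PT") -> Optional[str]:
    """Recognize one utterance and return it only if it matches a known command."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # mobile use implies background noise
        audio = recognizer.listen(source)
    try:
        heard = recognizer.recognize_google(audio, language=language).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return None
    return heard if heard in COMMANDS else None

def speak(text: str) -> None:
    """Spoken feedback, standing in for the European Portuguese TTS engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    command = listen_for_command()
    speak(f"A executar {command}" if command else "Comando não reconhecido")
```

Restricting recognition to a small, fixed command set is what keeps this kind of interface usable on a resource-limited device; free-form dictation would need a much larger acoustic and language model.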
Recommended publications
  • Secure Access 6.5 Supported Platforms Guide
    Platform Guide: SA Supported Platforms, Service Package Version 6.5
    Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA, 408 745 2000 or 888 JUNIPER, www.juniper.net
    September 1, 2009

    Contents
    Introduction ..... 1
    SA Hardware Requirements ..... 1
    Platform Support ..... 1
        Qualified Platform ..... 1
        Compatible Platform ..... 1
        Multiple Language Support ..... 2
        Web and File Browsing ..... 3
        Client-side Java Applets ..... 4
        Secure Terminal Access ..... 5
        Java-Secure Application Manager (J-SAM) ..... 6
        Windows version of Secure Application Manager (W-SAM) ..... 7
        Network Connect ..... 8
  • General Subtitling FAQs
    General Subtitling FAQs

    What's the difference between open and closed captions?
    Open captions are sometimes referred to as ‘burnt-in’ or ‘in-vision’ subtitles; they are generally encoded as a permanent part of the video image. Closed captions is a generic term for viewer-selectable subtitles, generally transmitted as a discrete data stream in the VBI and then decoded and displayed as text by the TV receiver, e.g. the Line 21 Closed Captioning system in the USA and Teletext subtitles, using line 335, in Europe.

    What's the difference between "live" and "offline" subtitling?
    Live subtitling is the real-time captioning of live programmes, typically news and sports. This is achieved using either speech recognition systems or stenographic (steno) keyboards. Offline subtitling is created by viewing previously recorded material. This typically produces better results because the subtitle author can ensure an accurate translation/transcription with no spelling mistakes, subtitles timed to coincide precisely with the dialogue, and subtitles positioned to avoid obscuring other important on-screen features.

    Which speech recognition packages does Starfish support?
    Starfish supports speech recognition packages from IBM and Dragon and other suppliers that conform to the Microsoft Speech API. The choice of software is usually dictated by the availability of a specific language.

    What's the difference between subtitling and captioning?
    The use of these terms varies in different parts of the world. Subtitling often refers to the technique of using open captions for language translation of foreign programmes.
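Offline subtitling, as described above, comes down to pairing an accurate transcription with precise timing. Purely as an illustration (not Starfish's tooling), a minimal sketch of emitting one timed SubRip (SRT) cue might look like this; the cue text and timings are invented:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp, e.g. 00:01:02,500."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def srt_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one SubRip cue block: index, time range, subtitle text, blank line."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

# Invented example: one transcribed line timed to coincide with the dialogue.
print(srt_cue(1, 62.5, 65.0, "Good evening, and welcome to the news."))
```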
  • PSG Commercial Handheld Datasheet
    The world is racing ahead. Don't get left behind.
    HP iPAQ rw6815 Personal Messenger

    The Personal Messenger is small but offers extensive connectivity. It lets you make phone calls, send text messages, use instant messaging, browse the web, check e-mail and more.

    Uninterrupted connectivity: keep talking, messaging and making connections. With a single press of a button you can reach friends, family and colleagues. Wireless GSM technology lets you talk and send text messages from many places around the world. There are more places than you think where you can access e-mail, connect to the Internet and use instant messaging. GPRS/EDGE wireless technology provides fast connectivity on the move, giving you constant access to information while travelling, wherever you are: at work, at home, at the airport, on a university campus or in a café with Wi-Fi access.

    Uninterrupted productivity: constant access to your notes, storage and presentations. Work with passion. Microsoft® Windows Mobile® 5.0, Phone Edition with Direct Push technology delivers e-mail and data straight to the HP iPAQ messenger. Microsoft Office lets you view presentations, spreadsheets and documents, so you can work anywhere, at any time. Move files and store them: the mini-SD memory card slot makes it easy to transfer music, photos and other files between the HP iPAQ and other devices.

    Uninterrupted access to multimedia: photos, music and video. Point, press, and the picture is ready. The built-in 2-megapixel HP Photosmart digital camera lets you quickly capture photos and video and send them wirelessly. Let the fun carry you away.
  • Pointsec Mobile Smartphone (Windows Mobile) 3.1.1 © Pointsec Mobile Technologies AB, 2007, a Check Point Software Technologies Ltd
    Release Notes - Confidential
    Pointsec Mobile Smartphone (Windows Mobile) 3.1.1
    © Pointsec Mobile Technologies AB, 2007, a Check Point Software Technologies Ltd. Company.

    For full documentation, please see these documents:
    • Pointsec Mobile Smartphone (Windows Mobile) Installation Guide
    • Pointsec Mobile Smartphone (Windows Mobile) Administrator’s Guide
    • Pointsec Administration Console Administrator’s Guide

    Contents
    About this Document ..... 2
    About Pointsec Mobile ..... 2
    New in this Release ..... 2
    Fixed in this Release ..... 2
    Supported Smartphones ..... 3
    Hardware Requirements ..... 3
    Tested 3rd-party Software .....
  • Speech Recognition Technology for Disabilities Education
    J. EDUCATIONAL TECHNOLOGY SYSTEMS, Vol. 33(2) 173-184, 2004-2005

    SPEECH RECOGNITION TECHNOLOGY FOR DISABILITIES EDUCATION
    K. Wendy Tang, Gilbert Eng, Ridha Kamoua, Wei Chern Chu, Victor Sutan, Guofeng Hou, Omer Farooq
    Stony Brook University, New York

    ABSTRACT
    Speech recognition is an alternative to traditional methods of interacting with a computer, such as textual input through a keyboard. An effective system can replace or reduce reliance on standard keyboard and mouse input. This can especially assist dyslexic students who have problems with character or word use and manipulation in textual form, and students with physical disabilities that affect their data entry or their ability to read, and therefore to check, what they have entered. In this article, we summarize the current state of available speech recognition technologies and describe a student project that integrates speech recognition technologies with Personal Digital Assistants to provide a cost-effective and portable health monitoring system for people with disabilities. We are hopeful that this project may inspire more student-led projects for disabilities education.

    1. INTRODUCTION
    Speech recognition allows "hands-free" control of various electronic devices. It is particularly advantageous to physically disabled persons. Speech recognition can be used in computers to control the system, launch applications, and create "print-ready" dictation, and thus provide easy access to persons with hearing or vision impairments. For example, a hearing-impaired person can use a microphone to capture another's speech and then use speech recognition technologies to convert the speech to text. (© 2004, Baywood Publishing Co., Inc.)
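As a loose illustration of that speech-to-text step (not the authors' PDA-based system), a minimal sketch using the community `SpeechRecognition` Python package might look like the following; the WAV file name and the choice of the free Google Web Speech backend are assumptions:

```python
import speech_recognition as sr  # pip install SpeechRecognition

def transcribe_wav(path: str) -> str:
    """Convert captured speech in a WAV file to text."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)  # read the whole file
    try:
        # Free Google Web Speech backend; other engines could be swapped in.
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""  # speech was unintelligible
    except sr.RequestError as err:
        raise RuntimeError(f"Recognition service unavailable: {err}")

if __name__ == "__main__":
    print(transcribe_wav("captured_speech.wav"))  # hypothetical recording
```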
  • Designing Energy and User Efficient Interactions with Mobile Systems
    Designing Energy and User Efficient Interactions with Mobile Systems
    Lu Luo, CMU-ISR-08-102, April 2008
    School of Computer Science, Institute for Software Research, Carnegie Mellon University, Pittsburgh, PA 15213
    Thesis Committee: Daniel P. Siewiorek (Chair), Bonnie E. John, Priya Narasimhan, Diana Marculescu (Department of Electrical and Computer Engineering)
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Copyright © 2008 Lu Luo

    This research was sponsored by the Defense Advanced Project Agency (DARPA) under Contract No. NBCHD030010, the National Science Foundation (NSF) under Grant Nos. EEEC-540865 and 0205266, the Office of Naval Research (ONR), N00014-03-1-0086, and HP Labs. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies or endorsements, either express or implied, of DARPA, the NSF, ONR, HP Labs, Carnegie Mellon University, the U.S. Government, or any other entity.

    Keywords: Mobile computing, energy efficiency, user interaction, user interface design, cognitive modeling, ubiquitous computing

    Abstract
    Mobile computing has thrived to provide unprecedented user experiences beyond the boundary of desks and wires. The increasing demand for mobility faces two major challenges: reduced form factor and limited energy supply. Although each challenge has been addressed by significant research efforts, the correlation between them is underexplored. Energy efficiency should be integrated as an important metric of user interaction design in mobile systems. By carefully considering the specific requirements of the user and the context of usage, energy efficiency can be achieved without sacrificing user performance and satisfaction, and interaction design that facilitates user efficiency can also promote energy efficiency.
  • Mobile Phone Models Compatible with This Software
    Mobile phone models compatible with this software

    Phones with the BlackBerry operating system: 1. Alltel BlackBerry 7250 2. Alltel BlackBerry 8703e 3. Sprint BlackBerry Curve 8530 4. Sprint BlackBerry Pearl 8130 5. Alltel BlackBerry 7130 6. Alltel BlackBerry 8703e 7. Alltel BlackBerry 8830 8. Alltel BlackBerry Curve 8330 9. Alltel BlackBerry Curve 8530 10. Alltel BlackBerry Pearl 8130 11. Alltel BlackBerry Tour 9630 12. Alltel Pearl Flip 8230 13. AT&T BlackBerry 7130c 14. AT&T BlackBerry 7290 15. AT&T BlackBerry 8520 16. AT&T BlackBerry 8700c 17. AT&T BlackBerry 8800 18. AT&T BlackBerry 8820 19. AT&T BlackBerry Bold 9000 20. AT&T BlackBerry Bold 9700 21. AT&T BlackBerry Curve 22. AT&T BlackBerry Curve 8310 23. AT&T BlackBerry Curve 8320 24. AT&T BlackBerry Curve 8900 25. AT&T BlackBerry Pearl 26. AT&T BlackBerry Pearl 8110 27. AT&T BlackBerry Pearl 8120 28. BlackBerry 5810 29. BlackBerry 5820 30. BlackBerry 6210 31. BlackBerry 6220 32. BlackBerry 6230 33. BlackBerry 6280 34. BlackBerry 6510 35. BlackBerry 6710 36. BlackBerry 6720 37. BlackBerry 6750 38. BlackBerry 7100g 39. BlackBerry 7100i 40. BlackBerry 7100r 41. BlackBerry 7100t 42. BlackBerry 7100v 43. BlackBerry 7100x 44. BlackBerry 7105t 45. BlackBerry 7130c 46. BlackBerry 7130e 47. BlackBerry 7130g 48. BlackBerry 7130v 49. BlackBerry 7210 50. BlackBerry 7230 51. BlackBerry 7250 52. BlackBerry 7270 53. BlackBerry 7280 54. BlackBerry 7290 55. BlackBerry 7510 56. BlackBerry 7520 57. BlackBerry 7730 58. BlackBerry 7750 59. BlackBerry 7780 60. BlackBerry 8700c 61. BlackBerry 8700f 62. BlackBerry 8700g 63. BlackBerry 8700r 64.
  • Automated Regression Testing Approach to Expansion and Refinement of Speech Recognition Grammars
    University of Central Florida STARS
    Electronic Theses and Dissertations, 2004-2019
    2008

    Automated Regression Testing Approach To Expansion And Refinement Of Speech Recognition Grammars
    Raul Dookhoo, University of Central Florida
    Part of the Computer Sciences Commons, and the Engineering Commons

    Find similar works at: https://stars.library.ucf.edu/etd. University of Central Florida Libraries http://library.ucf.edu. This Masters Thesis (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Electronic Theses and Dissertations, 2004-2019 by an authorized administrator of STARS. For more information, please contact [email protected].

    STARS Citation: Dookhoo, Raul, "Automated Regression Testing Approach To Expansion And Refinement Of Speech Recognition Grammars" (2008). Electronic Theses and Dissertations, 2004-2019. 3503. https://stars.library.ucf.edu/etd/3503

    AUTOMATED REGRESSION TESTING APPROACH TO EXPANSION AND REFINEMENT OF SPEECH RECOGNITION GRAMMARS
    by Raul Avinash Dookhoo, B.S. University of Guyana, 2004
    A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the School of Electrical Engineering and Computer Science in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida, Fall Term 2008
    © 2008 Raul Avinash Dookhoo

    ABSTRACT
    This thesis describes an approach to automated regression testing for speech recognition grammars. A prototype Audio Regression Tester called ART has been developed using Microsoft’s Speech API and C#. ART allows a user to perform any of three tasks: automatically generate a new XML-based grammar file from standardized SQL database entries, record and cross-reference audio files for use by an underlying speech recognition engine, and perform regression tests with the aid of an oracle grammar.
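ART's actual grammar schema and database layout are not shown in this excerpt. Purely as a hedged sketch, generating a simple SRGS-style XML command grammar from rows in a SQL store (here a SQLite file stands in) might look roughly like this; the table and column names are invented:

```python
import sqlite3
import xml.etree.ElementTree as ET

def build_grammar(db_path: str, out_path: str) -> None:
    """Emit a minimal SRGS XML grammar whose root rule lists phrases pulled from a database."""
    # Hypothetical schema: a table `phrases` with a `text` column.
    rows = sqlite3.connect(db_path).execute("SELECT text FROM phrases").fetchall()

    grammar = ET.Element(
        "grammar",
        {
            "xmlns": "http://www.w3.org/2001/06/grammar",
            "version": "1.0",
            "xml:lang": "en-US",
            "root": "commands",
        },
    )
    rule = ET.SubElement(grammar, "rule", {"id": "commands", "scope": "public"})
    one_of = ET.SubElement(rule, "one-of")
    for (text,) in rows:
        ET.SubElement(one_of, "item").text = text  # one alternative per database phrase

    ET.ElementTree(grammar).write(out_path, encoding="utf-8", xml_declaration=True)

# Example: build_grammar("phrases.db", "commands.grxml")
```

Regenerating the grammar from the database on each test run is what makes regression testing against an oracle grammar repeatable: any drift in the phrase set shows up as a diff in recognition results.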
  • Speech@Home: an Exploratory Study
    Speech@Home: An Exploratory Study
    A.J. Brush, Paul Johns, Kori Inkpen, Brian Meyers
    Microsoft Research, 1 Microsoft Way, Redmond, WA 98052 USA
    [email protected]

    Abstract
    To understand how people might use a speech dialog system in the public areas of their homes, we conducted an exploratory field study in six households. For two weeks each household used a system that logged motion and usage data, recorded speech diary entries and used Experience Sampling Methodology (ESM) to prompt participants for additional examples of speech commands. The results demonstrated our participants' interest in speech interaction at home, in particular for web browsing, calendaring and email tasks, although there are still many technical challenges that need to be overcome. More generally, our study suggests the value of using speech to enable a wide range of interactions.

    Keywords: Speech, home technology, diary study, domestic
    ACM Classification Keywords: H.5.2 User Interfaces: Voice I/O.
    General Terms: Human Factors, Experimentation
    Copyright is held by the author/owner(s). CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. ACM 978-1-4503-0268-5/11/05.

    Introduction
    The potential for using speech in home environments has long been recognized. Smart home research has suggested using speech dialog systems for controlling home infrastructure (e.g. [5, 8, 10, 12]), and a variety of novel home applications use speech interfaces, such as cooking support systems [2] and the Nursebot personal robotic assistant for the elderly [14]. We conducted a two week exploratory speech diary study in six households to understand whether or not speech interaction was desirable, and, if so, what types
  • Personalized Speech Translation Using Google Speech API and Microsoft Translation API
    International Research Journal of Engineering and Technology (IRJET)
    e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume: 07, Issue: 05, May 2020, www.irjet.net

    Personalized Speech Translation using Google Speech API and Microsoft Translation API
    Sagar Nimbalkar, Tekendra Baghele, Shaifullah Quraishi, Sayali Mahalle, Monali Junghare
    Students, Dept. of Computer Science and Engineering, Guru Nanak Institute of Technology, Maharashtra, India

    Abstract: Speech translation technology enables speakers of different languages to communicate. It is therefore of enormous value to mankind in terms of science, cross-cultural exchange and worldwide trade. Hence we propose a system that can bridge this language barrier. In this paper we present a personalized speech translation system using the Google Speech API and the Microsoft Translation API, personalized in the sense that the user can choose which languages the system uses for input and output.

    ... inherent in globalization [6]. Speech-to-speech translation systems are often used for applications in a specific situation, such as supporting conversations in non-native languages. The demand for trans-lingual conversations, triggered by IT technologies, has boosted research activities on speech-to-speech technology. Most such systems consist of three modules, namely speech recognition, machine translation and text-to-speech synthesis. Speech recognition is an interdisciplinary sub-field of computational linguistics that
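The three-module pipeline described above (speech recognition, machine translation, text-to-speech) can be outlined as follows. This is an illustrative sketch only, not the authors' code: it uses the community `SpeechRecognition` package as a front end to the Google Web Speech service, the Microsoft Translator v3 REST endpoint, and the offline `pyttsx3` synthesizer; the subscription key, region, and language codes are placeholders.

```python
import requests
import pyttsx3                   # pip install pyttsx3
import speech_recognition as sr  # pip install SpeechRecognition (plus PyAudio)

TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
SUBSCRIPTION_KEY = "<your-translator-key>"   # placeholder
SUBSCRIPTION_REGION = "<your-region>"        # placeholder

def recognize(language: str) -> str:
    """Module 1: capture speech from the microphone and return text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio, language=language)

def translate(text: str, source: str, target: str) -> str:
    """Module 2: translate text with the Microsoft Translator v3 REST API."""
    response = requests.post(
        TRANSLATOR_ENDPOINT,
        params={"api-version": "3.0", "from": source, "to": target},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": SUBSCRIPTION_REGION,
        },
        json=[{"text": text}],
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

def speak(text: str) -> None:
    """Module 3: synthesize the translated text."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    # User-selected input/output languages make the translation "personalized".
    heard = recognize("en-US")
    speak(translate(heard, source="en", target="pt"))
```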
  • License Clerks
    TORQ Analysis of Paralegals and Legal Assistants to License Clerks

    ANALYSIS INPUT
    Transfer Title: From Title: Paralegals and Legal Assistants (23-2011.00); To Title: License Clerks (43-4031.03); Labor Market Area: Maine Statewide
    O*NET Filters: Abilities: Importance Level 50, Weight 1; Skills: Importance Level 69, Weight 1; Knowledge: Importance Level 69, Weight 1

    TORQ RESULTS
    Grand TORQ: 93
    Ability TORQ Level: 96 | Skills TORQ Level: 91 | Knowledge TORQ Level: 91

    Gaps To Narrow if Possible (Ability: Level / Gap / Impt): No Critical Gaps Recorded!
    Upgrade These Skills (Skill: Level / Gap / Impt): Speaking: 76 / 13 / 83
    Knowledge to Add (Knowledge: Level / Gap / Impt): Customer and Personal Service: 88 / 32 / 73; Transportation: 12 / 2 / 76

    LEVEL and IMPT (IMPORTANCE) refer to the target occupation, License Clerks. GAP refers to the level difference between Paralegals and Legal Assistants and License Clerks.

    ASK ANALYSIS
    Ability Level Comparison - Abilities with importance scores over 50
    Description: Paralegals and Legal Assistants / License Clerks / Importance
    Oral Comprehension: 67 / 51 / 75
    Oral Expression: 66 / 53 / 75
    Written Comprehension: 69 / 50 / 72
    Written Expression: 64 / 48 / 65
    Speech Recognition: 59 / 41 / 62
    Speech Clarity: 53 / 44 / 62
    Near Vision: 71 / 51 / 59
    Problem Sensitivity: 51 / 42 / 53
    Deductive Reasoning: 59 / 44 / 50
    Inductive Reasoning: 64 / 42 / 50
    Information Ordering: 55 / 44 / 50
    Selective Attention: 42 / 39 / 50

    Skill Level Comparison - Abilities with importance
    (Jul-07-2009 - TORQ Analysis, Page 1 of 53. Copyright 2009. Workforce Associates, Inc.)
  • Acer Acer S200 F1 Neotouch Acer Acer E101 Asus Asus 1210 VDA
    Acer Acer S200 F1 neoTouch Acer Acer E101 Asus Asus 1210 VDA Audiovox/UTStarcom/PCD SMT-5800 Audiovox/UTStarcom/PCD PPC-6700 Audiovox/UTStarcom/PCD XV-6700 Audiovox/UTStarcom/PCD XV-6850 Audiovox/UTStarcom/PCD XV-6875 Touch Pro 2 Audiovox/UTStarcom/PCD XV-6900 Dopod Dopod S1 Dopod Dopod D600 Dopod Dopod P800 Dopod Dopod P860 Dopod Dopod T2222 Dopod Dopod T3238 Dopod Dopod T8288 Dopod Dopod T8388 Garmin Nuvi 200 Garmin Oregon 200 Garmin Nuvi 200W Garmin Nuvi 205 Garmin Nuvi 205W Garmin Nuvi 250 Garmin Nuvi 250W Garmin Nuvi 255 Garmin Nuvi 255W Garmin Nuvi 260W Garmin Nuvi 265 Garmin Nuvi 265W Garmin Nuvi 275 Garmin Nuvi 285W Garmin Nuvi 350 Garmin Nuvi 360 Garmin Oregon 450 Garmin Oregon 450T Garmin Zumo 450 Garmin Nuvi 465 Garmin Nuvi 500 Garmin Nuvi 550 Garmin Oregon 550 Garmin Oregon 550T Garmin Zumo 550 Garmin GPSmap 620 Garmin GPSmap 640 Garmin Nuvi 660 Garmin Zumo 660 Garmin Zumo 665 Garmin Nuvi 680 NA Garmin Nuvi 710 Garmin Nuvi 750 Garmin Nuvi 760 Garmin Nuvi 765 Garmin Nuvi 780 Garmin Nuvi 855 Garmin Nuvi 1200 Garmin Nuvi 1210T Garmin Nuvi 1240 Garmin Nuvi 1250 Garmin Nuvi 1300 Garmin Nuvi 1310 Garmin Nuvi 1350 Garmin Nuvi 1390 Garmin Nuvi 1450 Garmin Nuvi 1490 Garmin Nuvi 1690 Garmin Nuvi 1690 BMW edition Garmin Street Pilot 2730 Garmin Nuvi 3760 Garmin Nuvi 3790 Garmin Nuvi 5000 HTC HTC T8585 Touch HD2 HTC HTC S52x Dash 3G HTC HTC S710 HTC HTC S730 HTC HTC S740 HTC HTC T3333 Mega HTC HTC Touch2 HTC HTC P3450 Touch HTC HTC P3470 HTC HTC P3650 Touch Cruise HTC HTC P3700 Touch Diamond HTC HTC T4242 Touch Cruise HTC HTC XV6175 HTC