European Conference on Disability, Virtual Reality and Associated Technologies (ECDVRAT 1998)


Sharkey, P., Rose, F. D. and Lindström, J.-I., eds. (1998) Proceedings of the 2nd European Conference on Disability, Virtual Reality and Associated Technologies (ECDVRAT 1998). ICDVRAT. The University of Reading, Reading, UK, 265pp. ISBN 0704911418. Available at http://centaur.reading.ac.uk/27455/

It is advisable to refer to the publisher's version if you intend to cite from the work. See Guidance on citing. Publisher: The University of Reading.

All outputs in CentAUR (Central Archive at the University of Reading, www.reading.ac.uk/centaur) are protected by Intellectual Property Rights law, including copyright law. Copyright and IPR are retained by the creators or other copyright holders. Terms and conditions for use of this material are defined in the End User Agreement.

The 2nd European Conference on Disability, Virtual Reality and Associated Technologies: Proceedings

Edited by: Paul Sharkey (Programme Chair), David Rose (Conference Co-Chair), Jan-Ingvar Lindström (Conference Co-Chair)

10-11 September 1998, Mount Billingen, Skövde, Sweden

ECDVRAT '98 Proceedings: The papers appearing in this book comprise the proceedings of the Second European Conference on Disability, Virtual Reality and Associated Technologies, held on the 10th and 11th of September, 1998 at the Hotel Billingehus Conference Centre, Mount Billingen, Skövde, Sweden. The papers presented reflect the authors' opinions and are published as presented and without change (formatting excepted). Their inclusion in this publication does not necessarily constitute endorsement by the editors, by ECDVRAT, or by the University of Reading.
Please use the following format to cite material from these Proceedings: Author(s), "Title of paper", in Proceedings of the 2nd European Conference on Disability, Virtual Reality and Associated Technologies, Sharkey, Rose & Lindström (Eds.), page numbers, Skövde, Sweden, 10-11 Sept. 1998.

Proceedings reference number: ISBN 0 7049 1141 8. Published by The University of Reading.

For information, contact: ECDVRAT, Department of Cybernetics, University of Reading, Whiteknights, Reading, RG6 6AY, UK. Phone: +44 (0) 118 931 6704. Fax: +44 (0) 118 931 8220. Email: [email protected]. WWW: www.cyber.rdg.ac.uk/P.Sharkey/WWW/ecdvrat/

Copyright 1998 ECDVRAT and the University of Reading. Copying of material in these Proceedings for internal or personal use, or for the internal or personal use of specific clients, is permitted without charge by ECDVRAT and the University of Reading. Other copying for republication, resale, advertising, or promotion, or any form of systematic or multiple reproduction of any material from these Proceedings is prohibited except with permission in writing from ECDVRAT. Printed in the UK.

Proc. 2nd Euro. Conf. Disability, Virtual Reality & Assoc. Tech., Skövde, Sweden, 1998; © 1998 ECDVRAT and University of Reading, UK; ISBN 0 7049 1141 8

Contents
xi — Abstracts (abstracts from all papers, alphabetically, by first author)
1-252 — Full Papers
261-264 — Author Index
265-268 — Keyword Index

Session I: Virtual Life Skills (Session Chair: David Rose)
1 — User group involvement in the development of a virtual city, L Meakin, L Wilkins, C Gent, S Brown, D Moreledge, C Gretton, M Carlisle, C McClean, J Scott, J Constance & A Mallett, The Shepherd School, Nottingham, UK
11 — The development of the Virtual City: A user centred approach, D J Brown, S J Kerr & V Bayon, Nottingham Trent University/University of Nottingham, UK
17 — Evaluation of virtual learning environments, S V G Cobb, H R Neale & H Reynolds, University of Nottingham/Metropolitan Housing Trust Nottingham, UK

Session II: Interfacing to Virtual Environments (Session Chair: Paul Sharkey)
25 — Multimedia Interface for the Disabled (MIND) project, R J McCrindle & R M Adams, The University of Reading, UK
35 — Design of a non-contact head-control mouse emulator for use by a quadriplegic operator, E D Coyle, M Farrell, R White & B Stewart, Dublin Institute of Technology, IRELAND
45 — Virtual keyboard with scanning and augmented by prediction, P E Jones, University of Western Australia, AUSTRALIA
53 — Making 3D models: A challenge in man-machine communication, A Osorio, LIMSI-CNRS, FRANCE

Session III: Training, Assessment & Rehabilitation (Session Chair: Albert Rizzo)
61 — A practical example using virtual reality in the assessment of brain injury, R C Davies, G Johansson, K Boschian, A Lindén, U Minör & B Sonesson, University of Lund/Lund University Hospital, Höör, SWEDEN
69 — Transfer of training from virtual to real environments, F D Rose, E A Attree, B M Brooks, D M Parslow, P R Penn & N Ambihaipahan, University of East London, UK
77 — Developments of a collaborative research on VR applications for mental health, L Pugnetti, L Mendozzi, E Barbieri, A Motta, D Alpini, E A Attree, B M Brooks & F D Rose, Scientific Institute S. Maria Nascente, Don Gnocchi Foundation, Milan, ITALY/Casa di Cura "Policlinico di Monza", Monza, ITALY/University of East London, UK
85 — Virtual professional networks between speech pathologists, M Magnusson, University College of Karlstad, SWEDEN

Session IV: Virtual Environments & Visual Impairment (Session Chair: Peter Jones)
89 — Virtual reality for blind users, R Berka & P Slavík, Czech Technical University Prague, CZECH REPUBLIC
99 — The use of haptic virtual environments by blind people, C Colwell, H Petrie, D Kornbrot, A Hardwick & S Furner, University of Hertfordshire/British Telecommunications plc Ipswich, UK
105 — Can a haptic force feedback display provide visually impaired people with useful information about texture roughness and 3D form of virtual objects?, G Jansson, Uppsala University, SWEDEN
113 — Ambisonic sound in virtual environments and applications for blind people, M Cooper & M E Taylor, Open University Milton Keynes, UK
119 — 3D aural interactive hyperstories for blind children, M Lumbreras & J Sánchez, University of Chile, Santiago, CHILE

Session V: Sign Language & Gesture Recognition (Session Chair: Rachel McCrindle)
129 — Sign language formal description and synthesis, O Losson & J-M Vannobel, Université des Sciences et Technologies de Lille, FRANCE
139 — Segmentation and classification of hand gestures for man-machine communication, F García-Ugalde, D Gatica-Pérez & V García-Garduño, FI-UNAM, MÉXICO/University of Washington, Seattle, USA
149 — Japanese sign-language recognition based on gesture primitives using acceleration sensors and datagloves, H Sawada, T Notsu & S Hashimoto, Waseda University, Tokyo, JAPAN
159 — S-TEL: An avatar based sign language telecommunication system, T Kuroda, K Sato & K Chihara, Nara Institute of Science and Technology, JAPAN

Session VI: Enhancing Mobility (Session Chair: Francisco García-Ugalde)
169 — Improving the mobility of severely disabled, P Peussa, A Virtanen & T Johansson, VTT Automation, Tampere/Adaptation Training Centre for Disabled, Launeenkatu, FINLAND
177 — Development and evaluation of a virtual reality based training system for disabled children, M Desbonnet, S L Cox & A Rahman, University of Limerick, IRELAND
183 — Neural-network virtual reality mobility aid for the severely visually impaired, M R Everingham, B T Thomas, T Troscianko & D Easty, University of Bristol, UK
193 — Robotic travel aid for the blind: HARUNOBU-6, H Mori & S Kotani, Yamanashi University, JAPAN
203 — Generating virtual environments to allow increased access to the built environment, G T Foster, D E N Wenn & F O'Hart, University of Reading, UK/University of Dublin, Trinity College, IRELAND

Session VII: Spatial Orientation & Navigation (Session Chair: Jan-Ingvar Lindström)
213 — Preliminary Findings on a Virtual Environment Targeting Human Mental Rotation/Spatial Abilities, A Rizzo, J G Buckwalter, P Larson, A van Rooyen, K Kratz, U Neumann, C Kesselman & M Thiebaux, University of Southern California, Los Angeles, CA/Fuller Graduate School of Psychology, Pasadena, CA, USA
221 — Virtual reality in vestibular diagnosis and rehabilitation, D Alpini, L Pugnetti, L Mendozzi, E Barbieri, B Monti & A Cesarani, Scientific Institute S. Maria Nascente, Fondazione don Gnocchi, Milan/University of Sassari, ITALY
229 — Virtual reality as a communication aid for persons with aphasia, E Ahlsén & V Geroimenko, Göteborg University, SWEDEN

Additional Papers
237 — Implementation and capabilities of a virtual interaction system, A W Joyce, III & A C Phalangas, Alfred I. DuPont Hospital for Children/University of Delaware, USA
247 — Anthropometric models of children, M T Beitler & R A Foulds, University of Pennsylvania/duPont Hospital for Children, Wilmington, USA
253 — The invisible keyboard in the air: An overview of the educational, therapeutic and creative applications of the EMS Soundbeam, T Swingler, Norwich, UK

261 — Author Index
265 — Keyword Index

Conference Committee
Conference Co-Chairs: Dr. Jan-Ingvar Lindström …
Recommended publications
  • An Animated Avatar to Interpret SignWriting Transcription
    Yosra Bouzid and Mohamed Jemni, Research Laboratory of Technologies of Information and Communication & Electrical Engineering (LaTICE), ESSTT, University of Tunis
    Abstract: People with hearing disability often face multiple barriers when attempting to interact with hearing society. The lower proficiency in reading, writing and understanding the spoken language may be one of the most important reasons. Certainly, if there were a commonly accepted notation system for sign languages, signed information could be provided in written form, but such a system does not exist yet. SignWriting seems at present the best solution to the problem of SL representation, as it was intended as a practical writing system for everyday communication, but it requires mastery of a set of conventions different from those of other transcription systems to become a proficient reader or writer. To provide additional support for deaf signers to learn and use such notation, we …
    To address this issue, different methods and technologies geared towards deaf communication have been suggested over the last decades. For instance, the use of sign language interpreters is one of the most common ways to ensure successful communication between deaf and hearing persons in direct interaction. But according to a WFD survey in 2009, 13 countries out of 93 do not have any sign language interpreters, and in many of those countries where there are interpreters, there are serious problems in finding qualified candidates. Digital technologies, particularly video and avatar-based systems, contribute, in turn, to bridging the communication gap.
  • Analysis of Notation Systems for Machine Translation of Sign Languages
    Analysis of Notation Systems for Machine Translation of Sign Languages. Submitted in partial fulfilment of the requirements of the degree of Bachelor of Science (Honours) of Rhodes University. Jessica Jeanne Hutchinson, Grahamstown, South Africa, November 2012.
    Abstract: Machine translation of sign languages is complicated by the fact that there are few standards for sign languages, both in terms of the actual languages used by signers within regions and dialogue groups, and also in terms of the notations with which sign languages are represented in written form. A standard textual representation of sign languages would aid in optimising the translation process. This area of research still needs to determine the best, most efficient and scalable techniques for translation of sign languages. Being a young field of research, there is still great scope for introducing new techniques, or greatly improving on previous techniques, which makes comparing and evaluating the techniques difficult. The methods used are factors which contribute to the process of translation and need to be considered in an evaluation of optimising translation systems. This project analyses sign language notation systems: what systems exist, what data is currently available, and which of them might be best suited for machine translation purposes. The question being asked is how using a textual representation of signs could aid machine translation, and which notation would best suit the task. A small corpus of SignWriting data was built and this notation was shown to be the most accessible. The data was cleaned and run through a statistical machine translation system. The results had limitations, but overall are comparable to other translation systems, showing that translation using a notation is possible, but can be greatly improved upon.
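The pipeline the thesis describes (build a small SignWriting corpus, clean it, run it through a statistical MT system) is not reproduced here, but the first step of many statistical aligners can be sketched with a toy co-occurrence count; the corpus and token names below are invented for illustration, not the thesis's data.

```python
from collections import Counter

# Toy sketch (not the thesis's actual SMT system): count how often each
# SignWriting token co-occurs with each English word across a tiny
# invented parallel corpus. High counts suggest translation candidates,
# which is the intuition behind IBM-style word alignment.
corpus = [
    (["SW:HELLO"], ["hello"]),
    (["SW:HELLO", "SW:YOU"], ["hello", "you"]),
    (["SW:YOU"], ["you"]),
]

cooccurrence = Counter()
for sign_tokens, english_words in corpus:
    for s in sign_tokens:
        for e in english_words:
            cooccurrence[(s, e)] += 1

# Pick the most frequent English partner for each sign token.
signs = {s for sign_tokens, _ in corpus for s in sign_tokens}
candidates = {
    s: max((e for (s2, e) in cooccurrence if s2 == s),
           key=lambda e: cooccurrence[(s, e)])
    for s in signs
}
```

A real system would replace the raw counts with iteratively re-estimated translation probabilities, but the data flow (parallel corpus in, alignment candidates out) is the same.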
  • Automated Sign Language Translation: the Role of Artificial Intelligence Now and in the Future
    Automated Sign Language Translation: The Role of Artificial Intelligence Now and in the Future
    Lea Baumgärtner, Stephanie Jauss, Johannes Maucher and Gottfried Zimmermann, Hochschule der Medien, Nobelstraße 10, 70569 Stuttgart, Germany
    Keywords: Sign Language Translation, Sign Language Recognition, Sign Language Generation, Artificial Intelligence.
    Abstract: Sign languages are the primary language of many people worldwide. To overcome communication barriers between the Deaf and the hearing community, artificial intelligence technologies have been employed, aiming to develop systems for automated sign language recognition and generation. Particularities of sign languages have to be considered, since, though sharing some characteristics of spoken languages, they differ in others. After providing the linguistic foundations, this paper gives an overview of state-of-the-art machine learning approaches to develop sign language translation systems and outlines the challenges in this area that have yet to be overcome. Obstacles do not only include technological issues but also interdisciplinary considerations of development, evaluation and adoption of such technologies. Finally, opportunities and future visions of automated sign language translation systems are discussed.
    1 Introduction: A basic understanding of the particularities and notation of sign language serves to better understand the importance and challenges of automated sign language translation. We use both the terms deaf and Deaf throughout this paper, the first referring solely … There are, however, differences in how sign languages work in detail, some of which are mentioned using the example of American Sign Language (ASL) in the following. Since suffixes to verbs like in spoken languages are not possible, ASL tenses are built by adding words at the beginning or at the end of a sentence.
  • Writing Signed Languages: What For? What Form?
    Donald A. Grushkin. American Annals of the Deaf, Volume 161, Number 5, Winter 2017, pp. 509-527 (Article). Published by Gallaudet University Press. DOI: https://doi.org/10.1353/aad.2017.0001. For additional information about this article: https://muse.jhu.edu/article/648961. Access provided by University of Connecticut @ Storrs (9 Jun 2017 21:07 GMT).
    Citation: Grushkin, D. A. (2017). Writing signed languages: What for? What form? American Annals of the Deaf, 161(5), 509-527.
    Abstract: Signed languages around the world have tended to maintain an "oral," unwritten status. Despite the advantages of possessing a written form of their language, signed language communities typically resist and reject attempts to create such written forms. The present article addresses many of the arguments against written forms of signed languages, and presents the potential advantages of writing signed languages. Following a history of the development of writing in spoken as well as signed language populations, the effects of orthographic types upon literacy and biliteracy are explored. Attempts at writing signed languages have followed two primary paths: "alphabetic" and "iconographic." It is argued that for greatest congruency and ease in developing biliteracy strategies in societies where an alphabetic script is used for the spoken language, signed language communities within these societies are best served by adoption of an alphabetic script for writing their signed language.
    Keywords: writing, written signed languages, biliteracy, Deaf education, orthography
    Grushkin is a professor, Deaf Studies Program, California State University. … the development of a conventionally accepted written system for signed languages has yet to take place.
  • HamNoSys – Representing Sign Language Data in Language Resources and Language Processing Contexts
    Thomas Hanke, University of Hamburg, Institute of German Sign Language and Communication of the Deaf, Binderstraße 34, 20146 Hamburg, Germany
    Abstract: This paper gives a short overview of the Hamburg Notation System for Sign Languages (HamNoSys) and describes its application areas in language resources for sign languages and in sign language processing.
    1. Introduction: The Hamburg Notation System for Sign Languages (HamNoSys) is an alphabetic system describing signs on a mostly phonetic level. As many sign notation systems developed in the last 30 years, it has its roots in the Stokoe notation system that introduced an alphabetic system to describe the sublexical parameters location, hand configuration (in most cases, the handshape only) and movement to give a phonological description of American Sign Language signs (Stokoe, 1960). More than fifteen years after the first published version, HamNoSys is now at version 4 (Schmaling/Hanke, 2001). This latest version filled some minor gaps and introduced some shortcuts, but more importantly addressed issues related to using HamNoSys in a sign language generation context. For the latter purpose, it was also complemented with a new set of systems to encode nonmanual behaviour in a detailedness not previously possible in HamNoSys. HamNoSys (first version defined in 1984, first published version Prillwitz et al., 1987), however, was designed to be usable in a variety of contexts, with the following goals in mind: International use: HamNoSys transcriptions should be possible for virtually all sign languages in the world, and the notation should not rely on conventions …
    2. Overview of the System — 2.1. General Structure: A HamNoSys notation for a single sign consists of a description of the initial posture (describing nonmanual features, handshape, hand orientation and location) plus the actions changing this posture in sequence or in parallel.
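The general structure just described — an initial posture (nonmanual features, handshape, hand orientation, location) plus actions changing that posture in sequence or in parallel — can be sketched as a small data model. The class and field names below are illustrative assumptions for this sketch, not part of HamNoSys itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Posture:
    # The four parameter slots of the initial posture described above.
    nonmanual: str
    handshape: str
    orientation: str
    location: str

@dataclass
class Sign:
    # A sign = an initial posture plus the actions that change it,
    # applied in sequence (or in parallel, not modelled here).
    initial: Posture
    actions: List[str] = field(default_factory=list)

# Hypothetical example sign: a flat hand at the chin moving forward.
hello = Sign(
    initial=Posture("neutral", "flat", "palm-out", "chin"),
    actions=["move-forward"],
)
```

HamNoSys encodes each of these slots with dedicated symbols in a single string; the point of the sketch is only the posture-plus-actions decomposition.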
  • Deliverable: D1.3 Mapping Features to HamNoSys
    Seventh Framework Programme, Information and Communication Technologies, Cognitive Systems, Interaction, and Robotics. Project acronym: Dicta-Sign. Project full title: Sign Language Recognition, Generation and Modelling with Application in Deaf Communication. Grant agreement no.: 231135. Status: V1.0. Date of preparation: 30/11/11.
    Contents: 1 Acronyms (p. 4); 2 Introduction (p. 5), 2.1 Contributors; 3 Tracking and Detection (p. 6): 3.1 2D Tracking, 3.2 3D Tracking (3.2.1 3D Trajectories Using Scene Flow, 3.2.2 Scene Particles), 3.3 Pose Estimation from Depth (3.3.1 Poselet Representation, 3.3.2 Extracting Poselets, 3.3.3 Detecting Poselets, 3.3.4 Predictions and Inference, 3.3.5 Experiments); 4 Learnt Motion Features (p. 13): 4.1 Appearance Based, 4.2 Tracking Based, 4.3 Results; 5 Deterministic Motion Features (p. 17): 5.1 2D Tracking, 5.2 3D Tracking; 6 Learnt Location Features (p. 19): 6.1 Appearance Based, 6.2 Tracking Based, 6.3 Hierarchical Labels, 6.4 Results; 7 Deterministic Location Features (p. 25): 7.1 2D Tracking, 7.2 3D Tracking; 8 Learnt HandShape Features (p. 27): 8.1 Appearance, 8.2 Depth; 9 Phonetic-Based Sub-Unit Modelling (NTUA) (p. 31): 9.1 Phonetic-Based Sub-Units, Training, Alignment and Mapping to Phonetic Transcriptions, 9.2 Data and Visual Processing, 9.3 Phonetic Sub-Unit Model Training, Alignment and Time Segmentation, 9.4 Conclusions; 10 Global-Local Active Appearance Modelling for Facial Cues (NTUA) (p. 33): 10.1 Global AAM Tracking …
  • Deaf Communities and the Culture of Language
    Signs of Their Times: Deaf Communities and the Culture of Language. Richard J. Senghas (Sonoma State University, Department of Anthropology/Linguistics, 1801 East Cotati Avenue, Rohnert Park, California 94928-3609) and Leila Monaghan (Temple University, Department of Anthropology, 1115 West Berks Street, Philadelphia, Pennsylvania 19122). Annu. Rev. Anthropol. 2002, 31:69-97. doi: 10.1146/annurev.anthro.31.020402.101302. Copyright © 2002 by Annual Reviews. All rights reserved. First published online as a Review in Advance on June 26, 2002.
    Key Words: sign language, deafness, linguistic communities, Deaf culture
    Abstract: Because of their deafness, deaf people have been marked as different and treated problematically by their hearing societies. Until 25 years ago, academic literature addressing deafness typically described deafness as pathology, focusing on cures or mitigation of the perceived handicap. In ethnographic accounts, interactions involving deaf people are sometimes presented as examples of how communities treat atypical members. Recently, studies of deafness have adopted more complex sociocultural perspectives, raising issues of community identity, formation and maintenance, and language ideology. Anthropological researchers have approached the study of d/Deaf communities from at least three useful angles. The first, focusing on the history of these communities, demonstrates that the current issues have roots in the past, including the central role of education in the creation and maintenance of communities. A second approach centers on emic perspectives, drawing on the voices of community members themselves and accounts of ethnographers.
  • The Use of a FileMaker Pro Database in Evaluating Sign Language Notation Systems
    Julie A. Hochgesang, Gallaudet University, Linguistics, SLCC3206, 800 Florida Ave NE, Washington, DC 20002, USA
    Abstract: In this paper, FileMaker Pro has been used to create a database in order to evaluate sign language notation systems used for representing hand configurations. The database cited in this paper focuses on child acquisition data, particularly the dataset of one child's and one adult's productions of the same American Sign Language (ASL) signs produced in a two-year span. The hand configurations in selected signs have been coded using Stokoe notation (Stokoe, Casterline & Croneberg, 1965), the Hamburg Notation System or HamNoSys (Prillwitz et al., 1989), the revised Prosodic Model Handshape Coding system or PM (Eccarius & Brentari, 2008) and Sign Language Phonetic Annotation or SLPA, a notation system that has grown from the Movement-Hold Model (Johnson & Liddell, 2010, 2011a, 2011b, 2012). Data was pulled from ELAN transcripts, organized and notated in a FileMaker Pro database created to investigate the representativeness of each system. Representativeness refers to the ability of the notation system to represent the hand configurations in the dataset. This paper briefly describes the design of the FileMaker Pro database intended to provide both quantitative and qualitative information in order to allow the sign language researcher to examine the representativeness of sign language notation systems.
    Keywords: sign language, phonetics, phonology, child acquisition, transcription, notation, FileMaker Pro, database
    1. Introduction: … does not receive much attention. It is hoped that the same type of investigation will be carried out for other aspects of transcription.
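The representativeness measure the paper describes — how much of the observed hand-configuration inventory a notation system can encode — does not depend on FileMaker and can be sketched in a few lines. The hand-configuration labels and coverage sets below are invented examples, not the paper's data.

```python
# Hypothetical sketch of the representativeness measure described above:
# the fraction of hand configurations observed in a dataset that a given
# notation system is able to represent. All labels and coverage sets
# here are invented for illustration.
observed = {"1", "5", "A", "B", "bent-5"}

coverage = {
    "Stokoe": {"1", "5", "A", "B"},          # cannot encode "bent-5" (assumed)
    "HamNoSys": {"1", "5", "A", "B", "bent-5"},
}

def representativeness(system: str) -> float:
    """Share of observed hand configurations encodable by `system`."""
    return len(coverage[system] & observed) / len(observed)
```

With these toy sets, HamNoSys scores 1.0 and Stokoe 0.8; the paper's database adds the qualitative side (how each configuration was coded) that a bare ratio hides.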
  • Emerging Sign Languages of the Americas
    Emerging Sign Languages of the Americas. Sign Language Typology (SLT), Volume 9. Edited by Olivier Le Guen, Josefina Safar, and Marie Coppola. Ishara Press / De Gruyter (www.degruyter.com). ISBN 978-1-5015-1350-3; e-ISBN (PDF) 978-1-5015-0488-4; e-ISBN (EPUB) 978-1-5015-0484-6; ISSN 2192-5186; e-ISSN 2192-5194. Library of Congress Control Number: 2020946596.
    This volume is the first to bring together researchers studying a range of different types of emerging sign languages in the Americas, and their relationship to the gestures produced in the surrounding communities of hearing individuals.
    The series: Sign Language Typology is dedicated to the comparative study of sign languages around the world. Individual or collective works that systematically explore typological variation across sign languages are the focus of this series, with particular emphasis on undocumented, underdescribed and endangered sign languages. The scope of the series primarily includes cross-linguistic studies of grammatical domains across a larger or smaller sample of sign languages, but also encompasses the study of individual sign languages from a typological perspective and comparison between signed and spoken languages in terms of language modality, as well as theoretical and methodological contributions to sign language typology.
    Series editors: Marie Coppola, Onno Crasborn, Ulrike Zeshan. Editorial board: Sam Lutalo-Kiingi, Ronice Müller de Quadros, Nick Palfreyman, Roland Pfau, Adam Schembri, Gladys Tang, Erin Wilkinson, Jun Hui Yang.
    Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.
  • Tutorial for Deaf–Teaching Punjabi Alphabet Using Synthetic Animations
    Tutorial for Deaf – Teaching Punjabi Alphabet using Synthetic Animations
    Lalit Goyal, Assistant Professor, DAV College, Jalandhar; Dr. Vishal Goyal, Associate Professor, Punjabi University, Patiala
    Abstract: Developing an automatic tool for educating students has become essential in today's world of computerization. For differently abled people, especially in India, where the resources for educating them are scarce, it becomes essential to develop such technologies which give the opportunity to each and every individual to get education online and free of cost. Research work has been done to create HamNoSys notation of at least two words corresponding to each alphabet of the Punjabi language. The HamNoSys notation, when used in the JASigning animation tool, produces synthetic animation. The synthetic animations are better as compared to human videos in terms of memory requirement, standardization, and flexibility. Synthetic animations can be modified as per the requirement, whereas human videos cannot be modified. The only drawback is that these synthetic animations may lack the natural non-manual component of sign.
    These differently abled people have very limited or no access to education and other information [1][2]. In India, the situation is worse; there are 5,072,914 persons who suffer from hearing disability. Among them, only 546,826 hearing-impaired persons are literate below primary education [3], which accounts for only 10.78%. Sign language is different for different countries/regions, as this language is not created but has evolved among deaf people. So, depending on the region, the signs are different, and the grammar is also different. As far as Indian Sign Language is …
  • Extending the SiGML Notation – a Progress Report
    John Glauert and Ralph Elliott, School of Computing Sciences, UEA Norwich, Norwich NR4 7TJ, UK
    Abstract: We outline work in progress on extensions to the SiGML notation, which is used to describe Sign Language performance at the phonetic level in a manner suitable for animation by a virtual human signer, or signing avatar. These extensions are intended to provide the author or generator of SiGML sign language content with greater flexibility and precision of control, leading to greater authenticity of computer generated performance, while making the production of such content as convenient as possible. A significant influence on our work is the segmental approach to sign language phonetics of Johnson and Liddell. We illustrate aspects of our approach by means of an example.
    … broadly comparable to Filhol's Zebedee notation [4] for sign language synthesis. The purpose of the SiGML notation is to support purely synthetic virtual human sign language performance. It originated as the interface notation between the front- and back-ends (or, alternatively, upper and lower levels) of the prototype system for natural-language-to-virtual-human-signing developed in the ViSiCAST project [3]. Since then it has been used as the input notation for self-contained signed content performance systems. In that connection, it is important to note SiGML's close relationship to HamNoSys: because HamNoSys can be represented as SiGML, existing HamNoSys transcriptions and people who know how to write HamNoSys transcriptions can both act as sources of content for our VH signing systems.
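Since SiGML is an XML notation closely tied to HamNoSys, sign content can be generated programmatically. The element and attribute names in this sketch are assumptions modelled loosely on published SiGML examples; they should be checked against the official SiGML DTD before being fed to an avatar system.

```python
import xml.etree.ElementTree as ET

# Illustrative only: build a small SiGML-like XML fragment for one sign.
# Element and attribute names are assumptions, not verified SiGML.
sign = ET.Element("hamgestural_sign", gloss="HELLO")
manual = ET.SubElement(sign, "sign_manual")
ET.SubElement(manual, "handconfig", handshape="flat")
ET.SubElement(manual, "location_bodyarm", location="chin")

xml_text = ET.tostring(sign, encoding="unicode")
```

Representing each HamNoSys parameter (handshape, orientation, location, movement) as its own element is what makes the notation convenient for both hand-written transcriptions and machine-generated content.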
  • Repositorio Institucional De La Universidad Autónoma De Madrid
    Repositorio Institucional de la Universidad Autónoma de Madrid, https://repositorio.uam.es
    This is an author-produced version of a paper published in: Computer Speech and Language 28.3 (2014): 788-811. DOI: http://dx.doi.org/10.1016/j.csl.2013.10.003. Copyright: © 2014 Elsevier B.V. All rights reserved. Access to the published version may require subscription.
    Accepted Manuscript. Title: A rule-based translation from written Spanish to Spanish Sign Language glosses. Authors: Jordi Porta, Fernando López-Colino, Javier Tejedor, José Colás. To appear in: Computer Speech & Language. Received: 1 February 2012; revised: 26 August 2013; accepted: 11 October 2013.
    Please cite this article as: Porta, J., López-Colino, F., A rule-based translation from written Spanish to Spanish Sign Language glosses, Computer Speech & Language (2013), http://dx.doi.org/10.1016/j.csl.2013.10.003
    This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.