João Dinis Colaço de Freitas: Interfaces de Fala Silenciosa (Silent Speech Interfaces)


Universidade de Aveiro, Departamento de Eletrónica, Telecomunicações e Informática, 2015

João Dinis Colaço de Freitas

Interfaces de Fala Silenciosa Multimodais para Português Europeu com base na Articulação
(Articulation in Multimodal Silent Speech Interface for European Portuguese)

Thesis presented to the Universidade de Aveiro in fulfilment of the requirements for the degree of Doctor in Informatics, carried out under the scientific supervision of Professor António Joaquim da Silva Teixeira, Associate Professor at the Department of Electronics, Telecommunications and Informatics of the Universidade de Aveiro, and of Professor José Miguel de Oliveira Monteiro Sales Dias, Director of the Microsoft Language Development Center (Microsoft Portugal) and Invited Associate Professor at the Instituto Universitário de Lisboa (ISCTE-IUL).

To my Mother and to Diana, for their immeasurable patience, support and unconditional love. To the loving memory of Carlos, I am forever grateful.
The jury

President: Doutora Anabela Botelho Veloso, Full Professor, Universidade de Aveiro
Doutora Isabel Maria Martins Trancoso, Full Professor, Instituto Superior Técnico, Universidade de Lisboa
Doutora Maria Beatriz Alves de Sousa Santos, Associate Professor with Habilitation, Universidade de Aveiro
Doutor José Miguel de Oliveira Monteiro Sales Dias, Invited Associate Professor, ISCTE-IUL – Instituto Universitário de Lisboa (co-supervisor)
Doutor Luís Manuel Dias Coelho Soares Barbosa, Associate Professor, School of Engineering, Universidade do Minho
Doutor António Joaquim da Silva Teixeira, Associate Professor, Universidade de Aveiro (supervisor)
Doutor Carlos Jorge da Conceição Teixeira, Assistant Professor, Faculty of Sciences, Universidade de Lisboa
Doutor Cengiz Acartürk, Assistant Professor, Informatics Institute, Middle East Technical University (METU), Ankara, Turkey

Acknowledgements

I would like to thank all the people whose contributions, assistance and support made the end result possible. In my case, both my supervisors, Prof. António Teixeira and Prof. Miguel Sales Dias, head the list. I want to thank them for their patience, continuous encouragement, support and understanding. I also want to thank them for always finding the time to discuss new ideas and for helping me in the search for the right path. Their contributions to this work are invaluable, and it has been an immense privilege to be their student. I am forever in their debt for all the knowledge they have passed on to me. I also want to thank the Microsoft Language Development Center for supporting this research, and all my colleagues who work or worked there for their encouragement, understanding and support. In particular, I want to thank António Calado, Daan Baldewijns, Pedro Silva, Manuel Ribeiro and Filipe Gaspar for their reviews and friendship.
To the remarkable researchers Samuel Silva and Catarina Oliveira, from the University of Aveiro, and Artur Ferreira, from the Lisbon Engineering Institute, thank you for all your help and advice. Your collaboration in several of the studies described here, and your contributions, add enormous value to this thesis. I also want to thank Prof. Dr. Francisco Vaz for his contribution to the design of the Ultrasonic Doppler device and to the Ultrasonic Doppler studies.

To my colleagues from MAP-i, Carlos Pereira, Nuno Oliveira, Nuno Luz and Ricardo Anacleto, thank you for the great times in Braga during the first year of the PhD programme. I would also like to deeply thank all the people who took the time to participate in the data acquisition sessions. To all of you, thank you for donating your (silent) speech and for the valuable feedback.

To my friends and family, thank you for all the get-away moments. It was after those moments that the best solutions and ideas took shape in my mind. In particular, I want to thank my godfather, Carlos Faria, and his family, for being such wonderful hosts, for their continuous encouragement and belief, and for making my weekly trips to Braga worthwhile. Carlos was like an older brother to me, with a great influence on my decision to pursue higher education. I wish he could be here to share this moment.

To my mother, thank you for all the sacrifices, for always being present and for supporting me in every way. To Diana, my deepest thanks; I surely could not have done this without her love and understanding. In that sense, this thesis is also hers. Her friendship, support and patience are infinite. Last but not least, a word to my son, Pedro. Your smile and your ability to make me laugh even in the worst moments are all the motivation I could ever ask for. I hope that one day the hours of absence can be repaid by passing on to you what I have learnt.
Palavras-chave (Keywords): Silent Speech, Multimodal, European Portuguese, Articulation, Nasality, Human-Computer Interaction, Surface Electromyography, Video, Depth Information, Ultrasonic Doppler Sensing

Resumo: The concept of silent speech, when applied to human-computer interaction, enables communication in the absence of an acoustic signal. By analysing data collected during the human speech production process, a silent speech interface (SSI) allows users with speech impairments to communicate with a system. SSIs can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality or non-disturbance are important. However, despite recent progress, the performance and usability of silent speech systems still leave a large margin for improvement. Better performance would enable their application in areas such as Ambient Assisted Living. It is therefore essential to extend our knowledge of the capabilities and limitations of the modalities used for silent speech and to foster their joint exploration. Several goals were thus established for this thesis: (1) expand the languages supported by SSIs with European Portuguese; (2) overcome the limitations of current SSI techniques in detecting nasality; (3) develop a multimodal SSI approach for human-computer interaction based on non-invasive modalities; (4) explore the use of direct and complementary measures, acquired through more invasive/intrusive modalities in multimodal configurations, which provide exact articulation information and increase our understanding of other modalities. To achieve these goals and to support research in this area, a multimodal SSI framework was created that provides the means for the joint exploration of modalities.
The proposed framework goes far beyond simple data acquisition, also including methods for modality synchronization, multimodal data processing, feature extraction and selection, analysis, classification and prototyping. Application examples for each phase of the framework include: articulatory studies for human-computer interaction, the development of a multimodal SSI based on non-invasive modalities, and the use of exact information originating from invasive/intrusive modalities to overcome the limitations of other modalities. In the work presented, state-of-the-art methods are also applied to European Portuguese for the first time, showing that nasal sounds can degrade the performance of a silent speech system. In this context, a solution for nasal vowel detection based on a single electromyography sensor is proposed, which can be integrated into a multimodal silent speech interface.

Keywords: Silent Speech, Multimodal, European Portuguese, Articulation, Nasality, Human-Computer Interaction, Surface Electromyography, Video, Depth Information, Ultrasonic Doppler Sensing

Abstract: The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSIs can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance would enable their application in relevant areas, such as Ambient Assisted Living.
Therefore, it is necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we have established several goals: (1) SSI language expansion to support European Portuguese; (2) overcome identified limitations of current SSI techniques to detect EP nasality; (3) develop a multimodal HCI approach for SSI based on non-invasive modalities; and (4) explore more direct measures in the multimodal SSI for EP, acquired from more invasive/obtrusive modalities, to be used as ground truth for articulation processes, enhancing our comprehension of other modalities. In order to achieve these goals and to support our research in this area, we have created a multimodal SSI framework that fosters leveraging modalities and combining information. The proposed framework goes beyond simple data acquisition, also including methods for modality synchronization, multimodal data processing, feature extraction and selection, analysis, classification and prototyping.
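The modality-synchronization step the framework provides can be illustrated with a minimal sketch. The modalities, sampling rates and shared-trigger mechanism below are illustrative assumptions for this example, not the thesis framework's actual API:

```python
import numpy as np

def align_to_trigger(signal, fs, trigger_sample):
    """Return (time, samples) with t = 0 placed at the shared trigger event."""
    t = (np.arange(len(signal)) - trigger_sample) / fs
    return t, signal

# Two hypothetical streams: surface EMG at 1000 Hz and a per-frame video
# feature at 30 fps, both 5 s long, with a common trigger 1 s into each.
emg = np.random.randn(5000)
video = np.random.randn(150)

t_emg, _ = align_to_trigger(emg, fs=1000.0, trigger_sample=1000)
t_vid, _ = align_to_trigger(video, fs=30.0, trigger_sample=30)
```

Once both streams share t = 0 at the trigger, windows can be cut jointly across modalities for multimodal feature extraction and classification.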
Recommended publications
  • Talking Without Talking
    Deepak Balwani, Honey Brijwani, Karishma Daswani, Somyata Rastogi (Students, Department of Electronics and Telecommunication, V. E. S. Institute of Technology, Mumbai). Talking Without Talking. Int. Journal of Engineering Research and Applications, www.ijera.com, ISSN: 2248-9622, Vol. 4, Issue 4 (Version 9), April 2014, pp. 51-56. Research article, open access.

    Abstract: Silent sound technology is a completely new technology which can prove to be a solution for those who have lost their voice but wish to speak over the phone. It can be used as part of a communications system operating in silence-required or high-background-noise environments. This article outlines the history associated with the technology, followed by presenting the two most preferred techniques, viz. electromyography and ultrasound SSI. The concluding section compares and contrasts these two techniques and puts forth the future prospects of this technology. Keywords: Articulators, Electromyography, Ultrasound SSI, Vocoder, Linear Predictive Coding.

    I. Introduction: Each one of us, at some point or the other in our lives, must have faced a situation of talking aloud on the cell phone in the midst of disturbance while travelling in trains or buses, or in a movie theatre. One of the technologies that can eliminate this problem is 'Silent Sound' technology. Electromyography involves monitoring the tiny muscular movements that occur when we speak and converting them into electrical pulses that can then be turned into speech, without a sound being uttered. Ultrasound imagery is a non-invasive and clinically safe procedure which makes possible the real-time visualization of the tongue by using an ultrasound transducer.
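The EMG idea described in the abstract above, that tiny muscular movements show up as measurable electrical activity, can be sketched with a rectified, smoothed amplitude envelope. The signal, sampling rate and window length are invented for illustration and are not the article's actual processing chain:

```python
import numpy as np

np.random.seed(0)
fs = 1000                                # assumed sampling rate, Hz
t = np.arange(2 * fs) / fs               # 2 s timeline

# Synthetic stand-in signal: 1 s of baseline noise, then 1 s of
# higher-energy "articulation" activity.
rest = 0.05 * np.random.randn(fs)
speak = np.sin(2 * np.pi * 80 * t[:fs]) * np.random.randn(fs)
emg = np.concatenate([rest, speak])

# Classic envelope: full-wave rectification then a 100 ms moving average.
rectified = np.abs(emg)
window = np.ones(100) / 100
envelope = np.convolve(rectified, window, mode="same")

# Mean envelope well inside each segment (margins avoid window edges).
baseline = envelope[200 : fs - 200].mean()
active = envelope[fs + 200 : 2 * fs - 200].mean()
```

The envelope during (even silent) articulation rises clearly above baseline, which is what lets an interface segment and classify the muscular activity.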
  • Silent Speech Interfaces
    B. Denby, T. Schultz, K. Honda, Thomas Hueber, J.M. Gilbert, J.S. Brumberg. Silent Speech Interfaces. Speech Communication, Elsevier: North-Holland, 2010, 52 (4), p. 270. doi:10.1016/j.specom.2009.08.002. HAL Id: hal-00616227, https://hal.archives-ouvertes.fr/hal-00616227, submitted on 20 Aug 2011. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Accepted Manuscript: Silent Speech Interfaces. B. Denby, T. Schultz, K. Honda, T. Hueber, J.M. Gilbert, J.S. Brumberg. PII: S0167-6393(09)00130-7. DOI: 10.1016/j.specom.2009.08.002. Reference: SPECOM 1827. To appear in: Speech Communication. Received: 12 April 2009. Accepted: 20 August 2009. Please cite this article as: Denby, B., Schultz, T., Honda, K., Hueber, T., Gilbert, J.M., Brumberg, J.S., Silent Speech Interfaces, Speech Communication (2009), doi: 10.1016/j.specom.2009.08.002. This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript.
  • Silent Sound Technology: A Solution to Noisy Communication
    International Journal of Engineering Trends and Technology (IJETT), Volume 9, Number 14, March 2014. Silent Sound Technology: A Solution to Noisy Communication. Priya Jethani (M.E. Student) and Bharat Choudhari (Assistant Professor), Padmashree Dr. V.B. Kolte College of Engineering, Malkapur, Maharashtra, India.

    Abstract: When we are in a movie theatre, bus or train, there is a lot of noise around us and we cannot speak properly on a mobile phone. In future this problem can be eliminated with the help of Silent Sound technology, a technology that helps you to transmit information without using your vocal cords. It notices every lip movement and transforms it into a computer-generated sound that can be transmitted over a phone, so the person on the other end receives the information in audio. The device was developed by the Karlsruhe Institute of Technology (KIT). It uses electromyography, monitoring the tiny muscular movements that occur when we speak and converting them into electrical pulses that can then be turned into speech, without a sound being uttered. When demonstrated, it detects every lip movement, internally converts the electrical pulses into sound signals and sends them, neglecting all other surrounding noise.

    Most people can also understand a few unspoken words by lip-reading. The idea of interpreting silent speech electronically or with a computer has been around for a long time, and was popularized in the 1968 Stanley Kubrick science-fiction film ''2001 – A Space Odyssey'' [7]. A major focal point was the DARPA Advanced Speech Encoding Program (ASE) of the early 2000s, which funded research on low-bit-rate speech synthesis ''with acceptable intelligibility, quality, and aural speaker recognizability in
  • Pedro Miguel André Coelho: Generic Modalities for Silent Interaction
    Universidade de Aveiro, Departamento de Electrónica, Telecomunicações e Informática, 2019. Pedro Miguel André Coelho. Generic Modalities for Silent Interaction (Modalidades Genéricas para Interação Silenciosa). Dissertation presented to the Universidade de Aveiro in fulfilment of the requirements for the degree of Master in Computer and Telematics Engineering, carried out under the scientific supervision of Doutor Samuel de Sousa Silva, Researcher at the Instituto de Engenharia Electrónica e Informática de Aveiro, and Doutor António Joaquim da Silva Teixeira, Associate Professor with Habilitation at the Department of Electronics, Telecommunications and Informatics of the Universidade de Aveiro.

    The jury. President: Doutora Beatriz Sousa Santos, Associate Professor with Habilitation, Universidade de Aveiro. Examiners committee: Doutor José Casimiro Pereira, Adjunct Professor, Instituto Politécnico de Tomar; Doutor Samuel de Sousa Silva (supervisor), Researcher, Universidade de Aveiro.

    Acknowledgements: First of all, I want to thank my supervisors, Doutor Samuel Silva and Professor Doutor António Teixeira, for the trust they placed in me, for all their dedication and for their unconditional motivation. I equally thank Doutor Nuno Almeida for the encouragement and support in preparing this dissertation. To my friends and colleagues, for all the encouragement and friendship. To Mariana, for her patience, understanding and help, which contributed to reaching the end of this journey. Finally, my deep and heartfelt thanks to my family, for all their strength and for always being present throughout my academic life.
  • The IAL News
    The IAL News. The International Association of Laryngectomees, Vol. 60, No. 4, November 2015. Viet Nam Veteran Assists Vietnamese Laryngectomees, by Larry Hammer, M.C.D.

    Photo caption: Left to right: Larry Hammer, Speech Language Pathologist; Dr. Huynh An, Chief ENT; Laryngectomee patient and wife; Ms. Dang Thi Thu Tien, therapist. After having made a presentation on the communication of Laryngectomees following surgery and demonstrating the functional use of the electrolarynx, I turn the evaluation and training over to the physician and therapist to evaluate and train the Laryngectomee. I then serve as a "coach".

    The Viet Nam Laryngectomee Humanitarian Project (VNLHP) made its most recent trip to Viet Nam in April 2015. Forty-three (43) electrolarynx devices (ELD) were provided to indigent Laryngectomees in Ha Noi, Da Nang City and Hue City. Actively practicing as a Speech Language Pathologist, I felt most personally rewarded by being involved with the communication rehabilitation of Laryngectomees. They are a unique and special group of people who have professionally motivated and inspired me for 36 years.

    In 1975 I completed my undergraduate studies at the University of Nebraska – Lincoln. It was the year of the end of the Viet Nam War. I had served in the U.S. Marine Corps during that war, in 1968 and 1969. That was when I committed to someday return to Viet Nam to use my skills as a Speech Language Pathologist to serve the Vietnamese people. I saw a significant number of people who had suffered injuries that left them with a variety of disorders related to (Continued on Page 6)
  • Ultrasound-Based Silent Speech Interface Using Deep Learning
    Budapest University of Technology and Economics, Faculty of Electrical Engineering and Informatics, Dept. of Telecommunications and Media Informatics. Ultrasound-based Silent Speech Interface using Deep Learning. BSc Thesis. Author: Eloi Moliner Juanpere. Supervisor: dr. Tamás Gábor Csapó. May 18, 2018.

    Contents: Abstract; Resum; Introduction; 1 Input data (1.1 Ultrasound imaging, 1.2 Tongue images); 2 Target data (2.1 Mel-Generalized Cepstral coefficients, 2.2 Line Spectral Pair, 2.3 Vocoder); 3 Deep Learning Architectures (3.1 Deep Feed-forward Neural Network, 3.2 Convolutional Neural Network, 3.3 CNN-LSTM Recurrent Neural Network, 3.3.1 Bidirectional CNN-LSTM Recurrent Neural Network, 3.4 Denoising Convolutional Autoencoder); 4 Hyperparameter optimization (4.1 Deep Feed-forward Neural Network, 4.2 Convolutional Neural Network, 4.3 CNN-LSTM Recurrent Neural Network, 4.3.1 Bidirectional CNN-LSTM Recurrent Neural Network, 4.4 Denoising Convolutional Autoencoder, 4.5 Optimization using the denoised images); 5 Results (5.1 Objective measurements, 5.2 Listening subjective tests); 6 Conclusions; Acknowledgements; List of Figures; List of Tables.

    Student declaration: I, Eloi Moliner Juanpere, the undersigned, hereby declare that the present BSc thesis work has been prepared by myself and without any unauthorized help or assistance. Only the specified sources (references, tools, etc.) were used. All parts taken from other sources word by word, or after rephrasing but with identical meaning, were unambiguously identified with explicit reference to the sources utilized.
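The core regression task of the thesis above, mapping each ultrasound tongue image to a vector of Mel-Generalized Cepstral (MGC) vocoder coefficients, can be sketched minimally. A single linear layer fit by least squares stands in for the thesis's deep networks, and all data, image sizes and coefficient counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w, n_mgc = 200, 8, 8, 12    # assumed sizes, far smaller than real ones

# Synthetic stand-in data: flattened "ultrasound frames" and MGC targets
# generated by a hidden linear map plus a little noise.
true_map = rng.normal(size=(h * w, n_mgc))
frames = rng.normal(size=(n_frames, h * w))
mgc = frames @ true_map + 0.01 * rng.normal(size=(n_frames, n_mgc))

# Fit one linear layer frame -> MGC by least squares and measure its error.
weights, *_ = np.linalg.lstsq(frames, mgc, rcond=None)
pred = frames @ weights
mse = float(np.mean((pred - mgc) ** 2))
```

In the thesis itself this linear map is replaced by DNN, CNN and CNN-LSTM architectures, and the predicted MGC frames are passed to a vocoder to produce audible speech.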
  • Direct Generation of Speech from Electromyographic Signals: EMG-to-Speech
    EMG-to-Speech: Direct Generation of Speech from Facial Electromyographic Signals. Dissertation accepted by the KIT Faculty of Informatics of the Karlsruhe Institute of Technology (KIT) for the academic degree of Doktor der Ingenieurwissenschaften, by Matthias Janke from Offenburg. Date of the oral examination: 22.04.2016. First reviewer: Prof. Dr.-Ing. Tanja Schultz. Second reviewer: Prof. Alan W Black, PhD.

    Summary: For several years, alternative speech communication techniques have been examined which are based solely on the articulatory muscle signals instead of the acoustic speech signal. Since these approaches also work with completely silently articulated speech, several advantages arise: the signal is not corrupted by background noise, bystanders are not disturbed, and people who have lost their voice, e.g. due to an accident or a disease of the larynx, can be assisted. The general objective of this work is the design, implementation, improvement and evaluation of a system that uses surface electromyographic (EMG) signals and directly synthesizes an audible speech output: EMG-to-speech. The electrical potentials of the articulatory muscles are recorded by small electrodes on the surface of the face and neck. An analysis of these signals allows inferences about the movements of the articulatory apparatus and, in turn, about the spoken speech itself. One approach to creating an acoustic signal from the EMG signal is to use techniques from automatic speech recognition: a textual output is produced, which is then further processed by a text-to-speech synthesis component. However, this approach is difficult, owing to challenges in the speech recognition part, such as the restriction to a given vocabulary or recognition errors of the system.
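A direct EMG-to-speech mapping like the one summarized above works frame by frame: the continuous multi-channel EMG recording is cut into short overlapping analysis windows, and one feature vector per window is then regressed onto vocoder parameters. The channel count, sampling rate, frame sizes and feature choice below are assumptions for illustration, not the dissertation's actual front end:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Cut a (channels, samples) array into (n_frames, channels, frame_len)."""
    n = 1 + (x.shape[1] - frame_len) // hop
    return np.stack([x[:, i * hop : i * hop + frame_len] for i in range(n)])

fs = 600                                  # assumed EMG sampling rate, Hz
emg = np.random.randn(6, 3 * fs)          # 6 electrode channels, 3 s of signal

# Roughly 27 ms frames with a 10 ms hop, one frame per output speech frame.
frames = frame_signal(emg, frame_len=16, hop=6)

# One simple feature per frame and channel: mean absolute amplitude, which
# a regression model would then map to vocoder parameters for synthesis.
features = np.abs(frames).mean(axis=2)
```

Generating vocoder parameters directly from such features sidesteps the vocabulary restrictions and recognition errors of the recognition-then-synthesis pipeline the summary mentions.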