Signal Processing

Total Pages: 16

File Type: PDF, Size: 1,020 KB

IEEE Signal Processing Magazine, Volume 32, Number 6, November 2015. ISSN 1053-5888; published bimonthly by the Institute of Electrical and Electronics Engineers, Inc., 3 Park Avenue, 17th Floor, New York, NY 10016-5997 USA.

Contents:

Features
  • Euclidean Distance Matrices (Theories and Methods, p. 12): Ivan Dokmanić, Reza Parhizkar, Juri Ranieri, and Martin Vetterli
  • Playing with Duality (Theories and Methods, p. 31): Nikos Komodakis and Jean-Christophe Pesquet
  • Expression Control in Singing Voice Synthesis (Audio and Speech Processing, p. 55): Martí Umbert, Jordi Bonada, Masataka Goto, Tomoyasu Nakano, and Johan Sundberg
  • Speaker Recognition by Machines and Humans (Audio and Speech Processing, p. 74): John H.L. Hansen and Taufiq Hasan
  • Brain-Source Imaging (Biomedical Signal Processing, p. 100): Hanna Becker, Laurent Albera, Pierre Comon, Rémi Gribonval, Fabrice Wendling, and Isabelle Merlet

Columns
  • From the Editor (p. 4): Engaging Undergraduate Students, Min Wu
  • President's Message (p. 6): Signal Processing: The Science Behind Our Digital Life, Alex Acero
  • Special Reports (p. 8): Opening the Door to Innovative Consumer Technologies, John Edwards
  • SP Education (p. 113): Undergraduate Students Compete in the IEEE Signal Processing Cup: Part 3, Zhilin Zhang
  • Lecture Notes (p. 117): On the Intrinsic Relationship Between the Least Mean Square and Kalman Filters, Danilo P. Mandic, Sithan Kanna, and Anthony G. Constantinides
  • Best of the Web (p. 123): The Computational Network Toolkit, Dong Yu, Kaisheng Yao, and Yu Zhang

Departments
  • Society News (p. 11): 2016 IEEE Technical Field Award Recipients Announced
  • Dates Ahead (p. 128)

Scope: IEEE Signal Processing Magazine publishes tutorial-style articles on signal processing research and applications, as well as columns and forums on issues of interest. Its coverage ranges from fundamental principles to practical implementation.
Recommended publications
  • Singing Voice Synthesis Using HMM-Based TTS and MusicXML
    Journal of The Korea Society of Computer and Information, Vol. 20, No. 5, May 2015, www.ksci.re.kr, http://dx.doi.org/10.9708/jksci.2015.20.5.053. Najeeb Ullah Khan and Jung-Chul Lee. Abstract: Singing voice synthesis is the generation of a song using a computer, given its lyrics and musical notes. Hidden Markov models (HMMs), long the models of choice for text-to-speech (TTS) synthesis, have recently been applied to singing voice synthesis as well, but existing implementations require collecting and training on a large singing-voice database, which makes them difficult to build. Moreover, existing commercial singing synthesis systems use a piano-roll score representation that is unfamiliar to most users, so a user interface based on easy-to-read standard notation is needed to make song authoring more convenient. To address these problems, this paper proposes generating singing voice by reusing the HMM models of an existing read-speech synthesizer and modifying the HMM parameter values with pitch and duration control methods suited to singing. The authors further propose a system architecture that uses a MusicXML-based score editor as the front end for entering notes and lyrics, and an HMM-based TTS synthesizer as the back end. Songs synthesized with the proposed method were evaluated, and the results confirmed its practical applicability. Keywords: text-to-speech, hidden Markov model, singing voice synthesis, score editor.
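The pipeline above hinges on turning score information (notes plus lyrics, as a MusicXML editor front end would capture them) into the pitch and duration targets that override the read-speech HMM parameters. Below is a minimal sketch of that conversion step; the `Note` record and function names are illustrative assumptions, not the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Note:
    midi: int        # MIDI note number from the score (69 = A4)
    beats: float     # note length in quarter-note beats
    syllable: str    # lyric syllable attached to the note

def f0_hz(midi: int) -> float:
    """Equal-tempered fundamental frequency for a MIDI note number."""
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)

def duration_s(beats: float, tempo_bpm: float) -> float:
    """Note duration in seconds at the given tempo."""
    return beats * 60.0 / tempo_bpm

# Targets that a back end could use to override the TTS model's
# spoken pitch and duration, one entry per sung syllable:
melody = [Note(67, 1.0, "do"), Note(69, 1.0, "re"), Note(71, 2.0, "mi")]
targets = [(n.syllable, f0_hz(n.midi), duration_s(n.beats, tempo_bpm=120))
           for n in melody]
for syl, f0, dur in targets:
    print(f"{syl}: {f0:7.2f} Hz for {dur:.2f} s")
```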
  • Masterarbeit (Master's Thesis)
    Master's thesis: Creation of a Speech Database and of a Program for Its Analysis, in the Context of Speech Synthesis with Spectral Models (original title: Erstellung einer Sprachdatenbank sowie eines Programms zu deren Analyse im Kontext einer Sprachsynthese mit spektralen Modellen). Submitted for the degree of Master of Science to the Department of Mathematics, Natural Sciences and Computer Science of the Technische Hochschule Mittelhessen by Tobias Platen, August 2014. Examiner: Prof. Dr. Erdmuthe Meyer zu Bexten; co-examiner: Prof. Dr. Keywan Sohrabi. The thesis opens with the standard declaration that the work was produced independently, using only the cited literature and aids, and has not been submitted or published elsewhere. Contents: 1 Introduction (1.1 Motivation; 1.2 Goals; 1.3 Historical speech synthesizers: 1.3.1 The Speaking Machine, 1.3.2 The Vocoder and the Voder, 1.3.3 Linear Predictive Coding; 1.4 Modern speech synthesis algorithms: 1.4.1 Formant synthesis, 1.4.2 Concatenative synthesis); 2 Spectral models for speech synthesis (2.1 Convolution, Fourier transform, and vocoders; 2.2 Phase vocoder; 2.3 Spectral Model Synthesis: 2.3.1 Harmonic Trajectories, 2.3.2 Shape Invariance ...)
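The thesis's "Spectral Model Synthesis" and "Harmonic Trajectories" chapters treat a voiced sound as a sum of harmonics tracking a time-varying fundamental. Here is a toy additive-resynthesis sketch of that idea, assuming numpy; the parameter values are illustrative and nothing here is taken from the thesis itself.

```python
import numpy as np

def harmonic_synthesis(f0_track, amps, sr=16000):
    """Additive resynthesis: a sum of harmonics following a time-varying f0.

    f0_track: per-sample fundamental frequency in Hz
    amps:     one amplitude per harmonic (fixed here for simplicity)
    """
    # Integrate frequency into phase so the trajectory stays continuous.
    phase = 2.0 * np.pi * np.cumsum(f0_track) / sr
    out = np.zeros_like(f0_track)
    for k, a in enumerate(amps, start=1):
        out += a * np.sin(k * phase)   # k-th harmonic follows k * f0
    return out / np.max(np.abs(out))   # normalize to [-1, 1]

sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
f0 = 220.0 + 20.0 * t                  # fundamental gliding from 220 to 240 Hz
signal = harmonic_synthesis(f0, amps=[1.0, 0.5, 0.25, 0.125], sr=sr)
```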
  • Expression Control in Singing Voice Synthesis
    IEEE Signal Processing Magazine feature article: Expression Control in Singing Voice Synthesis: Features, Approaches, Evaluation, and Challenges. Martí Umbert, Jordi Bonada, Masataka Goto, Tomoyasu Nakano, and Johan Sundberg. Digital Object Identifier 10.1109/MSP.2015.2424572; date of publication: 13 October 2015. In the context of singing voice synthesis, expression control manipulates a set of voice features related to a particular emotion, style, or singer. Also known as performance modeling, it has been approached from different perspectives and for different purposes, and different projects have shown a wide extent of applicability. The aim of this article is to provide an overview of approaches to expression control in singing voice synthesis. We introduce some musical applications that use singing voice synthesis techniques to justify the need for an accurate control of expression. Then, expression is defined and related to speech and instrument performance modeling. Next, we present the commonly studied set of voice parameters that can change perceptual aspects of synthesized voices. [...] voices that are difficult to produce naturally (e.g., castrati). More examples can be found with pedagogical purposes or as tools to identify perceptually relevant voice properties [3]. These applications of the so-called music information research field may have a great impact on the way we interact with music [4]. Examples of research projects using singing voice synthesis technologies are listed in Table 1.

    Table 1. Research projects using singing voice synthesis technologies.
      Cantor: http://www.virsyn.de
      Cantor Digitalis: https://cantordigitalis.limsi.fr/
      Chanter: https://chanter.limsi.fr
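Among the voice features the article surveys, the pitch contour, with its vibrato rate and depth, is a central handle for expression control. The sketch below shows one way such a contour could be parameterized; the function name, the attack gesture, and the default values are assumptions for illustration, not the article's model.

```python
import numpy as np

def expressive_f0(base_f0, dur_s, vib_rate=5.5, vib_cents=50.0,
                  attack_s=0.1, sr=100):
    """Pitch contour for one note: a short attack glide plus vibrato.

    vib_rate is in Hz, vib_cents is the peak deviation in cents;
    sr is the control rate (contour samples per second).
    """
    t = np.arange(int(dur_s * sr)) / sr
    # Short rising glide into the note, a common expressive gesture.
    attack = np.minimum(t / attack_s, 1.0)
    glide = base_f0 * (0.97 + 0.03 * attack)
    # Vibrato as a periodic deviation in cents around the glide.
    cents = vib_cents * np.sin(2.0 * np.pi * vib_rate * t)
    return glide * 2.0 ** (cents / 1200.0)

contour = expressive_f0(base_f0=261.63, dur_s=1.5)  # a C4 note, 1.5 s
```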
  • A Unit Selection Text-to-Speech-and-Singing Synthesis Framework from Neutral Speech: Proof of Concept
    Doctoral thesis: Adding expressiveness to unit selection speech synthesis and to numerical voice production. Marc Freixes Guerreiro, Universitat Ramon Llull. http://hdl.handle.net/10803/672066. The TDX repository page carries the standard Catalan and Spanish rights notices: the thesis may be used for personal study or research and in teaching activities under Art. 32 of the consolidated Spanish Intellectual Property Law (RDL 1/1996); any other use, for-profit exploitation, or framing outside TDX requires the author's prior express permission, with author and title clearly credited.
  • Synthèse du chant (Singing Voice Synthesis)
    Master 2 ATIAM internship report: Synthèse du chant (singing voice synthesis). Luc Ardaillon, 01/03/2013 - 31/07/2013. Host laboratory: IRCAM, Sound Analysis-Synthesis team; supervisor: Axel Roebel. Abstract: The internship evaluates and adapts existing technologies for singing voice synthesis, in order to identify the problems to be solved and to propose suitable solutions. To this end, a synthesis system based on unit concatenation and transformation was developed. Given a score and a text, the system must produce an audio file of singing voice whose rendering is as natural and expressive as possible. A previously recorded and segmented database supplies the segments needed for synthesis, selected according to the phonetization of the input text. Various processing steps then smooth the junctions between these segments to make them imperceptible. To match the result to the score, the superVP software is used for analysis, transformation, and resynthesis of the sounds, applying recent high-quality algorithms, notably for transposition and time stretching. Finally, some directions for adding expressivity and achieving a more natural rendering were explored, with the implementation of control rules for the various parameters.
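The report's core pipeline concatenates database segments and then smooths the junctions so they become imperceptible. A crude stand-in for that smoothing step is an equal-power crossfade, sketched below with synthetic "units"; the real system relies on IRCAM's superVP and far more careful processing.

```python
import numpy as np

def crossfade_concat(a, b, overlap):
    """Join two recorded units with an equal-power crossfade over
    `overlap` samples, a toy stand-in for junction smoothing."""
    fade = np.linspace(0.0, 1.0, overlap)
    fade_out = np.cos(0.5 * np.pi * fade)   # equal-power curves:
    fade_in = np.sin(0.5 * np.pi * fade)    # out^2 + in^2 == 1
    return np.concatenate([
        a[:-overlap],
        a[-overlap:] * fade_out + b[:overlap] * fade_in,
        b[overlap:],
    ])

sr = 16000
t = np.arange(sr) / sr
unit_a = np.sin(2 * np.pi * 220 * t)   # placeholder "units"; a real system
unit_b = np.sin(2 * np.pi * 220 * t)   # would use segmented voice recordings
song = crossfade_concat(unit_a, unit_b, overlap=sr // 100)  # 10 ms overlap
```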
  • Review on Expression Control on Singing Voice Synthesis.Docx
    Expression Control in Singing Voice Synthesis: Features, Approaches, Evaluation, and Challenges. Martí Umbert, Jordi Bonada, Masataka Goto, Tomoyasu Nakano, and Johan Sundberg. M. Umbert and J. Bonada are with the Music Technology Group (MTG) of the Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain, e-mail: {marti.umbert, jordi.bonada}@upf.edu. M. Goto and T. Nakano are with the Media Interaction Group, Information Technology Research Institute (ITRI) at the National Institute of Advanced Industrial Science and Technology (AIST), Japan, e-mail: {m.goto, t.nakano}@aist.go.jp. J. Sundberg is with the Voice Research Group of the Department of Speech, Music and Hearing (TMH) at the Royal Institute of Technology (KTH), Stockholm, Sweden, e-mail: [email protected]. In the context of singing voice synthesis, expression control manipulates a set of voice features related to a particular emotion, style, or singer. Also known as performance modeling, it has been approached from different perspectives and for different purposes, and different projects have shown a wide extent of applicability. The aim of this article is to provide an overview of approaches to expression control in singing voice synthesis. Section I introduces some musical applications that use singing voice synthesis techniques to justify the need for an accurate control of expression. Then, expression is defined and related to speech and instrument performance modeling. Next, Section II presents the commonly studied set of voice parameters that can change perceptual aspects of synthesized voices. Section III provides, as the main topic of this review, an up-to-date classification, comparison, and description of a selection of approaches to expression control.
  • Voice Source Modelling Techniques for Statistical Parametric Speech Synthesis
    Doctoral dissertation: Voice Source Modelling Techniques for Statistical Parametric Speech Synthesis. Tuomo Raitio, Aalto University, Department of Signal Processing and Acoustics; Aalto University publication series Doctoral Dissertations 40/2015. ISBN 978-952-60-6136-8 (printed), ISBN 978-952-60-6137-5 (pdf); ISSN 1799-4934 (printed), ISSN 1799-4942 (pdf). Abstract: Speech is the most natural way through which humans communicate, and today speech synthesis is utilised in various applications. However, the performance of modern speech synthesisers falls far short of the abilities of human speakers: synthesising intelligible and natural-sounding speech with desired contextual and speaker characteristics and an appropriate speaking style is extremely difficult. This thesis aims to improve both the naturalness and the expressivity of speech synthesis by proposing new methods for voice source modelling in statistical parametric speech synthesis. With accurate estimation and appropriate modelling of the voice source signal, which is known to be the origin of several essential acoustic cues in spoken communication, various expressive voices are created with a high degree of naturalness and intelligibility. Evaluations in various listening contexts show that speech created with the proposed methods is assessed to be more suitable than that generated with current techniques, thus providing potentially large benefits in many speech synthesis applications.
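To make the notion of a parametric voice source concrete, the sketch below generates the classic Rosenberg glottal pulse, a textbook baseline rather than the thesis's own models (which build on glottal inverse filtering); the parameter values are illustrative.

```python
import numpy as np

def rosenberg_pulse(n, open_frac=0.6, close_frac=0.3):
    """One Rosenberg glottal flow pulse of n samples.

    open_frac:  rising (opening) phase as a fraction of the period
    close_frac: falling (closing) phase as a fraction of the period
    """
    n_open = int(n * open_frac)
    n_close = int(n * close_frac)
    pulse = np.zeros(n)
    i = np.arange(n_open)
    pulse[:n_open] = 0.5 * (1.0 - np.cos(np.pi * i / n_open))   # 0 -> 1
    j = np.arange(n_close)
    pulse[n_open:n_open + n_close] = np.cos(0.5 * np.pi * j / n_close)  # 1 -> 0
    return pulse  # remainder of the period stays at zero (closed phase)

# A 110 Hz voiced excitation train at 16 kHz: repeat the pulse every period.
sr, f0 = 16000, 110.0
period = int(sr / f0)
excitation = np.tile(rosenberg_pulse(period), 20)
```

In a statistical parametric synthesizer, parameters like these would be predicted per frame by the acoustic model instead of being fixed constants.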
  • First Report on User Requirements Identification and Analysis
    Technical report, August 2013. DOI: 10.13140/2.1.3418.6564. Main authors: Francesca Pozzi (ITD-CNR), Marilena Alivizatou (UCL), Michela Ott (ITD-CNR), Francesca Dagnino (ITD-CNR), and Alessandra Antonaci (ITD-CNR). Deliverable D2.1 of the project i-Treasures: Intangible Treasures – Capturing the Intangible Cultural Heritage and Learning the Rare Know-How of Living Human Treasures. Contract no. FP7-ICT-2011-9-600676; instrument: Large Scale Integrated Project (IP); thematic priority: ICT for access to cultural resources. Project start: 1 February 2013; duration: 48 months. Deliverable due 1 August 2013; actual submission 6 August 2013 (2nd version of D2.1). Project funded by the European Community under the 7th Framework Programme for Research.
  • Singing Voice Resynthesis Using Concatenative-Based Techniques
    PhD dissertation: Singing Voice Resynthesis Using Concatenative-Based Techniques. Nuno Miguel da Costa Santos Fonseca, Faculty of Engineering, University of Porto, Department of Informatics Engineering, December 2011. Submitted in partial fulfilment of the requirements for the doctoral degree in Informatics Engineering, supervised by Prof. Aníbal João de Sousa Ferreira (Department of Electrical and Computer Engineering) and Prof. Ana Paula Cunha da Rocha (Department of Informatics Engineering), both of the Faculty of Engineering, University of Porto. This dissertation had the kind support of FCT (Portuguese Foundation for Science and Technology, an agency of the Portuguese Ministry for Science, Technology and Higher Education) under grant SFRH/BD/30300/2006, and has been articulated with research project PTDC/SAU-BEB/104995/2008 (Assistive Real-Time Technology in Singing), whose objectives include the development of interactive technologies helping the teaching and learning of singing. Copyright © 2011 by Nuno Miguel C. S. Fonseca. All rights reserved.
  • Analysis on Using Synthesized Singing Techniques in Assistive Interfaces for Visually Impaired to Study Music
    GSTF Journal on Computing (JoC), Vol. 4, No. 4, April 2016. Kavindu Ranasinghe and Lakshman Jayaratne. Abstract: Tactile and auditory senses are the basic means by which visually impaired people sense the world, and their interaction with assistive technologies likewise focuses mainly on tactile and auditory interfaces. This paper discusses the validity of using the most appropriate singing-synthesis techniques as a mediator in assistive technologies built specifically to address their music learning needs involving music scores and lyrics. Music scores with notations and lyrics are the main mediators in the musical communication channel between a composer and a performer, yet visually impaired music lovers have little opportunity to access this mediator, since most scores exist only in visual format. An earlier study depicted that the singing output proposed by that initial research solution is the best output method compared to existing music Braille output, with 81% of user votes. Singing best fits a format in the temporal domain, in contrast to a visual format such as a notation and lyric script in the spatial domain. But the expected outcome of the research is not just synthesized singing; it is an interface capable of successfully communicating, through auditory objects, all forms of information that can be embedded within a music notation script. Some recent research on auditory displays examines how the human auditory system can be used as the primary interface channel for communicating and transmitting...
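The paper's stated goal is to convey the information embedded in a notation script through auditory objects rather than visual symbols. One possible mapping, entirely assumed here rather than taken from the authors' implementation, is sketched below: each score event becomes a sung pitch plus a spoken description a learner could step through.

```python
from dataclasses import dataclass

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

@dataclass
class AuditoryObject:
    f0_hz: float   # pitch to be sung by the synthesizer
    spoken: str    # spoken fallback describing the same notation symbol

def describe(midi: int, beats: float, lyric: str) -> AuditoryObject:
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)   # e.g. 60 -> "C4"
    f0 = 440.0 * 2.0 ** ((midi - 69) / 12.0)
    return AuditoryObject(f0, f"{name}, {beats} beats, lyric '{lyric}'")

score = [(60, 1.0, "twin"), (60, 1.0, "kle"), (67, 1.0, "twin"), (67, 1.0, "kle")]
for midi, beats, lyric in score:
    obj = describe(midi, beats, lyric)
    print(f"sing {obj.f0_hz:6.1f} Hz | say: {obj.spoken}")
```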
  • State of Art of Real-Time Singing Voice Synthesis
    State of art of real-time singing voice synthesis. Leonardo Araujo Zoehler Brum and Edward David Moreno, Programa de Pós-graduação em Ciência da Computação, Universidade Federal de Sergipe, Av. Marechal Rondon, s/n, Jardim Rosa Elze, 49100-000 São Cristóvão, SE. [email protected], [email protected]. Abstract: This paper describes the state of the art of real-time singing voice synthesis and presents its concept, applications, and technical aspects. A technological mapping and a literature review are made in order to indicate the latest developments in this area, together with a brief comparative analysis of the selected works. Finally, challenges and future research problems are discussed. Keywords: real-time singing voice synthesis, sound synthesis, TTS, MIDI, computer music. 1. Introduction: The aim of singing voice synthesis is to computationally generate a song, given its musical notes and lyrics [1]. Hence, it is a branch of text-to-speech (TTS) technology [2] combined with techniques of musical sound synthesis. This work is organized as follows: Section 2 describes the theoretical requisites on which singing voice synthesis in general rests; Section 3 presents a technological mapping of the patents registered in this area; Section 4 gives the systematic literature review; Section 5 contains a comparative analysis of the selected works; Section 6 discusses the challenges and future tendencies for this field; finally, Section 7 presents a brief conclusion. 2. Theoretical framework: Singing voice synthesis takes two elements as input data: the lyrics of the song to be synthesized, and musical parameters that indicate sound qualities. The lyrics can be inserted according to the orthography of the respective language or through some phonetic notation, like SAMPA.
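Section 2's framing, lyrics (orthographic or SAMPA phonetic) plus musical parameters as the two inputs, maps naturally onto the note-event records that MIDI-driven real-time synthesizers consume. A hedged sketch of such an input record follows; the field names are assumptions, not drawn from any system the survey covers.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One input event for a real-time singing synthesizer:
    lyrics plus the musical parameters that shape the sound."""
    phonemes: list[str]   # lyric syllable as SAMPA symbols
    midi_note: int        # pitch, as in a MIDI note-on message
    velocity: int         # loudness cue, 0-127 as in MIDI
    duration_s: float     # how long to sustain the note

# "sol" sung on G4, mezzo-forte, for half a second:
event = NoteEvent(phonemes=["s", "O", "l"], midi_note=67,
                  velocity=80, duration_s=0.5)
```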