Multimedia Data Mining: A Systematic Introduction to Concepts and Theory


Chapman & Hall/CRC Data Mining and Knowledge Discovery Series

SERIES EDITOR
Vipin Kumar
University of Minnesota
Department of Computer Science and Engineering
Minneapolis, Minnesota, U.S.A.

AIMS AND SCOPE
This series aims to capture new developments and applications in data mining and knowledge discovery, while summarizing the computational tools and techniques useful in data analysis. This series encourages the integration of mathematical, statistical, and computational methods and techniques through the publication of a broad range of textbooks, reference works, and handbooks. The inclusion of concrete examples and applications is highly encouraged. The scope of the series includes, but is not limited to, titles in the areas of data mining and knowledge discovery methods and applications, modeling, algorithms, theory and foundations, data and knowledge visualization, data mining systems and tools, and privacy and security issues.

PUBLISHED TITLES
UNDERSTANDING COMPLEX DATASETS: Data Mining with Matrix Decompositions, David Skillicorn
COMPUTATIONAL METHODS OF FEATURE SELECTION, Huan Liu and Hiroshi Motoda
CONSTRAINED CLUSTERING: Advances in Algorithms, Theory, and Applications, Sugato Basu, Ian Davidson, and Kiri L. Wagstaff
KNOWLEDGE DISCOVERY FOR COUNTERTERRORISM AND LAW ENFORCEMENT, David Skillicorn
MULTIMEDIA DATA MINING: A Systematic Introduction to Concepts and Theory, Zhongfei Zhang and Ruofei Zhang

Multimedia Data Mining: A Systematic Introduction to Concepts and Theory
Zhongfei Zhang
Ruofei Zhang

The cover images were provided by Yu He, who also participated in the design of the cover page.

Chapman & Hall/CRC, Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2009 by Taylor & Francis Group, LLC
Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-13: 978-1-58488-966-3 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Zhang, Zhongfei.
Multimedia data mining : a systematic introduction to concepts and theory / Zhongfei Zhang, Ruofei Zhang.
p. cm. -- (Chapman & Hall/CRC data mining and knowledge discovery series)
Includes bibliographical references and index.
ISBN 978-1-58488-966-3 (hardcover : alk. paper)
1. Multimedia systems. 2. Data mining. I. Zhang, Ruofei. II. Title. III. Series.
QA76.575.Z53 2008
006.7--dc22
2008039398

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

To my parents, Yukun Zhang and Ming Song; my sister, Xuefei; and my sons, Henry and Andrew
Zhongfei (Mark) Zhang

To my parents, sister, and wife for their support and tolerance
Ruofei Zhang

Foreword

I am delighted to introduce the first book on multimedia data mining. When I came to know about this book project undertaken by two of the most active young researchers in the field, I was pleased that this book is coming at an early stage of a field that will need it more than most fields do. In most emerging research fields, a book can play a significant role in bringing some maturity to the field. Research fields advance through research papers. In research papers, however, only a limited perspective can be provided about the field, its application potential, and the techniques required and already developed in the field. A book gives such a chance. I liked the idea that there will be a book that will try to unify the field by bringing in disparate topics already available in several papers that are not easy to find and understand. I was supportive of this book project even before I had seen any material on it. The project was a brilliant and a bold idea by two active researchers. Now that I have it on my screen, it appears to be an even better idea.

Multimedia started gaining recognition in the 1990s as a field. Processing, storage, communication, and capture and display technologies had advanced enough that researchers and technologists started building approaches to combine information in multiple types of signals such as audio, images, video, and text. Multimedia computing and communication techniques recognize correlated information in multiple sources as well as the insufficiency of information in any individual source. By properly selecting sources to provide complementary information, such systems aspire, much like the human perception system, to create a holistic picture of a situation using only partial information from separate sources.

Data mining is a direct outgrowth of progress in data storage and processing speeds.
When it became possible to store large volumes of data and run different statistical computations to explore all possible and even unlikely correlations among data, the field of data mining was born. Data mining allowed people to hypothesize relationships among data entities and explore support for those. This field has been applied to applications in many diverse domains and keeps getting more applications. In fact, many new fields are a direct outgrowth of data mining, and it is likely to become a powerful computational tool behind many emerging natural and social sciences.

Considering the volume of multimedia data and the difficulty in developing machine perception systems to bridge the semantic gap, it is natural that multimedia and data mining will come closer and be applied to some of the most challenging problems. And that has started to happen. Some of the toughest challenges for data mining are posed by multimedia systems. Similarly, the potentially most rewarding applications of data mining may come from multimedia data.

As is natural and common, in the early stages of a field people explore only incremental modifications to existing approaches, and multimedia data mining is no exception. Most early tools deal with data in a single medium such as images. This is a good start, but the real challenges are in dealing with multimedia data to address problems that cannot be solved using a single medium. A major limitation of machine perception approaches, so obvious in computer vision but equally common in all other signal-based systems, is their over-reliance on a single medium. By using multimedia data, one can use an analysis context that is created by a data set of one medium to solve complex problems using data from other media. In a way, multimedia data mining could become a field where analysis will proceed through mutual context propagation approaches. I do hope that some young researchers will be motivated to address these rewarding areas.

This book is the very first monograph on multimedia data mining. The book presents state-of-the-art materials in the area of multimedia data mining, with three distinguishing features. First, this book brings together the literature of multimedia data mining, defines what this area is about, and puts multimedia data mining in perspective compared to other, more well-established research areas.

Second, the book includes extensive coverage of the foundational theory of multimedia data mining with state-of-the-art materials, ranging from feature extraction and representations, to knowledge representations, to statistical learning theory and soft computing theory. Substantial effort is spent to ensure that the theory and techniques included in the book represent the state-of-the-art research in this area. Though not exhaustive, this book provides a comprehensive, systematic introduction to the theoretical foundations of multimedia data mining.

Third, in order to showcase to readers the potential and practical applications of the research in multimedia data mining, the book gives specific applications of multimedia data mining theory in order to solve real-world multimedia data mining problems, ranging from image search and mining, to image annotation, to video search and mining, and to audio classification.
Recommended publications
  • How You Can Help Your Child in French Immersion
    Created by the Staff of St. Angela School, French Language Arts. HOW YOU CAN HELP YOUR CHILD IN FRENCH IMMERSION. Here are some suggestions that will support your child in a second language program. Be positive: just a little work and encouragement on your part can make a significant difference to your child's attitude towards and achievement in French. • Help your child make connections in language (for example: banane - banana). • Point out French in your community, for example, signs, labels, brochures, neighbours, street names. • Provide some out-of-school language and cultural experiences. • Support your child's learning by providing the necessary tools: an English/French dictionary and a dictionary of synonyms (like a thesaurus). For more information on how to support your child in French Immersion, view parentGuideFrench.pdf (Yes - You Can Help Your Child in French) on the Alberta Education website, or the Homework Toolbox from Ontario. Bring French into Your Home: • Use labels from can and food packages to make a collage or collect them in a scrapbook. They won't realize they are learning vocabulary, spelling, and sorting. • Play I Spy in French. Prepare for the game by printing on cards the French names for objects in a particular room of the house. Don't know the name of an object? Let your child know it's OK not to know something. Make a point of finding out the word you didn't know from an older sibling, a teacher, a visual dictionary or an English/French dictionary. • Have your child label as many objects in your house as possible.
  • Beyond Flickr: Not All Image Tagging Is Created Equal
    Language-Action Tools for Cognitive Artificial Agents: Papers from the 2011 AAAI Workshop (WS-11-14). Beyond Flickr: Not All Image Tagging Is Created Equal. Judith L. Klavans (1), Raul Guerra (1), Rebecca LaPlante (1), Robert Stein (2), and Edward Bachta (2). (1) University of Maryland College Park, (2) Indianapolis Museum of Art. {jklavans, laplante}@umd.edu, [email protected], {rstein, ebachta}@imamuseum.org
    Abstract: This paper reports on the linguistic analysis of a tag set of nearly 50,000 tags collected as part of the steve.museum project. The tags describe images of objects in museum collections. We present our results on morphological, part-of-speech, and semantic analysis. We demonstrate that deeper tag processing provides valuable information for organizing and categorizing social tags. This promises to improve access to museum objects by leveraging the characteristics of tags and the relationships between them rather than treating them as individual items. The paper shows the value of using deep computational linguistic techniques in interdisciplinary projects on tagging over images of objects in museums and libraries. We compare our data and analysis to Flickr and other image tagging projects.
    ... or the Art and Architecture Thesaurus (Getty 2010), and (c) clustering (Becker, Naaman, and Gravano 2010) to characterize an image? (2) What are the optimal linguistic processes to normalize tags, since these steps have impact on later processing? (3) In what ways can social tags be associated with other information to improve users' access to museum objects?
    Description of the Tag Data: The steve project (http://www.steve.museum) is a multi-institutional collaboration exploring applications of tagging ...
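The tag normalization asked about in question (2) above can be sketched in a few lines of Python. This is only an illustrative sketch using NLTK (it assumes the 'averaged_perceptron_tagger' and 'wordnet' corpora have been downloaded); the sample tags and the helper function are invented for demonstration and are not the steve.museum project's actual pipeline.

```python
# Hedged sketch of basic social-tag normalization: lowercase, POS-tag,
# and lemmatize free-text tags. Sample tags below are invented.
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet as wn

def penn_to_wordnet(tag):
    """Map a Penn Treebank POS tag to a WordNet POS constant."""
    if tag.startswith("J"):
        return wn.ADJ
    if tag.startswith("V"):
        return wn.VERB
    if tag.startswith("R"):
        return wn.ADV
    return wn.NOUN

def normalize_tags(raw_tags):
    """Lowercase, POS-tag, and lemmatize a list of social tags."""
    lemmatizer = WordNetLemmatizer()
    cleaned = [t.strip().lower() for t in raw_tags if t.strip()]
    tagged = nltk.pos_tag(cleaned)          # part-of-speech analysis
    return [lemmatizer.lemmatize(word, penn_to_wordnet(pos))
            for word, pos in tagged]

print(normalize_tags(["Paintings", "women", "dancing", "Impressionism"]))
# e.g. ['painting', 'woman', 'dancing', 'impressionism']
```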
  • A Visual Inter Lingua Neil Edwin Michael Leemans Worcester Polytechnic Institute
    Worcester Polytechnic Institute, Digital WPI, Doctoral Dissertations (All Dissertations, All Years), Electronic Theses and Dissertations, 2001-04-24. VIL: A Visual Inter Lingua. Neil Edwin Michael Leemans, Worcester Polytechnic Institute. Follow this and additional works at: https://digitalcommons.wpi.edu/etd-dissertations
    Repository Citation: Leemans, N. E. (2001). VIL: A Visual Inter Lingua. Retrieved from https://digitalcommons.wpi.edu/etd-dissertations/154
    This dissertation is brought to you for free and open access by Digital WPI. It has been accepted for inclusion in Doctoral Dissertations (All Dissertations, All Years) by an authorized administrator of Digital WPI. For more information, please contact [email protected].
    VIL: A Visual Inter Lingua, by Neil Edwin Michael (Paul) Leemans. A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science, April 2001. APPROVED: Dr. Lee A. Becker, Major Advisor; Dr. David C. Brown, Committee Member; Dr. Norman Wittels, Committee Member, Department of Civil and Environmental Engineering; Dr. Stanley S. Selkow, Committee Member; Dr. Micha Hofri, Head of Department.
  • Igor Burkhanov. Lexicography: a Dictionary of Basic Terminology. 1998, 285 Pp
    Resensies / Reviews. Igor Burkhanov. Lexicography: A Dictionary of Basic Terminology. 1998, 285 pp. ISBN 83-87288-56-X. Rzeszów: Wyższa Szkoła Pedagogiczna. Price 16 PLN. A long-awaited dictionary has at last been published, going beyond a glossary of lexicographic terms (cf. Robinson 1984) or a dictionary of a particular aspect of lexicography (cf. Cluver 1989). It is the first dictionary of basic lexicography terms. In 265 pages Burkhanov gives us a balanced view of the state of the art in dictionary form. It comprises terms typical of lexicographic work, including terms originally related to different linguistic disciplines. The selection of the data is founded, as can be inferred from the bibliography (p. 267-285), on a solid basis. It covers publications over the last twenty years, deals with various aspects of the field and includes the most recent trends from the late eighties and the nineties, e.g. Bergenholtz and Tarp (1995), Bright (1992), Bussmann (1996), Cowie (1987), Cluver (1989), Diab (1990), Fillmore and Atkins (1992), Ilson (1991), Benson, Benson and Ilson (1986), Landau (1989), Hüllen (1990), Snell-Hornby (1990). It is not difficult to find one's way in the dictionary: few symbols are used, most important of which are the ones indicating cross-references within the word-list. In this dictionary, though the arrangement of terms is alphabetical, conceptual relatedness between terms is not lost. Related terms are listed at the end of the explanatory text with the guide words "see also" and "compare". "See also" refers to superordinate and subordinate terms, e.g.
  • Promoting Fluency by Building the Sight Word Lexicon: Direct Phoneme-Grapheme Mapping and Other Techniques That Facilitate the Process of Orthographic Mapping
    Promoting Fluency by Building the Sight Word Lexicon: Direct Phoneme-Grapheme Mapping and Other Techniques that Facilitate the Process of Orthographic Mapping. Research interest: building and developing students' orthographic lexicons (sight word vocabularies) so students have a growing number of words they are able to read instantly and effortlessly, to support reading fluency. Focus of project: improving and remediating fluency in struggling readers by focusing on applying research about how students remember the words they read. Beginning readers must develop the ability to learn to read words from memory accurately and automatically in order to read fluently. Many students who struggle with reading fluency receive interventions designed to improve fluency, such as repeated readings, partner reading, choral reading, and timed reading. However, these interventions appear to be based on assumptions about how readers learn words that do not line up with the scientific findings about how skilled reading develops. For weak readers, repeated visual exposure to words does not substantially close the gap. Two Primary Abilities of Fluent, Skilled Reading (according to David Kilpatrick): 1. Being able to identify unfamiliar words by sounding them out (word reading); achieved through phonic decoding, using visual shapes/characteristics, context, guessing. 2. Being able to remember previously read words (word learning); achieved simply through recognizing words. Phonics does a great job teaching students how to read words, but it does not do an adequate job of helping students make the leap to learning words so they do not have to do the work of "reading" them anymore - they just know them. Most reading fluency interventions focus on improving the first skill - identifying words through phonic decoding, visual characteristics, context, or guessing.
  • Literature Review Chinmayi Dixit & Filippa Karrfelt
    Literature Review. Chinmayi Dixit & Filippa Karrfelt. Our project focuses on visualizing etymological relationships between words. To this effect, the most relevant work is Etymological Wordnet: Tracing the History of Words, by De Melo, G. This work provides an extensive dataset to see the connections between words, across languages. Although they have a very rudimentary tool to view the results, they have not yet worked toward visualizing the dataset. As for the world of visual communication of language data, there has been extensive work in showing wordnet structures within the English language, for relations such as synonyms, antonyms, usage, meanings, etc. However, we have not found a satisfactory visualization for the etymology of words. Some examples of these works are mentioned under visual dictionary interfaces in the reference section. One common theme that we noticed in these works is that trees were used most extensively. The visualizations are clear to understand and well represented when the number of nodes was small. In the case of Visualizing WordNet Structure by Jaap Kamps, as the number of nodes grew large, the graph became cluttered. Abrate et al., in WordNet Atlas: a web application for visualizing WordNet as a zoomable map, address the layout of nodes for better perception. They draw the wordnet out as a spatially arranged radial graph, with nodes arranged in concentric circles. This made large data sets more manageable and easier to navigate. One drawback, however, was that all the spatial location data had to be pre-calculated without real-time updates. We intend to work on visualizing the etymology in a cleaner and well-formatted manner.
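The concentric-circle layout attributed to WordNet Atlas above can be illustrated with a small, self-contained Python sketch. This is not the WordNet Atlas implementation; the toy hierarchy and the radius step are invented for demonstration.

```python
# Illustrative sketch: place nodes on concentric circles by their depth from
# a root, the radial layout style the review describes. Toy data only.
import math
from collections import defaultdict

def radial_layout(depths, radius_step=1.0):
    """Return {node: (x, y)} with nodes of equal depth spread on one circle."""
    rings = defaultdict(list)
    for node, depth in depths.items():
        rings[depth].append(node)
    positions = {}
    for depth, nodes in rings.items():
        r = depth * radius_step
        for i, node in enumerate(sorted(nodes)):
            theta = 2 * math.pi * i / len(nodes)
            positions[node] = (r * math.cos(theta), r * math.sin(theta))
    return positions

toy_depths = {"entity": 0, "animal": 1, "plant": 1, "dog": 2, "cat": 2, "tree": 2}
for node, (x, y) in radial_layout(toy_depths).items():
    print(f"{node:>6}: ({x:5.2f}, {y:5.2f})")
```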
  • Comparative Study on Currently Available Wordnets
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 10 (2018), pp. 8140-8145. © Research India Publications. http://www.ripublication.com
    Comparative Study on Currently Available WordNets. Sreedhi Deleep Kumar (PG Scholar), Reshma E U (PG Scholar), Sunitha C (Associate Professor), and Amal Ganesh (Assistant Professor), Department of Computer Science and Engineering, Vidya Academy of Science and Technology, Thrissur, India.
    Abstract: A WordNet is an information base which is arranged hierarchically in any language. Usually, a WordNet is implemented using an indexed file system. Good WordNets are available in many languages; however, Malayalam does not have an efficient WordNet. WordNets differ from dictionaries in their organization. A WordNet does not give pronunciation, derivational morphology, etymology, usage notes, or pictorial illustrations. A WordNet depicts the semantic relations between word senses more transparently and elegantly. In this work, a general comparison of currently browsable WordNets ...
    ... Sino-Tibetan, Tibeto-Burman, and Austro-Asiatic. The major ones are Indo-Aryan, spoken in the northern to western part of India, and Dravidian, spoken in the southern part of India. The Eighth Schedule of the Indian Constitution lists 22 languages, which have been referred to as scheduled languages and given recognition, status, and official encouragement. A dictionary can be described as a resource dealing with the individual words of a language along with their orthography, pronunciation, usage, synonyms, derivation, history, etymology, etc.
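The hierarchical organization described in the abstract can be explored directly with NLTK's WordNet reader. A minimal sketch follows, assuming the 'wordnet' and 'omw-1.4' corpora have already been downloaded; it queries the Princeton WordNet rather than any of the Indian-language WordNets compared in the paper.

```python
# Hedged sketch: browse synsets and their hypernym (is-a) hierarchy with NLTK.
from nltk.corpus import wordnet as wn

# Each word maps to one or more synsets (word senses) with definitions.
for syn in wn.synsets("dictionary"):
    print(syn.name(), "-", syn.definition())

# Walk up the hypernym chain from the first sense of "dog" toward the root.
node = wn.synsets("dog")[0]
while node.hypernyms():
    node = node.hypernyms()[0]
    print(node.lemma_names())

# The Open Multilingual Wordnet layer exposes wordnets for other languages.
print(sorted(wn.langs()))
```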
  • Japanese-English Bilingual Visual Dictionary Ebook, Epub
    JAPANESE-ENGLISH BILINGUAL VISUAL DICTIONARY. DK | 360 pages | 01 Feb 2018 | Dorling Kindersley Ltd | 9780241317556 | English | London, United Kingdom. Whether you're travelling for business or leisure, buying food or train tickets, discussing work or famous sights on the tourist trail, you'll gain confidence in your new language skills with this bilingual visual dictionary by your side. Using full color photographs and artworks to display and label all the elements of everyday life, from the home and office to sports, music, and nature, the Japanese-English Bilingual Visual Dictionary has over 1, photographs and illustrations and more than 6, annotated terms. This handy travel-sized guide also includes a detailed index for instant reference. The Japanese-English Bilingual Visual Dictionary comes in a travel-sized format with a pronunciation guide so you will never be stuck for the right word again. The audio app, available for Apple from the App Store and Android from Google Play, enables you to hear words and phrases spoken by native Japanese speakers.
  • Semantic Multimedia Modelling & Interpretation for Annotation
    Semantic Multimedia Modelling & Interpretation for Annotation. Irfan Ullah. Submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy, School of Engineering and Information Sciences, Middlesex University, May 2011. © 2011 Irfan Ullah. All rights reserved.
    Declaration: I certify that the thesis entitled Semantic Multimedia Modelling and Interpretation for Annotation submitted for the degree of doctor of philosophy is the result of my own work and I have fully cited and referenced all material and results that are not original to this work. Irfan Ullah, May 24th, 2011.
    Abstract: "If we knew what it was we were doing, it would not be called research, would it?" (Albert Einstein). The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, and the accelerated revolutions in low-cost storage devices boost the multimedia data production rate drastically. Witnessing such ubiquitousness of digital images and videos, the research community has been projecting the issue of their significant utilization and management. Stored in monumental multimedia corpora, digital data need to be retrieved and organized in an intelligent way, leaning on the rich semantics involved. The utilization of these image and video collections demands proficient image and video annotation and retrieval techniques. Recently, the multimedia research community is progressively veering its emphasis to the personalization of these media. The main impediment in image and video analysis is the semantic gap, which is the discrepancy between a user's high-level interpretation of an image or video and the low-level computational interpretation of it.
  • Italian English Bilingual Visual Dictionary (DK Visual Dictionaries) by DK
    Italian English Bilingual Visual Dictionary (DK Visual Dictionaries) by DK. Series: DK Visual Dictionaries. Paperback: 360 pages. Publisher: DK; Bilingual, Revised, Updated edition (April 18, 2017). Language: English. ISBN-10: 1465459308. ISBN-13: 978-1465459305. Product dimensions: 5.5 x 1 x 6.5 inches.
    Description: Now comes with a free companion audio app that allows readers to scan the pages to hear words spoken in both Italian and English. Newly revised and updated, the Italian-English Bilingual Visual Dictionary is a quick and intuitive way to learn and recall everyday words in Italian. Introducing a range of useful current vocabulary in thematic order, this dictionary uses full-color photographs and artworks to display and label all the elements of everyday life, from the home and office to sport, music, nature, and the countries of the world, with panel features on key nouns, verbs, and useful phrases. The Italian-English Bilingual Visual Dictionary features: • A quick and intuitive way to learn and remember thousands of words. • A complete range of illustrated objects and scenes from everyday life. • Fast and effective learning for any situation, from home and office to shopping and dining out. • Detailed index for instant reference. • Handy size ideal for travel. The illustrations provide a quick and intuitive route to learning a language, defining the words visually so it is easier to remember them and creating a colorful and stimulating learning resource for the foreign-language and EFL/ESL student.
  • Logic Based Methods for Terminological Assessment
    Logic Based Methods for Terminological Assessment. Benoît Robichaud, Observatoire de linguistique Sens-Texte (OLST), Université de Montréal. [email protected]
    Abstract: We present a new version of a Graphical User Interface (GUI) called DiCoInfo Visuel, mainly based on a graph visualization device and used for exploring and assessing lexical data found in DiCoInfo, a specialized e-dictionary of computing and the Internet. This new GUI version takes advantage of the fundamental nature of the lexical network encoded in the dictionary: it uses logic based methods from logic programming to explore relations between entries and find pieces of relevant information that may not be accessible by direct searches. The result is a more realistic and useful data coverage shown to end users. Keywords: electronic dictionary, Graphical User Interface, data visualization, logic programming, assessment tool.
    1. Introduction: This paper describes the goals, architecture and usability of a new version of a Graphical User Interface (GUI) used to browse and assess encoded data through a subset of selected lexical relations found in DiCoInfo (Dictionnaire fondamental de l'informatique et de l'Internet), an online multilingual specialized dictionary of computing and the Internet. The GUI allows searches in the dictionary entry fields and presents results as (directed) graphs. ... relationships that remained hidden in the preceding version; second, search recursion is used to compute transitive closures on selected (or all) subsets of lexical relations, allowing the implementation of other search strategies and layout improvements.
    2. Motivation and related work: This GUI project started with the idea that it was possible to improve the visual and communicative value ...
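The transitive-closure idea mentioned above (following lexical relations recursively to reach entries that a direct search misses) can be sketched independently of the paper's logic-programming machinery. The relation triples below are hypothetical and are not DiCoInfo data.

```python
# Illustrative sketch only: compute the set of entries reachable from a start
# entry by following lexical relations of any type (a transitive closure).
from collections import deque

# Hypothetical (entry, relation, entry) triples, invented for demonstration.
RELATIONS = [
    ("download", "related_verb", "upload"),
    ("upload", "related_noun", "transfer"),
    ("transfer", "hypernym", "operation"),
    ("download", "related_noun", "file"),
]

def reachable(start, triples):
    """Return every entry reachable from `start` by following any relation."""
    graph = {}
    for src, _rel, dst in triples:
        graph.setdefault(src, set()).add(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("download", RELATIONS))
# {'upload', 'file', 'transfer', 'operation'}: entries a one-step lookup misses.
```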
  • Comparing Phonemes and Visemes with DNN-Based Lipreading
    THANGTHAI, BEAR, HARVEY: PHONEMES AND VISEMES WITH DNNS. Comparing phonemes and visemes with DNN-based lipreading. Kwanchiva Thangthai (1) [email protected], Helen L Bear (2) [email protected], Richard Harvey (1) [email protected]. (1) School of Computing Sciences, University of East Anglia, Norwich, UK. (2) Computer Science & Informatics, University of East London, London, UK. arXiv:1805.02924v1 [cs.CV] 8 May 2018
    Abstract: There is debate if phoneme or viseme units are the most effective for a lipreading system. Some studies use phoneme units even though phonemes describe unique short sounds; other studies tried to improve lipreading accuracy by focusing on visemes, with varying results. We compare the performance of a lipreading system by modeling visual speech using either 13 viseme or 38 phoneme units. We report the accuracy of our system at both word and unit levels. The evaluation task is large vocabulary continuous speech using the TCD-TIMIT corpus. We complete our visual speech modeling via hybrid DNN-HMMs and our visual speech decoder is a Weighted Finite-State Transducer (WFST). We use DCT and Eigenlips as representations of the mouth ROI image. The phoneme lipreading system word accuracy outperforms the viseme based system word accuracy. However, the phoneme system achieved lower accuracy at the unit level, which shows the importance of the dictionary for decoding classification outputs into words.
    1 Introduction: As lipreading transitions from GMM/HMM-based technology to systems based on Deep Neural Networks (DNNs) there is merit in re-examining the old assumption that phoneme-based recognition outperforms recognition with viseme-based systems.
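The DCT representation of the mouth ROI mentioned in the abstract can be sketched as follows. This is an illustrative sketch only: a random array stands in for a real mouth crop, and the 32x32 ROI size and 6x6 coefficient block are assumptions rather than the paper's settings.

```python
# Hedged sketch of DCT features for a mouth ROI: take a 2-D DCT of the crop
# and keep a low-frequency block of coefficients as the frame's feature vector.
import numpy as np
from scipy.fftpack import dct

def dct2(image):
    """Orthonormal 2-D DCT (applied along rows, then columns)."""
    return dct(dct(image, axis=0, norm="ortho"), axis=1, norm="ortho")

def dct_features(roi, block=6):
    """Return the top-left block of DCT coefficients, flattened."""
    coeffs = dct2(roi.astype(np.float64))
    return coeffs[:block, :block].ravel()

rng = np.random.default_rng(0)
mouth_roi = rng.random((32, 32))       # stand-in for a grayscale mouth crop
features = dct_features(mouth_roi)
print(features.shape)                  # (36,) low-frequency coefficients per frame
```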