Statement of Research Interests Sumit Basu


sbasu@sumitbasu.net
http://www.media.mit.edu/~sbasu
Post-Doctoral Researcher, Microsoft Research
PhD (September 2002), MIT Department of Electrical Engineering and Computer Science
Thesis Advisor: Professor Alex (Sandy) Pentland, MIT Media Laboratory

My research objective is simple: I want to play with audio. I want to take auditory streams from the world, explore them, search through them, cut them apart, extract information from them, filter them, morph them, change them, and play them back into the ether. I want to find new and better ways to do these things and more, then teach them to others, both students and colleagues. I want to make audio useful and fun for a broad variety of communities. I want to build a myriad of interfaces, personal monitoring mechanisms, professional audio tools, toys, and instruments using these methods. In short, I want to do for audio what my colleagues in the computer vision and computer graphics communities have done with images and video.

Over the years, I have worked in human-computer interfaces, computer vision/graphics, signal processing, statistical modeling/machine learning, and of course computer audition. When I began my graduate studies, I was initially drawn to computer vision and Sandy Pentland's group at the MIT Media Lab because of their strong sense of play: they were extracting interesting meta-information from visual streams, such as the location of a user's head and hands, and using it for interactive applications like playing with a virtual dog. I joined their efforts and spent several years working on computer vision and interactive vision systems. After a while, though, I felt myself begin to drift towards audio. I sensed how much richness there was in auditory streams and what amazing potential they had for this same sort of play. Furthermore, work in this area was minimal: there was no equivalent of ACM's SIGGRAPH (the major computer graphics conference/journal) or IEEE's CVPR (a prominent computer vision conference) for audio; most everything in the audio community was heavily oriented towards speech recognition or low-level signal processing. Regardless, I decided to take my chances and play in the audio world.

Because of the winding path I've taken, my research has spanned the three areas I feel are critical to sophisticated auditory play: machine perception, human-computer interfaces, and machine learning. Machine perception gives us powerful mechanisms to detect and track low-level features, but machine learning is critical for building models and recognizing higher-level patterns in this data. In turn, the results of learning can be used to guide the parameters of the low-level perceptual mechanisms. Furthermore, while analysis and detection tasks are interesting on their own, they become far more so when used in a system that involves a human user. Though the real-time demands of human-computer interfaces often make the lower-level tasks more difficult, I believe they make for far more engaging and powerful systems. Things get even more interesting when these interfaces are improved by learning mechanisms: over time, the interfaces can adapt to the users' behavior or help the users adapt to a behavioral goal. For this to succeed, the system must be able to perceive the resulting changes in the users' behavior, and thus the cycle begins anew.
I've had the good fortune to work in many aspects of these areas. I began doing projects in image processing/computer vision in 1994, when I was working on image enhancement algorithms with hyperacuity sensing as an undergraduate intern at Xerox PARC. Since then, I have worked on optical flow regularization for 3D head-tracking, finite element models (as physical priors for deformable meshes), maximum likelihood tracking of deformable meshes, mesh-based smoothing, and more. My work in machine audition began as an undergraduate researcher at MIT working on speaker identification/clustering; as a graduate student I have worked on beamforming, pitch estimation, speech detection, source localization, speaking rate estimation, speaking style characterization, and conversational feature extraction. My interface projects began at PARC in 1993, developing applications for the ParcTAB (a pre-Palm Pilot handheld). I have since worked on the ALIVE (Artificial Life Interactive Virtual Environment) system, audio interfaces for wearable computers, "smart" headphones, the Facilitator Room project, analysis-augmented teleconferencing, the sonovar system for sound recombination/performance, and now video and audio browsing. My machine learning background also began at PARC, where I was exploring handwriting recognition algorithms for Unistrokes (a precursor to the Palm Pilot's handwriting system). Since then I have worked on learning physics for deformable meshes, various applications of dynamic programming, HMMs, belief propagation (both exact and approximate) in dynamic Bayesian networks (DBNs), and more. Many of these projects are described in more detail at http://web.media.mit.edu/~sbasu/projects.html.

My thesis is a good example of what I consider auditory play. It was inspired one night in 1998 when I was out having dinner and overheard a nearby couple speaking in another language. Though I had no idea what they were saying, it was clear that they were probably on one of their first few dates. This made me wonder what else we could figure out from only the tone of voice and the pattern of interaction, and the computer scientist in me wondered what subset of this I could train a machine to understand. I dubbed the topic "Conversational Scene Analysis" and began to think seriously about what we could infer about a conversation without understanding any of the words. The first steps seemed relatively simple: finding the conversational "scenes," i.e., regions during which one person or another is dominating; finding what type of conversation is occurring; and quantifying how a person in a given conversation is acting relative to their baseline speaking style. Before I could get to these problems, though, there was an array of feature estimation and pattern recognition problems to be dealt with: who was speaking when and how, i.e., with what features (pitch, energy, speaking rate; see slide 6 at http://web.media.mit.edu/~sbasu/talks/defense.html). It was important to me that my methods could obtain these features from distant microphones as well as from the cumbersome headsets of the speech community, which made many of these tasks significantly more challenging. Along the way, I also found a powerful method for determining whether two people were engaged in a conversation based only on their interaction pattern. I then moved on to characterizing conversational styles and patterns with data from the LDC CallHome database, a public repository of long telephone conversations between friends and family members.
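To make the feature-extraction step concrete, the following is a minimal sketch in Python with numpy; it is not my thesis code, and the frame size, pitch range, and voicing threshold are illustrative assumptions. It computes, per frame, the kinds of low-level measurements mentioned above: energy plus an autocorrelation-based pitch and voicing estimate for a mono signal sampled at 16 kHz.

    import numpy as np

    def frame_features(x, sr=16000, frame=512, hop=256,
                       f0_min=60.0, f0_max=400.0):
        """Per-frame (energy, pitch_hz, voiced) from a mono float signal x."""
        feats = []
        lag_min = int(sr / f0_max)   # shortest autocorrelation lag to search
        lag_max = int(sr / f0_min)   # longest lag to search
        for start in range(0, len(x) - frame, hop):
            w = x[start:start + frame] * np.hanning(frame)
            energy = float(np.sum(w ** 2))
            # Non-negative-lag autocorrelation, normalized by lag-0 energy
            ac = np.correlate(w, w, mode="full")[frame - 1:]
            ac = ac / (ac[0] + 1e-12)
            lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
            voiced = ac[lag] > 0.3   # crude voicing threshold (assumption)
            pitch = sr / lag if voiced else 0.0
            feats.append((energy, pitch, voiced))
        return feats

The actual methods have to be considerably more robust, especially for distant microphones; the sketch only shows the flavor of the per-frame measurements involved.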
What I found was fascinating: different conversations had markedly different signatures in terms of prosodic and interaction parameters. One of my favorite examples was a conversation in which a young woman in Finland is speaking with her parents back in the States. When listening to the conversation, it's clear that she is much more excited about talking to her mother than to her father. What delighted me was that this difference was quite clear in my features as well (see slide 41). As I had hoped, it seemed possible to quantitatively characterize the different interaction styles we take on with different people. This has obvious applications in surveillance, but more interestingly, I see it becoming the basis for a very useful "social feedback" mechanism: we all know we treat different people differently, and while the differences are clear to a third party, it's difficult for us to gauge them ourselves. Having them laid out for us quantitatively can help us assess our relationships and work towards improving them. Furthermore, I found that the interaction patterns for this database fell into a continuum of categories which we could use to characterize and browse conversations (slide 43; a toy sketch of such interaction statistics appears at the end of this statement). I believe this type of categorization is critical if we ever wish to browse the hours, days, and months of audio from our lives, a problem I am now working on at Microsoft Research under the title "Personal Audio." Since social interactions are key to our existence, I feel that our conversations form a large part of the story of our lives. As a result, I'm investigating the possibility of recording all audio on our bodies all the time and then browsing back through it using my conversational scene analysis methods, both as a means of retrieving information and as a way of keeping a diary of our lives.

I'm also investigating the possibility of using this kind of information as a health monitoring mechanism for manic-depressive patients. Along with Harvard medical student Vikram Kumar and Massachusetts General Hospital doctor/professor Roy Perlis, I'm involved in a drug study to assess whether conversational features can be used to gauge a patient's mental state. If this is successful, it could drastically increase the effectiveness of treatments, since patients could be monitored on a daily or hourly basis, allowing fine-tuning of medications and timely intervention. It could also greatly reduce the cost of mental health care by reducing the number of doctor visits. Finally, it could help patients take control of their own illnesses by providing them with a personal feedback tool, much as cholesterol meters and blood sugar monitors have done for patients with high cholesterol and diabetes.

For my future work, I see a broad variety of topics and application areas. The first is smart/aware environments, where the home, office, and car are transformed into active spaces that can analyze, help, and guide the user.
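As a toy illustration of the interaction-pattern side (again Python/numpy, and again an assumption rather than the thesis implementation), the following sketch computes a few conversational statistics from per-frame voice-activity flags for two speakers: each speaker's speaking fraction, the overlap fraction, the number of floor exchanges, and a dominance score.

    import numpy as np

    def interaction_stats(va_a, va_b):
        """Interaction statistics from boolean per-frame voice-activity arrays."""
        frac_a = va_a.mean()                 # fraction of frames A speaks
        frac_b = va_b.mean()
        overlap = (va_a & va_b).mean()       # both speaking at once
        # Floor holder per frame: +1 for A alone, -1 for B alone, 0 otherwise
        floor = np.where(va_a & ~va_b, 1, np.where(va_b & ~va_a, -1, 0))
        active = floor[floor != 0]
        turns = int(np.sum(active[1:] != active[:-1]))   # floor exchanges
        dominance = frac_a / (frac_a + frac_b + 1e-12)   # A's share of talk time
        return {"frac_a": float(frac_a), "frac_b": float(frac_b),
                "overlap": float(overlap), "turns": turns,
                "dominance": float(dominance)}

Aggregated over whole conversations, statistics of this general kind could form the per-pair signatures the Finland example suggests, distinguishing, say, the caller's exchanges with her mother from those with her father.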