Orchestra in a Box: A System for Real-Time Musical Accompaniment

Christopher Raphael
Department of Mathematics and Statistics, University of Massachusetts at Amherst, Amherst, MA 01003-4515, [email protected]

Abstract

We describe a computer system that plays a responsive and sensitive accompaniment to a live musician in a piece of non-improvised music. The system is composed of three components, "Listen," "Anticipate," and "Synthesize." Listen analyzes the soloist's acoustic signal and estimates note onset times using a hidden Markov model. Synthesize plays a prerecorded audio file back at variable rate using a phase vocoder. Anticipate creates a Bayesian network that mediates between Listen and Synthesize. The system has a learning phase, analogous to a series of rehearsals, in which model parameters for the network are estimated from training data. In performance, the system synthesizes the musical score, the training data, and the on-line analysis of the soloist's acoustic signal using a principled decision-making engine based on the Bayesian network. A live demonstration will be given using the aria "Mi chiamano Mimì" from Puccini's opera La Bohème.

1 Introduction

Musical accompaniment systems seek to emulate the task that a human musical accompanist performs: supplying a missing musical part, generated in real time, in response to the sound input from a live musician. As with the human musician, the accompaniment system should be flexible, responsive, able to learn from examples, and bring a sense of musicality to the task. Most, if not all, efforts in this area create the audio accompaniment through a sparse sequence of "commands" that control a computer sound synthesis engine or dedicated audio hardware [Dannenberg, 1984], [R. Dannenberg, 1988], [B. Vercoe, 1985], [B. Baird, 1993]. For instance, MIDI (musical instrument digital interface), the most common control protocol, generally describes each note in terms of a start time, an end time, and a "velocity." Some instruments, such as plucked string instruments, the piano, and other percussion instruments, can be reasonably reproduced through such cartoon-like performance descriptions. Most instruments, however, continually vary several attributes during the evolution of each note or phrase, so their MIDI counterparts sound reductive and unconvincing.

We describe here a new direction in our work on musical accompaniment systems: we synthesize audio output by playing back an audio recording at variable rate. In doing so, the system captures a much broader range of tone color and interpretive nuance while posing a significantly more demanding computational challenge.

Our system is composed of three components we call "Listen," "Anticipate," and "Synthesize." Listen tracks the soloist's progress through the musical score by analyzing the soloist's digitized acoustic signal. Essentially, Listen provides a running commentary on this signal, identifying the times at which solo note onsets occur, and delivering these times with variable latency. The combination of accuracy, computational efficiency, and automatic trainability provided by the hidden Markov model (HMM) framework makes HMMs well suited to the demands of Listen. A more detailed description of the HMM approach to this problem is given in [Raphael, 1999].

The actual audio synthesis is accomplished by our Synthesize module through the classic phase vocoding technique. Essentially, the phase vocoder is an algorithm enabling variable-rate playback of an audio file without introducing pitch distortions. The Synthesize module is driven by a sequence of local synchronization goals which guide the synthesis like a trail of bread crumbs.

The sequence of local goals is the product of the Anticipate module, which mediates between Listen and Synthesize.
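Before turning to the modules in detail, the following minimal sketch shows one plausible shape for the real-time loop connecting them. All class and method names here are illustrative assumptions; the paper does not specify this API.

```python
# Hypothetical sketch of the Listen/Anticipate/Synthesize interaction
# described above. Names and interfaces are assumptions for illustration.

class AccompanimentSystem:
    def __init__(self, listen, anticipate, synthesize):
        self.listen = listen            # HMM-based score follower
        self.anticipate = anticipate    # Bayesian-network scheduler
        self.synthesize = synthesize    # phase-vocoder playback engine

    def run(self, frames):
        # frames: ~31 acoustic frames per second (8 kHz / 256 samples)
        for frame in frames:
            onset = self.listen.process(frame)   # None until a solo note
            if onset is not None:                # onset is confidently seen
                # Condition the network on the new onset time and obtain
                # the next local synchronization goal for the accompaniment.
                goal = self.anticipate.update(onset)
                self.synthesize.set_goal(goal)   # retarget playback rate
```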
The heart of Anticipate is a Bayesian network consisting of hundreds of Gaussian random variables, including both observable quantities, such as note onset times, and unobservable quantities, such as local tempo. The network can be trained during a rehearsal phase to model both the soloist's and accompanist's interpretations of a specific piece of music. This model then constitutes the backbone of a principled real-time decision-making engine used in live performance for scheduling musical events. A more detailed treatment of various approaches to this problem is given in [Raphael, 2001] and [Raphael, 2002]. (This work was supported by NSF grants IIS-998789 and IIS-0113496.)

2 Listen

To follow a soloist, one must first hear the soloist; "Listen" is the component of our system that accomplishes this task.

We begin by dividing our acoustic signal into a collection of "snapshots" or "frames." In our current system the acoustic signal is sampled at 8 kHz with a frame size of 256 samples, leading to about 31 frames per second. In analyzing the acoustic signal, we seek to label each data frame with an appropriate score position. We begin by describing the label process.

The "score," containing both solo and accompaniment parts, is known to us. Using this information, for each note in the solo part we build a small Markov model with states associated with various portions of the note, such as "attack" and "sustain," as in Fig. 1. We use various graph topologies for different kinds of notes, such as short notes, long notes, rests, trills, and rearticulations. However, all of our models have tunable parameters that control the length distribution (in frames) of the note. Fig. 1 shows a model for a note that is followed by an optional silence, as would be encountered if the note is played staccato or if the player takes a breath or makes an expressive pause. The self-loops in the graph allow us to model a variety of note length distributions using a small collection of states. For instance, one can show that, for the model of Fig. 1, the number of frames following the attack state has a Negative Binomial distribution, with parameters determined by p and q as described in the figure. We create a model for each solo note in the score, such as the one of Fig. 1, and chain them together in left-to-right fashion to produce the hidden label, or state, process for our HMM. We write x_1, x_2, ... for the state process, where x_n is the state visited in the n-th frame; x_1, x_2, ... is a Markov chain.

[Figure 1: A Markov model for a note, allowing an optional silence at the end. The original diagram shows an attack state leading into a chain of self-looping sustain states, with transitions governed by probabilities p, q, and c into optional rest states.]

The state process x_1, x_2, ... is, of course, not observable. Rather, we observe the acoustic frame data. For each frame n we compute a feature vector y_n describing the local content of the acoustic signal in that frame. Most of the components of the vectors {y_n} are measurements derived from the finite Fourier transform of the frame data, useful for distinguishing various pitch hypotheses. Other components measure signal power, useful for distinguishing rests, and local activity, useful for identifying rearticulations and attacks. The conditional distributions p(y_n | x_n) of the features, given the states, can be learned in an unsupervised manner through the Baum-Welch, or Forward-Backward, algorithm. This allows our system to adapt automatically to changes in solo instrument, microphone placement, room acoustics, ambient noise, and choice of the accompaniment instrument. In addition, this automatic trainability has proven indispensable to the process of feature selection. A pair of simplifying assumptions make the learning process feasible. First, states are "tied" so that p(y_n | x_n) depends only on several attributes of the state x_n, such as the associated pitch and the "flavor" of the state (attack, rearticulation, sustain, etc.). Second, the feature vector is divided into several groups of features, y_n = (y_n^1, ..., y_n^F), assumed to be conditionally independent given the state:

    p(y_n \mid x_n) = \prod_{f=1}^{F} p(y_n^f \mid x_n).
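To make the note model and the tied, factorized observation model concrete, here is a minimal sketch. The state layout follows the description of Fig. 1, but the specific probabilities, feature groupings, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def note_transition_matrix(m_sustain, p):
    """Transition matrix for one note model in the spirit of Fig. 1: an
    attack state followed by m_sustain self-looping sustain states.
    Column m_sustain + 1 is the exit into the next note's model.
    The self-loop probability p is an illustrative assumption."""
    q = 1.0 - p
    n = 1 + m_sustain                  # attack + sustain states
    A = np.zeros((n, n + 1))
    A[0, 1] = 1.0                      # attack always advances
    for i in range(1, n):
        A[i, i] = p                    # linger in this sustain state
        A[i, i + 1] = q                # advance toward the next note
    # Dwell time past the attack is a sum of m_sustain geometric(q)
    # variables, i.e. Negative Binomial, as noted in the text.
    return A

# Tied, factorized likelihood: p(y_n | x_n) = prod_f p(y_n^f | x_n), where
# each factor depends only on state attributes (pitch, "flavor"), not on
# the state's identity. `group_models` is a hypothetical list of objects
# exposing logpdf(features, attributes).
def log_likelihood(feature_groups, state_attributes, group_models):
    return sum(model.logpdf(y, state_attributes)
               for model, y in zip(group_models, feature_groups))
```

Chaining the per-note matrices in left-to-right fashion, with each exit column feeding the next note's attack state, yields the full hidden state process for the HMM.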
As important as the automatic trainability is the estimation accuracy that our HMM approach yields. Musical signals are often ambiguous locally in time but become easier to parse with the benefit of longer-term hindsight. The HMM approach handles this local ambiguity naturally through its probabilistic formulation, as follows. While we are waiting to detect the k-th solo note, we collect data (incrementing n) until

    P(x_n \geq \mathrm{start}_k \mid y_1, \ldots, y_n) > \tau

for some threshold \tau, where \mathrm{start}_k is the first state of the k-th solo note model (e.g., the "attack" state of Fig. 1). If this condition first occurs at frame n*, then we estimate the onset time of the k-th solo note by

    \hat{t}_k = \arg\max_{n \leq n^*} P(\text{note } k \text{ starts at frame } n \mid y_1, \ldots, y_{n^*}).

This latter computation is accomplished with the Forward-Backward algorithm. In this way we delay the detection of the note onset until we are reasonably sure that it is, in fact, past, greatly reducing the number of misfirings of Listen.

Finally, the HMM approach brings fast computation to our application. Dynamic programming algorithms provide the computational efficiency necessary to perform the calculations we have outlined at a rate consistent with the real-time demands of our application.

3 Synthesize

The Synthesize module takes as input both an audio recording of the accompaniment and an "index" into the recording consisting of the times at which the various accompaniment notes are played. The index is calculated through an offline variant of Listen applied to the accompaniment audio data. The polyphonic and multitimbral nature of the audio data makes for a difficult estimation problem. To ensure that our index is accurate, we hand-corrected the results after the fact in the experiments we will present.

The role of Synthesize is to play this recording back at variable rate (and without pitch change) so that the accompaniment follows the soloist.
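A phase vocoder of the kind Synthesize relies on can be sketched in a few lines of NumPy. This is a generic textbook version under simplifying assumptions (a fixed rate factor, Hann-windowed STFT, nearest-frame magnitudes), not the paper's code; the real system would vary the rate continually to hit each synchronization goal.

```python
import numpy as np

def phase_vocoder(x, rate, n_fft=1024, hop=256):
    """Time-stretch mono signal x by a factor 1/rate without changing pitch.

    Classic technique: analyze with a Hann-windowed STFT, read through the
    analysis frames at `rate` frames per synthesis frame, and accumulate
    phase so that each bin's partial stays coherent across frames.
    """
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    stft = np.stack([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                     for i in range(n_frames)])
    # Expected per-hop phase advance of each frequency bin.
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft

    steps = np.arange(0, n_frames - 1, rate)   # fractional read positions
    out = np.zeros(len(steps) * hop + n_fft)
    phase = np.angle(stft[0])
    for t, step in enumerate(steps):
        i = int(step)
        mag = np.abs(stft[i])                  # nearest-frame magnitude
        # Deviation of the measured phase increment from the expected one,
        # wrapped to (-pi, pi] to unwrap the true instantaneous frequency.
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        frame = np.fft.irfft(mag * np.exp(1j * phase))
        out[t * hop : t * hop + n_fft] += win * frame   # overlap-add
        phase += omega + dphi                  # accumulate coherent phase
    return out
```

For example, `phase_vocoder(x, 0.9)` plays the recording about 11% slower at the same pitch; driving `rate` from the Anticipate module's schedule turns this offline sketch into the variable-rate playback that Synthesize performs.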