ISTANBUL TECHNICAL UNIVERSITY  INSTITUTE OF SOCIAL SCIENCES

THE ROLE OF TIMBRAL MANIPULATION IN ELECTROACOUSTIC COMPOSITION IN HISTORICAL CONTEXT

Ph.D. THESIS

Cemal Barkın ENGİN

Department of Music, Music Programme

OCTOBER 2012

ISTANBUL TECHNICAL UNIVERSITY  INSTITUTE OF SOCIAL SCIENCES

THE ROLE OF TIMBRAL MANIPULATION IN ELECTROACOUSTIC COMPOSITION IN HISTORICAL CONTEXT

Cemal Barkın ENGİN (409052002)

Department of Music, Music Programme

Thesis Advisor : Prof. Dr. Cihat AŞKIN

OCTOBER 2012

İSTANBUL TEKNİK ÜNİVERSİTESİ  SOSYAL BİLİMLER ENSTİTÜSÜ

TINI MANİPÜLASYONUNUN ELEKTROAKUSTİK KOMPOZİSYONDA TARİHSEL BAĞLAMDAKİ ROLÜ

DOKTORA TEZİ Cemal Barkın ENGİN (409052002)

Müzik Bölümü Müzik Programı

Tez Danışmanı : Prof. Dr. Cihat AŞKIN

EKİM 2012

Cemal Barkın Engin, a Ph.D. student of the ITU Institute of Social Sciences (student ID 409052002), successfully defended the dissertation entitled “THE ROLE OF TIMBRAL MANIPULATION IN ELECTROACOUSTIC COMPOSITION IN HISTORICAL CONTEXT”, which he prepared after fulfilling the requirements specified in the associated legislation, before the jury whose signatures are below.

Thesis Advisor: Prof. Dr. Cihat AŞKIN ………………….. Istanbul Technical University

Jury Members: Assoc. Prof. Dr. Kıvılcım YILDIZ ŞENÜRKMEZ ………………….. Mimar Sinan Fine Arts University State Conservatory

Assist. Prof. Dr. İlke BORAN ………………….. Mimar Sinan Fine Arts University State Conservatory

Assist. Prof. Dr. Can KARADOĞAN ………………….. Istanbul Technical University

Assist. Prof. Dr. Ozan BAYSAL ………………….. Istanbul Technical University

Date of Submission: 28 May 2012 Date of Defense: 17 October 2012



To Sevla and Berkay Engin,



FOREWORD

Firstly, I would like to express my gratitude to all past and present MIAM contributors for completely changing my life in an unpredictable yet extremely positive way. For the last ten years, they became a (continuously expanding) second family to me. Although I am deeply indebted to many bright and beautiful individuals I met during this experience, it is mandatory for me to name a few in my limited space. Without them, I could not have even dreamed of this.

My ever-growing appreciation to my parents cannot be put into words sufficiently. I started to understand the true value of their continuous belief and support much better as the years went by. I am also grateful to Erdem and Oya Ertay and the rest of the lovely Ertay family for their kind and generous support, which definitely helped me to continue exploring this ceaseless path even in challenging moments.

It was an honor for me to work under the supervision of such a creatively diverse and respectable jury committee. Starting with my advisor Cihat Aşkın, I would like to pay my sincere respects to Şehvar Beşiroğlu, Kıvılcım Yıldız, Can Karadoğan, Ozan Baysal and İlke Boran for their graceful efforts and their valuable time to improve this work.

I would like to thank Dr. John Dack for providing me with peerless resources regarding an almost isolated field. His works unquestionably enabled me to construct the theoretical backbone of this dissertation. I would also like to thank Ahmet Altınel for his intellectual guidance and rewarding comments.

More than a colleague and a friend, Burak Tamer deserves his own section for my endless praise. The countless days and nights we spent together in various concert halls and recording studios provided me with the aesthetic, practical and theoretical knowledge required for my lifelong research and other musical activities.

Finally, I would like to thank Aslı Tanrıverdi for her encouragement and patience through the good and bad times of this long process.

October 2012 Cemal Barkın ENGİN



TABLE OF CONTENTS

Page

FOREWORD ...... ix
TABLE OF CONTENTS ...... xi
ABBREVIATIONS ...... xiii
LIST OF TABLES ...... xv
LIST OF FIGURES ...... xvii
SUMMARY ...... xix
ÖZET ...... xxi
1. INTRODUCTION ...... 1
  1.1 The Objectives of the Dissertation ...... 7
  1.2 Methodology ...... 9
2. SCHAEFFERIAN THEORY ...... 13
  2.1 Fundamentals of Schaefferian Theory ...... 14
  2.2 Deviations from Schaefferian Theory ...... 20
    2.2.1 Headline debate ...... 20
    2.2.2 Natural sounds / synthesized sounds ...... 24
    2.2.3 Suitable sound objects ...... 25
    2.2.4 The symbiosis of attack and ...... 28
    2.2.5 Sound object manipulation ...... 29
    2.2.6 Reduced listening vs. heightened listening ...... 31
3. HISTORY AND ANALYSIS ...... 33
  3.1 Early Theoretical and Technological Developments ...... 33
    3.1.1 Concise history of audio recording ...... 40
    3.1.2 Abstract art ...... 42
  3.2 Analysis ...... 45
    3.2.1 Quattro pezzi per orchestra ...... 46
    3.2.2 Polymorphia ...... 54
    3.2.3 Kontrakadenz ...... 61
    3.2.4 Studie II ...... 69
    3.2.5 Etude aux objets ...... 76
    3.2.6 De natura sonorum ...... 81
4. SYNTHESIS AND PROCESSING ...... 87
  4.1 Fundamentals ...... 87
  4.2 Analog: Before the Microprocessors ...... 89
    4.2.1 ...... 91
    4.2.2 Subtractive synthesis ...... 95
    4.2.3 Modulation ...... 104
    4.2.4 Time based effects ...... 105
  4.3 Digital: After the Microprocessors ...... 106
    4.3.1 FM synthesis ...... 107
    4.3.2 Granular synthesis ...... 108
    4.3.3 Waveshaping ...... 108
5. ADDITIONAL TERMINOLOGY ...... 111
6. CONCLUSIONS ...... 113
REFERENCES ...... 117
APPENDICES ...... 123
CURRICULUM VITAE ...... 137


ABBREVIATIONS

AM : Amplitude Modulation
ATM : Absolute Manipulation
CPU : Central Processing Unit
dB : Decibel
DM : Dynamic Manipulation
EARS : ElectroAcoustic Resource Site
FFT : Fast Fourier Transform
FM : Frequency Modulation
GRM : Groupe de Recherches Musicales
Hz : Hertz
IPS : Inches Per Second
ms : Millisecond
PM : Phase Manipulation
PROGREMU : Program for Musical Research
RM : Ring Modulation
RAM : Random Access Memory
RMS : Root Mean Square
RPM : Revolutions per Minute
RT60 : Reverberation Time
SM : Spectral Manipulation
TM : Tessitura Manipulation
TBM : Time-Based Manipulation
WDR : Westdeutscher Rundfunk



LIST OF TABLES

Page

Table 2.1 : Listening modes ...... 15
Table 2.2 : Schaeffer’s division of ordinary music and new music ...... 21
Table 3.1 : Early innovations and manifestations ...... 36
Table 4.1 : Filter orders ...... 100
Table A : Listening behaviors of Delalande ...... 125
Table B : Extended summary of PROGREMU ...... 127
Table C : Emmerson’s musical discourse and syntax matrix ...... 129



LIST OF FIGURES

Page

Figure 2.1 : Program for musical research ...... 18
Figure 2.2 : Components of timbral composition ...... 19
Figure 2.3 : Schaeffer’s criteria for suitable sound objects ...... 25
Figure 2.4 : Wishart’s codependent parts of sound investigation ...... 26
Figure 3.1 : Paul Gauguin – L’esprit des morts veille ...... 43
Figure 3.2 : Wassily Kandinsky – Composition VI ...... 44
Figure 3.3 : Quarter tone and normal intonation ...... 46
Figure 3.4 : Fletcher–Munson curves ...... 48
Figure 3.5 : Spectrogram for Quattro pezzi per orchestra, movement IV ...... 49
Figure 3.6 : Waveform for Quattro pezzi per orchestra, movement IV ...... 49
Figure 3.7 : Measures 5-7 ...... 51
Figure 3.8 : Wide vibrato indication ...... 51
Figure 3.9 : Measures 18-20 ...... 51
Figure 3.10 : Measures 24-26 ...... 52
Figure 3.11 : Measure 42 ...... 53
Figure 3.12 : Electrocardiogram based notation ...... 55
Figure 3.13 : Clusters and glissandi representation ...... 56
Figure 3.14 : Spectrogram for Polymorphia ...... 57
Figure 3.15 : Upper limit of the instrument – indefinite pitch ...... 58
Figure 3.16 : Excerpt from rehearsal number 39 ...... 59
Figure 3.17 : Cadence on rehearsal number 67 ...... 61
Figure 3.18 : The closure of Kontrakadenz ...... 65
Figure 3.19 : Spectrogram for Kontrakadenz ...... 67
Figure 3.20 : Asynchronous pizzicato cluster on measure eight ...... 68
Figure 3.21 : Amplitude vs. time graphic for a 1000 Hz sine wave ...... 71
Figure 3.22 : 193 frequency groupings of Studie II ...... 72
Figure 3.23 : The structure of the graphic notation ...... 73
Figure 3.24 : Frequency rows of Studie II ...... 74
Figure 3.25 : Spectrogram for Studie II ...... 75
Figure 3.26 : Movement groupings of Etude aux objets ...... 77
Figure 3.27 : Spectrogram for Objets rassemblés ...... 79
Figure 3.28 : Spectrogram for Conjugaison du timbre ...... 82
Figure 4.1 : Basic analog signal flow (Recording) ...... 90
Figure 4.2 : Basic analog signal flow (Playback) ...... 90
Figure 4.3 : Sine wave superposition ...... 92
Figure 4.4 : Sawtooth wave ...... 93
Figure 4.5 : Square wave ...... 94
Figure 4.6 : Triangle wave ...... 94
Figure 4.7 : Pulse wave ...... 95
Figure 4.8 : White noise waveform ...... 96
Figure 4.9 : White noise spectrum ...... 97
Figure 4.10 : Pink noise spectrum ...... 98
Figure 4.11 : Brown noise spectrum ...... 98
Figure 4.12 : High-pass filters ...... 100
Figure 4.13 : Band-pass filters ...... 101
Figure 4.14 : Band-reject filters ...... 102
Figure 4.15 : Shelving filters ...... 103
Figure 4.16 : Peaking filters ...... 103
Figure 4.17 : Basic digital signal flow (Recording) ...... 107
Figure 4.18 : Basic digital signal flow (Playback) ...... 107
Figure 4.19 : Grain of 50 ms ...... 108
Figure 4.20 : Waveshaping ...... 109


THE ROLE OF TIMBRAL MANIPULATION IN ELECTROACOUSTIC COMPOSITION IN HISTORICAL CONTEXT

SUMMARY

The twentieth century, a century of communication, diversification, globalization and technology, introduced countless new perspectives on the variables of musical composition and performance, just as it unquestionably had a similar influence on all other artistic disciplines. It was an era of deconstruction and reconstruction. Among all the post-Wagnerian avant-garde approaches to the aesthetic, functional and structural theories of music, electroacoustic composition represents the most radical deviation from the dogmas of conventional musical practice in the history of Western music.

Every existing sound has four main dimensions: frequency, duration, loudness and timbre. Fixed pitches (defined by their fundamental frequencies), the durations and loudness of these individual or collective pitches, and their horizontal/vertical relationships, forming the melodic and harmonic structure respectively, are still the unchallenged universal building blocks of musical organization. Timbre, the fourth dimension, however, remained subordinate to the other dimensions or was omitted entirely as a compositional parameter in the majority of Western practice, and variations of sounds derived from spectral manipulation were not considered a potential compositional method for a long period. Thus, the choice of sound material was limited to conventional musical instruments, with the rare exceptions of extended instrumental techniques and spatial specifications. On the contrary, the timbre of individual sounds at the micro scale and their cumulative, textural gestalt at the macro scale functioned as the artery of electroacoustic composition from the earliest days of its theoretical establishment.

The pioneering French and German schools of electroacoustic composition contrasted sharply in terms of aesthetics, conception and methodology. Nevertheless, they shared the same tendency to seek an exit from the restrictions of traditional instrumental and compositional concepts and to create a new music based on the overall assumption that every sound, even sound without a perceptible pitch, can serve as a resource for compositional material.

The thesis aims to explore the status and offerings of timbre and timbral manipulation in electroacoustic composition and to define a contemporary theoretical and terminological basis for electroacoustic music, along with compositional analyses, techniques and methods of historical significance. As a genre continuously in progress, the electroacoustic domain requires a common ground for communication- and education-oriented purposes. The parallel history of technology must not be neglected, since technology is an inseparable factor in sound- and timbre-associated compositional strategies.



TINI MANİPÜLASYONUNUN ELEKTROAKUSTİK KOMPOZİSYONDA TARİHSEL BAĞLAMDAKİ ROLÜ

ÖZET

The socio-cultural and socio-political developments of the 20th century, a century whose key words were communication, diversity, globalization and technology, undoubtedly gave us new and different perspectives on the variables of musical composition, and it can be argued that this evolution continues today with the momentum it has gained. Similar effects can inevitably be observed in the other artistic disciplines, and considerable interaction between music and those disciplines can be identified. It is therefore not mistaken to perceive this recently concluded century as a century of the deconstruction of artistic traditions built upon long and multi-layered processes, and of the construction of new possibilities. When the aesthetic, functional and structural characteristics of all the “avant-garde” musical movements that emerged in the post-Wagnerian period are examined, the compositional approaches and methods that can be grouped under the heading of electroacoustic composition are seen to have carried out the most radical departures from the traditional musical dogmas that constitute the history of European music. The most fundamental factors that made this development inevitable were the questioning of the restrictions on which sounds could be considered within musical composition — in other words, the emergence of critical approaches to the ossified concept of the “instrument” — and the multi-dimensional contributions to compositional thought and practice made by the sound recording technologies that developed simultaneously with this process. From the perspective of acoustics, every sound existing in physical space consists of four main (and quantifiable) dimensions: frequency, loudness, duration and timbre. “Fixed” pitches defined by their dominant frequencies, the loudness and duration of those pitches, and finally the horizontal and vertical relationships of individual or collective pitches, which form the melodic and harmonic lines respectively, are still accepted as the building blocks of the act of composition. However, as a dominant tendency in the Western musical tradition, timbre (tone color), the fourth dimension of sound, has been regarded as secondary in importance to the other dimensions, even removed entirely from being an active compositional element, and largely reduced to a matter to be handled within live performance practice. Consequently, timbre-based groups obtainable through spectral synthesis and manipulation methods, and the variations that can be produced from such groups, were not accepted as a compositional technique with rich potential. Accordingly, the sound sources from which musical structure could be built were limited to traditional musical instruments and to the pitches and pitch sequences of the tempered system that they can produce. It must be emphasized that this was not a universal tendency: world musics in which the tonal hierarchy is preserved but timbre is accepted as an equal element of development can be found throughout history.

The fixed timbral characteristics of traditional instrument groups predetermine the textural structure of a potential composition and create, in the listener’s perception, persistent and hard-to-change cultural codes concerning the definition and function of music. “Extended” instrumental techniques, spatial placements, the incorporation into compositions of sound generators not built for musical purposes, and the use of early electronic instruments can be seen as exceptional efforts to widen the boundaries of the timbral palette. In contrast to this hierarchy-based view, taking as the basis of the compositional process the timbres of individual and collective sounds at the micro scale (without restricting their sources) and, at the macro scale, the overall texture born of the interaction of those timbres has been one of the principal points of departure of the electroacoustic music concept since its theoretical formation. Although the pioneering French and German schools stood in opposition to each other aesthetically, conceptually and methodologically during their founding phases, they shared similar ideals of creating a musical language free from the restrictions of tradition, accepting existing sounds as resources for the act of composition and valuing timbre equally. The fact that the main principles of the two schools effectively merged after the mid-1950s can be interpreted as evidence of this common ground. Today, despite criticism of its meaning and function, the concept of electroacoustic music serves as a general heading encompassing many different possibilities. This study aims to examine the function of the timbre variable, the role of timbral manipulation and its conceptual and practical implications within a historical framework, and, building on the findings, to construct a framework for a contemporary theory of electroacoustic music and its related terminological infrastructure. Pierre Schaeffer’s music theory, which he named “acuology” and which centers on the problems of the “sound object”, the “musical object” and their compatibility with the “perceptual field”, will form the theoretical backbone of the study; it will be examined in detail and, in the light of current theories and technological developments, contemporary deviations from this theory (which was developed with inspiration drawn from the philosophical movement of phenomenology) will be identified. Schaeffer’s “Traité des objets musicaux” (Treatise on Musical Objects), published in 1966, remains the first and (still) the most comprehensive theory of new music to have been written. The research Schaeffer carried out with the aim of developing a new theory to replace note-driven music theories restricted to scales is fundamentally in agreement with the basic principles this study takes as given. Nevertheless, the impractical, utopian tendencies of the first three of the theory’s five stages of application (typology, morphology, characterology, analysis and synthesis, respectively) and its discriminatory stance regarding the functions of organic versus synthetic sound objects make updates to its content necessary. Basic terminological proposals and criticisms aimed at establishing a healthy global language of communication and a common ground in electroacoustic music education will also be discussed at the points where they are deemed necessary. In addition to the priorities mentioned above, instrumental and electronic compositions of importance for the history of timbre-centered composition will be analyzed comparatively.
The main reason for also including instrumental examples by composers such as Giacinto Scelsi, Krzysztof Penderecki and Helmut Lachenmann is to seek evidence that the concept of timbral manipulation is not a choice belonging exclusively to electronic composition processes, and to trace the existing communication between the two fields. The works of Karlheinz Stockhausen, Pierre Schaeffer and Bernard Parmegiani, since they set out from different electroacoustic currents, will allow the analytical results to cover a broad perspective.


Another priority of the study is the historical investigation and detailed evaluation of developments in sound synthesis and processing techniques and in sound recording technology, which are indispensable parts of compositional procedures centered on the sound object and timbral variation. Because of the approach required by the chronological course followed throughout the study, and because of the rich and variable nature of digital technologies, the relevant chapter gives priority to analog techniques, while digital techniques are limited to the transitional precursor techniques and the explanation of basic principles. Given the theoretically infinite possibilities of sound sources, the limits of the human auditory system, its deviations from the physical world and the modes of listening also provide important guidance for the aesthetics of electroacoustic composition. A multi-faceted approach that also refers to acoustic and psychoacoustic theories is therefore unavoidable throughout the study. Establishing a common ground that answers the communicational, pedagogical and technical needs of electroacoustic composition — an artistic (aesthetic and technical) choice still in its initial phase within music history and in constant evolution — is of ever-increasing importance.



1. INTRODUCTION

For centuries, the concept of the “note”, or fixed pitch, was accepted as the principal parameter of musical composition, in continuous collaboration with the parameters of duration and loudness. Timbre, also widely known as tone quality or tone color, was considered a secondary element and a dispensable feature of musical activity, although it constitutes the fourth physical dimension of any sound.1 Nevertheless, this hierarchical tendency to limit the possibilities of the potential musical continuum was not universal. This selective attitude towards composition can mainly be observed in the history of Western music. Some non-Western cultures have traditionally given timbre and timbral progression a pivotal role. The music of Tuva, the shakuhachi music of Japan and the Inanga Chuchotée music of Africa are among the best-known examples of cultures treating timbre as an equal contributor, or even as the most dominant plane of compositional foundation (Url-1).

Defining timbre poses its own problems. Analyzing several definitions may help us gain a wider perspective on this acoustic phenomenon, since it is not easy to compress the contextual extensions of timbre into one single formula. This is largely due to its subjective status as a psychoacoustic concept. The American National Standards Institute’s definition is as follows:

“Timbre is that attribute of auditory sensation in terms of which a listener can judge two sounds similarly presented and having the same loudness and pitch as being dissimilar” (ANSI, 1960).

Percy Scholes offers color analogies to broaden the content of the subject and points particularly to the inner structure within a single sound.

Timbre means tone quality – coarse or smooth, ringing or more subtly penetrating, ‘scarlet’ like that of a trumpet, ‘rich brown’ like that of a cello, or ‘silver’ like that of the flute. The one and only factor in sound production, which conditions timbre, is the presence or absence, or relative strength or weakness, of overtones. (Howard and Angus, 2009, p. 232)

1 Many theorists also omitted loudness (intensity) for a significant amount of time. This feature gained more importance after the 16th century, when dynamic markings became an irrevocable part of musical notation in Europe.

The “Handbook for Acoustic Ecology”,2 compiled by Simon Fraser University, provides an overall scientific approach to the discussion of timbre without neglecting its perceptual qualities, reminiscent of the color analogies of Scholes:

Timbre or tone quality is determined by the behavior in time of the frequency content or spectrum of a sound, including its transients, which are extremely important for the identification of timbre. Often qualities of timbre are described by analogy to color or texture (e.g. bright, dark, rough, smooth), since timbre is perceived and understood as a 'gestalt' impression reflective of the entire sound, seldom as a function of its analytic components. (Url-2)

Rafael Ferrer differentiates between the notions of micro and macro timbre. The apprehension of micro timbre corresponds to short units of time as mentioned above (with a certain minimum threshold), whereas macro timbre extends beyond short durations towards minutes and hours and eventually becomes the identity of a general structure (Ferrer, 2011). Ferrer’s perspective can be related to the Gestalt theory of the Berlin School, in which the whole is different from its constitutive elements. Compositionally, this idea leads us to different aesthetic possibilities. A new musical system is feasible without focusing solely on the intervallic relationships between the fundamental frequencies of individual sounds: every dimension of any sound may offer a musical experience. The dominant frequencies in sounds enable us to perceive pitch in them, together with harmonic partials, which lie at integer multiples of the fundamental frequency. Inharmonic spectral content has a less clear sense of pitch,3 due to the non-integer formation of overtones; the majority of percussion instruments, gongs and bells produce inharmonic overtone series. The spectral structure of a single sound can also be taken as the basis of an alternative methodology for musical organization: textural compositional methods might be derived even from timbral manipulations of a single sound, and macro levels of textural structure might emerge from the timbral interactions of different kinds of sounds.
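As a minimal illustration of this distinction (an added sketch, not part of the original text), the following Python fragment builds two short tones from five partials each: one with integer-multiple (harmonic) ratios that yield a clear pitch, and one with arbitrary non-integer (inharmonic) ratios reminiscent of a bell. The specific ratios and amplitudes are illustrative choices only.

```python
import numpy as np

SR = 44100                              # sample rate in Hz
t = np.arange(int(0.5 * SR)) / SR       # half a second of sample times

def tone(f0, ratios, amps):
    """Sum sinusoidal partials at f0 * ratio with the given amplitudes."""
    out = sum(a * np.sin(2 * np.pi * f0 * r * t) for r, a in zip(ratios, amps))
    return out / np.max(np.abs(out))    # normalize to the range [-1, 1]

amps = [1.0, 0.5, 0.33, 0.25, 0.2]

# Harmonic spectrum: partials at integer multiples of the 220 Hz fundamental.
harmonic = tone(220.0, [1, 2, 3, 4, 5], amps)

# Inharmonic spectrum: non-integer partial ratios, weaker sensation of pitch.
inharmonic = tone(220.0, [1, 2.32, 3.76, 5.11, 6.49], amps)
```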

The scientific contributions of Joseph Fourier and Hermann Helmholtz during the late 18th and the 19th century prepared an adequate theoretical source of inspiration for composers to discover new means for their aesthetic intentions. A brief introduction to these scientific developments is necessary for a better comprehension of the role of timbre in the electroacoustic domain, since it is the central theme of the dissertation.

2 Originally published by the World Soundscape Project, Simon Fraser University, and ARC Publications, 1978.
3 An inharmonic sound might contain more than one dominant frequency; thus it produces multiple pitches.

Fourier analysis suggested that general functions may be represented as the sum of simpler trigonometric functions, and the reverse process, called Fourier synthesis, can rebuild a general function from those simpler functions. From the acoustical perspective, sine waves with amplitude and phase data can regenerate a given complex spectrum. Besides, Fourier analysis matches the working principles of the human auditory system, since the inner ear functions quite similarly. Sensory hair cells located in the inner ear (in the organ of Corti) correspond to different frequency bands, and a specific location will be stimulated if the incoming signal contains the related data. Nevertheless, we perceive sounds as a gestalt with a certain timbral quality.

By courtesy of digital technology, we are able to perform Fourier analysis on any real-time or non-real-time audio input signal and later rebuild it from its analyzed components. The former, decomposing process is called the Fourier transform (FT) and the latter reconstruction is called Fourier synthesis.4 Eduardo Reck Miranda divides the FT into four categories:

1-Fourier Transform (FT): Continuous time and frequency

2-Discrete Time Fourier Transform (DTFT): Discrete time and continuous frequency.

3-Discrete Fourier Transform (DFT): Discrete time and frequency.

4-Fast Fourier Transform (FFT): Faster algorithm for DFT. This is the version used in computer programming (Miranda, 2002).
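A minimal numerical sketch of this analysis/resynthesis cycle, using the FFT routines in NumPy (an added illustration, not part of the original text): a test signal made of two sine components is decomposed into its complex spectrum and then rebuilt from it by the inverse transform.

```python
import numpy as np

SR = 8000                                    # sample rate in Hz
t = np.arange(SR) / SR                       # one second of sample times
signal = 0.7 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1250 * t)

# Fourier analysis (DFT computed with the FFT algorithm):
# each bin holds amplitude and phase for one frequency component.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / SR)

# The two strongest bins correspond to the two sine components.
print("dominant frequencies (Hz):", sorted(freqs[np.argsort(np.abs(spectrum))[-2:]]))

# Fourier synthesis (inverse transform): rebuild the waveform from its spectrum.
rebuilt = np.fft.irfft(spectrum, n=len(signal))
print("max reconstruction error:", np.max(np.abs(signal - rebuilt)))
```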

Following Fourier, around the 1850s Helmholtz invented the “Helmholtz resonator”, a cavity-type resonator tuned to vibrate only at a single specific frequency, in order to investigate the spectral components of a sound before the aid of digital audio systems. This type of resonator continues to be used in architectural acoustics to avoid imbalances in the frequency response of the space (Url-3).
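The resonance frequency of such a cavity follows the classical relation f0 = (c/2π)·√(A/(V·L)). The short Python sketch below (an added illustration with arbitrary dimensions, omitting the neck end correction used in precise designs) computes it for a bottle-sized resonator.

```python
import math

def helmholtz_frequency(neck_area, neck_length, cavity_volume, c=343.0):
    """Resonance frequency f0 = (c / 2*pi) * sqrt(A / (V * L)), SI units.
    neck_length is treated as the effective length; real designs add an
    end correction to it, which this simplified sketch omits."""
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length))

# Illustrative dimensions: 1-litre cavity, 5 cm neck of 2 cm diameter.
area = math.pi * 0.01 ** 2                                      # neck cross-section in m^2
print(round(helmholtz_frequency(area, 0.05, 0.001), 1), "Hz")   # roughly 137 Hz
```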

In his influential study, “Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik” (On the Sensations of Tone as a Physiological Basis for the Theory of Music), published in 1863, Helmholtz analyzed the impact of tones on our perception of sound.

4 It should be mentioned that the FFT is not the only way to accomplish spectral analysis; swept-tuned and hybrid superheterodyne-FFT spectrum analyzers can also be constructed.

Helmholtz categorized sounds according to their spectral structures (such as simple and musical) and defined subcategories such as hollow, rich, poor and cutting/rough (Howard and Angus, 2009, p. 237).

Supported by these scientific foundations, a single sound may be perceived as a microcosm waiting to be discovered aesthetically. From this point of view, a variety of theorists and composers, mostly of European origin, started to challenge the idea of using fixed-frequency pitches as the only compositional unit and to question the tradition of establishing predetermined horizontal/vertical relationships between them as the sole compositional procedure. However, this progress was not limited to music. It is possible to claim that the profound impact of the industrial revolution on the rapid growth of technology, and the encounter with non-European cultures largely due to colonial expansion, directly influenced artists from divergent disciplines in similar ways. Painting and musical composition demonstrate the most similar developments, and they had a pronounced influence on each other. Avant-garde literature was also experimenting with new forms, and new hybrid disciplines such as sound poetry emerged from these experiments (Url-4). The relationship between abstract painting and timbre-oriented composition will be investigated in more detail in the third chapter of the dissertation.

Early manifestos, such as Ferruccio Busoni’s “Sketch of a New Esthetic of Music”, written in 1907, rejected the limitations of the equal temperament system and questioned the narrow and finite timbral possibilities offered by traditional musical instruments (Busoni, 1911). By the time of writing the manifesto, Busoni had learned about the Telharmonium (Dynamophone), an early electronic instrument invented by Thaddeus Cahill, capable of performing additive synthesis. This development resonated significantly with Busoni’s opinions and motivated the composer to express the need for a modern musical perspective, free from theoretical restrictions. The first actual compositional examples in this direction experimented with new approaches to orchestration and focused on the development of extended playing techniques to generate new timbral possibilities from the available sound sources. It was a natural but certainly limited progress, since the visionary composers had to wait almost 40 years for sufficient audio technology to arrive in order to be able to translate their theories into practice. In a sense, the idea of electroacoustic music heralded the technology.


“Farben” (Colors), the third movement of “Fünf Orchesterstücke” (Five Orchestral Pieces), written by Arnold Schönberg in 1909, was one of the initial efforts to organize a sonic structure specifically based on timbral progression. This structural technique was labeled “Klangfarbenmelodie” (tone-color melody) by Schönberg in his book “Harmonielehre” (Theory of Harmony), written in 1911. He explains the main motivation behind this idea as follows:

In a musical sound three characteristics are recognized: its pitch, color (timbre), and volume5. Up to now it has been measured in only one of the three dimensions in which it operates, in the one we call 'pitch'. Attempts at measurement in the other dimensions have scarcely been undertaken to date; organization of their results into a system has not yet been attempted at all. The evaluation of tone color (Klangfarbe), the second dimension of tone, is thus in a still much less cultivated, much less organized state than is the aesthetic valuation of these last named harmonies. (Schönberg, 1911, p. 421)

French composer Edgard Varèse has been known as one of the pioneers of electroacoustic composition. As early as 1916, influenced by Busoni’s reformist ideas, he stated:

“The role of color or timbre would be completely changed from being incidental, anecdotal, sensual or picturesque; it would become an agent of delineation like the different colors on a map separating different areas, and an integral part of form” (Chadabe, 1997, p.58).

Sound organization6 was the aesthetic intention of the composer, without being “imitative”, “descriptive” or “futurist”7 (Meyer and Zimmermann, 2006, p. 40). James Hard defines the pre-electroacoustic, orchestral works of Varèse as the transformation of clusters derived from timbral combinations, where each cluster has a unique instrumentation, spectral density and intervallic content. The composition grows out of timbral variations and the counterpoint of sonic densities (Hard, 2008).

For the first time in the history of Western music, which had historically been controlled by the pitch-oriented theme and variation approach, timbre was given an equal right to determine the form. Interestingly, the first work of Varèse to contain sounds outside the scope of traditional instruments was “Déserts”, written between 1950 and 1954. Over the course of approximately 35 years, Varèse had struggled to find financial support to establish an institute for research into musical technology.

5 In the text, Schönberg omitted the dimension of duration.
6 Varèse preferred the term “organized sound” to define this new approach (Varèse, 1966, p. 8).
7 A clear reference to the futurist movement.

The consolidation of music and science as a whole was his main goal, but the process was interrupted by two world wars.

As mentioned briefly before, the theoretical opposition to the sonic limitations imposed by traditional composition, and the urgent need for a wider sonic palette, resulted first in the invention of new instruments, new instrumental techniques and radical approaches to orchestration. The acceleration in the technological evolution of audio recording and reproduction systems during the first part of the 20th century, the invention of the magnetic tape recorder being the most significant, gave composers the possibility of working directly with the sound material itself. In addition to the developments in acoustical science, repeated listening to sounds, made possible by the recording medium, helped composers to explore the intrinsic qualities of the sound material and to establish abstract musical relationships between different types of sounds regardless of their origin. The idea (and the tradition) of musical instruments had to be challenged by using technology. Pierre Schaeffer commented explicitly on the synergy between arts and science in his early writings:

“Art, if it can possibly be attained, is born at the moment when the aesthetic result is in direct contact with the technical means” (Schaeffer, 1952, p.105).

The early sound synthesis methods, additive and subtractive synthesis, generally allowed new sounds to be created from sine waves and white noise, while sound processing opportunities such as pitch shifting, reverse playback, editing and other early signal processing tools allowed existing sounds to be modified and new ones to be regenerated from them. A new chapter had opened, and post-war Europe around the middle of the 20th century witnessed the formation of a new genre with infinite possibilities: electroacoustic music.
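As a hedged, minimal sketch of these two early approaches (an added illustration, not a description of any specific historical studio setup), additive synthesis can be emulated by summing sine waves and subtractive synthesis by filtering white noise:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
t = np.arange(SR) / SR                  # one second of sample times

# Additive synthesis: a timbre built by summing sine waves; here the first
# five odd harmonics with 1/n amplitudes give a hollow, square-like tone.
additive = sum((1 / n) * np.sin(2 * np.pi * 110 * n * t) for n in (1, 3, 5, 7, 9))

# Subtractive synthesis: start from white noise and carve the spectrum with a
# filter; a 4th-order Butterworth band-pass keeps only the region around 1 kHz,
# leaving a narrow band of colored noise.
noise = np.random.uniform(-1.0, 1.0, len(t))
b, a = butter(4, [800.0, 1200.0], btype="bandpass", fs=SR)
subtractive = lfilter(b, a, noise)
```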

French composer and theorist François Delalande divides the history of music into three sections (Landy, 2007, p.178):

1-Oral tradition

2-Written (score)

3-Electroacoustic (music recorded on fixed medium)


Electroacoustic music, by its very nature resistant to specified aesthetic and technical restrictions and categorizations, gave the sound itself pivotal priority, regardless of the referential systems of tonal music8, scale-based restrictions and any kind of tuning system defining intervallic distances. Although there had been several attempts to assign specific definitions and aesthetic foundations to the concept of electroacoustic music, the core idea remained globally the same throughout the decades: any sound source (real or synthesized) and its electronic “derivatives” may be subject to compositional organization, without necessarily relying on real-time performance.

1.1 The Objectives of the Dissertation

The concept of the sonic continuum (a utopian concept of a non-hierarchical order), naturally associated with the electroacoustic medium, replaced the traditional concepts of center-based gravity and directionality associated with tonal harmony, and the 12 (equally spaced) note limitation of serialist music.

Electroacoustic music is a significantly broad term, which does not signify or insist on a certain aesthetic perspective, a fixed set of rules or a methodology, except for the almost universal acceptance of any sound material as the basis for musical composition.

From the early theoretical discussions (especially evident in the notorious controversy between the French “musique concrète” (concrete music) and German “elektronische Musik“ (electronic music) schools) to recent practical and technical matters, electroacoustic music has always been the subject of productive aesthetic and technical debates, frequently in the absence of local cultural overtones. Nevertheless, compositions with ethnic influences also exist; Iranian composer Alireza Mashayekhi’s “Shur Op. 15” (1966) has clear indications of makam usage.

On the other hand, the fertile and radical nature of the genre has a negative impact on its reception, mainly because audiences cannot match its conceptual and structural codes against the musical definitions and expectations formed by their previous education and experiences with music (Landy, 2007, p.23).

8 Trevor Wishart uses the term “frame of reference” to point out the hierarchical system of tonal music (Wishart, 1994, p.26).

The majority of the leading problems related to electroacoustic composition since its actual beginning in 19489 are still valid to this day, and they have to be considered as interdependent:

1- The lack of a common terminology for universal communication

2-The lack of a unified theory and methodology for analysis and composition

3-The lack of a unified theory and methodology for technical applications

4-The lack of a pedagogical system for compositional education

This thesis attempts to investigate the function of timbre-oriented inclinations in the construction of a musical structure by electronic means, while trying to collect the major tendencies of the foundational era and to form an updated terminology and methodology for practical and educational purposes on a stable (yet flexible) aesthetic basis. A systematic historical approach derived from various compositional ideologies and movements will be maintained throughout the sections in order to underline the interrelation between technology and electroacoustic music.

The leading motivations, which define the systematics of the research, can be summarized in three equivalent parts:

1-To justify the timbral compositional procedure by tracing its existence in the electronic and instrumental domains

2-To form a contemporary theoretical perspective on electroacoustic timbre composition, directly in connection with the innovations in the field of technology that made recording, synthesis and manipulation techniques more efficient and practical

3-To fulfill the need for a global terminology and methodology for analytic, compositional, and technical applications

The subordinate purpose of the thesis is to apply the results of the research to pedagogical approaches. Thus, the roots of a systematic education method can be established.

9 Pierre Schaeffer composed his first work, Etude aux chemins de fer, in 1948, which is widely accepted as the first actual electroacoustic composition. However, as will be discussed later, this proposition can be challenged historically.

Although Bülent Arel and İlhan Mimaroğlu were among the first generation of composers involved with the electronic side of musical thinking, electroacoustic composition only started to become widespread in the curricula of music departments in Turkey at the beginning of the 21st century. There is a significant time gap in comparison with the international situation. Additionally, the education system does not yet employ a common theoretical and technical basis and focuses predominantly on technical matters, while the aesthetic and theoretical dimensions of the genre are frequently discarded.

1.2 Methodology

The first section will cover an introduction to the “object”-based musical theories of Pierre Schaeffer. Schaefferian theory will form the essential notional perspective of the dissertation. Contradictory theories and deviations from the roots will be included in order to construct a modern point of view on the subject matter. The majority of the scholarly effort is devoted to the chronological history of electroacoustic music and its simultaneous technical progression. After Schaeffer, theoretical evolution became static for almost two decades, until the arrival of the fourth generation of composers. Trevor Wishart can be considered one of the most productive theoreticians in the field of electroacoustic music, along with the further contributions of Denis Smalley and Simon Emmerson. It is possible to consider these composers and theorists successors of the Schaefferian movement, but not without their own modifications.

In Wishart’s work “On Sonic Art”, written in 1996, echoes of Schaeffer are clearly traceable, but Wishart also provided additional discussions of recent aspects of timbre-oriented composition, in the light of a general approach to the evaluation of sound objects and recent digital synthesis possibilities. The spectromorphology theory of Smalley and the language-oriented analysis of Emmerson will also be used to modulate the Schaefferian sound composition concepts in order to form an updated electroacoustic music theory from a contemporary perspective. I would like to extract the cores of their theories and combine the outcomes to reach a consistent and practical perspective.

The second section of the dissertation will consist of two subsections. The former aims to focus on the aesthetic, ideological and technological evolution of the timbre-oriented compositional strategy.


Thus, it will provide the necessary information required for the later sections. In the second subsection, several early works will be analyzed to provide a background for the subsequent technical, theoretical and terminological discussions. Specifically in this chapter, the scope of the compositional analysis will not be restricted to electroacoustic works; orchestral works will also be included in order to demonstrate the general tendency of moving away from pitch-based compositional approaches during the 20th century and to detect a global grammar for timbre-related analysis.

It must be emphasized that the timbral manipulation/progression of individual or cumulative sonic structures is not the exclusive property of electroacoustic music. Starting with Schönberg, many composers have experimented with applying timbral alteration techniques to traditional instruments in order to achieve distinctive sonic results. Therefore, examples of orchestral works function as proof of the intellectual interaction between the instrumental and electroacoustic domains. The compositions and the particular movements subject to analysis are indicated below chronologically:

1-Karlheinz Stockhausen – “Studie II” (1954)

2-Pierre Schaeffer – “Etude aux objets – Objets rassemblés” (1959)

3-Giacinto Scelsi – “Quattro pezzi su una nota sola – Movement IV” (1959)

4-Krzysztof Penderecki – “Polymorphia” (1961)

5-Helmut Lachenmann – “Kontrakadenz” (1971)

6-Bernard Parmegiani – “De natura sonorum – Conjugaison du timbre” (1975)

Italian composer Giacinto Scelsi worked with timbral principles especially in his second period. “Quattro pezzi su una nota sola”, for chamber orchestra, was based on the microtonal fluctuations of a single note, supported by continuous timbral changes in the orchestral texture.

The sonorism movement, led by Polish composers such as Krzysztof Penderecki, Henryk Górecki, Andrzej Dobrowolski and Zbigniew Rudziński during the 1960s, ignored the concepts of conventional melody and harmony and focused on the textural progression provided by timbral interaction and modulation through the use of experimental playing techniques.


“Polymorphia”, a composition for 48 string instruments by Krzysztof Penderecki, will be the subject of the second analysis outside the electroacoustic domain.

Helmut Lachenmann’s “Kontrakadenz” for full orchestra is the successor of the previously mentioned works, with additional unique characteristics. It will be the last composition analyzed to demonstrate the existence of timbral composition outside the field of electroacoustic music.

The electroacoustic compositions were selected for their capacity to exhibit the historically contrasting ideals and the subsequent stage of synthesis arising from these conflicts. Besides, all of the selected compositions are directly connected to the instrumental examples on various grounds; detected correlations will be included in the analyses. All of the analyses will be supported by spectrograms (time-based spectral density images), on the assumption that traditional harmonic analysis methods are insufficient for indicating timbral progressions and that visualizing the textures helps to observe formal characteristics in particular.
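For readers unfamiliar with the representation, the following Python sketch (an added illustration; the synthetic chirp-plus-noise signal merely stands in for the recordings analyzed in the thesis) shows how such a time-based spectral density image can be produced:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

SR = 22050
t = np.arange(5 * SR) / SR
# Stand-in material: a rising glissando with a little noise; in practice the
# audio would be loaded from the recording under analysis.
audio = signal.chirp(t, f0=100, f1=4000, t1=5) + 0.05 * np.random.randn(len(t))

# Short-time Fourier analysis: frequency content per time frame.
f, frames, Sxx = signal.spectrogram(audio, fs=SR, nperseg=2048, noverlap=1536)
plt.pcolormesh(frames, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: spectral density over time")
plt.show()
```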

The third section will investigate sound manipulation and synthesis techniques in historical progression. The chapter consists of three subsections. The first covers the fundamental technical background and contains conceptual debates regarding terminology. The remaining subsections examine the primary sound processing and synthesis techniques in the analog and digital domains. To exhibit the capacities of the manipulation and synthesis types, original audio examples and etudes are provided.

Algorithmic composition and theories regarding the spatial movement of sound objects under real acoustic conditions are deliberately excluded from the dissertation, because algorithmic composition involves computer programming, which does not necessarily contain timbral concerns in its foundations, and theories of spatial movement are subjective, site-specific preferences that are also not directly linked to timbral structuring.

The fourth section briefly discusses and proposes additional selected terminology from the ElectroAcoustic Resource Site (EARS) and its role in educational methodology, along with systematic proposals.


The appendix section includes post-Schaefferian theories of the listening modes,10 an extended summary of Schaeffer’s compositional procedure, a chart of Simon Emmerson’s discourse and syntax categorization, a glossary of the technical terminology and the index of the accompanying CD. The CD contains sound examples to clarify the theoretical and terminological discussions, and etudes to represent the compositional techniques in more detail. A full composition entitled “We are lost forever” is also included in order to introduce a large-scale conceptual usage of the theories and techniques.

10 Schaefferian listening modes will be explained in detail.

2. SCHAEFFERIAN THEORY

The initial and almost isolated major attempt to create a universal terminology defining the quantitative and qualitative features of sounds, in order to establish a foundation for a sound-oriented compositional theory, came from Pierre Schaeffer, the leading figure of the French musique concrète school. After almost twenty years of active involvement in determining the framework of a “new music”, Schaeffer published “Traité des objets musicaux”11 (Treatise on Musical Objects) in 1966 and re-edited the study in detail in 1977. This comprehensive work has still not been translated in its entirety into any language other than its original French. Therefore, it has not been available to non-French-speaking readers and has remained largely unknown even among scholars.

To avoid the disadvantages of the complex wording and formal structure of the study, another French theoretician and scholar, Michel Chion, published a work titled “Guide des objets sonores: Pierre Schaeffer et la recherche musicale” (Guide to Sound Objects: Pierre Schaeffer and Musical Research) in 1983. It was an attempt to summarize the core ideas of “Traité des objets musicaux”12 in order to introduce them to wider circles.

Around the last decade of the 20th century, a new generation of electroacoustic music composers appeared who frequently cited Schaeffer’s compositions and ideas as an important source of aesthetic inspiration, though modified by contemporary compositional tendencies. A significant number of these musicians did not necessarily come from academic backgrounds, and they combined the techniques and theories offered by the electroacoustic field with a variety of traditional or popular musical forms, often resulting in tonal and repetitive structures. This interaction

11 Schaeffer’s first published work, In Search of a Concrete Music (1952), contains the preliminary theoretical ideas and technical details which would later be explored in great detail in Traité des objets musicaux.
12 This work was also not translated into English until John Dack and Christine North’s translation in 2009.

continues to exist today, but will not be explored in the dissertation due to the impossibility of covering every local genre or subgenre.

2.1 Fundamentals of Schaefferian Theory

Schaeffer’s tireless effort to establish a theoretical basis for a new compositional language was intensely influenced by phenomenology, a philosophical movement founded by Edmund Husserl at the beginning of the 20th century. Husserl considered phenomenology a combination of science, (a unique form of) philosophical perspective and thinking methodology.

The central argument of phenomenology lies in its criticism of positivism and naturalism: experientially based philosophy and the sciences limit themselves to matters of fact and realities. Phenomenology (whether pure or transcendental), by contrast, is a science of essence, an eidetic science aimed at ascertaining the cognition of essence (Husserl, 1983).

“The essence (Eidos) is a new sort of object. Just as the datum of individual or experiencing intuition is an individual object, so the datum of eidetic intuition is a pure essence “(Husserl, 1983, p. 9).

Husserl puts human cognition in the foreground and rejects relying solely on the scientific evaluation of physical parameters for the measurement of perception. A new form of “awareness” can be achieved by ignoring/omitting spatio-temporal causes, a process called “eidetic” (phenomenological) reduction, in order to extract the pure essence as an object.

Husserl uses the terms “noesis” and “noema” to differentiate between traditional scientific observations and a new way of doing philosophy based on a scientific process. Although it has been the subject of numerous debates, “noesis” may be linked to the real or coded content, while “noema” signifies the ideal content.

Schaeffer derived his principal idea of the “sound object” from the eidetic reduction process, which to this day has remained one of the fundamental views of electroacoustic composition regardless of aesthetic distinctions. Every sound has to be bracketed out from the “real world” for the sake of an objective evaluation. Brian Kane renders it as “the human perception against scientific analysis or habitual listening” (Kane, 2007).


On the other hand, the idea of the sound object also has relations with structuralism. Jean Piaget suggests that a structure is “a system of transformations” that has to fulfill the ideas of wholeness and self-regulation, two important qualities of a sound object (Maconie, 2005, p.103).

Schaefferian theory considers every existing sound as a gestalt, a single universe of internal contrapuntal relationships. The intrinsic structure of a sound object must be perceived independently of its source, its physical parameters, its literal meaning and its function as a symbol within cultural codes. The reduction process for sound requires a different cognitive activity from observing the acoustical data in terms of its quantifiable properties, which do not have necessary correlates in human perception. However, this does not mean that the quantifiable properties should be avoided completely.

The correlation between perceptual intention and the perceived object is one of the fundamental notions of phenomenology, which Schaeffer reincorporated into musical research, which was dominated in the 50-60s by the scientist notion of a musical object as an object in itself. For Schaeffer, on the contrary, the sound object is the meeting point of an acoustic action and a listening intention. (Chion, 1983, pp.29-30)

In order to perform the phenomenological process on sound objects, Schaeffer initially focused on the motivations of human listening and categorized four main listening modes (quatre écoutes). These modes are arranged in Table 2.1. The ordering from left to right does not represent a chronological sequence; rather, it signifies the interaction between the modes.

Table 2.1: Listening Modes.

4. COMPREHENDING [COMPRENDRE]   |   1. LISTENING [ÉCOUTER]   |   3. HEARING [ENTENDRE]   |   2. PERCEIVING [OUÏR]

“Listening” is a causal act focused on gathering information about the cause and the source of the sound; thus it is an objective mode. “Perceiving” is a way of passively witnessing the sound without detecting its source or rendering its meaning; thus it is a subjective mode. “Hearing” is a kind of active listening based on personal interests; thus it is also a subjective mode. The fourth mode, “comprehending”, is a code-decoding, meaning-gathering mode (mainly based on linguistic concerns); thus it is an objective listening mode (Chion, 1983, pp.19-20).


Schaeffer added two more advanced pairs of listening modes, which may be considered combinations or variations of the four basic listening modes listed in Table 2.1: ordinary/specialist listening and natural/cultural listening.

“Ordinary listening” combines two key objective modes of listening, information and meaning gathering. On the other hand, “specialist listening” can be accepted as a derivative of the subjective “hearing” mode. It involves a dedicated action based on personal expectation, ear training and aural experience.

“Natural listening” implies an information-gathering mode aimed not only at identifying the sound source, but also the cause of the sound’s appearance. In “cultural listening”, the source and its existential motivation are already given to us in the form of cultural codes. Therefore, the source and the motive of the source carry within themselves a particular “message”.

Basic listening modes can form complex relationships, but they still have to be considered transitional (or fact-based) modes. Schaefferian theory aims to arrive at the ideal listening mode called “reduced listening”. It is the process required to apply phenomenological reduction to sound objects (sounds with a distinctive individual character) in order to discover their “pure structural essence”. In “ordinary listening”, the sound is always treated as a medium for particular communication-oriented actions, and the inner qualities of sounds are largely omitted. Consequently, “reduced listening” is an anti-natural process, contrary to any form of preconditioning.

The act of removing all our habitual references related to listening is a willed and artificial act, which allows us to clarify many phenomena implicit in our perception. Thus, the name reduced listening refers to the notion of “phenomenological reduction”, because, in a way, it consists of stripping the perception of sound of everything that is not “it itself”, in order to hear only the sound, in its materiality, its substance, its perceivable dimensions (Chion, 1983, p.31).

There are conceptual extensions and variations of the basic listening modes offered by Chion, Norman and Smalley, but the most elaborate effort came from the GRM member and theorist François Delalande. A summary of his post-Schaefferian listening mode theories is included in Appendix A (Landy, 2007).


For Schaeffer, a sound object derived from any source may be a substitute for the role of pitch and may become the basis for musical organization. Nevertheless, he adds that not every sound object can be a part of musical organization, due to aesthetic discrepancies and formal incompatibilities. The complex (and almost utopian) purpose of Schaefferian theory rests on the reduction and categorization of sounds and their association for compositional intentions.

As will be mentioned later in the section on theoretical deviations, Schaeffer did not take synthetic sounds into consideration; including them would have made the stages of categorization and classification almost impossible, due to the fertile nature of the sound objects that audio synthesis methods can generate.

Schaeffer’s reduction process on sound objects can be performed by using two historically elementary methods:

Closed groove (sillon fermé): Analysis of sound objects via repetitive listening. The closed groove (or “loop”, as commonly called in modern terminology) is an extension of the Pythagorean-influenced concept of “acousmatic listening”, in which the visual presence of the sound source is excluded in order to concentrate totally on the meaning of the message. Schaeffer even omitted the potential meaning the sound carries and tried to focus on the essentials of the sound. Repetitive sound loops also became the leading structural characteristic of the mainstream branches of electroacoustic music. American composer Otto Luening13 puts great emphasis on the benefits of repetitive listening to sound objects for a precise comprehension of the sound material from the macro-compositional point of view:

“Unfamiliar material needs repetition and some redundancy to be perceived. More familiar material needs variation of one kind or another to keep it from becoming too redundant” (Heifetz, 1989, p.29).
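From a present-day standpoint, the closed groove corresponds simply to looping a digital buffer. The following is a minimal sketch of that idea, assuming a mono sound object stored in a WAV file; the file name, the soundfile library and the loop count are illustrative choices, not part of Schaeffer's method.

```python
import numpy as np
import soundfile as sf  # assumed WAV reader/writer; any equivalent library would do

# Load a short sound object (file name is illustrative, assumed mono).
signal, sample_rate = sf.read("sound_object.wav")

# "Closed groove": repeat the object end to end for repetitive listening.
looped = np.tile(signal, 8)
sf.write("sound_object_looped.wav", looped, sample_rate)
```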

Cut bell (cloche coupée)14: Manipulations of the envelope - the attack, decay, sustain and release portions of the amplitude evolution - of a sound object; a method invented to allow an accurate analysis of the structure and the perception of the selected sound, in order to question the compositional proclivity of the material.

13 Otto Luening is one of the pioneers of American electroacoustic music, who collaborated to form the Columbia-Princeton Electronic Music Center. 14 The cut bell method was named after one of the very first acoustical experiments made by Pierre Schaeffer. 17
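In digital terms, the cut bell experiment amounts to imposing an artificial amplitude envelope on a sound object so that the contribution of each envelope portion to the perceived timbre can be tested. A minimal sketch, assuming the mono signal and sample rate of the previous example; the linear segments and durations are illustrative defaults, not Schaeffer's own values.

```python
import numpy as np

def adsr_envelope(n_samples, sample_rate, attack=0.01, decay=0.1,
                  sustain_level=0.6, release=0.2):
    """Build a simple linear attack-decay-sustain-release envelope (times in seconds)."""
    a = int(attack * sample_rate)
    d = int(decay * sample_rate)
    r = int(release * sample_rate)
    s = max(n_samples - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),           # attack
        np.linspace(1.0, sustain_level, d, endpoint=False),  # decay
        np.full(s, sustain_level),                           # sustain
        np.linspace(sustain_level, 0.0, r),                  # release
    ])
    return env[:n_samples]

# "Cutting the bell": reshape the object's dynamic evolution and listen again.
# shaped = signal * adsr_envelope(len(signal), sample_rate)
```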

The observations gained from these two primary experiments differ radically from the examination of the parameters of physical signals by scientific methods such as the FFT or orthogonal Walsh functions. Although the FFT shares the same principles of operation in terms of analysis, as mentioned earlier, these numeric results do not fully correlate with the psychoacoustic tendencies of the perceptual mechanism of the human hearing system; therefore, it is not possible to use the data directly in a musical context. Schaeffer states that:

It is true that the correlations between the variations of a physical signal and the perceived sound object which corresponds to it are close, but they are not a direct copy. It is the job of “psychoacoustics” to study these correlations from simple physical examples. (Chion, 1983, p.15)

With the support of a phenomenology-influenced analytical system, Schaeffer developed a methodology for a new music theory, in order to form a logical aesthetical base regarding the potential relationships between different types of sound objects for purely compositional purposes. Schaeffer entitled this theoretical and practical process with the neologism “acoulogy”. The term does not have a direct English translation, yet it should be considered as a form of experimental music study/theory, motivated to find a “solfège” system for sounds, replacing the role of musical notes. The fundamental tasks of acoulogy can be categorized into three subgroups: research on the listening mechanisms, research on the intrinsic qualities of the sound objects, and the evaluation of the compatibility of these qualities with the human auditory perception mechanism. This mechanism is concisely represented with the term “perceptual field”. Schaeffer’s methodology for acoulogy, entitled the “Program for Musical Research” (PROGREMU), consists of five successive and interrelated phases. They are summarized in Figure 2.1.15

Figure 2.1: Schaeffer’s Program for Musical Research (PROGREMU): Typology → Morphology → Characterology → Analysis → Synthesis, culminating in Composition.

15 A more detailed chart of PROGREMU with its selected subsections can be found in Appendix B. 18

Chion’s simplified definition regarding the first three stages of PROGREMU provides us with adequate elementary information about the sound object based research procedures, which will later on transform into practical matters.

Typology consists in identifying, distinguishing and isolating sound objects, then sorting them into main types. Morphology consists in describing these objects by identifying the sound criteria that they are made of, and classifying these criteria into classes. Characterology consists in going back to the sound as a whole as a bundle of different criteria combined together, and trying to distinguish the different genres of objects according to their characteristics. (Chion, 1983, p.100)

The fourth stage, analysis, is the period in which the sound genres are tested for their compatibility with the perceptual field. The output data can aid the composer in building scales of sound objects that can later be used as scales of musical objects in the synthesis stage. Thus, synthesis involves actual compositional activity (Chion, 1983, p.100). A two-part analysis will be sufficient to realize a complete exploration of the compositional potential of sound objects in the micro domain and of the textural progression resulting from the interaction of different sound objects in the macro domain. The macro domain may be subjected to further manipulation by signal processing, and the ultimate version of the sound-oriented composition appears (Figure 2.2).

Figure 2.2: Components of electroacoustic composition: manipulation of the sound objects → manipulation of the macro texture → the electroacoustic (EA) composition.

The fundamentals of Schaefferian theory, methodology and terminology, represented by notions such as the primacy of the perceptual field and the five stages of PROGREMU, will serve as the backbone for the aesthetical and theoretical basis of the dissertation. Nevertheless, there is also an urgent demand to update the theory from the present-day perspective, since it carries ideological restrictions and technological deficiencies. The deviations from this theory, along with suggestions, are introduced in the following section.

19

2.2 Deviations from Schaefferian Theory

There are a variety of theoretical and terminological preferences and suggestions throughout the dissertation which are not completely coherent with the principles of Schaefferian theory. These conceptual deviations are largely influenced by the overall (and global) history of electroacoustic music, contemporary aesthetical standpoints and technological innovations. The deviations are collected in six main sections.

2.2.1 Headline debate

The very first problematic issue is to settle on an appropriate general term for the genre of electronically produced and/or manipulated music.

In 1948, Pierre Schaeffer came up with the frequently misinterpreted term musique concrète; a term essentially aimed at indicating the break with traditional notation and fixed-pitch-based compositional procedures, and signaling the new possibilities offered by working directly with sound objects. The detailed explanation of the term appears in the section on the dualisms of music in Traité des objets musicaux. According to Schaeffer, the third (and last) dualism of music is the abstract/concrete one (Chion, 1983, p.34). The other two dualisms are natural/cultural and making/hearing. Natural/cultural signifies the distinction between information gathering and meaning gathering. Making/hearing questions the relevance of compositional activity to human listening capabilities. The reconnection of making and hearing is one of the conceptual goals of the Schaefferian theory (Chion, 1983, p.35).

The historical Western compositional process accepts the use of notational symbols as the starting point, and this “abstract” prelude leads to concrete structures during live performances based on the written-down symbols. Most of the 20th century avant-garde compositional techniques, such as serialism or indeterminism, were based on mathematical and geometrical processes.16

On the contrary, musique concrète suggested the reverse order. The practice starts the process with the discovery of sound objects (which are concrete sonic structures), then it determines the ones whose intrinsic qualities are consonant with the requirements of the aural perceptual field.

16 Thus, this predetermination creates a wide gap between making and hearing, since mathematical data do not necessarily correspond to the perceptual field. 20

Later on, the composer extracts musical abstractions from the analyzed and selected sound objects, and relates them to others in order to realize a composition. Table 2.2, organized by Schaeffer, makes a quick crosscheck between the two contrasting procedures (Schaeffer, 1952, p.22).

Table 2.2: Schaeffer’s division of ordinary music and new music.

ORDINARY MUSIC (so-called abstract)        NEW MUSIC (so-called concrete)

PHASE I. Conception (mental)               PHASE I. Composition (material)

PHASE II. Expression (notated)             PHASE II. Drafts (experimentation)

PHASE III. Performance (instrumental)      PHASE III. Materials (making)

(From abstract to concrete)                (From concrete to abstract)

In 1953, Schaeffer decided to replace the term musique concrète with a more general headline, experimental music, without modifying the general conceptual roots of the theory. However, from the recent historical perspective, the term experimental music is associated mainly with the organization and modification of sources and processes that produce unpredictable (musical) sonic structures. These predetermined processes are not necessarily conducted by electronic means and do not necessarily sustain any timbre-related concerns or preferences. Michael Nyman’s definition of the term “experimental music” provides a more stable ground for the argument:

Experimental composers are by and large not concerned with prescribing a defined time-object, whose materials, structuring and relationships are calculated and arranged in advance, but more excited by the prospect of outlining a situation in which sounds may occur, a process of generating action (sounding or otherwise), a field delineated by certain compositional rules. (Nyman, 1974, p.4)

Almost in synchronization with musique concrète, the German “elektronische Musik” (electronic music) movement appeared with the contributions of Werner Meyer-Eppler17, Herbert Eimert and Karlheinz Stockhausen. The genre initially focused on the generation of new sounds via signal generators, using early audio synthesis techniques, and rejected the use of “anecdotal” recordings as source material.

17 Unlike Eimert and Stockhausen, Werner Meyer-Eppler was a physicist and acoustician. He was not an active composer or performing musician, although he contributed significantly to the genre as an instructor and theorist. 21

Hence, it too was a restrictive term in terms of admissible source material. Schaeffer’s opposing limitation strategy will be explored in detail in the following section.

In addition to the synthesis-oriented approach of the Cologne school, Eimert and Stockhausen also favored the use of arithmetic and geometric series to determine all of the parameters of their compositions. From Schaeffer’s point of view, the core conceptual ideas of the Cologne school were only an extension of the regular “abstract to concrete” pathway, due to their mathematical foundations. Therefore, the compositional outcomes are incompatible with the perceptual field and the realization of PROGREMU is impossible, since none of the sonic material is suitable for the synthesis stage.

It is possible to speculate that Herbert Eimert himself confirmed the motives behind Schaeffer’s negative opinions. He declared precisely in his essay “What is Electronic Music” that the new genre was the evolution of serialist principles and that it allows the composer to have total control over the parameters of the musical composition (Eimert, 1965). Therefore, the German school was almost instinctively resistant to the idea of using sounds derived from just any sound source, since arbitrary sound objects would immediately signify a loosening of parameter control. It should also be mentioned that, at the same time, Eimert dismissed the musique concrète movement as an empirical act, an amateur attempt that should be a part of sound design for moving pictures.

Nevertheless, in 1956, Stockhausen had already started to expand the borders of the genre when he finished his work “Gesang der Jünglinge”. Stockhausen used the recordings of a boy soprano both as material and as a sound metaphor, together with pure electronic sounds generated from sine waves and clicks. Most of the textbooks on the history of electroacoustic music evaluate “Gesang der Jünglinge” as a crossover composition between the two pioneering schools of electronic music. Similar developments can be found in Eimert’s late works like “Epitaph für Aikichi Kuboyama” (Epitaph for Aikichi Kuboyama) of 1962, which was based on electronically manipulated speech recordings.

If we closely examine the historical evolution of the genre outside of Germany and France, there is no sign of such conceptual regulations and restrictions regarding the definition and realization of electroacoustic music in the global sense.

22

As one of the early and complex establishments historically, the Italian school, led by Bruno Maderna and Luciano Berio, did not limit itself to a certain kind of material or technique.

Schaeffer’s inclination towards a broader headline for his compositional perspective and Stockhausen’s ideological shift were open enough to be read as precursors of the aesthetical and technical integration in the electronic field that was about to come. In addition to historical terms like musique concrète, elektronische Musik and tape music18, a variety of terms has been in use to represent electronically treated compositions (Landy, 2007, p.10):

1-Organized sound

2-Sonic art

3-Sound art

4-Audio art

5-Computer music

6-Electronic music

7-Electroacoustic music

8-Electroacoustics

9-Electronica

Besides the variable cultural preferences for their use, every term has its own advantages and disadvantages. A detailed evaluation of the listed terms would be outside the scope of this dissertation, but one should say that the main problem arises from commercial, contextual and linguistic factors. Electroacoustic music became the most frequently used term in the last five decades, although electroacoustics simultaneously denotes the conversion of acoustical energy into electricity. However, it is compatible with almost every aspect of electronically treated music regardless of aesthetical and ideological standpoints. Therefore, the scientific meaning of the term will be omitted, and “electroacoustic music” will be the term of choice in this dissertation, apart from the historical or contextual debates.

18 One of the earliest terms preferred mainly by American composers, referring specifically to the historical realization and playback medium. 23

Every musical work and theory influenced by the idea of timbral organization via electronic manipulation will be considered within the scope of the term electroacoustic music throughout this dissertation, and historical subdivisions and other terms will not be referred to unless they are emphasized for specific intentions.

2.2.2 Natural sounds / synthesized sounds

Schaefferian theory takes natural sounds as role models and gives them an unquestionable priority over unnatural/synthesized sounds.19 Schaeffer argues on the subject:

“Natural sounds consist of complex and interdependent inner physiognomy, which yields richer results for compositional abstraction during the analysis and synthesis section of PROGREMU, whereas synthesized sounds have a cumulative characteristic with no organic relevance” (Chion, 1983, p.129).

During the course of his theoretical and compositional work, Schaeffer stayed faithful to this idea of using only sound sources with “organic” structures as compositional constituents. His only work to contain synthesized sounds is “Le Trièdre Fertile”, realized in 1975, and it is among the final compositions of the composer (Guedes, 1996).

This kind of discrimination regarding the origins of sounds leads us to a paradoxical situation when the innovative nature of the sound object ideology is taken into account. Schaeffer’s aesthetical view could of course be justified on a subjective/personal level, yet it could only remain valid for a short period of history, due to the impressive technical evolution of synthesis techniques over time. Schaeffer dismisses the sonic results of the basic spectral approach of additive and subtractive synthesis as inorganic, cumulative sound masses with no direct internal relations. Nevertheless, the more recent synthesis methods, such as time-based (granular, pulsar etc.), source modeling, sample-based and hybrid synthesis methods, offer more flexibility for achieving complex sound objects in every aspect. Besides, recent signal processing capabilities improve the capacity of synthesis methods dramatically in the digital domain, offering infinite possibilities of transformation and juxtaposition.

19 Michel Chion does not use the adjective “synthesized” directly. He uses the word “electronic” instead. To avoid possible confusion, synthesized is assumed in this dissertation to be the antonym of natural. 24

For this reason, any kind of source-oriented discrimination strategy will not be followed in this dissertation. Every individual sound object is assumed to be in equal status for their potential compositional value and therefore they have to be considered as sufficient for musical manipulation and construction activities under any circumstances.
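To make the contrast discussed above concrete, the following minimal sketch builds a tone by simply summing static sine partials — the kind of additive, “cumulative” construction Schaeffer found inorganic. The partial frequencies, amplitudes and duration are illustrative values only.

```python
import numpy as np

sample_rate = 44100
duration = 2.0
t = np.arange(int(duration * sample_rate)) / sample_rate

# Additive synthesis: a static stack of harmonically related sine partials.
partials = [(220.0 * k, 1.0 / k) for k in range(1, 9)]  # (frequency in Hz, amplitude)
tone = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in partials)
tone /= np.max(np.abs(tone))  # normalize to avoid clipping
```

The spectrum of such a tone is fixed for its entire duration, which is precisely the static quality that the more recent time-based and modeling methods mentioned above are able to overcome.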

2.2.3 Suitable sound objects

The first three (objective) stages of PROGREMU - typology, morphology and characterology - are the prerequisite phases of investigation before entering the last two (subjective) stages, called analysis and synthesis respectively. In addition to the previous debate on natural and synthetic origins, Schaeffer claimed further that not every natural sound object is suitable for compositional abstraction, due to its possible disparity with the perceptual field, though the limits of the field are variable on personal and cultural levels. Hence, the objects that do not match the sufficient criteria for having musical value cannot pass the analysis stage and enter the concluding synthesis stage. The sound objects which match the “required” standards can become “objets musicaux” (musical objects). In order to perform the task of analysis, an aesthetical reference point has to be established. Schaeffer’s fundamental criteria for defining the qualities of suitable isolated or collective sound objects are summarized in Figure 2.3 (Chion, 1983, pp. 106-107).

Figure 2.3: Schaeffer’s criteria for suitable sound objects. Both isolated and collective sound objects must be simple, memorable and original; open to reduced listening; and of identifiable musical value.

25

As can be observed, sound objects may be suitable individually or collectively, yet both states share the same requirements for becoming musical objects.

Two aspects of the first row of criteria - simple and memorable - are closely related to the frequency spectrum and the duration of sound objects, while the third one - originality - is in a way a subjective evaluation. Human beings are not capable of perceiving sounds as tones, but rather tend to hear them as clicks, if the sounds are shorter than the minimum threshold of duration required for auditory perception.20 This perceptual “anomaly” decreases the capacity of the memorization process. On the other hand, longer sound objects, unless they are the product of electronic/synthetic procedures, contain more spectral and dynamic information than a person can successfully memorize. The second row of criteria - openness to reduced listening - points to the potential distractions caused by the emotional and cultural overload or insufficiency of the sound objects during the reduced listening process. The third row of criteria - identifiable musical value - again carries subjective overtones.

Trevor Wishart divides the morphology of sound objects into two categories: gestalts and objects with dynamic morphology. In order to execute an exhaustive investigation, Wishart suggests two separate yet codependent structural departure points: “gesture” and the “classification of morphologies in relation to perceived natural phenomena”. On the micro level, gestures create a sense of progression in the sonic continuum and are therefore effective on the overall perception of the perceived sound mass (Wishart, 1996). This is visualized in Figure 2.4.

Figure 2.4: Wishart’s codependent parts of sound object investigation: perceived natural phenomena and gesture as the articulation of the continuum.

At this point, the introduction to Denis Smalley’s concept of spectromorphology is of considerable importance for the theoretical basis of the dissertation.

20This phenomenon is frequency dependent. A sine wave of 1000 Hz must be no shorter than 12 milliseconds for human beings to recognize pitch information. 26

As the name implies, spectromorphology concentrates on the structures derived from the spectra of sound objects and their timbral manipulation in time, or, as Smalley prefers to call it, their “temporal shaping”. He notes that there is no consistent low-level unit or hierarchical stratification in spectromorphological music; rather, we have to direct our listening activities at multiple levels. This is a contrasting attempt with the idea of “Hauptstimme” (foreground) and “Nebenstimme” (background), terms frequently used by Schönberg to point out structural hierarchy. Thus, the compositional process requires special attention to every micro and macro structural measure (Emmerson, 1986). Before dealing with constructional issues, Smalley initially defines a “note to noise” continuum for sound classification.

The first spectral type, “note”, signifies every aspect of pitch-related concepts - absolute pitches, intervals, chords - and consists of three subsections: “note proper”, “harmonic spectrum” and “inharmonic spectrum”. The second spectral type, “node”, is a linear transition from the inharmonic spectrum department of the note categorization and contains sounds with ambiguous pitch information, as is the case with some percussion instruments such as cymbals. The third spectral type is “noise”, which has a broadband quality in terms of spectral density. Again, it is a linear transition to the other extreme of the continuum, since the spectral density further emphasizes the ambiguous quality of the sound objects (Emmerson, 1986).

Smalley proposes the terms “gesture” and “texture” in order to shed light on general structural formations, out of necessities similar to those Wishart felt. He defines the terms as follows:

Gesture is concerned with action directed away from a previous goal or towards a new goal; it is concerned with the application of energy and its consequences. . . . Texture, on the other hand, is concerned with internal behavior patterning, energy directed inwards or reinjected, self propagating (Emmerson 1986, p.82).

Here, the term gesture is isolated from its meanings related to gesture as a form of non-verbal human communication or gesture as a concept of musical instrument design; rather, it implies a “commitment”, a task-based process. Texture is associated with micro-level activity forming a macro surface, as described earlier.

Smalley also adds that texture and gesture cannot have equal status in a sonic structure. One of them has to be dominant according to the priorities of the human perception mechanisms. Therefore, he differentiates between “gesture-carried” and “texture-carried” sound structures (Emmerson, 1986, p.83).

27

By rejecting a compositional act motivated by a biased categorization of sound objects, and by defining a terminology for the various aspects of sound structuring, the framework of a present-day electroacoustic music theory starts to appear, filling in the “uncompleted” synthesis stage of PROGREMU.

Any sound object, regardless of its origin, has to be treated equally for its potential musical capacities, though the outcomes will be qualified, accepted or rejected by the subjective/personal aesthetical preferences of the composer. No general rules regarding sound object evaluation can be employed. All sorts of analog or digital artifacts, and any other complex sounds ranging from node to noise, are included in the list of potential sources.

2.2.4 The symbiosis of attack and timbre

Although the envelope of a sound and the timbre of a sound are independent concepts from the scientific perspective of acoustics, post-Helmholtz psychoacoustic experiments clearly demonstrated their interdependence.

Attack21, the first phase in the dynamic evolution of any sound, where the amplitude reaches its peak, has a significant effect on the perception of timbre. Pierre Schaeffer’s experimentation with several traditional music instruments and sound generating objects22 led him to three distinctive observations about the nature of the symbiosis between attack and timbre. He concludes that the attack has the greatest importance for the timbral characteristics of brief, percussive sounds; that its importance decreases as the duration of the sound extends; and that the attack has almost no impact on sustained, stable sounds (Chion, 1983, p.50). Schaeffer interpreted these results as showing the insufficiency of the term timbre and reduced the use of the word solely to an abbreviation for “harmonic timbre”. As substitutes, he offered the terms “characteristic” and “genre”, which he considered to be more inclusive concepts than the spectral implications, since spectral unity is not the only criterion according to Schaeffer. Dynamics has an equal impact on our timbre perception. Concisely, characteristic focuses on the sum of criteria that the sound object is made of, while genre indicates a specific collection of criteria.

21 Some textbooks prefer to use the term “onset”, but for the sake of terminological consistency, the term attack will be used throughout this dissertation. 22Schaeffer focused especially on bells and piano. 28

Wishart extends Schaeffer’s principal observations about the relationship between attack and timbre, but does not abandon the use of timbre as an individual concept and term:

“Timbre is not merely dependent on spectral information but also upon the way that information evolves through time” (Wishart, 1996, pp.63-64).

By giving synthesized sounds equal rights to be present in an electroacoustic composition, we theoretically have infinite possibilities for generating sound objects, a fact that makes the Schaefferian concepts of characteristic and genre almost utopian and impractical. The term “timbre” provides us with a simple yet sufficient starting point for comprehending the fundamental concepts of electroacoustic music. Therefore, it will be used to define the spectra of sound objects.

Timbre manipulation and object transformation will not be restricted only to changes in the frequency spectrum of sound objects. All processes that modify the dynamic structures of the sound objects will be admitted as an organic extension of the idea of timbral manipulation and progression.

2.2.5 Sound object manipulation

Pierre Schaeffer suggests three different terms to differentiate between the potential types of sound object alteration techniques under the overall heading “manipulation”. For a precise understanding of the terminology, Schaeffer’s concepts of “matter” and “form” first have to be discussed.

Drawing persistent analogies with the visual arts, Schaeffer explains the relationship between matter and form with the metaphor “space in a space”, indicating their natural bond. Both terms are associated with the same parameters - frequency, intensity and duration - but have to be detected in divergent temporal units. Matter can be considered as a “snapshot” of these parameters, whereas form signifies the permanent qualities of the parameters obtained in time. We can manipulate the matter (matière) and the form (forme) of sound objects in order to gain material sources for musical synthesis accordingly.

Transmutation is a process which specifically modifies the matter of the object, while transformation only aims to process the form of the object. In both cases, the other “quality” remains unchanged and still identifiable.

29

Modulation, according to Schaeffer, is distinct from transmutation and transformation, since it has to be focused on one of the parameters of the object, such as tessitura, dynamics or timbre (Schaeffer, 1952, p.155).

Although these three terms provide solid, logical representations of the feasible technical activity required to perform spectrally oriented structuring, they also carry potential functional risks. They share the same titles with scientific or conventional musical processes. Transmutation and transformation are terms used in a variety of scientific contexts on diverse grounds. In addition to its electronics- and telecommunication-oriented meanings and connotations, the term modulation represents the traditional concept of key change in Western music theory. This synonymous state of the term may lead to global confusion. Besides, the dissociation of the modulation technique from transmutation and transformation is not very clear. This ambiguous state of the terms may result in deficiencies in communicative and pedagogical aspects.

Without harming the theoretical essence of the role of sound object alteration, simple but functional terms must be introduced for the sake of stable global-scale communication. The overall term “manipulation” conserves its status as the heading for sound object processing activity, since it also serves as the primary verb to signify the processes applied to the different dimensions/parameters of a sound object. There are four fundamental types of manipulation:

“Spectral manipulation” (SM) based applications change the spectral content of sound objects in the frequency domain via audio filter banks and in the time domain via computer applications. SM can be considered as being similar, but does not completely correspond, to the term transmutation.
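A minimal sketch of SM, assuming scipy is available: a band-pass filter reshapes the spectral content of a mono sound object. The filter order and cutoff frequencies are illustrative values, not prescribed ones.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectral_manipulation(signal, sample_rate, low_hz=300.0, high_hz=3000.0):
    """Reshape the spectrum of a sound object with a simple band-pass filter."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, signal)
```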

“Dynamic manipulation” (DM) based applications change the dynamic content of sound objects via envelope shapers. An existing envelope derived from one object may be imposed on another object, or a unique one can be constructed. DM can be considered as being similar, but does not completely correspond, to the term transformation. At the same time, DM will be used as a synonym for “peak” and “RMS” control in signal processing.
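A minimal sketch of DM: the amplitude envelope of a donor object is extracted with a crude rectify-and-smooth follower and imposed on a second object. The window length is an illustrative value.

```python
import numpy as np

def extract_envelope(signal, window=1024):
    """Crude amplitude envelope: moving average of the rectified signal."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(signal), kernel, mode="same")

def impose_envelope(target, donor_envelope):
    """Impose a donor envelope on another sound object (lengths matched by truncation)."""
    n = min(len(target), len(donor_envelope))
    env = donor_envelope[:n] / (np.max(donor_envelope) + 1e-12)  # normalize donor envelope
    return target[:n] * env
```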

“Tessitura manipulation” (TM) based applications change the spectral range and the pitch register of the sound objects via pitch bend and pitch shift related devices, with or without spectral shifting (Wishart, 1994). TM can be considered as being similar, but does not completely correspond, to the term modulation.
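The simplest digital form of TM is varispeed-style resampling, which transposes the register but also changes the duration; dedicated pitch shifters decouple the two. A minimal sketch, assuming scipy; the interval is given in semitones.

```python
import numpy as np
from scipy.signal import resample

def transpose(signal, semitones):
    """Naive tessitura manipulation: resample so that playback at the original
    sample rate shifts the register (and, as a side effect, the duration)."""
    ratio = 2.0 ** (semitones / 12.0)
    new_length = int(round(len(signal) / ratio))
    return resample(signal, new_length)
```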

“Time-based manipulations” (TBM) such as time stretching or granular synthesis change the temporal order of the sound objects.
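As one possible illustration of TBM, the sketch below performs a naive granular time stretch: short Hann-windowed grains are read from the source at one rate and overlap-added to the output at another, so the temporal unfolding changes while the local spectral content is largely preserved. The grain size, hop and stretch factor are illustrative.

```python
import numpy as np

def granular_stretch(signal, stretch=2.0, grain=2048, hop=512):
    """Naive granular time stretch via windowed overlap-add."""
    window = np.hanning(grain)
    out_len = int(len(signal) * stretch)
    output = np.zeros(out_len + grain)
    out_pos = 0
    while out_pos < out_len:
        src_pos = int(out_pos / stretch)      # advance more slowly through the source
        g = signal[src_pos:src_pos + grain]
        if len(g) < grain:                    # pad the final grain if needed
            g = np.pad(g, (0, grain - len(g)))
        output[out_pos:out_pos + grain] += g * window
        out_pos += hop
    return output[:out_len]
```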

More conventional time-based audio signal processing techniques - reverberation, delay, phasing, flanging and chorusing - could also be listed in this categorization, although they also have considerable spectral “side effects”. Micro amounts of time delay cause comb filtering if the delayed signal is used in combination with the original signal. Comb filtering creates a regularly ordered frequency response, due to the constructive and destructive interference between the original and time-delayed signals.
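A minimal sketch of the comb-filtering effect just described: mixing a signal with a copy of itself delayed by a few milliseconds produces regularly spaced notches in the spectrum. The delay time and mix gain are illustrative values.

```python
import numpy as np

def feedforward_comb(signal, sample_rate, delay_ms=2.0, gain=0.7):
    """Mix the signal with a short-delayed copy of itself: y[n] = x[n] + g * x[n - D].
    Spectral notches appear at odd multiples of 1 / (2 * delay) Hz."""
    delay = int(sample_rate * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(delay), signal])
    padded = np.concatenate([signal, np.zeros(delay)])
    return padded + gain * delayed
```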

These four types of manipulation can also be used in combination. The simultaneous presence of SM, DM, TM and TBM is accepted as a radical typological and morphological reconstruction of the sound object and will be labeled “Absolute Manipulation” (ATM).

2.2.6 Reduced listening vs. heightened listening

Reduced listening is one of the key concepts of Schaefferian theory, as mentioned earlier in detail. It has direct connections to the idea of the sound object, as a natural consequence of the rightful primacy Schaeffer gives to aural perception.

The cause of the sound and the information or code it contains have to be avoided/omitted in order to be able to focus on the spectral and dynamic qualities of the sound itself for compositional matters. Since human beings normally do not react to sound objects according to their intrinsic sonic qualities, reduced listening becomes an act requiring a “learning period”. The tendency towards sound source determination and meaning gathering is more dominant when the perceived sounds are generated from natural origins. Therefore, reduced listening is an organic extension for the comprehension of the essential theory behind the early musique concrète movement, and it secures its position in later movements. Although electronically produced or manipulated sounds automatically decrease the potential urge to assign sources or attach meanings to them, reduced listening remains a mandatory stage for the evaluation of sound objects before the synthesis stage of PROGREMU.

31

In a converse manner, some electroacoustic works use sound objects in an anecdotal, mimetic or symbolic way, in which case the act of reduced listening may lead to misinterpretation of the work. Thus, the presence of the counter-concept called “heightened listening” (Landy, 2007, p.105) is necessary to underline the fundamental differences in sound object recognition. Some of the works of Luc Ferrari and Trevor Wishart may fall into this category due to their use of sound as a metaphor.

From the technical perspective, a sound object present in a composition as a sonic symbol may appear in its original form or in a gradual process of manipulation. The limit of manipulation differs conceptually. For instance, as a genre, “soundscape composition” requires preserving the original identity of “the source, location, or time” of the object. Barry Truax states that the listener’s “past experiences” and “associations” regarding the environmental sounds are contextually mandatory. Katherine Norman emphasizes the listener’s ability to engage actively in these recognizable stages. Therefore, absolute abstraction via manipulation does not have a role in soundscape composition (Westerkamp, 2000). There is a variety of autonomous ideological branches of sound and timbre composition, and new ones will continue to emerge. Therefore, our theoretical perspective must be flexible regarding electroacoustic composition.

32

3. HISTORY AND ANALYSIS

3.1 Early Theoretical and Technological Developments

The relationship between music and technology has been subject to many debates. Historically, music was considered one of the principal sciences included in the “quadrivium” (four arts/four ways) of the seven liberal arts. The other departments are arithmetic, geometry and astronomy, as stated in the classical Greek philosopher Plato’s work “The Republic”, written around 380 BC.23

The classic example of the invention of the pianoforte, a musical instrument that indisputably benefited from the highest form of mechanical technology of its time (the early 18th century), provides us with sufficient evidence of the cooperation of science and music (Braun, 2002). The nature and the rate of this interaction have been analyzed, supported or criticized insistently by many scholars, especially in the last three centuries, and continue to be so today at an ever-accelerating rate. Yet the mutual progression of music and science has never been more pronounced and organic than in electroacoustic music.

This leads us to the mandatory requirement of parallel historical and analytical research on the theoretical and technological developments. The history of audio recording will be analyzed separately in the next section to provide the necessary background information before the introduction of the technical and compositional concepts of historical significance in the fourth chapter.

The early theoretical ideas and manifestos which influenced the aesthetic foundations of electroacoustic music started to appear around the beginning of the 20th century, which might be considered an era of renewal, a time of total transformation in the long history of art.

23 Trivium (three arts / three ways), the precursor of quadrivium, consists of grammar, logic and rhetoric. 33

The intellectual and (occasionally) ideological points of departure of the essays and manifestos written at the dawn of the “new” century are diverse and fertile. As a remarkable feature, all of them share a particular demand: the rather urgent need for the expansion of sonic material to enrich the aesthetic borders of musical composition.

Six important early written examples carry significant importance for the history of electroacoustic music, albeit without the accompaniment of an actual musical work. Perhaps for the first time, we witness such an amount of detailed documentation of a major aesthetical U-turn. Contextually, sound, noise and timbral concerns are among the frequently discussed topics, although there is no homogenous philosophical or ideological foundation shared by these texts.

Chronologically, Ferruccio Busoni’s “Sketch of a New Esthetic of Music” (1907) is presumed to be the initial example, although earlier fictional descriptions and predictions of a sonic continuum had existed in Western literature. The essential arguments and the influential references of Busoni’s essay were discussed in the introductory chapter.

Balilla Pratella’s “Manifesto of Futurist Musicians” (1910) and Luigi Russolo’s “The Art of Noises” (1913) were motivated in a collective manner. The ideas of Pratella and Russolo were prolongations of the Italian Futurist movement, an art movement with significant political overtones. Futurism embraces the modern state of human culture and daily life, affected by technology and its “side effects”, as an extension of our understanding of nature. Music is not Futurism’s only concern; instead, it covers all forms of art with a general worldview. In our case, Pratella attacks the musical institutions and urgently requests a compositional and educational approach devoid of any kind of restrictions and rules. Three years later, Russolo demands the addition of every type of noise into musical vocabularies. He categorizes the types of noises into six general sections: rumbles; whistles; whispers; screeches; noises obtained by percussion on metal, wood, skin, stone, etc.; and the voices of animals and men. Every section contains subdivisions organized by a simple morphological classification, which is reminiscent of Schaeffer’s later attempts. Although the Futurists put a similar emphasis on the mutual nourishment of music and technology as Busoni and Varèse did, one must admit that their aesthetical and ideological goals were radically different from those of the rest of the six.

34

Henry Cowell’s “The Joys of Noise” (1929) and John Cage’s “The Future of Music: Credo” (1939) question the restrictions of conventional Western musical theory and practice in a similar manner to the preceding manifestos, and both essays focus particularly on the acceptance, as musical devices, of the sounds traditionally gathered under the category “noise”. Noise has several meanings in a variety of contexts, yet Helmholtz’s definition is particularly convenient in our case, in which sounds resulting from non-periodic vibrations are labeled “unmusical”. Additionally, Cage emphasizes the importance of the invention of new electrical instruments for the creation of a non-hierarchical sonic continuum, based on any possible combination of sounds.

Edgard Varèse, whose creative activity covered a considerably long time span, had been active since the first decade of the 20th century. In his collected lectures, “The Liberation of Sound” (1936/1952), his mature opinions about the relationship between music and technology and their mutual benefit from each other can be observed. Varèse also stresses the importance of the scientific background a modern composer must have.

At different times and in different places music has been considered either as an Art or as a Science. In reality, music partakes of both. Hoëne Wronsky and Camille Durutte, in their treatise on harmony in the middle of the last century, were obliged to coin new words when they assigned music its place as an "Art-Science," and defined it as "the corporealization of the intelligence that is in sounds." (Varèse, 1939, p.3)

The common motives of the authors are not causeless. The listed articles and manifestos appeared in exactly the same time span as the major technological leaps of the late nineteenth and early twentieth century. The progression of technology was accelerated further with every single innovation; thus new devices for recording and playback (with higher fidelity) and new electrical instruments started to emerge. Some of them, especially audio recording and playback mediums, found public acceptance. Starting with the “electro-mechanical piano” built by Msr. Hipps in 1867, approximately 52 different electronic musical instruments were invented until about 1940, in absolute synchronization with the evolution of the audio recording industry.

Table 3.1 shows selected highlights of the corresponding historical evolution of technology, early theories and compositions based on a variety of sound-oriented experiments.

35

Table 3.1: Early innovations, manifestations and compositions.

Time Span: 1877–1894
  Innovations: Microphone (1876) – Emile Berliner; Wax Cylinder (1877) – Thomas Edison; Gramophone Disc (1888) – Emile Berliner

Time Span: 1895–1910
  Innovations: Telharmonium (1897) – Thaddeus Cahill; Telegraphone (1898) – Valdemar Poulsen; Triode Vacuum Tube (1908) – Lee De Forest
  Texts: Ferruccio Busoni (1907); Balilla Pratella (1910)

Time Span: 1911–1925
  Innovations: Intonarumori (1913) – Luigi Russolo; Theremin (1917) – Lev Termen
  Texts: Luigi Russolo (1913)
  Compositions: Vergilo – Luigi Russolo (1914); Corale – Luigi and Antonio Russolo (1921)

Time Span: 1926–1940
  Innovations: Ondes Martenot (1928) – Maurice Martenot; Trautonium (1930) – Friedrich Trautwein; Rhythmicon (1931) – Henry Cowell and Lev Termen; Magnetophone (1932) – AEG
  Texts: Henry Cowell (1929); Edgard Varèse (1936/1939); John Cage (1939)
  Compositions: Wochenende24 – Walter Ruttman (1930); Oraison – Olivier Messiaen (1937); Imaginary Landscape No. 1 – John Cage (1939)

Time Span: 1941–1948
  Innovations: AC Bias Technology (1940) – Walter Weber
  Compositions: Etude aux chemins de fer – Pierre Schaeffer (1948)

24 Walter Ruttman is not a composer; he is an experimental film director. Wochenende was presented as a film without an image. It is a kind of sound collage that stands in parallel with the musique concrète movement. 36

As can be seen, the timeline ends with Pierre Schaeffer’s “Etude aux chemins de fer”, the first work of the series “Cinq études de bruits”, composed successively in 1948. It is widely accepted as the first actual composition of electroacoustic music. Nevertheless, there are some earlier compositions, such as John Cage’s “Imaginary Landscape No. 1” (for muted piano, a cymbal, and two phonographs, 1939) and especially Halim El Dabh’s “The Expression of Zaar” (wire recorder, 1944), which can be considered electroacoustic works. Cage’s work was a live performance, while El Dabh’s work was based on recordings of an ancient Egyptian Zar ceremony and daily sounds. These sounds were recorded with a wire recorder, an early form of magnetic recording, and were later edited and manipulated in a manner very similar to musique concrète. The general perception probably stems from the lack of continuous compositional activity before Schaeffer. However, this historical labeling does not prevent one important question from arising: why is there a considerably long time gap between the development of the intellectual activity and its actual compositional practice? The answer lies in two main reasons. The imitative nature of the early electronic instruments, mostly modeled on acoustical instruments and tempered scales, was a restraining factor. On the other hand, improvement in recording technology was necessary in terms of audio fidelity. This advancement reached its peak during the Second World War.

The majority of the electronic instruments developed before 1950 were limited to producing sounds within the restrictions of the equal tempered scale; hence, they had to be operated via traditional keyboard systems. In practice, this means that, instead of being independent sound sources, most of the electronic devices only provided new textures for the orchestral repertoire as subordinates to the pitch-based system. We may assign several reasons to this particular problem. One could be the lack of communication between inventors and composers. We know that Edgard Varèse tried for a long time to establish an institute for scientific musical research, but never succeeded in finding any support (Manning, 1985). A second potential problem might be financial concerns, since the high expense of the instruments required a stable financial income, and the number of followers of a new music certainly could not guarantee that. On the other side, even the instruments whose designs were open to sonic experiments were adapted to conventional playing techniques by their specialized performers.

37

In the essay “The Future of Music: Credo”, John Cage states accordingly:

“Most inventors of electrical musical instruments have attempted to imitate eighteenth and nineteenth century instruments. We are shielded from new sound experiences” (Cox and Warner, 2009, p.26).

This situation led most composers to experiment with percussion, non-musical instruments and phonographs. The focus on percussion instruments has a purely acoustical logic. Most percussion instruments have non-periodic waveforms, resulting in a continuous spectrum, and contain no specific pitch information (Howard and Angus, 2009). This spectral quality serves as compositional material ready to be shaped. The most important work of this kind of strategy came from Varèse. “Ionisation”, written between 1929 and 1931, is the first composition for percussion ensemble alone, though Varèse had used percussive elements in his earlier compositions such as “Hyperprism” (1923) and “Intégrales” (1925) in combination with wind instruments.25 “Ionisation” uses not only a variety of percussion instruments but also non-instrumental sources like sirens and a lion’s roar. As will be discussed later in the analysis section, the timbre/texture driven traditional orchestral approach continues to exist, but the use of percussion and of acoustic or electrical sound sources signifies a transitional period before switching to working solely or dominantly in the electronic domain in the second half of the 20th century. Audio recording, a relatively new medium in the first quarter of the twentieth century, became a tool for sound manipulation and a workstation, a kind of “audio canvas”.

Bauhaus artist Laszlo Moholy-Nagy wrote two articles entitled “Production – Reproduction” (1922) and “New Form in Music: Potentialities of the Phonograph” (1923). In those articles, Moholy-Nagy saw the phonograph26 as a new musical tool to produce new music from existing phonograph records; in a sense, he foresaw not only electroacoustic music but also relatively recent genres like turntablism and hip-hop almost 60 years in advance.

25 The only work of Varèse containing electronic instruments before 1950 is Ecuatorial (1934). The original version included Theremins. The revision in 1961 replaced the Theremins with Ondes Martenots. 26 Here, we encounter a misconception in terms of terminology. As can be seen in section 2.1.1, Edison invented the phonograph, but Berliner’s gramophone gained popular recognition due to its technical features. In reality, Moholy-Nagy was talking about gramophones. 38

Nagy summarized the potential productive capabilities of playback devices as follows:

“The composer would be able to create his composition for immediate reproduction on the disk itself, thus he will not be dependent on the absolute knowledge of the interpretative artist” (Cox and Warner, 2004, p.332).

Although not a single audio example has been left behind, except for Cage’s previously mentioned composition “Imaginary Landscape No. 1”, it is known that composers such as Darius Milhaud, Paul Hindemith, Ernst Toch, Henry Cowell and, again, Edgard Varèse were actually experimenting with audio recordings during the 1920s. As can be expected, the manipulation capacities of the audio medium were severely restricted, along with a poor frequency range and a low signal-to-noise ratio.27 Variations in playback speed caused register-wise transposition and allowed timbral changes. The superimposition of several layers created the complete musical work. Another essential disadvantage of the earlier formats was the impossibility of audio editing, due to the physical nature of the material. Another experimental medium for sound, which compensated for the lack of editing, was film. The audio on film could be manipulated similarly to gramophone records, but sound could also be drawn onto the film as an early form of sound synthesis. Film could also be cut and pasted. This opportunity led a variety of artists to produce abstract works based on the relationship of sound and image (Kahn, 1999). Walter Ruttmann’s “Wochenende” is an early example of the improved film sound editing technique, and it is known that the Russian composer Arseny Avraamov experimented with sound on film during the same period. Film continued to be a medium for compositional activities later on as well. Daphne Oram’s “Oramics” system, invented in 1957, was a continuation of this idea.
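As a concrete illustration of the transposition caused by playback-speed variation mentioned above, the resulting interval follows directly from the speed ratio. The sketch below works through one hypothetical case; the disc speeds used are simply common historical values, not a documented experiment.

```python
import math

# Playing a 78 rpm disc at 33 1/3 rpm: the pitch ratio equals the speed ratio.
ratio = (100.0 / 3.0) / 78.0          # ≈ 0.427
semitones = 12 * math.log2(ratio)     # ≈ -14.7 semitones, i.e. down more than an octave
print(round(semitones, 1))
```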

The developments in audio recording technology, especially the improvement of magnetic recording systems, brought many benefits and relative ease in terms of micro and macro time editing, sound manipulation and high-fidelity spectral quality. It is necessary to reserve a special section for a concise history of audio technology, in order to understand the facts behind the dramatic increase in the number of electroacoustic works and studios after the Second World War.

27 Only with the advance of digital audio technology did it become possible to capture the whole frequency range human beings are able to hear, with a very high signal-to-noise ratio. 39

3.1.1 Concise history of audio recording

The history of sound recording is relatively short, only slightly preceding the invention of capturing moving images. Covering all of its aspects and technical formats would be outside the scope of the dissertation. There were many formats and technologies that remained isolated or were totally forgotten due to practical and commercial issues. Nevertheless, it is necessary to underline the cornerstones of audio history, since it has organic bonds with the history of electroacoustic music.

The “phonautograph”, invented by Édouard-Léon Scott de Martinville in 1857, was a device able to capture the visual representation of acoustic waveforms onto a cylinder or glass via a form of stylus, such as a feather. It is not as well known as Edison’s subsequent invention, since it was not capable of playing the recording back. However, it was an important step, because it proved that discoveries regarding acoustics and the human auditory system could be a template for further research in audio technology. Many audio transducers, such as microphones and speakers, take the human hearing mechanism as a model.

The “phonograph cylinder”, invented by Thomas Edison in 1877, was the first sound recording and reproduction device. With the use of horns, the acoustical energy was transferred onto the cylinder. A phonograph record cannot be edited, deleted or mass-produced.

Emile Berliner’s “gramophone”, invented in 1888, uses principles similar to the phonograph. One major difference was the choice of the material on which the acoustical energy is captured. Instead of using wax cylinders, Berliner chose to use flat discs. From a single master disc, large numbers of duplicates can be created. This particular property single-handedly triggered a new industry called “music recording”. One specific drawback continued to exist, though (Kahn, 1999): the audio material cannot be edited or deleted. Only the RPM, the rotational speed of the disc, can be manipulated, as was evident in the early experiments related to this medium. In 1924, the application of electrical technology to the recording and playback mediums resulted in less noise and improved RMS levels. The next big leap in audio recording history was the improvement of magnetic tape recording technology, which would later become the main medium for electroacoustic music due to its several advantages over previous formats.

40

The concept of magnetic recording started as a theoretical idea of the American engineer Oberlin Smith. In 1878, Smith suggested that telephone signals could be recorded onto steel wire, due to steel wire’s capability to be magnetized, but he did not realize this initial concept. In 1898, the Danish inventor Valdemar Poulsen developed Smith’s idea and created the first actual magnetic wire recorder, called the “telegraphone”. The final and complementary contribution came from a German engineer, Fritz Pfleumer. Around 1928, he transferred the concept of magnetic recording to an iron(III) oxide (Fe2O3) powder coating on a long strip of paper and called the new recording medium the “magnetophone”. In 1932, the AEG company acquired the rights to Pfleumer’s invention for further development. In 1934, the BASF company established plastic-coated tape, which brought two fundamental improvements: magnetic tape can be erased and new audio material can be recorded onto the same tape, and the plastic tape can be sliced with razor blades and joined, meaning that, for the first time, audio material could be edited (Url-6).

In spite of all the major advantages, early versions of the magnetophone suffered from low signal-to-noise ratios and a limited frequency range. These problems were solved with the AC tape biasing technique, rediscovered by Walter Weber under the direction of Hans Joachim von Braunmühl in 1940. AC biasing relies on the principle of adding ultrasonic frequencies - sine waves between roughly 75,000 and 150,000 Hz - to the audio material during recording, which helps the low-level signals to be boosted into the linear range, resulting in less noise and a wider frequency range.

Until the end of World War II, this improved version of magnetic recording technology was unknown outside of Germany. American companies then reverse engineered the magnetophone devices to understand their principles and started to mass-produce magnetic tape recorders in 1946. The beginning of the history of electroacoustic music clearly overlaps with the worldwide marketing of high fidelity audio recording and playback technology.

Although the early equipment of the Club d’Essai, Schaeffer’s studio, consisted of direct disc-cutting lathes, by 1951 the renewed studio had access to magnetic tape recorders. The second electroacoustic music studio in the world, located in Cologne, also had magnetic tape recorders from the very beginning, since the serialist approach of the elektronische Musik school also subjected the duration parameter to formal structuring, which made tape slicing an indispensable part of the process (Manning, 1985).

41

3.1.2 Abstract art

It would be misleading to examine the theoretical and practical history of electroacoustic music in absolute isolation. The 20th century became the breaking point at which the majority of art forms departed from their traditional aspects. The historical progression of the visual arts, especially of painting, crossed similar paths with music, and a mutual relationship in terms of influence between various modern painting movements and music may be observed. Expressionism, fauvism, cubism or orphism (any abstract, non-figurative/non-objective movement) could be included in this category. The driving forces of the renewal in painting and music were analogous: the globalization that diminishes the side effects of socio-cultural isolation, the impact of technology on daily and cultural existence, and the developments in science and philosophy were vital for this progress.

The transformation of “object” to “subject” in modern art can be considered a functional and theoretical point of departure from the past similar to the one experienced in music. The transition from fixed pitches to sound objects and timbre-related structural concerns corresponds to the non-referential usage of color and form in abstract art. In order to give this parallelism a logical foundation, a very brief glance at the history of Western painting is essential. The aesthetic foundations of Western painting from the Renaissance to the late 19th century were governed by three main motivations and restrictions: the reproduction of reality, the reproduction of religious commitments and the necessity to fit the rules of visual perspective.

These fundamental values of traditional painting, and its conceptual and material restrictions, can be related to the use of fixed-pitch notes, pitch scales and general forms in conventional musical composition. The general tendency of the abstract movements was to break these boundaries, synchronously with music. The visual arts no longer had to represent the external world or insist on “reality”. Instead, they could lead the spectator to an inner or fictional world, open to interpretation, motivated by the artist’s physical or spiritual reality. Colors, figures and forms may interact with each other without the guidance of predetermined rules and can build freeform structures which have no direct references on material or moral levels. A new way of visual perception, “a reduced seeing”, is required for aesthetic appreciation.


“L'esprit des morts veille” (Spirit of the Dead Watching), a work by Paul Gauguin completed in 1892, was described by the artist as “the harmony of the colors” forming a musical chord, a musical motive representing death (Figure 3.1). This is an example of the transitional period from “abstracted” to “abstract” (İpşiroğlu, 2006).

Figure 3.1: Paul Gauguin - L'esprit des morts veille (Spirit of the Dead Watching).

Most visual artists took the abstract tendencies of modern music, which started towards the end of the 19th century, as a role model. One of the key figures in abstract painting, the Russian painter Wassily Kandinsky (who was also an amateur musician), started to label his paintings as “compositions” and “improvisations” around 1912. In the same period, the English art critic Roger Fry coined the term “visual music” to describe the works of Kandinsky.

Nazan İpşiroğlu lists five important developments around 1910 which may give clues about the history of the interaction between the two art forms. The initial incident is the publication of “Der Blaue Reiter Almanach” (The Blue Rider Almanac), which later gave its name to the artist group led by Wassily Kandinsky and Franz Marc. The overall concept of the almanac was formed around the principles of modernist art and the common properties of different art forms. The subsequent event was the publication of Kandinsky’s “Über das Geistige in der Kunst” (On the Spiritual in Art). The autonomous state of color and its structural effects were discussed in the light of anthroposophy, a philosophical movement founded by Rudolf Steiner. Anthroposophy focuses on the inner development of individuals to a higher state of consciousness without depending on sensory/material experience or information from the real world.

Synesthesia, the condition in which a sensory stimulation triggers a secondary sensory response, is also important in Kandinsky’s approach (Figure 3.2). The rise of cubism, the simultaneous action in the paintings of the Futurist movement and the exhibitions of abstract paintings in the “Salon des Indépendants” are additional important developments. İpşiroğlu emphasizes the musical form of the fugue as the main influence and basis for the majority of these visual art works (İpşiroğlu, 2006).

Figure 3.2: Wassily Kandinsky - Composition VI.

In addition to color- and form-related structures, other innovative techniques like collage or “papier collé” (an extended version of the collage technique) started to be used. Therefore, the role of texture gained a new dimension. Collage-based techniques correlate with the recorded-sound-based tendencies of the musique concrète school, and the new sensibility for achieving unique macro textures is also a concern of electroacoustic music in general.

Pierre Schaeffer expresses his impressions regarding the abstract tendencies in art in his early writings. In conclusion, he states:

For a long time now, no one has been shocked, when looking at paintings, by the absence of a subject, because paintings do not tell a story, any more than they describe a landscape or a still life. The most interesting canvases are those where the formal element is so discreet, so simplified, an impression of beauty emanates from them. Which leads to the thought that the most worthwhile pieces in musique concrète are those, which, far from seeking musical expression in the classical sense, illustrate simple form, beautiful matter; there’s no need to look for an exposition, movements, details. (Pierre Schaeffer, 1952, p.103)


3.2 Analysis

The analysis section consists of six different compositions, which are divided into two equal parts according to a conceptual order. The first section includes three compositions written for traditional instruments. The realizations of these selected works are based either on standard notational systems, as will be investigated in Scelsi’s “Quatro pezzi per Orchestra” and Lachenmann’s “Kontrakadenz”, or on non-standard graphic notation, as can be observed in Penderecki’s “Polymorphia”. The second three-piece section is reserved for electroacoustic works, composed with a variety of organic or synthetic sound sources which are manipulated further by various processes. The only work to apply a form of notation for future realizations is Stockhausen’s “Studie II”.28 Stockhausen supported his detailed graphic notation with written instructions.

The incompatibility of the exemplary works with conventional interval-, harmony- and form-oriented analysis methods provides evidence of a timbre- and texture-dominated compositional perspective and immediately requires a unique critical approach. Again, it should be underlined that SM-related strategies are not a feature restricted solely to the electroacoustic domain. Timbre and texture also became an integral concern of instrumental compositional practice for a group of composers, which can be traced back to Claude Debussy and the later generations of musical impressionism (Salzman, 1988, p.23). Nevertheless, the most adventurous examples started to appear during the late 1950s and early 1960s and reached their peak of complexity during the 1970s. Expectedly, the progression is almost identical in the electroacoustic domain. In order to make historical comparisons possible, a chronological approach is chosen for both the instrumental (two orchestral works and a work for string orchestra) and the electroacoustic (magnetic tape) sections. Evidently, more recent works display more complex and less predictable spectral and formal structuring. This fact could be attributed to the cross-influences between the two types of works, to their individual relations with their organic precursors, and to technological improvements (affected by financial possibilities).

28 The approach of using some form of notation is composer specific, since not every composition is intended for live performance or open to interpretation in terms of sound objects. Karlheinz Stockhausen was one of the first composers to use graphic notation and to document every technical aspect of his compositions in the electroacoustic domain.

3.2.1 Quatro pezzi per orchestra

“Quatro pezzi per Orchestra”, written in 1959, is considered the final composition of Scelsi’s second period (of four), and it is the most complex example of his “single note” compositions, the previous examples being “Four Pieces” (1956) and “String Trio” (1958).

Twenty-six players are required for the realization of the work. The employed instruments are flute in G, oboe, English horn, two clarinets in B flat, bass clarinet, bassoon, four horns in F, alto saxophone in E flat, tenor saxophone in B flat, three trumpets in C, two trombones, bass tuba, two violas, two cellos, double bass, saw29 and percussion instruments including timpani, two bongos, tumba, hanging cymbal, and small and large tam-tams. The percussion instruments are required to be played with different kinds of sticks, such as soft or hard tam-tam sticks, in order to obtain variations in their spectral output.

An important logical foundation of the orchestral organization lies directly in the selected instruments’ ability to reproduce quarter tones, with the exception of the percussion instruments. As will be investigated subsequently, microtonal fluctuations have a significant role in the composition. In Penderecki’s “Polymorphia”, microtonality plays an important part in the creation of the texture generated by the clusters, but Scelsi used microtonality to attach a sense of (minimal) horizontal motion to the composition, where the vertical motion is a priority. On the other hand, this motion also implies a form of tidal movement between the traditional concepts of consonance and dissonance.

Figure 3.3: Quarter tone and normal intonation indication (Scelsi, 1960, p.3).

“Quatro pezzi per Orchestra” consists of four movements, each focused on a fundamental pitch: F, B, A♭ and A♮ respectively. Different combinations of the instruments are used throughout the composition, with the exception of the fourth movement, in which all twenty-six players perform simultaneously.

29 Flexatone

The Italian composer Giulio Castagnoli labels the fourth movement as an act of recapitulation of the first three movements, due to the reappearance of the previous timbral approaches in one specific movement. However, the scholar George Reisch argues that some techniques, such as overtone- and sub-harmonic-oriented enhancement, do not occur in the previous movements. Thus, he prefers to label the movement as the “intensification of the selected features of the overall composition” (Reisch, 2001, p.247). The fourth movement is therefore chosen for analysis, since it provides all the necessary data for determining the general timbral qualities and principles of the whole work.

From the macro-structural point of view, it is possible to relate the dynamic structure to the spectral structure throughout the whole movement, due to the correlation between loudness and spectrum. Composers have known this fact practically for centuries, since acoustical instruments exhibit different spectral characteristics from “pianissimo possibile” to “fortissimo possibile”, generally radiating a duller sound at low sound pressure levels and a brighter sound at higher sound pressure levels.

The appearance of the brass section in the orchestration is a deliberate choice, since brass instruments are among the most suitable examples for demonstrating this physical fact. Applying more force to them entails a certain lip position on the part of the performers. Unlike softer dynamics, where the lips cannot close completely, louder dynamics cause the lips to close suddenly. This particular lip motion disturbs the linear behavior of the system, producing higher harmonic distortion (Url-7).
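A minimal numerical sketch of this general principle follows; it is not a model of brass acoustics, only an illustration of how any nonlinearity enriches the spectrum of a pure tone as the drive level increases (a tanh waveshaper is used here purely for convenience).

```python
import numpy as np

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
f0 = 220                               # fundamental frequency in Hz

def harmonic_levels(drive, harmonics=(1, 3, 5, 7)):
    """Levels (dB, relative to the fundamental) of selected harmonics."""
    y = np.tanh(drive * np.sin(2 * np.pi * f0 * t))   # soft saturation
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    mags = np.array([spectrum[int(round(k * f0 * len(y) / fs))] for k in harmonics])
    return np.round(20 * np.log10(mags / mags[0]), 1)

print("soft playing:", harmonic_levels(0.2))   # upper harmonics far below the fundamental
print("loud playing:", harmonic_levels(3.0))   # upper harmonics much stronger: a brighter sound
```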

Generally, the composition has linear tendencies in terms of dynamic structuring and timbral organization from a prolongational-reduction-biased perspective, though the brass section is mostly responsible for creating microbursts of crescendo and timbral expansion, closely (and spectrally) supported by the percussion instruments.

In addition to this acoustical fact, there is also a psychoacoustical condition researched by Harvey Fletcher and Wilden Munson, known as the Fletcher-Munson (equal) loudness curves. According to the Fletcher-Munson curves, different frequency bands require different amounts of energy in order to be perceived as equally loud. As can be seen in Figure 3.4, the two poles of the frequency spectrum, the lowest and the highest frequencies, require more energy to be perceived as equal to the mid-range frequencies, where human hearing is most sensitive.


We should note that the actual curves in Figure 3.4 are the results of experiments made with sine waves. An actual musical composition consists of a complex spectrum. Therefore, these curves should be treated as approximations for complex musical material rather than scientifically exact predictions.
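As a rough numerical companion to these curves, the sketch below uses the standard A-weighting formula, which approximately follows an inverted equal-loudness contour; it is a stand-in illustration of frequency-dependent sensitivity, not the Fletcher-Munson data itself.

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB for frequency f in Hz (0 dB near 1 kHz)."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00

# Low and very high frequencies receive large negative weights, i.e. they need
# more physical energy to be perceived as loud as the mid range.
for freq in (50, 100, 500, 1000, 4000, 10000, 16000):
    print(f"{freq:>6} Hz : {a_weighting_db(freq):6.1f} dB")
```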

Figure 3.4: Fletcher- Munson curves (Howard and Angus, 2009, p.93).

Without investigating the score, one may observe that an overall reading of Figure 3.5 and Figure 3.6 indicates a structure arising from this correlation, where the vertical axis represents frequency in the spectrogram and amplitude in the waveform.30 The timbral compression and expansion are proportional to the dynamic indications, the extended playing techniques employed in the score and the methodology of orchestration. Structurally and texturally, the movement is composed in a kind of ABA form.

The opening and the closing sections contain similar textures with narrow-bandwidth spectra, whereas the middle section contains a denser spectrum, covering the whole frequency range of the human auditory system. However, the opening and closing sections differ explicitly in terms of dynamics.

30 The horizontal axis represents time in the spectrogram and frequency in the spectrum analysis.

Once again, the tendency of a reduction-based analysis method leads us to the following inference: the former section exhibits linear crescendo characteristics, while the latter section preserves a linear decrescendo quality. Nevertheless, both sections are subject to short-term interruptions, as briefly mentioned on the previous page.

Figure 3.5: Spectrogram for Quatro pezzi per Orchestra movement IV.

Figure 3.6: Waveform for Quatro pezzi per Orchestra movement IV.


The movement begins with sparse instrumentation (clarinet, four horns and two violas only), but starting with the third measure the presentation of a cumulative approach is evident. In order to produce an organic texture, Scelsi modified the note A, which is the nucleus of the movement, in several ways. A snapshot of measures 5-7 (string section omitted) in Figure 3.7 affords us a closer look at the timbral organization. The quarter-tone difference between the clarinet and bass clarinet31, which continuously generate approximations of the note A, creates a “beating” effect.

Beating, a psychoacoustical phenomenon, occurs when two tones are very close to each other in frequency, typically less than about 15 Hz apart. As long as the frequency difference between the separate tones stays below this limit, human perception fuses them into a single tone with periodic variations in its amplitude. The rate of these amplitude variations is equal to the frequency difference of the two physically separate tones. The resulting sound is modified further with the use of vibrato, which creates variations in the beating rate and supplies a dynamic gesture to the perceived gestalt. Throughout the movement, the performers occasionally have to apply a form of wide vibrato, as indicated in the score by the composer (Figure 3.8).
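The arithmetic behind beating can be verified in a few lines. The frequencies below are illustrative (they are not Scelsi’s actual tunings): two sines 3 Hz apart combine into one tone at their mean frequency whose loudness pulses three times per second.

```python
import numpy as np

fs = 48_000
t = np.arange(0, 2.0, 1 / fs)
f1, f2 = 440.0, 443.0                         # two tones, 3 Hz apart

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2): a "carrier" at the mean
# frequency whose amplitude envelope pulses at the difference frequency.
carrier = np.sin(np.pi * (f1 + f2) * t)
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)

print("max identity error:", np.max(np.abs(mix - carrier * envelope)))  # ~1e-12
print("perceived beat rate:", abs(f2 - f1), "Hz")
```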

At the same time, a single horn repeats a small motive formed of G (a quarter tone higher) and A (normal intonation), until it settles on the A. This stable note is spectrally processed in the seventh measure with the addition of a mute (sordino). Generally, the horn parts alternate between muted and unmuted versions. Occasional use of non-measured flutter-tonguing is present at various points of the movement. Flute, oboe, English horn and bassoon do not have any timbral modifiers and support the texture with their native and stable spectra.

The last instruments to contribute to the overall texture between measures five and seven are a single saxophone and a trumpet. The alto saxophone holds the A constantly while varying its intonation. The trumpet stays on A with a normal and stable intonation. In the brass section, the saxophone and tuba use regular mutes. Trombones and trumpets switch from regular mutes to metallic mutes (sordino metallica), as can be seen in Figure 3.9. We should also notice the regular change in the dynamics of all contributing layers.

31 The clarinet receives a normal intonation mark in measure 4.

Figure 3.7: Measures 5-7 (Scelsi, 1960, p.46).

Figure 3.8: Wide vibrato indication (Scelsi, 1960, p.3).

Figure 3.9: Measures 18-20 (Scelsi, 1960, p.50).


A closer glance at the string section shows that the most frequently encountered playing techniques are “pizzicato” (plucking the strings) and “sul ponticello” (playing near the bridge), executed to emphasize the higher partials. However, the double basses are exceptions among the strings: Scelsi did not apply sul ponticello to the double bass parts. In the violas and cellos, the use of sordino occurs frequently, with regular returns to ordinary arco playing, as can be seen in Figure 3.10.

Figure 3.10: Measures 24-26 (Scelsi, 1960, p.52).

Until measure twenty, every individual instrument or group is introduced in a variety of combinations, but it is not until the fortieth measure that every performer is active simultaneously. This “tutti” section lasts for five measures. Here one encounters the climax of the gradual expansion of the instrumental register (Figure 3.11). After the forty-fifth measure, Scelsi adopts a diminishing approach in terms of orchestral density. This is clearly an attempt to modify the texture while preparing for a long-term fade-out in the dynamics, with a slight narrowing of the instrumental registers at the same time. In addition to temporary and permanent changes in the dynamic markings, timbre modifications in the individual instruments continue to contribute to the overall texture of the composition. Therefore, the textural progress in the form of spectral compression and expansion may be placed in the same realm as that present in the electroacoustic compositions which will be analyzed in subsequent sections.

The lack of any interval-based tonal motion larger than approximately one semitone (minor second) makes any diatonic or chromatic harmonic analysis almost impossible and discursive. Still, the note A natural has a gravitational function in the whole movement, and the micro-drifts from this center surprisingly create a tension reminiscent of the traditional tonal concepts of consonance and dissonance derived from suspension and resolution.


It could be assumed that every individual layer becomes a sound object with its own “topography”. Due to the continuous interaction with the other objects, every single object gains a modifier quality, changing the perception of the others, while being subject to modification itself at the same time. It is possible to borrow terminology from audio modulation at this point: every sound object becomes a “carrier” object and a “modulator” object simultaneously, since the manipulation is mutual.32

Figure 3.11: Measure 42 (Scelsi, 1960, p.58).

In our next analysis example, “Polymorphia”, the instrumental sections have less individuality, contain fewer rhythmic figures and display even narrower tonal motion in comparison to “Quatro pezzi per Orchestra”. Therefore, from the start, the work can be considered an effort to arrive at a different perspective on sound-oriented musical composition, while sharing the same aesthetic and theoretical roots as the former work.

32 These concepts will be investigated in the fourth chapter.


3.2.2 Polymorphia

“Polymorphia”, written in 1961, is the second piece of the sonorist “trilogy” of Krzysztof Penderecki, its precursor being “Tren Ofiarom Hiroszimy” (Threnody to the Victims of Hiroshima, 1960) and its successor being “Fluorescencje” (Fluorescences, 1961–62). These three compositions do not form an official dramaturgical or purely compositional trilogy approved by the composer. However, they are frequently treated as individual parts of a trilogy, due to their status as the chronological representatives of Penderecki’s whole sonorist period.

Literally, the title of the work, “Polymorphia”, means “many shapes” or “many forms”. The meaning of the title is not related to the structural form of the piece; rather, it refers to the organization of sounds, primarily a construction built of microtonal clusters, glissandi, pointillism and sonorities provided by avant-garde instrumental techniques (Schwinger, 1989).

The work is written for forty-eight string instruments: twenty-four violins, eight violas, eight cellos and eight basses. This is one of the unique features of the composition, the other being the score. The works by Scelsi and Lachenmann are written for orchestra (Lachenmann employed additional sound sources as well) and use traditional methods for notational purposes. In terms of potential timbral structuring, the textural possibilities might seem slightly restricted when compared to the subject of the previous analysis. Here, Penderecki sticks to one instrumental family, as he previously had in “Tren Ofiarom Hiroszimy” for 52 string instruments. To avoid the risk of textural monotony, Penderecki extended his palette of material sources by applying innovative playing techniques to the composition. These techniques are explained elaborately in the semi-graphical score. The score deviates significantly from the traditional Western notational system because of clear structural concerns.

One of the striking features of Penderecki’s notation is the absence of mensural notation. Instead of using conventional measure divisions with clear rhythmical indications, Penderecki prefers to use time units of seconds in the score, without giving importance to rhythmical precision. He had a logical reason for these choices, and this gives “Polymorphia” a unique property among the other selected orchestral compositions.


The whole composition consists of sixty-seven rehearsal numbers in total, where the duration per rehearsal number varies between five and twenty-five seconds. Unlike Penderecki, Scelsi and Lachenmann prefer to use regular metrical notation in order to have absolute control over the parameters of synchronization and time. As will be investigated in the next section, timing is a crucial factor in “Kontrakadenz” for realizing the desired temporal interplay of grouped musical objects.

Penderecki does not consider the synchronization and time aspects a priority for the interpretation of the work. A certain amount of indeterminacy in the individual timbral structure is clearly preferred in order to acquire an organic texture. This feature can definitely be linked to the use of electrocardiograms as graphical guidance for the individual instrumental layers. An electrocardiogram is basically a device that measures the electrical activity of the human heart for medical purposes. Penderecki assigned several open strings to electrocardiogram traces to define the temporal dynamic variations, as can be seen in Figure 3.12 (Schwinger, 1989).

Figure 3.12: Electrocardiogram based notation (Penderecki, 1963, p.9).

In Figure 3.13, the difference in the graphical representation of clusters and glissandi within a given scale can be seen. Black areas indicate the vertical layering of eight notes, and spiral drawings indicate a glissando between the highest note B and the lowest note C.


Figure 3.13: Clusters and glissandi representation (Penderecki, 1963, p.6).

The form of “Polymorphia” demonstrates attributes similar to the traditional ternary (ABA') form, with the addition of an unexpectedly tonal coda-like section centered on the note C and performed by all players. In this manner, the work displays features resembling those of “Quatro pezzi per Orchestra”. A variety of examples, from every section of the ternary structure, can be given to support this theory of resemblance further.

In the opening (A) section, the dominance of a linear, cumulative and expansionist approach is observable.33 While a gradual, time-stretched crescendo lasts throughout the section, a multiplex layering is simultaneously in progress, extending the frequency spectrum via orchestration. Secondly, the B sections have a contrasting role in both compositions, although the choice of sound materials and their construction are achieved by radically different compositional strategies. Nevertheless, the dynamic range of “Polymorphia” is more variable in this section, and the section does not represent the sole dynamic and textural climax of the work, since every section contains its own periods of climax.

In the returning (A') sections of both compositions, there are similar tendencies to return to the textural qualities of the A sections, with several deviations from the original parameter template. On the other hand, the dynamic organization is handled in an opposite manner. The reversed dynamic map of the A and A’ sections was mentioned in the previous Scelsi analysis. In “Polymorphia”, the same linear crescendo profile is maintained in the “recapitulation” section, as opposed to “Quatro pezzi per Orchestra”.

33 Nonetheless, dynamic and timbral changes are also present.

The macro-textural and dynamic framework of “Polymorphia” can be seen in the spectrogram of the whole work in Figure 3.14. The boundaries of the formal sections and the transitional temporal areas between them are clearly observable in this three-dimensional graphic.

Figure 3.14: Spectrogram for Polymorphia.

The A section reflects through-composed inclinations. The increase in the rate of the glissandi and the broadening of the instrumental registers cause an acceleration in the textural density, creating distinctive patterns. Thus, we may label the subdivisions of the first section as low-rate and high-rate glissandi areas.

The section begins with the entrance of three double basses. The rest of the double bass section joins them at rehearsal number three. It should be noted that several performers must raise the written tone (by a quarter tone or three quarter tones) to thicken the texture with microtonal fluctuations.

At rehearsal number six, the cellos enter the composition and start to form a cluster in synchronization with the double basses. Starting at rehearsal number eight, these two instrumental departments begin upward and downward glissando sequences within the selected scale (indicated in the score). As will also be the case in the last, variant section, arco technique (regular bowing in string instruments) is used in all four instrumental groups.


A global feature of the whole composition is introduced immediately in the opening rehearsal numbers, as soon as the glissandi are activated in the score: the absence of any tempo indications related to the performance of the glissandi.34 Penderecki’s intention is possibly to constitute an organic, lively texture from the undetermined gestures of the performers. The layers forming the macro texture do not function as individual units. On the contrary, they are frequently perceived as “monolithic blocks” with a complex spectral density on their surface.

At rehearsal number ten, the violins enter the composition directed by a special marking, which can be seen in Figure 3.15. It transmits a command indicating the need for indefinite pitch generation at the upper limit of the range of the instruments. At rehearsal number eleven, the viola section starts to perform in a cumulative manner. Both of these newly introduced layers have a more individualistic appearance in comparison to the more fused sound of the basses and cellos; therefore, they reduce the perception of the solid spectrum into a less complex one, until the appearance of the high-rate glissandi parts. This “hypothetical” second part develops with an acceleration of the rhythmic activity in all sections, allowing the creation of a complex web of glissandi linked directly to each other.

The entrance to the B section is signaled by the inactivation of the bass and cello parts at rehearsal number 25. It is possible to label rehearsal numbers 25-32 as a transitional area in the composition. This subtractive approach makes the textural transformation possible, smoothing out the risk of undesired abruptness, while still adopting a less gradual manner in comparison to the Scelsi example.

At this point, we may associate the compositional methodologies present in both analysis subjects with the historically fundamental sound synthesis types, additive and subtractive synthesis. These synthesis methods produce similar variations in the macro texture in compositional terms. In the analysis of Karlheinz Stockhausen’s early electroacoustic etude “Studie II”, the basis of this analogy will be discussed further. Chapter 4 will focus on synthesis issues in more detail.

Figure 3.15: Upper limit of the instrument - indefinite pitch (Penderecki, 1963, p.4).

34 This is a general principle in the composition regarding the role of the glissandi.

The B section assumes the role of a contrasting section, mainly because of the transition from arco techniques to pointillist techniques. With the introduction of “taps con dita” (tapping the strings with the fingers behind the bridge) at rehearsal number 26 for the violas, and “col legno battuto” (hitting the string with the wooden part of the bow) at rehearsal number 27 for the double basses, a radically distinctive texture is established rapidly. The cellos join with legno battuto, followed by the pizzicato of the violins at rehearsal number 32. Unlike in the A section, the instrumental sections do not lock into a pattern to form clusters when the tutti section begins. Rather, they keep evolving with the implementation of new techniques such as “pizzicato con due dita” (pizzicato made with two fingers) at rehearsal number 33.

Towards the second half of the B section, new techniques are activated globally, as can be inspected in the notational excerpt in Figure 3.16. Single stripes instruct the performers to tap the soundboard with their fingers, while double stripes instruct them to tap the desk with the bows or to hit the chair with their heels. These activities expand the timbral elements of the composition, and the dynamic profiles of the resulting percussive sounds add more interactive and rhythmical qualities to the work. This property is coherent with the crescendo-oriented motion of this subsection before it dissolves into the last section, and it increases the overall contrast of the B section.

Figure 3.16: Excerpt from rehearsal number 39 (Penderecki, 1963, p.16).


Rehearsal number 44 marks the second transitional period, which leads to the return of a modified version of the A section. Violins, violas and cellos switch to arco technique, while the double basses continue to perform the pattern built of pizzicatos and legno battutos before they join the rest of the orchestra at rehearsal number 45; they rest at rehearsal number 46 and resume in pianissimo at rehearsal number 47.

The dominant factors responsible for the variation characteristics of the third section are the selected vibrato types, repetitions of tone groups and instructions for specific bowing locations on the string instruments. One frequently used spot for alternative bowing is the tailpiece; the other is rectangular bowing at the bridge. We may trace the onset of the closing crescendo back to rehearsal number 60, which leads the piece to a close on the clear pitch of C at rehearsal number 67. The common denominator of the first section A and its variant A’ in terms of instrumental technique is the usage of arco, while the resulting textures differ substantially.

We have to exclude the coda section and assign it special consideration, due to its deviating function in the general syntax of the work. It acts as a traditional perfect authentic cadential closure on a C major triad, a central key in tonal systems.35 (Figure 3.17) The mediant E is provided by the viola section, and the second half of the violins provides the dominant note, G. Instead of assigning a structural meaning to this act, one should underline the symbolic quality of the adjunct section. The tonal language is removed from its musical contextual logic and constitutes a conceptual figure rather than a functional one.

It is challenging to locate an explicit pitch motion and/or harmonic progression in the sonic universe of “Polymorphia”. This situation thus supports the view of using timbre as an equal, or even dominant, compositional parameter in order to obtain abstract relationships between sound objects. In the orchestral domain, these selected compositions could be considered the theoretical successors of Schönberg’s Klangfarbenmelodie (tone color melody) idea, and they symbolize a serious departure from pitch-oriented musical organization. “Polymorphia” is not an exception in this sense.

35 The neutral point of the circle of fifths is C major.

Figure 3.17: Cadence on rehearsal number 67 (Penderecki, 1963, p.23).

3.2.3 Kontrakadenz

“Kontrakadenz” (Counter-cadence), completed during 1970–71 by the German composer Helmut Lachenmann, is written for full orchestra with unusual extensions to the sound palette of the modern orchestra. Fifty-nine players are required for the realization of the work. In addition to the conventional string, brass and woodwind sections, a complex percussion section, a single piano, a single harp, unusual instruments such as electric guitar and Hammond organ36, and non-instrumental sound sources such as a radio, ping-pong balls, coins, Styrofoam, a zinc washtub full of water and more are also used as sound generators. Four performers, entitled “ad hoc” players in the score (a Latin phrase meaning a specific solution for a particular problem or task), are responsible for controlling the non-instrumental sound generators according to the composer’s instructions in the score.

36 The Hammond organ is a type of electric organ based on additive synthesis. Developed by Laurens Hammond in 1935, it became one of the most financially successful electronic instruments, but it did not become popular in electroacoustic compositional circles (Url-9).

As discussed before, “Polymorphia” had already employed different kinds of sonic material outside the Western instrumental tradition, but Lachenmann took this approach one step further. It is possible to encapsulate the intellectual framework of the composition right at the beginning: the size of the orchestra and the presence of non-instrumental sound sources cause theoretical and practical departures from the former compositional examples.

Chronologically, “Kontrakadenz” is the most recent composition in the analysis section and the longest in terms of duration37, lasting approximately nineteen minutes. Similar to “Polymorphia”, the work does not consist of individual movements. It progresses rather linearly and does not follow an anticipated form. Where the difference between these two compositions lies, in terms of form, is in the lack of patterns in the latter example. However, this statement should not be judged as a negative evaluation. Unlike in the former analysis examples, it is challenging to detect traces of a simple, predictable form triggered by the organic association of dynamics and timbre in Lachenmann’s work, due to the lack of a linear approach to the continuity of the musical parameters. Elke Hockings qualifies these constructional preferences of the composer as part of his offensive strategy against tonality and generalizes his works of this period as an “uninterrupted flow of sound” (Hockings, 1995, p.8).

Lachenmann designates his compositions as works of “orchestral musique concrète”, sharing similar basic object-oriented principles with the French school, with the fundamental difference of using the orchestra as a real-time sound object generator instead of working in the acousmatic domain. Lachenmann’s leading motivation is obviously to witness the physical process of sounds and to put them together in a context. Interestingly, there are some recorded sounds included in the work, such as radio broadcasts and tape recordings. The latter (a recording of a host announcing the composer, the title of the work and the orchestra) operates as a sound symbol to evoke a kind of alienation in the audience. It does not function as a regular reduced sound object, and it requires heightened listening to grasp its contextual meaning.

37 Because of the temporal scale of the work, it would be beyond the scope of the dissertation to analyze the whole piece. Therefore, I would like to emphasize the general and prominent qualities of the work and underline its conceptual foundations.

The Schaefferian sound objects in “Kontrakadenz” interact with each other in an almost pointillist way, resulting in a less dense texture due to the non-cumulative approach. This non-repetitive web of sound objects is relevant to the historical progression of musique concrète a decade on. Pierre Schaeffer revised the role of repetition in his compositions starting with “Etude aux allures” and “Etude aux sons animés” of 1958, and continued to apply an approach based on linear progression with minimal juxtapositions of objects, rather than interconnecting the temporarily holistic, repetition-driven layers derived from the selected sound objects. As will be investigated in the analysis section of “pure” electroacoustic works38, “Etude aux objets”, consisting of five variable movements, would display a better picture for the sake of pointing out the historical relationship between the two composers. Lachenmann himself explains the almost gestural, timbral-feedback-oriented characteristics of “Kontrakadenz” with the following statement:

Kontrakadenz – “difference” of acoustic situations, the blueprint of a musical model of composition based on the experience I had in pieces composed just before this one, especially in “Air” and “Pression”. That which resounds does not resound for the sake of its tonality and its structural modification, but signals the actual use of energies in the musicians’ actions and renders the mechanical conditions and instances of resistance associated with these actions tangible, hearable, anticipatable. (Lachenmann, 2001, p.10)

Amy Bauer relates the sound object based composition approach to the American philosopher Stanley Cavell’s identification of art works as objects, which can be perceived only by “sensory input” and “recognition” (Bauer, 2010, p.84). This proposal resonates with the fundamentals of the reduced (active) listening basis of the musique concrète movement, since it gives analytical listening an irrevocable role in the comprehension of the work.

Literally, the title of the work exhibits a considerable amount of irony about the function of cadences, one of the indispensable elements of tonal systems (the role of the cadence was previously mentioned regarding the coda section of the former analysis). “Kontrakadenz” does not have a centre-based organization; therefore, it does not contain states of suspension/dissonance and resolution/consonance with a goal-oriented motion, as in conventional tonal systems. Indeed, there is a sense of motion in the composition, but it is the timbral motion caused by the appearance and disappearance of the sound objects, which do not act according to a pitch-oriented or any other predetermined hierarchy. Habitual listening may augment the sense of directionlessness even more than in works organized according to serialist principles.39

38 Works containing no real-time acoustic sources.

The continuous and independent movements of sounds create a sort of antithesis to what tonality commands and represents, though Amy Bauer interprets the closing section of the work as a conventional cadence. In terms of tonality, it is difficult to support or prove this claim. The work does not imply a tonal root, and the dominance of the block flute heads generates a rather ambiguous sense of pitch. Yet the indispensable parameter of traditional cadential closures, the rhythmic motive in fortissimo possibile, is quite distinctive, as can be observed in Figure 3.18 (Bauer, 2010).

In the score, the exact locations of all performers, including the ad hoc players, are delineated in detail. The spatial preferences carry significant importance in the composition for two reasons: the spatial arrangement contributes to the sense of object-wise motion in space, and, simultaneously, it helps to decrease the potential side effects of psychoacoustic masking on the spectra of the sound objects, conserving their intrinsic texture. Both are necessary for the comprehension of the work.

The score specifies instructions to the performers in order to achieve particular timbral qualities. Flute and oboe players have to perform with block flute heads in certain measures. The contrabassoon has to use Styrofoam in order to manipulate its sound, as is also the case for the flutes. The brass section has to produce tones with indefinite pitches. The string sections have to use scordatura and are assigned special commands. The electric guitar not only incorporates scordatura but is also occasionally played with bowing techniques. The piano has to be modified to achieve an even narrower spectrum when the una corda pedal is used actively. Clarinet, bass clarinet, harp and the electric organ are among the few instruments to which Lachenmann did not assign alternative tunings or performance-related instructions. For the six percussion sections, elaborate commands are noted, covering information about the types of percussion and sticks and their locations. Four ad hoc players are allocated to non-musical instruments. Every performer has unique sound sources reserved for them and is given individual guidance, which is a crucial factor in establishing the complicated sound routes.

39 At the same time, Lachenmann does not want the work to be perceived as an “extreme case of antitonality” (Lachenmann, 2001, p.10).

Figure 3.18: The closure of Kontrakadenz (Lachenmann, 1971, p.90).

It is virtually unfeasible to fit the work into a conventional framework. From the formal point of view, a relation with what Karlheinz Stockhausen classifies as “moment form” could be proposed, despite the historical oppositions. Lachenmann’s official declaration of his organic bond with Schaefferian music theory was mentioned earlier. On the other hand, the opposition between the two schools weakened significantly in the second half of the 1950s and was completely replaced by a global electroacoustic compositional tendency.

Stockhausen came up with the term moment form during the period of his composition “Kontakte” (for electronic sounds, 1958–1960). In general, the earlier works of the Cologne school have certain similarities with the primary intentions of moment form, due to the discontinuous constructions caused by numerical permutations of musical parameters. The analysis of the mathematical foundations of “Studie II” will clarify this phase more efficiently. Yet it is crucial to discuss the outlines of the moment form concept.

Jonathan Kramer defines moment form as “a mosaic of the moments” and the moments as “self contained (quasi) independent section, set off from other sections by discontinuities” (Kramer, 1988, p.453). In a similar vein, Stockhausen emphasizes the constancy of musical characteristics as the most crucial element of moment form, where the moments may morph into each other either suddenly or gradually, but always in a stable motion. Despite the continuous progression, the composer suggests the existence of conceptual groupings, as he implemented in his example of “Kontakte”. In his lecture “Moment-Forming and Integration” (London, 1971), Stockhausen explained that the work consists of three main predominant divisions: melodic structure, sound quality and duration (Stockhausen, 2000, pp.63-66). Without a mathematical requirement for duration, timbre, melody or any other compositional element, moment form consists of individual units of sound which do not operate according to traditional variation procedures and do not reappear later in the composition as a “theme” or “motive”. However, the serial predeterminations are still active, and this situation does not cause unrelated sound combinations and does not give the work a chaotic quality, since the moments share variable common ground in terms of parameters. A unit could be “a gestalt”, “a structure with clear components” or “a mixture of both” (Stockhausen, p.189). It is also mandatory to add that Stockhausen does not limit moment form to electroacoustic methods only; its core concept also covers traditional instrumental compositions. Figure 3.19 displays the spectrogram of “Kontrakadenz”, which has unforeseeable dynamic structuring and consistently broad spectra.

The composition opens with an economical strategy regarding instrumentation. Three trombones produce only a single thirty-second note with flutter tongue and rest for the next seventeen measures. Cellos start with legno saltando glissandi in the third measure and quickly evolve into double stops. Violas are divided into two groups, one starting with legno saltando and the other with legno battuto, while both experience a similar path of manipulation in absolute rhythmic synchronization with each other.

Figure 3.19: Spectrogram for Kontrakadenz.

In other words, the sections do not operate individually; rather, they are components of a gestalt, with rare exceptions such as measures sixty-two and seventy-three40. The identical rhythmical symbiosis can be observed in the other departments of the string section, though they also form temporary canonic or pointillistic bundles of sound objects. Marimba and piano appear in the second measure and rest in the third and fourth. Four flutes and four oboes make a brief introduction in measure three, and immediately afterwards they rest for several measures. The last instrument present in the opening is the Hammond organ, with its emphasis on the note B, changing from pianissimo to piano in subito sforzando and returning to pianissimo. A further instruction in measure 8 demands a rhythmically loose, asynchronous pizzicato cluster rather than an absolute synchronization between the three groups of the section.41 (Figure 3.20) This request is an explicit strategy to acquire the ability for morphological and dynamic manipulation, instead of an additive, multi-layered timbral mass generated by a cluster derived from one type of instrument, in which the single notes do not have a clearly perceptible identity.

40 Measures forty to forty-two provide a case which can be considered an exception in terms of rhythmic unity in theory, but in practice the density of successive notes again results in a unified macro texture. 41 Lachenmann wrote in the score that “the pizzicatos must not be together”.

Figure 3.20: Asynchronous pizzicato cluster measure 8 (Lachenmann, 1971, p.3).

The ad hoc players deserve a dedicated overview due to their unique role in the realization of the work. In the twenty-first measure, the sound of a single metal plate falling on a stiff floor is used with the accompaniment of bongos, violins, cellos and contrabasses. In the twenty-second measure, four overlapping ping-pong balls immediately follow this contrast of wooden and metal sources. Physical laws dictate the rhythmic activity of these balls, and the interplay with the other sound sources is organized accordingly until measure twenty-six. Later on, gravity is utilized further, for example with metal plates on a glass floor or with coins. Starting with measure eighty-nine, the first ad hoc player has to excite a Chinese basin with arco technique, later joined by the rest of the performers. This continues for almost forty-two measures, after which it is replaced by radio receivers tuned to FM (frequency modulation) or SW (short wave) broadcast frequencies. In measure one hundred and eighty-seven, a tape recording has to be played back, as previously mentioned. After this particularly provocative and symbolic point in the work, a continuous alternation of metal plates, Styrofoam, basin, water (and related material such as the water gong), vessel, saucepan lid, friction drum and ping-pong balls, with variations, can be observed. These sound objects are interrupted by brief appearances of radio broadcasts in TBM-processed (reverberation) and unprocessed versions. Apart from the general systematic nature of the work, a careful glance at the ad hoc instructions leaves no doubt about how carefully Lachenmann calibrates the qualitative and quantitative features of his “readymade”42 sound sources and the methods for their manipulation in order to create a structure far away from the comfort zone of aesthetic habits.


He regarded strictly traditional composing activity as an act of “confirming the choices”, and took the physical nature of sounds into account for his sonic creations (Url-10).

3.2.4 Studie II

“Studie II” (Study II), realized in 1954, is the third electroacoustic composition of the German composer Karlheinz Stockhausen and the last one to be labeled as an etude or a study43. This reflects the composer’s state of self-evaluation in terms of his personal compositional development.

Stockhausen’s first practical encounter with this new genre took place in the Club d’Essai, the studio of the Groupe de Recherche de Musique Concrète, during the summer of 1952, when he was a visiting student of Olivier Messiaen. His introduction to Pierre Schaeffer gained him studio time on a weekly basis, which soon proved insufficient because of the impracticalities of the tape medium and its linear/destructive editing. In the meantime, Stockhausen divided his attention between exploring the fundamentals of acoustic science and researching the acoustical qualities of a wide palette of musical instruments44 by modifying their recordings on magnetic tape.

The single compositional result of this brief period in Paris is titled “Konkrete Etüde” (Concrete Etude), a three-minute composition in which the source materials were derived from recordings of the low strings of a piano excited by an iron beater. Afterwards, these recordings were subjected to a list of tape manipulations such as the removal of the attack portion of the audio signal (equivalent to the cut-bell technique in musique concrète terminology), pitch transpositions, and the juxtaposition of several layers.

“Konkrete Etüde” was composed for two sets of stereo loudspeakers. This would remain the standard final format during Stockhausen’s first three years in the electroacoustic domain, which lasted until 1955.

42 “Readymade”, a term coined by the artist Marcel Duchamp, is a concept of modern art that shows a striking resemblance to the idea of found sound: ordinary objects can be modified in various ways and constitute a work of art (Adlington, 2009, p.145). 43 “Gesang der Jünglinge” (Song of the Youths, 1955–56) follows “Studie II”. 44 GRM had tapes of recordings made in the Musée de l’Homme, Paris.

A year after the subject composition, he began to direct his attention to the spatial arrangements and motion offered by the use of surround sound. As the title of the former work obviously implies, the composition was motivated by the fundamentals of musique concrète theory, though the composer was not very satisfied with the unpredictable results of tape splicing and tape manipulation in particular. This early experience led the composer to focus on synthesis techniques, initially using mainly sine waves (and white noise later on) as the foundational elements.45 Processing the synthetic audio material further with artificial reverberation was a frequently encountered procedure in Stockhausen’s early period, and this is congruent with the components of timbre composition shown in Figure 2.2.

“Studie I” (Study I), subsequent to “Konkrete Etüde” (Concrete Etude), was composed in the WDR studios (Cologne) in 1953 and followed a completely different logic. A frequency of 1920 Hz was taken as the starting point, and harmonic ratios were applied in order to generate new frequencies, so that all of them together would form a complete sound object. The rest of the musical parameters, including amplitude, duration and the hierarchy between all of the layers, were determined according to mathematical operations (Holmes, 2002).

Sine waves, generated by audio oscillators, are the building blocks of additive synthesis. The reason for this preference is quite logical, since Cahill had historically used the same technique in the earliest synthesis device, the Telharmonium. The WDR studio owned monophonic Melochord and Monochord keyboards. Both instruments are capable of synthesizing sounds, but they were largely ignored due to their inflexibility for mathematical control and rather poor audio quality (Manning, 1985). Thus, manual construction on the tape medium was preferred despite the time-consuming process.

Sine waves, also known as pure tones, do not contain overtones. They consist of a single, fundamental frequency with amplitude and phase data. Thus, they provide an excellent opportunity to have absolute control over the timbral domain, since they represent the smallest possible unit of sound in terms of spectrum and can be used as the nucleus of the contexture.46
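As a minimal sketch of the additive principle described here (with illustrative frequencies and amplitudes, not values taken from Stockhausen’s tables), pure sine tones can be summed to build a composite spectrum:

```python
import numpy as np

fs = 44_100                          # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)        # one second of time

def pure_tone(freq, amp=1.0, phase=0.0):
    """A sine wave: a single frequency with amplitude and phase, no overtones."""
    return amp * np.sin(2 * np.pi * freq * t + phase)

# A five-tone mixture: the individual partials are spectrally featureless on
# their own; only their sum acquires a distinct timbre.
partials = [(200, 1.0), (313, 0.7), (490, 0.5), (767, 0.35), (1200, 0.25)]
mixture = sum(pure_tone(f, a) for f, a in partials)
mixture /= np.max(np.abs(mixture))   # normalize to avoid clipping

print("samples:", mixture.shape[0], "peak:", np.max(np.abs(mixture)))
```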

45 Stockhausen had his first opportunity to experiment with sine waves in the Club d’Essai (Cox and Warner, 2009, p.372). 46 This should not be confused with the infinite time domain.

Figure 3.21: Amplitude vs. time graphic for a 1000 Hz sine wave.

For “Studie II”, Stockhausen extended similar concepts of musical material and compositional methodology, although the textural characters and the temporal scales of the two etudes differ radically. The latter work has a harsher, almost noise-like surface, but at the same time it has a more compact form in comparison to the previous example. “Studie I” lasts approximately ten minutes, whereas the latter lasts three minutes and forty seconds. This textural difference might be the reason for Stockhausen’s choices of total duration.

In his first two electroacoustic creations, the composer preferred to keep detailed notes of the technical aspects and the mathematical procedures, a remarkable habit that would last for his whole career. The realization of “Studie II” was the first time he employed a graphical score for potential future realizations of the piece. The score is assumed to be the first “published electroacoustic music score” in the history of music. Here, the term “realization” should not be confused with real-time interpretation of the work; rather, the reproduction of the composition on the tape medium is intended, as written in the introduction of the score. Live performances, apart from phonogram or disc manipulations, were technically almost impossible to accomplish during this early period (it is now possible to program special computer software for live renditions). Stockhausen focused mostly on live performances during the majority of the 1960s, starting with “Kontakte” (Contacts) in 1958.

To establish a deterministic structure, Stockhausen chose 100 Hz as the basis (lower limit) frequency from which to generate the numerical values of the sine waves that would form the frequency rows; a reflection of the serialist methodology associated with the Cologne school. A scale of 81 steps is obtained by applying the constant ratio of the 25th root of 5 to every frequency derived from the basis frequency. The resulting numerical piles are grouped into 193 separate five-note successions, as determined in “Tabelle A” (Table A) in the score (Figure 3.22).
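The frequency scale described above can be reproduced in a few lines; the figures below simply follow from the stated basis frequency and ratio.

```python
basis = 100.0            # lower-limit frequency in Hz
ratio = 5 ** (1 / 25)    # constant interval ratio, the 25th root of 5 (~1.0665)

scale = [basis * ratio ** step for step in range(81)]   # the 81-step scale

print(f"step  0: {scale[0]:8.1f} Hz")
print(f"step 40: {scale[40]:8.1f} Hz")
print(f"step 80: {scale[-1]:8.1f} Hz")   # about 17,200 Hz, near the limit of hearing
```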

In a sense, the initial concept of defining continuous and almost autonomous units of moments (what Stockhausen later called “moment form”, as mentioned in the previous section) is reminiscent of John Cage’s “I Ching” (The Ancient Chinese Book of Oracles) operations. However, Michael Nyman found the two approaches contrasting, due to the clash between the “indeterminacy” of Cage and the strict “determinacy” of Stockhausen47 (Nyman, 1999, p.9).

Every sine wave grouping is subject to DM on a 31-step decibel scale, the upper limit being 0 dB48 and the lower limit -40 dB, with a possibility of change of 1 dB per step.49 Therefore, a single line spacing in the score equals one dB of change in intensity. Envelope manipulations are frequently used in the score, achieved by real-time automation of potentiometers during normal and reverse playback. The persistent dynamic alternations between gradual and rapid attack and decay times impose a certain rhythmic structure on the work, where the rests between groupings have exceptional significance for the comprehension of the compositional layout.

Figure 3.22: 193 frequency groupings of Studie II (Stockhausen, 1956, p.IV).

47 Stockhausen himself related his instrumental piece "Klavierstück XI" (1956) to John Cage (Cox and Warner, 2009). 48 The reference 0 dB should be calibrated to be no less than 80 phons. 49 For untrained / regular listeners, one dB of change in the amplitude domain is the threshold of perception, whereas trained professionals may perceive smaller values.

The unit for duration is the centimeter, with the tape speed of 76.2 cm per second (30 IPS) as the reference point. The double strikes indicate the moments of overlap of two or more groups. The individual members of the five-note groupings have equal intensity and equal duration (a four-centimeter tape length), and they are lined up according to a numerical hierarchy (sorted from the lowest frequency to the highest). As previously mentioned, each set of five four-centimeter tape segments is spliced together in order to form a loop. Every single loop has to be played back in a special room covered with reflective surfaces (an echo chamber), and the acoustic information has to be recorded again via microphones. The RT60 time50 of the room has to be 10 seconds and the room has to have a regular frequency response in order to avoid peaks and notches in the spectrum.
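Since the score's durations are given in tape length, a quick conversion (an illustrative calculation, not part of the score) shows the time scale these four-centimeter units imply:

```python
# Illustrative conversion only: tape length (cm) to duration (s) at 30 IPS.
TAPE_SPEED = 76.2  # cm per second (30 inches per second)

def tape_length_to_seconds(length_cm: float) -> float:
    return length_cm / TAPE_SPEED

print(tape_length_to_seconds(4.0))    # a single four-centimeter member: ~0.052 s
print(tape_length_to_seconds(5 * 4))  # a complete five-note loop: ~0.26 s
```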

These time-based processed groupings constitute the sound material to be used in the final realization of the composition. The actual score is divided into three parts: the upper section is reserved for the frequencies collected in sixteen rows, the middle section is reserved for duration instructions, and the lower section contains the intensity curves to be executed via potentiometers while overdubbing. (Figure 3.23) The ingredients of the sixteen frequency rows are determined in "Tabelle B" (Table B). (Figure 3.24)

Figure 3.23: The structure of the graphic notation (Stockhausen, 1956, p.1).

50 The time that passes for the level of the original signal to decrease by 60 dB gives us the reverberation time.

Figure 3.24: Frequency rows of Studie II (Stockhausen, 1956, p.V).

Simply by examining the mathematical predeterminations and the rest of the preparatory stages, it is possible to speculate that every individual grouping forms a musical object as explained in Schaefferian theory. On the other hand, the selected tones/objects are not necessarily a semitone apart from each other (this depends on the value of the basis frequency inside the five-note group), a fact that renders a pitch-centered analysis almost impossible from the start. The durations of the pitch groupings also hinder the perception of micro-intervallic motion to a certain extent. The horizontal and vertical interrelations of the groupings are responsible for organizing the overall timbral and rhythmical aspects of the work.

Stockhausen himself included a spectrogram of a fragment of the work in the score in order to exhibit the spectral characteristics of a specific five-note group. In Figure 3.25, the complete spectrum of the work may be observed. It is even more challenging than in the Lachenmann example to find any traces of a pattern regarding the dynamic and timbral organization, although the texture does not change drastically, at the expense of aural monotony. Schaeffer's criticism that mathematically oriented musical structures are not necessarily compatible with the perceptual field comes to mind.


Figure 3.25: Spectrogram for Studie II.

By making a distinction between "abstract" and "abstracted" syntax, Simon Emmerson points to the differences of material according to the generation of sounds. At one extreme lie the sounds generated by pure synthesis; at the other end are the sounds achieved by manipulations of "organic" sound objects (Emmerson, 1986).

Examples of "mimetic discourse dominant" compositions are excluded from the analysis section, since they do not necessarily employ timbre as an active contributor to aesthetic intentions. The complete table of Emmerson's compositional examples for syntax types can be found in Appendix C. Bernard Parmegiani is the second most frequently cited composer in Emmerson's categorization, where he fills the crossover spot between abstracted syntax and aural discourse dominant with his multi-partite work "De Natura Sonorum." In other words, Parmegiani's composition shares the same conceptual roots with "Studie II", but the two differ from each other in terms of source material. Thus, he will be the last composer to be analyzed in this chapter. Nevertheless, a Pierre Schaeffer composition must be inserted earlier in the section in order to establish a more historically precise connection and the necessary link between Lachenmann and Schaeffer.


3.2.5 Etude aux objets

Pierre Schaeffer composed "Etude aux objets" (Object Etudes) in 1959, and it is the last composition of what is presumably his second productive period. The composer revised the work later, in 1971.51 Schaeffer's active composing periods were interrupted by extended gaps of time, mainly because of his dedication to completing a sound and musical object based musical theory and because of other career necessities (engineering) as well. These periods are 1948-52, 1958-59 and 1975-79.52 Carlos Palombini devised an alternative four-phase categorization and omitted the final works of the 1970s: "research into noises" (1948-49), "concrete music" (1948-49), "experimental music" (1953-59) and "musical research" (1958) (Palombini, 1998, p.1).

Little information is available about Schaeffer's compositional and technical processes for "Etude aux objets", except his personal statements regarding the technical difficulties he encountered with the tape recorders. (Url-11) Similar to the other compositions of his second period, "Etude aux allures" and "Etude aux sons animés" (1958), the work in question shows a higher degree of technical complexity and formal maturity in comparison with his earlier etudes and works.53 No score of any kind was prepared by the composer. The work is completely acousmatic and it is not suitable for interpretation.

The composition is formed of five movements and lasts approximately seventeen and a half minutes, making it one of Schaeffer's longest compositions. Each movement uses a limited number of sound objects (Guedes, 1996). By listening, one can detect a concealed interaction between all of the movements. The amount of manipulation of the musical objects and their degree of identification by the listener follow a sort of symmetrical pattern in the first four movements. The work ends with a recapitulative fifth movement, reminiscent of Scelsi's approach to the closing movement.

Figure 3.26 shows the sectional structuring logic of the whole composition in a simplified way.

51 Several compositions of Schaeffer have been subject to various revisions. 52 Every compositionally active period of Schaeffer contains fewer works than the previous one, resulting in an almost linear decrease in his output. 53 Recently Francis Dhomont used fragments of "Etude aux allures" and "Etude aux objets" in his compositions "Objets Retrouvés" (1996) and "Avatarson" (1998), along with other composers.

Less abstracted / less manipulated: 1- Objets exposés, 3- Objets multipliés
More abstracted / more manipulated: 2- Objets étendus, 4- Objets liés
Combination: 5- Objets rassemblés

Figure 3.26: Movement groupings of Etude aux objets.

As can be seen above, the first movement "Objets exposés" (Exposed objects) is paired with the third movement "Objets multipliés" (Multiplied objects), because in these odd-numbered movements the sound materials remain significantly faithful to their original morphological structure due to the lesser amount of audio manipulation employed. The origins of the sounds are mostly identifiable. In the opening section a viola supplies one of the main objects, while the third section is based on a variety of materials such as springs, plates, whistles and so forth.

However, this identifiability does not hold for the even-numbered movements, the second movement "Objets étendus" (Extended objects) and the fourth movement "Objets liés" (Related objects). The sound objects are abstracted to a certain extent by AM, hence sound object identification is almost impossible, though it should be mentioned that the amount of signal processing is not homogeneous between these two movements. Objets étendus is the more abstracted, since not a single moment remains that would enable us to make even unconscious estimates of the objects, their exciters and their excitation methods.

The fifth movement "Objets rassemblés" (Collected objects) contains the characteristics of both previously mentioned pairs simultaneously. Layers of identifiable sources follow heavily abstracted sound objects. The majority of both kinds of sources, in either abstracted or original state, had been introduced to the listener during the previous movements.


As a closing movement, Objets rassemblés combines the overall characteristics of the whole work including the dynamic and spectral gestures of the sound objects. It is functioning as a recapitulative movement but not without its further textural and structural offerings. The unique texture of the movement is especially evident in the closing section.

Because of its inclusive state, the fifth movement is the one to be analyzed in this section. It enables us to make comparisons with the previous Lachenmann example; theoretical similarities were already mentioned in the analysis of "Kontrakadenz". The complex, non-repetitive, pointillist and almost random spectral structure of the movement, reminiscent of Lachenmann's work, can be observed in Figure 3.27, though a spectrogram cannot give sufficient clues about the interrelations of the objects. The rate of object succession almost suggests a structural unity; Trevor Wishart refers to this kind of monolithic form as "natural" or "artificial" sequences (Wishart, 1994, p.59). A guide map based on the degree of abstraction and on relations to previous movements would be useful for an easier comprehension of the work.

In both cases, the spatial distribution of the sound objects is organized carefully. Like all of Schaeffer's other works, "Etude aux objets" is composed for magnetic tape only and requires no real-time performance except the possible addition of spatial motion in the acoustic space via the "Potentiomètre d'espace" (space potentiometer), an early manual panning device with real-time response for audio recordings, invented by Jacques Poullin.54 For this particular composition, there is no written or verbal information regarding the reception of the work; thus, one must assume the stereo version to be the final and stable version of the composition. Still, a multi-speaker version might produce exciting musical results in acousmatic performances.

From this point of view, the panoramic dimensions of two loudspeakers replace the real space of a concert hall, as in the Lachenmann example, though for both works the motion of the objects has an equal importance for the comprehension of their interactivity and the resulting texture.

54 Jacques Poullin was the engineer of the Club d'Essai, who also invented the tape recorders "phonogène" (there are two versions of the phonogène recorder) and "morphophone", with speed-based and time-based processing capacities respectively.

Figure 3.27: Spectrogram for Objets rassemblés.

It is challenging to trace any implication of a predetermined pattern in the appearance rate of the musical objects. The same applies to the ratio of abstracted objects versus unaltered ones. Still, motive-based variations can be detected occasionally. Since there is no published score, a time-based analysis is the most feasible approach. In the first twenty seconds of the movement, three fragments of trumpet recordings resurface in several variations. These variations are achieved by gestural and timbral manipulations. Various edited but not manipulated sounds follow them each time after their emergence. Temporal editing defines the act of gesture; the mute is the primary modulator of the spectrum of the trumpet sounds. The following 40 seconds of the composition are dominated by more abstracted sound material akin to the movement Objets liés, where momentary origin identifications are usually possible.

The second quarter of the work witnesses the return of less abstracted sound objects, but it is divided into two sections by a highly abstracted interlude at 1:18. Until this very point, the active objects were mainly derived from sounds generated by the excitation of various materials through hitting and scratching.


The interlude lasts six seconds and consists of a rhythmic, almost mechanical single object. Starting at 1:22, it dissolves into familiar material from the previous section with the further addition of new layers. Two subsequent sound objects at 1:26, a transposed human voice and a few viola fragments reminiscent of the movement Objets exposés, contrast with the rest of the added layers. Abstracted and less rhythmic sounds dominate the surface texture, while less rhythmic and subtly abstracted viola fragments reappear along the way. This quarter ends with a rest lasting approximately one second, indicating a renewal in the object typology and morphology. The following section can be categorized as one consisting of an equal amount of alternation between natural and manipulated objects. These alternations are in an overlapping state and continuously form complex layers for brief periods. The most conventional musical object present is the five-note succession produced by a string instrument, carrying a clearly tonal implication unlike the former viola fragments. The section lasts until the arrival of a cadential-like gesture by a single object at 2:25. A very short rest follows this particular sound.

A new section at 2:27 marks the return of material from the second quarter, though with a more complex construction strategy. As a part of the nearly symmetrical structural patterning approach, another break is offered at 2:52. A prolonged sustain phase, gained by a reverberation process, connects us to a five-second transitional period, strikingly reserved for low-frequency material and suggesting an unavoidable "dark" quality. Objects similar to those found in the section before the break appear at 3:02 and last for ten seconds. It is therefore a kind of prolongation of the transitional period, rather than an individual section with unique characteristics.

The next section starts at 3:13 with extremely abstracted and high-frequency-dominated objects. The decrease in the rate of change between different objects is rather remarkable. This preference lets the listener investigate the texture in more detail. Besides, this is the longest section in the movement, assuming the role of the closing section. It lasts until 4:00, where a short rest of absolute silence interrupts it.

At 4:00, the texture becomes low-frequency dominated again and it fades out over approximately 18 seconds. It is open to debate whether this section should be analyzed as a simple but contrasting continuation of the closing section or considered as a coda.


If our discussion of the similarity between the object-oriented grammars of Lachenmann and Schaeffer is revisited, differing conclusions can be drawn on several grounds. The generation rates of the objects, along with their superimposition strategies, are quite analogous, thereby creating a theoretical unity. This was already evident in Lachenmann's self-categorization called orchestral musique concrète. Nevertheless, the textural results are not necessarily alike. One can point to the morphological parallelisms between the unprocessed objects of Schaeffer and the nonmusical instruments of Lachenmann. However, the abstracted sound objects with complex spectra derived from audio manipulation and from extended playing techniques do not correspond with each other aesthetically or perceptually, significantly dissociating the aural impacts of the two works.

3.2.6 De natura sonorum

The last composition of the analysis section is "De Natura Sonorum", composed by the French composer Bernard Parmegiani in 1975. He joined the GRM in 1959; thus, he represents a logical step for research after Schaeffer. "De Natura Sonorum" consists of ten movements. These movements are neither contextually nor materially successive, and each of them has a unique title, mostly indicating the acoustical phenomenon it is based on. Parmegiani describes his effort as an attempt to create a "continuous metamorphosis of instrumental, concrete and electronic sounds". (Url-12)

There is important historical significance attached to Parmegiani's works. As one of the representatives of the second generation of the musique concrète movement, Parmegiani does not refuse to add synthetic sounds as musical objects - a fact that is harmonious with our deviations from Schaefferian theory. He tries to form an empirical yet active and hybrid syntax. The complete work lasts almost 50 minutes, hence it is the longest work in the analysis section. The fifth movement, "Conjugaison du timbre" (fusion of timbre), is the movement selected for analysis. As its title inevitably implies, the movement displays textural material and a progression strategy significantly resembling Scelsi's "Quattro pezzi per orchestra".

Generally, instrumental sounds of different sorts (mostly string and brass instruments), which insist on a single pitch (C sharp), are in continuous timbral variation along with a dynamic expansion and compression agenda in the movement, as can be seen in the spectrogram in Figure 3.28. These variations and dynamic motions are not generated only by instrumental playing techniques and by bidirectional additive and subtractive accumulation influenced by orchestral preferences, as had been examined in the Scelsi example. Electronic means are used to manipulate the instruments and to produce independent sheets of abstract and abstracted sounds. In the latter case, it is not always possible to differentiate completely an absolute object from a highly abstracted one on every encounter.

Like Schaeffer, Parmegiani does not share any information about the technical background of the work except the brief conceptual statement mentioned above. Therefore, the adjective "abstract" will be used during the analysis to describe the objects outside the identifiable range. Besides, there is no score available for analysis and interpretation.

Figure 3.28: Spectrogram for Conjugaison du timbre.

Nevertheless, as can be seen again in Appendix B, Simon Emmerson placed the complete work at the intersection of abstracted syntax and aural discourse dominant. Thus, it has to be emphasized that our suggested deviation is only in force for the selected movement, due to its ambiguous, blurry state in comparison to the rest of the work. Another remarkable aspect of these "abstract" sounds is their continuous temporal quality. They are simultaneously active with the instrumental derivations throughout the whole movement, creating a secondary texture, while giving perceptual priority to the microtonal fluctuations and morphological/spatial gestural developments of the top instrumental line. The microtonal motion provides additional links for tracing the common connections of the movement to Scelsi's work.

There are three individual layers in the selected movement. They are specified according to their spectral and morphological characteristics and are not of equal status in terms of their structural role. In hierarchical order of appearance, these layers can be labeled "instrumental objects", "semi-manipulated instrumental objects", and "abstract objects".

The work opens with a twenty-second linear fade-in of a low-frequency-centered abstract sound. This resonant texture (almost displaying formant-like55 qualities similar to human speech) flows in continuous (micro) spectral and dynamic variation in the background throughout the composition. At 00:21, a single note from a bowed instrument begins to radiate with a fundamental frequency only a microtone apart from the abstract layer, thus creating a permanent suspension. After the third repetition of this particular note approximation, an eight-second rest follows and the continuous abstract layer dominates the texture until the return of the same pitch quality with a forceful dynamic at 00:43. More rapid repetitions and gestural progression can be observed for almost twenty seconds, before being briefly interrupted again.

As might be expected, it is possible to detect the contours of a symmetrical pattern in the form, though not in equal durations as in "Quattro pezzi per orchestra". On the next return of the similar section (00:23), the bowed instruments quickly give way to brass instruments without losing the specific pitch center, though louder dynamics and broader timbre variations are traceable. The reason for this switch is purely timbral: the compositional preference could be associated with the oldest, traditional timbral manipulation technique, orchestration. The same, allegedly "melodic", profile is repeated with another instrumental source.

55 The spectral peaks and troughs present in the human voice, especially in vowel production (Wishart, 1994).

Following the fifth brass fragment, the third layer of semi-manipulated instrumental material appears as a part of the background texture. Subsequent to this isolated introduction, the "antecedent" resonant abstract layer reappears56 at 01:30, and the two layers form a dense, unstable texture, without interruption by the instrumental objects, until 02:10. Single-note brass figures reenter at 02:10 and the rate of pitch repetition accelerates linearly along with the spatial alternations. Spatial motion, microtonal fluctuations and textural variations are the key elements of this section, which lasts for almost one minute. At 03:07, all three layers are simultaneously reversed electronically, providing a unique transitional period until 03:25, since the instrumental objects can still be recognized despite the process. The reverse treatment does not remove the dominant identity of the instrumental objects dramatically; nevertheless, their priority diminishes to a certain extent. At 03:25, the overall envelope turns back to its original state; meanwhile, the amount of pitch-shift change in the instrumental layer reaches the limit of a semitone for the first time in the piece. Lasting for fifteen seconds, the stretched pitch motion returns to its equilibrium point at 03:40, while the background texture is reversed only briefly, as a gestural interlude, in order to turn into a "new section".

This new "episode" in the movement consists of superimposed instrumental objects with contrasting timbral properties. The dynamic and spectral expansion starts to be pronounced in this period and gradually reaches its limits towards the end of the composition. Around 03:50, the spectral contrast between the objects of the main layer, achieved by physical timbre manipulators such as mutes, becomes significant and instantly reversed objects appear frequently, although it is quite challenging for the listener to detect every reversed layer. The last sixty seconds of the work can be considered the climax of the linear expansion and intervallic motion. There were short-term dynamic bursts/peaks earlier in the composition, especially in the opening and the third quarter; however, this does not affect the overall linear apprehension of the closing section's progressive state. The lower range widens to its lowest limits, while the dynamics reach their upper ones. At 04:22, the semi-manipulated instrumental layer is transposed by an augmented fifth, approximately to the note A in this specific case.

56 The indicated timings are approximations within the span of one second, due to the differences of track spacing in media such as Compact Disc, vinyl and audio files.

TM continues to be active until the arrival of the additional cadential point. The main layer of natural instrumental objects becomes the supporting texture; in a sense, the layers interchange their roles in terms of hierarchy. Although the augmented fifth carries more tonal implications for our perception in comparison with the role of the dominant fifths used in Scelsi's work, the actual perception does not follow accordingly. This is mainly due to the lack of resolution to the dominant and to the transposition of the abstract layer to the minor second (D) in reference to the continuous C sharp, starting at 04:35. This rather ambiguous semi-harmonic textural quality lasts for twenty-two seconds and fades out gradually. A single bowed-instrument fragment forms a cadential figure at the very end, with its clear but unstable sense of tonic pitch, an approximation of C sharp, similar to "Polymorphia".

Every individual composition in this chapter has a complicated relationship with the others on several grounds. It can be considered indisputable that more recent compositions become increasingly complex in terms of duration, structure and technique, regardless of their compositional genres. The imagination born of decades of experience and of technological knowledge and progress may be the leading factor in this phenomenon. Some compositions share analogous methodologies: the microtonality of Scelsi and Penderecki, the conventional instrumental foundations of Schaeffer and Parmegiani, or the roles of percussion in the works of Scelsi and Lachenmann are among the distinct examples.

Clearly, the same tendency is obvious in both the instrumental and the electroacoustic domains. In addition, there are also prominent cross-influences between the two domains. For instance, a similar cumulative approach is evident in the compositions of Stockhausen, Scelsi and Penderecki. As discussed earlier, Lachenmann, Schaeffer and, to some extent, Parmegiani have an organic theoretical connection. In their selected works, the texture is not as thick as in the other examples; besides, the musical objects have a more distinct identity, whereas the opposite holds in the other cases. Still, Parmegiani differs from the other two composers by applying relatively gradual motion to his objects, whereas a rapid and unpredictable sense of motion dominates the rest. Nevertheless, all of the compositions contribute unique perspectives to timbre-dominated musical composition.



4. SYNTHESIS AND PROCESSING

4.1 Fundamentals

Sound synthesis and sound processing (more precisely known as audio signal processing) are crucial technical activities associated with the compositional procedures of sound object based electroacoustic music, where the horizontal and vertical pitch dimension is not the central emphasis. However, their theoretical distinction from each other is occasionally ambiguous. This is especially evident in several cases where the two departments organically overlap. Thus, it is necessary to stabilize the terminological content before dealing with more complicated issues.

Trevor Wishart defines the conceptual border between synthesis and processing as a blurry one and puts forward a specific synthesis type, granular synthesis, as an example of this crossover state in the practical implications of the terminology (Wishart, 1994). Granular synthesis is a form of sound synthesis based on the division of sound objects into micro time events called "grains". Every grain can be processed individually in several ways and the whole object can be reconstructed in a different "physical" order. Therefore, it is possible to claim that it contains elements of both worlds (Roads, 2001). Similar distinctions can be observed in Miranda's work (Miranda, 2002). Another rather ambiguous synthesis type is subtractive synthesis. In subtractive synthesis, a sound object with a highly dense spectrum, such as white or pink noise, is shaped by various filtering devices to produce sounds having less dense spectra. In a sense, subtractive synthesis cannot exist without SM tools, suggesting a symbiotic state of synthesis and processing. It should be noted that in the majority of synthesis applications, signal processing is additionally used to shape the characteristics of the sound further, as with the audio filtering devices found even in primitive synthesizers.
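A minimal sketch of the granular idea described above may clarify why it sits between synthesis and processing; the grain size, density and windowing choices here are illustrative assumptions, not a reconstruction of any particular composer's tool:

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def granulate(source, grain_ms=50, density=200, duration=2.0, seed=0):
    """Scatter short Hann-windowed grains taken from random positions in the
    source onto an output buffer: the reconstruction of the object in a
    different "physical" order described above."""
    rng = np.random.default_rng(seed)
    grain_len = int(SR * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(SR * duration))
    for _ in range(int(density * duration)):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / np.max(np.abs(out))  # normalize

# a plain 440 Hz tone stands in for any recorded sound object
t = np.arange(SR) / SR
cloud = granulate(np.sin(2 * np.pi * 440 * t))
```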

On the other hand, American composer and theorist Curtis Roads considers signal processing as a general term, which includes synthesis and transformation


(manipulation, according to the choice of terminology in this dissertation) as subgenres (Roads, 1997). Although Roads' point of view cannot be disproved scientifically, Wishart's choice of categorization has significant practical advantages in terms of composition and musical education; besides, it avoids potential methodological confusion. As can be expected, there are also additional insights regarding the subject. For example, Allen Strange narrows down the basic concept of signal processing to amplifiers and filters, omitting the TM and TBM possibilities entirely (Strange, 1983). Nevertheless, synthesis and processing should be defined as separate but interactive applications in order to determine the terminological and compositional deviations clearly.

American composer Charles Dodge defines the synthesis activity as "the generation of a signal" which creates "an acoustical sensation" (Dodge and Jerse, 1997, p.72). This simple yet highly adequate statement points to a fundamental feature of sound synthesis: it allows the composer to design and generate a unique sound object by configuring its every component, such as its envelope, spectral structure and temporal duration. Audio signal processing tools enable us to manipulate any sound, synthesized or natural, following the basic categories given in the sound object manipulation section, though the technical details were deliberately omitted there.57 The early electroacoustic works produced in the Cologne studio constitute clear historical examples of this "hybrid" approach58, whereas the early French musique concrète school only employed signal processing on existing sound objects, since it mainly discarded sound synthesis. This was discussed exhaustively in the second chapter of the dissertation.

In historical context, both technical departments can be divided into two periods: analog and digital synthesis/processing. Although today both technologies are available in separate or hybrid systems, the digital domain offers precise calculation, functional stability and automation possibilities on parameters, which might be impossible to accomplish in analog systems.59 Thus, it would be logical to apply a historical separation for a better chronological comprehension. To avoid redundancy, the synthesis and processing tools and techniques present in both domains would be explored solely in the analog section or vice versa, and brief comparisons of both systems would be included.

As a fundamental requirement of digital systems, computers have made an unquestionable contribution to electroacoustic composition. Inevitably, some of the technical highlights from the history of "electronic" computers need to be briefly mentioned in the digital section. This is a major distinction, since the word computer does not necessarily refer to systems consisting of electronic components. Regarding electronic computer systems, the invention of microprocessors marks a turning point not only for technological developments but for electroacoustic music composition as well. A microprocessor is a silicon chip that contains the CPU, which makes the system more efficient/faster, smaller in physical dimensions and cheaper. (Url-13) Before the microprocessor, computer systems were available only in university facilities and research laboratories, with limited capabilities and requiring programming language skills. Thus, the digital domain was rarely used in compositional activities during the 1950s and 1960s; the emphasis was on tape compositions and live performances. Bell Labs engineer Max Mathews programmed the first audio software, MUSIC I, as early as 1957. It was only capable of producing triangle waves. MUSIC I was immediately followed by MUSIC II (1958), MUSIC III (1960), MUSIC IV (1962) and MUSIC V (1968). There are several variations of MUSIC IV and MUSIC V with different names60, created in order to make them function under different programming languages or to modify the software according to compositional preferences (Manning, 1985, p.217). It should be noted that there were no global and compatible operating systems and graphical user interfaces as we have in recent computer systems such as Windows, Mac OS X and Linux.

57 It is possible to change an object radically via AM, which occasionally makes synthesis and processing indistinguishable. 58 Reverberation in "Studie II" is a manual TBM process. 59 The results of the early analog synthesis methods could not be heard in real time except on basic synthesizers, since they required an editing and/or recording process.

4.2 Analog: Before the Microprocessors

An analog (or analogue) audio signal refers to the continuous conversion of acoustical energy into electrical signals. This conversion can be achieved with particular types of devices called transducers.61 A transducer, such as a microphone (the most conventional type), performs this conversion by representing the change in the number of vibrations as fluctuations of electrical voltage. This enables the acoustical energy to be captured in a medium like magnetic audio tape, processed in several ways and played back.62 Figure 4.1 and Figure 4.2 show the overall procedures for recording and playback. Theoretically, this is a lossless process with infinite resolution, in comparison to the quantization-based, stepwise conversion of digital systems with finite resolution. However, in practice every component in the signal chain makes a significant contribution to the sonic result in terms of frequency range, frequency response and linearity. Dynamic range and spectrum are heavily dependent on these quantitative measures (Dodge and Jerse, 1997).

60 The majority of them preserve the title, such as MUSIC IVB, MUSIC IVBF, MUSIC 10 etc. The contemporary software Csound is a direct continuation of Mathews' earlier designs. 61 Transducers are not limited to the conversion of electrical energy; there are types of transducers capable of converting a specific form of energy to other forms such as chemical or thermal, but audio is solely associated with electricity.

Figure 4.1: Basic analog signal flow (Recording).

Figure 4.2: Basic analog signal flow (Playback).

From the processing and synthesis perspective, it is possible to generalize the notion that analog involves every method that benefits from electrical means. Nevertheless, there are various additional techniques and formats for producing and storing sound besides the common analog and digital ones. Martin Russ lists several such methods: mechanical, hydraulic, electrostatic and chemical. Among them, mechanical formats carry historical significance, since early analog audio storage media such as the phonograph and gramophone were originally based on mechanical principles. As Russ suggests, our modern-day vinyl is a hybrid platform, which works through the combination of the mechanical movements of the stylus and electrical amplification (Russ, 2004). For the sake of terminological consistency, physical manipulation of mechanical systems, such as the "flanging" effect, is also considered to belong to the analog sound processing category.

62 Playback is a reverse process, where the voltage changes make the loudspeaker diaphragm vibrate to convert the signal to acoustical energy. Therefore, a loudspeaker is also a transducer.

As had been frequently mentioned earlier due to their historical status, additive and subtractive syntheses are two fundamental synthesis types in use before the introduction of digital computing systems. Their associated signal processing tools will be explored simultaneously.

4.2.1 Additive synthesis

Additive synthesis is based on the mathematical concept of the Fourier series, whereby a periodic function can be stripped down to a sum of basic sine and cosine functions. Starting from this point, sinusoidal waves in regular, repetitive oscillation (simple harmonic motion) can be used to construct a more complex waveform. A sine wave consists of a single frequency and assumes the role of the smallest element responsible for the timbre of the sound object. This inductive process is also governed by the "law of superposition", whereby individual sine waves preserve their wave shapes but their amplitudes add cumulatively. This feature can be observed in Figure 4.3. A sine wave with a specific frequency f is given by (Klingbeil, 2009, p.13), where t is time in seconds:

x(t) = sin(2πft) (4.1)
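A brief sketch of this superposition principle, with an arbitrary (hypothetical) set of partials chosen purely for illustration, shows how complex tones are assembled from equation (4.1):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of time values

def sine(freq, amp=1.0, phase=0.0):
    # equation (4.1), extended with amplitude and phase data
    return amp * np.sin(2 * np.pi * freq * t + phase)

# law of superposition: each sine keeps its shape, amplitudes add sample by sample
partials = [(200, 1.0), (400, 0.5), (600, 0.33), (800, 0.25)]  # illustrative values
complex_tone = sum(sine(f, a) for f, a in partials)
```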

Additive synthesis was largely occupied with the motivation of imitating traditional instruments during the 20th century, as is evident in most of the early electronic musical instruments. However, starting with the idealist Cologne school, it was widely adapted to musical composition. Sound materials are generated by using devices called oscillators.

Oscillators are capable of generating different kinds of periodic waveforms, but sine waves give more accurate control in terms of microstructure. This is due to the lack of overtones; there is only one fundamental frequency. This particular feature makes sine waves extremely beneficial for the realization of electroacoustic composition as the ideal conceptual continuation of serial music, although the process requires superimposition and physical manipulation of magnetic tapes, a time-consuming task. More complex spectra require more sine waves to be present individually, but the number of tracks that could fit on the magnetic tape was quite limited until the arrival of digital audio workstations. Composers had to mix existing layers down onto a single track in order to make room for additional layers, a procedure also known as "bouncing".

Figure 4.3: Sine wave superposition (Url-14).

Because of the lower signal-to-noise ratio of analog systems in comparison with digital ones, a significant amount of background noise is unavoidably added to the audio signal during the superimposition process. This was a particular problem before the invention of Dolby noise reduction systems. Arguably, one may find the continuous interference of noise with the texture of the composition ideologically wrong.63 Besides, the durations of the waves had to be handled by cutting the tape with razor blades at specific angles (for glitch-free continuity) before the superimposition stage. Early analog additive synthesis was not a real-time process.

Real-time synthesis was not possible until the arrival of the voltage-controlled synthesizers (Moog and Buchla) and subsequent computer software (MUSIC N series) in the early 1960s.

63 It is possible to disprove this view with the psychoacoustic masking / adaptation phenomena.

This deficiency made trial-and-error-based strategies almost impractical.64 Although additive synthesis has an obvious association with sine waves, the theoretical basis of the synthesis can also be realized using different waveforms: sawtooth, triangle and square waves. These wave types contain a certain mathematical hierarchy in their overtone structures, thus it is possible to yield spectrally sophisticated sound objects with fewer individual ingredients. Compositionally, using waveforms with complex spectra has practical advantages, while they unquestionably give the user less control over the micro details of sound construction. The names of the waveforms come from the geometrical shapes of their waveforms:

A sawtooth wave (Figure 4.4) has an inverse proportion between its harmonics and their amplitudes. Assuming that the fundamental frequency is F and the amplitude of F is x, then:

F = x, 2F = x/2, 3F = x/3, 4F = x/4 … (4.2)

Figure 4.4: Sawtooth wave.

A square wave (Figure 4.5) contains only odd numbered harmonics and has the following amplitude pattern:

F = x, 3F = x/3, 5F = x/5, 7F = x/7 … (4.3)

64 As had been criticized in the introductory section, the majority of synthesizers are limited to equal temperament systems, though some of them are programmable. Nevertheless, they are still restricted in terms of the number of oscillators.

Figure 4.5: Square wave.

A triangle wave (Figure 4.6) again contains only odd-numbered harmonics, but with a different inverse proportion pattern:

F = x, 3F = x/9, 5F = x/25, 7F = x/49 … (4.4)

Figure 4.6: Triangle wave.
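The three harmonic recipes in equations (4.2)-(4.4) can be turned directly into an additive-synthesis sketch; the fundamental, the number of partials and the omission of the triangle wave's phase alternation are simplifying assumptions made here for illustration:

```python
import numpy as np

SR, F = 44100, 220.0          # sample rate and fundamental (illustrative values)
t = np.arange(SR) / SR

def additive(harmonics):
    """harmonics: iterable of (harmonic number, relative amplitude) pairs."""
    return sum(a * np.sin(2 * np.pi * F * n * t) for n, a in harmonics)

N = range(1, 40)  # truncate the theoretically infinite series
sawtooth = additive((n, 1 / n)    for n in N)                # eq. (4.2): all harmonics
square   = additive((n, 1 / n)    for n in N if n % 2 == 1)  # eq. (4.3): odd harmonics
triangle = additive((n, 1 / n**2) for n in N if n % 2 == 1)  # eq. (4.4): odd, 1/n^2
# (the true triangle wave also alternates the sign of successive partials;
#  only the amplitude pattern of eq. 4.4 is reproduced here)
```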

So far, the fundamental and most commonly used types of waveforms have been mentioned, but the actual list of non-sinusoidal waves does not end here. There are additional forms of waves called pulse waves. A quick glance at its waveform might create the impression that a pulse wave is simply a square wave (Figure 4.7), yet there is a major distinction. The "on" portion of the cycle is relatively short, though not as short as an impulse65, and the temporal ratio of the on and off stages is not 1:1. Moreover, this ratio need not remain fixed; it can be varied over time. This variable proportion is called the pulse width or duty cycle.66 Manipulating the pulse width, a process known as pulse width modulation, causes the perceived timbre to change in time. Dodge proposes that narrow pulses / small ratios signify a high-frequency-dominant spectrum (Dodge and Jerse, 1997). Reminiscent of the sawtooth wave, a pulse wave contains both even and odd harmonics, albeit with a denser spectrum than the former wave. This spectral richness provides an interesting transitional point for the conceptual association of wave types with the basic synthesis types.

65 Impulse synthesis is occasionally treated as a separate synthesis act.

Theoretically, pulse waves can be used for additive synthesis, if we do not restrict its foundations to the usage of sine waves. In practice, however, they are commonly used in subtractive synthesis, along with the almost conventional employment of broadband noise types. These noise types will be introduced in the next section.

Figure 4.7: Pulse wave.

In the digital domain, additive synthesis can be realized far more efficiently. There is the possibility of having a theoretically almost unlimited number of stable oscillators with real-time response, automation control over every parameter and unlimited additional signal processing ability to manipulate the sound object further.67

4.2.2 Subtractive synthesis

Unlike additive synthesis, subtractive synthesis is a reduction-oriented application. Audio filter types are used to remove various portions of the source material. Therefore, it is possible to use any sound source except sine waves, but spectrally richer sources like pulse waves and noises are the most preferred ones in order to have more control and more options on the result. Subtractive synthesis is often categorized under the heading "source modeling synthesis approach". Source modeling is based on the idea that every sound source produces a raw sound, which is spectrally shaped by a resonator/modifier.68 Thus, given a similar source, filters can be adjusted to give results similar to those of the resonator/modifier. Additionally, Miranda (2002) defines five more techniques for source modeling: waveguide filtering, Karplus-Strong synthesis, cellular automata lookup table, physical modeling and modal synthesis. In a general sense, these approaches are motivated mainly by (instrumental or human voice based) imitational purposes; thus, they are discarded as compositional procedures and omitted from the dissertation.

66 The terminological preference is a geographical one. 67 In practice, digital systems indeed have limitations determined by the capacity of their components.

Noise types are the richest sound sources in terms of spectral density among the electronically available signals. White noise, the leading broadband noise type for synthesis procedures, can be produced by applying equal intensity, at a specific point in time, to all the frequencies present. These frequencies cover the range of the human auditory system, from 20 to 20,000 Hz. The name is an analogy to optics, where the mixture of the three main colors of light, red, green and blue, produces the color white, contrary to the general assumption regarding color mixing.

Generating white noise is a random process, showing no signs of a repeating pattern. In Figure 4.8, a snapshot of the waveform of white noise is shown in order to observe its random character.

Figure 4.8: White noise waveform.

68 The human voice is an important example as proof of this theory. The raw sound the vocal cords produce is reminiscent of a triangle wave, but it is highly modified by the oral and nasal cavities.

Variations can be produced by filtering white noise and changing its power spectral density. The most frequently used noise variants are pink and brown69 noise. However, for these types the color names do not have a direct association with optical principles, as in the case of white noise. In pink noise, the intensity is not equal across the complete audible range but is equal per octave band, resulting in a darker, more muffled sound than white noise. Brown noise has an even greater intensity disparity among its frequency bands; the amplitude decreases by 6 dB per octave from the low end to the high end, and thereby the sonic object is dominated by low frequencies. There are other forms of noise, though historically they are not commonly used. Blue (azure) noise has a 3 dB increase in every higher frequency band; violet noise has the same tendency with a 6 dB increase. Grey noise is a broadband, random noise modified according to equal-loudness contours (A-weighted). It has dominating spectral energy in the lowest and highest bands and less energy in the middle bands, where human hearing is most sensitive due to the resonance of the ear canal. The spectral analyses of the main noise types can be observed in Figures 4.9, 4.10 and 4.11.

Figure 4.9: White noise spectrum.

69 Brown noise is also known as Brownian or red noise, named after Robert Brown.

Figure 4.10: Pink noise spectrum.

Figure 4.11: Brown noise spectrum.
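As a small, hedged illustration of the relationships just described (not a production-quality noise generator), white noise can be produced from a random sequence and brown noise from its running sum; pink noise, by contrast, requires filtering, which already anticipates the subtractive logic discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44100  # one second at an assumed 44.1 kHz sample rate

white = rng.standard_normal(n)   # equal expected power across all frequencies
brown = np.cumsum(white)         # integrating white noise yields Brownian ("red")
brown /= np.max(np.abs(brown))   # noise, whose power falls by ~6 dB per octave
# pink noise (-3 dB per octave) has no comparably simple construction; in practice
# it is obtained by filtering white noise, i.e. precisely the subtractive approach
```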


At this point, the technical capacities of audio filters have to be introduced for a better comprehension of subtractive synthesis. The paradoxical relationship between this particular synthesis approach and audio filters was mentioned in the fundamentals section. Filters are spectral signal processing tools (SM), and they are used in a variety of applications, not solely for compositional purposes but also in everyday applications and communication systems. One essential fact about audio filters is that they are only capable of manipulating (amplifying, attenuating or cutting) bands of frequencies present in the source spectrum; they cannot modify frequencies that are not present.70

Dodge discusses four main filter types: low-pass (hi-cut), hi-pass (low-cut), band-pass and band-stop (band-reject, notch). Today it is possible to identify seven different types, with the addition of hi-shelf, low-shelf and peaking (semi-parametric) filters (Dodge and Jerse, 1997, p.171). An eighth filter type, the all-pass filter, is omitted in the dissertation since it only changes the phase of individual sinusoidal components without changing the spectrum. Combinations of filters may form filter banks (serial or parallel), sufficient to shape the sound source in a more complex way. Analog filters can be either passive or active. A passive filter does not include an amplifier in its circuit and consists of capacitors and resistors. The consequence of this feature is that it can only cut or attenuate the target frequencies but cannot amplify (boost) them. On the contrary, active peaking filters are capable of both actions, due to the presence of an amplifier circuit. Digital filter implementations take active filters as a model in the majority of cases.

Low-pass (hi-cut) and hi-pass (low-cut) filters divide the spectrum into two bands, a pass band and a stop band. A "threshold" frequency has to be determined in order to specify the frequencies above or below it which will be preserved in their original spectral form (pass band) and those which will be cut from the signal (stop band). This threshold frequency is named the cut-off frequency, where the signal has decreased by 3 dB and beyond which it attenuates according to a certain slope.71 The reason is a technical necessity: it is impossible to design a "brick wall" filter in the analog domain, meaning a filter with a mathematically precise and instantaneous response.

70 The related audio device is called a (harmonic) exciter or enhancer, which generates harmonics and subharmonics to accomplish a variety of audio engineering tasks. 71 Some texts and audio devices also refer to the slope as low- or high-frequency roll-off.

The filter "order" defines the steepness of the cut-off in a proportional way. Each successively higher order (whole-number multiples of one) adds a further 6 dB per octave to the attenuation above or below the cut-off point. In other words, the higher the order, the steeper the slope of the cut. Table 4.1 summarizes the first six orders and their dB/oct (decibel per octave) relationships.

Table 4.1: Filter orders.

1. order 2. order 3. order 4. order 5. order 6. order
-6 dB/oct -12 dB/oct -18 dB/oct -24 dB/oct -30 dB/oct -36 dB/oct
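A minimal digital sketch of the simplest case in Table 4.1, a first-order (6 dB/oct) low-pass, also shows the crudest form of subtractive synthesis: darkening white noise by removing part of its spectrum. The one-pole design and the chosen cutoff are illustrative assumptions, not a model of any specific analog circuit:

```python
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sr=44100):
    """First-order (-6 dB/oct) low-pass: a minimal digital analogue of an RC filter."""
    a = np.exp(-2 * np.pi * cutoff_hz / sr)  # standard one-pole coefficient
    out = np.empty_like(signal)
    prev = 0.0
    for i, x in enumerate(signal):
        prev = (1 - a) * x + a * prev        # y[n] = (1-a)x[n] + a*y[n-1]
        out[i] = prev
    return out

# filtering white noise with a 500 Hz cutoff: subtractive synthesis at its simplest
noise = np.random.default_rng(1).standard_normal(44100)
darker = one_pole_lowpass(noise, 500.0)
```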

Digital filters offer adjustable orders, but analog filters have fixed designs in most cases. Their orders depend on the filter category they are based on, such as Bessel, Butterworth and Chebyshev filters, which are infinite impulse response (IIR) filters.72 In theory, a pass band has to be unmodified ("flat" in everyday terms), but certain filters may produce ripples near the cut-off frequency (Chebyshev), phase non-linearity (elliptical filters) or slow transient response (Butterworth). (Pohlmann, 2000, p.54) In Figure 4.12, first- and second-order (the most frequently used orders in the analog domain) low-pass and high-pass filters with a cut-off frequency of 80 Hz are visually represented. The horizontal axis shows the stop and pass band frequencies and the vertical axis shows their relative amplitude.

Figure 4.12: Hi-pass filters (Davis and Jones, 1990, p.256).

72 An IIR (also known as recursive) filter's impulse response never returns to zero, because theoretically the output feeds back to the input continuously (Pohlmann, 2000).

Band-pass filters are the combination of low-pass and hi-pass filters; they allow only a certain frequency band to proceed unchanged to the output of the filter, removing both ends of the spectrum according to the given cut-off frequencies. (Figure 4.13)

Figure 4.13: Band-pass filters (Dodge, 1997, p.173).

The width of the pass band is identified by the ratio of the center frequency (F0 in the upper diagram) to the bandwidth (between Fi and Fu), expressed by the quality factor (Q). The basic formula for calculating Q is:

Q = Fcenter / BW (4.5)

Mathematically, broader bandwidths correspond to lower Q values and narrower bandwidths correspond to higher Q values (Dodge and Jerse 1997). In theory, it is possible to isolate an extremely narrow frequency band from a broadband noise such as white noise.
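As an illustrative calculation of equation (4.5): a band-pass filter centered at 1,000 Hz with a bandwidth of 100 Hz has Q = 1000/100 = 10, whereas widening the band to 500 Hz lowers Q to 2; the higher the Q, the more selective the filter.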

The fourth category of filters is band-stop (band-reject) filters. This particular filter can be considered as executing the inverse process of a band-pass filter: a certain range of frequencies is attenuated by the desired amount (the cut-off frequencies designate the lower and upper limits), but the rest of the spectrum remains unaltered. (Figure 4.14) A variation of this filter type is also called a notch filter, where the Q is significantly high in order to produce a narrower stop band.


Figure 4.14: Band-reject filters (Dodge and Jerse 1997, p.174).

Shelving filters are among the filter types most familiar to all users, widely used in everyday applications and domestic environments. The tone controls on home and mobile systems have predominantly consisted of low- and hi-shelving filters.

As with cutting filters, a cut-off frequency has to be specified here instead of a center frequency. The frequencies below (low-shelf) or above (hi-shelf) it can be decreased or increased by the desired amount, but they cannot be removed completely as in the case of cutting filters. (Figure 4.15) From the compositional point of view, this type is rarely used in subtractive synthesis, because of its inability to shape the spectrum radically. Still, shelving filters are important for SM-based applications, since they are capable of changing the perception of the sound object.

Peaking filters allow the user to boost or cut a specific center frequency according to the selected Q and amplitude amount. As can be seen in Figure 4.16, a center frequency of 1 kHz can cut or boost an extremely narrow or broad range of frequencies. Parametric equalizers are derived from multiband peaking (variable) filters; the term was coined by George Massenburg in 1972, and they have become the most common tool of sound engineering applications (Url-15). Peaking filters allow the user to add dynamic variations to the filter parameters, and creative results can be produced, such as the classic frequency-sweeping "wah-wah" effect. Rhythmically adjustable step filters with various frequency notches and boosts generate interesting modifications of the texture.


Figure 4.15: Shelving filters (Huber and Miles, 2010, p.481).

The main advantages of digital filters are their automation capacity, greater control over parameters and the ability to construct FIR (finite impulse response, or non-recursive) filters, which do not cause phase distortion and thus offer a linear phase response, but require more CPU power and introduce time delays to the processed audio material.

Figure 4.16: Peaking filters (Huber and Miles, 2010, p.480).

Filters are the main manipulation tools in Stockhausen's "Mikrophonie I" (1964), where the real-time sounds derived from the tam-tam are constantly shaped by microphone positioning and two filters, adjusted according to the given instructions.

4.2.3 Modulation

In simple terms, modulation is an (occasionally hybrid) form of synthesis and processing, where the output of one signal (the modulator) alters the amplitude (AM), frequency (FM) or phase information (PM)73 of another signal (the carrier), resulting in significant changes in the perception of the sound or in a completely new sound.74 Miranda (2002) identifies modulation activities as "loose modeling approaches", due to the potential imitative qualities of the modulation techniques. The final spectral status of the sound is determined by the carrier component and the sidebands. The frequency of the carrier component is defined by the carrier signal, whereas both signals are responsible for the spectra of the sidebands (Dodge and Jerse, 1997).

The most frequently used AM techniques are classical AM and RM. The tremolo effect, the most basic form of AM, occurs when a sub-audio signal (below 20 Hz) is applied to the amplitude input of a higher-frequency signal and creates variations in the amplitude domain of the carrier. When the frequency of the modulator signal enters the audible range, sidebands occur in the overall spectrum (Miranda, 2002).

The resulting spectrum, with the addition of the sidebands, consists of the following components (Fc being the carrier frequency and Fm the modulator frequency):

Fc + (Fc − Fm) + (Fc + Fm) (4.6)

Therefore, it is possible to claim that AM allows the composer to have DM and SM capabilities at the same time just by varying the modulator frequency. It has to be emphasized that the amount of overtones determines the complexity of this mathematical interaction; sine waves will provide less dense sound structures, as can be expected in every form of audio modulation.

RM differs significantly from classical AM in two main features: firstly, the amplitude of the carrier is completely dependent on the modulator signal, meaning there will not be any sound if the modulation index equals zero; and secondly, the frequency of the carrier signal is not present in the final spectrum (Miranda, 2002).

73 PM is most commonly used in synthesizer designs and is not employed directly in electroacoustic compositional activity; thus, it will be omitted from the modulation section. 74 The ratio of the deviation from the original state of the specific parameter defines the modulation index, or modulation depth.

Therefore, the equation representing the spectrum produced by RM (inharmonic, due to the absence of the carrier frequency) is:

(Fc - Fm) + (Fc + Fm) (4.7)
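The difference between equations 4.6 and 4.7 can be verified numerically. The following minimal sketch (carrier and modulator frequencies are arbitrary illustration values) synthesizes one second of classical AM and of RM and lists the spectral components actually present:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs            # one second of signal, 1 Hz bin spacing in the FFT
fc, fm = 1000.0, 150.0            # illustrative carrier / modulator frequencies

carrier = np.sin(2 * np.pi * fc * t)
modulator = np.sin(2 * np.pi * fm * t)

# Classical AM: the carrier is retained and sidebands appear at fc - fm and
# fc + fm (equation 4.6); the modulation index m scales the sideband level.
m = 0.5
am = (1.0 + m * modulator) * carrier

# Ring modulation: plain multiplication, so the carrier component vanishes
# and only fc - fm and fc + fm remain (equation 4.7).
rm = modulator * carrier

for name, sig in (("AM", am), ("RM", rm)):
    mag = np.abs(np.fft.rfft(sig)) / len(sig)
    present = np.nonzero(mag > 0.01)[0]          # bin index equals frequency in Hz here
    print(name, "components at (Hz):", present.tolist())
```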

Stockhausen used RM extensively in his work "Mixtur", realized in 1966. The real-time sounds derived from the orchestra are ring modulated synchronously with the accompaniment of sine-wave generators, and the pitch sensation is diminished to a certain extent.

The third modulation method, FM, is largely a digital-domain process. Thus, the principles of this synthesis method will be summarized in the following section.

4.2.4 Time-based effects

Audio effects such as reverberation, (tape) delay, phasing and flanging are the most commonly (and historically) used analog time-based effects in electroacoustic composition, and they naturally have their sophisticated counterparts in the digital domain. Reverberation, an acoustic phenomenon caused by multiple reflections within a physical space, prolongs the decay stage of a sound object's envelope while giving it a sense of depth and volume. Initially, the only technique for applying reverberation to sound objects was to use so-called echo chambers, real spaces with reflective surfaces, as discussed in the analysis of "Studie II": the material played back into the echo chamber had to be rerecorded with microphones. This manual technique evolved into electromechanical reverberation systems such as plate and spring reverbs. Digital reverberation offers simulation of real or fictional spaces with precise parameter control. The latest technology, called "convolution reverb", is a sample-based reverb application which reconstructs the reflections of spaces and their frequency responses in an extremely realistic way. This is achieved by recording the impulse response of the specified place and applying it to the audio material in the digital domain.
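As a simple illustration of the convolution principle (not a model of any specific reverb product; the synthetic impulse response below merely stands in for a prerecorded one), the dry material is convolved with an impulse response:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(2)

# Hypothetical impulse response: 1.5 s of exponentially decaying noise standing
# in for an IR captured in a real space (a real application would load one).
n_ir = int(1.5 * fs)
ir = rng.standard_normal(n_ir) * np.exp(-4.0 * np.linspace(0.0, 1.5, n_ir))

# Dry source: a short noise burst standing in for the sound object.
dry = np.zeros(fs)
dry[:200] = rng.standard_normal(200)

# Convolution reverb: the output is the dry signal convolved with the IR.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))          # normalize to avoid clipping
```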

Delay is the repetition of the original signal after a predetermined time interval. Until the arrival of digital delay systems with additional features such as pre-delay and feedback, tape delay was the only technique for applying delay to audio material: the audio is recorded at the record head while being played back simultaneously from the playback head, so the tape speed (in inches per second, IPS) and the distance between the heads determine the delay time. Schaeffer used delay in many of his compositions.
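The principle can be sketched digitally as a single delay line with feedback; the delay time and feedback amount below are arbitrary illustration values, not a model of any particular tape machine:

```python
import numpy as np

def tape_style_delay(x, delay_s, feedback, fs, tail_s=2.0):
    """Single-tap delay with feedback: each repeat is fed back into the line,
    producing a decaying series of echoes (a digital sketch of the tape-delay
    principle)."""
    d = int(round(delay_s * fs))
    y = np.zeros(len(x) + int(tail_s * fs))
    y[:len(x)] = x
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]         # regenerated echoes
    return y

fs = 44100
x = np.zeros(fs)
x[0] = 1.0                                  # unit impulse as test material
echoes = tape_style_delay(x, delay_s=0.25, feedback=0.5, fs=fs)
# The impulse now repeats every 250 ms, each repeat 6 dB quieter than the last.
```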


Flanging is achieved by superimposing a short delayed version of the audio signal (usually less than 20 ms) onto the original. Although it is categorized among the time-based effects, flanging predominantly affects the spectrum of the audio. This is due to the Haas effect: time delays of less than about 50 ms are not perceived as a separate entity. If the spatial positioning of the original and the delayed signal is the same, comb filtering occurs, resulting in patterns of dips and peaks in the spectrum.

Similarly, phasing employs a particular filter type called the all-pass filter. All-pass filters do not change the frequency content of the audio material; rather, they change the phase information. Therefore, a version superimposed on the original audio interferes constructively and destructively, resulting in unevenly spaced dips and peaks in the spectrum.
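A minimal flanger sketch, assuming an LFO-modulated fractional delay mixed with the dry signal (rate, depth and mix values are purely illustrative), makes the sweeping comb-filter pattern audible on broadband material:

```python
import numpy as np

def flanger(x, fs, max_delay_ms=5.0, rate_hz=0.25, depth=1.0, mix=0.5):
    """LFO-modulated short delay mixed with the dry signal; the summation
    creates a sweeping comb-filter pattern of peaks and notches."""
    n = np.arange(len(x))
    lfo = 0.5 * (1 - np.cos(2 * np.pi * rate_hz * n / fs))      # 0..1
    delay = depth * max_delay_ms * 1e-3 * fs * lfo              # delay in samples
    idx = n - delay
    i0 = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    i1 = np.clip(i0 + 1, 0, len(x) - 1)
    frac = idx - np.floor(idx)
    delayed = (1 - frac) * x[i0] + frac * x[i1]                 # linear interpolation
    return (1 - mix) * x + mix * delayed

fs = 44100
rng = np.random.default_rng(3)
noise = rng.standard_normal(4 * fs)          # broadband material shows the sweep well
flanged = flanger(noise, fs)
```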

4.3 Digital: After the Microprocessors

The main benefits of digital systems have already been discussed briefly several times, since most analog processing and synthesis methods are available in the digital domain with relatively greater efficiency. However, there is an increasing number of "digital-only" applications; a selected few will be introduced in this section. First, the fundamental conceptual difference of the digital domain should be clarified.

Digital audio employs a significantly different method for the storage of the electrical voltages caused by acoustical energy at the output of a microphone or an electronic sound source. Unlike the infinite resolution of analog systems, digital audio technology must convert the voltage fluctuations into binary numbers in order to record, process and play back the signal. Thus, it is a finite system with a theoretical loss in dynamic range and spectrum. Bit depth and sample rate are the main resolution parameters, determining the dynamic range and the frequency range of the system respectively. However, current digital technology can capture frequencies above the range of human hearing and, more importantly, it offers the widest dynamic range available in the history of audio recording, although this is not favored by every user.
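The relation between bit depth and dynamic range can be illustrated with a simple uniform quantizer (a sketch that ignores dither and real converter behaviour); the measured signal-to-error ratio approaches the familiar 6.02N + 1.76 dB rule for an N-bit full-scale sine:

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize a signal in [-1, 1] to the given bit depth."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 997.0 * t)      # test tone 6 dB below full scale

for bits in (8, 16, 24):
    err = quantize(x, bits) - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit: measured signal-to-error ratio = {snr:.1f} dB")
```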

Figure 4.17 and Figure 4.18 represent the simplified signal flow in the digital domain. The low-pass filters applied in both processes are omitted from the figures, although they are crucial elements in the chain for preventing aliasing errors during conversion.


The internal subsections of ADC and DAC units are also excluded from the diagrams below.

Figure 4.17: Basic digital signal flow (Recording).

Figure 4.18: Basic digital signal flow (Playback).

Recent computer systems, equipped with multi-core CPUs that provide higher and more stable clock speeds, and with larger amounts of RAM, offer higher-resolution audio along with more sophisticated processing and synthesis hardware and plug-ins.

4.3.1 FM synthesis

Frequency modulation (abbreviated as FM), developed by John Chowning in 1967 (Url-16), is based on the introduction of a modulator signal to the frequency input of a carrier signal. As in amplitude modulation (AM), a sub-audio modulator signal causes frequency variations in the carrier signal, known as vibrato in the typical musical performance sense. As soon as the modulator frequency exceeds this lower limit, sidebands begin to be generated that are more complex than those of AM and RM (Miranda, 2002). The formula to calculate the frequencies of the sidebands is:

Fc + (Fc - Fm) + (Fc + Fm) + (Fc - 2Fm) + (Fc + 2Fm) + (Fc - 3Fm) + (Fc + 3Fm) + ... (4.8)
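A minimal two-operator sketch (using the phase-modulation form in which Chowning-style FM is usually implemented; the frequencies and index envelope are illustrative assumptions) produces the sideband series of equation 4.8:

```python
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs                 # two seconds
fc, fm = 400.0, 100.0                      # illustrative carrier / modulator frequencies
index = 5.0 * np.exp(-2.0 * t)             # decaying modulation index: brightness fades

# Two-operator FM: sidebands appear at fc +/- n*fm, their strength governed
# by the modulation index.
fm_tone = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```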

Chowning used this synthesis method extensively in his composition "Stria", realized in 1977, and FM became one of the most commonly used synthesis methods after the Japanese company Yamaha applied it on a mass scale to the domestic synthesizer market.


4.3.2 Granular synthesis

Granular synthesis is essentially not a spectral synthesis application. It does not function primarily in the amplitude and frequency domains, but focuses on the amplitude and time domains. Roads (2001) defines granular synthesis as the division of sounds into "microacoustic events" called grains, "near the perception threshold of human auditory system", ranging from one to 100 ms. Processes such as time stretching, envelope shaping and time-based effects can be applied to individual grains; the grains can be reconstructed in various random or predetermined orders, and the result can be processed further with conventional processing tools. Therefore, granular synthesis also has timbral implications. The density of the grains likewise has considerable influence on the macro texture. Figure 4.19 shows a 50 ms grain in the amplitude and time domains.

Figure 4.19: Grain of 50 ms.

The ability to work at the micro level of temporal duration is unquestionably a trademark of the digital environment. Granular synthesis has many variations, such as "glisson", "grainlet", "trainlet", "pulsar synthesis" and "abstract" or "physical particle modeling" (Roads, 2001, pp.119-120).
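A minimal asynchronous granular sketch (grain length, density and window choice are illustrative assumptions) scatters Hann-windowed grains taken from a source sound onto an output timeline:

```python
import numpy as np

def granulate(source, fs, grain_ms=50.0, density=200, length_s=4.0, seed=0):
    """Scatter windowed grains from random positions of the source onto an
    output timeline (a minimal asynchronous granular cloud)."""
    rng = np.random.default_rng(seed)
    g = int(grain_ms * 1e-3 * fs)                  # grain length in samples
    window = np.hanning(g)
    out = np.zeros(int(length_s * fs) + g)
    for _ in range(int(density * length_s)):
        src_pos = rng.integers(0, len(source) - g)
        dst_pos = rng.integers(0, len(out) - g)
        out[dst_pos:dst_pos + g] += source[src_pos:src_pos + g] * window
    return out / np.max(np.abs(out))

fs = 44100
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 220.0 * t)             # any recorded object could be used
cloud = granulate(source, fs)
```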

4.3.3 Waveshaping

Waveshaping is based on modifying the waveform of the signal. Therefore, in principle it can be achieved by sending the signal through a system with nonlinear characteristics and recording the "distorted" output. Manipulating sound objects with low-fidelity transducers is among the most frequently employed waveshaping techniques in the analog domain. Figure 4.20 gives a visual example of the process, where (a) is the original waveform and (b) is the waveshaped version.


Figure 4.20: Waveshaping (Roads, p.39).

Miller Puckette (2006) categorizes waveshaping as a form of modulation, whereas Dodge and Jerse (1997) and Miranda (2002) consider it part of distortion synthesis. Computers give users far more control in defining the deviations from the original form (Miranda, 2002).
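A minimal waveshaping sketch (the tanh transfer function and drive value are illustrative stand-ins for the nonlinear system mentioned above) reproduces the principle of Figure 4.20:

```python
import numpy as np

def waveshape(signal, drive):
    """Pass the signal through a nonlinear (tanh) transfer function;
    larger drive values deviate further from the original waveform
    and add new harmonics to the spectrum."""
    return np.tanh(drive * signal) / np.tanh(drive)

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 220.0 * t)              # original waveform, as in (a)
shaped = waveshape(x, drive=6.0)               # waveshaped version, as in (b)
```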



5. ADDITIONAL TERMINOLOGY

So far, technical, theoretical and other related sorts of terminology have been discussed and actively used in association with the timbre-based electroacoustic compositional approach. A stable terminology is not only important for artistic communication, but also crucial for educational purposes. Along with a new ear training methodology, a common terminological base would be advantageous for analyzing, comprehending and internalizing this highly abstract and unconventional art form. The EARS project, led by the Music, Technology and Innovation Research Centre at De Montfort University, includes a glossary on its web site (Url-17), derived mainly from the theories of Schaeffer, Emmerson, Smalley and Wishart, with additional contributions regarding the practical and technical aspects of the genre. Among the terms not yet mentioned, the most significant will be introduced in this brief chapter.

“Utterance”, a term by Wishart, indicates the expressive quality of the non-pitch-based human voice in an electroacoustic context, where the sound source is not necessarily visible, as is the case under acousmatic listening conditions. Today, the human voice continues to be an important source for electroacoustic composition. Thus, the analysis of the aesthetic and functional roles of the human voice in the sonic structure must be given special importance during the intermediate stages of compositional education. Etudes based on the application of various manipulation and synthesis methods to the human voice would be useful for exploring its compositional potential (Url-18).

“Transcontextuality”, a term adapted by Smalley to the field from literature, points to the possibility of applying a semiotic analytical approach to investigate the relationships between sound objects and between movements on a macro scale. This critical perspective would help students generate meaning-oriented reference points in order to investigate the dramaturgical and technical structure and react according to the data (Url-19).


“Composed space”, a term by Smalley, refers to the spatial movement of the sound sources as an integral part of the compositional procedure. Site-specific compositions can be extremely busy in the spatial domain (Url-20). The aesthetic and technical possibilities and requirements of spatial composition have to be included in the curriculum of electroacoustic studies.

“Aesthesis” (reception) and “poiesis” (construction), adapted from Paul Valéry, denote the distinct stages of composition and its perception (Url-21). The potential advantages and disadvantages of the degree of coherence between these two stages provide a necessary perspective for micro- and macro-structural strategies.

“Parametric analysis” is a type of microanalysis in which only an individual parameter of the sound is observed throughout the composition (Url-22). “Structural analysis” gives the whole network of parameters equal importance, while also studying it on a macro scale (Url-23). Both views of analysis can be applied to electroacoustic music for an efficient study of formal tendencies and technical strategies.

“Aural landscape”, another Wishart term, points to the ability of recorded material to imply a space, along with visual associations (Url-24). In combination with composed space, both terms establish the necessary conceptual framework for spatial organization.


6. CONCLUSIONS

The conceptual and structural definition of music has been in gradual evolution since the inception of human culture. Written history helps us to trace the theory and practice of music back to Ancient Greece, although the remains of a flute-like instrument found in the Hohle Fels cave in Germany are at least 35,000 years old (Url-25). This discovery leaves no doubt that the question of what music is, or how music should be aesthetically and ideologically, has possibly been one of the central motives for thousands of years. At the beginning of the 20th century, the complex yet restricted musical language of the time, established over a long period, started to be challenged by a variety of Western artists and theorists seeking new ways of musical expression and construction. This was partially because of the expanded sonic imagination provided by the invention of audio technology and the potentials it suggested. In the last 60 years, the search for a new compositional perspective focused first on the promise of timbre as a form-defining, alternative compositional dimension. Afterwards, pitch began to cease being the focus of musical organization, and any sound of natural or electronic origin started to be considered as potential musical material. As discussed exhaustively, electroacoustic music became the base for sound-oriented composition, although the timbral approach continues to exist in instrumental practice as well. Today, the aesthetic and operational heritage of electroacoustic music can be observed even in popular and hybrid musical forms. Besides, it has conceptual and practical reflections in other art forms, such as sound installations or sound design created to accompany moving or still images. On several occasions it would be challenging to distinguish between musical composition and other sonic arts, though their function and scale can be questioned to reach a "subjective" verdict.

This thesis makes several attempts on various grounds. One of the primary motivations is to explore the establishment of the theoretical and compositional background of timbre-dominated compositional activities. Analyses of selected compositions demonstrated their practical existence and evolution chronologically. By placing electroacoustic music at the base of spectral strategies regarding four-dimensional sonic composition, the evolution of audio technology immediately emerges as one of the leading themes. The historical-contextual approach to the contributing elements is essential to arrive at a complete aesthetic and technological framework, compatible with future deviations and expectations. While criticizing the present unofficial terminology, the work has aimed to propose a stable one. To accomplish these tasks, a role model is inevitably required.

Pierre Schaeffer's elaborate musical theory is among the extremely rare theories conceived specifically for a new, technology-dependent music. Despite its utopian tendencies and its constraints regarding sound material, it succeeds in supplying the fundamentals of a sonic comprehension and a related theory, which provide an open intellectual source. Selected features of Schaefferian theory can be modified according to contemporary intellectual needs and technical innovations. Additionally, they can also be applied to hybrid musical forms, such as the mainstream electronica scene with its diverse subgenres.

One of the key concepts which match these criteria of flexibility and persistence is the object-oriented approach to sounds. All four dimensions of a physical sound may contribute to our notion of a sound object, making each one an individual and unique sonic structure. Nevertheless, the thesis gives timbre a certain priority among the other parameters, due to its unique cultural status. Horizontal and vertical relations between different or variant objects produce a sonic continuum and provide an alternative to the traditional concept of Western musical composition. Even a single sound object can attain a "musical" value through modifications of its gesture and texture by means of audio manipulation, morphing and spatial movement. As a natural consequence, listening becomes a crucial activity in the compositional process. Thus, analytic listening becomes the focal concern of electroacoustic composition, as is already evident in the "primacy of the ear" motto of the musique concrète movement. Schaeffer differentiated between sound objects and musical objects according to their compositional value and tried to set a standard for this determination, drawing support from the principles of human auditory perception. Selected concepts of psychoacoustics have been introduced throughout the thesis where they were prerequisites for making conceptual connections. To open a parenthesis, the theoretical interaction of abstract art and electroacoustic music has been discussed briefly to obtain a more general historical perspective.

Deviations from Schaefferian theory are discussed in the second chapter, though it should be emphasized insistently that composers' aesthetic decisions regarding object selection lead them to create sonic structures of their own imagination and technical ability. Therefore, any predetermined standard for object selection may result in works with uniform spectral qualities. The risk of producing prototypes contradicts the theoretical roots of the genre, since the common feature of all the early ideas was to have a broad range of compositional material. Besides, any aesthetic classification represents a considerable restriction of the potential results in an era in which more complex processing and synthesis are available to more people than in the foundational years of the genre. Nevertheless, the reception and production of a timbre-oriented composition entails several stages of preparation.

Electroacoustic composition requires a different form of ear training in order to gain an analytic/critical listening perspective, in comparison to traditional chord- and interval-based identification systems. Many theorists suggest complex extensions to Schaeffer's primary definitions of the listening modes; nevertheless, the core of the reduced listening ideal has remained the same for the general appreciation of the genre. It is possible to suggest that the ear training methods applied in sound engineering education are quite relevant to the demands of reduced listening. Every spectral and temporal behavior of sounds has to be observed, instead of focusing only on pitch quality and its relations, in order to decide on a strategy for manipulation and structuring purposes. Frequency-, timbre- and signal-processing-based ear training can be included in the curricula of music departments for a complete view of the musical possibilities.

Correspondingly, the second stage is unquestionably reserved for technical calibration. Due to their unfamiliarity with technical matters, some of the first-generation electroacoustic composers employed assistants for the realization of their works. Today, computers offer almost infinite options, real-time response and precise control for audio-related activities, according to varying degrees of software knowledge. Thus, every active contributor to the genre must have knowledge of sequencing, processing and synthesis, and must have a considerable amount of experience in the production and postproduction stages of audio. Traditional concepts of sound engineering are not included in the thesis. However, elementary-level (and historically significant) processing and synthesis techniques have been introduced, with occasional debates about terminology and practice. The accompanying CD contains audio examples and etudes representing all the discussed manipulation and synthesis techniques, applied to organic and synthetic sound sources, for a clear apprehension of their functions. The composition "We Are Lost Forever" uses all the mentioned techniques in a compositional context.

New techniques define new forms and aesthetic movements of various sorts under the general heading of electroacoustic composition. Starting especially with the early experiments in live performance in the 1960s, its methodologies continue to be used in ever more daring collaborations, with instrumental, human or other sound sources assuming different roles, for example through live processing techniques. Whether in collaborative or pure form, the contemporary percept of the musical object is diverse and inclusive, with many formal options involving different degrees of determinacy and indeterminacy. From the digital-error-based glitch concept to the minimalistic microsound movement, our interactions with sound offer new paths and contribute to our imagination for establishing unheard sonic structures. It would be difficult to predict future theoretical and practical expansions. Nevertheless, with the development of interactive electronic instruments, programmable software languages and efficient synthesis/processing tools, it is possible to suggest that electroacoustic music will continue to evolve to a further level of performance practice and compositional methodology. It is essential to adapt the fundamental techniques and theories to educational systems, along with a critical perceptual approach to timbral and textural progression. Eventually, it is one of the intrinsic functions of art forms to originate compatible or contrasting contexts and possibilities in order to enrich human culture.


REFERENCES

Adlington, R. (2009). Sound Commitments – Avant-garde Music and the Sixties. New York: Oxford University Press.

Ali, F. (2002). Elektronik Müziğin Öncüsü Bülent Arel. İstanbul: Türkiye İş Bankası Kültür Yayınları.

Altınel, A. (2011). Personal communication, 14.03.2011.

Attali, J. (2009). Noise: The Political Economy of Music. Minneapolis / London: University of Minnesota Press.

Bauer, A. (2010). Philosophy Recomposed: Stanley Cavell and the Critique of New Music. Journal of Music Theory. Date retrieved: 23.01.2012, address: http://jmt.dukejournals.org/content/54/1/75.abstract

Braun, H. J. (2002). Music and Technology in the Twentieth Century. Baltimore: The Johns Hopkins University Press.

Busoni, F. (1911). Sketch of a New Esthetic of Music. New York: G. Schirmer.

Chadabe, J. (1997). Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice-Hall.

Chion, M. (1983). Guide des objets sonores: Pierre Schaeffer et la recherche musicale. Paris: Éditions Buchet/Chastel.

Cox, C. and Warner, D. (2009). Audio Culture: Readings in Modern Music. New York: Continuum.

Dack, J. (2002). Abstract and Concrete. Journal of Electroacoustic Music, vol. 14. Date retrieved: 12.05.2010, address: http://www.sonic.mdx.ac.uk/research/dackabstract.html

Dack, J. (2006). Translating Pierre Schaeffer: Symbolism, Literature and Music. Electroacoustic Music Studies Network. Date retrieved: 12.05.2010, address: http://www.ems-network.org/spip.php?article231

Dack, J. (2003). Acoulogie: an answer to Lévi-Strauss? Electroacoustic Music Studies Network. Date retrieved: 12.05.2010, address: http://www.ems-network.org/spip.php?article231

Dack, J. (2002). Histories and Ideologies of Synthesis. Lansdown Centre for Electronic Arts. Date retrieved: 13.05.2010, address: http://www.cea.mdx.ac.uk/?location_id=61&item=86


Dack, J. (2002). Instrument and Pseudoinstrument - Acousmatic Conceptions. Lansdown Centre for Electronic Arts. Date retrieved: 13.05.2010, address: http://www.cea.mdx.ac.uk/?location_id=61&item=86

Dack, J. (2008). Mixed Electroacoustic Music: Interactions between the actual and the virtual. Lansdown Centre for Electronic Arts. Date retrieved: 13.05.2010, address: http://www.cea.mdx.ac.uk/?location_id=61&item=86

Dack, J. (1999). Pierre Henry's continuing Journey. Lansdown Centre for Electronic Arts. Date retrieved: 13.05.2010, address: http://www.cea.mdx.ac.uk/?location_id=61&item=86

Dack, J. (1994). Pierre Schaeffer and the Significance of Radiophonic Art. Lansdown Centre for Electronic Arts. Date retrieved: 13.05.2010, address: http://www.cea.mdx.ac.uk/?location_id=61&item=86

Davis, G. and Jones, R. (1990). The Sound Reinforcement Handbook. (2nd ed.) Milwaukee: Hal Leonard Publishing Corporation.

Dodge, C. and Jerse, T. A. (1997). Computer Music: Synthesis, Composition and Performance. (2nd ed.) New York: Schirmer.

Eimert, H. and Stockhausen, K. (1965). Die Reihe, vol. 1: Electronic Music. (4th ed.) London: Theodore Presser Co.

Emmerson, S. (1986). The Language of Electro-acoustic Music. London: The Macmillan Press.

Emmerson, S. (2007). Living Electronic Music. Hampshire: Ashgate Publishing.

Ferrer, R. (2011). Timbral Environments: An Ecological Approach to the Cognition of Timbre. Empirical Musicology Review. Vol: 6, No: 2.

Guedes, C. (1996). Pierre Schaeffer, Musique Concrète, and the Influences in the Compositional Practice of the Twentieth Century. Date retrieved: 10.03.2009, address: http://web.mac.com/carlosguedes/iWeb/HTM/Media_files/Schaeffer-1.pdf

Hard, J. (2008). The Future of Modern Music: A Philosophical Exploration of Modernist Music in the 20th Century and Beyond. (3rd ed.) Michigan: Iconic Press.

Harrison, J. (2000). Diffusion: Theories and Practices, with Particular Reference to the BEAST System. Date retrieved: 03.12.2010, address: http://cec.concordia.ca/econtact/Diffusion/Beast.html

Heifetz, R. J. (1989). On the Wires of Our Nerves: The Art of Electroacoustic Music. NJ: Associated University Presses.

Hockings, E. (1995). Helmut Lachenmann's Concept of Rejection. JSTOR. Date retrieved: 13.01.2010, address: http://www.jstor.org/pss/945557?searchUrl=%2Faction%2FdoBasicSearch%3FQuery%3Delke%2Bhockings%26acc%3Doff%26wc%3Don&Search=yes

Holmes, T. (2002). Electronic and Experimental Music: Pioneers in Technology and Composition. (2nd ed.) New York: Routledge.


Howard, D. and Angus, J. A. S. (2009). Acoustics and Psychoacoustics. (4th ed.) Oxford: Focal Press.

Huber, D. M. and Runstein, R. E. (2010). Modern Recording Techniques. (7th ed.) Oxford: Focal Press.

Husserl, E. (1983). Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. Boston: Martinus Nijhoff Publishers.

Husserl, E. (2010). Fenomenoloji Üzerine Beş Ders. Ankara: Bilge Su Yayıncılık.

İpşiroğlu, N. (2006). Resimde Müziğin Etkisi: Yeni Bir Alımlama Boyutu. (3rd ed.) İstanbul: Yirmidört Yayınevi.

Kahn, D. (1999). Noise, Water, Meat: A History of Sound in the Arts. London: MIT Press.

Kane, B. (2007). L'Objet Sonore Maintenant: Pierre Schaeffer, sound objects and the phenomenological reduction. Cambridge: Cambridge University Press.

Klingbeil, M. (2009). Spectral Analysis, Editing, and Resynthesis: Methods and Applications. Oxford University.

Koenigsberg, C. (1991). Karlheinz Stockhausen: New Morphology of Musical Time. Date retrieved: 29.03.2009, address: http://www.music.princeton.edu/~ckk/smmt/index.html

Kramer, J. D. (1988). The Time of Music. New York: Schirmer Books.

Landy, L. (2007). Understanding the Art of Sound Organization. London: MIT Press.

Lachenmann, H. (1971). Kontrakadenz. Wiesbaden: Breitkopf and Haertel.

Maconie, R. (2005). Other Planets: The Music of Stockhausen. Lanham: Scarecrow Press.

Manning, P. (1985). Electronic & Computer Music. New York: Oxford University Press.

McHard, J. (2006). The Future of Modern Music: A Philosophical Exploration of Modernist Music in the Twentieth Century and Beyond. Michigan: Iconic Press.

Meyer, F. and Zimmermann, H. (2006). Edgard Varèse - Composer, Sound Sculptor, Visionary. Woodbridge, Suffolk: The Boydell Press.

Miranda, E. R. (2002). Computer Sound Design: Synthesis Techniques and Programming. (2nd ed.) Oxford: Focal Press.

Mimaroglu, İ. (1991). Elektronik Müzik. İstanbul: Pan Yayınları.

Nyman, M. (1999). Experimental Music: Cage and Beyond. (2nd ed.) Cambridge: Cambridge University Press.

Palombini, C. (1998). Pierre Schaeffer, 1953: Towards an Experimental Music. Electronic Musicological Review. Date retrieved: 06.01.2009, address: http://www.rem.ufpr.br/_REM/REMv4/vol4/arti-palombini.htm

Palombini, C. (1999). Musique Concrète Revisited. Electronic Musicological Review. Date retrieved: 05.02.2009, address: http://www.rem.ufpr.br

Penderecki, K. (1963). Polymorphia. Celle: Hermann Moeck Verlag

Pohlmann, K. C. (2000). Principles of Digital Audio. (4th ed.) New York: McGraw-Hill.

Puckette, M. (2006). Theory and Technique of Electronic Music. (Unpublished draft version.) Retrieved from http://crca.ucsd.edu/~msp/techniques/v0.11/book.pdf

Reisch, G. N. (2001). The Transformation of Giacinto Scelsi's Musical Style and Aesthetics. (Doctoral Dissertation). Athens: University of Georgia.

Ricard, J. and Herrera, P. (2003). Using Morphological Description for Generic Sound Retrieval. Date retrieved: 05.12.2012, address: http://jscholarship.library.jhu.edu/bitstream/handle/1774.2/49/paper.pdf

Roads, C. (2001). Microsound. London: MIT Press.

Roads, C., Piccialli, A., Pope, T. A. and De Poli, G. (1997). Musical Signal Processing. Lisse: Swets & Zeitlinger.

Roads, C. (1996). The Computer Music Tutorial. London: MIT Press.

Russ, M. (2004). Sound Synthesis and Sampling. (2nd ed.) London: Focal Press.

Russolo, L. (1913). The Art of Noises. Retrieved from http://www.scribd.com/doc/1280/The-Art-of-Noise

Salzman, E. (1988). Twentieth-Century Music. (3rd ed.) New Jersey: Upper Saddle River.

Scelsi, G. (1998). Quattro pezzi (su una nota sola). Paris: Éditions Salabert.

Schaeffer, P. (1952). In Search of a Concrete Music. Paris: Éditions du Seuil.

Schönberg, A. (1921). Theory of Harmony. (3rd ed.) Los Angeles: University of California Press.

Schwinger, W. (1989). Krzysztof Penderecki: His Life and Work. London: Schott & Co.

Stockhausen, K. (1956). Studie II. Wien: Universal Edition.

Stockhausen, K. (2000). Stockhausen on Music. (2nd ed.) London: Marion Boyars.

Strange, A. (1983). Electronic Music: Systems, Techniques, and Control. New York: William C. Brown Pub.

Sethares, W. (2005). Tuning, Timbre, Spectrum, Scale. (2nd ed.) New York: Springer.

Toop, D. (1995). Ocean of Sound. London: Serpent's Tail.

Varèse, E. and Wen-chung, C. (1966). The Liberation of Sound. Perspectives of New Music. Vol: 5, No: 1.

Wishart, T. (1994). Audible Design. London: Orpheus the Pantomime.

Wishart, T. (1996). On Sonic Art. (2nd ed.) New York: Routledge.


Westerkamp, H. (2000). Soundscape Composition: Linking Inner and Outer Worlds. World Forum for Acoustic Ecology. Date retrieved: 15.07.2010, address: http://wfae.proscenia.net/

Url-1 , date retrieved 13.11.2010.
Url-2 , date retrieved 20.11.2010.
Url-3 , date retrieved 02.12.
Url-4 <http://www.ubu.com/sound/>, date retrieved 02.12.2010.
Url-5 , date retrieved 05.01.2011.
Url-6 , date retrieved 06.03.2011.
Url-7 , date retrieved 28.05.2011.
Url-8 , date retrieved 29.05.2011.
Url-9 , date retrieved 29.01.2012.
Url-11 <http://www.ele-mental.org/ele_ment/said&did/schaeffer_interview.html>, date retrieved 07.02.2012.
Url-12 , date retrieved 29.02.2012.
Url-13 , date retrieved 07.03.2012.
Url-14 , date retrieved 08.03.2012.
Url-15 , date retrieved 01.04.2012.
Url-16 , date retrieved 25.04.2012.
Url-17 <http://www.ears.dmu.ac.uk/>, date retrieved 29.04.2012.
Url-18 <http://www.ears.dmu.ac.uk/spip.php?rubrique1456>, date retrieved 29.04.2012.
Url-19 <http://www.ears.dmu.ac.uk/spip.php?rubrique190>, date retrieved 29.04.2012.
Url-20 <http://www.ears.dmu.ac.uk/spip.php?rubrique37>, date retrieved 29.04.2012.
Url-21 <http://www.ears.dmu.ac.uk/spip.php?rubrique1445>, date retrieved 29.04.2012.
Url-22 <http://www.ears.dmu.ac.uk/spip.php?rubrique1450>, date retrieved 30.04.2012.
Url-23 <http://www.ears.dmu.ac.uk/spip.php?rubrique165>, date retrieved 30.04.2012.


Url-24 <http://www.ears.dmu.ac.uk/spip.php?rubrique410>, date retrieved 30.04.2012.
Url-25 , date retrieved 01.05.2012.


APPENDICES

APPENDIX A : Listening modes
APPENDIX B : Extended summary of PROGREMU
APPENDIX C : Emmerson's musical discourse and syntax categorization
APPENDIX D : Glossary
APPENDIX E : CD Index


APPENDIX A: Listening modes

Table A: Listening behaviors of Delalande (Landy, 2007).

Taxonomic listening – Morphological, descriptive listening

Empathetic listening – Subjective, impact oriented listening

Figurativization – Narrative significance oriented listening

Law of organization – Form oriented listening

Immersed listening – Subjective, contribution and support oriented listening

Nonlistening – Losing interest


APPENDIX B: Extended summary of PROGREMU

Table B: Extended summary of PROGREMU.

TYPOLOGY
a) Context: Articulation / Appui
b) Masse / Facture, Durée / Variation, Équilibre / Originalité

MORPHOLOGY
a) Contexture: Forme / Matière
b) Morphological Criteria: Masse, Timbre harmonique, Grain, Allure, Dynamique, Profil de masse, Profil mélodique

CHARACTEROLOGY
a) Matière: Masse, Timbre harmonique, Grain
b) Forme: Dynamique, Allure
c) Variation: Profil mélodique, Profil de masse

ANALYSIS
a) Échelle: Cardinal, Ordinal
b) Critère / Dimension, Site / Calibre
c) Triple perceptual field of the ear: Pitch, Intensity, Duration


APPENDIX C: Emmerson’s musical discourse and syntax categorization

Table C: Emmerson's musical discourse and syntax matrix (Emmerson, 1986). Rows give the syntax type, columns the dominant discourse of the electroacoustic composition.

Abstract syntax
- Aural discourse dominant: Stockhausen: Studie I, Studie II; Babbitt: Ensembles for Synthesizer
- Combination of aural and mimetic discourse: Nono: La Fabbrica Illuminata
- Mimetic discourse dominant: Stockhausen: Telemusik

Combination of abstract and abstracted syntax
- Aural discourse dominant: Stockhausen: Momente
- Combination of aural and mimetic discourse: McNabb: Dreamsong; Harvey: Mortuos Plango, Vivos Voco
- Mimetic discourse dominant: Wishart: Red Bird

Abstracted syntax
- Aural discourse dominant: Parmegiani: De natura sonorum
- Combination of aural and mimetic discourse: Parmegiani: Dedans-Dehors
- Mimetic discourse dominant: Ferrari: Presque Rien no.1


APPENDIX D: Glossary

AC Bias: A high-frequency signal above the range of human hearing, applied in magnetic tape recorders to decrease the nonlinearity of the medium.

ADC: Electronic device, which converts the voltage fluctuations into binary numbers in order to be represented in the digital domain.

Bit Depth: In digital recording, bit depth defines the number of bits used to represent a single sample of the original signal.

Comb Filter: When the time delayed version of the signal is superimposed with the original signal, harmonically related notches (multiples of the lowest notch) appear in the spectrum due to destructive interference.

DAC: Electronic device, which converts the binary numbers into voltage fluctuations for playback and related purposes in the digital domain.

Decibel: One tenth of a bel, a unit of measurement named after Alexander Graham Bell. The decibel expresses ratios on a logarithmic scale against various reference points; it does not refer to an absolute value.

Dolby: Dolby Laboratories, known for noise reduction systems and surround sound related technologies.

Haas Effect: Below roughly 50 ms, time-delayed versions of the original signal are not perceived as autonomous units, but as part of the whole.

Psychoacoustics: The science that investigates the phenomena of human auditory perception.

Peak: The highest amplitude value in a specific time unit.

Pre-delay: The time gap between the original signal and the onset of early reflections in reverberation tools.

Phon: A loudness unit used for the comparison of sine wave loudness.

RMS: The square root of the mean of the squares, used for average loudness calculations.

RPM: Revolutions per minute, indicates the frequency of rotation.

Sample Rate: The number of samples taken from the original signal per second in order to reconstruct the signal in the digital domain.

Wah-wah: An audio effect obtained by sweeping a resonant peak-filter boost across the spectrum of the original signal.


APPENDIX E: CD INDEX

Track 1: Sound Object Instrumental

Track 2: Sound Object Synthetic

Track 3: Sound Object Mechanic

Track 4: Sound Object – Human Voice - Musical

Track 5: Sound Object – Human Voice - Speech

Track 6: Sound Object – Human Voice - Utterance

Track 7: Sound Object – Found Sound

Track 8: Sound Object - Ambience

Track 9: Layer Hierarchy (Pro Emmerson)

Track 10: Layer Hierarchy (Contra Emmerson)

Track 11: Note (Pipe Organ – G)

Track 12: Node (Kick and Cymbal)

Track 13: Noise (Brown)

Track 14: Envelope (Original)

Track 15: Envelope (Attack removed)

Track 16: Envelope (Reversed)

Track 17: Simple Object (Unprocessed)

Track 18: Simple Object (Spectral Manipulation)

Track 19: Simple Object (Dynamic Manipulation)

Track 20: Simple Object (Tessitura Manipulation)


Track 21: Simple Object (Time Based Manipulation)

Track 22: Simple Object (Absolute Manipulation)

Track 23: Complex Object (Unprocessed)

Track 24: Complex Object (Spectral Manipulation)

Track 25: Complex Object (Dynamic Manipulation)

Track 26: Complex Object (Tessitura Manipulation)

Track 27: Complex Object (Time Based Manipulation)

Track 28: Complex Object (Absolute Manipulation)

Track 29: Additive Synthesis (Sine Waves)

Track 30: Subtractive Synthesis (Pink Noise – Unprocessed)

Track 31: Subtractive Synthesis (Pink Noise – Processed 1)

Track 32: Subtractive Synthesis (Pink Noise – Processed 2)

Track 33: Sound Object (Unprocessed)

Track 34: Sound Object (Amplitude Modulation)

Track 35: Sound Object (Frequency Modulation)

Track 36: Sound Object (Ring Modulation)

Track 37: Sound Object (Unprocessed)

Track 38: Sound Object (Granular Synthesis)

Track 39: Sound Object (Unprocessed)

Track 40: Sound Object (Waveshaping)

Track 41: Composition – “We Are Lost Forever”


CURRICULUM VITAE

Candidate’s full name: CEMAL BARKIN ENGİN

Place and date of birth: Istanbul, 17/05/1978

E-Mail: [email protected]

B.A.: Marmara University (1995-2002)

M.A.: Istanbul Technical University (2002 – 2005)

PhD: Istanbul Technical University (2005 – 2012)

PUBLICATIONS/PRESENTATIONS ON THE THESIS:

Engin, C. B., Aşkın, C., 2012: Pierre Schaeffer'in obje tabanlı elektroakustik müzik teorisinin temel prensipleri ve teoriden güncel sapmalar. İTÜ Dergisi. (In Press)

