Journal of Fundamental and Applied Sciences Research Article

ISSN 1112-9867 Special Issue

Available online at http://www.jfas.info

CALCULATION OF PITCH FOR THE IRANIAN TRADITIONAL MUSIC GENRE CLASSIFICATION STUDIED ON TWO STRINGED INSTRUMENTS: THE TAR AND THE SETAR

S. Firouzfar1,*, M. A. Layegh2 and S. Haghipour3

1Azarbaijan Shahid Madani University, Faculty of Information Technology
2Department of Electrical Engineering, Urmia University, Urmia, West Azarbaijan Province, Iran
3Department of Biomedical Engineering, Tabriz Branch, Islamic Azad University

Published online: 15 February 2016

ABSTRACT
An audio signal classification system analyzes the input audio signal and produces a label that describes the signal at the output. Such systems are used to characterize both music and speech signals. The categorization can be done on the basis of pitch, music content, music tempo and rhythm. A musical sound is said to have four perceptual attributes: pitch, loudness, duration and timbre. These four attributes make it possible for a listener to distinguish musical sounds from each other. Pitch is a perceptual and subjective attribute of sound and plays an important role in the human understanding of sound. Although the pitch estimation of monophonic sound is well resolved, polyphonic pitch estimation has proved to be a very difficult task. Here, the main purpose is to calculate pitch for the classification of The Radif of Mirzâ Ábdollâh, played on two famous stringed instruments, the Tar and the Setar.
Keywords: Pitch; Radif; Timbre; Monophonic

Author Correspondence, e-mail: [email protected]
doi: http://dx.doi.org/10.4314/jfas.v8i2s.454

Journal of Fundamental and Applied Sciences is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. S.Firouzfar et al. J Fundam Appl Sci. 2016, 8(2S), 1986-2001

1. INTRODUCTION
Musical genre is an important description that can be used to classify and characterize music from different sources. For human beings, it is not difficult to classify music into different genres. Although letting computers understand and classify musical genre is a big challenge, there are perceptual criteria related to the melody, tempo, texture, instrumentation and rhythmic structure that can be used to characterize and discriminate different musical genres [1]. Automatic musical genre classification can potentially automate the process of structuring music content and thus provide an important component for a complete music information retrieval system for audio signals. Furthermore, it provides a framework for developing and evaluating features for describing musical content. Such features can be used for similarity retrieval, classification, segmentation and audio thumbnailing, and form the foundation of most proposed audio analysis techniques for music. In order to verify the discriminating abilities of each feature, researchers have used different techniques such as cluster analysis, distance measures, entropy analysis and other related methods [2]. Audio samples have been selected from our novel database to extract the features. The database contains 250 gushe of the repertoire played separately by five of the most famous Iranian masters using two long-necked lutes, the tar and the setar. In other words, the database contains 1500 gushe in total.

2. MATERIALS AND METHODS
2.1. Persian traditional music
The radif is the principal emblem and the heart of Persian music, a form of art as quintessentially Persian as that nation's fine carpets and exquisite miniatures (Nettl, 1987). The radif, indeed, is not only the musical masterpiece of Iranian genius, but also the artifact which distinguishes its music from all other forms (except the Azerbaijani tradition, which has developed its own radif on the same basis). The radif is made up essentially of non-measured pieces («free rhythm») which provide a generative model or pattern for the creation of new compositions, mainly measured, as well as for free improvisation. The radif is a musical treasure of exceptional richness, the study of which can be approached from different angles such as theory, practice, pedagogy and cultural sociology. In formal terms, it can be defined in the following way: «Radif (which means rank, row, series) actually signifies the order in which the gushe are played; this word also means the totality of the twelve âvâz [and dastgâh], as each is played by such and such a master.» Thus, the same âvâz can have several radif, each composed or arranged by a different master. The name of the radif is best used before the name of the master who arranged it, and sometimes composed junctions between the gushe. In broad terms, the radif is a collection of pieces, generally non-measured, classified according to modal affinities into 12 modal systems, and supposed to be played in a certain order. It is also a teaching model which permits one to learn: a) the repertoire of melodic types (gushe and certain almost fixed canonical pieces such as reng); b) the classification of modes and modulations, their structure and their typical features; c) the instrumental techniques, the classical style, the aesthetic principles, and the implicit rules of composition and improvisation [3].
2.2. Mirzâ Ábdollâh's radif
Theoretically, the repertoire is divided into 12 modal systems (dastgâh and âvâz), but in practice Bayât-e Kord can be separated from Shur (as in Mirzâ Ábdollâh's radif), and Shushtari can be separated from Homâyun, giving a possible 14 modal systems or even more. This classification into 12 modes could have been extended to 15 or 16 as well by detaching great gushe such as Mokhâlef-e Segâh, Shahnâz, Árâq or Hejâz. But the number 12 has a very strong symbolic connotation. It ought to be made clear that a number of gushe may lose their names when integrated with other gushe. This happens often in Mirzâ Ábdollâh's version, for instance in Esfahan, Razavi or Shur. While drawing a short survey of the modal structures of the different dastgâh and âvâz, we shall draw some parallels with other versions [3].
2.2.1. Dastgâh-e Shur
Shur is played in the following scale, in which the fifth is variable. Mirzâ Ábdollâh's version is quite complete; it does not display important changes of pitch but rather metabol, or shifting of the tonic. One of the characteristics of the âvâz of Shur is that they always start in the high register and end in the low, on the tonic G (or sometimes the leading tone, F) of Shur. In this radif, the âvâz are always played on the upper octave of Shur, but nowadays, on the târ or setâr, they are often played a fourth below (Shur on D). The melodic formation in Shur is conceived within the modal structure shown below for Shur D in Figure 1.

Fig.1. Dastgâh-e Shur

2.2.1.1. Âvâz-e Abu Átâ
Âvâz-e Abu Átâ is played on the scale of Shur (with a natural fifth), the fourth being polarized, as can be seen in Figure 1a. It is introduced here by the gushe Râmkeli preceding the darâmad, which can also be performed at the end of Abu Átâ. The gushe Gilaki is generally incorporated into Dashti rather than Abu Átâ, the second part (Hejâz and Gilaki) being on the fifth.

Fig.1.a. Âvâz-e Abu Áta

2.2.1.2. Âvâz-e Bayât-e Tork
Âvâz-e Bayât-e Tork uses the same scale but polarizes the third, Bb, as can be seen in Figure 1b. The gushe Qatâr and Qarâ'i, played here, lend themselves equally well to Bayât-e Tork. The version below carries a number of gushe belonging to Mâhur (Feyli, Khosravani, Shekaste), which are not found in Hoseyn-Qoli's radif. The name Bayât is probably a reference to the ancient mode Bayât, which is very common in the Turkish and Arabic world and is similar to Shur, with a perfect fifth.

Fig.1.b. Âvâz-e Bayât-e Tork

2.2.1.3. Âvâz-e Afshâri
Âvâz-e Afshâri follows the scale of Shur with a variable fifth. This version is very short compared with Hoseyn-Qoli's and especially Ma'rufi's, but it is possible to extend it with gushe borrowed from other dastgâh, such as Mâhur and Navâ. The scale of Âvâz-e Afshâri is shown in Figure 1c.

Fig.1.c.Âvâz-e Afshâri

2.2.1.4. Âvâz-e Dashti
Âvâz-e Dashti follows the same scale but is centered on the variable fifth. Here too it is reduced to a small number of gushe. The scale of Âvâz-e Dashti is shown in Figure 1d.

Fig.1.d.Âvâz-e Dashti

2.2.1.5. Âvâz-e Bayât-e Kord
Âvâz-e Bayât-e Kord is often incorporated with Shur. The following version is very close to that of Hâji Aqâ Mohammad Irâni, with whom Borumand studied. The scale of Âvâz-e Bayât-e Kord is shown in Figure 1e.

Fig.1.e.Âvâz-e Bayât-e Kord

2.2.2. Dastgâh-e Segâh
Dastgâh-e Segâh can be completed by the great gushe Hesâr, borrowed from Chahârgâh, with an adaptation of the basic scale: G, A, Bp (tonic), C, D. Most of the gushe of Segâh are also found in Chahârgâh and are probably borrowed from that mode. Nowadays, the gushe Rohâb, Masihi, Takht-e Tâgdis and Shâh Khatâ'i are rarely performed. They belong rather to Navâ, but can also be added to the end of Afshâri. The reng-e Delgosha is adapted here in 4/8 instead of 6/8. This version does not include the reng Shahr-âshub, but it is possible to adapt the Shahr-âshub of Chahârgâh to the Segâh mode. The scale of Segâh is shown in Figure 2.

Fig.2. Dastgâh-e Segâh

Its great gushe Mokhâlef uses the scale shown in Figure 2a:

Fig.2.a. Gushe Mokhâlef (Dastgâh-e Segâh)

2.2.3. Dastgâh-e Chahârgâh
Dastgâh-e Chahârgâh is one of the most important and developed dastgâh. The following version is very complete, with its transposition to the fifth (Hesâr) and to the octave (Mansuri). The distinctive scale of Chahârgâh is shown in Figure 3.

Fig.3. Dastgâh-e Chahârgâh

2.2.4. Dastgâh-e Mâhur

The intervallic structure of the mode of Māhur parallels that of the major mode in Western classical music. Yet, because of the other elements which go into the making of Persian modes, probably no melody in the major mode can be said to be in the mode of Māhur. The modal structure of Māhur is shown in Figure 4.

Fig.4. Dastgâh-e Mâhur

2.2.5. Dastgâh-e Homâyun
Dastgâh-e Homâyun is again very complete, except for the short modulations in Segâh (Bayât-e Ájam) and Shur (Oshshaq) which are found in Ma'rufi's transcription. Shushtari is sometimes performed as an independent âvâz with its modulation Mansuri on the scale of Chahârgâh, not given here. This version does not include the Shahr-âshub. Figure 5 shows the scale of Dastgâh-e Homâyun.

Fig.5. Dastgâh-e Homâyun

2.2.5.1. Âvâz-e Bayât-e Esfahân
Âvâz-e Bayât-e Esfahân is often considered a branch of Homâyun. There are at least two reasons to consider it autonomous and not derived: a) it is nearly always performed separately; b) the leading tone F# must be slightly lower. The scale of Âvâz-e Bayât-e Esfahân is shown in Figure 5a.

Fig.5.a. Âvâz-e Bayât-e Esfahân

2.2.6. Dastgâh-e Navâ
Dastgâh-e Navâ is also a dastgâh performed almost only in classical music. This version is also very close to those of Hoseyn-Qoli and Ma'rufi, except for the gushe of the last part, to which can be added Rohâb, Masihi, Shâh Khatâ'i and Takht-e Tâgdis, one fifth below their version given in Segâh, as can be seen in Figure 6.

Fig.6. Dastgâh-e Navâ

2.2.7. Dastgâh-e Rast-Panjgâh
Dastgâh-e Rast-Panjgâh is an ancient mode only performed in art music. Very few compositions use this mode, which was overshadowed by Mâhur on a similar scale, but it seems that some gushe, as well as some compositions (tasnif) attributed to Mâhur, actually belong to this dastgâh. The name Panjgâh, which nowadays designates a secondary gushe, has lost its ancient meaning, so that it would be more correct to call this dastgâh Rast, as in the Azerbaijani tradition. The modal structure of Dastgâh-e Rast-Panjgâh is shown in Figure 7. [3]

Fig.7. Dastgâh-e Rast-Panjgâh

2.3. Pitch calculation
2.3.1. Autocorrelation Algorithm
Fundamentally, this algorithm exploits the fact that a periodic signal, even if it is not a pure sine wave, will be similar from one period to the next. This is true even if the amplitude of the signal is changing in time, provided those changes do not occur too quickly. To detect the pitch, we take a window of the signal with a length at least twice as long as the longest period that we might detect. In our case, this corresponded to a length of 1200 samples, given a sampling rate of 44,100 Hz. Using this section of signal, we generate the autocorrelation function r(s), defined here as the sum of the pointwise absolute differences between the signal and its shifted copy over some interval, perhaps 600 points. Graphically, as can be seen in Figure 8, this corresponds to the following:

Fig.8. Shifting the signal

Here, the blue signal is the original and the green signal is a copy of the original, shifted left by an amount nearing the fundamental period. Notice how the signals begin to align with each other as the shift amount nears the fundamental period. Intuitively, it should make sense that as the shift value s approaches the fundamental period of the signal T, the difference between the shifted signal and the original signal will begin to decrease. Indeed, we can see this in Figure 9, in which the autocorrelation function rapidly approaches zero at the fundamental period.

Fig.9. Autocorrelation function

The fundamental period is identified as the first minimum of the autocorrelation function. Notice that the function is periodic, as we expect. Since r(s) measures the total difference between the signal and its shifted copy, as the shift approaches a multiple k*T of the period, the signals again align and the difference approaches zero. We can detect this value by differentiating the autocorrelation function and then looking for a change of sign, which yields critical points. We then look at the direction of the sign change across points (negative difference to positive), to take only the minima. We then search for the first minimum below some threshold, i.e. the minimum corresponding to the smallest s. The location of this minimum gives us the fundamental period of the windowed portion of the signal, from which we can easily determine the frequency by dividing the sampling rate by the period in samples.
2.3.2. FAST Autocorrelation
Clearly, this algorithm requires a great deal of computation. First, we generate the autocorrelation function r(s) for some positive range of s. For each value of s, we need to compute the total difference between the shifted signals. Next, we need to differentiate this signal and search for the minimum, finally determining the correct minimum. We must do this for each window. In generating the r(s) function, we define a domain for s of 0 to 599. This allows for fundamental frequencies between about 50 and 22000 Hz, which works nicely for the human voice. However, this does require calculating r(s) 600 times for each window. In an effort to improve the efficiency of this algorithm, we created an alternative called FAST Autocorrelation, which has yielded speed improvements in excess of 70%. We exploit the nature of the signal, specifically the fact that if the signal is generated using a high sampling rate and if the windows are narrow enough, we can assume that the pitch will not vary drastically from window to window.
Thus, we can begin calculating the r(s) function using values of s that correspond to areas near the previous minimum. This means that if the previous window had a fundamental period of 156 samples, we begin calculating r(s) at s = 136. If we fail to find the minimal s in this area, we calculate further and further from the previous s until we find a minimum. Also, we note that the first minimum (valued below the threshold) is always going to correspond to the fundamental frequency. Thus, we can calculate the difference equation dr(s)/ds as we generate r(s). Then, when we find the first minimum below the threshold, we can stop calculating altogether and move on to the next window. If we use only the second improvement, we usually cut down the range of s from 600 points to around 200. If we then couple in the first improvement, we wind up calculating r(s) for only about 20 values of s, which is a savings of roughly (580) * (1200) ≈ 700,000 calculations per window. When the signal may consist of hundreds of windows, this improvement is substantial indeed.
2.3.2.1. Limitations of Autocorrelation
The autocorrelation algorithm is relatively impervious to noise, but is sensitive to the sampling rate. Because it calculates the fundamental frequency directly from a shift in samples, it follows that if we have a lower sampling rate, we have lower resolution in pitch. As stated earlier, autocorrelation is also extremely expensive computationally. However, using the adaptive techniques described above, computation can be expedited so that the algorithm runs in near-real time.
2.3.3. Harmonic Product Spectrum
If the input signal is a musical note, then its spectrum should consist of a series of peaks corresponding to the fundamental frequency, with harmonic components at integer multiples of the fundamental frequency.
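The difference-function detector of Section 2.3.1 can be sketched as follows. This is a minimal illustration, assuming the paper's definition of r(s) as a sum of pointwise absolute differences; the 1200-sample window, 600-lag search range and the 30%-of-maximum threshold are illustrative choices (the paper does not specify its threshold), and the FAST restart-near-previous-minimum refinement is omitted for brevity.

```python
import numpy as np

def difference_function(frame, max_lag, interval=600):
    """r(s): sum of pointwise absolute differences between the frame and a
    copy of itself shifted by s samples (the 'autocorrelation' of Sec. 2.3.1)."""
    return np.array([np.sum(np.abs(frame[:interval] - frame[s:s + interval]))
                     for s in range(1, max_lag)])

def detect_period(frame, max_lag=600, threshold=None):
    """Return the lag (in samples) of the first local minimum of r(s)
    that falls below the threshold -- the fundamental period."""
    r = difference_function(frame, max_lag)
    if threshold is None:
        threshold = 0.3 * np.max(r)  # assumed heuristic, not given in the paper
    d = np.diff(r)
    # a local minimum shows up as a derivative sign change, negative to positive
    for i in range(1, len(d)):
        if d[i - 1] < 0 <= d[i] and r[i] < threshold:
            return i + 1  # +1 because r(s) starts at lag s = 1
    return None

# usage: a synthetic 220 Hz tone, 1200-sample window at fs = 44,100 Hz
fs = 44100
t = np.arange(1200) / fs
frame = np.sin(2 * np.pi * 220 * t)
period = detect_period(frame)   # close to 44100/220 ≈ 200 samples
f0 = fs / period                # close to 220 Hz
```

The FAST variant would simply start the `for` loop near the previous window's period and widen the search outward on failure, stopping as soon as the first qualifying minimum is found.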
Hence, when we compress the spectrum a number of times (downsampling) and compare it with the original spectrum, we can see that the strongest harmonic peaks line up. The first peak in the original spectrum coincides with the second peak in the spectrum compressed by a factor of two, which coincides with the third peak in the spectrum compressed by a factor of three. Hence, when the various spectra are multiplied together, the result forms a clear peak at the fundamental frequency.
2.3.3.1. HPS Overview, Method and its Limitations
As can be seen in Figure 10, the windowed frame is first taken into the frequency domain and the magnitude of the spectrum is calculated (left). Next, the spectrum is downsampled to create more compressed versions of itself (center). Notice how the higher harmonics of the fundamental frequency align with each other in the downsampled spectra. Last, a multiplication of these spectra is performed and the maximum is found (right); this corresponds to the fundamental frequency. First, we divide the input signal into segments by applying a Hanning window, where the window size and hop size are given as input. For each window, we use the Short-Time Fourier Transform to convert the input signal from the time domain to the frequency domain. Once the input is in the frequency domain, we apply the Harmonic Product Spectrum technique to each window. The HPS involves two steps: downsampling and multiplication. To downsample, we compress the spectrum twice in each window by resampling: the first time, we compress the original spectrum by two, and the second time, by three. Once this is completed, we multiply the three spectra together and find the frequency that corresponds to the peak (maximum value). This frequency represents the fundamental frequency of that window.

Fig.10. HPS algorithm

Some nice features of this method: it is computationally inexpensive, reasonably resistant to additive and multiplicative noise, and adjustable to different kinds of inputs. For instance, we could change the number of compressed spectra to use, and we could replace the spectral multiplication with a spectral addition. However, since human pitch perception is basically logarithmic, low pitches may be tracked less accurately than high pitches. Another severe shortfall of the HPS method is that its resolution is only as good as the length of the FFT used to calculate the spectrum. If we perform a short and fast FFT, we are limited in the number of discrete frequencies we can consider. In order to gain a higher resolution in our output (and therefore see less graininess in our pitch output), we need to take a longer FFT, which requires more time [4-21].
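The HPS pipeline for a single window can be sketched as follows, using the choices stated above (a Hanning window and compression by factors of two and three). The 4096-sample window length in the usage example is an assumption for illustration; note how the resolution of the result is limited to one FFT bin, fs/N ≈ 10.8 Hz here, which is exactly the graininess limitation discussed above.

```python
import numpy as np

def hps_pitch(frame, fs, num_harmonics=3):
    """Harmonic Product Spectrum: multiply the magnitude spectrum by copies of
    itself compressed (downsampled) by factors 2..num_harmonics; the peak of
    the product lies at the fundamental frequency."""
    windowed = frame * np.hanning(len(frame))  # Hanning window, as in the text
    spectrum = np.abs(np.fft.rfft(windowed))
    hps = spectrum.copy()
    for k in range(2, num_harmonics + 1):
        compressed = spectrum[::k]             # compress the spectrum by k
        hps[:len(compressed)] *= compressed
    # search only where all compressed spectra overlap, skipping the DC bin
    peak_bin = np.argmax(hps[1:len(spectrum) // num_harmonics]) + 1
    return peak_bin * fs / len(frame)          # bin index -> frequency in Hz

# usage: a note at 440 Hz with two harmonics, 4096-sample window at 44,100 Hz
fs = 44100
t = np.arange(4096) / fs
frame = (1.0 * np.sin(2 * np.pi * 440 * t)
         + 0.5 * np.sin(2 * np.pi * 880 * t)
         + 0.3 * np.sin(2 * np.pi * 1320 * t))
f0 = hps_pitch(frame, fs)                      # within one FFT bin of 440 Hz
```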

3. RESULTS AND DISCUSSION
Pitch calculations for some gushe of different dastgâh and âvâz are depicted in Figures 11 and 12:

[Figure 11 shows two panels: the audio signal (amplitude vs. time in s) and the calculated pitch (coefficient value in Hz vs. temporal location of events in s).]

Fig.11. Pitch calculation for Dastgâh-e Segâh, Gushe: Basteh Negar

[Figure 12 shows two panels: the audio signal (amplitude vs. time in s) and the calculated pitch (coefficient value in Hz vs. temporal location of events in s).]

Fig.12. Pitch calculation for Âvâz-e Bayât-e Tork, Gushe: Darâmad-e Avval

4. CONCLUSION
As can be seen, pitch detection is a good parameter for the recognition of Iranian traditional music. In contrast to audio signals belonging to musical genres from other parts of the world, the pitches here are not regularly distributed in the same row, and some of the coefficients are distributed irregularly in the space. This feature can distinguish Iranian traditional music from other music genres, since pitch detection on the music of other countries yields much simpler patterns.

5. REFERENCES

[1] Changsheng Xu, Namunu C. Maddage, Xi Shao, Fang Cao and Qi Tian. Musical genre classification using support vector machines. Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), vol. 5, pp. V-429-432, April 2003.
[2] Muhammad Kashif Saeed Khan. Automatic Classification of Speech & Music in Digitized Audio. Master's thesis, Dhahran, Saudi Arabia, 2005.
[3] Jean During. The Radif of Mirzâ Ábdollâh: A Canonic Repertoire of Persian Music, 2nd edition, Mahoor Publications, 2006.
[4] D. Gerhard. Pitch Extraction and Fundamental Frequency: History and Current Techniques. Technical report, Dept. of Computer Science, University of Regina, 2003.
[5] A. de Cheveigné and H. Kawahara. YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111:1917, 2002. doi:10.1121/1.1458024
[6] P. McLeod and G. Wyvill. A smarter way to find pitch. In Proceedings of the International Computer Music Conference (ICMC'05), 2005.
[7] Hayes, Monson (1996). Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc., p. 393. ISBN 0-471-59431-8.
[8] Pitch Detection Algorithms, online resource from Connexions.
[9] A. Michael Noll. "Pitch Determination of Human Speech by the Harmonic Product Spectrum, the Harmonic Sum Spectrum and a Maximum Likelihood Estimate." Proceedings of the Symposium on Computer Processing in Communications, Vol. XIX, Polytechnic Press: Brooklyn, New York, 1970, pp. 779-797.
[10] A. Michael Noll. "Cepstrum Pitch Determination." Journal of the Acoustical Society of America, Vol. 41, No. 2, February 1967, pp. 293-309.
[11] Mitre, Adriano; Queiroz, Marcelo; Faria, Régis. Accurate and Efficient Fundamental Frequency Determination from Precise Partial Estimates. Proceedings of the 4th AES Brazil Conference, pp. 113-118, 2006.
[12] Brown JC and Puckette MS (1993). A high resolution fundamental frequency determination based on phase changes of the Fourier transform. J. Acoust. Soc. Am., Volume 94, Issue 2, pp. 662-667.
[13] Stephen A. Zahorian and Hongbing Hu. A spectral/temporal method for robust fundamental frequency tracking. The Journal of the Acoustical Society of America, 123(6), 2008. doi:10.1121/1.2916590
[14] Stephen A. Zahorian and Hongbing Hu. YAAPT Pitch Tracking MATLAB Function.
[15] Huang, Xuedong; Alex Acero; Hsiao-Wuen Hon (2001). Spoken Language Processing. Prentice Hall PTR, p. 325. ISBN 0-13-022616-5.
[16] Curtis Roads. The Computer Music Tutorial. The MIT Press, 1998.
[17] Puckette, Apel, Zicarelli. Real-time audio analysis tools for Pd and MSP.
[18] Slaney, Lyon. A Perceptual Pitch Detector. http://www.interval.com/~malcolm/pubs.html
[19] Hui-Ling Lu. A Hybrid Fundamental Frequency Estimator for Singing Voice.
[20] Serra. Pitch Detection: Musical Sound Modeling with Sinusoids plus Noise. www.iua.upf.es/~xserra/articles/msm/pitch.html
[21] Tristan Jehan. Pitch Detection. http://www.cnmat.berkeley.edu/~tristan/Report/node4.html

How to cite this article: Firouzfar S, Layegh M A, Haghipour S. Calculation of pitch for the Iranian traditional music genre classification studied on two stringed instruments the tar and the setar. J. Fundam. Appl. Sci., 2016, 8(2S), 1986-2001.