Music Genre Classification Using Spectral Analysis and Sparse Representation of the Signals

Total Pages: 16

File Type: PDF, Size: 1020 KB

MUSIC GENRE CLASSIFICATION USING SPECTRAL ANALYSIS AND SPARSE REPRESENTATION OF THE SIGNALS

Mehdi Banitalebi-Dehkordi, Amin Banitalebi-Dehkordi

ABSTRACT

In this paper, we propose a robust music genre classification method based on a sparse-FFT feature extraction scheme, which combines the discriminating power of spectral analysis of non-stationary audio signals with the capability of sparse representation based classifiers. The feature extraction method combines two sets of features, namely short-term features (extracted from windowed signals) and long-term features (extracted from combinations of the extracted short-time features). Experimental results demonstrate that the proposed feature extraction method leads to a sparse representation of the audio signals. As a result, a significant reduction in the dimensionality of the signals is achieved. The extracted features are then fed into a sparse representation based classifier (SRC). Our experimental results on the GTZAN database demonstrate that the proposed method outperforms other state-of-the-art SRC approaches. Moreover, the computational efficiency of the proposed method is better than that of other Compressive Sampling (CS) based classifiers.

Index Terms— Feature Extraction, Compressive Sampling, Genre Classification

1. INTRODUCTION

Audio classification provides useful information for understanding the content of both audio and audio-visual recordings. Audio information can be classified from different points of view. Among them, the generic classes of music have attracted a lot of attention [1]. Low bit-rate audio coding is an application that can benefit from distinguishing music classes [2]. In previous studies, various classification schemes and feature extraction methods have been used for this purpose. Most music genre classification algorithms resort to the so-called bag-of-features approach [3-4], which models the audio signals by the long-term statistical distribution of their short-time features. Features commonly exploited for music genre classification can be roughly classified into timbral texture, rhythmic, and pitch content features, or combinations of these [3-4]. Having extracted descriptive features, pattern recognition algorithms are employed to classify the signals into genres. These features can be categorized into two types, namely short-term and long-term features. Short-term features are derived from a short segment (such as a frame). In contrast, long-term features usually characterize the variation of spectral shape or beat information within a long segment. The short and long segments are also referred to as the "analysis window" and the "texture window", respectively [5].

In short-time estimates, the signal is partitioned into successive frames using small windows. If an appropriate window size is chosen, the signal within each frame can be considered stationary. The windowed signal is then transformed into another representation space in order to achieve good discrimination and/or energy compaction properties. The length of a short-term analysis window depends on the signal type. In music classification, the length of a short-term window is influenced by the adopted audio coding scheme [6]. A music data window shorter than 50 ms is usually referred to as a short-term window [7].

Non-stationary signals such as audio signals can be modeled as the product of a narrow-bandwidth low-pass process modulating a higher-bandwidth carrier [7]. The low-pass content of these signals cannot be effectively captured by a too-short analysis window. To address this deficiency of short-term feature analysis, we propose a simple but very effective method for combining short-term and long-term features.

Optimum selection of the number of features also plays an important role in this context. Too few features can fail to encapsulate sufficient information, while too many features usually degrade performance since some of them may be irrelevant. Moreover, too many features can also entail excessive computation, downgrading the system's efficiency. As a result, we need an effective method for feature extraction and/or selection, and Compressive (or Sparse) Sampling turns out to be an appropriate tool for this purpose. In this paper, a Compressive Sampling (CS) based classifier, built on the theory of sparse representation and reconstruction of signals, is presented for music genre classification. The innovative aspect of our approach lies in the adopted method of combining short-term (extracted from windowed signals) and long-term (extracted from combinations of the extracted short-time features) signal characteristics to make a decision.

An important issue in audio classification algorithms which has not been widely investigated is the effect of background noise on classification performance. In fact, a classification algorithm trained on clean sequences may fail to work properly when the actual test sequences contain background noise at a certain signal-to-noise ratio (SNR). For practical applications in which environmental sounds are involved in audio classification tasks, noise robustness is an essential characteristic of the processing system. We show that the proposed feature extraction and data classification algorithm is robust to background noise.

The rest of this paper is organized as follows. The next section reviews the compressive sensing background. The proposed feature extraction algorithm and the corresponding CS-based classifier are then described, followed by the experimental settings and results and, finally, the conclusions.

2. COMPRESSIVE SENSING BACKGROUND

We sample the speech signal x(t) at the Nyquist rate and process it in frames of N samples. Each frame is then an N x 1 vector x, which can be represented as x = ΨX, where Ψ is an N x N matrix whose columns are the similarly sampled basis functions ψ_i(t), and X is the vector that selects the linear combination of the basis functions. X can be thought of as x in the domain of Ψ, and it is X that is required to be sparse for compressed sensing to perform well. We say that X is K-sparse if it contains only K non-zero elements; in other words, x can be exactly represented by a linear combination of K basis functions.

It is also important to note that compressed sensing will also recover signals that are not truly sparse, as long as these signals are highly compressible, meaning that most of the energy of x is contained in a small number of the elements of X.

Given a measurement sample Y ∈ R^M and a dictionary D ∈ R^{M x N} (the columns of D are referred to as the atoms), we seek a vector solution X satisfying

\min_X \|X\|_0 \quad \text{s.t.} \quad Y = DX. \qquad (1)

In the above equation, \|X\|_0 (known as the l_0 norm) is the number of non-zero coefficients of X [5].

In Fig. 1, consider a signal X of length N that has K non-zero coefficients on a sparse basis matrix Ψ, and consider also an M x N measurement basis matrix Φ, with M << N, where the rows of Φ are incoherent with the columns of Ψ. In matrix notation we have X = Ψθ, in which θ can be approximated using only K << N non-zero entries. The CS theory states that such a signal X can be reconstructed by taking linear, non-adaptive measurements as follows [10]:

Y = \Phi X = \Phi\Psi\theta = A\theta, \qquad (2)

where Y represents an M x 1 sampled vector and A = ΦΨ is an M x N matrix. The reconstruction is equivalent to finding the signal's sparse coefficient vector θ, which can be cast as an l_0 optimization problem:

\min_\theta \|\theta\|_0 \quad \text{s.t.} \quad Y = \Phi\Psi\theta = A\theta. \qquad (3)

Generally, (3) is NP-hard, and the l_0 optimization is replaced with an l_1 optimization [10]:

\min_\theta \|\theta\|_1 \quad \text{s.t.} \quad Y = \Phi\Psi\theta = A\theta. \qquad (4)

From (4) we recover θ with high probability if enough measurements are taken. In general, the l_p norm is defined as

\|a\|_p = \Big( \sum_j |a_j|^p \Big)^{1/p}. \qquad (5)

Equation (4) can be reformulated as

\min_\theta \|Y - \Phi\Psi\theta\|_2 \quad \text{s.t.} \quad \|\theta\|_0 = K. \qquad (6)

There are a variety of algorithms to perform the reconstructions in (4) and (6); in this paper we make use of OMP [11] to solve (6). OMP is a relatively efficient iterative algorithm that produces one component of the estimate of θ per iteration, and thus allows simple control of the sparsity of the estimate. As the true sparsity is often unknown, the OMP algorithm is run for a pre-determined number of iterations, K, resulting in a K-sparse estimate of θ.

Figure 1. The measurement process of Compressive Sampling.

3. FEATURE EXTRACTION

In many music genre classification algorithms, the authors use the speech signal itself to determine the music classes, which affects the computational complexity. In [1-4], the computational complexity of the algorithms is high, since they operate on the high-dimensional received signals. In [11], the authors also used the high-dimensional received signals, but tried to reduce the computational complexity with a feature extraction process that selects the data useful for music genre classification.

Timbral texture features are frequently used in various music information retrieval systems [8]. Some timbral texture features which are widely used for audio classification are summarized in Table I [1]. Among them, MFCCs (Mel Frequency Cepstral Coefficients), spectral centroid, spectral roll-off, spectral flux and zero crossings are short-time features, so their statistics are computed over a texture window. The low-energy feature is a long-time feature.

In previous speech processing research, various feature extraction methods have been used. Scheirer and Slaney proposed features such as the 4-Hz modulation energy and spectral centroids [2]. Various content-based features have been proposed for applications such as sound classification and music information retrieval (MIR) [3-4]. These features can be categorized into two types, namely short-term and long-term features. The short-term features are derived from a short segment (such as a frame). In contrast, long-term features usually characterize the variation of spectral shape or beat information within a representation.
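As a rough illustration of the short-term versus texture-window distinction discussed above, the sketch below computes a few commonly used short-time features (MFCCs, spectral centroid, spectral roll-off, zero crossings) with librosa and summarizes their trajectories with long-term statistics. This is a generic illustration under assumed frame and hop lengths, not the sparse-FFT feature extraction proposed in the paper; the GTZAN-style path in the comment is hypothetical.

    import numpy as np
    import librosa

    def texture_features(path, sr=22050, n_fft=1024, hop=512):
        """Short-time features per analysis window, summarized over the excerpt."""
        y, sr = librosa.load(path, sr=sr)

        # Short-term features: one column per analysis window (frame).
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=n_fft, hop_length=hop)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
        zcr = librosa.feature.zero_crossing_rate(y, frame_length=n_fft, hop_length=hop)
        short_term = np.vstack([mfcc, centroid, rolloff, zcr])  # (n_features, n_frames)

        # Long-term ("texture window") description: statistics of the short-term
        # feature trajectories over the longer segment.
        return np.concatenate([short_term.mean(axis=1), short_term.std(axis=1)])

    # Example (hypothetical path to a 30 s GTZAN-style excerpt):
    # feats = texture_features("gtzan/blues/blues.00000.au")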
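The reconstruction step of Section 2 can be exercised with off-the-shelf tools. The minimal sketch below draws a K-sparse coefficient vector, takes M compressive measurements as in Eq. (2), and solves Eq. (6) with scikit-learn's Orthogonal Matching Pursuit. The dimensions N, M, K, the identity sparsifying basis Ψ, and the Gaussian measurement matrix Φ are illustrative assumptions, not values used by the authors.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    N, M, K = 512, 128, 10           # signal length, measurements, sparsity (assumed)

    # K-sparse coefficient vector theta; with Psi = I we have x = Psi @ theta = theta.
    theta = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    theta[support] = rng.standard_normal(K)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
    A = Phi                                         # A = Phi @ Psi (Psi = identity here)
    Y = A @ theta                                   # compressive measurements, Eq. (2)

    # Eq. (6): minimize ||Y - A theta||_2 subject to ||theta||_0 = K, via OMP.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
    omp.fit(A, Y)
    theta_hat = omp.coef_

    print("support recovered exactly:",
          np.array_equal(np.sort(np.flatnonzero(theta_hat)), np.sort(support)))
    print("relative error:", np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta))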
Recommended publications
  • Multimodal Deep Learning for Music Genre Classification
    Oramas, S., Barbieri, F., Nieto, O., and Serra, X. (2018). Multimodal Deep Learning for Music Genre Classification. Transactions of the International Society for Music Information Retrieval, V(N), pp. xx–xx, DOI: https://doi.org/xx.xxxx/xxxx.xx
    Multimodal Deep Learning for Music Genre Classification. Sergio Oramas, Francesco Barbieri, Oriol Nieto, and Xavier Serra.
    Abstract: Music genre labels are useful to organize songs, albums, and artists into broader groups that share similar musical characteristics. In this work, an approach to learn and combine multimodal data representations for music genre classification is proposed. Intermediate representations of deep neural networks are learned from audio tracks, text reviews, and cover art images, and further combined for classification. Experiments on single and multi-label genre classification are then carried out, evaluating the effect of the different learned representations and their combinations. Results on both experiments show how the aggregation of learned representations from different modalities improves the accuracy of the classification, suggesting that different modalities embed complementary information. In addition, the learning of a multimodal feature space increases the performance of pure audio representations, which may be especially relevant when the other modalities are available for training but not at prediction time. Moreover, a proposed approach for dimensionality reduction of target labels yields major improvements in multi-label classification, not only in terms of accuracy but also in terms of the diversity of the predicted genres, which implies a more fine-grained categorization. Finally, a qualitative analysis of the results sheds some light on the behavior of the different modalities in the classification task.
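    As a rough sketch of the kind of multimodal combination described in this abstract (not the paper's actual architecture), the module below projects separately learned audio, text, and cover-art embeddings into a common space, concatenates them, and classifies; all dimensions and layer sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        class MultimodalGenreClassifier(nn.Module):
            """Late fusion of per-modality embeddings for genre classification."""
            def __init__(self, d_audio=256, d_text=300, d_image=512, d_hidden=128, n_genres=15):
                super().__init__()
                self.audio_proj = nn.Linear(d_audio, d_hidden)
                self.text_proj = nn.Linear(d_text, d_hidden)
                self.image_proj = nn.Linear(d_image, d_hidden)
                self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(3 * d_hidden, n_genres))

            def forward(self, audio_emb, text_emb, image_emb):
                fused = torch.cat([self.audio_proj(audio_emb),
                                   self.text_proj(text_emb),
                                   self.image_proj(image_emb)], dim=-1)
                return self.classifier(fused)  # logits over genre labels

        # model = MultimodalGenreClassifier()
        # logits = model(torch.randn(8, 256), torch.randn(8, 300), torch.randn(8, 512))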
  • Music Genre Classification
    Music Genre Classification. Michael Haggblade, Yang Hong, Kenny Kao.
    1 Introduction. Music classification is an interesting problem with many applications, from Drinkify (a program that generates cocktails to match the music) to Pandora to dynamically generating images that complement the music. However, music genre classification has been a challenging task in the field of music information retrieval (MIR). Music genres are hard to systematically and consistently describe due to their inherent subjective nature. In this paper, we investigate various machine learning algorithms, including k-nearest neighbor (k-NN), k-means, multi-class SVM, and neural networks to classify the following four genres: classical, jazz, metal, and pop. We relied purely on Mel Frequency Cepstral Coefficients (MFCC) to characterize our data as recommended by previous work in this field [5]. We then applied the machine learning algorithms using the MFCCs as our features. Lastly, we explored an interesting extension by mapping images to music genres. We matched the song genres with clusters of images by using the Fourier-Mellin 2D transform to extract features and clustered the images with k-means.
    2 Our Approach. 2.1 Data Retrieval Process. Marsyas (Music Analysis, Retrieval, and Synthesis for Audio Signals) is an open source software framework for audio processing with specific emphasis on Music Information Retrieval Applications. Its website also provides access to a database, GTZAN Genre Collection, of 1000 audio tracks each 30 seconds long. There are 10 genres represented, each containing 100 tracks. All the tracks are 22050Hz Mono 16-bit audio files in .au format [2].
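    As a sketch of how the classifier comparison above could be set up (not the authors' code), the snippet below cross-validates k-NN and a multi-class SVM on per-track MFCC summary vectors; the feature matrix is a synthetic stand-in, and the parameters (13 MFCCs summarized by mean and standard deviation, k = 5) are assumptions.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.standard_normal((400, 26))  # stand-in for per-track MFCC mean/std vectors
        y = np.repeat(["classical", "jazz", "metal", "pop"], 100)

        for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                          ("multi-class SVM", SVC(kernel="rbf", C=1.0))]:
            pipe = make_pipeline(StandardScaler(), clf)
            print(name, cross_val_score(pipe, X, y, cv=5).mean())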
  • California State University, Northridge Where's The
    CALIFORNIA STATE UNIVERSITY, NORTHRIDGE. WHERE'S THE ROCK? AN EXAMINATION OF ROCK MUSIC'S LONG-TERM SUCCESS THROUGH THE GEOGRAPHY OF ITS ARTISTS' ORIGINS AND ITS STATUS IN THE MUSIC INDUSTRY. A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Arts in Geography, Geographic Information Science, by Mark T. Ulmer, May 2015. The thesis of Mark Ulmer is approved: Dr. James Craine; Dr. Ronald Davidson; Dr. Steven Graves, Chair. California State University, Northridge.
    Acknowledgements: I would like to thank my committee members and all of my professors at California State University, Northridge, for helping me to broaden my geographic horizons. Dr. Boroushaki, Dr. Cox, Dr. Craine, Dr. Davidson, Dr. Graves, Dr. Jackiewicz, Dr. Maas, Dr. Sun, and David Deis, thank you!
    TABLE OF CONTENTS: Signature Page; Acknowledgements; List of Figures and Tables; Abstract; Chapter 1 – Introduction
  • Ethnomusicology a Very Short Introduction
    ETHNOMUSICOLOGY: A VERY SHORT INTRODUCTION. Timothy Rice. Contents: Chapter 1 – Defining ethnomusicology (Ethnos; Mousikē; Logos); Chapter 2 – A bit of history (Ancient and medieval precursors; Exploration and enlightenment; Nationalism, musical folklore, and ethnology; Early ethnomusicology; "Mature" ethnomusicology); Chapter 3 – Conducting
  • Fabian Holt. 2007. Genre in Popular Music. Chicago and London: University of Chicago Press
    Fabian Holt. 2007. Genre in Popular Music. Chicago and London: University of Chicago Press. Reviewed by Elizabeth K. Keenan. The concept of genre has often presented difficulties for popular music studies. Although discussions of genres and their boundaries frequently emerge from the ethnographic study of popular music, these studies often fail to theorize the category of genre itself, instead focusing on elements of style that demarcate a localized version of a particular genre. In contrast, studies focusing on genre itself often take a top-down, Adorno-inspired, culture-industry perspective, locating the concept within the marketing machinations of the major-label music industry. Both approaches have their limits, leaving open numerous questions about how genres form and change over time; how people employ the concept of genre in their musical practices, whether listening, purchasing, or performing; and how culture shapes our construction of genre categories. In Genre in Popular Music, Fabian Holt deftly integrates his theoretical model of genre into a series of ethnographic and historical case studies from both the center and the boundaries of popular music in the United States. These individual case studies, from reactions to Elvis Presley in Nashville to a portrait of the lesser-known Chicago jazz and rock musician Jeff Parker, enable Holt to develop a multifaceted theory of genre that takes into consideration how discourse and practice, local experience and translocal circulation, and industry and individuals all shape popular music genres in the United States. This book is a much-needed addition to studies of genre, as well as a welcome invitation to explore more fully the possibilities for doing ethnography in American popular music studies.
  • Genre Classification Via Album Cover
    Genre Classification via Album Cover. Jonathan Li, Department of Computer Science, Stanford University, [email protected]; Di Sun, Department of Computer Science, Stanford University, [email protected]; Tongxin Cai, Department of Civil and Environmental Engineering, Stanford University, [email protected].
    Abstract: Music information retrieval techniques are being used in recommender systems and to automatically categorize music. Research in music information retrieval often focuses on textual or audio features for genre classification. This paper takes a direct look at the ability of album cover artwork to predict album genre and expands upon previous work in this area. We use transfer learning with VGG-16 to improve classifier performance. Our work results in an increase of 80% in our topline metric, area under the precision-recall curve, going from 0.315 to 0.5671, with precision and recall increasing from 0.694 and 0.253 to 0.7336 and 0.3812, respectively.
    1 Introduction. Genre provides information and topic classification in music. Additionally, the vast majority of music is written collaboratively and with several influences, which leads to multi-label (multi-genre) albums, i.e., a song and its album cover may belong to more than one genre. Music genre classification, first introduced by Tzanetakis et al. [18] in 2002 as a pattern recognition task, is one of the many branches of Music Information Retrieval (MIR). From here, other tasks on musical data, such as music generation, recommendation systems, instrument recognition and so on, can be performed. Nowadays, companies use music genre classification either to offer recommendations to their users (such as iTunes, Spotify) or simply as a product (for example Shazam).
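    A minimal sketch of the transfer-learning setup described above, assuming a torchvision VGG-16 backbone pretrained on ImageNet, a frozen convolutional stack, and a replaced final classifier layer; the number of genre classes and the optimizer settings are assumptions, not taken from the paper.

        import torch
        import torch.nn as nn
        from torchvision import models

        num_genres = 10  # assumed number of album-genre classes

        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        for p in model.features.parameters():  # freeze the convolutional backbone
            p.requires_grad = False
        model.classifier[6] = nn.Linear(4096, num_genres)  # new genre prediction head

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

        # One training step on a batch of 224x224 cover images with integer labels:
        # logits = model(images)
        # loss = criterion(logits, labels)
        # optimizer.zero_grad(); loss.backward(); optimizer.step()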
  • Genre, Form, and Medium of Performance Terms in Music Cataloging
    Genre, Form, and Medium of Performance Terms in Music Cataloging. Ann Shaffer, University of Oregon, [email protected]. OLA 2015 Pre-Conference.
    What We'll Cover: • New music vocabularies for LC Genre & Form and LC Medium of Performance Terms • Why were they developed? • How to use them in cataloging? • What does it all mean?
    Genre, Form, & MoP in LCSH Music Terms: When describing music materials, LCSH have often blurred the distinction between • Subject (what the work is about) • Genre (what the work is) • Form (the purpose or format of the work) • Medium of performance (what performing forces the work requires).
    For a book, LCSH describe what the book is about (subject): Symphony; Opera; Bluegrass music; Rap (music); Waltz. For a music score, LCSH describe what the musical work is: Symphonies; Operas; Bluegrass music; Waltzes. LCSH also describe what the format is: $v Scores and parts; $v Vocal scores; $v Fake books. And what the medium of performance is: Sonatas (Cello and piano); Banjo with instrumental ensemble; String quartets. For an audio recording, LCSH describe what the musical work is: Popular music; Folk music; Flamenco music; Concertos. And what the medium of performance is: Songs (High voice) with guitar; Jazz ensembles; Choruses (Men's voices, 4 parts) with piano; Concertos (Tuba with string orchestra).
    Why Is This Problematic? • Machines can't distinguish between topic, form, genre, and medium of performance in the LCSH • All coded in same field (650) • Elements of headings & subdivisions
  • You Can Judge an Artist by an Album Cover: Using Images for Music Annotation
    You Can Judge an Artist by an Album Cover: Using Images for Music Annotation. Jānis Lībeks, Swarthmore College, Swarthmore, PA 19081, Email: [email protected]; Douglas Turnbull, Department of Computer Science, Ithaca College, Ithaca, NY 14850, Email: [email protected].
    Abstract—While the perception of music tends to focus on our acoustic listening experience, the image of an artist can play a role in how we categorize (and thus judge) the artistic work. Based on a user study, we show that both album cover artwork and promotional photographs encode valuable information that helps place an artist into a musical context. We also describe a simple computer vision system that can predict music genre tags based on content-based image analysis. This suggests that we can automatically learn some notion of artist similarity based on visual appearance alone. Such visual information may be helpful for improving music discovery in terms of the quality of recommendations, the efficiency of the search process, and the aesthetics of the multimedia experience.
    Fig. 1. Illustrative promotional photos (left) and album covers (right) of artists with the tags pop (1st row), hip hop (2nd row), metal (3rd row). See Section VI for attribution.
    I. INTRODUCTION. Imagine that you are at a large summer music festival. You walk over to one of the side stages and observe a band which is about to begin their sound check. Each member of the band has long unkempt hair and is wearing black t-shirts, […] appearance. Such a measure is useful, for example, because it allows us to develop a novel music retrieval paradigm in which a user can discover new artists by specifying a query image.
  • Multi-Label Music Genre Classification from Audio, Text, and Images Using Deep Features
    MULTI-LABEL MUSIC GENRE CLASSIFICATION FROM AUDIO, TEXT, AND IMAGES USING DEEP FEATURES. Sergio Oramas (1), Oriol Nieto (2), Francesco Barbieri (3), Xavier Serra (1). (1) Music Technology Group, Universitat Pompeu Fabra; (2) Pandora Media Inc.; (3) TALN Group, Universitat Pompeu Fabra. sergio.oramas, francesco.barbieri, [email protected], [email protected]
    ABSTRACT. Music genres allow us to categorize musical items that share common characteristics. Although these categories are not mutually exclusive, most related research is traditionally focused on classifying tracks into a single class. Furthermore, these categories (e.g., Pop, Rock) tend to be too broad for certain applications. In this work we aim to expand this task by categorizing musical items into multiple and fine-grained labels, using three different data modalities: audio, text, and images. To this end we present MuMu, a new dataset of more than 31k albums classified into 250 genre classes. For every album we have collected the cover image, text reviews, and audio tracks.
    […] exclusive (i.e., a song could be Pop and at the same time have elements from Deep House and a Reggae groove). In this work we aim to advance the field of music classification by framing it as multi-label genre classification of fine-grained genres. To this end, we present MuMu, a new large-scale multimodal dataset for multi-label music genre classification. MuMu contains information on roughly 31k albums classified into one or more of 250 genre classes. For every album we analyze the cover image, text reviews, and audio tracks, with a total of approximately 147k audio tracks and 447k album reviews. Furthermore, we exploit this dataset with a novel deep learning approach to learn
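    A minimal sketch of the multi-label formulation described above: each album is represented by a feature vector (standing in for the learned deep features) and may carry several genre labels at once. The synthetic features, the small label set, and the one-vs-rest logistic regression below are illustrative assumptions, not the authors' model.

        import numpy as np
        from sklearn.preprocessing import MultiLabelBinarizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import average_precision_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 128))  # stand-in for per-album deep features
        genres = np.array(["Pop", "Rock", "Jazz", "Electronic", "Folk"])
        labels = [list(rng.choice(genres, size=rng.integers(1, 4), replace=False))
                  for _ in range(1000)]       # one or more genre labels per album

        Y = MultiLabelBinarizer().fit_transform(labels)  # binary indicator matrix
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
        scores = clf.predict_proba(X_te)
        print("macro AUC-PR:", average_precision_score(Y_te, scores, average="macro"))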
  • Classification of Music Genre Muralidhar Talupur, Suman Nath, Hong Yan
    Project Report for 15781. Classification of Music Genre. Muralidhar Talupur, Suman Nath, Hong Yan. As the demand for multimedia grows, the development of information retrieval systems, including systems that handle information about music, is of increasing concern. Radio stations and music TV channels hold archives of millions of music tapes. Gigabytes of music files are also spread over the web. Automating the searching and organizing of music files based on their genre is a challenging task. In this report, we present an approach to identifying the genre of a music file based on its content. We present experimental results showing that our approach is effective in identifying the genre of a music file with an acceptable level of confidence.
    1. Introduction. Vast musical databases are currently accessible over computer networks (e.g., the Web), creating a need for sophisticated methods to search and organize these databases. Because music is a multifaceted, multi-dimensional medium, it demands specialized representations, abstractions and processing techniques for effective search that are fundamentally different from those used for other retrieval tasks. The project "Exploiting Style as Retrieval and Classification Mechanism" undertaken by Roger Dannenberg and his group aims to develop hierarchical, stochastic music representations and concomitant storage and retrieval mechanisms that are well suited to music's unique characteristics by exploiting reductionist theories of musical structure and performance (i.e., musical style). As a part of this whole effort, we decided to explore whether it is possible to make predictions about music genre/type based on the content of the audio file. Work done previously shows that spectro-temporal features are sufficiently rich to allow for coarse classification of music from a brief sample of the music.
  • Music Genre/Form Terms in LCGFT Derivative Works
    Music Genre/Form Terms in LCGFT Derivative works … Adaptations Arrangements (Music) Intabulations Piano scores Simplified editions (Music) Vocal scores Excerpts Facsimiles … Illustrated works … Fingering charts … Posters Playbills (Posters) Toy and movable books … Sound books … Informational works … Fingering charts … Posters Playbills (Posters) Press releases Programs (Publications) Concert programs Dance programs Film festival programs Memorial service programs Opera programs Theater programs … Reference works Catalogs … Discographies ... Thematic catalogs (Music) … Reviews Book reviews Dance reviews Motion picture reviews Music reviews Television program reviews Theater reviews Instructional and educational works Teaching pieces (Music) Methods (Music) Studies (Music) Music Accompaniments (Music) Recorded accompaniments Karaoke Arrangements (Music) Intabulations Piano scores Simplified editions (Music) Vocal scores Art music Aʼak Aleatory music Open form music Anthems Ballades (Instrumental music) Barcaroles Cadenzas Canons (Music) Rounds (Music) Cantatas Carnatic music Ālāpa Chamber music Part songs Balletti (Part songs) Cacce (Part songs) Canti carnascialeschi Canzonets (Part songs) Ensaladas Madrigals (Music) Motets Rounds (Music) Villotte Chorale preludes Concert etudes Concertos Concerti grossi Dastgāhs Dialogues (Music) Fanfares Finales (Music) Fugues Gagaku Bugaku (Music) Saibara Hát ả đào Hát bội Heike biwa Hindustani music Dādrās Dhrupad Dhuns Gats (Music) Khayāl Honkyoku Interludes (Music) Entremés (Music) Tonadillas Kacapi-suling
  • From the Concert Hall to the Cinema
    FROM THE CONCERT HALL TO THE CINEMA: THE JOURNEY OF THE 20TH CENTURY CLASSICAL AMERICAN SOUND. By Rebecca Ellen Stegall, Liberty University. A master's thesis presented in partial fulfillment of the requirements for the degree of Master of Arts in Music Education, Liberty University, Lynchburg, VA, April 2017. APPROVED BY: Dr. Monica D. Taylor, Ph.D., Committee Chair; Dr. John D. Kinchen III, D.M.A., Committee Member; Dr. Vernon M. Whaley, Ph.D., Dean of the School of Music.
    Acknowledgements: I would first like to acknowledge and personally thank my family for supporting me endlessly through this process and for being an encouragement to "just keep at it" when it seemed I could not continue. Thank you to Dr. Monica Taylor for walking with me through this process and being tremendously patient with me along the way. I would also like to acknowledge a professor that has had a tremendous impact upon both my education and my life. Dr. John Kinchen, thank you for teaching me most of what I know about music, inspiring me to take on such daunting research and to step outside my comfort zone, and endlessly encouraging and pushing me to do my absolute best and pursue what the Lord has in store for me. You have made such an impact on my life.