
Article

Texture Classification Using Spectral Entropy of Acoustic Signal Generated by a Human Echolocator

Raja Syamsul Azmir Raja Abdullah 1,*, Nur Luqman Saleh 1, Sharifah Mumtazah Syed Abdul Rahman 1, Nur Syazmira Zamri 1 and Nur Emileen Abdul Rashid 2

1 Wireless and Photonic Network Research Centre (WiPNET), Faculty of Engineering, Universiti Putra Malaysia (UPM), Serdang 43400, Malaysia; [email protected] (N.L.S.); [email protected] (S.M.S.A.R.); [email protected] (N.S.Z.)
2 Microwave Research Institute, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia; [email protected]
* Correspondence: [email protected]; Tel.: +603-976-943-47

Received: 12 August 2019; Accepted: 29 September 2019; Published: 2 October 2019

Abstract: Human echolocation is a biological process wherein a human emits a punctuated acoustic signal and the ear analyzes the echo in order to perceive the surroundings. This peculiar acoustic signal is normally produced by clicking inside the mouth. This paper utilized this unique acoustic signal from a human echolocator as the source of the transmitted signal in a synthetic human echolocation technique. The aim of the paper was thus to extract information from the echo signal and develop a classification scheme to identify signals reflected from different textures at various distances. The scheme was based on spectral entropy extracted from the Mel-scale filtering output in the Mel-frequency cepstrum coefficient (MFCC) of a reflected echo signal. The classification process involved data mining, feature extraction, clustering, and classifier validation. The reflected echo signals were obtained via an experimental setup resembling a human echolocation scenario, configured for synthetic data collection. Unlike in typical speech signals, entropy extracted from formant characteristics was not readily visible in the human mouth-click signals. Instead, the multiple spectral peaks derived from the synthesized mouth-click signal were taken as the entropy obtained from the Mel-scale filtering output. To realize the classification process, K-means clustering and K-nearest neighbor processes were employed. Moreover, the impact of sound propagation on the extracted spectral entropy used in classification was also investigated. The classifier performance reported herein indicated that spectral entropy is essential for human echolocation.

Keywords: spectral entropy; acoustic signal; human echolocation; classification; MFCC

1. Introduction

The term "echolocation" was initially used by Griffin to describe bats' ability to safely navigate and locate their prey using ultrasound call signals [1]. What is less known is that a group of people (often blind people) known as human echolocators have adapted to visualize their surroundings using a similar concept. Human echolocation is the ability to perceive one's surroundings by listening to the echoes of actively emitted sound signals reflected from obstacles. Recent studies have reported that these people are able to visualize their surroundings by "seeing through sound". They exhibit exceptional performance in defining their space and are even able to accurately discriminate the profiles of objects. This ability has elicited inquiries among scholars as to how spaces and objects can be recognized using mouth-click signals. Although a series of studies have addressed this question, most works have concentrated on the perceptual concept rather than on technical explanations.
It is known that human echolocators depend on their auditory systems in order to translate meaningful cues from mouth-click signals and turn them into visual perception, as illustrated in Figure 1 [1].

Figure 1. The human echolocation concept.

It is essential for humans to recognize the acoustic signals entering their ears, mainly for communication and recognition. For comparison, radar and sonar are good examples of man-made sensors which benefit from such classification; the illumination of a target is associated with detection and classification schemes [2–4]. This meaningful information is useful in distinguishing the profile of the detected target and helps to minimize poor and false detection events. For human echolocation, how people recognize spaces and objects using mouth-click signals has still not been clearly verified. In addition, no studies to date have reported a technical classification process for human mouth-clicks. Moreover, human mouth-clicks do not inherit formant properties like those in typical speech, which have been proven to be strong features in speech recognition. Instead, the multiple frequency components (spectral entropy) found in the signal serve as features for the classification process. This gap should be investigated to ensure continuity and to utilize the full potential of human echolocation. We were thus motivated to analyze and design this classification process.
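As an illustration of this type of feature, a minimal sketch is given below of how spectral entropy might be computed from the Mel-scale filtering output of an MFCC-style front end; the file name, sampling rate, and filter-bank parameters are assumptions made for the example and are not the settings used in this work.

```python
# Illustrative sketch (not the paper's implementation): spectral entropy of a
# mouth-click echo taken from the Mel-scale filtering output (Mel-spectrum)
# inside an MFCC-style front end. File name and parameters are assumed values.
import numpy as np
import librosa

# Load the recorded echo signal (mono); "echo_click.wav" is a placeholder name.
signal, fs = librosa.load("echo_click.wav", sr=None, mono=True)

# Mel-scale filtering output: power spectrogram mapped onto a bank of
# Mel-spaced triangular filters, as done before the cepstral step of MFCC.
mel_spec = librosa.feature.melspectrogram(
    y=signal, sr=fs, n_fft=1024, hop_length=256, n_mels=40, power=2.0
)

def spectral_entropy(mel_frame, eps=1e-12):
    """Shannon entropy of one Mel-spectrum frame, normalized to [0, 1]."""
    p = mel_frame / (mel_frame.sum() + eps)   # treat the frame as a probability mass
    h = -np.sum(p * np.log2(p + eps))         # Shannon entropy in bits
    return h / np.log2(len(mel_frame))        # normalize by log2(number of filters)

# One entropy value per analysis frame; the frame-wise vector (or its mean)
# can then serve as the feature describing the reflected echo.
entropy_per_frame = np.array([spectral_entropy(frame) for frame in mel_spec.T])
echo_feature = entropy_per_frame.mean()
```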
Studies related to the human auditory system became the primary references in this paper for the classification process of human mouth-clicks. We herein propose a classification scheme for human mouth-clicks using experimental data, by utilizing a human auditory model (Mel-frequency cepstral coefficient (MFCC) framework processing). By understanding how human echolocators carry out echolocation (utilizing the mouth-click), a new dimension could be opened in the design of man-made sensors (especially radar and sonar) in the near future. In this paper, we do not claim that the classification schemes described closely replicate the strategy used in human echolocation; we present the best intuitive approach for decision-making based on the credible knowledge of human echolocation techniques and the human hearing process.

Hence, this study aimed to investigate the characteristics of the echo signal (acoustic mouth-click) reflected from different textures at various distances. We developed a classification scheme to classify textures based on the spectral entropy of the echo signal obtained from the Mel-scale filtering outputs (Mel-spectrum) incorporated into the MFCC framework. The classification tasks included distinguishing hard, medium, and soft textures, and grouping them into their respective cluster regions using the reflected echo signals at different distances. The classification routine was realized using the K-means process and was validated using K-nearest neighbors (K-NN), as sketched below. The paper is structured as follows: Section 2 provides a description of the characteristics of the human mouth-click signal, and presents a brief explanation
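A minimal sketch of such a clustering-and-validation routine is given below, assuming the spectral-entropy features have already been arranged into a feature matrix; the cluster count, neighbor count, and data split are illustrative choices, not the settings used in this study.

```python
# Illustrative sketch (assumed parameters, not the paper's implementation):
# cluster spectral-entropy features with K-means, then validate the resulting
# cluster labels with a K-nearest-neighbors classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# X: rows are echo signals, columns are spectral-entropy features
# (e.g., frame-wise entropies of the Mel-spectrum); placeholder random data here.
rng = np.random.default_rng(0)
X = rng.random((150, 20))

# Three clusters for the hard, medium, and soft texture groups (assumed K = 3).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(X)

# Validate the clustering with K-NN: train on part of the data using the
# cluster labels as ground truth, then score on the held-out portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, cluster_labels, test_size=0.3, random_state=0, stratify=cluster_labels
)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("K-NN validation accuracy:", knn.score(X_test, y_test))
```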