IJARSCT ISSN (Online) 2581-9429

International Journal of Advanced Research in Science, Communication and Technology (IJARSCT)

Volume 4, Issue 2, April 2021 Impact Factor: 4.819

Voice Based Browser for Visually Impaired People

S. Chinnadurai1, Muhammed Fazal2, M. Santhakumar3, S. Suseendran4
1Assistant Professor, Department of Computer Science and Engineering
2,3,4UG Students, Department of Computer Science and Engineering
Dhanalakshmi Srinivasan Engineering College, Perambalur, Tamil Nadu, India

Abstract: The World Wide Web (WWW) is rapidly emerging as the universal information source for our society. The WWW is generally accessed through a web-browsing package on a networked computer, and the presentation of information on the web is predominantly visual. This reliance on visual presentation requires most, if not all, of the user's attention and imposes a considerable cognitive load, which is not always practical, especially for visually impaired people. The focus of this project is to develop a prototype that supports web browsing through a speech-based interface, e.g. a phone, and to measure its effectiveness. Command input and the delivery of web content are entirely in voice. Audio icons are built into the prototype so that users can better understand the original structure and intent of a web page, and navigation and control commands are available to enhance the browsing experience. The effectiveness of this prototype is evaluated in a user study involving both normally sighted and visually impaired people. Voice may also be offered as an adjunct to conventional desktop browsers with high-resolution graphical displays, providing an accessible alternative to the keyboard or screen, for instance in automobiles where hands-free and eyes-free operation is essential. Voice interaction can also escape the physical limitations of keypads and displays as mobile devices become ever smaller. The browser has an integrated text-extraction engine that inspects the content of a page to construct a structured representation; the internal nodes of the structure represent various levels of abstraction of the content, which allows easy and flexible navigation so that users can rapidly home in on objects of interest.

Keywords: World Wide Web, Visual Presentation, Cognitive, Speech-Based-Interface, Conventional Desktop Browsers, Text Extraction, Structured Representation, Abstraction of content

I. INTRODUCTION
A web search engine is a software system designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.

II. RELEVANT WORK
A web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of web indexing (web spidering). Web search engines and some other websites use web crawling or spidering software to update their own web content or their indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently. Crawlers consume resources on visited systems and often visit sites without approval, so issues of schedule, load, and "politeness" come into play when large collections of pages are accessed.


Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. Web indexing, or Internet indexing, comprises methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or on-site searching. With the increase in the number of periodicals that publish articles online, web indexing is also becoming important for periodical websites. The most productive way to conduct a search on the Internet is through a search engine, a software system designed to search for information on the World Wide Web. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs), and the information returned may be a mix of links to web pages, images, videos, and infographics. Some search engines also mine data available in databases or open directories. The top web search engines are Google, Bing, Yahoo, Ask.com, and AOL.com. For the purposes of this work, searching is performed in the Google Chrome web browser, first with the Google search engine and then with Microsoft's Bing search engine.

2.1 Voice Based Search Engine
The web is primarily a visual medium that requires a keyboard and mouse to navigate. People who lack the motor skills to use a keyboard and mouse find navigation troublesome, and visually impaired people have problems accessing the web. Those who temporarily cannot use a traditional web browser, because their eyes or hands are occupied or because they are not near their computer, are at a minimum inconvenienced. Speech recognition and generation technologies offer a potential solution to these problems by augmenting the capabilities of a web browser. Speech recognition accuracy can be improved in many ways: time-frequency distributions, HMM approaches, Bayesian classification, wavelet-domain transformations, or a combination of such approaches can be used. Advances in voice recognition have also made possible applications in robotics controlled by voice alone.

2.2 User Voice
A voice-user interface (VUI) makes human interaction with computers possible through a voice/speech platform in order to initiate an automated service or process. A VUI is the interface to any speech application. Controlling a machine simply by talking to it was science fiction only a short time ago, and until recently this area was considered to be artificial intelligence. With advances in technology, however, VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations.

2.3 Speech Recognition Language Models
Language models are used to constrain search in a decoder by limiting the number of possible words that need to be considered at any one point in the search. The consequence is faster execution and higher accuracy. Language models constrain search either absolutely (by enumerating some small subset of possible expansions) or probabilistically (by computing a likelihood for each possible successor word). The former usually has an associated grammar that is compiled down into a graph; the latter is trained from a corpus. Statistical language models (SLMs) are good for free-form input, such as dictation or spontaneous speech, where it is not practical or possible to specify all legal word sequences a priori. Trigram SLMs are probably the most common ones used in ASR and represent a good balance between complexity and robust estimation.
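To make the probabilistic case concrete, the following is a minimal sketch of a trigram statistical language model estimated from a toy command corpus; the corpus, the add-alpha smoothing, and the vocabulary size are assumptions of this illustration, not details of the system described here.

```python
from collections import Counter

# Toy corpus standing in for transcribed browser commands; purely illustrative.
corpus = [
    "open new tab", "go back", "go forward", "print page",
    "open new window", "search the web", "go back one page",
]

trigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>", "<s>"] + sentence.split() + ["</s>"]
    for i in range(len(words) - 2):
        trigrams[tuple(words[i:i + 3])] += 1   # count (w1, w2, w3)
        bigrams[tuple(words[i:i + 2])] += 1    # count the (w1, w2) context

def trigram_prob(w1, w2, w3, alpha=0.1, vocab_size=50):
    """P(w3 | w1, w2) with simple add-alpha smoothing (illustrative choice)."""
    return (trigrams[(w1, w2, w3)] + alpha) / (bigrams[(w1, w2)] + alpha * vocab_size)

# A decoder would prefer the continuation with the higher probability.
print(trigram_prob("<s>", "go", "back"))    # seen continuation, higher score
print(trigram_prob("<s>", "go", "print"))   # unseen continuation, lower score
```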

2.4 Server Database
A database server is a server that houses a database application and provides database services to other computer programs or computers, as defined by the client-server model. Database management systems (DBMSs) frequently provide database-server functionality; some, such as MySQL, rely exclusively on the client-server model for database access, while others, such as SQLite, are meant to be used as embedded databases.


Users access a database server either through a "front end" running on the user's computer, which displays the requested data, or through the "back end", which runs on the server and handles tasks such as data analysis and storage. In a master-slave model, database master servers are the central and primary locations of data, while database slave servers are synchronized backups of the master acting as proxies. Most database applications respond to a query language: each database understands its query language, converts each submitted query into a server-readable form, and executes it to retrieve results.
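As a minimal sketch of this query flow, the example below uses Python's built-in sqlite3 module, the embedded case mentioned earlier; a client-server DBMS such as MySQL would replace the local file with a network connection, but the submit, parse, execute, and return cycle is the same. The table and file names are illustrative assumptions.

```python
import sqlite3

# SQLite is the embedded case mentioned above; the database lives in a local file.
conn = sqlite3.connect("search_history.db")   # illustrative file name
cur = conn.cursor()

# The client submits SQL; the database engine parses and executes it.
cur.execute("CREATE TABLE IF NOT EXISTS queries (id INTEGER PRIMARY KEY, text TEXT)")
cur.execute("INSERT INTO queries (text) VALUES (?)", ("weather in Chennai",))
conn.commit()

# Results come back as rows that the front end can display or speak aloud.
for row in cur.execute("SELECT id, text FROM queries ORDER BY id"):
    print(row)

conn.close()
```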

2.5 Web Page Content
Web content is the textual, visual, or aural content encountered as part of the user experience on websites. It may include, among other things, text, images, sounds, videos, and animations. Search engine sites are composed mainly of HTML content and typically follow a structured approach to revealing information. A search engine results page (SERP) displays a heading, usually the name of the search engine itself, followed by a list of websites and their web addresses, ordered by relevance to the search query. Searchers typically type in keywords or keyword phrases to find what they are looking for on the web. In this case, a website provides a blank space where web content is written in the form of paragraphs and bullets; the information written on these pages describes the services and amenities provided by a company. Non-template content is mainly used because it involves fewer infographics and can be customized, which makes websites fast and reduces page load time.

2.6 API Interface
In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building application software. In general terms, it is a set of clearly defined methods of communication between various software components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, an operating system, a database system, computer hardware, or a software library. An API specification can take many forms, but often includes specifications for routines, data structures, object classes, variables, or remote calls. POSIX, the Windows API, and ASPI are examples of different forms of APIs. Documentation for an API is usually provided to facilitate its usage. Just as a graphical user interface makes it easier for people to use programs, application programming interfaces make it easier for developers to use certain technologies in building applications.
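As a small, hedged illustration of how a program consumes a web-based API, the sketch below issues an HTTP request with Python's standard urllib and reads fields from the JSON reply; the endpoint URL and the response fields are hypothetical and stand in for whatever search or weather service a browser would actually call.

```python
import json
import urllib.request

# Hypothetical REST endpoint; the URL and the "results"/"title" fields are
# assumptions used only to show the shape of an API call.
url = "https://api.example.com/v1/search?q=weather"

with urllib.request.urlopen(url) as response:        # send the HTTP request
    payload = json.loads(response.read().decode())   # parse the JSON reply

# The caller relies only on the documented fields, not on service internals.
for item in payload.get("results", []):
    print(item.get("title"))
```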

2.7 Synthesized Speech
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations such as phonetic transcriptions into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range but may lack clarity, while for specific usage domains the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output. A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols such as numbers and abbreviations into the equivalent of written-out words; this process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units such as phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech. HMM-based synthesis, also called statistical parametric synthesis, is a synthesis method based on hidden Markov models; in this approach, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs.
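As a minimal illustration of the back-end step, the sketch below hands a string that the front-end has already normalized to the pyttsx3 Python library; the library choice and the example sentence are assumptions of this illustration, not the synthesis engine used by the system described in this paper.

```python
import pyttsx3

# Example front-end output: text that has already been normalized
# ("Dr." expanded to "Doctor", digits written out as words, and so on).
normalized_text = "Doctor Smith arrived at ten thirty."

engine = pyttsx3.init()          # bind to the platform's default TTS back-end
engine.setProperty("rate", 150)  # speaking rate in words per minute
engine.say(normalized_text)      # queue the utterance
engine.runAndWait()              # block until the speech has been rendered
```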

III. PROPOSED SYSTEM
Voice search, also called voice-enabled search, allows the user to issue a voice command to search the Internet or a portable device. In a narrow sense, voice search is commonly used for directory assistance; in a broader definition, it includes open-domain keyword queries on any information on the Internet. Voice search is often interactive, involving several rounds of interaction that allow the system to ask for clarification, and it is therefore a type of dialog system. Voice search is a speech recognition technology that allows users to search by saying terms aloud rather than typing them into a search field. The proliferation of smartphones and other small, web-enabled mobile devices has spurred interest in voice search. Applications of voice search include making search engine queries, clarifying the specifics of a request, requesting specific information such as a stock quote or sports score, and launching programs and selecting options. The free voice search service, however, uses a different approach: it might seem obvious, but people search differently using voice than when they type in a query. Speech recognition and generation technologies offer a potential solution to these problems by augmenting the capabilities of a web browser. The user can speak to the computer, the computer responds in the form of voice, and it assists the user in reading documents as well.

IV. SYSTEM IMPLEMENTATION
4.1 Voice Recognition
In this module the input is given through voice. The voice recognition module compares the spoken input, based on its pronunciation, with the loaded grammar and returns the action assigned to words such as New Tab, Back, Forward, and Print Page; operations such as redirecting the mouse pointer to the URI or search box help the user type words by pronouncing each letter. Alternatively referred to as speech recognition, voice recognition is computer software or a hardware device with the ability to decode the human voice. Voice recognition is commonly used to operate a device, perform commands, or write without having to use a keyboard, mouse, or buttons. Today, this is done on a computer with automatic speech recognition (ASR) software. Many ASR programs require the user to "train" the program to recognize their voice so that it can convert the speech to text more accurately.
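The sketch below shows one way such a command grammar could be matched against recognized speech, using the third-party speech_recognition Python package with a cloud recognizer as a stand-in for the module described above; the library, the command table, and the action names are assumptions of this illustration.

```python
import speech_recognition as sr

# Command grammar; the action names are placeholders for browser operations.
COMMANDS = {
    "new tab": "OPEN_NEW_TAB",
    "back": "GO_BACK",
    "forward": "GO_FORWARD",
    "print page": "PRINT_PAGE",
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)   # rough noise calibration
    audio = recognizer.listen(source)             # capture one utterance

try:
    phrase = recognizer.recognize_google(audio).lower()   # cloud ASR stand-in
    action = COMMANDS.get(phrase, "DICTATE_TEXT")         # unknown speech falls back to dictation
    print(f"Heard '{phrase}' -> {action}")
except sr.UnknownValueError:
    print("Speech was not intelligible; ask the user to repeat.")
```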

4.2 Speech To Text Conversion
Speech-to-text conversion is the process of converting spoken words into written text. This process is also often called speech recognition. Although the terms are almost synonymous, speech recognition is sometimes used to describe the wider process of extracting meaning from speech, i.e. speech understanding. The term voice recognition should be avoided, as it is often associated with the process of identifying a person from their voice, i.e. speaker recognition. All speech-to-text systems rely on at least two models: an acoustic model and a language model. In addition, large-vocabulary systems use a pronunciation model. It is important to understand that there is no such thing as a universal speech recognizer: to get the best transcription quality, all of these models can be specialized for a given language, dialect, application domain, type of speech, and communication channel. The complete voice-to-text conversion is done in three steps. The software first identifies the audio segments containing speech, then recognizes the language being spoken if it is not known a priori, and finally converts the speech segments to text with time-codes.

4.3 Database Connectivity
In computer science, a database connection is the means by which a database server and its client software communicate with each other. The term is used whether or not the client and the server are on different machines. The client uses a database connection to send commands to and receive replies from the server. A database is stored as a file or a set of files on magnetic disk or tape, optical disk, or some other secondary storage device. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Records are organized into tables that include information about the relationships between their fields. Although the term database is applied loosely to any collection of information in computer files, a database in the strict sense provides cross-referencing capabilities. Once a connection has been built, it can be opened and closed at will, and properties (such as the command time-out length or the transaction, if one exists) can be set. The connection string consists of a set of key-value pairs dictated by the data access interface of the data provider.
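To make the key-value structure of a connection string concrete, the short sketch below parses an example string into a dictionary; the key names shown are typical of common data providers but are assumptions of this illustration, and the server, database, and credentials are placeholders.

```python
# Example connection string in the key=value;key=value form described above.
conn_str = "Server=localhost;Database=VoiceSearch;User Id=app;Password=secret;Timeout=30"

def parse_connection_string(s):
    """Split a provider connection string into a dict of key-value pairs."""
    pairs = (part.split("=", 1) for part in s.split(";") if part)
    return {key.strip(): value.strip() for key, value in pairs}

settings = parse_connection_string(conn_str)
print(settings["Server"], settings["Database"], settings["Timeout"])
```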

4.4 Display Results
Extracting text and presenting it to a visually handicapped person has many difficult aspects. Among the innumerable web pages on the web, there is wide diversity in page types. A web page may contain more than one kind of content, such as links, images, advertisements, and animations, and such content may not provide valuable information to a visually impaired person. Furthermore, the document structure of an email page differs from that of other pages.

4.5 Speech Synthesis
The text-to-speech module responds to the user like an intelligent agent, guiding them through browsing. This module reads out the content in the web browser: it parses the web document, removes the HTML tags, and extracts only the text, which is delivered to the user as audio. It also returns information such as the date, day, time, and weather on request.
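A minimal sketch of this read-aloud step is shown below, using Python's standard html.parser to strip the tags and pyttsx3 to voice the remaining text; both libraries and the sample page are assumptions of this illustration rather than the components used in the actual module.

```python
from html.parser import HTMLParser
import pyttsx3

class TextExtractor(HTMLParser):
    """Collect visible text from a page, skipping script and style elements."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

html_page = "<html><body><h1>News</h1><p>Rain expected today.</p></body></html>"
extractor = TextExtractor()
extractor.feed(html_page)
text = " ".join(extractor.parts)     # "News Rain expected today."

engine = pyttsx3.init()
engine.say(text)                     # read the extracted page text aloud
engine.runAndWait()
```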

V. SYSTEM DESIGN
This architecture has five parts: pre-processing, feature extraction, the server database, the HMM algorithm with display of relevant results, and the search engine. The user provides voice as input; the pre-processing step eliminates noise and converts the voice input into text form, filtering the noise using the HMM algorithm. A Google search is then performed on the input using the server database. The relevant result is displayed as text, synthesized into voice, and output as sound through the speaker.
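The end-to-end flow can be summarized as a short pipeline; the function bodies below are placeholders standing in for the modules of Section IV, so this is a sketch of the data flow rather than the actual implementation.

```python
def recognize(audio):        # 4.1/4.2: speech recognition with noise filtering (placeholder)
    return "weather in chennai"

def search(query):           # 4.3: query the search engine via the server database (placeholder)
    return ["Chennai weather: partly cloudy, 33 degrees Celsius"]

def display(results):        # 4.4: show the textual results (placeholder)
    for line in results:
        print(line)
    return results

def speak(results):          # 4.5: synthesize the displayed results as audio (placeholder)
    print("Speaking:", "; ".join(results))

def voice_browse(audio):
    query = recognize(audio)         # voice in, text query out
    results = search(query)          # relevant results from the search engine
    speak(display(results))          # display, then read aloud through the speaker

voice_browse(audio=None)             # audio capture is omitted in this sketch
```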

VI. SOFTWARE DESCRIPTION
The .NET Framework (pronounced "dot net") is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for the .NET Framework execute in a software environment (as contrasted with a hardware environment) known as the Common Language Runtime (CLR), an application virtual machine that provides services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework.


Figure 5.1: System Architecture

The .NET Framework's Base Class Library provides user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their own source code with the .NET Framework and other libraries. The .NET Framework is intended to be used by most new applications created for the Windows platform. Microsoft also produces an integrated development environment largely for .NET software, called Visual Studio.

VII. DESIGN FEATURES
7.1 Interoperability
Because computer systems commonly require interaction between newer and older applications, the .NET Framework provides means to access functionality implemented in newer and older programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is achieved using the P/Invoke feature.

7.2 Common Language Runtime Engine
The Common Language Runtime (CLR) serves as the execution engine of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.

7.3 Language Independence
The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the Common Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.

7.4 Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes that encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction, XML document manipulation, and so on. It consists of classes and interfaces of reusable types that integrate with the CLR (Common Language Runtime).


7.5 Simplified Deployment
The .NET Framework includes design features and tools that help manage the installation of computer software, ensuring that it does not interfere with previously installed software and that it conforms to security requirements.

7.6 Security
The design addresses some of the vulnerabilities, such as buffer overflows, which have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.

7.7 Portability
While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be platform-agnostic, and cross-platform implementations are available for other operating systems (such as Silverlight and other alternative implementations). Microsoft submitted the specifications for the Common Language Infrastructure (which includes the core class libraries, the Common Type System, and the Common Intermediate Language), the C# language, and the C++/CLI language to both ECMA and ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

7.8 Common Language Infrastructure (CLI)
The purpose of the Common Language Infrastructure (CLI) is to provide a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. By implementing the core aspects of the .NET Framework within the scope of the CLI, this functionality is not tied to a single language but is available across the many languages supported by the framework. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR. The CIL code is housed in CLI assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. An assembly consists of one or more files, one of which must contain the manifest, which holds the metadata for the assembly. The complete name of an assembly (not to be confused with the filename on disk) contains its simple text name, version number, culture, and public key token. Assemblies are considered equivalent if they share the same complete name, excluding the revision of the version number. A private key can also be used by the creator of the assembly for strong naming. The public key token identifies which public key an assembly is signed with; only the creator of the key pair (typically the .NET developer signing the assembly) can sign assemblies that have the same strong name as a previous version, since they possess the private key. Strong naming is required to add assemblies to the Global Assembly Cache.

VIII. SYSTEM TESTING
Software testing is a method of assessing the functionality of a software program. There are many different types of software testing, but the two main categories are dynamic testing and static testing. Dynamic testing is an assessment conducted while the program is executed; static testing, on the other hand, is an examination of the program's code and associated documentation. Dynamic and static methods are often used together. Testing is a set of activities that can be planned and conducted systematically. Testing begins at the module level and works towards the integration of the entire computer-based system. Nothing is complete without testing, as it is vital to the success of the system. There are three ways to test a program:
 For correctness
 For implementation efficiency
 For computational complexity
Tests for correctness verify that a program does exactly what it was designed to do. The data is entered in all forms separately and, whenever an error occurs, it is corrected immediately. A quality team deputed by the management verified all the necessary documents and tested the software while entering the data at all levels. The development process involves various types of testing, and each test type addresses a specific testing requirement. The most common types of testing involved in the development process are:
 Unit test
 Functional test
 Integration test

8.1 Unit Testing
The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces of code called units. These units have specific behaviour, and the tests performed on these units of code are called unit tests.
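As a small example, the sketch below unit-tests a hypothetical text-normalization helper with Python's built-in unittest module; the helper and its expected behaviour are assumptions of this illustration, not code from the project.

```python
import unittest

def normalize_query(text):
    """Hypothetical unit: lower-case a spoken query and collapse extra spaces."""
    return " ".join(text.lower().split())

class NormalizeQueryTest(unittest.TestCase):
    def test_lowercases_input(self):
        self.assertEqual(normalize_query("Weather IN Chennai"), "weather in chennai")

    def test_collapses_whitespace(self):
        self.assertEqual(normalize_query("  new   tab "), "new tab")

if __name__ == "__main__":
    unittest.main()
```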

8.2 Functional Testing
Functional testing can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the modules perform their intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

8.3 Integration Testing
In integration testing, modules are combined and tested as a group. Integration testing follows unit testing and precedes system testing, which is performed after the product is code complete.

IX. CONCLUSION
In this project an efficient way of accessing the web, termed voice browsing, is presented, in which visually impaired people can access the browser using speech. Access to the Internet imposes limitations on visually impaired persons, who cannot easily use keypads or touch screens to give input to a computer. With this browser the user can speak a word and it is converted into text automatically, reducing the user's effort, and blind people can also use it to have English text documents read to them. Thus, combining browsing with speech technology is an efficient way of accessing the web. This methodology can be further improved so that visually impaired learners interact more efficiently with the browser by having English characters converted to speech, i.e. by listening to characters, which can be easily understood by them. In addition, all the text content available on the web for various links can be made accessible using speech technology, and this technology can also be integrated into other browsers. More work can be done to increase the accuracy, pronunciation, and precision of the speech technology. The proposed method currently supports only the English language.

REFERENCES
[1]. Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." arXiv preprint arXiv:1409.0473 (2014).
[2]. Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." arXiv preprint arXiv:1409.3215 (2014).
[3]. Wang, Yuxuan, et al. "Tacotron: Towards end-to-end speech synthesis." arXiv preprint arXiv:1703.10135 (2017).
[4]. Shen, Jonathan, et al. "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
[5]. Tachibana, Hideyuki, Katsuya Uenoyama, and Shunsuke Aihara. "Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
[6]. Li, Naihan, et al. "Neural speech synthesis with transformer network." Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, No. 01, 2019.

International Journal of Advanced Research in Science, Communication and Technology (IJARSCT)

Volume 4, Issue 2, April 2021 Impact Factor: 4.819

[7]. Yang, Shan, et al. "On the localness modeling for the self-attention based end-to-end speech synthesis." Neural Networks 125 (2020): 121-130.
[8]. Valentini-Botinhao, Cassia, et al. "Investigating RNN-based speech enhancement methods for noise-robust text-to-speech." SSW. 2016.
[9]. Gurunath, Nishant, Sai Krishna Rallabandi, and Alan Black. "Disentangling speech and non-speech components for building robust acoustic models from found data." arXiv preprint arXiv:1909.11727 (2019).
[10]. Hsu, Wei-Ning, et al. "Disentangling correlated speaker and noise for speech synthesis via data augmentation and adversarial factorization." ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.

AUTHORS
 First Author: S. Chinnadurai, M.E., Assistant Professor, Department of CSE, Dhanalakshmi Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
 Second Author: Muhammed Fazal, Department of CSE, Dhanalakshmi Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
 Third Author: M. Santhakumar, Department of CSE, Dhanalakshmi Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
 Fourth Author: S. Suseendran, Department of CSE, Dhanalakshmi Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
