An Effective Multi-Resolution Hierarchical Granular Representation Based Classifier Using General Fuzzy Min-Max Neural Network

Total Pages: 16

File Type: PDF, Size: 1020 KB

An Effective Multi-Resolution Hierarchical Granular Representation based Classifier using General Fuzzy Min-Max Neural Network

Thanh Tung Khuat, Fang Chen, and Bogdan Gabrys, Senior Member, IEEE

arXiv:1905.12170v3 [cs.LG] 3 Dec 2019

Abstract—Motivated by the practical demands for simplification of data towards being consistent with human thinking and problem solving, as well as tolerance of uncertainty, information granules are becoming important entities in data processing at different levels of data abstraction. This paper proposes a method to construct classifiers from multi-resolution hierarchical granular representations (MRHGRC) using hyperbox fuzzy sets. The proposed approach forms a series of granular inferences hierarchically through many levels of abstraction. An attractive characteristic of our classifier is that it can maintain a high accuracy in comparison to other fuzzy min-max models at a low degree of granularity by reusing the knowledge learned from lower levels of abstraction. In addition, our approach can reduce the data size significantly and handle the uncertainty and incompleteness associated with data in real-world applications. The construction process of the classifier consists of two phases. The first phase formulates the model at the greatest level of granularity, while the later stage aims to reduce the complexity of the constructed model and deduce it from data at higher abstraction levels. Experimental analyses conducted comprehensively on both synthetic and real datasets indicated the efficiency of our method in terms of training time and predictive performance in comparison to other types of fuzzy min-max neural networks and common machine learning algorithms.

Index Terms—Information granules, granular computing, hyperbox, general fuzzy min-max neural network, classification, hierarchical granular representation.

T.T. Khuat (email: [email protected]) and B. Gabrys (email: [email protected]) are with the Advanced Analytics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia. F. Chen (email: [email protected]) is with the Data Science Centre, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia.

I. INTRODUCTION

Hierarchical problem solving, where problems are analyzed at a variety of granularity degrees, is a typical characteristic of the human brain [1]. Inspired by this ability, granular computing was introduced. One of the critical features of granular computing is to model the data as high-level abstract structures and to tackle problems based on these representations, similar to structured human thinking [2]. Information granules (IGs) [3] are underlying constructs of granular computing. They are abstract entities describing important properties of numeric data and formulating knowledge pieces from data at a higher abstraction level. They play a critical role in the concise description and abstraction of numeric data [4]. Information granules have also contributed to quantifying the limited numeric precision in data [5]. Utilizing information granules is one of the problem-solving methods based on decomposing a big problem into sub-tasks which can be solved individually. In the world of big data, one regularly departs from specific data entities and discovers general rules from data via encapsulation and abstraction. The use of information granules is meaningful when tackling the five Vs of big data [6], i.e., volume, variety, velocity, veracity, and value. The granulation process, which gathers similar data together, contributes to reducing the data size, and so the volume issue is addressed. The information from many heterogeneous sources can be granulated into various granular constructs, and then several measures and rules for uniform representation are proposed to fuse base information granules, as shown in [7]. Hence, the data variety is addressed. Several studies constructed evolving information granules to adapt to the changes in streams of data, as in [8]. The variations of information granules in a high-speed data stream assist in tackling the velocity problem of big data. The process of forming information granules is often associated with the removal of outliers and dealing with incomplete data [6]; thus the veracity of data is guaranteed. Finally, the multi-resolution hierarchical architecture of various granular levels can disregard some irrelevant features but highlight facets of interest [9]. In this way, the granular representation may help with the cognitive demands and capabilities of different users.

A multi-dimensional hyperbox fuzzy set is a fundamental conceptual vehicle to represent information granules. Each fuzzy min-max hyperbox is determined by its minimum and maximum points and a fuzzy membership function. A classifier can be built from hyperbox fuzzy sets along with an appropriate training algorithm. We can extract a rule set directly from hyperbox fuzzy sets or use them in combination with other methods such as decision trees [10] to account for the predictive results. However, a limitation of hyperbox-based classifiers is that their accuracy decreases at a low level of granularity (corresponding to large-sized hyperboxes). In contrast, classifiers at a high granularity level are more accurate, but the building process of classifiers at this level is time-consuming, and it is difficult to extract a rule set interpretable for predictive outcomes because of the high complexity of the resulting models. Hence, it is desirable to construct a simple classifier with high accuracy. In addition, we expect to observe the change in the predictive results at different data abstraction levels. This paper proposes a method of constructing a high-precision classifier at a high data abstraction level based on the knowledge learned from lower abstraction levels. On the basis of classification errors on the validation set, we can predict the change in the accuracy of the constructed classifier on unseen data, and we can select an abstraction level satisfying both acceptable accuracy and a simple architecture of the resulting classifier. Furthermore, our method is likely to scale to large-sized datasets due to the capability of parallel execution during the construction of core hyperboxes at the highest level of granularity.
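To make the hyperbox representation above concrete, the sketch below shows a point-membership function of the ramp-based form used in fuzzy min-max hyperbox classifiers, in the spirit of the GFMM membership of Gabrys and Bargiela [12]. The exact function, the parameter names, and the sensitivity parameter `gamma` are illustrative assumptions here, not the paper's definition.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=1.0):
    """Membership of a point x in a hyperbox with min point v and max point w.

    A ramp-style formulation in the spirit of fuzzy min-max classifiers:
    membership is 1 inside the box and decays linearly (at rate gamma) with
    the distance to the box along each violated dimension. This is an
    illustrative sketch, not the paper's exact membership function.
    """
    x, v, w = map(np.asarray, (x, v, w))
    # Per-dimension penalties for falling below the min point or above the max point.
    below = np.clip(gamma * (v - x), 0.0, 1.0)
    above = np.clip(gamma * (x - w), 0.0, 1.0)
    # The worst (smallest) per-dimension degree is taken as the hyperbox membership.
    return float(np.min(1.0 - np.maximum(below, above)))

# A small hyperbox: full membership inside, decaying outside.
v, w = [0.2, 0.2], [0.6, 0.6]
print(hyperbox_membership([0.4, 0.5], v, w))   # 1.0 (inside the box)
print(hyperbox_membership([0.7, 0.5], v, w))   # < 1.0 (outside along one dimension)
```

In a classifier built from such hyperboxes, an input is assigned the class of the hyperbox with the highest membership; granular (interval) inputs, as accepted by the GFMM network mentioned later in this introduction, are handled in the same spirit by comparing the lower and upper bounds of the input against the min and max points.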
In our method, the algorithm starts with a relatively small value of the maximum hyperbox size (θ) to produce base hyperbox fuzzy sets, and then this threshold is increased in succeeding levels of abstraction whose inputs are the hyperbox fuzzy sets formed in the previous step. By using many hierarchical resolutions of granularity, the information captured in earlier steps is transferred to the classifier at the next level. Therefore, the classification accuracy is still maintained at an acceptable value when the resolution of the training data is low.
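The following sketch illustrates, under stated assumptions, how such a multi-resolution construction could be organized: a base learner (represented here by a hypothetical `train_gfmm` stand-in with a deliberately trivial expansion rule) is run with a small maximum hyperbox size θ, and the resulting hyperboxes, treated as granular inputs, are re-trained at successively larger θ values. The function names, expansion criterion, and data layout are illustrative, not the paper's implementation.

```python
import numpy as np

def train_gfmm(lower, upper, labels, theta):
    """Hypothetical stand-in for an online GFMM-style trainer.

    Takes granular inputs given by lower/upper bounds and class labels, and
    returns hyperboxes as (min points V, max points W, labels). A trivial
    one-pass aggregation is used purely to keep the sketch self-contained.
    """
    V, W, L = [], [], []
    for xl, xu, y in zip(lower, upper, labels):
        placed = False
        for j, (vj, wj, lj) in enumerate(zip(V, W, L)):
            cand_v, cand_w = np.minimum(vj, xl), np.maximum(wj, xu)
            # Expand an existing hyperbox only if it has the same class and
            # the expanded box stays within theta along every dimension.
            if lj == y and np.all(cand_w - cand_v <= theta):
                V[j], W[j] = cand_v, cand_w
                placed = True
                break
        if not placed:
            V.append(np.array(xl, dtype=float))
            W.append(np.array(xu, dtype=float))
            L.append(y)
    return V, W, L

def multi_resolution_granulation(X, y, thetas):
    """Train at the highest resolution first, then feed the resulting hyperboxes
    (as granular inputs) to the next, coarser level with a larger theta."""
    lower, upper, labels = list(X), list(X), list(y)   # crisp points: lower == upper
    levels = []
    for theta in sorted(thetas):                       # increasing theta = coarser granularity
        V, W, labels = train_gfmm(lower, upper, labels, theta)
        levels.append((theta, V, W, labels))
        lower, upper = V, W                            # hyperboxes become the next level's inputs
    return levels

rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
for theta, V, W, L in multi_resolution_granulation(X, y, thetas=[0.1, 0.3, 0.6]):
    print(f"theta={theta:.1f}: {len(V)} hyperboxes")
```

At each coarser level the number of hyperboxes shrinks, which is the simplification the paper trades against accuracy; reusing the previous level's hyperboxes instead of the raw points is what keeps the coarse models cheap to build and is where the parallel construction of core hyperboxes at the finest level pays off.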
Data generated from complex real-world applications frequently change over time, so the machine learning models used to predict the behaviors of such systems need an efficient online learning capability. Many studies have considered the online learning capability when building machine learning models, such as [11], [12], [13], [14], [15], [16], and [17]. The fuzzy min-max neural networks proposed by Simpson [11] and many of its improved variants only work on input data in the form of points. In practice, due to the uncertainty and some abnormal behaviors in the systems, the input data include not only crisp points but also intervals. To address this problem, Gabrys and Bargiela [12] introduced a general fuzzy min-max (GFMM) neural network, which can handle both fuzzy and crisp input samples. By using hyperbox fuzzy sets for the input layer, this model can accept input patterns in the granular form and process data at a high-level

… between efficiency and simplicity of the classifiers. A model with high resolution corresponds to the use of a small value of the maximum hyperbox size, and vice versa. A choice of a suitable resolution level results in better predictive accuracy of the generated model. Our main contributions in this paper can be summarized as follows:

• We propose a new data classification model based on the multi-resolution of granular data representations in combination with the online learning ability of the general fuzzy min-max neural network.
• The proposed method is capable of reusing the learned knowledge from the highest granularity level to construct new classifiers at higher abstraction levels with a low trade-off between simplification and accuracy.
• The efficiency and running time of the general fuzzy min-max classifier are significantly enhanced in the proposed algorithm.
• Our classifier can operate on large-sized datasets because of its parallel execution ability.
• Comprehensive experiments are conducted on synthetic and real datasets to prove the effectiveness of the proposed method compared to other approaches and baselines.

The rest of this paper is organized as follows. Section II presents existing studies related to information granules and briefly describes the online learning version of the general fuzzy min-max neural network. Section III shows our proposed method to construct a classifier based on data granulation. Experimental configuration and results are presented in Section IV. Section V concludes the main findings and discusses some open directions for future work.

II. PRELIMINARIES

A. Related Work