Data Science, Big Data and Granular Mining

Editorial, Pattern Recognition Letters 67 (2015) 109–112
http://dx.doi.org/10.1016/j.patrec.2015.08.001

Introduction

With the evolution of various modern technologies, a huge amount of data is constantly being generated and collected around us. We are in the midst of what is popularly called the information revolution and are living in a so-called world of knowledge. Intentionally and/or accidentally, the generation of these data is inevitable. As a result, large data, broadly characterised by three Vs – large volume, velocity and variety (popularly known as "big data") [1], has become a buzzword, and the analysis, access and storage of these data are now central to scientific innovation, public health and welfare, public security and so on. Moreover, big data are highly complex in nature, and mining them is not straightforward. Most of the information is heterogeneous, time varying, redundant, uncertain and imprecise. To reason about, understand and mine useful knowledge from these data is becoming a great challenge. It is also true that large integrated data sets can potentially provide a much deeper understanding of both nature and society, and open up new avenues for research activities.

Long before the data space, there have been histories of physical and social spaces to describe various phenomena in nature and human civilization, respectively. These two spaces eventually led to the natural and social sciences, where a few research activities related to the understanding of processes and concepts, such as discovering various phenomena in the physical world and understanding human interactions, are being carried out. In the recent past, digitalization of various observable facts of these spaces has produced huge amounts of data. With the over-flooding of data called "big data", people have started realizing the existence of a data space along with the natural and social spaces, and have shown remarkable interest in exploring it. Once the data are generated, they will not evolve accordingly if no special mechanism is arranged. The data may have a powerful reaction on the real world even if they are fabricated, e.g., rumours spread via mobile phones or through social media. Big data holds promise in many fields such as commerce and business, biology, medicine, public administration, material science, and cognition in the human brain, just to name a few. The objective of big data, as it stands, is to develop complex procedures running over large-scale, enormous-sized data repositories for extracting the useful knowledge hidden therein and delivering accurate predictions of various kinds within a reasonable time period. Scientists from academia, industry, and the open source community have been striving for improved reasoning about and understanding of big data, and to provide better scope to solve several social and natural problems. It may be mentioned here that while analytics over big data is playing a leading role, there has been a shortage of deep analytical talent globally.

Mining of big data can be made effective with methodologies that can deal with their characteristics, such as heterogeneity, dynamism (time variation), redundancy, uncertainty and impreciseness. Heterogeneity in data comes from the characterization of information in different ways, such as using numerical, categorical, text, image or audio/video data. Dynamism in data is due to the mechanism that generates related data changing at different times or in different circumstances, which adds new uncertainty and difficulties for analysis. High dimensionality is due to the inappropriate collection of data for a task: although a large candidate set of attributes is provided, most of them are irrelevant or redundant. These superfluous attributes deteriorate the learning performance of decision-making algorithms.

In the recent past, evolving research interest has given rise to a relatively new area called granular computing (GrC) [2,3,7], driven by the needs and challenges of various application domains, such as data mining, document analysis, financial gaming, organization and retrieval of huge databases of multimedia, medical data, remote sensing, and biometrics. Several aspects of GrC [6] play important roles in the design of different decision-making models with acceptable performance.

GrC, based on technologies like fuzzy sets [9], rough sets [8], computing with words, etc., provides powerful tools for multiple-granularity and multiple-view data analysis, which is of vital importance for understanding the complexity of big data. Different users may require understanding of big data at different granularity levels and need to analyse the data from different viewpoints. GrC has exhibited some capability and advantages in intelligent data analysis, pattern recognition, machine learning and uncertain reasoning for data of noticeable size. However, very few research activities have so far gone deep into the heart of heterogeneous, complex and big data analysis.

Granular computing: components, characteristics and features

GrC by its name is not new, but its use across various domains has gained popularity recently. It is a computing paradigm of information processing that works with the process of information granulation/abstraction. GrC is an umbrella term that accommodates the theories, methods, techniques and tools of information granulation. In processing large-scale information, GrC plays an important role in finding simple approximate solutions that are cost effective and provide an improved description of real-world intelligent systems. In the present scenario, the synergetic integration of GrC and computational intelligence has become a hot area of research for developing and experimenting with various efficient decision-making models for complex problems.

Broadly, the prime motivation for the development of GrC-based methodologies is three-fold: (i) GrC does not go for excessive precision of the solution, which is in fact an inherent characteristic of the human reasoning process; (ii) the structural representation of the problem in the work-flow of GrC makes the solution process more efficient; and (iii) the computing method provides transparency in the information processing steps. The human reasoning process normally follows the principle of information granulation and performs computation and operations on information granules. Most often, modelling and monitoring complex systems do not require high levels of precision, and such precision is in fact often expensive and unnecessary. Further, a task with incomplete and imprecise information poses problems in differentiating distinct elements. This motivates one to go for a granular representation of information and processes. As a result, the GrC framework provides efficient and intelligent information processing systems for various real-life decision-making applications. The said framework can be modelled with principles of neural networks, interval analysis, fuzzy sets and rough sets, both in isolation and in integration, among other theories.

One of the key advantages of computing with granules is that granular processing mimics the human way of understanding a particular problem, where different levels of information are required. GrC works in a similar fashion, dealing with the structured representation of real-world problems. In fact, this form of characterization is inherent in any task of the universe, and the relationships among the different levels of structure and multiple levels of understanding provide ample scope to understand and interpret them. In many cases, including large data sets, a problem may be difficult to solve at one level, but subsequent …

… in an opposite manner, broadly they are highly correlated to each other. At a higher level, a granule represents a more abstract concept by ignoring irrelevant details, while at a lower level it is a more specific one that reveals more detailed information. A challenging problem in GrC is to coordinate and/or swap between different levels of granularity. Several attempts have been made to develop machine learning models using these approaches.

To improve the decision-making process using GrC, one may have to represent and interpret the granular relationships, such as inter- and intra-relationships, meticulously. The intra-relationship among the elements of a granule and the inter-relationship among the elements of two different granules have a strong influence in GrC-based models. These relationships provide the essential information for deciding whether to go for integration or dispersion of granules.

It is worthwhile mentioning the importance of methods for reasoning (judgment) about properties of computations over granules. These properties are constructed (induced) using a higher-level granulation over computations on granules.

Granular computing: fuzzy and rough sets

Among the different technologies required for performing GrC, the ones based on fuzzy and rough sets are the most successful and are considered the best choice for decision-making processes. In fact, the concept of granulation is inherent in those theories. Fuzzy set theory starts with the definition of a membership function and granulates the features, thereby producing the fuzzy granulation of fea…
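The closing passage above describes how fuzzy set theory granulates a feature through membership functions. As a minimal sketch of that idea (not the editorial's own formulation), the following snippet granulates a numeric feature into three overlapping fuzzy granules, low, medium and high, using triangular membership functions; the function names and the three-granule split are illustrative assumptions.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def fuzzy_granulate(feature):
    """Granulate a numeric feature into low/medium/high fuzzy granules."""
    lo, hi = float(np.min(feature)), float(np.max(feature))
    mid = (lo + hi) / 2.0
    return {
        "low":    triangular(feature, lo, lo, mid),
        "medium": triangular(feature, lo, mid, hi),
        "high":   triangular(feature, mid, hi, hi),
    }

# Example: membership degrees of a few temperature readings in each granule
temps = np.array([12.0, 21.5, 33.0])
for name, degrees in fuzzy_granulate(temps).items():
    print(name, degrees.round(2))
```

In the granular-computing literature, pi-type or trapezoidal membership functions are also common choices for such low/medium/high granulation; triangular functions are used here only to keep the sketch short.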
Recommended publications
  • Multi-View Cluster Analysis with Incomplete Data to Understand Treatment Effects
    Information Sciences 494 (2019) 278–293. Multi-view cluster analysis with incomplete data to understand treatment effects.
    Guoqing Chao (a), Jiangwen Sun (b), Jin Lu (a), An-Li Wang (c), Daniel D. Langleben (c), Chiang-Shan Li (d), Jinbo Bi (a).
    (a) Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, USA; (b) Department of Computer Science, Old Dominion University, Norfolk, Virginia, USA; (c) University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; (d) Department of Psychiatry, Yale University, New Haven, CT, USA.
    Received 25 December 2018; Revised 13 March 2019; Accepted 21 April 2019; Available online 22 April 2019.
    Keywords: Missing value; Multi-view data analysis; Co-clustering; Granular computing.
    Abstract: Multi-view cluster analysis, as a popular granular computing method, aims to partition sample subjects into consistent clusters across different views in which the subjects are characterized. Frequently, data entries can be missing from some of the views. The latest multi-view co-clustering methods cannot effectively deal with incomplete data, especially when there are mixed patterns of missing values. We propose an enhanced formulation for a family of multi-view co-clustering methods to cope with the missing data problem by introducing an indicator matrix whose elements indicate which data entries are observed and assessing cluster validity only on observed entries. In comparison with common methods that impute missing data in order to use regular multi-view analytics, our approach is less sensitive to imputation uncertainty.
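The abstract above hinges on one device: an indicator (mask) matrix so that cluster validity is assessed only on observed entries. The sketch below illustrates the masking idea in plain NumPy with a k-means-style objective; it is not the authors' co-clustering formulation, and the variable names and toy data are invented for illustration.

```python
import numpy as np

def masked_clustering_objective(X, mask, labels, centers):
    """Sum of squared errors computed only over observed entries.

    X:       (n, d) data matrix, arbitrary placeholder values where missing
    mask:    (n, d) indicator matrix, 1 where an entry is observed, 0 otherwise
    labels:  (n,) cluster assignment of each row
    centers: (k, d) cluster centroids
    """
    diff = X - centers[labels]               # residual of each row to its centroid
    return float(np.sum(mask * diff ** 2))   # missing entries contribute nothing

# Toy example with one unobserved entry (its placeholder value is ignored)
X = np.array([[1.0, 0.0], [1.2, 9.9], [5.0, 4.0]])
mask = np.array([[1, 1], [1, 0], [1, 1]])
labels = np.array([0, 0, 1])
centers = np.array([[1.1, 0.0], [5.0, 4.0]])
print(masked_clustering_objective(X, mask, labels, centers))
```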
  • Examining Granular Computing from a Modeling Perspective – Ying Xie, Kennesaw State University
    Kennesaw State University, DigitalCommons@Kennesaw State University, Faculty Publications, 2008. Examining Granular Computing from a Modeling Perspective.
    Ying Xie (Department of Computer Science and Information Systems, Kennesaw State University, 1000 Chastain RD., Kennesaw, GA 30144, [email protected]); Jayasimha R. Katukuri, Vijay V. Raghavan (The Center for Advance Computer Studies, University of Louisiana at Lafayette, 104 University Circle, Lafayette LA 70504, {jrk8907, raghavan}@cacs.louisiana.edu); Tom Johnsten (School of Computer and Information Sciences, University of South Alabama, Mobile, Alabama 36688, [email protected]).
    Recommended Citation: Xie, Ying; Katukuri, Jayasimha R.; and Raghavan, Vijay V., "Examining Granular Computing from a Modeling Perspective" (2008). Faculty Publications. 1599. http://digitalcommons.kennesaw.edu/facpubs/1599
    Abstract: In this paper, we use a set of unified components to conduct granular modeling for problem solving paradigms in several fields of computing. Each identified component may … perceptions and understanding are essential. One of the advantages of modeling different domains from a unified granular perspective is that it enables us to extract crucial components of granular computing using a single formalism.
  • Exploratory Multivariate Analysis for Empirical Information Affected by Uncertainty and Modeled in a Fuzzy Manner: a Review
    Granul. Comput., DOI 10.1007/s41066-017-0040-y. Original Paper. Exploratory multivariate analysis for empirical information affected by uncertainty and modeled in a fuzzy manner: a review. Pierpaolo D'Urso. Received: 18 September 2016 / Accepted: 25 January 2017. © Springer International Publishing Switzerland 2017.
    Abstract: In the last few decades, there has been an increase in the interest of the scientific community for multivariate statistical techniques of data analysis in which the data are affected by uncertainty, imprecision, or vagueness. In this context, following a fuzzy formalization, several contributions and developments have been offered in various fields of the multivariate analysis. In this paper—to show the advantages of the fuzzy approach in providing a deeper and more comprehensive insight into the management of uncertainty in this branch of Statistics—we present an overview of the developments in the exploratory multivariate analysis of imprecise data. In particular, we give an outline of these contributions within an overall framework of the general fuzzy approach to multivariate statistical analysis and review the principal exploratory multivariate methods for imprecise data proposed in the literature, i.e., cluster analysis, self-organizing maps, regression analysis, …
    Keywords: Statistical reasoning · Information · Uncertainty and imprecision · Granularity · Information granules · Exploratory multivariate statistical analysis · Fuzzy data · Metrics for fuzzy data · Cluster analysis · Regression analysis · Self-organizing maps · Principal component analysis · Multidimensional scaling · Correspondence analysis · Clusterwise regression analysis · Classification and regression trees · Three-way analysis
    1 Introduction: In Statistical Reasoning, the basic ingredients of a statistical analysis are data and models. Both share an informational nature, which can be clearly sized in the knowledge discovery process leading to the acquisition of an informational gain.
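One ingredient the review lists is a metric for fuzzy data, on which cluster and regression analyses of imprecise observations are built. The snippet below sketches one common style of squared distance between triangular fuzzy numbers, combining a center term with spread terms; the specific weights and the function name are illustrative assumptions, not a metric taken from the review.

```python
def triangular_fuzzy_distance(u, v, w_center=1.0, w_spread=0.5):
    """Illustrative squared distance between two triangular fuzzy numbers.

    Each fuzzy number is given as (left, center, right). The distance weights
    the difference of centers and the differences of left/right spreads; the
    weights are illustrative, not taken from the reviewed literature.
    """
    lu, cu, ru = u
    lv, cv, rv = v
    d_center = (cu - cv) ** 2
    d_spread = ((cu - lu) - (cv - lv)) ** 2 + ((ru - cu) - (rv - cv)) ** 2
    return w_center * d_center + w_spread * d_spread

# Two imprecise observations, e.g. "about 5" and "about 7"
print(triangular_fuzzy_distance((4.0, 5.0, 6.5), (6.0, 7.0, 7.5)))
```

Given such a distance, standard exploratory tools (k-means-style clustering, multidimensional scaling, and so on) can be run directly on the fuzzy observations.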
  • Rough Sets, Fuzzy Sets, Data Mining and Granular Computing
    springer.com. A. An, J. Stefanowski, S. Ramanna, C. Butz, W. Pedrycz (Eds.): Rough Sets, Fuzzy Sets, Data Mining and Granular Computing. 11th International Conference, RSFDGrC 2007, Toronto, Canada, May 14-16, 2007. Series: Lecture Notes in Artificial Intelligence. 2007, XIV, 588 p.
    This volume contains the papers selected for presentation at the 11th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC 2007), a part of the Joint Rough Set Symposium (JRS 2007) organized by Infobright Inc. and York University. JRS 2007 was held for the first time during May 14–16, 2007 in the MaRS Discovery District, Toronto, Canada. It consisted of two conferences: RSFDGrC 2007 and the Second International Conference on Rough Sets and Knowledge Technology (RSKT 2007). The two conferences that constituted JRS 2007 investigated rough sets as an emerging methodology established more than 25 years ago by Zdzisław Pawlak. Rough set theory has become an integral part of diverse hybrid research streams. In keeping with this trend, JRS 2007 encompassed rough and fuzzy sets, knowledge technology and discovery, soft and granular computing, data processing and mining, while maintaining an emphasis on foundations and applications. RSFDGrC 2007 followed in the footsteps of well-established international initiatives devoted to the dissemination of rough sets research, held so far in Canada, China, Japan, Poland, Sweden, and the USA. RSFDGrC was first organized as the 7th International Workshop on Rough Sets, Data Mining and Granular Computing held in Yamaguchi, Japan in 1999.
  • History and Development of Granular Computing – Witold Pedrycz
    HISTORY OF MATHEMATICS – History and Development of Granular Computing – Witold Pedrycz. HISTORY AND DEVELOPMENT OF GRANULAR COMPUTING. Witold Pedrycz, Department of Electrical & Computer Engineering, University of Alberta, Edmonton AB T6R 2G7, Canada, and System Research Institute, Polish Academy of Sciences, Warsaw, Poland.
    Keywords: information granules, Granular Computing, fuzzy sets, interval mathematics, clustering, principle of justifiable granularity, optimal granularity allocation.
    Contents: 1. Introduction. 2. Information granularity and Granular Computing. 3. Formal approaches to information granulation: an overview and generalizations (3.1. Formal platforms of information granularity; 3.2. Information granules of higher type and higher order; 3.3. Hybrid models of information granules). 4. A design of information granules (4.1. The principle of justifiable granularity and design of fuzzy sets; 4.2. Information granules as constructs of fuzzy clustering; 4.3. Design of information granules with knowledge hints). 5. Information granularity in signal representation and processing. 6. Optimal allocation of information granularity. 7. Conclusions. Glossary. Bibliography. Biographical Sketch.
    Summary: Information granules and information granularity, which altogether give rise to the general framework of Granular Computing, offer interesting opportunities to endow information processing with an important facet of human-centricity. This facet directly implies that the underlying processing supports non-numeric data inherently associated with an adjustable perception of humans and generates results being seamlessly comprehended by the users. This study offers an introduction to the concepts of information granules, elaborates on the main formal frameworks of Granular Computing (GC) such as sets (interval mathematics), fuzzy sets, probability, rough sets, and shadowed sets and afterwards looks at ways in which information granules are formed.
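Item 4.1 of the contents, the principle of justifiable granularity, builds an information granule (for example an interval) around a numeric representative by trading off coverage of the data against specificity of the granule. The sketch below implements the commonly cited form of that trade-off with a simple grid search; the scoring details and the grid search are illustrative assumptions rather than the chapter's exact construction.

```python
import numpy as np

def justifiable_interval(data, n_grid=200):
    """Build an interval granule [a, b] around the median by maximizing
    coverage(a, b) * specificity(a, b) independently on each side.

    coverage    = fraction of data points falling inside the interval
    specificity = 1 - (interval length / range of the data)
    """
    data = np.sort(np.asarray(data, dtype=float))
    med, lo, hi = np.median(data), data[0], data[-1]
    span = hi - lo if hi > lo else 1.0

    def best_bound(candidates, side):
        best, best_score = med, -np.inf
        for c in candidates:
            a, b = (c, med) if side == "left" else (med, c)
            coverage = np.mean((data >= a) & (data <= b))
            specificity = 1.0 - (b - a) / span
            score = coverage * specificity
            if score > best_score:
                best, best_score = c, score
        return best

    a = best_bound(np.linspace(lo, med, n_grid), "left")
    b = best_bound(np.linspace(med, hi, n_grid), "right")
    return a, b

# Example: an interval granule summarizing 500 noisy measurements
print(justifiable_interval(np.random.default_rng(0).normal(10.0, 2.0, 500)))
```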
  • Granular Computing in Intelligent Transportation: an Exploratory Study
    BULGARIAN ACADEMY OF SCIENCES, CYBERNETICS AND INFORMATION TECHNOLOGIES, Volume 15, No 6, Special Issue on Logistics, Informatics and Service Science, Sofia, 2015. Print ISSN: 1311-9702; Online ISSN: 1314-4081. DOI: 10.1515/cait-2015-0073. Granular Computing in Intelligent Transportation: An Exploratory Study. Seng Dewen, Cheng Xinhong, Chen Jing, Fang Xujian; School of Computer Science and Technology, Hangzhou Dianzi University, 310018 Hangzhou, China; Key Laboratory of Complex Systems Modelling and Simulation, Ministry of Education, China. Email: [email protected]
    Abstract: With the continuous development of cities, the traffic situation has gradually become a topic of concern, and the concept of an intelligent transportation system has been proposed and developed. In the field of intelligent transportation, traffic data have gradually increased and people have higher demands for real-time data. The traditional data processing methods and tools have become unable to meet the needs of urban transport development. In this paper we analyze the basic theory of granular computing and its methods, technology and current situation. Besides, we discuss the hot issues of granular computing in an intelligent transportation system. Finally, granular computing in the development of intelligent transportation fields is also discussed.
    Keywords: Intelligent Transportation system, granular computing, traffic data, rough set.
    1. Introduction: With the rapid development of computers and networks, the contradiction between rich data and poor knowledge has become increasingly prominent. In this connection the paper "Fuzzy sets and information granularity" [1] was published. In all aspects of life, researchers have accepted that granular information generates strong interest. The paper noted that the concept of information granules is found in many areas, but in different forms.
  • Rough Sets, Fuzzy Sets, Data Mining and Granular Computing: 11th International Conference, RSFDGrC 2007, Toronto, Canada, May 14-16, 2007
    A. An, J. Stefanowski, S. Ramanna, C. Butz, W. Pedrycz (Eds.): Rough Sets, Fuzzy Sets, Data Mining and Granular Computing. 11th International Conference, RSFDGrC 2007, Toronto, Canada, May 14-16, 2007. Series: Lecture Notes in Artificial Intelligence, Vol. 4482. 2007, XIV, 588 p.
    This book constitutes the refereed proceedings of the 11th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, RSFDGrC 2007, held in Toronto, Canada in May 2007 in conjunction with the Second International Conference on Rough Sets and Knowledge Technology, RSKT 2007, both as part of the Joint Rough Set Symposium, JRS 2007. The 65 revised full papers presented together with four invited papers were carefully reviewed and selected from a total of 319 general submissions to the JRS 2007 symposium. The papers are organized in topical sections on fuzzy-rough hybridization, fuzzy sets, soft computing in medical image processing, soft computing in information retrieval, clustering, text and Web mining, learning, data mining and rough classifiers, granular computing, soft computing in multimedia processing, soft computing applications, and rough and complex concepts.
  • Clustering with Granular Information Processing
    CLUSTERING WITH GRANULAR INFORMATION PROCESSING. Urszula Kużelewska, Faculty of Computer Science, Technical University of Bialystok, Wiejska 45a, 15-521 Bialystok, Poland.
    Keywords: Knowledge discovery, Data mining, Information granulation, Granular computing, Clustering, Hyperboxes.
    Abstract: Clustering is a part of the data mining domain. Its task is to detect groups of similar objects on the basis of an established similarity criterion. Granular computing (GrC) includes methods from various areas with the aim of supporting humans in better understanding the analyzed problem and the generated results. Granular computing techniques create and/or process data portions named granules, identified with regard to similar description, functionality or behavior. An interesting characteristic of granular computation is that it offers a multi-perspective view of data depending on the required resolution level. Data granules identified on different levels of resolution form a hierarchical structure expressing relations between objects of data. A method proposed in this article performs creation of data granules by clustering data in the form of hyperboxes. The results are compared with clustering of point-type data with regard to complexity, quality and interpretability.
    1 INTRODUCTION: Granular computing (GrC) is a new multidisciplinary theory rapidly developed in recent years. The most common definitions of GrC (Yao, 2006), (Zadeh, 2001) include a postulate of computing with information granules, that is collections of objects that exhibit similarity in terms of their properties or functional appearance. … levels of resolution form a hierarchical structure expressing relations between objects of data. Such a structure can be used to facilitate investigation and helps to understand complex systems. Understanding of the analyzed problem and the attained results are main aspects of human-oriented systems. There are also definitions of granular computing additionally concentrating on systems supporting human beings …
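The abstract describes building data granules as hyperboxes and then clustering the granules instead of raw points. As a minimal sketch of the first step (not the paper's algorithm), the snippet below groups points into grid cells and summarizes each non-empty cell as a hyperbox with min/max corners and a point count; the grid-based grouping and the cell size are assumptions made for illustration.

```python
import numpy as np

def build_hyperboxes(X, cell_size=1.0):
    """Group points into grid cells and represent each non-empty cell as a
    hyperbox granule: (min corner, max corner, number of points)."""
    cells = {}
    for x in X:
        key = tuple(np.floor(x / cell_size).astype(int))
        cells.setdefault(key, []).append(x)
    boxes = []
    for pts in cells.values():
        pts = np.array(pts)
        boxes.append((pts.min(axis=0), pts.max(axis=0), len(pts)))
    return boxes

# Toy data: two noisy clouds; granulate, then inspect the resulting granules
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
for lo, hi, count in build_hyperboxes(X, cell_size=1.0):
    print(f"box [{lo.round(2)} .. {hi.round(2)}] holds {count} points")
```

The hyperbox midpoints (optionally weighted by their point counts) could then be passed to any standard clustering routine, so the clustering operates on far fewer granules than raw points.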
  • Event Abstraction in Process Mining: Literature Review and Taxonomy
    Granular Computing (2021) 6:719–736. https://doi.org/10.1007/s41066-020-00226-2. Original Paper. Event abstraction in process mining: literature review and taxonomy. Sebastiaan J. van Zelst, Felix Mannhardt, Massimiliano de Leoni, Agnes Koschmider. Received: 15 October 2019 / Accepted: 25 April 2020 / Published online: 27 May 2020. © The Author(s) 2020.
    Abstract: The execution of processes in companies generates traces of event data, stored in the underlying information system(s), capturing the actual execution of the process. Analyzing event data, i.e., the focus of process mining, yields a detailed understanding of the process, e.g., we are able to discover the control flow of the process and detect compliance and performance issues. Most process mining techniques assume that the event data are of the same and/or appropriate level of granularity. However, in practice, the data are extracted from different systems, e.g., systems for customer relationship management, Enterprise Resource Planning, etc., record the events at different granularity levels. Hence, pre-processing techniques that allow us to abstract event data into the right level of granularity are vital for the successful application of process mining. In this paper, we present a literature study, in which we assess the state-of-the-art in the application of such event abstraction techniques in the field of process mining. The survey is accompanied by a taxonomy of the existing approaches, which we exploit to highlight interesting novel directions.
    Keywords: Granular computing · Process mining · Sequential data · Label refinement · Event abstraction
    1 Introduction: In modern organizations, the execution of business pro… … performance from different perspectives: reduced lead time, higher revenue, higher customer satisfaction, better compliance with internal/external regulations, etc.
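The survey concerns lifting event data to an appropriate level of granularity before applying process mining. The simplest family of such techniques maps low-level event labels to high-level activities and collapses consecutive repetitions; the sketch below shows that idea with a hypothetical label mapping, and is not any specific method from the surveyed literature.

```python
from itertools import groupby

# Hypothetical mapping from low-level system events to high-level activities
LABEL_MAP = {
    "open_form": "Register request",
    "fill_field": "Register request",
    "submit_form": "Register request",
    "assign_reviewer": "Review",
    "write_report": "Review",
    "send_decision": "Decide",
}

def abstract_trace(trace):
    """Map each low-level event to its high-level activity and collapse
    consecutive repetitions, yielding a coarser-granularity trace."""
    mapped = [LABEL_MAP.get(event, event) for event in trace]
    return [activity for activity, _ in groupby(mapped)]

trace = ["open_form", "fill_field", "fill_field", "submit_form",
         "assign_reviewer", "write_report", "send_decision"]
print(abstract_trace(trace))  # ['Register request', 'Review', 'Decide']
```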
  • Model-Based Feature Selection Based on Radial Basis Functions and Information Measures
    This is a repository copy of "Model-Based Feature Selection Based on Radial Basis Functions and Information Measures". White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/116414/. Version: Accepted Version.
    Proceedings Paper: Tzagarakis, G. and Panoutsos, G. (2016) Model-Based Feature Selection Based on Radial Basis Functions and Information Measures. In: 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016 IEEE World Congress on Computational Intelligence (WCCI), 24-29 July 2016, Vancouver, BC, Canada. Institute of Electrical and Electronics Engineers (IEEE), pp. 401-407. ISBN 9781509006267. https://doi.org/10.1109/FUZZ-IEEE.2016.7737715
  • Granular Soft Computing: a Paradigm in Information Processing
    Granular Soft Computing: A Paradigm in Information Processing. Saroj K. Meher, Systems Science and Informatics Unit, Indian Statistical Institute, Bangalore.
    Granular computing (GrC) outline: Introduction; Definitions and framework; Background; Philosophy; Advantages; Structure; Basic issues; Computing in GrC domain; Concluding remarks.
    Granular Computing (examples): 1. For travelling, one needs to know about weather conditions like sunny, cloudy or rainy, instead of the exact temperature. 2. While establishing a coarse view of the world map, we deal with high-level information like continents, countries, and oceans; when more details are required, we move down to regions, provinces, states, seas, etc. 3. On a low-resolution satellite image, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the pattern of streets, buildings, etc. 4. Technical writing: one can easily observe multiple levels of processing information in any technical writing: high level of information (title, abstract); middle levels of information (chapter/section titles, subsection titles, subsubsection titles); low level of information (text).
    Granular Computing in summary: a similar interpretation also holds good, or is generally true, for all data: At …
  • Granular Approach of Knowledge Discovery in Databases
    Volume 2, No. 1, Jan-Feb 2011, ISSN No. 0976-5697. International Journal of Advanced Research in Computer Science. Research Paper. Available online at www.ijarcs.info. Granular Approach of Knowledge Discovery in Databases. P. R. Pal (P G Deptt. of Computer Science & Applications, Shri Vaishnav Institute of Management, Indore, M.P., India, [email protected]); R. C. Jain (Department of Computer Applications, Samrat Ashok Technological Institute (Degree), Vidisha, M.P., India, [email protected]).
    Abstract: The main objective of this paper is to examine the basic principles of granular computing and apply them for knowledge discovery in databases. Granular computing is an emerging field of research that provides a way of studying many issues and solving problems related to them. The paper consists of an introduction and overview of granular computing. It also examines some of those issues, including data and knowledge representation and processing. It is demonstrated that one of the fundamental tasks of data mining is finding knowledge at the right level of granularity in data and knowledge representation. Finally, it concludes that granular computing is a powerful view that can be used to model many problems.
    Keywords: KDD, Granular Computing, Data Mining.
    I. INTRODUCTION: Granular Computing is a general computation theory for effectively using granules such as classes, clusters, subsets, groups and intervals to build an efficient computational model for complex applications with huge amounts of data, information and knowledge. … presents an overview of knowledge discovery in databases. Section 4 describes the role of granular computing in knowledge discovery and at last some remarks are given in section 5. II. OVERVIEW OF GRANULAR COMPUTING …
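The introduction above treats granules such as classes and subsets as the building blocks for knowledge discovery. In rough-set-based granular computing, the basic granules are equivalence classes of objects that are indiscernible on a chosen attribute subset; the sketch below computes such granules from a small, made-up decision table (attribute names and values are illustrative, not from the paper).

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Partition object ids into granules (equivalence classes) of objects
    that share the same values on the chosen attributes."""
    granules = defaultdict(list)
    for obj_id, row in table.items():
        key = tuple(row[a] for a in attributes)
        granules[key].append(obj_id)
    return dict(granules)

# Hypothetical decision table: coarser attribute subsets give coarser granules
table = {
    1: {"age": "young", "income": "low",  "buys": "no"},
    2: {"age": "young", "income": "high", "buys": "yes"},
    3: {"age": "old",   "income": "low",  "buys": "no"},
    4: {"age": "young", "income": "low",  "buys": "no"},
}
print(indiscernibility_classes(table, ["age"]))            # 2 coarse granules
print(indiscernibility_classes(table, ["age", "income"]))  # 3 finer granules
```

Choosing which attribute subset to granulate on is exactly the "right level of granularity" question the abstract raises: fewer attributes give coarser, more general granules; more attributes give finer, more specific ones.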