
A Framework for Clustering Uncertain Data

Erich Schubert, Alexander Koos, Tobias Emrich, Andreas Züfle, Klaus Arthur Schmid, Arthur Zimek
Ludwig-Maximilians-Universität München
Oettingenstr. 67, 80538 Munich, Germany
http://www.dbs.ifi.lmu.de
{schube,koos,emrich,zuefle,schmid,zimek}@dbs.ifi.lmu.de

ABSTRACT

The challenges associated with handling uncertain data, in particular with querying and mining, are finding increasing attention in the research community. Here we focus on clustering uncertain data and describe a general framework for this purpose that also allows us to visualize and understand the impact of uncertainty (using different uncertainty models) on the data mining results. Our framework constitutes release 0.7 of ELKI (http://elki.dbs.ifi.lmu.de/) and thus comes along with a plethora of implementations of algorithms, distance measures, indexing techniques, evaluation measures, and visualization components.

1. INTRODUCTION

Given high-quality, reliable, up-to-date, exact, and sufficiently large data, clustering is often used to support advanced and educated decision making in many application domains in economics, health-care, science, and many more. Consequently, a large number of clustering algorithms have been developed to cope with different application scenarios. However, our ability to unearth valuable knowledge from large sets of data is often impaired by the quality of the data: data may be imprecise (e.g., due to measurement errors), data can be obsolete (e.g., when a dynamic database is not up-to-date), data may originate from unreliable sources (such as crowd-sourcing), the volume of the dataset may be too small to answer questions reliably [8], or it may be blurred to prevent privacy threats and to protect user anonymity [20].

Simply ignoring that data objects are imprecise, obsolete, unreliable, sparse, or cloaked, thus pretending the data were accurate, current, reliable, and sufficiently large, is a common source of false decision making. A different approach accepts these sources of error and creates models of what the true (yet admittedly unknown) data may look like. This is the notion of handling uncertain data [4]. The challenge in handling uncertain data is to obtain reliable results despite the presence of uncertainty. This challenge has received a strong research focus, by both industry and academia, in the last five years. "Veracity" has often been named as the fourth "V" of big data in addition to volume, velocity, and variety. Adequate methods need to quantify the uncertainty in the data using proper models of uncertainty, and then to propagate the uncertainty through the data mining process, in order to obtain data mining results associated with significance and reliability information.

This demonstration targets the problem of how to derive a meaningful clustering from an uncertain dataset. For this purpose, we extend the ELKI framework [3] to handle uncertain data. ELKI is an open source (AGPLv3) data mining software written in Java aimed at users in research and algorithm development, with an emphasis on unsupervised methods such as cluster analysis and outlier detection. We give a short overview of our new release, ELKI 0.7, in Section 2.1. Additionally, we make the following contributions to handle uncertain data in a general way:

• ELKI 0.7 adds support for the most commonly used uncertainty models (Section 2.2). In particular, ELKI 0.7 provides an uncertain database sampler, which derives multiple database samples from an uncertain database using the configured uncertainty model.
• The ELKI visualization tools have been extended to support (the clustering of) uncertain data. Thereby, ground-truth data, observed data, as well as various sampled databases and their corresponding clusterings can be analyzed visually. This allows for getting an intuition of how uncertainty affects traditional clustering results. We describe this in more detail in Section 2.3.
• Comparison algorithms for clustering uncertain data for specific uncertainty models have been added to ELKI 0.7 (see Section 2.4). The ELKI framework can easily be extended by users to support their favorite algorithms.
• Traditional clustering algorithms as implemented in ELKI can be applied to sampled databases, and the clustering results can then be unified using the approach of Züfle et al. [22] as sketched in Section 2.5 (a conceptual code sketch of this sample-and-cluster workflow is given below).

We outline the demonstration scenario in Section 3 and close with details on the public availability of our open source (AGPLv3) implementation in Section 4.
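To make the workflow behind the last two contributions concrete, the following minimal Java sketch illustrates the general idea of repeatedly sampling a possible world from an uncertain database and clustering each sample with a traditional algorithm. This is a conceptual illustration only: the types UncertainObject and Clustering and the method names used here are hypothetical placeholders and do not correspond to the actual ELKI API.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Function;

// Conceptual sketch (not ELKI code): cluster an uncertain database by
// repeatedly sampling a possible world and clustering each sample.
public class PossibleWorldsSketch {

  // Hypothetical uncertain object: can draw one possible position.
  interface UncertainObject {
    double[] drawSample(Random rnd);
  }

  // A clustering is represented here simply as one cluster label per object.
  static final class Clustering {
    final int[] labels;
    Clustering(int[] labels) { this.labels = labels; }
  }

  // Draw numWorlds possible worlds and cluster each of them with a
  // traditional (certain-data) clustering algorithm.
  static List<Clustering> clusterPossibleWorlds(List<UncertainObject> uncertainDb,
      Function<List<double[]>, Clustering> clusterer, int numWorlds, long seed) {
    Random rnd = new Random(seed);
    List<Clustering> results = new ArrayList<>();
    for (int w = 0; w < numWorlds; w++) {
      // Materialize one possible world: one sample per uncertain object.
      List<double[]> world = new ArrayList<>(uncertainDb.size());
      for (UncertainObject o : uncertainDb) {
        world.add(o.drawSample(rnd));
      }
      results.add(clusterer.apply(world));
    }
    // A second, meta-clustering step (cf. Section 2.5) would aggregate these
    // per-sample clusterings, e.g., by selecting representative clusterings.
    return results;
  }
}

In the framework itself, this outer sampling loop and the subsequent aggregation are provided by the application described in Sections 2.1 and 2.5.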
2. THE FRAMEWORK

This project is an extension of the ELKI framework [3] (http://elki.dbs.ifi.lmu.de/). Based on this framework, we aim at providing a platform to design, experiment with, and evaluate algorithms for uncertain data, as we will be instantly able to use the provided functionality.

2.1 General Functionality of ELKI

ELKI uses a modular and extensible architecture. Many algorithms in ELKI are implemented based on general distance functions and neighborhood queries, but are agnostic to the underlying data type or distance. Functionality provided by ELKI includes:

• input readers for many popular file formats, such as CSV, ARFF, and the libSVM format;
• distance functions, including set-based distances, distribution-based distances, and string dissimilarities;
• clustering algorithms, including many k-means and hierarchical clustering variations, density-based algorithms such as DBSCAN and OPTICS, but also subspace clustering and correlation clustering algorithms;
• unsupervised outlier detection algorithms [2];
• data indexing methods such as R*-tree variations, M-tree variations, VA-file, and LSH that can be used to accelerate many algorithms;
• evaluation measures such as the adjusted Rand index (ARI) [17], Fowlkes-Mallows [16], BCubed [7], mutual-information-based, and entropy-based measures (an illustrative sketch of the ARI computation is given at the end of this subsection);
• a modular visualization architecture including scatterplots and parallel coordinates, using an SVG renderer to produce high-quality vector graphics.

ELKI can be extended by implementing the appropriate interfaces. The provided UIs for ELKI will automatically detect the new implementations and allow simple configuration of experiments without the need to write further code. However, not all functionality required for analyzing uncertain data can be added using such extensions. In particular, sampling possible worlds will require an additional processing loop around the algorithms, and a second meta-clustering phase to aggregate these results. An application providing such more complex solutions is a core contribution of this demonstration and will be sketched below (Section 2.5).
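Evaluation measures such as the ARI are also useful for comparing the clusterings obtained on different samples with each other. As an illustration, the following self-contained method computes the adjusted Rand index between two clusterings given as integer label arrays. It is the standard pair-counting formulation [17], written from scratch rather than taken from the ELKI implementation, and it assumes cluster labels are consecutive integers starting at 0.

import java.util.Arrays;

// Textbook pair-counting formulation of the adjusted Rand index (ARI).
// Not the ELKI implementation; labels are assumed to be 0-based integers.
public final class AdjustedRandIndex {

  public static double ari(int[] labelsA, int[] labelsB) {
    if (labelsA.length != labelsB.length) {
      throw new IllegalArgumentException("Clusterings must cover the same objects.");
    }
    final int n = labelsA.length;
    final int ka = Arrays.stream(labelsA).max().orElse(-1) + 1;
    final int kb = Arrays.stream(labelsB).max().orElse(-1) + 1;
    long[][] cont = new long[ka][kb]; // contingency table
    for (int i = 0; i < n; i++) {
      cont[labelsA[i]][labelsB[i]]++;
    }
    double sumPairs = 0, sumA = 0, sumB = 0;
    long[] rowSums = new long[ka], colSums = new long[kb];
    for (int i = 0; i < ka; i++) {
      for (int j = 0; j < kb; j++) {
        sumPairs += choose2(cont[i][j]);
        rowSums[i] += cont[i][j];
        colSums[j] += cont[i][j];
      }
    }
    for (long r : rowSums) { sumA += choose2(r); }
    for (long c : colSums) { sumB += choose2(c); }
    double expected = sumA * sumB / choose2(n);
    double maximum = 0.5 * (sumA + sumB);
    return (sumPairs - expected) / (maximum - expected);
  }

  // Number of unordered pairs that can be formed from x objects.
  private static double choose2(long x) {
    return x * (x - 1) / 2.0;
  }
}

The actual ELKI implementations are more general (for example with respect to label handling); the sketch above is meant only to convey the underlying formula.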
[Figure 1: Uncertain Data Models (attribute uncertainty vs. existential uncertainty; discrete vs. continuous models: uniform, normal, GMM)]

2.2 Supported Uncertain Data Models

The most common discrete and continuous data models for uncertain data (cf. Figure 1) have been implemented in ELKI. Let us outline the implemented models briefly: A pioneering uncertainty model is the existential uncertainty model [13, 10], where each data record is associated with a probability of existing at all.

For the case of continuous probability density functions, ELKI provides classic parametric models for which the corresponding probability density function and cumulative distribution function can be specified. As standard parametric functions, ELKI offers support for uniform distributions and normal distributions. Then, for each object, the corresponding parameter values can be passed to ELKI, either by selecting attributes of a relation as parameter values, or by reading the parameter values from a file. In addition, mixture models are supported. For these models, a number of parametric probability density functions can be provided, each associated with a probability. This allows supporting Gaussian mixture models as used by Böhm et al. [11]. For both cases (discrete or continuous distributions) ELKI provides data parsers and helper classes to link data records to their corresponding p.m.f. or p.d.f. (a simplified illustration of such per-object models is sketched below).

2.3 Visualization Tools

The ELKI visualization tools have been extended to support clustering of uncertain data. The corresponding view (cf. Figure 2) can switch between the following perspectives: (1) The result of a clustering algorithm C on the ground-truth, if it is available, gives an intuition of how the clustering should look without the presence of uncertainty. (2) The result of C on random samples gives insight into how the possible clusterings vary and how the uncertainty affects traditional clustering results. (3) The representative clusterings (cf. Section 2.5) give a summarized view of the possible clusterings and allow for educated decision making. For all these perspectives we can utilize the existing visualization toolkit of ELKI, including scatterplots (2-dimensional projections for each pair of attributes), 1-dimensional attribute distributions,
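To illustrate how per-object uncertainty models like those of Section 2.2 (cf. Figure 1) can be realized under a common sampling interface, the following Java sketch implements a discrete model (a finite set of alternative positions with probabilities), a uniform box model, and an independent Gaussian model. The interface and class names are hypothetical simplifications for this illustration and do not reflect the actual ELKI class hierarchy; in particular, existential uncertainty and mixture models are omitted for brevity.

import java.util.Random;

// Hypothetical per-object uncertainty models (not the actual ELKI classes):
// each model can draw one possible position of its uncertain object.
interface UncertaintyModel {
  double[] drawSample(Random rnd);
}

// Discrete model: finitely many alternative positions with probabilities
// (assumed to sum to 1; existential uncertainty is not modeled here).
final class DiscreteModel implements UncertaintyModel {
  private final double[][] alternatives;
  private final double[] probabilities;

  DiscreteModel(double[][] alternatives, double[] probabilities) {
    this.alternatives = alternatives;
    this.probabilities = probabilities;
  }

  @Override public double[] drawSample(Random rnd) {
    double u = rnd.nextDouble(), cumulative = 0;
    for (int i = 0; i < alternatives.length; i++) {
      cumulative += probabilities[i];
      if (u < cumulative) {
        return alternatives[i];
      }
    }
    return alternatives[alternatives.length - 1]; // guard against rounding
  }
}

// Continuous model: uniform distribution within an axis-parallel box.
final class UniformBoxModel implements UncertaintyModel {
  private final double[] min, max;

  UniformBoxModel(double[] min, double[] max) {
    this.min = min;
    this.max = max;
  }

  @Override public double[] drawSample(Random rnd) {
    double[] s = new double[min.length];
    for (int d = 0; d < s.length; d++) {
      s[d] = min[d] + rnd.nextDouble() * (max[d] - min[d]);
    }
    return s;
  }
}

// Continuous model: independent normal distribution in each attribute.
final class GaussianModel implements UncertaintyModel {
  private final double[] mean, stddev;

  GaussianModel(double[] mean, double[] stddev) {
    this.mean = mean;
    this.stddev = stddev;
  }

  @Override public double[] drawSample(Random rnd) {
    double[] s = new double[mean.length];
    for (int d = 0; d < s.length; d++) {
      s[d] = mean[d] + rnd.nextGaussian() * stddev[d];
    }
    return s;
  }
}

The distribution parameters (such as the box bounds or mean and standard deviation) would, as described in Section 2.2, either be taken from attributes of the relation or read from a separate file.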