Optimal Grid-Clustering: Towards Breaking the Curse of Dimensionality in High-Dimensional Clustering

Total Pages: 16

File Type: PDF, Size: 1020 KB

Optimal Grid-Clustering: Towards Breaking the Curse of Dimensionality in High-Dimensional Clustering

Alexander Hinneburg ([email protected]) and Daniel A. Keim ([email protected])
Institute of Computer Science, University of Halle
Kurt-Mothes-Str. 1, 06120 Halle (Saale), Germany

Abstract

Many applications require the clustering of large amounts of high-dimensional data. Most clustering algorithms, however, do not work effectively and efficiently in high-dimensional space, which is due to the so-called "curse of dimensionality". In addition, the high-dimensional data often contains a significant amount of noise which causes additional effectiveness problems. In this paper, we review and compare the existing algorithms for clustering high-dimensional data and show the impact of the curse of dimensionality on their effectiveness and efficiency. The comparison reveals that condensation-based approaches (such as BIRCH or STING) are the most promising candidates for achieving the necessary efficiency, but it also shows that basically all condensation-based approaches have severe weaknesses with respect to their effectiveness in high-dimensional space. To overcome these problems, we develop a new clustering technique called OptiGrid which is based on constructing an optimal grid-partitioning of the data. The optimal grid-partitioning is determined by calculating the best partitioning hyperplanes for each dimension (if such a partitioning exists) using certain projections of the data. The advantages of our new approach are (1) it has a firm mathematical basis, (2) it is by far more effective than existing clustering algorithms for high-dimensional data, and (3) it is very efficient even for large data sets of high dimensionality. To demonstrate the effectiveness and efficiency of our new approach, we perform a series of experiments on a number of different data sets including real data sets from CAD and molecular biology. A comparison with one of the best known algorithms (BIRCH) shows the superiority of our new approach.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment.

Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.

1 Introduction

Because of the fast technological progress, the amount of data which is stored in databases increases very fast. This is true for traditional relational databases but also for databases of complex 2D and 3D multimedia data such as image, CAD, geographic, and molecular biology data. It is obvious that relational databases can be seen as high-dimensional databases (the attributes correspond to the dimensions of the data set), but it is also true for multimedia data which - for an efficient retrieval - is usually transformed into high-dimensional feature vectors such as color histograms [SH94], shape descriptors [Jag91, MG95], Fourier vectors [WW80], and text descriptors [Kuk92]. In many of the mentioned applications, the databases are very large and consist of millions of data objects with several tens to a few hundreds of dimensions.

Automated clustering in high-dimensional databases is an important problem and there are a number of different clustering algorithms which are applicable to high-dimensional data. The most prominent representatives are partitioning algorithms such as CLARANS [NH94], hierarchical clustering algorithms, and locality-based clustering algorithms such as (G)DBSCAN [EKSX96, EKSX97] and DBCLASD [XEKS98]. The basic idea of partitioning algorithms is to construct a partition of the database into k clusters which are represented by the gravity of the cluster (k-means) or by one representative object of the cluster (k-medoid). Each object is assigned to the closest cluster. A well-known partitioning algorithm is CLARANS, which uses a randomised and bounded search strategy to improve the performance.
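To make the partitioning idea concrete, here is a minimal k-means sketch in Python (NumPy assumed; the code and all names are illustrative, not from the paper): each object is assigned to the closest of k cluster centers, and each center is the gravity (mean) of its cluster.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means: clusters are represented by their gravity (mean)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initial representatives
    for _ in range(n_iter):
        # Assign each object to the closest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the gravity of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Usage: two Gaussian clusters in 10 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(5, 1, (100, 10))])
labels, centers = kmeans(X, k=2)
```

A k-medoid variant would replace the mean with the cluster member minimizing the total distance to the other members.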
Hierarchical clustering algorithms decompose the database into several levels of partitionings which are usually represented by a dendrogram - a tree which splits the database recursively into smaller subsets. The dendrogram can be created top-down (divisive) or bottom-up (agglomerative). Although hierarchical clustering algorithms can be very effective in knowledge discovery, the cost of creating the dendrograms is prohibitively expensive for large data sets since the algorithms are usually at least quadratic in the number of data objects. More efficient are locality-based clustering algorithms since they usually group neighboring data elements into clusters based on local conditions and therefore allow the clustering to be performed in one scan of the database. DBSCAN, for example, uses a density-based notion of clusters and allows the discovery of arbitrarily shaped clusters. The basic idea is that for each point of a cluster the density of data points in the neighborhood has to exceed some threshold. DBCLASD also works locality-based but in contrast to DBSCAN assumes that the points inside of the clusters are randomly distributed, allowing DBCLASD to work without any input parameters.

A problem is that most approaches are not designed for a clustering of high-dimensional data and therefore, the performance of existing algorithms degenerates rapidly with increasing dimension. To improve the efficiency, optimised clustering techniques have been proposed.
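As an illustration of the density-based notion of clusters used by DBSCAN, here is a small sketch (not the paper's or DBSCAN's actual code; NumPy assumed, thresholds illustrative) of the core condition: a point can belong to a cluster only if its eps-neighborhood contains at least min_pts points.

```python
import numpy as np

def is_core_point(X, i, eps=0.5, min_pts=5):
    """DBSCAN-style density test: does the eps-neighborhood of X[i]
    contain at least min_pts points (including X[i] itself)?"""
    dists = np.linalg.norm(X - X[i], axis=1)
    return np.sum(dists <= eps) >= min_pts

# Usage: points in a dense Gaussian blob pass the test, isolated noise does not.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (200, 3)), [[5.0, 5.0, 5.0]]])
print(is_core_point(X, 0))    # likely True: inside the dense blob
print(is_core_point(X, 200))  # False: the isolated noise point
```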
Examples include grid-based clustering [Sch96], BIRCH [ZRL96] which is based on the Cluster-Feature-tree, STING which uses a quadtree-like structure containing additional statistical information [WYM97], and DENCLUE which uses a regular grid to improve the efficiency [HK98]. Unfortunately, the curse of dimensionality also has a severe impact on the effectiveness of the resulting clustering. So far, this effect has not been examined thoroughly for high-dimensional data, but a detailed comparison shows severe problems in effectiveness (cf. section 2), especially in the presence of noise. In our comparison, we analyse the impact of the dimensionality on the effectiveness and efficiency of a number of well-known and competitive clustering algorithms. We show that they either suffer from a severe breakdown in efficiency (which is at least true for all index-based methods) or have severe effectiveness problems (which is basically true for all other methods). The experiments show that even for simple data sets (e.g. a data set with two clusters given as normal distributions and a little bit of noise), basically none of the fast algorithms guarantees to find the correct clustering.

From our analysis, it gets clear that only condensation-based approaches (such as BIRCH or DENCLUE) can provide the necessary efficiency for clustering large data sets. To better understand the severe effectiveness problems of the existing approaches, we examine the impact of high dimensionality on condensation-based approaches, especially grid-based approaches (cf. section 3). The discussion in section 3 reveals the source of the problems, namely the inadequate partitioning of the data. Our new approach is therefore based on using projections of the data to determine the optimal cutting (hyper-)planes for partitioning the data. If no good partitioning plane exists in some dimensions, we do not partition the data set in those dimensions. Our strategy of using a data-dependent partitioning of the data avoids the effectiveness problems of the existing approaches and guarantees that all clusters are found by the algorithm (even for high noise levels), while still retaining the efficiency of a grid-based approach. By using the highly-populated grid cells based on the optimal partitioning of the data, we are able to efficiently determine the clusters. A detailed evaluation in section 5 shows the advantages of our approach. We show theoretically that our approach guarantees to find all center-defined clusters (which roughly spoken correspond to clusters generated by a normal distribution). We confirm the effectiveness lemma by an extensive experimental evaluation on a wide range of synthetic and real data sets, showing the superior effectiveness of our new approach. In addition to the effectiveness, we also examine the efficiency, showing that our approach is competitive with the fastest existing algorithms (BIRCH) and (in some cases) even outperforms BIRCH by up to a factor of about 2.

2 Clustering of High-Dimensional Data

In this section, we discuss and compare the most efficient and effective available clustering algorithms and examine their potential for clustering large high-dimensional data sets. We show the impact of the curse of dimensionality and reveal severe efficiency and effectiveness problems of the existing approaches.

2.1 Related Approaches

The most efficient clustering algorithms for low-dimensional data are based on some type of hierarchical data structure. The data structures are either based on a hierarchical partitioning of the data or a hierarchical partitioning of the space. All techniques which are based on partitioning the data, such as R-trees, do not work efficiently due to the performance degeneration of R-tree-based index structures in high-dimensional space. This is true for algorithms such as DBSCAN [EKSX96], which has an almost quadratic time complexity for high-dimensional data if the R*-tree-based implementation is used. Even if a special indexing technique for high-dimensional data is used, all approaches which determine the clustering based on near(est) neighbor information do not work effectively since the near(est) neighbors do not contain sufficient information about the density of the data in high-dimensional space (cf. …
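To make the introduction's cutting-plane idea concrete, the following sketch projects the data onto each dimension, looks for a low-density valley between populated regions, and cuts only in dimensions where such a good partitioning plane exists. This is a simplified illustration of the idea, not the paper's OptiGrid algorithm; the histogram density estimate and the thresholds are my assumptions (NumPy assumed).

```python
import numpy as np

def best_cutting_plane(X, dim, n_bins=50):
    """Project the data onto one dimension and look for a low-density valley
    between populated regions. Returns the cut coordinate, or None if no
    good partitioning plane exists in this dimension."""
    hist, edges = np.histogram(X[:, dim], bins=n_bins)
    density = hist / hist.sum()
    populated = np.where(density > 0.5 * density.max())[0]  # well-populated bins
    left, right = populated[0], populated[-1]
    if right - left < 2:
        return None                           # one compact populated region: no cut
    valley = left + density[left:right + 1].argmin()
    if density[valley] > 0.1 * density.max():
        return None                           # no clear low-density valley: no cut
    return 0.5 * (edges[valley] + edges[valley + 1])

def grid_partition(X):
    """Build a data-dependent grid from one optimal cut per dimension (if any)."""
    cuts = {d: best_cutting_plane(X, d) for d in range(X.shape[1])}
    cuts = {d: c for d, c in cuts.items() if c is not None}
    # Each point's grid cell is the side of every cutting plane it falls on.
    cells = [tuple(int(x[d] > c) for d, c in cuts.items()) for x in X]
    return cuts, cells

# Usage: two Gaussian clusters separated along dimension 0; dimensions 1-4 are noise.
rng = np.random.default_rng(2)
X = rng.normal(0, 1, (1000, 5))
X[:500, 0] += 8.0                             # shift half the points along dim 0
cuts, cells = grid_partition(X)
print(cuts)                                    # expect one cut near 4.0 in dim 0 only
```

Because no cut is made in the pure-noise dimensions, the grid stays coarse and the highly-populated cells directly correspond to the two clusters.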
Recommended publications
  • Curse of Dimensionality for TSK Fuzzy Neural Networks
Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions
Yuqi Cui, Dongrui Wu and Yifan Xu
School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
{yqcui, drwu, yfxu}@hust.edu.cn

Abstract—Takagi-Sugeno-Kang (TSK) fuzzy system with Gaussian membership functions (MFs) is one of the most widely used fuzzy systems in machine learning. However, it usually has difficulty handling high-dimensional datasets. This paper explores why TSK fuzzy systems with Gaussian MFs may fail on high-dimensional inputs. After transforming defuzzification to an equivalent form of softmax function, we find that the poor performance is due to the saturation of softmax. We show that two defuzzification operations, LogTSK and HTSK, the latter of which is first proposed in this paper, can avoid the saturation. Experimental results on datasets with various dimensionalities validated our analysis and demonstrated the effectiveness of LogTSK and HTSK.

Index Terms—Mini-batch gradient descent, fuzzy neural network, high-dimensional TSK fuzzy system, HTSK, LogTSK

…usually use distance based approaches to compute membership grades, so when it comes to high-dimensional datasets, the fuzzy partitions may collapse. For instance, FCM is known to have trouble handling high-dimensional datasets, because the membership grades of different clusters become similar, leading all centers to move to the center of the gravity [18]. Most previous works used feature selection or dimensionality reduction to cope with high-dimensionality. Model-agnostic feature selection or dimensionality reduction algorithms, such as Relief [19] and principal component analysis (PCA) [20], [21], can filter the features before feeding them into TSK models. Neural networks pre-trained on large datasets can also be used as feature extractor to generate high-level features with low dimensionality [22], [23].
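A small numeric illustration of the saturation effect described above (my example, not the authors' code; NumPy assumed): with Gaussian MFs, the log firing levels entering the softmax are negative squared distances that grow with the input dimension, so one rule quickly dominates all others.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    x = rng.normal(size=d)
    centers = rng.normal(size=(5, d))             # 5 Gaussian-MF rule centers
    logits = -np.sum((x - centers) ** 2, axis=1)  # log firing levels, scale ~ d
    p = softmax(logits)
    print(d, round(float(p.max()), 4))  # max weight -> 1.0 as d grows: saturation
```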
  • High Dimensional Geometry, Curse of Dimensionality, Dimension Reduction Lecturer: Pravesh Kothari Scribe:Sanjeev Arora
princeton univ. F'16 cos 521: Advanced Algorithm Design
Lecture 9: High Dimensional Geometry, Curse of Dimensionality, Dimension Reduction
Lecturer: Pravesh Kothari, Scribe: Sanjeev Arora

High-dimensional vectors are ubiquitous in applications (gene expression data, set of movies watched by Netflix customer, etc.) and this lecture seeks to introduce high dimensional geometry. We encounter the so-called curse of dimensionality, which refers to the fact that algorithms are simply harder to design in high dimensions and often have a running time exponential in the dimension. We also encounter the blessings of dimensionality, which allows us to reason about higher dimensional geometry using tools like Chernoff bounds. We also show the possibility of dimension reduction: it is sometimes possible to reduce the dimension of a dataset, for some purposes.

Notation: For a vector x ∈ R^d its ℓ2-norm is |x|_2 = (Σ_i x_i^2)^{1/2} and the ℓ1-norm is |x|_1 = Σ_i |x_i|. For any two vectors x, y their Euclidean distance refers to |x − y|_2 and Manhattan distance refers to |x − y|_1.

High dimensional geometry is inherently different from low-dimensional geometry.

Example 1: Consider how many almost orthogonal unit vectors we can have in space, such that all pairwise angles lie between 88 degrees and 92 degrees. In R^2 the answer is 2. In R^3 it is 3. (Prove these to yourself.) In R^d the answer is exp(cd) where c > 0 is some constant.

Figure 1: Number of almost-orthogonal vectors in R^2, R^3, R^d.

Example 2: Another example is the ratio of the volume of the unit sphere to its circumscribing cube (i.e. …
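A quick empirical check of Example 1 (my sketch, not from the lecture notes; NumPy assumed): random unit vectors in high dimension are nearly orthogonal with high probability, which is why exponentially many can coexist with pairwise angles near 90 degrees.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in [2, 3, 100, 10000]:
    # Sample 200 random unit vectors in R^d.
    V = rng.normal(size=(200, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # Pairwise angles (degrees) between distinct vectors.
    cos = (V @ V.T)[np.triu_indices(200, k=1)]
    angles = np.degrees(np.arccos(np.clip(cos, -1, 1)))
    frac = np.mean((angles > 88) & (angles < 92))
    print(d, round(float(frac), 3))  # fraction of almost-orthogonal pairs -> 1
```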
  • Clustering for High Dimensional Data: Density Based Subspace Clustering Algorithms
International Journal of Computer Applications (0975 – 8887), Volume 63, No. 20, February 2013
Clustering for High Dimensional Data: Density based Subspace Clustering Algorithms
Sunita Jahirabadkar and Parag Kulkarni
Department of Computer Engineering & I.T., College of Engineering, Pune, India

ABSTRACT: Finding clusters in high dimensional data is a challenging task as the high dimensional data comprises hundreds of attributes. Subspace clustering is an evolving methodology which, instead of finding clusters in the entire feature space, aims at finding clusters in various overlapping or non-overlapping subspaces of the high dimensional dataset. Density based subspace clustering algorithms treat clusters as the dense regions compared to noise or border regions. Many momentous density based subspace clustering algorithms exist in the literature. Each of them is characterized by different characteristics caused by different assumptions, input parameters or by the use of different techniques etc. Hence it is quite unfeasible for the future developers to compare all these algorithms using one common …

…hiding clusters in noisy data. Then, a common approach is to reduce the dimensionality of the data, of course, without losing important information. Fundamental techniques to eliminate such irrelevant dimensions can be considered as Feature Selection or Feature Transformation techniques [5]. Feature transformation methods such as aggregation, dimensionality reduction etc. project the higher dimensional data onto a smaller space while preserving the distance between the original data objects. The commonly used methods are Principal Component Analysis [6, 7], Singular Value Decomposition [8] etc. The major limitation of feature transformation approaches is, they do not actually remove any of the attributes and hence the information from the not-so-useful dimensions is preserved, making the clusters less meaningful.
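To illustrate the feature-transformation route mentioned above, here is a minimal PCA-by-SVD sketch (my illustration, not from the survey; NumPy assumed): project the data onto the top principal components, approximately preserving distances while reducing dimension.

```python
import numpy as np

def pca_project(X, n_components):
    """Project data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                  # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # coordinates in the reduced space

# Usage: 100-dimensional data whose variance lives in ~5 directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 100))
X += 0.01 * rng.normal(size=(500, 100))      # small isotropic noise
Z = pca_project(X, n_components=5)
print(Z.shape)  # (500, 5): pairwise distances are approximately preserved
```

As the review notes, this keeps information from every original attribute, which is exactly why transformed clusters can be harder to interpret than subspace clusters.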
  • A Brief Survey of Deep Reinforcement Learning Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath
IEEE SIGNAL PROCESSING MAGAZINE, SPECIAL ISSUE ON DEEP LEARNING FOR IMAGE UNDERSTANDING (ARXIV EXTENDED VERSION)
A Brief Survey of Deep Reinforcement Learning
Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath

Abstract—Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage …

…is to cover both seminal and recent developments in DRL, conveying the innovative ways in which neural networks can be used to bring us closer towards developing autonomous agents. For a more comprehensive survey of recent efforts in DRL, including applications of DRL to areas such as natural language processing [106, 5], we refer readers to the overview by Li [78]. Deep learning enables RL to scale to decision-making problems that were previously intractable, i.e., settings with high-dimensional state and action spaces. Amongst recent work in the field of DRL, there have been two outstanding success stories.
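As a pointer to the value-based stream mentioned in the abstract, here is the tabular Q-learning update that the deep Q-network generalises with a neural network (a generic sketch, not code from the survey; NumPy assumed, sizes arbitrary):

```python
import numpy as np

# Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()   # bootstrap from the next state's value
    Q[s, a] += alpha * (td_target - Q[s, a])  # move Q(s,a) toward the TD target

q_update(s=0, a=2, r=1.0, s_next=3)  # one transition's worth of learning
```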
  • Subspace Clustering for High Dimensional Data: a Review ∗
Subspace Clustering for High Dimensional Data: A Review
Lance Parsons, Ehtesham Haque, Huan Liu
Department of Computer Science Engineering, Arizona State University, Tempe, AZ 85281
[email protected], [email protected], [email protected]

ABSTRACT: Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces within a dataset. Often in high dimensional data, many dimensions are irrelevant and can mask existing clusters in noisy data. Feature selection removes irrelevant and redundant dimensions by analyzing the entire dataset. Subspace clustering algorithms localize the search for relevant dimensions allowing them to find clusters that exist in multiple, possibly overlapping subspaces. There are two major branches of subspace clustering based on their search strategy. Top-down algorithms find an initial clustering in the full set of dimensions and evaluate the subspaces of each cluster, iteratively improving the results.

…in an attempt to learn as much as possible about each object described. In high dimensional data, however, many of the dimensions are often irrelevant. These irrelevant dimensions can confuse clustering algorithms by hiding clusters in noisy data. In very high dimensions it is common for all of the objects in a dataset to be nearly equidistant from each other, completely masking the clusters. Feature selection methods have been employed somewhat successfully to improve cluster quality. These algorithms find a subset of dimensions on which to perform clustering by removing irrelevant and redundant dimensions. Unlike feature selection methods which examine the dataset as a whole, subspace clustering algorithms localize their search and are able to uncover clusters that exist in multiple, possibly overlapping subspaces.
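A tiny experiment (mine, not the authors'; NumPy assumed) for the "nearly equidistant" effect the review describes: as the number of irrelevant dimensions grows, the spread of pairwise distances shrinks relative to their mean, so distances stop discriminating.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    X = rng.uniform(size=(100, d))            # uniform data, no cluster structure
    dists = np.array([np.linalg.norm(X[i] - X[j])
                      for i, j in combinations(range(100), 2)])
    print(d, round(float(dists.std() / dists.mean()), 4))  # relative spread -> 0
```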
  • Dimensionality Reduction
Dimensionality Reduction
Aarti Singh, Machine Learning 10-701/15-781, Nov 17, 2010
Slides courtesy: Tom Mitchell, Eric Xing, Lawrence Saul

High-Dimensional data (High-Dimensions = Lot of Features):
• Document classification: features per document = thousands of words/unigrams, millions of bigrams, contextual information
• Surveys - Netflix: 480189 users x 17770 movies
• Discovering gene networks: 10,000 genes x 1000 drugs x several species
• MEG Brain Imaging: 120 locations x 500 time points x 20 objects

Curse of Dimensionality: why are more features bad?
• Redundant features (not all words are useful to classify a document): more noise added than signal
• Hard to interpret and visualize
• Hard to store and process data (computationally challenging)
• Complexity of the decision rule tends to grow with the number of features; hard to learn complex rules as the VC dimension increases (statistically challenging)

Dimensionality Reduction ("unrolling the swiss roll"):
• Feature Selection: only a few features are relevant to the learning task (e.g. an irrelevant X3 alongside relevant X1, X2)
• Latent features: some linear/nonlinear combination of features provides a more efficient representation than observed features

Feature Selection:
• Approach 1: Score each feature and extract a subset. Common subset selection methods: one step (choose the d highest scoring features) or iterative.
• Approach 2: Regularization (MAP). Integrate feature selection into the learning objective by penalizing the number of features with non-zero weights (negative log likelihood plus a penalty that minimizes the number of features; the convex compromise keeps small weights on the chosen features).

Latent Feature Extraction: combinations of observed features provide a more efficient representation, and capture underlying relations that govern the data. E.g. …
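A minimal sketch of "Approach 1" above (mine, not from the slides; NumPy assumed, scoring choice illustrative): score each feature by its correlation with the label and keep the d highest-scoring ones.

```python
import numpy as np

def select_top_features(X, y, d):
    """Score each feature by |correlation with y| and keep the top d."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(scores)[::-1][:d]   # indices of the d highest-scoring features

# Usage: only features 0 and 1 carry signal; the other 48 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
print(select_top_features(X, y, d=2))  # expected: indices 0 and 1, in some order
```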
  • On the Behavior of Intrinsically High-Dimensional Spaces: Distances, Direct and Reverse Nearest Neighbors, and Hubness
Journal of Machine Learning Research 18 (2018) 1-60. Submitted 3/17; Revised 2/18; Published 4/18
On the Behavior of Intrinsically High-Dimensional Spaces: Distances, Direct and Reverse Nearest Neighbors, and Hubness
Fabrizio Angiulli ([email protected])
DIMES - Dept. of Computer, Modeling, Electronics, and Systems Engineering, University of Calabria, 87036 Rende (CS), Italy
Editor: Sanjiv Kumar

Abstract: Over the years, different characterizations of the curse of dimensionality have been provided, usually stating the conditions under which, in the limit of the infinite dimensionality, distances become indistinguishable. However, these characterizations almost never address the form of associated distributions in the finite, although high-dimensional, case. This work aims to contribute in this respect by investigating the distribution of distances, and of direct and reverse nearest neighbors, in intrinsically high-dimensional spaces. Indeed, we derive a closed form for the distribution of distances from a given point, for the expected distance from a given point to its kth nearest neighbor, and for the expected size of the approximate set of neighbors of a given point in finite high-dimensional spaces. Additionally, the hubness problem is considered, which is related to the form of the function N_k representing the number of points that have a given point as one of their k nearest neighbors, which is also called the number of k-occurrences. Despite the extensive use of this function, the precise characterization of its form is a longstanding problem. We derive a closed form for the number of k-occurrences associated with a given point in finite high-dimensional spaces, together with the associated limiting probability distribution.
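An empirical illustration of the k-occurrences function N_k discussed above (my sketch, not the paper's derivation; NumPy assumed): in high dimensions the distribution of N_k becomes skewed, with a few "hub" points appearing in many k-nearest-neighbor lists.

```python
import numpy as np

def k_occurrences(X, k=5):
    """N_k(i): number of points that have point i among their k nearest neighbors."""
    sq = (X ** 2).sum(axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared Euclidean distances
    np.fill_diagonal(D, np.inf)                   # a point is not its own neighbor
    knn = np.argsort(D, axis=1)[:, :k]            # k nearest neighbors of each point
    Nk = np.zeros(len(X), dtype=int)
    for row in knn:
        Nk[row] += 1
    return Nk

rng = np.random.default_rng(0)
for d in [3, 100]:
    Nk = k_occurrences(rng.normal(size=(500, d)), k=5)
    skew = float(np.mean((Nk - Nk.mean()) ** 3) / Nk.std() ** 3)
    print(d, Nk.max(), round(skew, 2))  # higher d: larger max N_k, more skew (hubs)
```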
  • Curse of Dimensionality in Adversarial Examples
IJCNN 2019 - International Joint Conference on Neural Networks, Budapest, Hungary, 14-19 July 2019
Curse of Dimensionality in Adversarial Examples
Nandish Chattopadhyay, Anupam Chattopadhyay, Sourav Sen Gupta and Michael Kasper
Fraunhofer Singapore, Nanyang Technological University, Singapore (Email: [email protected], [email protected])
School of Computer Science and Engineering, Nanyang Technological University, Singapore (Email: [email protected], [email protected], [email protected])

Abstract—While machine learning, and deep neural networks in particular, have undergone massive progress in the past years, this ubiquitous paradigm faces a relatively newly discovered challenge, adversarial attacks. An adversary can leverage a plethora of attacking algorithms to severely reduce the performance of existing models, therefore threatening the use of AI in many safety-critical applications. Several attempts have been made to try and understand the root cause behind the generation of adversarial examples. In this paper, we try to relate the geometry of the high-dimensional space in which the model operates and optimizes, and the properties and problems therein, to such adversarial attacks. We present the mathematical background, the intuition behind the existence of adversarial examples and substantiate them with empirical results from our experiments.

…capacity to learn features and patterns in the data, to extents which were not reachable before.

A. Motivation: Adversarial Examples
Soon after the boom in flourishing AI, also popularly referred to as the "AI revival", a new discovery proved to be a problem. In 2015, it was shown that the high performing neural networks were vulnerable to adversarial attacks [4]. The models, which would perform the task of classification very accurately on a test image dataset, would perform very …
  • Fractional Norms and Quasinorms Do Not Help to Overcome the Curse of Dimensionality
entropy, Article
Fractional Norms and Quasinorms Do Not Help to Overcome the Curse of Dimensionality
Evgeny M. Mirkes 1,2,*, Jeza Allohibi 1,3 and Alexander Gorban 1,2
1 School of Mathematics and Actuarial Science, University of Leicester, Leicester LE1 7HR, UK; [email protected] (J.A.); [email protected] (A.G.)
2 Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky State University, 603105 Nizhny Novgorod, Russia
3 Department of Mathematics, Taibah University, Janadah Bin Umayyah Road, Tayba, Medina 42353, Saudi Arabia
* Correspondence: [email protected]
Received: 4 August 2020; Accepted: 27 September 2020; Published: 30 September 2020

Abstract: The curse of dimensionality causes the well-known and widely discussed problems for machine learning methods. There is a hypothesis that using the Manhattan distance and even fractional lp quasinorms (for p less than 1) can help to overcome the curse of dimensionality in classification problems. In this study, we systematically test this hypothesis. It is illustrated that fractional quasinorms have a greater relative contrast and coefficient of variation than the Euclidean norm l2, but it is shown that this difference decays with increasing space dimension. It has been demonstrated that the concentration of distances shows qualitatively the same behaviour for all tested norms and quasinorms. It is shown that a greater relative contrast does not mean a better classification quality. It was revealed that for different databases the best (worst) performance was achieved under different norms (quasinorms). A systematic comparison shows that the difference in the performance of kNN classifiers for lp at p = 0.5, 1, and 2 is statistically insignificant.
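A quick reproduction of the relative-contrast comparison described in the abstract (my sketch, not the authors' code; NumPy assumed): fractional quasinorms give a larger relative contrast than l2, but all of them decay as the dimension grows.

```python
import numpy as np

def relative_contrast(X, q, p):
    """(D_max - D_min) / D_min for l_p 'distances' from query q to the rows of X."""
    d = np.sum(np.abs(X - q) ** p, axis=1) ** (1.0 / p)
    return (d.max() - d.min()) / d.min()

rng = np.random.default_rng(0)
for dim in [10, 100, 1000]:
    X = rng.uniform(size=(1000, dim))
    q = rng.uniform(size=dim)
    rc = {p: round(float(relative_contrast(X, q, p)), 3) for p in (0.5, 1, 2)}
    print(dim, rc)  # contrast is larger for small p, but every p decays with dim
```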
  • Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning
Utah State University, DigitalCommons@USU, All Graduate Theses and Dissertations, Graduate Studies, 5-2010
Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning
Scott H. Burton, Utah State University

Recommended Citation: Burton, Scott H., "Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning" (2010). All Graduate Theses and Dissertations. 559. https://digitalcommons.usu.edu/etd/559

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. Approved by Nicholas S. Flann (Major Professor), Donald H. Cooley and Dan W. Watson (Committee Members), and Byron R. Burnham (Dean of Graduate Studies). Utah State University, Logan, Utah, 2010.

ABSTRACT: Coping with the Curse of Dimensionality by Combining Linear Programming and Reinforcement Learning, by Scott H. Burton, Master of Science, Utah State University, 2010. Major Professor: Dr. Nicholas S. Flann, Department: Computer Science. Reinforcement learning techniques offer a very powerful method of finding solutions in unpredictable problem environments where human supervision is not possible.
  • On the Effects of Dimensionality on Data Analysis with Neural Networks
On the effects of dimensionality on data analysis with neural networks
M. Verleysen (1), D. François (2), G. Simon (1), V. Wertz (2)
Université catholique de Louvain
(1) DICE, 3 place du Levant, B-1348 Louvain-la-Neuve, Belgium
(2) CESAME, 4 av. G. Lemaitre, B-1348 Louvain-la-Neuve, Belgium
{verleysen, simon}@dice.ucl.ac.be, {francois, wertz}@auto.ucl.ac.be

Abstract. Modern data analysis often faces high-dimensional data. Nevertheless, most neural network data analysis tools are not adapted to high-dimensional spaces, because of the use of conventional concepts (such as the Euclidean distance) that scale poorly with dimension. This paper shows some limitations of such concepts and suggests some research directions, such as the use of alternative distance definitions and of non-linear dimension reduction.

1. Introduction
In the last few years, data analysis has become a specific discipline, sometimes far from its mathematical and statistical origin, where understanding of the problems and limitations coming from the data themselves is often more valuable than developing complex algorithms and methods. The specificity of modern data mining is that huge amounts of data are considered. There are new fields where data mining becomes crucial (medical research, financial analysis, etc.); furthermore, collecting huge amounts of data often becomes easier and cheaper. A main concern in that direction is the dimensionality of data. Think of each measurement of data as one observation, each observation being composed of a set of variables. It is very different to analyze 10000 observations of 3 variables each, than analyzing 100 observations of 50 variables each! One way to get some feeling of this difficulty is to imagine each observation as a point in a space whose dimension is the number of variables.
  • ORF 544 Stochastic Optimization and Learning
ORF 544 Stochastic Optimization and Learning, Spring 2019
Warren Powell, Princeton University, http://www.castlelab.princeton.edu
Week 10: Backward approximate dynamic programming
Backward MDP and the curse of dimensionality

Curse of dimensionality. The ultimate policy is to optimize from now on:

X_t^*(S_t) = argmax_{x_t} ( C(S_t, x_t) + max_π E[ Σ_{t'=t+1}^T C(S_{t'}, X_{t'}^π(S_{t'})) | S_{t+1} ] | S_t, x_t )

Ideally, we would like to replace the future contributions with a single value function:

X_t^*(S_t) = argmax_{x_t} ( C(S_t, x_t) + E[ V_{t+1}(S_{t+1}) | S_t, x_t ] )

Sometimes we can compute this function exactly!

Energy storage with stochastic prices, supplies and demands. [Diagram: wind energy E_t, grid price P_t, load D_t, battery level R_t.] The state variable is S_t = (R_t, E_t, P_t, D_t). The exogenous inputs evolve as E_{t+1} = E_t + \hat{E}_{t+1}, P_{t+1} = P_t + \hat{P}_{t+1}, D_{t+1} = D_t + \hat{D}_{t+1}, and the controllable input x_t drives the battery level R_{t+1} = R_t + A x_t.

Bellman's optimality equation:

V_t(S_t) = min_{x_t ∈ X_t} ( C(S_t, x_t) + E[ V_{t+1}(S_{t+1}(S_t, x_t, W_{t+1})) | S_t ] )

The state, the exogenous information, and the decision vector (which routes energy among wind, grid, battery, and load) are all multidimensional; these are the "three curses of dimensionality."

Backward dynamic programming in one dimension:
Step 0: Initialize V_{T+1}(R_{T+1}) = 0 for R_{T+1} = 0, 1, ..., 100.
Step 1: Step backward over t = T, T-1, T-2, ...
Step 2: Loop over R_t = 0, 1, ..., 100.
Step 3: Loop over all feasible decisions x_t (with R_t + x_t ≤ R^max).
Step 4: Take the expectation over the exogenous information: compute Q(R_t, x_t) = C(R_t, x_t) + Σ_{w=0}^{100} V_{t+1}(min(R^max, R_t + x_t + w)) P^W(w). End Step 4; End Step 3.
Find V_t^*(R_t) = max_{x_t} Q(R_t, x_t) and store X_t^*(R_t) = argmax_{x_t} Q(R_t, x_t).
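The one-dimensional backward pass above translates almost directly into code. A minimal sketch (mine, not Powell's; NumPy assumed), with a placeholder contribution function C and a uniform distribution for the exogenous quantity w:

```python
import numpy as np

R_MAX, T = 100, 20
W_VALS = np.arange(0, 101)                      # exogenous outcomes w = 0..100
P_W = np.full(len(W_VALS), 1.0 / len(W_VALS))   # P^W(w): uniform (an assumption)

def C(R, x):
    """Placeholder one-period contribution; a real model would use prices etc."""
    return -0.01 * x * x + 0.1 * R

V = np.zeros((T + 2, R_MAX + 1))        # Step 0: V_{T+1}(R) = 0 for all R
policy = np.zeros((T + 1, R_MAX + 1), dtype=int)

for t in range(T, -1, -1):              # Step 1: step backward in time
    for R in range(R_MAX + 1):          # Step 2: loop over states R_t
        best_q, best_x = -np.inf, 0
        for x in range(R_MAX - R + 1):  # Step 3: feasible decisions
            # Step 4: expectation over the exogenous information
            next_R = np.minimum(R_MAX, R + x + W_VALS)
            q = C(R, x) + np.dot(V[t + 1, next_R], P_W)
            if q > best_q:
                best_q, best_x = q, x
        V[t, R] = best_q                # V_t*(R_t) = max_x Q(R_t, x_t)
        policy[t, R] = best_x           # X_t*(R_t) = argmax_x Q(R_t, x_t)

print(V[0, 0], policy[0, 0])
```

The loop structure makes the curse explicit: the state, decision, and outcome loops each multiply the work, which is exactly what breaks down when any of them becomes a vector.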