Convex Mathematical Programs for Relational Matching of Object Views


Convex Mathematical Programs for Relational Matching of Object Views

Inaugural dissertation for obtaining the academic degree of Doctor of Natural Sciences at the Universität Mannheim, submitted by Dipl.-Phys. Christian Schellewald from Hohenlimburg. Mannheim, 2004.
Dean: Professor Dr. Matthias Krause, Universität Mannheim
Referee: Professor Dr. Christoph Schnörr, Universität Mannheim
Co-referee: Professor Dr. Joachim Denzler, Universität Jena
Date of the oral examination: 14 July 2005

Contents

1 Introduction
  1.1 Motivation
  1.2 Overview
    1.2.1 Graph Matching
    1.2.2 Exact and Inexact Graph Matching
    1.2.3 What this thesis is about
    1.2.4 What this thesis is not about
  1.3 Convex Optimization
    1.3.1 Semidefinite Programming
  1.4 Contribution
    1.4.1 QAP Convex Relaxation used for Graph Matching
    1.4.2 Subgraph Matching by Semidefinite Programming
  1.5 Outline
  1.6 Related Work
    1.6.1 Basic Graph Matching Approaches
    1.6.2 More General Graph Matching Approaches

2 Preliminaries
  2.1 Basic Graph Theory
  2.2 Graph Matching
    2.2.1 Bipartite Matching
    2.2.2 Representation of Matchings
    2.2.3 Classification of Graph Matching Problems
  2.3 Complexity
    2.3.1 Complexity Classes
    2.3.2 Complexity of Discussed Graph Matching Approaches
  2.4 Terminology and Mathematical Preliminaries
    2.4.1 Notation
    2.4.2 Operator Definitions
  2.5 Basic Convexity Concepts
    2.5.1 Convex functions and sets

3 Combinatorial Relaxation and Convex Optimization
  3.1 Relaxations and Lower Bounds
    3.1.1 Interpretation of the Relaxation as Approximation
    3.1.2 Lagrangian Relaxation
    3.1.3 Convex Relaxations
  3.2 Optimality Conditions
    3.2.1 Lagrangian Duality
    3.2.2 Karush-Kuhn-Tucker Conditions
  3.3 Convex Optimization Problems
    3.3.1 General Convex optimization problems
    3.3.2 Convex optimization problems in this thesis
    3.3.3 Linear Programming
    3.3.4 Quadratic Programming
    3.3.5 Semidefinite Programming
  3.4 Solving Convex Optimization Problems
    3.4.1 Interior Point Methods
    3.4.2 Solver for Convex Optimization Problems
  3.5 Computing Integer Solutions for Assignments

4 Weighted Graph Matching
  4.1 Problem Statement
    4.1.1 The Quadratic Assignment Problem
    4.1.2 Graph Matching as QAP
  4.2 Relaxations and Lower Bounds
    4.2.1 Permutation Matrices and Relaxations
    4.2.2 Orthogonal Relaxation
    4.2.3 Projected Eigenvalue Bound
    4.2.4 Convex Relaxation
    4.2.5 Combinatorial Solutions
    4.2.6 The 2opt Post-Processing Heuristics
  4.3 Other Approaches
    4.3.1 The Approach by Umeyama
    4.3.2 Graduated Assignment
    4.3.3 SDP Relaxation
  4.4 Convex Relaxation: An Illustrative Numerical Example
    4.4.1 A Small Graph Matching Problem
    4.4.2 Relaxations and Bounds
    4.4.3 Visualization
  4.5 Implementation Details
  4.6 Experiments
    4.6.1 QAPLIB Benchmark Experiments
    4.6.2 Random Ground-Truth Experiments
    4.6.3 Real World Example
  4.7 Discussion
    4.7.1 Invariants
    4.7.2 Limitations of the QAP Graph Matching
  4.8 Towards Subgraph Matching
    4.8.1 A Simple Extension Approach
    4.8.2 A Promising Subgraph Matching Approach
    4.8.3 Comparison of the two Approaches
    4.8.4 Relaxation Attempt
    4.8.5 Projection Approach
5 Subgraph Matching
  5.1 Problem Statement
  5.2 Notation and Bipartite Matching
    5.2.1 Matching in Bipartite Graphs
    5.2.2 Linear Problem Formulation of the Bipartite Matching
  5.3 Combinatorial Subgraph Matching Approach
    5.3.1 Quadratic Integer Program
    5.3.2 Regularization Parameter
    5.3.3 Hidden Parameters
    5.3.4 Classification of the Approach
  5.4 Semidefinite Program Formulation
    5.4.1 Objective Function
    5.4.2 Equality Constraints
    5.4.3 Inequality Constraints
    5.4.4 Post-Processing to obtain an Integer Solution
  5.5 Illustrative Example
    5.5.1 The Subgraph Matching Example
    5.5.2 Linear Bipartite Matching Approach
    5.5.3 SDP Subgraph Matching Approach
    5.5.4 Problem Nature
  5.6 Subgraph Matching Experiments
    5.6.1 Creation of the Problem Instances
    5.6.2 Problem Nature
    5.6.3 Influence of the Regularization Parameter
    5.6.4 Statistical Performance Investigation
  5.7 Random Subgraph Matching Experiments
    5.7.1 Creation of the Problem Instances
    5.7.2 Problem Nature
    5.7.3 Performance Investigation
  5.8 Real World Examples
    5.8.1 Creation of the Real World Examples
    5.8.2 Results
  5.9 Discussion
    5.9.1 Computational Effort
    5.9.2 Reducing the Computational Effort
    5.9.3 Structural Perturbations
    5.9.4 Bimodal Experiments
    5.9.5 A New Bound for Subgraph Non-Isomorphism

6 Conclusion
  6.1 Summary
    6.1.1 Weighted Graph Matching
    6.1.2 Subgraph Matching
  6.2 Future Work

A Quadratic Assignment Supplements
  A.1 The Dual of the relaxed homogeneous QAP
    A.1.1 QAP Dual as linear program
    A.1.2 Non unique solutions for the EVB
  A.2 QAP-Bounds

B Subgraph Matching Supplements
  B.1 Example Data
    B.1.1 Illustrative Example Data
    B.1.2 Graph Data for the Parameter Dependency Example
  B.2 Earth Movers Distance

Bibliography

Abstract

Automatic recognition of objects in images is a difficult and challenging task in computer vision which has been tackled in many different ways. Based on the powerful and widely used concept of representing objects and scenes as relational structures, the problem of graph matching, i.e. finding correspondences between two graphs, is part of the object recognition problem. Belonging to the field of combinatorial optimization, graph matching is considered to be one of the most complex problems in computer vision: it is known to be NP-complete in the general case. In this thesis, two novel approaches to the graph matching problem are proposed and investigated. They are based on recent progress in the mathematical literature on convex programming. Starting out from describing the desired matchings by suitable objective functions in terms of binary variables, relaxations of combinatorial constraints and an adequate adaptation of the objective function lead to continuous convex optimization problems which can be solved without parameter tuning and in polynomial time. A subsequent post-processing step results in feasible, sub-optimal combinatorial solutions to the original decision problem. In the first part of this thesis, the connection between specific graph-matching problems and the quadratic assignment problem is explored. In this case, the convex relaxation leads to a convex quadratic program, which is combined with a linear program for post-processing.
Conditions under which the quadratic assignment representation is adequate from the computer vision point of view are investigated, along with attempts to relax these conditions by modifying the approach accordingly. The second part of this work focuses directly on the matching of subgraphs (representing a model) to a considerably larger scene graph. A bipartite matching is extended with a quadratic regularization term to take into account relations within each set of vertices. Based on this convex relaxation, post-processing and the application to computer vision are investigated and discussed. Numerical experiments reveal both the power and the limitations of the approach. For problems of the sizes that occur in applications, the approach performs well and often finds the combinatorially optimal solution. For larger instances, the intrinsic combinatorial nature of the problem emerges and leads to sub-optimal solutions which are, however, still good.

Zusammenfassung (Summary)

The automatic recognition of objects in images is one of the greatest challenges in computer vision. If objects and scenes are represented by graphs, one subproblem of object recognition is to find the desired assignment between the vertices of two graphs. In doing so, the graph structure and, where available, additional information should be taken into account. Determining the best correspondences between graph vertices, also known as graph matching, is an NP-hard combinatorial problem for general graphs and therefore belongs to the most difficult problems in computer vision. In this work we introduce two new approaches for approximately solving graph matching and subgraph matching problems. For this purpose we use recent methods and results from convex optimization. The graph matching problems can be formulated as 0/1 integer optimization problems and, by means of a suitable relaxation, transformed into a continuous and convex problem. One advantage of convex optimization problems lies in the fact that they can be solved efficiently with standard methods and without additional parameter tuning. A post-processing step ensures that the approximate solution yields a good 0/1 integer solution suitable for the original problem. In the first part ...
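The pipeline described in the abstract (state graph matching as a QAP over permutation matrices, relax to a continuous "soft" matching, then round back to a permutation in a post-processing step) can be illustrated with the following hedged Python sketch. It is not the thesis's convex quadratic program: the Sinkhorn scaling of a crude node-similarity matrix merely stands in for the relaxation step, and the function names (qap_cost, sinkhorn, match) are illustrative, not taken from the thesis.

```python
# Simplified stand-in for the relaxation-plus-rounding pipeline sketched in the
# abstract.  (1) QAP objective over permutation matrices, (2) continuous
# doubly stochastic "relaxation" via Sinkhorn scaling of a node-similarity
# matrix, (3) rounding to a permutation with a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

def qap_cost(A, B, perm):
    """Graph-matching QAP objective ||A - P B P^T||_F^2 for permutation `perm`."""
    P = np.eye(len(perm))[perm]            # permutation matrix from an index vector
    return np.linalg.norm(A - P @ B @ P.T, ord="fro") ** 2

def sinkhorn(M, n_iter=200):
    """Scale a positive matrix to (approximately) doubly stochastic form."""
    X = M.copy()
    for _ in range(n_iter):
        X /= X.sum(axis=1, keepdims=True)  # normalize rows
        X /= X.sum(axis=0, keepdims=True)  # normalize columns
    return X

def match(A, B):
    # crude node similarity from sorted adjacency rows (illustrative only)
    S = np.exp(-np.abs(np.sort(A, axis=1)[:, None, :] -
                       np.sort(B, axis=1)[None, :, :]).sum(-1))
    X_relaxed = sinkhorn(S)                # continuous "soft" matching
    # post-processing: permutation closest to the relaxed solution
    rows, cols = linear_sum_assignment(-X_relaxed)
    return cols, qap_cost(A, B, cols)

rng = np.random.default_rng(0)
A = rng.random((6, 6)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
true_perm = rng.permutation(6)
P = np.eye(6)[true_perm]
B = P.T @ A @ P                            # B is A with relabelled vertices
perm, cost = match(A, B)
print("recovered permutation:", perm, "QAP cost:", cost)
```

Rounding with scipy's linear_sum_assignment plays the role that the linear-program post-processing plays in the thesis: it returns the feasible assignment that best agrees with the relaxed solution under a linear objective.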
Recommended publications
  • On Modularity - NP-Completeness and Beyond?
    On Modularity - NP-Completeness and Beyond? Ulrik Brandes(1), Daniel Delling(2), Marco Gaertler(2), Robert Görke(2), Martin Hoefer(1), Zoran Nikoloski(3), and Dorothea Wagner(2). (1) Department of Computer & Information Science, University of Konstanz, {brandes,hoefer}@inf.uni-konstanz.de; (2) Faculty of Informatics, Universität Karlsruhe (TH), {delling,gaertler,rgoerke,wagner}@informatik.uni-karlsruhe.de; (3) Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Prague, [email protected].
    Abstract. Modularity is a recently introduced quality measure for graph clusterings. It has immediately received considerable attention in several disciplines, and in particular in the complex systems literature, although its properties are not well understood. We here present first results on the computational and analytical properties of modularity. The complexity status of modularity maximization is resolved, showing that the corresponding decision version is NP-complete in the strong sense. We also give a formulation as an Integer Linear Program (ILP) to facilitate exact optimization, and provide results on the approximation factor of the commonly used greedy algorithm. Completing our investigation, we characterize clusterings with maximum modularity for several graph families.
    1 Introduction. Graph clustering is a fundamental problem in the analysis of relational data. Studied for decades and applied in many settings, it is now popularly referred to as the problem of partitioning networks into communities. In this line of research, a novel graph clustering index called modularity has been proposed recently [1]. The rapidly growing interest in this measure prompted a series of follow-up studies on various applications and possible adjustments (see, e.g., [2,3,4,5,6]).
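As a concrete reference point for the quality measure discussed in this entry, the following minimal sketch evaluates Newman-Girvan modularity for a given clustering of an undirected graph; the function name and the toy graph are illustrative, not taken from the paper.

```python
# Newman-Girvan modularity of a clustering, for an undirected graph given as a
# symmetric 0/1 numpy adjacency matrix and a label per vertex.
import numpy as np

def modularity(A, labels):
    """Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [labels_i == labels_j]."""
    k = A.sum(axis=1)                    # vertex degrees
    two_m = k.sum()                      # 2m = sum of degrees = twice the edge count
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# two triangles joined by a single edge, clustered into the two triangles
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # about 0.357
```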
  • On Metaheuristic Optimization Motivated by the Immune System
    Applied Mathematics, 2014, 5, 318-326. Published Online January 2014 (http://www.scirp.org/journal/am) http://dx.doi.org/10.4236/am.2014.52032
    On Metaheuristic Optimization Motivated by the Immune System. Mohammed Fathy Elettreby(1,2,*), Elsayd Ahmed(2), Houari Boumedien Khenous(1). (1) Department of Mathematics, Faculty of Science, King Khalid University, Abha, KSA; (2) Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt. Email: *[email protected]. Received November 16, 2013; revised December 16, 2013; accepted December 23, 2013. Copyright © 2014 Mohammed Fathy Elettreby et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    ABSTRACT. In this paper, we modify the general-purpose heuristic method called extremal optimization. We compare our results with the results of Boettcher and Percus [1]. Then, some multiobjective optimization problems are solved by using methods motivated by the immune system. KEYWORDS: Multiobjective Optimization; Extremal Optimization; Immune Memory and Metaheuristic.
    1. Introduction. Multiobjective optimization problems (MOPs) [2] exist in many situations in nature. Most realistic optimization problems require the simultaneous optimization of more than one objective function. In this case, it is unlikely that the different objectives would be optimized by the same alternative parameter choices. Hence, some trade-off between the criteria is needed to ensure a satisfactory solution.
  • The Cross-Entropy Method for Continuous Multi-Extremal Optimization
    The Cross-Entropy Method for Continuous Multi-extremal Optimization. Dirk P. Kroese, Department of Mathematics, The University of Queensland, Brisbane 4072, Australia; Sergey Porotsky, Optimata Ltd., 11 Tuval St., Ramat Gan, 52522, Israel; Reuven Y. Rubinstein, Faculty of Industrial Engineering and Management, Technion, Haifa, Israel.
    Abstract. In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints. Key words: Cross-entropy, continuous optimization, multi-extremal objective function, dynamic smoothing, constrained optimization, nonlinear constraints, acceptance-rejection, penalty function.
    1 Introduction. The cross-entropy (CE) method (Rubinstein and Kroese, 2004) was motivated by Rubinstein (1997), where an adaptive variance minimization algorithm for estimating probabilities of rare events for stochastic networks was presented. It was modified in Rubinstein (1999) to solve combinatorial optimization problems. The main idea behind using CE for continuous multi-extremal optimization is the same as the one for combinatorial optimization, namely to first associate with each optimization problem a rare event estimation problem, the so-called associated stochastic problem (ASP), and then to tackle this ASP efficiently by an adaptive algorithm. The principal outcome of this approach is the construction of a random sequence of solutions which converges probabilistically to the optimal or near-optimal solution. As soon as the ASP is defined, the CE method involves the following two iterative phases: 1.
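A minimal sketch of the two iterative CE phases mentioned at the end of the excerpt (sample from a parametrized distribution, then update the parameters from an elite subset, with smoothing) might look as follows; the Gaussian sampling distribution, the elite fraction, and the smoothing constant are assumptions of this sketch rather than the paper's settings.

```python
# Cross-entropy minimization sketch: sample, select the elite, refit the
# Gaussian parameters to the elite, and smooth the update.
import numpy as np

def cross_entropy_min(f, dim, n_samples=100, elite_frac=0.1, alpha=0.7,
                      n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        X = rng.normal(mu, sigma, size=(n_samples, dim))   # sampling phase
        elite = X[np.argsort([f(x) for x in X])[:n_elite]]  # selection
        # parameter update with smoothing
        mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
        sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
    return mu

# multi-extremal test function: Rastrigin (global minimum at the origin)
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(cross_entropy_min(rastrigin, dim=5))
```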
  • arXiv:cond-mat/0104214v1
    Extremal Optimization for Graph Partitioning. Stefan Boettcher, Physics Department, Emory University, Atlanta, Georgia 30322, USA; Allon G. Percus, Computer & Computational Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. (Dated: August 4, 2021) arXiv:cond-mat/0104214v1 [cond-mat.stat-mech] 12 Apr 2001
    Extremal optimization is a new general-purpose method for approximating solutions to hard optimization problems. We study the method in detail by way of the NP-hard graph partitioning problem. We discuss the scaling behavior of extremal optimization, focusing on the convergence of the average run as a function of runtime and system size. The method has a single free parameter, which we determine numerically and justify using a simple argument. Our numerical results demonstrate that on random graphs, extremal optimization maintains consistent accuracy for increasing system sizes, with an approximation error decreasing over runtime roughly as a power law t^(-0.4). On geometrically structured graphs, the scaling of results from the average run suggests that these are far from optimal, with large fluctuations between individual trials. But when only the best runs are considered, results consistent with theoretical arguments are recovered. PACS number(s): 02.60.Pn, 05.65.+b, 75.10.Nr, 64.60.Cn.
    I. INTRODUCTION. Optimizing a system of many variables with respect to some cost function is a task frequently encountered in physics. The determination of ground-state configurations in disordered materials [1, 2, 3, 4] and of fast-folding protein conformations [5] are but two examples.
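The "single free parameter" mentioned above is the exponent tau of the power-law rank selection. A hedged sketch of tau-EO for balanced graph bipartitioning follows; the per-vertex fitness (fraction of a vertex's edges kept inside its own part) and the "swap with a random vertex of the other part" move are assumptions chosen to keep the example short, not necessarily the paper's exact implementation.

```python
# tau-EO sketch for balanced bipartitioning of an undirected graph given as an
# adjacency-list dict {vertex: [neighbours]}.  Assumes an even vertex count.
import numpy as np

def eo_bipartition(adj, tau=1.4, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(adj)
    part = np.array([0, 1] * (n // 2))           # balanced initial partition
    rng.shuffle(part)

    def cut_size(p):
        return sum(1 for u in range(n) for v in adj[u] if u < v and p[u] != p[v])

    best, best_cut = part.copy(), cut_size(part)
    # rank-selection probabilities P(k) ~ k^(-tau), rank 1 = worst vertex
    pk = np.arange(1, n + 1, dtype=float) ** (-tau)
    pk /= pk.sum()
    for _ in range(n_steps):
        # fitness: fraction of incident edges that stay inside the vertex's part
        fit = np.array([np.mean([part[v] == part[u] for v in adj[u]])
                        for u in range(n)])
        ranked = np.argsort(fit)                  # worst-fit vertices first
        u = ranked[rng.choice(n, p=pk)]           # power-law rank selection
        w = rng.choice(np.flatnonzero(part != part[u]))  # keep sizes balanced
        part[u], part[w] = part[w], part[u]       # unconditional swap, no rejection
        c = cut_size(part)
        if c < best_cut:
            best, best_cut = part.copy(), c
    return best, best_cut

# small demo: ring of 8 vertices (optimal balanced cut size is 2)
adj = {u: [(u - 1) % 8, (u + 1) % 8] for u in range(8)}
print(eo_bipartition(adj, n_steps=2000))
```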
  • Extremal Optimization of Graph Partitioning at the Percolation
    Extremal Optimization of Graph Partitioning at the Percolation Threshold. Stefan Boettcher, (1) Physics Department, Emory University, Atlanta, Georgia 30322, USA; (2) Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. (May 31, 2018)
    The benefits of a recently proposed method to approximate hard optimization problems are demonstrated on the graph partitioning problem. The performance of this new method, called Extremal Optimization, is compared to Simulated Annealing in extensive numerical simulations. While generally a complex (NP-hard) problem, the optimization of the graph partitions is particularly difficult for sparse graphs with average connectivities near the percolation threshold. At this threshold, the relative error of Simulated Annealing for large graphs is found to diverge relative to Extremal Optimization at equalized runtime. On the other hand, Extremal Optimization, based on the extremal dynamics of self-organized critical systems, reproduces known results about optimal partitions at this critical point quite well.
    I. INTRODUCTION. The optimization of systems with many degrees of freedom with respect to some cost function is a frequently encountered task in physics and beyond [1]. In cases where the relation between individual components of the system is frustrated [2], such a cost function often exhibits a complex "landscape" [3] over the space of all configurations ... this critical point, the performance of SA markedly deteriorates while EO produces only small errors. This paper is organized as follows: In the next section we describe the philosophy behind the EO method, in Sec. III we introduce the graph partitioning problem, and Sec. IV we present the algorithms and the results obtained in our numerical comparison of SA and EO, followed by conclusions in Sec.
  • Rank-Constrained Optimization: Algorithms and Applications
    Rank-Constrained Optimization: Algorithms and Applications. Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Chuangchuang Sun, BS. Graduate Program in Aeronautical and Astronautical Engineering, The Ohio State University, 2018. Dissertation Committee: Ran Dai, Advisor; Mrinal Kumar; Manoj Srinivasan; Wei Zhang. © Copyright by Chuangchuang Sun 2018.
    Abstract. Matrix rank related optimization problems comprise rank-constrained optimization problems (RCOPs) and rank minimization problems, which involve the matrix rank function in the objective or the constraints. General RCOPs are defined to optimize a convex objective function subject to a set of convex constraints and rank constraints on unknown rectangular matrices. RCOPs have received extensive attention due to their wide applications in signal processing, model reduction, and system identification, just to name a few. Noteworthily, the general/nonconvex quadratically constrained quadratic programming (QCQP) problem with wide applications is a special category of RCOP. On the other hand, when the rank function appears in the objective of an optimization problem, it turns out to be a rank minimization problem (RMP), classified as another category of nonconvex optimization. Applications of RMPs have been found in a variety of areas, such as matrix completion, control system analysis and design, and machine learning. First of all, RMPs and RCOPs are not isolated. Though we can transform an RMP into an RCOP by introducing a slack variable, the reformulated RCOP has an unknown upper bound on the rank function, which is a significant shortcoming. Since a given upper bound on the rank is a prerequisite for most algorithms that solve RCOPs, we first fill this gap by demonstrating that RMPs can be equivalently transformed into RCOPs by introducing a quadratic matrix equality constraint.
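The reformulation step described at the end of the abstract can be sketched with a standard factorization argument (the dissertation's exact construction may differ): a rank constraint is equivalent to a bilinear, i.e. quadratic, matrix equality.

```latex
% A rank constraint as a quadratic matrix equality:
\operatorname{rank}(X) \le r
\quad\Longleftrightarrow\quad
\exists\, U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}
\;\text{such that}\; X = U V^{\mathsf{T}} .
```

Minimizing the rank over a convex set can then be handled by searching over r (for instance by bisection) in the resulting sequence of rank-constrained feasibility problems.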
  • Continuous Extremal Optimization for Lennard-Jones Clusters
    PHYSICAL REVIEW E 72, 016702 (2005). Continuous extremal optimization for Lennard-Jones clusters. Tao Zhou(1), Wen-Jie Bai(2), Long-Jiu Cheng(2), and Bing-Hong Wang(1,*). (1) Nonlinear Science Center and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China; (2) Department of Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China. (Received 17 November 2004; revised manuscript received 19 April 2005; published 6 July 2005)
    We explore a general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems. The method, called continuous extremal optimization (CEO), can be considered as an extension of extremal optimization and consists of two components, one which is responsible for global searching and the other which is responsible for local searching. The CEO's performance proves competitive with some more elaborate stochastic optimization procedures such as simulated annealing, genetic algorithms, and so on. We demonstrate it on a well-known continuous optimization problem: the Lennard-Jones cluster optimization problem. DOI: 10.1103/PhysRevE.72.016702. PACS number(s): 02.60.Pn, 36.40.-c, 05.65.+b
    I. INTRODUCTION. The optimization of a system with many degrees of freedom with respect to some cost function is a frequently encountered task in physics and beyond. ... rank each individual according to its fitness value so that the least-fit one is in the top. Use k_i to denote the individual i's rank; clearly, the least-fit one is of rank 1. Choose one individual j that will be changed with the probability P(k_j), and ...
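For concreteness, the objective behind the Lennard-Jones cluster benchmark, together with a per-atom energy that a continuous-EO scheme could rank, is sketched below in reduced units; treating an atom's summed pair energy as its "fitness" is an assumption of this sketch, not necessarily the paper's definition.

```python
# Lennard-Jones cluster energy in reduced units (epsilon = sigma = 1), plus a
# per-atom energy that could serve as the ranked "fitness" in a CEO-like scheme.
import numpy as np

def lj_pair(r):
    """4 * (r^-12 - r^-6): the Lennard-Jones pair potential in reduced units."""
    inv6 = r ** -6
    return 4.0 * (inv6 * inv6 - inv6)

def cluster_energy(pos):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)          # each pair counted once
    return lj_pair(d[iu]).sum()

def per_atom_energy(pos):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # ignore self-interaction
    return lj_pair(d).sum(axis=1)                # candidate per-atom "fitness"

# LJ3 optimum: equilateral triangle with side 2^(1/6), total energy -3
side = 2.0 ** (1.0 / 6.0)
tri = np.array([[0, 0, 0], [side, 0, 0], [side / 2, side * np.sqrt(3) / 2, 0]])
print(cluster_energy(tri))          # approximately -3.0
print(per_atom_energy(tri))         # approximately [-2, -2, -2]
```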
  • A New Hybrid BA ABC Algorithm for Global Optimization Problems
    Mathematics, Article: A New Hybrid BA_ABC Algorithm for Global Optimization Problems. Gülnur Yildizdan(1,*) and Ömer Kaan Baykan(2). (1) Kulu Vocational School, Selcuk University, Kulu, 42770 Konya, Turkey; (2) Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Konya Technical University, 42250 Konya, Turkey; [email protected]. * Correspondence: [email protected]. Received: 1 September 2020; Accepted: 21 September 2020; Published: 12 October 2020
    Abstract: Bat Algorithm (BA) and Artificial Bee Colony Algorithm (ABC) are frequently used in solving global optimization problems. Many new algorithms in the literature are obtained by modifying these algorithms for both constrained and unconstrained optimization problems or using them in a hybrid manner with different algorithms. Although successful algorithms have been proposed, the BA's performance decline in complex and large-scale problems is still an ongoing problem. The inadequate global search capability of the BA resulting from its algorithm structure is the major cause of this problem. In this study, firstly, an inertia weight was added to the speed formula to improve the search capability of the BA. Then, a new algorithm that operates in a hybrid manner with the ABC algorithm, whose diversity and global search capability are stronger than the BA's, was proposed. The performance of the proposed algorithm (BA_ABC) was examined in four different test groups, including classic benchmark functions, CEC2005 small-scale test functions, CEC2010 large-scale test functions, and classical engineering design problems. The BA_ABC results were compared with different algorithms in the literature and current versions of the BA for each test group. The results were interpreted with the help of statistical tests.
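The modification described above, an inertia weight on the velocity term of the bat algorithm, can be sketched as follows using the commonly cited BA update equations (frequency drawn in [f_min, f_max], velocity pulled toward the current best bat); treating these as the paper's exact formulation is an assumption of this sketch.

```python
# One inertia-weighted velocity/position update step for a population of bats.
import numpy as np

def ba_velocity_step(x, v, x_best, w, f_min=0.0, f_max=2.0, rng=None):
    rng = rng or np.random.default_rng()
    beta = rng.random(len(x))                   # one frequency draw per bat
    f = f_min + (f_max - f_min) * beta
    v_new = w * v + (x - x_best) * f[:, None]   # inertia-weighted velocity
    x_new = x + v_new
    return x_new, v_new

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                     # 5 bats in 3 dimensions
v = np.zeros_like(x)
x_best = x[np.argmin((x ** 2).sum(axis=1))]     # best bat for f(x) = ||x||^2
print(ba_velocity_step(x, v, x_best, w=0.9, rng=rng)[0])
```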
  • Improvised Genetic Algorithm for Fuzzy Overlapping Community Detection in Social Networks
    International Journal of Advanced Computational Engineering and Networking, ISSN: 2320-2106, Volume-5, Issue-9, Sep.-2017, http://iraj.in
    IMPROVISED GENETIC ALGORITHM FOR FUZZY OVERLAPPING COMMUNITY DETECTION IN SOCIAL NETWORKS. Harish Kumar Shakya(1), Kuldeep Singh(2), Bhaskar Biswas(3). (1,2,3) Indian Institute of Technology, Varanasi (UP), India. Email: [email protected], [email protected], [email protected]
    Abstract: Community structure identification is an important task in social network analysis. Social communications exist within social situations, and communities are a fundamental form of social context. A social network is an application of web mining, and web mining is in turn an application of data mining. A social network is a type of structure made up of a set of social actors, such as persons or organizations, sets of pairwise ties, and other social interactions between actors. Community detection in social networks is currently a very active area of research. In this paper, we have used an improvised genetic algorithm for community detection in social networks; we used the combination of roulette wheel selection and the square quadratic knapsack problem. We have executed the experiment on different datasets, i.e. Zachary's karate club [31], American college football [39], the Dolphin social network [32], and many more. All are verified and well-known datasets in the research world of social network analysis. Experimental results show an improvement in the convergence rate of the proposed algorithm, and the discovered communities are highly inclined towards quality. Index terms: community detection, genetic algorithm, roulette wheel selection.
    I. INTRODUCTION. A social network is a branch of data mining which involves finding some structure or pattern amongst the set of individuals, groups and organizations. ... optimization problem, which involves a quality function first defined by Newman and Girvan [3] ...
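The roulette-wheel (fitness-proportionate) selection operator mentioned in the abstract is a standard GA building block; a minimal sketch follows, with the community-detection encoding and the knapsack component of the paper deliberately left out.

```python
# Fitness-proportionate ("roulette wheel") parent selection.
import numpy as np

def roulette_wheel_select(fitness, n_parents, rng=None):
    """Select individuals with probability proportional to their fitness."""
    rng = rng or np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    probs = fitness / fitness.sum()              # assumes non-negative fitness
    return rng.choice(len(fitness), size=n_parents, p=probs)

rng = np.random.default_rng(42)
fitness = np.array([0.1, 0.4, 0.2, 0.3])   # e.g. modularity of 4 candidate clusterings
print(roulette_wheel_select(fitness, n_parents=6, rng=rng))   # index 1 favoured
```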
  • Nature Based Algorithms Have Been Used Successfully For
    Proceedings of the 1st WORCAP, INPE, São José dos Campos, 25 October 2001, p. 115-117. Extremal Dynamics Applied to Function Optimization. Fabiano Luis de Sousa(*,1), Fernando Manuel Ramos(**,1). (1) Área de Computação Científica, Laboratório Associado de Computação e Matemática Aplicada, Instituto Nacional de Pesquisas Espaciais (INPE). (*) Doctoral student, email: [email protected]; (**) Advisor
    Abstract. In this article the Generalized Extremal Optimization algorithm is introduced. Based on the dynamics of self-organized criticality, it is intended to be used in complex inverse design problems, where traditional gradient-based optimization methods are not efficient. Preliminary results from a set of test functions show that this algorithm can be competitive with other stochastic methods such as genetic algorithms. Key words: Extremal dynamics, optimization, self-organized criticality.
    Introduction. Recently, Boettcher and Percus [1] have proposed a new optimization algorithm based on a simplified model of natural selection developed by Bak and Sneppen [2]. Evolution in this model is driven by an extremal process that shows characteristics of self-organized criticality (SOC). Boettcher and Percus [1] have adapted the evolutionary model of Bak and Sneppen [2] to tackle hard problems in combinatorial optimization, calling their algorithm Extremal Optimization (EO). Although algorithms such as Simulated Annealing (SA), Genetic Algorithms (GAs) and EO are inspired by natural processes, their practical implementation for optimization problems shares a common feature: the search for the optimum is done through a stochastic process that is "guided" by the setting of adjustable parameters. Since the proper setting of these parameters is very important to the performance of the algorithms, it is highly desirable that they have few such parameters, so that the cost of finding the best set for a given optimization problem does not become a costly task in itself.
  • Extremal Optimization at the Phase Transition of the 3-Coloring Problem
    PHYSICAL REVIEW E 69, 066703 (2004). Extremal optimization at the phase transition of the three-coloring problem. Stefan Boettcher, Department of Physics, Emory University, Atlanta, Georgia 30322, USA; Allon G. Percus, Computer & Computational Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA, and UCLA Institute for Pure and Applied Mathematics, Los Angeles, California 90049, USA. (Received 10 February 2004; published 24 June 2004)
    We investigate the phase transition in vertex coloring on random graphs, using the extremal optimization heuristic. Three-coloring is among the hardest combinatorial optimization problems and is equivalent to a 3-state anti-ferromagnetic Potts model. Like many other such optimization problems, it has been shown to exhibit a phase transition in its ground state behavior under variation of a system parameter: the graph's mean vertex degree. This phase transition is often associated with the instances of highest complexity. We use extremal optimization to measure the ground state cost and the "backbone," an order parameter related to ground state overlap, averaged over a large number of instances near the transition for random graphs of size n up to 512. For these graphs, benchmarks show that extremal optimization reaches ground states and explores a sufficient number of them to give the correct backbone value after about O(n^3.5) update steps. Finite size scaling yields a critical mean degree value of 4.703(28). Furthermore, the exploration of the degenerate ground states indicates that the backbone order parameter, measuring the constrainedness of the problem, exhibits a first-order phase transition. DOI: 10.1103/PhysRevE.69.066703. PACS number(s): 02.60.Pn, 05.10.-a, 75.10.Nr
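The ground-state cost measured above is simply the number of monochromatic ("conflict") edges of a 3-coloring, which is zero exactly when a proper coloring exists; the short sketch below computes it for a random graph near the reported mean-degree region, with the graph generator included only for illustration.

```python
# Conflict count of a 3-coloring on a G(n, p) random graph.
import random

def conflicts(edges, coloring):
    """Ground-state cost: number of edges whose endpoints share a color."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

def random_graph(n, mean_degree, seed=0):
    rng = random.Random(seed)
    p = mean_degree / (n - 1)                    # G(n, p) with the target mean degree
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

n = 100
edges = random_graph(n, mean_degree=4.7)         # near the reported transition region
rng = random.Random(1)
coloring = [rng.randrange(3) for _ in range(n)]
print(conflicts(edges, coloring))                # a random coloring leaves roughly 1/3 of edges in conflict
```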
  • Studies on Extremal Optimization and Its Applications in Solving Real World Optimization Problems
    Proceedings of the 2007 IEEE Symposium on Foundations of Computational Intelligence (FOCI 2007). Studies on Extremal Optimization and Its Applications in Solving Real World Optimization Problems. Yong-Zai Lu*, IEEE Fellow, Min-Rong Chen, Yu-Wang Chen. Department of Automation, Shanghai Jiaotong University, Shanghai, China.
    Abstract: Recently, a local-search heuristic algorithm called Extremal Optimization (EO) has been proposed and successfully applied in some NP-hard combinatorial optimization problems. This paper presents an investigation on the fundamentals of EO with its applications in discrete and numerical optimization problems. The EO was originally developed from the fundamentals of statistical physics. However, in this study we also explore the mechanism of EO from all three aspects: statistical physics, biological evolution or co-evolution, and ecosystems. Furthermore, we introduce our contributions to the applications of EO in solving the traveling salesman problem (TSP) and production scheduling, and multi-objective optimization problems, with a novel perspective in discrete and continuous search spaces, respectively.
    ... of surviving. Species in this model are located on the sites of a lattice. Each species is assigned a fitness value randomly with uniform distribution. At each update step, the worst adapted species is always forced to mutate. The change in the fitness of the worst adapted species will cause the alteration of the fitness landscape of its neighbors. This means that the fitness values of the species around the worst one will also be changed randomly, even if they are well adapted. After a number of iterations, the system evolves to a highly correlated state known as SOC. In that state, almost all species have fitness values above a certain ...
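The Bak-Sneppen picture summarized in the excerpt (random fitness values, repeated mutation of the worst species together with its neighbours, self-organization to a correlated state) can be reproduced in a few lines; the ring topology and the threshold value quoted in the comment are standard for the one-dimensional model, not taken from this paper.

```python
# Minimal Bak-Sneppen simulation on a one-dimensional ring.
import numpy as np

def bak_sneppen(n_species=200, n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    fitness = rng.random(n_species)
    for _ in range(n_steps):
        worst = np.argmin(fitness)                       # worst-adapted species
        for j in (worst - 1, worst, (worst + 1) % n_species):
            fitness[j] = rng.random()                    # mutate it and its neighbours
    return fitness

f = bak_sneppen()
# after many updates, most fitness values sit above a self-organized threshold (about 0.67 in 1D)
print(f.min(), np.quantile(f, 0.05))
```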