Target Curricula for Multi-Target Classification: the Role of Internal Meta-Features in Machine Teaching


Target Curricula for Multi-Target Classification: The Role of Internal Meta-Features in Machine Teaching

by Shannon Kayde Fenn, B Eng (Comp) / B Comp Sc

Thesis submitted in fulfilment of the requirements for the Degree of Doctor of Philosophy.

Supervisor: Prof Pablo Moscato
Co-Supervisor: Dr Alexandre Mendes
Co-Supervisor: Dr Nasimul Noman

This research was supported by an Australian Government Research Training Program (RTP) Scholarship.

The University of Newcastle
School of Electrical Engineering and Computing
June 2019

© Copyright by Shannon Kayde Fenn 2019

Statement of Originality

I hereby certify that the work embodied in the thesis is my own work, conducted under normal supervision. The thesis contains no material which has been accepted, or is being examined, for the award of any other degree or diploma in any university or other tertiary institution and, to the best of my knowledge and belief, contains no material previously published or written by another person, except where due reference has been made. I give consent to the final version of my thesis being made available worldwide when deposited in the University's Digital Repository, subject to the provisions of the Copyright Act 1968 and any approved embargo.

Shannon Kayde Fenn
3rd June 2019

To Mum and Mar Mar: the brightest lights may shine the shortest, but their light reaches farthest.

Acknowledgments

The first and most important acknowledgement must go to my wife Coralie. You have been with me through every hard moment of my adult life, and the cause of most of the good ones. This thesis would not exist if not for you, nor would I be the person I am today. Thank you.

My heartfelt gratitude to my advisers, Pablo, Nasimul, and Alex. Your guidance and wisdom sheltered me from many a poor choice over the years. Your support in professional and personal matters went above and beyond. I am thankful to count you as friends.

To my father, Richard: you have always been my hero. The way I approach the world and treat the people in it is completely due to you. The day you met Mum, and the day you married her, I'm sure they were her best.

To my little sisters, Jess and Deeds: never change. You've always made me feel loved and special. If ever two people made me feel like I could accomplish something, it was you two.

To Bo, Grizzy, Terry, Archie, Baxter, Jill, Brian, Jack, Amos, and everyone else in the Fenn clan and beyond: I can't remember a single time I didn't feel part of the family, and for that I can never thank you enough.

The best result of choosing to do a PhD has been the friends I made. To my friends from CIBM: Amer, Amir, Claudio, Francia, Heloisa, Jake, Inna, Leila, Luke, Łukasz, Marta, Nader, and Natalie, thank you for all the coffee breaks, lunch conversations that were too good to want to leave, and countless other helping hands along the way. To Pat, Matt, Greg, Amy, Chelsea, Ben, Bec, Jason, and far too many more good people to list, thank you for all your support throughout these years.

Shannon Kayde Fenn
The University of Newcastle
June 2019

List of Publications

• S. Fenn and P. Moscato, "Target Curricula via Selection of Minimum Feature Sets: A Case Study in Boolean Networks". The Journal of Machine Learning Research, vol. 18, no. 114, pp. 1–26, 2017.
Contents

Acknowledgments
List of Publications
List of Tables
List of Figures
Abstract

1 Introduction
  1.1 Curriculum Learning
  1.2 Machine Learning for Logic Synthesis
  1.3 Research Aims
  1.4 Thesis Overview
2 Background
  2.1 Learning Multiple Targets
    2.1.1 Formalism and Terminology
    2.1.2 The Label/Target distinction
    2.1.3 Measuring Prediction Performance
    2.1.4 Methods for MTC
  2.2 Curriculum Learning
    2.2.1 Curricula in Human and Animal Learning
    2.2.2 Example Curricula
    2.2.3 Target Curricula
    2.2.4 Measuring and comparing curricula
    2.2.5 Summary
  2.3 Intrinsic Dimension and Feature Selection
    2.3.1 Feature Selection
    2.3.2 The Minimum Feature Set Problem
    2.3.3 Summary
  2.4 Logic Synthesis
  2.5 Boolean Networks
    2.5.1 Training Boolean Networks
    2.5.2 Late-Acceptance Hill Climbing
    2.5.3 Varying sample size
  2.6 Conclusion
3 Target Curricula and Hierarchical Loss Functions
  3.1 Can Guiding Functions Enforce a Curriculum?
    3.1.1 Hierarchical Loss Functions
    3.1.2 Cases with Suspected Curricula
    3.1.3 Training
    3.1.4 Experimental Results
  3.2 Appraising "Easy-to-Hard": Ablation Studies
  3.3 Discovering Curricula
    3.3.1 Target Complexity and Minimum Feature Sets
    3.3.2 Experiments and Results
  3.4 Real-World Problems
    3.4.1 ALU and Biological Models
    3.4.2 Inferring Regulatory Network Dynamics From Time-Series
    3.4.3 Results
  3.5 Discussion and Conclusion
    3.5.1 Issues and Limitations
    3.5.2 Future Work and Direction
4 Application of ID-curricula to Classifier Chains
  4.1 Introduction
    4.1.1 Classifier Chains
    4.1.2 When and how to order chains
  4.2 Target-aware ID-curricula
  4.3 FBN Classifier Chains
  4.4 Experiments
    4.4.1 Datasets
    4.4.2 Training and Evaluation
    4.4.3 Qualifying Curricula
  4.5 Results
    4.5.1 Prior benchmarks
    4.5.2 Random Cascaded Circuits
    4.5.3 LGSynth91
  4.6 Discussion and Conclusion
    4.6.1 Future Work and Direction
5 Adaptive Learning Via Iterated Selection and Scheduling
  5.1 Self-paced Curricula and Internal Meta-Features
    5.1.1 Motivation: stepping stone state discovery
    5.1.2 The Internal Meta-Feature Selection principle
    5.1.3 Tractability
  5.2 Baseline Comparison
    5.2.1 Baselines
    5.2.2 Measures
    5.2.3 Results and Discussion
  5.3 Ablative Studies
    5.3.1 Aims and Experimental Design
    5.3.2 Results and Discussion
  5.4 Scaling to Larger Problems
  5.5 Conclusion
    5.5.1 Future Work
6 Application of ALVISS to Deep Neural Nets
  6.1 Feedforward Neural Networks
  6.2 Meta-features and Target Curricula in NNs
  6.3 Experiments
    6.3.1 Architectures
    6.3.2 Hyper-parameter Selection
    6.3.3 Metrics
  6.4 Results
  6.5 Discussion and Conclusions
7 Conclusion and future work
  7.1 Conclusions and Contributions
  7.2 Suggestions for Future Work
    7.2.1 Feature Selection
    7.2.2 Domain Extension
    7.2.3 Benchmarks

List of Tables

3.1 Example error matrices and the associated values for L_1, L_w, L_lh, and L_gh (ignoring a normalisation constant for readability). Note that the purpose is not to directly compare the different losses on a particular matrix, but instead to compare the pairwise disparities in the same loss on different error matrices. For example, the first and second matrices are equivalent under L_1, but the first is preferred by all other losses. Rows of E are examples and columns are targets (ordered by the curriculum being enforced). Also included are equivalents of E under the two hierarchical losses: the recurrences defining these losses can be thought of as defining a transformation on E under which the respective loss is equivalent to L_1.
3.2 Instance sizes of initial test-bed problems.
3.3 Instance sizes of the real-world test-beds.
3.4 Results for the yeast dataset. See Figure 3.10 for the estimated hierarchies. Note that these results are mean difference in test set accuracy, reported as a percentage, and not MCC. The use of curricula on the SK → {Ste9, Rum1} hierarchy yielded negligible improvement; however, we see more promise in the PP → {Cdc2/Cdc13, Cdc2/Cdc13*} hierarchy, particularly for L_gh.
3.5 Results for the E. coli dataset. Note that these results are mean difference in test set accuracy, reported as a percentage, and not MCC. See Figure 3.10 for the estimated hierarchies. The use of curricula on the {G2, G8} → G6 hierarchy has given some improvement; however, for L_lh we see a drop in performance on one of the base targets in the hierarchy. For L_gh the results remained positive overall.
4.1 Test-bed instance sizes used in this chapter.
5.1 Parameters for Meta-heuristic for Randomised Priority Search (Meta-RaPS) [203], along with optimal values found using random parameter search [206].
5.2 Instance and training set sizes for the large adders. Number of targets and example pool size are not given, as they are 2n_i and 2^{2n_i} respectively for all problems.
6.1 Architecture and hyper-parameter search space and values found using random parameter search [206].

List of Figures

2.1 A 56-node Feedforward Boolean Network (FBN) which correctly implements the 6-bit addition function. Each node takes 2 inputs and computes the NAND function as its output. Inputs (far left) have been coloured red and outputs (far right) green.
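As a companion to the Figure 2.1 caption above, here is a minimal sketch of how such a feedforward Boolean network of 2-input NAND nodes can be evaluated. The index-pair wiring representation and the XOR example are illustrative assumptions, not the thesis's actual data structures.

```python
# Evaluate a feedforward Boolean network (FBN) in which every internal
# node is a 2-input NAND gate, in the spirit of Figure 2.1.
def evaluate_fbn(inputs, gates, output_indices):
    # inputs:  list of bools feeding the network
    # gates:   list of (a, b) index pairs in feedforward order; indices
    #          refer into the growing `values` list (inputs come first)
    # output_indices: which node values form the network's output
    values = list(inputs)
    for a, b in gates:
        values.append(not (values[a] and values[b]))  # NAND
    return [values[i] for i in output_indices]

# Example: XOR built from four NAND gates.
gates = [(0, 1), (0, 2), (1, 2), (3, 4)]
print(evaluate_fbn([True, False], gates, [5]))  # -> [True]
```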
Recommended publications
  • Reinforcement Learning in Supervised Problem Domains
Technische Universität München, Fakultät für Informatik, Lehrstuhl VI (Echtzeitsysteme und Robotik). Reinforcement Learning in Supervised Problem Domains. Thomas F. Rückstieß. Complete reprint of the dissertation approved by the Fakultät für Informatik of the Technische Universität München for the award of the academic degree of Doktor der Naturwissenschaften (Dr. rer. nat.). Chair: Univ.-Prof. Dr. Daniel Cremers. Examiners of the dissertation: 1. Univ.-Prof. Dr. Patrick van der Smagt; 2. Univ.-Prof. Dr. Hans Jürgen Schmidhuber. The dissertation was submitted to the Technische Universität München on 30 June 2015 and accepted by the Fakultät für Informatik on 18 September 2015. Thomas Rückstieß: Reinforcement Learning in Supervised Problem Domains, © 2015. Email: [email protected].

Abstract: Despite continuous advances in computing technology, today's brute-force data processing approaches may not provide the necessary advantage to win the race against the ever-growing amount of data witnessed over the last decades. In this thesis, we discuss novel methods and algorithms that are capable of directing attention to relevant details and analysing them in sequence, to overcome the processing bottleneck and to keep up with this data explosion. In the first of three parts, a novel exploration technique for Policy Gradient Reinforcement Learning is presented which replaces traditional additive random exploration with state-dependent exploration, exploring on a higher, more strategic level. We show how this new exploration method converges faster and finds better global solutions than random exploration can. The second part of this thesis introduces the concept of "data consumption" and discusses means to minimise it in supervised learning tasks by deriving classification as a sequential decision process and making it accessible to Reinforcement Learning methods.
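The state-dependent exploration idea in this abstract can be illustrated with a toy contrast: instead of adding fresh noise to every action, a perturbation of the policy parameters is drawn once per episode, so the same state always receives the same exploratory action. The linear policy and all names below are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=4)  # weights of a toy linear policy

def action_additive(state, sigma=0.1):
    """Traditional exploration: new random noise at every step."""
    return theta @ state + rng.normal(0.0, sigma)

class StateDependentExplorer:
    """Perturb the parameters once per episode; within an episode the
    exploratory policy is a deterministic function of the state."""
    def __init__(self, sigma=0.1):
        self.eps = rng.normal(0.0, sigma, size=theta.shape)
    def action(self, state):
        return (theta + self.eps) @ state

explorer = StateDependentExplorer()  # re-create at each episode start
s = np.ones(4)
print(action_additive(s), action_additive(s))  # differ step to step
print(explorer.action(s), explorer.action(s))  # identical within episode
```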
  • A Comprehensive Case Study: an Examination of Machine Learning and Connectionist Algorithms
A Comprehensive Case Study: An Examination of Machine Learning and Connectionist Algorithms. A thesis presented to the Department of Computer Science, Brigham Young University, in partial fulfillment of the requirements for the degree Master of Science. Frederick Zarndt, [email protected], June 1995.

Contents: 1. Introduction; 2. Databases (2.1 Database Names; 2.2 Database Statistics); 3. Learning Models (3.1 Decision Trees; 3.2 Nearest Neighbor; 3.3 Statistical; 3.4 Rule Based; 3.6 Neural Networks); 4. Method (4.1 Cross-Validation; 4.2 Consistency of Training/Testing Sets and Presentation Order; 4.3 Attribute Discretization; 4.4 Unknown Attribute Values; 4.5 Learning Model Parameters); 5. Results; 6. Limitations; 7. Conclusion; Appendix A: Standard Database Format (SDF); Appendix B: Tools (B.1 Database Tools: verify, verify.ksh, dsstats, dsstats.ksh, xlate; B.2 Test Tools: xval.<algorithm>, test-ml, test-conn; B.3 Result Extraction Tools: average, summarize); Appendix C: Database Generators; Appendix D: Learning Model Parameters (D.1 Decision Trees; D.2 Nearest Neighbor; D.3 Statistical; D.4 Rule Based; D.5 Neural Networks); Appendix E: Database Descriptions; Bibliography.
  • On the Parameterization of Cartesian Genetic Programming
On the Parameterization of Cartesian Genetic Programming. Paul Kaufmann (Gutenberg School of Management & Economics, Johannes Gutenberg University, Mainz, Germany; [email protected]) and Roman Kalkreuth (Department of Computer Science, Dortmund Technical University, Germany; [email protected]).

Abstract: In this work, we present a detailed analysis of the Cartesian Genetic Programming (CGP) parametrization of the selection scheme (µ + λ) and the levels-back parameter l. We also investigate CGP's mutation operator by decomposing it into self-recombination, node function mutation, and inactive-gene randomization operators. We perform experiments in the Boolean and symbolic regression domains, with which we contribute to the knowledge about efficient parametrization of two essential parameters of CGP and of the mutation operator.

Index Terms: Cartesian Genetic Programming.

I. INTRODUCTION. Genetic Programming (GP) can be considered a nature-inspired search heuristic which allows for the automatic…

…(1 + 4) Evolutionary Strategy (ES) that prefers offspring individuals over parents of the same fitness. The consequence of this selection preference is that inactive genes that have been touched by the randomization operator do not contribute to a change in fitness and therefore always propagate to the next generation. This circumstance allows a steady influx of random genetic material, and we assume that it supports the self-recombination operator in exploring close and distant parts of the search space, i.e., increases its search horizon. The search horizon of the self-recombination operator can also be improved using another inner mechanism of CGP: Goldman and Punch showed that active genes are distributed unevenly in a single-line CGP model [15].
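A minimal sketch of the (1+4) ES selection rule described above, in which an offspring that merely ties the parent's fitness still replaces it, so mutations of inactive genes drift neutrally into later generations. The bitstring demo and function names are illustrative assumptions rather than the paper's actual CGP setup.

```python
import random

def one_plus_four_es(parent, fitness, mutate, generations=200):
    """(1+4) ES: keep the best of four mutants whenever it is at least
    as fit as the parent; note '>=' rather than '>'."""
    parent_fit = fitness(parent)
    for _ in range(generations):
        scored = [(fitness(o), o) for o in (mutate(parent) for _ in range(4))]
        best_fit, best = max(scored, key=lambda t: t[0])
        if best_fit >= parent_fit:  # equal fitness: the offspring wins
            parent, parent_fit = best, best_fit
    return parent, parent_fit

# Toy demo: only the first half of the bitstring is "active"; flips in
# the second half are neutral and drift along with accepted offspring.
def fitness(bits): return sum(bits[:8])
def mutate(bits):
    child = list(bits)
    child[random.randrange(len(child))] ^= 1
    return child

best, fit = one_plus_four_es([0] * 16, fitness, mutate)
print(fit, best)
```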
  • Dynamic Page Based Crossover in Linear Genetic Programming
Dynamic Page Based Crossover in Linear Genetic Programming. M.I. Heywood, A.N. Zincir-Heywood.

Abstract: Page-based Linear Genetic Programming (GP) is proposed, in which individuals are described in terms of a number of pages. Pages are expressed in terms of a fixed number of instructions, constant for all individuals in the population. Pairwise crossover results in the swapping of single pages; thus individuals are of a fixed number of instructions. Head-to-head comparison with Tree-structured GP and block-based Linear GP indicates that the page-based approach evolves succinct solutions without penalizing generalization ability.

Keywords: Genetic Programming, Homologous Crossover, Linear Structures, Benchmarking.

I. INTRODUCTION. A Darwinian perspective on natural selection implies that a set of individuals compete for a finite set of resources, with individuals surviving more frequently when they…

…take the form of a 'linear' list of instructions [5-9]. Execution of an individual therefore mimics the process of program execution normally associated with a simple register machine, as opposed to traversing a tree structure (leaves representing an input, the root node the output). Each instruction is defined in terms of an opcode and operand, and modifies the contents of internal registers, memory and program counter. The second component of interest is the crossover operator. Biologically, crossover is not 'blind': chromosomes exist as distinct pairs, each with a matching homologous partner [10]. Thus, only when chromosome sequences are aligned may crossover take place, the entire process being referred to as meiosis [10]. Until recently, however, crossover as applied in GP has been blind. Typically, the stochastic nature of crossover results in individuals whose instruction count continues to increase with generation without a corresponding improvement to performance.
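A hedged sketch of the page-based crossover described above: genomes are fixed-length instruction lists partitioned into equal-size pages, and crossover swaps a single page between aligned positions, so the instruction count never changes. The page size, the instruction encoding, and the aligned-page choice are assumptions made for illustration.

```python
import random

PAGE_SIZE = 4  # instructions per page, constant across the population

def page_crossover(parent_a, parent_b, rng=random):
    """Swap one whole page (same page index in both parents)."""
    assert len(parent_a) == len(parent_b) and len(parent_a) % PAGE_SIZE == 0
    p = rng.randrange(len(parent_a) // PAGE_SIZE) * PAGE_SIZE
    child_a = parent_a[:p] + parent_b[p:p + PAGE_SIZE] + parent_a[p + PAGE_SIZE:]
    child_b = parent_b[:p] + parent_a[p:p + PAGE_SIZE] + parent_b[p + PAGE_SIZE:]
    return child_a, child_b

# Toy usage with opcode/operand tuples standing in for instructions.
a = [("add", i) for i in range(8)]
b = [("sub", i) for i in range(8)]
print(page_crossover(a, b))
```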
  • Outline of Machine Learning
The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of computer science[1] (more particularly soft computing) that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed".[2] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[3] Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

Contents: What type of thing is machine learning?; Branches of machine learning (subfields of machine learning; cross-disciplinary fields involving machine learning); Applications of machine learning; Machine learning hardware; Machine learning tools (frameworks; libraries; algorithms); Machine learning methods (dimensionality reduction; ensemble learning; meta learning; reinforcement learning; supervised learning; unsupervised learning; semi-supervised learning; deep learning; other machine learning methods and problems); Machine learning research; History of machine learning; Machine learning projects; Machine learning organizations; Machine learning conferences and workshops; Machine learning publications.
  • A Survey of Bio Inspired Optimization Algorithms
International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume-2, Issue-2, May 2012. A Survey of Bio inspired Optimization Algorithms. Binitha S, S Siva Sathya.

Abstract: Nature is of course a great and immense source of inspiration for solving hard and complex problems in computer science, since it exhibits extremely diverse, dynamic, robust, complex and fascinating phenomena. It always finds the optimal solution to its problems while maintaining perfect balance among its components. This is the thrust behind bio-inspired computing. Nature-inspired algorithms are metaheuristics that mimic nature to solve optimization problems, opening a new era in computation. For the past decades, numerous research efforts have been concentrated in this particular area. Still being young, with results that are very amazing, the field broadens the scope and viability of Bio Inspired Algorithms (BIAs), exploring new areas of application and more opportunities in computing. This paper presents a broad overview of biologically inspired optimization algorithms, grouped by the biological field that inspired each, and the areas…

…example for optimization, because if we closely examine each and every feature or phenomenon in nature, it always finds the optimal strategy, addressing complex interactions among organisms ranging from microorganisms to fully fledged human beings, balancing the ecosystem, maintaining diversity and adaptation, and producing physical phenomena like river formation, forest fires, clouds, and rain. Even though the strategy behind the solution is simple, the results are amazing. Nature is the best teacher, and its designs and capabilities are so enormous and mysterious that researchers are trying to mimic nature in technology. The two fields also have a much stronger connection, since it seems entirely reasonable that new or persistent problems in computer science could have a lot in common with problems nature has encountered and resolved long ago.
  • Retaining Experience and Growing Solutions
Retaining Experience and Growing Solutions. Robyn Ffrancon (TAO team, INRIA Saclay, Univ. …).

Abstract: Generally, when genetic programming (GP) is used for function synthesis, any valuable experience gained by the system is lost from one problem to the next, even when the problems are closely related. With the aim of developing a system which retains beneficial experience from problem to problem, this paper introduces the novel Node-by-Node Growth Solver (NNGS) algorithm, which features a component, called the controller, that can be adapted and improved for use across a set of related problems. NNGS grows a single solution tree from root to leaves. Using semantic backpropagation and acting locally on each node in turn, the algorithm employs the controller to assign subsequent child nodes until a fully formed solution is generated. The aim of this paper is to pave a path towards the use of a neural network as the controller component and also, separately, towards the use of meta-GP as a mechanism for improving the controller component. A proof-of-concept controller is discussed which demonstrates the success and potential of the NNGS algorithm. In this case, the controller constitutes a set of hand-written rules which can be used to deterministically and greedily solve standard Boolean function synthesis benchmarks. Even before employing machine learning to improve the controller, the algorithm vastly outperforms other well-known recent algorithms on run times, maintains comparable solution sizes, and has a 100% success rate on all Boolean function synthesis benchmarks tested so far.

Keywords: Genetic Programming, Semantic Backpropagation, Local Search, Meta-GP, Neural Networks.
  • Probabilistic Graph Programs for Randomised and Evolutionary Algorithms
Probabilistic Graph Programs for Randomised and Evolutionary Algorithms. Timothy Atkinson, Detlef Plump, and Susan Stepney (Department of Computer Science, University of York, UK; {tja511, detlef.plump, susan.stepney}@york.ac.uk).

Abstract: We extend the graph programming language GP 2 with probabilistic constructs: (1) choosing rules according to user-defined probabilities and (2) choosing rule matches uniformly at random. We demonstrate these features with graph programs for randomised and evolutionary algorithms. First, we implement Karger's minimum cut algorithm, which contracts randomly selected edges; the program finds a minimum cut with high probability. Second, we generate random graphs according to the G(n, p) model. Third, we apply probabilistic graph programming to evolutionary algorithms working on graphs; we benchmark odd-parity digital circuit problems and show that our approach significantly outperforms the established approach of Cartesian Genetic Programming.

1 Introduction. GP 2 is a rule-based graph programming language which frees programmers from handling low-level data structures for graphs. The language comes with a concise formal semantics and aims to support formal reasoning on programs; see, for example, [22, 19, 11]. The semantics of GP 2 is nondeterministic in two respects: to execute a rule set {r1, …, rn} on a host graph G, any of the rules applicable to G can be picked and applied; and to apply a rule r, any of the valid matches of r's left-hand side in the host graph can be chosen. GP 2's compiler [4] has been designed by prioritising speed over completeness, thus it simply chooses the first applicable rule in textual order and the first match that is found.
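The Karger step mentioned in the abstract, contracting randomly selected edges until only two groups of nodes remain, can be sketched in plain Python. The paper itself expresses this as a probabilistic GP 2 graph program, so the edge-list and union-find representation below is a stand-in for illustration, not the paper's encoding.

```python
import random

def karger_cut(nodes, edges, rng=random):
    """One run of randomised contraction; returns the crossing edges."""
    parent = {v: v for v in nodes}
    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    groups = len(nodes)
    while groups > 2:
        u, v = rng.choice(edges)   # uniformly random edge
        ru, rv = find(u), find(v)
        if ru != rv:               # contract it unless it is a self-loop
            parent[ru] = rv
            groups -= 1
    return [(u, v) for u, v in edges if find(u) != find(v)]

cut = karger_cut([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
print(len(cut), cut)  # repeating the run makes a minimum cut likely
```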
  • Parametrizing Cartesian Genetic Programming: an Empirical Study
Parametrizing Cartesian Genetic Programming: An Empirical Study. Paul Kaufmann (Department of Computer Science, Paderborn University, Germany) and Roman Kalkreuth (Department of Computer Science, University of Dortmund, Germany).

Abstract: Since its introduction two decades ago, the way researchers parameterized and optimized Cartesian Genetic Programming (CGP) has remained almost unchanged. In this work we investigate non-standard parameterizations and optimization algorithms for CGP. We show that the conventional way of using CGP, i.e. configuring it as a single line optimized by a (1+4) Evolutionary Strategies-style search scheme, is a very good choice, but that rectangular CGP geometries and more elaborate metaheuristics, such as Simulated Annealing, can lead to faster convergence rates.

1 Introduction. Almost two decades ago, Miller, Thompson, Kalganova, and Fogarty presented the first publications on CGP: an encoding model inspired by the two-dimensional array of functional nodes connected by feed-forward wires of a Field Programmable Gate Array (FPGA) device [6, 1]. CGP has multiple pivotal advantages:
• CGP comprises an inherent mechanism for the design of simple hierarchical functions. While in many optimization systems such a mechanism has to be implemented explicitly, in CGP multiple feed-forward wires may originate from the same output of a functional node. This property can be very useful for the evolution of goal functions that may benefit from repetitive inner structures.
• The maximal size of encoded solutions is bounded, saving CGP to some extent from the "bloat" that is characteristic of Genetic Programming (GP).
• CGP offers an implicit way of propagating redundant information throughout the generations. This mechanism can be used as a source of randomness and a memory for evolutionary artifacts.
  • The Hierarchical Fair Competition (HFC) Framework for Sustainable Evolutionary Algorithms
The Hierarchical Fair Competition (HFC) Framework for Sustainable Evolutionary Algorithms. Jianjun Hu ([email protected]; Department of Computer Science and Engineering), Erik Goodman ([email protected]), Kisung Seo ([email protected]), Zhun Fan ([email protected]; Department of Electrical and Computer Engineering), Rondal Rosenberg ([email protected]; Department of Mechanical Engineering), Michigan State University, East Lansing, MI 48823, USA.

Abstract: Many current Evolutionary Algorithms (EAs) suffer from a tendency to converge prematurely or stagnate without progress on complex problems. This may be due to the loss of, or failure to discover, certain valuable genetic material, or to the loss of the capability to discover new genetic material, before convergence has limited the algorithm's ability to search widely. In this paper, the Hierarchical Fair Competition (HFC) model, including several variants, is proposed as a generic framework for sustainable evolutionary search by transforming the convergent nature of the current EA framework into a non-convergent search process. That is, the structure of HFC does not allow the convergence of the population to the vicinity of any set of optimal or locally optimal solutions. The sustainable search capability of HFC is achieved by ensuring a continuous supply and incorporation of genetic material in a hierarchical manner, and by culturing and maintaining, but continually renewing, populations of individuals of intermediate fitness levels. HFC employs an assembly-line structure in which subpopulations are hierarchically organized into different fitness levels, reducing the selection pressure within each subpopulation while maintaining the global selection pressure to help ensure the exploitation of the good genetic material found.
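A toy sketch of the HFC stratification described above: individuals live in levels with fitness admission thresholds, anyone exceeding the next level's threshold migrates upward, and (elsewhere in the main loop) fresh random individuals enter only at the bottom. The thresholds, level counts, and names are illustrative assumptions, not the paper's exact scheme.

```python
def hfc_migrate(levels, thresholds, fitness):
    """Promote individuals whose fitness meets the next level's
    admission threshold; levels[0] is the entry level."""
    for i in range(len(levels) - 1):
        promoted = [x for x in levels[i] if fitness(x) >= thresholds[i + 1]]
        levels[i + 1].extend(promoted)
        levels[i] = [x for x in levels[i] if fitness(x) < thresholds[i + 1]]
    return levels

# Toy usage: individuals are just fitness values in [0, 1).
import random
levels = [[random.random() for _ in range(6)], [], []]
print(hfc_migrate(levels, thresholds=[0.0, 0.4, 0.8], fitness=lambda x: x))
```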
  • On Evolutionary Synthesis of Compact Polymorphic Combinational Circuits⋆
On Evolutionary Synthesis of Compact Polymorphic Combinational Circuits. Zbyšek Gajda ([email protected]) and Lukáš Sekanina ([email protected]), Faculty of Information Technology, Brno University of Technology, Czech Republic. This is the authors' manuscript of the paper: Gajda Zbysek, Sekanina Lukas: On Evolutionary Synthesis of Compact Polymorphic Combinational Circuits. Journal of Multiple-Valued Logic and Soft Computing, Vol. 17, No. 6, 2011, pp. 607–631. For the final version, see Old City Publishing, Inc. at http://www.oldcitypublishing.com/MVLSC/MVLSCcontents/MVLSCv17n5-6contents.html

Abstract: Polymorphic gates are unconventional circuit components that are not supported by existing synthesis tools. This article presents new methods for the synthesis of polymorphic circuits. The proposed methods, based on polymorphic binary decision diagrams and polymorphic multiplexing, extend ordinary circuit representations with the aim of including polymorphic gates. In order to reduce the number of gates in circuits synthesized using the proposed methods, an evolutionary optimization based on Cartesian Genetic Programming (CGP) is implemented. The implementations of polymorphic circuits optimized by CGP represent the best known solutions if the number of gates is considered as the decision criterion.

Key words: polymorphic circuit, digital circuit synthesis, evolutionary computing, genetic programming.

1 Introduction. Polymorphic electronics was introduced by A. Stoica's group at NASA Jet Propulsion Laboratory as a new class of electronic devices that exhibit a new style of (re)configuration [28]. Polymorphic gates play the central role in polymorphic electronics. A polymorphic gate is capable of switching among two or more logic functions. However, selection of the function is performed unconventionally.
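To make the central notion concrete: a two-function polymorphic gate computes one logic function or another depending on an environment signal (supply voltage is a control often cited in Stoica's work). The NAND/NOR pairing below is an assumed textbook example, not one of the article's synthesized circuits.

```python
def polymorphic_gate(a: bool, b: bool, mode: str) -> bool:
    """Behave as NAND in one environment and NOR in the other."""
    if mode == "nand":
        return not (a and b)
    if mode == "nor":
        return not (a or b)
    raise ValueError(f"unknown mode: {mode}")

# Print both truth tables of the single gate.
for mode in ("nand", "nor"):
    table = [polymorphic_gate(a, b, mode)
             for a in (False, True) for b in (False, True)]
    print(mode, table)
```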