STATE ENGINEERING UNIVERSITY OF ARMENIA, Department of Computer Systems and Informatics: Development of Resource Sharing System Components for AliEn Grid Infrastructure

Total Pages: 16

File Type: pdf, Size: 1020 KB

STATE ENGINEERING UNIVERSITY OF ARMENIA
Department of Computer Systems and Informatics

Artem T. HARUTYUNYAN

Development of Resource Sharing System Components for AliEn Grid Infrastructure

Ph.D. Thesis
CERN-THESIS-2010-084, 07/05/2010
Scientific adviser: Prof. Ara A. GRIGORYAN
Yerevan, 2010

CONTENTS

INTRODUCTION
ACKNOWLEDGEMENTS
CHAPTER 1. GRID COMPUTING AND CLOUD COMPUTING
  1.1 Grid computing concepts
  1.2 The architecture of the Grid
    1.2.1 End systems
    1.2.2 Clusters
    1.2.3 Intranets
    1.2.4 Internets
    1.2.5 Core Grid services
  1.3 Implementations of Grid infrastructures and projects
    1.3.1 Grid projects worldwide
    1.3.2 National Grid initiative in Armenia
  1.4 Cloud computing concepts
    1.4.1 Infrastructure as a Service (IaaS)
    1.4.2 Platform as a Service (PaaS)
    1.4.3 Software as a Service (SaaS)
  1.5 Summary of Chapter 1
CHAPTER 2. ALIEN - GRID INFRASTRUCTURE OF THE CERN ALICE EXPERIMENT
  2.1 The CERN ALICE experiment
  2.2 Distributed computing architecture of ALICE
  2.3 The architecture of AliEn
    2.3.1 AliEn file and metadata catalogue
    2.3.2 AliEn monitoring system
    2.3.3 AliEn Workload Management System (WMS)
  2.4 Problem definition
  2.5 Summary of Chapter 2
CHAPTER 3. DESIGN AND DEVELOPMENT OF A GRID BANKING SERVICE MODEL FOR JOB SCHEDULING IN ALIEN
  3.1 Development of the Grid Banking Service (GBS) model for the AliEn WMS
  3.2 Discrete-event system model for the simulation of the work of the AliEn WMS and GBS
  3.3 The simulation toolkit details
  3.4 Evaluation of the GBS model with the use of the simulation toolkit
  3.5 Integration of the banking service with the AliEn WMS
  3.6 Summary of Chapter 3
CHAPTER 4. DEVELOPMENT OF TWO MODELS OF INTEGRATION OF CLOUD COMPUTING RESOURCES WITH ALIEN
  4.1 CernVM - a virtual appliance for LHC applications
  4.2 Nimbus - a toolkit for building IaaS computing clouds
  4.3 Development of the 'Classic' model for integration of cloud computing resources with AliEn
  4.4 Development of the 'Co-Pilot' model for integration of cloud computing resources with AliEn
    4.4.1 Development of the Co-Pilot Agent - Co-Pilot Adapter communication protocol
  4.5 Comparison of the 'Classic' and 'Co-Pilot' models and measurement of their timing characteristics
  4.6 Summary of Chapter 4
CHAPTER 5. DEVELOPMENT OF A SASL-BASED SECURITY SYSTEM AND DEMONSTRATION OF THE PORTABILITY OF THE ALIEN CLIENT PART TO WINDOWS
  5.1 Development of a SASL-based authentication and authorization system in AliEn
  5.2 Demonstration of the portability of the client part of AliEn to Windows
  5.3 Summary of Chapter 5
BIBLIOGRAPHY
Appendix A. Glossary of acronyms
Appendix B. AliEn site description file for deploying dynamic virtual sites on the Nimbus IaaS cloud ('Classic' model)
Appendix C. Implementation certificate (YerPhI)
Appendix D. Implementation certificate (CERN)
Appendix E. Implementation certificate (University of Chicago and Argonne National Laboratory)

INTRODUCTION

The problem of resource provision, sharing, accounting, and use is a principal issue in contemporary scientific cyberinfrastructures. For example, collaborations in physics, astrophysics, Earth science, biology, and medicine need to store huge amounts of data (of the order of several petabytes, where 1 PB = 2^50 bytes) as well as to run highly intensive computations. The required computing and storage capacity cannot be provided by any single research center, however large.

The modern approach to this problem is to exploit the computational and data storage facilities of the centers participating in a collaboration. The most advanced implementation of this approach is based on Grid technologies, which enable the members of a collaboration to work together effectively regardless of their geographical location. Several tens of Grid infrastructures are currently deployed all over the world; those of the CERN Large Hadron Collider experiments (ALICE, ATLAS, CMS, and LHCb), exploited by specialists from five inhabited continents, are among the largest.

A decade of extensive exploitation of Grid resources by various scientific communities has revealed the following problems:

- the need for appropriate coordination of resource usage and for resource accounting;
- the need to increase the computing and storage capacity of the Grid through seamless integration of external resources;
- the need to minimize the work resource administrators spend maintaining and supporting the specific application software required by different scientific communities;
- the need for secure access to resources based on different authentication mechanisms.

This dissertation is devoted to the solution of these problems within the Grid infrastructure of the ALICE experiment at the CERN Large Hadron Collider (LHC), called AliEn (ALICE Environment on the Grid). AliEn is a set of Grid middleware, application tools, and services that the ALICE collaboration uses to store and analyze its experimental data and to perform Monte Carlo simulations. AliEn uses the computing and data storage facilities of member institutions in Europe, Asia, the Americas, and Africa, about 100 centers overall [1]. The Yerevan Physics Institute has participated in the ALICE collaboration since 1994.

The objectives of the work are:

- design and development of a model for the coordination and accounting of resource use in the AliEn Workload Management System;
- design and development of a model for the seamless integration of external resources provided through cloud computing technologies;
- design and development of an authentication and authorization framework for access to AliEn Grid resources;
- demonstration of the portability of the AliEn code to different operating systems.

The main results of the work are:

Development and implementation of a model of a Grid Banking Service for job scheduling in the AliEn Workload Management System. The service provides a flexible job scheduling scheme based on the collaborative use of resources, in which users 'pay' from their 'bank accounts' to the sites where their jobs were executed; a minimal sketch of this accounting idea is given after the footnote below.

[1] AliEn is also exploited by other physics collaborations: Panda and CBM at GSI (Gesellschaft für Schwerionenforschung).
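The thesis chapters define the actual GBS protocol; the toy model below only illustrates the 'pay per executed job' idea summarized above. Everything in it is an assumption made for this sketch: the class name GridBank, its methods, and the cost formula (CPU seconds multiplied by a per-site price) are invented for illustration and are not the real AliEn or GBS interfaces.

```python
"""Toy sketch of a Grid-Banking-Service-style accounting scheme.

Assumption: each user and each site holds a 'bank account' in abstract
credits; when a site finishes a user's job, the user pays the site an
amount proportional to the CPU time consumed. All names and the pricing
model are hypothetical, not the actual GBS API.
"""

from collections import defaultdict


class GridBank:
    def __init__(self) -> None:
        # Balances start at 0.0; users are seeded with an initial
        # allocation by their collaboration via deposit().
        self.balances: defaultdict[str, float] = defaultdict(float)

    def deposit(self, account: str, amount: float) -> None:
        self.balances[account] += amount

    def charge_job(self, user: str, site: str,
                   cpu_seconds: float, price_per_cpu_second: float) -> bool:
        """Transfer payment for one executed job from user to site.

        Returns False (charging nothing) if the user cannot cover the
        cost; a scheduler could use this signal to deprioritize or
        reject further submissions from that user.
        """
        cost = cpu_seconds * price_per_cpu_second  # assumed pricing model
        if self.balances[user] < cost:
            return False
        self.balances[user] -= cost
        self.balances[site] += cost
        return True


if __name__ == "__main__":
    bank = GridBank()
    bank.deposit("user:alice_member", 1000.0)  # initial allocation
    ok = bank.charge_job("user:alice_member", "site:CERN",
                         cpu_seconds=3600, price_per_cpu_second=0.05)
    print(ok, bank.balances["user:alice_member"], bank.balances["site:CERN"])
    # -> True 820.0 180.0
```

Running the example moves 180 of the user's 1000 starting credits to the site account; the point of such a scheme is that sites contributing more resources accumulate credit, which in turn lets the WMS schedule jobs in proportion to each party's contribution.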