THEME [INFRA-2012-3.3.] [Coordination Actions, Conferences and Studies Supporting Policy Development, Including International Cooperation, for E-Infrastructures]

Total pages: 16
File type: PDF, size: 1020 KB

Grant agreement for: Coordination and support action*
Annex I – "Description of Work"
Project acronym: IDGF-SP
Project full title: "International Desktop Grid Federation – Support Project"
Grant agreement no: 312297
Version date:

Table of Contents

Part A
  A.1 Project summary
  A.2 List of beneficiaries
  A.3 Overall budget breakdown for the project

Workplan Tables
  WT1 List of work packages
  WT2 List of deliverables
  WT3 Work package descriptions
    Work package 1
    Work package 2
    Work package 3
    Work package 4
    Work package 5
  WT4 List of milestones
  WT5 Tentative schedule of project reviews
  WT6 Project effort by beneficiaries and work package
  WT7 Project effort by activity type per beneficiary
  WT8 Project efforts and costs

A1: Project summary

Project Number: 312297
Project Acronym: IDGF-SP
(One form per project)

Project title: International Desktop Grid Federation – Support Project
Starting date: 01/11/2012
Duration in months: 24
Call (part) identifier: FP7-INFRASTRUCTURES-2012-1
Activity code(s) most relevant to your topic: INFRA-2012-3.3.: Coordination actions, conferences and studies supporting policy development, including international cooperation, for e-Infrastructures
Free keywords:

Abstract:
There are over a billion PCs in the world. Most of them can be found in citizens' homes and, to a lesser extent, in universities, and most remain idle most of the time. About one million are active in supporting science in a volunteer computing grid, using their idle time to run scientific applications. The potential growth of this computing capacity is enormous. Many Desktop Grids have therefore decided to found the International Desktop Grid Federation (IDGF) to help each other improve their e-Infrastructures. The IDGF-Support Project will give the IDGF a boost in two important areas. Firstly, it will help considerably with increasing the number of citizens who donate computing time to e-Science. It will do so through targeted communication activities and by setting up a network of "ambassadors". Secondly, it will help universities' e-Infrastructures to include otherwise idle PCs from their classrooms and offices. In addition, IDGF-SP will collect and analyse data that will help deploy idle PCs in an effective and energy-efficient way. It has been shown that Desktop Grids can contribute to Green IT if used in the correct way. IDGF-SP will collect data to underpin and advocate best practices.
As a result of IDGF-SP, the number of citizen volunteers donating computing time to e-Science will increase significantly. By employing unused PCs in private Desktop Grids, universities and other research organisations will save on the costs of providing computing capacity for their scientists. IDGF-SP will help strengthen co-operation amongst Desktop Grid e-Infrastructure operators, and will encourage and help IDGF Desktop Grid providers to integrate their infrastructures into the mainstream e-Science environment. The existence of a lively, active IDGF community assures the swift take-up of the IDGF-SP project results.

312297 IDGF-SP - Part A - Page 3 of 5

A2: List of Beneficiaries

Project Number: 312297  Project Acronym: IDGF-SP

No | Name | Short name | Country | Entry month | Exit month
1 | MAGYAR TUDOMANYOS AKADEMIA SZAMITASTECHNIKAI ES AUTOMATIZALASI KUTATO INTEZET | MTA SZTAKI | Hungary | 1 | 24
2 | STICHTING ALMEREGRID | ALMEREGRID | Netherlands | 1 | 24
3 | THE UNIVERSITY OF WESTMINSTER LBG | UoW | United Kingdom | 1 | 24
4 | UNIVERSITEIT LEIDEN | LU | Netherlands | 1 | 24
5 | SONY EUROPE LIMITED | SONY | United Kingdom | 1 | 24
6 | THE WORLDWIDE COMPUTER COMPANY LIMITED | CE | United Kingdom | 1 | 24
7 | STICHTING INTERNATIONAL DESKTOP GRID FEDERATION | IDGF | Netherlands | 1 | 24

A3: Budget Breakdown

Project Number: 312297  Project Acronym: IDGF-SP (one form per project)

Estimated eligible costs for the whole duration of the project (EUR):

No | Participant short name | Ind. cost model | Coordination/Support (A) | Management (B) | Other (C) | Total A+B+C | Requested EU contribution
1 | MTA SZTAKI | T | 170,745.00 | 67,732.00 | 0.00 | 238,477.00 | 212,643.00
2 | ALMEREGRID | T | 164,784.00 | 0.00 | 0.00 | 164,784.00 | 146,932.00
3 | UoW | T | 149,502.00 | 0.00 | 0.00 | 149,502.00 | 133,305.00
4 | LU | S | 95,040.00 | 0.00 | 0.00 | 95,040.00 | 76,175.00
5 | SONY | A | 106,787.00 | 0.00 | 0.00 | 106,787.00 | 95,218.00
6 | CE | T | 77,125.00 | 0.00 | 0.00 | 77,125.00 | 68,769.00
7 | IDGF | F | 142,383.00 | 0.00 | 0.00 | 142,383.00 | 126,958.00
Total | | | 906,366.00 | 67,732.00 | 0.00 | 974,098.00 | 860,000.00

Note that the budget mentioned in this table is the total budget requested by the Beneficiary and associated Third Parties.

* The following funding schemes are distinguished: Collaborative Project (if a distinction is made in the call, please state which type of Collaborative Project is referred to: (i) small or medium-scale focused research project, (ii) large-scale integrating project, (iii) project targeted to special groups such as SMEs and other smaller actors), Network of Excellence, Coordination Action, Support Action.

1. Project number: the project number has been assigned by the Commission as the unique identifier for your project, and it cannot be changed. The project number should appear on each page of the grant agreement preparation documents to prevent errors during their handling.
2. Project acronym: use the project acronym as indicated in the submitted proposal. It cannot be changed unless agreed during the negotiations. The same acronym should appear on each page of the grant agreement preparation documents to prevent errors during their handling.
3. Project title: use the title (preferably no longer than 200 characters) as indicated in the submitted proposal. Minor corrections are possible if agreed during the preparation of the grant agreement.
4. Starting date: unless a specific (fixed) starting date is duly justified and agreed upon during the preparation of the Grant Agreement, the project will start on the first day of the month following the entry into force of the Grant Agreement (NB: entry into force = signature by the Commission). Please note that if a fixed starting date is used, you will be required to provide a detailed justification in a separate note.
5. Duration: insert the duration of the project in full months.
6. Call (part) identifier: the Call (part) identifier is the reference number given in the call, or part of the call, you were addressing, as indicated in the publication of the call in the Official Journal of the European Union. You have to use the identifier given by the Commission in the letter inviting you to prepare the grant agreement.
7. Activity code: select the activity code from the drop-down menu.
8. Free keywords: use the free keywords from your original proposal; changes and additions are possible.
9. Abstract
10. The month in which the participant joined the consortium, month 1 marking the start date of the project, and all other start dates being relative to this start date.
11. The number allocated by the Consortium to the participant for this project.
12. Include the funding % for RTD/Innovation – either 50% or 75%.
13. Indirect cost model: A = Actual Costs; S = Actual Costs Simplified Method; T = Transitional Flat Rate; F = Flat Rate.

Workplan Tables

Project number: 312297
Project title: IDGF-SP – International Desktop Grid Federation – Support Project
Call (part) identifier: FP7-INFRASTRUCTURES-2012-1
Funding scheme: Coordination and support action

WT1 List of work
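The A3 budget table above is internally consistent, which can be cross-checked directly; a minimal sketch in Python, with the figures transcribed from the table (any transcription slip would trip the assertions):

```python
# Cross-check of the A3 budget table (figures in EUR, transcribed from above).
# Tuple columns: coordination/support (A), management (B), total A+B+C,
# requested EU contribution. The "Other" (C) column is zero throughout.
rows = {
    "MTA SZTAKI": (170_745.00, 67_732.00, 238_477.00, 212_643.00),
    "ALMEREGRID": (164_784.00, 0.00, 164_784.00, 146_932.00),
    "UoW":        (149_502.00, 0.00, 149_502.00, 133_305.00),
    "LU":         ( 95_040.00, 0.00,  95_040.00,  76_175.00),
    "SONY":       (106_787.00, 0.00, 106_787.00,  95_218.00),
    "CE":         ( 77_125.00, 0.00,  77_125.00,  68_769.00),
    "IDGF":       (142_383.00, 0.00, 142_383.00, 126_958.00),
}

# Each participant's total should equal A + B (C is zero for everyone).
for name, (a, b, total, eu) in rows.items():
    assert a + b == total, name

# Column sums should match the table's Total row.
print(sum(r[0] for r in rows.values()))  # 906366.0
print(sum(r[2] for r in rows.values()))  # 974098.0
print(sum(r[3] for r in rows.values()))  # 860000.0
```

The sums reproduce the Total row exactly: 906,366.00 in coordination/support costs, 974,098.00 overall, and the 860,000.00 requested EU contribution.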
Recommended publications
  • Tesi di Laurea: La Solidarietà Digitale (Master's thesis: Digital Solidarity)
    Master's thesis (in Italian): La Solidarietà Digitale – From Seti@home to Boinc. Hypothesis of a computation society. By Francesco Morello. Contents: Introduction. Chapter I, Volunteer computing: from mass media to distributed computing; distributed computing; volunteer computing (how does volunteer computing work?; applications of volunteer computing). Chapter II, Analysis of BOINC: a platform for volunteer computing; the architecture of BOINC (the BOINC interface; projects and volunteers; checkpointing and work units; credits and redundancy; the goals of BOINC). Chapter III, Technical aspects of distributed computing: grid computing vs volunteer computing; hardware and software for distributed computing (the PlayStation 3 to reach the petaflop). Chapter IV, Social aspects of volunteer computing: bringing us closer to science again; volunteers beyond the CPU (forums, blogs…)
  • The Social Cloud for Public Eresearch
    The Social Cloud for Public eResearch, by Koshy John. A thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Master of Engineering in Network Engineering. Victoria University of Wellington, 2012. Abstract: Scientific researchers faced with extremely large computations or the requirement of storing vast quantities of data have come to rely on distributed computational models like grid and cloud computing. However, distributed computation is typically complex and expensive. The Social Cloud for Public eResearch aims to provide researchers with a platform to exploit social networks to reach out to users who would otherwise be unlikely to donate computational time for scientific and other research-oriented projects. This thesis explores the motivations of users to contribute computational time and examines the various ways these motivations can be catered to through established social networks. We specifically look at integrating Facebook and BOINC, and discuss the architecture of the functional system and the novel social engineering algorithms that power it. Acknowledgments: I would first like to thank my parents, John Koshy and Susan John, for their unwavering love and support in all my endeavours. I would like to thank my supervisor, Kris Bubendorfer, for his valuable guidance and support throughout my thesis. Kyle Chard and Ben Palmer have my thanks for their contributions and feedback in the course of authoring the IEEE e-Science paper on the Social Cloud for Public eResearch. Many thanks are also due to Andy Linton for his help with managing the development and test server for the Social Cloud for Public eResearch.
  • Deploying and Maintaining a Campus Grid at Clemson University (Dru Sepulveda, Clemson University)
    Clemson University, TigerPrints, All Theses, 8-2009. Recommended citation: Sepulveda, Dru, "Deploying and Maintaining a Campus Grid at Clemson University" (2009). All Theses. 662. https://tigerprints.clemson.edu/all_theses/662. A thesis presented to the Graduate School of Clemson University in partial fulfillment of the requirements for the degree of Masters in Computer Science, by Dru Sepulveda, May 2009. Accepted by: Sebastien Goasguen (committee chair), Mark Smotherman, Steven Stevenson. Abstract: Many institutions have all the tools needed to create a local grid that aggregates commodity compute resources into an accessible grid service, while simultaneously maintaining user satisfaction and system security. In this thesis, the author presents a three-tiered strategy used at Clemson University to deploy and maintain a grid infrastructure by making resources available to both local and federated remote users for scientific research. Using this approach virtually no compute cycles are wasted. Usage trends and power consumption statistics collected from the Clemson campus grid are used as a reference for best practices. The loosely-coupled components that comprise the campus grid work together to form a highly cohesive infrastructure that not only meets the computing needs of local users, but also helps to fill the needs of the scientific community at large.
  • Desktop Grids for eScience
    Produced by the IDGF-SP project for the International Desktop Grid Federation. A Road Map: Desktop Grids for eScience, technical part, December 2013. IDGF/IDGF-SP International Desktop Grid Federation, http://desktopgridfederation.org. Edited by Ad Emmen and Leslie Versweyveld; contributions by Robert Lovas, Bernhard Schott, Erika Swiderski, Peter Hannape. Graphics are produced by the projects. Version 4.2, 2013-12-27. © 2013 IDGF-SP Consortium: http://idgf-sp.eu. IDGF-SP is supported by the FP7 Capacities Programme under contract nr RI-312297. Copyright (c) 2013, Members of the IDGF-SP consortium; see http://degisco.eu/partners for details on the copyright holders. You are permitted to copy and distribute verbatim copies of this document containing this copyright notice, but modifying this document is not allowed. You are permitted to copy this document in whole or in part into other documents if you attach the following reference to the copied elements: 'Copyright (c) 2013. Members of IDGF-SP consortium - http://idgf-sp.eu'. The commercial use of any information contained in this document may require a license from the proprietor of that information. The IDGF-SP consortium members do not warrant that the information contained in the deliverable is capable of use, or that use of the information is free from risk, and accept no liability for loss or damage suffered by any person or organisation using this information. Preface: this document is produced by the IDGF-SP project for the International Desktop Grid Federation. Please note that there are some links in this document pointing to the Desktop Grid Federation portal that require you to be signed in first.
  • Integrated Service and Desktop Grids for Scientific Computing
    Integrated Service and Desktop Grids for Scientific Computing. Robert Lovas, Computer and Automation Research Institute, Hungarian Academy of Sciences, Budapest, Hungary; Ad Emmen, AlmereGrid, Almere, The Netherlands. RI-261561, GRID 2010, Dubna. Why are Desktop Grids important? http://knowledgebase.e-irg.eu. Prelude – what do people at home and in SMEs think about grid computing: a survey by the EDGeS project, with questionnaires all across Europe, to gauge the interest of people and SMEs in donating computing time for science to a Grid, and in running a Grid inside an SME. Survey conclusions: overall, there is interest in Desktop Grid computing in Europe. However, that people are willing to change their current practice and say they want to participate in Grid efforts does not mean they will actually do so; there is a need to generate trust in the organisation that manages the Grid. People want to donate computing time for scientific applications, especially medical applications; they do not like donating computing time to commercial or defence applications. People want feedback on the application they are running. No clear technical barriers were perceived by the respondents, so this does not need much attention. Overall the respondents were rather positive about donating computing time for a Grid or about running applications on a Grid.
  • Volunteer Computing at CERN
    Volunteer Computing at CERN – a sustainability model proposal. Openlab student: Daniela Dorneanu. Supervisors: Ben Segal, Miika Tuisku, Francois Grey. Version 0.1, September 12, 2011. Contents: Executive summary; Introduction (What is Volunteer Computing?; What is a Volunteer Cloud?; Volunteer Cloud characteristics; Case study objectives); Analysis (Surveys and interviews for main stakeholders; Volunteer profiles; Brainstorming); Use cases (Volunteer Cloud for CERN; Volunteer Cloud for industry – the Openlab model; Nonprofit Volunteer Cloud; Commercial Volunteer Cloud); Conclusions and future steps; Acknowledgements; Annexes (A – CernVM and CoPilot; B – Detailed surveys; C – Detailed description of the Business Model Canvas); References. Executive summary: This report provides an analysis and evaluation of possible sustainability models (business models) for Volunteer Cloud Computing. Currently, on one side there are millions of volunteers willing to share their CPU power, and on the other side there are scientists who need this CPU power but don't really know how to obtain it from the volunteers. The main purpose of this report is therefore to suggest different institutional arrangements that would lower the entry barrier for scientists to Volunteer Cloud Computing, and in particular to present such resources to them as a service fully compatible with the way they use their current computing infrastructure.
  • BOINC: a Platform for Volunteer Computing
    BOINC: A Platform for Volunteer Computing. David P. Anderson, University of California, Berkeley, Space Sciences Laboratory, Berkeley, CA 94720. December 9, 2018. Abstract: "Volunteer computing" is the use of consumer digital devices for high-throughput scientific computing. It can provide large computing capacity at low cost, but presents challenges due to device heterogeneity, unreliability, and churn. BOINC, a widely-used open-source middleware system for volunteer computing, addresses these challenges. We describe its features, architecture, and implementation. Keywords: BOINC, volunteer computing, distributed computing, scientific computing, high-throughput computing. Introduction: Volunteer computing (VC) is the use of consumer digital devices, such as desktop and laptop computers, tablets, and smartphones, for high-throughput scientific computing. Device owners participate in VC by installing a program that downloads and executes jobs from servers operated by science projects. There are currently about 30 VC projects in many scientific areas and at many institutions. The research enabled by VC has resulted in numerous papers in Nature, Science, PNAS, Physical Review, Proteins, PloS Biology, Bioinformatics, J. of Mol. Biol., J. Chem. Phys, and other top journals [1]. About 700,000 devices are actively participating in VC projects. These devices have about 4 million CPU cores and 560,000 GPUs, and collectively provide an average throughput of 93 PetaFLOPS. The devices are primarily modern, high-end computers: they average 16.5 CPU GigaFLOPS and 11.4 GB of RAM, and most have a GPU capable of general-purpose computing using OpenCL or CUDA. The potential capacity of VC is much larger: there are more than 2 billion consumer desktop and laptop computers [2].
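The participation model this abstract describes — a client program that fetches jobs from a project server, executes them, and reports results — can be sketched in miniature. This is a toy illustration, not BOINC's actual RPC protocol: `Job`, `FakeProjectServer`, and the squaring "science app" are all invented stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    payload: int  # stand-in for the real input data files

class FakeProjectServer:
    """Invented stand-in for a project server; real BOINC clients talk to an
    XML-based scheduler over HTTP."""
    def __init__(self, payloads):
        self.queue = [Job(i, p) for i, p in enumerate(payloads)]
        self.results = {}

    def fetch_job(self):
        # Hand out the next queued job, or None when no work remains.
        return self.queue.pop(0) if self.queue else None

    def report(self, job_id, result):
        self.results[job_id] = result

def run_client(server):
    # Core volunteer-client loop: fetch, compute, report, repeat until idle.
    while (job := server.fetch_job()) is not None:
        result = job.payload ** 2          # stand-in for the science application
        server.report(job.job_id, result)

server = FakeProjectServer([3, 5, 7])
run_client(server)
print(server.results)  # {0: 9, 1: 25, 2: 49}
```

The real system adds everything the abstract alludes to — heterogeneity, unreliability, and churn are handled by job deadlines, replication, and validation on the server side — but the fetch/compute/report cycle above is the skeleton every volunteer device runs.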
  • Projects on BOINC?
    Presented by: Sajin George, Priyanka Angadi. Overview of the talk: grid and cluster computing and their differences; what BOINC is and how it works; key features of BOINC; power and energy consumption; projects currently using BOINC; the BOINC interface and client software; how to set up your own BOINC. Grid computing: a distributed computing environment that uses its own resources to handle computational tasks. Grid vs cluster: loosely coupled (decentralised) vs tightly coupled systems; diversity and dynamism vs a single system image; distributed vs centralised job management and scheduling. Volunteer computing is an arrangement in which people (volunteers) provide computing resources to projects, which use the resources to do distributed computing and/or storage. Volunteers are typically members of the general public who own Internet-connected PCs; organisations such as schools and businesses may also volunteer the use of their computers. Projects are typically academic (university-based) and do scientific research, but there are exceptions; for example, GIMPS and distributed.net (two major projects) are not academic. What is BOINC? A massive open-source grid computing tool from the University of California, Berkeley. Credit: the project's server keeps track of how much work your computer has done; this is called credit. To ensure that credit is granted fairly, most BOINC projects work as follows: each task may be sent to two computers; when a computer reports a result, it claims a certain amount of credit, based on how much CPU time was used.
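The credit scheme summarised above (send each task to two computers, let each claim credit based on CPU time) is usually completed by granting both machines the same, conservative amount once their results agree. A minimal sketch of one such policy — granting the minimum claim, so inflating one's own claim gains nothing; real BOINC projects can configure other granting rules:

```python
def grant_credit(claims):
    """Given the credit claimed by each replica of one task, decide the
    credit actually granted to every host.

    Conservative policy (an assumption, one of several BOINC has used):
    every host whose result validated is granted the minimum claim.
    """
    if len(claims) < 2:
        raise ValueError("need at least two replicas to cross-check claims")
    granted = min(claims.values())
    return {host: granted for host in claims}

# Host A's fast CPU claims 12.5 credits; host B's slower run claims 14.1.
# Both are granted 12.5, so neither hardware speed nor exaggerated claims
# inflate the payout.
print(grant_credit({"host_a": 12.5, "host_b": 14.1}))
# {'host_a': 12.5, 'host_b': 12.5}
```

The design point is incentive-compatibility: because the grant is a function of all replicas' claims rather than one's own, a cheating client cannot raise its credit by over-reporting CPU time.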
  • Matemáticas En Gidgrids De E-Cienc Ias
    (Translated from Spanish.) "Are Grids the new computing paradigm?" Felipe Rolando Menchaca García, CIC-IPN. Grids have reached such a level of maturity that they are becoming the computing infrastructure of choice for scientific research and for the development of high-level engineering solutions. Distributed computing has been designed to solve problems that exceed the capacity of any supercomputer, while keeping the flexibility to work on multiple smaller problems; in particular, cycle-scavenging computing exploits the unused capacity of machines to carry out very demanding processing. The constant increase in hardware capability and the falling prices of computer components have made clusters a very attractive alternative in the field of parallel and distributed computing. Nevertheless, the heavy demand for both computation and storage from applications that handle large amounts of data, and must do so efficiently, calls for new technologies such as Grid computing. Questions addressed: What is e-Science? Does e-Science require Grid computing? Which technologies do Grids build on? What is Grid middleware? What does the physical infrastructure of a Grid look like? Where are Grids heading? What are Grid services, and what applications run on them? The most powerful systems: Folding@home operates at 8.1 petaflops.
  • CMS (LHC@CERN) Computing
    A common connection: CMS (LHC@CERN) Computing. Kajari Mazumdar, Department of High Energy Physics, Tata Institute of Fundamental Research, Mumbai. "Accelerating International Collaboration and Science through Connective Computation", University of Chicago Centre, Delhi, March 10, 2015. Introduction: the raison d'être of the LHC was to discover, or rule out the existence of, the Higgs boson particle (did it ever exist in nature?). Yes, indeed: about 1 picosecond after the Big Bang. Today, 13.7 billion years later, the CERN LHC recreates the conditions of the very early universe. The Higgs boson was discovered within 3 years of the start of data taking at the LHC, in 2012, followed by a Nobel prize in 2013; physics exploitation of the CERN LHC is planned for 2009-2035. The experiments study the aftermath of violent collisions of protons and ions using very complicated, mammoth detectors with about 1 million electronic channels taking data every 25 nanoseconds; a digital summary of the information is recorded as a collision event. Each experiment produced about 300 publications with the data collected in Run 1 (~4 years). The LHC Computing Grid is the backbone of the success of the LHC project. What happens in an LHC experiment: only 1 in 10^13 events is selected, and we do not know in advance which one, so all must be checked; enormous computing resources are needed. In hard numbers: there are 2 big experiments, ATLAS and CMS; the LHC collides 6-8 hundred million protons per second for several years; the digital information for each collision is 1-10 MB; the information storage capacity is about 1,000 collisions per second; physicists must sift through about 20 petabytes (10^15 bytes) of data annually.
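The "hard numbers" in the CMS summary above hang together: 1,000 stored collisions per second at a few MB each really does reach tens of petabytes per year. A back-of-the-envelope check, where the 5 MB event size (mid-range of the quoted 1-10 MB) and the ~4 million seconds of effective data taking per year are assumptions chosen for illustration, not figures from the talk:

```python
# Back-of-the-envelope check of the annual LHC data volume quoted above.
stored_events_per_sec = 1_000    # quoted storage capacity: ~1000 collisions/sec
bytes_per_event = 5e6            # assumed: 5 MB, middle of the quoted 1-10 MB
live_seconds_per_year = 4e6      # assumed effective data-taking time per year

annual_bytes = stored_events_per_sec * bytes_per_event * live_seconds_per_year
print(annual_bytes / 1e15, "PB")  # 20.0 PB
```

Under these assumptions the estimate lands exactly on the ~20 PB/year figure quoted in the slides, which is why a single site cannot hold or process it and the LHC Computing Grid distributes the load.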
  • Connecting Citizens to Science
    Connecting Citizens to Science: A Learning E-volution. Miguel Angel Marquina, CERN, Switzerland. CERN, University of Geneva, UN Institute for Training and Research. "All for science, science for all." 45th EUCEN Conference, "Connecting Citizens to Science…", Charmey, May 31, 2013. Science and innovation – research vs HE structures: research environments are pure joy, but sometimes Ivory Towers; given the major societal investment in forming such an "elite", should there be a return to society, even if just a token, through education and learning opportunities? Global challenges: it is a tough time for researchers, competing for resources (funding, ICT); Big Data challenges; non- or semi-automatable analysis; impact on society, reaching the public. Citizen involvement: citizens have been passive watchers so far…
  • The 10th BOINC Workshop
    The 10th BOINC Workshop. David P. Anderson, Space Sciences Lab, University of California, Berkeley, 29 Sept. 2014. A timeline: 1985 – Wisconsin to UC Berkeley; the Internet as backplane. 1987 – Marionette. 1992 – industry. 1995 – David Gedye's SETI@home idea. 1998 – SETI@home development (Eric Korpela, Jeff Cobb, Matt Lebofsky). 1999 – SETI@home launch. 2000 – infrastructure issues; United Devices. 2001 – United Devices falling-out. 2002 – ClimatePrediction.net (Myles Allen); BOINC connecting scientists and volunteers (computing power one way, education/outreach the other); open-source software; credit; replication and validation; client job buffer; code signing; Hiram Clawson, Eric Heien; NSF proposal (Mari Maeda, Kevin Thompson); visit to Climateprediction (Carl Christensen, Tolu Aina); Derrick Kondo; Vijay Pande. 2003 – UD lawsuit; undergrads, PHP code; Karl Chen, "Mr. Python"; Oct: LIGO, Bruce Allen; Nov: CERN (Francois Grey, Ben Segal); Nov: WCG kicks tires; server architecture: job creation, MySQL, scheduler, validator, assimilator, transitioner. 2004 – Rom Walton; Charlie Fenton; anonymous platform; separate GUI; cross-project ID and credit; preemptive scheduling; sticky files; upload/download hierarchies; DB as buffer; Predictor@home, Michela Taufer (homogeneous redundancy); SETI@home: Eric Korpela; BURP: Janus Kristensen; ClimatePrediction.net launch; LHC@home launch; Supercomputer '04 talk; Matt Blumberg, account manager design. 2005 – Einstein@home (Reinhard Prix, Bernd Machenschalk, Oliver Bock); PrimeGrid (Rytis Slatkevičius); Rosetta@home; IBM World