Tesi Di Laurea La Solidarietà Digitale

Total Pages: 16

File Type: PDF, Size: 1020 KB

Tesi di Laurea: La Solidarietà Digitale (Degree Thesis: Digital Solidarity)
From SETI@home to BOINC: the hypothesis of a "computation society"
by Francesco Morello

CONTENTS
Introduction
Chapter I: Volunteer Computing
1.1 From mass media to distributed computing
1.2 Distributed computing
1.3 Volunteer computing
1.3.1 How does volunteer computing work?
1.3.2 Applications of volunteer computing
Chapter II: Analysis of BOINC
2.1 A platform for volunteer computing
2.2 The architecture of BOINC
2.2.1 The BOINC interface
2.2.2 Projects and volunteers
2.2.3 Checkpointing and work units
2.2.4 Credits and redundancy
2.2.5 The goals of BOINC
Chapter III: Technical aspects of distributed computing
3.1 Grid computing vs volunteer computing
3.2 Hardware and software for distributed computing
3.2.1 The PlayStation 3 as a route to the petaflop
Chapter IV: Social aspects of volunteer computing
4.1 Reconnecting with science
4.2 Volunteers beyond the CPU
4.2.1 Forums, blogs and websites
4.3 Surveys and statistics
4.4 Prospects for volunteer computing
Chapter V: Hypertext
Conclusions
Bibliography

Introduction

In recent decades even the most ordinary daily actions have been profoundly changed by technological progress. Every field has been touched by the digital wave: the conversion to bits has swept through photography as well as commerce, personal and mass communication, even terrorism and politics.

This study sets out to show how the digital "revolution and evolution" is changing and renewing the concept of solidarity, offering new global dimensions in which users can contribute in a spirit of solidarity, widening horizons and breaking down national barriers, toward the near-term hypothesis of a "computation society".

Retracing the evolution in computing that laid the foundations of digital solidarity, and analysing the best-known volunteer computing platform on the web, I will trace the contours of the online community that supports distributed computing: a communication tool whose purpose is to explain this universe in simple terms and to raise users' awareness of scientific and humanitarian themes, in the hope of further enlarging the already large community that helps science.

Chapter I: Volunteer Computing

1.1 From mass media to distributed computing

Countless changes have taken place in the global village of the mass-media era. What was once commonly called the audience, assumed to respond along the lines of hypodermic-needle theories, has slowly become a skilled creator of content.
From passive spectators of a media message, these skilled and patient citizens, thanks to the web, began to weave social relationships, and communication turned from vertical to horizontal. The social reality of the internet is by now beyond dispute: blogs, communities, forums and instant messaging are examples of the countless tools the communicating citizen of the global village has available around the clock.

Social networks such as MySpace, Blogger and Facebook have become the centre of attention both for users and for the internet's billion-dollar investors. This rapid evolution in the use of the internet can be summed up in one way: after various attempts to create a successful product from an old-fashioned perspective, in the end it was the users themselves who created the successful product, User Created Content (UCC). Striking examples of UCC are plain to see: Wikipedia and YouTube, to name two, are digital products continuously created by the users of an internet now nicknamed Web 2.0.

The role of the computer, thanks to growing processing resources and to the ever wider (almost universal) spread of the machine, has changed profoundly over the years; thanks to the web, the computer has become a node in a web that links every country in the world. All these useful new features, which have made using a computer far more pleasant than in the days when one communicated only through a prompt, could not hide the machine's first characteristic: computation. It is this variable that has grown out of all proportion, if one considers that a PC of thirteen years ago running Windows 95 required, as minimum specifications, a 133 MHz CPU and 16 MB of RAM. The clock frequency of modern processors is now measured in gigahertz, with CPU models that even reach 3 GHz.

The interconnection of machines this powerful underlies the idea of distributed computing, a new application in the digital megalopolis.

1.2 Distributed Computing

At the root of distributed computing lies the idea of joining many computers together to generate great processing power. The idea was certainly helped along by the spread of the internet during the 1990s, the first period in which internet-connected computers ran applications to send email or browse the web. Early researchers began to realise that this large number of machines connected to the Internet could be used to form a huge virtual supercomputer.

Distributed computing can generate more processing power than any supercomputer, and the gap between the two computing models will keep growing. In 1998 it was estimated that the SETI@home project had been run on roughly one million computers, which meant a computing power of about 60 teraFLOPS¹, while the IBM ASCI White supercomputer, the fastest of those years, delivered about 12 teraFLOPS.
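The comparison above is simple arithmetic, and it is worth making explicit. Below is a back-of-the-envelope check; all inputs are the estimates quoted in the text, and the per-host figure is merely what those estimates imply, not a measured benchmark.

```python
# Back-of-the-envelope check of the SETI@home vs. ASCI White comparison.
# All inputs are the estimates quoted in the thesis, not measured values.
hosts = 1_000_000          # computers estimated to have run SETI@home
aggregate_flops = 60e12    # ~60 teraFLOPS across all volunteer hosts
asci_white_flops = 12e12   # IBM ASCI White, ~12 teraFLOPS

per_host = aggregate_flops / hosts
print(f"implied average per host: {per_host / 1e6:.0f} MFLOPS")        # -> 60 MFLOPS
print(f"volunteer fleet vs. ASCI White: "
      f"{aggregate_flops / asci_white_flops:.0f}x")                    # -> 5x
```

The implied 60 MFLOPS per host is modest even for late-1990s PCs, which is what makes the aggregate figure plausible: the power comes from the number of nodes, not from any single machine.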
The million SETI@home users represent a tiny fraction of internet users, and the online population was estimated to reach one billion by 2015².

What sets the versatility and success of distributed computing apart from costly, bulky supercomputer systems is the very essence of distributed computing: the network of machines that join in the sharing of resources closely resembles the structure of the internet itself. The network does not depend on any one connected computer; each node is autonomous, and the absence of a node does not compromise the structure of the network.

Moreover, the implementation of distributed computing has one very evident advantage: it is cheap compared with supercomputer-based systems. This means far more democratic access to powerful computing resources, and therefore the potential success of many scientific projects, regardless of the funds allocated to research.

1.3 Volunteer Computing

"The world's computing power and disk space is no longer primarily concentrated in supercomputer centers and machine rooms. Instead it is distributed in hundreds of millions of personal computers and game consoles belonging to the general public."³

Volunteer computing is a very romantic aspect of the digital era: it is a kind of distributed computing in which users donate a share of their available computing resources, with no profit motive. From a technical standpoint, it can be defined as a form of distributed computing in which users voluntarily donate computing resources to projects, which in turn use those resources to analyse data more efficiently.

Any user who owns an internet-connected computer can contribute to these projects: by downloading a piece of software that simulates data models and registering with the project, the user becomes a volunteer in every respect, and every time the personal computer is switched on it contributes part of its processing power.

The projects are normally academic and carry out scientific tasks, but there are exceptions: GIMPS and distributed.net, two of the earliest and most successful projects, are not academic.

Volunteers are anonymous users. Although they are asked to register and supply a valid email address or other information, a project has no way of linking a volunteer to his or her real identity. Because of this anonymity, volunteers bear no liability toward the projects: if a volunteer misbehaves, for example by intentionally corrupting the results of a computation, the project cannot prosecute or judge that volunteer. Then again, the volunteers themselves

¹ Trillion floating-point operations per second.
² D.P. Anderson. Public Computing: Reconnecting People To Science, Nov. 2003. http://boinc.berkeley.edu/papers.php
³ D.P. Anderson. BOINC: A System for Public-Resource Computing and Storage.
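The download-compute-report cycle described in this section can be sketched as a minimal client loop. This is an illustrative sketch only: the project URL, the endpoint names and the payload fields are hypothetical, and a real client such as BOINC uses its own scheduler protocol plus the checkpointing, credit and redundancy machinery discussed in Chapter II.

```python
# Minimal sketch of a volunteer-computing client cycle: fetch a work unit,
# compute on it, report the result. All URLs, endpoints and JSON fields here
# are hypothetical placeholders, not the actual BOINC protocol.
import json
import time
import urllib.request

PROJECT_URL = "https://example-project.org/api"   # hypothetical project server

def fetch_work_unit():
    # Ask the project server for the next unit of work to process.
    with urllib.request.urlopen(f"{PROJECT_URL}/work") as resp:
        return json.load(resp)        # e.g. {"id": 42, "samples": [...]}

def compute(work_unit):
    # Stand-in for the real science kernel (signal search, folding, etc.).
    return sum(x * x for x in work_unit["samples"])

def report(work_unit, result):
    # Send the finished result back so the project can validate and credit it.
    body = json.dumps({"id": work_unit["id"], "result": result}).encode()
    req = urllib.request.Request(f"{PROJECT_URL}/result", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

while True:                           # donate cycles whenever the PC is on
    wu = fetch_work_unit()
    report(wu, compute(wu))
    time.sleep(1)                     # real clients throttle, checkpoint, retry
```

Because volunteers are anonymous and unaccountable, a real project cannot trust any single reply from a loop like this; that is why, as Chapter II describes, the same work unit is sent redundantly to several hosts and the results are cross-validated.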
Recommended publications
  • Packet Switched, General Purpose FPGA-Based Modules For
Peta-Flop Radio Astronomy Signal Processing and the CASPER Collaboration (and correlators too!). Dan Werthimer and 800 CASPER collaborators. http://casper.berkeley.edu

Two types of signal processing:
1. Embarrassingly parallel, low data rates (record the data and process it later; high computation per bit).
2. Real-time in-situ processing at petabits per second (the data cannot be recorded).

Type 1, embarrassingly parallel processing at low data rates, is handled by volunteer computing with BOINC, the Berkeley Open Infrastructure for Network Computing. [Slide diagram of the BOINC server pipeline: data from Arecibo flow through the work generator and download server to the volunteers, and back through the upload server, validator and assimilator, jokingly "to the Nobel Prize Committee"; the server side comprises a feeder, transitioner, shared memory, scheduler, file deleter and database purger around a MySQL database.]

Berkeley SETI Research Center, Berkeley Astronomy Department. Collaborators, Berkeley SETI and Volunteer Computing Group: David Anderson, Hong Chen, Jeff Cobb, Matt Dexter, Walt Fitelson, Eric Korpela, Matt Lebofsky, Geoff Marcy, David MacMahon, Eric Petigura, Andrew Siemion, Charlie Townes, Mark Wagner, Ed Wishnow, Dan Werthimer. Supported by NSF, NASA, individual donors, Agilent, Fujitsu, HP, Intel, Xilinx.

[Slide diagram of the data path: Arecibo Observatory, high-performance data storage silo, UC Berkeley Space Sciences Lab, public volunteers.]

SETI@home processing: polyphase channelization, coherent Doppler drift search, narrowband pulse search, Gaussian drift search, autocorrelation, <insert your algorithm here>.

SETI@home statistics: 8,464,550 participants in total (in 226 countries), growing at about 2,000 per day; 3 million years of CPU time contributed in total, at about 1,000 years per day.
  • Spin-Off Successes of SETI Research at Berkeley
Spin-Off Successes of SETI Research at Berkeley. K. A. Douglas, School of Physics, University of Exeter, Exeter, United Kingdom; D. P. Anderson, R. Bankay, H. Chen, J. Cobb, E. J. Korpela, M. Lebofsky, A. Parsons, J. Von Korff, D. Werthimer, Space Sciences Laboratory, University of California Berkeley, Berkeley CA, USA 94720. ASP Conference Series.

Abstract. Our group contributes to the Search for Extra-Terrestrial Intelligence (SETI) by developing and using world-class signal processing computers to analyze data collected on the Arecibo telescope. Although no patterned signal of extra-terrestrial origin has yet been detected, and the immediate prospects for making such a detection are highly uncertain, the SETI@home project has nonetheless proven the value of pursuing such research through its impact on the fields of distributed computing, real-time signal processing, and radio astronomy. The SETI@home project has spun off the Center for Astronomy Signal Processing and Electronics Research (CASPER) and the Berkeley Open Infrastructure for Networked Computing (BOINC), both of which are responsible for catalyzing a smorgasbord of new research in scientific disciplines in countries around the world. Furthermore, the data collected and archived for the SETI@home project is proving valuable in data-mining experiments for mapping neutral galactic hydrogen and for detecting black-hole evaporation.

1. The SETI@home Project at UC Berkeley. SETI@home is a distributed computing project harnessing the power from millions of volunteer computers around the world (Anderson 2002). Data collected at the Arecibo radio telescope via commensal observations are filtered and calibrated using real-time signal processing hardware, and selectable channels are recorded to disk.
  • New SETI Sky Surveys for Radio Pulses
New SETI Sky Surveys for Radio Pulses. Andrew Siemion (a,d), Joshua Von Korff (c), Peter McMahon (d,e), Eric Korpela (b), Dan Werthimer (b,d), David Anderson (b), Geoff Bower (a), Jeff Cobb (b), Griffin Foster (a), Matt Lebofsky (b), Joeri van Leeuwen (a), Mark Wagner (d). (a) University of California, Berkeley - Department of Astronomy, Berkeley, California, USA; (b) University of California, Berkeley - Space Sciences Laboratory, Berkeley, California, USA; (c) University of California, Berkeley - Department of Physics, Berkeley, California, USA; (d) University of California, Berkeley - Berkeley Wireless Research Center, Berkeley, California, USA; (e) Stanford University - Department of Computer Science, Stanford, California, USA.

Abstract. Berkeley conducts 7 SETI programs at IR, visible and radio wavelengths. Here we review two of the newest efforts, Astropulse and Fly's Eye. A variety of possible sources of microsecond to millisecond radio pulses have been suggested in the last several decades, among them such exotic events as evaporating primordial black holes, hyper-flares from neutron stars, emissions from cosmic strings or perhaps extraterrestrial civilizations, but to date few searches have been conducted capable of detecting them. The recent announcement by Lorimer et al. of the detection of a powerful (≈ 30 Jy) and highly dispersed (≈ 375 cm⁻³ pc) radio pulse in Parkes multi-beam survey data has fueled additional interest in such phenomena. We are carrying out two searches in hopes of finding and characterizing these µs to ms time scale dispersed radio pulses. These two observing programs are orthogonal in search space; the Allen Telescope Array's (ATA) "Fly's Eye" experiment observes a 100 square degree field by pointing each 6 m ATA antenna in a different direction; by contrast, the Astropulse sky survey at Arecibo is extremely sensitive but has 1/3,000 of the instantaneous sky coverage.
  • Installer et configurer BOINC (Installing and Configuring BOINC)
Installing and configuring BOINC. A tutorial created by [AF>2TF]Crashoveride, [AF>2TF]Jojo and [AF>2TF]Kyplinor, members of L'Alliance Francophone and president, secretary and treasurer of the 2TF Asso. Tutorial version 3.00. Check for updates at http://didacticiel-pas-a-pas.boinc-2tf.org, "didacticiel" section. You may distribute this document free of charge; any paid distribution, or any total or partial reuse without its authors' consent, may be subject to prosecution.

Tutorial contents:
I) Discovering BOINC. A) The platform software: a) A bit of history; b) Aims and objectives; c) How it works: 1) The credit system; 2) The environmental charter; 3) How does BOINC use your computer?; 4) A NON-PROFIT system and NON-PROFIT projects. B) The community built around BOINC.
II) How do you install BOINC?
III) How do you join a BOINC project?
IV) How do you configure a BOINC project account? A) Overview. B) Details: a) Main page; b) Secondary pages. C) Configuring the program itself.
V) List of current projects.
VI) BOINC glossary.
VII) Useful links.
VIII) Going further. A) Global account managers. B) Optimized clients. C) Other ways to help BOINC: a) Spreading the word; b) "Recruitment". D) Optimizing your computer for computation. E) Dealing with possible bugs.
IX) What is L'Alliance Francophone? A) The community. B) The 2TF mini team and the (French "loi 1901") association 2TF Asso: a) The mini-team principle; b) The 2TF Team, why not?; c) The 2TF Asso.

I) Discovering BOINC. BOINC stands for Berkeley Open Infrastructure for Network Computing (the University of Berkeley's open-source infrastructure for networked computing).
  • Proposta de Mecanismo de Checkpoint com Armazenamento de Contexto em Memória para Ambientes de Computação Voluntária (Proposal for a Checkpoint Mechanism with In-Memory Context Storage for Volunteer Computing Environments)
UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL, INSTITUTO DE INFORMÁTICA, PROGRAMA DE PÓS-GRADUAÇÃO EM COMPUTAÇÃO. Rafael Dal Zotto. Proposal for a Checkpoint Mechanism with In-Memory Context Storage for Volunteer Computing Environments. Dissertation presented in partial fulfilment of the requirements for the degree of Master in Computer Science. Advisor: Prof. Dr. Cláudio Fernando Resin Geyer. Porto Alegre, September 2010.

CIP cataloguing-in-publication record: Dal Zotto, Rafael. Proposta de Mecanismo de Checkpoint com Armazenamento de Contexto em Memória para Ambientes de Computação Voluntária / Rafael Dal Zotto. Porto Alegre: PPGC da UFRGS, 2010. 134 pages, illustrated. Dissertation (master's), Universidade Federal do Rio Grande do Sul, Programa de Pós-Graduação em Computação, Porto Alegre, BR-RS, 2010. Advisor: Cláudio Fernando Resin Geyer. Keywords: 1. Volunteer computing. 2. Checkpoint mechanisms. 3. High performance. 4. Object prevalence. I. Geyer, Cláudio Fernando Resin. II. Title.

UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL. Rector: Prof. Carlos Alexandre Netto. Vice-rector: Prof. Rui Vicente Oppermann. Pro-rector of graduate studies: Prof. Aldo Bolten Lucion. Director of the Instituto de Informática: Prof. Flávio Rech Wagner. PPGC coordinator: Prof. Álvaro Freitas Moreira. Head librarian of the Instituto de Informática: Beatriz Regina Bastos Haro.

"An education isn't how much you have committed to memory, or even how much you know. It's being able to differentiate between what you do know and what you don't." —ANATOLE FRANCE

Acknowledgements: at this final stage of my writing, I would like to record my gratitude to all those who took part, directly or indirectly, in this journey. Thank you very much for the conversations, support and encouragement offered along the way.
  • Annual Report 2013. Cover photo: the International LOFAR Telescope (ILT) & Big Data, Danielle Futselaar © ASTRON.
Annual report 2013. Photo on this page: prototype for the Apertif phased array feed. The Westerbork Synthesis Radio Telescope (WSRT) will be upgraded with Phased Array Feeds (PAFs), which will allow scientists to perform much faster observations with the telescope, with a wider field of view. More information is available on the ASTRON/JIVE daily image: http://www.astron.nl/dailyimage/main.php?date=20130624.

Facts and figures of 2013: 8 awards or grants; 163 employees; 162 refereed articles; funding € 17,420,955; expenditure € 17,091,022; balance € 329,933; 25 press releases.

Contents: Facts and figures 2013; Director's report; ASTRON Board and Management Team; ASTRON in brief; Contribution to top sectors; Performance indicators; Astronomy Group; Radio Observatory; R&D Laboratory; Connected legal entities; NOVA Optical/Infrared Instrumentation Group; Joint Institute for VLBI in Europe; Outreach and education; Appendices (1: financial summary; 2: personnel highlights; 3: WSRT & LOFAR proposals in 2013; 4: board, committees & staff in 2013; 5: publications).

Report: 2013 was a year in which earlier efforts began to bear fruit. In particular, the various hardware and firmware changes made to the LOFAR telescope system in 2012 resulted in science-quality data being delivered to the various Key Science Projects, and in particular the EoR (Epoch of Reionisation) Team.
  • Memo 134 Cloud Computing and the Square Kilometre Array
Memo 134: Cloud Computing and the Square Kilometre Array. R. Newman (Oxford University), J. Tseng (Oxford University). May 2011. www.skatelescope.org/pages/page_memos.htm

1 Executive Summary. The Square Kilometre Array (SKA) will be a next-generation radio telescope that has a discovery potential 10,000 times greater than anything available currently [24]. The SKA's scientific potential is due to its large combined antenna area and consequent ability to collect vast amounts of data, predicted to be many exabytes of data per year once fully operational [71, 46, 41, 44]. Processing this data to form standardised "data products" such as data cubes and images is a major challenge yet to be solved, and conservative estimates suggest an exaflop computer will be needed to process the daily data [46, 45, 44, 71]. Although full production may not be until after 2020, such a computer would still be the top supercomputer in the world (even assuming another decade of Moore's Law-type technology improvements), and the daily data captured would still be impractical to store permanently [46]. Such challenges warrant examining all possible sources of computing power to mitigate project risks and ensure the most effective use of project funds when construction begins. This report was commissioned by ICRAR, iVEC, and Oxford University to examine whether aspects of cloud computing could be used as part of an overall computing strategy for the SKA. The dual aims of this 3-month project are therefore: 1. to examine the computing requirements for the planned SKA data processing pipeline in the context of the growing popularity of cloud computing; and 2.
  • Deploying and Maintaining a Campus Grid at Clemson University (Dru Sepulveda, Clemson University, [email protected])
Clemson University TigerPrints, All Theses, 8-2009. Deploying and Maintaining a Campus Grid at Clemson University. Dru Sepulveda, Clemson University, [email protected]. Follow this and additional works at: https://tigerprints.clemson.edu/all_theses. Part of the Computer Sciences Commons. Recommended Citation: Sepulveda, Dru, "Deploying and Maintaining a Campus Grid at Clemson University" (2009). All Theses. 662. https://tigerprints.clemson.edu/all_theses/662. This Thesis is brought to you for free and open access by the Theses at TigerPrints. It has been accepted for inclusion in All Theses by an authorized administrator of TigerPrints. For more information, please contact [email protected].

Deploying and Maintaining a Campus Grid at Clemson University. A thesis presented to the Graduate School of Clemson University in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, by Dru Sepulveda, May 2009. Accepted by: Sebastien Goasguen, Committee Chair; Mark Smotherman; Steven Stevenson.

Abstract. Many institutions have all the tools needed to create a local grid that aggregates commodity compute resources into an accessible grid service, while simultaneously maintaining user satisfaction and system security. In this thesis, the author presents a three-tiered strategy used at Clemson University to deploy and maintain a grid infrastructure by making resources available to both local and federated remote users for scientific research. Using this approach virtually no compute cycles are wasted. Usage trends and power consumption statistics collected from the Clemson campus grid are used as a reference for best practices. The loosely-coupled components that comprise the campus grid work together to form a highly cohesive infrastructure that not only meets the computing needs of local users, but also helps to fill the needs of the scientific community at large.
  • Desktop Grids for eScience
Produced by the IDGF-SP project for the International Desktop Grid Federation. A Road Map: Desktop Grids for eScience. Technical part, December 2013. IDGF/IDGF-SP International Desktop Grid Federation, http://desktopgridfederation.org. Edited by Ad Emmen and Leslie Versweyveld. Contributions: Robert Lovas, Bernhard Schott, Erika Swiderski, Peter Hannape. Graphics are produced by the projects. Version 4.2, 2013-12-27. © 2013 IDGF-SP Consortium: http://idgf-sp.eu. IDGF-SP is supported by the FP7 Capacities Programme under contract nr RI-312297. Copyright (c) 2013. Members of IDGF-SP consortium, see http://degisco.eu/partners for details on the copyright holders. You are permitted to copy and distribute verbatim copies of this document containing this copyright notice, but modifying this document is not allowed. You are permitted to copy this document in whole or in part into other documents if you attach the following reference to the copied elements: "Copyright (c) 2013. Members of IDGF-SP consortium - http://idgf-sp.eu". The commercial use of any information contained in this document may require a license from the proprietor of that information. The IDGF-SP consortium members do not warrant that the information contained in the deliverable is capable of use, or that use of the information is free from risk, and accept no liability for loss or damage suffered by any person or organisation using this information.

Preface. This document is produced by the IDGF-SP project for the International Desktop Grid Federation. Please note that there are some links in this document pointing to the Desktop Grid Federation portal, which requires you to be signed in first.
  • Integrated Service and Desktop Grids for Scientific Computing
Integrated Service and Desktop Grids for Scientific Computing. Robert Lovas, Computer and Automation Research Institute, Hungarian Academy of Sciences, Budapest, Hungary, [email protected]; Ad Emmen, AlmereGrid, Almere, The Netherlands, [email protected]. RI-261561, GRID 2010, Dubna.

Why are Desktop Grids important? http://knowledgebase.e-irg.eu

Introduction (WP4; authors: Robert Lovas, Ad Emmen; version 1.0). Prelude: what do people at home and in SMEs think about grid computing? A survey by the EDGeS project sent questionnaires all across Europe to gauge the interest among individuals and SMEs in donating computing time for science to a Grid, and the interest in running a Grid inside an SME.

Survey conclusions: overall, there is interest in Desktop Grid computing in Europe. However, the fact that people are willing to change their current practice and say they want to participate in Grid efforts does not mean they will actually do so. There is a need to generate trust in the organisation that manages the Grid. People want to donate computing time to scientific applications, especially medical applications; they do not like donating computing time to commercial or defence applications. People want feedback on the application they are running. No clear technical barriers were perceived by the respondents, so this does not need much attention. Overall, the respondents were rather positive about donating computing time to a Grid, and about running applications on a Grid.
  • Installer et configurer BOINC (Installing and Configuring BOINC)
Installing and configuring BOINC. A tutorial created by Crashoveride, member of L'Alliance Francophone and leader of the 2TF mini team. Tutorial version 1.06. Check for updates at http://bibledumanga.site.voila.fr/DidactBoinc.html

Contents:
I) What is BOINC? A) The program: a) History; b) Purpose; c) How does it work? B) The community.
II) What is L'Alliance Francophone? A) The community. B) The 2TF mini team.
III) How do you install BOINC?
IV) How do you join a BOINC project?
V) How do you configure a BOINC project account? A) Overview. B) In detail: a) The main page; b) The secondary pages.
VI) What are the current projects?
VII) The program's windows in detail.
VIII) BOINC glossary.

I) What is BOINC? BOINC stands for Berkeley Open Infrastructure for Network Computing. A) The program: BOINC was created by academics at Berkeley (in the USA) to help the scientific community. a) History: BOINC was developed by the same team that created the SETI@Home program in 1992. Abandoned by the government, that program survived thanks to the revolutionary idea of shared computation (see I)A)c) How does it work?). After several years of good and loyal service, the programmers and leaders of the project, having acquired know-how unique in the world, decided to share it with other projects, so that this phenomenal computing power could benefit others.