Data Streams Permeate Almost Every Aspect of Computing and Data Communication

Data streams permeate almost every aspect of computing and data communication. From medical imaging to YouTube videos, from stock market data to sports results, and from seismic sensor data to baby monitors, data streams play an increasing role in almost every area of our lives. They can be found in every application area imaginable, including medicine, finance, education, engineering, physical sciences, media, entertainment, and the arts. Although increasing processing capacity has made data stream processing more accessible, increases in storage capacities and network bandwidths are making it possible to store and access ever-increasing volumes of data. This in turn leads to an increase in demand for processing the available data streams; in the case of multimedia archives, for example, there is an acute need to index, search and summarise the data if it is to remain useful. Furthermore, new applications with increasing resource requirements will continue to emerge, and many of these applications will require processing resources beyond the capabilities of much of today's existing infrastructure.

Large organisations, such as media corporations and telecommunications providers, are building significant standing infrastructures for specialised data stream processing, but these are not suited to meeting the diverse and transient requirements of a broad range of users, from home musicians to scientific collaborators. Requirements of economy, flexibility, scalability, mobility and location militate against this approach. These users need techniques that allow them to dynamically build and control transient infrastructures for processing their data streams on demand. Questions arising from our previous work are:

• Where can the necessary compute power be found to meet these needs?
• Can cycle-harvesting (opportunistic) services handle these streams?
• Can the use of on-demand virtual machines assist?
• What are the real issues in stream frequency and volume?
• What kinds of stream datatypes yield the most benefit?
• What kinds of opportunistic resources are best?
• What categories of stream execution are best handled this way?
• Can idle cores and GPUs be effectively exploited?

We will explore architectures that would allow elements from existing infrastructures to be accessed and composed on demand, enabling new possibilities for communities for whom it is infeasible, uneconomical or unnecessary to build a dedicated, standing infrastructure. For example, a school might use on-demand infrastructure for indexing, summarising and searching an archive of audio and video streams from practical science experiments. Scientists might construct an on-demand infrastructure to process the output data stream from a radio telescope or particle accelerator, with the possibility of performing further stream processing to visualise the output. Alternatively, an on-demand infrastructure might be created to transform an archived video stream for playback on a low-power mobile device.

1 Image of An Bradán Feasa courtesy of Oisín Mac Suibhne and Alanna Avant, http://www.suibhne.com/suibhne.html

Figure 1: On-demand stream processing stack

We propose a comprehensive programme of research encompassing the end-to-end challenges presented by stream processing using infrastructure on demand.

Figure 1 illustrates the relationship between three work packages:

WP1 – Applications
We will examine the requirements of a diverse range of applications involving the acquisition, synthesis and processing of data streams. A number of specific applications will be investigated as a means of identifying challenges and evaluating solutions.

WP2 – Infrastructure on demand
Supporting the requirements of a diverse range of applications is a challenging problem, especially as they may be executed on computing resources distributed both geographically and across administrative boundaries. We propose to address this challenge by investigating techniques for dynamically building infrastructure on demand. We will incorporate the use of existing infrastructure resources, but will also investigate the addition of hooks to existing infrastructures that enable the dynamic creation of virtual machines (VMs). Such VMs could be configured to meet the requirements of specific stream processing applications. Although such facilities would be useful across a range of processing models, special consideration will be given to the needs of data stream processing applications.

WP3 – Stream processing extensions
An infrastructure-on-demand facility, by itself, will not provide for the execution of complex stream processing applications. We will develop stream processing extensions that will provide (i) a framework within which applications can describe infrastructural, quality-of-service, fault-tolerance and other requirements; (ii) the ability to execute complex stream processing workflows using infrastructure on demand; and (iii) interfaces that will give both existing and new applications access to the stream processing facility.

1 WP1: Applications

This work package will investigate stream processing applications of various types, and will be led by Dr. Jonathan Dukes. Stream processing applications consist of stream acquisition, stream synthesis and stream processing. Figure 2 illustrates these stream flows.

Figure 2: Stream flows

Stream Acquisition

Streams of data come into existence either by the acquisition of external information or by synthesis. External information comes either from interfaces to the natural world or from other computing systems. Such information may already be in the form of a time series of discrete values, in which case it can be used directly as a data stream, or it may be a continuous variable that needs to be sampled to create a data stream suitable for further processing. Many scientific grand challenges produce extremely large data streams, of the order of petabytes per year. The prime global example is the CERN LHC [1]. Examples of interest to Irish scientists (from the recently submitted HEA PRTLI4 e-PSI proposal) are the LHCb experiment [2], HESS [3], CTA [4] and the Solar Dynamics Observatory (SDO) [5]. While these are on a grand scale, smaller-scale examples such as streaming data from weather stations or from seismic sensors are also of interest. There are already several information and monitoring systems that can be used to acquire data streams, including the very widely deployed gridICE [6] and R-GMA [7, 8, 9] (the latter is specifically targeted at streams). For the purposes of this research we propose to acquire H.323 [10] and AccessGrid [11] audio and video streams. For non-audio/video data streams we intend to acquire streams using R-GMA, and particularly to avail of the HESS, CTA and SDO sources if possible.

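To make the sampling step described above concrete, the following minimal Python sketch turns a continuous variable into a discrete, timestamped data stream. It is illustrative only and not part of any system named above; the sensor function, sampling rate and simulated weather-station source are hypothetical placeholders.

    import random
    import time
    from typing import Callable, Iterator, Tuple

    def acquire_stream(read_sensor: Callable[[], float],
                       sample_rate_hz: float) -> Iterator[Tuple[float, float]]:
        # Sample a continuous variable at a fixed rate, yielding a discrete
        # time series of (timestamp, value) pairs for further processing.
        period = 1.0 / sample_rate_hz
        while True:
            yield (time.time(), read_sensor())
            time.sleep(period)

    # Hypothetical source: a simulated weather-station temperature reading.
    def simulated_temperature() -> float:
        return 15.0 + random.uniform(-0.5, 0.5)

    if __name__ == "__main__":
        stream = acquire_stream(simulated_temperature, sample_rate_hz=2.0)
        for _ in range(5):
            timestamp, value = next(stream)
            print(f"{timestamp:.2f}s  {value:.2f} degrees C")

Acquisition from real sources such as weather stations or seismic sensors would replace the simulated reading with an interface to the instrument or to a monitoring system such as those mentioned above.
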
Stream Synthesis

The great majority of synthesised streams are audio or video streams. Some common applications are audio synthesis (voice and music), avatar video synthesis, navigable world synthesis, and scientific visualisation. Audio is generally synthesised directly and fed to the sound subsystems of desktop PCs. Video stream synthesis is more demanding, and raises interesting questions. Quite complex video streams may be synthesised directly, e.g. using DirectX [12] or OpenGL [13], and displayed using PCs' graphics accelerators. Rendering engines can be used for more complex streams: for example, OpenGL can be rendered on clusters running Chromium [14]. For immersive visual environments a “cave” can be used. The various scales on which rendering can be performed are illustrated in Figure 3. “Caves” and desktop PCs serve only one user at a time per system, whereas rendering engines can be constructed to serve multiple users while scaling performance per user.

Figure 3: Rendering engine performance scale spectrum

Stream Processing

Data streams, whether acquired or synthesised, may be processed to produce one or more new or transformed streams. Processing may be performed on general-purpose processing architectures or using specialised stream processing hardware (e.g. GPUs or CPUs with specialised stream processing capabilities). Processed streams may be processed further, rendered (e.g. in the case of video streams) or stored for future use.

1.1 Motivating Application Areas

A number of specific application areas will provide focal points for our investigation of on-demand infrastructure for stream processing. In each application area, we will identify specific requirements and challenges and develop proof-of-concept applications. Work in each of the identified application areas will build on past and ongoing research and expertise in the Department of Computer Science at Trinity College Dublin.

1.1.1 Scientific Visualisation

Early work by the proposers has concentrated on dedicated multi-user rendering engines; for example, a multi-user, multi-modal, multi-scale, grid-enabled visualisation engine [15] has recently been constructed, including a 9-node 2-D SCI torus running Chromium for rendering [16]. We will explore the possibility of integrating on-demand resources into rendering engines, which raises interesting questions:

• What are the limitations in creating on-demand engines to user-specified scales?
• Is it feasible to reliably run a complex engine such as Chromium on transient resources?
• Does this model permit the steering of visualisations?
• How do such engines scale in performance relative to the number of nodes?
• What are the key contributions to time-variance in setup, execution and teardown?

1.1.2 Multimedia

Use of multimedia