Parallel Rendering Technologies for HPC Clusters


BY LI OU, PH.D.; YUNG-CHIN FANG; ONUR CELEBIOGLU; AND VICTOR MASHAYEKHI, PH.D.

Using parallel rendering technologies with clusters of high-performance computing (HPC) workstations configured with high-end graphics processors helps scale out graphics capabilities by exploiting and coordinating distributed computing resources. This article discusses parallel rendering architectures and highlights open source utilities that can help meet rendering requirements for large-scale data sets.

Supercomputers and high-performance computing (HPC) clusters enable demanding software—such as real-time simulation, animation, virtual reality, and scientific visualization applications—to generate high-resolution data sets at sizes that have not typically been feasible in the past. However, efficiently rendering these large, dynamic data sets, especially those with high-resolution display requirements, can be a significant challenge.

Rendering is the process of converting an abstract description of a scene (a data set) to an image. For complex data sets or high-resolution images, the rendering process can be highly compute intensive, and applications with requirements for rapid turnaround time and human perception place additional demands on processing power. State-of-the-art graphics hardware can significantly enhance rendering performance, but a single piece of hardware is often limited by processor performance and amount of memory. If very high resolution is required, the rendering task can simply be too large for one piece of hardware to handle.

Exploiting multiple processing units—a technique known as parallel rendering—can provide the necessary computational power to accelerate these rendering tasks. This article discusses the major architectures and methodologies for parallel rendering with HPC workstation clusters and describes how open source utilities such as Chromium and Distributed Multihead X (DMX) can help meet large-scale rendering requirements.

Understanding parallel rendering techniques

There are two different ways to build parallel architectures for high-performance rendering. The first method is to use a large symmetric multiprocessing computer with extremely high-end graphics capabilities. The downside of this approach is its cost—these systems can be prohibitively expensive.

The second method is to utilize the aggregate performance of commodity graphics accelerators in clusters of HPC workstations. The advantages of this architecture include the following:

• Cost-effectiveness: Commodity graphics hardware and workstations remain far less expensive than high-end parallel rendering computers, and some PC graphics accelerators can provide performance levels comparable to those of high-end graphics hardware.
• Scalability: As long as the network is not saturated, the aggregate hardware capacity of a visualization cluster grows linearly as the number of HPC workstations increases.
• Flexibility: The performance of commodity graphics hardware can increase rapidly, and its development cycles are typically much shorter than those of custom-designed, high-end parallel hardware. In addition, open interfaces for hardware, such as PCI Express (PCIe), and open interfaces for software, such as Open Graphics Library (OpenGL), allow organizations to easily take advantage of new hardware to help increase cluster performance.

Temporal and data parallelism

Two common approaches to parallel rendering are temporal parallelism and data parallelism. Temporal parallelism divides up work into single sequential frames that are assigned to systems and rendered in order; data parallelism divides a large data set into subsets of work that are rendered by multiple systems and then recombined.

In temporal parallelism, the basic unit of work is the rendering of a single complete image or frame, and each processor is assigned a number of frames to render in sequence (see Figure 1). Because this method is not geared toward rendering individual images but can increase performance when rendering an entire sequence, film industries often use it for animation applications and similar software, in which the time it takes to render individual frames may not be as important as the overall time required to render all frames.

Figure 1. Frames rendered across multiple systems using temporal parallelism

The basic concept of data parallelism, on the other hand, is to divide and conquer. Data parallelism decomposes a large data set into many small subsets, then uses multiple workstations to render these subsets simultaneously (see Figure 2). High-performance interconnects route the data subsets between the processing workstations, and one or more controlling units synchronize the distributed rendering tasks. When the rendering process completes, the final image can be compiled from the subsets on each workstation for display. Data parallelism is widely used by research industries and in software such as real-time simulation, virtual reality, virtual environment simulation, and scientific visualization applications.

Figure 2. Data subsets rendered across multiple systems using data parallelism
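The scheduling difference between the two approaches can be sketched in a few lines of Python. The following is a minimal illustration only, with hypothetical function names and a simple round-robin policy; it is not code from the article, and a real system would replace the returned schedules with calls into an actual renderer:

```python
# Minimal sketch contrasting temporal and data parallelism.
# All names are hypothetical stand-ins; a real cluster would hand
# each schedule entry to a rendering workstation over the network.

def assign_frames_temporal(num_frames, num_nodes):
    """Temporal parallelism: each node renders whole frames in sequence."""
    schedule = {node: [] for node in range(num_nodes)}
    for frame in range(num_frames):
        schedule[frame % num_nodes].append(frame)  # round-robin by frame
    return schedule

def assign_subsets_data_parallel(data_set, num_nodes):
    """Data parallelism: every node renders one subset of a single frame."""
    return {node: data_set[node::num_nodes] for node in range(num_nodes)}

if __name__ == "__main__":
    # Four nodes render a 12-frame animation sequence...
    print(assign_frames_temporal(12, 4))
    # ...or each node renders a quarter of one frame's objects.
    print(assign_subsets_data_parallel(["rect", "circle", "mesh", "volume"], 4))
```

Note how temporal parallelism never speeds up a single frame; it only raises throughput across the sequence, which is why it suits offline animation rather than interactive visualization.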
Object decomposition and image decomposition

A key step in data parallelism is decomposing large data sets, a step that can utilize one of two major approaches: object decomposition and image decomposition. In object decomposition, tasks are formed by partitioning the geometric description of the scene. Individual workstations partition and render subsets of the geometric data in parallel, producing pixels that must be integrated later into a final image. Image decomposition, in contrast, forms tasks by partitioning the image space: each task renders only the geometric objects that contribute to the pixels that physically belong to the space assigned to the task.

Figure 3 illustrates how a data set could be partitioned by these two approaches. In object decomposition, each workstation renders a single object in the data set: one renders the rectangle, and the other renders the circle. In image decomposition, each workstation renders half of the final image: one renders the left side, and the other renders the right side.

Figure 3. Parallel rendering processes using object decomposition and image decomposition

There are no absolute guidelines for choosing between object decomposition and image decomposition. Generally, object decomposition is suitable for applications with large-scale data sets, while image decomposition is suitable for applications requiring high resolution and a large image, such as a tiled display integrating multiple screens into a single display device.

Object decomposition can enhance load balancing and scalability compared with image decomposition, because it is relatively easy during preprocessing to distribute objects evenly among processors. However, it does require a post-composition process to integrate the image subsets, because objects assigned to different processors may map to the same screen space. For example, in Figure 3, after rendering the circle and the rectangle individually, this post-composition step determines how they overlap to form the final image. With large numbers of partitions, this step can place heavy demands on communication networks and require a huge amount of computation power for the composition units. A concrete sketch of such a composition step appears below.
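The standard way to resolve overlapping pixels in this post-composition step is a depth (z-buffer) comparison: for every pixel, keep the color from whichever node rendered the nearer surface. The sketch below is a toy illustration of that idea with hypothetical names and random stand-in buffers; it is not code from Chromium or from the article:

```python
import numpy as np

# Toy post-composition for object decomposition: each node returns a
# full-screen color buffer and depth buffer; overlapping pixels are
# resolved by keeping the sample that is nearer to the viewer.
# All names and buffer formats are illustrative assumptions.

def depth_composite(color_a, depth_a, color_b, depth_b):
    nearer = depth_a <= depth_b                        # True where A wins
    color = np.where(nearer[..., None], color_a, color_b)
    depth = np.where(nearer, depth_a, depth_b)
    return color, depth

if __name__ == "__main__":
    h, w = 4, 4
    rng = np.random.default_rng(0)
    # Stand-ins: node 1 rendered the rectangle, node 2 the circle.
    color1, depth1 = rng.random((h, w, 3)), rng.random((h, w))
    color2, depth2 = rng.random((h, w, 3)), rng.random((h, w))
    final_color, _ = depth_composite(color1, depth1, color2, depth2)
    print(final_color.shape)  # (4, 4, 3): the integrated final image
```

With many partitions, these pairwise merges are chained or arranged in a tree across the network, which is exactly where the communication and computation demands described above arise.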
Image decomposition helps eliminate the complexity of image integration by only requiring a final composition step to physically map the …

Sort-first algorithms perform the space mapping early in the rendering process, and the sort operation happens before the data set is partitioned and distributed. Sort-last algorithms are less sensitive to the distribution of objects within the image than sort-first algorithms because the payload distribution is based on object … Sort-first algorithms are typically combined with image decomposition, and sort-last algorithms are typically combined with object decomposition (see Figures 4 and 5).

Figure 4. Parallel rendering process for display on tiled screens using sort-first algorithms with image decomposition

Enhancing HPC cluster graphics with open source utilities

Parallel rendering typically requires a special software layer to exploit and coordinate distributed computational resources. Chromium and DMX, two popular open source utilities, can provide these capabilities.1

Chromium, developed by Lawrence Livermore National Laboratory, Stanford University, the University of Virginia, and Tungsten Graphics, is an open source software stack for parallel rendering on clusters of workstations. It runs on the Microsoft® Windows®, Linux®, IBM® AIX, Solaris, and IRIX operating systems, and is designed to increase three aspects of graphics scalability:

• Data scalability: Chromium can process …
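To make the sorting stage of Figure 4 concrete, the sketch below assigns objects to the tiles of a tiled display from their screen-space bounding boxes before any rendering happens, so each rendering server receives only the geometry that can touch its tile. The tile layout, object format, and function names are hypothetical illustrations under assumed conventions, not interfaces from Chromium or the article:

```python
# Sketch of a sort-first sorting stage: objects are bucketed by the
# display tiles their screen-space bounding boxes overlap, *before*
# rendering. Names and data formats are hypothetical.

def tiles_for_bbox(bbox, tile_w, tile_h, cols, rows):
    """Return the (col, row) tiles overlapped by a screen-space bbox."""
    x0, y0, x1, y1 = bbox
    c0, c1 = max(0, int(x0 // tile_w)), min(cols - 1, int(x1 // tile_w))
    r0, r1 = max(0, int(y0 // tile_h)), min(rows - 1, int(y1 // tile_h))
    return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

def sort_first(objects, tile_w, tile_h, cols, rows):
    """Map every object to the rendering servers (tiles) it overlaps."""
    buckets = {(c, r): [] for c in range(cols) for r in range(rows)}
    for name, bbox in objects:
        for tile in tiles_for_bbox(bbox, tile_w, tile_h, cols, rows):
            buckets[tile].append(name)
    return buckets

if __name__ == "__main__":
    # A 2x1 tiled display of 1024x768 panels, echoing Figure 3's scene:
    objs = [("rectangle", (100, 100, 900, 500)),   # left tile only
            ("circle", (900, 200, 1400, 700))]     # straddles both tiles
    print(sort_first(objs, 1024, 768, cols=2, rows=1))
```

An object that straddles a tile boundary, like the circle here, is sent to every server whose tile it overlaps, which is why sort-first performance depends on how objects are distributed across the screen.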