Large Scale Visualization on the Cray XT3 Using ParaView

Kenneth Moreland, Sandia National Laboratories
David Rogers, Sandia National Laboratories
John Greenfield, Sandia National Laboratories
Berk Geveci, Kitware, Inc.
Patrick Marion, Kitware, Inc.
Alexander Neundorf, Technical University of Kaiserslautern
Kent Eschenberg, Pittsburgh Supercomputing Center

May 8, 2008

ABSTRACT: Post-processing and visualization are key components to understanding any simulation. Porting ParaView, a scalable visualization tool, to the Cray XT3 allows our analysts to leverage the same supercomputer they use for simulation to perform post-processing. Visualization tools traditionally rely on a variety of rendering, scripting, and networking resources; the challenge of running ParaView on the Lightweight Kernel is to provide and use the visualization and post-processing features in the absence of many OS resources. We have successfully accomplished this at Sandia National Laboratories and the Pittsburgh Supercomputing Center.

KEYWORDS: Cray XT3, parallel visualization, ParaView

1 Introduction

The past decade has seen much research on parallel and large-scale visualization systems. This work has yielded efficient visualization and rendering algorithms that work well in distributed-memory parallel environments [2, 9].

As visualization and simulation platforms continue to grow and change, so too do the challenges and bottlenecks. Although image rendering was once the limiting factor in visualization, rendering speed is now often a secondary concern. In fact, it is now common to perform interactive visualization on clusters without specialized rendering hardware. The new focus in visualization is on loading, managing, and processing data. One consequence of this is the practice of co-locating a visualization resource with the supercomputer for faster access to the simulation output [1, 3].

This approach of a co-located visualization cluster works well and is used frequently today. However, the visualization cluster is generally built to a much smaller scale than the supercomputer. For the majority of simulations, this dichotomy of computer sizes is usually acceptable, as visualization typically requires far less computation than the simulation, but "hero" sized simulations that push the supercomputer to its computational limits may produce output that exceeds the reasonable capabilities of a visualization cluster. Specialized visualization clusters may be unavailable for other reasons as well; budget constraints may not allow for a visualization cluster. Also, building these visualization clusters from commodity hardware and software may become problematic as they get ever larger.

For these reasons, it is sometimes useful to utilize the supercomputer itself for visualization and post-processing. It thus becomes necessary to run visualization software such as ParaView [7] on supercomputers such as the Cray XT3. Doing so introduces many challenges with cross compilation and with removing dependencies on features stripped from the lightweight kernel. This paper describes the approach and implementation of porting and running ParaView on the Cray XT3 supercomputer.
2 Approach

ParaView is designed to allow a user to interactively control a remote visualization server [3]. This is achieved by running a serial client application on the user's desktop and a parallel MPI server on a remote visualization cluster. The client connects to the root process of the server as shown in Figure 1. The user manipulates the visualization via the GUI or through scripting on the client. The client transparently sends commands to the server and retrieves results from the server via the socket.

Figure 1: Typical remote parallel interaction within ParaView. The client controls the server by making a socket connection to process 0 of the server.

Implementing this approach directly on a Cray XT3 is problematic because the Catamount operating system does not support sockets and has no socket library. The port of the VisIt application, which has a similar architecture to ParaView, gets around this problem by implementing a subset of the socket library through Portals, a low-level communication layer on the Cray XT3 [8]. We chose not to implement this approach because it is not feasible for Red Storm, Sandia National Laboratories' Cray XT3. The Red Storm maintainers are loath to allow an application to perform socket-like communication for fear of the potential network traffic overhead involved. Furthermore, for security reasons we prefer to limit the communications going into or coming out of the compute nodes on Red Storm. Ultimately, the ability to interactively control a visualization server on Red Storm, or most any other large Cray XT3, is of dubious value anyway, as scheduling queues would require users to wait hours or days for the server job to start.

Rather than force sockets on the Catamount operating system, we instead removed the necessity of sockets from ParaView. When running on the Cray XT3, ParaView uses a special parallel batch mode. This mode is roughly equivalent to the parallel server with the exception of making a socket connection to a client. Instead, the scripting capability of the client is brought over to the parallel server as shown in Figure 2. Process 0 runs the script interpreter, and a script issues commands to the parallel job in the same way that a client would.

Figure 2: Parallel visualization scripting within ParaView. Process 0 runs the script interpreter in lieu of getting commands from a client.

Scripts, by their nature, are typically more difficult to use than a GUI. State files can be used to make script building easier. A user simply needs to load a representative data set into an interactive ParaView visualization off of the supercomputer, establish a visualization, and save a state file. The same state file is easily loaded by a ParaView script running in parallel batch mode.
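As a concrete illustration of this workflow, the following sketch shows what such a batch script might look like. It uses the modern paraview.simple Python interface, which postdates the work described here, and the state file name, image name, and process count are hypothetical; it is intended only to show the pattern of loading a saved state and rendering in parallel batch mode.

    # batch_render.py -- hypothetical ParaView batch script.  Run under the
    # parallel batch interpreter, e.g.:  aprun -n 64 pvbatch batch_render.py
    from paraview.simple import LoadState, GetActiveView, Render, SaveScreenshot

    # Load a state file saved from an interactive ParaView session that was
    # run off of the supercomputer with a representative data set.
    LoadState("visualization.pvsm")

    # Render in parallel; process 0 composites the final image and writes it.
    view = GetActiveView()
    view.ViewSize = [1920, 1080]
    Render(view)
    SaveScreenshot("output.png", view)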
3 Implementation

Even after removing ParaView's dependence on sockets and several other client-specific features, such as GUI controls, there are still several pragmatic issues when compiling for the Cray XT3. This section addresses these issues.

3.1 Python

ParaView uses Python for its scripting needs. Python was chosen because of its popularity amongst many of the analysts running simulations. Fortunately, Python had already been ported to the Cray XT3 in an earlier project [4]. The porting of Python to the Cray XT3 lightweight kernel involved overcoming three difficulties: the lack of dynamic library support, issues with cross compiling, and problems with system environment variable settings.

Python is designed to load libraries and modules dynamically, but the lightweight kernel does not allow this. This problem was solved by compiling the libraries and modules statically and replacing Python's dynamic loader with a simple static loader. Cross compiling was enabled by providing wrapper functions to the build system that return information for the destination machine rather than the build machine. Finally, the system environment variables that are not set correctly on the Cray XT3 were undefined in the Python configuration files so that the correct Python defaults would be used instead.

Since this initial work, we have simplified the process of compiling Python by using the CMake configuration tool [6]. We started by converting the Python build system to CMake, including all configure checks. Once the build was converted to CMake, providing the mechanism for static builds, selecting the desired Python libraries to include in the link, and cross compiling were greatly simplified. More details on the cross-compiling capabilities of CMake are given in Section 3.3.

The process of compiling the Python interpreter for use in ParaView on the Cray XT3 is as follows. Rather than build a Python executable, only a static Python library is built for ParaView to link against.

3.2 Mesa 3D

Because the XT3 compute nodes have no graphics hardware, rendering is performed in software using Mesa 3D, an implementation of OpenGL for which no rendering hardware or driver is needed. Mesa 3D has the added ability to render images without an X Window System, which Catamount clearly does not have.

The Mesa 3D build system comes with its own cross-compilation support. Our port to Catamount involved simply providing correct compilers and compile flags. We have contributed back the configuration for Catamount builds to the Mesa 3D project, and the configuration is now part of the distribution since the 7.0.2 release.

3.3 Cross Compiling ParaView

ParaView uses the CMake configuration tool [6] to simplify porting ParaView across multiple platforms. CMake simplifies many of the configuration tasks of the ParaView build and automates the build process, which includes code and document generation in addition to compiling source code. Most of the time, CMake's system introspection requires very little effort on the user's part. However, when performing cross-compilation, which is necessary for compiling for the Catamount compute nodes, the user must help point CMake to the correct compilers, libraries, and configurations.

CMake simplifies cross compilation with "toolchain files." A toolchain file is a simple CMake script file that provides information about the target system that cannot be determined automatically: the executable/library type, the location of the appropriate compilers, the location of libraries that are appropriate for the target system, and how CMake should behave when performing system inspection. Once the toolchain file establishes the target environment, the CMake configuration files for ParaView will provide information …
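To make the toolchain-file mechanism concrete, the sketch below shows a minimal toolchain file of the kind described above. The compiler names and search paths are assumptions for illustration, not the exact configuration of any particular machine; an actual file would point at the Catamount compiler wrappers and library installations of the specific site.

    # Catamount-toolchain.cmake -- hypothetical CMake toolchain file for
    # cross compiling to the Catamount compute nodes.

    # Identify the target system so CMake stops inspecting the build host.
    # (CMake ships a Platform/Catamount module for this system name.)
    set(CMAKE_SYSTEM_NAME Catamount)

    # Compiler wrappers that generate code for the compute nodes
    # (names are illustrative; use the wrappers installed at your site).
    set(CMAKE_C_COMPILER   cc)
    set(CMAKE_CXX_COMPILER CC)

    # Directories searched for headers and libraries built for the target.
    set(CMAKE_FIND_ROOT_PATH /opt/xt-libs /home/user/catamount-install)

    # Find programs on the host, but restrict libraries and headers to the
    # target directories during CMake's system inspection.
    set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
    set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
    set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

Such a file would be passed to CMake when configuring the build, for example with cmake -DCMAKE_TOOLCHAIN_FILE=Catamount-toolchain.cmake <path-to-source>.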
