Processing High-Volume Stream Queries on a Supercomputer

Erik Zeitler and Tore Risch
Department of Information Technology, Uppsala University
{erik.zeitler,tore.risch}@it.uu.se

Abstract

Scientific instruments such as radio telescopes, colliders, sensor networks, and simulators generate very high volumes of data streams that scientists analyze to detect and understand physical phenomena. The high data volume and the need for advanced computations on the streams require substantial hardware resources and scalable stream processing. We address these challenges by developing data stream management technology that supports high-volume stream queries on massively parallel computer hardware. We have developed a data stream management system prototype for state-of-the-art parallel hardware. The performance evaluation uses real measurement data from LOFAR, a radio telescope antenna array being developed in the Netherlands.

1. Background

LOFAR [13] is building a radio telescope using an array of 25,000 omnidirectional antenna receivers whose signals are digitized. These digital data streams will be combined in software into streams of astronomical data that no conventional radio telescope has been able to provide before. Scientists perform computations on these data streams to gain scientific insight.

The data streams arrive at the central processing facilities at a rate of several terabits per second, which is too high for the data to be saved on disk. Furthermore, expensive numerical computations must be performed on the streams in real time to detect events as they occur. For these data-intensive computations, LOFAR utilizes an IBM BlueGene supercomputer and conventional clusters.

The high-volume streaming data, together with the fact that several users want to perform analyses, suggest the use of a data stream management system (DSMS) [9]. We are implementing such a DSMS, called SCSQ (Super Computer Stream Query processor, pronounced cis-queue), running on the BlueGene computer. SCSQ scales by dynamically incorporating more computational resources as the amount of data grows. Once activated, continuous queries (CQs) filter and transform the streams to identify events and to reduce the data volumes of the result streams, which are delivered in real time. The area of stream data management has recently gained much interest from the database research community [1] [8] [14]. An important application area for stream-oriented databases is sensor networks, where data from large numbers of small sensors are collected and queried in real time [21] [22]. The LOFAR antenna array will be the largest sensor network in the world. In contrast to conventional sensor networks, where each sensor produces a limited amount of very simple data, the data volume produced by each LOFAR receiver is very large.

Thus, DSMS technology needs to be improved to meet the demands of this environment and to utilize state-of-the-art hardware. Our application requires support for computationally expensive continuous queries over data streams of very high volumes. These queries need to execute efficiently on new types of hardware in a heterogeneous environment.

2. Research problem

A number of research issues arise when investigating how new hardware developments like the BlueGene massively parallel computer can be optimally utilized for processing continuous queries over high-volume data streams. For example, we ask the following questions:

1. How is the scalability of continuous query execution ensured for large stream data volumes and many stream sources? New query execution strategies need to be developed and evaluated.
2. How should expensive user-defined computations, and models to distribute them, be included without compromising scalability? The query execution strategies need to account not only for communication but also for computation time.
3. How does the chosen hardware environment influence the DSMS architecture and its algorithms? The BlueGene CPUs are relatively slow while the communication is fast. This influences query distribution.
4. How can the communication subsystems be utilized optimally? The communication between different CPUs depends on the network topology and on the load of each individual CPU. This also influences query distribution.

Figure 1. Stream data flow in the target hardware environment: input streams pass through the back-end cluster and the BlueGene to the front cluster, which delivers the results to the user.

3. Our approach

To answer the above research questions we are developing a SCSQ prototype. We analyze the performance characteristics of the prototype system in the target hardware environment in order to make further design choices and modifications. The analyses are based on a benchmark using real and simulated LOFAR data, as well as test queries that reflect typical use scenarios. These experiments provide test cases for prototype implementation and system re-design. In particular, the performance measurements provide a basis for designing a system that is more scalable than previous solutions on standard hardware.

The CQs are specified declaratively in a query language similar to SQL, extended with streaming and vector processing operators. Vector processing operators are needed in the query language since our application requires extensive numerical computations over high-volume streams of vectors of measurement data. The queries involve stream theta joins over vectors, applying non-trivial numerical vector computations as join criteria. To filter and transform streams before merging and joining them, the system supports sub-queries parameterized by stream identifiers. These sub-queries execute in parallel on different nodes. The sketch below illustrates the kind of join such queries express.
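The paper does not show SCSQ's query syntax, so the following Python sketch is an illustration only of what a windowed stream theta join over vectors involves. The window size, the cosine-similarity predicate, and the antenna_stream stand-in are all hypothetical, chosen to exemplify "non-trivial numerical vector computations as join criteria".

# Hypothetical sketch (not SCSQ code) of a windowed stream theta join
# over vector streams; duplicate matches across window slides are not
# suppressed, for brevity.
from collections import deque
import math
import random

WINDOW = 8        # tuples kept per stream window (assumed)
THRESHOLD = 0.9   # cosine-similarity join threshold (assumed)

def correlated(u, v):
    """Theta-join predicate: cosine similarity of two vectors > THRESHOLD."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return norm > 0 and dot / norm > THRESHOLD

def theta_join(left, right, theta):
    """Consume two vector streams in lockstep and emit every pair from
    the two sliding windows that satisfies the predicate theta."""
    lwin, rwin = deque(maxlen=WINDOW), deque(maxlen=WINDOW)
    for l, r in zip(left, right):
        lwin.append(l)
        rwin.append(r)
        for u in lwin:
            for v in rwin:
                if theta(u, v):
                    yield u, v

def antenna_stream(seed, dim=4):
    """Stand-in for a pre-processed receiver stream of measurement vectors."""
    rng = random.Random(seed)
    while True:
        yield [rng.gauss(0.0, 1.0) for _ in range(dim)]

if __name__ == "__main__":
    matches = theta_join(antenna_stream(1), antenna_stream(2), correlated)
    for _, pair in zip(range(3), matches):  # print the first few matches
        print(pair)

In SCSQ the join predicate would presumably be pushed into the distributed query plan rather than evaluated in a single loop; the sketch only pins down the semantics of such a query.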
A particular problem is how to optimize high-volume stream queries in the target parallel and heterogeneous hardware environment, consisting of BlueGene compute nodes communicating with conventional shared-nothing Linux clusters. Pre- and post-processing computations are done on the Linux clusters, while parallelizable computations are likely to be more efficient on the BlueGene. The distribution of the processing should be automatically optimized over all available hardware resources. When several different nodes are involved in the execution of a stream query, the properties of the different communication mechanisms (TCP, UDP, MPI) substantially influence query execution performance.

4. The hardware environment

Figure 1 illustrates the stream dataflow in the target hardware environment. The users interact with SCSQ on a Linux front cluster, where they specify CQs. The input streams from the antennas are first pre-processed according to the user CQs in the Linux back-end cluster. Next, the BlueGene processes the CQs over these pre-processed streams. The output streams from the BlueGene are then post-processed in the front cluster, and the result stream is finally delivered to the user. Thus, three parallel computers are involved, and it is up to SCSQ to transparently and optimally distribute the stream processing between them.

The hardware components have different architectures. The BlueGene features dual PowerPC 440d 700 MHz (5.6 Gflops peak) compute nodes connected by a 1.4 Gbps 3D torus network and a 2.8 Gbps tree network [3]. Each compute node has a local 512 MB memory. The compute nodes run the compute node kernel (CNK), a simple single-threaded operating system that provides a subset of UNIX functionality. Each compute node has two processors, of which normally one is used for computation and the other for communication with other compute nodes. MPI is used for communication between BlueGene compute nodes, whereas communication with the Linux clusters goes through I/O nodes that provide TCP or UDP. One important limitation of CNK is its lack of server functionality: listen(), accept(), and select() are not implemented, so a compute node must always initiate connections itself (a client-only pattern sketched at the end of this section). Furthermore, two-way communication is expensive and should be avoided in time-critical code. Each I/O node is equipped with a 1 Gbit/s network interface. In LOFAR's BlueGene there are 6144 dual-processor compute nodes, grouped into processing sets, or psets, each consisting of 8 compute nodes and one I/O node. This I/O-rich configuration enables high volumes of incoming and outgoing data streams.

The Linux front and back-end clusters are IBM JS20 computers with dual PowerPC 970 2.2 GHz processors.
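Because CNK omits listen(), accept(), and select(), a compute node can only act as a TCP client: it must itself open a connection to fetch its input, for example from a preparator on the back-end cluster. The sketch below shows this client-initiated pull; the host name, port, and length-prefixed float64 framing are assumptions, as the paper does not describe SCSQ's wire format.

# Sketch of a compute node pulling a vector stream over TCP. The node
# is strictly a client (connect only, never listen), matching the CNK
# restriction. Endpoint and framing are hypothetical.
import socket
import struct

PREPARATOR = ("backend-preparator", 4711)  # hypothetical endpoint

def read_exactly(sock, n):
    """recv() may return short reads; loop until n bytes have arrived."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf.extend(chunk)
    return bytes(buf)

def stream_vectors(endpoint=PREPARATOR):
    """Connect out to a preparator and yield measurement vectors,
    assuming each message is a 4-byte length prefix followed by
    that many bytes of big-endian float64 samples."""
    with socket.create_connection(endpoint) as sock:
        while True:
            (nbytes,) = struct.unpack("!I", read_exactly(sock, 4))
            payload = read_exactly(sock, nbytes)
            yield struct.unpack("!%dd" % (nbytes // 8), payload)

The same client-only constraint plausibly motivates the pull-based control flow of Section 5, where the CNC polls for work instead of waiting for pushed instructions.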
5. The SCSQ system

Figure 2 illustrates the architecture of the SCSQ components running on the different clusters.

Figure 2. The SCSQ components: the preparators and stream processors (SP) on the back-end cluster; the compute node coordinator (CNC), query masters (QM), and SPs on the BlueGene; and the query coordinator, front stream processors (FSP), and client manager on the front cluster. Double arrows indicate data streams.

On the front cluster, the user application interacts with a SCSQ client manager. The client manager is responsible for i) interacting with the user application, and ii) sending CQs and meta-data, such as the client manager identification, to the query coordinator for compilation.

The query coordinator is responsible for i) compiling incoming CQs from client managers, ii) starting one or more front stream processors (FSPs) to do the post-processing of the streams from the BlueGene, and iii) posting instructions to the BlueGene components for the execution of CQs. When the query coordinator receives a new CQ from a client manager, it initiates new FSPs for post-processing of that CQ. It also maintains a request queue of CQs and other instructions to be processed by the BlueGene. This queue is regularly polled by the BlueGene compute node coordinator (CNC), shown as a single arrow in Figure 2; a sketch of this poll loop appears at the end of this section.

Nodes participating in the processing of a stream are called working nodes. Stream processors, query masters, and FSPs are all working nodes. When a working node needs measurements from an input stream, it initiates TCP communication for that stream through its preparator. A preparator is a working node, running on the back-end cluster, that wraps one or more input streams.

The set-up of a stream query generates a distributed query execution tree, as illustrated by the double arrows in Figure 2.

We have implemented the first SCSQ prototype and are evaluating it. All BlueGene and front-node functionality for the execution of single-user queries has been implemented. We have used this implementation to ...
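The CNC's polling of the coordinator's request queue can be sketched as below. Only the pattern is taken from the paper; the endpoint, the POLL request, the JSON-encoded reply, and the poll interval are hypothetical.

# Hypothetical sketch of the CNC poll loop. The paper states only that
# the CNC regularly polls the coordinator's request queue; the protocol
# details here are assumptions.
import json
import socket
import time

COORDINATOR = ("query-coordinator", 4712)  # hypothetical endpoint
POLL_INTERVAL = 0.5                        # seconds, assumed

def poll_once(endpoint=COORDINATOR):
    """Open an outbound connection (CNK nodes cannot accept one),
    request pending instructions, and decode one JSON line in reply."""
    with socket.create_connection(endpoint) as sock:
        sock.sendall(b"POLL\n")
        reply = sock.makefile().readline()
        return json.loads(reply) if reply.strip() else []

def cnc_loop(dispatch):
    """Repeatedly fetch queued CQ instructions and hand each one to
    dispatch(), which would start query masters and stream processors
    on the compute nodes."""
    while True:
        for instruction in poll_once():
            dispatch(instruction)
        time.sleep(POLL_INTERVAL)

# Usage sketch: cnc_loop(lambda instr: print("would execute:", instr))

Polling adds latency bounded by the poll interval, but it requires no server functionality on the BlueGene side, which CNK could not provide anyway.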
