INTERCONNECTION NETWORKS FOR PARALLEL COMPUTERS
(Wiley Encyclopedia of Computer Science and Engineering, edited by Benjamin Wah. Copyright © 2008 John Wiley & Sons, Inc.)

The interconnection network is responsible for fast and reliable communication among the processing nodes in any parallel computer. The demands on the network depend on the parallel computer architecture in which the network is used.

Two main parallel computer architectures exist (1). In the physically shared-memory parallel computer, N processors access M memory modules over an interconnection network, as depicted in Fig. 1(a). In the physically distributed-memory parallel computer, a processor and a memory module form a processor–memory pair called a processing element (PE). All N PEs are interconnected via an interconnection network, as depicted in Fig. 1(b). In a message-passing system, PEs communicate by sending and receiving single messages (2), while in a distributed-shared-memory system, the distributed PE memory modules act as a single shared address space in which a processor can access any memory cell (3). This cell will either be in the memory module local to the processor, or in a different PE, in which case it has to be accessed over the interconnection network.

Figure 1. (a) Physically shared-memory and (b) distributed-memory parallel computer architecture.

Parallel computers can be further divided into SIMD and MIMD machines. In single-instruction-stream multiple-data-stream (SIMD) parallel computers (4), each processor executes the same instruction stream, which is distributed to all processors from a single control unit. All processors operate synchronously and therefore also generate messages to be transferred over the network synchronously. Thus, the network in SIMD machines has to support synchronous data transfers. In a multiple-instruction-stream multiple-data-stream (MIMD) parallel computer (5), all processors operate asynchronously on their own instruction streams. The network in MIMD machines therefore has to support asynchronous data transfers.

The interconnection network is an essential part of any parallel computer. Only if fast and reliable communication over the network is guaranteed will the parallel system exhibit high performance. Many different interconnection networks for parallel computers have been proposed (6). One characteristic of a network is its topology. In this article we consider only point-to-point (non-bus-based) networks, in which each network link is connected to only two devices. These networks can be divided into two classes: direct and indirect networks. In direct networks, each switch has a direct link to a processing node or is simply incorporated directly into the processing node. In indirect networks, this one-to-one correspondence between switches and nodes need not exist, and many switches in the network may be attached only to other switches. Direct and indirect network topologies are discussed in the following section.
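The distributed-shared-memory access described above can be sketched as a simple owner lookup. This is an illustrative assumption rather than a scheme from this article: it supposes a block-distributed address space in which each PE owns one contiguous range of its memory module, and all names and sizes (`N_PES`, `MEM_PER_PE`, `locate`) are hypothetical.

```python
# Sketch (hypothetical parameters): in a block-distributed shared address
# space, the quotient of a global address selects the owning PE and the
# remainder is the offset within that PE's local memory module.

N_PES = 16          # number of processor-memory pairs (PEs)
MEM_PER_PE = 4096   # words of memory per PE module

def locate(addr, my_pe):
    """Return ('local', offset) or ('remote', owner_pe, offset)."""
    owner, offset = divmod(addr, MEM_PER_PE)
    if owner >= N_PES:
        raise ValueError("address out of range")
    if owner == my_pe:
        return ("local", offset)       # cell is in the local memory module
    return ("remote", owner, offset)   # cell must be fetched over the network

# PE 3 accessing a cell owned by PE 0, and a cell it owns itself:
print(locate(100, 3))            # ('remote', 0, 100)
print(locate(3 * 4096 + 7, 3))   # ('local', 7)
```

Whether such an access is local or remote is invisible to the program; only the latency differs, which is why a fast network matters for distributed-shared-memory machines.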
The mechanism used to transfer a message through a network is called switching. A section below is devoted to switching techniques. Switching does not take into consideration the actual route that a message takes through a network. This mechanism is termed routing and will be discussed in turn. In indirect networks, active switch boxes are used to transfer messages. Switch box architectures are discussed in a final section.

NETWORK TOPOLOGIES

Direct Networks

Direct networks consist of physical interconnection links that connect the nodes (typically PEs) in a parallel computer. Each node is connected to one or more of those interconnection links. Because the network consists of links only, routing decisions have to be made in the nodes. In many systems, dedicated router (switch) hardware is used in each node to select the interconnection link over which a message is sent toward its destination. Because a node is normally not directly connected to all other nodes in the parallel computer, a message transfer from a source to a destination node may require several steps through intermediate nodes. These steps are called hops.

Two topology parameters that characterize direct networks are the degree and the network diameter. The degree G of a node is defined as the number of interconnection links to which the node is connected. Herein, we generally assume that direct network links are bidirectional, although this need not always be the case. Networks in which all nodes have the same degree n are called n-regular networks. The network diameter F is the maximum distance between two nodes in a network; it equals the maximum number of hops that a message needs to be transferred from any source to any destination node. The degree relates the network topology to its hardware requirements (number of links per node), while the diameter is related to the transfer delay of a message (number of hops through the network). The two parameters depend on each other: in most direct networks, a higher degree implies a smaller diameter, because with increasing degree a node is connected to more other nodes, so that the maximum distance between two nodes decreases.

Many different direct network topologies have been proposed. In the following, only the basic topologies are studied. Further discussion of other topologies can be found in Refs. 7–9.

In a ring network connecting N nodes, each node is connected to only two neighbors (G = 2), with PE i connected to PEs (i − 1) mod N and (i + 1) mod N. However, the network has a large diameter of F = ⌊N/2⌋ (assuming bidirectional links). Thus, global communication performance in a ring network decreases with an increasing number of nodes.

A direct network quite commonly used in parallel computers is the mesh network. In a two-dimensional mesh, the nodes are configured in an M_X × M_Y grid (with M_X nodes in the X direction and M_Y nodes in the Y direction), and an internal node is connected to its nearest neighbors in the north, south, east, and west directions. Each border node is connected to its nearest neighbors only. A 4 × 4 two-dimensional mesh connecting 16 nodes is depicted in Fig. 2(a). Because of the mesh edges, nodes have different degrees. In Fig. 2(a), the internal nodes have degree G = 4, while edge nodes have degree G = 3 and corner nodes have degree G = 2. Because the edge nodes have a lower degree than internal nodes, the (relatively large) diameter of a two-dimensional mesh is F = (M_X − 1) + (M_Y − 1).

To decrease the network diameter, the degree of the edge nodes can be increased to G = 4 by adding edge links. The topology of a two-dimensional torus network is created by connecting the edge nodes in columns and rows, as depicted in Fig. 2(b). All nodes of this two-dimensional torus network have degree G = 4, and the network diameter is reduced to F = ⌊M_X/2⌋ + ⌊M_Y/2⌋.

Figure 2. (a) Two-dimensional mesh network connecting 16 nodes; (b) torus network connecting 16 nodes. Because of the edge connections, the torus network has a uniform degree of four, while nodes in the mesh network have different degrees, depending on their location.

The disadvantage of two-dimensional mesh networks is their large diameter, which results in message transfers over many hops during global communication, especially in larger networks. To further reduce the diameter, higher-dimensional meshes can be used.
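The ring diameter formula F = ⌊N/2⌋ can be checked mechanically. The sketch below (not from the article) builds the ring adjacency described above and measures the diameter by breadth-first search from every node:

```python
from collections import deque

def diameter(adj):
    """Maximum BFS eccentricity over all nodes; adj maps node -> neighbors."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def ring(n):
    """Bidirectional ring: PE i is linked to (i - 1) mod n and (i + 1) mod n."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for n in (4, 5, 16):
    assert diameter(ring(n)) == n // 2   # matches F = floor(N/2)
```

The assertions confirm that the farthest pair of nodes in a bidirectional ring is always ⌊N/2⌋ hops apart, which is why the diameter grows linearly with the node count.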
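The mesh and torus diameter formulas can be verified the same way. The sketch below (again not from the article) builds an M_X × M_Y grid with or without wraparound edge links and compares the measured diameter against F = (M_X − 1) + (M_Y − 1) for the mesh and F = ⌊M_X/2⌋ + ⌊M_Y/2⌋ for the torus:

```python
from collections import deque

def diameter(adj):
    """Maximum BFS eccentricity over all nodes; adj maps node -> neighbors."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def grid(mx, my, wrap):
    """2-D mesh (wrap=False) or torus (wrap=True) with mx * my nodes."""
    adj = {}
    for x in range(mx):
        for y in range(my):
            nbrs = []
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if wrap:                      # edge links connect rows/columns
                    nbrs.append((nx % mx, ny % my))
                elif 0 <= nx < mx and 0 <= ny < my:
                    nbrs.append((nx, ny))     # border nodes lose neighbors
            adj[(x, y)] = nbrs
    return adj

mx, my = 4, 4
assert diameter(grid(mx, my, wrap=False)) == (mx - 1) + (my - 1)  # mesh: 6
assert diameter(grid(mx, my, wrap=True)) == mx // 2 + my // 2     # torus: 4
```

For the 4 × 4 networks of Fig. 2, the wraparound links cut the diameter from 6 to 4, illustrating the diameter reduction that motivates the torus topology.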