
Proceedings of the 2013 Federated Conference on Computer Science and Information Systems, pp. 455–462

Library for Matrix Multiplication-based Data Manipulation on a "Mesh-of-Tori" Architecture

Maria Ganzha, Marcin Paprzycki (Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland)
Stanislav Sedukhin (University of Aizu, Aizu Wakamatsu, Japan)
Email: [email protected], fi[email protected]

Abstract—Recent developments in computational sciences, involving both hardware and software, allow reflection on the way that computers of the future will be assembled and software for them written. In this contribution we combine recent results concerning possible designs of future processors, the ways they will be combined to build scalable (super)computers, and generalized matrix multiplication. As a result, we propose a novel library of routines, based on generalized matrix multiplication, that facilitates (matrix / image) manipulations.

I. INTRODUCTION

Since the early 1990s, one of the important factors limiting computer performance has been the ability to feed data to the increasingly faster processors. Already in 1994, the authors of [1] discussed problems caused by the increasing gap between the speeds of memory and processors. Their work was followed, among others, by Burger and Goodman ([2]), who were concerned with the limitations imposed by the memory bandwidth on the development of computer systems. In 2002, P. Machanick presented an interesting survey ([3]) in which he considered the combined effects of the doubling of processor speed (predicted by Moore's Law) and the 7% increase in memory speed, when compared on the same time scale.

The initial approach to address this problem was the introduction of a memory hierarchy for data reuse (see, for instance, [4]). In addition to the registers, CPUs have been equipped with small, fast cache memory. As a result, systems with four layers of latency were developed. Data could be replicated and reside in (1) register, (2) cache, (3) main memory, (4) external memory. Later on, while the "speed gap" between processors and memory continued to widen, multi-processor computers gained popularity. As a result, systems with an increasing number of latencies have been built. On the large scale, a data element could be replicated and reside in (and each subsequent layer means increasing / different latency of access): (1) register, (2) level 1 cache, (3) level 2 cache, (4) level 3 cache, (5) main memory of a (multi-core / multi-processor) computer, (6) memory of another networked computer (node in the system), (7) external device. Obviously, such a complex structure of a computer system resulted in the need to write complex codes to use it efficiently. Data blocking and reuse became the method of choice for the solution of large computational problems. This method was applied not only to multi-processor computers, but also to computers with processors consisting of multiple computational units (e.g. cores). In this context, let us note that as the number of computational units per processor is systematically increasing, the inflation-adjusted price of a processor remains the same. As a result, the price per computational operation continues to decrease (see also [5]).
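To make the data blocking idea mentioned above concrete, the following C fragment sketches a blocked (tiled) matrix multiplication in which each tile is kept in cache and reused many times before being evicted. It is only an illustration of the general technique; the routine name, the tile size B, and the row-major storage are our assumptions, not part of the library proposed later in this paper.

/* Minimal sketch of blocked (tiled) matrix multiplication, C = C + A*B,
 * illustrating data blocking and reuse. Names and the tile size B are
 * illustrative assumptions, not the interface of any particular library. */
#include <stddef.h>

#define B 64                       /* tile size, tuned to cache capacity */

static inline double elem(const double *m, size_t n, size_t i, size_t j)
{
    return m[i * n + j];           /* matrices stored row-major in 1-D memory */
}

void matmul_blocked(size_t n, const double *a, const double *b, double *c)
{
    for (size_t ii = 0; ii < n; ii += B)
        for (size_t kk = 0; kk < n; kk += B)
            for (size_t jj = 0; jj < n; jj += B)
                /* process one B x B tile: A(ii,kk) * B(kk,jj) -> C(ii,jj) */
                for (size_t i = ii; i < ii + B && i < n; i++)
                    for (size_t k = kk; k < kk + B && k < n; k++) {
                        double aik = elem(a, n, i, k);
                        for (size_t j = jj; j < jj + B && j < n; j++)
                            c[i * n + j] += aik * elem(b, n, k, j);
                    }
}

Note that the index expression i * n + j in the sketch makes explicit the flattening of a 2D matrix into one-dimensional memory that is discussed next.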
While a number of approaches have been proposed to deal with the memory wall problem (e.g. see the discussion of 3D memory stacking in [6]), they seem to only slow down the process, rather than introduce a radical solution. Note that the introduction of multicore processors resulted in (at least temporarily) sustaining Moore's Law and thus further widening the performance gap (see [3], [7]). Here, it is also worth mentioning a recent approach to reduce memory contention via data encoding (see [8]). The idea is to allow for hardware-based encoding and decoding of data to reduce its size. Since this proposal is brand new, time will tell how successful it will be. Note, however, that this proposal is also in line with the general observation that "computational hardware" (i.e. encoders and decoders) is cheap, and should be used to reduce the volume of data moved between processor(s) and memory.

Let us now consider one of the important areas of scientific computing – computational linear algebra. Obviously, here the basic object is a matrix. While one-dimensional matrices (vectors) are indispensable, the fundamental object of the majority of algorithms is a 2D, or a 3D, matrix. Upon reflection, it is easy to realize that there exists a conflict between the structure of a matrix and the way it is stored and processed in most computers. To make the point simple, 2D matrices are rectangular (while 3D matrices are cuboidal). However, they are stored in one-dimensional memory (as a long vector). Furthermore, in most cases, they are processed in a vector-oriented fashion (except for the SIMD-style array processors). Finally, they are sent back to be stored in the one-dimensional memory. In other words, the data arrangement natural for the matrix is neither preserved, nor taken advantage of, which puts not only a practical, but also a theoretical limit on the performance of linear algebra codes (for more details, see [9]).

Interestingly, a similar disregard for the natural arrangement of data also concerns many "sensor systems." Here, the input image, which is square or rectangular, is read out serially, pixel-by-pixel, and is sent to the CPU for processing. This means that the transfer of pixels destroys the 2D integrity of the data (an image, or a frame, no longer exists in its natural layout). Separately, such a transfer introduces latency caused by serial communication. Here, the need to transfer large data streams to the processor may prohibit their use in applications that require a (near) real-time response [10]. Note that large data streams exist not only in scientific applications. For instance, modern digital cameras capture images consisting of 22.3 × 10^6 pixels (Canon EOS 5D Mark III [11]) or even 36.3 × 10^6 pixels (Nikon D800 [12]). What is even more remarkable, the recently introduced Nokia 808 PureView phone [13] has a camera capturing 41 × 10^6 pixels.

Among the scientific / commercial sensor arrays, the largest of them seems to be the 2-D pixel matrix detector installed in the Large Hadron Collider at CERN [14]. It has 10^9 sensor cells. A similar number of sensors would be required in a CT scanner array of size approximately 1 m^2, with about 50K pixels per 1 cm^2. In devices of this size, for (near) real-time image and video processing, as well as 3-D reconstruction, it would be natural to load data directly from the sensors to the processing elements (for immediate processing). Thus, a focal-plane I/O, which can map the pixels of an image (or a video frame) directly into the array of processors, allowing data processing to be carried out immediately, is highly desired. The computational elements could store the sensor information (e.g. a single pixel, or an array of pixels) directly in their registers (or the local memory of a processing unit). Such an architecture has two potential advantages. First, cost can be reduced because there is no need for memory buses or a complicated layout. Second, speed can be improved as the integrity of the input data is not destroyed by serial communication. As a result, processing can start as soon as the data is available (e.g. in the registers). Note that proposals for similar hardware architectures have been outlined in [15], [16], [17]. However, all previously proposed focal-plane array processors were envisioned with a mesh-based interconnect, which is good for local data reuse (convolution-like simple algorithms), but is not well suited to support global data reuse (matrix-multiplication-based complex algorithms).

What is needed, therefore, is a processing element that: (1) obtains data from the sensor(s) and transfers it directly to its operational registers / local memory; and (2) is capable of generalized FMA (fused multiply-add) operations. The latter requirement means that such an FMA unit should store (in its registers) the constants needed to efficiently perform FMA operations originating from various semirings. Let us name it the extended generalized FMA (EG FMA). Recall that the cost of computational units (of all types) is systematically decreasing ([5]). Therefore, the cost of an EG FMA unit should not be much higher than that of the standard FMAs found in today's processors. Hence, it is easy to imagine m(b)illions of them "purchased" for a reasonable price. As stated above, such EG FMAs should be connected into a square array that will match the shape of the input data. Let us now describe how such a system can be built.
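To illustrate what is meant by FMA operations originating from various semirings, the following C sketch parameterizes a matrix multiply-and-accumulate by the semiring's two operations and its additive identity. The names (semiring_t, gemm_semiring) are our illustrative assumptions, not the EG FMA hardware or the interface of the library proposed in this paper. With the arithmetic semiring the loop computes the usual matrix product; with the tropical (min, +) semiring, and C pre-initialized to its "zero" DBL_MAX, the very same loop performs the distance products used in shortest-path computations.

/* Sketch: generalized matrix "multiplication" C = C (+) A (*) B over a
 * user-selected semiring. Struct and function names are illustrative
 * assumptions, not an actual EG FMA or library interface. */
#include <float.h>
#include <stddef.h>
#include <stdio.h>

typedef struct {
    double (*add)(double, double);   /* semiring "addition"                       */
    double (*mul)(double, double);   /* semiring "multiplication"                 */
    double zero;                     /* identity of "addition"; pre-load C with it */
} semiring_t;

static double f_add(double x, double y) { return x + y; }
static double f_mul(double x, double y) { return x * y; }
static double f_min(double x, double y) { return x < y ? x : y; }

/* ordinary arithmetic (+, *) and the tropical (min, +) semiring */
static const semiring_t arithmetic = { f_add, f_mul, 0.0 };
static const semiring_t tropical   = { f_min, f_add, DBL_MAX };

void gemm_semiring(const semiring_t *s, size_t n,
                   const double *a, const double *b, double *c)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double acc = c[i * n + j];
            for (size_t k = 0; k < n; k++)      /* generalized FMA step */
                acc = s->add(acc, s->mul(a[i * n + k], b[k * n + j]));
            c[i * n + j] = acc;
        }
}

int main(void)
{
    double a[4] = { 1, 2, 3, 4 }, b[4] = { 5, 6, 7, 8 };
    double c1[4] = { 0, 0, 0, 0 };
    double c2[4] = { DBL_MAX, DBL_MAX, DBL_MAX, DBL_MAX };
    gemm_semiring(&arithmetic, 2, a, b, c1);    /* ordinary product            */
    gemm_semiring(&tropical,   2, a, b, c2);    /* min-plus (distance) product */
    printf("c1[0][0] = %g, c2[0][0] = %g\n", c1[0], c2[0]);
    return 0;
}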
II. MESH-OF-TORI INTERCONNECTION TOPOLOGY

Since the early 1980s, a number of topologies for supercomputer systems have been proposed. Let us omit the unscalable approaches, like a bus, a tree, or a star. The more interesting topologies (from the 1980s and 1990s) were:
• hypercube – scaled up to 64,000+ processors in the Connection Machine CM-1,
• mesh – scaled up to 4,000 processors in the Intel Paragon,
• processor array – scaled up to 16,000+ processors in the MasPar computer,
• rings of rings – scaled up to 1,000+ processors in the Kendall Square KSR-1 machines,
• torus – scaled up to 2,048 units in the Cray T3D.

However, all of these topologies suffered from the fact that at least some of the elements were reachable with a different latency than the others. This means that algorithms implemented on such machines would have to be asynchronous, which works well, for instance, for Ising-model algorithms similar to [20], but is not acceptable for a large class of computational problems. Otherwise, extra latency had to be introduced by the need to wait for the information to be propagated across the system.

To overcome this problem, a new (mesh-of-tori; MoTor) multiprocessor system topology has recently been proposed.
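As a generic illustration of why a torus (and hence a mesh of tori) treats all nodes uniformly, while a plain mesh does not, the following C fragment computes the four neighbors of node (i, j) on a p × p torus with wrap-around links. It is a textbook sketch under our own naming, not the MoTor interconnect defined in the remainder of the paper.

/* Sketch: neighbor addresses of node (i, j) on a p x p 2-D torus.
 * Thanks to the wrap-around links every node has exactly four neighbors,
 * so there are no border special cases (unlike a mesh). Generic torus
 * addressing only, not the MoTor topology itself. */
#include <stdio.h>

typedef struct { int i, j; } node_t;

/* four neighbors of (i, j) on a p x p torus: north, south, west, east */
static void torus_neighbors(node_t n, int p, node_t out[4])
{
    out[0] = (node_t){ (n.i - 1 + p) % p, n.j };   /* north */
    out[1] = (node_t){ (n.i + 1) % p,     n.j };   /* south */
    out[2] = (node_t){ n.i, (n.j - 1 + p) % p };   /* west  */
    out[3] = (node_t){ n.i, (n.j + 1) % p     };   /* east  */
}

int main(void)
{
    node_t nb[4];
    torus_neighbors((node_t){ 0, 3 }, 4, nb);      /* a corner node of a 4 x 4 grid      */
    for (int k = 0; k < 4; k++)                    /* on a mesh it would have 2 neighbors */
        printf("(%d,%d)\n", nb[k].i, nb[k].j);
    return 0;
}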