Balance principles for algorithm-architecture co-design

Kent Czechowski∗, Casey Battaglino†, Chris McClanahan∗, Aparna Chandramowlishwaran†, Richard Vuduc†
Georgia Institute of Technology
†School of Computational Science and Engineering
∗School of Computer Science
fkentcz,cbattaglino3,chris.mcclanahan,aparna,[email protected]

Abstract

We consider the problem of “co-design,” by which we mean the problem of how to design computational algorithms for particular hardware architectures and vice-versa. Our position is that balance principles should drive the co-design process. A balance principle is a theoretical constraint equation that explicitly relates algorithm parameters to hardware parameters according to some figure of merit, such as speed, power, or cost. This notion originates in the work of Kung (1986); Callahan, Cocke, and Kennedy (1988); and McCalpin (1995); however, we reinterpret these classical notions of balance in a modern context of parallel and I/O-efficient algorithm design as well as trends in emerging architectures. From such a principle, we argue that one can better understand algorithm and hardware trends, and furthermore gain insight into how to improve both algorithms and hardware. For example, we suggest that although matrix multiply is currently compute-bound, it will in fact become memory-bound in as few as ten years, even if last-level caches grow at their current rates. Our overall aim is to suggest how to co-design rigorously and quantitatively while still yielding intuition and insight.

1 Our Position and its Limitations

We seek a formal framework that can explicitly relate characteristics of an algorithm, such as its inherent parallelism or memory behavior, with parameters of an architecture, such as the number of cores, cache sizes, or memory latency. We refer to this task as one of algorithm-architecture co-design. Our goal is to say precisely and analytically how changes to the architecture might affect the scaling of a computation, and, conversely, to identify what classes of computation might execute efficiently on a given architecture.

Our approach is inspired by the fundamental principle of machine balance, applied here in the context of modern architectures and algorithms. “Modern architectures” could include multicore CPUs or manycore GPU-like processors; and by modern algorithms, we mean those analyzed using recent techniques that explicitly count parallel operations and I/O operations. Given an algorithm analyzed this way and a cost model for some architecture, we can estimate compute-time and I/O-time explicitly. We then connect the algorithm and architecture simply by applying the concept of balance, which says a computation running on some machine is efficient if the compute-time dominates the I/O (memory transfer) time, as suggested originally by Kung (1986) and since applied by numerous others [13, 28, 32, 38].[1] The result is a constraint equation that binds algorithm and architecture parameters together; we refer to this constraint as a balance principle.

[1] We depart slightly from traditional conventions of balance, which ask instead that compute-time equal I/O-time.

Position. Our position is that this simple and classical idea of balance is an insightful way to understand the impact of architectural design on algorithms and vice-versa. We argue this point by giving a detailed example of how one might derive a balance principle for an algorithm running on a manycore architecture, and then analyze the resulting balance principle to determine the trajectory of algorithms given current architectural trends. Among numerous observations,

• we show that balance principles can yield unified ways of viewing several classical performance engineering principles, such as classical machine balance, Amdahl’s Law, and Little’s Law;

• we predict that matrix multiply, the prototypical compute-bound kernel, will actually become memory-bound in as few as ten years based on current architectural trends, even if last-level caches continue to grow at their historical rates;

• we argue that stacked memory, widely believed to be capable of largely solving the problem of how to scale bandwidth with compute capacity, still may not solve the problem of compute-bound kernels becoming increasingly memory-bound;

• we suggest how minimizing I/O-time relative to computation time will also save power and energy, thereby obviating the need for “new” principles of energy- or power-aware algorithm design.

Limitations. Our approach has several weaknesses. First, our discussion is theoretical. That is, we assume abstract models for both the algorithms and the architectures and make numerous simplifying assumptions. However, as an initial step in formalizing intuition about how algorithms and architectures need to change, we believe an abstract analytical model is a useful start. Secondly, the specific balance principles we discuss in this paper still assume “time” as the main figure of merit, rather than, say, dollar cost. Nevertheless, we emphasize that time-based balance principles are just one example; one is free to redefine the figure of merit appropriately to consider balanced systems in other measures. We discuss energy-based measures specifically in §3. Thirdly, what we suggest is a kind of re-packaging of many known principles, rather than the discovery of an entirely new principle for co-design. Still, we hope our synthesis proves useful in related efforts to design better algorithms and hardware through analytical techniques.

2 Deriving a Balance Principle

Deriving a balance principle consists of three steps.

1. Algorithmically analyze the parallelism.
2. Algorithmically analyze the I/O behavior, or number of transfers through the memory hierarchy.
3. Combine these two analyses with a cost model for an abstract machine, to estimate the total cost of computation vs. the cost of memory accesses.

From the last step, we may then impose the condition for balance, i.e., that compute-time dominate memory-time. This process resembles Kung’s approach [28], except that we (a) consider parallelism explicitly; (b) use richer algorithmic models that account for more detailed memory hierarchy costs; and (c) apply the model to analyze platforms of interest today, such as GPUs. Note that we use a number of parameters in our model; refer to Figure 1 and Table 1 as a guide.

[Figure 1: (left) A parallel computation in the work-depth model. (right) An abstract manycore processor with a large shared fast memory (cache or local-store). The figure labels the model parameters: p cores, each with peak rate C0 (ops / time / core); fast memory of capacity Z words; slow-memory transactions of size L words, with latency α and bandwidth β; W(n) = work (total ops); D(n) = depth.]

Analyzing parallelism. To analyze algorithmic parallelism, we adopt the classical work-depth (work-span) model [9], which has numerous implementations in real programming models [11, 18, 24]. In this model, we represent a computation by a directed acyclic graph (DAG) of operations, where edges indicate dependencies, as illustrated in Fig. 1 (left). Given the DAG, we measure its work, W(n), which is the total number of unit-cost operations for an input of size n; and its depth or span, D(n), which is its critical path length measured again in unit-cost operations. Note that D(n) defines the minimum execution time of the computation; W(n)/D(n) measures the average available parallelism as each critical path node executes. The ratio D(n)/W(n) is similar to the concept of a sequential fraction, as one might use in evaluating Amdahl’s Law [4, 21, 39]. When we design an algorithm in this model, we try to ensure work-optimality, which says that W(n) is not asymptotically worse than that of the best sequential algorithm, while also trying to maximize W(n)/D(n).

Analyzing I/Os. To analyze the I/O behavior, we adopt the classical external memory model [3]. In this model, we assume a sequential processor with a two-level memory hierarchy consisting of a large but slow memory and a small fast memory of size Z words; work operations may only be performed on data that lives in fast memory. This fast memory may be an automatic cache or a software-controlled scratchpad; our analysis in this paper is agnostic to this choice. We may further consider that transfers between slow and fast memory must occur in discrete transactions (blocks) of size L words. When we design an algorithm in this model, we again measure the work, W(n); however, we also measure Q_{Z,L}(n), the number of L-sized transfers between slow and fast memory for an input of size n. There are several ways to design either cache-aware or cache-oblivious algorithms and then analyze Q_{Z,L}(n) [3, 19, 22, 40]. In either case, when we design an algorithm in this model, we again aim for work-optimality while also trying to maximize the algorithm’s computational intensity. Intensity is the algorithmic ratio of operations performed to words transferred, which is W(n)/(Q_{Z,L}(n) · L). For example, Kung’s classical balance principle [28] in our notation would have been W(n) = Θ(Q_{Z,L}(n) · L).

Architecture-specific cost model. The preceding analyses do not directly model time; for that, we need to choose an architecture model and define the costs. For this example, we use Fig. 1 (right) as our abstract machine, which is intended to represent a generic manycore processor. Imposing the balance condition that compute-time, W/(p·C0), dominate memory-time, Q_{p,Z,L}·L/β, then yields

    p·C0 / β ≤ W / (Q_{p,Z,L} · L)        (4)

which says the machine’s inherent balance point (left-hand side, peak operations to bandwidth) should be no greater than the algorithm’s inherent computational intensity (right-hand side), measured in units of operations to words. Indeed, Equ.
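The two sides of the balance condition in Equ. (4) are easy to compare numerically. The sketch below is illustrative only: the machine parameters are hypothetical assumptions (not values from the paper), and it uses the classical result that a cache-blocked n×n matrix multiply performs W(n) = 2n³ operations while moving Θ(n³/√Z) words between slow and fast memory.

```python
# Hypothetical, illustrative parameters for an abstract manycore machine in
# the style of Fig. 1 (right); none of these numbers come from the paper.
p    = 512          # cores
C0   = 2e9          # peak ops / second / core
beta = 25e9         # bandwidth in words / second (e.g., 200 GB/s at 8 B/word)
Z    = 512 * 1024   # fast-memory capacity in words (4 MiB at 8 B/word)

# Cache-blocked n x n matrix multiply: W(n) = 2n^3 ops, and the classical
# I/O analysis gives Q(n) * L = Theta(n^3 / sqrt(Z)) words moved.  (Latency
# alpha and the block size L drop out of this bandwidth-only check, since
# Q and L only ever appear as the product Q * L.)
n  = 8192
W  = 2.0 * n**3
QL = n**3 / Z**0.5           # words transferred, Q * L

machine_balance = p * C0 / beta   # left-hand side of Equ. (4): ops per word
intensity       = W / QL          # right-hand side: algorithmic ops per word

print(f"machine balance p*C0/beta : {machine_balance:10.1f} ops/word")
print(f"matmul intensity W/(Q*L)  : {intensity:10.1f} ops/word")
if machine_balance <= intensity:
    print("balanced: the kernel is compute-bound on this machine")
else:
    print("unbalanced: the kernel is memory-bound on this machine")
```

Note that the left-hand side grows with p·C0/β while the intensity grows only like √Z; this is the mechanism behind the paper’s prediction that matrix multiply eventually becomes memory-bound if compute capacity outpaces bandwidth and cache growth.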