
Parallel Processing

Why Distributed

• Memory wall + ILP wall + power wall for single-processor performance
  – Memory wall: the increasing gap between processor and memory performance
  – Instruction-level parallelism (ILP) wall: diminishing returns from more advanced uniprocessor architectures
  – Power wall (see "The Power/Thermal Challenges" below)

Why Distributed (cont'd)

[Figure: power consumption trend across processor generations (i386, i486, ...). Source: T. Austin et al., "Mobile Supercomputers," IEEE Computer, vol. 37, no. 5, pp. 81-83, May 2004]

The Power/Thermal Challenges

[Figure: projected transistor count (billions, left axis) and power (watts, right axis) per die vs. year, 2001-2017; transistor counts pass 100 billion while power approaches ~300 W]

• A 300 mm² die will be capable of integrating over 100 billion transistors in the middle of the next decade
• Power consumption will reach 300 W early in the next decade

Flynn’s Computer Models

• Single instruction, single data stream - SISD
• Single instruction, multiple data stream - SIMD (illustrated below)
• Multiple instruction, single data stream - MISD
• Multiple instruction, multiple data stream - MIMD
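To make the SIMD category concrete, here is a minimal C sketch (not from the original slides): a single operation applied across many data elements. A vectorizing compiler maps such a loop onto SIMD hardware, issuing one "add" instruction that covers several elements at once.

```c
#include <stddef.h>

/* One instruction stream, many data elements: a vectorizing
 * compiler (or explicit SIMD intrinsics) turns this loop into
 * SIMD form, adding several elements of a[] and b[] per cycle. */
void add_arrays(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* same operation, multiple data */
}
```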

MIMD

• Flexible
  – One-user high-performance platform
  – Can run multiple tasks simultaneously
• Cost-effective
  – Uses COTS (commercial off-the-shelf) processors
• Exploits thread-level parallelism

Taxonomy of Parallel Architectures

Two Types of MIMDs

• Shared-memory architecture
  – Centralized (tightly coupled) shared-memory multiprocessor
    · Single memory
    · Symmetric access for all processors (SMP)
  – Distributed shared memory
    · Multiple memories
    · Single shared memory address space

Two Types of MIMD (cont’d)

• Multiprocessor with private memory
  – Multiple (distributed) memories (high memory bandwidth)
  – Memory space is private to each processor
  – Communication between processors is more costly (see the message-passing sketch below)
    · Requires a high-bandwidth interconnection network
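As an illustration of private-memory communication, here is a minimal sketch using MPI message passing. The slides name no particular library, so MPI is an assumption, and the ranks and payload value are illustrative; data moves between the private address spaces only through explicit messages.

```c
#include <mpi.h>
#include <stdio.h>

/* Each process owns a private address space; data crosses the
 * interconnection network only via explicit send/receive.
 * Build with mpicc, run with: mpirun -np 2 ./a.out */
int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                       /* lives in rank 0's memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);      /* copied into rank 1's memory */
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```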


Symmetric Multiprocessor (SMP)

• Similar processors of comparable capacity
• Share the same memory and I/O
• Communicate via that shared memory (see the sketch below)
• Connected by a bus or other internal connection
• Approximately the same memory access time for every processor
• All processors can perform the same functions (hence "symmetric")
• System controlled by an integrated operating system
  – Provides interaction between processors
  – Interaction at the job, task, file, and data-element levels
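A minimal shared-memory sketch, assuming POSIX threads (the slides name no API): all threads communicate through a single shared variable, much as an SMP's processors communicate through shared memory. The thread and iteration counts are arbitrary.

```c
#include <pthread.h>
#include <stdio.h>

/* All threads see the same 'counter'; the mutex serializes updates.
 * On an SMP, every thread reaches this memory in roughly equal time. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* communicate via shared memory */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);   /* prints 400000 */
    return 0;
}
```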

Nonuniform Memory Access (NUMA)

• All processors have access to all parts of memory
• Memory access time differs depending on the region of memory and the accessing processor

Nonuniform Memory Access (NUMA) (cont'd)

• Effective performance at higher levels of parallelism than SMP
• No major software changes required
• Performance can break down if there is too much access to remote memory
• Not as transparent as SMP
  – Page allocation, process allocation, and load-balancing changes are needed

Clusters

• Collection of independent computers or SMPs
• Interconnected to work together as a unified resource (as if one computer)
• Communication via fixed paths or network connections
• Each computer is called a node

SMP, NUMA, and Cluster

• All provide multiprocessor support for high-demand applications
• SMP is easier to manage and control, but has a practical limit on the number of processors
• Clustering
  – Superior incremental and absolute scalability
  – Each node has its own memory
  – Applications do not see a large global memory
  – Coherence is maintained by software, not hardware
• NUMA retains the SMP flavour while allowing larger scale

Performance Issues of MIMD

• Parallelism of the application
• Communication cost

Example I

• We want to use 100 processors to achieve a speedup of 80. What fraction of the original computation can be sequential?

Amdahl's law:

$$\mathrm{Speedup}_{\mathrm{overall}} = \frac{\text{execution time}_{\text{old}}}{\text{execution time}_{\text{new}}} = \frac{1}{(1-\alpha) + \alpha/n}$$

where $\alpha$ is the fraction of the computation that can be parallelized and $n$ is the number of processors. With a target speedup of 80 on $n = 100$ processors:

$$80 = \frac{1}{(1-\alpha) + \alpha/100} \;\Longrightarrow\; 99\,(1-\alpha) = \frac{100}{80} - 1 = 0.25 \;\Longrightarrow\; 1-\alpha \approx 0.25\%$$

The parallelism available in programs limits the performance of distributed computing!
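This saturation can be checked numerically. The following sketch (C, with the parallel fraction found above; the processor counts swept are illustrative) evaluates Amdahl's law and shows speedup flattening well below the processor count.

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - alpha) + alpha / n). */
double speedup(double alpha, int n)
{
    return 1.0 / ((1.0 - alpha) + alpha / n);
}

int main(void)
{
    double alpha = 0.9975;   /* parallel fraction from Example I */
    for (int n = 1; n <= 1024; n *= 4)
        printf("n = %4d  speedup = %7.2f\n", n, speedup(alpha, n));
    /* Speedup saturates near 1/(1 - alpha) = 400 no matter how
     * many processors are added. */
    return 0;
}
```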

Example II

• Assume:
  – 32-processor MIMD
  – 400 ns to access remote memory
  – 1 GHz processor clock (1 ns cycle)
  – IPC is 2 without communication
• What is the IPC if 0.2% of instructions involve communication?

$$\mathrm{CPI} = \mathrm{CPI}_{\mathrm{ideal}} + \text{avg. comm. stalls}, \qquad \mathrm{CPI} = 1/\mathrm{IPC}$$

$$\text{avg. comm. stalls} = \text{comm. rate} \times \text{comm. cost} = 0.2\% \times \frac{400\ \mathrm{ns}}{1\ \mathrm{ns}} = 0.8$$

$$\mathrm{IPC} = \frac{1}{\mathrm{CPI}} = \frac{1}{0.5 + 0.8} = \frac{1}{1.3} \approx 0.77$$

Without communication the machine runs at CPI = 0.5, so the communication-free machine is $1.3 / 0.5 = 2.6$ times faster.

Communication cost limits the performance efficiency of distributed computing!
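The same arithmetic generalizes to other communication rates. This sketch (C, using the slide's parameters; the sweep over rates is illustrative) shows how quickly even a small fraction of communicating instructions erodes IPC.

```c
#include <stdio.h>

/* Effective CPI = CPI_ideal + comm_rate * comm_cost_cycles;
 * IPC = 1 / CPI. Parameters follow Example II. */
int main(void)
{
    double cpi_ideal = 0.5;     /* IPC of 2 without communication    */
    double comm_cost = 400.0;   /* 400 ns remote access / 1 ns cycle */
    for (int r = 0; r <= 10; r += 2) {
        double rate = r / 1000.0;               /* 0.0% .. 1.0%      */
        double cpi  = cpi_ideal + rate * comm_cost;
        printf("comm rate = %.1f%%  IPC = %.2f\n",
               rate * 100.0, 1.0 / cpi);
    }
    return 0;
}
```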

Summary

• Motivations for distributed computing
• Flynn's computer models
• Two types of MIMDs
  – Shared memory
    · Centralized shared memory, e.g., SMP
    · Distributed shared memory, e.g., NUMA
  – Private memory, e.g., cluster
• Performance of MIMD
  – Parallelism
  – Communication cost
