CS 61C: Great Ideas in Computer Architecture (Machine Structures)
Fall 2010 -- Lecture #17, 10/8/10
Instructors: Randy H. Katz, David A. Patterson
http://inst.eecs.Berkeley.edu/~cs61c/fa10

Agenda
• Kinds of Parallelism
• Administrivia
• Technology Break
• Amdahl’s Law
Alternative Kinds of Parallelism: The Programming Viewpoint
• Job-level parallelism/process-level parallelism
  – Running independent programs on multiple processors simultaneously
  – Example?
• Parallel processing program
  – Single program that runs on multiple processors simultaneously
  – Example?
Alternative Kinds of Parallelism: Hardware vs. Software
• Concurrent software can also run on serial hardware
• Sequential software can also run on parallel hardware
• Focus is on parallel processing software: sequential or concurrent software running on parallel hardware

Alternative Kinds of Parallelism: The Processor Viewpoint
• Cluster: set of computers interconnected across a local area network
  – Like a multiprocessor, but with no shared memory
• Multicore microprocessor
  – Microprocessor containing multiple processors (“cores”) in a single IC, with shared memory
• Our “warehouse-scale” computers are multicore clusters!
Alternative Kinds of Parallelism: Shared Memory Multiprocessor
[Figure: multiple processors connected to a single shared memory]

Alternative Kinds of Parallelism: Computer Cluster
[Figure: computers, each with a network interface (NI), connected by a Local Area Network (LAN) or cluster interconnect; no shared memory]
Alternative Kinds of Parallelism: Single Instruction/Single Data Stream
• Single Instruction, Single Data stream (SISD)
  – Sequential computer that exploits no parallelism in either the instruction or data streams. Examples of SISD architecture are traditional uniprocessor machines
[Figure: a single processing unit operating on a single instruction and data stream]

Alternative Kinds of Parallelism: Multiple Instruction/Single Data Stream
• Multiple Instruction, Single Data streams (MISD)
  – Computer that exploits multiple instruction streams against a single data stream, for data operations that can be naturally parallelized. For example, certain kinds of array processors
  – No longer commonly encountered; mainly of historical interest only
Alternative Kinds of Parallelism: Single Instruction/Multiple Data Stream
• Single Instruction, Multiple Data streams (SIMD)
  – Computer that applies a single instruction stream to multiple data streams, for operations that may be naturally parallelized, e.g., an array processor or Graphics Processing Unit (GPU)

Alternative Kinds of Parallelism: Multiple Instruction/Multiple Data Streams
• Multiple Instruction, Multiple Data streams (MIMD)
  – Multiple autonomous processors simultaneously executing different instructions on different data. Clusters/distributed systems are generally recognized to be MIMD architectures, exploiting either a single shared memory space or a distributed memory space
Flynn Taxonomy
• In 2010, SIMD and MIMD most commonly encountered
• Most common parallel processing programming style: Single Program Multiple Data
  – Single program that runs on all processors of an MIMD
  – Cross-processor execution coordination through conditional expressions (save for Dave and thread parallelism)
• SIMD (aka hw-level data parallelism): specialized function units for handling lock-step calculations involving arrays
  – Scientific computing, signal processing, multimedia (audio/video processing)
• More on SIMD on Monday

SIMD Architectures
• Data parallelism: executing one operation on multiple data streams
• Example to provide context:
  – Multiplying a coefficient vector by a data vector (e.g., in filtering):
    y[i] := c[i] × x[i], 0 ≤ i < n
• Sources of performance improvement:
  – One instruction is fetched & decoded for the entire operation
  – Multiplications are known to be independent
  – Pipelining/concurrency in memory access as well
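The data-parallel operation above can be sketched in plain Python. This is a sequential sketch of what a SIMD unit would do in lock step; the function name is illustrative, not from the slides:

```python
# y[i] = c[i] * x[i] for 0 <= i < n: a SIMD unit would apply one multiply
# instruction to many elements at once; here we express the same
# computation as a sequential element-wise product.

def vector_multiply(c, x):
    """Element-wise product y[i] = c[i] * x[i]."""
    assert len(c) == len(x)
    # Each multiplication is independent, which is exactly what
    # lets SIMD hardware perform them all in parallel.
    return [ci * xi for ci, xi in zip(c, x)]

c = [2.0, 0.5, 1.0, 4.0]   # coefficient vector
x = [1.0, 8.0, 3.0, 0.25]  # data vector
y = vector_multiply(c, x)
print(y)  # [2.0, 4.0, 3.0, 1.0]
```

Because the element multiplications have no dependences, hardware is free to fetch one instruction and apply it to the whole array.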
Midterm Results: Scores
• total: min 13, max 84, median 65, mean 63.1, st. dev. 14.4
• 1/4 got >= 75 out of 85
• 1/2 got >= 65 out of 85
• 2/3 got >= 60
• 3/4 got >= 55
• 80% got >= 50
• 90% got >= 45
Midterm Re-Grades: The Rules
• Written grading appeals only!
• If you feel your exam was incorrectly graded, explain why and attach a legible write-up to the examination booklet
• You have ONE week from Tuesday to hand it in, in class, lab, or discussion
• We want to be fair and correct mistakes, but are unlikely to change partial credit as assigned
• NOTE: We reserve the right to re-examine the whole examination should you turn it in for a re-grade.

Course Kicks Into High Gear
• Attend Tuesday’s discussion to learn about exam solutions and grading scheme
• This week’s EC2 Lab
• Next week’s SIMD Lab
• EC2 Project #2 (Due Saturday, 10/23, 1 Second to Midnight)
• Following week’s Thread Parallelism Lab
Agenda
• Kinds of Parallelism
• Administrivia
• Technology Break
• Amdahl’s Law
Big Idea: Amdahl’s Law
• Speedup due to enhancement E is
  Speedup w/ E = Exec time w/o E / Exec time w/ E
• Suppose that enhancement E accelerates a fraction F (F < 1) of the task by a factor S (S > 1), and the remainder of the task is unaffected. Then:
  Execution Time w/ E = Execution Time w/o E × [ (1-F) + F/S ]
  Speedup w/ E = 1 / [ (1-F) + F/S ]
  where (1-F) is the non-sped-up part and F/S is the sped-up part
• If the portion of the program that can be parallelized is small, then the speedup is limited: the non-parallel portion limits the performance
• Example: the execution time of half of the program can be accelerated by a factor of 2. What is the program speed-up overall?
  Speedup w/ E = 1 / (0.5 + 0.5/2) = 1 / (0.5 + 0.25) = 1.33
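The formula can be wrapped in a small helper to reproduce the worked example. This is a minimal sketch; the function name is ours, not from the slides:

```python
# Amdahl's Law: speedup = 1 / ((1 - F) + F / S), where F is the fraction
# of execution time affected by the enhancement and S is the factor by
# which that fraction is accelerated.

def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the task is sped up by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Slide example: half the program (F = 0.5) accelerated by a factor of 2 (S = 2).
print(round(amdahl_speedup(0.5, 2.0), 2))  # 1.33
```

Note that even as S grows without bound, the speedup approaches 1 / (1 - F), which is why the non-enhanced portion dominates.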
Example #1: Amdahl’s Law
Speedup w/ E = 1 / [ (1-F) + F/S ]
• Consider an enhancement which runs 20 times faster, but which is only usable 25% of the time:
  Speedup w/ E = 1/(.75 + .25/20) = 1.31
• What if it is usable only 15% of the time?
  Speedup w/ E = 1/(.85 + .15/20) = 1.17
• Amdahl’s Law tells us that to achieve linear speedup with 100 processors, none of the original computation can be scalar!
• To get a speedup of 90 from 100 processors, the percentage of the original program that could be scalar would have to be 0.1% or less:
  Speedup w/ E = 1/(.001 + .999/100) = 90.99

Parallel Speed-up Example
• Compute the scalar sum Z0 + Z1 + … + Z10 (the non-parallel part) plus the element-wise sum of two 10×10 matrices X and Y, partitioned 10 ways onto 10 parallel processing units (the parallel part)
• 10 “scalar” operations (non-parallelizable)
• 100 parallelizable operations
• 110 operations total
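Example #1’s arithmetic can be checked directly from the formula above:

```python
# speedup = 1 / ((1 - F) + F / S), with S = 20 for the first two cases.
print(round(1 / (0.75 + 0.25 / 20), 2))     # 1.31  (usable 25% of the time)
print(round(1 / (0.85 + 0.15 / 20), 2))     # 1.17  (usable 15% of the time)
# Speedup of ~91 from 100 processors requires only 0.1% scalar code:
print(round(1 / (0.001 + 0.999 / 100), 2))  # 90.99
```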
Example #2: Amdahl’s Law
Speedup w/ E = 1 / [ (1-F) + F/S ]
• Consider summing 10 scalar variables and two 10 by 10 matrices (matrix sum) on 10 processors:
  Speedup w/ E = 1/(.091 + .909/10) = 1/0.1819 = 5.5
• What if there are 100 processors?
  Speedup w/ E = 1/(.091 + .909/100) = 1/0.10009 = 10.0
• What if the matrices are 100 by 100 (or 10,010 adds in total) on 10 processors?
  Speedup w/ E = 1/(.001 + .999/10) = 1/0.1009 = 9.9
• What if there are 100 processors?
  Speedup w/ E = 1/(.001 + .999/100) = 1/0.01099 = 91

Scaling
• Getting good speedup on a multiprocessor while keeping the problem size fixed is harder than getting good speedup by increasing the size of the problem.
  – Strong scaling: when speedup can be achieved on a multiprocessor without increasing the size of the problem
  – Weak scaling: when speedup is achieved on a multiprocessor by increasing the size of the problem proportionally to the increase in the number of processors
• Load balancing is another important factor: just a single processor with twice the load of the others cuts the speedup almost in half
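The scaling behavior in Example #2 can be sketched by deriving F from the operation counts and applying the formula. The helper name is illustrative:

```python
# F = (parallel ops) / (total ops); speedup = 1 / ((1 - F) + F / S),
# where S is the number of processors.

def speedup(parallel_ops, scalar_ops, processors):
    f = parallel_ops / (parallel_ops + scalar_ops)  # parallelizable fraction
    return 1.0 / ((1.0 - f) + f / processors)

# 10 scalar adds + 100 parallel adds (two 10x10 matrices):
print(round(speedup(100, 10, 10), 1))     # 5.5
print(round(speedup(100, 10, 100), 1))    # 10.0
# 10 scalar adds + 10,000 parallel adds (two 100x100 matrices):
print(round(speedup(10000, 10, 10), 1))   # 9.9
print(round(speedup(10000, 10, 100), 1))  # 91.0
```

Growing the matrices from 10×10 to 100×100 shrinks the scalar fraction, so 100 processors yield a speedup of 91 instead of 10: this is weak scaling in action.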
Summary
• Flynn Taxonomy of Parallel Architectures
  – SIMD: Single Instruction Multiple Data
  – MIMD: Multiple Instruction Multiple Data
  – SISD: Single Instruction Single Data
  – MISD: Multiple Instruction Single Data
• Amdahl’s Law
  – Parallel code speed-up limited by the non-parallelized portion of the code
  – Strong scaling is hard to achieve