Heterogeneous Computing
Yu Zhang
Course web site: http://staff.ustc.edu.cn/~yuzhang/pldpa
Heterogeneous Computing and Intel oneAPI

Resources
- https://www.linux.org/
  • TLB, page table management
- https://github.com/numactl/numactl (NUMA support for Linux)
- https://www.khronos.org/: OpenCL, SYCL, SPIR, ...
- Intel (oneAPI), Nvidia (CUDA)
- https://github.com/S4Plus/ABC/tree/master/oneAPI
- Book
  • The Art of Multiprocessor Programming

Multiprocessor-Multicores

Modern Systems
• Almost all current systems have more than one CPU/core
• Multiprocessor
  - More than one physical CPU
  - SMP: Symmetric multiprocessing
    • Each CPU is identical to every other
    • Each has the same capabilities and privileges
  - Each CPU is plugged into the system via its own slot/socket
• Multicore
  - More than one CPU in a single physical package
  - Multiple CPUs connect to the system via a shared slot/socket
  - Currently most multicores are SMP

SMP architecture
• Each processor in the system can
  - perform the same tasks: execute the same set of instructions, access memory, interact with devices
  - connect to the system in the same way: interact with memory/devices via communication over the shared bus/interconnect
• Unrestricted concurrent access can easily lead to chaos
  - This is why we need synchronization

Multiprocessor-Multicores architecture
• During the early/mid 2000s, CPUs became multicores
  - Clock speeds could no longer increase exponentially
  - But: transistor density was still increasing
• SMP with multicores
• cat /proc/cpuinfo
  (There are 64 processors in this machine; shown below is the cpuinfo of processor 0.)

yuzhang@user-SYS-2049U-TR4:~$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
stepping : 4
microcode : 0x2006a08
cpu MHz : 1000.354
cache size : 22528 KB
physical id : 0
siblings : 16
core id : 0
cpu cores : 16
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips : 4200.00
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management :
Multiprocessor-Multicores architecture
• What does this mean for the OS?
  - Mostly hidden by HW
  - OS sees N CPUs that are identical
• But the similarity does not always hold for memory
  - More on that in a minute
  - L1 cache, L2 cache, last-level cache, ...

Latency Numbers Every Programmer Should Know
• Latency comparison numbers (~2012), by Jeff Dean:
  execute typical instruction           1 nanosec (1/1,000,000,000 sec)
  fetch from L1 cache memory            0.5 nanosec
  branch misprediction                  5 nanosec
  fetch from L2 cache memory            7 nanosec
  mutex lock/unlock                     25 nanosec
  fetch from main memory                100 nanosec
  send 2K bytes over 1 Gbps network     20,000 nanosec
  read 1 MB sequentially from memory    250,000 nanosec
  fetch from new disk location (seek)   8,000,000 nanosec
  read 1 MB sequentially from disk      20,000,000 nanosec
  send packet US to Europe and back     150 milliseconds = 150,000,000 nanosec
• https://research.google/people/jeff/
• https://gist.github.com/hellerbarde/2843375
• https://i.imgur.com/k0t1e.png

TLB: translation lookaside buffer
(figure: x86-64 architecture, 48 address lines; a TLB lookup is faster, walking the page table structure is slower)
• A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory location. It is part of the chip's memory-management unit (MMU). The TLB stores the recent translations of virtual memory to physical memory and can be called an address-translation cache.

Page Table Structure
• A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses.
Cross CPU Communication (Shared Memory)
• OS must still track the state of the entire system
  - Global data structure updated by each core
  - cat /proc/loadavg provides a look at the load average with regard to both CPU and IO over time, as well as additional data used by uptime and other commands
    Example output: 0.08 0.06 0.10 1/442 8347
    • 0.08 0.06 0.10: CPU and IO utilization over the last one-, five-, and 15-minute periods
    • 1/442: the number of currently running processes and the total number of processes
    • 8347: the last process ID used

Cross CPU Communication (Shared Memory)
• OS must still track the state of the entire system
  - Global data structure updated by each core
• Traditional approach
  - Single copy of data, protected by locks
  - Bad scalability: every CPU constantly takes a global lock to update its own state
• Modern approach
  - Replicate state across all CPUs/cores
  - Each core updates its own local copy (so NO locks!)
  - Contention only when state is read
    • A global lock is required, but concurrent reads are rare

Cross CPU Communication (Signals)
• System allows CPUs to explicitly signal each other
  - Two approaches: notifications and cross-calls
  - Almost always built on top of interrupts
    • x86: Inter-Processor Interrupts (IPIs)
• Notifications
  - CPU is notified that "something" has happened
  - No other information
  - Mostly used to wake up a remote CPU
• Cross Calls
  - The target CPU jumps to a specified instruction
    • Source CPU makes a function call that executes on the target CPU
  - Synchronous or asynchronous?
    • Can be both, up to the programmer

CPU Interconnects
• Mechanism by which CPUs communicate
  - Old way: Front Side Bus (FSB)
    • Slow, with limited scalability
    • With potentially 100s of CPUs in a system, a bus won't work
  - Modern approach: exploit HPC networking techniques
    • Embed a true interconnect into the system
    • Intel: QPI (QuickPath Interconnect)
    • AMD: HyperTransport
• Interconnects allow point-to-point communication
  - Multiple messages can be sent in parallel if they don't intersect

Multiprocessing and Memory
• Shared memory is by far the most popular approach to multiprocessing
  - Each CPU can access all of a system's memory
  - Conflicting accesses are resolved via synchronization (locks)
  - Benefits
    • Easy to program, allows direct communication
  - Disadvantages
    • Limits scalability and performance
    • Requires more advanced caching behavior
  - Systems contain a cache hierarchy with different scopes

Multiprocessor Caching
• On multicore CPUs some (but not all) caches are shared
  - Each core has its own private L1 cache
  - L2 cache can either be private to a core or shared between cores
  - L3 cache is almost always shared between cores
  - Caches are not shared across physical CPU dies
• What if two CPUs update the same memory location stored in their L1 caches?
  - Shared memory systems require an absolute ordering of operations
  - Cache coherency ensures this ordering
    • Implemented in hardware to ensure that memory updates are propagated throughout the entire system
    • Utilizes the CPU interconnect for communication

Memory Issues
• As core count increases, shared memory becomes harder
  - Increasingly difficult for HW to provide shared-memory behavior to all CPU cores
    • Manycore CPUs: need to cross other cores to access memory
    • Some cores are closer to memory and thus faster
    • Memory is slow or fast depending on which CPU is accessing it
  - This is called Non-Uniform Memory Access (NUMA)
(figure: Dell R710)

NUMA: Non-Uniform Memory Access
• https://github.com/numactl/numactl (NUMA support for Linux)
• Memory is organized in a non-uniform manner
  - It's closer to some CPUs than others
  - Far-away memory is slower than close memory
  - Not required to be cache coherent, but usually is
    • ccNUMA: Cache-Coherent NUMA
• Typical organization is to divide the system into "zones"
  - A zone usually contains a CPU socket/slot and a portion of the system memory
  - Memory is "local" if it's in the CPU's zone
    • Fast to access
    • Accessing memory in the local zone does not impact performance in other zones
  - Interconnect is point to point

Dealing with NUMA
• Programming a NUMA system is hard
  - Ultimately it's a failed abstraction
  - Goal: make all memory ops the same
    • But they aren't, because some are slower
    • AND the abstraction hides the details
  - Result: very few people explicitly design