Parallel Programming. Multicore Processors TODAY, Many-Core Co-Processors READY

Solutions for parallel programming work best on Intel® Architecture. Programming for multicore processors today enables scaling forward to many-core co-processors.

Multicore processors
The versatility of Intel® Architecture has increased, and has remained accessible, as the number of hardware threads and cores has risen over the past decade. New tools and programming models only make it better. Intel® Xeon® processors are designed for intelligent performance and smart energy efficiency.

Many-core co-processors
Co-processors based on Intel® Many Integrated Core (Intel® MIC) Architecture have high core counts, each core with multiple hardware threads and wider vector units. These can be ideal for highly parallel applications. The familiarity of established programming models and tools for Intel Architecture processors makes parallel programming widely accessible.

One source base, tuned to many targets
Solutions exist today. Programming and optimizing for multicore is the best path to being many-core ready.

Code your way
Innumerable programming languages, models, and tools support Intel Architecture. Use them with Intel multicore processors and many-core co-processors. Here are some code samples implementing vector, task, and cluster parallelism. All are available for multicore and many-core. Mix and match to suit your needs.

Summing vector elements in C using OpenMP – openmp.org

    #pragma omp parallel for reduction(+: s)
    for (int i = 0; i < n; i++) {
        s += x[i];
    }

Sum in Fortran, using the co-array feature – intel.com/software/products

    REAL SUM[*]
    CALL SYNC_ALL( WAIT=1 )
    DO IMG = 2, NUM_IMAGES()
        IF (IMG == THIS_IMAGE()) THEN
            SUM = SUM + SUM[IMG-1]
        ENDIF
        CALL SYNC_ALL( WAIT=IMG )
    ENDDO

Per-element multiply in C++ using Intel® Array Building Blocks – intel.com/go/arbb

    void products( const arbb::dense<arbb::f32>& a,
                   const arbb::dense<arbb::f32>& b,
                   arbb::dense<arbb::f32>& c) {
        c = a * b;
    }

Parallel function invocation in C using Intel® Cilk™ Plus – cilk.org

    cilk_for (int i = 0; i < n; ++i) {
        Foo(a[i]);
    }

Dot product in Fortran using OpenMP – openmp.org

    !$omp parallel do reduction ( + : adotb )
    do j = 1, n
        adotb = adotb + a(j) * b(j)
    end do
    !$omp end parallel do

Parallel function invocation in C++ using Intel® Threading Building Blocks – threadingbuildingblocks.org

    parallel_for (0, n,
        [=](int i) { Foo(a[i]); }
    );

Per-element multiply in C using OpenCL – intel.com/go/opencl

    kernel void
    dotprod( global const float *a,
             global const float *b,
             global float *c) {
        int myid = get_global_id(0);
        c[myid] = a[myid] * b[myid];
    }

MPI code in C for clusters – intel.com/go/mpi

    for (d = 1; d < ntasks; d++) {
        rows = (d <= extra) ? avrow+1 : avrow;
        printf(" sending %d rows to task %d\n", rows, d);
        MPI_Send(&offset, 1, MPI_INT, d, mtype, MPI_COMM_WORLD);
        MPI_Send(&rows, 1, MPI_INT, d, mtype, MPI_COMM_WORLD);
        MPI_Send(&a[offset][0], rows*NCA, MPI_DOUBLE, d, mtype, MPI_COMM_WORLD);
        MPI_Send(&b, NCA*NCB, MPI_DOUBLE, d, mtype, MPI_COMM_WORLD);
        offset = offset + rows;
    }

Matrix multiply in Fortran using Intel® Math Kernel Library – intel.com/software/products

    call DGEMM(transa, transb, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc)
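To show how one of these snippets drops into a full program, here is a minimal sketch of the OpenMP sum above embedded in a complete C file. The array size, fill values, and file name are illustrative assumptions, not part of the original sample; build with any OpenMP-enabled compiler, for example gcc -fopenmp sum.c or the equivalent OpenMP flag of the Intel compiler.

    /* Minimal sketch: the OpenMP reduction sum in a complete program.
       The array size and fill values are placeholders for real data. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int n = 1000000;
        float *x = malloc(n * sizeof(float));
        float s = 0.0f;

        for (int i = 0; i < n; i++)        /* sample data: every element is 1 */
            x[i] = 1.0f;

        /* Each thread accumulates a private partial sum; OpenMP combines
           the partials into s when the parallel loop completes. */
        #pragma omp parallel for reduction(+: s)
        for (int i = 0; i < n; i++)
            s += x[i];

        printf("sum = %f\n", s);           /* expect 1000000.0 */
        free(x);
        return 0;
    }

The same source builds unchanged for multicore Intel Xeon processors; the reduction clause is what lets the loop scale with the number of hardware threads.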
For more information go to: intel.com/software/products

Copyright © 2011 Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Inside, and Cilk are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.
