Review on Parallel and Distributed Computing

Scholars Journal of Engineering and Technology (SJET), ISSN 2321-435X
Sch. J. Eng. Tech., 2013; 1(4):218-225
©Scholars Academic and Scientific Publisher (An International Publisher for Academic and Scientific Resources)
www.saspublisher.com

Review Article

Inderpal Singh
Computer Science and Engineering Department, DAV Institute of Engineering and Technology, Jalandhar, India
*Corresponding author: Inderpal Singh

Abstract: Parallel and distributed computing is a complex and fast-evolving research area. In its short 50-year history, the mainstream parallel computer architecture has evolved from Single Instruction Multiple Data stream (SIMD) to Multiple Instruction Multiple Data stream (MIMD), and further to loosely coupled computer clusters; now it is about to enter the Computational Grid era. Algorithm research has also changed accordingly over the years. However, the basic principles of parallel computing, such as inter-process and inter-processor communication schemes, parallelism methods and performance models, remain the same. In this paper, a short introduction to parallel and distributed computing is given, covering the definition, motivation, various types of models for abstraction, and recent trends in mainstream parallel computing.

Keywords: Single Instruction Multiple Data stream (SIMD), Multiple Instruction Multiple Data stream (MIMD), inter-processor communication, loosely coupled

INTRODUCTION

Distributed And Parallel Computing
Distributed computing is the process of aggregating the power of several computing entities, which are logically distributed and may even be geographically distributed, to collaboratively run a single computational task in a transparent and coherent way, so that they appear as a single, centralized system.

Parallel computing is the simultaneous execution of the same task on multiple processors in order to obtain faster results. It is widely accepted that parallel computing is a branch of distributed computing, and it puts the emphasis on generating large computing power by employing multiple processing entities simultaneously for a single computation task. These multiple processing entities can be a multiprocessor system, which consists of multiple processors in a single machine connected by bus or switch networks, or a multicomputer system, which consists of several independent computers interconnected by telecommunication networks or computer networks.

Besides parallel computing, distributed computing has also gained significant development in enterprise computing. The main difference between enterprise distributed computing and parallel distributed computing is that the former mainly targets the integration of distributed resources to collaboratively finish some task, while the latter targets the use of multiple processors simultaneously to finish a single task as fast as possible. Because this paper focuses on high-performance computing using parallel distributed computing, enterprise distributed computing is not covered, and the term "Parallel Computing" is used throughout.
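As a concrete, if simplified, illustration of the definition above, the following sketch runs one summation task simultaneously on several processing entities of a shared-memory multiprocessor. It assumes POSIX threads; the thread count, array size and the helper name worker are illustrative choices, not anything prescribed by the paper.

/* Minimal sketch: one task (summing an array) executed simultaneously
 * by several processors of a shared-memory multiprocessor via POSIX
 * threads. NPROC and N are arbitrary illustrative values. */
#include <pthread.h>
#include <stdio.h>

#define NPROC 4               /* number of processing entities */
#define N     1000000         /* problem size */

static double data[N];
static double partial[NPROC]; /* one slot per thread, so no locking is needed */

static void *worker(void *arg)
{
    long id = (long)arg;
    long lo = id * (N / NPROC);
    long hi = (id == NPROC - 1) ? N : lo + N / NPROC;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;           /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NPROC];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    for (long p = 0; p < NPROC; p++)
        pthread_create(&tid[p], NULL, worker, (void *)p);
    for (long p = 0; p < NPROC; p++)
        pthread_join(tid[p], NULL);

    double total = 0.0;
    for (long p = 0; p < NPROC; p++)
        total += partial[p];
    printf("sum = %f\n", total);
    return 0;
}

Compiled with, for example, cc -pthread, the program prints sum = 1000000.000000; the same pattern with processes and explicit messages instead of threads would correspond to the multicomputer case.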
Motivation Of Parallel Computing
Parallel computing is widely used to reduce the computation time for complex tasks. Much industrial and scientific research and practice involves complex large-scale computation, which without parallel computers would take years or even tens of years to complete. It is more than desirable to have the results available as soon as possible, and for many applications late results often imply useless results. A typical example is weather forecasting, which features uncommonly complex computation and large datasets; it also has strict timing requirements, because of its forecast nature.

Parallel computers are also used in many areas to achieve larger problem scale. Take Computational Fluid Dynamics (CFD) as an example. While a serial computer can work on one unit area, a parallel computer with N processors can work on N units of area, or achieve N times the resolution on the same unit area. In numeric simulation, larger resolution helps reduce errors, which are inevitable in floating-point calculation; a larger problem domain often means closer analogy with realistic experiments and better simulation results.

As predicted by Moore's Law [1], the computing capability of a single processor has increased exponentially. This has been shown in the incredible advancement of microcomputers in the last few decades. The performance of today's desktop PC costing a few hundred dollars can easily surpass that of a million-dollar parallel supercomputer built in the 1960s. It might be argued that parallel computers will phase out with this increase in single-chip processing capability. However, three main factors have been pushing parallel computing technology into further development.

First, although some commentators have speculated that sooner or later serial computers will meet or exceed any conceivable need for computation, this is only true for some problems. There are others where exponential increases in processing power are matched or exceeded by exponential increases in complexity as the problem size increases. There are also new problems arising to challenge the extreme computing capacity. Parallel computers are still the widely used, and often the only, solution to tackle these problems.

Second, at least with current technologies, the exponential increase in serial computer performance cannot continue forever, because of physical limitations to the integration density of chips. In fact, the foreseeable physical limitations will be reached soon, and there is already a sign of slowdown in the pace of single-chip performance growth. Major microprocessor vendors have run out of room with most of their traditional approaches to boosting CPU performance, namely driving clock speeds and straight-line instruction throughput higher. Further improvement in performance will rely more on architectural innovation, including parallel processing. Intel and AMD have already incorporated hyperthreading and multicore architectures in their latest offerings [2].

Finally, to generate the same computing power, a single-processor machine will always be much more expensive than a parallel computer. The cost of a single CPU grows faster than linearly with speed. With recent technology, the hardware of parallel computers is easy to build with off-the-shelf components and processors, reducing development time and cost. Thus parallel computers, especially those built from off-the-shelf components, can have their cost grow linearly with speed. It is also much easier to scale processing power with a parallel computer. Recent technology even supports using old computers and shared components as part of a parallel machine, further reducing the cost. With a further decrease in the development cost of parallel computing software, the only impediment to fast adoption of parallel computing will be eliminated.

Theoretical Model Of Parallel Computing
A machine model is an abstraction of realistic machines that ignores some trivial issues which usually differ from one machine to another. In the parallel computing context, a model of a parallel machine allows algorithm designers and implementers to ignore issues such as synchronization and communication methods and to focus on the exploitation of concurrency.

The widely used theoretical model of parallel computers is the Parallel Random Access Machine (PRAM). A simple PRAM capable of doing add and subtract operations is described in Fortune's paper [3]. A PRAM is an extension of the traditional Random Access Machine (RAM) model used for serial computation. It includes a set of processors, each with its own program counter and local memory, which can perform computation independently. All processors communicate via a shared global memory and a processor activation mechanism similar to UNIX process forking. Initially only one processor is active; it activates other processors, and these new processors in turn activate more processors. The execution finishes when the root processor executes a HALT instruction. Readers are advised to read the original paper for a detailed description.

Such a theoretical machine, although far from complete from a practical perspective, provides most of the details needed for algorithm design and analysis. Each processor has its own local memory for computation, while a global memory is provided for inter-processor communication. Indirect addressing is supported to largely increase flexibility. Using the FORK instruction, a central root processor can recursively activate a hierarchical processor family; each newly created processor starts with a base built by its parent processor. Since each processor is able to read from the input registers, task division can be accomplished. Such a theoretical model has inspired many realistic hardware and software systems, such as PVM [4], introduced later in this paper.
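The FORK-style activation mechanism can be loosely mimicked in ordinary threaded code. The sketch below is only an analogy, not the formal model from Fortune's paper: it uses POSIX threads in place of PRAM processors, and the processor count and binary-tree numbering are assumptions made for illustration. Each active "processor" activates two children, performs a small local computation in its private memory, and publishes its result into a shared global memory; the root's return plays the role of HALT.

/* Toy PRAM-flavoured sketch: each "processor" has local state, shares a
 * global memory array, and recursively activates two children, mimicking
 * the FORK mechanism. Execution ends when the root's subtree has returned. */
#include <pthread.h>
#include <stdio.h>

#define NPROC 15                 /* processors 0..14, a complete binary tree */

static int global_mem[NPROC];    /* shared global memory */

static void *processor(void *arg)
{
    long id = (long)arg;         /* each processor knows its own index */
    long left = 2 * id + 1, right = 2 * id + 2;
    pthread_t lt, rt;
    int have_l = left < NPROC, have_r = right < NPROC;

    /* FORK: activate children, which in turn activate their children */
    if (have_l) pthread_create(&lt, NULL, processor, (void *)left);
    if (have_r) pthread_create(&rt, NULL, processor, (void *)right);

    int local = 1;               /* local computation in private memory */

    if (have_l) pthread_join(lt, NULL);
    if (have_r) pthread_join(rt, NULL);

    /* write the result to the shared global memory; the children have
     * finished, so reading their slots is race-free in this simple setting */
    global_mem[id] = local
        + (have_l ? global_mem[left] : 0)
        + (have_r ? global_mem[right] : 0);
    return NULL;
}

int main(void)
{
    pthread_t root;
    pthread_create(&root, NULL, processor, (void *)0L);
    pthread_join(root, NULL);    /* root returning stands in for HALT */
    printf("processors activated: %d\n", global_mem[0]);
    return 0;
}

Here the recursive activation builds the hierarchical processor family described above, and the shared array global_mem plays the role of the PRAM global memory used for inter-processor communication.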
Architectural Models Of Parallel Computer
Despite a single standard theoretical model, there exist a number of architectures for parallel computers. The diversity of models is partially shown in Figure 1-1. This subsection briefly covers the classification of parallel computers based on their hardware architectures. One classification scheme, based on memory architecture, classifies parallel machines into Shared Memory architecture and Distributed Memory architecture; another famous scheme, based on the observation of instruction and data streams, classifies parallel machines according to Flynn's taxonomy.
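To make the memory-architecture distinction concrete: the shared-memory style is essentially the earlier thread sketch, in which all workers read and write the same arrays, whereas the fragment below is a minimal sketch of the distributed-memory style, assuming POSIX fork() and pipes (the data values and split point are arbitrary). After fork(), parent and child own separate copies of the data, so partial results must be exchanged as explicit messages.

/* Sketch of the distributed-memory style: parent and child have separate
 * address spaces after fork(), so the child's partial sum must be sent
 * back explicitly (here over a pipe, standing in for a network link). */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int fd[2];
    pipe(fd);

    if (fork() == 0) {              /* "remote" node: private copy of data */
        int lower = 0;
        for (int i = 0; i < 4; i++)
            lower += data[i];       /* these writes are invisible to the parent */
        write(fd[1], &lower, sizeof lower);   /* explicit message */
        _exit(0);
    }

    int upper = 0, lower = 0;
    for (int i = 4; i < 8; i++)
        upper += data[i];
    read(fd[0], &lower, sizeof lower);        /* receive the partial result */
    wait(NULL);
    printf("total = %d\n", upper + lower);    /* prints 36 */
    return 0;
}

The same split of work, run under the two memory architectures, thus differs only in how intermediate results travel: through shared variables in one case, through messages in the other.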
