
ISSN 2319-8885 Vol.04,Issue.37, September-2015,

Pages:8054-8066

www.ijsetr.com

Computing Technologies and Practical Utility

K. J. SARMA SMIEEE, Professor, Dept of Humanities and Sciences and Academic Cell, Malla Reddy Engineering College (Autonomous), Secunderabad, TS, India, E-mail:[email protected].

Abstract: Several technologies emerged by the end of the 20th century, driven by high-speed broadband, each with a number of applications in science, engineering, business, social systems, and government. These technologies brought revolutionary changes to communication and computational processes. In fact, advances in hardware and the investigation of sophisticated, rigorous mathematical methods are also responsible for a great shift in these developments. Some of these computing technologies are cluster, grid, cloud, mobile-cloud, multi-cloud, parallel, concurrent, distributed, DNA, mobile, high-performance, utility, global cooperative, functional, ubiquitous, pervasive, exploratory, quantum, scalable, and ubiquitous secure computing, as well as the computational grid. In this paper we review their uses and practical applications. Research into enhancing these technologies and their utility in various application areas has been growing at a fast pace. This review may stimulate further study of each technology, and of the convergence of all of them, in dealing with logistics-based problems of large dimensionality.

Keywords: Cluster, Grid, Cloud, Distributed, DNA, Mobile, High Performance, Ubiquitous, Pervasive, Quantum.

I. INTRODUCTION
Cluster computing is a topic of research among academics, the industry community, system designers, network developers, language designers, and technical forums. It also finds many business and production-management applications, and features in graduate student projects and faculty research. Clusters also have several scientific and engineering applications. A simple computer cluster may be just two connected personal computers, or it may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf [1] cluster, which may be built from a few personal computers to produce cost-effective computing. One early project of this kind was the 133-node Stone SouperComputer [2], whose developers used Linux and the Parallel Virtual Machine toolkit. Thus a computer cluster consists of a number of readily available computing nodes (e.g., personal computers used as servers) connected via a fast local area network, with the activities of the nodes orchestrated by "clustering middleware". Cluster computing can also be described as a fusion of the fields of parallel, high-performance, distributed, and high-availability computing.

Clusters need not be built from dedicated hardware. Desktop workstations can also become part of a cluster when they are not being used; a financial services firm, for example, probably has many high-powered workstations that sit idle overnight. With public cloud platforms, cloud instances can be created on demand, used as long as needed, and then shut down.
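The task-farming idea behind clustering middleware, where work is split into independent chunks and handed to worker nodes, can be sketched in a few lines. Here a Python `multiprocessing` pool on a single machine stands in for the nodes of a real cluster (where a framework such as MPI would do the dispatch); the function names and workload are invented for illustration.

```python
# A minimal sketch of the task-farming idea behind cluster middleware:
# independent work units are distributed to worker "nodes" and the
# results are gathered back. A process pool on one machine stands in
# for the physical nodes of a real cluster.
from multiprocessing import Pool

def simulate(task):
    """Stand-in for one unit of scientific work run on one node."""
    return task * task

def run_on_cluster(tasks, n_nodes=4):
    with Pool(processes=n_nodes) as pool:
        return pool.map(simulate, tasks)

if __name__ == "__main__":
    print(run_on_cluster(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key property, as in a real cluster, is that the tasks share no state, so adding nodes increases throughput without changing the program.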

One common deployment is a group of server machines installed in an organization's data center and formed into a cluster; the organization manages a significant number of servers and then tries to keep them busy to justify the investment. This makes sense only for organizations with a substantial ongoing requirement for running applications on a cluster. Researchers have since developed many other approaches. One is a cluster created using Windows HPC Server 2008 R2 containing a combination of traditional on-premises servers and on-demand cloud instances.

Fig.1.

The usage of computer technologies incurs the costs of processors as well as of other computer sub-systems (such as motherboards, memory, hard disks, and network cards). The standardization of network hardware and protocols has also enhanced confidence in using cluster computing. Since the invention of computers, there has been a requirement for ever greater processing power.

Copyright @ 2015 IJSETR. All rights reserved.

In fact, solving scientific problems of large dimensionality with good precision encouraged the need to improve processing power. A cluster designed to meet these needs is termed a "high performance" cluster. Not all problems are amenable to cluster solutions, however; suitability depends on the communication costs of the algorithm used to solve the problem. There is nevertheless a wide class of problems which can be solved effectively and at low expense using clusters. Parametric modeling is one such area, and "embarrassingly parallel" problems such as brute-force cracking of encryption keys are another. As we grow more dependent on computer systems, the cost of failures is increasing dramatically. Combined with computer systems' reputation for poor reliability, this creates a great impetus to develop and deploy cluster solutions that ensure computer systems can be used effectively, i.e., twenty-four hours a day. A cluster designed to meet these needs is a "high availability" cluster.
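The core of a high-availability cluster is a failure-detection loop: a monitor watches node heartbeats and promotes a standby when the primary misses its deadline. The sketch below is a toy illustration of that loop only; real HA stacks (e.g., Pacemaker) add quorum, fencing, and service migration, and all names here are invented, not from any specific product.

```python
# A toy sketch of heartbeat-based failover in a high-availability
# cluster: nodes report heartbeats, and a check promotes the standby
# once the primary has been silent longer than the timeout.

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before failover

class HACluster:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby
        self.last_beat = {}

    def heartbeat(self, node, now):
        self.last_beat[node] = now

    def check(self, now):
        """Fail over if the primary has gone silent for too long."""
        beat = self.last_beat.get(self.primary, 0.0)
        if now - beat > HEARTBEAT_TIMEOUT:
            self.primary, self.standby = self.standby, self.primary
        return self.primary

cluster = HACluster("node-a", "node-b")
cluster.heartbeat("node-a", now=0.0)
assert cluster.check(now=3.0) == "node-a"   # still within the deadline
assert cluster.check(now=9.0) == "node-b"   # missed deadline: failover
```

Simulated clock times are passed in explicitly so the failover decision is easy to test; a production monitor would use real timestamps and repeated probes before failing over.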

One feature of cluster systems is graceful degradation, whereby a system fails slowly and with prior warning, so that remedial action can be taken in time to avoid a catastrophe. We can also protect against failure by placing the nodes of a cluster in separate physical locations, so that even when man-made or natural disasters such as power failures, earthquakes, fires, floods, and riots occur, the computer system can continue to operate.

Some of the advantages of cluster computing are:
 Manageability: a cluster combines a large number of components to work as a single entity, which makes management easy.
 Single system image: the user of the cluster gets the feel of working with a single system even though he is working with a large number of components; in other words, the user manages a single system image.
 High availability: even if one component fails due to some technical problem, another component takes its place and the user can continue to work with the system, as the components are replicas of each other.

The disadvantages of cluster computing are:
 Programmability issues: the components may differ from each other in terms of software, and issues may arise when combining all of them into a single entity.
 Problems in finding faults: because we are dealing with a single entity, it can be hard to find out which of the components has a problem associated with it.
 Difficult for a layman to handle: as cluster computing involves merging different or similar components with different programmability, a non-professional person finds it difficult to manage [9].

II. GRID COMPUTING
Grid computing is the collection of computer resources from multiple locations to reach a common goal; the concept allows users to have computing on demand, according to need. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. Grid computing can be distinguished from conventional high-performance computing systems, such as cluster computing, in that the computers in a grid can each have their nodes set to perform different applications. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers [4]. Sometimes a single grid will be dedicated to a particular application, depending on the nature and complexity of the application, but grids are often constructed with general-purpose grid middleware software libraries, and grid sizes can be quite large [5]. Grids are a form of distributed computing wherein a "super virtual computer", composed of many networked, loosely coupled computers, is used to perform large tasks. For certain applications, "distributed" or "grid" computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a private or public network by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus [6]. The integration of grid resources and services on the Internet became convenient and flexible because of the combination of grid technologies and web services, which are used for various grid applications. To foster state-of-the-art research in the area of grid computing and applications, we must focus on all aspects of grid technologies and present novel results and solutions to various applications and challenges on grid platforms. Scientists use grid computing for their research, which also involves "resource sharing".

Fig.2.

Some of the advantages of grid computing are:
 Access to additional resources: in addition to CPU and storage resources, a grid can provide other resources as well.
 Resource balancing: a grid incorporates a large number of systems into a single system; for grid-enabled applications, the grid balances the load by scheduling grid jobs on machines that are showing low utilization.
 Reliability: the systems in a grid are cheap and geographically dispersed, so if there is a power or cooling failure at one site it will not affect another site; this gives high reliability, especially for real-time systems [11].
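The resource-balancing idea above, scheduling each grid job on the machine currently showing the lowest utilization, reduces to a simple greedy loop. The sketch below is a toy illustration with invented machine names and loads; real grid middleware (e.g., HTCondor) also matches job requirements against machine capabilities.

```python
# A toy grid scheduler illustrating resource balancing: each job is
# placed on the least-utilized machine, and that machine's load is
# updated before the next job is considered.

def schedule(jobs, machines):
    """jobs: list of (name, load cost); machines: dict name -> load.
    Returns a dict mapping each job to its assigned machine."""
    placements = {}
    for job, cost in jobs:
        target = min(machines, key=machines.get)  # least-utilized site
        placements[job] = target
        machines[target] += cost                  # account for the job
    return placements

machines = {"site-a": 0.7, "site-b": 0.2, "site-c": 0.4}
plan = schedule([("j1", 0.3), ("j2", 0.3)], machines)
print(plan)  # {'j1': 'site-b', 'j2': 'site-c'}
```

Note that after `j1` lands on `site-b`, its load rises to 0.5, so the scheduler sends `j2` to `site-c` instead of piling onto the same machine.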

On the other hand, the disadvantages of grid computing are:
 Not stable: grid software and standards are not stable when compared to other computing technologies; its standards are still being worked out.
 High-speed Internet connection required: gathering and assembling resources from geographically dispersed sites requires good bandwidth and a high-speed Internet connection, which results in high monetary cost.
 Different administrative domains: issues sometimes arise from sharing of resources among different domains, and additional tools (such as Opsware) are also required for proper management of the different environments.

Fig.3.

There are many applications of grid computing in organizations related to governments, international organizations, the military, education, global enterprises, and large corporations. One of the most obvious applications is in medicine. A doctor having access to a grid can use administrative databases, medical image archives, and specialized instruments like MRI machines, CAT scanners, and cardio-angiography devices. This enhances diagnosis, speeds the analysis of complex medical images, and enables life-critical applications such as tele-robotic surgery and remote cardiac monitoring.

III. CLOUD COMPUTING
Cloud computing is a computing term or metaphor that evolved in the late 20th century, based on utility and consumption of computer resources. The concept of cloud computing depends on the provisioning and de-provisioning of computation, storage, and data services to and from the user, without the user being aware of where those resources come from [4]. With the large-scale use of the Internet all over the globe, everything can be delivered over the Internet using the concept of cloud computing, as a utility like gas, water, and electricity. Cloud computing involves deploying groups of remote servers and software networks that allow different kinds of data sources to be uploaded for real-time processing, generating computational results without the need to store the processed data on the cloud. Clouds can be classified as public, private, or hybrid.

Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. It may be noted that cloud computing is based on the broader concepts of converged infrastructure and shared services. Cloud computing, or in simpler terms "the cloud", also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated depending on demand. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach maximizes the use of computing power while reducing environmental damage, because less power, air conditioning, rack space, and so on are required. With multitenancy, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications. The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to an OPEX model (use a shared cloud infrastructure and pay as one uses it). Supporters claim that cloud computing allows companies to avoid upfront infrastructure costs and to focus on projects that differentiate their businesses instead of on infrastructure. They also claim that cloud computing allows enterprises to get their applications running faster, with improved manageability and less maintenance, enabling IT to adjust resources more rapidly to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model; this can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.

The advancements in high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to growth in cloud computing. Commercial organizations can scale up as computing needs increase and then scale down again as demand decreases. Cloud vendors are experiencing growth rates of more than 50% per annum. Cloud computing has been credited with increasing competitiveness through cost reduction, greater flexibility, elasticity, and optimal resource utilization. The following are the most important common situations where cloud computing is used to enhance the ability to achieve business goals.
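The pay-as-you-go contrast with the CAPEX model can be made concrete with a little arithmetic: under the OPEX model the bill follows actual instance-hours, so scaling down during quiet hours directly reduces cost. The prices, capacity figures, and demand curve below are invented for illustration.

```python
# A small sketch of pay-as-you-go billing under elastic scaling: the
# number of instances tracks hourly demand, and the daily cost is the
# sum of instance-hours times the hourly rate.
import math

RATE_PER_INSTANCE_HOUR = 0.10   # hypothetical on-demand price ($)
REQUESTS_PER_INSTANCE = 1000    # hypothetical capacity of one instance

def instances_needed(requests_per_hour):
    return max(1, math.ceil(requests_per_hour / REQUESTS_PER_INSTANCE))

def daily_cost(hourly_demand):
    """Sum the pay-as-you-go cost over a day of fluctuating demand."""
    return sum(instances_needed(d) * RATE_PER_INSTANCE_HOUR
               for d in hourly_demand)

# quiet nights, busy business hours, quiet evenings
demand = [200] * 8 + [5000] * 8 + [200] * 8
print(round(daily_cost(demand), 2))  # 5.6
```

A fixed fleet sized for the 5000-request peak would instead bill 5 instances for all 24 hours; the elastic schedule pays for the peak capacity only while it is needed.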

1. Infrastructure as a Service (IaaS) and Platform as a Service (PaaS): IaaS uses an existing infrastructure on a pay-per-use scheme. This helps companies save the cost of investing to acquire, manage, and maintain IT infrastructure. Organizations also turn to PaaS for the same reasons, while seeking to increase the speed of development on a ready-to-use platform for deploying applications.

2. Private cloud and hybrid cloud: among the many incentives for using cloud, there are situations where organizations want to assess some of the applications they intend to deploy into their environment through the use of a cloud (specifically a public cloud). In the case of testing and development, the need is limited in time, so organizations adopt a hybrid cloud approach, which allows testing application workloads in a comfortable environment without the initial investment; that investment would be rendered useless if the workload testing failed. Another use of hybrid cloud is the ability to expand during periods of limited peak usage, which is often preferable to hosting a large infrastructure that might seldom be of use. An organization can obtain the additional capacity and availability of an environment when needed, on a pay-as-you-go basis.

3. Test and development: test and development environments make good use of clouds. Traditionally, these required securing a budget, procuring the environment through physical assets, significant manpower, and time, followed by installation and configuration of the platform. With cloud computing, readily available environments tailored to the requirement are now at one's fingertips.

4. Big data analytics: one of the capabilities offered by cloud computing is the ability to tap into vast quantities of both structured and unstructured data. Retailers and suppliers now extract information derived from consumers' buying patterns to target their advertising and marketing campaigns at particular segments of the population. Social networking platforms are now providing the basis for analytics on behavioral patterns, helping organizations derive meaningful information.

5. File storage: cloud offers the possibility of storing files and accessing, storing, and retrieving them from any web-enabled interface. The web service interfaces are simple in the sense that they can be used at any time and place, with high availability, speed, and security. In this scenario, organizations pay only for the amount of storage they actually consume, without any worry of overseeing the daily maintenance of the storage infrastructure. Data can be stored either on or off premises, depending on regulatory compliance requirements, in virtualized pools of storage hosted by a third party according to the customer's specifications.

6. Disaster recovery: cloud provides the cost-effectiveness of a disaster recovery (DR) solution that allows faster recovery from a mesh of different physical locations, at a much lower cost than a traditional DR site with fixed assets and rigid procedures.

7. Backup: backing up data has always been a complex and time-consuming operation. It included maintaining a set of tapes or drives, manually collecting them, and dispatching them to a backup facility, with all the inherent problems that might happen between the originating and the backup site. The job is not immune to problems such as running out of backup media, and loading the backup devices for a restore operation takes time and is prone to malfunctions and human errors. Cloud-based backup, while not a panacea, is certainly a far cry from what backup used to be: one can now automatically dispatch data to any location across the wire without affecting security, availability, or capacity.

While the list of uses above is not exhaustive, it certainly gives an incentive to use the cloud, compared with more traditional alternatives, to increase IT infrastructure flexibility and to leverage big data analytics and mobile computing.

A. Advantages of Cloud Computing
 Shared resources: cloud computing shares resources in providing services to multiple users, which is why it can easily scale resources up and down on demand.
 Pay-as-you-go: users pay only for the resources they actually use; they can demand more resources if required later on, and can release their resources after use.
 Better hardware management: it is easy for the cloud service provider to manage the hardware, because all computers run on the same hardware [7].
 Saving users' CAPEX and OPEX: new technologies develop very rapidly, and organizations need to use them to fulfill the requirements of their customers, but changing technologies is very costly. With the help of cloud computing, users do not need to purchase physical infrastructure or spend money maintaining it; they can use any technology as per their requirements.

B. Disadvantages of Cloud Computing
 Less reliability: cloud computing shares resources among multiple users, so there is the possibility of a user's confidential data being stolen, or of one organization's data mixing with another's. For example, in 2007 Microsoft and Yahoo released some search data to the US Department of Justice as part of a child pornography case. A disgruntled employee could alter or destroy data using his or her own access credentials. If a cloud storage system is not reliable, no one will want to save data on it.
 Internet dependence: the main requirement for cloud computing services is the Internet, and users require a high-speed connection [16]; unavailability of the Internet causes unavailability of data.
 Non-interoperability: if a user stores data in one cloud, he cannot later move it to another cloud service provider, because of non-interoperability between cloud-based systems.

Some of the basic concepts underlying cloud services are platforms such as AWS or the OpenStack Dashboard, used to construct cloud services and their applications. Web services with data-intensive computations can be created using map/reduce, NoSQL databases, and real-time processing of data streams; this leads to building applications for cloud computing based on the emerging OpenStack and other platforms. The study of this area includes the concepts of bare-metal provisioning, Neutron networking, the Identity service, the Image service, Orchestration, Infrastructure as a Service, Software as a Service, Platform as a Service, MapReduce, big data, analytics, and privacy and legal issues, together with hands-on laboratory experiments (load balancing, web services, MapReduce, Hive, Storm, and Mahout). Case studies can be carried out using Yahoo, Google, Twitter, Facebook, analytics, and machine learning.

Multi-cloud is the use of multiple cloud computing services in a single heterogeneous architecture. An enterprise may concurrently use separate cloud providers for infrastructure (IaaS) and software (SaaS) services, or use multiple infrastructure (IaaS) providers. In the latter case, it may use different infrastructure providers for different workloads, deploy a single workload load-balanced across multiple providers (active-active), or deploy a single workload on one provider with a backup on another (active-passive). There are a number of reasons for deploying a multi-cloud architecture, including reducing reliance on any single vendor, increasing flexibility through choice, and mitigating against disasters. It is similar to the use of best-of-breed applications from multiple developers on a personal computer, rather than the defaults offered by the vendor: a recognition of the fact that no one provider can be everything for everyone. Multi-cloud differs from hybrid cloud in that it refers to multiple cloud services rather than multiple deployment modes (public, private, and legacy). Various issues also present themselves in a multi-cloud environment: security and governance are more complicated, and more "moving parts" may create resiliency issues. Selection of the right cloud products and services can also present a challenge, and users may suffer from the paradox of choice. Resource management systems dealing with multiple cloud providers need to expose a uniform interface for the various services and to build wrappers for the cloud service APIs. One solution is a recently developed open-source, vendor-agnostic platform-as-a-service for multi-cloud application deployment, whose middleware includes a multi-agent system for automatic cloud resource management; with a modular design, it provides a flexible approach to encompass new cloud service offers as well as new resource types.

Mobile Cloud Computing (MCC) is the combination of cloud computing, mobile computing, and wireless networks to bring rich computational resources to mobile users, network operators, and cloud computing providers. The ultimate goal of MCC is to enable the execution of rich mobile applications on a plethora of mobile devices, with a rich user experience. MCC provides business opportunities for mobile network operators as well as cloud providers. More briefly, MCC can be defined as "a rich mobile computing technology that leverages unified elastic resources of varied clouds and network technologies toward unrestricted functionality, storage, and mobility to serve a multitude of mobile devices anywhere, anytime through the channel of Ethernet or Internet regardless of heterogeneous environments and platforms based on the pay-as-you-use principle."

IV. DNA COMPUTING
DNA computing is a branch of computing which uses DNA, biochemistry, and molecular biology hardware instead of the traditional silicon-based computer technologies. DNA computing generally refers to bio-molecular computing, a fast-developing interdisciplinary area whose research and development concerns theory, experiments, and applications. The term "molectronics" has also been used more generally, for molecular-scale technology.

Fig.4.

The slow processing speed of a DNA computer (the response time is measured in minutes, hours, or days, rather than milliseconds) is compensated by its potential to perform a huge number of parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one, because millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one. The usefulness of developing techniques of DNA computing, and ultimately working DNA computers, falls into one of the following three general categories:
 Applications making use of "classic" DNA computing schemes, where the use of massive parallelism holds an advantage over traditional computing schemes, including potential polynomial-time solutions to hard computational problems;
 Applications making use of the "natural" capabilities of DNA, including those that make use of its informational storage abilities and those that interact with existing and emerging biotechnology;
 Contributions to fundamental research within both computer science and the physical sciences, exploring the limitations of computability and the understanding and manipulation of bio-molecular chemistry.

Each of these categories can be further subdivided into different areas and specific models, and there is a great deal of overlap between them. By viewing the field of DNA computation as a number of interrelated and exciting new research areas, the overall qualities and potential of this area of research can be understood. The ability to obtain tractable solutions to NP-complete and other hard computational problems has many implications, particularly for business planning and management science. Many of the cost-optimization problems faced by managers are NP-complete and are currently solved using heuristic methods and other approximations. These problems include scheduling, routing, and optimal use of raw materials, and correspond to problems already solved, either theoretically or experimentally, using DNA computation. Current and near-term limitations of laboratory technology preclude the use of DNA computation as a method of solving problems in real time. This restricts the possible application of classical DNA computing to areas where the calculation of optimal solutions can be performed over longer periods. Such computation has many uses in expert systems and can be applied to long-term production planning where initial cost commitments are high, such as chip design and manufacturing. DNA computing could also be applied extensively to optimizing airline and bus routes for planning purposes.

The potential applications of re-coding natural DNA include:
 DNA sequencing;
 DNA fingerprinting;
 DNA mutation detection or population screening; and
 other fundamental operations on DNA.

Fig.5.

Methods of DNA computing might serve as the most obvious medium for the use of evolutionary programming in applications such as design or expert systems; DNA computing might also serve as a medium to implement a true fuzzy logic system. (Exploratory computing, by contrast, is used in designing discovery-driven user experiences.)

V. DISTRIBUTED COMPUTING
Distributed computing is the field of computer science that studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages; the components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message-passing mechanism, including RPC-like connectors and message queues. A goal and challenge pursued by some computer scientists and practitioners in distributed systems is location transparency. This goal has fallen out of favor in industry, as distributed systems are different from conventional non-distributed systems, and the differences, such as network partitions, partial system failures, and partial upgrades, cannot simply be "papered over" by attempts at "transparency" (cf. the CAP theorem). Distributed computing also refers to the use of distributed systems to solve computational problems: a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.

Klint Finley points out that "The explosion of data driven by sensors, data mining and social media and Web-based interactions mean that more and more companies will be required to investigate ways of dealing with massive data sets, even companies that haven't typically been data driven before. But new business analytics applications may require more processing power." Infrastructure-as-a-service providers and inexpensive data-warehousing appliances with in-memory analytics will provide options for many organizations, but some may find distributed computing a better fit for their organization's big data needs. Scientists and academics have been taking advantage of distributed computing for years, and it is an approach that can benefit information workers in other areas as well; there are several methods of running applications in distributed environments, including some newer approaches.

Distributed computing systems have recently evolved drastically to improve and expand various applications with better quality of service and lower cost, especially those involving human factors; recent developments in services and cloud computing are good examples. The challenge for distributed computing systems is to satisfy increasing demands from various applications: besides reliability, performance, and availability, many other attributes, such as security, privacy, trustworthiness, situation awareness, flexibility, and rapid development of various applications, have also become important. Distributed systems also aim to provide several forms of transparency: access transparency means resources are accessed in a uniform manner regardless of location; location transparency means the physical location of a resource is hidden from the user; and failure transparency tries to hide failures from users (see Challenge No.5 below).
Some of the challenges for Distributed Computing Parallel computing is a form of computation in which are pointed out by Nidazh are many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into Challenge No.1 – Heterogeneity: ―Describes a system smaller ones, which are then solved concurrently (in parallel). There are several different forms of parallel consisting of multiple distinct components‖. Heterogeneity computing like bit-level, instruction level, data, and task applies to many different items or objects including Food. In parallelism. Parallelism has been employed for many years, many systems in order to overcome heterogeneity a software mainly in high-performance computing, but interest in it has layer known as Middleware is often used to hide the grown lately due to the physical constraints differences amongst the components underlying layers. preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a Challenge No.2 – Openness: Openness is the property of each subsystem to be open for interaction with other systems. concern in recent years, parallel computing has become the Once something has been published it cannot be taken back dominant in , mainly in the or reversed. Furthermore in open distributed systems there is form of multi-core processors. Parallel computers can be often no central authority, as different systems may have roughly classified according to the level at which the their own intermediary. hardware supporting parallelism, with multi-core and multi- processor computers having multiple processing elements Challenge No.3 – Security: The issues surrounding security within a single machine, while clusters, MPPs, and grids use are those of Confidentiality, Integration, and Availability. To multiple computers to work on the same task. 
Specialized combat these issues encryption techniques such as those of parallel computer architectures are sometimes used alongside can help but they are still not absolute. Denial traditional processors, for accelerating specific tasks. of service attacks can when a server or service is bombarded with false requests usually by botnets (zombie computers). Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several Challenge No.4 – Scalability: It refers to large number of new classes of potential software bugs, of which race resources, which increases the performance of the system, conditions are the most common. which is not lost and remains effective in accomplishing its Communication and synchronization between the different goals‖. That’s a fairly self explanatory description, but there subtasks are typically some of the greatest obstacles to are a number of important issues that arise as a result of getting good parallel program performance. Concurrent increasing scalability, such as increase in cost and physical computing is a form of computing in which resources. It is also important to avoid performance several computations are executed during overlapping time bottlenecks by using caching and replication. periods concurrently, instead of sequentially (one completing before the next starts). This is a property of a system. This Challenge No.5 – Fault handling: Failures are inevitable in may be an individual program, a computer, or a network – any system; some components may stop functioning while and there is a separate execution point or "thread of control" others continue running normally. So naturally we need a for each computation ("process"). A concurrent system is way to detect Failures. 
Various mechanisms like checksums, one where a computation can make progress without waiting Mask Failures – retransmit upon failure to receive for all other computations to complete – where more than one acknowledgement. Recovery from failures when a server computation can make progress at "the same time". As crashes is rectified by a roll back to previous state by a , is a form building redundancy. Redundancy is the best way to deal of modular programming, namely factoring an overall with failures. It is achieved by replicating data so that if one computation into sub-computations that may be executed sub system crashes another may still be able to provide the concurrently. Pioneers in the field of concurrent computing required information. include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare. An increased application of parallel execution of a Challenge No.6 – Concurrency: Concurrency issues arise concurrent program allows the number of tasks completed in when several clients attempt to request a shared resource at certain time period to increase. High responsiveness for the same time. This is problematic as the outcome of any input/output – input/output-intensive applications mostly such data may depend on the execution order, and so waits for input or output operations to complete. Concurrent synchronization is required. programming allows the time that would be spent waiting to be used for another task. More appropriate program structure Challenge No.7 – Transparency: A distributed system must domains are well-suited to representation as concurrent tasks be able to offer transparency to its users. As a user of a or processes. distributed system we do not care if we are using 20 or 100’s of machines, so we hide this information, presenting the International Journal of Scientific Engineering and Technology Research Volume.04, IssueNo.37, September-2015, Pages: 8054-8066 K. J. SARMA VI. 
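The race conditions and synchronization discussed above can be illustrated with a minimal Python sketch (my illustration, not from the paper; the function and variable names are hypothetical). Four threads increment a shared counter; without the lock, the read-modify-write in `counter += 1` can interleave and lose updates, while holding the lock guarantees the final count.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Add 1 to the shared counter n times, holding the lock each time."""
    global counter
    for _ in range(n):
        with lock:          # remove this lock and updates may be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 when the lock is held
```

The lock serializes only the single shared update, which is the usual design compromise: enough synchronization to be correct, little enough to keep the tasks mostly independent.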
VI. MOBILE COMPUTING
Mobile computing is human–computer interaction by which a computer is expected to be transported during normal usage. Mobile computing involves mobile communication, mobile hardware, and mobile software. Communication issues include ad-hoc and infrastructure networks as well as communication properties, protocols, data formats and concrete technologies. Mobile software deals with the characteristics and requirements of mobile applications. Mobile computing is "taking a computer and all necessary files and software out into the field". There are several dimensions under which mobile computers can be defined: (1) in terms of physical dimensions; (2) in terms of how devices may be hosted; (3) in terms of when the mobility occurs; (4) in terms of how devices are networked; (5) in terms of the type of computing that is performed.

Fig.6.

In terms of dimensions, mobile computers tend to be planar and tend to range in size from centimeters to decimeters. A mobile computer may itself be mobile, e.g., embedded into a vehicle that is mobile, or it may not be mobile itself but be carried by a mobile host, e.g., a device that is carried by a mobile human. The most flexible mobile computer is one that can move during its operation or user session, but this depends in part on the range of any wireless network it is connected to. A tablet or computer connected via Wi-Fi can move while staying connected within the range of its W-LAN transmitter.

Mobile telephony took off with the introduction of cellular technology, which allowed the efficient utilization of frequencies, enabling the connection of a large number of users. During the 1980s analogue technology was used; among the most well-known systems were the NMT900 and 450 (Nordic Mobile Telephone) and the AMPS (Advanced Mobile Phone Service). In the 1990s digital cellular technology was introduced, with GSM (Global System Mobile) being the most widely accepted system around the world. Other such systems are the DCS1800 (Digital Communication System) and the PCS1900 (Personal Communication System).

A cellular network consists of mobile units linked to switching equipment, which interconnects the different parts of the network and allows access to the fixed Public Switched Telephone Network (PSTN). The technology consists of a number of hidden transceivers called Base Stations (BS). Every BS is located at a strategically selected place and covers a given area, or cell, hence the name cellular communications. A number of adjacent cells grouped together form an area, and the corresponding BSs communicate through a Mobile Switching Centre (MSC). The MSC is the heart of a cellular radio system: it is responsible for routing, or switching, calls from the originator to the destination. It can be thought of as managing the cell, being responsible for the set-up, routing control and termination of each call, for management of inter-MSC hand over and supplementary services, and for collecting charging and accounting information. The MSC may be connected to other MSCs on the same network or to the PSTN.

The frequencies used vary according to the cellular network technology implemented. For GSM, the 890–915 MHz range is used for transmission and 935–960 MHz for reception; the DCS technology uses frequencies in the 1800 MHz range, and PCS the 1900 MHz range. Each cell has a number of channels associated with it, which are assigned to subscribers on demand. When a Mobile Station (MS) becomes 'active' it registers with the nearest BS; the corresponding MSC stores the information about that MS and its position, and this information is used to direct incoming calls to the MS.

If during a call the MS moves to an adjacent cell, a change of frequency will necessarily occur, since adjacent cells never use the same channels. This procedure, called hand over, is the key to mobile communications. As the MS approaches the edge of a cell, the BS monitors the decrease in signal power. The strength of the signal is compared with that of adjacent cells, and the call is handed over to the cell with the strongest signal. During the switch, the line is lost for about 400 ms. When the MS moves from one area to another, it registers itself with the new MSC and its location information is updated, thus allowing MSs to be used outside their 'home' areas.

Data communication is realized using a variety of networks such as the PSTN, leased lines and, more recently, ISDN (Integrated Services Digital Network), ATM (Asynchronous Transfer Mode) and Frame Relay. These networks are partly or totally analogue or digital, using technologies such as circuit switching and packet switching:
• Circuit switching implies that data from one user (the sender) to another (the receiver) has to follow a pre-specified path. If a link to be used is busy, the message cannot be redirected, a property which causes many delays.
• Packet switching attempts to make better use of the existing network by splitting the message to be sent into packets. Each packet contains information about the sender, the receiver and the position of the packet in the message, as well as part of the actual message. There are many protocols defining the way packets can be sent from sender to receiver. The most widely used are the virtual circuit switching system, which implies that packets have to be sent through the same path, and the datagram system, which allows packets to be sent along various paths depending on network availability.
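Datagram-style packet switching, as just described, can be sketched in a few lines of Python (an illustration of the idea only, not a real protocol; `packetize` and `reassemble` are hypothetical helper names). Each packet carries the sender, the receiver, its position in the message, and a fragment of the payload, so the receiver can rebuild the message even when packets arrive out of order after taking different paths.

```python
import random

def packetize(sender, receiver, message, size=4):
    # Split the message into fragments; each packet carries addressing
    # information and its sequence position in the original message.
    return [{"src": sender, "dst": receiver, "seq": i,
             "data": message[i * size:(i + 1) * size]}
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    # In a datagram network packets may arrive in any order, so the
    # receiver sorts them by sequence number before joining the payloads.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("MS-1", "MS-2", "HELLO MOBILE WORLD")
random.shuffle(packets)         # simulate packets taking different paths
print(reassemble(packets))      # prints: HELLO MOBILE WORLD
```

A virtual-circuit system would instead pin all packets to one path, so the sequence numbers arrive in order and the sort is unnecessary.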
Packet switching requires more equipment at the receiver, where reconstruction of the message has to be carried out.

The introduction of mobility in data communications required a move from the Public Switched Data Network (PSDN) to other networks like the ones used by mobile phones. PCSI has come up with an idea called CDPD (Cellular Digital Packet Data) technology, which uses the existing mobile network (the frequencies used for mobile telephony). Mobility implemented in data communications has a significant difference compared to voice communications: mobile phones allow the user to move around and talk at the same time, and the loss of the connection for 400 ms during hand over is undetectable by the user. When it comes to data, a 400 ms loss is not only detectable but causes huge distortion to the message. Therefore data can be transmitted from a mobile station only under the assumption that it remains stable or within the same cell.

VII. UBIQUITOUS COMPUTING / PERVASIVE COMPUTING
Ubiquitous computing (ubicomp) is a concept in software engineering and computer science where computing is made to appear everywhere and anywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets and terminals in everyday objects such as a fridge or a pair of glasses. The technologies used to support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials. This new paradigm is also described as pervasive computing, ambient intelligence, ambient media or 'everyware'. Primarily concerning the objects involved, it is also known as physical computing, the Internet of Things, haptic computing, and 'things that think'. Rather than proposing a single definition for ubiquitous computing, a number of properties for ubiquitous computing have been proposed, and from these properties different kinds of ubiquitous systems and applications can be described.

Ubiquitous computing touches on a wide range of research topics, including distributed computing, mobile computing, location computing, mobile networking, context-aware computing, sensor networks, human-computer interaction, and artificial intelligence. Pervasive computing (also called ubiquitous computing) is the growing trend towards embedding microprocessors in everyday objects so they can communicate information; the words pervasive and ubiquitous mean "existing everywhere". Pervasive computing devices are completely connected and constantly available. Privacy and security issues might very well be the greatest barrier to creating a ubiquitously connected world. Addressing such concerns will require not only efficient algorithms and secure protocols but also usable interfaces and socially compatible designs; further, researchers with a strong interdisciplinary interest will have to look for the non-obvious solutions.

Ubiquitous computing will enable an entirely new quality in the exchange and processing of data, information and knowledge. With ubiquitous computing, many of these processes will recede into the background, and most of them will occur partially or wholly automatically. This new form of ubiquitous computing will not develop uniformly and synchronously in all economic and social areas; rather, applications will be defined and implemented at different speeds in different contexts. Nine areas of application of ubiquitous computing, viz. communication, logistics, motor traffic, military, production, smart homes, e-commerce, inner security and medical technology, are very likely to play a decisive role. Another contested area is the private sphere in the digital world; the differences here are illustrated clearly by the gap between Europe's strict legal regulations and the comparatively open, self-regulatory approach in the United States. The global networking of smart objects and services, which is anticipated in the long run, will necessitate the creation of a standardized international regulatory regime for data protection in ubiquitous computing.

The invisible nature of ubiquitous computing and the complexity of its networking could mean that system failures and malicious interference go unnoticed, or are noticed much later. In some ubiquitous computing applications, such as medicine, traffic system control or self-organized production lines, this could put human lives in danger and lead to extensive property damage. In applications where safety is crucial, the reliability of ubiquitous computing is essential; it must be guaranteed with system redundancy or a backup system.

A. Ubiquitous Computing Application Areas
Ubiquitous computing aims to permeate and interconnect all areas of life, thus enabling ubiquitous flows of data and information, integrating cognitive capabilities in the future. Mark Weiser, one of the fathers of ubiquitous computing, described this vision of a continual and ubiquitous exchange transcending the borders of applications, media, and countries as "everything, always, everywhere". This sketch offers a strongly future-oriented perspective on ubiquitous computing that is still far removed from today's reality. In the future, the special performance characteristics of ubiquitous computing will enable an entirely new quality in the exchange and processing of data, information and knowledge.

Fig.7.
Of the nine application areas, communications affects all forms of exchange and transmission of data, information, and knowledge, and represents a precondition for all other domains.

B. Ubiquitous Computing: Applications, Challenges and Future Trends
Logistics: Tracking logistical goods along the entire transport chain of raw materials, semi-finished goods, and finished products (including their eventual disposal) closes the gap in IT control systems between the physical flows and the information flows. This offers opportunities for optimizing and automating logistics that are already apparent today.

Motor Traffic: Automobiles already contain several assistance systems that support the driver invisibly. Networking vehicles with each other and with surrounding telematics systems is anticipated for the future.

Military: The military sector requires the provision of information for averting and fighting external threats that is as close-meshed, multi-dimensional, and interrelated as possible. This comprises the collection and processing of information; it also includes the development of new weapons systems.

Production: In the smart factory, the flow and processing of components within manufacturing are controlled by the components themselves and by the processing and transport stations. Ubiquitous computing will facilitate decentralized production systems that independently configure, control and monitor themselves.

Smart Homes: In smart homes, a large number of home technology devices, such as heating, lighting, ventilation and communication equipment, become smart objects that automatically adjust to the needs of the residents.

E-commerce: The smart objects of ubiquitous computing allow new business models with a variety of digital services to be implemented. These include location-based services, a shift from selling products to renting them, and software agents that will instruct components in ubiquitous computing to initiate and carry out services and business transactions independently.

Medical Technology: Increasingly autarkic, multifunctional, miniaturized and networked medical applications in ubiquitous computing offer a wide range of possibilities for monitoring the health of the elderly in their own homes, as well as for intelligent implants.

Embedded Systems and Wireless Technology: The prognosis of ubiquitous computing developments is difficult because of the various definitions of ubiquitous computing and depends on variable contexts: first we must describe the performance features and characteristics of ubiquitous computing, and then relate them to the selected application areas. Identifying each application area's potential, and estimating when we can expect applications to become established, is essential to a well-founded prognosis.

C. Scalable Computing
Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance. Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high-bandwidth interconnects (such as InfiniBand), but the term has also been used to describe platforms like the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit). The fundamental components of fabrics are "nodes" (processors, memory, and/or peripherals) and "links" (the functional connections between nodes). While the term "fabric" has also been used in association with storage area networks and switched fabric networking, the introduction of compute resources provides a complete "unified" computing system. Other terms used to describe such fabrics include "unified fabric", "data center fabric" and "unified data center fabric". According to Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago, "grid computing 'fabrics' are now poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations." IBM, TIBCO, Brocade, Cisco, HP, Unisys, Egenera, Avaya and Xsigo Systems currently manufacture computing fabric. The main advantages of fabrics are that massive concurrent processing combined with a huge, tightly coupled address space makes it possible to solve huge computing problems (such as those presented by the delivery of cloud computing services), and that fabrics are both scalable and able to be dynamically reconfigured. Challenges include a non-linearly degrading performance curve, whereby adding resources does not linearly increase performance (a common problem with parallel computing), and maintaining security.
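The non-linear scaling curve just mentioned is commonly modeled by Amdahl's law, which the paper does not name but which captures the same point: the serial fraction of a workload caps the speedup no matter how many nodes a fabric adds. A small sketch (my illustration, not from the paper):

```python
def amdahl_speedup(parallel_fraction, nodes):
    # Amdahl's law: with fraction p parallelizable over n nodes,
    # speedup = 1 / ((1 - p) + p / n); the serial part (1 - p)
    # dominates as n grows, so returns diminish non-linearly.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

# A workload that is 95% parallelizable never exceeds 1 / 0.05 = 20x:
for n in (2, 16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

This is why fabric vendors emphasize the tightly coupled address space and fast interconnects: they shrink the effective serial fraction rather than merely adding nodes.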
Inner Security: Identification systems, such as electronic passports and the already abundant smart cards, are applications of ubiquitous computing in inner security. In the future, monitoring systems will become increasingly important, for instance in protecting the environment or in the surveillance of key infrastructure such as airports and the power grid.

VIII. QUANTUM COMPUTING
Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, in order to perform operations on data. Quantum computers are different from digital computers, which are based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers. The field of quantum computing was initiated by the work of Yuri Manin in 1980, Richard Feynman in 1982, and David Deutsch. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. As of 2015, the development of actual quantum computers is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military agencies are carrying out efforts to develop quantum computers for civilian, business, trade, and national security purposes, such as cryptanalysis.

It is ascertained that large-scale quantum computers will be able to solve certain problems much more quickly than any classical computers using even the best currently known algorithms, such as integer factorization using Shor's algorithm [27] or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. Given sufficient computational resources, a classical computer could nevertheless be made to simulate any quantum algorithm, as quantum computation does not violate the Church–Turing thesis.

Green computing, green ICT (as per the IFG, International Federation of Green ICT, and the IFG Standard), green IT, or ICT sustainability is the study and practice of environmentally sustainable computing. San Murugesan notes that this can include "designing, manufacturing, using, and disposing of computers, servers, and associated subsystems such as monitors, printers, storage devices, and networking and communications systems efficiently and effectively with minimal or no impact on the environment." The goals of green computing are similar to those of green chemistry: reducing the use of hazardous materials, maximizing energy efficiency during the product's lifetime, and promoting the recyclability or biodegradability of non-functioning products and factory waste. Green computing is important for all classes of systems, ranging from handheld devices to large-scale data centers. Green computing has the following benefits:
• Using Energy Star qualified products helps in energy conservation.
• The Climate Savers Computing Initiative (CSCI) catalog can be used for choosing green products.
• Organic light-emitting diode displays can be used instead of regular monitors.
• Surge protectors offer the benefit of green computing by cutting off the power supply to devices when the computer is turned off.
• Donating old computers and other peripherals reduces the rate of e-waste creation.
• It was expected that computers would help reduce paper wastage; however, even today wastage of paper is a serious issue in industry. The easy availability of photocopiers and printers is one of the culprits behind unchecked paper wastage, so think twice before printing.
• Use a device only if it is necessary.
• The manufacturing of the disks and boxes needed for video games takes up a lot of resources; game manufacturers can offer their games online for downloading, leading to a reduction in e-waste. This move can also cut down on transportation and shipping costs.
• 'Local cooling' software helps monitor, and thereby bring down, the energy consumed by a computer. This Windows program makes adjustments to the power options of the computer and helps minimize energy consumption.

Fig.8.

Global Cooperative Computing represents the next step in exploiting wide-scale network connectivity, wherein distributed users and/or computers across the globe cooperate on common computing problems. The focus is on how the World-Wide Web can be used to rapidly advance the state of the art in computation and change the way software is developed, maintained, debugged, optimized, and used. The cooperative computing model supports systems which continually adapt to user needs and global information. This adaptability delivers considerable performance and convenience to both developers and end users.

IX. HIGH PERFORMANCE COMPUTING
High performance computing refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical workstation, in order to solve problems of large dimensionality occurring in science, engineering, or business. HPC users are faced with the challenges of dealing with highly heterogeneous resources, where the variability spans a wide range of processor configurations, interconnections, virtualization environments, and pricing rates and models. A holistic viewpoint is needed to answer the questions: why, and for whom, is the cloud a suitable choice for HPC or not? For what applications, and how, should the cloud be used for HPC? In order to answer these questions, we perform a comprehensive performance evaluation of a set of benchmarks and complex HPC applications. The range of platforms varies from supercomputers to commodity clusters, both in-house and in the cloud. After having identified the performance bottlenecks in the cloud, we demonstrate that alternative lightweight virtualization mechanisms, such as thin VMs, OS-level containers, and hypervisor- and application-level CPU affinity, can greatly lower the overhead and noise of virtualization.

Ubiquitous-Secure: The manifesto of ubiquitous computing is the justly famous article for Scientific American by the late Mark Weiser of Xerox PARC (Weiser, 1991). Weiser spoke of "ubiquitous computing" around 1988, and other researchers around the world had also been focusing their efforts in that direction. Weiser's article depicts an Active Badge, an infrared-emitting tag worn by research scientists to locate their colleagues when they were not in their offices, at a time when mobile phones were not yet common.

Fig.9.

X. CONCLUSION
The article is only an initiative to put the available computing technologies in a nutshell. It would be interesting to discuss the available algorithms in each case; further, to sort out the applications which can be commonly dealt with by more than one technology, and to compare the results to check the computational efficiency of the processes. There still remains sufficient scope for exploring the implementation of these computing technologies in scientific, business and social environments.

XI. ACKNOWLEDGEMENTS
The author acknowledges the organizations and the authors of the several research papers whose findings and views are used in writing the above review paper. Several images made by their designers are also made use of. He expresses heartfelt thanks to all of them.

XII. REFERENCES
[1] Wikipedia, Beowulf cluster, 2015.
[2] Stone Souper computer, a Beowulf-style computer cluster, August 2001.
[3] Michael M. Resch, Xin Wang, Erich Focht, Wolfgang Bez, Hiroaki Kobayashi, Sabine Roller (editors), High Performance Computing on Vector Systems 2011.
[4] Jack Dongarra and Alexey Lastovetsky, An Overview of Heterogeneous High Performance and Grid Computing, www.netlib.org/utk/people/JackDongarra/.../hetero-dist-survey-2004.pdf.
[5] Franco Travostino, Joe Mambretti, Gigi Karmous-Edwards (editors), Grid Networks: Enabling Grids with Advanced Communication Technology.
[6] Chris Woodford, Supercomputers, last updated November 5, 2014.
[7] Arumai Selvam M., Suganya S., Recent Advancements in DNA Computing, Volume 03, May 2014.
[8] Rajkumar Buyya, A Proposal for Creating a Computing Research Repository (CoRR, http://www.arXiv.org/) on Cluster Computing.
[9] Kiranjot Kaur, Anjandeep Kaur Rai, A Comparative Analysis: Grid, Cluster and Cloud Computing, IJARCCE, Vol. 3, Issue 3, March 2014.
[10] Manish Parashar, Grid Computing: Introduction and Overview, http://www.ggf.org/.
[11] Vassilios P., Grid Computing: Advantages and Disadvantages, Emerging Technology Trends, Toolbox, Apr 9, 2008.
[12] Cloud Computing: A Collection of Working Papers, Deloitte.
[13] Lewis N. and Weinberger, DNA Computing, Jason, Mitre Corporation, Oct. 1995.
[14] Watada J., Binti Abu Bakar R., DNA Computing and its Applications, Intelligent Systems Design and Applications, Nov. 2008.
[15] Advantages and Disadvantages of Cloud Computing, blog on LevelCloud: IT Anytime Anywhere, 2015.
[16] Mark J. Daley and Lila Kari, DNA Computing: Models and Implementations, Comments on Theoretical Biology, 7: 177–198, 2002.
[17] Distributed Computing, Wikipedia, the free encyclopedia.
[18] Anupam Joshi, Sanjiva Weerawarana, Ranjeewa A. Weerashighe, Tzvetan T. Drashansky, Narendran Ramakrishnan, A Survey of Mobile Computing Technologies and Applications, Computer Science Reports, Purdue University, 1995.
[19] Nidazh, Distributed Computing: Architectures, blog on Computer Science Sources, 2007.
[20] Kalaiselvi A., Indumathy V., Gajalakshmi G., Madhusudanan J., A Survey of Pervasive Computing Projects, Computing Communication & Networking Technologies (ICCCNT), 2012 Third International Conference.
[21] James Walker, Quantum Computing: A High-Level Overview, cs.mtu.edu/~jwwalker/files/cs5431-jwwalker-quantumcomputing.pdf.
[22] Nilesh C. Thakkar, Nitesh M. Sureja, High Performance Computing: A Survey, IJARCET, Volume 1, Issue 9, November 2012.
[23] Li, Kuan-Ching (editor), Handbook of Research on Scalable Computing Technologies, Information Science Reference.
[24] Bajgoric, Nijaz (editor), Continuous Computing Technologies for Enhancing Business Continuity.
[25] Renaud Lifchitz, Quantum Computing in Practice & Applications to Cryptography, Nov. 2014 (http://www.oppida.fr).
[26] Jaydip Sen, Ubiquitous Computing: Applications, Challenges and Future Trends, from Embedded Systems and Wireless Technologies.
[27] Dave Bacon, Quantum Computing: Shor's Algorithm, courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes11.pdf.

International Journal of Scientific Engineering and Technology Research Volume.04, IssueNo.37, September-2015, Pages: 8054-8066