Devices-as-Services: Rethinking Scalable Service Architectures for the Internet of Things


Fatih Bakir, Rich Wolski, Chandra Krintz – Univ. of California, Santa Barbara
Gowri Sankar Ramachandran – Univ. of Southern California

Abstract

We investigate a new distributed services model and architecture for Internet of Things (IoT) applications. In particular, we observe that devices at the edge of the network, although resource constrained, are increasingly capable – performing actions (e.g. data analytics, decision support, actuation, control, etc.) in addition to event telemetry. Thus, such devices are better modeled as servers, which applications in the cloud compose for their functionality. We investigate the implications of this "flipped" IoT client-server model for server discovery, authentication, and resource use. We find that by combining capability-based security with an edge-aware registry, this model can achieve fast response and energy efficiency.

1 Introduction

As the Internet of Things (IoT) grows in size and ubiquity, it is becoming critical that we perform data-driven operations (i.e. analytics, actuation, and control) at the "edge" to reduce the latency, response time, cost, and energy use for IoT applications. As such, edge systems increasingly co-locate data management and analysis services with sensing, instead of requiring that devices ship their data over long-haul networks for remote processing using the traditional "cloud" model. Edge systems [3, 4, 8, 10, 11, 27, 28] typically implement a publish/subscribe (pub-sub) model in which devices publish streams of data (often via a nearby broker); when supported, actuation often uses a separate protocol. Alternatively, some solutions target a client-server model, where the IoT devices are the clients that respond with data whenever a decision-making server needs it. In our view, a client-server model is well suited to emerging IoT applications in which resource-constrained edge devices provide nanoservices – data-driven actuation and control, data analysis, processing, and sensing.

We believe that IoT deployments in the not-too-distant future will include multi-function devices. As a result, a services model in which the specific function (including data publication) can be requested from each device when it is needed is more appropriate. Further, because devices will continue to be resource constrained, they will require "helper" services at the edge that augment device capabilities and enable scale. In this paper, we outline this approach to implementing Devices-as-Services and describe some of the capabilities of an early prototype.

Our work is motivated by the following observations:

• IoT applications can and will likely be structured as collections of services that require functionality from a device tier, an edge tier, and a cloud tier,
• in-network data processing can significantly reduce response time and energy consumption [31],
• edge isolation precludes the need for dedicated communication channels between application and devices, and facilitates privacy protection for both data and devices,
• actuation in the device tier will require some form of request-response protocol where the device fields the request,
• the heterogeneity of devices militates for a single programming paradigm and distributed interaction model, and
• multi-function devices can and will be able to perform their functions (including data publication) conditionally, based on application needs (e.g. as a power optimization).

Thus we propose a new model for distributed IoT applications in Figure 1. We flip the client-server model such that devices at the edge are "servers" that, although resource constrained, service multiple, scalable applications (i.e. clients) deployed in the cloud. Note that in many commercial solutions [3, 11], devices "publish" data to channels to which servers (running in the cloud) subscribe. We propose to move the servers to the edge (as a way to scale data ingress, lower latency, and improve privacy), thereby reversing the current cloud-centric architecture. Such a model requires a new distributed architecture that leverages edge resources in these ways.

Figure 1: A new distributed services model for IoT: "client" applications in the cloud compose services exported by resource-constrained devices at the edge via Edgistries – edge nodes that facilitate device discovery/registration, privacy mediation, and optimization.

In this paper, we define one such architecture, investigate how it can accommodate this new model, and evaluate the implications of its use in IoT settings and for servicing IoT applications. The architecture is based on the functions-as-a-service (FaaS) programming and deployment model (the execution engine that underpins serverless computing for clouds [13, 20, 22, 23]), and it integrates a new approach for efficient client and server discovery, authentication, and authorization via a combination of capability-based security [5, 12, 24] and a set of edge services for service registry, privacy mediation, and optimization called an Edgistry.

We prototype an early implementation of this architecture and approach using microcontrollers, single-board computers, and edge clouds. We empirically compare the energy consumption and performance of the traditional and flipped architectures, as well as our capability approach versus TLS/SSL-based mechanisms for authentication. We find that it is possible to implement Devices-as-Services – a flipped model of the currently popular cloud-based IoT architecture – efficiently, by using FaaS as a universal programming paradigm and a set of application-agnostic edge services.
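The paper contrasts this capability-based approach with TLS/SSL-based authentication. As a purely illustrative sketch (the token format, the assumption of a per-device key shared with the Edgistry, and the helper names below are ours, not the paper's protocol), a capability can be a small signed statement granting a client the right to invoke one named function on one device; the device then admits a request with a single symmetric-key check instead of a certificate handshake:

```python
# Hypothetical capability token: a signed, expiring grant to call one function
# on one device. Field names and key distribution are illustrative assumptions.
import base64, hashlib, hmac, json, time

DEVICE_KEY = b"per-device secret provisioned via the Edgistry"  # assumption

def mint_capability(device_id: str, function: str, ttl_s: int, key: bytes) -> str:
    """Issued by a trusted party (e.g. an Edgistry node) when a client is authorized."""
    claims = {"dev": device_id, "fn": function, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    tag = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    return (body + b"." + tag).decode()

def verify_capability(token: str, device_id: str, function: str, key: bytes) -> bool:
    """Runs on the device: one HMAC over a short token, no certificate exchange."""
    try:
        body, tag = token.encode().split(b".")
        expected = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
        if not hmac.compare_digest(expected, tag):
            return False
        claims = json.loads(base64.urlsafe_b64decode(body))
    except Exception:
        return False
    return (claims["dev"] == device_id and claims["fn"] == function
            and claims["exp"] > time.time())

# A cloud "client" attaches the token to its request; the device-hosted service
# fields the request only if the capability checks out.
token = mint_capability("thermostat-17", "read_temperature", ttl_s=60, key=DEVICE_KEY)
assert verify_capability(token, "thermostat-17", "read_temperature", DEVICE_KEY)
```

Because verification here amounts to one hash over a few dozen bytes, a sketch like this suggests why a capability check can be much cheaper for a battery-powered device than a full TLS handshake, which is the comparison the evaluation makes.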
2 Devices-as-Services – a New Approach

Our view is that the current Internet and cloud architecture should "reverse" to accommodate scalable IoT applications. Rather than hosting services in the cloud (logically making IoT devices clients of those services), we view devices as "servers" and applications running in the cloud as "clients". This viewpoint is supported by several observations. First, devices are resource restricted, making them capable of delivering limited and relatively fixed functionality. This functionality is naturally described as an enumerable set of services (responses to requests) that are composed by applications. These services are necessarily "small", but even fully resourced cloud services are now being developed as microservices [18]. Devices are the "nanoservices" in an IoT setting.

Secondly, applications must compose device capabilities from vast collections of devices into meaningful functionality for users, and in an IoT setting, the user scale will be substantial. The cloud is where this scaling will necessarily take place, both in terms of aggregating device functionality and matching these aggregations to user demand for them.

Thirdly, much of the current technological development for IoT [3, 11] is focused on bringing devices […] …chronous commands (although for low-power applications that use MQTT-SN, this option is not possible). With our system, the same server code runs at all tiers – device, edge, and cloud – unifying the device API as a first-class services API. The currently prevalent commercial IoT/cloud APIs require devices to act separately as "publishers" or "subscribers" or both (the last to implement "closed-loop" actuation). We believe that IoT infrastructure must accommodate a richer model in which devices (however resource restricted) are capable of actions as well as event telemetry. Thus, logically, IoT devices are better modeled as servers that provide services to applications.

2.1 The Edgistry

Our approach splits the provisioning of services between devices and a common set of management services located at the edge, termed the Edgistry. Devices host small, resource-light services that field requests from, and respond to, client applications (hosted in the cloud or edge). The Edgistry

• implements an eventually consistent distributed service registry (e.g. using blockchain consensus protocols such as Tendermint BFT [6] and Hyperledger Fabric [2]),
• acts as a speed-matching communication service,
• can protect device privacy (e.g. through anonymization) by isolating the service provider from the service consumer, and
• provides computational and storage off-loading services (e.g. content caching) for device-hosted services.
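As a purely illustrative sketch of the first Edgistry role listed above (the eventually consistent service registry), the snippet below shows the kind of register/lookup interface an Edgistry node might expose. The class and method names are assumptions; a real deployment would replace the in-memory store and the last-writer-wins merge with a registry backed by a consensus protocol such as Tendermint BFT or Hyperledger Fabric.

```python
# Sketch of an Edgistry service registry (names and API are assumptions, not
# the paper's implementation). Devices advertise the functions they export,
# cloud clients look them up, and replicas converge by exchanging records.
import time
from dataclasses import dataclass

@dataclass
class Record:
    device_id: str
    function: str    # e.g. "read_temperature"
    endpoint: str    # how to reach the device-hosted service
    updated: float   # registration timestamp
    ttl_s: float     # record goes stale if the device stops refreshing it

class EdgistryRegistry:
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], Record] = {}

    def register(self, device_id: str, function: str, endpoint: str,
                 ttl_s: float = 300.0) -> None:
        """Called by (or on behalf of) a device when it wakes and exports a service."""
        self._records[(device_id, function)] = Record(
            device_id, function, endpoint, time.time(), ttl_s)

    def lookup(self, function: str) -> list[Record]:
        """Return live endpoints for a function, for clients composing services."""
        now = time.time()
        return [r for r in self._records.values()
                if r.function == function and now - r.updated < r.ttl_s]

    def merge(self, peer_records: list[Record]) -> None:
        """Anti-entropy exchange with a peer Edgistry node: newest record wins.
        A production registry would order updates with a consensus protocol
        (e.g. Tendermint BFT, Hyperledger Fabric) rather than timestamps."""
        for rec in peer_records:
            key = (rec.device_id, rec.function)
            mine = self._records.get(key)
            if mine is None or rec.updated > mine.updated:
                self._records[key] = rec

# Usage: a device registers on wake-up; a cloud client later discovers it.
registry = EdgistryRegistry()
registry.register("thermostat-17", "read_temperature", "coap://10.0.0.7/rt")
print(registry.lookup("read_temperature"))
```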
From a programmability perspective, we advocate a universally portable "Functions as a Service" (FaaS) capability [13, 17, 20, 22, 23]. Services on the device are implemented using a "micro FaaS" – a small, specially tuned implementation of FaaS specifically targeting resource-restricted microcontrollers as an execution platform. The micro FaaS supports the same programming APIs as a heavier-weight (and serverless) implementation designed to run on an edge device, in a private cloud, or in a public cloud. In this way, IoT applications can be coded using a single "FaaS everywhere" set of programming abstractions from device to cloud.

To enable this portability and also data durability in a distributed setting, such scale-spanning FaaS capability must define a portable storage abstraction, ideally having append-only semantics. Thus all computations, regardless of location, can persist data by appending it to some object hosted in the device-edge-cloud hierarchy using a common API.

2.2 Request Forwarding and Duty Cycle

Note that to save power, some devices will need to spend most of their duty cycle in a power-saving "sleep" mode. For ex…
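To ground the "FaaS everywhere" and append-only storage abstractions of Section 2.1, the following is a minimal sketch of what a device-hosted function might look like under those assumptions. The runtime context and the append call are hypothetical stand-ins for whatever API a micro FaaS actually exposes; the intent is only to show that one handler body could be deployed unchanged on a microcontroller runtime, an edge node, or a cloud FaaS platform.

```python
# Hypothetical "FaaS everywhere" handler. The Context/AppendOnlyStore names are
# illustrative assumptions, not the paper's micro FaaS API. One function body
# targets device, edge, and cloud runtimes alike, and persists results only by
# appending records to a named object through a common storage abstraction.
import json, time

class AppendOnlyStore:
    """Stand-in for the portable storage abstraction: objects only ever grow."""
    def __init__(self) -> None:
        self._objects: dict[str, list[bytes]] = {}

    def append(self, obj_name: str, record: bytes) -> int:
        log = self._objects.setdefault(obj_name, [])
        log.append(record)      # no update or delete: append-only semantics
        return len(log) - 1     # index of the newly appended record

class Context:
    """Per-invocation context a micro FaaS runtime might hand to a function."""
    def __init__(self, storage: AppendOnlyStore) -> None:
        self.storage = storage

def read_temperature(request: dict, ctx: Context) -> dict:
    """Device-hosted nanoservice: sample a sensor, log the reading, reply."""
    reading = {"device": "thermostat-17",
               "celsius": 21.5,            # fixed value stands in for a sensor read
               "ts": time.time()}
    ctx.storage.append("thermostat-17/temperature", json.dumps(reading).encode())
    return {"status": 200, "body": reading}

# The same handler could be invoked by a microcontroller runtime, an edge node,
# or a cloud FaaS engine; only the Context implementation behind it differs.
ctx = Context(AppendOnlyStore())
print(read_temperature({"fn": "read_temperature"}, ctx))
```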