Masarykova univerzita
Fakulta informatiky

Performance comparison of JBoss integration platform implementations

Master Thesis

Elena Medvedeva

Brno, May 2014

Declaration

Hereby I declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during the elaboration of this work are properly cited and listed in complete reference to the due source.

Elena Medvedeva

Advisor: Mgr. Marek Grác, Ph.D.

Acknowledgement

I would like to thank my technical supervisor from Red Hat, Ing. Pavel Macík, for sharing his vast experience in the field of performance testing and for the valuable advice given to me during the preparation of this work. I am grateful to my supervisor, Mgr. Marek Grác, Ph.D., for his comments and consultations. Besides, I want to thank all my colleagues in the JBoss Fuse QA team for their support, and the company Red Hat, which provided me with the opportunity to write this thesis.

Abstract

We present the results of our investigation in the field of performance testing of JBoss integration platforms, comparing the performance of JBoss Fuse and SwitchYard. We design a basic set of performance scenarios to cover basic usage patterns of integration platforms. We formalize the scenarios that are already implemented for SwitchYard, and we develop three new scenarios. All fifteen scenarios are implemented for JBoss Fuse using PerfCake and tweaked to be optimal from the performance point of view. In addition, performance test execution is automated in a distributed environment using Jenkins and SmartFrog. Finally, we collect the results of performance testing and compare the performance of the two JBoss integration platform implementations, JBoss Fuse and SwitchYard.

Keywords

performance testing, integration platforms, system integration, JBoss Fuse, JBoss SwitchYard, PerfCake, SmartFrog, Apache ActiveMQ, Apache CXF, web services, Camel routes

Contents

1 Introduction ...... 4
2 Enterprise service bus (ESB) ...... 5
  2.1 Evolution of Java Applications ...... 5
  2.2 Service-Oriented Architecture concept ...... 6
  2.3 Definition of Enterprise Service Bus (ESB) ...... 7
    2.3.1 Integration framework ...... 7
    2.3.2 ESB ...... 8
    2.3.3 Integration Suite ...... 8
3 JBoss integration platforms ...... 9
  3.1 JBoss Fuse ...... 9
    3.1.1 Apache Camel ...... 10
    3.1.2 Apache ActiveMQ ...... 10
    3.1.3 Apache CXF ...... 12
    3.1.4 Apache Karaf ...... 14
    3.1.5 Fuse Fabric ...... 15
  3.2 SwitchYard ...... 15
4 Performance testing ...... 17
  4.1 Types of performance testing ...... 18
    4.1.1 Load testing ...... 18
    4.1.2 Stress testing ...... 19
    4.1.3 Soak or stability testing ...... 19
    4.1.4 Configuration testing ...... 19
    4.1.5 Smoke testing ...... 19
  4.2 Performance metrics ...... 19
  4.3 Performance standards ...... 21
  4.4 Tasks to fulfill during performance testing ...... 21
  4.5 Open source performance testing tools ...... 22
5 Testing environment and test automation ...... 24
  5.1 Testing environment ...... 24
  5.2 Test automation ...... 25
    5.2.1 Test automation tools ...... 25
6 Task formulation and test automation implementation ...... 27
  6.1 Implementation of test automation ...... 27
  6.2 Environment characteristics ...... 30
7 Performance tests scenarios design and implementation ...... 31
  7.1 General architecture of tests ...... 31
  7.2 Performance test scenarios design ...... 32
    7.2.1 Scenario: HTTP exposed custom service ...... 34
    7.2.2 Scenario: SOAP exposed custom service ...... 34
    7.2.3 Scenario: JMS exposed custom service ...... 34
    7.2.4 Scenario: HTTP exposed content based routing using XPath ...... 35
    7.2.5 Scenario: HTTP exposed content based routing using RegEx ...... 36
    7.2.6 Scenario: HTTP exposed content based routing using Rules ...... 36
    7.2.7 Scenario: HTTP exposed Services implementing Scatter-Gather pattern ...... 37
    7.2.8 Scenario: HTTP exposed Services implementing Splitter-Aggregator pattern ...... 38
    7.2.9 Scenario: Service orchestration ...... 38
    7.2.10 Scenario: SOAP exposed XML message transformation using XSLT ...... 39
    7.2.11 Scenario: SOAP implementation of a web service using JAX-WS ...... 39
    7.2.12 Scenario: SOAP implementation of a web service using JAX-WS secured by WS-Security ...... 40
    7.2.13 Scenario: SOAP web service proxy ...... 40
    7.2.14 Scenario: Method GET of a RESTful web service implementation ...... 41
    7.2.15 Scenario: Method POST of a RESTful web service implementation ...... 41
  7.3 Implementation of performance scenarios for JBoss Fuse ...... 41
    7.3.1 HTTP exposed custom service ...... 42
    7.3.2 SOAP exposed custom service ...... 43
    7.3.3 JMS exposed custom service ...... 44
    7.3.4 HTTP exposed content based routing using XPath ...... 44
    7.3.5 HTTP exposed content based routing using RegEx ...... 45
    7.3.6 HTTP exposed content based routing using Rules ...... 46
    7.3.7 HTTP exposed Services implementing Scatter-Gather pattern ...... 48
    7.3.8 HTTP exposed Services implementing Splitter-Aggregator pattern ...... 49
    7.3.9 Service orchestration ...... 50
    7.3.10 SOAP exposed XML message transformation using XSLT ...... 51
    7.3.11 SOAP implementation of a web service using JAX-WS ...... 52
    7.3.12 SOAP implementation of a web service using JAX-WS secured by WS-Security ...... 52
    7.3.13 SOAP web service proxy ...... 53
    7.3.14 Method GET of a RESTful web service implementation ...... 53
    7.3.15 Method POST of a RESTful web service implementation ...... 54
8 Results ...... 55
  8.1 Concluding the results ...... 56
  8.2 Effort allocation and issues encountered ...... 57
9 Conclusion ...... 60

1 Introduction

Nowadays, the market demands software applications to become more and more complex, processing data from various providers presented in different formats. Integration platforms were introduced to help developers create complex data interaction and processing in their applications using a standardized approach. Integration platforms are used to integrate different applications and services.

In this work we consider two JBoss implementations of integration platforms: JBoss Fuse and SwitchYard. These projects provide similar functionality, so it is useful to find out which implementation works better from the performance perspective, and under which circumstances. The goal of this work is to compare the performance of these two integration platform implementations.

The thesis consists of nine chapters. Chapter 1 briefly introduces the topic of the thesis. Chapter 2 provides an overview of the notions of service-oriented architecture and enterprise service bus. In Chapter 3 the JBoss integration platforms JBoss Fuse and SwitchYard are described in detail, focusing on JBoss Fuse in particular; the technologies for working with JBoss Fuse are introduced, such as Apache Camel, Apache ActiveMQ, Apache Karaf, and Apache CXF. The methodology of performance testing is defined in Chapter 4. Chapter 5 covers the theory underlying test automation and the design of the test environment. In Chapter 6 we describe the process of test automation implementation for JBoss Fuse performance tests, and define the characteristics of the environment where the tests were executed. Chapter 7 focuses on the formulation of test scenarios for the performance comparison of the platforms, and also describes the implementation of the scenarios for JBoss Fuse. Chapter 8 contains the results of the performance measurements, the analysis of the results, and a description of the allocation of effort while working on the master thesis. This chapter also describes the issues faced during the work.

2 Enterprise service bus (ESB)

2.1 Evolution of Java Applications

According to the book [5], the first Web sites were built mainly from static content. Static content is delivered to the user exactly as stored, unlike dynamic content, which is generated by the web server at the time the user requests the page. When the owner of a website wanted to modify a page, he had to modify the physical HTML file. Each operation required too much effort, and in order to solve this issue, tools and frameworks for dynamic Web content generation appeared.

In 1997 the servlet specification was released by Sun Microsystems. A servlet is a Java program that runs within a Web server. Servlets receive and respond to requests from Web clients, usually across the HyperText Transfer Protocol (HTTP) 1. Servlets were used to generate dynamic content (pages). But soon developers understood that it was not convenient to put presentation details into Java code. Due to this fact, in 1999 Sun released the JavaServer Pages (JSP) specification. JSP technology makes it possible to integrate Java code into HTML tags to generate pages dynamically. JSP technology makes available all the dynamic capabilities of Java Servlet technology but provides a more natural approach to creating static content [10]. But it was too complicated to put all business logic into JSP.

As a result of all of the above, the Model-View-Controller (MVC) design pattern was created. In this variation of MVC, JSPs were used for presentation (View) and servlets represented the business logic (Controller). With this separation into layers, applications became more maintainable and flexible. Apart from that, Enterprise JavaBeans (EJB) were introduced to deal with persistence, transaction integrity, and security in a standard way [10]. As Java gave developers a lot of flexibility while creating applications, design patterns were introduced to represent best practices in application development.

1. http://www.w3.org/Protocols/

2.2 Service-Oriented Architecture concept

As time went by and business applications became more and more complex, a new architectural concept for the development of Java applications emerged: Service-Oriented Architecture (SOA). In simple words, SOA and Web services facilitated the interoperability between frameworks or applications written in different languages and running on different operating systems. Interoperability is the ability of systems and organizations to work together (inter-operate).

The following definition of SOA was produced by the SOA Definition team of The Open Group SOA Working Group. Service-Oriented Architecture (SOA) is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services and service-based development and the outcomes of services. A service is a logical representation of a repeatable business activity that has a specified outcome (e.g., check customer credit, provide weather data, consolidate drilling reports), and

• is self-contained;
• may be composed of other services;
• is a “black box” to consumers of the service [14].

Among the basic principles of SOA are standardization and service loose coupling. Standardization means that a service is defined by one or more service-description documents, and its implementation can easily be substituted by another implementation which satisfies the same contract. Service loose coupling refers to the minimization of dependencies inside the system.

Web services are a collection of technologies that implements a service-oriented architecture. Web services are a platform- and technology-agnostic collection of specifications by which services can be published, discovered, and communicate with one another [5].

2.3 Definition of Enterprise Service Bus (ESB)

An enterprise service bus (ESB) is a software architecture model used for designing and implementing communication between mutually interacting software applications in a service-oriented architecture (SOA). The ESB was introduced as a solution for communication between different applications, even across different companies.

Figure 2.1: An example of an enterprise service bus.

Currently there is no standard definition of the term 'ESB'. In this work we will distinguish an ESB from an integration framework and an integration suite according to the paper 'Choosing the Right ESB for Your Integration Needs' by Kai Wähner [20].

2.3.1 Integration framework

An integration framework (IF) implements Enterprise Integration Patterns (EIP), which are designed to integrate applications in a standardized way. Examples of Java-based integration frameworks include Apache Camel and Spring Integration. The usage of an IF reduces developer effort. An IF

supports different protocols and technologies. It also uses EIPs to specify the way of communication between those technologies. An integration framework also simplifies understanding of the integration code.

2.3.2 ESB

An Enterprise Service Bus (ESB) is based on an integration framework and adds tools for deployment, administration, and monitoring at run time. Besides, it offers graphical editors for various integration scenarios; sometimes these editors support "drag and drop", with the source code generated automatically. An ESB allows integration at a higher abstraction level than an integration framework. Examples of ESBs are JBoss Fuse ESB and SwitchYard, both developed mainly by the JBoss Community, which will be considered later, and also Mule ESB. All of these ESBs are open source. Among proprietary solutions the most prominent are Oracle Service Bus and IBM WebSphere ESB.

2.3.3 Integration Suite

An integration suite adds to an ESB tooling for the following features: Business Process Management (BPM), Business Activity Monitoring (BAM), Master Data Management (MDM), and possibly a repository. Business Process Management refers to a systematic approach to the definition, description, and improvement of an organization's business processes; it uses a specific language for the description of business processes. Business Activity Monitoring is an application providing real-time data about the status and results of processes and transactions; it provides flexible configuration of the monitoring of, for example, running services. Master Data Management provides a single point for managing the important data of an application. The repository is used for version control and dependency monitoring of the applications currently deployed in the suite.

3 JBoss integration platforms

This thesis is dedicated to the performance comparison of two JBoss projects: JBoss Fuse and SwitchYard. First, I would like to take a closer look at them.

3.1 JBoss Fuse

According to the classification from the previous chapter, JBoss Fuse is a pure enterprise service bus. It is based on Apache Camel as an integration framework, Apache CXF as a services framework, and Apache ActiveMQ as a messaging framework. All of those frameworks run on Apache Karaf, which provides an OSGi-based container where applications and components can be deployed. Apart from that, JBoss Fuse contains Fuse Fabric for managing multiple containers running on different hosts. Fuse IDE is a plugin for Eclipse 1 which offers design-time editing of Camel routes and advanced run-time and debugging features. Besides, there is Hawtio, a web-based management console.

Figure 3.1: Technologies comprising JBoss Fuse.

1. http://www.eclipse.org/

3.1.1 Apache Camel

As an integration framework, Apache Camel provides a way for developers to integrate different systems into one application. Different protocols can be used for communication with different systems. Camel allows developers to create routing rules, which determine the source of the messages, what should happen with them during routing (some transformations), and the destination (where to send the resulting messages). The messages are used for communication between the systems inside the application. In Camel the routing rules are defined regardless of the protocol and data type the systems are using, thus creating another level of abstraction; the transformation to the required protocol happens automatically. Currently, there are more than 150 connectors implemented in Camel to different systems and technologies, ranging from the most simple, such as files, URLs, and POJOs (plain old Java objects, i.e., ordinary Java objects), to more complex ones, such as web services, SAP, Facebook, Salesforce, and ActiveMQ or other messaging frameworks.

Routing rules can include elements of Enterprise Integration Patterns (EIPs), which allows developers to create complex routing suitable for business process management. The routes can be described using different languages, such as Java or XML (in Blueprint XML or in Spring XML) [7]. Blueprint XML and Spring XML are two dependency injection frameworks supported by JBoss Fuse [17]. The advantage of Blueprint XML is that it automatically resolves the dependencies at run time, if the project was packaged as an OSGi bundle. To sum up, Apache Camel's main goals are:

• to offer concrete implementations of all the widely used EIPs;
• to provide connectivity to a great variety of transports and application programming interfaces (APIs) [1].

The advantages of Camel include the large community of users and developers, and its extensible architecture, which allows third-party developers to add connectors to new protocols.
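The routing rules described above can be sketched in Blueprint XML. The following fragment is illustrative only: the endpoint URIs, queue names, and XPath expression are invented for the example, and it assumes the camel-jetty and ActiveMQ components are installed in the container.

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <!-- consume messages from an HTTP endpoint -->
      <from uri="jetty:http://0.0.0.0:8181/orders"/>
      <choice>
        <when>
          <!-- content-based routing: inspect the message body with XPath -->
          <xpath>/order[@priority = 'high']</xpath>
          <to uri="activemq:queue:orders.express"/>
        </when>
        <otherwise>
          <to uri="activemq:queue:orders.standard"/>
        </otherwise>
      </choice>
    </route>
  </camelContext>
</blueprint>
```

The route is written against the endpoint URIs only; Camel performs the protocol conversion between the HTTP consumer and the JMS producer automatically, which is exactly the extra abstraction level mentioned above.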

3.1.2 Apache ActiveMQ

Apache ActiveMQ is an example of Message-Oriented Middleware (MOM), an application which uses messages for communication between its

parts, which provides the advantage of loose coupling. The Java Message Service (JMS) is a standard specifying how Java applications should create, send, and receive messages. Using this standard, a JMS client written in one technology can exchange messages with another JMS client through a JMS provider. A JMS provider is an implementation of the JMS interfaces, ideally written in pure Java [book ActiveMQ in Action]. According to the JMS specification, a JMS producer is created by a client application to create and send JMS messages, and a JMS consumer is created by a client application to receive and process JMS messages. JMS provides two models of communication:

• point-to-point;

• publish/subscribe.

In point-to-point communication the JMS producer sends messages to a JMS queue. The messages stay in the JMS queue until a consumer gets them from the queue; when a message is consumed, it is removed from the queue. The key point is that each message is delivered to only one JMS client. The JMS queue is the area where messages are stored until they are consumed by a JMS consumer or expire. In publish/subscribe communication the messages are sent to a JMS topic, and from the topic they are received by all clients who are subscribed to the topic. The main difference from point-to-point communication is that a message can get to many consumers (in this model they are called subscribers) at once. Topics do not store messages unless they are explicitly instructed to do so.

JMS offers two message delivery modes: persistent and non-persistent. Persistent messages are delivered once and only once to the destination, even if the provider fails to deliver them the first time. This puts more overhead on the database of the JMS provider, since a message is stored until it is delivered to its destinations. Non-persistent messages are delivered at most once, which means that if the provider fails to deliver the message, it will not try again. In this case there is no overhead on the provider, which increases performance but decreases reliability.

The control over the messages, routing them to the correct queues or topics, and other activities on the provider side are done by a message broker. A message broker is an architectural pattern for message validation, message transformation, and message routing [20].
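JMS itself requires a provider such as ActiveMQ, but the difference between the two communication models can be illustrated with a JDK-only sketch. The class and message names here are invented, and java.util.concurrent queues merely stand in for the broker's destinations; this is an analogy, not the JMS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagingModels {

    // Point-to-point: each message put on the queue is taken by exactly one consumer.
    static List<String> pointToPoint() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("order-1");
        queue.put("order-2");
        List<String> delivered = new ArrayList<>();
        delivered.add(queue.take()); // consumer A receives "order-1"
        delivered.add(queue.take()); // consumer B receives "order-2"; "order-1" is gone
        return delivered;
    }

    // Publish/subscribe: the broker copies each published message to every subscriber.
    static List<String> publishSubscribe(int subscriberCount) throws InterruptedException {
        List<BlockingQueue<String>> subscribers = new ArrayList<>();
        for (int i = 0; i < subscriberCount; i++) {
            subscribers.add(new LinkedBlockingQueue<>());
        }
        for (BlockingQueue<String> sub : subscribers) {
            sub.put("price-update"); // each topic subscriber gets its own copy
        }
        List<String> received = new ArrayList<>();
        for (BlockingQueue<String> sub : subscribers) {
            received.add(sub.take());
        }
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pointToPoint());      // [order-1, order-2]
        System.out.println(publishSubscribe(2)); // [price-update, price-update]
    }
}
```

The queue delivers each message exactly once to a single taker, while the "topic" loop hands an independent copy to every subscriber, which mirrors the semantics described above.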

ActiveMQ is an open source, JMS 1.1 compliant, message-oriented middleware (MOM) from the Apache Software Foundation that provides high availability, performance, scalability, reliability, and security for enterprise messaging [19]. ActiveMQ provides implementations of a JMS client and a message broker.

3.1.3 Apache CXF

In order to make a software system available to other systems over the Web, one can create a web service: a software system with a standard interface which is available at some Web address (URL). Apache CXF is one of the leading standards-based web services frameworks, whose goal is to simplify web services development [book Apache CXF Web service development]. Web services can be developed using two main approaches:

• using the Simple Object Access Protocol (SOAP);
• using the Representational State Transfer (REST) architectural style.

Apache CXF supports both of these approaches.

The Simple Object Access Protocol (SOAP) is a protocol for exchanging XML-based messages over a network, typically using the HTTP protocol [book Apache CXF Web service development]. A SOAP message consists of a body and a header, where the header stores information about security, transactions, and other context-related information, while the body contains the application data (or payload).

The Web Services Description Language (WSDL) is an XML-based language for the description of web services. Currently there are two standards, WSDL 1.1 and WSDL 2.0. A WSDL file defines the operations (functions and procedures) which a web service provides, the input and output arguments and their types, and also the exact protocol binding (for example SOAP) and the endpoint or port binding, typically represented by a simple HTTP URL string. In SOA an endpoint is an entry point to a service, a process, or a queue or topic destination.

So, according to the World Wide Web Consortium (W3C): A Web service is a software system identified by a URI whose public interfaces and bindings are defined and described using XML (specifically WSDL). Its

definition can be discovered by other software systems. These systems may then interact with the web service in a manner prescribed by its definition, using XML-based messages conveyed by Internet protocols.

The Java API for XML Web Services (JAX-WS) is a specification designed to simplify the construction of primarily SOAP-based web services and web service clients in Java [3]. This is one of the most important approaches to web service development.

Representational State Transfer (REST) is a style of building a distributed application architecture which is often used to build web services. Systems which implement REST are called 'RESTful' systems. The REST architecture operates on resources. Each resource has an identifier within the system; for HTTP resources, the identifiers are URL addresses. Web services developed using the REST approach are viewed as resources and identified by their URI. A method or function of a web service is an action, and actions are identified by four commands: GET, POST, PUT, and DELETE.

The Java API for RESTful Web Services (JAX-RS) is a specification that determines the semantics to create web services according to the REST architectural style [3]. This technology makes it possible to expose any Java class as a web service using annotations and a special servlet provided by implementations of JAX-RS.

The JAX-WS and JAX-RS specifications provide sets of annotations to expose POJOs as SOAP and RESTful web services, which makes the creation of web services easy. But unlike JAX-WS, in JAX-RS no additional XML configuration such as WSDL is required to implement a web service. RESTful web services are considered to be a simpler technology to implement than SOAP-based web services.
Besides, while following the REST architectural style there is less coupling between a service and its client, so when something changes in the web service contract, developers do not necessarily have to change the client's implementation, unlike with SOAP-based web services. In addition, in RESTful implementations there is the possibility to send messages between service and client in data formats other than XML, such as JavaScript Object Notation (JSON). CXF implements the JAX-WS and JAX-RS specifications. CXF also provides a set of APIs to expose POJOs as web services and to create web service clients.
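As a minimal illustration of the REST idea without any web service framework, the JDK's built-in com.sun.net.httpserver package can expose a single GET resource. The /hello path, the JSON payload, and the class and method names are choices made for this sketch and are not part of JAX-RS or CXF.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestSketch {

    // Start a tiny HTTP server exposing one resource at /hello.
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "{\"message\":\"hello\"}".getBytes(StandardCharsets.UTF_8);
            // the action is identified by the HTTP verb; only GET is implemented here
            int status = "GET".equals(exchange.getRequestMethod()) ? 200 : 405;
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(status, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    // Minimal client: identify the resource by its URI and invoke GET on it.
    static String get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        try (InputStream in = conn.getInputStream()) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // port 0 picks a free port
        try {
            int port = server.getAddress().getPort();
            System.out.println(get("http://localhost:" + port + "/hello"));
        } finally {
            server.stop(0);
        }
    }
}
```

The resource is addressed purely by its URI and the verb selects the action, which is the essence of the REST style described above; JAX-RS implementations such as CXF add the annotation-driven mapping on top of this model.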

3.1.4 Apache Karaf

Apache Karaf provides a container for managing the life cycle of OSGi bundles. OSGi (Open Service Gateway initiative) is a set of specifications that define a dynamic component system for Java, introduced by the OSGi Alliance [16]. This specification is used to construct enterprise applications and complex desktop applications, such as the Eclipse SDK with its pluggable architecture. According to this specification, the basic unit is an OSGi bundle. OSGi bundles contain Java classes and other resources that together can implement some functions, as well as provide services and packages to other bundles. Technically, an OSGi bundle is a jar archive with a special file, called the manifest, which describes the classes and interfaces to be exported and imported by the bundle, and also includes information about the name and version. Maven allows automatic generation of the manifest file using the maven-bundle-plugin. A bundle can be in several states in the OSGi system:

Installed: the bundle was successfully installed into the system.

Resolved: all dependencies were resolved. All the Java classes and bundles on which the bundle depends are available. This status indicates that the bundle is ready to start.

Starting: the bundle is starting.

Active: the bundle was started successfully.

Stopping: the bundle is stopping.

Uninstalled: the bundle was stopped or was never started. The bundle was deleted, so it cannot go to any other state unless it is installed again.

There are several frameworks which implement the OSGi technology, among them the Apache Felix and Eclipse Equinox OSGi frameworks [2]. Apache Karaf can be configured to use either of them and adds additional functionality. Among the most important features are hot deployment (Karaf automatically starts all files which are in the deploy directory), versioning, a management console, advanced logging, security,

the possibility to manage multiple instances of Apache Karaf through the main instance (root), and other features. By default Apache Karaf runs on the Apache Felix framework. It is possible to deploy OSGi bundles, war files, FABs, or features.

OSGi-bundle: An OSGi bundle is a jar file with a manifest (described earlier).

War: A war file is a jar archive of a web application.

Fuse application bundle (FAB): A FAB is a jar file which is converted by Fuse ESB to an OSGi bundle after installation.

Feature: A feature is a way of aggregating multiple OSGi bundles into a single unit of deployment [17]. A feature is represented by an XML file which contains the Maven coordinates of the bundles, and possibly other features, included in the feature.
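For illustration, such a feature file might look roughly as follows; the feature name and Maven coordinates are invented, and the structure follows the Karaf features schema:

```xml
<features name="example-features"
          xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
  <feature name="greeter" version="1.0.0">
    <!-- Maven coordinates of the bundles the feature aggregates -->
    <bundle>mvn:com.example/greeter-api/1.0.0</bundle>
    <bundle>mvn:com.example/greeter-impl/1.0.0</bundle>
    <!-- a feature can also pull in other features -->
    <feature>camel-core</feature>
  </feature>
</features>
```

Installing the single feature then installs all of the listed bundles and dependent features in one step.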

3.1.5 Fuse Fabric

Fuse Fabric is a technology layer that supports the scalable deployment of Fuse ESB Enterprise containers across a network [Fuse documentation]. A Fuse ESB Enterprise container is a container for the deployment of OSGi bundles, FABs, and war files, running on the Fuse kernel. Fuse Fabric provides the possibility to manage and monitor multiple Karaf container instances in the cloud (on multiple hosts). The test scenarios were developed for the comparison of SwitchYard and JBoss Fuse, but there is no alternative to Fuse Fabric in SwitchYard.

3.2 SwitchYard

SwitchYard is a component-based development framework focused on building structured, maintainable services and applications using the concepts and best practices of SOA [9]. It is also an enterprise service bus (ESB) according to the classification described in Chapter 2, though it additionally supports business process management (BPM). SwitchYard is an open source JBoss project.

It uses Apache Camel as its integration framework, which was described earlier. HornetQ is used as the messaging framework; it is an example of Message-Oriented Middleware and a JMS provider implementation, and it performs the same functions in SwitchYard as ActiveMQ does in JBoss Fuse. SwitchYard runs on the JBoss Application Server (JBoss AS). The JBoss Application Server is a Java EE application server platform for developing and deploying enterprise Java applications, web applications, and web portals [8].

SwitchYard supports integration with jBPM, which is a business process management framework used for service orchestration and human task integration expressed as BPMN 2 [9]. BPMN 2 (Business Process Model and Notation) is a graphical representation for specifying business processes in a business process model [8]. There is an Eclipse plugin for the visual representation of the integration design for SwitchYard. SwitchYard offers full support for Java EE 6.

I will focus mainly on the description of JBoss Fuse, as the implementations of the performance test scenarios for SwitchYard were already created by the SwitchYard team. Besides, the theoretical description of the technologies used in the test scenarios was given in Section 3.1 of this chapter.

4 Performance testing

Software testing is a process of examining software that pursues two goals: to show developers and customers that the program meets the requirements, and to identify situations in which the behavior of the program is incorrect, inappropriate, or inconsistent with the specification. According to the subject of testing, the following categories can be distinguished:

• functional testing;

• performance testing;

• usability testing;

• security testing;

• localization testing;

• compatibility testing.

All the types of tests defined above are dedicated to ensuring that the application possesses the characteristics required by the ISO standards 1, which are used for the evaluation of software quality. The ISO/IEC 9126 standard consists of six main criteria:

• Functionality: Are the required functions available in the soft- ware?

• Reliability: How reliable is the software?

• Usability: Is the software easy to use?

• Efficiency: How efficient is the software?

• Maintainability: How easy is it to modify the software?

• Portability: How easy is it to transfer the software to another environment?

1. http://www.iso.org/iso/home.html

Performance is a very important characteristic of any software application. It provides information about the efficiency and reliability of the software. A well-performing application allows users to perform tasks without significant delay and irritation.

According to the book 'Pro Java EE 5 Performance Management and Optimization' by S. Haines [5], the impact of poor performance results in lost productivity, which can also lead to a loss of customer confidence and credibility, and as a result to a decrease in revenue. When using poorly performing software as an internal application, a company is paying its employees for waiting for the software to respond. Moreover, troubleshooting takes more time, and happens more often, in poorly performing applications. The customers of such a company can lose confidence in its services if the employees deliver them after the deadline; as a result, they will select other, better performing companies.

Before proceeding to performance testing it is very important to understand the goals of performance testing in the given case. The possible goals include:

• to make sure that all basic transactions of the system meet some predefined performance criteria;

• to compare several systems to find out which system is better in terms of performance and for which transaction;

• to find out which parts of the system have the worst performance.

4.1 Types of performance testing

There is no single approach to the definition of the types of performance tests. The following areas are distinguished: load tests, stress tests, endurance (soak or stability) tests, configuration tests, and smoke tests [12].

4.1.1 Load testing

Load testing is the classic form of performance testing, where the application is loaded up to a specified level but usually not further. Load testing is usually performed in order to evaluate the behavior of the

18 4. Performance testing application on a given expected load. This load can be, for example, the expected number of concurrent users of the application, creating a specified number of transactions per time interval.

4.1.2 Stress testing

Stress testing is used to measure the upper limits, or the sizing, of the infrastructure. Thus, a stress test continues until something breaks, e.g. no more users can log in, the application becomes unavailable, etc. It is important to know the upper limits of the application, especially if future growth of the application traffic is expected.

4.1.3 Soak or stability testing

Stability testing is dedicated to identifying problems which occur only after a long period of time. The aim of this kind of testing is to make sure that the application will work as expected under a specified load over a long time-frame. Classic examples of problems which can be found during such tests are a slowly developing memory leak or a gradual decrease in the number of transactions executed per time-frame. Thus, memory utilization is monitored to detect potential leaks.

4.1.4 Configuration testing

The aim of configuration tests is to determine how different system configurations influence the performance of the system. These tests can also be combined with load, stress or stability tests.

4.1.5 Smoke testing

In performance testing, this term refers to testing only those transactions that have been affected by a code change.

4.2 Performance metrics

The performance of the application can be assessed using the following criteria:

Response time specifies the amount of time the user has to wait to get the response from the application. In terms of performance testing, this is the time between the user's request and a complete reply arriving at the user's workstation.

Throughput defines the number of things we can accomplish in a time period. In Java EE terms, request throughput is the number of requests a system can service in a second. The goal is to maximize request throughput and to measure it against the number of simultaneous requests. Request throughput reflects the response time [5].

Resource utilization defines the percentage of a resource's capacity that is being used by the application. These results help to analyze the work of the application, and help to find the root cause of performance degradation, if any. The most important metrics are CPU usage and memory usage. Other metrics for Java applications include thread pools, JDBC connection pools, caches, JMS servers and others.

In the case of Java applications, all Java objects are placed in a part of the memory called the heap. When the heap becomes full of references to objects which are no longer in use, the memory is cleaned by a special automatic process called the "Garbage Collector". The time spent by the CPU to clean the memory can be significant when the process has occupied all available memory (in Java, the so-called "Full GC") or when the process has allocated a lot of memory which now needs to be cleaned. While the Garbage Collector is cleaning the memory, access to the allocated pages of memory is blocked, which may affect performance.

Besides, as specified in the Oracle documentation, when the heap memory is fragmented (there are a lot of small free spaces in the heap, but the allocation of large objects is hard or even impossible), the process of compaction is executed, which moves objects closer together, thus creating larger free areas for new objects. Compaction is performed while all Java threads are paused, which can also influence the performance of the application.
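As an illustration of how GC-related resource metrics can be collected, the JDK's standard management API exposes, for every garbage collector of the running JVM, the accumulated number of collections and the time spent in them. A minimal sketch (the class and method names are our own, not from the thesis code):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    /** Sums the collection time (in ms) across all garbage collectors of this JVM. */
    static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        // Prints how much CPU time this JVM has spent in GC so far.
        System.out.println("Time spent in GC so far: " + totalGcTimeMs() + " ms");
    }
}
```

Sampling this value before and after a test run gives a rough picture of how much the Garbage Collector contributed to the measured response times.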

Availability defines how much time the application is available to the users. The application is considered unavailable when the users are completely unable to use it effectively.

4.3 Performance standards

According to the article by Jakob Nielsen 'Response Times: The 3 Important Limits' [13], there are 3 main time limits (which are determined by human perceptual abilities) to keep in mind when optimizing web and application performance:

• 0.1 second: Time limit for a user to have the illusion that he is manipulating objects in the user interface directly.

• 1 second: Time limit for a user not to get irritated by a delay in the application processing time, though the user will notice the delay. This is the time-frame during which a user can perform operations which require him to remember information throughout several responses.

• 10 seconds: Limit for users keeping their attention on the task.

• more than 10 seconds: After waiting more than 10 seconds, the average user will switch his attention to another task.

4.4 Tasks to fulfill during performance testing

According to the book 'The Art of Application Performance Testing' by I. Molyneaux [12], the following tasks should be undertaken during performance testing:

• Gather performance requirements from the customer.

• Develop a high-level plan, including requirements, resources, time-frames and milestones.

• Decide on the test team involved in performance testing.

• Design the test environment for performance testing.

• Choose a testing tool.

• Perform the Proof of Concept for the chosen tool.

• Develop a detailed performance test plan that includes all de- pendencies and associated time lines, detailed scenarios and test cases, workloads, and environment information.

• Configure the test environment. Strive to make the test environment a close approximation of the live environment: ideally hardware identical to the production platform, the same router configuration, a quiet network (because the results shouldn't be influenced by other users), deployment of server instrumentation, a database populated realistically in terms of content and sizing, etc.

• Transaction scripting. Implement each test scenario in the test plan.

• Run the performance tests. Run and monitor the tests enough times to make sure the results are not affected by some unaccounted-for factor.

• Analyze results, report, retest. Document all necessary information about each test run, investigate problems, apply corrective actions, retest if necessary.

4.5 Open source performance testing tools

Performance testing of a web server application can be performed using automated tools such as:

JMeter: This is the most popular tool for performance testing. It is written in Java, and supports testing of the following technologies: JDBC / FTP / LDAP / SOAP / JMS / POP3 / HTTP / TCP. It allows creating a large number of requests from different computers and monitoring the process of testing from one of them. JMeter can be used to make a graphical analysis of performance. The tool supports extensions via plugins.

PerfCake: This is a relatively new framework for performance testing. The first release appeared in 2013. It is written in Java, and the following technologies can be tested with it: HTTP, REST, JMS, JDBC, SOAP, sockets, files. PerfCake uses an XML description file for the configuration of tests. The tool can measure throughput, response time and memory consumption of the target JVM. Test results are reported to the shell console or to CSV files. The tool can be extended via plugins2.

ApacheBench: This is also a very popular tool, and one of the simplest to use for performance testing. All test settings are described on the command line, and no configuration files are required. It is single-threaded, and can measure the performance only of HTTP queries. ApacheBench was originally designed to test the Apache HTTP Server, but it is generic enough to test any web server. The tool supports the GET and POST methods. Results can be generated in CSV format.

Curl-loader: Curl-loader is a web application testing and load generating tool written in C. The tool can simulate tens or even hundreds of thousands of users/clients, each with its own IP address. It supports the HTTP, FTP and TLS/SSL protocols. Besides, user authentication and login can be tested using Curl-loader, and it provides a range of statistics [15].

All the tools described above support Linux; PerfCake and JMeter can also run on Windows.

2. https://www.perfcake.org/

5 Testing environment and test automation

5.1 Testing environment

The testing environment is a combination of configured hardware and software on which the testing team performs the testing of the application [11]. When creating the testing environment, the main goal is to simulate as closely as possible the usage of the application in production. Typically, the following configuration is used for the testing of an enterprise service bus:

Server host: This is the host where the server application is running. In the case of performance testing, the enterprise service bus is running on this host with a testing application deployed to it. This application implements some functionality from the test scenario.

Client host: On this host runs the testing tool which generates the load. The testing tool sends requests over the network to the testing application running on the server host, and waits for a response.

Database host: Generally, the database used by the ESB is located on a separate host in production. Accordingly, there might be a special machine dedicated to the database of the testing application.

Helper host: This is an optional host, which can be used for the simulation of some third-party dependencies of the scenario implementation. For example, it can be used to run web services which are called from the testing application.

The database and helper hosts are optional, and are used when it makes sense. Another reason why a client-server environment is used for performance testing is that the testing tool which generates the load consumes resources; when placed on a separate host, it doesn't affect the throughput of the tested system. There is, however, an influence of the network connecting the hosts on the throughput. In order to minimize the network influence, the tests should be run in a dedicated high-performance network.

5.2 Test automation

Test automation is the use of special software to run and configure the tests and the test environment, and to collect the test results. The software which runs the tests should be different from the software which is tested. Automation is very beneficial for performance testing, as it allows the preparation of the tests to run in the same flow, with the preparation actions performed in the same sequence and with the same delays for all scenarios. Besides, the tests can be run faster.

5.2.1 Test automation tools

Jenkins is widely used for test automation. Jenkins, originally called Hudson, is an open source Continuous Integration tool written in Java. Continuous Integration, in its simplest form, involves a tool that monitors a version control system for changes. Whenever a change is detected, this tool automatically compiles and tests the application [18]. Jenkins can download the changes of the tests from the repository (it is possible to specify several repositories) and then run the scripts where further test automation actions are specified. In Jenkins it is possible to provide settings for the test runs, including the names of the hosts where to run the scripts, the version of the JDK to use, and others.

The client-server test environment requires flexible management of the testing process on multiple hosts. This can be done by using SmartFrog. SmartFrog is a powerful and flexible Java-based software framework for configuring, deploying and managing distributed software systems [6]. SmartFrog has its own language to describe the sequence of actions which should be performed to run SmartFrog components. SmartFrog components are Java classes which implement predefined interfaces and are considered by SmartFrog a single unit of work. In the main SmartFrog configuration script, one can specify the interaction of SmartFrog components, for example to run them in parallel or in sequence, specify the hosts where to run them, and define what should happen with the whole system if one component fails.

6 Task formulation and test automation implementation

The main goal of this diploma thesis is to compare the performance of JBoss Fuse and SwitchYard. In order to achieve a fairer comparison of the performance results, the performance of JBoss Fuse and SwitchYard was tested with the same testing tool. PerfCake was chosen as the testing tool, because it matches all requirements that were set by the design of the performance tests. The performance test execution was automated in Jenkins, using SmartFrog. I will focus on the description of the automation for JBoss Fuse, because the automation for SwitchYard is implemented in a similar manner and uses the same design model.

6.1 Implementation of test automation

The code with the test scenarios, the messages which are sent for load generation, and the applications which are deployed to JBoss Fuse to test some functionality (let's call them tested services (TS)) are stored in a repository managed by a version control system. All performance test scenarios for JBoss Fuse are accessible through a Jenkins job.

Figure 6.1: Test automation in Jenkins.

In order to rebuild the tests, only one click on the link in the web browser is required. This will execute a sequential run of all tests automatically.

Let's consider the actions which are performed during the automated execution of one performance test scenario.

1) A Linux script kills all possible residual processes from previous test runs, including the JBoss Fuse process (more precisely, Karaf) on the server machine and the PerfCake process on the client machine. Besides, new code changes are downloaded from the repository for the SmartFrog components, scenarios, messages and tested services. The folders with all the code are shared between all machines in the network, so that both client and server can access them. This script is run by Jenkins.

2) After the Linux script is finished, Jenkins runs SmartFrog on the client machine and provides it with its configuration script. The further steps are described in the SmartFrog configuration script.

3) The SmartFrog script performs the following sequence of actions, as drawn in the picture below.

Figure 6.2: Component execution sequence in SmartFrog.

SmartFrog runs the PrepareServer component at the server host, and PrepareTestsAtClient at the client host. The execution of the components at the client and the server starts at the same time, in parallel. When the PrepareServer component is finished, SmartFrog starts the PrepareTestsAtServer component at the server host. So, SmartFrog executes PrepareServer and PrepareTestsAtServer in one sequential block, which runs in parallel with the PrepareTestsAtClient component. When PrepareTestsAtServer and PrepareTestsAtClient are finished, SmartFrog proceeds to the next step of the execution. It runs the StartServer component at the server, and the RunTestsAtClient component at the client. These components run in parallel and start at the same time. If any of the components fails, the whole system stops execution and the test is marked as failed.
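This composition could be sketched in SmartFrog's configuration language roughly as follows. This is a hypothetical fragment: the Sequence and Parallel combinators and the sfProcessHost attribute come from SmartFrog's workflow library, while the host names are illustrative placeholders.

```
// Hypothetical sketch of the deployment workflow described above.
sfConfig extends Sequence {
    prepare extends Parallel {
        serverSide extends Sequence {
            prepareServer extends PrepareServer { sfProcessHost "server"; }
            prepareTests  extends PrepareTestsAtServer { sfProcessHost "server"; }
        }
        clientSide extends PrepareTestsAtClient { sfProcessHost "client"; }
    }
    run extends Parallel {
        startServer extends StartServer { sfProcessHost "server"; }
        runTests    extends RunTestsAtClient { sfProcessHost "client"; }
    }
}
```

The outer Sequence guarantees that the test run only starts after both preparation branches have finished, while the inner Parallel blocks mirror the simultaneous execution on the two hosts.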

Most of the components, except StartServer, use Groovy scripts in which the sequence of actions is written. The StartServer component extends the Fuse component, which can start and stop JBoss Fuse with some settings, such as which version of Java to use. I created it during the work on this thesis. Groovy is an object-oriented programming language designed for the Java platform as an extension of the Java language with capabilities of Python, Ruby and Smalltalk. Now, let's take a look at each component:

PrepareServer: The PrepareServer script deletes the old JBoss Fuse instance from the server and installs a new one. Then it adds the user which can access JBoss Fuse.

PrepareTestsAtServer: PrepareTestsAtServer builds the specified tested service with Maven, using the maven-bundle-plugin, as each tested service is an OSGi bundle. OSGi bundles are described in Subsection 3.1.4 of Chapter 3. After the bundle is built, the component either deploys a feature, if it is present in the resource directory of the bundle, or deploys the jar archive with the bundle itself. The deployment happens through the "hot deploy" mechanism, which means the artifact (a bundle or a feature) is copied into the deploy directory of JBoss Fuse. After the server is started by the StartServer component, JBoss Fuse will try to start all bundles which are in the deploy directory.

PrepareTestsAtClient: The PrepareTestsAtClient script deletes the old PerfCake instance from the client and installs a new one. Then it downloads the activemq.jar to the lib directory of PerfCake to be able to run tests for the ActiveMQ client through JMS.

StartServer: This is a Java-based component which starts JBoss Fuse, and stops it when the component is terminated from the RunTestsAtClient component.

RunTestsAtClient: The RunTestsAtClient script specifies which version of Java to use to run PerfCake. After that it runs PerfCake for the specified test scenario.

4) When SmartFrog finishes the execution, Jenkins runs another Linux script, which collects and saves the results. The results are stored in comma-separated values (CSV) format.

6.2 Environment characteristics

The testing was performed on server and client machines with the following characteristics:

CPU: 4x Intel Xeon CPU [email protected] (16 cores)
Memory: 36 GB
OS: Red Hat Enterprise Linux Server release 6.1 (Santiago)
JVM: Oracle JDK 1.7.0_30, x86_64

Table 6.1: Client and Server hosts

7 Performance tests scenarios design and implementation

7.1 General architecture of tests

The following elements were used for each performance test, as defined by the PerfCake specification:

Scenario: The scenario is an XML document which can be considered the entry point for the performance test. For PerfCake, this document specifies where to send messages, how many parallel processes to use while sending the messages, how long to run the test, and which message to send. All scenarios have a similar configuration. Besides, the scenario specifies in which format to create the output.

Message: The message in this case is a file with the message which will be sent by PerfCake during scenario execution to some destination.

Tested service: The tested service is a jar file which is deployed to the enterprise service bus. It contains the tested functionality, which is exposed over HTTP or JMS.

During the execution of each test, 100 concurrent client threads were used and the messages were 5 Kb in size, except for the tests HTTP exposed CBR using Rules and Method GET of a RESTful web service implementation. PerfCake was configured to measure the throughput of each scenario and to save the results to a CSV file. Each test runs for 5 minutes, not including the warm-up period. The reasons for choosing these parameters for PerfCake are the following:

Message size - 5 Kb: Messages of 5 Kb in size are the most frequently used in SOA based applications.

Test run period - 5 minutes: This time is enough for the test results to stabilize. No doubt, the longer the test is executed, the more precise the results are. There are 15 scenarios, and the execution of all scenarios takes approximately 2 hours, including the time spent on the preparation of each test. So, 5 minutes is a compromise between the reliability of the results and the minimization of the time the whole test suite runs.

100 concurrent clients: This number was chosen by the SwitchYard performance test team for the SwitchYard performance tests. They measured the performance of one test with different numbers of concurrent clients, and with 100 clients the performance reached its maximum. I have chosen the same number of clients for the JBoss Fuse performance measurements to make the performance comparison between these integration platforms more objective.

The warm-up period is the time during which the HotSpot dynamic compiler compiles the executed code. HotSpot is the type of compiler used in current Java versions. HotSpot first runs as an interpreter and only compiles the "hot" code – the code executed most frequently [4]. That is why, before measuring performance, PerfCake was configured to repeatedly execute the tests during the warm-up period. The actual performance measurement of the system starts only after the warm-up period is finished. So, during the warm-up period the throughput gradually rises, and afterwards it stabilizes at one level.

Compiling only the code that is executed frequently has several performance advantages: no time is wasted compiling code that will execute infrequently, and the compiler can therefore spend more time on the optimization of hot code paths, because it knows that the time will be well spent [4].

According to the PerfCake documentation, the system is considered warmed up when all of the following conditions are satisfied: the minimal iteration count has been executed, the minimal duration from the very start has been exceeded, and the throughput does not vary a lot over time.
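For illustration, a PerfCake scenario capturing the parameters above (100 client threads, a 5-minute timed run, CSV output) might look roughly like the following sketch. The element and class names follow the public PerfCake documentation of a later release and may differ slightly in the version used for this thesis; the target URL, result path and message file name are placeholders.

```xml
<scenario xmlns="urn:perfcake:scenario:3.0">
  <generator class="DefaultMessageGenerator" threads="100">
    <!-- timed run: 300 000 ms = 5 minutes -->
    <run type="time" value="300000"/>
  </generator>
  <sender class="HttpSender">
    <property name="target" value="http://server:8282/custom-service"/>
  </sender>
  <reporting>
    <reporter class="ThroughputStatsReporter">
      <destination class="CsvDestination">
        <period type="time" value="1000"/>
        <property name="path" value="results/throughput.csv"/>
      </destination>
    </reporter>
  </reporting>
  <messages>
    <message uri="message-5kb.xml"/>
  </messages>
</scenario>
```

The warm-up behavior described above is controlled by the generator's configuration, so the measured part of the run only begins once the throughput has stabilized.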

7.2 Performance test scenarios design

Based on the definition and the following basic features of an enterprise service bus, the test scenarios were designed for the comparison of JBoss Fuse and SwitchYard.

The basic features of an ESB include:

1. possibility to call service synchronously and asynchronously, to call remote services;

2. message routing (addressability, static/deterministic routing, content-based routing, rules-based routing);

3. messaging (message-processing, message transformation);

4. service orchestration;

5. logging, monitoring, admin console;

6. Message Exchange Patterns (publish/subscribe, point-to-point);

7. security (encryption and signing, a standardized security-model to authorize, authenticate use of ESB);

8. transformation (transformation of data formats and values, including XSLT);

9. splitting and then merging multiple messages.

The scenarios HTTP exposed custom service, SOAP exposed custom service, SOAP implementation of a web service using JAX-WS, Method GET of a RESTful web service implementation, Method POST of a RESTful web service implementation and SOAP web service proxy cover the 1st feature.

The 2nd feature is covered by almost all scenarios, especially by HTTP exposed content based routing using XPath, HTTP exposed content based routing using RegEx and HTTP exposed content based routing using Rules.

The 3rd feature, messaging, is covered by all scenarios.

The 4th feature is tested by the Service orchestration scenario.

The 5th feature was not tested.

The point-to-point model of the 6th feature is covered by the JMS exposed custom service scenario.

The 7th feature is covered by the JMS exposed custom service and SOAP implementation of a web service using JAX-WS secured by WS-Security scenarios.

The 8th feature is covered by the SOAP exposed XML message transformation using XSLT scenario. The 9th feature is tested by the HTTP exposed Services implementing Splitter-Aggregator pattern scenario.

7.2.1 Scenario: HTTP exposed custom service

There is a custom service that applies a simple modification to the message and returns it back. In this case, service doesn't mean a web service, but refers to some operation/transformation which can be performed on the message.

Scenario: The message is received over HTTP via an HTTP POST request from the client. Then it is modified by the custom service and after that returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of the HTTP transport, which is considered the baseline for all other HTTP exposed scenarios. Thus, comparing the performance of other scenarios which use HTTP with the performance of this scenario gives the performance overhead of the technologies used in those scenarios.
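In JBoss Fuse, a tested service of this kind can be implemented as a Camel route deployed as an OSGi bundle. A hedged sketch in Blueprint XML follows; the endpoint URI, port, and bean names are illustrative and not taken from the actual test code.

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- the "custom service": a plain bean that modifies the message body -->
  <bean id="customService" class="org.example.CustomService"/>
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <!-- expose the service over HTTP; the route's output is the HTTP reply -->
      <from uri="jetty:http://0.0.0.0:8282/custom"/>
      <bean ref="customService" method="modify"/>
    </route>
  </camelContext>
</blueprint>
```

Because the route has no explicit output endpoint, the modified message is returned to the HTTP caller as the response, matching the scenario description.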

7.2.2 Scenario: SOAP exposed custom service

There is a simple SOAP based web service exposed at some URL. It applies a simple modification to the incoming message and returns it back. The service has a published WSDL file.

Scenario: The SOAP message is received by the web service from the client. Then it is modified by the SOAP web service and after that returned back to the client.

The goal of this scenario is to create a baseline for the performance of other web service technologies, and to show the performance cost of creating a SOAP web service compared to the HTTP exposed custom service.

7.2.3 Scenario: JMS exposed custom service

There is a custom service that applies a simple modification to the message.

Scenario: The message is received by a JMS queue (the request queue). The message is modified by the custom service and the result is passed to a second JMS queue (the response queue). From the response queue the message is consumed by the client.

The goal of this scenario is to compare the performance of the JMS transport to the other transports.
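A possible shape of this scenario as a Camel route is sketched below; the queue names and the configuration of the ActiveMQ component are illustrative, not taken from the actual test code.

```xml
<route>
  <!-- consume from the request queue, modify the message,
       and pass the result to the response queue -->
  <from uri="activemq:queue:requestQueue"/>
  <bean ref="customService" method="modify"/>
  <to uri="activemq:queue:responseQueue"/>
</route>
```

Unlike the HTTP scenarios, the reply is not returned on the same channel: the client must consume the result from the response queue, which is why two queues appear in the scenario.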

7.2.4 Scenario: HTTP exposed content based routing using XPath

Normally in an integration framework, which is a part of the enterprise service bus, messages are sent from the source to a specified destination. But there are cases when the exact destination is determined by the content of the message. The relevant content can be the payload itself, the message headers, or the payload data type. To implement this in an integration framework, a Content Based Router is used. It works like an 'if' statement in a programming language such as Java.

So, Content-Based Router (CBR) is an enterprise integration pattern, according to which the integration framework examines the message content and routes the message onto a different channel based on data contained in the message [book 'Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions' by Gregor Hohpe, Bobby Woolf].

XML Path Language (XPath) is a query language used to select parts of an XML document. There are two custom services that apply a simple modification to the message and return it back.

Scenario: The message is received over HTTP via an HTTP POST request from the client. Then the CBR is applied to the message. The decision about the routing destination is made based on the message content using an XPath expression. When the message arrives at the selected destination, it is modified by the custom service and returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of the HTTP exposed content based routing using XPath feature, and to compare it to the other types of content based routing, described in the next scenarios.
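At its core, the routing decision is just the evaluation of an XPath expression against the payload. A self-contained sketch using the JDK's javax.xml.xpath API; the expression, the attribute and the destination names are invented for illustration:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class XPathRouter {
    /** Chooses a destination from the message content (illustrative rule). */
    static String route(String xmlMessage) throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // extract the routing key from the payload
        String type = xpath.evaluate("/order/@type",
                new InputSource(new StringReader(xmlMessage)));
        return "express".equals(type) ? "serviceA" : "serviceB";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(route("<order type='express'><item/></order>"));  // serviceA
        System.out.println(route("<order type='standard'><item/></order>")); // serviceB
    }
}
```

In the actual tests the same decision would be expressed declaratively inside the integration framework's routing definition rather than in hand-written code.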

7.2.5 Scenario: HTTP exposed content based routing using RegEx

The content based routing feature is described in the previous scenario. A regular expression (RegEx) is a search pattern, based on the use of meta-characters, used to search for and manipulate substrings in a text. There are two custom services that apply a simple modification to the message and return it back.

Scenario: The message is received over HTTP via an HTTP POST request from the client. Then the CBR is applied to the message. The decision about the routing destination is made based on the message content using a RegEx. When the message arrives at the selected destination, it is modified by the custom service and returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of the HTTP exposed content based routing using RegEx feature, and to compare it to the other types of content based routing.
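The RegEx variant replaces XPath evaluation with a pattern match on the raw payload string. A minimal sketch with java.util.regex (the pattern and the destination names are invented for illustration):

```java
import java.util.regex.Pattern;

public class RegexRouter {
    // illustrative routing rule: look for an <type>express</type> element
    private static final Pattern EXPRESS =
            Pattern.compile("<type>\\s*express\\s*</type>");

    /** Chooses a destination by matching the pattern against the raw payload. */
    static String route(String message) {
        return EXPRESS.matcher(message).find() ? "serviceA" : "serviceB";
    }

    public static void main(String[] args) {
        System.out.println(route("<order><type>express</type></order>"));  // serviceA
        System.out.println(route("<order><type>standard</type></order>")); // serviceB
    }
}
```

Because no XML parsing is involved, this style of routing is typically cheaper than XPath, which is exactly the difference the two scenarios are designed to expose.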

7.2.6 Scenario: HTTP exposed content based routing using Rules

The content based routing feature is described in the scenario HTTP exposed content based routing using XPath. A business rule is a rule that defines or constrains some aspect of a business. A Business Rule Management System (BRMS) is an information system which is used to manage, update and execute business rules.

Drools is a business rule management system implementing some form of artificial intelligence, which consists primarily of a set of rules about behavior. It is an open source project, developed by the JBoss Community.

There is a service which creates an object based on the incoming message. There are two custom services which apply a simple modification to the message and return it back.

Scenario: The message is received over HTTP via an HTTP POST request from the client. The service creates an object and the object is passed to the Drools engine. After applying the rules, the content based router routes the message based on the results achieved from the Drools

engine. When the message arrives at the selected destination, it is modified by the custom service and returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of HTTP exposed content based routing in combination with the application of rules. The results can be compared with the throughput of the other CBR scenarios.
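A rules-based routing decision of this kind is typically written in Drools' DRL language. A hypothetical example is shown below; the Order fact, its fields and the destination names are invented for illustration and do not come from the actual test code.

```
// hypothetical DRL rule deciding the routing destination
rule "Route express orders"
when
    $order : Order( type == "express" )
then
    $order.setDestination( "serviceA" );
end
```

The engine evaluates all such rules against the object created from the message, and the content based router then reads the chosen destination from the resulting fact.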

7.2.7 Scenario: HTTP exposed Services implementing Scatter-Gather pattern

The Multicast EIP (Enterprise Integration Pattern) is used to copy a message to a number of destinations, in case the destinations are known in advance and are static. It is possible to distribute the messages to the destinations in parallel by setting the parallel processing mode.

The Aggregator EIP is used to combine several incoming messages into one. The following criteria should be specified:

Correlation identifier - defines which messages should be aggregated.

Completion condition - defines when the result message is formed. It can be a number of incoming messages, a time interval, a predicate or others.

Aggregation strategy - a function which defines the way of combining the incoming messages.

The Scatter-Gather pattern is an EIP which routes a message to multiple recipients, and then aggregates the responses from the recipients into one message. For the implementation of the Scatter-Gather EIP, the Multicast EIP was applied to the message, and then the responses were collected using the Aggregator EIP.

Scenario: The message is received over HTTP via an HTTP POST request from the client. The Multicast EIP broadcasts the message to the two services in parallel. The messages are processed by the services. The two processed messages are combined into one resulting message using the Aggregator EIP. The resulting message is modified by the custom service. At the end the outgoing message is returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of the HTTP exposed Services implementing Scatter-Gather pattern. The results can be compared with the throughput of the HTTP exposed custom service scenario to measure the overhead of the Scatter-Gather pattern.
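In Camel, the Multicast EIP can reference an aggregation strategy directly, which already performs the gather step. A rough sketch (the service endpoints and the strategy bean name are illustrative):

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/scatter-gather"/>
  <!-- scatter: send a copy of the message to both services in parallel;
       gather: the referenced AggregationStrategy combines the two replies -->
  <multicast parallelProcessing="true" strategyRef="combineReplies">
    <to uri="direct:serviceA"/>
    <to uri="direct:serviceB"/>
  </multicast>
  <!-- final modification of the combined message before replying -->
  <bean ref="customService" method="modify"/>
</route>
```

The combineReplies bean would implement Camel's AggregationStrategy interface and define how the two partial responses are merged, corresponding to the aggregation strategy criterion described above.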

7.2.8 Scenario: HTTP exposed Services implementing Splitter-Aggregator pattern

This pattern is a combination of two EIPs. First the Splitter EIP is applied to the message, which creates new messages from the parts of the incoming message, using specified rules for splitting. At the end the resulting message is created using the Aggregator EIP described in the previous scenario.

Scenario: The message is received over HTTP via an HTTP POST request from the client. The message is split into small messages using the Splitter EIP, and the messages are sent to a custom service which applies some transformation. Then the messages are combined into one resulting message using the Aggregator EIP. The resulting message is modified by the custom service. At the end the outgoing message is returned back to the client via HTTP.

The goal of this scenario is to measure the throughput of the HTTP exposed Services implementing Splitter-Aggregator pattern. The results can be compared with the throughput of the HTTP exposed custom service scenario to measure the overhead of the Splitter-Aggregator pattern.
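A Camel sketch of the Splitter-Aggregator combination follows; when the split definition references an aggregation strategy, the processed parts are recombined into a single outgoing message. The XPath expression and bean names are illustrative.

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/split-aggregate"/>
  <!-- split the payload into parts, process each part,
       and let the referenced strategy aggregate the results -->
  <split strategyRef="combineParts">
    <xpath>/orders/order</xpath>
    <bean ref="customService" method="modify"/>
  </split>
  <!-- final modification of the aggregated message before replying -->
  <bean ref="customService" method="modify"/>
</route>
```

Compared to the Scatter-Gather sketch, the fan-out here is driven by the structure of the incoming message rather than by a fixed list of destinations.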

7.2.9 Scenario: Service orchestration

Service orchestration is the automated management, arrangement and coordination of several services exposed as a single service. In this case service orchestration solves the task of delivering an order to several destinations and afterwards creating an aggregated shipment notice. There are five services:

Validate order - service to simulate the order validation.

Credit check - service to simulate the credit check.

Inventory check - service to simulate that the requested items are present.

Ship city - service to simulate delivery of the order to the required city.

Shipment notice - service to simulate creation of the aggregated notice about shipment of the order.

Scenario: The message is received over HTTP via an HTTP POST request from the client. Then the following services are invoked: Validate order, Credit check and Inventory check, each of which modifies the message. After that, using the Multicast EIP in parallel mode, the messages are delivered to three destinations: Atlanta, Dallas and Los Angeles. In each destination the Ship city service is applied to the message, and the Aggregator EIP is used to gather the shipped messages. The Shipment notice service is applied to the resulting message. At the end the outgoing message is returned back to the client via HTTP.

The goal of this scenario is to calculate the throughput of the implementation of the Service orchestration scenario.

7.2.10 Scenario: SOAP exposed XML message transformation using XSLT

Extensible Stylesheet Language Transformations (XSLT) is a language for transformation of XML documents into other XML documents. There is a SOAP based service which transforms the incoming XML message using XSLT.

Scenario: The SOAP message is received by the web service from the client. Then its content, which is an XML document, is modified by the XSLT. The resulting message is sent back to the client.

The goal of this scenario is to calculate the throughput of the implementation of SOAP exposed XML message transformation using XSLT. The throughput can be compared with the throughput of the SOAP exposed custom service scenario implementation to measure the overhead of XSLT.

7.2.11 Scenario: SOAP implementation of a web service using JAX-WS

JAX-WS is described in the Subsection 3.1.3 Apache CXF.

Scenario: The SOAP message is received by the JAX-WS web service from the client. Then it is modified by the web service. The resulting message is sent back to the client.

The goal of this scenario is to calculate the throughput of the SOAP implementation of a web service using JAX-WS. This result can be compared to the SOAP exposed custom service throughput.

7.2.12 Scenario: SOAP implementation of a web service using JAX-WS secured by WS-Security

JAX-WS is described in the Subsection 3.1.3 Apache CXF. Web Services Security (WS-Security) is an extension created over SOAP to provide security of a web service. WS-Security can be used to sign a message, to encrypt it, and also to verify the sender's identity. In this case the security tokens attached to the message header identify the sender.

Scenario: The SOAP message is passed to the web service. The user name and password are checked by WS-Security. If they satisfy the required conditions, the service modifies the message. The resulting message is sent back to the client.

The goal of this scenario is to calculate the throughput of the SOAP implementation of a web service using JAX-WS secured by WS-Security. This result can be compared to the throughput of the SOAP implementation of a web service using JAX-WS to show the overhead of applying WS-Security.

7.2.13 Scenario: SOAP web service proxy

The scenario consists of a JAX-WS web service which is called by another web service (this web service is proxying the original JAX-WS implementation). The JAX-WS web service can run on a different host from the WS proxy service.

Scenario: The SOAP message is received by the WS proxy service. It modifies the message and sends a SOAP request to the JAX-WS web service. The message is modified by the JAX-WS web service and returned back to the WS proxy service. The WS proxy service sends the SOAP response to the client.

The goal of this scenario is to calculate the throughput of the SOAP web service proxy implementation. This result can be compared to the throughput of the SOAP implementation of a web service using JAX-WS to show the overhead of the WS proxy.

7.2.14 Scenario: Method GET of a RESTful web service implementation

REST architectural style principles and RESTful web services are described in Subsection 3.1.3 of Chapter 3.

Scenario: The client sends a GET request to the specified URL, under which a REST GET method is deployed. The RESTful web service processes the request and returns a response to the client.

The goal of this scenario is to calculate the throughput of the method GET of a RESTful web service implementation. This result can be compared to the throughput of other types of web services and to the POST method of a RESTful web service.

7.2.15 Scenario: Method POST of a RESTful web service implementation

REST architectural style principles and RESTful web services are described in Subsection 3.1.3 of Chapter 3.

Scenario: The client sends a POST request to the specified URL, under which a REST POST method is deployed. The RESTful web service processes the request and returns a response to the client.

The goal of this scenario is to calculate the throughput of the method POST of a RESTful web service implementation. This result can be compared to the throughput of other types of web services and to the GET method of a RESTful web service.

7.3 Implementation of performance scenarios for JBoss Fuse

Since JBoss Fuse and SwitchYard are based on different technologies, it is not possible to use the same implementations of the scenarios for SwitchYard and JBoss Fuse. Therefore the scenarios for JBoss Fuse were implemented from scratch.

For JBoss Fuse all implementations used the Blueprint XML or Spring XML technologies. This makes it possible to use features of the Spring framework, such as wiring beans using dependency injection. The choice between these two technologies does not have a significant effect on the performance results, as was verified using the HTTP exposed custom service scenario.

7.3.1 HTTP exposed custom service

This scenario was implemented for JBoss Fuse using three different OSGi-bundles. All three implementations use Camel routes to route the message from the HTTP URL defined in the jetty endpoint. A message endpoint (or endpoint) is an EIP which is placed at both ends of a Camel route. It is responsible for the transformation of custom application data to/from the message format of Camel routes. For each custom application there should exist its own custom endpoint. The jetty component provides HTTP-based endpoints for consuming and producing HTTP requests 1.

fuse-http-post-simple-processor: In this implementation the message is transformed in the method 'process' of a class implementing the interface Processor. The Camel route consumes the message from the HTTP URL defined as the source of the route, and transforms the message by calling the implementation of the method 'process' defined in the object of the Java class referenced by the identifier 'resultsProcessor'. The incoming argument of the method 'process' contains the message from the jetty endpoint. After the execution of the method 'process', the result is returned back to the URL.

fuse-http-post-simple-bean: In this implementation the message is transformed in a POJO.

1. http://camel.apache.org/jetty.html

The transformation happens in the custom method called 'sayHello' of the bean 'testBean', where the incoming message from the route is passed as an argument of this method. The return value of the method 'sayHello' is interpreted as the result of the transformation. In Camel a bean is a POJO (it does not have to implement the interface java.io.Serializable).

fuse-http-post-simple: In this implementation the message is transformed using the Simple Expression Language (simple language), with the expression Hello ${body}. The simple language provides various elementary expressions that return different parts of the message 2, like the reference to the message body in this case. The transformation adds 'Hello ' at the beginning of the message body. The result of the transformation is returned back to the jetty endpoint.
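The route definitions themselves are not reproduced above. Minimal Blueprint XML sketches of the three variants could look as follows; only the bean name 'testBean', the method names 'process' and 'sayHello', the identifier 'resultsProcessor' and the expression Hello ${body} come from the text, while the jetty URIs, the port and the processor class name are illustrative assumptions:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- custom Processor implementation; the class name is a placeholder -->
  <bean id="resultsProcessor" class="org.example.ResultsProcessor"/>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">

    <!-- fuse-http-post-simple-processor: transformation in Processor.process() -->
    <route>
      <from uri="jetty:http://0.0.0.0:8282/fuse-http-post-simple-processor"/>
      <process ref="resultsProcessor"/>
    </route>

    <!-- fuse-http-post-simple-bean: transformation in a POJO method -->
    <route>
      <from uri="jetty:http://0.0.0.0:8282/fuse-http-post-simple-bean"/>
      <bean ref="testBean" method="sayHello"/>
    </route>

    <!-- fuse-http-post-simple: transformation via the simple language -->
    <route>
      <from uri="jetty:http://0.0.0.0:8282/fuse-http-post-simple"/>
      <transform>
        <simple>Hello ${body}</simple>
      </transform>
    </route>
  </camelContext>
</blueprint>
```

In each route the final body of the exchange is what the jetty endpoint returns as the HTTP response.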

7.3.2 SOAP exposed custom service

This scenario is implemented using an OSGi-bundle called fuse-http-camel-cxf-proxy. The Camel route applies the simple-language transformation Hello ${body}! to the message. Using the definition uri="cxf:bean:proxyEndpoint", the entire route is exposed as a CXF web service (a SOAP-based web service). Everything that is defined inside the route specifies an implementation of this web service. There is some configuration defined for

2. http://fusesource.com/docs/router/2.5/eip/FMRS.SimpleLang.html

this web service by the tag 'cxfEndpoint', which specifies the URL under which the service is deployed and the name of the method used to access this route. When a message arrives at the URL under which the web service is deployed, the corresponding method of the web service is called, and the Camel route defined above is executed. The transformation used in this route is described in Subsection 7.3.1. The result of the transformation is returned as the web service call reply.
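A sketch of how such a CXF-exposed route can be declared in Blueprint XML; the endpoint id 'proxyEndpoint' and the expression Hello ${body}! come from the text, while the address, the service class and the cxf namespace prefix are assumptions:

```xml
<!-- SOAP endpoint definition; address and serviceClass are placeholders -->
<cxf:cxfEndpoint id="proxyEndpoint"
                 address="http://0.0.0.0:8383/cxf/proxyHelloWorld"
                 serviceClass="org.example.HelloWorldService"/>

<camelContext xmlns="http://camel.apache.org/schema/blueprint">
  <route>
    <!-- the whole route acts as the implementation of the web service -->
    <from uri="cxf:bean:proxyEndpoint"/>
    <transform>
      <simple>Hello ${body}!</simple>
    </transform>
  </route>
</camelContext>
```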

7.3.3 JMS exposed custom service

This scenario is implemented using an OSGi-bundle called fuse-jms-queues-pooled-persistent. The Camel route is defined from queue.request to queue.response, and the message is transformed by the simple-language transformation Hello ${body}, described in Subsection 7.3.1. Authentication is used to access both queues. Both queues send messages in persistent mode, which is specified by the attribute deliveryMode=2 (the JMS persistent delivery mode). PerfCake sends messages to queue.request and consumes them from queue.response.

In order to increase the performance of this scenario, connection pooling was implemented. With the connection pooling mechanism, connections to the JMS queues are cached: when there is a new request to create a connection, an old connection can be taken from the cache.
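A hedged sketch of the pooled connection factory and the route; the queue names and the pooling approach come from the text, while the broker URL, the credentials and the component name 'activemq' are illustrative assumptions:

```xml
<!-- pooled ActiveMQ connection factory; URL and credentials are placeholders -->
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory">
  <property name="connectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
      <property name="userName" value="admin"/>
      <property name="password" value="admin"/>
    </bean>
  </property>
</bean>

<route>
  <!-- consume from the request queue, transform, publish persistently -->
  <from uri="activemq:queue:queue.request"/>
  <transform>
    <simple>Hello ${body}</simple>
  </transform>
  <to uri="activemq:queue:queue.response?deliveryPersistent=true"/>
</route>
```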

7.3.4 HTTP exposed content based routing using XPath

This scenario is implemented using an OSGi-bundle called fuse-http-camel-cbr-xpath. For this OSGi-bundle the Camel route is defined

the following way: it uses the XPath predicate //text[starts-with(., "I'm the fish.")] and the two constant message bodies 'I like swimming' and 'I like walking'. The jetty endpoint and the simple transformation are described in Subsection 7.3.1. The CBR pattern is specified here by the XML tags 'choice', 'when' and 'otherwise'. The decision where to route the message is based on the condition provided in the XPath expression, which the body of the message should meet. In this case the text element of the message body should start with the expression "I'm the fish.". Thus, the route behaves as follows: if the message contains a text element starting with the sentence "I'm the fish.", the message body is transformed to 'I like swimming', otherwise to 'I like walking'.
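The route description above can be sketched in Camel XML; the XPath predicate and the two constant bodies come from the text, while the jetty URI is an illustrative assumption:

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/fuse-http-camel-cbr-xpath"/>
  <choice>
    <when>
      <!-- route on the XPath condition over the message body -->
      <xpath>//text[starts-with(., "I'm the fish.")]</xpath>
      <transform><constant>I like swimming</constant></transform>
    </when>
    <otherwise>
      <transform><constant>I like walking</constant></transform>
    </otherwise>
  </choice>
</route>
```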

7.3.5 HTTP exposed content based routing using RegEx

This scenario is implemented using an OSGi-bundle called fuse-http-camel-cbr-regex. For this OSGi-bundle the Camel route is defined as follows.

The route uses the simple-language predicate

    ${body} regex '[\s\S]*<toWhom>[\s\S]*([_A-Za-z0-9-]+(\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\.[A-Za-z0-9]+)*(\.[A-Za-z]{2,}))[\s\S]*</toWhom>[\s\S]*'

and the two constant message bodies 'matched' and 'not matched'. The jetty endpoint and the simple transformation are described in Subsection 7.3.1. Thus, the route behaves as follows: if the message body matches the regular expression defined in the route, the message body is transformed to 'matched', otherwise to 'not matched'. This regular expression was chosen because the same regular expression is used in the SwitchYard implementation of this scenario.
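A sketch of the corresponding route; the predicate and the constant bodies come from the text (note that '<' and '>' inside the regular expression must be XML-escaped), while the jetty URI is an illustrative assumption:

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/fuse-http-camel-cbr-regex"/>
  <choice>
    <when>
      <!-- simple-language regex predicate over the message body -->
      <simple>${body} regex '[\s\S]*&lt;toWhom&gt;[\s\S]*([_A-Za-z0-9-]+(\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\.[A-Za-z0-9]+)*(\.[A-Za-z]{2,}))[\s\S]*&lt;/toWhom&gt;[\s\S]*'</simple>
      <transform><constant>matched</constant></transform>
    </when>
    <otherwise>
      <transform><constant>not matched</constant></transform>
    </otherwise>
  </choice>
</route>
```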

7.3.6 HTTP exposed content based routing using Rules

This scenario is implemented using an OSGi-bundle called fuse-http-camel-cbr-rules. For this OSGi-bundle the Camel route branches on the simple-language predicates ${body.destination} == 'Red' and ${body.destination} == 'Green', returning the message bodies 'Failed', 'Passed' or 'Not defined'.

The route defines the following sequence of actions. When a request comes to the jetty endpoint, the method createTestWidget of the class WidgetHelper is executed. This generates an object of type Widget with a field called 'id'. Here Widget is a custom class with fields called 'id' and 'destination'. The generated object is put into the message body. Then the Drools rules specified below are applied to the message.

rule "Red_Destination"
when
    widget : Widget(id matches "FF0000-.*")
then
    widget.setDestination("Red");
end

rule "Green_Destination"
when
    widget : Widget(id matches "00FF00-.*")
then
    widget.setDestination("Green");
end

The rules above set the field 'destination'. Then the object of type Widget is put into the message body, and the CBR is executed based on the value of the field 'destination'. If the value of the field 'destination' is 'Red', the Camel route returns the value 'Failed'; if the value of the field 'destination' is 'Green', the Camel route returns the value 'Passed'; otherwise it returns 'Not defined'.

7.3.7 HTTP exposed Services implementing Scatter-Gather pattern

This scenario is implemented using an OSGi-bundle called fuse-http-camel-scatter-gather, with four Camel routes. The Multicast and Aggregator EIPs are described in Subsection 7.2.7. The first route copies the incoming message to two other routes, using the parallel processing mode. Those two routes, which represent countries, set the body to the corresponding country (Country CZ and Country SVK). Besides, these routes set the same header value 1, to be used as the correlation identifier (header.id) for aggregation. The modified messages are aggregated using a custom aggregation strategy in the route with id 'Aggregator', which sets the resulting body to 'aggregated'. A completion size of two indicates that when two messages have been aggregated, the aggregating route returns the result.
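The four routes can be sketched as below; the header value, the bodies, the route id 'Aggregator' and the completion size come from the text, while the jetty URI, the direct: endpoint names, the aggregation strategy bean and the exact wiring of the HTTP reply path are illustrative assumptions:

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/fuse-http-camel-scatter-gather"/>
  <!-- copy the message to both country routes in parallel -->
  <multicast parallelProcessing="true">
    <to uri="direct:cz"/>
    <to uri="direct:svk"/>
  </multicast>
</route>

<route>
  <from uri="direct:cz"/>
  <setHeader headerName="id"><constant>1</constant></setHeader>
  <setBody><constant>Country CZ</constant></setBody>
  <to uri="direct:aggregator"/>
</route>

<route>
  <from uri="direct:svk"/>
  <setHeader headerName="id"><constant>1</constant></setHeader>
  <setBody><constant>Country SVK</constant></setBody>
  <to uri="direct:aggregator"/>
</route>

<route id="Aggregator">
  <from uri="direct:aggregator"/>
  <!-- correlate on header.id; emit once two messages are collected -->
  <aggregate strategyRef="aggregationStrategy" completionSize="2">
    <correlationExpression><simple>header.id</simple></correlationExpression>
    <setBody><constant>aggregated</constant></setBody>
  </aggregate>
</route>
```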

7.3.8 HTTP exposed Services implementing Splitter-Aggregator pattern

This scenario is implemented using an OSGi-bundle called fuse-http-camel-splitter-aggregator. The Splitter EIP is described in Subsection 7.2.8 and the Aggregator EIP in Subsection 7.2.7. The incoming message is a message in the XML format, with XML tags named 'note'. The first route splits the message into several messages, selecting as the body of each new message a part inside the 'note' tag of the old message. The new messages are sent to the second route, which adds the text 'Hello' at the beginning of each message ("Hello ${body}"). In the last route the messages are aggregated in the same way as described in Subsection 7.3.7, correlated by the header.id value 1, with the aggregated body set to 'from aggregate'.
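The three routes can be sketched as follows; the header value 1, the expression "Hello ${body}" and the body "from aggregate" come from the text, while the jetty URI, the XPath used for splitting, the direct: endpoint names and the completion size are illustrative assumptions:

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/fuse-http-camel-splitter-aggregator"/>
  <!-- create one message per child of the 'note' element -->
  <split>
    <xpath>/note/*</xpath>
    <to uri="direct:transform"/>
  </split>
</route>

<route>
  <from uri="direct:transform"/>
  <setHeader headerName="id"><constant>1</constant></setHeader>
  <transform><simple>Hello ${body}</simple></transform>
  <to uri="direct:aggregate"/>
</route>

<route>
  <from uri="direct:aggregate"/>
  <!-- the completion size would match the number of split parts -->
  <aggregate strategyRef="aggregationStrategy" completionSize="3">
    <correlationExpression><simple>header.id</simple></correlationExpression>
    <setBody><constant>from aggregate</constant></setBody>
  </aggregate>
</route>
```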

7.3.9 Service orchestration

This scenario is implemented using an OSGi-bundle called fuse-orchestration. In this OSGi-bundle five routes are defined; the first of them is described below.

When the message gets to the route through the jetty endpoint, it is validated by the method 'validate' of the bean validateOrder; then it goes to the method checkCredit, and then to the method checkInventory, of the beans checkCredit and checkInventory respectively. After that the modified message is copied to three routes, which represent order delivery to three cities. In each of those routes the message is modified. The three messages belonging to the same order are combined into one message in the last route and modified by the method createShipmentNotice of the bean shipmentNotice. The modified message is sent to the client.

The following example illustrates what happens to the message. If the incoming message is:

small message

Then the outgoing message is:

From Atlanta: small message validated! credit checked! inventory checked!; From Dallas: small message validated! credit checked! inventory checked!; From Los Angeles: small message validated! credit checked! inventory checked! shipment notice created!
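The first route can be sketched as below; the bean and method names come from the text, while the jetty URI and the direct: endpoint names are illustrative assumptions, and for brevity this sketch aggregates through the multicast's strategy instead of the separate final route described above:

```xml
<route>
  <from uri="jetty:http://0.0.0.0:8282/fuse-orchestration"/>
  <bean ref="validateOrder" method="validate"/>
  <bean ref="checkCredit" method="checkCredit"/>
  <bean ref="checkInventory" method="checkInventory"/>
  <!-- deliver the order to the three city routes in parallel -->
  <multicast parallelProcessing="true" strategyRef="aggregationStrategy">
    <to uri="direct:atlanta"/>
    <to uri="direct:dallas"/>
    <to uri="direct:losAngeles"/>
  </multicast>
  <bean ref="shipmentNotice" method="createShipmentNotice"/>
</route>
```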

7.3.10 SOAP exposed XML message transformation using XSLT

This scenario is implemented using an OSGi-bundle called fuse-camel-cxf-xslt. The Camel route is very similar to the route described in Subsection 7.3.2. The only difference is that instead of a transformation using the simple language, this route uses XSLT. The rules for the transformation are defined in the file hello2.xsl.
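A minimal sketch of such a route; the stylesheet name hello2.xsl comes from the text, while the endpoint id 'xsltEndpoint' is an illustrative assumption:

```xml
<route>
  <!-- route exposed as a CXF web service, as in Subsection 7.3.2 -->
  <from uri="cxf:bean:xsltEndpoint"/>
  <!-- apply the stylesheet loaded from the bundle classpath -->
  <to uri="xslt:hello2.xsl"/>
</route>
```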

7.3.11 SOAP implementation of a web service using JAX-WS

This scenario is implemented using an OSGi-bundle called fuse-jaxws. No Camel route is defined for this scenario. The JAX-WS web service is implemented using Apache CXF and defined in Blueprint XML. The attribute 'implementor' specifies the class which implements the endpoint, and 'address' specifies the part of the URL address under which the web service will be available. The resulting URL is http://localhost:8181/cxf/HelloWorld. The WSDL document is automatically generated by the cxf-java2ws-plugin 3 for Maven. The class HelloWorldImpl specifies the corresponding interface using the annotation WebService 4.
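The Blueprint definition can be sketched as below; the attributes 'implementor' and 'address' and the class name HelloWorldImpl come from the text, while the package name and the endpoint id are illustrative assumptions:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

  <!-- 'implementor' names the service class, 'address' the URL part -->
  <jaxws:endpoint id="helloWorld"
                  implementor="org.example.HelloWorldImpl"
                  address="/HelloWorld"/>
</blueprint>
```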

7.3.12 SOAP implementation of a web service using JAX-WS secured by WS-Security

This scenario is implemented using an OSGi-bundle called fuse-jaxws-secured. No Camel route is provided for this scenario. The web service used for this scenario is defined in the same way as in Subsection 7.3.11, with added interceptors which provide WS-Security. An interceptor is used to change or augment the usual processing cycle. When the message is passed through the system, interceptors are used to

3. http://cxf.apache.org/docs/maven-java2ws-plugin.html 4. http://docs.oracle.com/javaee/5/api/javax/jws/WebService.html

catch the message and perform some operations on it before it reaches the destination object.

For authentication the JAASLoginInterceptor 5 is used with the parameter contextName set to 'karaf'. It ensures that the login credentials passed from the client correspond to the login and password specified for the Karaf container.
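A hedged sketch of the secured endpoint; only the JAASLoginInterceptor and its contextName 'karaf' come from the text, while the WSS4JInInterceptor configuration (UsernameToken parsing), the implementor class and the address are assumptions:

```xml
<jaxws:endpoint id="helloWorldSecured"
                implementor="org.example.HelloWorldImpl"
                address="/HelloWorldSecured">
  <jaxws:inInterceptors>
    <!-- parse the WS-Security UsernameToken from the SOAP header -->
    <bean class="org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor">
      <argument>
        <map>
          <entry key="action" value="UsernameToken"/>
          <entry key="passwordType" value="PasswordText"/>
        </map>
      </argument>
    </bean>
    <!-- check the credentials against the 'karaf' JAAS realm -->
    <bean class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
      <property name="contextName" value="karaf"/>
    </bean>
  </jaxws:inInterceptors>
</jaxws:endpoint>
```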

7.3.13 SOAP web service proxy

This scenario is implemented using an OSGi-bundle called fuse-jaxws-proxy. The JAX-WS web service is named HelloWorld and is defined in the same way as in Subsection 7.3.11. The Camel route refers to this web service by the identifier 'callRealWebService'. Camel creates a proxy web service 'helloWorldProxy' in a similar way as described in Subsection 7.3.2. When a message arrives at the Camel route, it is passed from the 'helloWorldProxy' web service to the method 'enrich' of the class 'EnrichBean', which modifies the message. After that the message goes to the JAX-WS web service, specified by the URL http://localhost:8181/cxf/HelloWorld. The resulting message is interpreted as the result of the invocation of the helloWorldProxy web service and returns to the client.
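A sketch of the proxy route; the identifiers 'helloWorldProxy' and 'callRealWebService', the method 'enrich' and the target URL come from the text, while the bean id 'enrichBean' and the serviceClass are illustrative assumptions:

```xml
<!-- producer endpoint pointing at the real JAX-WS service -->
<cxf:cxfEndpoint id="callRealWebService"
                 address="http://localhost:8181/cxf/HelloWorld"
                 serviceClass="org.example.HelloWorld"/>

<route>
  <from uri="cxf:bean:helloWorldProxy"/>
  <!-- enrich the message before forwarding it to the real service -->
  <bean ref="enrichBean" method="enrich"/>
  <to uri="cxf:bean:callRealWebService"/>
</route>
```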

7.3.14 Method GET of a RESTful web service implementation

This scenario is implemented using an OSGi-bundle called fuse-rest. The JAX-RS web service is configured in Blueprint XML.

5. http://cxf.apache.org/javadoc/latest-2.7.x/org/apache/cxf/interceptor/security

The 'jaxrs:server' element sets up the JAX-RS service. The 'address' attribute provides a relative address under which the web service is accessible. The bean 'customerSvc' has a set of JAX-RS annotations which map the methods of the bean to client requests. The class CustomerService contains two methods:

• getCustomer, which returns customer object by identifier.

• addCustomer, which increases customer counter, but doesn’t actually add a customer.

The method getCustomer is annotated with the GET annotation and is invoked in this scenario. This is the only scenario where PerfCake sends a GET request to the server; in all other scenarios a POST request is sent.
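The Blueprint configuration can be sketched as follows; the bean id 'customerSvc' and the class name CustomerService come from the text, while the package name, the server id and the relative address are illustrative assumptions:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs">

  <!-- the bean carries the JAX-RS annotations (@GET, @POST, @Path, ...) -->
  <bean id="customerSvc" class="org.example.CustomerService"/>

  <!-- publish the annotated bean under the relative address -->
  <jaxrs:server id="customerService" address="/customers">
    <jaxrs:serviceBeans>
      <ref component-id="customerSvc"/>
    </jaxrs:serviceBeans>
  </jaxrs:server>
</blueprint>
```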

7.3.15 Method POST of a RESTful web service implementation

This scenario is implemented using the same OSGi-bundle, fuse-rest. The scenario invokes the method addCustomer, annotated with the POST annotation and described in Subsection 7.3.14.

8 Results

Using the scenarios described in the previous chapter, the following results were achieved.

Scenario (iterations/sec)                                    JBoss Fuse  SwitchYard
 1. HTTP exposed custom service                                 20,345*      19,152
 2. SOAP exposed custom service                                 16,845        9,161
 3. JMS exposed custom service                                      41        1,437
 4. HTTP exposed CBR using XPath                                16,812        7,041
 5. HTTP exposed CBR using RegEx                                 1,473        1,290
 6. HTTP exposed CBR using Rules                                 5,152        5,673
 7. HTTP exposed Services implementing
    Scatter-Gather pattern                                      17,609        2,267
 8. HTTP exposed Services implementing
    Splitter-Aggregator pattern                                  8,281        2,044
 9. Service orchestration                                        7,005        5,243
10. SOAP exposed XML message transformation
    using XSLT                                                   4,557        4,059
11. SOAP implementation of a web service
    using JAX-WS                                                19,927       16,624
12. SOAP implementation of a web service
    using JAX-WS secured by WS-Security                         12,487       15,106
13. SOAP web service proxy                                       1,082        6,451
14. Method GET of a RESTful web service
    implementation                                              25,638*      68,559
15. Method POST of a RESTful web service
    implementation                                              18,477       21,113

Table 8.1: Performance comparison of JBoss Fuse and SwitchYard

All scenarios, both for JBoss Fuse and SwitchYard, were executed with 100 concurrent client threads and with a message size of 5 KiB, except HTTP exposed CBR using Rules, which was executed with a 200 B message. All tests ran for 5 minutes each. The reasons for choosing these parameters are described in Chapter 7.

Before running the tests, the source code of JBoss Fuse was downloaded from the repository https://github.com/jboss-fuse/fuse.git, which is publicly available, and built using Maven.

For the test Method GET of a RESTful web service implementation for JBoss Fuse the number 25,638 is preliminary, as it was not possible to execute the test to completion. This is discussed further in Section 8.2.

For the test JMS exposed custom service, in the case of JBoss Fuse the result is shown for the database LevelDB 1, which is used for persisting messages in the ActiveMQ configuration. By default JBoss Fuse is configured with KahaDB 2; with KahaDB the throughput of this test is 31 iterations/sec. For SwitchYard the result for the test JMS exposed custom service was achieved when HornetQ was configured with an Oracle database. JBoss Fuse was also tested against the same database, but the result was even lower than for KahaDB and is not shown here.

For the test HTTP exposed custom service, the result in the table was taken for the case when the message was passed to the processor. There are also other results for this test for JBoss Fuse: when the message is transformed using a bean - 19,101 iterations/sec, and using the Simple Expression Language - 19,161 iterations/sec.

8.1 Concluding the results

According to the results in Table 8.1 Performance comparison of JBoss Fuse and SwitchYard, Camel routes and enterprise integration patterns are significantly faster in JBoss Fuse than in SwitchYard. This is most noticeable in the scenarios HTTP exposed CBR using XPath, HTTP exposed Services implementing Scatter-Gather pattern and HTTP exposed Services implementing Splitter-Aggregator pattern. The HTTP exposed custom service scenario has approximately the same performance on both platforms, as does the scenario using the XSLT transformation. SOAP exposed custom service and the JAX-WS web service are also faster in JBoss Fuse than in SwitchYard.

1. http://activemq.apache.org/leveldb-store.html 2. http://activemq.apache.org/kahadb.html


Figure 8.1: Relative throughput of JBoss Fuse compared to SwitchYard.

The other tests, dedicated to web services, work better in SwitchYard. An especially large difference shows up in the case of SOAP web service proxy and the method GET of a RESTful web service.

Besides, the test JMS exposed custom service ran significantly slower in JBoss Fuse than in SwitchYard. For JBoss Fuse this test uses a simple Camel route, which is fast in the other tests, and ActiveMQ, which according to other measurements is faster than HornetQ. However, those comparisons between ActiveMQ and HornetQ were made mainly for non-persistent messages. This scenario therefore appears to be an opportunity for improvement in JBoss Fuse.

8.2 Effort allocation and issues encountered

While working on this master thesis the author learned the theory connected with ESBs and performance testing in order to formalize the existing scenarios and add new ones. The most complicated part of the work for the author was connected with the implementation of the scenarios for JBoss Fuse and their automation.

During the work on the thesis the author learned all the technologies used for the implementation of the scenarios. Apart from implementing each scenario, the author tweaked the implementation to make it optimal from the performance perspective. This part required deep knowledge and understanding of the technologies used for the implementation. In some cases scenarios were rerun in different configurations to find out which one has the best performance.

The automation using SmartFrog also was not easy for the author. She learned how SmartFrog works in detail; it uses its own language to define the interaction between components. During the work on the automation, the author implemented a SmartFrog component to start and stop JBoss Fuse in Java, and wrote other components for the automation in the Groovy language. SwitchYard runs on JBoss Application Server, and a SmartFrog component for SwitchYard was already implemented in Java.

During the work on the scenario Method GET of a RESTful web service the following issue was encountered. The test runs for 1 minute in the warm-up phase, reaches a performance of 28,342 iterations/sec, and then the client throws the following exception:

java.net.NoRouteToHostException: Cannot assign requested address

Apart from that, on the client host 28,223 connections are in the TIME_WAIT state. On the server host there are at most five waiting connections, and no exception. This situation happens because the entire ephemeral port range was exhausted on the client machine. An ephemeral port is commonly used by the protocols UDP and TCP as the port on the client side of a client-server communication to the specified port on the server side. The exception was thrown because there were no free ephemeral ports available on the client host when PerfCake wanted to create new connections to the server. To solve this problem some network tuning is required, including making more ephemeral ports available and enabling fast recycling of TIME_WAIT sockets.
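On Linux, the tuning described above could be sketched with the following sysctl settings; the concrete port range is an illustrative value, not one taken from the test environment:

```shell
# widen the ephemeral port range available to outgoing client connections
sysctl -w net.ipv4.ip_local_port_range="15000 65000"

# allow reuse of sockets lingering in TIME_WAIT for new outgoing connections
sysctl -w net.ipv4.tcp_tw_reuse=1
```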
Unfortunately, permission to tune the network was not granted, as the machine was used by other applications.

Besides, in an earlier version of JBoss Fuse there was an issue connected with a low performance test result for the scenario SOAP implementation of a web service using JAX-WS secured by WS-Security. The performance of that scenario was 144 iterations/sec, and after fixing the issue it increased to 12,487 iterations/sec. The issue was found thanks to

the work on this thesis, and it is described in the issue tracking system 3.

3. https://issues.jboss.org/browse/ENTESB-1473

9 Conclusion

The aim of this thesis was to compare the performance of JBoss integration platform implementations.

During the work on the thesis two integration platforms were considered: JBoss Fuse and SwitchYard. The thesis describes the notions of service-oriented architecture and enterprise service bus. From the perspective of testing, methods of performance testing and test automation were characterized. Based on that knowledge the author contributed to the design of the performance scenarios for comparing JBoss Fuse and SwitchYard. She formalized and defined the scenarios which were already implemented for SwitchYard, and also introduced new scenarios:

• HTTP exposed Services implementing Splitter-Aggregator pat- tern

• Method GET of a RESTful web service implementation

• Method POST of a RESTful web service implementation

All fifteen scenarios were implemented for JBoss Fuse and tweaked to be optimal from the performance point of view. The performance was measured using PerfCake, a tool for performance measurement. Performance test execution was automated in a distributed environment using Jenkins and SmartFrog, a tool used to manage test runs in a client-server environment. After multiple automated test executions, the author of the thesis made sure that the results were stable and collected them. The technologies used for the implementation of the performance tests for JBoss Fuse included Apache Camel, Apache ActiveMQ, Apache Karaf and Apache CXF.

The analysis of the results is presented in Chapter 8. It can be concluded that for most of the scenarios the results are comparable. In general, it appears that the performance of the integration framework is better in JBoss Fuse, as most of the scenarios connected with routing work faster there. But the scenarios connected with web services, especially proxying a third party web service, run faster in SwitchYard.

Two issues were encountered during the work on the thesis; they are described in Section 8.2. One issue concerns the performance of the JBoss Fuse scenario JAX-WS secured by WS-Security, and it has already been fixed by the developers of JBoss Fuse. The automated tests developed during this thesis can be used to measure the performance of different releases of JBoss Fuse in order to make sure that there is no drop in performance in a release.

Bibliography

[1] Jonathan Anstey. Open source integration with apache camel and how fuse ide can help. DZone, 2011. Available at http://java. dzone.com/articles/open-source-integration-apache.

[2] The Apache Software Foundation. Apache Karaf Manual. Avail- able at http://karaf.apache.org/manual/latest/index.html.

[3] Naveen Balani and Rajeev Hathi. Apache CXF Web Service De- velopment. Packt Publishing, 2009.

[4] Brian Goetz. Java theory and practice: Dynamic compilation and performance measurement. DeveloperWorks, 2004. Available at http://www.ibm.com/developerworks/library/j-jtp12214/.

[5] Steven Haines. Pro Java EE 5 Performance Management and Op- timization. Apress, 1 edition, May 2006.

[6] HP Labs. SmartFrog Documentation. Available at http://www. smartfrog.org/display/sf/SmartFrog+Home.

[7] Claus Ibsen and Jonathan Anstey. Camel in Action. Manning Publications Co., Stamford, 2011.

[8] JBoss Community. JBoss Application Server Documentation. Available at http://jbossas.jboss.org/.

[9] JBoss Community. SwitchYard Documentation. Available at https://docs.jboss.org/author/display/SWITCHYARD11/ Home.

[10] Eric Jendrock, Ricardo Cervera-Navarro, Ian Evans, Kim Haase, and William Markito. The Java EE 7 Tutorial. Oracle. Available at http://docs.oracle.com/javaee/7/tutorial/doc/index.html.

[11] Jaideep Khanduja. What is a testing environment for software testing? TechTarget, 2008. Available at http://www.infoq.com/ articles/ESB-Integration.

[12] Ian Molyneaux. The Art of Application Performance Testing. O’Reilly, 2009.

[13] Jakob Nielsen. Response times: The 3 important limits. Nielsen Norman Group, 1993. Available at http://www.infoq.com/ articles/ESB-Integration.

[14] The Open Group. The SOA Source Book. Available at http://www.opengroup.org/soa/source-book/soa/soa.htm.

[15] Opensourcetesting.org. Performance test tools. Available at http://www.opensourcetesting.org/performance.php.

[16] OSGi Alliance. The OSGi Architecture. Available at http://www.osgi.org/Technology/WhatIsOSGi.

[17] Red Hat. Fuse Documentation. Available at https://access.redhat.com/site/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/.

[18] John Ferguson Smart. Jenkins: The Definitive Guide. O’Reilly Media, Inc., 2011.

[19] Bruce Snyder, Dejan Bosanac, and Rob Davies. ActiveMQ in Ac- tion. Manning Publications Co., Greenwich, CT, USA, 2011.

[20] Kai Wähner. Choosing the right ESB for your integration needs. InfoQ, 2013. Available at http://www.infoq.com/articles/ESB-Integration.

10 Appendix

Figure 10.1: HTTP exposed custom service


Figure 10.2: SOAP exposed custom service

Figure 10.3: JMS exposed custom service


Figure 10.4: HTTP exposed content based routing using XPath

Figure 10.5: HTTP exposed content based routing using RegEx


Figure 10.6: HTTP exposed content based routing using Rules

Figure 10.7: HTTP exposed Services implementing Scatter-Gather pattern


Figure 10.8: HTTP exposed Services implementing Splitter-Aggregator pattern

Figure 10.9: Service orchestration


Figure 10.10: SOAP exposed XML message transformation using XSLT

Figure 10.11: SOAP implementation of a web service using JAX-WS


Figure 10.12: SOAP implementation of a web service using JAX-WS secured by WS-Security

Figure 10.13: SOAP web service proxy


Figure 10.14: Method POST of a RESTful web service implementation
