User Guide

How to Use Payara Micro with Kubernetes via EKS

The Payara® Platform - Production-Ready, Cloud Native and Aggressively Compatible.

Contents

Amazon Web Services Elastic Kubernetes Service (EKS)
Requirements
AWS Account Setup
Creating the Kubernetes Cluster
Payara Micro Sample Application
Preparing our Docker Image
Preparing the Kubernetes Cluster for Payara Micro
Provision the Kubernetes Cluster with a new Deployment and Service
Testing the Sample Application in a Cluster
Summary

Kubernetes has become the de-facto solution for container orchestration in the cloud. Kubernetes is a complex tool designed for operating hybrid platforms. If you intend to deploy a Payara Micro Kubernetes cluster using a cloud provider, you have to follow specific instructions for the provider of your choice (Amazon Web Services, Google Cloud Platform, etc.) as they all have separate implementation mechanisms for provisioning new clusters. The purpose of this guide is to showcase how to create a new Kubernetes cluster in Amazon Web Services and to set up a deployment using a sample WAR application running on Payara Micro. The contents of this guide will cover:

• What is AWS Elastic Kubernetes Service (EKS)?
• Requirements for how to set up your environment
• How to set up your AWS account
• Creating the Kubernetes cluster
• The structure of the Payara Micro sample application
• How to prepare the Docker image to deploy in the cluster
• Provisioning your Kubernetes cluster with a new Deployment
• Testing the application in the cluster

You can also watch our video tutorial for this guide on YouTube.


Amazon Web Services Elastic Kubernetes Service (EKS)

The Amazon Web Services infrastructure offers a special service called the Elastic Kubernetes Service (EKS) that allows users to easily create and manage the lifecycle of a Kubernetes cluster in the AWS infrastructure without the need to maintain the control plane components of the cluster. One of the main advantages of this service is that it guarantees the health and availability of the Kubernetes control plane, lifting that burden off the users. EKS will create a control plane for each Kubernetes cluster in a specific AWS region. The EKS service doesn’t operate by itself in maintaining the cluster, though; the following AWS services are used in conjunction with it:

• Amazon EC2, which manages the nodes of the Kubernetes cluster as virtual instances, with their corresponding security groups and the AMIs used to provision the contents of each node.
• Amazon VPC, which manages the networking aspects of the cluster, including policies to restrict traffic between the nodes of the cluster and the control plane. Along with this, policies for role-based authorization are put in place to isolate the cluster from unwanted access.
• Amazon Elastic Load Balancing, for distributing the load of requests received by the cluster.
• Identity and Access Management (IAM), for authentication of administrators and other users.
• Amazon CloudFormation, to maintain the template stacks used to create and keep the state of the cluster as intended.

Requirements

In order to set up your first cluster using Amazon EKS, you will first need to install the following tools on your local machine:

• The Amazon Web Services command-line interface (aws-cli). You can read more information about how to install this tool for your Operating System here.
• Docker, installed locally on your machine.
• The kubectl command-line utility, which will allow you to interact with the Kubernetes cluster. There are multiple ways to install this tool:
  • For Windows and macOS environments, the utility is included when installing either Docker Desktop or Minikube. I personally recommend using Docker Desktop since it provides both Docker and Kubernetes management tools with one installation.
  • For most environments, you can install the utility directly using either a package manager (apt/yum/snap for Linux, Brew for macOS, Chocolatey for Windows), or download the utility directly to your local machine. Both alternatives are documented here.

Once everything is installed, you can run the quick checks shown below.
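As a quick sanity check (the exact versions reported will vary with your installation), each tool should respond on the command line:

$ aws --version
$ docker --version
$ kubectl version --client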


AWS Account Setup

In order to create and interact with the cluster, you will have to set up an Amazon Web Services user account locally on your machine to be used with the command-line utilities. Although you can use your personal account, it is recommended practice to set up a separate user account with limited permissions in case it gets compromised. This account should be used exclusively for programmatic access to the AWS API services via the command-line interface.

To create this account, log in to your personal AWS account and head to the Identity and Access Management (IAM) service and proceed to create a new user under the Users option. Hit the Add User button and provide the following data:

Hit the Next:Permissions button and in the following screen set the permissions for the account. You’ll have to head to the Attach existing policies directly tab and then use the search input to add the following policies:

• AdministratorAccess • AmazonEKSClusterPolicy • AmazonEKSServicePolicy


Hit the Next:Tags button and in the following screen leave the tag input fields blank. Proceed to the review screen and review that the user’s settings are correct:

To finish the process, hit the Create user button. The user account will be created. Pay special attention to the credentials listed in the table for the user: the Access key ID and the Secret access key.


You will use these credentials to set up the account’s programmatic access for the AWS command-line interface. Click the Show link to display the secret access key:

The secret access key is only displayed on the results screen for the user creation. Once you close this screen, you will not be able to retrieve this key anywhere within the administration console. This is by design, in order to protect the key. If you lose it, you will have to create a new set of credentials, which is outside the scope of this guide.

With the user account created, the next step is to configure the AWS command-line utility to use these credentials. To do this, run the aws configure command. You will be prompted for the access key ID and secret access key, a default AWS region, and the default output format:

$ aws configure
AWS Access Key ID [none]: AKIAWO5SNRHVGXWZDRXS
AWS Secret Access Key [none]: ------
Default region name [none]: us-west-2
Default output format [none]: json


With this, your command-line interface is ready to remotely interact with the EKS service.
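Optionally, you can confirm which identity the CLI is now using (this check is not part of the original walkthrough, but the command is part of the standard AWS CLI); the output lists the account ID and the ARN of the user you just created:

$ aws sts get-caller-identity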

Creating the Kubernetes Cluster

The first and most important step in provisioning our Kubernetes cluster is to create it. To quickly create a new Kubernetes cluster, we will use the eksctl create cluster command with the following initial arguments:

• Cluster name
• Kubernetes version
• Starting number of nodes
• The region where the cluster nodes will be hosted
• Type of EC2 instances used to create the cluster nodes

With the following command, we will create a new cluster with 3 nodes, all living in the us-west-2 region. All three nodes will be created using t2.medium-sized EC2 instances (2 vCPUs and 4 GB of RAM, which should be enough for each node to host multiple pods):

$ eksctl create cluster --name demo-cluster --version 1.11 --nodes 3 --region us-west-2 --node-type t2.medium

using region us-west-2
setting availability zones to [us-west-2a us-west-2b us-west-2c]
subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
nodegroup "ng-223fb37a" will use "ami-057d1c0dcb254a878" [AmazonLinux2/1.11]
using Kubernetes version 1.11
creating EKS cluster "demo-cluster" in "us-west-2" region
will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=demo-cluster'
2 sequential tasks: { create cluster control plane "demo-cluster", create nodegroup "ng-223fb37a" }
building cluster stack "eksctl-demo-cluster-cluster"
deploying stack "eksctl-demo-cluster-cluster"
building nodegroup stack "eksctl-demo-cluster-nodegroup-ng-223fb37a"
--nodes-min=3 was set automatically for nodegroup ng-223fb37a


--nodes-max=3 was set automatically for nodegroup ng-223fb37a
deploying stack "eksctl-demo-cluster-nodegroup-ng-223fb37a"
all EKS cluster resource for "demo-cluster" had been created
saved kubeconfig as "~/.kube/config"
adding role "arn:aws:iam::444366424554:role/eksctl-demo-cluster-nodegroup-ng-NodeInstanceRole-D0W6INN7UAMA" to auth ConfigMap
nodegroup "ng-223fb37a" has 0 node(s)
waiting for at least 3 node(s) to become ready in "ng-223fb37a"
nodegroup "ng-223fb37a" has 3 node(s)
node "ip-192-168-19-140.us-west-2.compute.internal" is ready
node "ip-192-168-40-33.us-west-2.compute.internal" is ready
node "ip-192-168-79-238.us-west-2.compute.internal" is ready
kubectl command should work with "~/.kube/config", try 'kubectl get nodes'
EKS cluster "demo-cluster" in "us-west-2" region is ready

Kubernetes Version

For this example, the version of the Kubernetes cluster being created is 1.11. Whenever interacting with a Kubernetes cluster, keep in mind that the minor version of the kubectl client utility should be within one minor version of the cluster’s. This means that in the case of this example, the client version should be 1.10, 1.11, or 1.12. Use this knowledge when adapting this guide to newer versions of Kubernetes in AWS.
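You can compare the client and server versions at any time (the exact output format depends on your kubectl release):

$ kubectl version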

The cluster creation will take a few minutes, but afterward the cluster will be fully ready. The eksctl utility will automatically configure your local environment’s Kubernetes configuration by adding a new context with the information and credentials to interact with the EKS cluster, and will use it to set the current context in the ${USER_HOME}/.kube/config file.
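As a quick optional check (not part of the original walkthrough), you can confirm that kubectl is now pointing at the new cluster:

$ kubectl config current-context

You can then verify that the cluster is fully provisioned by listing all existing nodes in the default namespace: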

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-19-140.us-west-2.compute.internal   Ready    <none>   1h    v1.11.9
ip-192-168-40-33.us-west-2.compute.internal    Ready    <none>   1h    v1.11.9
ip-192-168-79-238.us-west-2.compute.internal   Ready    <none>   1h    v1.11.9

You can observe that each node corresponds to a new Amazon EC2 instance hosted in the specified region.


Lastly, you can verify that the cluster has been successfully created and is reported as active by heading to the AWS console, selecting the EKS dashboard, and checking the details of the demo-cluster under the Clusters section:

Payara Micro Sample Application

Since we are provisioning a Kubernetes cluster, there’s no point in creating a simple application that doesn’t take advantage of the distributed capabilities of the Payara Platform and the self-managing aspects of Kubernetes. A simple application that manages user data will suffice, with the following specifications:

• The application will allow new users to be created.
• Each user is comprised of a name, an organization, and a consecutive ID.
• The application will track the current consecutive number of users.
• Additionally, the application will track in which location of the Kubernetes cluster (in this case, the pod’s name) the user was created.
• Lastly, the application will allow all users to be listed with their relevant data and allow each user’s data to be retrieved by its ID (see the sample document below).
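Put together, these specifications mean that a stored user will look like the following JSON document (the field names come from the sample code shown later in this guide; the values are only illustrative):

{
  "id": 1,
  "name": "Fabio Turizo",
  "organization": "Payara Services Ltd.",
  "createdOnInstance": "k8s-demo-deployment-67b466fbdb-vqkk5"
}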


For our application to fulfill these criteria, we will make use of the following set of APIs that are included in the Payara Platform:

• Java EE Web API, which is the standard API set for applications developed for Payara Micro.
• JCache API, so that all users are cached in a distributed manner. If a user is created by one instance that is part of the cluster, the other members can see them.
• Payara Public API, to get access to the @Clustered annotation, which is proprietary to the Payara Platform. With this annotation, we can configure an @ApplicationScoped CDI bean as a “true” singleton, which means that only one instance of the bean exists across all the instances that are members of the cluster.
• Payara Micro API, in order to access the PayaraMicro.getInstance().getInstanceName() method, which will get the current name assigned to a running Payara Micro instance.

With this in mind, let’s start the body of our application by setting it up as a Maven project. Here’s the POM file with the dependencies we mentioned previously:

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>fish.payara.support</groupId>
    <artifactId>ClusterDemo</artifactId>
    <version>1.0.0</version>
    <packaging>war</packaging>

    <name>Cluster Demo</name>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <payara.version>5.192</payara.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>javax.cache</groupId>
            <artifactId>cache-api</artifactId>
            <version>1.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>fish.payara.api</groupId>
            <artifactId>payara-api</artifactId>
            <version>${payara.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>fish.payara.extras</groupId>
            <artifactId>payara-micro</artifactId>
            <version>${payara.version}</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>cluster-demo</finalName>
    </build>
</project>

Let’s continue with the definition of the UserData entity, which will hold the user’s data:

UserData.java

public class UserData implements Serializable {

    private static final long serialVersionUID = 1024713371902278434L;

    private Integer id;

    @NotNull
    @NotEmpty
    private String name;

    @NotNull
    @NotEmpty
    private String organization;

    private String createdOn;

    @JsonbCreator
    public UserData(@JsonbProperty("name") String name,
                    @JsonbProperty("organization") String organization) {
        this.name = name;
        this.organization = organization;
    }

    public UserData(Integer id, UserData data, String createdOn) {
        this.id = id;
        this.name = data.name;
        this.organization = data.organization;
        this.createdOn = createdOn;
    }

    @JsonbProperty("id")
    public Integer getId() {
        return id;
    }

    @JsonbProperty("name")
    public String getName() {
        return name;
    }

    @JsonbProperty("organization")
    public String getOrganization() {
        return organization;
    }

    @JsonbProperty("createdOnInstance")
    public String getCreatedOn() {
        return createdOn;
    }
}

As you can see, the entity holds the fields for the user’s id, name, and organization. It also holds the name of the instance on which the user was created, via the createdOn field. JSON-B and Bean Validation annotations are used to properly parse and validate the data when it is received via an HTTP request to the corresponding REST endpoint (more on that below).
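As a quick illustration of how these annotations behave, the following standalone snippet (not part of the sample application; shown only as a local experiment, assuming UserData is on the classpath) uses the JSON-B API directly to map a JSON document onto the entity via its @JsonbCreator constructor:

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;

public class JsonbDemo {
    public static void main(String[] args) {
        Jsonb jsonb = JsonbBuilder.create();
        // The @JsonbCreator constructor receives the "name" and "organization" properties
        String json = "{\"name\":\"Jane Doe\",\"organization\":\"Acme\"}";
        UserData user = jsonb.fromJson(json, UserData.class);
        System.out.println(user.getName() + " / " + user.getOrganization());
    }
}

At runtime, JAX-RS performs this same mapping automatically for request bodies, and the @Valid annotation on the endpoint triggers Bean Validation, rejecting payloads with a missing or empty name or organization.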


Next, we need 2 service components: one for generating the ID of each user using a counter, called CounterService, and another for retrieving the name of the instance responsible for running the application, called InstanceInfoService:

CounterService.java

@ApplicationScoped
@Clustered
public class CounterService implements Serializable {

    private final AtomicInteger userCounter = new AtomicInteger(0);

    public Integer getNextValue() {
        return userCounter.incrementAndGet();
    }

    public Integer getCurrentValue() {
        return userCounter.get();
    }
}

InstanceInfoService.java

@ApplicationScoped
public class InstanceInfoService {

    private static final Logger LOG = Logger.getLogger(InstanceInfoService.class.getName());
    private static final String DEFAULT_NAME = "payara-micro";

    public String getName() {
        String instanceName = null;
        try {
            instanceName = PayaraMicro.getInstance().getInstanceName();
        } catch (Exception exception) {
            LOG.log(Level.SEVERE, "Error retrieving instance name", exception);
        }
        return Optional.ofNullable(instanceName).orElse(DEFAULT_NAME);
    }
}

The structure of the InstanceInfoService is pretty straightforward: it has a simple method for getting the currently running instance’s name via the PayaraMicro API, and if for any reason the instance is not named, it will return a default name.


We will take advantage of this feature to pass in the name of the Kubernetes pod that will host the Payara Micro application when it starts, which will let us track which instance has created which user.

For the CounterService, we internally keep track of the current counter using an AtomicInteger, which allows concurrent, synchronized access to the reads and writes of its “wrapped” integer value. Since this class’s instance will be a “true” singleton courtesy of the @Clustered annotation, the class needs to be marked as serializable.
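To see why the atomic counter matters, here is a tiny standalone experiment (not part of the sample application): a thousand concurrent increments always yield exactly 1000, something a plain int field could not guarantee under parallel access:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class AtomicCounterDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        // 1000 parallel increments; incrementAndGet() is thread-safe
        IntStream.range(0, 1000).parallel()
                 .forEach(i -> counter.incrementAndGet());
        System.out.println(counter.get()); // always prints 1000
    }
}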

The next step is to define a service component to handle the creation, storage, and querying of users, called UserDataService:

UserDataService.java

@RequestScoped
public class UserDataService {

    @Inject
    Cache<Integer, UserData> dataSet;

    @Inject
    CounterService counterService;

    @Inject
    InstanceInfoService infoService;

    public Optional<UserData> retrieve(Integer id) {
        return Optional.ofNullable(dataSet.get(id));
    }

    public UserData store(UserData userData) {
        int nextId = counterService.getNextValue();
        dataSet.putIfAbsent(nextId, new UserData(nextId, userData, infoService.getName()));
        return dataSet.get(nextId);
    }

    public List<UserData> listAll() {
        return IntStream.rangeClosed(0, counterService.getCurrentValue())
                        .filter(dataSet::containsKey)
                        .mapToObj(dataSet::get)
                        .collect(Collectors.toList());
    }
}


The service uses an injected cache to store and retrieve the users that are created. Whenever a new user is created, the 2 previous components are used to assign the user its ID and the instance it was created on, and then the user is stored inside the cache using its ID as the key for the corresponding user data object. To list all users, a quick retrieval is done by walking up to the current value of the user counter and fetching the cache values for all existing IDs. The advantage of using the JCache API is that whenever a user is created in one member of the cluster, it will be available to all other members. This is the main advantage of using the Payara Platform in an orchestrated clustered environment.
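For readers less familiar with JCache, the injection above is Payara’s CDI shortcut; the sketch below shows the equivalent plain JCache bootstrap (the cache name and configuration here are hypothetical, chosen only for illustration, and running it standalone requires a JCache provider on the classpath):

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class PlainJCacheDemo {
    public static void main(String[] args) {
        // Resolve whichever JCache provider is on the classpath; inside
        // Payara Micro this is backed by Hazelcast, so entries are
        // visible to every member of the data grid.
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        Cache<Integer, String> cache = manager.createCache(
                "demo-cache", new MutableConfiguration<Integer, String>());
        cache.putIfAbsent(1, "first entry");
        System.out.println(cache.get(1));
    }
}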

Lastly, we need a REST endpoint to send requests to our application. Here is the UserDataEndpoint JAX-RS endpoint definition:

UserDataEndpoint.java

@Path("/data")
@RequestScoped
public class UserDataEndpoint {

    @Inject
    UserDataService userDataService;

    @POST
    public Response createUser(@Valid UserData data, @Context UriInfo uriInfo) {
        UserData newUser = userDataService.store(data);
        return Response.created(UriBuilder.fromPath(uriInfo.getPath())
                                          .path("{id}")
                                          .build(newUser.getId()))
                       .build();
    }

    @GET
    @Path("/{id}")
    public Response getUser(@PathParam("id") @NotNull Integer id) {
        return userDataService.retrieve(id)
                              .map(Response::ok)
                              .map(Response.ResponseBuilder::build)
                              .orElseThrow(() -> new NotFoundException());
    }

    @GET
    @Path("/all")
    public List<UserData> getAllUsers() {
        return userDataService.listAll();
    }
}


Let’s not forget the JAX-RS application configuration component too:

DemoApplication.java

@ApplicationPath("/")
public class DemoApplication extends Application {
}

And with it, all of our application components are in place. You only need to build the application’s WAR file by running a single mvn clean install command and that’s it!

Preparing our Docker Image

In order to deploy our sample Payara Micro application in a Kubernetes cluster, we have to rely on a Docker image that will be used to provision the containers living in the cluster’s pods. Proceed to create a Dockerfile in the root folder of the Maven project with the following contents:

Dockerfile

FROM payara/micro:5.192
COPY target/cluster-demo.war /opt/payara/deployments/cluster-demo.war

And now proceed to build the image locally. For the purposes of this example, we’ll tag the image as payara/cluster-demo:

$ docker build -t payara/cluster-demo .

Sending build context to Docker daemon  79.7MB
Step 1/2 : FROM payara/micro:5.192
 ---> 37d18bdf7828
Step 2/2 : COPY target/cluster-demo.war /opt/payara/deployments/cluster-demo.war
 ---> Using cache
 ---> fbfc28f5f9ca
Successfully built fbfc28f5f9ca
Successfully tagged payara/cluster-demo:latest


For EKS to use this image to provision the starting deployment, you will have to rely on a Docker registry where the image (and its tags) will be hosted. Fortunately, Amazon Web Services provides a container registry called Elastic Container Registry (ECR), which serves this purpose and will allow you to push local images into it. On the AWS web console, search for the ECR service, which will bring up the following dashboard screen:

Click the Create Repository button and set the repository’s name using the same name as the image tag:

Finish the process by clicking on the Create Repository button. The repository will be ready to accept new image tags to be pushed into it:


To push the local image into the repository, we need to log our Docker installation in to the ECR registry by running the following command:

$ $(aws ecr get-login --no-include-email --region us-west-2)
Login Succeeded

Now proceed to tag the recently built image with the repository’s URI (you can get it from the dashboard). We’ll use the latest tag, as is standard practice with Docker images:

$ docker tag payara/cluster-demo:latest 444366424554.dkr.ecr.us-west-2.amazonaws.com/payara/cluster-demo:latest

And finally, push the image’s tag to the repository by using the docker push command (this may take a few minutes):

$ docker push 444366424554.dkr.ecr.us-west-2.amazonaws.com/payara/cluster-demo:latest
The push refers to repository [444366424554.dkr.ecr.us-west-2.amazonaws.com/payara/cluster-demo]
899a52b10d1c: Pushed
f55096c011ca: Pushed
0c5fa121d025: Pushed
ceaf9e1ebef5: Pushed
9b9b7f3d56a0: Pushed
f1b5933fe4b5: Pushed
latest: digest: sha256:c048ac1ac0a115890b408d596db6e28a85d0a3cae2d00c8f794f80a5fdba96a7 size: 1576

You can observe that the tag has been successfully pushed to the repository by using the ECR dashboard and checking the repository’s details:


Preparing the Kubernetes Cluster for Payara Micro

Since we are going to run multiple containers orchestrated by the Kubernetes cluster we created, we have to keep in mind that these instances will live in the cluster “isolated” from each other unless they are explicitly configured to discover each other and form a data grid. To do this, we’ll take advantage of Payara Micro’s auto-clustering feature, which allows any Payara Micro instance to discover an existing data grid and join it using a combination of sensible defaults with minimal configuration. Payara Micro supports the clustering of instances in a grid that lives in a Kubernetes cluster arrangement by using the --clustermode argument set to kubernetes, so no additional configuration settings are required! There’s a catch, however: the grid’s Kubernetes discovery mode relies on a specific Hazelcast plugin that requires access to the cluster’s control plane (or master node), which will not be possible unless it is explicitly configured.

To grant access to the control plane in this instance, we will create a custom cluster role binding policy, which will allow the default service account (used by the Kubernetes API exposed in the control plane) read-only access to all Kubernetes resources in the default namespace. The easiest way to create this policy is with the following YAML specification document:

rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default

And then apply it directly to our newly created EKS cluster:

$ kubectl apply -f rbac.yaml clusterrolebinding.rbac.authorization.k8s.io/default-cluster created

With this change, each Payara Micro instance created within the Kubernetes cluster will be capable of discovering other instances and automatically clustering as intended.
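If you want to double-check the new binding (an optional step not in the original walkthrough), kubectl can describe it; the output should list the view cluster role as the role reference and the default service account as its subject:

$ kubectl describe clusterrolebinding default-cluster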

Provision the Kubernetes Cluster with a new Deployment and Service

With all of the pieces in place, the last step of this guide is to provision our cluster with the following resources:

• A Kubernetes deployment, which will specify that our Payara Micro application is to be deployed using a fixed number of pods based on the Docker image hosted in the ECR repository. The deployment will instruct the pods on how to start each Payara Micro instance.
• A Kubernetes service, which will be used to allow all pods to discover each other whenever they are placed in the cluster, which in turn lets each corresponding Payara Micro instance discover the others and form the corresponding distributed cluster. This service will be defined as a load balancer to allow external access to our application via an HTTP endpoint as well.

Let’s start with the service. Create a new YAML specification document with the following content:

demo-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: LoadBalancer
  selector:
    app: cluster-demo
  ports:
  - name: web
    port: 80
    targetPort: 8080
    nodePort: 30080

The service definition is pretty straightforward: it will be of type LoadBalancer (traffic will be sensibly balanced between all available pods that are part of the service), and the service will define endpoints for all pods that are labeled with the app=cluster-demo key-value pair selector. This is important since our deployment will use that same matching label as well. Finally, the service will map port 8080 in each pod to port 80 (which is a staple of modern services) for its corresponding Kubernetes endpoint (managed automatically by the service), and will also map this port to port 30080 on the corresponding node (or AWS EC2 instance in this case) in order to allow external access via the load balancer.

AWS Load Balancer

Kubernetes load balancers are provisioned using AWS EC2 Elastic Load Balancers. Specifically, Classic Load Balancers will be used whenever possible.

Now, apply the service specification using the corresponding kubectl command:

$ kubectl apply -f demo-service.yaml
service/demo-service created

Let’s verify that the service has been created successfully by using the kubectl get service command:

$ kubectl get service demo-service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP                                                                PORT(S)        AGE
demo-service   LoadBalancer   10.100.84.69   ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com   80:30080/TCP   15m

20 How to Use Payara Micro with Kubernetes via Amazon Web Services EKS

Notice that the service is ready to accept requests, since it has been assigned an external IP. This will allow any clients to send HTTP requests to the service via the provided service URL, which in this case is http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com
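Behind this URL, Kubernetes keeps a list of endpoints for the service, one per matching pod. Once the deployment below has been applied, you can list them at any time (an optional check):

$ kubectl get endpoints demo-service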

Now, let’s continue with the deployment. Proceed to create a new YAML specification document with the following content:

demo-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cluster-demo
  template:
    metadata:
      labels:
        app: cluster-demo
    spec:
      containers:
      - name: py-micro-demo
        image: 444366424554.dkr.ecr.us-west-2.amazonaws.com/payara/cluster-demo
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - "--clustermode"
        - "kubernetes"
        - "--name"
        - "k8s-$(POD_NAME)"
        - "--clusterName"
        - "demo"
        - "--deploy"
        - "/opt/payara/deployments/cluster-demo.war"
        - "--contextRoot"
        - "/"


Some observations about the deployment definition:

• The name of the deployment will be demo-deployment.
• Notice that the deployment defines that the application will be composed of pods labeled with the app=cluster-demo key-value pair.
• The deployment will always maintain 3 replicas of the pods defined in its specification.
• Each pod will start one Docker container using the image in our ECR repository.
• The container port is explicitly declared as 8080, which is the default HTTP port for our Payara Micro application.
• The container defines an environment variable called $POD_NAME, which is populated with the name of the pod, obtained from the pod’s metadata. Pods created from the deployment will have an automatically generated name composed of the name of the deployment and a unique suffix.
• The args entries are supported by the Payara Micro Docker image as starting arguments for the Payara Micro Java process used to run the application.
• As stated before, the cluster mode is set to kubernetes.
• The name of the Payara Micro instance is composed of the k8s- prefix attached to the name of the pod, which is retrieved using the $POD_NAME environment variable (we will verify this wiring later, once the pods are running).
• The application being deployed is the previously built cluster-demo WAR, with the path that was set in the image’s Dockerfile.
• The application context root is set to /.

Once the file is ready, apply the specification with the appropriate kubectl command:

$ kubectl apply -f demo-deployment.yaml deployment.apps/demo-deployment created

You can verify that our deployment has been created correctly with the following kubectl commands:

$ kubectl get deployment demo-deployment
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   3         3         3            3           2m

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
demo-deployment-67b466fbdb-2w4ql   1/1     Running   0          34s
demo-deployment-67b466fbdb-t7b4h   1/1     Running   0          34s
demo-deployment-67b466fbdb-vqkk5   1/1     Running   0          34s

As you can see, the deployment has been created successfully, with 3 pod replicas reported as running, just like the specification states.


How can we verify that the corresponding Payara Micro instances were able to discover each other and form their own cluster? Let’s check out the logs of one of our pods and see the output of the Docker image:

$ kubectl logs demo-deployment-67b466fbdb-2w4ql ... [2019-10-16T04:08:31.120+0000] [] [INFO] [] [javax.enterprise.system.core] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198911120] [levelValue: 800] cluster-demo was successfully deployed in 6,104 milliseconds.

[2019-10-16T04:08:31.121+0000] [] [INFO] [] [PayaraMicro] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198911121] [levelValue: 800] Deployed 1 archive(s)

[2019-10-16T04:08:46.080+0000] [] [INFO] [] [fish.payara.nucleus.hazelcast.HazelcastCore] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926080] [levelValue: 800] Hazelcast Instance Bound to JNDI at payara/Hazelcast

[2019-10-16T04:08:46.081+0000] [] [INFO] [] [fish.payara.nucleus.hazelcast.HazelcastCore] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926081] [levelValue: 800] JSR107 Caching Provider Bound to JNDI at payara/CachingProvider

[2019-10-16T04:08:46.081+0000] [] [INFO] [] [fish.payara.nucleus.hazelcast.HazelcastCore] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926081] [levelValue: 800] JSR107 Default Cache Manager Bound to JNDI at payara/CacheManager

[2019-10-16T04:08:46.098+0000] [] [INFO] [] [fish.payara.nucleus.cluster.PayaraCluster] [tid: _ThreadID=93 _ThreadName=Executor-Service-6] [timeMillis: 1571198926098] [levelValue: 800] [[
Data Grid Status
Payara Data Grid State: DG Version: 35 DG Name: demo DG Size: 3
Instances: {
 DataGrid: demo Instance Group: MicroShoal Name: k8s-demo-deployment-67b466fbdb-t7b4h Lite: false This: false UUID: d3538ae8-fe20-40e0-9018-23b3221dbf07 Address: /192.168.43.202:6900
 DataGrid: demo Lite: false This: false UUID: 2a5fea35-25e5-4018-8788-69c4be33a54c Address: /192.168.4.36:6900
 DataGrid: demo Instance Group: MicroShoal Name: k8s-demo-deployment-67b466fbdb-2w4ql Lite: false This: true UUID: ddb10d10-e15e-4802-b846-15d91fa940e9 Address: /192.168.71.113:6900
}]]


[2019-10-16T04:08:46.186+0000] [] [INFO] [AS-WEB-GLUE-00130] [javax.enterprise.web] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926186] [levelValue: 800] Invalid Session Management Configuration for non-distributable app [cluster-demo] - defaulting to memory: persistence-type = [hazelcast] / persistenceFrequency = [web-method] / persistenceScope = [modified-session]

[2019-10-16T04:08:46.262+0000] [] [INFO] [] [org.glassfish.soteria.servlet.SamRegistrationInstaller] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926262] [levelValue: 800] Initializing Soteria 1.1-b01 for context ''

[2019-10-16T04:08:46.777+0000] [] [INFO] [AS-WEB-GLUE-00172] [javax.enterprise.web] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926777] [levelValue: 800] Loading application [cluster-demo] at [/]

[2019-10-16T04:08:46.839+0000] [] [INFO] [] [PayaraMicro] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926839] [levelValue: 800] [[

{ "Instance Configuration": { "Host": "demo-deployment-67b466fbdb-2w4ql", "Http Port(s)": "8080", "Https Port(s)": "", "Instance Name": "k8s-demo-deployment-67b466fbdb-2w4ql", "Instance Group": "MicroShoal", "Hazelcast Member UUID": "ddb10d10-e15e-4802-b846-15d91fa940e9", "Deployed": [ { "Name": "cluster-demo", "Type": "war", "Context Root": "/" } ] } }]]

[2019-10-16T04:08:46.845+0000] [] [INFO] [] [PayaraMicro] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926845] [levelValue: 800] [[

Payara Micro URLs: http://demo-deployment-67b466fbdb-2w4ql:8080/


'cluster-demo' REST Endpoints:
GET    /application.wadl
POST   /data
GET    /data/all
GET    /data/{id}
GET    /openapi/
GET    /openapi/application.wadl
POST   /simulate/busy/start
POST   /simulate/busy/stop
]]

[2019-10-16T04:08:46.846+0000] [] [INFO] [] [PayaraMicro] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1571198926846] [levelValue: 800] Payara Micro 5.192 #badassmicrofish (build 115) ready in 25,512 (ms)

You can observe that the Data Grid’s status reports that there are three instances that are part of the grid, with each instance having been assigned a name based on the pod where it lives. With this, our cluster is fully provisioned and ready to be tested.
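As mentioned when reviewing the deployment definition, each instance name is derived from the $POD_NAME variable. You can verify this wiring yourself (an optional check not in the original walkthrough; substitute one of your own pod names), and the command should print the pod’s own name back:

$ kubectl exec demo-deployment-67b466fbdb-2w4ql -- printenv POD_NAME
demo-deployment-67b466fbdb-2w4ql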

Testing the Sample Application in a Cluster

To test our application, we will send a sample request to create a new user, like this:

$ curl -X POST -i http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data -H 'Content-Type: application/json' -d '{"name" : "Fabio Turizo", "organization" : "Payara Services Ltd."}'

HTTP/1.1 201 Created
Server: Payara Micro #badassfish
Location: http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/1
Content-Length: 0
X-Frame-Options: SAMEORIGIN

You can observe that the response details the location of the newly created user, so let’s review its data:


$ curl http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/1 | jq .
{
  "createdOnInstance": "k8s-demo-deployment-67b466fbdb-vqkk5",
  "id": 1,
  "name": "Fabio Turizo",
  "organization": "Payara Services Ltd."
}

The data reflects that the user was created on the demo-deployment-67b466fbdb-vqkk5 pod and was assigned the ID number 1. Let’s try creating 2 new users and see if we can get the Kubernetes service to send the requests to another pod:

$ curl -X POST http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data -H 'Content-Type: application/json' -d '{"name" : "Ondro Mihalyi", "organization" : "Payara Tech"}'
$ curl -X POST http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data -H 'Content-Type: application/json' -d '{"name" : "Matt Gill", "organization" : "Payara Services Ltd."}'
$ curl http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/2 | jq .
{
  "createdOnInstance": "k8s-demo-deployment-67b466fbdb-vqkk5",
  "id": 2,
  "name": "Ondro Mihalyi",
  "organization": "Payara Tech"
}

$ curl http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/3 | jq .
{
  "createdOnInstance": "k8s-demo-deployment-67b466fbdb-2w4ql",
  "id": 3,
  "name": "Matt Gill",
  "organization": "Payara Services Ltd."
}

In this test run, the second user was created on the same pod, but the third user was created on a separate pod, and as evidence that the CounterService singleton is working as intended, this user was assigned the next consecutive number as its ID.


This proves that the component is indeed a true singleton, since the Payara Micro instance living in the second pod was able to retrieve the next consecutive ID from the singleton instance that was generated in the first pod. The same applies to the cache of users: it was generated in the first pod, but the second one was capable of accessing the data without problems. Let’s test this scenario further by manually deleting these pods and observing what happens:

$ kubectl delete pod demo-deployment-67b466fbdb-vqkk5
pod "demo-deployment-67b466fbdb-vqkk5" deleted

$ kubectl delete pod demo-deployment-67b466fbdb-2w4ql
pod "demo-deployment-67b466fbdb-2w4ql" deleted

Wait a few seconds and then check the state of both the deployment and existing pods in the cluster:

$ kubectl get deployment demo-deployment
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   3         3         3            3           17h

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
demo-deployment-67b466fbdb-825z6   1/1     Running   0          2m
demo-deployment-67b466fbdb-t7b4h   1/1     Running   0          17h
demo-deployment-67b466fbdb-zbd77   1/1     Running   0          2m

What happened in this case? Well, the Kubernetes deployment controller detected that the current state of the deployment was not the desired one and decided to create 2 new pods. Let’s create a new user:

$ curl -X POST -i http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data -H 'Content-Type: application/json' -d '{"name" : "Kenji Hanasuma", "organization" : "Payara Services Ltd."}'

HTTP/1.1 201 Created
Server: Payara Micro #badassfish
Location: http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/4
Content-Length: 0
X-Frame-Options: SAMEORIGIN


And finally let’s get the details of all existing users:

$ curl http://ac8a3473befc711e985800a4da13d3cc-1027891870.us-west-2.elb.amazonaws.com/data/all | jq .
[
  {
    "createdOnInstance": "k8s-demo-deployment-67b466fbdb-vqkk5",
    "id": 1,
    "name": "Fabio Turizo",
    "organization": "Payara Services Ltd."
  },
  {
    "createdOnInstance": "k8s-demo-deployment-67b466fbdb-vqkk5",
    "id": 2,
    "name": "Ondro Mihalyi",
    "organization": "Payara Tech"
  },
  {
    "createdOnInstance": "k8s-demo-deployment-67b466fbdb-2w4ql",
    "id": 3,
    "name": "Matt Gill",
    "organization": "Payara Services Ltd."
  },
  {
    "createdOnInstance": "k8s-demo-deployment-67b466fbdb-825z6",
    "id": 4,
    "name": "Kenji Hanasuma",
    "organization": "Payara Services Ltd."
  }
]

In this case, what happened is clear: deleting the previous pods did not affect the state of either the singleton or the user cache component. The new user was created with the next consecutive ID value, as it should be, and the previous 3 users’ data is retained even though the corresponding Payara Micro instances responsible for their creation are no longer around. Even better, the Kubernetes deployment controller made sure that the desired number of replicas was maintained at all times.


Data Grid State Warning

Keep in mind that in this case both components’ state was maintained because at least one Payara Micro instance that is part of the grid remained after the manual deletion of the pods. The data grid will guarantee that the state saved in the grid (@Clustered singletons, caches, etc.) is retained as long as at least one non-lite member remains part of the grid.

Summary

With this guide you should have learned a few things: how to provision a new Kubernetes cluster using AWS Elastic Kubernetes Service (EKS), and after that, how to use Payara Micro’s Data Grid capabilities to take advantage of the distributed aspects of a Kubernetes deployment. Creation and management of Kubernetes clusters in AWS is extremely simple, and you only need knowledge of a couple of command-line tools to provision your clusters quickly. As Kubernetes is a standard technology, the management of a cluster using the kubectl command-line utility will be the same as with other cloud providers (like Google Cloud Platform or Microsoft Azure). I personally recommend reading the official EKS documentation if some aspect of a cluster’s management is not clear enough.

Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.

Kubernetes is a registered trademark of The Linux Foundation in the United States and/or other countries.

Hazelcast is a registered trademark of Hazelcast, Inc.

Google Cloud Platform™ service and Google are registered trademarks of Google LLC.

Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.


[email protected] +44 207 754 0481 www.payara.fish

Payara Services Ltd 2018 All Rights Reserved. Registered in England and Wales; Registration Number 09998946 Registered Office: Malvern Hills Science Park, Geraldine Road, Malvern, United Kingdom, WR14 3SZ
